# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
#
# Adversarial Example Generation
# ==============================
#
# **Author:** `<NAME> <https://github.com/inkawhich>`__
#
# If you are reading this, hopefully you can appreciate how effective some
# machine learning models are. Research is constantly pushing ML models to
# be faster, more accurate, and more efficient. However, an often
# overlooked aspect of designing and training models is security and
# robustness, especially in the face of an adversary who wishes to fool
# the model.
#
# This tutorial will raise your awareness of the security vulnerabilities
# of ML models, and will give insight into the hot topic of adversarial
# machine learning. You may be surprised to find that adding imperceptible
# perturbations to an image *can* cause drastically different model
# performance. Given that this is a tutorial, we will explore the topic
# via example on an image classifier. Specifically we will use one of the
# first and most popular attack methods, the Fast Gradient Sign Attack
# (FGSM), to fool an MNIST classifier.
#
#
#
# Threat Model
# ------------
#
# For context, there are many categories of adversarial attacks, each with
# a different goal and assumption of the attacker’s knowledge. However, in
# general the overarching goal is to add the least amount of perturbation
# to the input data to cause the desired misclassification. There are
# several kinds of assumptions of the attacker’s knowledge, two of which
# are: **white-box** and **black-box**. A *white-box* attack assumes the
# attacker has full knowledge and access to the model, including
# architecture, inputs, outputs, and weights. A *black-box* attack assumes
# the attacker only has access to the inputs and outputs of the model, and
# knows nothing about the underlying architecture or weights. There are
# also several types of goals, including **misclassification** and
# **source/target misclassification**. A goal of *misclassification* means
# the adversary only wants the output classification to be wrong but does
# not care what the new classification is. A *source/target
# misclassification* means the adversary wants to alter an image that is
# originally of a specific source class so that it is classified as a
# specific target class.
#
# In this case, the FGSM attack is a *white-box* attack with the goal of
# *misclassification*. With this background information, we can now
# discuss the attack in detail.
#
# Fast Gradient Sign Attack
# -------------------------
#
# One of the first and most popular adversarial attacks to date is
# referred to as the *Fast Gradient Sign Attack (FGSM)* and is described
# by Goodfellow et al. in `Explaining and Harnessing Adversarial
# Examples <https://arxiv.org/abs/1412.6572>`__. The attack is remarkably
# powerful, and yet intuitive. It is designed to attack neural networks by
# leveraging the way they learn, *gradients*. The idea is simple: rather
# than working to minimize the loss by adjusting the weights based on the
# backpropagated gradients, the attack *adjusts the input data to maximize
# the loss* based on the same backpropagated gradients. In other words,
# the attack uses the gradient of the loss w.r.t the input data, then
# adjusts the input data to maximize the loss.
#
# Before we jump into the code, let’s look at the famous
# `FGSM <https://arxiv.org/abs/1412.6572>`__ panda example and extract
# some notation.
#
# .. figure:: /_static/img/fgsm_panda_image.png
# :alt: fgsm_panda_image
#
# From the figure, $\mathbf{x}$ is the original input image
# correctly classified as a “panda”, $y$ is the ground truth label
# for $\mathbf{x}$, $\mathbf{\theta}$ represents the model
# parameters, and $J(\mathbf{\theta}, \mathbf{x}, y)$ is the loss
# that is used to train the network. The attack backpropagates the
# gradient back to the input data to calculate
# $\nabla_{x} J(\mathbf{\theta}, \mathbf{x}, y)$. Then, it adjusts
# the input data by a small step ($\epsilon$ or $0.007$ in the
# picture) in the direction (i.e.
# $sign(\nabla_{x} J(\mathbf{\theta}, \mathbf{x}, y))$) that will
# maximize the loss. The resulting perturbed image, $x'$, is then
# *misclassified* by the target network as a “gibbon” when it is still
# clearly a “panda”.
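#
# As a toy numeric sketch of that update, $x' = x + \epsilon \cdot sign(\nabla_{x} J)$, on made-up values:
# +
import torch
x_toy = torch.tensor([0.2, 0.5, 0.8])       # a "clean" input
grad_toy = torch.tensor([-0.3, 0.0, 0.9])   # pretend gradient of the loss w.r.t. the input
epsilon_toy = 0.007
x_adv_toy = x_toy + epsilon_toy * grad_toy.sign()   # tensor([0.1930, 0.5000, 0.8070])
# -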
#
# Hopefully now the motivation for this tutorial is clear, so let's jump
# into the implementation.
#
#
#
# +
from __future__ import print_function
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
import numpy as np
import matplotlib.pyplot as plt
# NOTE: This is a hack to get around "User-agent" limitations when downloading MNIST datasets
# see, https://github.com/pytorch/vision/issues/3497 for more information
from six.moves import urllib
opener = urllib.request.build_opener()
opener.addheaders = [('User-agent', 'Mozilla/5.0')]
urllib.request.install_opener(opener)
# -
# Implementation
# --------------
#
# In this section, we will discuss the input parameters for the tutorial,
# define the model under attack, then code the attack and run some tests.
#
# Inputs
# ~~~~~~
#
# There are only three inputs for this tutorial, and they are defined as
# follows:
#
# - **epsilons** - List of epsilon values to use for the run. It is
# important to keep 0 in the list because it represents the model
# performance on the original test set. Also, intuitively we would
# expect the larger the epsilon, the more noticeable the perturbations
# but the more effective the attack in terms of degrading model
# accuracy. Since the data range here is $[0,1]$, no epsilon
# value should exceed 1.
#
# - **pretrained_model** - path to the pretrained MNIST model which was
# trained with
# `pytorch/examples/mnist <https://github.com/pytorch/examples/tree/master/mnist>`__.
# For simplicity, download the pretrained model `here <https://drive.google.com/drive/folders/1fn83DF14tWmit0RTKWRhPq5uVXt73e0h?usp=sharing>`__.
#
# - **use_cuda** - boolean flag to use CUDA if desired and available.
# Note, a GPU with CUDA is not critical for this tutorial as a CPU will
# not take much time.
#
#
#
epsilons = [0, .05, .1, .15, .2, .25, .3]
pretrained_model = "data/lenet_mnist_model.pth"
use_cuda=True
# Model Under Attack
# ~~~~~~~~~~~~~~~~~~
#
# As mentioned, the model under attack is the same MNIST model from
# `pytorch/examples/mnist <https://github.com/pytorch/examples/tree/master/mnist>`__.
# You may train and save your own MNIST model or you can download and use
# the provided model. The *Net* definition and test dataloader here have
# been copied from the MNIST example. The purpose of this section is to
# define the model and dataloader, then initialize the model and load the
# pretrained weights.
#
#
#
# +
# LeNet Model definition
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
self.conv2_drop = nn.Dropout2d()
self.fc1 = nn.Linear(320, 50)
self.fc2 = nn.Linear(50, 10)
def forward(self, x):
x = F.relu(F.max_pool2d(self.conv1(x), 2))
x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
x = x.view(-1, 320)
x = F.relu(self.fc1(x))
x = F.dropout(x, training=self.training)
x = self.fc2(x)
return F.log_softmax(x, dim=1)
# MNIST Test dataset and dataloader declaration
test_loader = torch.utils.data.DataLoader(
datasets.MNIST('../data', train=False, download=True, transform=transforms.Compose([
transforms.ToTensor(),
])),
batch_size=1, shuffle=True)
# Define what device we are using
print("CUDA Available: ",torch.cuda.is_available())
device = torch.device("cuda" if (use_cuda and torch.cuda.is_available()) else "cpu")
# Initialize the network
model = Net().to(device)
# Load the pretrained model
model.load_state_dict(torch.load(pretrained_model, map_location='cpu'))
# Set the model in evaluation mode. In this case this is for the Dropout layers
model.eval()
# -
# FGSM Attack
# ~~~~~~~~~~~
#
# Now, we can define the function that creates the adversarial examples by
# perturbing the original inputs. The ``fgsm_attack`` function takes three
# inputs: *image* is the original clean image ($x$), *epsilon* is
# the pixel-wise perturbation amount ($\epsilon$), and *data_grad*
# is the gradient of the loss w.r.t the input image
# ($\nabla_{x} J(\mathbf{\theta}, \mathbf{x}, y)$). The function
# then creates the perturbed image as
#
# \begin{align}perturbed\_image = image + epsilon*sign(data\_grad) = x + \epsilon * sign(\nabla_{x} J(\mathbf{\theta}, \mathbf{x}, y))\end{align}
#
# Finally, in order to maintain the original range of the data, the
# perturbed image is clipped to range $[0,1]$.
#
#
#
# FGSM attack code
def fgsm_attack(image, epsilon, data_grad):
# Collect the element-wise sign of the data gradient
sign_data_grad = data_grad.sign()
# Create the perturbed image by adjusting each pixel of the input image
perturbed_image = image + epsilon*sign_data_grad
# Adding clipping to maintain [0,1] range
perturbed_image = torch.clamp(perturbed_image, 0, 1)
# Return the perturbed image
return perturbed_image
# Testing Function
# ~~~~~~~~~~~~~~~~
#
# Finally, the central result of this tutorial comes from the ``test``
# function. Each call to this test function performs a full test step on
# the MNIST test set and reports a final accuracy. However, notice that
# this function also takes an *epsilon* input. This is because the
# ``test`` function reports the accuracy of a model that is under attack
# from an adversary with strength $\epsilon$. More specifically, for
# each sample in the test set, the function computes the gradient of the
# loss w.r.t the input data ($data\_grad$), creates a perturbed
# image with ``fgsm_attack`` ($perturbed\_data$), then checks to see
# if the perturbed example is adversarial. In addition to testing the
# accuracy of the model, the function also saves and returns some
# successful adversarial examples to be visualized later.
#
#
#
def test( model, device, test_loader, epsilon ):
# Accuracy counter
correct = 0
adv_examples = []
# Loop over all examples in test set
for data, target in test_loader:
# Send the data and label to the device
data, target = data.to(device), target.to(device)
# Set requires_grad attribute of tensor. Important for Attack
data.requires_grad = True
# Forward pass the data through the model
output = model(data)
init_pred = output.max(1, keepdim=True)[1] # get the index of the max log-probability
        # If the initial prediction is wrong, don't bother attacking, just move on
if init_pred.item() != target.item():
continue
# Calculate the loss
loss = F.nll_loss(output, target)
# Zero all existing gradients
model.zero_grad()
# Calculate gradients of model in backward pass
loss.backward()
# Collect datagrad
data_grad = data.grad.data
# Call FGSM Attack
perturbed_data = fgsm_attack(data, epsilon, data_grad)
# Re-classify the perturbed image
output = model(perturbed_data)
# Check for success
final_pred = output.max(1, keepdim=True)[1] # get the index of the max log-probability
if final_pred.item() == target.item():
correct += 1
# Special case for saving 0 epsilon examples
if (epsilon == 0) and (len(adv_examples) < 5):
adv_ex = perturbed_data.squeeze().detach().cpu().numpy()
adv_examples.append( (init_pred.item(), final_pred.item(), adv_ex) )
else:
# Save some adv examples for visualization later
if len(adv_examples) < 5:
adv_ex = perturbed_data.squeeze().detach().cpu().numpy()
adv_examples.append( (init_pred.item(), final_pred.item(), adv_ex) )
# Calculate final accuracy for this epsilon
final_acc = correct/float(len(test_loader))
print("Epsilon: {}\tTest Accuracy = {} / {} = {}".format(epsilon, correct, len(test_loader), final_acc))
# Return the accuracy and an adversarial example
return final_acc, adv_examples
# Run Attack
# ~~~~~~~~~~
#
# The last part of the implementation is to actually run the attack. Here,
# we run a full test step for each epsilon value in the *epsilons* input.
# For each epsilon we also save the final accuracy and some successful
# adversarial examples to be plotted in the coming sections. Notice how
# the printed accuracies decrease as the epsilon value increases. Also,
# note the $\epsilon=0$ case represents the original test accuracy,
# with no attack.
#
#
#
# +
accuracies = []
examples = []
# Run test for each epsilon
for eps in epsilons:
acc, ex = test(model, device, test_loader, eps)
accuracies.append(acc)
examples.append(ex)
# -
# Results
# -------
#
# Accuracy vs Epsilon
# ~~~~~~~~~~~~~~~~~~~
#
# The first result is the accuracy versus epsilon plot. As alluded to
# earlier, as epsilon increases we expect the test accuracy to decrease.
# This is because larger epsilons mean we take a larger step in the
# direction that will maximize the loss. Notice the trend in the curve is
# not linear even though the epsilon values are linearly spaced. For
# example, the accuracy at $\epsilon=0.05$ is only about 4% lower
# than $\epsilon=0$, but the accuracy at $\epsilon=0.2$ is 25%
# lower than $\epsilon=0.15$. Also, notice the accuracy of the model
# hits random accuracy for a 10-class classifier between
# $\epsilon=0.25$ and $\epsilon=0.3$.
#
#
#
plt.figure(figsize=(5,5))
plt.plot(epsilons, accuracies, "*-")
plt.yticks(np.arange(0, 1.1, step=0.1))
plt.xticks(np.arange(0, .35, step=0.05))
plt.title("Accuracy vs Epsilon")
plt.xlabel("Epsilon")
plt.ylabel("Accuracy")
plt.show()
# Sample Adversarial Examples
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
# Remember the idea of no free lunch? In this case, as epsilon increases
# the test accuracy decreases **BUT** the perturbations become more easily
# perceptible. In reality, there is a tradeoff between accuracy
# degradation and perceptibility that an attacker must consider. Here, we
# show some examples of successful adversarial examples at each epsilon
# value. Each row of the plot shows a different epsilon value. The first
# row is the $\epsilon=0$ examples which represent the original
# “clean” images with no perturbation. The title of each image shows the
# “original classification -> adversarial classification.” Notice, the
# perturbations start to become evident at $\epsilon=0.15$ and are
# quite evident at $\epsilon=0.3$. However, in all cases humans are
# still capable of identifying the correct class despite the added noise.
#
#
#
# Plot several examples of adversarial samples at each epsilon
cnt = 0
plt.figure(figsize=(8,10))
for i in range(len(epsilons)):
for j in range(len(examples[i])):
cnt += 1
plt.subplot(len(epsilons),len(examples[0]),cnt)
plt.xticks([], [])
plt.yticks([], [])
if j == 0:
plt.ylabel("Eps: {}".format(epsilons[i]), fontsize=14)
orig,adv,ex = examples[i][j]
plt.title("{} -> {}".format(orig, adv))
plt.imshow(ex, cmap="gray")
plt.tight_layout()
plt.show()
# Where to go next?
# -----------------
#
# Hopefully this tutorial gives some insight into the topic of adversarial
# machine learning. There are many potential directions to go from here.
# This attack represents the very beginning of adversarial attack research
# and since then there have been many subsequent ideas for how to attack and
# defend ML models from an adversary. In fact, at NIPS 2017 there was an
# adversarial attack and defense competition and many of the methods used
# in the competition are described in this paper: `Adversarial Attacks and
# Defences Competition <https://arxiv.org/pdf/1804.00097.pdf>`__. The work
# on defense also leads into the idea of making machine learning models
# more *robust* in general, to both naturally perturbed and adversarially
# crafted inputs.
#
# Another direction to go is adversarial attacks and defense in different
# domains. Adversarial research is not limited to the image domain; check
# out `this <https://arxiv.org/pdf/1801.01944.pdf>`__ attack on
# speech-to-text models. But perhaps the best way to learn more about
# adversarial machine learning is to get your hands dirty. Try to
# implement a different attack from the NIPS 2017 competition, and see how
# it differs from FGSM. Then, try to defend the model from your own
# attacks.
#
#
#
| 08-deep-learning/labs/01_labs_DeepLearning/adversarial_fgsm_tutorial.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import cv2
attendance = []
faceCascade = cv2.CascadeClassifier('Cascades/haarcascade_frontalface_default.xml')
cap = cv2.VideoCapture(0)
cap.set(3,640) # set Width
cap.set(4,480) # set Height
while True:
ret, img = cap.read()
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = faceCascade.detectMultiScale(gray, scaleFactor=1.2,minNeighbors=5,minSize=(20, 20))
# print(faces)
if type(faces) == tuple:
attendance.append('Absent')
else:
attendance.append('Present')
for (x,y,w,h) in faces:
cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),2)
roi_gray = gray[y:y+h, x:x+w]
roi_color = img[y:y+h, x:x+w]
cv2.imshow('video',img)
k = cv2.waitKey(30) & 0xff
if k == 27:
percentage = attendance.count('Present')/(attendance.count('Present')+attendance.count('Absent'))
print("Percentage of time a face detected is {:.2f}%".format(percentage*100))
break
cap.release()
cv2.destroyAllWindows()
| Sathvick_OpenCV/percentage_of_face_detection.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import os
from shutil import copy2
import csv
# +
from __future__ import absolute_import, division, print_function, unicode_literals
try:
    # The %tensorflow_version magic only works in colab.
    # %tensorflow_version 2.x
    pass
except Exception:
    pass
import tensorflow as tf
import os
import numpy as np
import matplotlib.pyplot as plt
# -
import tensorflow_hub as hub
import pandas as pd
tf.__version__
# Increase precision of presented data for better side-by-side comparison
pd.set_option("display.precision", 8)
data_root = "../dataset/AWEDataset/awe-train"
# +
IMAGE_SHAPE = (224, 224)
TRAINING_DATA_DIR = "/".join([data_root, "train"])
VALIDATE_DATA_DIR = "/".join([data_root, "val"])
from tensorflow.keras.preprocessing.image import ImageDataGenerator
train_datagen = tf.keras.preprocessing.image.ImageDataGenerator(
rescale=1./255,
rotation_range=15,
brightness_range=[0.5, 1.5],
horizontal_flip=False,
vertical_flip=False,
fill_mode='nearest')
valid_datagen = ImageDataGenerator(rescale=1./255)
valid_generator = valid_datagen.flow_from_directory(
VALIDATE_DATA_DIR,
subset="training",
shuffle=True,
target_size=IMAGE_SHAPE)
train_generator = train_datagen.flow_from_directory(
TRAINING_DATA_DIR,
subset="training",
shuffle=True,
target_size=IMAGE_SHAPE)
# -
for image_batch, label_batch in train_generator:
break
image_batch.shape, label_batch.shape
# +
# label_batch
# +
print (train_generator.class_indices)
labels = '\n'.join(sorted(train_generator.class_indices.keys()))
with open('labels.txt', 'w') as f:
f.write(labels)
print (valid_generator.class_indices)
labels = '\n'.join(sorted(valid_generator.class_indices.keys()))
with open('labels.txt', 'w') as f:
f.write(labels)
# -
x, y = next(train_generator)
from matplotlib import cm
from mpl_toolkits.axes_grid1 import ImageGrid
import math
def show_grid(image_list,nrows,ncols,label_list=None,show_labels=False,savename=None,figsize=(10,10),showaxis='off'):
if type(image_list) is not list:
if(image_list.shape[-1]==1):
image_list = [image_list[i,:,:,0] for i in range(image_list.shape[0])]
elif(image_list.shape[-1]==3):
image_list = [image_list[i,:,:,:] for i in range(image_list.shape[0])]
fig = plt.figure(None, figsize,frameon=False)
grid = ImageGrid(fig, 111, # similar to subplot(111)
nrows_ncols=(nrows, ncols), # creates 2x2 grid of axes
axes_pad=0.3, # pad between axes in inch.
share_all=True,
)
for i in range(nrows*ncols):
ax = grid[i]
ax.imshow(image_list[i],cmap='Greys_r') # The AxesGrid object work as a list of axes.
ax.axis('off')
if show_labels:
ax.set_title(str(np.argmax(label_list[i])))
if savename != None:
plt.savefig(savename,bbox_inches='tight')
show_grid(x,5,5,label_list=y,show_labels=True,figsize=(20,10), savename="ear-augment-4.png")
| code/notebooks/Keras-image-augmentation-visualization.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from IPython.display import display
from sklearn.datasets import load_iris
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import mglearn
iris_dataset = load_iris()
# -
print("iris_dataset의 키: \n{}".format(iris_dataset.keys()))
print(iris_dataset['DESCR'][:193] + "\n...")
print("타깃의 이름: {}".format(iris_dataset['target_names']))
print("특성의 이름: \n{}".format(iris_dataset['feature_names']))
print("data의 타입: {}".format(type(iris_dataset['data'])))
print("data의 크기: {}".format(iris_dataset['data'].shape))
print("data의 처음 다섯 행:\n{}".format(iris_dataset['data'[:5]]))
print("target의 타입: {}".format(type(iris_dataset['target'])))
print("target의 크기: {}".format(iris_dataset['target'].shape))
print("타깃:\n{}".format(iris_dataset['target']))
# +
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
iris_dataset['data'],
iris_dataset['target'],
random_state = 0
)
# -
print("X_train의 크기: {}".format(X_train.shape))
print("y_train의 크기: {}".format(y_train.shape))
print("X_test의 크기: {}".format(X_test.shape))
print("y_test의 크기: {}".format(y_test.shape))
# +
iris_dataframe = pd.DataFrame(X_train, columns=iris_dataset.feature_names)
pd.plotting.scatter_matrix(
iris_dataframe,
c=y_train,
figsize=(15,15),
marker='o',
hist_kwds={'bins': 20},
s=60,
alpha=.8,
cmap=mglearn.cm3
)
# -
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
X_new = np.array([[5,2.9, 1, 0.2]])
print("X_new.shape: {}".format(X_new.shape))
prediction = knn.predict(X_new)
print("예측: {}".format(prediction))
print("예측한 타깃의 이름: {}".format(iris_dataset['target_names'][prediction]))
y_pred = knn.predict(X_test)
print("테스트 세트에 대한 예측값:\n {}".format(y_pred))
print("테스트 세트의 정확도: {:.2f}".format(np.mean(y_pred == y_test)))
print("테스트 세트의 정확도: {:.2f}".format(knn.score(X_test, y_test)))
| chapter01/practice.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.6 64-bit (''pytorch'': conda)'
# name: python37664bitpytorchconda0cdad03962454fdfb22b6d3ea1ad8fae
# ---
# http://d2l.ai/chapter_multilayer-perceptrons/mlp.html
# %matplotlib inline
from d2l import torch as d2l
import torch
x = torch.arange(-8.0, 8.0, 0.1, requires_grad=True)
y = torch.relu(x)
d2l.set_figsize((4, 2.5))
d2l.plot(x.detach(), y.detach(), 'x', 'relu(x)')
y.backward(torch.ones_like(x), retain_graph=True)
d2l.plot(x.detach(), x.grad, 'x', 'grad of relu')
# Isn't summing z just equivalent to taking the dot product of z with an all-ones tensor of the same shape?
#
# https://zhuanlan.zhihu.com/p/83172023
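# A quick check of that point (a minimal sketch): reducing y to a scalar with sum() and calling
# backward() gives the same gradient as calling backward(torch.ones_like(x)) on the unreduced y.
x2 = torch.arange(-8.0, 8.0, 0.1, requires_grad=True)
torch.relu(x2).backward(torch.ones_like(x2))   # pass an all-ones vector-Jacobian argument
x3 = torch.arange(-8.0, 8.0, 0.1, requires_grad=True)
torch.relu(x3).sum().backward()                # reduce to a scalar first, then backward
print(torch.allclose(x2.grad, x3.grad))        # True: the two approaches agree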
y = torch.sigmoid(x)
d2l.plot(x.detach(), y.detach(), 'x', 'sigmoid(x)')
# Clear out previous gradients.
x.grad.data.zero_()
y.backward(torch.ones_like(x),retain_graph=True)
d2l.plot(x.detach(), x.grad, 'x', 'grad of sigmoid')
y = torch.tanh(x)
d2l.plot(x.detach(), y.detach(), 'x', 'tanh(x)')
# Clear out previous gradients.
x.grad.data.zero_()
y.backward(torch.ones_like(x),retain_graph=True)
d2l.plot(x.detach(), x.grad, 'x', 'grad of tanh')
| Ch04_MP/4-1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
#save model Using joblib
import warnings
warnings.filterwarnings('ignore')
import pandas as pd
import numpy as np
from sklearn.ensemble import (BaggingClassifier, ExtraTreesClassifier,
RandomForestClassifier, AdaBoostClassifier, GradientBoostingClassifier, VotingClassifier)
from sklearn.linear_model import (LinearRegression, LogisticRegressionCV,
LogisticRegression, SGDClassifier, Ridge, Lasso, ElasticNet, LassoCV, RidgeCV, ElasticNetCV)
from sklearn.svm import LinearSVC, NuSVC, SVC
from xgboost import XGBClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import MultinomialNB, GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import (train_test_split, cross_val_score,
                                     cross_val_predict, cross_validate, GridSearchCV)
import os
import matplotlib.pyplot as plt
from sklearn.metrics import accuracy_score, f1_score, mean_absolute_error, r2_score,confusion_matrix, classification_report
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectFromModel
from sklearn.preprocessing import StandardScaler, LabelEncoder, OneHotEncoder
from sklearn import preprocessing
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.pipeline import Pipeline
import joblib  # sklearn.externals.joblib has been removed from recent scikit-learn releases
from yellowbrick.classifier import ClassificationReport, ROCAUC, ConfusionMatrix
from yellowbrick.features import FeatureImportances
# %matplotlib inline
# -
url = "https://github.com/georgetown-analytics/Box-Office/blob/master/Lasso%20Model.csv"
# +
X = ["Budget_Real_Log", "Holiday", "Summer", "Spring", "Fall", "Winter",
'Rating_RT', 'Rating_IMDB', 'Rating_Metacritic','isCollection','Length',
'Genre_Drama', 'Genre_Comedy', 'Genre_Action_Adventure', 'Genre_Thriller_Horror',
'Genre_Romance', 'Genre_Crime_Mystery', 'Genre_Animation', 'Genre_Scifi',
'Genre_Documentary', 'Genre_Other',
'Rated_G_PG', 'Rated_PG-13', 'Rated_R', 'Rated_Other',
'Comp_Disney','Comp_DreamWorks', 'Comp_Fox', 'Comp_Lionsgate',
'Comp_MGM', 'Comp_Miramax', 'Comp_Paramount', 'Comp_Sony',
'Comp_Universal', 'Comp_WarnerBros', 'Comp_Other',
'Revenue_Actor_Real_Log','Revenue_Director_Real_Log', 'Revenue_Writer_Real_Log']
seed=3
X_train, X_test = train_test_split(X, test_size=0.2, random_state=seed)
# -
dataframe = pd.read_csv(url)  # read_csv has no 'X' keyword; pass usecols=X to load only those columns
array = dataframe.values
# +
#Lasso Regression
lassoreg = LassoCV(fit_intercept=True, normalize=True) #alpha=0.001,
lassoreg.fit(X_train, y_train)
print(lassoreg.score(X_test, y_test))
print(" Training set MAE:", mean_absolute_error(np.exp(y_train), np.exp(lassoreg.predict(X_train))), "dollars", '\n',
"Training set r_squared:", r2_score(y_train, lassoreg.predict(X_train)), '\n',
"Testing set MAE:", mean_absolute_error(np.exp(y_test), np.exp(lassoreg.predict(X_test))), "dollars", '\n',
"Testing set r_squared:", r2_score(y_test, lassoreg.predict(X_test)), '\n',)
plt.figure(figsize=(15,3))
plt.bar(X.columns, lassoreg.coef_)
plt.xticks(rotation=90)
plt.xlabel("features")
plt.ylabel("Lasso coef_")
plt.show()
plt.scatter(np.exp(y_test), np.exp(lassoreg.predict(X_test)))
plt.xlabel('True Values')
plt.ylabel('Predictions')
plt.show()
# +
# save the model to disk
two_up = os.path.abspath(os.path.join(os.getcwd(),"../.."))
filename = two_up + r'\Users\lanceliu\Downloads\lasso_20180901.pkl'
vc_check = joblib.load(filename)
print(vc_check)
oz = ClassificationReport(vc_check, support=True)
oz.fit(X_train_std, y_train_std)
oz.score(X_test_std, y_test_std)
oz.poof()
# -
#load the model from disk
loaded_model = joblib.load(filename)
result = loaded_model.score(X_test, Y_test)
print(result)
| final_models/Data Product/Lasso Model Rough Draft (1).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
#
# # The Lifecycle of a Plot
#
#
# This tutorial aims to show the beginning, middle, and end of a single
# visualization using Matplotlib. We'll begin with some raw data and
# end by saving a figure of a customized visualization. Along the way we'll try
# to highlight some neat features and best-practices using Matplotlib.
#
# .. currentmodule:: matplotlib
#
# <div class="alert alert-info"><h4>Note</h4><p>This tutorial is based off of
# `this excellent blog post <http://pbpython.com/effective-matplotlib.html>`_
# by <NAME>. It was transformed into this tutorial by <NAME>.</p></div>
#
# A note on the Object-Oriented API vs Pyplot
# ===========================================
#
# Matplotlib has two interfaces. The first is an object-oriented (OO)
# interface. In this case, we utilize an instance of :class:`axes.Axes`
# in order to render visualizations on an instance of :class:`figure.Figure`.
#
# The second is based on MATLAB and uses
# a state-based interface. This is encapsulated in the :mod:`pyplot`
# module. See the `pyplot tutorials
# <sphx_glr_tutorials_introductory_pyplot.py>`
# for a more in-depth look at the pyplot interface.
#
# Most of the terms are straightforward but the main thing to remember
# is that:
#
# * The Figure is the final image that may contain 1 or more Axes.
# * The Axes represent an individual plot (don't confuse this with the word
# "axis", which refers to the x/y axis of a plot).
#
# We call methods that do the plotting directly from the Axes, which gives
# us much more flexibility and power in customizing our plot. See the
# `object-oriented examples <api_examples>` for many examples of how
# this approach is used.
#
# <div class="alert alert-info"><h4>Note</h4><p>In general, try to use the object-oriented interface over the pyplot
# interface.</p></div>
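#
# A minimal side-by-side sketch of the two interfaces (toy data, for illustration only):
# +
import matplotlib.pyplot as plt
# pyplot (state-based) interface: the "current" Figure/Axes are implicit
plt.plot([1, 2, 3], [1, 4, 9])
plt.title("pyplot interface")
# object-oriented interface: we hold explicit Figure and Axes handles
fig, ax = plt.subplots()
ax.plot([1, 2, 3], [1, 4, 9])
ax.set_title("object-oriented interface")
# -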
#
# Our data
# ========
#
# We'll use the data from the post from which this tutorial was derived.
# It contains sales information for a number of companies.
#
#
# +
# sphinx_gallery_thumbnail_number = 10
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.ticker import FuncFormatter
data = {'Barton LLC': 109438.50,
'<NAME>': 103569.59,
'<NAME>': 112214.71,
'Jerde-Hilpert': 112591.43,
'Keeling LLC': 100934.30,
'Koepp Ltd': 103660.54,
'Kulas Inc': 137351.96,
'Trantow-Barrows': 123381.38,
'White-Trantow': 135841.99,
'Will LLC': 104437.60}
group_data = list(data.values())
group_names = list(data.keys())
group_mean = np.mean(group_data)
# -
# Getting started
# ===============
#
# This data is naturally visualized as a barplot, with one bar per
# group. To do this with the object-oriented approach, we'll first generate
# an instance of :class:`figure.Figure` and
# :class:`axes.Axes`. The Figure is like a canvas, and the Axes
# is a part of that canvas on which we will make a particular visualization.
#
# <div class="alert alert-info"><h4>Note</h4><p>Figures can have multiple axes on them. For information on how to do this,
# see the `Tight Layout tutorial
# <sphx_glr_tutorials_intermediate_tight_layout_guide.py>`.</p></div>
#
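# For instance (a minimal sketch), a single Figure holding a 2x2 grid of Axes:
fig_demo, axs_demo = plt.subplots(2, 2)   # axs_demo is a 2x2 array of Axes on one Figure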
#
fig, ax = plt.subplots()
# Now that we have an Axes instance, we can plot on top of it.
#
#
fig, ax = plt.subplots()
ax.barh(group_names, group_data)
# Controlling the style
# =====================
#
# There are many styles available in Matplotlib in order to let you tailor
# your visualization to your needs. To see a list of styles, we can use
# :mod:`pyplot.style`.
#
#
print(plt.style.available)
# You can activate a style with the following:
#
#
plt.style.use('fivethirtyeight')
# Now let's remake the above plot to see how it looks:
#
#
fig, ax = plt.subplots()
ax.barh(group_names, group_data)
# The style controls many things, such as color, linewidths, backgrounds,
# etc.
#
# Customizing the plot
# ====================
#
# Now we've got a plot with the general look that we want, so let's fine-tune
# it so that it's ready for print. First let's rotate the labels on the x-axis
# so that they show up more clearly. We can gain access to these labels
# with the :meth:`axes.Axes.get_xticklabels` method:
#
#
fig, ax = plt.subplots()
ax.barh(group_names, group_data)
labels = ax.get_xticklabels()
# If we'd like to set the property of many items at once, it's useful to use
# the :func:`pyplot.setp` function. This will take a list (or many lists) of
# Matplotlib objects, and attempt to set some style element of each one.
#
#
fig, ax = plt.subplots()
ax.barh(group_names, group_data)
labels = ax.get_xticklabels()
plt.setp(labels, rotation=45, horizontalalignment='right')
# It looks like this cut off some of the labels on the bottom. We can
# tell Matplotlib to automatically make room for elements in the figures
# that we create. To do this we'll set the ``autolayout`` value of our
# rcParams. For more information on controlling the style, layout, and
# other features of plots with rcParams, see
# `sphx_glr_tutorials_introductory_customizing.py`.
#
#
# +
plt.rcParams.update({'figure.autolayout': True})
fig, ax = plt.subplots()
ax.barh(group_names, group_data)
labels = ax.get_xticklabels()
plt.setp(labels, rotation=45, horizontalalignment='right')
# -
# Next, we'll add labels to the plot. To do this with the OO interface,
# we can use the :meth:`axes.Axes.set` method to set properties of this
# Axes object.
#
#
fig, ax = plt.subplots()
ax.barh(group_names, group_data)
labels = ax.get_xticklabels()
plt.setp(labels, rotation=45, horizontalalignment='right')
ax.set(xlim=[-10000, 140000], xlabel='Total Revenue', ylabel='Company',
title='Company Revenue')
# We can also adjust the size of this plot using the :func:`pyplot.subplots`
# function. We can do this with the ``figsize`` kwarg.
#
# <div class="alert alert-info"><h4>Note</h4><p>While indexing in NumPy follows the form (row, column), the figsize
# kwarg follows the form (width, height). This follows conventions in
# visualization, which unfortunately are different from those of linear
# algebra.</p></div>
#
#
fig, ax = plt.subplots(figsize=(8, 4))
ax.barh(group_names, group_data)
labels = ax.get_xticklabels()
plt.setp(labels, rotation=45, horizontalalignment='right')
ax.set(xlim=[-10000, 140000], xlabel='Total Revenue', ylabel='Company',
title='Company Revenue')
# For labels, we can specify custom formatting guidelines in the form of
# functions by using the :class:`ticker.FuncFormatter` class. Below we'll
# define a function that takes an integer as input, and returns a string
# as an output.
#
#
# +
def currency(x, pos):
"""The two args are the value and tick position"""
if x >= 1e6:
s = '${:1.1f}M'.format(x*1e-6)
else:
s = '${:1.0f}K'.format(x*1e-3)
return s
formatter = FuncFormatter(currency)
# -
# We can then apply this formatter to the labels on our plot. To do this,
# we'll use the ``xaxis`` attribute of our axis. This lets you perform
# actions on a specific axis on our plot.
#
#
# +
fig, ax = plt.subplots(figsize=(6, 8))
ax.barh(group_names, group_data)
labels = ax.get_xticklabels()
plt.setp(labels, rotation=45, horizontalalignment='right')
ax.set(xlim=[-10000, 140000], xlabel='Total Revenue', ylabel='Company',
title='Company Revenue')
ax.xaxis.set_major_formatter(formatter)
# -
# Combining multiple visualizations
# =================================
#
# It is possible to draw multiple plot elements on the same instance of
# :class:`axes.Axes`. To do this we simply need to call another one of
# the plot methods on that axes object.
#
#
# +
fig, ax = plt.subplots(figsize=(8, 8))
ax.barh(group_names, group_data)
labels = ax.get_xticklabels()
plt.setp(labels, rotation=45, horizontalalignment='right')
# Add a vertical line, here we set the style in the function call
ax.axvline(group_mean, ls='--', color='r')
# Annotate new companies
for group in [3, 5, 8]:
ax.text(145000, group, "New Company", fontsize=10,
verticalalignment="center")
# Now we'll move our title up since it's getting a little cramped
ax.title.set(y=1.05)
ax.set(xlim=[-10000, 140000], xlabel='Total Revenue', ylabel='Company',
title='Company Revenue')
ax.xaxis.set_major_formatter(formatter)
ax.set_xticks([0, 25e3, 50e3, 75e3, 100e3, 125e3])
# fig.subplots_adjust(right=.1)  # with right < left this errors on recent Matplotlib, and autolayout already handles spacing
plt.show()
# -
# Saving our plot
# ===============
#
# Now that we're happy with the outcome of our plot, we want to save it to
# disk. There are many file formats we can save to in Matplotlib. To see
# a list of available options, use:
#
#
print(fig.canvas.get_supported_filetypes())
# We can then use the :meth:`figure.Figure.savefig` in order to save the figure
# to disk. Note that there are several useful flags we'll show below:
#
# * ``transparent=True`` makes the background of the saved figure transparent
# if the format supports it.
# * ``dpi=80`` controls the resolution (dots per inch) of the output.
# * ``bbox_inches="tight"`` fits the bounds of the figure to our plot.
#
#
# +
# Uncomment this line to save the figure.
# fig.savefig('sales.png', transparent=False, dpi=80, bbox_inches="tight")
| docs/plotting/matplotlib/lifecycle.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Sets, Collections, & Exception Handling
#
# ## Sets
# * create a new empty set
# * print that set
x = set()
x
# * create a non empty set
# * print that set
x = set(["louis", "jonathan", "josh"])
x
# * iterate over the set and print results
for i in x:
print(i)
# * add one item to the set
x.add("lila")
x
# * add multiple items to the set
y = set(["vishal", "mohammed"])
x |= y
x
# * remove an item from a set if it is present in the set
x.remove("vishal")
x
# * find maximum and minimum values of the set
print(max(x))
print(min(x))
# * print the length of the set
print(len(x))
# * create an intersection of x and y
x.intersection(y)
# * create an union of x and y
x.union(y)
# * create difference between x and y
x.difference(y)
# ---------------
# ## Collections
# * for each word in a sentence count the occurence
# * **sentence:** *black cat jumped over white cat*
from collections import Counter
s = "black cat jumped over white cat"
words = s.split()
Counter(words)
# * print the most common words
d = Counter(words).most_common()
d
# * count the occurences of words in the same sentence but now use **defaultdict**
from collections import defaultdict
s3 = "black cat jumped over white cat"
word_counts = defaultdict(int)
for word in s3.split():
    word_counts[word] += 1
word_counts
# * create deque from list set used in first exercise
from collections import deque
deq = deque(x)
deq
# * append number 10 to deque
deq.append(10)
deq
# * remove element from the right end from deque
deq.pop()
deq
# * remove element from the left end from deque
deq.popleft()
deq
# * delete all elements from deque
deq.clear()
deq
# * create named tuple (people) with name and surname as position names
# +
from collections import namedtuple
student = namedtuple("student", "fname, surname")
s1 = student("louis", "rossi")
s1
# -
# * print name and surname
s1
# _________________
# ## Exception handling
# Now, let's practice with **errors and exception handling**
#
# * Transform all string elements from a list to upper, if the element is not a string don't transform it.
# * Use a try & except block without using the 'if' statement.
# +
exception_string = ["louis", "josh", "lila", "vishal", "jonathan", 1]
for i in range(len(exception_string)):
try:
exception_string[i] = exception_string[i].upper()
except:
print("error happened")
print(exception_string)
# tried if i == type(str)
# tried if type(i) == str:
# -
# ### We have created a function below:
#
# <NAME> has family and friends. Help him remember the type of relation he has with his family and friends.
#
# Given a string with a name, return the relation of that person to Luke.
#
# **Person --> Relation**
# - <NAME> --> father
# - Leia --> sister
# - Han --> brother in law
# - R2D2 --> droid
#
# #### Examples
#
# > relation_to_luke("<NAME>") ➞ "Luke, I am your father."
# >
# > relation_to_luke("Leia") ➞ "Luke, I am your sister."
# >
# > relation_to_luke("Han") ➞ "Luke, I am your brother in law."
thedict = {
"<NAME>": "father",
"leia": "sister",
"han": "brother in law",
"r2d2": "driod"
}
def relation_to_luke(name):
try:
print("Luke, I am your " + thedict[name.lower()] + ".")
    except KeyError:
        print(name + " is not in the relation with Luke")
# #### Task I
# Fix errors in the function above so we can run following code
relation_to_luke("<NAME>")
relation_to_luke("Leia")
relation_to_luke("Han")
relation_to_luke("R2D2")
# #### Task II
# Use exception handling so we can run the function with any string. In this case, the function will return following:
#
# **relation_to_luke("aaaa") ➞ "aaaa is not in the relation with Luke"**
#
# **Note:** Do **Not** use an **if** statement for this
relation_to_luke("aaaa")
| w1/d2-day_2/sets_collections_exercise.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# # Min Heap Construction
# [link]()
# Do not edit the class below except for the buildHeap,
# siftDown, siftUp, peek, remove, and insert methods.
# Feel free to add new properties and methods to the class.
class MinHeap:
def __init__(self, array):
# Do not edit the line below.
self.heap = self.buildHeap(array)
def buildHeap(self, array):
# Write your code here.
pass
def siftDown(self):
# Write your code here.
pass
def siftUp(self):
# Write your code here.
pass
def peek(self):
# Write your code here.
pass
def remove(self):
# Write your code here.
pass
def insert(self, value):
# Write your code here.
pass
# ## My Solution
# Do not edit the class below except for the buildHeap,
# siftDown, siftUp, peek, remove, and insert methods.
# Feel free to add new properties and methods to the class.
class MinHeap:
def __init__(self, array):
# Do not edit the line below.
self.heap = self.buildHeap(array)
def buildHeap(self, array):
# Write your code here.
self.heap = [x for x in array]
finalIdx = len(self.heap) - 1
finalParentIdx = (finalIdx - 1) // 2
for i in reversed(range(finalParentIdx + 1)):
print(i)
self.heapifyDown(i)
return self.heap
def heapifyDown(self, idx):
while idx < len(self.heap):
if 2 * idx + 1 >= len(self.heap):
break
elif 2 * idx + 1 < len(self.heap) and 2 * idx + 2 >= len(self.heap):
if self.heap[idx] > self.heap[2 * idx + 1]:
self.switch(idx, 2 * idx + 1)
idx = 2 * idx + 1
else:
break
elif 2 * idx + 2 < len(self.heap):
smallerIdx = 2 * idx + 1 if self.heap[2 * idx + 1] <= self.heap[2 * idx + 2] else 2 * idx + 2
if self.heap[idx] > self.heap[smallerIdx]:
self.switch(idx, smallerIdx)
idx = smallerIdx
else:
break
def switch(self, i, j):
self.heap[i], self.heap[j] = self.heap[j], self.heap[i]
def siftDown(self):
# Write your code here.
self.heapifyDown(0)
def siftUp(self):
# Write your code here.
idx = len(self.heap) - 1
while idx > 0:
parentIdx = (idx - 1) // 2
if self.heap[parentIdx] > self.heap[idx]:
self.switch(parentIdx, idx)
idx = parentIdx
else:
break
def peek(self):
# Write your code here.
return self.heap[0]
def remove(self):
# Write your code here.
self.switch(0, len(self.heap) - 1)
topValue = self.heap.pop()
self.siftDown()
return topValue
def insert(self, value):
# Write your code here.
self.heap.append(value)
self.siftUp()
def heapifyDown(self, idx):
# a more elegant way
while idx < len(self.heap):
smallest = idx
for c in [2 * idx + 1, 2 * idx + 2]:
if c < len(self.heap) and self.heap[c] < self.heap[smallest]: smallest = c
if smallest != idx:
self.switch(idx, smallest)
idx = smallest
else:
break
# ## Expert Solution
class MinHeap:
def __init__(self, array):
self.heap = self.buildHeap(array)
# O(n) time | O(1) space
def buildHeap(self, array):
firstParentIdx = (len(array) - 2) // 2
for currentIdx in reversed(range(firstParentIdx + 1)):
self.siftDown(currentIdx, len(array) - 1, array)
return array
    # O(log(n)) time | O(1) space
def siftDown(self, currentIdx, endIdx, heap):
childOneIdx = currentIdx * 2 + 1
while childOneIdx <= endIdx:
childTwoIdx = currentIdx * 2 + 2 if currentIdx * 2 + 2 <= endIdx else -1
if childTwoIdx != -1 and heap[childTwoIdx] < heap[childOneIdx]:
idxToSwap = childTwoIdx
else:
idxToSwap = childOneIdx
if heap[idxToSwap] < heap[currentIdx]:
self.swap(currentIdx, idxToSwap, heap)
currentIdx = idxToSwap
childOneIdx = currentIdx * 2 + 1
else:
return
# O(log(n)) time | O(1) space
def siftUp(self, currentIdx, heap):
parentIdx = (currentIdx - 1) // 2
while currentIdx > 0 and heap[currentIdx] < heap[parentIdx]:
self.swap(currentIdx, parentIdx, heap)
currentIdx = parentIdx
parentIdx = (currentIdx - 1)// 2
# O(1) time | O(1) space
def peek(self):
return self.heap[0]
# O(log(n)) time | O(1) space
def remove(self):
self.swap(0, len(self.heap) - 1, self.heap)
valueToRemove = self.heap.pop()
self.siftDown(0, len(self.heap) - 1, self.heap)
return valueToRemove
# O(log(n)) time | O(1) space
def insert(self, value):
self.heap.append(value)
self.siftUp(len(self.heap) - 1, self.heap)
def swap(self, i, j, heap):
heap[i], heap[j] = heap[j], heap[i]
# ## Thoughts
# ## binary heap
# ### property
# A heap is a specialized **tree-based data structure** which is essentially an almost **complete tree** that satisfies the **heap property**.
#
# - is a binary tree
# - satisfy completeness property: the binary heap has to have all of its levels filled up completely, except the last level which, if it is partly filled up, has to be filled up from the left to the right.
# - satisfy heap property:
# - for min heap: a parent's value is smaller than, or equal to, its children's values.
# - for max heap: a parent's value is larger than, or equal to, its children's values.
#
# so the root value is the smallest value in a min heap.
#
# ### store in an array
# if currentNode's index is i in the array:
# - childOne -> 2i+1
# - childTwo -> 2i+2
# - parentNode -> floor((i-1)/2)
#
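# A tiny concrete example of that index arithmetic (illustrative values only):
heap = [1, 3, 2, 7, 4]        # a valid min heap laid out as an array
i = 1                         # the node holding value 3
print(2 * i + 1, 2 * i + 2)   # child indices -> 3 4 (values 7 and 4)
print((i - 1) // 2)           # parent index -> 0 (value 1)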
# ### method
# #### public & private methods
# ```cpp
# class MinHeap {
# public:
# MinHeap(); // create an empty heap.
# MinHeap(const vector<int>& data); // create a heap from a vector.
# int peek() const; // return the min element.
# int remove(); // extract the min element.
# void insert(int key); // add a new element to the heap.
# int size() const; // return the size of the heap.
# private:
# void siftUp(int index);
# void siftDown(int index);
# vector<int> data_;
# };
# ```
#
# #### siftUp
# continuously swap the target node with its parent nodes until it's in its correct position.
# O(logn) time | O(1) space
#
# #### siftDown
# continuously swap the target node with its smaller direct child nodes until it's in its correct position.
# O(logn) time | O(1) space
#
# #### insert a value
# O(logn) time | O(1) space
# ```
# insert(value):
# append the value after the last position
# idx = index of the inserted element (last one)
# siftUp(idx)
# ```
#
# #### remove the top node
# O(logn) time | O(1) space
# ```
# remove():
# swap the first and the last values
# pop the last value out
# siftDown(0)
# ```
#
# #### build heap
# O(n) time | O(1) space
#
# It may look like building the heap takes O(nlogn) time, but counting the work more carefully shows it is bounded by about 2n swaps in total, so it is O(n):
# - nodes on the last level take O(0) time to siftDown
# - nodes on the second-to-last level take O(1) time to siftDown
# - nodes on the third-to-last level take O(2) time to siftDown
# - ...
# - the root node takes O(logn) time to siftDown
#
# Adding these contributions up gives roughly 2n operations in total.
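#
# The standard way to make this precise: a heap with n nodes has at most $\lceil n / 2^{h+1} \rceil$ nodes at height $h$, and sifting one of them down costs O(h), so the total cost is
#
# \begin{align}\sum_{h=0}^{\lfloor \log n \rfloor} \left\lceil \frac{n}{2^{h+1}} \right\rceil O(h) = O\left(n \sum_{h \ge 0} \frac{h}{2^{h}}\right) = O(n),\end{align}
#
# since the series $\sum_{h \ge 0} h/2^{h}$ converges (to 2).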
#
# Alternatively, we could start from the first value and build the heap by repeatedly calling siftUp() after each insertion, but that takes O(nlogn) time.
# ```
# buildHeap():
# finalIdx = length - 1
# finalParentIdx = (finalIdx - 1) / 2
# for i from finalParentIdx to 0:
# siftDown(i)
# ```
#
# ### applications
# - heapsort: O(nlogn)
# - Dijkstra's algorithm: O(|E|log|V|)
# - priority queues
# - selection: find the top k elements among n
#   - via sorting: O(nlogn)
#   - via a binary heap: O(n + klogn)
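# As a quick illustration of the selection use case with Python's built-in heapq module (a sketch):
import heapq
data = [9, 4, 7, 1, 8, 3, 6]
heap = list(data)
heapq.heapify(heap)                                # O(n) build
top3 = [heapq.heappop(heap) for _ in range(3)]     # k pops, O(klogn)
print(top3)                                        # [1, 3, 4]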
| algoExpert/min_heap_construction/solution.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + id="eSi7LsrV29jM"
import pandas as pd
import numpy as np
from scipy.sparse import csr_matrix
from tqdm import tqdm
# -
# # SVD
#
# One approach is to use an SVD to extract topics per author. If an author is viewed as the set of titles they have collaborated on, we can build a binary co-occurrence matrix with authors as rows and articles as columns, with dimensions $93,912 \times 423,380$.
# - [Sparse matrix](#Sparse-matrix)
# - [SVD](#SVD)
# - [Evaluation](#Evaluation)
# - [Cosine similarity](#Cosine)
# - [Euclidean distance](#Euclidean)
# - [Train-test](#Train-test)
# + [markdown] id="vcWsN5vnMzaL"
# <a name="Sparse-matrix"></a>
# ## Sparse matrix
#
# Because the matrix is huge and most of its entries are zero, it must be handled as a sparse matrix.
# + colab={"base_uri": "https://localhost:8080/", "height": 204} id="eXfgqMAF29jV" outputId="66755563-2f84-43db-c239-f002404a5a0a"
df1 = pd.read_csv('../Data/1990_2000_1_filtered_authorships.csv')
df2 = pd.read_csv('../Data/1990_2000_2_filtered_authorships.csv')
df = pd.concat([df1, df2])
df.head()
# + colab={"base_uri": "https://localhost:8080/"} id="rjPPUbsh29jY" outputId="09abd87f-d0e2-46d1-df48-752e0505199b"
df['id_article'].unique().size, df['author'].unique().size
# + id="HAaLve_z29jZ"
authors_dict = dict(zip(df['author'].unique(), np.arange(df['author'].unique().size)))
articles_dict = dict(zip(df['id_article'].unique(), np.arange(df['id_article'].unique().size)))
# + id="pEYIBAwo29jZ"
rows = [authors_dict[x] for x in df['author'].values]
cols = [articles_dict[x] for x in df['id_article'].values]
data = np.ones(df.shape[0])
X = csr_matrix((data, (rows, cols)))
# + [markdown] id="R1ScXCifM_bL"
# <a name="SVD"></a>
# ## SVD
#
# We will keep only the first 30 components of the SVD.
# + id="XaNxzZ_K29ja"
from sklearn.decomposition import TruncatedSVD
import pickle
# + colab={"base_uri": "https://localhost:8080/"} id="rQOOAHiX29jb" outputId="63526ef0-65f0-4325-beb7-7e90806e7393"
svd = TruncatedSVD(n_components=30, n_iter=10, random_state=42)
svd.fit(X)
# + colab={"base_uri": "https://localhost:8080/"} id="HIKnDzVu29jb" outputId="cee5d320-1c9a-4215-a0f2-caecb6c4db37"
X_svd = svd.transform(X)
X_svd.shape
# -
with open('SVD.pickle', 'wb') as f:
pickle.dump(X_svd, f)
# + [markdown] id="9Q5saI5rNG48"
# <a name="Evaluation"></a>
# ## Evaluation
# +
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.metrics.pairwise import euclidean_distances
from sklearn.metrics import classification_report, f1_score, accuracy_score
# + colab={"base_uri": "https://localhost:8080/", "height": 204} id="FovuU1Fz29jc" outputId="f8b5261f-c0e2-416a-ddcf-6f0f0b78226b"
df_test = pd.read_csv('../Data/sample_features_test2021-01-05.csv')
df_test = df_test[['source', 'target', 'connected']]
df_test.head()
# + [markdown] id="BUJlo1TENJwR"
# <a name="Cosine"></a>
# ### Cosine similarity
#
# When working with sparse vectors, cosine similarity is the best choice.
# + id="I83XMzmR70YY"
y_pred = []
for source, target in zip(df_test['source'], df_test['target']):
X_source = X_svd[authors_dict[source], :].reshape(1,-1)
X_target = X_svd[authors_dict[target], :].reshape(1,-1)
cos_sim = cosine_similarity(X_source, X_target)[0][0]
pred = 1 if cos_sim > 0.5 else 0
y_pred.append(pred)
df_test['cosine'] = y_pred
# + colab={"base_uri": "https://localhost:8080/"} id="aBX4Dlry-sWV" outputId="be874b34-a1e3-477d-e1d7-24241c2ed597"
print(classification_report(df_test['connected'], df_test['cosine']))
print('F1: {:.4f}'.format(f1_score(df_test['connected'], df_test['cosine'])))
print('Accuracy: {:.4f}'.format(accuracy_score(df_test['connected'], df_test['cosine'])))
# + [markdown] id="EWEoQGW2NhJ0"
# <a name="Euclidean"></a>
# ### Euclidean distance
#
# Euclidean distance does not work as well as cosine similarity.
# + id="KKEpO05y_WAA"
y_pred = []
for source, target in zip(df_test['source'], df_test['target']):
X_source = X_svd[authors_dict[source], :].reshape(1,-1)
X_target = X_svd[authors_dict[target], :].reshape(1,-1)
euclidean_dist = euclidean_distances(X_source, X_target)[0][0]
y_pred.append(euclidean_dist)
# Normalize so the distances can be compared on a common scale
y_pred = np.array(y_pred)/np.max(y_pred)
y_pred = [1 if y < 0.5 else 0 for y in y_pred]
df_test['euclidean'] = y_pred
# + colab={"base_uri": "https://localhost:8080/"} id="oHPOHaEVQFyT" outputId="5d37e4d3-bf23-4897-afbd-fd1e22fd2855"
print(classification_report(df_test['connected'], df_test['euclidean']))
print('F1: {:.4f}'.format(f1_score(df_test['connected'], df_test['euclidean'])))
print('Accuracy: {:.4f}'.format(accuracy_score(df_test['connected'], df_test['euclidean'])))
# -
# <a name="Train-test"></a>
# ## Train-test
#
# Now we will add one more feature to our data: the cosine similarity.
df_samples = pd.read_csv("../Data/sample_features2021-02-08.csv")
df_samples.head()
simil = []
for index, row in tqdm(df_samples.iterrows(), total=len(df_samples)):
a = svd.transform(X[authors_dict[row['source']]])
b = svd.transform(X[authors_dict[row['target']]])
simil.append(cosine_similarity(a,b)[0][0])
df_samples['cos_sim'] = simil
df_samples_t = pd.read_csv("../Data/sample_features_test2021-01-06.csv")
df_samples_t = df_samples_t.drop(['Unnamed: 0'], axis=1)
df_samples_t
simil = []
for index, row in tqdm(df_samples_t.iterrows(), total=len(df_samples_t)):
a = svd.transform(X[authors_dict[row['source']]])
b = svd.transform(X[authors_dict[row['target']]])
simil.append(cosine_similarity(a,b)[0][0])
df_samples_t['cos_sim'] = simil
df_samples.to_csv('training.csv', index=False)
df_samples_t.to_csv('test.csv', index=False)
| SVD/SVD.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: a330
# language: python
# name: a330
# ---
# # Lab 6: Pandas
# In this lab, we'll work through some of the basics of using Pandas, using a few different tabular data sets. Ultimately, one need not do anything particularly fancy with DataFrames for them to be useful as data containers. But we would like to highlight a few extra abilities these objects have that illustrate situations where we may actually have a strong reason to choose pandas over another library.
# ## Problem 1: HII regions + Planetary Nebulae measurements in M81
#
# For our first data set, we're going to look at a file (`table2.dat`), which contains measurements of the flux and intensity of various ions' line emission from a set of known emitting objects (PNs and HII regions) in the M81 galaxy.
#
# The columns of this file are `name`, `ion`, `wl`, `flux`, and `I` (intensity). Two of the columns are string-valued (name and ion), three are numerical-valued (wl, flux, I). This mix of strings and floats tells us, before we even decide how to read in this file, that `numpy` data structures won't be usable, as they demand all values in an array to have the same `dtype`.
#
# ### Problem 1.1
#
# Using the `pd.read_csv()` function shown in the lecture, read this data file into a dataframe called `df`, and print it.
# ```{hint}
# You can get a "pretty" visualization of a dataframe by simply typing its name into a jupyter cell -- as long as it's the last line of the cell, the dataframe will print more nicely than typing `print(df)`. This does not work outside of notebooks.
# ```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
# %matplotlib inline
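# One possible way to do the read (a sketch -- the exact `read_csv` arguments depend on how `table2.dat` is delimited; here we assume a comma-separated file with no header row, so we supply the column names described above):
# +
df = pd.read_csv('table2.dat', names=['name', 'ion', 'wl', 'flux', 'I'])
df
# -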
# ### Problem 1.2
#
# Though it doesn't show up in the clean representation above, the strings associated with the name and ion columns above have trailing and leading spaces that we don't want.
#
# Use a *list comprehension* to modify the data frame such that each value in the name and ion columns is replaced with a `.strip()` version of itself.
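# One way this could look (a minimal sketch):
# +
df['name'] = [name.strip() for name in df['name']]
df['ion'] = [ion.strip() for ion in df['ion']]
# -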
# ### Problem 1.3
#
# Write a function `select_object` which takes in as an argument the name of an HII region or planetary nebula, and filters the dataframe for only the entries for that object using `df.loc[]`. Consider having the dataframe be an optional argument you set to `df`, the dataframe we are
# working with.
#
# Have your function take in an optional argument `drop_empty=True` which additionally selects only those rows where the flux/intensity is **not** zero.
# +
# Your function should produce the following
# -
# ### Problem 1.4
#
# Write a function `select_ion_by_wavelength()` which takes in the name of an ion and its wavelength (and a dataframe), and returns the filtered dataframe for all objects, but only the entries for the selected ion and wavelength.
#
# As before, have a `drop_empty` optional argument to not include entries where the flux and intensity are zero.
#
# Additionally, as each row is now uniquely identified by the name of the PN/HII region, set the index to be the name column.
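#
# A hedged sketch of one possible implementation, again assuming the column names from Problem 1.1:
# +
def select_ion_by_wavelength(ion, wl, drop_empty=True, df=df):
    """Filter df for a given ion and wavelength, indexed by object name."""
    sel = df.loc[(df['ion'] == ion) & (df['wl'] == wl)]
    if drop_empty:
        sel = sel.loc[(sel['flux'] != 0) & (sel['I'] != 0)]
    return sel.set_index('name')
# -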
# ### Problem 1.5
# It will be helpful to know, for a given ion, which wavelengths are available and have data in the dataframe. Write a function `get_wavelengths_by_ion()` that determines, for a given input ion, which wavelengths are available.
#
# **Bonus + 0.5: Some ions of forbidden transitions like `[OII]` have brackets in the name. Add a bit to your get_wavelengths_by_ion code that allows the user to enter either `"[OII]"` or `"OII"` and get the same answer.**
#
# Additionally, make a convenience function `get_ions()` that just returns the full list of ions represented in the dataframe.
#
# ```{hint}
# The `.unique()` method in pandas will be useful here.
# ```
#
# **Show that your function works by selecting the wavelengths for `[NII]` and `[OII]`.**
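#
# One possible sketch (ignoring the bracket-handling bonus) looks like the following:
# +
def get_wavelengths_by_ion(ion, df=df):
    """Return the unique wavelengths available for a given ion."""
    return df.loc[df['ion'] == ion, 'wl'].unique()

def get_ions(df=df):
    """Return the unique ions represented in the dataframe."""
    return df['ion'].unique()
# -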
# +
# Your function should produce the following
# +
# Show [NII]
# +
# Show [OII]
# -
# ### Problem 1.6
#
# Rather than having all these convenience functions littered around our code, let's go ahead and make a class, `FluxTable`, which initializes with our dataframe, and then has all of the functions created above as methods. The input DataFrame, `df`, should be accessible as an attribute as well.
#
# When you're done, you should be able to do something like the following
# ```
# ions = FluxTable(df)
# print(ions.df)
# ions.get_ions()
# ions.get_wavelengths_by_ions('[OII]')
# PN3m = ions.select_object('PN3m')
# ```
# +
# Your Class Here
# -
# **Show that your class works by running the above examples.**
# +
# Run Get ions
# -
# ### Bonus: (+1) Problem 1.7
#
# Finally, let's add one final method to our class. This method will be called `query`, and it will act as a bit of a "catch all", allowing the user to query for certain conditions as desired.
#
# Your `query` method should take as its primary argument a string containing a comma-separated list of desired columns. It should then have optional arguments for `name`, `ion`, and `wl`, which are by default set to `None`. For name and ion, the goal is to allow the user to specify specific ones. For `wl`, we'll go one step further and allow either a specific wavelength, or a range of wavelengths input as a string of the form `>4343` or `<3050` or `2010-5000`.
#
# The usage of this method will look something like
#
# ```
# ft.query('name,flux',ion='[OII]',wl='3000-5000')
# ```
#
# You will of course need to do some string checking (particularly with wl) to figure out what the user wants, and then you can use your filtering methods you already wrote to successfully construct a result dataframe to return.
# ## Problem 2
#
# In this problem, we're going to use the [3DHST catalog](https://archive.stsci.edu/prepds/3d-hst/), which contains numerous measurements of galaxies at high redshift taken with HST. This program was led at Yale(!) and the "3D" refers to the fact that beyond just imaging, spectroscopic information was also obtained.
#
# We'll be using a subset of the catalog set up for the GOODS-South field, a well studied patch of the sky. The data are split across four `fits` files -- but we'll be using `pandas` to join them together!
#
# - the `.cat` file contains the primary catalog
# - the `.fout` file contains the output of `FAST`, a quick template fitting program for galaxies.
# - the `.RF` file contains the Rest Frame (de-redshifted) colors of the galaxies in common bands
# - the `.zout` file contains redshift estimates for all galaxies (either spec-z or photo-z) made using the EAZY redshift fitting code (also Yale!)
#
# ### Problem 2.1
#
# **Load the four datasets into Python, and create dataframes for each.** For ease of following the solutions, we suggest you name these
#
# - `cat_df` for the catalog
# - `fast_df` for the fast output
# - `rf_df` for the RF file
# - `z_df` for the redshifts
#
# **Examine each of these dataframes to see what types of columns they have.**
#
# ```{hint}
# Remember that the default extension for tabular data (as this is) will be 1, not 0 as we are used to for images. You can run pd.DataFrame() directly on the data attribute of this extension
# ```
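#
# A hedged sketch of the pattern for one of the four files is shown below; the filename is a placeholder, so substitute the actual name of your local copy of the catalog.
# +
from astropy.io import fits
import pandas as pd

hdul = fits.open('goodss_3dhst.v4.1.cat.FITS')  # hypothetical filename -- use your own path
cat_df = pd.DataFrame(hdul[1].data)             # the tabular data live in extension 1
# -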
from astropy.io import fits
# +
# create cat_df
# +
#When you look at cat_df it should look like this
# +
# Make fast_df
# +
# look at it
# +
#Make rf_df
# +
# look at it
# +
# make z_df
# +
# look at it
# -
# ### Problem 2.2
#
# You should notice that every one of these tables has 50507 rows. This is a relief! It means the creators were consistent, and that each object has a row in each table.
#
# You should also notice one column, `id`, is a unique identifier for each row in each table (and is consistent across tables). Pandas has assigned its own index (it's off by 1 from the id column), but we might as well use `id`.
#
# **For each of your four dataframes, set 'id' to be the index column. Show one of the df's to show this worked.**
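#
# A minimal sketch, assuming the dataframe names suggested in Problem 2.1:
# +
cat_df = cat_df.set_index('id')
fast_df = fast_df.set_index('id')
rf_df = rf_df.set_index('id')
z_df = z_df.set_index('id')
# -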
# +
# overwrite dataframes to have 'id' as index
# +
# Show one of your dataframes to confirm this worked.
# -
# ### Problem 2.3
#
# Instead of working with these four dataframes separately, let's join them all into one MEGA DATAFRAME.
#
# By setting 'id' as the index for each, we'll be able to merge the dataframes using `pd.merge` with the `left_index` and `right_index` parameters set to true.
#
# **Create your mega dataframe, which we'll simply call `df`, by consecutively merging the first two, then third and fourth dataframes.**
#
# ```{hint}
# You should end up with 215 columns in your final dataframe.
# ```
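#
# One way the consecutive merges could look (a sketch, assuming the four dataframes are already indexed by 'id'):
# +
df = pd.merge(cat_df, fast_df, left_index=True, right_index=True)
df = pd.merge(df, rf_df, left_index=True, right_index=True)
df = pd.merge(df, z_df, left_index=True, right_index=True)
# -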
# +
#Merge the dataframes
# -
# ### Problem 2.4
#
# Let's take a look at the redshift distribution of sources in the catalog.
#
# There are several estimates of the photometric redshift, the one we want to use is `z_peak` for photometry, or, if available, `z_spec_x`, from spectroscopy.
#
# **What percentage of the catalog have measured spectroscopic redshifts? The `z_spec_x` column is set to `-1` if no spectroscopic redshift exists.**
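#
# A short sketch of the calculation, assuming the merged dataframe `df` from Problem 2.3:
# +
has_spec = df['z_spec_x'] != -1
print(f"{100 * has_spec.sum() / len(df):.1f}% of sources have spectroscopic redshifts")
# -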
# +
# calculate here
# -
# ### Problem 2.5
#
# Write a function `get_redshift()` which takes in an object ID, and returns `z_spec_x` for that source if it's not -1, otherwise returns `z_peak`. Because `id` is a special word in python, we suggest using `objid` as the input, and setting df=df as an optional argument. You can make this a memory lite function by using `df.loc[]` to pull the row and only the two columns you need for this.
#
# There are two additional "flagged" values: -99.0 and -99.9 -- Have your function output np.nan if this is the value in the table.
#
# Your function should return the redshift as well as a flag (string) 's' or 'p', or 'f' for spectroscopic or photometric (or fail, if you're returning nan).
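#
# A hedged sketch of one possible implementation; it assumes `np` is the numpy import from earlier and treats a flagged value of -99.0/-99.9 in the selected column as a failure:
# +
def get_redshift(objid, df=df):
    """Return (z, flag) with flag 's' (spectroscopic), 'p' (photometric), or 'f' (fail)."""
    z_spec, z_peak = df.loc[objid, ['z_spec_x', 'z_peak']]
    z, flag = (z_spec, 's') if z_spec != -1 else (z_peak, 'p')
    if z in (-99.0, -99.9):
        return np.nan, 'f'
    return z, flag
# -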
# +
#Your get redshift function here
# -
# Confirm your function works by testing it on objid 150; your results should match mine below:
# +
# Run your function
# -
# ### Problem 2.6
#
# Now that we can get the best redshift for each row, use a list comprehension to grab these values for every object. You can index the output tuple of your function at 0 to drop the flag for now.
#
# Once you have this, plot a histogram of the redshifts, using `fig, ax = plt.subplots()`. Make your plot nice!
#
# ```{note}
# My list comprehension takes ~15 seconds to run. It's a lot of data! If you wish, you may try to find a more optimized solution built on a full column operation rather than a loop. One possibility is to take the spec-z column, mask any bad values, and then replace those entries in the z-phot column and plot that...
# ```
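#
# If you'd rather avoid the loop, here is a sketch of the column-based approach hinted at above (mask the flagged values, then fill the spec-z gaps with phot-z):
# +
z_spec = df['z_spec_x'].replace([-1, -99.0, -99.9], np.nan)
z_phot = df['z_peak'].replace([-99.0, -99.9], np.nan)
z_best = z_spec.fillna(z_phot).values
# -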
# +
#Get all (best) redshifts
# +
# We suggest making it an array after the list comprehension.
# -
# ```{hint}
# You'll want to index just the non-nan values to make your plot. You can access the non-nan elements in an array via `arr[~np.isnan(arr)]`, with the tilde serving to invert the selection.
# ```
# +
# Your plot here, mine for reference
# -
# ### Problem 2.7
#
# Now do the same, but separately plot the distributions of redshift for those with spectroscopic redshifts and those that only have photometric. For this, you'll want to set `density=True` in your `hist`, and play with the linestyles and opacities so that both are visible.
#
# **Bonus (+0.5): Use KDE from seaborn to smoothly represent the distribution**.
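#
# A sketch of the two-histogram version, reusing `z_best` and `has_spec` from the sketches above (skip this if you built your own arrays):
# +
good = ~np.isnan(z_best)
fig, ax = plt.subplots(figsize=(8, 5))
ax.hist(z_best[good & has_spec.values], bins=40, density=True, alpha=0.6, label='spectroscopic')
ax.hist(z_best[good & ~has_spec.values], bins=40, density=True, histtype='step', label='photometric only')
ax.set_xlabel('redshift')
ax.set_ylabel('normalized counts')
ax.legend()
# -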
# +
# build specz list
# +
# build photz list
# +
# Plot it; here's mine!
# -
# **Do the differences between the two distributions make sense? Why or why not?**
# +
# answer here
# -
# ## Problem 3
#
# The [UVJ diagram](https://iopscience.iop.org/article/10.3847/2041-8213/ab2f8c/pdf) is a useful diagnostic tool in extragalactic astronomy which allows one to relatively accurately diagnose a galaxy as star forming or quiescent, even in the presence of dust. You can read more at the link if you're unfamiliar. It is composed by plotting the "U-V" color of the galaxy against the "V-J" color. You'll likely know U and V (these are from the Johnson-Cousins filter system). J is a filter in the near infrared.
#
# In this problem, we're going to write a function that can create a UVJ diagram for subsets of our data, cutting on mass and redshift.
#
# You'll need to access the following columns in the data (besides redshift, which you've already handled):
#
# - stellar mass: the mass of all the stars in the galaxy. (column: `lmass`, flagged value of -1)
# - star formation rate: rate at which galaxy is forming stars. (column: `lsfr`, flagged value of -99.0)
# - U band flux density (column: `l153`, flagged value of -99.0)
# - V band flux density (column: `l155`, flagged value of -99.0)
# - J band flux density (column: `l161`, flagged value of -99.0)
#
# ### Problem 3.1
#
# For step one, we need to be able to filter our dataframe for particular mass and redshift ranges.
#
# Write a function, `select_galaxies()`, which takes as arguments `M, delta_M, z, delta_z` (and our dataframe).
# It should then return a df of only systems from M-delta_M to M+delta_M, and z-delta_z to z+delta_z. The columns it should return are the ones specified above, plus 'z'.
#
# There is actually a column in `rf_df` called `z`, that contains the spec_z if available or the peak z if not. At the time of writing, I cannot determine why this column was not included in the merge. In any case, set `df['z']` equal to `rf_df.z` before continuing, as you'll use it below.
#
#
# ```{note}
# All masses and sfrs are in log units.
# ```
#
# Try your function out using a mass of 10, delta M of 0.5 (i.e., a bin from 9.5 - 10.5), a redshift of 1, and a delta z of 0.25.
# +
# Add rf_df.z to the dataframe df
# -
# Define your select galaxies function
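#
# A hedged sketch of one way to write this, assuming `df['z']` has been set from `rf_df.z` as described above:
# +
def select_galaxies(M, delta_M, z, delta_z, df=df):
    """Return galaxies within M +/- delta_M in log stellar mass and z +/- delta_z in redshift."""
    cols = ['lmass', 'lsfr', 'l153', 'l155', 'l161', 'z']
    mask = (df['lmass'].between(M - delta_M, M + delta_M) &
            df['z'].between(z - delta_z, z + delta_z))
    return df.loc[mask, cols]
# -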
# Here's the output for the provided example
# ### Problem 3.2
#
# Great, we can now get subsamples in mass/redshift bins. This is important because the UVJ diagram actually changes as a function of mass and redshift.
#
# Next we need to get the colors out. Write a function `get_colors()` which takes the same arguments as your `select_galaxies()` function above. Inside, it should run `select_galaxies`, passing through the arguments, and then from the resulting dataframe calculate U-V and V-J (see below). Add these as columns to said dataframe with names 'U-V' and 'V-J' and return it.
#
# Run this function with the same mass/redshift bin from above and look at it.
#
# ```{warning}
# As noted above, the U,V, and J band data are in Fnu (flux densities). Thus, a color is computed via -2.5*log10(Lfilter1/Lfilter2)
# ```
#
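# A minimal sketch, assuming the `select_galaxies()` sketch above and the flux-density column names listed in the problem:
# +
def get_colors(M, delta_M, z, delta_z, df=df):
    """Add rest-frame U-V and V-J colors (from the l153/l155/l161 flux densities) to the selection."""
    sel = select_galaxies(M, delta_M, z, delta_z, df=df)
    sel['U-V'] = -2.5 * np.log10(sel['l153'] / sel['l155'])
    sel['V-J'] = -2.5 * np.log10(sel['l155'] / sel['l161'])
    return sel
# -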
# +
#Define get colors
# +
#Here's the output you're looking for.
# -
# ### Problem 3.3
#
# Now that we can easily grab U-V and V-J colors for subsamples, we're ready to plot!
#
# First, set your xlim and ylim to (0,2) in V-J (x axis) and (0,2.8) in U-V (y axis).
#
# Once you have the distribution plotted nicely, use the definitions of the bounding box provided in [Whitaker et al. 2011](https://iopscience.iop.org/article/10.1088/0004-637X/735/2/86/pdf) (Eqns 15, Fig 17 for example) to draw the box where quiescent galaxies sit.
#
# Finally, let's add the galaxies. For this example, use a log mass of 9.5 with a delta M of 2 (so, galaxies from $10^{7.5}$ $M_{\odot}$ to $10^{11.5}$ $M_{\odot}$), in the redshift range of 0.5 to 1.5. Can you identify the quiescent sequence?
#
# ### Bonus (+2)
#
# (+1) Now that you can easily plot up a UVJ diagram for a given mass/redshift bin, make a plot with 6 panels. Across the 3 columns, plot the UVJ diagram in the redshift bins 0-0.5, 0.5-1.0, 1.0-2.0. In the top row, include galaxies in the mass range 8-9.5, and in the bottom row, 9.5-11.
#
# (+1) Feeling extra fancy? Use the conditions on the UVJ quiescent box to color the quiescent galaxies red and the star forming galaxies blue in your plots.
#
#
# ## Wrap Up
#
# We've barely scratched the surface of what we can do with `pandas`, but we hope we've shown that if you're planning to work with "large" amounts of data (say, 200 columns and 500,000 rows), pandas is a great tool for the job.
| Lab6/Lab6.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda root]
# language: python
# name: conda-root-py
# ---
import pandas as pd
import graphlab as gl
sf = gl.SFrame('../data/jokes.dat', format = 'tsv')
sf[3]
# +
with open('../data/jokes.dat','r') as f:
    df = pd.DataFrame(i for i in f)
print(df)
# -
df.iloc[4]
numpymatrx = df.as_matrix()
numpymatrx
numpymatrx[0:30]
df_n = pd.read_table('../data/jokes.dat')
df_n
| src/Untitled.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .sos
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: SoS
# language: sos
# name: sos
# ---
# + [markdown] kernel="SoS" tags=[]
# # The `sos_targets` data type
# + [markdown] kernel="SoS" tags=[]
# * **Difficulty level**: intermediate
# * **Time needed to learn**: 25 minutes or less
# * **Key points**:
# * All SoS step variables such as `_input`, `_output`, `step_input` and `step_output` are of type `sos_targets`
# * `sos_targets` consists of **labeled SoS targets**, has optional **groups**, special **format specification**, and some member functions.
# + [markdown] kernel="SoS" tags=[]
# ## SoS targets
# + [markdown] kernel="SoS" tags=[]
# A **target** is an object that can be created and detected. A SoS step can take a list of targets as input, check the existence of a list of dependent targets, and produce a list of targets as output. **Input, output and dependent targets for steps and substeps are exposed to you as special variables `_input`, `_output`, `_depends`, `step_input`, and `step_output` that are all in type `sos_targets`**.
# + [markdown] kernel="SoS" tags=[]
# `sos_targets` contains a list of targets (of type `BaseTarget`, if you are curious), which can be `file_target` that represents a file on the file system, `sos_variable` that represents a defined variable, `R_Library` that represents an R library, or some other types. Please refer to [SoS targets](targets.html) for details about SoS targets.
# + [markdown] kernel="SoS" tags=[]
# ## `sos_targets` data type
# + [markdown] kernel="SoS" tags=[]
# ### Construction of `sos_targets`
# + [markdown] kernel="SoS" tags=[]
# In SoS, the `input` statement mostly creates a `step_input` object with provided parameters. That is to say,
#
# ```
# input: 'a.txt', 'b.txt', group_by=1
# ```
# is almost equivalent to
#
# ```
# step_input = sos_targets('a.txt', 'b.txt', group_by=1)
# ```
# and we can use `sos_targets` objects directly in an `input` statement in more complicated cases.
# + [markdown] kernel="SoS" tags=[]
# Variable `_input` represents the input targets for each substep (`groups` of `sos_targets` as we will see later).
#
# In the cases that a step contains only one substep, `step_input` is the same as `_input`. For example, variables `step_input` and `_input` of the following step are `sos_targets` objects with a single `file_target` object:
# + kernel="SoS" tags=[]
input: 'sos_datatypes.ipynb'
print(f"step_input={step_input:r}")
print(f"_input={_input:r}")
sh: expand=True
wc -l {_input}
# + [markdown] kernel="SoS" tags=[]
# and if you have multiple input files, you can pass them altogether as a `sos_targets` with two `file_target`
# + kernel="SoS" tags=[]
input: 'sos_datatypes.ipynb', 'sos_magics.ipynb'
print(f"step_input={step_input:r}")
print(f"_input={_input:r}")
sh: expand=True
wc -l {_input[0]}
wc -l {_input[1]}
# + [markdown] kernel="SoS" tags=[]
# or separately as two groups of inputs:
# + kernel="SoS" tags=[]
input: 'sos_datatypes.ipynb', 'sos_magics.ipynb', group_by=1
print(f"\nstep_input={step_input:r}")
print(f"_input={_input:r}")
sh: expand=True
wc -l {_input}
# + [markdown] kernel="SoS" tags=[]
# In this case, the step input contains two `file_target`:
# ```
# step_input = sos_targets('SoS_Syntax.ipynb', 'SoS_Magics.ipynb')`
# ```
# but the step process is executed twice, with
# ```
# _input = sos_targets('SoS_Syntax.ipynb')
# _input = sos_targets('SoS_Magics.ipynb')
# ```
# respectively. Because `_input` contains only one element, it is not necessary to use `_input[0]` in the script.
# + [markdown] kernel="SoS" tags=[]
# ### A list of targets
# + [markdown] kernel="SoS" tags=[]
# `sos_targets` type keeps a list of `BaseTargets` objects. It can be initialized from one or more `str` (for `file_target`) or other targets. Lists of targets or dictionaries of targets (discussed later) will be flattened and concatenated, so the end result will always be a one-dimensional list.
#
# The resulting object behaves like a sequence that can be sliced and iterated. For example, the following statement creates a `sos_targets` object with three filenames from a single filename and a list of two filenames:
# + kernel="SoS" tags=[]
targets = sos_targets('a.txt', ['b.txt', 'c.txt'])
targets
# + [markdown] kernel="SoS" tags=[]
# You can access one or more elements of a `sos_targets` or iterate through it
# + kernel="SoS" tags=[]
targets[2]
# + kernel="SoS" tags=[]
targets[1:]
# + kernel="SoS" tags=[]
for t in targets:
    print(t)
# + [markdown] kernel="SoS" tags=[]
# To convert a `sos_targets` object to a regular list, you can use function `list`
# + kernel="SoS" tags=[]
list(targets)
# + [markdown] kernel="SoS" tags=[]
# or slice part of the `sos_targets` using slices
# + kernel="SoS" tags=[]
type(targets[1:])
# + [markdown] kernel="SoS"
# ### Named paths
# + [markdown] kernel="SoS"
# Under the hood paths are presented as type `path` (derived from `pathlib.Path`) and file targets are presented as type `file_target` that is derived from `path`. Paths that starts with `~` and `#` will be expanded automatically where
#
# 1. Paths that starts with `~` will be expanded with `os.path.expanduser`.
# 2. Paths that starts with `#name` will be expanded according to the hosts that the workflow is executed. The `name` should be defined in the host definition under the keys `paths` or `shared`.
# + kernel="SoS"
p = path('~/a.txt')
print(repr(p))
print(p)
# + kernel="SoS"
p = path('#home/a.txt')
print(repr(p))
print(p)
# + [markdown] kernel="SoS"
# Now, if the same workflow is executed on `docker`, a remote host with different `#home`, the output is different.
# + kernel="SoS"
# %run -r docker -c ~/docker.yml -v1
p = path('#home/a.txt')
print(repr(p))
print(p)
# + [markdown] kernel="Bash" tags=[]
# ### Format specification
# + [markdown] kernel="SoS" tags=[]
# `sos_targets` **accepts a list of format options to easily format path in different formats**. Here is a summary of format options with their effects:
# + [markdown] kernel="SoS" tags=[]
#
# | convertor | operation | effect | operand | output |
# | :----------|:----| :----- | :----- | :-------|
# | `a` | absolute path | `abspath()` | `test.sos` | `/path/to/test.sos` |
# | `b` | base filename | `basename()` | `{home}/SoS/test.sos` | `test.sos` |
# | `e` | escape | `replace(' ', '\\ ')` | `file 1.txt` | `file\ 1.txt`|
# | `d` | directory name | `dirname()` or `'.'` | `/path/to/test.sos` | `/path/to` |
# | `l` | expand link | `realpath()` | `test.sos` | `/realpath/to/test.sos` |
# | `n` | remove extension | `splitext()[0]` | `/path/to/test.sos` | `/path/to/test` |
# | `p` | posix name | `replace('\\', '/')...` | `c:\\Users` | `/c/Users` |
# | `q` |quote | `quoted()` | `file 1.txt` | `'file 1.txt'`|
# | `r` | repr | `repr()` | `file.txt` | `'file.txt'` |
# | `s` | str | `str()` | `file.txt` | `file.txt` |
# | `R` | resolve remote and other targets | `.resolve()`| `remote('a.txt')` | `a.txt`|
# | `U` | undo expanduser | `replace(expanduser('~'), '~')` | `/home/user/test.sos` | `~/test.sos` |
# | `x` | file extension | `splitext()[1]` | `~/SoS/test.sos` | `.sos` |
# | `,` | join with comma | `','.join()` | `['a.txt', 'b.txt']` | `a.txt,b.txt`|
#
# + [markdown] kernel="SoS" tags=[]
# These format options allow you to pass filenames to scripts in different formats. For example, it would be perfectly OK to pass `~/a.txt` to a shell script, but a `u` formatter should be added if you are passing the filename to a script that does not understand `~` in filenames. For example,
# + kernel="SoS" output_cache="[]" tags=[]
# %preview -n name filename basefilename expanded parparname shortname
file = sos_targets('~/sos/examples/update_toc.sos')
name = f"{file:n}"
filename = f"{file:b}"
basefilename = f"{file:bn}"
expanded = f"{file:u}"
parparname = f"{file:ddb}"
shortname = f"{file:U}"
# + [markdown] kernel="SoS" tags=[]
# An important difference between the formatting of `sos_targets` and regular lists of `BaseTarget` is that **formatting is applied to each item and the results are joined by a space or comma**. For example, whereas a regular list is formatted as a list
# + kernel="SoS" tags=[]
target_list = ['a.txt', 'b.txt', 'c.txt']
f"{target_list}"
# + [markdown] kernel="SoS" tags=[]
# A `sos_targets` is formatted as
# + kernel="SoS" tags=[]
f"{targets}"
# + [markdown] kernel="SoS" tags=[]
# or separated by `,` with format option `","`
# + kernel="SoS" tags=[]
f"{targets:,}"
# + [markdown] kernel="SoS" tags=[]
# or after formatting each element with specified formatter
# + kernel="SoS" tags=[]
f"{targets:r,}"
# + [markdown] kernel="SoS" tags=[]
# ### `sos_targets` with a single target
# + [markdown] kernel="SoS" tags=[]
# One particular consequence of this format rule is that a `sos_targets` with only one element will be formatted exactly like a single target so you can use `_input` (a `sos_targets`) in place of `_input[0]` (a `file_target`) if you know there is only one target inside `_input`:
# + kernel="SoS" tags=[]
single = sos_targets('sos_datatypes.ipynb')
f"{single[0]} is the same as {single}"
# + [markdown] kernel="SoS" tags=[]
# As a matter of fact, if a `sos_targets` has only one element, it will pass unrecognized attributes and functions to this element, so that
# + kernel="SoS" tags=[]
single.suffix
# + kernel="SoS" tags=[]
single.resolve()
# + kernel="SoS" tags=[]
import os
os.path.getsize(single)
# + [markdown] kernel="SoS" tags=[]
# Basically, **you can use `_input` exactly as `_input[0]` if there is only one `file_target` in `_input`**.
# + [markdown] kernel="SoS" tags=[]
#
# ### Attributes of target
# + [markdown] kernel="SoS" tags=[]
# Targets in `sos_targets` can be associated with arbitrary attributes. These attributes are usually assigned with option `paired_with` of an `input` statement.
#
# Option `paired_with` accepts a dictionary and assigns attributes to each of the targets with specified values. For example,
# + kernel="SoS" tags=[]
!touch a.txt b.txt
input: 'a.txt', 'b.txt', paired_with={'sample': ['A', 'B']}
print(_input[0].sample)
print(_input[1].sample)
# + [markdown] kernel="SoS" tags=[]
# Although targets and their attributes are usually set in an `input` statement, you can create targets and set attributes directly. For example
# + kernel="SoS" tags=[]
file_a = file_target('a.txt').set('sample', 'A')
print(file_a.sample)
print(file_a.get('sample'))
# + [markdown] kernel="SoS" tags=[]
# Here the `target.set(name, value)` function sets an attribute to the `target`, `target.get(name, default=None)` get the value of attribute `name`, and returns `default` if `name` is not a valid attribute. It is therefore a safer way to retrieve an attribute than `target.name` if you are uncertain if attribute `name` exists for `target`.
# + [markdown] kernel="SoS" tags=[]
# ### Labels of targets
# + [markdown] kernel="SoS" tags=[]
# Targets in a `sos_targets` have an attribute `label`, which corresponds to the step in which the target is specified (input) or generated (output). For example, the `label` of a `sos_targets` that is directly specified in a step is the name of the step.
# + kernel="SoS" tags=[]
# %run -s force -v1
[step_10]
input: 'sos_datatypes.ipynb'
print(_input.labels)
# + [markdown] kernel="SoS" tags=[]
# If you have multiple inputs, you can separate them into different groups using keyword arguments
# + kernel="SoS" tags=[]
!touch a.bam b.bam a.bai b.bai
input: bam=['a.bam', 'b.bam'], bai=['a.bai', 'b.bai']
print(_input)
print(_input.labels)
# + [markdown] kernel="SoS" tags=[]
# If the input target is inherited from another step, the source will be the name of that step.
# + kernel="SoS" tags=[]
# %run -v0
[10]
output: 'a.txt'
_output.touch()
[11]
print(_input.labels)
# + [markdown] kernel="SoS" tags=[]
# In a more complex case when the source comes from multiple input steps and the present step, the `labels` attribute points out the source of each target:
# + kernel="SoS" tags=[]
!touch c.txt
# %run -v0
[step_10]
output: 'a.txt'
_output.touch()
[step_20]
output: 'b.txt'
_output.touch()
[step_30]
input: 'c.txt', output_from(['step_10', 'step_20'])
print(_input)
print(_input.labels)
# + [markdown] kernel="SoS" tags=[]
# Note, however, that the use of a keyword argument will override the default source
# + kernel="SoS" tags=[]
!touch c.txt
# %run -v1
[step_10]
output: 'a.txt'
_output.touch()
[step_20]
output: 'b.txt'
_output.touch()
[step_30]
input: 'c.txt', prev=output_from(['step_10', 'step_20'])
print(_input)
print(_input.labels)
# + [markdown] kernel="SoS" tags=[]
# The `source` information can be used to select subsets of targets according to their labels. For example, `_input['prev']` would generate a `sos_targets` with all targets from source `prev`.
# + kernel="SoS" tags=[]
!touch c.txt
# %run -v1
[step_10]
output: 'a.txt'
_output.touch()
[step_20]
output: 'b.txt'
_output.touch()
[step_30]
input: 'c.txt', output_from(['step_10', 'step_20'])
print(_input['step_10'])
print(_input['step_10'].labels)
# + [markdown] kernel="SoS" tags=[]
# ### `groups` of `sos_targets`
# + [markdown] kernel="SoS" tags=[]
# As we have seen, targets in a `sos_targets` can be grouped in many ways and `_input` contains subsets of the targets and is the input for each substep. For example, in the following example, the 4 input files are grouped into two groups of the same size. The step is executed twice, each time for a different group. `step_input.groups` contains a list of `sos_targets` that becomes `_input` of the substep.
# + kernel="SoS" tags=[]
!touch c.txt d.txt
input: 'a.txt', 'b.txt', 'c.txt', 'd.txt', group_by=2
print(f'\nGroup {_index}')
print(step_input.groups)
print(_input)
# + [markdown] kernel="SoS" tags=[]
# You usually do not need to access `groups` of `sos_targets` directly but knowing the existence of `groups` would help you understand how groups are passed from one step to another.
#
# For example, in the following workflow, when step `10` obtains `output_from` step `A`, it obtains a `step_output` with 4 groups, which then becomes the `_input` of each substep of step `10`.
# + kernel="SoS" tags=[]
# %run -v1
[A]
input: for_each=dict(i=range(4))
output: f'test_{i}.txt'
_output.touch()
[10]
input: output_from('A')
print(step_input)
print(_input)
# + [markdown] kernel="SoS" tags=[]
# ### `zap` file targets
# + [markdown] kernel="SoS" tags=[]
# `sos_targets` accepts the `zap()` function, which `zap`s all file targets in this list. This technique is usually used to remove large intermediate files during the execution of the workflow. For example, if you have a workflow that downloads and processes large files, you can do something like
#
# ```sos
# [download: provides='{file}.fastq']
# download: expand=True
# http://some_url/{file}.fastq
#
# [default]
# input: [f'{x}.fastq' for x in range(1000)], group_by=1
# output: _input.with_suffix('.bam')
# sh: expand=True
# process _input to _output
#
# _input.zap()
# ```
#
# In this example, 1000 `fastq` files are downloaded and processed, but the input files are zapped after they are processed. Although the files have been removed, re-running the workflow will not download and process the files again because the downloaded files are still considered by SoS to exist.
| src/user_guide/sos_targets.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Network
#
# Copyright (c) 2017 <NAME>
#
# Use of this source code is governed by an MIT-style license that can be found in the LICENSE file at
# https://github.com/miloiloloo/diploma_2017_method/blob/master/LICENSE
# +
import numpy as np
import pickle
import math
import matplotlib.pyplot as plt
# %matplotlib inline
import tensorflow as tf
from keras.models import Sequential
from keras.layers import Input, Dense, Convolution2D, MaxPooling2D, UpSampling2D,core
from keras.layers import BatchNormalization, Activation, Reshape, LeakyReLU, Dropout, Flatten,Cropping2D
from keras.models import Model
from keras.datasets import mnist
from keras.optimizers import Adam
from keras.callbacks import Callback
from keras import backend as K
from keras.engine.topology import Layer, merge
from keras.engine import InputSpec
from keras import activations, initializations, regularizers, constraints,callbacks
from keras import backend as K
from sklearn.manifold import TSNE
import sklearn.metrics
from scipy import ndimage
import random
from datetime import datetime
# -
def load_learning_data(file_path):
''' Load data from learning data file and check it
It returns (assemblies, probabilities)
'''
''' Load data from learning data file '''
try:
f = open(file_path, 'rb')
save = pickle.load(f)
assemblies = save['assemblies']
probabilities = save['probabilities']
f.close()
del save
except Exception as e:
print("Unable to load the file: " + file_path)
raise
''' Check learning data '''
if assemblies.shape[0] != probabilities.shape[0]:
print("Incorrect sizes: assemblies (" + str(assemblies.shape[0]) + ") and probabilities (" + str(probabilities.shape[0]) + "). They must be equal")
raise Exception
return (assemblies, probabilities)
def extract_train_valid_test_data(assemblies, probabilities, train_size, valid_size, test_size):
''' Separate learning data to train, valid and test data
It returns (train_assemblies, train_probabilities, valid_assemblies, valid_probabilities, test_assemblies, test_probabilities)
'''
''' CHECK INPUT '''
''' Incorrect input 1: incorrect sizes of assemblies and probabilities '''
if assemblies.shape[0] != probabilities.shape[0]:
raise Exception
''' Incorrect input 2: size of train/valid/test sets can't be negative '''
if train_size < 0 or valid_size < 0 or test_size < 0:
raise Exception
''' Incorrect input 3: incorrect sizes of train, valid and test_size'''
if train_size + valid_size + test_size > assemblies.shape[0]:
raise Exception
''' ALGORITHM & OUTPUT '''
''' Assemblies and indecies permutation '''
permutation_indecies = np.random.permutation(assemblies.shape[0])
assemblies = assemblies[permutation_indecies, :, :, :]
probabilities = probabilities[permutation_indecies, :]
''' Train set '''
train_assemblies = assemblies[0 : train_size, :, :, :]
train_probabilities = probabilities[0 : train_size, :]
''' Valid set '''
valid_assemblies = assemblies[train_size : train_size + valid_size, :, :, :]
valid_probabilities = probabilities[train_size : train_size + valid_size, :]
''' Test set '''
test_assemblies = assemblies[train_size + valid_size : train_size + valid_size + test_size, :, :, :]
test_probabilities = probabilities[train_size + valid_size : train_size + valid_size + test_size, :]
return (train_assemblies, train_probabilities, valid_assemblies, valid_probabilities, test_assemblies, test_probabilities)
def print_probabilities_stat(probabilities):
''' Print probabilities mean '''
for class_idx in range (0, probabilities.shape[1]):
print("class #" + str(class_idx) + ":\t" + str(np.mean(probabilities[:, class_idx])))
return
def get_labels_from_probabilities(probabilities):
''' Get labels from probabilities '''
labels = np.zeros(shape=probabilities.shape)
for idx in range(0, probabilities.shape[0]):
labels[idx, np.argmax(probabilities[idx, :])] = 1
return labels
def normalization(train_set, valid_set, test_set):
''' Normalize sets '''
''' Count mean and max '''
train_mean = np.mean(train_set)
train_max = np.max(train_set)
''' Normalize '''
train_set = (train_set - train_mean) / train_max
valid_set = (valid_set - train_mean) / train_max
test_set = (test_set - train_mean) / train_max
''' OUTPUT '''
return (train_set, valid_set, test_set)
# +
def unite_learning_data(tuple_of_assemblies, tuple_of_probabilities):
return (np.concatenate(tuple_of_assemblies, axis=0), np.concatenate(tuple_of_probabilities, axis=0))
def permutate_learning_data(assemblies, probabilities):
permutation_indecies = np.random.permutation(assemblies.shape[0])
assemblies = assemblies[permutation_indecies, :, :, :]
probabilities = probabilities[permutation_indecies, :]
return (assemblies, probabilities)
def get_learning_data_of_experiment(patch_size, offset, number_of_neighbours_per_side, number_of_experiment):
array_of_assemblies = []
array_of_probabilities = []
# Warning: do not forget write dir path
input_directory_path = "./learning_data/size_" + str(patch_size) + "_offset_" + str(offset) + "_left_" + str(number_of_neighbours_per_side) + "_right_" + str(number_of_neighbours_per_side) + "/"
part_idx = 1
try:
while True:
input_learning_data_file_path = input_directory_path + str(number_of_experiment) + "_" + str(part_idx) + ".pickle"
assemblies, probabilities = load_learning_data(input_learning_data_file_path)
print(input_learning_data_file_path)
array_of_assemblies = array_of_assemblies + [assemblies]
array_of_probabilities = array_of_probabilities + [probabilities]
part_idx += 1
except:
print("\n")
if len(array_of_assemblies) == 0 or len(array_of_probabilities) == 0:
raise Exception
return unite_learning_data(tuple(array_of_assemblies), tuple(array_of_probabilities))
# -
class mygenerator:
def __init__(self, assemblies, probabilities, batch_size, min_stretch_k, max_stretch_k):
''' Check input '''
if assemblies.shape[0] == 0:
raise Exception
if assemblies.shape[0] != probabilities.shape[0]:
raise Exception
if batch_size <= 0:
raise Exception
if min_stretch_k <= 0:
raise Exception
if max_stretch_k <= 0:
raise Exception
''' Predefined parameters '''
self._NOISE_AMPLITUDE = 0.005
''' Init '''
self._idx = 0
self._assemblies = assemblies
self._probabilities = probabilities
self._batch_size = batch_size
self._min_stretch_k = min_stretch_k
self._max_stretch_k = max_stretch_k
self._permutation = np.random.permutation(self._assemblies.shape[0])
def generate(self):
result_assemblies = np.random.rand(
self._batch_size,
self._assemblies.shape[1],
self._assemblies.shape[2],
self._assemblies.shape[3]
)
result_assemblies = self._NOISE_AMPLITUDE * result_assemblies
result_probabilities = np.ndarray(
shape=(self._batch_size, self._probabilities.shape[1])
)
for batch_idx in range(0, self._batch_size):
''' Init probability '''
result_probabilities[batch_idx, :] = self._probabilities[self._permutation[self._idx], :]
''' Count stretch k '''
stretch_k_x = random.uniform(self._min_stretch_k, self._max_stretch_k)
stretch_k_y = random.uniform(self._min_stretch_k, self._max_stretch_k)
''' '''
for patch_idx in range(0, self._assemblies.shape[3]):
zoom_patch = ndimage.zoom(self._assemblies[self._permutation[self._idx], :, :, patch_idx], [stretch_k_x, stretch_k_y])
offset_x = (self._assemblies.shape[1] - zoom_patch.shape[0])/2
offset_y = (self._assemblies.shape[2] - zoom_patch.shape[1])/2
from_x = 0
from_y = 0
to_x = 0
to_y = 0
if zoom_patch.shape[0] <= self._assemblies.shape[1]:
from_x = 0
to_x = zoom_patch.shape[0]
else:
from_x = -offset_x
to_x = from_x + self._assemblies.shape[1]
if zoom_patch.shape[1] <= self._assemblies.shape[2]:
from_y = 0
to_y = zoom_patch.shape[1]
else:
from_y = -offset_y
to_y = from_y + self._assemblies.shape[2]
''' Init assembly '''
result_assemblies[
batch_idx,
from_x + offset_x : to_x + offset_x,
from_y + offset_y : to_y + offset_y,
patch_idx
] = zoom_patch[
from_x : to_x,
from_y : to_y
]
die = random.choice([0, 1])
if die == 1:
''' T '''
result_assemblies[batch_idx, :, :, :] = np.flipud(result_assemblies[batch_idx, :, :, :])
if self._idx == (self._assemblies.shape[0] - 1):
self._permutation = np.random.permutation(self._assemblies.shape[0])
self._idx = (self._idx + 1) % self._assemblies.shape[0]
return (result_assemblies, result_probabilities)
# +
''' Load data '''
patch_size = 64
offset = 2
number_of_neighbours_per_side = 1
print("\n")
assemblies1, probabilities1 = get_learning_data_of_experiment(patch_size, offset, number_of_neighbours_per_side, 1)
assemblies2, probabilities2 = get_learning_data_of_experiment(patch_size, offset, number_of_neighbours_per_side, 2)
assemblies3, probabilities3 = get_learning_data_of_experiment(patch_size, offset, number_of_neighbours_per_side, 3)
assemblies4, probabilities4 = get_learning_data_of_experiment(patch_size, offset, number_of_neighbours_per_side, 4)
a = (assemblies1, assemblies2, assemblies3, assemblies4)
p = (probabilities1, probabilities2, probabilities3, probabilities4)
tr_a = []
tr_p = []
v_a = []
v_p = []
te_a = []
te_p = []
for i in range(0, 4):
assemblies = a[i]
probabilities = p[i]
dev1 = int(0.75*assemblies.shape[0])
dev2 = dev1 + int(0.125*assemblies.shape[0])
tr_a = tr_a + [assemblies[0:dev1,:,:,:]]
tr_p = tr_p + [probabilities[0:dev1,:]]
v_a = v_a + [assemblies[dev1:dev2,:,:,:]]
v_p = v_p + [probabilities[dev1:dev2,:]]
te_a = te_a + [assemblies[dev2:assemblies.shape[0],:,:,:]]
te_p = te_p + [probabilities[dev2:assemblies.shape[0],:]]
train_assemblies, train_probabilities = unite_learning_data(tr_a, tr_p)
valid_assemblies, valid_probabilities = unite_learning_data(v_a, v_p)
test_assemblies, test_probabilities = unite_learning_data(te_a, te_p)
permutation_indecies = np.random.permutation(train_assemblies.shape[0])
train_assemblies = train_assemblies[permutation_indecies, :, :, :]
train_probabilities = train_probabilities[permutation_indecies, :]
train_assemblies = np.clip(train_assemblies, 0, 0.1)
permutation_indecies = np.random.permutation(valid_assemblies.shape[0])
valid_assemblies = valid_assemblies[permutation_indecies, :, :, :]
valid_probabilities = valid_probabilities[permutation_indecies, :]
valid_assemblies = np.clip(valid_assemblies, 0, 0.1)
permutation_indecies = np.random.permutation(test_assemblies.shape[0])
test_assemblies = test_assemblies[permutation_indecies, :, :, :]
test_probabilities = test_probabilities[permutation_indecies, :]
test_assemblies = np.clip(test_assemblies, 0, 0.1)
train_set_size = train_assemblies.shape[0]
valid_set_size = valid_assemblies.shape[0]
test_set_size = test_assemblies.shape[0]
all_sets_size = train_set_size + valid_set_size + test_set_size
print("\nSets")
print("-------")
print("All set size:\t" + str(all_sets_size))
print("Train size:\t" + str(train_set_size))
print("Valid set size:\t" + str(valid_set_size))
print("Test set size:\t" + str(test_set_size))
print("-------\n")
print("Train probabilities stat")
print("-------")
print_probabilities_stat(train_probabilities)
print("-------\n")
print("Valid probabilities stat")
print("-------")
print_probabilities_stat(valid_probabilities)
print("-------\n")
print("Test probabilities stat")
print("-------")
print_probabilities_stat(test_probabilities)
print("-------\n")
''' Normalization '''
train_assemblies, valid_assemblies, test_assemblies = normalization(
train_set = train_assemblies,
valid_set = valid_assemblies,
test_set = test_assemblies
)
''' Balancing '''
''' Get train probabilities stat '''
train_probabilities_stat = np.zeros(shape=(train_probabilities.shape[1]), dtype=train_probabilities.dtype)
for class_idx in range(0, train_probabilities.shape[1]):
train_probabilities_stat[class_idx] = np.mean(train_probabilities[:, class_idx])
''' Balance '''
for train_idx in range(0, train_probabilities.shape[0]):
class_idx = np.argmax(train_probabilities[train_idx, :])
train_probabilities[train_idx, :] = np.zeros(shape=(train_probabilities.shape[1]))
train_probabilities[train_idx, class_idx] = 1/(1 + train_probabilities_stat[class_idx])
# +
assembly_size = number_of_neighbours_per_side * 2 + 1
conv_nb_filters = None
if assembly_size < 8:
conv_nb_filters = [8, 16, 32, 64]
if assembly_size >= 8 and assembly_size < 16:
conv_nb_filters = [16, 32, 48, 64]
if assembly_size >= 16 and assembly_size < 32:
conv_nb_filters = [32, 44, 54, 64]
if assembly_size >= 32:
raise Exception
input_ = Input((64, 64, assembly_size), name='input')
conv2d_1 = Convolution2D(nb_filter=conv_nb_filters[0],
nb_row=4,
nb_col=4,
activation='relu',
border_mode='same',
name='conv2d_1')(input_)
maxpool2d_1 = MaxPooling2D(pool_size=(2, 2), name='max_pool2d_1')(conv2d_1)
dropout_1 = Dropout(0.1, name='dropout_1')(maxpool2d_1)
conv2d_2 = Convolution2D(nb_filter=conv_nb_filters[1],
nb_row=4,
nb_col=4,
activation='relu',
border_mode='same',
name='conv_2d_2')(dropout_1)
maxpool2d_2 = MaxPooling2D(pool_size=(2, 2), name='max_pool_2d_2')(conv2d_2)
dropout_2 = Dropout(0.1, name='dropout_2')(maxpool2d_2)
conv2d_3 = Convolution2D(nb_filter=conv_nb_filters[2],
nb_row=4,
nb_col=4,
activation='relu',
border_mode='same',
name='conv_2d_3')(dropout_2)
maxpool2d_3 = MaxPooling2D(pool_size=(2, 2), name='max_pool_2d_3')(conv2d_3)
dropout_3 = Dropout(0.1, name='dropout_3')(maxpool2d_3)
conv2d_4 = Convolution2D(nb_filter=conv_nb_filters[3],
nb_row=4,
nb_col=4,
activation='relu',
border_mode='same',
name='conv_2d_4')(dropout_3)
maxpool2d_4 = MaxPooling2D(pool_size=(2, 2), name='max_pool_2d_4')(conv2d_4)
reshape_1 = Reshape((4 * 4 * 64,), name='reshape_1')(maxpool2d_4)
dense_1 = Dense(output_dim=64, activation='relu', name='dense_1')(reshape_1)
dense_2 = Dense(output_dim=5, name='dense_2')(dense_1)
activation_1 = Activation('softmax', name='activation_1')(dense_2)
model = Model(input=input_, output=activation_1)
model_for_tsne = Model(input=input_, output=dense_2)
# +
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.summary()
# +
model_for_tsne.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
model_for_tsne.summary()
# +
import threading
class threadsafe_iter:
"""Takes an iterator/generator and makes it thread-safe by
serializing call to the `next` method of given iterator/generator.
"""
def __init__(self, it):
self.it = it
self.lock = threading.Lock()
def __iter__(self):
return self
def next(self):
with self.lock:
return self.it.next()
def threadsafe_generator(f):
"""A decorator that takes a generator function and makes it thread-safe.
"""
def g(*a, **kw):
return threadsafe_iter(f(*a, **kw))
return g
my_generator = mygenerator(train_assemblies, train_probabilities, 32, 0.9, 1.1)
@threadsafe_generator
def generate():
while True:
[data,classes] = my_generator.generate()
yield (data, classes)
# -
sess = K.get_session()
sess.run(tf.initialize_all_variables())
''' Train model '''
model.fit_generator(
generator=generate(),
samples_per_epoch=6400,
nb_epoch = 10,
validation_data=(valid_assemblies,valid_probabilities))
def print_confusion_stat(probabilities, predicted_probabilities):
''' Check input '''
if probabilities.shape[0] != predicted_probabilities.shape[0]:
raise Exception
if probabilities.shape[1] != predicted_probabilities.shape[1]:
raise Exception
''' Go to classes '''
for class_idx in range(0, probabilities.shape[1]):
print("class #" + str(class_idx))
TP = 0.0
TN = 0.0
FP = 0.0
FN = 0.0
for idx in range(0, probabilities.shape[0]):
if np.argmax(predicted_probabilities[idx, :]) == class_idx:
if np.argmax(probabilities[idx, :]) == class_idx:
TP += 1.0
else:
FP += 1.0
else:
if np.argmax(probabilities[idx, :]) == class_idx:
FN += 1.0
else:
TN += 1.0
if TP + FN != 0:
TPR = TP/(TP + FN)
else:
print("NO TPR, FNR, BM")
TPR = 0
if TN + FP != 0:
TNR = TN/(TN + FP)
else:
print("NO TNR, FPR, BM")
TNR = 0
if TP + FP != 0:
PPV = TP/(TP + FP)
else:
print("NO PPV, FDR, MK")
PPV = 0
if TN + FN != 0:
NPV = TN/(TN + FN)
else:
print("NO NPV, FOR, MK")
NPV = 0
FNR = 1 - TPR
FPR = 1 - TNR
FDR = 1 - PPV
FOR = 1 - NPV
ACC = (TP + TN)/(TP + FN + TN + FP)
F_1 = (2*TP)/(2*TP + FP + FN)
if TP + FP != 0 and TP + FN != 0 and TN + FP != 0 and TN + FN != 0:
MCC = (TP*TN - FP*FN)/(math.sqrt((TP + FP) * (TP + FN) * (TN + FP) * (TN + FN)))
else:
print("NO MCC")
MCC = 0
BM = TPR + TNR - 1
MK = PPV + NPV - 1
print("-------")
print("TP:\t" + str(TP))
print("TN:\t" + str(TN))
print("FP:\t" + str(FP))
print("FN:\t" + str(FN))
print("TPR:\t" + str(TPR))
print("TNR:\t" + str(TNR))
print("PPV:\t" + str(PPV))
print("NPV:\t" + str(NPV))
print("FNR:\t" + str(FNR))
print("FPR:\t" + str(FPR))
print("FDR:\t" + str(FDR))
print("FOR:\t" + str(FOR))
print("ACC:\t" + str(ACC))
print("F_1:\t" + str(F_1))
print("MCC:\t" + str(MCC))
print("BM:\t" + str(BM))
print("MK:\t" + str(MK))
print("-------\n")
return
def show_stat_table(probabilities, predicted_probabilities):
''' Check input '''
if probabilities.shape[0] != predicted_probabilities.shape[0]:
raise Exception
if probabilities.shape[1] != predicted_probabilities.shape[1]:
raise Exception
class_stat_table = np.zeros(shape=(probabilities.shape[1], probabilities.shape[1]), dtype=np.float32)
for idx in range(0, probabilities.shape[0]):
first_idx = np.argmax(probabilities[idx, :])
second_idx = np.argmax(predicted_probabilities[idx, :])
class_stat_table[first_idx, second_idx] += 1
for first_idx in range(0, probabilities.shape[1]):
class_stat_table[first_idx, :] = class_stat_table[first_idx, :] / np.sum(class_stat_table[first_idx, :])
return class_stat_table
# +
''' Test model '''
test_score = model.evaluate(test_assemblies, test_probabilities, batch_size=16)
''' Print model's test stat '''
print("\n\nTest model stat")
print("-------")
print("Loss:\t\t" + str(test_score[0]))
print("Accuracy:\t" + str(test_score[1]))
print("-------\n")
test_predicted_probabilities = model.predict(test_assemblies)
print_confusion_stat(
probabilities=test_probabilities,
predicted_probabilities = test_predicted_probabilities
)
cst = show_stat_table(
probabilities=test_probabilities,
predicted_probabilities = test_predicted_probabilities
)
# +
import itertools
from sklearn import svm, datasets
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Purples):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, cm[i, j],
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.figure()
plot_confusion_matrix(np.round(cst,2), classes=['UNDEF', 'LOCOM', 'IMMOB', 'REAR', 'GROOM'],
title='Confusion matrix, without normalization')
plt.show()
# -
RS = 0
tsne = TSNE(n_components=2, n_iter=1000, random_state=RS, metric='euclidean')
X_t = tsne.fit_transform(model_for_tsne.predict(test_assemblies))
y = np.zeros(shape=(test_probabilities.shape[0]))
for ind in range(0, test_probabilities.shape[0]):
y[ind] = np.argmax(test_probabilities[ind,:])
plt.figure()
#plt.scatter(X_t[np.where(y == 0), 0],X_t[np.where(y == 0), 1],marker='o', color='r',linewidth='1', label='undef')
plt.scatter(X_t[np.where(y == 1), 0],X_t[np.where(y == 1), 1],marker='o', color='g',linewidth='0.05', label='locom')
plt.scatter(X_t[np.where(y == 2), 0],X_t[np.where(y == 2), 1],marker='o', color='b',linewidth='0.05', label='immob')
plt.scatter(X_t[np.where(y == 3), 0],X_t[np.where(y == 3), 1],marker='o', color='c',linewidth='0.05', label='rear')
plt.scatter(X_t[np.where(y == 4), 0],X_t[np.where(y == 4), 1],marker='o', color='m',linewidth='0.05', label='groom')
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.show()
| network.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="JS6jP7tpCrLr"
# ## Digit Recognizer
# Learn computer vision fundamentals with the famous MNIST data
#
# https://www.kaggle.com/c/digit-recognizer
#
# ### Competition Description
# MNIST ("Modified National Institute of Standards and Technology") is the de facto “hello world” dataset of computer vision. Since its release in 1999, this classic dataset of handwritten images has served as the basis for benchmarking classification algorithms. As new machine learning techniques emerge, MNIST remains a reliable resource for researchers and learners alike.
#
# In this competition, your goal is to correctly identify digits from a dataset of tens of thousands of handwritten images. We’ve curated a set of tutorial-style kernels which cover everything from regression to neural networks. We encourage you to experiment with different algorithms to learn first-hand what works well and how techniques compare.
#
# ### Practice Skills
# Computer vision fundamentals including simple neural networks
#
# Classification methods such as SVM and K-nearest neighbors
#
# #### Acknowledgements
# More details about the dataset, including algorithms that have been tried on it and their levels of success, can be found at http://yann.lecun.com/exdb/mnist/index.html. The dataset is made available under a Creative Commons Attribution-Share Alike 3.0 license.
# + colab={} colab_type="code" id="LgcAAVmXgBPm"
import pandas as pd
import math
import numpy as np
import matplotlib.pyplot as plt, matplotlib.image as mpimg
from sklearn.model_selection import train_test_split
import tensorflow as tf
# %matplotlib inline
# + colab={} colab_type="code" id="EITzxKZRgBPy"
from tensorflow import keras
from tensorflow.keras import models
from tensorflow.keras import losses,optimizers,metrics
from tensorflow.keras import layers
# + [markdown] colab_type="text" id="4PihLjAggBP1"
# ## Data Preparation
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="cra90fdEsmsI" outputId="1b5e320d-9f3f-4571-a3b3-88564bcd7270"
from google.colab import drive
drive.mount('/content/gdrive')
# + colab={} colab_type="code" id="K58DeQH2gBP2"
labeled_images = pd.read_csv('gdrive/My Drive/dataML/train.csv')
#labeled_images = pd.read_csv('train.csv')
images = labeled_images.iloc[:,1:]
labels = labeled_images.iloc[:,:1]
train_images, test_images,train_labels, test_labels = train_test_split(images, labels, test_size=0.01)
# + colab={"base_uri": "https://localhost:8080/", "height": 85} colab_type="code" id="At9soAW0qOs4" outputId="949d98c0-2251-4adc-e1a1-c7aa980cca2e"
print(train_images.shape)
print(train_labels.shape)
print(test_images.shape)
print(test_labels.shape)
# + [markdown] colab_type="text" id="f87cEn1_xfqI"
# ## Keras
# + [markdown] colab_type="text" id="UfkBgDM1gBP5"
# #### convert the data to the right type
# + colab={} colab_type="code" id="-X3Uu-o_gBP6"
x_train = train_images.values.reshape(train_images.shape[0],28,28,1)/255
x_test = test_images.values.reshape(test_images.shape[0],28,28,1)/255
y_train = train_labels.values
y_test = test_labels.values
# + colab={"base_uri": "https://localhost:8080/", "height": 282} colab_type="code" id="7vboLIlsgBP9" outputId="a2428797-704d-4688-ffe1-74f9f691c258"
plt.imshow(x_train[12].squeeze())
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="o5tcP2-KVHu-" outputId="4ea52c49-87c4-42f3-b7f4-0167b9e1eeb8"
x_train.shape
# + [markdown] colab_type="text" id="b6I1adl5gBQD"
# #### convert the data to the right type
# + [markdown] colab_type="text" id="xeMZR7ntgBQI"
# ### Convert class vectors to binary class matrices - this is for use in the categorical_crossentropy loss below
# + colab={} colab_type="code" id="8e2qjHPLgBQJ"
y_train = keras.utils.to_categorical(y_train)
y_test = keras.utils.to_categorical(y_test)
# + [markdown] colab_type="text" id="LT78eGccgBQN"
# ### Creating the Model
#
# + colab={"base_uri": "https://localhost:8080/", "height": 374} colab_type="code" id="AMOStnPCFWSI" outputId="d2235c6f-f03c-4603-ce84-aaa7f06bc680"
model = models.Sequential()
model.add(layers.Conv2D(filters = 12, kernel_size=(6,6), strides=(1,1),
padding = 'same', activation = 'relu',
input_shape = (28,28,1)))
model.add(layers.Conv2D(filters = 24,kernel_size=(5,5),strides=(2,2),
padding = 'same', activation = 'relu'))
model.add(layers.Conv2D(filters = 48,kernel_size=(4,4),strides=(2,2),
padding = 'same', activation = 'relu'))
model.add(layers.Flatten())
model.add(layers.Dense(units=200, activation='relu'))
model.add(layers.Dropout(0.25))
model.add(layers.Dense(units=10, activation='softmax'))
model.summary()
# + colab={} colab_type="code" id="BLdOV8vYgBQS"
adam = keras.optimizers.Adam(lr = 0.0001)
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=adam,
metrics=['accuracy'])
# + colab={"base_uri": "https://localhost:8080/", "height": 3434} colab_type="code" id="pxI5Gi4xgBQV" outputId="9c9b10c7-19bd-4e74-eef1-e1cb482e3256"
H = model.fit(x_train, y_train,
batch_size=100,
epochs=100,
verbose=1,
validation_data=(x_test, y_test))
# + colab={} colab_type="code" id="CHDGYz_Mki4G"
H.history.keys()
# + colab={"base_uri": "https://localhost:8080/", "height": 282} colab_type="code" id="G1s_nCBWk-91" outputId="bbb8e973-b569-4f0a-d9ae-def57593bf0a"
plt.plot(H.history['acc'])
plt.plot(H.history['val_acc'],'r')
# + colab={"base_uri": "https://localhost:8080/", "height": 282} colab_type="code" id="-AGE67k2lGxL" outputId="3919dfd2-25e5-4ff6-c9ed-6cd31a6b31e1"
plt.plot(H.history['loss'])
plt.plot(H.history['val_loss'],'r')
# + [markdown] colab_type="text" id="4RIGVSojmy-R"
# ### Predict
# + colab={} colab_type="code" id="tnigE74rm39a"
unlabeled_images_test = pd.read_csv('gdrive/My Drive/dataML/test.csv')
#unlabeled_images_test = pd.read_csv('test.csv')
# + colab={} colab_type="code" id="3Fotf1KrpXVE"
X_unlabeled = unlabeled_images_test.values.reshape(unlabeled_images_test.shape[0],28,28,1)/255
# + colab={} colab_type="code" id="jeRnCHzdutQ0"
y_pred = model.predict(X_unlabeled)
# + colab={} colab_type="code" id="3M_fePteu-3X"
y_label = np.argmax(y_pred, axis=1)
# + [markdown] colab_type="text" id="RX5PuSUmvRri"
# ### Save csv
# + colab={} colab_type="code" id="zU5Q1fSRvVbn"
imageId = np.arange(1,y_label.shape[0]+1).tolist()
prediction_pd = pd.DataFrame({'ImageId':imageId, 'Label':y_label})
prediction_pd.to_csv('gdrive/My Drive/dataML/out_cnn08.csv',sep = ',', index = False)
# + [markdown] colab_type="text" id="_qetEX7AgBQY"
# # Tensorflow
# + [markdown] colab_type="text" id="4keUL7d0gBQZ"
# ### Helper functions for batch learning
# + colab={} colab_type="code" id="czdbjPfcgBQd"
def one_hot_encode(vec, vals=10):
    '''
    One-hot encode the 10 possible class labels
    '''
    n = len(vec)
    out = np.zeros((n, vals))
    out[range(n), vec] = 1
    return out
# + colab={} colab_type="code" id="Afc2XmK2gBQh"
class CifarHelper():

    def __init__(self):
        self.i = 0
        # Initialize some empty variables for later on
        self.training_images = None
        self.training_labels = None
        self.test_images = None
        self.test_labels = None

    def set_up_images(self):
        print("Setting Up Training Images and Labels")
        # Convert the training images to a numpy matrix
        self.training_images = train_images.as_matrix()
        train_len = self.training_images.shape[0]
        # Reshapes and normalizes training images
        self.training_images = self.training_images.reshape(train_len,28,28,1)/255
        # One hot Encodes the training labels (e.g. [0,0,0,1,0,0,0,0,0,0])
        self.training_labels = one_hot_encode(train_labels.as_matrix().reshape(-1), 10)
        print("Setting Up Test Images and Labels")
        # Convert the test images to a numpy matrix
        self.test_images = test_images.as_matrix()
        test_len = self.test_images.shape[0]
        # Reshapes and normalizes test images
        self.test_images = self.test_images.reshape(test_len,28,28,1)/255
        # One hot Encodes the test labels (e.g. [0,0,0,1,0,0,0,0,0,0])
        self.test_labels = one_hot_encode(test_labels.as_matrix().reshape(-1), 10)

    def next_batch(self, batch_size):
        # Return the next `batch_size` training examples, wrapping around at the end of the set
        x = self.training_images[self.i:self.i+batch_size]
        y = self.training_labels[self.i:self.i+batch_size]
        self.i = (self.i + batch_size) % len(self.training_images)
        return x, y
# + colab={"base_uri": "https://localhost:8080/", "height": 51} colab_type="code" id="1qAiPT1-gBQp" outputId="579a01c2-db46-4e51-8279-eb66d5844e35"
# Before your tf.Session, run these two lines
ch = CifarHelper()
ch.set_up_images()
# During your session to grab the next batch use this line
# (Just like we did for mnist.train.next_batch)
# batch = ch.next_batch(100)
# + [markdown] colab_type="text" id="QUkUv8sKgBQw"
# ## Creating the Model
#
#
# + [markdown] colab_type="text" id="l1XSMzIpgBQx"
# ** Create 2 placeholders, X and Y_true. Their shapes should be: **
#
# * X shape = [None,28,28,1]
# * Y_true shape = [None,10]
#
# ** Create three more placeholders: **
# * lr: learning rate
# * step: for learning rate decay
# * drop_rate: dropout rate
# + colab={} colab_type="code" id="8Y_4DDQvgBQz"
X = tf.placeholder(tf.float32, shape=[None,28,28,1])
Y_true = tf.placeholder(tf.float32, shape=[None,10])
lr = tf.placeholder(tf.float32)
step = tf.placeholder(tf.int32)
drop_rate = tf.placeholder(tf.float32)
# + [markdown] colab_type="text" id="5egQVsS4NhxW"
# ### Initialize weights and biases
# The neural network structure used in this example:
#
# X [batch, 28, 28, 1]
#
# Layer 1: conv. layer 6x6x1=>12, stride 1        W1 [6, 6, 1, 12] , B1 [12]
#
# Y1 [batch, 28, 28, 12]
#
# Layer 2: conv. layer 5x5x12=>24, stride 2       W2 [5, 5, 12, 24] , B2 [24]
#
# Y2 [batch, 14, 14, 24]
#
# Layer 3: conv. layer 4x4x24=>48, stride 2       W3 [4, 4, 24, 48] , B3 [48]
#
# Y3 [batch, 7, 7, 48] => reshaped to YY [batch, 7*7*48]
#
# Layer 4: fully connected layer (relu+dropout),  W4 [7*7*48, 200]  B4 [200]
#
# Y4 [batch, 200]
#
# Layer 5: fully connected layer (softmax)        W5 [200, 10]      B5 [10]
#
# Y [batch, 10]
# + colab={} colab_type="code" id="UD6cWs60OnGn"
# three convolutional layers with their channel counts, and a
# fully connected layer (the last layer has 10 softmax neurons)
K = 12 # first convolutional layer output depth
L = 24 # second convolutional layer output depth
M = 48 # third convolutional layer
N = 200 # fully connected layer
# + colab={} colab_type="code" id="xiVQdlDsN1W-"
W1 = tf.Variable(tf.truncated_normal([6,6,1,K], stddev=0.1))
B1 = tf.Variable(tf.ones([K])/10)
W2 = tf.Variable(tf.truncated_normal([5,5,K,L], stddev=0.1))
B2 = tf.Variable(tf.ones([L])/10)
W3 = tf.Variable(tf.truncated_normal([4,4,L,M], stddev=0.1))
B3 = tf.Variable(tf.ones([M])/10)
W4 = tf.Variable(tf.truncated_normal([7*7*M,N], stddev=0.1))
B4 = tf.Variable(tf.ones([N])/10)
W5 = tf.Variable(tf.truncated_normal([N, 10], stddev=0.1))
B5 = tf.Variable(tf.zeros([10]))
# + [markdown] colab_type="text" id="u4RDJq8pO_Og"
# ### layers
# + colab={} colab_type="code" id="DM5_O098O4Di"
Y1 = tf.nn.relu(tf.nn.conv2d(X, W1, strides = [1,1,1,1], padding='SAME') + B1)
Y2 = tf.nn.relu(tf.nn.conv2d(Y1,W2, strides = [1,2,2,1], padding='SAME') + B2)
Y3 = tf.nn.relu(tf.nn.conv2d(Y2,W3, strides = [1,2,2,1], padding='SAME') + B3)
# flatten the inputs for the fully connected layers
YY3 = tf.reshape(Y3, shape = (-1,7*7*M))
Y4 = tf.nn.relu(tf.matmul(YY3, W4) + B4)
Y4d = tf.nn.dropout(Y4,rate = drop_rate)
Ylogits = tf.matmul(Y4d, W5) + B5
Y = tf.nn.softmax(Ylogits)
# + [markdown] colab_type="text" id="tT4TvNz-gBRI"
# ### Loss Function
# + colab={"base_uri": "https://localhost:8080/", "height": 88} colab_type="code" id="wkqQ5GurgBRJ" outputId="9145b309-a37b-4f3a-9b69-0b99eec766fc"
#cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(labels=Y_true,logits=Ylogits))
cross_entropy = tf.losses.softmax_cross_entropy(onehot_labels = Y_true, logits = Ylogits)
#cross_entropy = -tf.reduce_mean(y_true * tf.log(Ylogits)) * 1000.0
# + [markdown] colab_type="text" id="jmnEUVWxgBRM"
# ### Optimizer
# + colab={} colab_type="code" id="MoyIlzCagBRN"
lr = 0.0001 + tf.train.exponential_decay(learning_rate = 0.003,
global_step = step,
decay_steps = 2000,
decay_rate = 1/math.e
)
#optimizer = tf.train.GradientDescentOptimizer(learning_rate = 0.005)
optimizer = tf.train.AdamOptimizer(learning_rate=lr)
train = optimizer.minimize(cross_entropy)
# + [markdown] colab_type="text" id="47rAzeVNgBRP"
# ### Initialize Variables
# + colab={} colab_type="code" id="7WG1AszIgBRQ"
init = tf.global_variables_initializer()
# + [markdown] colab_type="text" id="rseSYLjggBRb"
# ### Saving the Model
# + colab={} colab_type="code" id="OwdUgOG4gBRc"
saver = tf.train.Saver()
# + [markdown] colab_type="text" id="xGdHTpE0gBRe"
# ## Graph Session
#
# ** Perform the training and test printouts in a TF session and run your model! **
# + colab={"base_uri": "https://localhost:8080/", "height": 10237} colab_type="code" id="pQHEYbyZgBRf" outputId="cc718ec8-8ef0-42c4-b230-898ac94d6e09"
history = {'acc_train':list(),'acc_val':list(),
'loss_train':list(),'loss_val':list(),
'learning_rate':list()}
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for i in range(20000):
batch = ch.next_batch(100)
sess.run(train, feed_dict={X: batch[0], Y_true: batch[1], step: i, drop_rate: 0.25})
# PRINT OUT A MESSAGE EVERY 100 STEPS
if i%100 == 0:
# Test the Train Model
            # Dropout is disabled (drop_rate = 0) when evaluating accuracy and loss
            feed_dict_train = {X: batch[0], Y_true: batch[1], drop_rate: 0}
            feed_dict_val = {X:ch.test_images, Y_true:ch.test_labels, drop_rate: 0}
matches = tf.equal(tf.argmax(Y,1),tf.argmax(Y_true,1))
acc = tf.reduce_mean(tf.cast(matches,tf.float32))
history['acc_train'].append(sess.run(acc, feed_dict = feed_dict_train))
history['acc_val'].append(sess.run(acc, feed_dict = feed_dict_val))
history['loss_train'].append(sess.run(cross_entropy, feed_dict = feed_dict_train))
history['loss_val'].append(sess.run(cross_entropy, feed_dict = feed_dict_val))
history['learning_rate'].append(sess.run(lr, feed_dict = {step: i}))
print("Iteration {}:\tlearning_rate={:.6f},\tloss_train={:.6f},\tloss_val={:.6f},\tacc_train={:.6f},\tacc_val={:.6f}"
.format(i,history['learning_rate'][-1],
history['loss_train'][-1],
history['loss_val'][-1],
history['acc_train'][-1],
history['acc_val'][-1]))
print('\n')
saver.save(sess,'models_saving/my_model.ckpt')
# + colab={"base_uri": "https://localhost:8080/", "height": 282} colab_type="code" id="s5rC6lSIAR-P" outputId="1c59d87f-9158-4a6d-8f25-151c9890c407"
plt.plot(history['acc_train'],'b')
plt.plot(history['acc_val'],'r')
# + colab={"base_uri": "https://localhost:8080/", "height": 282} colab_type="code" id="tkToA3CjBxJe" outputId="b0930a09-31e6-4eeb-b099-035d807097dc"
plt.plot(history['loss_train'],'b')
plt.plot(history['loss_val'],'r')
# + colab={"base_uri": "https://localhost:8080/", "height": 282} colab_type="code" id="NJPdR4IFb-uc" outputId="4409d520-8b2b-42fa-a9ac-61875d25abdf"
plt.plot(history['learning_rate'])
# + [markdown] colab_type="text" id="X9uiFJ-ogBRi"
# ### Loading a Model
# + colab={} colab_type="code" id="Doot7_0fY5tF"
unlabeled_images_test = pd.read_csv('gdrive/My Drive/dataML/test.csv')
#unlabeled_images_test = pd.read_csv('test.csv')
X_unlabeled = unlabeled_images_test.values.reshape(unlabeled_images_test.shape[0],28,28,1)/255
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="dbtjpi8JgBRj" outputId="ee65fc89-ef7e-4b99-b2bb-4d24d4e518c3"
with tf.Session() as sess:
# Restore the model
saver.restore(sess, 'models_saving/my_model.ckpt')
    # Fetch back the softmax probabilities and convert them to class labels
    probs = sess.run(Y, feed_dict={X: X_unlabeled, drop_rate: 0})
    label = np.argmax(probs, axis=1)
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="MZ1xS7oJZYkV" outputId="6d483b80-a575-49e8-ce54-db9c91392100"
label
# + [markdown] colab_type="text" id="ojRKNc76gBRx"
# ## Predict the unlabeled test sets using the model
# + colab={} colab_type="code" id="yQrGiou8gBRy"
imageId = np.arange(1,label.shape[0]+1).tolist()
# + colab={} colab_type="code" id="9Oj02pY3gBR1"
prediction_pd = pd.DataFrame({'ImageId':imageId, 'Label':label})
# + colab={} colab_type="code" id="VLjMgeXEgBR4"
prediction_pd.to_csv('gdrive/My Drive/dataML/out_cnn4.csv',sep = ',', index = False)
# + colab={} colab_type="code" id="ivz_HMLnZ684"
| 4_Clasification_DigitRecognizer/.ipynb_checkpoints/3_DL_Multi_Layer_CNN_for_DigitRecognizer-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Copyright 2021 NVIDIA Corporation. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
# -
# # Model Deployment with Merlin Inference API
#
# ## Overview
#
# In the previous notebook we explained and showed how to preprocess data with NVTabular and train a TF MLP model using the NVTabular KerasSequenceLoader. We learned how to save a workflow, a trained TF model, and the ensemble model. In this notebook, we will show example request scripts sent to the Triton Inference Server
# - to transform new/streaming data with the NVTabular library
# - to generate prediction results for new data from the trained model
# - to deploy the end-to-end pipeline.
# ## Getting Started
# +
# External dependencies
import os
from time import time
import warnings
import numpy as np  # needed for the np.int64 / np.int32 dtypes used below
from tritonclient.utils import *
import tritonclient.grpc as grpcclient
import nvtabular
import cudf
from timeit import default_timer as timer
from datetime import timedelta
# -
# We define our base directory containing the raw and processed data.
MODEL_PATH = os.environ.get('MODEL_BASE_DIR', '/model/models/')
INPUT_DATA_DIR = os.environ.get('INPUT_DATA_DIR', '/model/data/')
# Let's deactivate the warnings before sending requests.
import warnings
warnings.filterwarnings('ignore')
# ## Verify Triton Is Running Correctly
# Use Triton’s ready endpoint to verify that the server and the models are ready for inference. Replace `localhost` with your host ip address.
import tritonhttpclient
try:
triton_client = tritonhttpclient.InferenceServerClient(url="localhost:8000", verbose=True)
print("client created.")
except Exception as e:
print("channel creation failed: " + str(e))
triton_client.is_server_live()
# The HTTP request returns status 200 if Triton is ready and non-200 if it is not ready.
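# As an optional extra check (a minimal sketch reusing the `triton_client` created above; the model
# names are the ones deployed in this example), you can also ask Triton whether each individual
# model is ready before sending inference requests:
print(triton_client.is_server_ready())
for model_name in ["movielens_nvt", "movielens_tf", "movielens"]:
    print(model_name, triton_client.is_model_ready(model_name))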
# ## Send request to Triton IS to transform raw dataset
# Now we send a request to the running Triton Inference Server using our raw validation set in parquet format. This request loads the saved NVTabular workflow and then transforms the new dataset samples.
# +
# read in the workflow (to get input/output schema to call triton with)
MODEL_NAME_NVT = os.environ.get('MODEL_NAME_NVT', 'movielens_nvt')
MODEL_PATH_NVT = os.path.join(MODEL_PATH, MODEL_NAME_NVT)
workflow = nvtabular.Workflow.load(os.path.join(MODEL_PATH_NVT, "1/workflow"))
# read in a batch of data to get transforms for
batch = cudf.read_parquet(os.path.join(INPUT_DATA_DIR, "valid.parquet"), num_rows=3)[workflow.column_group.input_column_names]
print("raw data:\n", batch, "\n")
# convert the batch to a triton inputs
columns = [(col, batch[col][0:3]) for col in workflow.column_group.input_column_names]
inputs = []
col_dtypes = [np.int64, np.int64]
for i, (name, col) in enumerate(columns):
d = col.values_host.astype(col_dtypes[i])
d = d.reshape(len(d),1)
inputs.append(grpcclient.InferInput(name, d.shape, np_to_triton_dtype(col_dtypes[i])))
inputs[i].set_data_from_numpy(d)
# placeholder variables for the output
outputs = [grpcclient.InferRequestedOutput(name) for name in workflow.column_group.columns]
# make the request
# replace <localhost> with your host ip address.
with grpcclient.InferenceServerClient("localhost:8001") as client:
response = client.infer(MODEL_NAME_NVT, inputs, request_id="1",outputs=outputs)
# convert output from triton back to a nvt dataframe
output = cudf.DataFrame({col: response.as_numpy(col).T[0] for col in workflow.column_group.columns})
print("transformed data:\n", output)
# -
# ## Running the MovieLens rating classification example
# A minimal model repository for a TensorFlow SavedModel model is:
# ```
# <model-repository-path>/<model-name>/
# config.pbtxt
# 1/
# model.savedmodel/
# <saved-model files>
# ```
#
# Let's check out our model repository layout. You can install `tree` library with `apt-get install tree`, and then run `tree /model/models/` to print out the model repository layout as below:
# ```
# /model/models/
# ├── movielens
# │ ├── 1
# │ └── config.pbtxt
# ├── movielens_nvt
# │ ├── 1
# │ │ ├── model.py
# │ │ └── workflow
# │ │ ├── categories
# │ │ │ ├── unique.movieId.parquet
# │ │ │ └── unique.userId.parquet
# │ │ ├── metadata.json
# │ │ └── workflow.pkl
# │ └── config.pbtxt
# └── movielens_tf
# ├── 1
# │ └── model.savedmodel
# │ ├── assets
# │ ├── saved_model.pb
# │ └── variables
# │ ├── variables.data-00000-of-00001
# │ └── variables.index
# └── config.pbtxt
# ```
# You can see that we have a config.pbtxt file. Each model in a model repository must include a model configuration that provides required and optional information about the model. Typically, this configuration is provided in a `config.pbtxt` file specified as [ModelConfig protobuf](https://github.com/triton-inference-server/server/blob/r20.12/src/core/model_config.proto).
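# As an illustrative aside (assuming the `tritonhttpclient` client created above exposes
# `get_model_config`, as recent tritonclient releases do), you can fetch a deployed model's
# configuration back from Triton instead of opening `config.pbtxt` by hand:
ensemble_config = triton_client.get_model_config("movielens")
print(ensemble_config)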
# +
# read in a batch of data to get transforms for
batch = cudf.read_parquet(os.path.join(INPUT_DATA_DIR, "valid/*.parquet"), num_rows=3)
batch = batch[batch.columns][0:3]
batch = batch.drop(columns=["rating"])
print("input data:\n", batch, "\n")
inputs = []
for i, col in enumerate(batch.columns):
d = batch[col].values_host.astype(np.int32)
d = d.reshape(len(d),1)
inputs.append(grpcclient.InferInput(col, d.shape, np_to_triton_dtype(np.int32)))
inputs[i].set_data_from_numpy(d)
outputs = [grpcclient.InferRequestedOutput("dense_3")]
MODEL_NAME_TF = os.environ.get('MODEL_NAME_TF', 'movielens_tf')
with grpcclient.InferenceServerClient("localhost:8001") as client:
response = client.infer(MODEL_NAME_TF, inputs, request_id="1",outputs=outputs)
print("predicted sigmoid result:\n", response.as_numpy('dense_3'))
# -
# ## END-2-END INFERENCE PIPELINE
# In the request example below, we show that we can feed a raw, unprocessed parquet file and obtain the final prediction results coming from the last layer of the TF model that we built in the `movilens_TF` notebook. The output we get is a sigmoid value.
# We use `InferInput` to describe the tensors we'll be sending to the server. It needs the name of the input, the shape of the tensor we'll be passing to the server, and its datatype.
# ## Send request to Triton IS to generate prediction results for raw dataset
# +
# read in the workflow (to get input/output schema to call triton with)
batch_size = 3
batch = cudf.read_parquet(os.path.join(INPUT_DATA_DIR, "valid.parquet"), num_rows=batch_size, columns=['userId', 'movieId'])
batch = batch[batch.columns][0:batch_size]
print("raw data:\n", batch, "\n")
# convert the batch to a triton inputs
inputs = []
col_names = ['userId', 'movieId']
col_dtypes = [np.int64, np.int64]
for i, col in enumerate(batch.columns):
d = batch[col].values_host.astype(col_dtypes[i])
d = d.reshape(len(d),1)
inputs.append(grpcclient.InferInput(col_names[i], d.shape, np_to_triton_dtype(col_dtypes[i])))
inputs[i].set_data_from_numpy(d)
# placeholder variables for the output
outputs = [grpcclient.InferRequestedOutput("dense_3")]
MODEL_NAME_ENSEMBLE = os.environ.get('MODEL_NAME_ENSEMBLE', 'movielens')
# build a client to connect to our server.
# This InferenceServerClient object is what we'll be using to talk to Triton.
# make the request with tritonclient.grpc.InferInput object
with grpcclient.InferenceServerClient("localhost:8001") as client:
response = client.infer(MODEL_NAME_ENSEMBLE, inputs, request_id="1",outputs=outputs)
print("predicted sigmoid result:\n", response.as_numpy('dense_3'))
# -
# Let's send request for a larger batch size and measure the total run time and throughput.
# +
# read in the workflow (to get input/output schema to call triton with)
batch_size = 64
batch = cudf.read_parquet(os.path.join(INPUT_DATA_DIR, "valid.parquet"), num_rows=batch_size, columns=['userId', 'movieId'])
batch = batch[batch.columns][0:batch_size]
print("raw data:\n", batch, "\n")
start = time()
# convert the batch to a triton inputs
inputs = []
col_names = ['userId', 'movieId']
col_dtypes = [np.int64, np.int64]
for i, col in enumerate(batch.columns):
d = batch[col].values_host.astype(col_dtypes[i])
d = d.reshape(len(d),1)
inputs.append(grpcclient.InferInput(col_names[i], d.shape, np_to_triton_dtype(col_dtypes[i])))
inputs[i].set_data_from_numpy(d)
# placeholder variables for the output
outputs = [grpcclient.InferRequestedOutput("dense_3")]
MODEL_NAME_ENSEMBLE = os.environ.get('MODEL_NAME_ENSEMBLE', 'movielens')
# build a client to connect to our server.
# This InferenceServerClient object is what we'll be using to talk to Triton.
# make the request with tritonclient.grpc.InferInput object
with grpcclient.InferenceServerClient("localhost:8001") as client:
response = client.infer(MODEL_NAME_ENSEMBLE, inputs, request_id="1",outputs=outputs)
t_final = time() - start
print("predicted sigmoid result:\n", response.as_numpy('dense_3'), "\n")
print(f"run_time(sec): {t_final} - rows: {batch_size} - inference_thru: {batch_size / t_final}")
| examples/inference_triton/inference-TF/movielens-inference.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # cadCAD Tutorials: The Robot and the Marbles, part 4
# In parts [1](../robot-marbles-part-1/robot-marbles-part-1.ipynb) and [2](../robot-marbles-part-2/robot-marbles-part-2.ipynb) we introduced the 'language' in which a system must be described in order for it to be interpretable by cadCAD and some of the basic concepts of the library:
# * State Variables
# * Timestep
# * State Update Functions
# * Partial State Update Blocks
# * Simulation Configuration Parameters
# * Policies
#
# In [part 3](../robot-marbles-part-3/robot-marbles-part-3.ipynb) we covered how to describe the presence of asynchronous subsystems within the system being modeled in cadCAD.
#
# So far, all the examples referred to deterministic systems: no matter how many times you ran one of those simulations, the results would be the same. However, real systems are more commonly non-deterministic, and modelling them as deterministic can sometimes be an oversimplification.
#
# In this notebook, we'll cover cadCAD's support for modelling non-deterministic systems and Monte Carlo simulations. But first let's copy the base configuration with which we ended Part 3. Here's the description of that system:
#
# __The robot and the marbles__
# * Picture a box (`box_A`) with ten marbles in it; an empty box (`box_B`) next to the first one; and __two__ robot arms capable of taking a marble from any one of the boxes and dropping it into the other one.
# * The robots are programmed to take one marble at a time from the box containing the largest number of marbles and drop it in the other box. They repeat that process until the boxes contain an equal number of marbles.
# * The robots act __asynchronously__; robot 1 acts once every two timesteps, and robot 2 acts once every three timesteps.
# +
# %%capture
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# List of all the state variables in the system and their initial values
initial_conditions = {
'box_A': 10, # as per the description of the example, box_A starts out with 10 marbles in it
'box_B': 0 # as per the description of the example, box_B starts out empty
}
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# Settings of general simulation parameters, unrelated to the system itself
# `T` is a range with the number of discrete units of time the simulation will run for;
# `N` is the number of times the simulation will be run (Monte Carlo runs)
# In this example, we'll run the simulation once (N=1) and its duration will be of 10 timesteps
# We'll cover the `M` key in a future article. For now, let's leave it empty
simulation_parameters = {
'T': range(10),
'N': 1,
'M': {}
}
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# We specify the robot arm's logic in a Policy Function
def robot_arm(params, step, sL, s):
add_to_A = 0
if (s['box_A'] > s['box_B']):
add_to_A = -1
elif (s['box_A'] < s['box_B']):
add_to_A = 1
return({'add_to_A': add_to_A, 'add_to_B': -add_to_A})
robots_periods = [2,3] # Robot 1 acts once every 2 timesteps; Robot 2 acts once every 3 timesteps
def robot_arm_1(params, step, sL, s):
_robotId = 1
if s['timestep']%robots_periods[_robotId-1]==0: # on timesteps that are multiple of 2, Robot 1 acts
return robot_arm(params, step, sL, s)
else:
return({'add_to_A': 0, 'add_to_B': 0}) # for all other timesteps, Robot 1 doesn't interfere with the system
def robot_arm_2(params, step, sL, s):
_robotId = 2
if s['timestep']%robots_periods[_robotId-1]==0: # on timesteps that are multiple of 3, Robot 2 acts
return robot_arm(params, step, sL, s)
else:
return({'add_to_A': 0, 'add_to_B': 0}) # for all other timesteps, Robot 2 doesn't interfere with the system
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# We make the state update functions less "intelligent",
# ie. they simply add the number of marbles specified in _input
# (which, per the policy function definition, may be negative)
def increment_A(params, step, sL, s, _input):
y = 'box_A'
x = s['box_A'] + _input['add_to_A']
return (y, x)
def increment_B(params, step, sL, s, _input):
y = 'box_B'
x = s['box_B'] + _input['add_to_B']
return (y, x)
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# In the Partial State Update Blocks,
# the user specifies if state update functions will be run in series or in parallel
# and the policy functions that will be evaluated in that block
partial_state_update_blocks = [
{
'policies': { # The following policy functions will be evaluated and their returns will be passed to the state update functions
'robot_arm_1': robot_arm_1,
'robot_arm_2': robot_arm_2
},
'variables': { # The following state variables will be updated simultaneously
'box_A': increment_A,
'box_B': increment_B
}
}
]
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
from cadCAD.configuration import Configuration
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# The configurations above are then packaged into a `Configuration` object
config = Configuration(initial_state=initial_conditions, #dict containing variable names and initial values
partial_state_update_blocks=partial_state_update_blocks, #dict containing state update functions
sim_config=simulation_parameters #dict containing simulation parameters
)
from cadCAD.engine import ExecutionMode, ExecutionContext, Executor
exec_mode = ExecutionMode()
exec_context = ExecutionContext(exec_mode.single_proc)
executor = Executor(exec_context, [config]) # Pass the configuration object inside an array
raw_result, tensor = executor.execute() # The `execute()` method returns a tuple; its first elements contains the raw results
# %matplotlib inline
import pandas as pd
df = pd.DataFrame(raw_result)
# -
df.plot('timestep', ['box_A', 'box_B'], grid=True,
xticks=list(df['timestep'].drop_duplicates()),
colormap = 'RdYlGn',
yticks=list(range(1+(df['box_A']+df['box_B']).max())));
# # Non-determinism
# Non-deterministic systems exhibit different behaviors on different runs for the same input. The order of heads and tails in a series of 3 coin tosses, for example, is non-deterministic (see the short sketch below).
#
# Our robots and marbles system is currently modelled as a deterministic system, meaning that every time we run the simulation: none of the robots act on timestep 1; robot 1 acts on timestep 2; robot 2 acts on timestep 3; and so on.
#
# If, however, we were to define that at every timestep each robot acts with a probability P, then we would have a non-deterministic (probabilistic) system. Let's make the following changes to our system:
# * Robot 1: instead of acting once every two timesteps, there's a 50% chance it will act in any given timestep
# * Robot 2: instead of acting once every three timesteps, there's a 33.33% chance it will act in any given timestep
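# +
# A minimal, illustrative sketch of non-determinism: running the same three-coin-toss
# "simulation" twice generally produces different sequences.
import numpy as np
for demo_run in range(2):
    tosses = (np.random.rand(3) < 0.5).astype(int)  # 1 = heads, 0 = tails
    print(f"demo run {demo_run + 1}: {tosses}")
# -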
# +
# %%capture
from numpy.random import rand
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# We specify each of the robots logic in a Policy Function
robots_probabilities = [0.5,1/3] # Robot 1 acts with a 50% probability; Robot 2, 33.33%
def robot_arm_1(params, step, sL, s):
_robotId = 1
if rand()<robots_probabilities[_robotId-1]: # draw a random number between 0 and 1; if it's smaller than the robot's parameter, it acts
return robot_arm(params, step, sL, s)
else:
return({'add_to_A': 0, 'add_to_B': 0}) # otherwise, the robot doesn't interfere with the system
def robot_arm_2(params, step, sL, s):
_robotId = 2
if rand()<robots_probabilities[_robotId-1]: # draw a random number between 0 and 1; if it's smaller than the robot's parameter, it acts
return robot_arm(params, step, sL, s)
else:
return({'add_to_A': 0, 'add_to_B': 0}) # otherwise, the robot doesn't interfere with the system
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# In the Partial State Update Blocks,
# the user specifies if state update functions will be run in series or in parallel
# and the policy functions that will be evaluated in that block
partial_state_update_blocks = [
{
'policies': { # The following policy functions will be evaluated and their returns will be passed to the state update functions
'robot_arm_1': robot_arm_1,
'robot_arm_2': robot_arm_2
},
'variables': { # The following state variables will be updated simultaneously
'box_A': increment_A,
'box_B': increment_B
}
}
]
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# -
# If we run the simulation with those configurations, the system is unlikely to exhibit the same behavior as it did in its deterministic version.
# +
# %%capture
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# The configurations above are then packaged into a `Configuration` object
config = Configuration(initial_state=initial_conditions, #dict containing variable names and initial values
partial_state_update_blocks=partial_state_update_blocks, #dict containing state update functions
sim_config=simulation_parameters #dict containing simulation parameters
)
exec_mode = ExecutionMode()
exec_context = ExecutionContext(exec_mode.single_proc)
executor = Executor(exec_context, [config]) # Pass the configuration object inside an array
raw_result, tensor = executor.execute() # The `execute()` method returns a tuple; its first elements contains the raw results
df = pd.DataFrame(raw_result)
# -
df.plot('timestep', ['box_A', 'box_B'], grid=True,
xticks=list(df['timestep'].drop_duplicates()),
colormap = 'RdYlGn',
yticks=list(range(1+(df['box_A']+df['box_B']).max())));
# And if we run it again, it returns yet another result.
# +
# %%capture
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# The configurations above are then packaged into a `Configuration` object
config = Configuration(initial_state=initial_conditions, #dict containing variable names and initial values
partial_state_update_blocks=partial_state_update_blocks, #dict containing state update functions
sim_config=simulation_parameters #dict containing simulation parameters
)
exec_mode = ExecutionMode()
exec_context = ExecutionContext(exec_mode.single_proc)
executor = Executor(exec_context, [config]) # Pass the configuration object inside an array
raw_result, tensor = executor.execute() # The `execute()` method returns a tuple; its first elements contains the raw results
df = pd.DataFrame(raw_result)
# -
df.plot('timestep', ['box_A', 'box_B'], grid=True,
xticks=list(df['timestep'].drop_duplicates()),
colormap = 'RdYlGn',
yticks=list(range(1+(df['box_A']+df['box_B']).max())));
# In order to take advantage of cadCAD's Monte Carlo simulation features, we should modify the configuration file so as to define the number of times we want the same simulation to be run. This is done in the `N` key of the `simulation_parameters` dict.
# +
# %%capture
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# Settings of general simulation parameters, unrelated to the system itself
# `T` is a range with the number of discrete units of time the simulation will run for;
# `N` is the number of times the simulation will be run (Monte Carlo runs)
# In this example, we'll run the same simulation 50 times (N=50), each with a duration of 10 timesteps
# We'll cover the `M` key in a future article. For now, let's leave it empty
simulation_parameters = {
'T': range(10),
'N': 50, # We'll run the same simulation 50 times; the random events in each simulation are independent
'M': {}
}
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# The configurations above are then packaged into a `Configuration` object
config = Configuration(initial_state=initial_conditions, #dict containing variable names and initial values
partial_state_update_blocks=partial_state_update_blocks, #dict containing state update functions
sim_config=simulation_parameters #dict containing simulation parameters
)
exec_mode = ExecutionMode()
exec_context = ExecutionContext(exec_mode.single_proc)
executor = Executor(exec_context, [config]) # Pass the configuration object inside an array
raw_result, tensor = executor.execute() # The `execute()` method returns a tuple; its first elements contains the raw results
df = pd.DataFrame(raw_result)
# -
from IPython.display import display
tmp_rows = pd.options.display.max_rows
pd.options.display.max_rows = 10
display(df.set_index(['run', 'timestep', 'substep']))
pd.options.display.max_rows = tmp_rows
# Plotting two of those runs allows us to see the different behaviors over time.
df[df['run']==1].plot('timestep', ['box_A', 'box_B'], grid=True,
xticks=list(df['timestep'].drop_duplicates()),
yticks=list(range(11)),
colormap = 'RdYlGn');
df[df['run']==9].plot('timestep', ['box_A', 'box_B'], grid=True,
xticks=list(df['timestep'].drop_duplicates()),
yticks=list(range(11)),
colormap = 'RdYlGn');
# If we plot all those runs onto a single chart, we can see every possible trajectory for the number of marbles in each box.
ax = None
for i in range(simulation_parameters['N']):
ax = df[df['run']==i+1].plot('timestep', ['box_A', 'box_B'],
grid=True,
xticks=list(df['timestep'].drop_duplicates()),
yticks=list(range(1+max(df['box_A'].max(),df['box_B'].max()))),
legend = (ax == None),
colormap = 'RdYlGn',
ax = ax
)
# For some analyses, it might make sense to look at the data in aggregate. Take the median for example:
dfmc_median = df.groupby(['timestep', 'substep']).median().reset_index()
dfmc_median.plot('timestep', ['box_A', 'box_B'],
grid=True,
xticks=list(dfmc_median['timestep'].drop_duplicates()),
yticks=list(range(int(1+max(dfmc_median['box_A'].max(),dfmc_median['box_B'].max())))),
colormap = 'RdYlGn'
)
# Or look at edge cases
# +
max_final_A = df[df['timestep']==df['timestep'].max()]['box_A'].max()
# max_final_A
slow_runs = df[(df['timestep']==df['timestep'].max()) &
(df['box_A']==max_final_A)]['run']
slow_runs = list(slow_runs)
slow_runs
ax = None
for i in slow_runs:
ax = df[df['run']==i].plot('timestep', ['box_A', 'box_B'],
grid=True,
xticks=list(df['timestep'].drop_duplicates()),
yticks=list(range(1+max(df['box_A'].max(),df['box_B'].max()))),
legend = (ax == None),
colormap = 'RdYlGn',
ax = ax
)
# -
# We invite the reader to fork this code and explore other questions that might be interesting to look at, for example (a sketch answering the first one follows below):
# * How often does box B momentarily contain more marbles than box A?
# * What's the frequency distribution of the time to reach equilibrium?
# * What's the probability distribution of the waiting times of each one of the robots?
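# +
# A minimal sketch answering the first question above, assuming `df` still holds the 50-run
# Monte Carlo results generated earlier in this notebook:
# count the runs in which box_B ever (momentarily) holds more marbles than box_A
overshoot_runs = df[df['box_B'] > df['box_A']]['run'].nunique()
print(f"box_B exceeded box_A at some point in {overshoot_runs} of {simulation_parameters['N']} runs")
# -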
| tutorials/robot-marbles-part-4/robot-marbles-part-4.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Data Augmentation
#
# Augment the training set only; the validation and test sets must be left untouched
# + tags=[]
# %matplotlib inline
# %config InlineBackend.figure_format = 'svg'
import warnings
warnings.filterwarnings('ignore')
import re, jieba, requests, json, time
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# + tags=[]
train = pd.read_csv('../data/TrainSet.csv')
train['content'] = train['content'].astype(str)
print('train has {} rows and {} columns'.format(train.shape[0], train.shape[1]))
train.head()
# +
def translate(word):
    '''Call the Youdao Dictionary web API'''
    url = 'http://fanyi.youdao.com/translate?smartresult=dict&smartresult=rule&smartresult=ugc&sessionFrom=null'
    # Request parameters; `i` is the text to be translated
    key = {
        'type': "AUTO",
        'i': word,
        "doctype": "json",
        "version": "2.1",
        "keyfrom": "fanyi.web",
        "ue": "UTF-8",
        "action": "FY_BY_CLICKBUTTON",
        "typoResult": "true"
    }
    # The `key` dict is the payload sent to the Youdao server
    response = requests.post(url, data=key)
    # Check whether the server responded successfully
    if response.status_code == 200:
        # Return the response body
        return response.text
    else:
        print("Youdao Dictionary call failed")
        # Return None on failure
        return None
def get_main(word):
    '''If the input contains more than one sentence, concatenate the translated segments into one string'''
    list_trans = translate(word)
    result = json.loads(list_trans)
    stri = ''
    for i in result['translateResult'][0]:
        stri += i['tgt']
    return stri
def translate_reverse(x):
    '''Back-translation: translate the text, then translate the result back'''
    x = get_main(x)
    time.sleep(0.1) # sleep for 0.1 seconds
    return get_main(x)
# -
# Remove full-width spaces, newlines and other whitespace
train['content'] = train['content'].apply(lambda x: x.replace('\u3000','') \
.replace('\n','') \
.replace('\r','') \
.strip())
train['content'].head(15)
train.to_csv('../data/TrainSet0.csv',encoding='gb18030', index=False)
translate_reverse('在这场比赛中,您将可以访问结合了基因表达(特征)和细胞存活率(标签)数据的独特数据集。')
# + tags=[]
train0 = train.copy()
start = time.time()
train0['content'] = train0['content'].apply(lambda x: translate_reverse(x))
print('This program costs {:.2f} seconds'.format(time.time()-start))
train0.head()
# -
sample = train.iloc[17,0]
sample
#type(sample)
train0.iloc[17,0] # show an example after augmentation
# ## Data augmentation plan
#
# The original data has 330 rows
# + Use back-translation on whole sentences to double the data
# + Use back-translation through Japanese on whole sentences to double the data again
# + After word segmentation, back-translate some randomly chosen words
# + After word segmentation, randomly replace some words with noise (see the sketch after this list)
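# +
# A minimal sketch of the last idea above (random noise replacement after jieba segmentation).
# The noise tokens and the replacement probability here are illustrative assumptions.
import random
def replace_with_noise(sentence, p=0.1, noise_tokens=('的', '了', '是')):
    # segment with jieba, then replace each token with a random noise token with probability p
    tokens = list(jieba.cut(sentence))
    tokens = [random.choice(noise_tokens) if random.random() < p else t for t in tokens]
    return ''.join(tokens)
# example: replace_with_noise(train['content'].iloc[0])
# -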
# +
train0.to_csv('../data/augment.csv',encoding='gb18030', index=False)
get_main('训练')
# +
import urllib.request
import urllib.parse
import json
# Wait for the user to type in the text to translate
#content = input('请输入需要翻译的单词:')
content = '疯狂的海鲜'
# Youdao Translate URL
url = 'http://fanyi.youdao.com/translate?smartresult=dict&smartresult=rule&smartresult=ugc&sessionFrom=null'
# Data sent to the Youdao server
data = {}
# The text to translate
data['i'] = content
# The fields below use the values captured earlier from the browser's network traffic
data['from'] = 'zh'
data['to'] = 'fr'
data['smartresult'] = 'dict'
data['client'] = 'fanyideskweb'
data['salt'] = '<PASSWORD>'
data['sign'] = '997742c66698b25b43a3a5030e1c2ff2'
data['doctype'] = 'json'
data['version'] = '2.1'
data['keyfrom'] = 'fanyi.web'
data['action'] = 'FY_BY_CL1CKBUTTON'
data['typoResult'] = 'true'
data['type'] = 'ZH_CN2fr'
# URL-encode the payload
data = urllib.parse.urlencode(data).encode('utf-8')
# Create a Request object with the url and data; note that this is a POST request
request = urllib.request.Request(url=url, data=data, method='POST')
# Send the request
response = urllib.request.urlopen(request)
# Read the returned data
result_str = response.read().decode('utf-8')
# Parse the returned JSON string into a dict
result_dict = json.loads(result_str)
# Print the translation result
print('Translation result: %s' % result_dict)
# +
def translate_ja(word):
    '''Call the Youdao Dictionary web API, translating into Japanese'''
    url = 'http://fanyi.youdao.com/translate?smartresult=dict&smartresult=rule&smartresult=ugc&sessionFrom=null'
    # Request parameters; `i` is the text to be translated
    key = {
        'i': word,
        "doctype": "json",
        "version": "2.1",
        "keyfrom": "fanyi.web",
        "ue": "UTF-8",
        "action": "FY_BY_CLICKBUTTON",
        # "from": "zh",
        "to": "ja" ,
        "typoResult": "true",
        "type":"ZH_CN2JA"
    }
    # The `key` dict is the payload sent to the Youdao server
    response = requests.post(url, data=key)
    # Check whether the server responded successfully
    if response.status_code == 200:
        # Return the response body
        return response.text
    else:
        print("Youdao Dictionary call failed")
        # Return None on failure
        return None
def get_main_japan(word):
    '''If the input contains more than one sentence, concatenate the translated segments into one string'''
    list_trans = translate_ja(word)
    result = json.loads(list_trans)
    stri = ''
    for i in result['translateResult'][0]:
        stri += i['tgt']
    return stri
def translate_reverse_ja(x):
    '''Back-translation via Japanese: translate into Japanese, then translate back'''
    x = get_main_japan(x)
    time.sleep(0.1) # sleep for 0.1 seconds
    return get_main(x)
#get_main_japan('训练')
translate_reverse_ja('训练模型')
# -
translate_reverse_ja('国家实行网络安全等级保护制度。网络运营者应当按照网络安全等级保护制度的要求,履行下列安全保护义务,保障网络免受干扰、破坏或者未经授权的访问,防止网络数据泄露或者被窃取、篡改:(一)制定内部安全管理制度和操作规程,确定网络安全负责人,落实网络安全保护责任;(二)采取防范计算机病毒和网络攻击、网络侵入等危害网络安全行为的技术措施;(三)采取监测、记录网络运行状态、网络安全事件的技术措施,并按照规定留存相关的网络日志不少于六个月;(四)采取数据分类、重要数据备份和加密等措施;(五)法律、行政法规规定的其他义务。')
get_main_japan('国家实行网络安全等级保护制度。网络运营者应当按照网络安全等级保护制度的要求,履行下列安全保护义务,保障网络免受干扰、破坏或者未经授权的访问,防止网络数据泄露或者被窃取、篡改')
# +
from googletrans import Translator
translator = Translator(service_urls=["translate.google.cn"])
# Call the translate function, specifying the source language code (zh-CN here) and the target language code (fr here)
# -
result = translator.translate('您好,我是大树。你个大傻逼。想不想喝星巴克。', src='zh-CN', dest='fr')
print(result.text)
| .ipynb_checkpoints/data_augmentations-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: fisi2028
# language: python
# name: fisi2028
# ---
# +
import pylab
import numpy as np
np.seterr(all='raise')
np.random.seed(13)
import scipy as sp
import pandas as pd # data tables -> databases
import seaborn as sns; sns.set() # plotting (built on matplotlib); extends matplotlib's functionality
import matplotlib as mpl
import matplotlib.pyplot as plt
# mpl.rc('text', usetex=True)
# %matplotlib inline
# from tqdm import tqdm
from sklearn.datasets import fetch_california_housing
housing = fetch_california_housing()
# -
# ### Skip this for now
def calc_error(res,e,names=[]):
tmp_i = np.zeros(len(res.x))
etol = e * max(1, abs(res.fun))
for i in range(len(res.x)):
tmp_i[i] = 1.0
hess_inv_i = res.hess_inv(tmp_i)[i]
uncertainty_i = np.sqrt(etol * hess_inv_i)
tmp_i[i] = 0.0
if len(names) > 0:
print('{0} = {1:12.4e} ± {2:.1e}'.format(names[i], res.x[i], uncertainty_i))
else:
print('x^{0} = {1:12.4e} ± {2:.1e}'.format(i, res.x[i], uncertainty_i))
# + [markdown] tags=[]
# # Review some well-known datasets
# [SciKit Learn](https://scikit-learn.org/stable/datasets/index.html)
# <!-- X,Y = datasets.load_boston(return_X_y=True) -->
# -
A = pd.DataFrame(housing['data'], columns=housing['feature_names'])
b = pd.DataFrame(housing['target'], columns=["MedHouseVal"]) # the California housing target is the median house value
A.insert(0,'c0',1)
# -36.9419202
# -3.69419202 -> numerical precision
A
# $$\mathbb{A}\cdot\vec{x} + c_0 \cdot \mathbb{1} = \vec{b} $$
#
# Do we have to use least squares?
#
# $$MSE=\min_{\vec{x}} || \vec{b} - \mathbb{A}\cdot\vec{x} ||^2$$
# This is the cost function.
#
#
#
# $$\vec{x} = (\mathbb{A}^T\cdot\mathbb{A})^{-1}\cdot\mathbb{A}^T\vec{b} $$
#
# (For a single feature this reduces to the familiar slope, y = rho()*sigma_y/sigma_x.)
#
# $$X = x_1+x_2+x_3+\dots$$
# The intention is a linear combination of the features,
# $$MedInc\times x_1+HouseAge\times x_2+AveRooms\times x_3+\dots = b$$
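#
# A short derivation of the normal equation above: setting the gradient of the cost
# $L(\vec{x}) = ||\vec{b} - \mathbb{A}\vec{x}||^2$ to zero gives
#
# $$\nabla_{\vec{x}} L = -2\,\mathbb{A}^T(\vec{b} - \mathbb{A}\vec{x}) = 0
# \;\Rightarrow\; \mathbb{A}^T\mathbb{A}\,\vec{x} = \mathbb{A}^T\vec{b}
# \;\Rightarrow\; \vec{x} = (\mathbb{A}^T\mathbb{A})^{-1}\mathbb{A}^T\vec{b},$$
#
# which is exactly what the next cell computes with `np.linalg.inv`.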
Anumpy = np.matrix(A.to_numpy(dtype=np.float64))
bnumpy = np.matrix(b.to_numpy(dtype=np.float64))
# # %%timeit
X_opt = np.linalg.inv(Anumpy.T*Anumpy)*Anumpy.T*bnumpy
print(X_opt.T)
np.array(Anumpy*X_opt).flatten()
sns.scatterplot(np.array(Anumpy*X_opt).flatten(),np.array(bnumpy).flatten())
plt.plot([0,10],[0,10])
predicciones = np.array(Anumpy*X_opt).flatten()
observaciones = np.array(bnumpy).flatten()
bbarrita = observaciones.mean()
r2 = 1-(np.linalg.norm(predicciones - observaciones)**2)/(np.linalg.norm(observaciones - bbarrita)**2)
r2
# # Let's implement our own optimization algorithm (<NAME>)
# The idea is to find $X_{opt}$ iteratively, without inverting the matrix
# + tags=[]
# Define the cost function (it must return a scalar)
def L(x,A,b):
    # (b_pred - b_obs)^2
    # m is the number of data points
    # n is the number of parameters (here 9: the 8 features plus the intercept column)
    m,n = A.shape
    X = np.matrix(x).T
    DeltaB=(A*X-b) # b-hat minus b
    return (DeltaB.T*DeltaB)[0,0]/m # 1x1 matrix
def dLdx(x,A,b): # gradient of L
    # (b_pred - b_obs)^2
    # m is the number of data points
    # n is the number of parameters
    m,n = A.shape
    X = np.matrix(x).T
    DeltaB=(A*X-b)
    return (2/m)*np.array(A.T*DeltaB).flatten() # as a flat vector [1,2,3] rather than a column matrix [[1],[2],[3]]
# -
# find an iterative way to update X so that the cost function L keeps decreasing
x = np.zeros(Anumpy.shape[1])
epsilon = 2e-8
cost = []
N = 100
for it in range(N):
x = x - epsilon*dLdx(x,Anumpy,bnumpy)
cost.append(L(x,Anumpy,bnumpy))
sns.scatterplot(np.arange(N)+1,cost)
# plt.xscale('log')
# plt.yscale('log')
L(np.array(X_opt).flatten(),Anumpy,bnumpy)
np.array(X_opt).flatten()
# # %%timeit
res1 = sp.optimize.minimize(fun=L,x0=np.zeros(Anumpy.shape[1]), args = (Anumpy,bnumpy), tol=1e-10)
res1['x']
e=1e-10
# # %%timeit
res2 = sp.optimize.minimize(fun=L,jac=dLdx, x0=np.zeros(Anumpy.shape[1]), args = (Anumpy,bnumpy), method='Newton-CG', tol=e)
res2['x']
# # %%timeit
res3 = sp.optimize.minimize(fun=L,jac=dLdx, x0=np.zeros(Anumpy.shape[1]), args = (Anumpy,bnumpy), method='L-BFGS-B', tol=e)
res3['x']
L(res1['x'],Anumpy,bnumpy)
L(res2.x,Anumpy,bnumpy)
L(res3.x,Anumpy,bnumpy)
L(np.array(X_opt).flatten(),Anumpy,bnumpy)
calc_error(res3,e,names=[])
# # Fitting non-linear functions
print("Our experimental sandbox!")
a = 3/2
b = 4
c = -3
N=100
x = np.linspace(0.2,10,N) # from 0.2 to 10, N equally spaced points
y = a/(1+np.exp(c*(x-b))) # theoretical model -> a physical, mathematical, biological... principle
x1 = x + np.random.exponential(0.01,size=N)
y1 = y + np.random.normal(0,0.05,size=N) # Gaussian noise
x2 = x + np.random.normal(0,0.03,size=N)
y2 = y + np.random.exponential(0.05,size=N) # exponential noise
sns.scatterplot(x,y)
sns.scatterplot(x,y2)
sns.scatterplot(x1,y1)
sns.scatterplot(x2,y2)
# # How do we fit the function?
# $$f(x) = a\frac{1}{1+e^{b(x-c)}}$$
# how do we find a, b and c?
# what should our cost function be?
def f(parametros,x): # parametros is a 3-component vector
    return parametros[0]/(1+np.exp(parametros[1]*(x-parametros[2])))
def Lfit(parametros,x,y): # MSE cost function (not the best choice!)
    # L = average over all points of (f(a,b,c;x)-y)^2
    # parametros = np.array([a,b,c])
    deltaY=f(parametros,x) - y
    return np.dot(deltaY,deltaY)/len(y)
print("Fit for the first set: x1,y1")
e=1e-8
# ansatz: a=1,b=0,c=0
res1 = sp.optimize.minimize(fun=Lfit, x0=np.array([1,0,0]), args = (x1,y1), method='L-BFGS-B', tol=e)
res1
y1_pred = f(res1.x,x1)
sns.scatterplot(x1,y1)
plt.plot(x1,y1_pred,color='r')
r2 = 1-np.sum((y1_pred-y1)**2)/np.sum((y1-y1.mean())**2)
r2
calc_error(res1,e,names=['a','b','c'])
# # How do we fit this alternative function?
# $$f(x) = a + b\tanh(c(x-d))$$
# how do we find a, b, c and d?
# what should our cost function be?
def ftilde(parametros,x):
    return parametros[0]+parametros[1]*np.tanh(parametros[2]*(x-parametros[3]))
def Lfit(parametros,x,y):
    # L = average over all points of (f(a,b,c,d;x)-y)^2
    # parametros = np.array([a,b,c,d])
    deltaY=ftilde(parametros,x) - y
    return np.dot(deltaY,deltaY)/len(y)
print("Fit for the first set: x1,y1")
e=1e-8
# ansatz: a=0,b=1,c=0,d=0
res1 = sp.optimize.minimize(fun=Lfit, x0=np.array([0,1,0,0]), args = (x1,y1), method='L-BFGS-B', tol=e)
res1
y1_pred = ftilde(res1.x,x1)
sns.scatterplot(x1,y1)
plt.plot(x1,y1_pred,color='r')
r2 = 1-np.sum((y1_pred-y1)**2)/np.sum((y1-y1.mean())**2)
r2
calc_error(res1,e,names=['a','b','c','d'])
| ejercicios/semana7-8/sistemas-lineales.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os
import argparse
from plantcv import plantcv as pcv
import analyze_area
import json
import cv2
from matplotlib import pyplot as plt
import numpy as np
import csv
import cluster_jordan
import math
import show_objects
# set to "plot" to see image at every step (takes longer)
pcv.params.debug = None
# -
map_dict = {}
with open("./key_G.csv") as map_file:
    map_reader = csv.reader(map_file, delimiter=",")
    for img, plate, exp in map_reader:
        img = img[-12:]
        map_dict[img] = (plate, exp)
imagelink = "./G_plates/IMG_8513.JPG"
area_dict = {}
photo = imagelink[-12:]
area_dict['plate'] = map_dict[photo.upper()]
label = f"exp_{area_dict['plate'][1]}_plate_{area_dict['plate'][0]}_{photo[0:8]}"
print(label)
# +
image = cv2.imread(imagelink)
shape = np.shape(image)
if shape[0] > shape[1]:
image = pcv.rotate(image, 90, crop = None)
image = image[0:,:-400]
plt.imshow(image)
# -
#image = cv2.imread(f"{imagelink}")
thresh = pcv.rgb2gray_hsv(rgb_img=image, channel="h")
thresh = pcv.gaussian_blur(img=thresh, ksize=(101, 101), sigma_x=0, sigma_y=None)
thresh = pcv.threshold.binary(gray_img=thresh, threshold=80, max_value=325, object_type="light")
#thresh = pcv.threshold.otsu(gray_img=thresh, max_value=255, object_type='light')
fill = pcv.fill(bin_img=thresh, size=350000)
dilate = pcv.dilate(gray_img=fill, ksize=120, i=1)
id_objects, obj_hierarchy = pcv.find_objects(img=image, mask=dilate)
cnt = id_objects[0]
x,y,w,h = cv2.boundingRect(cnt)
img = image[(y):(y+h),(x):(x+w)]
rect = cv2.minAreaRect(cnt)
box = cv2.boxPoints(rect)
box = np.int0(box)
# +
angle = rect[2]
if angle < 4.0 and angle > -4.0:
center = rect[0]
width, height = rect[1]
height = height-20
M = cv2.getRotationMatrix2D(center,angle,1.0)
img = cv2.warpAffine(img, M, (int(width-100), int(height)))
else:
width = w-100
height = h-70
img = img[0:, 100:-100]
plt.imshow(img)
# -
# +
blur = pcv.gaussian_blur(img=img, ksize=(21, 21), sigma_x=0, sigma_y=None)
b = pcv.rgb2gray_lab(rgb_img=blur, channel="b")
avg = np.average(img)
std = np.std(img)
if avg > 220 and std < 25:
b = pcv.hist_equalization(b)
t = 251
else:
t = 140
# defining a threshold between the leaf and the background
pcv.params.debug = "plot"
b_thresh = pcv.threshold.binary(gray_img=b, threshold= t-6, max_value=255, object_type="light")
pcv.params.debug = None
# filling in small gaps within each leaf
bsa_fill1 = pcv.fill(bin_img=b_thresh, size=300)
#bsa_fill1 = pcv.fill_holes(bin_img=bsa_fill1)
bsa_fill1 = pcv.closing(gray_img=bsa_fill1)
bsa_fill1 = pcv.erode(gray_img = bsa_fill1, ksize = 3, i = 1)
bsa_fill1 = pcv.dilate(gray_img=bsa_fill1, ksize = 3, i = 1)
bsa_fill1 = pcv.fill(bin_img=bsa_fill1, size=300)
# -
id_objects, obj_hierarchy = pcv.find_objects(img=img, mask=bsa_fill1)
pcv.params.debug = "plot"
roi_contour, roi_hierarchy = pcv.roi.rectangle(img=img, x=150, y=250, h=400, w=int(width-450))
pcv.params.debug = None
# +
# gives 4 diff outputs
# list of objs, hierarchies say object or hole w/i object
roi_objects, hierarchy, kept_mask, obj_area = pcv.roi_objects(img=img,
roi_contour=roi_contour, roi_hierarchy=roi_hierarchy,
object_contour=id_objects, obj_hierarchy=obj_hierarchy, roi_type="partial")
# clustering defined leaves into individual plants using predefined rows/cols
pcv.params.debug = "plot"
clusters_i, contours, hierarchies = cluster_jordan.cluster_contours(img=img, roi_objects=roi_objects,
roi_obj_hierarchy=hierarchy, nrow=2, ncol=6, show_grid=True)
pcv.params.debug = None
# split the clusters into individual images for analysis
output_path, imgs, masks = cluster_jordan.cluster_contour_splitimg(rgb_img=img,
grouped_contour_indexes=clusters_i, contours=contours,
hierarchy=hierarchies)
# -
sus = False
num_plants = 0
for i in range(0,6):
pos = 7-(i+1)
if clusters_i[i][0] != None:
id_objects, obj_hierarchy = pcv.find_objects(img=imgs[num_plants], mask=masks[num_plants])
obj, mask1 = pcv.object_composition(img=imgs[num_plants], contours=id_objects, hierarchy=obj_hierarchy)
m = cv2.moments(obj)
area = m['m00']
num_plants += 1
center, expect_r = cv2.minEnclosingCircle(obj)
r = math.sqrt(area/math.pi)
leaf_error = False
if r <= 0.35*expect_r:
leaf_error = True
sus = True
print(f"warning: there may be an error detecting leaf {pos}")
with open('./plates_info.csv') as dataab:
opendata = csv.reader(dataab, delimiter=',')
for exp, plate, tube, loc, CS_number, nsource, nconc in opendata:
if plate == area_dict['plate'][0] and exp == area_dict['plate'][1]:
loc = int(loc)
if pos == loc:
entry = {'position':pos, 'tube':tube, 'area':area, 'suspicious':leaf_error}
area_dict[f'plant_{pos}'] = entry
break
else:
entry = {'position':pos, 'tube':None, 'area':area, 'suspicious':leaf_error}
area_dict[f'plant_{pos}'] = entry
else:
with open('./plates_info.csv') as dataab:
opendata = csv.reader(dataab, delimiter=',')
for exp, plate, tube, loc, CS_number, nsource, nconc in opendata:
if plate == area_dict['plate'][0] and exp == area_dict['plate'][1]:
loc = int(loc)
if pos == loc:
entry = {'position':pos, 'tube':tube, 'area':0, 'suspicious':None}
area_dict[f'plant_{pos}'] = entry
break
else:
entry = {'position':pos, 'tube':None, 'area':0, 'suspicious':None}
area_dict[f'plant_{pos}'] = entry
print(area_dict)
scale_crop = img
roi_contour_scale, roi_hierarchy_scale = pcv.roi.rectangle(img=scale_crop, x=125, y=height-600, h=500, w=width-500)
a_scale = pcv.rgb2gray_lab(rgb_img=scale_crop, channel="b")
a_scale = pcv.hist_equalization(a_scale)
a_scale = pcv.gaussian_blur(img=a_scale, ksize=(21, 21), sigma_x=0, sigma_y=None)
a_scale_thresh = pcv.threshold.binary(gray_img=a_scale, threshold=245, max_value=255, object_type="light")
if len(np.unique(a_scale_thresh)) != 1:
a_scale_thresh = pcv.closing(gray_img=a_scale_thresh)
if len(np.unique(a_scale_thresh)) != 1:
a_scale_thresh = pcv.fill(bin_img = a_scale_thresh, size= 4000)
if len(np.unique(a_scale_thresh)) != 1:
a_scale_thresh = pcv.fill_holes(bin_img=a_scale_thresh)
a_scale_thresh = pcv.erode(gray_img=a_scale_thresh, ksize = 9, i = 1)
a_scale_thresh = pcv.fill(bin_img=a_scale_thresh, size=500)
a_scale_thresh = pcv.dilate(gray_img=a_scale_thresh, ksize = 9, i = 1)
id_scale, obj_hierarchy = pcv.find_objects(img=scale_crop, mask=a_scale_thresh)
pcv.params.debug = "plot"
roi_scale, scale_hierarchy, scale_mask, scale_area = pcv.roi_objects(img=scale_crop,
roi_contour=roi_contour_scale, roi_hierarchy=roi_hierarchy_scale,
object_contour=id_scale, obj_hierarchy=obj_hierarchy, roi_type="partial")
pcv.params.debug = None
count = 0
area_dict['scale'] = 0
for object in roi_scale:
m = cv2.moments(object)
area = m['m00']
(x,y), expect_r = cv2.minEnclosingCircle(object)
r = math.sqrt(area/math.pi)
if r >= 0.85*expect_r and area > 8000:
id_scale = object
hier = scale_hierarchy[0][count]
area_dict[f'scale'] = area
print(area)
break
count += 1
# +
#area_dict['scale'] = 0
# -
if area_dict['scale'] == 0:
h_scale = pcv.rgb2gray_hsv(rgb_img=scale_crop, channel="v")
h_scale = pcv.hist_equalization(h_scale)
h_scale = pcv.gaussian_blur(img=h_scale, ksize=(31, 31), sigma_x=0, sigma_y=None)
h_scale_thresh = pcv.threshold.binary(gray_img=h_scale, threshold=230, max_value=255, object_type="light")
h_scale_thresh = pcv.closing(gray_img=h_scale_thresh)
h_scale_thresh = pcv.fill(bin_img = h_scale_thresh, size= 12000)
h_scale_thresh = pcv.fill_holes(bin_img=h_scale_thresh)
h_scale_thresh = pcv.erode(gray_img = h_scale_thresh, ksize = 9, i = 1)
h_scale_thresh = pcv.fill(bin_img= h_scale_thresh, size=500)
h_scale_thresh = pcv.dilate(gray_img= h_scale_thresh, ksize = 9, i = 1)
id_scale_2, obj_hierarchy_2 = pcv.find_objects(img=scale_crop, mask=h_scale_thresh)
pcv.params.debug = "plot"
roi_scale, scale_hierarchy, scale_mask, scale_area = pcv.roi_objects(img=scale_crop,
roi_contour=roi_contour_scale, roi_hierarchy=roi_hierarchy_scale,
object_contour=id_scale_2, obj_hierarchy=obj_hierarchy_2, roi_type="partial")
pcv.params.debug = None
count2 = 0
for object in roi_scale:
m = cv2.moments(object)
area = m['m00']
(x,y), expect_r = cv2.minEnclosingCircle(object)
r = math.sqrt(area/math.pi)
if r >= 0.90*expect_r and area > 8000:
id_scale = object
hier = scale_hierarchy[0][count2]
area_dict[f'scale'] = area
print(area)
break
count2 += 1
# +
#area_dict["plant_2"]["suspicious"] = True
#sus = False
# -
if sus != True and area_dict["scale"] != 0:
areas = {}
areas[f'{photo}'] = area_dict
print(areas)
dict = {}
if not os.path.isfile('results_G3.json'):
with open('results_G3.json', 'w') as file:
json.dump(dict, file)
with open('results_G3.json') as json_file:
data = json.load(json_file)
data.update(areas)
with open('results_G3.json', 'w') as outfile:
json.dump(data, outfile)
pcv.params.debug = "print"
pcv.params.debug_outdir = "./G_plates/out"
if area_dict['scale'] != 0:
objs = roi_objects + [id_scale]
scale_hier = np.array([[hier]])
hs = np.append(hierarchy, scale_hier, axis=1)
show_objects.show_objects(img, objs, hs, label)
else:
objs = roi_objects + roi_scale
hs = np.append(hierarchy, scale_hierarchy, axis=1)
label = label+"FAIL"
show_objects.show_objects(img, objs, hs, label, red = 255)
pcv.params.debug = "plot"
| ultralite_manual.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.6.2
# language: julia
# name: julia-1.6
# ---
using Pkg
Pkg.add("QuantumOptics")
# +
using QuantumOptics
# -
b1=SpinBasis(1//2)
b2=SpinBasis(1//2)
bcomp=b1⊗b2
ψ₀= (spinup(b1) ⊗ spinup(b2)-spindown(b1)⊗spindown(b2))/sqrt(2)
l1=[]
l2=[]
dm(ψ₀)
# +
ψ₀= (spinup(b1) ⊗ spinup(b2)-spindown(b1)⊗spindown(b2))/sqrt(2)
ψ₀= (spinup(b1)+spindown(b1))/sqrt(2) ⊗ spinup(b2)
# -
rho12=dm(ψ₀)
sigma1xcomp=embed(bcomp,1,sigmax(b1))
expect(sigma1xcomp,ψ₀)
tr(sigma1xcomp*rho12)
rho1=ptrace(ψ₀,2)
tr(sigmax(b1)*rho1)
tr(rho12^2)
tr(sigmax(b1)*rho12)
tr(rho12⊗sigmay(b1))
tr(rho12⊗sigmaz(b1))
sigmaz(b1)
rho1=ptrace(ψ₀,2)
rho1*(sigmax(b1))^2
rho1*sigmay(b1)
rho1*sigmaz(b1)
| TensorNetworks/Density Operator.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
#
# Reinforcement Learning (DQN) Tutorial
# =====================================
# **Author**: `<NAME> <https://github.com/apaszke>`_
# **Translation**: `황성수 <https://github.com/adonisues>`_
#
# This tutorial shows how to use PyTorch to train a DQN (Deep Q Learning)
# agent on the CartPole-v0 task from `OpenAI Gym <https://gym.openai.com/>`__.
#
# **Task**
#
# The agent has to choose between two actions - moving the cart left or
# right - so that the attached pole stays upright.
# You can find an official leaderboard with various algorithms and
# visualizations at the `Gym website <https://gym.openai.com/envs/CartPole-v0>`__.
#
# .. figure:: /_static/img/cartpole.gif
#    :alt: cartpole
#
#    cartpole
#
# As the agent observes the current state of the environment and chooses
# an action, the environment *transitions* to a new state and also returns
# a reward that indicates the consequences of the action. In this task,
# the reward is +1 for every incremental timestep, and the environment
# terminates if the pole falls over too far or the cart moves more than
# 2.4 units away from the center. This means better performing scenarios
# will accumulate rewards over a longer duration.
#
# The CartPole task is designed so that the inputs to the agent are 4 real
# values representing the environment state (position, velocity, and so on).
# However, neural networks can solve the task purely by looking at the
# scene, so we'll use a patch of the screen centered on the cart as the
# input. Because of this, our results aren't directly comparable to the
# ones from the official leaderboard - our task is much harder.
# Unfortunately, this slows down the training, because we have to render
# all the frames.
#
# Strictly speaking, we will present the state as the difference between
# the current screen patch and the previous one. This allows the agent to
# take the velocity of the pole into account from a single image.
#
# **Packages**
#
# First, let's import the needed packages. Firstly, we need
# `gym <https://gym.openai.com/docs>`__ for the environment
# (install it with `pip install gym`).
# We'll also use the following from PyTorch:
#
# -  neural networks (``torch.nn``)
# -  optimization (``torch.optim``)
# -  automatic differentiation (``torch.autograd``)
# -  utilities for vision tasks (``torchvision`` - `a separate
#    package <https://github.com/pytorch/vision>`__).
#
#
#
# +
import gym
import math
import random
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from collections import namedtuple
from itertools import count
from PIL import Image
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import torchvision.transforms as T
env = gym.make('CartPole-v0').unwrapped
# matplotlib setup
is_ipython = 'inline' in matplotlib.get_backend()
if is_ipython:
from IPython import display
plt.ion()
# if a GPU is to be used
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# -
# Replay Memory
# -------------------------------
#
# We'll be using experience replay memory for training our DQN. It stores
# the transitions that the agent observes, allowing us to reuse this data
# later. By sampling from it randomly, the transitions that build up a
# batch are decorrelated. It has been shown that this greatly stabilizes
# and improves the DQN training procedure.
#
# For this, we're going to need two classes:
#
# - ``Transition`` - a named tuple representing a single transition in our
#   environment. It maps (state, action) pairs to their (next_state, reward)
#   result, with the state being the screen-difference image.
# - ``ReplayMemory`` - a cyclic buffer of bounded size that holds the
#   transitions observed recently. It also implements a ``.sample()``
#   method for selecting a random batch of transitions for training.
#
#
# +
Transition = namedtuple('Transition',
('state', 'action', 'next_state', 'reward'))
class ReplayMemory(object):
def __init__(self, capacity):
self.capacity = capacity
self.memory = []
self.position = 0
def push(self, *args):
"""transition 저장"""
if len(self.memory) < self.capacity:
self.memory.append(None)
self.memory[self.position] = Transition(*args)
self.position = (self.position + 1) % self.capacity
def sample(self, batch_size):
return random.sample(self.memory, batch_size)
def __len__(self):
return len(self.memory)
# -
# Now, let's define our model. But first, let's quickly recap what a DQN is.
#
# DQN algorithm
# -------------
#
# Our environment is deterministic, so all equations presented here are
# also formulated deterministically for the sake of simplicity. In the
# reinforcement learning literature, they would also contain expectations
# over stochastic transitions in the environment.
#
# Our aim will be to train a policy that tries to maximize the discounted,
# cumulative reward
# $R_{t_0} = \sum_{t=t_0}^{\infty} \gamma^{t - t_0} r_t$, where
# $R_{t_0}$ is also known as the *return*. The discount,
# $\gamma$, is a constant between $0$ and $1$ that ensures the sum
# converges. It makes rewards from the uncertain, far future less
# important for our agent than the ones in the near future, which is
# quite reasonable.
#
# The main idea behind Q-learning is that if we had a function
# $Q^*: State \times Action \rightarrow \mathbb{R}$ that could tell us
# what our return would be if we were to take an action in a given state,
# then we could easily construct a policy that maximizes our rewards:
#
# \begin{align}\pi^*(s) = \arg\!\max_a \ Q^*(s, a)\end{align}
#
# However, we don't know everything about the world, so we don't have
# access to $Q^*$. But, since neural networks are universal function
# approximators, we can simply create one and train it to resemble
# $Q^*$.
#
# For our training update rule, we'll use the fact that every $Q$
# function for some policy obeys the Bellman equation:
#
# \begin{align}Q^{\pi}(s, a) = r + \gamma Q^{\pi}(s', \pi(s'))\end{align}
#
# The difference between the two sides of the equality is known as the
# temporal difference error, $\delta$:
#
# \begin{align}\delta = Q(s, a) - (r + \gamma \max_a Q(s', a))\end{align}
#
# To minimize this error, we will use the `Huber
# loss <https://en.wikipedia.org/wiki/Huber_loss>`__. The Huber loss acts
# like the mean squared error when the error is small, but like the mean
# absolute error when the error is large - this makes it more robust to
# outliers when the estimates of $Q$ are very noisy. We calculate this
# over a batch of transitions, $B$, sampled from the replay memory:
#
# \begin{align}\mathcal{L} = \frac{1}{|B|}\sum_{(s, a, s', r) \ \in \ B} \mathcal{L}(\delta)\end{align}
#
# \begin{align}\text{where} \quad \mathcal{L}(\delta) = \begin{cases}
# \frac{1}{2}{\delta^2} & \text{for } |\delta| \le 1, \\
# |\delta| - \frac{1}{2} & \text{otherwise.}
# \end{cases}\end{align}
#
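#
# As a quick sanity check (this snippet is not part of the original tutorial), the piecewise definition above is what PyTorch's built-in ``F.smooth_l1_loss`` computes with its default settings, and that built-in is what the optimization code below uses. The ``huber`` helper here is only an illustration.
# +
import torch
import torch.nn.functional as F
def huber(delta):
    # piecewise Huber loss exactly as written above (threshold 1)
    return torch.where(delta.abs() <= 1, 0.5 * delta.pow(2), delta.abs() - 0.5)
delta = torch.linspace(-3., 3., 7)
# should hold since smooth_l1_loss defaults to the same threshold and mean reduction
assert torch.allclose(huber(delta).mean(),
                      F.smooth_l1_loss(delta, torch.zeros_like(delta)))
# -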
# Q-network
# ^^^^^^^^^^^
#
# Our model will be a convolutional neural network (CNN) that takes in the
# difference between the current and previous screen patches. It has two
# outputs, representing $Q(s, \mathrm{left})$ and $Q(s, \mathrm{right})$
# (where $s$ is the input to the network). In effect, the network is trying
# to predict the *expected return* of taking each action given the current
# input.
#
#
#
class DQN(nn.Module):
def __init__(self, h, w, outputs):
super(DQN, self).__init__()
self.conv1 = nn.Conv2d(3, 16, kernel_size=5, stride=2)
self.bn1 = nn.BatchNorm2d(16)
self.conv2 = nn.Conv2d(16, 32, kernel_size=5, stride=2)
self.bn2 = nn.BatchNorm2d(32)
self.conv3 = nn.Conv2d(32, 32, kernel_size=5, stride=2)
self.bn3 = nn.BatchNorm2d(32)
        # Number of Linear input connections depends on the output of the
        # conv2d layers and therefore the input image size, so compute it.
def conv2d_size_out(size, kernel_size = 5, stride = 2):
return (size - (kernel_size - 1) - 1) // stride + 1
convw = conv2d_size_out(conv2d_size_out(conv2d_size_out(w)))
convh = conv2d_size_out(conv2d_size_out(conv2d_size_out(h)))
linear_input_size = convw * convh * 32
self.head = nn.Linear(linear_input_size, outputs)
    # Called with either one element to determine the next action, or a batch
    # during optimization. Returns tensor([[left0exp,right0exp]...]).
def forward(self, x):
x = F.relu(self.bn1(self.conv1(x)))
x = F.relu(self.bn2(self.conv2(x)))
x = F.relu(self.bn3(self.conv3(x)))
return self.head(x.view(x.size(0), -1))
# Input extraction
# ^^^^^^^^^^^^^^^^
#
# The code below contains utilities for extracting and processing rendered
# images from the environment. It uses the ``torchvision`` package, which
# makes it easy to compose image transforms. Once you run the cell it will
# display an example patch that it extracted.
#
#
#
# +
resize = T.Compose([T.ToPILImage(),
T.Resize(40, interpolation=Image.CUBIC),
T.ToTensor()])
def get_cart_location(screen_width):
world_width = env.x_threshold * 2
scale = screen_width / world_width
return int(env.state[0] * scale + screen_width / 2.0) # MIDDLE OF CART
def get_screen():
    # The screen requested from gym is 400x600x3, but it is sometimes larger
    # such as 800x1200x3. Transpose it into torch order (CHW).
screen = env.render(mode='rgb_array').transpose((2, 0, 1))
    # Cart is in the lower half, so strip off the top and bottom of the screen
_, screen_height, screen_width = screen.shape
screen = screen[:, int(screen_height*0.4):int(screen_height * 0.8)]
view_width = int(screen_width * 0.6)
cart_location = get_cart_location(screen_width)
if cart_location < view_width // 2:
slice_range = slice(view_width)
elif cart_location > (screen_width - view_width // 2):
slice_range = slice(-view_width, None)
else:
slice_range = slice(cart_location - view_width // 2,
cart_location + view_width // 2)
    # Strip off the edges, so that we have a square image centered on the cart
screen = screen[:, :, slice_range]
    # Convert to float, rescale, convert to torch tensor
    # (this doesn't require a copy)
screen = np.ascontiguousarray(screen, dtype=np.float32) / 255
screen = torch.from_numpy(screen)
    # Resize, and add a batch dimension (BCHW)
return resize(screen).unsqueeze(0).to(device)
env.reset()
plt.figure()
plt.imshow(get_screen().cpu().squeeze(0).permute(1, 2, 0).numpy(),
interpolation='none')
plt.title('Example extracted screen')
plt.show()
# -
# Training
# --------
#
# Hyperparameters and utilities
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# This cell instantiates our model and its optimizer, and defines some
# utilities:
#
# - ``select_action`` - will select an action according to an epsilon greedy
#   policy. Simply put, we'll sometimes use our model for choosing the action,
#   and sometimes we'll just sample one uniformly. The probability of choosing
#   a random action will start at ``EPS_START`` and will decay exponentially
#   towards ``EPS_END``. ``EPS_DECAY`` controls the rate of the decay.
# - ``plot_durations`` - a helper for plotting the durations of episodes,
#   along with an average over the last 100 episodes (the measure used in the
#   official evaluations). The plot will be underneath the cell containing the
#   main training loop, and will update after every episode.
#
#
#
# +
BATCH_SIZE = 128
GAMMA = 0.999
EPS_START = 0.9
EPS_END = 0.05
EPS_DECAY = 200
TARGET_UPDATE = 10
# Get the screen size so that we can initialize layers correctly based on the
# shape returned from AI gym. Typical dimensions at this point are close to
# 3x40x90, which is the result of a clamped and down-scaled render buffer in get_screen()
init_screen = get_screen()
_, _, screen_height, screen_width = init_screen.shape
# Get the number of actions from the gym action space
n_actions = env.action_space.n
policy_net = DQN(screen_height, screen_width, n_actions).to(device)
target_net = DQN(screen_height, screen_width, n_actions).to(device)
target_net.load_state_dict(policy_net.state_dict())
target_net.eval()
optimizer = optim.RMSprop(policy_net.parameters())
memory = ReplayMemory(10000)
steps_done = 0
def select_action(state):
global steps_done
sample = random.random()
eps_threshold = EPS_END + (EPS_START - EPS_END) * \
math.exp(-1. * steps_done / EPS_DECAY)
steps_done += 1
if sample > eps_threshold:
with torch.no_grad():
            # t.max(1) will return the largest column value of each row.
            # The second column of the max result is the index of where the max
            # element was found, so we pick the action with the larger expected reward.
return policy_net(state).max(1)[1].view(1, 1)
else:
return torch.tensor([[random.randrange(n_actions)]], device=device, dtype=torch.long)
episode_durations = []
def plot_durations():
plt.figure(2)
plt.clf()
durations_t = torch.tensor(episode_durations, dtype=torch.float)
plt.title('Training...')
plt.xlabel('Episode')
plt.ylabel('Duration')
plt.plot(durations_t.numpy())
    # Take 100-episode averages and plot them too
if len(durations_t) >= 100:
means = durations_t.unfold(0, 100, 1).mean(1).view(-1)
means = torch.cat((torch.zeros(99), means))
plt.plot(means.numpy())
    plt.pause(0.001)  # pause a bit so that plots are updated
if is_ipython:
display.clear_output(wait=True)
display.display(plt.gcf())
# -
# Training loop
# ^^^^^^^^^^^^^
#
# Finally, the code for training our model.
#
# Here, you can find an ``optimize_model`` function that performs a single
# step of the optimization. It first samples a batch, concatenates all the
# tensors into a single one, computes $Q(s_t, a_t)$ and
# $V(s_{t+1}) = \max_a Q(s_{t+1}, a)$, and combines them into our loss. By
# definition we set $V(s) = 0$ if $s$ is a terminal state. We also use a
# target network to compute $V(s_{t+1})$ for added stability. The target
# network has its weights kept frozen most of the time, but is updated with
# the policy network's weights every so often. This is usually a set number
# of steps, but we shall use episodes for simplicity.
#
#
#
def optimize_model():
if len(memory) < BATCH_SIZE:
return
transitions = memory.sample(BATCH_SIZE)
    # Transpose the batch (see https://stackoverflow.com/a/19343/3343043 for
    # detailed explanation). This converts a batch-array of Transitions
    # to a Transition of batch-arrays.
batch = Transition(*zip(*transitions))
    # Compute a mask of non-final states and concatenate the batch elements
    # (a final state would have been the one after which the simulation ended)
non_final_mask = torch.tensor(tuple(map(lambda s: s is not None,
batch.next_state)), device=device, dtype=torch.uint8)
non_final_next_states = torch.cat([s for s in batch.next_state
if s is not None])
state_batch = torch.cat(batch.state)
action_batch = torch.cat(batch.action)
reward_batch = torch.cat(batch.reward)
    # Compute Q(s_t, a) - the model computes Q(s_t), then we select the
    # columns of the actions taken. These are the actions which would have
    # been taken for each batch state according to policy_net.
state_action_values = policy_net(state_batch).gather(1, action_batch)
    # Compute V(s_{t+1}) for all next states.
    # Expected values of actions for non_final_next_states are computed based
    # on the "older" target_net; selecting their best reward with max(1)[0].
    # This is merged based on the mask, such that we'll have either the
    # expected state value or 0 in case the state was final.
next_state_values = torch.zeros(BATCH_SIZE, device=device)
next_state_values[non_final_mask] = target_net(non_final_next_states).max(1)[0].detach()
    # Compute the expected Q values
expected_state_action_values = (next_state_values * GAMMA) + reward_batch
    # Compute Huber loss
loss = F.smooth_l1_loss(state_action_values, expected_state_action_values.unsqueeze(1))
    # Optimize the model
optimizer.zero_grad()
loss.backward()
for param in policy_net.parameters():
param.grad.data.clamp_(-1, 1)
optimizer.step()
# Below, you can find the main training loop. At the beginning we reset the
# environment and initialize the ``state`` Tensor. Then, we sample an action,
# execute it, observe the next screen and the reward (always 1), and optimize
# our model once. When the episode ends (our model fails), we restart the loop.
#
# Below, `num_episodes` is set small. You should download the notebook and run
# many more episodes, such as 300+, for meaningful duration improvements.
#
#
#
# +
num_episodes = 50
for i_episode in range(num_episodes):
    # Initialize the environment and state
env.reset()
last_screen = get_screen()
current_screen = get_screen()
state = current_screen - last_screen
for t in count():
        # Select and perform an action
action = select_action(state)
_, reward, done, _ = env.step(action.item())
reward = torch.tensor([reward], device=device)
        # Observe the new state
last_screen = current_screen
current_screen = get_screen()
if not done:
next_state = current_screen - last_screen
else:
next_state = None
        # Store the transition in memory
memory.push(state, action, next_state, reward)
        # Move to the next state
state = next_state
        # Perform one step of the optimization (on the policy network)
optimize_model()
if done:
episode_durations.append(t + 1)
plot_durations()
break
    # Update the target network, copying all weights and biases in the DQN
if i_episode % TARGET_UPDATE == 0:
target_net.load_state_dict(policy_net.state_dict())
print('Complete')
env.render()
env.close()
plt.ioff()
plt.show()
# -
# Here is the diagram that illustrates the overall resulting data flow.
#
# .. figure:: /_static/img/reinforcement_learning_diagram.jpg
#
# Actions are chosen either randomly or based on a policy, getting the next
# step sample from the gym environment. We record the results in the replay
# memory and also run an optimization step on every iteration. Optimization
# picks a random batch from the replay memory to do training of the new
# policy. The "older" target_net is also used in optimization to compute the
# expected Q values; it is updated occasionally to keep it current.
#
#
#
| docs/_downloads/9da0471a9eeb2351a488cd4b44fc6bbf/reinforcement_q_learning.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: ntds_2018
# language: python
# name: ntds_2018
# ---
# +
import tensorflow as tf
import script_config as sc
import pandas as pd
import heapq as hq
import numpy as np
import csv
data_folder = sc._config_data_folder
hops = sc._config_hops
max_list_size = sc._config_relation_list_size_neighborhood
# -
users = pd.read_csv(data_folder+"filtered_users.csv",delimiter=',').values
relations = pd.read_csv(data_folder+"filtered_relations.csv",delimiter=',').values
# BFS search to find neighborhood of radius "hops"
def find_neighborhood(user):
# Looking for elements in numpy array (better than lists)
def element_in_narray(narray,row):
count = np.where(narray==row)
if len(count[0]) is 0:
return False
else:
return True
# Function Global Variables
current_id = "{:07d}".format(user[0])
current_hops = 0
relations_idx = 0 # Index of next free spot in retained_relations
neighbors_idx = 0 # Index of next free spot in visited_neighbors
queue_head = 0 # Index of queue head
queue_tail = 1 # Index of next free spot in queue
# Data Structures
queue = np.ndarray(max_list_size,dtype='i4')
visited_neighbors = np.ndarray(max_list_size,dtype='i4')
retained_relations = np.ndarray(max_list_size,dtype='i4, i4, i4, i4, i4')
queue[0] = int(current_id)
# Loop until queue is empty
while( queue_head != queue_tail ):
current_id = "{:07d}".format(queue[queue_head])
queue_head += 1
# Treat incoming edges and outgoing edges equally
relations_1 = relations[np.where(relations[:,2] == int(current_id))]
relations_2 = relations[np.where(relations[:,3] == int(current_id))]
neigh_ids = np.union1d(relations_1[:,3],relations_2[:,2])
# Cutoff Condition
if current_hops + 1 <= hops:
for neigh_id in neigh_ids:
# Check that node has not been visited
# and has not been marked to be visited
if ( not element_in_narray(visited_neighbors[:neighbors_idx],int(neigh_id)))\
and ( not element_in_narray(queue[queue_head:queue_tail],int(neigh_id))):
if queue_tail == max_list_size:
raise MemoryError("Increase _config_list_size_neighborhood_creation \
from config.py")
# Mark node to be visited
queue[queue_tail] = int(neigh_id)
queue_tail += 1
for relation_set in [relations_1,relations_2] :
for relation in relation_set:
# Memory Checking
if relations_idx == max_list_size:
raise MemoryError("Increase _config_list_size_neighborhood_creation \
from config.py")
relation_tuple = (int(relation[0]),int(relation[1]),int(relation[2]),
int(relation[3]),int(relation[4]))
# Only add relations with visited neighbors (directionally agnostic)
if element_in_narray(visited_neighbors[:neighbors_idx],relation_tuple[2]) or \
element_in_narray(visited_neighbors[:neighbors_idx],relation_tuple[3]):
# Retain Relation if not already done
if not element_in_narray(retained_relations[:relations_idx],\
np.array([relation_tuple],dtype='i4,i4,i4,i4,i4')):
retained_relations[relations_idx] = relation_tuple
relations_idx += 1
# Memory Checking
if neighbors_idx == max_list_size:
raise MemoryError("Increase _config_list_size_neighborhood_creation \
from config.py")
# Mark node as visited
visited_neighbors[neighbors_idx] = int(current_id)
neighbors_idx += 1
return visited_neighbors[:neighbors_idx], retained_relations[:relations_idx]
# !mkdir /nvme/drive_1/NTDS_Final/local_list/
# !rm -r /nvme/drive_1/NTDS_Final/local_list/filtered
# !mkdir /nvme/drive_1/NTDS_Final/local_list/filtered
i = 0
max_size = 0
for user in users:
neighs, rels = find_neighborhood(user)
np.savez_compressed(data_folder+"local_list/filtered/"+str(user[0]),\
local_neighbors=neighs,local_relations=rels)
i+=1
if len(neighs)>max_size:
max_size = len(neighs)
print("\r"+str(i) +" neighborhoods processed / Max Size: "+str(max_size), sep=' ', end='', flush=True)
| raphael_temp/Local_List_Creation_(Pre-Filtered).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Using Pòlya-Gamma Auxiliary Variables for Binary Classification
#
# ## Overview
#
# In this notebook, we'll demonstrate how to use Pòlya-Gamma auxiliary variables to do efficient inference for Gaussian Process binary classification as in reference [1].
# We will also use natural gradient descent, as described in more detail in the [Natural gradient descent](./Natural_Gradient_Descent.ipynb) tutorial.
#
#
# [1] <NAME>, <NAME>, <NAME>, <NAME>, <NAME>. [Efficient Gaussian process classification using Pòlya-Gamma data augmentation](https://arxiv.org/abs/1802.06383). Proceedings of the AAAI Conference on Artificial Intelligence. 2019.
#
# ## Pòlya-Gamma Augmentation
#
# When a Gaussian Process prior is paired with a Gaussian likelihood, inference can be done exactly with a simple closed-form expression.
# Unfortunately this attractive feature does not carry over to non-conjugate likelihoods like the Bernoulli likelihood that arises in the context of binary classification with a logistic link function.
# Sampling-based stochastic variational inference offers a general strategy for dealing with non-conjugate likelihoods; see the [corresponding tutorial](./Non_Gaussian_Likelihoods.ipynb).
#
# Another possible strategy is to introduce additional latent variables that restore conjugacy.
# This is the strategy we follow here.
# In particular we are going to introduce a Pòlya-Gamma auxiliary variable for each data point in our training dataset.
# The [Polya-Gamma](https://arxiv.org/abs/1205.0310) distribution $\rm{PG}$ is a univariate distribution with support on the positive real line.
# In our context it is interesting because if $\omega_i$ is distributed according to $\rm{PG}(1,0)$ then the logistic likelihood $\sigma(\cdot)$ for data point $(x_i, y_i)$ can be represented as
#
# \begin{align}
# \sigma(y_i f_i) = \frac{1}{1 + \exp(-y_i f_i)} = \tfrac{1}{2} \mathbb{E}_{\omega_i \sim \rm{PG}(1,0)} \left[ \exp \left(\tfrac{1}{2} y_i f_i - \tfrac{\omega_i}{2} f_i^2 \right) \right]
# \end{align}
#
# where $y_i \in \{-1, 1\}$ is the binary label of data point $i$
# and $f_i$ is the Gaussian Process prior evaluated at input $x_i$.
# The crucial point here is that $f_i$ appears quadratically in the exponential within the expectation.
# In other words, conditioned on $\omega_i$, we can integrate out $f_i$ exactly, just as if we were doing regression with a Gaussian likelihood. For more details please see the original reference.
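#
# As an added note (not in the original write-up), one closed-form fact that the likelihood implementation below relies on is the mean of a tilted Pòlya-Gamma variable: if $\omega \sim \rm{PG}(1, c)$ then
#
# \begin{align}
# \mathbb{E}[\omega] = \frac{1}{2c} \tanh\left(\frac{c}{2}\right),
# \end{align}
#
# so the quantity `0.25 * torch.tanh(0.5 * c) / c` in the likelihood code is simply $\tfrac{1}{2}\mathbb{E}[\omega]$.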
#
# ## Setup
# +
import tqdm
import math
import torch
import gpytorch
from matplotlib import pyplot as plt
# Make plots inline
# %matplotlib inline
# -
# For this example notebook, we'll create a simple artificial dataset.
# +
import os
from math import floor
# this is for running the notebook in our testing framework
smoke_test = ('CI' in os.environ)
N = 100
X = torch.linspace(-1., 1., N)
probs = (torch.sin(X * math.pi).add(1.).div(2.))
y = torch.distributions.Bernoulli(probs=probs).sample()
X = X.unsqueeze(-1)
train_n = int(floor(0.8 * N))
indices = torch.randperm(N)
train_x = X[indices[:train_n]].contiguous()
train_y = y[indices[:train_n]].contiguous()
test_x = X[indices[train_n:]].contiguous()
test_y = y[indices[train_n:]].contiguous()
if torch.cuda.is_available():
train_x, train_y, test_x, test_y = train_x.cuda(), train_y.cuda(), test_x.cuda(), test_y.cuda()
# -
# Let's plot our artificial dataset.
# Note that here the binary labels are 0/1-valued; we will need to be careful to translate between this representation and the -1/1 representation that is most natural in the context of Pòlya-Gamma augmentation.
plt.plot(train_x.squeeze(-1), train_y, 'o')
# The following steps create the dataloader objects. See the [SVGP regression notebook](./SVGP_Regression_CUDA.ipynb) for details.
# +
from torch.utils.data import TensorDataset, DataLoader
train_dataset = TensorDataset(train_x, train_y)
train_loader = DataLoader(train_dataset, batch_size=100000, shuffle=False)
test_dataset = TensorDataset(test_x, test_y)
test_loader = DataLoader(test_dataset, batch_size=1024, shuffle=False)
# -
# ## Variational Inference with PG Auxiliaries
#
# We define a Bernoulli likelihood that leverages Pòlya-Gamma augmentation.
# It turns out that we can derive closed form updates for the Pòlya-Gamma auxiliary variables. To deal with the Gaussian Process we introduce inducing points and inducing locations.
# In particular we will need to learn a variational covariance matrix and a variational mean vector that control the inducing points. (See the discussion in the [SVGP tutorial](Approximate_GP_Objective_Functions.ipynb) for more details.)
# We will use natural gradient updates to deal with these two variational parameters; this will allow us to take large steps, thus yielding fast convergence.
# +
class PGLikelihood(gpytorch.likelihoods._OneDimensionalLikelihood):
# this method effectively computes the expected log likelihood
# contribution to Eqn (10) in Reference [1].
def expected_log_prob(self, target, input, *args, **kwargs):
mean, variance = input.mean, input.variance
# Compute the expectation E[f_i^2]
raw_second_moment = variance + mean.pow(2)
# Translate targets to be -1, 1
target = target.to(mean.dtype).mul(2.).sub(1.)
# We detach the following variable since we do not want
# to differentiate through the closed-form PG update.
c = raw_second_moment.detach().sqrt()
# Compute mean of PG auxiliary variable omega: 0.5 * Expectation[omega]
# See Eqn (11) and Appendix A2 and A3 in Reference [1] for details.
half_omega = 0.25 * torch.tanh(0.5 * c) / c
# Expected log likelihood
res = 0.5 * target * mean - half_omega * raw_second_moment
# Sum over data points in mini-batch
res = res.sum(dim=-1)
return res
# define the likelihood
def forward(self, function_samples):
return torch.distributions.Bernoulli(logits=function_samples)
# define the marginal likelihood using Gauss Hermite quadrature
def marginal(self, function_dist):
prob_lambda = lambda function_samples: self.forward(function_samples).probs
probs = self.quadrature(prob_lambda, function_dist)
return torch.distributions.Bernoulli(probs=probs)
# define the actual GP model (kernels, inducing points, etc.)
class GPModel(gpytorch.models.ApproximateGP):
def __init__(self, inducing_points):
variational_distribution = gpytorch.variational.NaturalVariationalDistribution(inducing_points.size(0))
variational_strategy = gpytorch.variational.VariationalStrategy(
self, inducing_points, variational_distribution, learn_inducing_locations=True
)
super(GPModel, self).__init__(variational_strategy)
self.mean_module = gpytorch.means.ZeroMean()
self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())
def forward(self, x):
mean_x = self.mean_module(x)
covar_x = self.covar_module(x)
return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
# we initialize our model with M = 30 inducing points
M = 30
inducing_points = torch.linspace(-2., 2., M, dtype=train_x.dtype, device=train_x.device).unsqueeze(-1)
model = GPModel(inducing_points=inducing_points)
model.covar_module.base_kernel.initialize(lengthscale=0.2)
likelihood = PGLikelihood()
if torch.cuda.is_available():
model = model.cuda()
likelihood = likelihood.cuda()
# -
# ### Setup optimizers
#
# We will use an `NGD` (Natural Gradient Descent) optimizer to deal with the inducing point covariance matrix and corresponding mean vector, while we will use the `Adam` optimizer for all other parameters (the kernel hyperparameters as well as the inducing point locations).
# Note that we use a pretty large learning rate for the `NGD` optimizer.
# +
variational_ngd_optimizer = gpytorch.optim.NGD(model.variational_parameters(), num_data=train_y.size(0), lr=0.1)
hyperparameter_optimizer = torch.optim.Adam([
{'params': model.hyperparameters()},
{'params': likelihood.parameters()},
], lr=0.01)
# -
# ### Define training loop
# +
model.train()
likelihood.train()
mll = gpytorch.mlls.VariationalELBO(likelihood, model, num_data=train_y.size(0))
num_epochs = 1 if smoke_test else 100
epochs_iter = tqdm.notebook.tqdm(range(num_epochs), desc="Epoch")
for i in epochs_iter:
minibatch_iter = tqdm.notebook.tqdm(train_loader, desc="Minibatch", leave=False)
for x_batch, y_batch in minibatch_iter:
### Perform NGD step to optimize variational parameters
variational_ngd_optimizer.zero_grad()
hyperparameter_optimizer.zero_grad()
output = model(x_batch)
loss = -mll(output, y_batch)
minibatch_iter.set_postfix(loss=loss.item())
loss.backward()
variational_ngd_optimizer.step()
hyperparameter_optimizer.step()
# -
# ### Visualization and Evaluation
# push training data points through model
train_mean_f = model(train_x).loc.data.cpu()
# plot training data with y being -1/1 valued
plt.plot(train_x.squeeze(-1), train_y.mul(2.).sub(1.), 'o')
# plot mean gaussian process posterior mean evaluated at training data
plt.plot(train_x.squeeze(-1).cpu(), train_mean_f.cpu(), 'x')
# As expected the Gaussian Process posterior mean (plotted in orange) gives confident predictions in the regions
# where the correct label is unambiguous (e.g. for x ~ 0.5) and gives unconfident predictions in regions where
# the correct label is ambiguous (e.g. x ~ 0.0).
#
# We compute the negative log likelihood (NLL) and classification accuracy on the held-out test data.
model.eval()
likelihood.eval()
with torch.no_grad():
nlls = -likelihood.log_marginal(test_y, model(test_x))
acc = (likelihood(model(test_x)).probs.gt(0.5) == test_y.bool()).float().mean()
print('Test NLL: {:.4f}'.format(nlls.mean()))
print('Test Acc: {:.4f}'.format(acc.mean()))
| examples/04_Variational_and_Approximate_GPs/PolyaGamma_Binary_Classification.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="C7oVbe_pLr3C"
# # Assignment 3 - Named Entity Recognition (NER)
#
# Welcome to the third programming assignment of Course 3. In this assignment, you will learn to build more complicated models with Trax. By completing this assignment, you will be able to:
#
# - Design the architecture of a neural network, train it, and test it.
# - Process features and represent them
# - Understand word padding
# - Implement LSTMs
# - Test with your own sentence
#
# ## Outline
# - [Introduction](#0)
# - [Part 1: Exploring the data](#1)
# - [1.1 Importing the Data](#1.1)
# - [1.2 Data generator](#1.2)
# - [Exercise 01](#ex01)
# - [Part 2: Building the model](#2)
# - [Exercise 02](#ex02)
# - [Part 3: Train the Model ](#3)
# - [Exercise 03](#ex03)
# - [Part 4: Compute Accuracy](#4)
# - [Exercise 04](#ex04)
# - [Part 5: Testing with your own sentence](#5)
#
# + [markdown] colab_type="text" id="ftT-5-yynCtl"
# <a name="0"></a>
# # Introduction
#
# We first start by defining named entity recognition (NER). NER is a subtask of information extraction that locates and classifies named entities in a text. The named entities could be organizations, persons, locations, times, etc.
#
# For example:
#
# <img src = 'ner.png' width="width" height="height" style="width:600px;height:150px;"/>
#
# Is labeled as follows:
#
# - French: geopolitical entity
# - Morocco: geographic entity
# - Christmas: time indicator
#
# Everything else that is labeled with an `O` is not considered to be a named entity. In this assignment, you will train a named entity recognition system that could be trained in a few seconds (on a GPU) and will get around 75% accuracy. Then, you will load in the exact version of your model, which was trained for a longer period of time. You could then evaluate the trained version of your model to get 96% accuracy! Finally, you will be able to test your named entity recognition system with your own sentence.
# + colab={"base_uri": "https://localhost:8080/", "height": 459} colab_type="code" id="JEY_jlQQR9SP" outputId="825f37fd-cf03-483a-a6b1-3d70da6f70f1"
# #!pip -q install trax==1.3.1
import trax
from trax import layers as tl
import os
import numpy as np
import pandas as pd
from utils import get_params, get_vocab
import random as rnd
# set random seeds to make this notebook easier to replicate
trax.supervised.trainer_lib.init_random_number_generators(33)
# + [markdown] colab_type="text" id="_PpjG5MuLr3F"
# <a name="1"></a>
# # Part 1: Exploring the data
#
# We will be using a dataset from Kaggle, which we will preprocess for you. The original data consists of four columns, the sentence number, the word, the part of speech of the word, and the tags. A few tags you might expect to see are:
#
# * geo: geographical entity
# * org: organization
# * per: person
# * gpe: geopolitical entity
# * tim: time indicator
# * art: artifact
# * eve: event
# * nat: natural phenomenon
# * O: filler word
#
# + colab={"base_uri": "https://localhost:8080/", "height": 170} colab_type="code" id="-Jur1JnXnCtr" outputId="88584ab6-0a15-489b-db5b-eb474e129c4f"
# display original kaggle data
data = pd.read_csv("ner_dataset.csv", encoding = "ISO-8859-1")
train_sents = open('data/small/train/sentences.txt', 'r').readline()
train_labels = open('data/small/train/labels.txt', 'r').readline()
print('SENTENCE:', train_sents)
print('SENTENCE LABEL:', train_labels)
print('ORIGINAL DATA:\n', data.head(5))
del(data, train_sents, train_labels)
# + [markdown] colab_type="text" id="xoH6yBWVfzTb"
# <a name="1.1"></a>
# ## 1.1 Importing the Data
#
# In this part, we will import the preprocessed data and explore it.
# + colab={} colab_type="code" id="UauHjIKHWC0N"
vocab, tag_map = get_vocab('data/large/words.txt', 'data/large/tags.txt')
t_sentences, t_labels, t_size = get_params(vocab, tag_map, 'data/large/train/sentences.txt', 'data/large/train/labels.txt')
v_sentences, v_labels, v_size = get_params(vocab, tag_map, 'data/large/val/sentences.txt', 'data/large/val/labels.txt')
test_sentences, test_labels, test_size = get_params(vocab, tag_map, 'data/large/test/sentences.txt', 'data/large/test/labels.txt')
# + [markdown] colab_type="text" id="mcQi6EmWnCty"
# `vocab` is a dictionary that translates a word string to a unique number. Given a sentence, you can represent it as an array of numbers translating with this dictionary. The dictionary contains a `<PAD>` token.
#
# When training an LSTM using batches, all your input sentences must be the same size. To accomplish this, you set the length of your sentences to a certain number and add the generic `<PAD>` token to fill all the empty spaces.
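#
# As a small illustration (not part of the graded code), padding a toy batch of encoded sentences with NumPy could look like the snippet below; the sentence ids here are made up, and 35180 is the `<PAD>` id used by this assignment's vocabulary.
# +
import numpy as np
pad_id = 35180                          # value of vocab['<PAD>'] for this dataset
toy_sentences = [[12, 7, 431], [5, 9]]  # two hypothetical encoded sentences
max_len = max(len(s) for s in toy_sentences)
padded = np.full((len(toy_sentences), max_len), pad_id)
for i, s in enumerate(toy_sentences):
    padded[i, :len(s)] = s
print(padded)
# -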
# + colab={"base_uri": "https://localhost:8080/", "height": 51} colab_type="code" id="sm2P8y7zNgdU" outputId="1f1d077a-7c57-42df-fb67-48dd00da39ca"
# vocab translates from a word to a unique number
print('vocab["the"]:', vocab["the"])
# Pad token
print('padded token:', vocab['<PAD>'])
# + [markdown] colab_type="text" id="IY6BTBjunCt1"
# The tag_map corresponds to one of the possible tags a word can have. Run the cell below to see the possible classes you will be predicting. The prepositions in the tags mean:
# * I: Token is inside an entity.
# * B: Token begins an entity.
# + colab={"base_uri": "https://localhost:8080/", "height": 54} colab_type="code" id="ZzMamaPcQXWP" outputId="4f04364c-a88c-4e77-f8bb-9bfcb422d5e9"
print(tag_map)
# + [markdown] colab_type="text" id="3F1sUP_MnCt5"
# So the coding scheme that tags the entities is a minimal one where B- indicates the first token in a multi-token entity, and I- indicates one in the middle of a multi-token entity. If you had the sentence
#
# **"<NAME>w to Miami on Friday"**
#
# the outputs would look like:
#
# ```
# <NAME>
# flew O
# to O
# Miami B-geo
# on O
# Friday B-tim
# ```
#
# your tags would reflect three tokens beginning with B-, since there are no multi-token entities in the sequence. But if you added Sharon's last name to the sentence:
#
# **"<NAME> flew to Miami on Friday"**
#
# ```
# <NAME>
# Floyd I-per
# flew O
# to O
# Miami B-geo
# on O
# Friday B-tim
# ```
#
# then your tags would change to show first "Sharon" as B-per, and "Floyd" as I-per, where I- indicates an inner token in a multi-token sequence.
# + colab={"base_uri": "https://localhost:8080/", "height": 136} colab_type="code" id="xM9B_Rwxd01i" outputId="db098ed6-4351-41f7-cfdb-e45dd3798ebf"
# Exploring information about the data
print('The number of outputs is tag_map', len(tag_map))
# The number of vocabulary tokens (including <PAD>)
g_vocab_size = len(vocab)
print(f"Num of vocabulary words: {g_vocab_size}")
print('The vocab size is', len(vocab))
print('The training size is', t_size)
print('The validation size is', v_size)
print('An example of the first sentence is', t_sentences[0])
print('An example of its corresponding label is', t_labels[0])
# + [markdown] colab_type="text" id="IPd5a-4_nCt8"
# So you can see that we have already encoded each sentence into a tensor by converting it into a number. We also have 16 possible classes, as shown in the tag map.
#
#
# <a name="1.2"></a>
# ## 1.2 Data generator
#
# In python, a generator is a function that behaves like an iterator. It will return the next item. Here is a [link](https://wiki.python.org/moin/Generators) to review python generators.
#
# In many AI applications it is very useful to have a data generator. You will now implement a data generator for our NER application.
#
# <a name="ex01"></a>
# ### Exercise 01
#
# **Instructions:** Implement a data generator function that takes in `batch_size, x, y, pad, shuffle`, where x is a large list of sentences, y is a list of the tags associated with those sentences, and pad is a pad value. Return a subset of those inputs in a tuple of two arrays `(X,Y)`. Each is an array of dimension (`batch_size, max_len`), where `max_len` is the length of the longest sentence *in that batch*. You will pad the X and Y examples with the pad argument. If `shuffle=True`, the data will be traversed in random order.
#
# **Details:**
#
# This code has an outer loop
# ```
# while True:
# ...
# yield((X,Y))
# ```
#
# Which runs continuously in the fashion of generators, pausing when yielding the next values. We will generate a batch_size output on each pass of this loop.
#
# It has two inner loops.
# 1. The first stores in temporary lists the data samples to be included in the next batch, and finds the maximum length of the sentences contained in it. By adjusting the length to include only the size of the longest sentence in each batch, overall computation is reduced.
#
# 2. The second loop moves those inputs from the temporary lists into NumPy arrays pre-filled with pad values.
#
# There are three slightly out of the ordinary features.
# 1. The first is the use of the NumPy `full` function to fill the NumPy arrays with a pad value. See [full function documentation](https://numpy.org/doc/1.18/reference/generated/numpy.full.html).
#
# 2. The second is tracking the current location in the incoming lists of sentences. Generator variables hold their values between invocations, so we create an `index` variable, initialize it to zero, and increment it by one for each sample included in a batch. However, we do not use `index` to access the positions of the list of sentences directly. Instead, we use it to select one index from a list of indexes. In this way, we can change the order in which we traverse our original list, keeping the original list untouched.
#
# 3. The third also relates to wrapping. Because `batch_size` and the length of the input lists are not aligned, gathering a batch_size group of inputs may involve wrapping back to the beginning of the input loop. In our approach, it is just enough to reset the `index` to 0. We can re-shuffle the list of indexes to produce different batches each time.
# + colab={} colab_type="code" id="tP7zQC8knCt_"
# UNQ_C1 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION: data_generator
def data_generator(batch_size, x, y, pad, shuffle=False, verbose=False):
'''
Input:
batch_size - integer describing the batch size
x - list containing sentences where words are represented as integers
y - list containing tags associated with the sentences
shuffle - Shuffle the data order
pad - an integer representing a pad character
verbose - Print information during runtime
Output:
a tuple containing 2 elements:
X - np.ndarray of dim (batch_size, max_len) of padded sentences
Y - np.ndarray of dim (batch_size, max_len) of tags associated with the sentences in X
'''
# count the number of lines in data_lines
num_lines = len(x)
# create an array with the indexes of data_lines that can be shuffled
lines_index = [*range(num_lines)]
# shuffle the indexes if shuffle is set to True
if shuffle:
rnd.shuffle(lines_index)
index = 0 # tracks current location in x, y
while True:
buffer_x = [0] * batch_size # Temporal array to store the raw x data for this batch
buffer_y = [0] * batch_size # Temporal array to store the raw y data for this batch
### START CODE HERE (Replace instances of 'None' with your code) ###
# Copy into the temporal buffers the sentences in x[index : index + batch_size]
# along with their corresponding labels y[index : index + batch_size]
# Find maximum length of sentences in x[index : index + batch_size] for this batch.
# Reset the index if we reach the end of the data set, and shuffle the indexes if needed.
max_len = 0
for i in range(batch_size):
# if the index is greater than or equal to the number of lines in x
if index >= num_lines:
# then reset the index to 0
index = 0
# re-shuffle the indexes if shuffle is set to True
if shuffle:
rnd.shuffle(lines_index)
# The current position is obtained using `lines_index[index]`
# Store the x value at the current position into the buffer_x
buffer_x[i] = x[lines_index[index]]
# Store the y value at the current position into the buffer_y
buffer_y[i] = y[lines_index[index]]
lenx = len(x[lines_index[index]]) #length of current x[]
if lenx > max_len:
max_len = lenx #max_len tracks longest x[]
# increment index by one
index += 1
# create X,Y, NumPy arrays of size (batch_size, max_len) 'full' of pad value
X = np.full((batch_size, max_len), pad)
Y = np.full((batch_size, max_len), pad)
# copy values from lists to NumPy arrays. Use the buffered values
for i in range(batch_size):
# get the example (sentence as a tensor)
# in `buffer_x` at the `i` index
x_i = buffer_x[i]
# similarly, get the example's labels
# in `buffer_y` at the `i` index
y_i = buffer_y[i]
# Walk through each word in x_i
for j in range(len(x_i)):
# store the word in x_i at position j into X
X[i, j] = x_i[j]
# store the label in y_i at position j into Y
Y[i, j] = y_i[j]
### END CODE HERE ###
if verbose: print("index=", index)
yield((X,Y))
# + colab={"base_uri": "https://localhost:8080/", "height": 170} colab_type="code" id="s3fwE3PMhOW4" outputId="12022225-1498-4acb-c0c8-cd02b71c091d"
batch_size = 5
mini_sentences = t_sentences[0: 8]
mini_labels = t_labels[0: 8]
dg = data_generator(batch_size, mini_sentences, mini_labels, vocab["<PAD>"], shuffle=False, verbose=True)
X1, Y1 = next(dg)
X2, Y2 = next(dg)
print(Y1.shape, X1.shape, Y2.shape, X2.shape)
print(X1[0][:], "\n", Y1[0][:])
# + [markdown] colab_type="text" id="W-qWOhFunCuH"
# **Expected output:**
# ```
# index= 5
# index= 2
# (5, 30) (5, 30) (5, 30) (5, 30)
# [ 0 1 2 3 4 5 6 7 8 9 10 11
# 12 13 14 9 15 1 16 17 18 19 20 21
# 35180 35180 35180 35180 35180 35180]
# [ 0 0 0 0 0 0 1 0 0 0 0 0
# 1 0 0 0 0 0 2 0 0 0 0 0
# 35180 35180 35180 35180 35180 35180]
# ```
# + [markdown] colab_type="text" id="4SWxKhkVLr3P"
# <a name="2"></a>
# # Part 2: Building the model
#
# You will now implement the model. You will be using Google's Trax framework (imported above as `trax`). Your model will be able to distinguish the following:
# <table>
# <tr>
# <td>
# <img src = 'ner1.png' width="width" height="height" style="width:500px;height:150px;"/>
# </td>
# </tr>
# </table>
#
# The model architecture will be as follows:
#
# <img src = 'ner2.png' width="width" height="height" style="width:600px;height:250px;"/>
#
# Concretely:
#
# * Use the input tensors you built in your data generator
# * Feed it into an Embedding layer, to produce more semantic entries
# * Feed it into an LSTM layer
# * Run the output through a linear layer
# * Run the result through a log softmax layer to get the predicted class for each word.
#
# Good news! We won't make you implement the LSTM unit drawn above. However, we will ask you to build the model.
#
# <a name="ex02"></a>
# ### Exercise 02
#
# **Instructions:** Implement the initialization step and the forward function of your Named Entity Recognition system.
# Please use the help function, e.g. `help(tl.Dense)`, for more information on a layer.
#
# - [tl.Serial](https://github.com/google/trax/blob/1372b903bb66b0daccee19fd0b1fdf44f659330b/trax/layers/combinators.py#L26): Combinator that applies layers serially (by function composition).
# - You can pass in the layers as arguments to `Serial`, separated by commas.
# - For example: `tl.Serial(tl.Embeddings(...), tl.Mean(...), tl.Dense(...), tl.LogSoftmax(...))`
#
#
# - [tl.Embedding](https://github.com/google/trax/blob/1372b903bb66b0daccee19fd0b1fdf44f659330b/trax/layers/core.py#L113): Initializes the embedding. In this case it is the dimension of the model by the size of the vocabulary.
# - `tl.Embedding(vocab_size, d_feature)`.
# - `vocab_size` is the number of unique words in the given vocabulary.
# - `d_feature` is the number of elements in the word embedding (some choices for a word embedding size range from 150 to 300, for example).
#
#
# - [tl.LSTM](https://github.com/google/trax/blob/1372b903bb66b0daccee19fd0b1fdf44f659330b/trax/layers/rnn.py#L87):`Trax` LSTM layer of size d_model.
# - `LSTM(n_units)` Builds an LSTM layer of n_cells.
#
#
#
# - [tl.Dense](https://github.com/google/trax/blob/1372b903bb66b0daccee19fd0b1fdf44f659330b/trax/layers/core.py#L28): A dense layer.
# - `tl.Dense(n_units)`: The parameter `n_units` is the number of units chosen for this dense layer.
#
#
# - [tl.LogSoftmax](https://github.com/google/trax/blob/1372b903bb66b0daccee19fd0b1fdf44f659330b/trax/layers/core.py#L242): Log of the output probabilities.
# - Here, you don't need to set any parameters for `LogSoftMax()`.
#
#
# **Online documentation**
#
# - [tl.Serial](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#module-trax.layers.combinators)
#
# - [tl.Embedding](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.core.Embedding)
#
# - [tl.LSTM](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.rnn.LSTM)
#
# - [tl.Dense](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.core.Dense)
#
# - [tl.LogSoftmax](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.core.LogSoftmax)
# + colab={} colab_type="code" id="vL5u72u8Lr3Q"
# UNQ_C2 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION: NER
def NER(vocab_size=35181, d_model=50, tags=tag_map):
'''
Input:
vocab_size - integer containing the size of the vocabulary
d_model - integer describing the embedding size
Output:
model - a trax serial model
'''
### START CODE HERE (Replace instances of 'None' with your code) ###
model = tl.Serial(
tl.Embedding(vocab_size, d_model), # Embedding layer
tl.LSTM(d_model), # LSTM layer
tl.Dense(len(tags)), # Dense layer with len(tags) units
tl.LogSoftmax() # LogSoftmax layer
)
### END CODE HERE ###
return model
# + colab={"base_uri": "https://localhost:8080/", "height": 119} colab_type="code" id="BrGdYpPvLr3U" outputId="3a3e721c-8e63-40ac-f8fa-e500f0abdad5"
# initializing your model
model = NER()
# display your model
print(model)
# + [markdown] colab_type="text" id="p636VCSanCuS"
# **Expected output:**
# ```
# Serial[
# Embedding_35181_50
# LSTM_50
# Dense_17
# LogSoftmax
# ]
# ```
#
# + [markdown] colab_type="text" id="4LkjXxxhLr3Z"
# <a name="3"></a>
# # Part 3: Train the Model
#
# This section will train your model.
#
# Before you start, you need to create the data generators for training and validation data. It is important that you mask padding in the loss weights of your data, which can be done using the `id_to_mask` argument of `trax.supervised.inputs.add_loss_weights`.
# + colab={} colab_type="code" id="lPBR1YrRmEAH"
from trax.supervised import training
rnd.seed(33)
batch_size = 64
# Create training data, mask pad id=35180 for training.
train_generator = trax.supervised.inputs.add_loss_weights(
data_generator(batch_size, t_sentences, t_labels, vocab['<PAD>'], True),
id_to_mask=vocab['<PAD>'])
# Create validation data, mask pad id=35180 for training.
eval_generator = trax.supervised.inputs.add_loss_weights(
data_generator(batch_size, v_sentences, v_labels, vocab['<PAD>'], True),
id_to_mask=vocab['<PAD>'])
# + [markdown] colab_type="text" id="-SdkBrFVnCuV"
# <a name='3.1'></a>
# ### 3.1 Training the model
#
# You will now write a function that takes in your model and trains it.
#
# As you've seen in the previous assignments, you will first create the [TrainTask](https://trax-ml.readthedocs.io/en/stable/trax.supervised.html#trax.supervised.training.TrainTask) and [EvalTask](https://trax-ml.readthedocs.io/en/stable/trax.supervised.html#trax.supervised.training.EvalTask) using your data generator. Then you will use the `training.Loop` to train your model.
#
# <a name="ex03"></a>
# ### Exercise 03
#
# **Instructions:** Implement the `train_model` program below to train the neural network above. Here is a list of things you should do:
# - Create the trainer object by calling [`trax.supervised.training.Loop`](https://trax-ml.readthedocs.io/en/latest/trax.supervised.html#trax.supervised.training.Loop) and pass in the following:
#
# - model = [NER](#ex02)
# - [training task](https://trax-ml.readthedocs.io/en/latest/trax.supervised.html#trax.supervised.training.TrainTask) that uses the train data generator defined in the cell above
# - loss_layer = [tl.CrossEntropyLoss()](https://github.com/google/trax/blob/22765bb18608d376d8cd660f9865760e4ff489cd/trax/layers/metrics.py#L71)
# - optimizer = [trax.optimizers.Adam(0.01)](https://github.com/google/trax/blob/03cb32995e83fc1455b0c8d1c81a14e894d0b7e3/trax/optimizers/adam.py#L23)
# - [evaluation task](https://trax-ml.readthedocs.io/en/latest/trax.supervised.html#trax.supervised.training.EvalTask) that uses the validation data generator defined in the cell above
# - metrics for `EvalTask`: `tl.CrossEntropyLoss()` and `tl.Accuracy()`
# - in `EvalTask` set `n_eval_batches=10` for better evaluation accuracy
# - output_dir = output_dir
#
# You'll be using a [cross entropy loss](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.metrics.CrossEntropyLoss), with an [Adam optimizer](https://trax-ml.readthedocs.io/en/latest/trax.optimizers.html#trax.optimizers.adam.Adam). Please read the [trax](https://trax-ml.readthedocs.io/en/latest/trax.html) documentation to get a full understanding. The [trax GitHub](https://github.com/google/trax) also contains some useful information and a link to a colab notebook.
# + colab={} colab_type="code" id="WV27PerULr3a"
# UNQ_C3 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION: train_model
def train_model(NER, train_generator, eval_generator, train_steps=1, output_dir='model'):
'''
Input:
NER - the model you are building
train_generator - The data generator for training examples
eval_generator - The data generator for validation examples,
train_steps - number of training steps
output_dir - folder to save your model
Output:
training_loop - a trax supervised training Loop
'''
### START CODE HERE (Replace instances of 'None' with your code) ###
train_task = training.TrainTask(
train_generator, # A train data generator
loss_layer = tl.CrossEntropyLoss(), # A cross-entropy loss function
optimizer = trax.optimizers.Adam(0.01), # The adam optimizer
)
eval_task = training.EvalTask(
labeled_data = eval_generator, # A labeled data generator
metrics = [tl.CrossEntropyLoss(), tl.Accuracy()], # Evaluate with cross-entropy loss and accuracy
n_eval_batches = 10 # Number of batches to use on each evaluation
)
training_loop = training.Loop(
NER, # A model to train
train_task, # A train task
eval_task = eval_task, # The evaluation task
output_dir = output_dir) # The output directory
# Train with train_steps
training_loop.run(n_steps = train_steps)
### END CODE HERE ###
return training_loop
# + [markdown] colab_type="text" id="4tIc4nuonCue"
# On your local machine, you can run this training for 1000 train_steps and get your own model. This training takes about 5 to 10 minutes to run.
# + colab={"base_uri": "https://localhost:8080/", "height": 578} colab_type="code" id="VU-j8hs-nCue" outputId="fbbbda7d-b6dd-42e4-a4c6-58c0a6e349b6"
train_steps = 100 # In coursera we can only train 100 steps
# !rm -f 'model/model.pkl.gz' # Remove old model.pkl if it exists
# Train the model
training_loop = train_model(NER(), train_generator, eval_generator, train_steps)
# + [markdown] colab_type="text" id="p1QvV66ZLr3i"
# **Expected output (Approximately)**
#
# ```
# ...
# Step 1: train CrossEntropyLoss | 2.94375849
# Step 1: eval CrossEntropyLoss | 1.93172036
# Step 1: eval Accuracy | 0.78727312
# Step 100: train CrossEntropyLoss | 0.57727730
# Step 100: eval CrossEntropyLoss | 0.36356260
# Step 100: eval Accuracy | 0.90943187
# ...
# ```
# This value may change between executions, but it should be around 90% accuracy on the train and validation sets after 100 training steps.
# + [markdown] colab_type="text" id="lQTurbC0nCuh"
# We have trained the model longer, and we give you such a trained model. In that way, we ensure you can continue with the rest of the assignment even if you had some troubles up to here, and also we are sure that everybody will get the same outputs for the last example. However, you are free to try your model, as well.
# + colab={} colab_type="code" id="ecIG67nenCui"
# loading in a pretrained model..
model = NER()
model.init(trax.shapes.ShapeDtype((1, 1), dtype=np.int32))
# Load the pretrained model
model.init_from_file('model.pkl.gz', weights_only=True)
# + [markdown] colab_type="text" id="c4r-gXOZLr3j"
# <a name="4"></a>
# # Part 4: Compute Accuracy
#
# You will now evaluate on the test set. Previously, you have seen the accuracy on the training set and the validation (noted as eval) set. To get a good evaluation, you will need to create a mask to avoid counting the padding tokens when computing the accuracy.
#
# <a name="ex04"></a>
# ### Exercise 04
#
# **Instructions:** Write a program that takes in your model and uses it to evaluate on the test set. You should be able to get an accuracy of 95%.
#
# + [markdown] colab_type="text" id="AmIvd_GXnCuk"
#
# <details>
# <summary>
# <font size="3" color="darkgreen"><b>More Detailed Instructions </b></font>
# </summary>
#
# * *Step 1*: model(sentences) will give you the predicted output.
#
# * *Step 2*: Prediction will produce an output with an added dimension. For each sentence, for each word, there will be a vector of probabilities for each tag type. For each sentence and word, you need to pick the tag with the maximum value. This will require `np.argmax` and careful use of the `axis` argument.
# * *Step 3*: Create a mask to prevent counting pad characters. It has the same dimension as output. An example below on matrix comparison provides a hint.
# * *Step 4*: Compute the accuracy metric by comparing your outputs against your test labels. Take the sum of that and divide by the total number of **unpadded** tokens. Use your mask value to mask the padded tokens. Return the accuracy.
# </details>
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="qaSy_NywnCul" outputId="7ceb7d3f-6948-48e0-afd9-003bfe0d0a70"
# Example of a comparison on a matrix
a = np.array([1, 2, 3, 4])
a == 2
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="3H0Kx1rnnCun" outputId="cf3789c6-374e-402b-e833-8a2e5c5d8d33"
# create the evaluation inputs
x, y = next(data_generator(len(test_sentences), test_sentences, test_labels, vocab['<PAD>']))
print("input shapes", x.shape, y.shape)
# + colab={"base_uri": "https://localhost:8080/", "height": 51} colab_type="code" id="rh16zSTonCuq" outputId="26aaa4e9-b34b-4c0a-99e4-35c0af52470b"
# sample prediction
tmp_pred = model(x)
print(type(tmp_pred))
print(f"tmp_pred has shape: {tmp_pred.shape}")
# + [markdown] colab_type="text" id="78l5MTSBnCut"
# Note that the model's prediction has 3 axes:
# - the number of examples
# - the number of words in each example (padded to be as long as the longest sentence in the batch)
# - the number of possible targets (the 17 named entity tags).
# + colab={} colab_type="code" id="8ek59ro9nCut"
# UNQ_C4 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION: evaluate_prediction
def evaluate_prediction(pred, labels, pad):
"""
Inputs:
pred: prediction array with shape
(num examples, max sentence length in batch, num of classes)
labels: array of size (batch_size, seq_len)
pad: integer representing pad character
Outputs:
accuracy: float
"""
### START CODE HERE (Replace instances of 'None' with your code) ###
## step 1 ##
outputs = np.argmax(pred, axis=2)
print("outputs shape:", outputs.shape)
## step 2 ##
mask = labels != pad
print("mask shape:", mask.shape, "mask[0][20:30]:", mask[0][20:30])
    ## step 3 ##
    # count correct predictions only on unpadded tokens, then divide by the
    # number of unpadded tokens
    accuracy = np.sum((outputs == labels) * mask) / float(np.sum(mask))
### END CODE HERE ###
return accuracy
# + colab={"base_uri": "https://localhost:8080/", "height": 68} colab_type="code" id="yCWFwt3m1sgL" outputId="701d6b4d-b9b6-41f7-80b3-d0c043880704"
accuracy = evaluate_prediction(model(x), y, vocab['<PAD>'])
print("accuracy: ", accuracy)
# + [markdown] colab_type="text" id="NcTJupqo5RcP"
# **Expected output (Approximately)**
# ```
# outputs shape: (7194, 70)
# mask shape: (7194, 70) mask[0][20:30]: [ True True True False False False False False False False]
# accuracy: 0.9543761281155191
# ```
#
# + [markdown] colab_type="text" id="b2FEleAFLr3r"
# <a name="5"></a>
# # Part 5: Testing with your own sentence
#
# + [markdown] colab_type="text" id="EOeTPAx_Lr3t"
# Below, you can test it out with your own sentence!
# + colab={} colab_type="code" id="0K4SyB20cHRf"
# This is the function you will be using to test your own sentence.
def predict(sentence, model, vocab, tag_map):
s = [vocab[token] if token in vocab else vocab['UNK'] for token in sentence.split(' ')]
batch_data = np.ones((1, len(s)))
batch_data[0][:] = s
sentence = np.array(batch_data).astype(int)
output = model(sentence)
outputs = np.argmax(output, axis=2)
labels = list(tag_map.keys())
pred = []
for i in range(len(outputs[0])):
idx = outputs[0][i]
pred_label = labels[idx]
pred.append(pred_label)
return pred
# + colab={"base_uri": "https://localhost:8080/", "height": 170} colab_type="code" id="vLZCHoiULr3u" outputId="fab815fd-0472-4eaf-968a-abff1f5cfff5"
# Try the output for the introduction example
#sentence = "Many French citizens are goin to visit Morocco for summer"
#sentence = "<NAME> flew to Miami last Friday"
# New york times news:
sentence = "<NAME>, the White House director of trade and manufacturing policy of U.S, said in an interview on Sunday morning that the White House was working to prepare for the possibility of a second wave of the coronavirus in the fall, though he said it wouldn’t necessarily come"
s = [vocab[token] if token in vocab else vocab['UNK'] for token in sentence.split(' ')]
predictions = predict(sentence, model, vocab, tag_map)
for x,y in zip(sentence.split(' '), predictions):
if y != 'O':
print(x,y)
# + [markdown] colab_type="text" id="NHYbSnYKnCu6"
# ** Expected Results **
#
# ```
# <NAME>
# <NAME>
# White B-org
# House I-org
# Sunday B-tim
# morning I-tim
# White B-org
# House I-org
# coronavirus B-tim
# fall, B-tim
# ```
| NLP/Learn_by_deeplearning.ai/Course 3 - Sequence Models/Labs/Week 3/C3_W3_Assignment.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Basics Notebook
# This notebook will demonstrate how to use the fundamental features of geospacepy-lite
import datetime
import numpy as np
import matplotlib.pyplot as plt
# # Time conversion
# The special_datetime module can convert between times from Python datetime to other time representations.
#
# It's useful in its own right, but also, geospacepy-lite functions which require times use Julian dates (a Julian date example follows the day-of-year conversions below).
#
# There are two types of time conversion functions:
# * Scalar functions - converts one datetime to or from another time type
# * Vector functions - converts many datetimes (as a list or array) to or from another time type
# +
from geospacepy.special_datetime import datetime2doy,datetimearr2doy
print('Scalar conversion')
dt = datetime.datetime(2020,1,1)
doy = datetime2doy(dt)
print(dt,'->',doy)
print('Vector conversion (from list to numpy array)')
dts = [dt+datetime.timedelta(hours=h) for h in range(3)]
doys = datetimearr2doy(dts)
print(dts,'->',doys)
# -
# # Solar Position
# Many types of analysis of geospatial data require knowing where the sun is relative to a position on the ground or in space.
# +
from geospacepy.special_datetime import datetimearr2jd
from geospacepy.sun import solar_position_almanac,local_mean_solar_time,solar_zenith_angle
dts = [datetime.datetime(2020,1,1)+datetime.timedelta(hours=h) for h in range(24)]
jds = datetimearr2jd(dts)
#Position of the sun in inertial frame (does not depend on earth's rotation)
ra_rad,dec_rad = solar_position_almanac(jds)
#The local solar time and solar zenith angle are representations of solar position that are relative to a particular location
glat,glon = 40.01,-105.27 #Boulder, Colorado (geospacepy-lite uses west-longitudes-are-negative convention)
lsts = local_mean_solar_time(jds,glon)
szas_rad = solar_zenith_angle(jds,glat,glon)
#Geospacepy-lite functions return angles in radians, except when the angle is a latitude or longitude [degrees]
#or a time that can also be understood as an angle, e.g. local solar time [hours]
f,axs = plt.subplots(4,1)
axs[0].plot(dts,np.degrees(ra_rad),'b.',label='Solar Right Ascension [deg]')
axs[1].plot(dts,np.degrees(dec_rad),'r.',label='Solar Declination [deg]')
axs[2].plot(dts,lsts/np.pi*12,'g.',label='Local Solar Time [hr]')
axs[3].plot(dts,np.degrees(szas_rad),'k.',label='Solar Zenith Angle [deg]')
for ax in axs:
ax.legend()
f.autofmt_xdate()
plt.show()
# -
| Basics.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import matplotlib.pyplot as plt
df = pd.DataFrame()
df['age'] = [14, 12, 11, 10, 8, 6, 8]
sum(df['age']) / len(df['age'])
import numpy as np
np.mean(df['age'])
np.median(df['age'])
import statistics
statistics.mode(df['age'])
df['age'].var()
np.var(df.age)
np.std(df['age'], ddof=1)
np.std(df['age'] ,ddof=1) / np.sqrt(len(df['age']))
# I would choose the median age as my measure of central tendency due to the high variance in the children's ages
df.describe()
df['age'] = [14, 12, 11, 10, 8, 7, 8]
sum(df['age']) / len(df['age'])
np.mean(df['age'])
np.median(df['age'])
statistics.mode(df['age'])
np.var(df.age)
np.std(df['age'], ddof=1)
np.std(df['age'] ,ddof=1) / np.sqrt(len(df['age']))
df.describe()
df['age'] = [14, 12, 11, 10, 8, 7, 1]
np.mean(df['age'])
np.median(df['age'])
statistics.mode(df['age'])
(values, counts) = np.unique(df['age'], return_counts=True)
ind = np.argmax(counts)
values[ind]
np.var(df.age)
np.std(df['age'], ddof=1)
np.std(df['age'] ,ddof=1) / np.sqrt(len(df['age']))
df.describe()
bb = pd.DataFrame()
bb['Percent'] = [20, 23, 17]
bb.describe()
df['age'] = [14, 12, 11, 11, 8, 7, 7]
statistics.mode(df['age'])  # ages 7 and 11 both appear twice; statistics.mode returns a single value (and raises StatisticsError on Python < 3.8 for multimodal data)
from collections import Counter
c = Counter(df['age'])
c.most_common(3)
| Thinkful Program/Basic Stats and Probability/5.5 exercises/5.5 Exercises.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # What is `torch.nn` *really*?
#
# ## A quick journey: from neural net "from scratch", to fully utilizing `torch.nn`, `torch.optim`, `Dataset`, and `DataLoader`
# *by <NAME>, fast.ai. Thanks to <NAME> and <NAME>.*
#
# PyTorch provides the elegantly designed modules and classes `torch.nn`, `torch.optim`, `Dataset`, and `DataLoader` to help you create and
# train neural networks. In order to fully utilize their power and customize them for your problem, you need to really understand exactly what they're doing. To develop this understanding, we will first train a basic neural net on the MNIST data set without using any features from these modules; we will initially only use the most basic PyTorch tensor functionality. Then, we will incrementally add one feature from `torch.nn`, `torch.optim`, `Dataset`, or `DataLoader` at a time, showing exactly what each piece does, and how it works to make the code either more concise, or more flexible.
#
# **This tutorial assumes you already have PyTorch installed, and are familiar with the basics of tensor operations.** (If you're familiar with Numpy array operations, you'll find the PyTorch tensor operations used here nearly identical).
# ## MNIST data setup
# We will use the classic [MNIST](http://deeplearning.net/data/mnist/) dataset, which consists of black-and-white images of hand-drawn digits (between 0 and 9).
#
# We will use [pathlib](https://docs.python.org/3/library/pathlib.html) for dealing with paths (part of the Python 3 standard library), and will download the dataset using [requests](http://docs.python-requests.org/en/master/). We will only import modules when we use them, so you can see exactly what's being used at each point.
# +
from pathlib import Path
import requests
DATA_PATH = Path('data')
PATH = DATA_PATH/'mnist'
PATH.mkdir(parents=True, exist_ok=True)
URL='http://deeplearning.net/data/mnist/'
FILENAME='mnist.pkl.gz'
if not (PATH/FILENAME).exists():
content = requests.get(URL+FILENAME).content
(PATH/FILENAME).open('wb').write(content)
# -
# This dataset is in numpy array format, and has been stored using pickle, a python-specific format for serializing data.
# +
import pickle, gzip
with gzip.open(PATH/FILENAME, 'rb') as f:
((x_train, y_train), (x_valid, y_valid), _) = pickle.load(f, encoding='latin-1')
# -
# Each image is 28 x 28, and is being stored as a flattened row of length 784 (=28x28). Let's take a look at one; we need to reshape it to 2d first.
# +
# %matplotlib inline
from matplotlib import pyplot
import numpy as np
pyplot.imshow(x_train[0].reshape((28,28)), cmap="gray")
x_train.shape
# -
# PyTorch uses `torch.tensor`, rather than numpy arrays, so we need to convert our data.
# +
import torch
x_train,y_train,x_valid,y_valid = map(torch.tensor, (x_train,y_train,x_valid,y_valid))
n,c = x_train.shape
x_train, x_train.shape, y_train.min(), y_train.max()
# -
# ## Neural net from scratch (no `torch.nn`)
# Let's first create a model using nothing but PyTorch tensor operations. We're assuming you're already familiar with the basics of neural networks. (If you're not, you can learn them at [course.fast.ai](http://course.fast.ai).)
#
# PyTorch provides methods to create random or zero-filled tensors, which we will use to create our weights and bias for a simple linear model. These are just regular tensors, with one very special addition: we tell PyTorch that they require a gradient. This causes PyTorch to record all of the operations done on the tensor, so that it can calculate the gradient during back-propagation *automatically*!
#
# For the weights, we set `requires_grad` **after** the initialization, since we don't want that step included in the gradient. (Note that a trailing `_` in PyTorch signifies that the operation is performed in-place.)
#
# *NB: We are initializing the weights here with [Xavier initialisation](http://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf) (by multiplying with 1/sqrt(n)).*
# +
import math
weights = torch.randn(784,10)/math.sqrt(784)
weights.requires_grad_()
bias = torch.zeros(10, requires_grad=True)
# -
# Thanks to PyTorch's ability to calculate gradients automatically, we can use any standard Python function (or callable object) as a model! So let's just write a plain matrix multiplication and broadcasted addition to create a simple linear model. We also need an activation function, so we'll write `log_softmax` and use it. Remember: although PyTorch provides lots of pre-written loss functions, activation functions, and so forth, you can easily write your own using plain python. PyTorch will even create fast GPU or vectorized CPU code for your function automatically.
# +
def log_softmax(x): return x - x.exp().sum(-1).log().unsqueeze(-1)
def model(xb): return log_softmax(xb @ weights + bias)
# -
# In the above, the '@' stands for the matrix multiplication operation. We will call our function on one batch of data (in this case, 64 images). This is one *forward pass*. Note that our predictions won't be any better than random at this stage, since we start with random weights.
# +
bs=64 # batch size
xb = x_train[0:bs] # a mini-batch from x
preds = model(xb) # predictions
preds[0], preds.shape
# -
# As you see, the `preds` tensor contains not only the tensor values, but also a gradient function. We'll use this later to do backprop.
#
# Let's implement negative log-likelihood to use as the loss function (again, we can just use standard Python):
def nll(input, target): return -input[range(target.shape[0]), target].mean()
loss_func = nll
# Let's check our loss with our random model, so we can see if we improve after a backprop pass later.
yb = y_train[0:bs]
loss_func(preds, yb)
# Let's also implement a function to calculate the accuracy of our model. For each prediction, if the index with the largest value matches the target value, then the prediction was correct.
def accuracy(out, yb):
preds = torch.argmax(out, dim=1)
return (preds==yb).float().mean()
# Let's check the accuracy of our random model, so we can see if our accuracy improves as our loss improves.
accuracy(preds, yb)
# We can now run a training loop. For each iteration, we will:
#
# - select a mini-batch of data (of size `bs`)
# - use the model to make predictions
# - calculate the loss
# - `loss.backward()` updates the gradients of the model, in this case, `weights` and `bias`.
# - We now use these gradients to update the weights and bias. We do this within the `torch.no_grad()` context manager, because we do not want these actions to be recorded for our next calculation of the gradient. You can read more about how PyTorch's Autograd records operations [here](https://pytorch.org/docs/stable/notes/autograd.html).
# - We then set the gradients to zero, so that we are ready for the next loop. Otherwise, our gradients would record a running tally of all the operations that had happened (i.e. `loss.backward()` *adds* the gradients to whatever is already stored, rather than replacing them).
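# This last point is easy to verify in isolation. Here is a tiny self-contained sketch (not part of the original tutorial, and independent of the MNIST variables above) showing that `backward()` accumulates gradients until we zero them:
#
# ```python
# import torch
#
# w = torch.ones(3, requires_grad=True)
# (2 * w).sum().backward()
# print(w.grad)    # tensor([2., 2., 2.])
# (2 * w).sum().backward()
# print(w.grad)    # tensor([4., 4., 4.]) -- gradients were added, not replaced
# w.grad.zero_()   # reset before the next step, just as the loop below does
# ```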
# *Handy tip: you can use the standard python debugger to step through PyTorch code, allowing you to check the various variable values at each step. Uncomment `set_trace()` below to try it out.*
# +
from IPython.core.debugger import set_trace
lr = 0.5 # learning rate
epochs = 2 # how many epochs to train for
# -
for epoch in range(epochs):
for i in range((n-1)//bs + 1):
# set_trace()
start_i = i*bs
end_i = start_i+bs
xb = x_train[start_i:end_i]
yb = y_train[start_i:end_i]
pred = model(xb)
loss = loss_func(pred, yb)
loss.backward()
with torch.no_grad():
weights -= weights.grad * lr
bias -= bias.grad * lr
weights.grad.zero_()
bias.grad.zero_()
# That's it: we've created and trained a minimal neural network (in this case, a logistic regression, since we have no hidden layers) entirely from scratch!
#
# Let's check the loss and accuracy and compare those to what we got earlier. We expect that the loss will have decreased and the accuracy will have increased, and they have.
loss_func(model(xb), yb), accuracy(model(xb), yb)
# ## Using `torch.nn.functional`
# We will now refactor our code, so that it does the same thing as before, only we'll start taking advantage of PyTorch's `nn` classes to make it more concise and flexible. At each step from here, we should be making our code one or more of: shorter, more understandable, and/or more flexible.
#
# The first and easiest step is to make our code shorter by replacing our hand-written activation and loss functions with those from `torch.nn.functional` (which is generally imported into the namespace `F` by convention). This module contains all the functions in the `torch.nn` library (whereas other parts of the library contain classes). As well as a wide range of loss and activation functions, you'll also find here some convenient functions for creating neural nets, such as pooling functions. (There are also functions for doing convolutions, linear layers, etc, but as we'll see, these are usually better handled using other parts of the library.)
#
# If you're using negative log likelihood loss and log softmax activation, then Pytorch provides a single function `F.cross_entropy` that combines the two. So we can even remove the activation function from our model.
# +
import torch.nn.functional as F
loss_func = F.cross_entropy
def model(xb): return xb @ weights + bias
# -
# Note that we no longer call `log_softmax` in the `model` function. Let's confirm that our loss and accuracy are the same as before:
loss_func(model(xb), yb), accuracy(model(xb), yb)
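# As a quick sanity check (not part of the original tutorial) that `F.cross_entropy` really does fuse `log_softmax` with negative log-likelihood, here is a small self-contained sketch:
#
# ```python
# import torch
# import torch.nn.functional as F
#
# logits = torch.randn(4, 10)
# targets = torch.tensor([1, 0, 3, 9])
# manual = -logits.log_softmax(dim=-1)[torch.arange(4), targets].mean()
# print(torch.allclose(manual, F.cross_entropy(logits, targets)))  # True
# ```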
# ## Refactor using nn.Module
# Next up, we'll use `nn.Module` and `nn.Parameter`, for a clearer and more concise training loop. We subclass `nn.Module` (which itself is a class and able to keep track of state). In this case, we want to create a class that holds our weights, bias, and method for the forward step. `nn.Module` has a number of attributes and methods (such as `.parameters()` and `.zero_grad()`) which we will be using.
#
# **NB**: `nn.Module` (uppercase M) is a PyTorch specific concept, and is a class we'll be using a lot. `nn.Module` is not to be confused with the Python concept of a (lowercase m) [module](https://docs.python.org/3/tutorial/modules.html), which is a file of Python code that can be imported.
# +
from torch import nn
class Mnist_Logistic(nn.Module):
def __init__(self):
super().__init__()
self.weights = nn.Parameter(torch.randn(784,10)/math.sqrt(784))
self.bias = nn.Parameter(torch.zeros(10))
def forward(self, xb): return xb @ self.weights + self.bias
# -
# Since we're now using an object instead of just using a function, we first have to instantiate our model:
model = Mnist_Logistic()
# Now we can calculate the loss in the same way as before. Note that `nn.Module` objects are used as if they are functions (i.e they are *callable*), but behind the scenes Pytorch will call our `forward` method automatically.
loss_func(model(xb), yb)
# Previously for our training loop we had to update the values for each parameter by name, and manually zero out the grads for each parameter separately, like this:
#
# ```python
# with torch.no_grad():
# weights -= weights.grad * lr
# bias -= bias.grad * lr
# weights.grad.zero_()
# bias.grad.zero_()
# ```
#
# Now we can take advantage of model.parameters() and model.zero_grad() (which are both defined by PyTorch for `nn.Module`) to make those steps more concise and less prone to the error of forgetting some of our parameters, particularly if we had a more complicated model:
#
# ```python
# with torch.no_grad():
# for p in model.parameters(): p -= p.grad * lr
# model.zero_grad()
# ```
#
# We'll wrap our little training loop in a `fit` function so we can run it again later.
def fit():
for epoch in range(epochs):
for i in range((n-1)//bs + 1):
start_i = i*bs
end_i = start_i+bs
xb = x_train[start_i:end_i]
yb = y_train[start_i:end_i]
pred = model(xb)
loss = loss_func(pred, yb)
loss.backward()
with torch.no_grad():
for p in model.parameters(): p -= p.grad * lr
model.zero_grad()
fit()
# Let's double-check that our loss has gone down:
loss_func(model(xb), yb)
# ## Refactor using nn.Linear
# We continue to refactor our code. Instead of manually defining and initializing `self.weights` and `self.bias`, and calculating `xb @ self.weights + self.bias`, we will instead use the Pytorch class [nn.Linear](https://pytorch.org/docs/stable/nn.html#linear-layers) for a linear layer, which does all that for us. Pytorch has many types of predefined layers that can greatly simplify our code, and often makes it faster too.
class Mnist_Logistic(nn.Module):
def __init__(self):
super().__init__()
self.lin = nn.Linear(784,10)
def forward(self, xb): return self.lin(xb)
# We instantiate our model and calculate the loss in the same way as before:
model = Mnist_Logistic()
loss_func(model(xb), yb)
# We are still able to use our same `fit` method as before.
fit()
loss_func(model(xb), yb)
# ## Refactor using optim
# Pytorch also has a package with various optimization algorithms, `torch.optim`. We can use the `step` method from our optimizer to take a forward step, instead of manually updating each parameter.
#
# This will let us replace our previous manually coded optimization step:
#
# ```python
# with torch.no_grad():
# for p in model.parameters(): p -= p.grad * lr
# model.zero_grad()
# ```
#
# and instead use just:
#
# ```python
# opt.step()
# opt.zero_grad()
# ```
#
# (`optim.zero_grad()` resets the gradient to 0 and we need to call it before computing the gradient for the next minibatch.)
from torch import optim
# We'll define a little function to create our model and optimizer so we can reuse it in the future.
def get_model():
model = Mnist_Logistic()
return model, optim.SGD(model.parameters(), lr=lr)
model,opt = get_model()
loss_func(model(xb), yb)
for epoch in range(epochs):
for i in range((n-1)//bs + 1):
start_i = i*bs
end_i = start_i+bs
xb = x_train[start_i:end_i]
yb = y_train[start_i:end_i]
pred = model(xb)
loss = loss_func(pred, yb)
loss.backward()
opt.step()
opt.zero_grad()
loss_func(model(xb), yb)
# ## Refactor using Dataset
# PyTorch has an abstract Dataset class. A Dataset can be anything that has a `__len__` function (called by Python's standard `len` function) and a `__getitem__` function as a way of indexing into it. [This tutorial](https://pytorch.org/tutorials/beginner/data_loading_tutorial.html) walks through a nice example of creating a custom FacialLandmarkDataset class as a subclass of Dataset.
#
# PyTorch's [TensorDataset](https://pytorch.org/docs/stable/_modules/torch/utils/data/dataset.html#TensorDataset) is a Dataset wrapping tensors. By defining a length and way of indexing, this also gives us a way to iterate, index, and slice along the first dimension of a tensor. This will make it easier to access both the independent and dependent variables in the same line as we train.
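# To make the `Dataset` contract concrete, here is a minimal hand-written sketch (the class name is made up for illustration; `TensorDataset`, imported below, does essentially this, plus support for multiple tensors and some consistency checks):
#
# ```python
# import torch
# from torch.utils.data import Dataset
#
# class PairDataset(Dataset):            # hypothetical name, for illustration only
#     def __init__(self, x, y):
#         self.x, self.y = x, y
#     def __len__(self):
#         return len(self.x)
#     def __getitem__(self, i):
#         return self.x[i], self.y[i]
#
# ds = PairDataset(torch.randn(8, 784), torch.randint(0, 10, (8,)))
# xb, yb = ds[0]
# print(len(ds), xb.shape, yb.shape)
# ```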
from torch.utils.data import TensorDataset
# Both `x_train` and `y_train` can be combined in a single TensorDataset, which will be easier to iterate over and slice.
train_ds = TensorDataset(x_train, y_train)
# Previously, we had to iterate through minibatches of x and y values separately:
#
# ```python
# xb = x_train[start_i:end_i]
# yb = y_train[start_i:end_i]
# ```
#
# Now, we can do these two steps together:
#
# ```python
# xb,yb = train_ds[i*bs : i*bs+bs]
# ```
model,opt = get_model()
for epoch in range(epochs):
for i in range((n-1)//bs + 1):
xb,yb = train_ds[i*bs : i*bs+bs]
pred = model(xb)
loss = loss_func(pred, yb)
loss.backward()
opt.step()
opt.zero_grad()
loss_func(model(xb), yb)
# ## Refactor using DataLoader
# Pytorch's `DataLoader` is responsible for managing batches. You can create a `DataLoader` from any `Dataset`. `DataLoader` makes it easier to iterate over batches. Rather than having to use `train_ds[i*bs : i*bs+bs]`, the DataLoader gives us each minibatch automatically.
from torch.utils.data import DataLoader
train_ds = TensorDataset(x_train, y_train)
train_dl = DataLoader(train_ds, batch_size=bs)
# Previously, our loop iterated over batches (xb, yb) like this:
#
# ```python
# for i in range((n-1)//bs + 1):
# xb,yb = train_ds[i*bs : i*bs+bs]
# pred = model(xb)
# ```
#
# Now, our loop is much cleaner, as (xb, yb) are loaded automatically from the data loader:
#
# ```python
# for xb,yb in train_dl:
# pred = model(xb)
# ```
model,opt = get_model()
for epoch in range(epochs):
for xb,yb in train_dl:
pred = model(xb)
loss = loss_func(pred, yb)
loss.backward()
opt.step()
opt.zero_grad()
loss_func(model(xb), yb)
# Thanks to Pytorch's `nn.Module`, `nn.Parameter`, `Dataset`, and `DataLoader`, our training loop is now dramatically smaller and easier to understand. Let's now try to add the basic features necessary to create effective models in practice.
# # Add validation
# ## First try
# In section 1, we were just trying to get a reasonable training loop set up for use on our training data. In reality, you **always** should also have a [validation set](http://www.fast.ai/2017/11/13/validation-sets/), in order to identify if you are overfitting.
#
# Shuffling the training data is [important](https://www.quora.com/Does-the-order-of-training-data-matter-when-training-neural-networks) to prevent correlation between batches and overfitting. On the other hand, the validation loss will be identical whether we shuffle the validation set or not. Since shuffling takes extra time, it makes no sense to shuffle the validation data.
#
# We'll use a batch size for the validation set that is twice as large as that for the training set. This is because the validation set does not need backpropagation and thus takes less memory (it doesn't need to store the gradients). We take advantage of this to use a larger batch size and compute the loss more quickly.
# +
train_ds = TensorDataset(x_train, y_train)
train_dl = DataLoader(train_ds, batch_size=bs, shuffle=True)
valid_ds = TensorDataset(x_valid, y_valid)
valid_dl = DataLoader(valid_ds, batch_size=bs*2)
# -
# We will calculate and print the validation loss at the end of each epoch.
#
# (Note that we always call `model.train()` before training, and `model.eval()` before inference, because these are used by layers such as `nn.BatchNorm2d` and `nn.Dropout` to ensure appropriate behaviour for these different phases.)
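# As a small self-contained illustration (not part of the original tutorial) of why the mode matters, `nn.Dropout` behaves differently under `train()` and `eval()`:
#
# ```python
# import torch
# from torch import nn
#
# drop = nn.Dropout(p=0.5)
# x = torch.ones(1, 8)
# drop.train()
# print(drop(x))  # roughly half the entries zeroed, survivors scaled by 1/(1-p) = 2
# drop.eval()
# print(drop(x))  # identity: dropout is disabled at evaluation time
# ```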
model,opt = get_model()
for epoch in range(epochs):
model.train()
for xb,yb in train_dl:
pred = model(xb)
loss = loss_func(pred, yb)
loss.backward()
opt.step()
opt.zero_grad()
model.eval()
with torch.no_grad():
valid_loss = sum(loss_func(model(xb), yb)
for xb,yb in valid_dl)
print(epoch, valid_loss/len(valid_dl))
# ## Create fit() and get_data()
# We'll now do a little refactoring of our own. Since we go through a similar process twice of calculating the loss for both the training set and the validation set, let's make that into its own function, "`loss_batch`", which computes the loss for one batch.
#
# We pass an optimizer in for the training set, and use it to perform backprop. For the validation set, we don't pass an optimizer, so the method doesn't perform backprop.
def loss_batch(model, loss_func, xb, yb, opt=None):
loss = loss_func(model(xb), yb)
if opt is not None:
loss.backward()
opt.step()
opt.zero_grad()
return loss.item(), len(xb)
# `fit` runs the necessary operations to train our model and compute the training and validation losses for each epoch.
# +
import numpy as np
def fit(epochs, model, loss_func, opt, train_dl, valid_dl):
for epoch in range(epochs):
model.train()
for xb,yb in train_dl: loss_batch(model, loss_func, xb, yb, opt)
model.eval()
with torch.no_grad():
losses,nums = zip(*[loss_batch(model, loss_func, xb, yb)
for xb,yb in valid_dl])
val_loss = np.sum(np.multiply(losses,nums)) / np.sum(nums)
print(epoch, val_loss)
# -
# `get_data` returns dataloaders for the training and validation sets.
def get_data(train_ds, valid_ds, bs):
return (DataLoader(train_ds, batch_size=bs, shuffle=True),
DataLoader(valid_ds, batch_size=bs*2))
# Now, our whole process of obtaining the data loaders and fitting the model can be run in 3 lines of code:
train_dl,valid_dl = get_data(train_ds, valid_ds, bs)
model,opt = get_model()
fit(epochs, model, loss_func, opt, train_dl, valid_dl)
# You can use these basic 3 lines of code to train a wide variety of models. Let's see if we can use them to train a convolutional neural network (CNN)!
# # Switch to CNN
# ## First try
# We are now going to build our neural network with three convolutional layers. Because none of the functions in the previous section assume anything about the model form, we'll be able to use them to train a CNN without any modification.
#
# We will use Pytorch's predefined [Conv2d](https://pytorch.org/docs/stable/nn.html#torch.nn.Conv2d) class as our convolutional layer. We define a CNN with 3 convolutional layers. Each convolution is followed by a ReLU. At the end, we perform an average pooling. (Note that `view` is PyTorch's version of numpy's `reshape`)
class Mnist_CNN(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1)
self.conv2 = nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1)
self.conv3 = nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1)
def forward(self, xb):
xb = xb.view(-1,1,28,28)
xb = F.relu(self.conv1(xb))
xb = F.relu(self.conv2(xb))
xb = F.relu(self.conv3(xb))
xb = F.avg_pool2d(xb, 4)
return xb.view(-1,xb.size(1))
lr=0.1
# [Momentum](http://cs231n.github.io/neural-networks-3/#sgd) is a variation on stochastic gradient descent that takes previous updates into account as well and generally leads to faster training.
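# Conceptually, momentum keeps a running "velocity" of past gradients and steps the parameters with that velocity instead of the raw gradient. A minimal hand-written sketch of the update rule (not part of the original tutorial; the `optim.SGD(..., momentum=0.9)` call below applies this for us, ignoring dampening and weight decay):
#
# ```python
# import torch
#
# p = torch.tensor(1.0)           # a single toy parameter
# v, mu, lr = 0.0, 0.9, 0.1       # velocity, momentum coefficient, learning rate
# for g in [0.5, 0.5, 0.5]:       # pretend we observed the same gradient three times
#     v = mu * v + g              # accumulate velocity
#     p = p - lr * v              # step with the velocity, not the raw gradient
#     print(float(p))
# ```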
model = Mnist_CNN()
opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)
fit(epochs, model, loss_func, opt, train_dl, valid_dl)
# ## nn.Sequential
# `torch.nn` has another handy class we can use to simplify our code: [`Sequential`](https://pytorch.org/docs/stable/nn.html#torch.nn.Sequential). A `Sequential` object runs each of the modules contained within it, in a sequential manner. This is a simpler way of writing our neural network.
#
# To take advantage of this, we need to be able to easily define a **custom layer** from a given function. For instance, PyTorch doesn't have a `view` layer, and we need to create one for our network. `Lambda` will create a layer that we can then use when defining a network with `Sequential`.
class Lambda(nn.Module):
def __init__(self, func):
super().__init__()
self.func=func
def forward(self, x): return self.func(x)
def preprocess(x): return x.view(-1,1,28,28)
# The model created with `Sequential` is simply:
model = nn.Sequential(
Lambda(preprocess),
nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.AvgPool2d(4),
Lambda(lambda x: x.view(x.size(0),-1))
)
opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)
fit(epochs, model, loss_func, opt, train_dl, valid_dl)
# ## Wrapping `DataLoader`
# Our CNN is fairly concise, but it only works with MNIST, because:
# - It assumes the input is a 28\*28 long vector
# - It assumes that the final CNN grid size is 4\*4 (since that's the average pooling kernel size we used)
#
# Let's get rid of these two assumptions, so our model works with any 2d single channel image. First, we can remove the initial Lambda layer by moving the data preprocessing into a generator:
def preprocess(x,y): return x.view(-1,1,28,28),y
class WrappedDataLoader():
def __init__(self, dl, func):
self.dl = dl
self.func = func
def __len__(self): return len(self.dl)
def __iter__(self):
batches = iter(self.dl)
for b in batches: yield(self.func(*b))
train_dl,valid_dl = get_data(train_ds, valid_ds, bs)
train_dl = WrappedDataLoader(train_dl, preprocess)
valid_dl = WrappedDataLoader(valid_dl, preprocess)
# Next, we can replace `nn.AvgPool2d` with `nn.AdaptiveAvgPool2d`, which allows us to define the size of the *output* tensor we want, rather than the *input* tensor we have. As a result, our model will work with any size input.
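# As a small illustration (not part of the original tutorial), `nn.AdaptiveAvgPool2d(1)` produces a fixed-size output regardless of the spatial size of its input:
#
# ```python
# import torch
# from torch import nn
#
# pool = nn.AdaptiveAvgPool2d(1)                   # always pool down to a 1x1 spatial grid
# print(pool(torch.randn(2, 10, 7, 7)).shape)      # torch.Size([2, 10, 1, 1])
# print(pool(torch.randn(2, 10, 13, 13)).shape)    # torch.Size([2, 10, 1, 1])
# ```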
# +
model = nn.Sequential(
nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.AdaptiveAvgPool2d(1),
Lambda(lambda x: x.view(x.size(0),-1))
)
opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)
# -
# Let's try it out:
fit(epochs, model, loss_func, opt, train_dl, valid_dl)
# ## Using your GPU
# If you're lucky enough to have access to a CUDA-capable GPU (you can rent one for about $0.50/hour from most cloud providers) you can use it to speed up your code. First check that your GPU is working in Pytorch:
torch.cuda.is_available()
# And then create a device object for it:
dev = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
# Let's update `preprocess` to move batches to the GPU:
# +
def preprocess(x,y): return x.view(-1,1,28,28).to(dev),y.to(dev)
train_dl,valid_dl = get_data(train_ds, valid_ds, bs)
train_dl = WrappedDataLoader(train_dl, preprocess)
valid_dl = WrappedDataLoader(valid_dl, preprocess)
# -
# Finally, we can move our model to the GPU.
model.to(dev);
opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)
# You should find it runs faster now:
fit(epochs, model, loss_func, opt, train_dl, valid_dl)
# # Closing thoughts
# We now have a general data pipeline and training loop which you can use for training many types of models using Pytorch. To see how simple training a model can now be, take a look at the `mnist_sample` sample notebook.
#
# Of course, there are many things you'll want to add, such as data augmentation, hyperparameter tuning, monitoring training, transfer learning, and so forth. These features are available in the fastai library, which has been developed using the same design approach shown in this tutorial, providing a natural next step for practitioners looking to take their models further.
# We promised at the start of this tutorial we'd explain through example each of `torch.nn`, `torch.optim`, `Dataset`, and `DataLoader`. So let's summarize what we've seen:
#
# - `torch.nn`
# - `Module`: creates a callable which behaves like a function, but can also contain state (such as neural net layer weights). It knows what `Parameter`s it contains and can zero all their gradients, loop through them for weight updates, etc.
# - `Parameter`: a wrapper for a tensor that tells a `Module` that it has weights that need updating during backprop. Only tensors with the `requires_grad` attribute set are updated
# - `functional`: a module (usually imported into the `F` namespace by convention) which contains activation functions, loss functions, etc, as well as non-stateful versions of layers such as convolutional and linear layers.
# - `torch.optim`: Contains optimizers such as `SGD`, which update the weights of `Parameter`s during the backward step
# - `Dataset`: An abstract interface of objects with a `__len__` and a `__getitem__`, including classes provided with Pytorch such as `TensorDataset`
# - `DataLoader`: Takes any `Dataset` and creates an iterator which returns batches of data.
| dev_nb/001a_nn_basics.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Hands-on example (Azure Resources)
# !python --version
# +
import os
import warnings
import sys
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
#Sklearn
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.linear_model import ElasticNet
#Azure
import azureml.core
from azureml.core import Workspace
from azure.storage.blob import BlobServiceClient
#MLFlow
import mlflow
import mlflow.sklearn
from mlflow.entities import ViewType
# -
# ## 0. The data
# * The data set used in this example is from http://archive.ics.uci.edu/ml/datasets/Wine+Quality
# * <NAME>, <NAME>, <NAME>, <NAME> and <NAME>.
# * Modeling wine preferences by data mining from physicochemical properties. In Decision Support Systems, Elsevier, 47(4):547-553, 2009.
# +
data_path = "/Users/antfra/Desktop/MLFlow In Azure/data/wine-quality.csv" #put your path here
data = pd.read_csv(data_path)
data.sample(10)
# -
# ## 1. Tracking experiments
# ### Tracking stores
# MLflow supports two types of backend stores: *file store* and *database-backed* store.
#
# - Local file path (specified as `file:/my/local/dir`), where data is stored directly on the local filesystem. Defaults to `mlruns/`
# - Database, encoded as `<dialect>+<driver>://<username>:<password>@<host>:<port>/<database>`. MLflow supports the dialects mysql, mssql, sqlite, and postgresql. For more details, see the SQLAlchemy database URI documentation.
# - HTTP server (specified as `https://my-server:5000`), i.e. a host running an MLflow tracking server.
# - Databricks workspace (specified as `databricks` or as `databricks://<profileName>`, a Databricks CLI profile).
#
# ### Artifact stores
# - Amazon S3
# - Azure Blob Storage
# - Google Cloud Storage
# - FTP server
# - SFTP Server
# - NFS
# - HDFS
# Start the MLflow tracking server by
#
# ```
# mlflow server \
# --backend-store-uri postgresql+psycopg2://<username>:<password>@<host>:<port>/<database> \
# --default-artifact-root wasbs://<container>@<storage account>.blob.core.windows.net/<path> \
#     --host 0.0.0.0 \
# --port 5000
# ```
#
# or use the default storage method to write to `mlruns/`.
# mlflow server --backend-store-uri mlruns/ --default-artifact-root mlruns/ --host 0.0.0.0 --port 5000
remote_server_uri = "http://0.0.0.0:5000" # set to your server URI
mlflow.set_tracking_uri(remote_server_uri) # or set the MLFLOW_TRACKING_URI in the env
mlflow.tracking.get_tracking_uri()
# ### Choose Experiment
exp_name = "ElasticNet_wine"
mlflow.set_experiment(exp_name)
# ### What do we track?
#
# - **Code Version**: Git commit hash used for the run (if it was run from an MLflow Project)
# - **Start & End Time**: Start and end time of the run
# - **Source**: what code was run?
# - **Parameters**: Key-value input parameters.
# - **Metrics**: Key-value metrics, where the value is numeric (can be updated over the run)
# - **Artifacts**: Output files in any format.
# +
def eval_metrics(actual, pred):
# compute relevant metrics
rmse = np.sqrt(mean_squared_error(actual, pred))
mae = mean_absolute_error(actual, pred)
r2 = r2_score(actual, pred)
return rmse, mae, r2
def load_data(data_path):
data = pd.read_csv(data_path)
# Split the data into training and test sets. (0.75, 0.25) split.
train, test = train_test_split(data)
# The predicted column is "quality" which is a scalar from [3, 9]
train_x = train.drop(["quality"], axis=1)
test_x = test.drop(["quality"], axis=1)
train_y = train[["quality"]]
test_y = test[["quality"]]
return train_x, train_y, test_x, test_y
def train(alpha=0.5, l1_ratio=0.5, full_view=False):
# train a model with given parameters
warnings.filterwarnings("ignore")
np.random.seed(40)
# Read the wine-quality csv file (make sure you're running this from the root of MLflow!)
data_path = "data/wine-quality.csv"
train_x, train_y, test_x, test_y = load_data(data_path)
# Useful for multiple runs (only doing one run in this sample notebook)
with mlflow.start_run():
# Execute ElasticNet
lr = ElasticNet(alpha=alpha, l1_ratio=l1_ratio, random_state=42)
lr.fit(train_x, train_y)
# Evaluate Metrics
predicted_qualities = lr.predict(test_x)
(rmse, mae, r2) = eval_metrics(test_y, predicted_qualities)
# Print out metrics
print("Elasticnet model (alpha=%f, l1_ratio=%f):" % (alpha, l1_ratio))
print(" RMSE: %s" % rmse)
print(" MAE: %s" % mae)
print(" R2: %s" % r2)
# Log parameter, metrics, and model to MLflow
mlflow.log_param(key="alpha", value=alpha)
mlflow.log_param(key="l1_ratio", value=l1_ratio)
mlflow.log_metric(key="rmse", value=rmse)
mlflow.log_metrics({"mae": mae, "r2": r2})
mlflow.log_artifact(data_path)
print("Save to: {}".format(mlflow.get_artifact_uri()))
if full_view:
print("Run IDs: \n{}".format(mlflow.search_runs(ViewType.ACTIVE_ONLY)))
else:
pass
mlflow.sklearn.log_model(lr, "model")
# -
train(0.5, 0.5)
train(0.2, 0.2)
train(0.1, 0.1,full_view=True)
# ### 1.1 Comparing runs
# Run `mlflow ui` in a terminal or `http://your-tracking-server-host:5000` to view the experiment log and visualize and compare different runs and experiments. The logs and the model artifacts are saved in the `mlruns` directory (or where you specified).
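# Runs can also be compared programmatically. A minimal sketch, assuming the tracking URI and experiment set above are still active (the column names correspond to the parameters and metrics logged in `train()`):
#
# ```python
# import mlflow
#
# runs = mlflow.search_runs(order_by=["metrics.rmse ASC"])  # returns a pandas DataFrame
# print(runs[["run_id", "params.alpha", "params.l1_ratio", "metrics.rmse"]].head())
# ```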
# ## 2. Packaging the experiment as a MLflow project as conda env
#
# Change directory to `mlproject`.
#
# Specify the entrypoint for this project by creating a `MLproject` file and adding an conda environment with a `conda.yaml`. You can copy the yaml file from the experiment logs.
#
# To run this project, invoke `mlflow run . -P alpha=0.42`. After running this command, MLflow runs your training code in a new Conda environment with the dependencies specified in `conda.yaml`.
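# The project can also be launched from Python instead of the CLI; a minimal sketch, assuming the `MLproject` file is in the current working directory:
#
# ```python
# import mlflow
#
# submitted_run = mlflow.projects.run(uri=".", parameters={"alpha": 0.42})
# print(submitted_run.run_id)
# ```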
#
# ## 3. Deploy the model
#
# Deploy the model locally by running
#
# `mlflow models serve -m mlruns/0/f5f7c052ddc5469a852aa52c14cabdf1/artifacts/model/ -h 0.0.0.0 -p 1234`
# Test the endpoint:
#
# `curl -X POST -H "Content-Type:application/json; format=pandas-split" --data '{"columns":["alcohol", "chlorides", "citric acid", "density", "fixed acidity", "free sulfur dioxide", "pH", "residual sugar", "sulphates", "total sulfur dioxide", "volatile acidity"],"data":[[12.8, 0.029, 0.48, 0.98, 6.2, 29, 3.33, 1.2, 0.39, 75, 0.66]]}' http://0.0.0.0:1234/invocations`
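# The same request can be sent from Python; a minimal sketch, assuming the model is being served locally on port 1234 as above:
#
# ```python
# import requests
#
# payload = {
#     "columns": ["alcohol", "chlorides", "citric acid", "density", "fixed acidity",
#                 "free sulfur dioxide", "pH", "residual sugar", "sulphates",
#                 "total sulfur dioxide", "volatile acidity"],
#     "data": [[12.8, 0.029, 0.48, 0.98, 6.2, 29, 3.33, 1.2, 0.39, 75, 0.66]],
# }
# headers = {"Content-Type": "application/json; format=pandas-split"}
# resp = requests.post("http://0.0.0.0:1234/invocations", json=payload, headers=headers)
# print(resp.text)
# ```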
# You can also simply build a docker image from your model
#
# `mlflow models build-docker -m mlruns/1/d671f37a9c7f478989e67eb4ff4d1dac/artifacts/model/ -n elastic_net_wine`
#
# and run the container with
#
# `docker run -p 8080:8080 elastic_net_wine`.
#
# Or you can directly deploy to Microsoft Azure ML.
# ## 4. Tagging runs
# +
from datetime import datetime
from mlflow.tracking import MlflowClient
client = MlflowClient()
experiments = client.list_experiments() # returns a list of mlflow.entities.Experiment
print(experiments)
# -
# get the run
_run = client.get_run(run_id="aeffb800d7194a6f8472c03fcb5bbf5d")
print(_run)
# add a tag to the run
dt = datetime.now().strftime("%d-%m-%Y (%H:%M:%S.%f)")
client.set_tag(_run.info.run_id, "deployed", dt)
| notebooks/hands_on_example Azure Resources.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Chapter 3: Processing Data with pandas
#
# ### 3-10: Data Visualization
# +
# Listing 3.10.1: Displaying a chart with show()
import pandas as pd
import matplotlib.pyplot as plt
ax = pd.Series([1, 2, 3]).plot()
ax.set_title("Line Chart")
plt.show()
# -
# Listing 3.10.2: Changing the plot style
plt.style.use("ggplot")
# Listing 3.10.3: Calling the plot method on a Series
ser = pd.Series([1, 2, 3])
ax = ser.plot()
ax.set_title("Line Chart")
plt.show()
# Listing 3.10.4: Calling the plot method on a DataFrame
df = pd.DataFrame({"a": [1, 2, 3], "b": [3, 2, 1]})
ax = df.plot()
ax.set_title("Line Chart")
plt.show()
# +
# Listing 3.10.5: Creating a line chart
import os
base_url = (
"https://raw.githubusercontent.com/practical-jupyter/sample-data/master/anime/"
)
anime_stock_returns_csv = os.path.join(base_url, "anime_stock_returns.csv")
anime_stock_returns_df = pd.read_csv(anime_stock_returns_csv, index_col=0)
ax = anime_stock_returns_df.plot()
ax.set_title("stock returns")
plt.show()
# -
# Listing 3.10.6: Line chart with two different Y-axis ranges
anime_stock_price_csv = os.path.join(base_url, "anime_stock_price.csv")
anime_stock_price_df = pd.read_csv(anime_stock_price_csv, index_col=0)
ax = anime_stock_price_df.plot(secondary_y=["IG Port"])
ax.set_title("secondary_y")
ax.set_ylabel("TOEI ANIMATION")
ax.right_ax.set_ylabel("IG Port")
plt.show()
# Listing 3.10.7: Line charts using subplots
ax1, ax2 = anime_stock_price_df.plot(subplots=True)
ax1.set_title("subplot1")
ax2.set_title("subplot2")
plt.show()
# Listing 3.10.8: Scatter plot
anime_master_csv = os.path.join(base_url, "anime_master.csv")
anime_master_df = pd.read_csv(anime_master_csv)
ax = anime_master_df.plot.scatter(x="members", y="rating")
ax.set_title("Scatter")
plt.show()
# Listing 3.10.9: Bar chart
anime_genre_top10_pivoted_csv = os.path.join(base_url, "anime_genre_top10_pivoted.csv")
anime_genre_top10_pivoted_df = pd.read_csv(anime_genre_top10_pivoted_csv, index_col=0)
ax = anime_genre_top10_pivoted_df.plot.bar()
plt.show()
# Listing 3.10.10: Bar chart with a logarithmic axis
ax = anime_genre_top10_pivoted_df.plot.bar(logy=True)
ax.legend(bbox_to_anchor=(1, 1))
plt.show()
# Listing 3.10.11: Stacked bar chart
ax = anime_genre_top10_pivoted_df.plot.bar(stacked=True)
ax.set_title("stacked")
plt.show()
# Listing 3.10.12: Histogram
ax = anime_master_df["rating"].hist(bins=100)
ax.set_title("Histogram")
plt.show()
# Listing 3.10.13: Box plot
ax = anime_genre_top10_pivoted_df.plot.box()
ax.set_title("Box Plot")
plt.show()
# Listing 3.10.14: Pie chart
anime_genre_top10_csv = os.path.join(base_url, "anime_genre_top10.csv")
anime_genre_top10_df = pd.read_csv(anime_genre_top10_csv)
ax = anime_genre_top10_df.groupby("genre").sum()["members"].plot.pie(figsize=(5, 5))
ax.set_title("Pie Chart")
ax.set_ylabel("") # Y軸ラベルを削除
plt.show()
| sample-code/notebooks/3-10.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="rOvvWAVTkMR7"
# # Introduction
#
# Welcome to the **Few Shot Object Detection for TensorFlow Lite** Colab. Here, we demonstrate fine tuning of a SSD architecture (pre-trained on COCO) on very few examples of a *novel* class. We will then generate a (downloadable) TensorFlow Lite model for on-device inference.
#
# **NOTE:** This Colab is meant for the few-shot detection use-case. To train a model on a large dataset, please follow the [TF2 training](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_training_and_evaluation.md#training) documentation and then [convert](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/running_on_mobile_tf2.md) the model to TensorFlow Lite.
# + [markdown] id="3U2sv0upw04O"
# # Set Up
# + [markdown] id="vPs64QA1Zdov"
# ## Imports
# + id="H0rKBV4uZacD"
# Support for TF2 models was added after TF 2.3.
# !pip install tf-nightly
# + id="oi28cqGGFWnY"
import os
import pathlib
# Clone the tensorflow models repository if it doesn't already exist
if "models" in pathlib.Path.cwd().parts:
while "models" in pathlib.Path.cwd().parts:
os.chdir('..')
elif not pathlib.Path('models').exists():
# !git clone --depth 1 https://github.com/tensorflow/models
# + id="NwdsBdGhFanc"
# Install the Object Detection API
# %%bash
# cd models/research/
protoc object_detection/protos/*.proto --python_out=.
# cp object_detection/packages/tf2/setup.py .
python -m pip install .
# + id="uZcqD4NLdnf4"
import matplotlib
import matplotlib.pyplot as plt
import os
import random
import io
import imageio
import glob
import scipy.misc
import numpy as np
from six import BytesIO
from PIL import Image, ImageDraw, ImageFont
from IPython.display import display, Javascript
from IPython.display import Image as IPyImage
import tensorflow as tf
from object_detection.utils import label_map_util
from object_detection.utils import config_util
from object_detection.utils import visualization_utils as viz_utils
from object_detection.utils import colab_utils
from object_detection.utils import config_util
from object_detection.builders import model_builder
# %matplotlib inline
# + [markdown] id="IogyryF2lFBL"
# ## Utilities
# + id="-y9R0Xllefec"
def load_image_into_numpy_array(path):
"""Load an image from file into a numpy array.
Puts image into numpy array to feed into tensorflow graph.
Note that by convention we put it into a numpy array with shape
(height, width, channels), where channels=3 for RGB.
Args:
path: a file path.
Returns:
uint8 numpy array with shape (img_height, img_width, 3)
"""
img_data = tf.io.gfile.GFile(path, 'rb').read()
image = Image.open(BytesIO(img_data))
(im_width, im_height) = image.size
return np.array(image.getdata()).reshape(
(im_height, im_width, 3)).astype(np.uint8)
def plot_detections(image_np,
boxes,
classes,
scores,
category_index,
figsize=(12, 16),
image_name=None):
"""Wrapper function to visualize detections.
Args:
image_np: uint8 numpy array with shape (img_height, img_width, 3)
boxes: a numpy array of shape [N, 4]
classes: a numpy array of shape [N]. Note that class indices are 1-based,
and match the keys in the label map.
scores: a numpy array of shape [N] or None. If scores=None, then
this function assumes that the boxes to be plotted are groundtruth
boxes and plot all boxes as black with no classes or scores.
category_index: a dict containing category dictionaries (each holding
category index `id` and category name `name`) keyed by category indices.
figsize: size for the figure.
image_name: a name for the image file.
"""
image_np_with_annotations = image_np.copy()
viz_utils.visualize_boxes_and_labels_on_image_array(
image_np_with_annotations,
boxes,
classes,
scores,
category_index,
use_normalized_coordinates=True,
min_score_thresh=0.8)
if image_name:
plt.imsave(image_name, image_np_with_annotations)
else:
plt.imshow(image_np_with_annotations)
# + [markdown] id="sSaXL28TZfk1"
# ## Rubber Ducky data
#
# We will start with some toy data consisting of 5 images of a rubber
# ducky. Note that the [COCO](https://cocodataset.org/#explore) dataset contains a number of animals, but notably, it does *not* contain rubber duckies (or even ducks for that matter), so this is a novel class.
# + id="SQy3ND7EpFQM"
# Load images and visualize
train_image_dir = 'models/research/object_detection/test_images/ducky/train/'
train_images_np = []
for i in range(1, 6):
image_path = os.path.join(train_image_dir, 'robertducky' + str(i) + '.jpg')
train_images_np.append(load_image_into_numpy_array(image_path))
plt.rcParams['axes.grid'] = False
plt.rcParams['xtick.labelsize'] = False
plt.rcParams['ytick.labelsize'] = False
plt.rcParams['xtick.top'] = False
plt.rcParams['xtick.bottom'] = False
plt.rcParams['ytick.left'] = False
plt.rcParams['ytick.right'] = False
plt.rcParams['figure.figsize'] = [14, 7]
for idx, train_image_np in enumerate(train_images_np):
plt.subplot(2, 3, idx+1)
plt.imshow(train_image_np)
plt.show()
# + [markdown] id="LbOe9Ym7xMGV"
# # Transfer Learning
#
# + [markdown] id="Dqb_yjAo3cO_"
# ## Data Preparation
#
# First, we populate the groundtruth with pre-annotated bounding boxes.
#
# We then add the class annotations (for simplicity, we assume a single 'Duck' class in this colab; though it should be straightforward to extend this to handle multiple classes). We also convert everything to the format that the training
# loop below expects (e.g., everything converted to tensors, classes converted to one-hot representations, etc.).
# + id="wIAT6ZUmdHOC"
gt_boxes = [
np.array([[0.436, 0.591, 0.629, 0.712]], dtype=np.float32),
np.array([[0.539, 0.583, 0.73, 0.71]], dtype=np.float32),
np.array([[0.464, 0.414, 0.626, 0.548]], dtype=np.float32),
np.array([[0.313, 0.308, 0.648, 0.526]], dtype=np.float32),
np.array([[0.256, 0.444, 0.484, 0.629]], dtype=np.float32)
]
# By convention, our non-background classes start counting at 1. Given
# that we will be predicting just one class, we will therefore assign it a
# `class id` of 1.
duck_class_id = 1
num_classes = 1
category_index = {duck_class_id: {'id': duck_class_id, 'name': 'rubber_ducky'}}
# Convert class labels to one-hot; convert everything to tensors.
# The `label_id_offset` here shifts all classes by a certain number of indices;
# we do this here so that the model receives one-hot labels where non-background
# classes start counting at the zeroth index. This is ordinarily just handled
# automatically in our training binaries, but we need to reproduce it here.
label_id_offset = 1
train_image_tensors = []
gt_classes_one_hot_tensors = []
gt_box_tensors = []
for (train_image_np, gt_box_np) in zip(
train_images_np, gt_boxes):
train_image_tensors.append(tf.expand_dims(tf.convert_to_tensor(
train_image_np, dtype=tf.float32), axis=0))
gt_box_tensors.append(tf.convert_to_tensor(gt_box_np, dtype=tf.float32))
zero_indexed_groundtruth_classes = tf.convert_to_tensor(
np.ones(shape=[gt_box_np.shape[0]], dtype=np.int32) - label_id_offset)
gt_classes_one_hot_tensors.append(tf.one_hot(
zero_indexed_groundtruth_classes, num_classes))
print('Done prepping data.')
# + [markdown] id="b3_Z3mJWN9KJ"
# Let's just visualize the rubber duckies as a sanity check
#
# + id="YBD6l-E4N71y"
dummy_scores = np.array([1.0], dtype=np.float32) # give boxes a score of 100%
plt.figure(figsize=(30, 15))
for idx in range(5):
plt.subplot(2, 3, idx+1)
plot_detections(
train_images_np[idx],
gt_boxes[idx],
np.ones(shape=[gt_boxes[idx].shape[0]], dtype=np.int32),
dummy_scores, category_index)
plt.show()
# + [markdown] id="ghDAsqfoZvPh"
# ## Load mobile-friendly model
#
# In this cell we build a mobile-friendly single-stage detection architecture (SSD MobileNet V2 FPN-Lite) and restore all but the classification layer at the top (which will be randomly initialized).
#
# **NOTE**: TensorFlow Lite only supports SSD models for now.
#
# For simplicity, we have hardcoded a number of things in this colab for the specific SSD architecture at hand (including assuming that the image size will always be 320x320); however, it is not difficult to generalize to other model configurations (`pipeline.config` in the zip downloaded from the [Model Zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md)).
#
#
#
# + id="9J16r3NChD-7"
# Download the checkpoint and put it into models/research/object_detection/test_data/
# !wget http://download.tensorflow.org/models/object_detection/tf2/20200711/ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8.tar.gz
# !tar -xf ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8.tar.gz
# !if [ -d "models/research/object_detection/test_data/checkpoint" ]; then rm -Rf models/research/object_detection/test_data/checkpoint; fi
# !mkdir models/research/object_detection/test_data/checkpoint
# !mv ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8/checkpoint models/research/object_detection/test_data/
# + id="RyT4BUbaMeG-"
tf.keras.backend.clear_session()
print('Building model and restoring weights for fine-tuning...', flush=True)
num_classes = 1
pipeline_config = 'models/research/object_detection/configs/tf2/ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8.config'
checkpoint_path = 'models/research/object_detection/test_data/checkpoint/ckpt-0'
# This will be where we save checkpoint & config for TFLite conversion later.
output_directory = 'output/'
output_checkpoint_dir = os.path.join(output_directory, 'checkpoint')
# Load pipeline config and build a detection model.
#
# Since we are working off of a COCO architecture which predicts 90
# class slots by default, we override the `num_classes` field here to be just
# one (for our new rubber ducky class).
configs = config_util.get_configs_from_pipeline_file(pipeline_config)
model_config = configs['model']
model_config.ssd.num_classes = num_classes
model_config.ssd.freeze_batchnorm = True
detection_model = model_builder.build(
model_config=model_config, is_training=True)
# Save new pipeline config
pipeline_proto = config_util.create_pipeline_proto_from_configs(configs)
config_util.save_pipeline_config(pipeline_proto, output_directory)
# Set up object-based checkpoint restore --- SSD has two prediction
# `heads` --- one for classification, the other for box regression. We will
# restore the box regression head but initialize the classification head
# from scratch (we show the omission below by commenting out the line that
# we would add if we wanted to restore both heads)
fake_box_predictor = tf.compat.v2.train.Checkpoint(
_base_tower_layers_for_heads=detection_model._box_predictor._base_tower_layers_for_heads,
# _prediction_heads=detection_model._box_predictor._prediction_heads,
# (i.e., the classification head that we *will not* restore)
_box_prediction_head=detection_model._box_predictor._box_prediction_head,
)
fake_model = tf.compat.v2.train.Checkpoint(
_feature_extractor=detection_model._feature_extractor,
_box_predictor=fake_box_predictor)
ckpt = tf.compat.v2.train.Checkpoint(model=fake_model)
ckpt.restore(checkpoint_path).expect_partial()
# To save checkpoint for TFLite conversion.
exported_ckpt = tf.compat.v2.train.Checkpoint(model=detection_model)
ckpt_manager = tf.train.CheckpointManager(
exported_ckpt, output_checkpoint_dir, max_to_keep=1)
# Run model through a dummy image so that variables are created
image, shapes = detection_model.preprocess(tf.zeros([1, 320, 320, 3]))
prediction_dict = detection_model.predict(image, shapes)
_ = detection_model.postprocess(prediction_dict, shapes)
print('Weights restored!')
# + [markdown] id="pCkWmdoZZ0zJ"
# ## Eager training loop (Fine-tuning)
#
# Some of the parameters in this block have been set empirically: for example, `learning_rate`, `num_batches` & `momentum` for SGD. These are just a starting point, you will have to tune these for your data & model architecture to get the best results.
#
#
#
#
# + id="nyHoF4mUrv5-"
tf.keras.backend.set_learning_phase(True)
# These parameters can be tuned; since our training set has 5 images
# it doesn't make sense to have a much larger batch size, though we could
# fit more examples in memory if we wanted to.
batch_size = 5
learning_rate = 0.15
num_batches = 1000
# Select variables in top layers to fine-tune.
trainable_variables = detection_model.trainable_variables
to_fine_tune = []
prefixes_to_train = [
'WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead',
'WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead']
for var in trainable_variables:
if any([var.name.startswith(prefix) for prefix in prefixes_to_train]):
to_fine_tune.append(var)
# Set up forward + backward pass for a single train step.
def get_model_train_step_function(model, optimizer, vars_to_fine_tune):
"""Get a tf.function for training step."""
# Use tf.function for a bit of speed.
# Comment out the tf.function decorator if you want the inside of the
# function to run eagerly.
@tf.function
def train_step_fn(image_tensors,
groundtruth_boxes_list,
groundtruth_classes_list):
"""A single training iteration.
Args:
image_tensors: A list of [1, height, width, 3] Tensor of type tf.float32.
Note that the height and width can vary across images, as they are
reshaped within this function to be 320x320.
groundtruth_boxes_list: A list of Tensors of shape [N_i, 4] with type
tf.float32 representing groundtruth boxes for each image in the batch.
groundtruth_classes_list: A list of Tensors of shape [N_i, num_classes]
with type tf.float32 representing groundtruth boxes for each image in
the batch.
Returns:
A scalar tensor representing the total loss for the input batch.
"""
shapes = tf.constant(batch_size * [[320, 320, 3]], dtype=tf.int32)
model.provide_groundtruth(
groundtruth_boxes_list=groundtruth_boxes_list,
groundtruth_classes_list=groundtruth_classes_list)
with tf.GradientTape() as tape:
preprocessed_images = tf.concat(
[detection_model.preprocess(image_tensor)[0]
for image_tensor in image_tensors], axis=0)
prediction_dict = model.predict(preprocessed_images, shapes)
losses_dict = model.loss(prediction_dict, shapes)
total_loss = losses_dict['Loss/localization_loss'] + losses_dict['Loss/classification_loss']
gradients = tape.gradient(total_loss, vars_to_fine_tune)
optimizer.apply_gradients(zip(gradients, vars_to_fine_tune))
return total_loss
return train_step_fn
optimizer = tf.keras.optimizers.SGD(learning_rate=learning_rate, momentum=0.9)
train_step_fn = get_model_train_step_function(
detection_model, optimizer, to_fine_tune)
print('Start fine-tuning!', flush=True)
for idx in range(num_batches):
# Grab keys for a random subset of examples
all_keys = list(range(len(train_images_np)))
random.shuffle(all_keys)
example_keys = all_keys[:batch_size]
# Note that we do not do data augmentation in this demo. If you want a
# a fun exercise, we recommend experimenting with random horizontal flipping
# and random cropping :)
gt_boxes_list = [gt_box_tensors[key] for key in example_keys]
gt_classes_list = [gt_classes_one_hot_tensors[key] for key in example_keys]
image_tensors = [train_image_tensors[key] for key in example_keys]
# Training step (forward pass + backwards pass)
total_loss = train_step_fn(image_tensors, gt_boxes_list, gt_classes_list)
if idx % 100 == 0:
print('batch ' + str(idx) + ' of ' + str(num_batches)
+ ', loss=' + str(total_loss.numpy()), flush=True)
print('Done fine-tuning!')
ckpt_manager.save()
print('Checkpoint saved!')
# + [markdown] id="cYk1_9Fc2lZO"
# # Export & run with TensorFlow Lite
#
#
# + [markdown] id="y0nsDVEd9SuX"
# ## Model Conversion
#
# First, we invoke the `export_tflite_graph_tf2.py` script to generate a TFLite-friendly intermediate SavedModel. This will then be passed to the TensorFlow Lite Converter for generating the final model.
#
# To know more about this process, please look at [this documentation](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/running_on_mobile_tf2.md).
# + id="dyrqHSQQ7WKE" language="bash"
# python models/research/object_detection/export_tflite_graph_tf2.py \
# --pipeline_config_path output/pipeline.config \
# --trained_checkpoint_dir output/checkpoint \
# --output_directory tflite
# + id="m5hjPyR78bgs"
# !tflite_convert --saved_model_dir=tflite/saved_model --output_file=tflite/model.tflite
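# As an alternative to the `tflite_convert` CLI above, the same conversion can be done from Python (a sketch; it assumes the `tflite/saved_model` directory produced by the export script above):
# +
converter = tf.lite.TFLiteConverter.from_saved_model('tflite/saved_model')
tflite_model = converter.convert()
with open('tflite/model.tflite', 'wb') as f:
  f.write(tflite_model)
# -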
# + [markdown] id="WHlXL1x_Z3tc"
# ## Test .tflite model
# + id="WcE6OwrHQJya"
test_image_dir = 'models/research/object_detection/test_images/ducky/test/'
test_images_np = []
for i in range(1, 50):
image_path = os.path.join(test_image_dir, 'out' + str(i) + '.jpg')
test_images_np.append(np.expand_dims(
load_image_into_numpy_array(image_path), axis=0))
# This function could be wrapped with @tf.function for graph-mode speed; it is
# left undecorated here so that inference runs eagerly.
def detect(interpreter, input_tensor):
"""Run detection on an input image.
Args:
interpreter: tf.lite.Interpreter
input_tensor: A [1, height, width, 3] Tensor of type tf.float32.
Note that height and width can be anything since the image will be
immediately resized according to the needs of the model within this
function.
Returns:
A dict containing 3 Tensors (`detection_boxes`, `detection_classes`,
and `detection_scores`).
"""
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
# We use the original model for pre-processing, since the TFLite model doesn't
# include pre-processing.
preprocessed_image, shapes = detection_model.preprocess(input_tensor)
interpreter.set_tensor(input_details[0]['index'], preprocessed_image.numpy())
interpreter.invoke()
boxes = interpreter.get_tensor(output_details[0]['index'])
classes = interpreter.get_tensor(output_details[1]['index'])
scores = interpreter.get_tensor(output_details[2]['index'])
return boxes, classes, scores
# Load the TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path="tflite/model.tflite")
interpreter.allocate_tensors()
# Note that the first inference call may take a little longer while everything
# initializes; subsequent calls should be fast.
label_id_offset = 1
for i in range(len(test_images_np)):
input_tensor = tf.convert_to_tensor(test_images_np[i], dtype=tf.float32)
boxes, classes, scores = detect(interpreter, input_tensor)
plot_detections(
test_images_np[i][0],
boxes[0],
classes[0].astype(np.uint32) + label_id_offset,
scores[0],
category_index, figsize=(15, 20), image_name="gif_frame_" + ('%02d' % i) + ".jpg")
# + id="ZkMPOSQE0x8C"
imageio.plugins.freeimage.download()
anim_file = 'duckies_test.gif'
filenames = glob.glob('gif_frame_*.jpg')
filenames = sorted(filenames)
last = -1
images = []
for filename in filenames:
image = imageio.imread(filename)
images.append(image)
imageio.mimsave(anim_file, images, 'GIF-FI', fps=5)
display(IPyImage(open(anim_file, 'rb').read()))
# + [markdown] id="yzaHWsS58_PQ"
# ## (Optional) Download model
#
# This model can be run on-device with **TensorFlow Lite**. Look at [our SSD model signature](https://www.tensorflow.org/lite/models/object_detection/overview#uses_and_limitations) to understand how to interpret the model IO tensors. Our [Object Detection example](https://github.com/tensorflow/examples/tree/master/lite/examples/object_detection) is a good starting point for integrating the model into your mobile app.
#
# Refer to TFLite's [inference documentation](https://www.tensorflow.org/lite/guide/inference) for more details.
# + id="gZ6vac3RAY3j"
from google.colab import files
files.download('tflite/model.tflite')
| TF2_eager_few_shot_od_training_tflite.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import matplotlib.pyplot as pl
import matplotlib as mpl
import pandas as pd
from astropy.table import Table
import sys
sys.path.append("../")
import read_mist_models
from matplotlib import colors
from scipy.interpolate import interp1d
import utils
from astropy.coordinates import SkyCoord
from astropy import units as u
pl.rc('xtick', labelsize=20)
pl.rc('ytick', labelsize=20)
pl.rc('axes', labelsize=25)
pl.rc('axes', titlesize=30)
pl.rc('legend', handlelength=10)
pl.rc('legend', fontsize=20)
# %matplotlib inline
# -
# #### Load in output from round.py and crossmatch with k2dr2 table from Bailer-Jones et al
data = utils.read_round("../output/out.dat")
k2dr2 = Table.read('../k2_dr2_1arcsec.fits', format='fits')
k2dr2 = k2dr2.to_pandas()
df = pd.merge(k2dr2, data, left_on='epic_number', right_on='epic_number')
# #### Read in mist isochrone (for picking main sequence) and parsec isochrones (for converting from gaia mags to Johnson mags)
iso = read_mist_models.ISOCMD('../MIST_iso_5da0dbfba0a60.iso.cmd')
mist = iso.isocmds[iso.age_index(9.0)]
isonames1 = ('Zini','Age','Mini','Mass','logL','logTe','logg','label','McoreTP',
'C_O','period0','period1','pmode','Mloss','tau1m','X','Y','Xc','Xn','Xo',
'Cexcess','Z','mbolmag','Gmag','G_BPmag','G_RPmag','B_Tmag','V_Tmag',
'Jmag','Hmag','Ksmag')
parsec1 = pd.read_table('../output783328222883.dat', delim_whitespace=True, header=None, comment='#', names=isonames1)
isonames2 = ('Zini', 'Age', 'Mini', 'Mass', 'logL', 'logTe', 'logg', 'label', 'McoreTP',
'C_O', 'period0', 'period1', 'pmode', 'Mloss', 'tau1m', 'X', 'Y', 'Xc', 'Xn',
'Xo', 'Cexcess', 'Z', 'mbolmag', 'Umag', 'Bmag', 'Vmag', 'Rmag', 'Imag', 'Jmag', 'Hmag', 'Kmag')
parsec2 = pd.read_table('../output632510793236.dat', delim_whitespace=True, header=None, comment='#', names=isonames2)
iok = np.where((parsec1['label'] < 2) & (parsec1['Gmag'] > 1.7))[0][::-1]
# #### Select stars with good gaia solutions
# +
# select stars with good gaia results
good_parallax = df["parallax_error"] < 0.1
unimodal_distance_result = (df["r_modality_flag"] == 1) & (df["r_result_flag"] == 1)
has_finite_bp_rp = np.isfinite(df["bp_rp"])
good_bp = df["phot_bp_mean_flux_error"]/df[u'phot_bp_mean_flux'] < 0.01
good_rp = df[u'phot_rp_mean_flux_error']/df[u'phot_rp_mean_flux'] < 0.01
good_mg = df[u'phot_g_mean_flux_error']/df[u'phot_g_mean_flux'] < 0.01
in_r_range = (df["r_est"] > 0) & (df["r_est"] < 500)
mask = good_parallax & unimodal_distance_result & has_finite_bp_rp & good_bp & good_rp & good_mg & in_r_range
# -
# #### Interpolate from Gaia GP-RP color to Johnson B-V color
# +
iso_bp_rp = mist['Gaia_BP_MAWb'] - mist['Gaia_RP_MAW']
iso_mg = mist['Gaia_G_MAW']
mass_mask = (mist['initial_mass'] < 2.0) & (mist['initial_mass'] > 0.2)
iso_bp_rp = iso_bp_rp[mass_mask]
iso_mg = iso_mg[mass_mask]
in_color_range = (df["bp_rp"] > min(iso_bp_rp)) & (df["bp_rp"] < max(iso_bp_rp))
mask = mask & in_color_range
interpolator = interp1d(iso_bp_rp, iso_mg)
iso_mg_interp = interpolator(df[mask]['bp_rp'])
# -
# #### Select main sequence
# +
correction = 5*np.log10(df[mask]["r_est"])-5 # get absolute mag Mg from relative mg by applying distance correction
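# (distance modulus: M_G = m_G - 5*log10(d/pc) + 5, so subtracting `correction` converts apparent to absolute magnitude)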
bp_rp, mg = np.array(df[mask]["bp_rp"]), np.array(df[mask]["phot_g_mean_mag"])-correction
is_ms = (mg - iso_mg_interp < 0.2) & (iso_mg_interp - mg < 0.4)
fig, ax = pl.subplots(figsize=(10, 10))
# only show 2D-histogram for bins with more than 10 stars in them
h = ax.hist2d(bp_rp[is_ms], mg[is_ms], bins=100, cmin=10, norm=colors.PowerNorm(0.5), zorder=0.5)
# fill the rest with scatter
ax.scatter(bp_rp[is_ms], mg[is_ms], alpha=1, s=1, color='k', zorder=0)
ax.plot(iso_bp_rp, iso_mg, 'r', linewidth=3)
ax.invert_yaxis()
cb = fig.colorbar(h[3])
ax.set_xlabel('$G_{BP} - G_{RP}$')
ax.set_ylabel('$M_G$')
cb.set_label("Stellar Density")
pl.show()
# -
# #### Plot period vs. Gaia color
# +
relative_uncertainty = df['logperiod_sd']/df['logperiod_mean']
snr = df['logamp_mean']-df['logs2_mean']
nobeehive = (df['k2_campaign_str'] != b'5') & (df['k2_campaign_str'] != b'16')
good_period = ((df['logperiod_neff'] > 7000) &
(np.abs(np.exp(df['logperiod_mean']) - df['acfpeak']) < 1) &
(relative_uncertainty < 0.1) & (np.exp(df['logperiod_mean']) < 33) &
(snr > np.log(10)))
period = np.exp(df[mask & is_ms & good_period]['logperiod_mean'])
logperiod_error = df[mask & is_ms & good_period]['logperiod_sd']
logperiod = df[mask & is_ms & good_period]['logperiod_mean']
color = df[mask & is_ms & good_period]['bp_rp']
df.to_hdf('allgood.h5', key='df', mode='w')
fig = pl.figure(figsize=(15, 10))
pl.semilogy(color, period, 'o', alpha=0.2, color='k')
pl.xlabel(r"$\mathrm{G}_\mathrm{GP}-\mathrm{G}_\mathrm{RP}$ (mag)")
pl.ylabel("rotation period (days)")
pl.savefig("../figures/period.pdf")
# -
# #### Convert Gaia colors to B-V
relative_uncertainty = df['logperiod_sd']/df['logperiod_mean']
good_period = ((df['logperiod_neff'] > 7000) &
(np.abs(np.exp(df['logperiod_mean']) - df['acfpeak']) < 1) &
(relative_uncertainty < 0.02) & (np.exp(df['logperiod_mean']) < 37) &
(snr > 5))
period = np.exp(df[mask & is_ms & good_period]['logperiod_mean'])
BV_ms = np.interp(df[mask & is_ms & good_period]['bp_rp'], parsec1['G_BPmag'][iok] - parsec1['G_RPmag'][iok],
parsec2['Bmag'][iok] - parsec2['Vmag'][iok])
# #### Define several different gyrochrones
# +
# gyrochrones
def MM09e2(B_V, age):
'''
Eqn 2
http://adsabs.harvard.edu/abs/2009ApJ...695..679M
'''
a = 0.50
b = 0.15
P = np.sqrt(age) * (np.sqrt(B_V - a)) - b * (B_V - a)
return P
def MM09e3(B_V, age):
''' Eqn 3 '''
c = 0.77
d = 0.40
f = 0.60
P = age**0.52 * (c * (B_V - d)**f)
return P
def MH2008(B_V, age):
'''
Equations 12,13,14 from Mamajek & Hillenbrand (2008)
http://adsabs.harvard.edu/abs/2008ApJ...687.1264M
Coefficients from Table 10
Parameters
----------
B_V (B-V) color
age in Myr
Returns
-------
    period in days
'''
a = 0.407
b = 0.325
c = 0.495
n = 0.566
f = a * np.power(B_V - c, b)
g = np.power(age, n)
P = f * g
return P
def Angus2015(B_V, age):
'''
Compute the rotation period expected for a star of a given color (temp) and age
NOTE: - input Age is in MYr
- output Period is in days
Eqn 15 from Angus+2015
http://adsabs.harvard.edu/abs/2015MNRAS.450.1787A
'''
P = (age ** 0.55) * 0.4 * ((B_V - 0.45) ** 0.31)
return P
# -
# #### Find a good gyrochrone for separating modalities
fig = pl.figure(figsize=(15, 10))
pl.semilogy(BV_ms, period, 'o', alpha=0.3, color='k')
pl.semilogy(BV_ms, MM09e3(BV_ms, 500), '.')
# #### Plot histogram of rotation periods relative to gyrochrone
BV_mask = (BV_ms > 1.25) & (BV_ms < 1.3)
pl.hist(period[BV_mask] - MM09e3(BV_ms, 500)[BV_mask], bins=30);
# #### Use Gaia to find galactocentric coordinates
ra, dec, dist = list(df[mask & is_ms & good_period]['ra']), list(df[mask & is_ms & good_period]['dec']), list(df[mask & is_ms & good_period]['r_est'])
coords = [SkyCoord(ra = ra[i]*u.degree, dec = dec[i]*u.degree, distance = dist[i]*u.parsec) for i in range(len(ra))]
df_good = df[mask & is_ms & good_period]
color_good = df_good['bp_rp']
period_good = np.exp(df_good['logperiod_mean'])
df_good['galcen_x'] = [c.galactocentric.x.value for c in coords]
df_good['galcen_y'] = [c.galactocentric.y.value for c in coords]
df_good['galcen_z'] = [c.galactocentric.z.value for c in coords]
# #### Plot stars in galactocentric coordinates
pl.figure(figsize=(10, 10))
pl.plot(df_good['galcen_x'][df_good['galcen_y'] > 0], df_good['galcen_y'][df_good['galcen_y'] > 0], 'r.')
pl.plot(df_good['galcen_x'][df_good['galcen_y'] < 0], df_good['galcen_y'][df_good['galcen_y'] < 0], 'b.')
pl.xlabel("galactocentric x")
pl.ylabel("galactocentric y")
pl.savefig("../figures/position.pdf")
# +
xcoord = df_good['galcen_x'] + 8300
ycoord = df_good['galcen_y']
angle = np.arctan(ycoord/xcoord)*180/np.pi
forward = (np.abs(angle) > 60) & (ycoord > 0)
backward = (np.abs(angle) > 60) & (ycoord < 0)
pl.figure(figsize=(10, 10))
pl.plot(df_good['galcen_x'], df_good['galcen_y'], 'k.')
pl.plot(df_good['galcen_x'][forward], df_good['galcen_y'][forward], 'r.')
pl.plot(df_good['galcen_x'][backward], df_good['galcen_y'][backward], 'b.')
pl.xlabel("galactocentric x")
pl.ylabel("galactocentric y")
# -
# #### Plot period-color diagram separated by direction
# +
fig = pl.figure(figsize=(15, 10))
pl.semilogy(color_good[forward], period_good[forward], 'o', alpha=0.3, color='b', label="y > 0")
pl.semilogy(color_good[backward], period_good[backward], 'o', alpha=0.3, color='r', label="y < 0")
pl.xlabel(r"$\mathrm{G}_\mathrm{BP}-\mathrm{G}_\mathrm{RP}$ (mag)")
pl.ylabel("rotation period (days)")
leg = pl.legend(loc='lower right')
for lh in leg.legendHandles:
lh._legmarker.set_alpha(1)
lh._legmarker.set_markersize(10)
leg.get_frame().set_linewidth(3)
pl.savefig("../figures/period_direction.pdf")
# +
BV_mask = (BV_ms > 1.25) & (BV_ms < 1.35)
nobeehive = (df_good['k2_campaign_str'] != b'5') & (df_good['k2_campaign_str'] != b'16')
#y_mask = df_good['galcen_y'] < 0
pl.hist(period[BV_mask & forward & nobeehive] - MM09e2(BV_ms, 500)[BV_mask & forward & nobeehive & nobeehive], bins=30, alpha=0.5, color='b', density=True);
#y_mask = df_good['galcen_y'] > 0
pl.hist(period[BV_mask & backward & nobeehive] - MM09e2(BV_ms, 500)[BV_mask & backward], bins=30, alpha=0.5, color='r', density=True);
# -
| round/make_cuts.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
import json
import pandas as pd
# # Load Data
# +
def load_jsonl(fname):
fin = open(fname, encoding="utf-8")
data = []
for line in fin:
d = json.loads(line.strip())
data.append(d)
return data
def save_jsonl(data, filename):
with open(filename, "w", encoding="utf-8") as fo:
for idx, d in enumerate(data):
fo.write(json.dumps(d, ensure_ascii=False))
fo.write("\n")
# -
split = ["train", "valid", "test"]
wisesight = {}
for s in split:
d = load_jsonl(f"Datasets/WisesightSentiment/{s}.jsonl")
wisesight[s] = d
print(f"Loaded {s}: {len(d)} sents")
testmisp = load_jsonl(f"Datasets/WisesightSentiment/test-misp.jsonl")
trainmisp = load_jsonl(f"Datasets/WisesightSentiment/few-shot/train-misp-3000.jsonl")
# # Preprocess & Tokenize
from pythainlp.tokenize import word_tokenize
word_tokenize("ฉันรักแมว", engine="deepcut")
# +
from tqdm import tqdm
from functools import partial
from thai2transformers import preprocess
from typing import Collection, Callable
import demoji
def word_tokenize_deepcut(s):
return word_tokenize(s, engine="deepcut")
def _process_transformers(
text: str,
pre_rules: Collection[Callable] = [
preprocess.fix_html,
preprocess.rm_brackets,
preprocess.replace_newlines,
preprocess.rm_useless_spaces,
preprocess.replace_spaces,
preprocess.replace_rep_after,
],
tok_func: Callable = word_tokenize_deepcut,
post_rules: Collection[Callable] = [preprocess.ungroup_emoji, preprocess.replace_wrep_post],
lowercase: bool = False
) -> str:
if lowercase:
text = text.lower()
for rule in pre_rules:
text = rule(text)
toks = tok_func(text)
for rule in post_rules:
toks = rule(toks)
return toks
def replace_emoji(s):
return demoji.replace_with_desc(s, "")
space_token = " "
preprocessor=partial(
_process_transformers,
pre_rules = [
replace_emoji,
preprocess.fix_html,
preprocess.rm_brackets,
preprocess.replace_newlines,
preprocess.rm_useless_spaces,
# preprocess.replace_rep_after
],
lowercase=False
)
# -
for s in split:
sents = wisesight[s]
print(f"Tokenizing {s}:")
for sent in tqdm(sents, total=len(sents)):
sent["tokenized"] = preprocessor(sent["text"])
save_jsonl(sents, f"Datasets/WisesightSentiment/tokenized_{s}.jsonl")
# ### Tokenize with misspelling
# +
import itertools
def tokenize_misp_sents(sents):
for sent in tqdm(sents):
engine = "deepcut"
sent["tokenized"] = []
segments = []
segwords = []
mispTokens = sorted(sent["misp_tokens"], key=lambda x: x["s"], reverse=False)
text = sent["text"]
lastToken = ""
idx, seenTokens = 0, []
for m in mispTokens:
overlapped = False
for p in seenTokens:
if m["s"] < p["t"]:
overlapped = True
if overlapped:
continue
s = text[idx:m["s"]]
w = text[m["s"]:m["t"]]
t = text[m["t"]:]
idx += len(s)+len(w)
ts = preprocessor(s)
segments.append((s, s))
segments.append((w, m["corr"]))
segwords.append((ts, ts))
segwords.append(([w], [m["corr"]]))
lastToken = t
seenTokens.append(m)
if len(seenTokens)==0:
t = preprocessor(text)
segments = [(text, text)]
segwords = [(t, t)]
else:
t = preprocessor(lastToken)
segments.append((lastToken, lastToken))
segwords.append((t, t))
sent["tokenized"] = preprocessor(sent["text"]) #blindly tokenize
# sent["tokenized"] = list(itertools.chain(*[s[0] for s in segwords]))
# sent["tokenized"] = list(itertools.chain(*[s[1] for s in segwords]))
# sent["tokenized"] = word_tokenize("".join([s[1] for s in segments]), engine=engine)
sent["segments"] = segwords
return sents
# -
sents = tokenize_misp_sents(testmisp)
save_jsonl(sents, f"Datasets/WisesightSentiment/tokenized_test-misp.jsonl")
sents = tokenize_misp_sents(trainmisp)
save_jsonl(sents, f"Datasets/WisesightSentiment/tokenized_train-misp-3000.jsonl")
| notebooks/Tokenize Wisesight Data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/XavierCarrera/Platzi-Master-DS-Exercises/blob/master/Basic_Exercise_DS_Python3.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="Pxumk8j0UuWg"
import numpy as np
import pandas as pd
# + id="6N9uoFtqVy_b" outputId="5f2cdd1a-1b78-4321-b3e8-70f734b412e1" colab={"base_uri": "https://localhost:8080/", "height": 34}
from google.colab import drive
drive.mount('/content/drive/')
# + id="-bO-VaHFWAke"
# %cd '/content/drive/My Drive/Colab Notebooks/db'
# !ls
# + id="bjYQQSxFWDM-" outputId="f70b700f-0346-4bc2-924e-3c16c3c2fdab" colab={"base_uri": "https://localhost:8080/", "height": 34}
df = pd.read_csv('houseelf_earlength_dna_data.csv')
df.columns
# + id="Rua8lC4iWLZr" outputId="b602eff8-afea-4115-cd20-347f694627e8" colab={"base_uri": "https://localhost:8080/", "height": 170}
df["earlength"].describe()
# + id="vRb3Zcf9X3E2" outputId="c6ef10c6-7251-49c1-9e75-42fb359619e9" colab={"base_uri": "https://localhost:8080/", "height": 34}
y = np.where(df["earlength"] >= 10, 1, 0)
y
# + id="g90A3jRUaTp3"
gc = df
gc.insert(2, "Ear Classification", y, True)
# + id="w52wJyBudlBB" outputId="4ccb070d-f762-4c75-a11d-68c56618ef5d" colab={"base_uri": "https://localhost:8080/", "height": 359}
gc["Ear Classification"] = gc["Ear Classification"].replace([0], "Small ear")
gc["Ear Classification"] = gc["Ear Classification"].replace([1], "Big ear")
gc
# + id="gV2wKO9md1ez" outputId="cc0256fe-e785-4921-ea20-8d7a6c81b62b" colab={"base_uri": "https://localhost:8080/", "height": 17}
from google.colab import files
df.to_csv('grangers_analysis.csv')
files.download('grangers_analysis.csv')
| Basic_Exercise_DS_Python3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.2 64-bit
# metadata:
# interpreter:
# hash: 9139ca13fc640d8623238ac4ed44beace8a76f86a07bab6efe75c2506e18783d
# name: python3
# ---
# # Data Augmentation for tabular data on an imbalanced dataset
# ## How to augment highly imbalanced data with fake data
# +
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch import nn, optim
from torch.autograd import Variable
import pandas as pd
import numpy as np
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
import mlprepare as mlp
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
# -
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
device
DATA_PATH = 'data/creditcard.csv'
df = pd.read_csv(DATA_PATH, sep=',')
df.head()
df_base = df.copy()
cols = df_base.columns
# We need to normalize Time and Amount
# +
mean_time=df_base['Time'].mean()
mean_amount=df_base['Amount'].mean()
std_time=df_base['Time'].std()
std_amount=df_base['Amount'].std()
df_base['Time']=(df_base['Time']-mean_time)/std_time
df_base['Amount']=(df_base['Amount']-mean_amount)/std_amount
# -
# Class=1 means that this was indeed a fraud case, class=0 means no fraud. This dataset is highly imbalanced:
df_base['Class'].value_counts()
# I want to create fake data based on the 492 cases, which I will then use to improve the model. Let's first train a simple RandomForest.
X_train, X_test, y_train, y_test = mlp.split_df(df_base, dep_var='Class', test_size=0.3, split_mode='random')
y_test.value_counts()
#Ratio of the two classes:
y_test.value_counts()[0]/y_test.value_counts()[1]
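# (This ratio, roughly 543, is the value plugged into `class_weight={0: 1, 1: 543}` in the RandomForest below.)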
# ### RandomForest with Oversampling
# Let's first use the class_weight provided by sklearn to deal with this highly imbalanced data.
def rf(xs, y, n_estimators=40, max_samples=500,
max_features=0.5, min_samples_leaf=5, **kwargs):
return RandomForestClassifier(n_jobs=-1, n_estimators=n_estimators,
max_samples=max_samples, max_features=max_features,
min_samples_leaf=min_samples_leaf, oob_score=True, class_weight={0:1,1:543}).fit(xs, y)
m = rf(X_train, y_train)
confusion_matrix(y_test, np.round(m.predict(X_test)))
# With this technique we get about 39 out of 157 Fraud cases, although the results vary quite a lot!
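# To make "39 out of 157" concrete, the recall can also be computed directly (a quick sketch; the exact value changes from run to run):
from sklearn.metrics import recall_score
recall_score(y_test, np.round(m.predict(X_test)))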
# # Fake Data with VAE
# We want only the data points where y_train/test_train =1
X_train_fraud = X_train.iloc[np.where(y_train==1)[0]]
X_test_fraud = X_test.iloc[np.where(y_test==1)[0]]
# Let's build a dataloader for our data, still keeping the pre-defined training/test datasets the way they were.
from torch.utils.data import Dataset, DataLoader
class DataBuilder(Dataset):
def __init__(self, dataset):
self.x = dataset.values
self.x = torch.from_numpy(self.x).to(torch.float)
self.len=self.x.shape[0]
def __getitem__(self,index):
return self.x[index]
def __len__(self):
return self.len
# +
traindata_set=DataBuilder(X_train_fraud)
testdata_set=DataBuilder(X_test_fraud)
trainloader=DataLoader(dataset=traindata_set,batch_size=1024)
testloader=DataLoader(dataset=testdata_set,batch_size=1024)
# -
# Define the Variational Autoencoder (for more information check out my earlier [blogpost](https://lschmiddey.github.io/fastpages_/2021/03/14/tabular-data-variational-autoencoder.html)).
class Autoencoder(nn.Module):
def __init__(self,D_in,H=50,H2=12,latent_dim=3):
#Encoder
super(Autoencoder,self).__init__()
self.linear1=nn.Linear(D_in,H)
self.lin_bn1 = nn.BatchNorm1d(num_features=H)
self.linear2=nn.Linear(H,H2)
self.lin_bn2 = nn.BatchNorm1d(num_features=H2)
self.linear3=nn.Linear(H2,H2)
self.lin_bn3 = nn.BatchNorm1d(num_features=H2)
# Latent vectors mu and sigma
self.fc1 = nn.Linear(H2, latent_dim)
self.bn1 = nn.BatchNorm1d(num_features=latent_dim)
self.fc21 = nn.Linear(latent_dim, latent_dim)
self.fc22 = nn.Linear(latent_dim, latent_dim)
# Sampling vector
self.fc3 = nn.Linear(latent_dim, latent_dim)
self.fc_bn3 = nn.BatchNorm1d(latent_dim)
self.fc4 = nn.Linear(latent_dim, H2)
self.fc_bn4 = nn.BatchNorm1d(H2)
# Decoder
self.linear4=nn.Linear(H2,H2)
self.lin_bn4 = nn.BatchNorm1d(num_features=H2)
self.linear5=nn.Linear(H2,H)
self.lin_bn5 = nn.BatchNorm1d(num_features=H)
self.linear6=nn.Linear(H,D_in)
self.lin_bn6 = nn.BatchNorm1d(num_features=D_in)
self.relu = nn.ReLU()
def encode(self, x):
lin1 = self.relu(self.lin_bn1(self.linear1(x)))
lin2 = self.relu(self.lin_bn2(self.linear2(lin1)))
lin3 = self.relu(self.lin_bn3(self.linear3(lin2)))
fc1 = F.relu(self.bn1(self.fc1(lin3)))
r1 = self.fc21(fc1)
r2 = self.fc22(fc1)
return r1, r2
def reparameterize(self, mu, logvar):
if self.training:
std = logvar.mul(0.5).exp_()
eps = Variable(std.data.new(std.size()).normal_())
return eps.mul(std).add_(mu)
else:
return mu
def decode(self, z):
fc3 = self.relu(self.fc_bn3(self.fc3(z)))
fc4 = self.relu(self.fc_bn4(self.fc4(fc3)))
lin4 = self.relu(self.lin_bn4(self.linear4(fc4)))
lin5 = self.relu(self.lin_bn5(self.linear5(lin4)))
return self.lin_bn6(self.linear6(lin5))
def forward(self, x):
mu, logvar = self.encode(x)
z = self.reparameterize(mu, logvar)
return self.decode(z), mu, logvar
class customLoss(nn.Module):
def __init__(self):
super(customLoss, self).__init__()
self.mse_loss = nn.MSELoss(reduction="sum")
    # x_recon is the recon_batch created in the model's forward pass, x is the original batch, mu and logvar are the latent mean and log-variance
def forward(self, x_recon, x, mu, logvar):
loss_MSE = self.mse_loss(x_recon, x)
loss_KLD = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
return loss_MSE + loss_KLD
D_in = traindata_set.x.shape[1]
H = 50
H2 = 12
model = Autoencoder(D_in, H, H2).to(device)
optimizer = optim.Adam(model.parameters(), lr=1e-3)
loss_mse = customLoss()
# ## Train Model
log_interval = 50
val_losses = []
train_losses = []
test_losses = []
def train(epoch):
model.train()
train_loss = 0
for batch_idx, data in enumerate(trainloader):
data = data.to(device)
optimizer.zero_grad()
recon_batch, mu, logvar = model(data)
loss = loss_mse(recon_batch, data, mu, logvar)
loss.backward()
train_loss += loss.item()
optimizer.step()
if epoch % 200 == 0:
print('====> Epoch: {} Average training loss: {:.4f}'.format(
epoch, train_loss / len(trainloader.dataset)))
train_losses.append(train_loss / len(trainloader.dataset))
def test(epoch):
with torch.no_grad():
test_loss = 0
for batch_idx, data in enumerate(testloader):
data = data.to(device)
optimizer.zero_grad()
recon_batch, mu, logvar = model(data)
loss = loss_mse(recon_batch, data, mu, logvar)
test_loss += loss.item()
if epoch % 200 == 0:
print('====> Epoch: {} Average test loss: {:.4f}'.format(
epoch, test_loss / len(testloader.dataset)))
test_losses.append(test_loss / len(testloader.dataset))
epochs = 1500
for epoch in range(1, epochs + 1):
train(epoch)
test(epoch)
# We're still improving so keep going
epochs = 2500
optimizer = optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(1, epochs + 1):
train(epoch)
test(epoch)
epochs = 500
optimizer = optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(1, epochs + 1):
train(epoch)
test(epoch)
# Let's look at the results:
with torch.no_grad():
for batch_idx, data in enumerate(testloader):
data = data.to(device)
optimizer.zero_grad()
recon_batch, mu, logvar = model(data)
recon_row = recon_batch[0].cpu().numpy()
recon_row = np.append(recon_row, [1])
real_row = testloader.dataset.x[0].cpu().numpy()
real_row = np.append(real_row, [1])
df_compare = pd.DataFrame(np.stack((recon_row, real_row)), columns = cols)  # use a new name so the original df is not shadowed
df_compare
sigma = torch.exp(logvar/2)
mu.mean(axis=0), sigma.mean(axis=0)
# sample z from q
no_samples = 20
q = torch.distributions.Normal(mu.mean(axis=0), sigma.mean(axis=0))
z = q.rsample(sample_shape=torch.Size([no_samples]))
with torch.no_grad():
pred = model.decode(z).cpu().numpy()
df_fake = pd.DataFrame(pred)
df_fake['Class']=1
df_fake.columns = cols
df_fake['Class'] = np.round(df_fake['Class']).astype(int)
df_fake['Time'] = (df_fake['Time']*std_time)+mean_time
df_fake['Amount'] = (df_fake['Amount']*std_amount)+mean_amount
df_fake.head()
df_fake['Amount'].mean()
df.groupby('Class').mean()['Amount']
# ## Use fake data for oversampling in RandomForest
y_train.value_counts()
# So let's build about 190,000 fake fraud detection cases:
no_samples = 190_000
q = torch.distributions.Normal(mu.mean(axis=0), sigma.mean(axis=0))
z = q.rsample(sample_shape=torch.Size([no_samples]))
with torch.no_grad():
pred = model.decode(z).cpu().numpy()
# Concat to our X_train:
X_train_augmented = np.vstack((X_train.values, pred))
y_train_augmented = np.append(y_train.values, np.repeat(1,no_samples))
X_train_augmented.shape
# We now have roughly as many fraud cases as we have non-fraud cases.
# ## Train RandomForest
def rf_aug(xs, y, n_estimators=40, max_samples=500,
max_features=0.5, min_samples_leaf=5, **kwargs):
return RandomForestClassifier(n_jobs=-1, n_estimators=n_estimators,
max_samples=max_samples, max_features=max_features,
min_samples_leaf=min_samples_leaf, oob_score=True).fit(xs, y)
m_aug = rf_aug(X_train_augmented, y_train_augmented)
confusion_matrix(y_test, np.round(m_aug.predict(X_test)))
confusion_matrix(y_test, np.round(m.predict(X_test)))
# Look at that! We managed to find 127 out of 157! If our goal was to detect as many of the fraud cases as possible, then we clearly succeeded. Maybe you should think of this technique when you're dealing with highly imbalanced datasets in the future.
| _notebooks/2021-03-17-data-augmentation-tabular-data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# # More API Examples
#
# This notebook contains EVEN MORE API examples so you can get an idea of the types of services available. There's a world of APIs out there for the taking, and we cannot teach them all to you. We can only teach you how they work in general... the details are 100% up to you!
#
# # Caller Id/ Get a location for a Phone number
#
# This uses the cosmin phone number lookup API as found on https://market.mashape.com/explore
#
# This api requires `headers` to be passed into the `get()` request. The API key and the requested output of `json` are sent into the header.
#
# Enter a phone number as input like `3154432911` and then the API will output JSON data consisting of caller ID data and GPS coordinates.
# +
import requests
phone = input("Enter your phone number: ")
params = { 'phone' : phone }
headers={ "X-Mashape-Key": "<KEY>",
"Accept": "application/json" }
response = requests.get("https://cosmin-us-phone-number-lookup.p.mashape.com/get.php", params=params, headers=headers )
phone_data = response.json()
phone_data
# -
# # Get current exchange rates
#
# This example uses http://fixer.io to get the current currency exchange rates.
#
# +
import requests
params = { 'base' : 'USD' } # US Dollars
response = requests.get("http://api.fixer.io/latest", params=params )
rates = response.json()
rates
# -
# # GeoIP lookup: Find the lat/lng of an IP Address
#
# Every computer on the internet has a unique IP Address. This service when given an IP address will return back where that IP Address is located. Pretty handy API which is commonly used with mobile devices to determine approximate location when the GPS is turned off.
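# Note: 192.168.x.x is a private address, so the lookup below won't return a real location; substitute a public IP to see meaningful results.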
ip = "192.168.3.11"
response = requests.get('http://freegeoip.net/json/' + ip)
response.json()
# ## An API for the political junkie in you...
#
# The Sunlight Foundation has some pretty awesome APIs for the retrieval of political data. For example, here we use the Congress API
# https://sunlightlabs.github.io/congress/ to retrieve the names of the legislators for the city of Syracuse's postal code.
#
# congress API
zip_code = '13210'
params = { 'zip' : zip_code }
response = requests.get('https://congress.api.sunlightfoundation.com/legislators/locate', params = params)
legislators = response.json()
for legislator in legislators['results']:
l = legislator
print("%s %s (%s-%s) email: %s" % (l['first_name'], l['last_name'], l['chamber'], l['party'],l['oc_email']))
# ## Searching iTunes
#
# Here's an example of the iTunes search API. I'm searching for "Mandatory fun" and printing out the track names.
term = 'Mandatory Fun'
params = { 'term' : term }
response = requests.get('https://itunes.apple.com/search', params = params)
search = response.json()
for r in search['results']:
print(r['trackName'])
# # Earthquakes anyone?
#
# Here's an example of the significant earthquakes from the past week. Information on this API can be found here:
#
# http://earthquake.usgs.gov/earthquakes/feed/v1.0/geojson.php
#
response = requests.get('https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/significant_week.geojson')
quakes = response.json()
for q in quakes['features']:
print(q['properties']['title'])
| content/lessons/11/Watch-Me-Code/WMC3-More-API-Examples.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Generative Adversarial Network
#
# In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
#
# GANs were [first reported on](https://arxiv.org/abs/1406.2661) in 2014 from <NAME> and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out:
#
# * [Pix2Pix](https://affinelayer.com/pixsrv/)
# * [CycleGAN](https://github.com/junyanz/CycleGAN)
# * [A whole list](https://github.com/wiseodd/generative-models)
#
# The idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts if the data it's received is real or fake. The generator is trained to fool the discriminator; it wants to output data that looks _as close as possible_ to real data. And the discriminator is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistinguishable from real data to the discriminator.
#
# 
#
# The general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to construct its fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can fool the discriminator.
#
# The output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates a real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow.
# +
# %matplotlib inline
import pickle as pkl
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
# -
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data')
# ## Model Inputs
#
# First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input `inputs_real` and the generator input `inputs_z`. We'll assign them the appropriate sizes for each of the networks.
#
# >**Exercise:** Finish the `model_inputs` function below. Create the placeholders for `inputs_real` and `inputs_z` using the input sizes `real_dim` and `z_dim` respectively.
def model_inputs(real_dim, z_dim):
    # one possible solution to the exercise
    inputs_real = tf.placeholder(tf.float32, (None, real_dim), name='input_real')
    inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
return inputs_real, inputs_z
# ## Generator network
#
# 
#
# Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
#
# #### Variable Scope
# Here we need to use `tf.variable_scope` for two reasons. Firstly, we're going to make sure all the variable names start with `generator`. Similarly, we'll prepend `discriminator` to the discriminator variables. This will help out later when we're training the separate networks.
#
# We could just use `tf.name_scope` to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also _sample from it_ as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the `reuse` keyword for `tf.variable_scope` to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
#
# To use `tf.variable_scope`, you use a `with` statement:
# ```python
# with tf.variable_scope('scope_name', reuse=False):
# # code here
# ```
#
# Here's more from [the TensorFlow documentation](https://www.tensorflow.org/programmers_guide/variable_scope#the_problem) to get another look at using `tf.variable_scope`.
#
# #### Leaky ReLU
# TensorFlow doesn't provide a dedicated leaky ReLU operation in this version (newer releases do ship `tf.nn.leaky_relu`), so we'll make one ourselves. For this you can just take the outputs from a linear fully connected layer and pass them to `tf.maximum`. Typically, a parameter `alpha` sets the magnitude of the output for negative values. So, the output for negative input (`x`) values is `alpha*x`, and the output for positive `x` is `x`:
# $$
# f(x) = max(\alpha * x, x)
# $$
#
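# For example, with `h1` being the output of a dense layer, a leaky ReLU is simply `h1 = tf.maximum(alpha * h1, h1)`.
#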
# #### Tanh Output
# The generator has been found to perform best with $tanh$ for its output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1.
#
# >**Exercise:** Implement the generator network in the function below. You'll need to return the tanh output. Make sure to wrap your code in a variable scope, with 'generator' as the scope name, and pass the `reuse` keyword argument from the function to `tf.variable_scope`.
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
''' Build the generator network.
Arguments
---------
z : Input tensor for the generator
out_dim : Shape of the generator output
n_units : Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out, logits:
'''
    # one possible solution to the exercise
    with tf.variable_scope('generator', reuse=reuse):
        # Hidden layer
        h1 = tf.layers.dense(z, n_units, activation=None)
        # Leaky ReLU
        h1 = tf.maximum(alpha * h1, h1)
        # Logits and tanh output
        logits = tf.layers.dense(h1, out_dim, activation=None)
        out = tf.tanh(logits)
return out
# ## Discriminator
#
# The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
#
# >**Exercise:** Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. Make sure to wrap your code in a variable scope, with 'discriminator' as the scope name, and pass the `reuse` keyword argument from the function arguments to `tf.variable_scope`.
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
''' Build the discriminator network.
Arguments
---------
x : Input tensor for the discriminator
n_units: Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out, logits:
'''
    # one possible solution to the exercise
    with tf.variable_scope('discriminator', reuse=reuse):
        # Hidden layer
        h1 = tf.layers.dense(x, n_units, activation=None)
        # Leaky ReLU
        h1 = tf.maximum(alpha * h1, h1)
        # Sigmoid output and logits
        logits = tf.layers.dense(h1, 1, activation=None)
        out = tf.sigmoid(logits)
return out, logits
# ## Hyperparameters
# Size of input image to discriminator
input_size = 784 # 28x28 MNIST images flattened
# Size of latent vector to generator
z_size = 100
# Sizes of hidden layers in generator and discriminator
g_hidden_size = 128
d_hidden_size = 128
# Leak factor for leaky ReLU
alpha = 0.01
# Label smoothing
smooth = 0.1
# ## Build network
#
# Now we're building the network from the functions defined above.
#
# First is to get our inputs, `input_real, input_z` from `model_inputs` using the sizes of the input and z.
#
# Then, we'll create the generator, `generator(input_z, input_size)`. This builds the generator with the appropriate input and output sizes.
#
# Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as `g_model`. So the real data discriminator is `discriminator(input_real)` while the fake discriminator is `discriminator(g_model, reuse=True)`.
#
# >**Exercise:** Build the network from the functions you defined earlier.
# +
tf.reset_default_graph()
# Create our input placeholders (one possible solution to the exercise)
input_real, input_z = model_inputs(input_size, z_size)
# Generator network here
g_model = generator(input_z, input_size, n_units=g_hidden_size, alpha=alpha)
# g_model is the generator output
# Discriminator network here
d_model_real, d_logits_real = discriminator(input_real, n_units=d_hidden_size, alpha=alpha)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True, n_units=d_hidden_size, alpha=alpha)
# -
# ## Discriminator and Generator Losses
#
# Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, `d_loss = d_loss_real + d_loss_fake`. The losses will be sigmoid cross-entropies, which we can get with `tf.nn.sigmoid_cross_entropy_with_logits`. We'll also wrap that in `tf.reduce_mean` to get the mean for all the images in the batch. So the losses will look something like
#
# ```python
# tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
# ```
#
# For the real image logits, we'll use `d_logits_real` which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter `smooth`. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like `labels = tf.ones_like(tensor) * (1 - smooth)`
#
# The discriminator loss for the fake data is similar. The logits are `d_logits_fake`, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
#
# Finally, the generator losses are using `d_logits_fake`, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants to discriminator to output ones for fake images.
#
# >**Exercise:** Calculate the losses for the discriminator and the generator. There are two discriminator losses, one for real images and one for fake images. For the real image loss, use the real logits and (smoothed) labels of ones. For the fake image loss, use the fake logits with labels of all zeros. The total discriminator loss is the sum of those two losses. Finally, the generator loss again uses the fake logits from the discriminator, but this time the labels are all ones because the generator wants to fool the discriminator.
# +
# Calculate losses (one possible solution to the exercise)
d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_logits_real) * (1 - smooth)))
d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_logits_fake)))
d_loss = d_loss_real + d_loss_fake
g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_logits_fake)))
# -
# ## Optimizers
#
# We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use `tf.trainable_variables()`. This creates a list of all the variables we've defined in our graph.
#
# For the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with `generator`. So, we just need to iterate through the list from `tf.trainable_variables()` and keep variables that start with `generator`. Each variable object has an attribute `name` which holds the name of the variable as a string (`var.name == 'weights_0'` for instance).
#
# We can do something similar with the discriminator. All the variables in the discriminator start with `discriminator`.
#
# Then, in the optimizer we pass the variable lists to the `var_list` keyword argument of the `minimize` method. This tells the optimizer to only update the listed variables. Something like `tf.train.AdamOptimizer().minimize(loss, var_list=var_list)` will only train the variables in `var_list`.
#
# >**Exercise: ** Below, implement the optimizers for the generator and discriminator. First you'll need to get a list of trainable variables, then split that list into two lists, one for the generator variables and another for the discriminator variables. Finally, using `AdamOptimizer`, create an optimizer for each network that update the network variables separately.
# +
# Optimizers
learning_rate = 0.002
# Get the trainable_variables, split into G and D parts (one possible solution to the exercise)
t_vars = tf.trainable_variables()
g_vars = [var for var in t_vars if var.name.startswith('generator')]
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)
# -
# ## Training
# +
batch_size = 100
epochs = 100
samples = []
losses = []
saver = tf.train.Saver(var_list = g_vars)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images, reshape and rescale to pass to D
batch_images = batch[0].reshape((batch_size, 784))
batch_images = batch_images*2 - 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})
_ = sess.run(g_train_opt, feed_dict={input_z: batch_z})
# At the end of each epoch, get the losses and print them out
train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})
train_loss_g = g_loss.eval({input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
# Sample from generator as we're training for viewing afterwards
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
samples.append(gen_samples)
saver.save(sess, './checkpoints/generator.ckpt')
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
# -
# ## Training loss
#
# Here we'll check out the training losses for the generator and discriminator.
# +
# %matplotlib inline
import matplotlib.pyplot as plt
# -
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend()
# ## Generator samples from training
#
# Here we can view samples of images from the generator. First we'll look at images taken while training.
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')
return fig, axes
# Load samples from generator taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
# These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.
_ = view_samples(-1, samples)
# Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
# +
rows, cols = 10, 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)
for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):
ax.imshow(img.reshape((28,28)), cmap='Greys_r')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
# -
# It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.
# ## Sampling from the generator
#
# We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples!
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
view_samples(0, [gen_samples])
| GAN/Intro_to_GANs_Exercises.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="3516UUoRlaz8" colab_type="code" colab={}
lower = 1042000
upper = 702648265
for num in range(lower, upper + 1):
    order = len(str(num))  # an Armstrong (narcissistic) number equals the sum of its digits, each raised to the number of digits
    temp = num
    digit_sum = 0
    while temp > 0:
        digit = temp % 10
        digit_sum = digit_sum + digit ** order
        temp = temp // 10
    if digit_sum == num:
        print("First Armstrong number in the range is", num)
        break
| Assignment(Day4)-B-7.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # An Introduction to `pandas`
#
# Pandas! They are adorable animals. You might think they are [the worst animal ever](https://www.reddit.com/r/todayilearned/comments/3azkqx/til_naturalist_chris_packham_said_he_would_eat/cshqy9y) but that is not true. You might sometimes think `pandas` is the worst library ever, and that is only *kind of* true.
#
# The important thing is **use the right tool for the job**. `pandas` is good for some stuff, SQL is good for some stuff, writing raw Python is good for some stuff. You'll figure it out as you go along.
#
# Now let's start coding. Hopefully you did `pip install pandas` before you started up this notebook.
# +
# import pandas, but call it pd. Why? Because that's What People Do.
# -
import pandas as pd
# When you import pandas, you use `import pandas as pd`. That means instead of typing `pandas` in your code you'll type `pd`.
#
# You don't *have* to, but every other person on the planet will be doing it, so you might as well.
# Now we're going to read in a file. Our file is called `NBA-Census-10.14.2013.csv` because we're **sports moguls**. `pandas` can `read_` different types of files, so try to figure it out by typing `pd.read_` and hitting tab for autocomplete.
# We're going to call this df, which means "data frame"
# It isn't in UTF-8 (I saved it from my mac!) so we need to set the encoding
df = pd.read_csv("NBA-Census-10.14.2013.csv", encoding ="mac_roman")
#this is a data frame (df)
# **A dataframe is basically a spreadsheet**, except it lives in the world of Python or the statistical programming language R. They can't call it a spreadsheet because then people would think those programmers used Excel, which would make them boring and normal and they'd have to wear a tie every day.
#
# # Selecting rows
#
# Now let's look at our data, since that's what data is for
# Let's look at all of it
df
# If we scroll we can see all of it. But maybe we don't want to see all of it. Maybe we hate scrolling?
# Look at the first few rows
df.head() #shows first 5 rows
# ...but maybe we want to see more than a measly five results?
# Let's look at MORE of the first few rows
df.head(10)
# But maybe we want to make a basketball joke and see the **final four?**
# Let's look at the final few rows
df.tail(4)
# So yes, `head` and `tail` work kind of like the terminal commands. That's nice, I guess.
#
# But maybe we're incredibly demanding (which we are) and we want, say, **the 6th through the 8th row** (which we do). Don't worry (which I know you were), we can do that, too.
# Show the 6th through the 8th rows
df[5:8]
# It's kind of like an array, right? Except where in an array we'd say `df[0]` this time we need to give it two numbers, the start and the end.
#
# # Selecting columns
#
# But jeez, my eyes don't want to go that far over the data. I only want to see, uh, name and age.
# Get the names of the columns, just because
#columns_we_want = ['Name', 'Age']
#df[columns_we_want]
# If we want to be "correct" we add .values on the end of it
df.columns
# Select only name and age
# Combing that with .head() to see not-so-many rows
columns_we_want = ['Name', 'Age']
df[columns_we_want].head()
# +
# We can also do this all in one line, even though it starts looking ugly
# (unlike the cute bears pandas looks ugly pretty often)
df[['Name', 'Age',]].head()
# -
# **NOTE:** That was not `df['Name', 'Age']`, it was `df[['Name', 'Age']]`. You'll definitely type it wrong all of the time. When things break with pandas it's probably because you forgot to put in a million brackets.
# # Describing your data
#
# A powerful tool of pandas is being able to select a portion of your data, *because who ordered all that data anyway*.
df.head()
# I want to know how **many people are in each position**. Luckily, pandas can tell me!
# Grab the POS column, and count the different values in it.
df['POS'].value_counts()
# **Now that was a little weird, yes** - we used `df['POS']` instead of `df[['POS']]` when viewing the data's details.
#
# But now I'm curious about numbers: **how old is everyone?** Maybe we could, I don't know, get some statistics about age? Some statistics to **describe** age?
#race
race_counts = df['Race'].value_counts()
race_counts
# Summary statistics for Age
df['Age'].describe()
df.describe()
# That's pretty good. Does it work for everything? How about the money?
df['2013 $'].describe()
#The result isn't very useful because the money column is stored as a string.
# Unfortunately because that has dollar signs and commas it's thought of as a string. **We'll fix it in a second,** but let's try describing one more thing.
# Doing more describing
df['Ht (In.)'].describe()
# That's stupid, though, what's an inch even look like? What's 80 inches? I don't have a clue. If only there were some way to manipulate our data.
#
# # Manipulating data
#
# Oh wait there is, HA HA HA.
# Take another look at our inches, but only the first few
df['Ht (In.)'].head()
# Divide those inches by 12
#number_of_inches = 300
#number_of_inches / 12
df['Ht (In.)'].head() / 12
# Let's divide ALL of them by 12
df['Ht (In.)'] / 12
# Can we get statistics on those?
height_in_feet = df['Ht (In.)'] / 12
height_in_feet.describe()
# Let's look at our original data again
df.head(3)
# Okay that was nice but unfortunately we can't do anything with it. It's just sitting there, separate from our data. If this were normal code we could do `blahblah['feet'] = blahblah['Ht (In.)'] / 12`, but since this is pandas, we can't. Right? **Right?**
# Store a new column
df['feet'] = df['Ht (In.)'] / 12
df.head()
# That's cool, maybe we could do the same thing with their salary? Take out the $ and the , and convert it to an integer?
# Can't just use .replace
# Need to use this weird .str thing
# Can't just immediately replace the , either
# Need to use the .str thing before EVERY string method
# Describe still doesn't work.
# Let's convert it to an integer using .astype(int) before we describe it
# Maybe we can just make them millions?
# Unfortunately one is "n/a" which is going to break our code, so we can make n/a be 0
# Remove the .head() piece and save it back into the dataframe
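# One way to do what the notes above describe (a sketch for a reasonably recent pandas; column names as in this dataset):
salary = df['2013 $'].str.replace('[$,]', '', regex=True).replace('n/a', 0).astype(int)
df['millions'] = salary / 1000000
df['millions'].describe()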
# The average basketball player makes 3.8 million dollars and is a little over six and a half feet tall.
#
# But who cares about those guys? I don't care about those guys. They're boring. I want the real rich guys!
#
# # Sorting and sub-selecting
# +
# This is just the first few guys in the dataset. Can we order it?
# +
# Let's try to sort them, ascending value
df.sort_values('feet')
# -
# Those guys are making nothing! If only there were a way to sort from high to low, a.k.a. descending instead of ascending.
# It isn't descending = True, unfortunately
df.sort_values('feet', ascending=False).head()
# We can use this to find the oldest guys in the league
df.sort_values('Age', ascending=False).head()
# Or the youngest, by taking out 'ascending=False'
df.sort_values('feet').head()
# But sometimes instead of just looking at them, I want to do stuff with them. Play some games with them! Dunk on them~ `describe` them! And we don't want to dunk on everyone, only the players above 7 feet tall.
#
# First, we need to check out **boolean things.**
# Get a big long list of True and False for every single row.
df['feet'] > 6.5
# We could use value counts if we wanted
above_or_below_six_five = df['feet'] > 6.5
above_or_below_six_five.value_counts()
# But we can also apply this to every single row to say whether YES we want it or NO we don't
# +
# Instead of putting column names inside of the brackets, we instead
# put the True/False statements. It will only return the players taller
# than 6.5 feet
df[df['feet'] > 6.5]
# -
df['Race'] == 'Asian'
df[df['Race'] == 'Asian']
# Or only the guards
df[df['POS'] == 'G'].head()
#Players shorter than 6.5 feet
df['feet'] < 6.5
#Every column you want to query needs parentheses around it
#Guards that are shorter than 6.5 feet
#this is combination of both
df[(df['POS'] == 'G') & (df['feet'] < 6.5)].head()
#We can save stuff
centers = df[df['POS'] == 'C']
guards = df[df['POS'] == 'G']
centers['feet'].describe()
guards['feet'].describe()
# It might be easier to break down the booleans into separate variables
# We can save this stuff
# Maybe we can compare them to taller players?
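# One way that might look (a small sketch reusing the `POS` and `feet` columns from above):
# +
# Save each condition in its own variable, then combine them
is_guard = df['POS'] == 'G'
is_tall = df['feet'] > 6.5
tall_guards = df[is_guard & is_tall]
tall_guards['feet'].describe()
# -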
# # Drawing pictures
#
# Okay okay enough code and enough stupid numbers. I'm visual. I want graphics. **Okay?????** Okay.
# This will scream we don't have matplotlib.
# `matplotlib` is a graphing library. It's the Python way to make graphs!
# this will open up a weird window that won't do anything
# So instead you run this code
# But that's ugly. There's a thing called ``ggplot`` for R that looks nice. We want to look nice. We want to look like ``ggplot``.
# Import matplotlib
# What's available?
# Use ggplot
# Make a histogram
# Try some other styles
# That might look better with a little more customization. So let's customize it.
# Pass in all sorts of stuff!
# Most from http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.hist.html
# .range() is a matplotlib thing
# I want more graphics! **Do tall people make more money?!?!**
# How does experience relate with the amount of money they're making?
# At least we can assume height and weight are related
# http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.plot.html
# We can also use plt separately
# It's SIMILAR but TOTALLY DIFFERENT
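# A minimal sketch of those plotting steps (using the hypothetical `millions` column sketched earlier; swap in the real salary column as needed):
# +
import matplotlib.pyplot as plt
# %matplotlib inline
plt.style.use('ggplot')
# A histogram of heights, with a little customization
df['feet'].hist(bins=20)
# -
# +
# Do tall people make more money? A quick scatter plot gives a hint
df.plot(kind='scatter', x='feet', y='millions')
# -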
| 07/.ipynb_checkpoints/07 - Introduction to Pandas-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="TUZbVcBc89Kh" tags=["remove_cell"]
# # Rent, save and pay in cash, or finance a home? A case study.
# > This is a case study in financial mathematics solved in Python. The exercise involves working out different scenarios for acquiring - or not - a property. Along the way, see how to manipulate tables with the Pandas package and produce plots with Matplotlib.
#
# - author: [<NAME>, <NAME>]
# - categories: [Educação Financeira, Engenharia Econômica, Pandas, Matplotlib, Python]
# + [markdown] colab_type="text" id="ZRK6jcsI89Ki"
# ## Introduction
#
# Financial mathematics is a fundamental discipline for professionals in many fields and, in addition, plays an important role in managing one's own resources and the household budget. It is precisely there that many people have their first contact with programming, perhaps without even realizing it, when they use a spreadsheet application to keep track of household expenses. Truth be told, spreadsheets are very useful data structures.
#
# This post is a study of didactic scenarios about acquiring - or not - a property. It covers three situations:
# * Buying with a down payment plus financing;
# * Renting and investing monthly;
# * Saving up and buying in cash.
#
# To that end, it shows how to solve the proposed problem with two important tools:
#
# * [Pandas](https://pandas.pydata.org/) is a Python package providing fast, flexible and expressive data structures designed to make working with "relational" or "labeled" data easy and intuitive. It aims to be the fundamental high-level building block for practical, real-world data analysis in Python. It also has the broader goal of becoming the most powerful and flexible open source data analysis / manipulation tool available in any language. Pandas is well suited to many different kinds of data:
#     * Tabular data with heterogeneously typed columns, as in an SQL table, a `.csv` file or an Excel spreadsheet;
#     * Ordered and unordered (not necessarily fixed-frequency) time series data;
#     * Arbitrary matrix data (homogeneously or heterogeneously typed) with row and column labels;
#     * Any other form of observational / statistical data sets. The data actually need not be labeled at all to be placed into a pandas data structure.
# * [Matplotlib](https://matplotlib.org/) is a Python 2D plotting library which produces publication-quality figures in a variety of hardcopy formats and interactive environments across platforms. Matplotlib can be used in Python scripts, the Python and IPython shells, the Jupyter notebook, web application servers, and four graphical user interface toolkits. **Matplotlib tries to make easy things easy and hard things possible**. You can generate plots, histograms, power spectra, bar charts, error charts, scatter plots, etc., with just a few lines of code.
# + colab={} colab_type="code" id="FWYgQiX789Kj"
# The first lines of code
# import both libraries
import pandas as pd
import matplotlib.pyplot as plt
# + colab={} colab_type="code" id="Qws5FZtl89Ko"
# This block tweaks some of the default settings
# used to render the figures
plt.rcdefaults()
# https://matplotlib.org/3.1.0/gallery/style_sheets/style_sheets_reference.html
plt.style.use('ggplot')
# https://matplotlib.org/3.1.1/tutorials/introductory/customizing.html
plt.rcParams.update({'figure.dpi' : 100,
"figure.figsize" : (6, 6),
"axes.formatter.limits" : (-8, 8)
})
# + colab={} colab_type="code" id="izzwQTSC89Kr"
# This block turns off some warning messages I was getting from Pandas;
# one should be careful when doing this kind of thing, but the messages were
# cluttering the visual presentation of this post
#https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
pd.options.mode.chained_assignment = None # default='warn'
# + [markdown] colab_type="text" id="JzKBuctR89Ku"
# If you reproduce this content, in part or in full, please provide a link to the original material:
# * https://fschuch.com/blog/2020/04/11/alugar-economizar-e-pagar-a-vista-ou-financiar-um-imovel-um-estudo-de-caso
#
# And please, support our authors [@fschuch](https://twitter.com/fschuch) and [@mathiazst](https://twitter.com/mathiazst).
# + colab={} colab_type="code" id="Dobxq0qm89Ku"
'''
If you reproduce this content, in part or in
full, please provide a link to the
original material:
https://fschuch.com/blog/2020/04/11/alugar-economizar-e-pagar-a-vista-ou-financiar-um-imovel-um-estudo-de-caso
And please, support our authors:
https://twitter.com/fschuch
https://twitter.com/mathiazst
'''
def copyright():
plt.annotate('© 2020 Aprenda.py, por <NAME> & <NAME>',
xy=(0.5,0.01),
xycoords='axes fraction',
ha='center', va='bottom');
# + [markdown] colab_type="text" id="tElOw2wg89Kw"
# > Note: This is not a purchase recommendation. Past returns are no guarantee of future returns. This is a study of didactic, hypothetical scenarios. The authors fully disclaim any responsibility for the use, interpretation and consequences of the direct or indirect use of any information contained in this material.
# + [markdown] colab_type="text" id="VWBAixSa89Kw"
# ## Amortization Systems
#
# When we talk about payment plans, or amortization systems, there are four fundamental parameters:
# * Total time \\(N\\);
# * Interest rate \\(i\\);
# * Initial outstanding balance \\(SD_0\\);
# * Installment value, which in turn is split into:
#     * Amortization, the portion that actually reduces the outstanding balance;
#     * Interest, the portion paid as remuneration to the lender,
#
# where we observe that:
# \\[ \text{Amortization} = \text{Installment} - \text{Interest}. \\]
# + [markdown] colab_type="text" id="Tyx5CPXm89Kw"
# At least two classic models deal with this relationship:
#
# * **Constant Amortization System (SAC)**: As the name suggests, the amortization is constant over the entire period:
# $$
# \text{Amortization}_n = \dfrac{SD_0}{N}
# $$
# The interest is obtained by multiplying the interest rate by the outstanding balance of the previous period:
# $$
# \text{Interest}_n = i \times SD_{n-1}
# $$
# And, as we saw, the installment is the sum of the two:
# $$
# \text{Installment}_n = \text{Interest}_n + \text{Amortization}_n.
# $$
# Note that in this system the outstanding balance decreases linearly and, in addition, the installments gradually shrink over time.
#
# * Another option is the **Price table** (Tabela Price), or French amortization system. Here the installment value is constant in time and is obtained from the equation:
# $$
# \text{Installment} = SD_0 \dfrac{i}{1-(1+i)^{-N}}.
# $$
# The interest is again obtained by:
# $$
# \text{Interest}_n = i \times SD_{n-1}.
# $$
# And finally we obtain the amortization portion of each installment as:
# $$
# \text{Amortization}_n = \text{Installment}_n - \text{Interest}_n.
# $$
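#
# As a quick sanity check of the Price formula: with \\(SD_0 = 1000\\), \\(i = 0.05\\) and \\(N = 4\\), the installment works out to \\(1000 \times 0.05 / (1 - 1.05^{-4}) \approx 282.01\\), which matches the Price table produced further below. The small cell that follows simply verifies the arithmetic:
# +
# Worked check of the Price installment for SD0 = 1000, i = 0.05, N = 4
round(1000 * 0.05 / (1 - (1 + 0.05)**(-4)), 2)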
# + [markdown] colab_type="text" id="ssfDIdJf89Kx"
# With all of this in mind, we can build a Python routine that returns a Pandas `DataFrame`, which is nothing more than a table. It includes the values obtained for interest, amortization, installment and outstanding balance at each period `n`, as a function of the chosen payment system (SAC or Price), the interest rate `i`, the number of time periods `N` and the initial outstanding balance `SD0`. Here is the function:
# + colab={} colab_type="code" id="B0yR5fj889Kx"
def sistema_pagamento(sis,i,N,SD0):
    '''
    Computes the interest, amortization, installment
    values and outstanding balance according to the
    chosen amortization system
    Args:
        sis (str): Amortization system
            (SAC or Price)
        i (float): Interest rate
        N (int): Number of time periods
        SD0 (float): Initial outstanding balance
    Returns:
        df: DataFrame with the columns Juros (interest),
            Amortização (amortization), Parcela (installment)
            and Saldo Devedor (outstanding balance)
    '''
df = pd.DataFrame(columns=['Juros',
'Amortização',
'Parcela',
'Saldo Devedor'],
index=range(N+1)
)
df['Saldo Devedor'][0] = SD0
if sis.lower() == 'sac':
df['Amortização'][1:] = SD0/N
for n in df.index[1:]:
df['Juros'][n] = round(df['Saldo Devedor'][n-1]*i,2)
df['Parcela'][n] = df['Juros'][n]+df['Amortização'][n]
df['Saldo Devedor'][n] = df['Saldo Devedor'][n-1] - df['Amortização'][n]
elif sis.lower() == 'price':
df['Parcela'][1:] = round(SD0*(i)/(1-(1+i)**(-N)),2)
for n in df.index[1:]:
df['Juros'][n] = round(df['Saldo Devedor'][n-1]*i,2)
df['Amortização'][n] = df['Parcela'][n] - df['Juros'][n]
df['Saldo Devedor'][n] = df['Saldo Devedor'][n-1] - df['Amortização'][n]
else:
        print('Invalid value for sis; try again with sac or price')
    # Here we adjust the last installment if a residual value remains due to rounding
df['Parcela'][N] += df['Saldo Devedor'][N]
df['Saldo Devedor'][N] -= df['Saldo Devedor'][N]
return df
# + [markdown] colab_type="text" id="_cepuZz089Ky"
# Now we can see an example of the function in action for both payment systems, with an interest rate of 5%, 4 time periods and an initial outstanding balance of R$1,000:
# + colab={} colab_type="code" id="ik1tWUjH89Ky" outputId="0721f843-8fd4-4198-c4da-51fed93ac464"
sistema_pagamento('sac',0.05,4,1000)
# + [markdown] colab_type="text" id="_o5e6HBP89K2"
# Note in the table above a few positions marked with `NaN`, short for *Not a Number*. In our example they occur at time 0, where no values were provided for some columns. `NaN` is not necessarily a problem; the Pandas library is perfectly capable of handling missing data (more details [here](https://pandas.pydata.org/pandas-docs/stable/user_guide/missing_data.html)). Note that these cells can be set to any desired value with the `fillna()` method, which we will use in the second example:
# + colab={} colab_type="code" id="pGVH3bpc89K2" outputId="516fcb9f-f354-4bc0-b432-b36eb7bd7440"
sistema_pagamento('price',0.05,4,1000).fillna(0)
# + [markdown] colab_type="text" id="7nFgo6Ot89K4"
# One of the advantages of working with tabular data is that it can easily be turned into a plot; see how we do that with just a few lines of code:
# + colab={} colab_type="code" id="_ovaF3Bs89K4" outputId="1234300b-a1e3-4ea3-f8bf-2d192edd312f"
fig, (ax1, ax2) = plt.subplots(nrows=2,
ncols=1,
sharex=True,
sharey=True)
sistema_pagamento('sac',0.05,30,1000).plot(ax=ax1,title='Sistema SAC')
sistema_pagamento('price',0.05,30,1000).plot(ax=ax2,title='Tabela Price')
ax2.set_xlabel('Tempo')
ax1.set_ylabel('Valor - R$')
ax2.set_ylabel('Valor - R$');
# + [markdown] colab_type="text" id="4LGbfR4q89K5"
# Notice in the figure above all the remarks we made earlier about the two payment schemes.
# + [markdown] colab_type="text" id="VGYTWseu89K5" toc-hr-collapsed=false
# ## Scenarios
# + [markdown] colab_type="text" id="5LVTzJYb89K6"
# Here we set the calculation parameters used in the different scenarios. They are:
# * Property value `valor_do_imovel`;
# * Down payment `entrada`;
# * Annual interest rate of the financing `taxa_financeamento_anual`;
# * Annual rent rate `taxa_aluguel_anual`: the fraction of the total property price that would be paid as rent in one year;
# * Expected annual return if the contributions are invested `rendimento_investimentos_anual`;
# * How many years are expected for the payment `tempo_anos`;
# * Amortization system `sistema` (SAC or Price).
#
# Furthermore, in these examples all parameters are assumed to remain constant over time, which certainly does not happen in real situations.
# + colab={} colab_type="code" id="eFP4ST7589K6"
valor_do_imovel = 500000.00
entrada = 100000.00
taxa_financeamento_anual = 0.0942
taxa_aluguel_anual = 0.04
rendimento_investimentos_anual = 0.08
tempo_anos = 30
sistema = 'SAC'
#sistema = 'PRICE'
# + [markdown] colab_type="text" id="jOq5FxRJ89K8"
# Now we obtain the monthly interest rate corresponding to the annualized values used as input. Remember that:
#
# $$
# i_{\text{monthly}} = (1+ i_{\text{annual}})^\frac{1}{12}-1,
# $$
#
# so we can write the following function:
# + colab={} colab_type="code" id="EqQXzhiH89K9"
def taxa_aa_para_am(i):
'''
    Receives an annual interest rate and
    returns the equivalent monthly rate.
'''
return (1.+i)**(1./12.)-1.
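# + [markdown]
# As a quick check of the conversion, 9.42% per year corresponds to roughly 0.753% per month:
# +
round(taxa_aa_para_am(0.0942), 5)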
# + [markdown] colab_type="text" id="sjOWrBRO89LI"
# In the following block we obtain the amount to be financed as the property value minus the down payment; in addition, we convert the rates to monthly terms, and the time as well:
# + colab={} colab_type="code" id="hmgYSSmJ89LI"
valor_do_financiamento = valor_do_imovel - entrada
taxa_financeamento = taxa_aa_para_am(taxa_financeamento_anual)
taxa_aluguel = taxa_aa_para_am(taxa_aluguel_anual)
rendimento_investimentos = taxa_aa_para_am(rendimento_investimentos_anual)
tempo = tempo_anos * 12
# + [markdown] colab_type="text" id="RM7gC3vf89LJ"
# ### Financing
#
# The first scenario consists of financing a property, and for that we simply apply the payment-system function built in the initial stage of this study:
# + colab={} colab_type="code" id="p9NLB2HY89LJ"
financiar = sistema_pagamento(
sistema,
taxa_financeamento,
tempo,
valor_do_financiamento
)
# + [markdown] colab_type="text" id="pHFedSBL89LK"
# Remember that in Python you can always access the manual of any function, including the one we have just created, with the command:
#
# ```python
# help(sistema_pagamento)
# ```
#
# For comparison purposes, we define the time evolution of `Patrimônio - Imóvel` (equity held in the property) as the cumulative sum of the amortization values (the part of each installment that actually reduces the outstanding balance) plus the down payment, while `Custo - Juros` (interest cost) is the cumulative sum of the interest values (the part of each installment that remunerates the lending institution).
# + colab={} colab_type="code" id="RFPHNMDW89LK"
financiar['Patrimônio - Imóvel'] = financiar['Amortização'].cumsum() + entrada
financiar['Custo - Juros'] = financiar['Juros'].cumsum()
# + [markdown] colab_type="text" id="olIQXory89LL"
# We can view all the elements of our table:
# + colab={} colab_type="code" id="2ednDgQU89LL" outputId="711dcf3e-d37c-4d7e-c9f3-f338391c3b65"
financiar
# + [markdown] colab_type="text" id="rzt--EHT89LM"
# Or easily plot the results for the first scenario:
# + colab={} colab_type="code" id="FCcxHNyE89LM" outputId="36a9dd2f-c8c4-454d-d5f6-02c518f226c1"
financiar[['Patrimônio - Imóvel',
'Custo - Juros']
].plot.area(title='Financiar')
plt.xlabel('Tempo (meses)'); plt.ylabel('Valor (R$)')
copyright()
# + [markdown] colab_type="text" id="IgxfwcFQ89LN"
# See what the numbers say:
# + colab={} colab_type="code" id="ol5VxnLv89LO"
val_juros = round(financiar['Custo - Juros'][tempo],2)
val_imov = round(financiar['Patrimônio - Imóvel'][tempo],2)
val_total = val_juros + val_imov
# + colab={} colab_type="code" id="GUK7H3FI89LP" outputId="14926a2f-3f97-4871-f869-d4822c6b70ec"
print(f"Ao longo de {tempo} meses:")
print(f" O montante total de R${val_total} foi desembolsado, sendo")
print(f" R${val_juros} para a instituição financeira ({round(100*val_juros/val_total,2)}% do total)")
print(f" e R${val_imov} foram aportados no imóvel ({round(100*val_imov/val_total,2)}% do total).")
# + [markdown] colab_type="text" id="_S6rBeq989LQ"
# ### Renting and Contributing Monthly
#
# The second scenario evaluates not buying, but instead renting the property for the stipulated time. However, all the amounts that would have been spent on the financing in the previous case are assumed to be converted into contributions to financial investments.
# + colab={} colab_type="code" id="pGck-z0Q89LQ"
# Initialize an empty DataFrame
alugar = pd.DataFrame(index=range(tempo+1))
# Compute the rent value
aluguel = round((valor_do_imovel)*taxa_aluguel,2)
alugar['Aluguel'] = aluguel
# Rent at time zero is equal to zero
alugar['Aluguel'][0] = 0.0
# Here we compute the rent cost as the cumulative sum
# of all amounts paid
alugar['Custo - Aluguel'] = alugar['Aluguel'].cumsum()
# The contribution to financial investments is the difference
# between what would be paid for the financing in the previous example
# and the property's rent
alugar['Aportes'] = financiar['Parcela'] - aluguel
# And the initial contribution is the amount that would be available as the down payment
alugar['Aportes'][0] = entrada
# + [markdown] colab_type="text" id="NtJG3djX89LR"
# In this example we split `Patrimônio` (the net worth) into two parts: the fraction that comes from the contributions, `Patrimônio - Principal`, and the part that comes from interest earnings, called `Patrimônio - Rendimentos`, which can be computed as follows:
# + colab={} colab_type="code" id="oDU4bkeO89LR"
# Here the variable is basically initialized
alugar['Patrimônio'] = alugar['Aportes']
# The net worth is actually computed in this loop
for n in alugar.index[1:]:
    alugar['Patrimônio'][n] = alugar['Aportes'][n] + alugar['Patrimônio'][n-1] * (1. + rendimento_investimentos)
# Finally, the Principal fraction is taken as the cumulative sum of all contributions
alugar['Patrimônio - Principal'] = alugar['Aportes'].cumsum()
# And the earnings are obtained by the following subtraction
alugar['Patrimônio - Rendimentos'] = alugar['Patrimônio'] - alugar['Patrimônio - Principal']
# + [markdown] colab_type="text" id="6XOuXWxm89LS"
# With all the calculations done, we can analyze the results
# + colab={} colab_type="code" id="y1nEhXjg89LS" outputId="2c1ddc56-d4de-4c24-8dd2-d6851ffe40ad"
alugar[['Patrimônio - Principal',
'Patrimônio - Rendimentos',
'Custo - Aluguel']
].plot.area(title='Alugar e Aportar Mensalmente')
plt.xlabel('Tempo (meses)')
plt.ylabel('Valor (R$)')
copyright()
# + [markdown] colab_type="text" id="rlgxjTX789LT"
# See what the numbers say:
# + colab={} colab_type="code" id="wWdmxu3789LT"
#hide
val_aluguel = round(alugar['Custo - Aluguel'][tempo],2)
val_principal = round(alugar['Patrimônio - Principal'][tempo],2)
val_rendimentos = round(alugar['Patrimônio - Rendimentos'][tempo],2)
val_total = round(val_principal + val_rendimentos,2)
# + colab={} colab_type="code" id="d6AZ6XFl89LU" outputId="e64ddc9c-a3da-4b8a-d784-b061e137bce1"
#hide_input
print(f"Ao longo de {tempo} meses:")
print(f" R${val_aluguel} foram desembolsados com aluguel.")
print(f" O montante total em investimentos é de R${val_total}, sendo:")
print(f" R${val_principal} proveniente dos aportes ({round(100*val_principal/val_total,2)}% do total)")
print(f" e R${val_rendimentos} dos rendimentos ({round(100*val_rendimentos/val_total,2)}% do total).")
# + [markdown] colab_type="text" id="rzJDYY0k89LV"
# ### Saving and Buying in Cash
# + [markdown] colab_type="text" id="_u6vwg3389LV"
# The third scenario considers renting a property and investing the difference relative to a possible financing, just as in the previous rental case. The difference is that here the property is bought once the investments reach the required amount. At that point, rent payments stop and those amounts are converted into additional contributions.
#
# The net worth is now composed of three parts: besides the fraction coming from the contributions, `Patrimônio - Principal`, and the part coming from interest earnings, `Patrimônio - Rendimentos`, we now have `Patrimônio - Imóvel` (the property itself).
#
# Here is the calculation:
# + colab={} colab_type="code" id="c4w7rj-G89LV" outputId="bb36963a-4689-4ea4-f0c6-919e9e5d0c32"
# The initial part of this scenario is the same as the previous one,
# so we start by copying those results
comprar = alugar.copy()
comprar['Patrimônio - Imóvel'] = 0.0
# The difference is that the property is bought once
# the available balance is reached; we obtain that
# value from the table with the following command
tcompra = comprar[comprar['Patrimônio']>=valor_do_imovel].first_valid_index()
# Print it to the screen for checking
print(f'The property will be bought in month {tcompra}')
# At that instant we buy the property
comprar['Patrimônio - Imóvel'][tcompra::] += valor_do_imovel
# And we deduct the purchase amount from
# the total that was invested
comprar['Patrimônio'][tcompra::] -= valor_do_imovel
comprar['Patrimônio - Principal'][tcompra] -= valor_do_imovel - comprar['Patrimônio - Rendimentos'][tcompra]
comprar['Patrimônio - Rendimentos'][tcompra] = 0.0
# Then we redirect everything that would be spent
# on rent from here on into further contributions
comprar['Aportes'][tcompra::] += comprar['Aluguel'][tcompra::]
# Zero out the rent and update the rent-cost calculation
comprar['Aluguel'][tcompra::] = 0.0
comprar['Custo - Aluguel'] = comprar['Aluguel'].cumsum()
# Finally, compute the evolution of the net worth
# from the date of the property purchase onwards
for n in alugar.index[tcompra+1:]:
    comprar['Patrimônio - Principal'][n] = comprar['Patrimônio - Principal'][n-1] + comprar['Aportes'][n]
    comprar['Patrimônio - Rendimentos'][n] = comprar['Patrimônio'][n-1] * rendimento_investimentos + comprar['Patrimônio - Rendimentos'][n-1]
    comprar['Patrimônio'][n] = comprar['Patrimônio - Principal'][n] + comprar['Patrimônio - Rendimentos'][n]
# + [markdown] colab_type="text" id="BeMdu-xA89LW"
# And we produce the figure for this case:
# + colab={} colab_type="code" id="7jrIumEa89LW" outputId="b3e67100-36e0-495f-a2ff-babac5254112"
comprar[['Patrimônio - Imóvel',
'Patrimônio - Principal',
'Patrimônio - Rendimentos',
'Custo - Aluguel']
].plot.area(title='Economizar e Comprar à Vista')
plt.xlabel('Tempo (meses)')
plt.ylabel('Valor (R$)')
copyright()
# + [markdown] colab_type="text" id="6EydFvcD89LX"
# See what the numbers say:
# + colab={} colab_type="code" id="n9AGPBeG89LX"
#hide
val_aluguel = round(comprar['Custo - Aluguel'][tempo],2)
val_principal = round(comprar['Patrimônio - Principal'][tempo],2)
val_rendimentos = round(comprar['Patrimônio - Rendimentos'][tempo],2)
val_total = round(val_principal + val_rendimentos + valor_do_imovel,2)
# + colab={} colab_type="code" id="z-qVVm0889LY" outputId="2d4a6177-826b-46f3-da1a-e45cc354b013"
#hide_input
print(f"Ao longo de {tempo} meses:")
print(f" R${val_aluguel} foram desembolsados com {tcompra} meses de aluguel,")
print(f" O montante total em investimentos foi de R${val_total}, sendo:")
print(f" R${val_principal} proveniente dos aportes ({round(100*val_principal/val_total,2)}% do total)")
print(f" e R${val_rendimentos} dos rendimentos ({round(100*val_rendimentos/val_total,2)}% do total),")
print(f" além de R${valor_do_imovel} do imóvel ({round(100*valor_do_imovel/val_total,2)}% do total),")
# + [markdown] colab_type="text" id="iXbl5BSy89LZ"
# ## Summary of Results
#
# To summarize everything we have seen so far, we create an auxiliary table containing only the data observed at the end of the study period, which is easily done on a DataFrame with the `.tail(1)` command:
# + colab={} colab_type="code" id="eLvq-u_p89LZ" outputId="4f7ecd69-31b6-4949-cd90-a6ef219fbf3d"
# Create an empty DataFrame
summary = pd.DataFrame()
# Add the values obtained at the final time of each scenario
summary = summary.append(alugar.tail(1), ignore_index=True, sort=False)
summary = summary.append(comprar.tail(1), ignore_index=True, sort=False)
summary = summary.append(financiar.tail(1), ignore_index=True, sort=False)
# Drop the table columns we are not interested in
summary.drop(['Aluguel', 'Aportes', 'Patrimônio', 'Juros',
              'Amortização', 'Parcela', 'Saldo Devedor'], axis=1, inplace=True)
# And rename the rows according to each case
summary.index = ['Alugar', 'Comprar à Vista', 'Financiar']
# Finally, display it
summary.fillna(0)
# + [markdown] colab_type="text" id="SmurGbBN89La"
# Finally, we present the figure:
# + colab={} colab_type="code" id="ZHzaZuz189La" outputId="70ff2728-bc3d-450d-a8af-8de2bb4de866"
summary[['Patrimônio - Imóvel',
'Patrimônio - Principal',
'Patrimônio - Rendimentos',
'Custo - Aluguel',
'Custo - Juros']
].plot.barh(stacked=True)
plt.title('Estudo de caso: Financiar, economizar e pagar \n à vista ou alugar um imóvel?')
plt.xlabel('Valor (R$)')
plt.locator_params(axis='x', nbins=5)
copyright()
# + [markdown] colab_type="text" id="W1YfBEgk89Lb"
# # Conclusion
# + [markdown] colab_type="text" id="jDBStVxx89Lb"
# In this case study we sought to identify the possible differences in outcome between financing four fifths of a property, renting a home and investing the amount that would have been disbursed on the purchase, and paying rent while saving the money to buy it in cash. For any exercise of this kind, the interest rate is always the main determinant.
# Let us regard interest as the price paid for holding money: you pay it when you are a deficit agent (you have less money than you need and must borrow) and receive it when you are a surplus agent (you have more money than you need and invest what is left over); its rate is set by individuals' intertemporal choices, which ultimately determine its equilibrium supply and demand.
# For simplicity and to compare the three scenarios in question, we kept household income and expenses constant, as well as the financing interest rate at 9.42% per year, the rent rate at 4% per year and the return on investments at 8% per year.
# The results show that, at the end of the period considered, if you attach no value (derive no pleasure) from considering yourself the owner of the property, the pecuniary benefits are much greater if you pay rent and invest the amounts that would have been spent on the purchase. If for some reason that is not an option, it is more advantageous to save the money while paying rent and buy the property in cash, at the end of the first 29% of the period.<br>
#
# -----
#
#
#
# > **<NAME>**,<br>
# > Doctor of Mechanical Engineering from PUCRS. Experienced in computational fluid dynamics, transport phenomena, programming, numerical methods, financial education, among other topics.<br>
# > [<EMAIL>](mailto:<EMAIL>) [@fschuch](https://twitter.com/fschuch)<br>
#
# <br>
#
# > **<NAME>**,<br>
# > PhD candidate in Economics with an emphasis in Finance at Universidade Católica de Brasília. Academic advisor, researcher and lecturer at IDP.<br>
# > [@mathiazst](https://twitter.com/mathiazst)
#
#
# -----
| content/post/2020-matematica-financeira-estudo-de-caso-imovel/Notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.6 64-bit (''base'': conda)'
# name: python3
# ---
# +
from lxml import etree
import pandas as pd
from collections import Counter
import os
import glob
import re
import matplotlib.pyplot as plt
import numpy as np
from numpy import array
import subprocess
# -
from tokenizers import Tokenizer
import torch
# +
BIG_FILE_URL = 'https://raw.githubusercontent.com/dscape/spell/master/test/resources/big.txt'
# Let's download the file and save it somewhere
from requests import get
with open('big.txt', 'wb') as big_f:
    response = get(BIG_FILE_URL)
if response.status_code == 200:
big_f.write(response.content)
else:
print("Unable to get the file: {}".format(response.reason))
# +
# For the user's convenience `tokenizers` provides some very high-level classes encapsulating
# the overall pipeline for various well-known tokenization algorithm.
# Everything described below can be replaced by the ByteLevelBPETokenizer class.
from tokenizers import Tokenizer
from tokenizers.decoders import ByteLevel as ByteLevelDecoder
from tokenizers.models import BPE
from tokenizers.normalizers import Lowercase, NFKC, Sequence, BertNormalizer
from tokenizers.pre_tokenizers import ByteLevel
# First we create an empty Byte-Pair Encoding model (i.e. not trained model)
tokenizer = Tokenizer(BPE())
# Then we enable lower-casing and unicode-normalization
# The Sequence normalizer allows us to combine multiple Normalizer that will be
# executed in order.
tokenizer.normalizer = Sequence([
BertNormalizer(),
Lowercase()
])
# Our tokenizer also needs a pre-tokenizer responsible for converting the input to a ByteLevel representation.
tokenizer.pre_tokenizer = ByteLevel()
# And finally, let's plug a decoder so we can recover from a tokenized input to the original one
tokenizer.decoder = ByteLevelDecoder()
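# +
# As noted above, the high-level ByteLevelBPETokenizer bundles this whole pipeline
# (BPE model, byte-level pre-tokenizer/decoder and trainer) into a single class.
# A minimal sketch of the equivalent setup (trains a second tokenizer, purely for illustration):
from tokenizers import ByteLevelBPETokenizer
quick_tokenizer = ByteLevelBPETokenizer(lowercase=True)
quick_tokenizer.train(files=["big.txt"], vocab_size=25000, show_progress=True)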
# +
from tokenizers.trainers import BpeTrainer
# We initialize our trainer, giving him the details about the vocabulary we want to generate
trainer = BpeTrainer(vocab_size=25000, show_progress=True, initial_alphabet=ByteLevel.alphabet())
tokenizer.train(files=["big.txt"], trainer=trainer)
print("Trained vocab size: {}".format(tokenizer.get_vocab_size()))
# -
# You will see the generated files in the output.
tokenizer.model.save('.')
# +
# Let's tokenize a simple input
tokenizer.model = BPE('vocab.json', 'merges.txt')
encoding = tokenizer.encode("This is a simple input to be tokenized")
print("Encoded string: {}".format(encoding.tokens))
decoded = tokenizer.decode(encoding.ids)
print("Decoded string: {}".format(decoded))
# -
import torch  # note: the PyTorch package is imported as "torch", not "pytorch"
from transformers import BertTokenizer
# +
# BertTokenizer.from_pretrained?
# +
#tz = BertTokenizer.from_pretrained("bert-base-uncased")
tz = BertTokenizer.from_pretrained("bert-base-cased")
# -
sent = "Let's learn deep learning!"
# +
# Encode the sentence
encoded = tz.encode_plus(
text=sent, # the sentence to be encoded
add_special_tokens=True, # Add [CLS] and [SEP]
max_length = 64, # maximum length of a sentence
pad_to_max_length=True, # Add [PAD]s
return_attention_mask = True, # Generate the attention mask
                        return_tensors = 'np', # ask the function to return NumPy arrays
)
# Get the input IDs and attention mask in tensor format
input_ids = encoded['input_ids']
attn_mask = encoded['attention_mask']
# -
tz.tokenize("<NAME>")
input_ids
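# To see what those IDs correspond to, we can map them back to tokens (a quick sketch;
# note the [CLS]/[SEP] markers and the [PAD] tokens added by `encode_plus` above)
tz.convert_ids_to_tokens(input_ids[0])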
| code/python/notebooks/20210616_test_tokenizers.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Exercises for Chapter 3: Classification
# ### 1) Build a classifier for the MNIST dataset that predicts at 97% accuracy
# Just in the interest of time, I won't be too focused on achieving 97% accuracy. I want to keep moving on to other parts in the book. I know what I would do and I'm just practicing the basics of what would be done.
from sklearn.datasets import fetch_openml
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split, RandomizedSearchCV
from sklearn.pipeline import Pipeline
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from scipy.stats import randint
# fetch_mldata was removed from scikit-learn; fetch_openml provides the same dataset
mnist = fetch_openml('mnist_784', version=1, as_frame=False)
X, y = mnist['data'], mnist['target']
X = X.astype(np.float64)
y = y.astype(np.uint8)  # fetch_openml returns the labels as strings
# %matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
# +
idx = 501
digit = X[idx].reshape(28, 28)
plt.imshow(digit, cmap=matplotlib.cm.binary, interpolation='nearest')
plt.axis('off')
plt.title("Digit {}".format(y[idx]))
plt.show()
# -
X_train, X_test, y_train, y_test = train_test_split(X, y)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
# +
pipe = Pipeline([
('scale', StandardScaler()),
('decom', PCA(n_components=64)),
('model', KNeighborsClassifier(n_jobs=-1))
])
rand_dists = {
'model__n_neighbors': randint(3, 10),
'model__weights': ['uniform', 'distance'],
'model__algorithm': ['ball_tree', 'kd_tree', 'brute']
}
rand_grid = RandomizedSearchCV(pipe, param_distributions=rand_dists, verbose=2, n_iter=5, cv=2)
# -
rand_grid.fit(X_train, y_train)
est = rand_grid.best_estimator_
est
est.score(X_train, y_train)
est.score(X_test, y_test)
import joblib  # sklearn.externals.joblib was removed; use the standalone joblib package
joblib.dump(est, r'..\saved_models\03_knn_best_est.joblib')
# ## 2) Write a function that shifts the MNIST image in each cardinal direction. Then add a shifted image for each image to the training set.
# +
# https://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.shift.html#scipy.ndimage.shift
from scipy.ndimage import shift
def im_shift_one(arr, direction):
    # (row, col) one-pixel shift for each direction; the exposed edge is filled with zeros
    dir_map = {'up': [0, 1], 'down': [0, -1], 'left': [-1, 0], 'right': [1, 0]}
    return shift(arr.reshape(28, 28), dir_map[direction], cval=0).reshape(784)
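# A sketch of how the augmented training set could then be assembled, appending
# one shifted copy per direction (apply_along_axis is slow, but fine for illustration)
X_train_aug = [X_train]
y_train_aug = [y_train]
for direction in ['up', 'down', 'left', 'right']:
    X_train_aug.append(np.apply_along_axis(im_shift_one, 1, X_train, direction))
    y_train_aug.append(y_train)
X_train_aug = np.concatenate(X_train_aug)
y_train_aug = np.concatenate(y_train_aug)
X_train_aug.shape, y_train_aug.shape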
| taylor_completed_exercises/exercises_03_classification.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: chromprocess-env
# language: python
# name: chromprocess-env
# ---
# # ChromProcess Introduction Part 1
#
# This series of notebooks runs through one possible way of using the code contained in ChromProcess, thereby introducing its core functionality. These notebooks should work fine in the folder they are found in if ChromProcess can be imported (see installation instructions). ChromProcessIntroduction_1 shows how chromatograms and their associated data can be loaded and how peak information can be extracted from them and stored.
#
# # What is ChromProcess?
#
# ChromProcess is a collection of Python functions and objects which provide a framework for manipulating sets of chromatographic data. The code is currently flexible enough to work with what we deal with in the [Huck Lab](https://www.hucklab.com).
#
# There are several types of information required to reproducibly assign and/or quantify chromatographic peaks. ChromProcess is one way of interfacing these various information sources. By 'interfacing', I mean placing the data in an appropriate format and organisational structure.
#
# # Source Files
#
# ## Experiment-specific files
#
# Experimental data and 'metadata' (primary data) go in at the front of the analysis pipeline. Files containing information about the experiment should not be altered during analysis. Primary data files consist of several chromatographic data files (as of writing ChromProcess can load `.cdf`, `.txt` or `.csv` format) and a conditions file (comma-separated values `.csv`). The chromatographic data files should be named in such a way that they can be sorted programmatically. For example, the sequence `chrom_001.csv`, `chrom_002.csv`, `chrom_003.csv` would be appropriate. The structure and contents of an example conditions file are shown below.
# +
import csv
with open('Sample_analysis/Example/ExperimentalData/example_conditions.csv', "r") as f:
csv_reader = csv.reader(f, delimiter=',')
for row in csv_reader:
line = row[0] + ': '
line += ','.join([x for x in row[1:] if x != ''])
print(line)
# -
# - Dataset: Reference code for the experiment
# - start_experiment_information: # tag for experiment information field
# - series_values: ordered values associated with the chromatograms (e.g. timepoints)
# - series_unit: the units for series_values; format: {name}/ {unit}
# - end_experiment_information: # tag for experiment information field
# - start_conditions: # tag for experiment conditions field
# - Information between these two tags forms key: list pairs, e.g.
# - formaldehyde_concentration/ M, 0.2
# - water_flow_rate/ L/h, 0.1, 0.005, 0.0001
# - end_conditions: # tag for experiment conditions field
# Keeping track of the operations performed on the primary data is an important component of a reproducible pipeline. To record and input details of analysis operations, an analysis details file (`.csv`) is also included alongside the primary data. The structure and contents of an example analysis file are shown below.
with open('Sample_analysis/Example/Analysis/example_analysis_details.csv', "r") as f:
csv_reader = csv.reader(f, delimiter=',')
for row in csv_reader:
line = row[0] + ': '
line += ','.join([x for x in row[1:] if x != ''])
print(line)
# The fields are as follows:
#
# - Dataset: Experiment code
# - Method: Analysis method used to collect the data (e.g. GCMS or HPLC).
# - regions: pairs of lower, upper retention times which outline regions of the chromatogram in which the program will search for peaks.
# - internal_standard_region: a pair of lower, upper bounds between which the internal standard should lie.
# - extract_mass_spectra: whether to extract mass spectra information from the files during the analysis (TRUE) or not (FALSE).
# - peak_pick_threshold: Threshold (as a fraction of the highest signal in a region) above which peaks will be detected.
# - dilution_factor: Dilution factor applied during sample preparation (multiplying the concentrations derived from peak integrals by this value will convert them into those present in the unprepared sample)
# - dilution_factor_error: standard error for the dilution factor.
# - internal_standard_concentration: Concentration of the internal standard
# - internal_standard_concentration_error: Standard error of the concentration of the internal standard.
# Additionally, a local assignments file (initially an empty `.csv` file) can be added to the project. These files can be arranged in folders however you wish. Whichever organisation scheme is used must be systematic. An example structure is shown below:
# + active=""
# EXPCODE
# ├── Analysis
# │ ├── EXPCODE_analysis_details.csv
# │ └── EXPCODE_local_assignments.csv
# └── Data
# ├── EXPCODE_conditions.csv
# ├── EXPCODE_chrom_001.csv
# ├── EXPCODE_chrom_002.csv
# └── EXPCODE_chrom_003.csv
# -
# ## Files applicable to several experiments
#
# Calibration information for an instrument may be applicable to one or more sets of data. This information can therefore be stored separately from the primary data. Alternatively, a copy of a calibration file can be included within the directory of each experiment. This method has the benefit of providing a more unambiguous association of the data to the calibration. On the other hand, if changes must be made to a calibration file (e.g. adding a new calibration for a compound), multiple files must be updated which may be more labour-intensive and error-prone. Either way, creating a file which assigns each experiment to a calibration file (including paths to each file) is beneficial, as is creating a workflow for updating analyses in response to changes to the source files.
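# As an illustration only (ChromProcess does not prescribe a format for this), such a mapping could be a small table with one row per experiment, holding the experiment code and the path to the calibration file it should use. A hypothetical sketch with pandas:
# +
import pandas as pd
# Hypothetical experiment-to-calibration lookup table
calibration_map = pd.DataFrame({
    'Experiment': ['EXP001', 'EXP002'],
    'calibration_file': ['Calibrations/GCMS_calibration_A.csv', 'Calibrations/GCMS_calibration_A.csv'],
})
calibration_map
# -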
# # Overview of an Example Analysis Pipeline
#
# The first step is to create peak table files containing peak positions, boundaries and integrals for each chromatogram. Each chromatogram is loaded in as a `Chromatogram` object. The analysis and conditions files are also loaded as `Analysis_Information` and `Experiment_Conditions` objects, respectively. Information in the analysis file is used to find peaks in each chromatogram before each peak is integrated. The `Chromatogram` can then be used to create a peak table with associated condition information, if required.
# Before beginning the analysis, the chromatograms should be inspected and information should be input into the analysis file as appropriate (regions, concentrations, etc.). First, the source files are directly converted into objects:
# +
import os
from ChromProcess.Loading import chrom_from_csv
chromatogram_directory = 'Sample_analysis/Example/ExperimentalData/ExampleChromatograms'
chromatogram_files = os.listdir(chromatogram_directory)
chromatogram_files.sort()
chroms = []
for f in chromatogram_files:
chroms.append(chrom_from_csv(f'{chromatogram_directory}/{f}'))
# -
# Before proceeding with the analysis, the chromatograms are inspected for quality (up to you as a scientist!) and for regions to be selected for peak picking. The plots can be generated by any plotting method that you are comfortable with. Below is an example using `matplotlib`.
#
# When picking regions for peak picking, the idea is to select regions in the chromatogram in which the peaks have similar intensities. An arbitrary number of boundaries can be input. At one extreme, a single boundary for the beginning and end of the chromatogram can be used. At the other logical extreme, boundaries for individual peaks can be input. Bear in mind that the current implementation of peak picking does not account for retention time drift between chromatograms, and the regions passed to each chromatogram are the same. Therefore, 'manual' peak picking in this manner may not be accurate.
# +
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
for c in chroms:
ax.plot(c.time, c.signal, label = c.filename)
plt.show()
# -
from ChromProcess.Loading import analysis_from_csv
from ChromProcess.Loading import conditions_from_csv
conditions_file = 'Sample_analysis/Example/ExperimentalData/example_conditions.csv'
analysis_file = 'Sample_analysis/Example/Analysis/example_analysis_details.csv'
conditions = conditions_from_csv(conditions_file)
analysis = analysis_from_csv(analysis_file)
# Next, a peak for the internal standard is picked using information in `analysis` (the function modifies the chromatogram object passed to it by inserting the internal standard information; currently, only one internal standard is supported). This step assumes that the internal standard selected in the chromatograms is a single peak, with no peaks originating from the sample overlapping it (within the `analysis.internal_standard_region`).
# +
from ChromProcess.Processing import internal_standard_integral
is_start = analysis.internal_standard_region[0]
is_end = analysis.internal_standard_region[1]
for c in chroms:
internal_standard_integral(c, is_start, is_end)
# -
# Information from `analysis` (originating in the analysis information file described above) is again used to pick peaks in defined regions of each chromatogram. The functions add peak information into chromatograms.
# +
from ChromProcess.Processing import find_peaks_in_region
from ChromProcess.Processing import add_peaks_to_chromatogram
from ChromProcess.Processing import integrate_chromatogram_peaks
threshold = analysis.peak_pick_threshold
for chrom in chroms:
for reg in analysis.regions:
peaks = find_peaks_in_region(
chrom,
reg[0],
reg[1],
threshold = threshold
)
add_peaks_to_chromatogram(peaks, chrom)
integrate_chromatogram_peaks(chrom)
# -
# Peak collections can then be written directly from each `Chromatogram` whilst inserting information from conditions files if required. Note here that the ordering of chromatograms must be the same as the order of series values in the conditions file.
# +
import os
# Output peak collection
peak_collection_directory = 'Sample_analysis/Example/ExperimentalData/ExamplePeakCollections'
os.makedirs(peak_collection_directory, exist_ok = True)
for c,v in zip(chroms, conditions.series_values):
c.write_peak_collection(filename = f'{peak_collection_directory}/{c.filename}',
header_text = f"{conditions.series_unit},{v}\n",
)
# -
# It may also be convenient to store the chromatographic signals in a simple `.csv` format for use with other software. These files can be created using one of Chromatogram's methods:
chromatogram_output_folder = 'Sample_analysis/Example/ExperimentalData/ChromatogramCSV'
for c in chroms:
c.write_to_csv(filename = f'{chromatogram_output_folder}/{c.filename}')
# Peaks can also be loaded from peak_collection csv files.
# +
from ChromProcess.Loading.peak.peak_from_csv import peak_from_csv
for c, chrom in enumerate(chroms, 1):
peak_file = f"{peak_collection_directory}/chrom_00{c}.csv"
peak_features = peak_from_csv(peak_file, chrom)
add_peaks_to_chromatogram(peak_features, chrom)
# -
# That concludes this tutorial notebook. The next tutorial notebook will deal with processing the peak tables into series data.
| ChromProcess_Introduction_1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Simularium Conversion Tutorial : SpringSaLaD Data
from IPython.display import Image
from simulariumio.springsalad import SpringsaladConverter, SpringsaladData
from simulariumio import DisplayData, DISPLAY_TYPE, MetaData, ModelMetaData, InputFileData
# This notebook provides example python code for converting your own simulation trajectories into the format consumed by the Simularium Viewer. It creates a .simularium JSON file which you can drag and drop onto the viewer like this:
# 
# ***
# ## Prepare your spatial data
# The Simularium `SpringsaladConverter` consumes spatiotemporal data from SpringSaLaD.
#
# The converter requires a `SpringsaladData` object as a parameter ([see documentation](https://allen-cell-animated.github.io/simulariumio/simulariumio.springsalad.html#simulariumio.springsalad.springsalad_data.SpringsaladData)).
#
# If you'd like to specify PDB or OBJ files or color for rendering an agent type, add a `DisplayData` object for that agent type, as shown below ([see documentation](https://allen-cell-animated.github.io/simulariumio/simulariumio.data_objects.html#module-simulariumio.data_objects.display_data)).
#
example_data = SpringsaladData(
sim_view_txt_file=InputFileData(
file_path="../simulariumio/tests/data/springsalad/Simulation0_SIM_VIEW_Run0.txt",
),
meta_data=MetaData(
trajectory_title="Some parameter set",
model_meta_data=ModelMetaData(
title="Some agent-based model",
version="8.1",
authors="<NAME>",
description=(
"An agent-based model run with some parameter set"
),
doi="10.1016/j.bpj.2016.02.002",
source_code_url="https://github.com/allen-cell-animated/simulariumio",
source_code_license_url="https://github.com/allen-cell-animated/simulariumio/blob/main/LICENSE",
input_data_url="https://allencell.org/path/to/native/engine/input/files",
raw_output_data_url="https://allencell.org/path/to/native/engine/output/files",
),
),
display_data={
"GREEN": DisplayData(
name="Ligand",
radius=1.0,
display_type=DISPLAY_TYPE.OBJ,
url="b.obj",
color="#dfdacd",
),
"RED": DisplayData(
name="Receptor Kinase#B site",
radius=2.0,
color="#0080ff",
),
"GRAY": DisplayData(
name="Receptor Kinase",
radius=1.0,
display_type=DISPLAY_TYPE.OBJ,
url="b.obj",
color="#dfdacd",
),
"CYAN": DisplayData(
name="Receptor Kinase#K site",
radius=2.0,
color="#0080ff",
),
"BLUE": DisplayData(
name="Substrate",
radius=1.0,
display_type=DISPLAY_TYPE.OBJ,
url="b.obj",
color="#dfdacd",
),
},
draw_bonds=True,
)
# ## Convert and save as .simularium JSON file
# Once your data is shaped like in the `example_data` object, you can use the converter to generate the file at the given path:
SpringsaladConverter(example_data).write_JSON("example_springsalad")
# ## Visualize in the Simularium viewer
# In a supported web-browser (Firefox or Chrome), navigate to https://simularium.allencell.org/ and import your file into the viewer.
| examples/Tutorial_springsalad.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import cv2
import os
import numpy as np
import tensorflow as tf
from tensorflow import keras
modelcnnlstm=tf.keras.models.load_model("Movie.h5")
test=[]
def load_images_from_folder(folder):
image=[]
im=[]
c=1
for filename in os.listdir(folder):
        img = cv2.imread(os.path.join(folder, filename))
        # skip files that OpenCV could not read as an image
        if img is not None:
            n_img = cv2.resize(img, (100, 100))
            im.append(n_img)
if c%10==0:
image.append(im)
#image.append(x)
im=[]
test.append(image)
image=[]
c=c+1
load_images_from_folder("sampleframes")
def detection_of_violent_activities(vid_seq):
#reshaping the 10 frames as per the model
print(len(vid_seq))
#vid_seq=vid_seq/225
test=[]
test.append(vid_seq)
inputtestshape=[]
inputtestshape.append(test)
#convert the list into an array
inputtestshape=np.array(inputtestshape).reshape(-1,10,100,100,3)
    # predicting the class probabilities for violent and non-violent activity
    # (for a softmax output layer, predict returns the class probabilities directly)
    prediction = modelcnnlstm.predict(inputtestshape)
print(prediction)
#calculation the maximum probability
val=prediction[0][1]*100
result=np.argmax(prediction)
return(val)
detection_of_violent_activities(test)
# -
# `prediction` and `result` are local to detection_of_violent_activities above;
# the class probabilities are printed inside the function itself.
8.6445965e-05*100
7.7089721e-01*100
import numpy as np
a=[1,2,3,4]
a=np.array(a)
print(a)
print(list(a))
| DemoRun.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Introduction to Python
#
# ### Control Flow
import time
import random
# A brief note about measuring time to run a cell
print(time.time() / (3600 * 24 * 365.25))  # roughly the number of years since the Unix epoch (1970)
# + active=""
# t0 = time.time()
# <code>
# print(time.time() - t0)
# -
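# For example, filling in the template with a short computation (a quick sketch):
# +
t0 = time.time()
total = sum(range(1_000_000))
print(time.time() - t0)
# -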
# ### 1- Create a script that prints the numbers from 1 to 100, following these rules:
#
# 1. in those that are multiples of 3, print "Fizz" instead of the number.
# 2. in those that are multiples of 5 print "Buzz" instead of the number.
# 3. For numbers that are multiples of both 3 and 5, print "FizzBuzz".
# ### 2 - Given the list:
list1 = [1,2,3,'4',5,6.3,7.4 + 2j,"123",[1,2,3], 93, "98"]
# ### Create a script or function to print each element of a list, along with its predecessor and its successor. Follow these rules:
#
# 1. If this element is numeric or a string, print its representational value; and
# 2. If the element is a container but not a string (tuple, list, dict, set, frozenset), print each element of the sequence.
# ### 3- Using the "random" library, create a script to average the sum of two 6-sided dice (D6) over 10,000 rolls.
# ### 4 - [Project Euler - Problem 3](https://projecteuler.net/problem=3)
#
# The prime factors of 13195 are 5, 7, 13 and 29.
# What is the largest prime factor of the number 600851475143 ?
# ### 5 - [Project Euler - Problem 4](https://projecteuler.net/problem=4)
#
# #### Largest palindrome product
#
# "A palindromic number reads the same both ways. The largest palindrome made from the product of two 2-digit numbers is 9009 = 91 × 99."
#
# **Find the largest palindrome made from the product of two 3-digit numbers.**
# ###### Hint: How to reverse a string?
# + jupyter={"outputs_hidden": false}
'A random string'[-1::-1]
# -
# ### 6 - [Project Euler - Problem 14](https://projecteuler.net/problem=14)
#
# #### Longest Collatz sequence
#
#
# The following iterative sequence is defined for the set of positive integers:
#
# n → n/2 (n is even)
# n → 3n + 1 (n is odd)
#
# Using the rule above and starting with 13, we generate the following sequence:
#
# 13 → 40 → 20 → 10 → 5 → 16 → 8 → 4 → 2 → 1
# It can be seen that this sequence (starting at 13 and finishing at 1) contains 10 terms. Although it has not been proved yet (Collatz Problem), it is thought that all starting numbers finish at 1.
#
# **Which starting number, under one million, produces the longest chain?**
#
# NOTE: Once the chain starts the terms are allowed to go above one million.
| Exercises/Exercise_2_Control_Flow.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
import os
from datetime import datetime
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix
import scikitplot.metrics as skplot
import tensorflow as tf
import tensorflow_hub as hub
import bert
from bert import run_classifier
from bert import optimization
from bert import tokenization
# -
df = pd.read_csv('../Data/cleaned english.csv')
df
label_encoder = {'NAG' : 0, 'OAG' : 1, 'CAG' : 2}
df = df.replace({'Sub-task A' : label_encoder})
train, test = train_test_split(df, test_size=0.2, random_state=42) # HANDLE TEST TRAIN SPLIT
DATA_COLUMN = 'Text'
LABEL_COLUMN = 'Sub-task A'
label_list = df['Sub-task A'].unique() # Use the InputExample class from BERT's run_classifier code to create examples from the data
print(label_list)
# +
train_InputExamples = train.apply(lambda x: bert.run_classifier.InputExample(guid=None, # Globally unique ID for bookkeeping, unused in this example
text_a = x[DATA_COLUMN],
text_b = None,
label = x[LABEL_COLUMN]), axis = 1)
test_InputExamples = test.apply(lambda x: bert.run_classifier.InputExample(guid=None,
text_a = x[DATA_COLUMN],
text_b = None,
label = x[LABEL_COLUMN]), axis = 1)
# +
# This is a path to an uncased (all lowercase) version of BERT
BERT_MODEL_HUB = "https://tfhub.dev/google/bert_uncased_L-12_H-768_A-12/1"
def create_tokenizer_from_hub_module():
"""Get the vocab file and casing info from the Hub module."""
with tf.Graph().as_default():
bert_module = hub.Module(BERT_MODEL_HUB)
tokenization_info = bert_module(
signature="tokenization_info", as_dict=True)
with tf.Session() as sess:
vocab_file, do_lower_case = sess.run([tokenization_info["vocab_file"],
tokenization_info["do_lower_case"]])
return bert.tokenization.FullTokenizer(
vocab_file=vocab_file, do_lower_case=do_lower_case)
tokenizer = create_tokenizer_from_hub_module()
# -
tokenizer.tokenize(train['Text'].iloc[0])
# set sequences,
# look here for more info - https://github.com/google-research/bert/blob/master/README.md#out-of-memory-issues
MAX_SEQ_LENGTH = 64
# Convert our train and test features to InputFeatures that BERT understands.
train_features = bert.run_classifier.convert_examples_to_features(train_InputExamples, label_list, MAX_SEQ_LENGTH, tokenizer)
test_features = bert.run_classifier.convert_examples_to_features(test_InputExamples, label_list, MAX_SEQ_LENGTH, tokenizer)
def create_model(is_predicting, input_ids, input_mask, segment_ids, labels, num_labels):
"""Creates a classification model."""
bert_module = hub.Module(
BERT_MODEL_HUB,
trainable=True)
bert_inputs = dict(
input_ids=input_ids,
input_mask=input_mask,
segment_ids=segment_ids)
bert_outputs = bert_module(
inputs=bert_inputs,
signature="tokens",
as_dict=True)
# Use "pooled_output" for classification tasks on an entire sentence.
# Use "sequence_outputs" for token-level output.
output_layer = bert_outputs["pooled_output"]
hidden_size = output_layer.shape[-1].value
# Create our own layer to tune for politeness data.
output_weights = tf.get_variable(
"output_weights", [num_labels, hidden_size],
initializer=tf.truncated_normal_initializer(stddev=0.02))
output_bias = tf.get_variable(
"output_bias", [num_labels], initializer=tf.zeros_initializer())
with tf.variable_scope("loss"):
# Dropout helps prevent overfitting
output_layer = tf.nn.dropout(output_layer, keep_prob=0.9)
logits = tf.matmul(output_layer, output_weights, transpose_b=True)
logits = tf.nn.bias_add(logits, output_bias)
log_probs = tf.nn.log_softmax(logits, axis=-1)
# Convert labels into one-hot encoding
one_hot_labels = tf.one_hot(labels, depth=num_labels, dtype=tf.float32)
predicted_labels = tf.squeeze(
tf.argmax(log_probs, axis=-1, output_type=tf.int32))
# If we're predicting, we want predicted labels and the probabiltiies.
if is_predicting:
return (predicted_labels, log_probs)
# If we're train/eval, compute loss between predicted and actual label
per_example_loss = -tf.reduce_sum(one_hot_labels * log_probs, axis=-1)
loss = tf.reduce_mean(per_example_loss)
return (loss, predicted_labels, log_probs)
# +
# model_fn_builder creates our model function
# using the passed parameters for num_labels, learning_rate, etc.
def model_fn_builder(num_labels, learning_rate, num_train_steps, num_warmup_steps):
"""Returns `model_fn` closure for TPUEstimator."""
def model_fn(features, labels, mode, params): # pylint: disable=unused-argument
"""The `model_fn` for TPUEstimator."""
input_ids = features["input_ids"]
input_mask = features["input_mask"]
segment_ids = features["segment_ids"]
label_ids = features["label_ids"]
is_predicting = (mode == tf.estimator.ModeKeys.PREDICT)
# TRAIN and EVAL
if not is_predicting:
(loss, predicted_labels, log_probs) = create_model(is_predicting, input_ids, input_mask, segment_ids, label_ids, num_labels)
train_op = bert.optimization.create_optimizer(loss, learning_rate, num_train_steps, num_warmup_steps, use_tpu=False)
# Calculate evaluation metrics.
def metric_fn(label_ids, predicted_labels):
accuracy = tf.metrics.accuracy(label_ids, predicted_labels)
true_pos = tf.metrics.true_positives(label_ids, predicted_labels)
true_neg = tf.metrics.true_negatives(label_ids, predicted_labels)
false_pos = tf.metrics.false_positives(label_ids, predicted_labels)
false_neg = tf.metrics.false_negatives(label_ids, predicted_labels)
return {
"eval_accuracy": accuracy,
"true_positives": true_pos,
"true_negatives": true_neg,
"false_positives": false_pos,
"false_negatives": false_neg
}
eval_metrics = metric_fn(label_ids, predicted_labels)
if mode == tf.estimator.ModeKeys.TRAIN:
return tf.estimator.EstimatorSpec(mode=mode,
loss=loss,
train_op=train_op)
else:
return tf.estimator.EstimatorSpec(mode=mode,
loss=loss,
eval_metric_ops=eval_metrics)
else:
(predicted_labels, log_probs) = create_model(is_predicting, input_ids, input_mask, segment_ids, label_ids, num_labels)
predictions = {
'probabilities': log_probs,
'labels': predicted_labels
}
return tf.estimator.EstimatorSpec(mode, predictions=predictions)
# Return the actual model function in the closure
return model_fn
# -
# Compute train and warmup steps from batch size
BATCH_SIZE = 64
LEARNING_RATE = 1e-5
NUM_TRAIN_EPOCHS = 50.0
# Warmup is a period of time where the learning rate
# is small and gradually increases; it usually helps training.
WARMUP_PROPORTION = 0.1
# Model configs
SAVE_CHECKPOINTS_STEPS = 500
SAVE_SUMMARY_STEPS = 100
# Compute the number of train and warmup steps from the batch size
num_train_steps = int(len(train_features) / BATCH_SIZE * NUM_TRAIN_EPOCHS)
num_warmup_steps = int(num_train_steps * WARMUP_PROPORTION)
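# Illustrative sketch only (not used by the estimator): bert.optimization.create_optimizer
# applies the warmup internally, but conceptually the learning rate ramps up linearly
# over num_warmup_steps and then decays roughly linearly over the remaining steps.
def example_lr_at_step(step):
    if step < num_warmup_steps:
        return LEARNING_RATE * step / max(1, num_warmup_steps)  # linear warmup
    remaining = max(0, num_train_steps - step)
    return LEARNING_RATE * remaining / max(1, num_train_steps - num_warmup_steps)  # linear decay
print(example_lr_at_step(num_warmup_steps // 2), example_lr_at_step(num_warmup_steps))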
OUTPUT_DIR = '../Cache/Models/BERT/'
# Specify output directory and number of checkpoint steps to save
run_config = tf.estimator.RunConfig(
model_dir=OUTPUT_DIR,
save_summary_steps=SAVE_SUMMARY_STEPS,
save_checkpoints_steps=SAVE_CHECKPOINTS_STEPS)
# +
model_fn = model_fn_builder(
num_labels=len(label_list),
learning_rate=LEARNING_RATE,
num_train_steps=num_train_steps,
num_warmup_steps=num_warmup_steps)
estimator = tf.estimator.Estimator(
model_fn=model_fn,
config=run_config,
params={"batch_size": BATCH_SIZE})
# -
# Create an input function for training. drop_remainder = True for using TPUs.
train_input_fn = bert.run_classifier.input_fn_builder(
features=train_features,
seq_length=MAX_SEQ_LENGTH,
is_training=True,
drop_remainder=False)
print('Beginning Training!')
current_time = datetime.now()
estimator.train(input_fn=train_input_fn, max_steps=num_train_steps)
print("Training time ", datetime.now() - current_time)
test_input_fn = run_classifier.input_fn_builder(
features=test_features,
seq_length=MAX_SEQ_LENGTH,
is_training=False,
drop_remainder=False)
estimator.evaluate(input_fn=test_input_fn, steps=None)
def getPrediction(in_sentences):
labels = ['NAG', 'OAG', 'CAG']
    input_examples = [run_classifier.InputExample(
        guid="", text_a=x, text_b=None, label=0) for x in in_sentences]  # guid is unused here; 0 is just a dummy label
input_features = run_classifier.convert_examples_to_features(
input_examples, label_list, MAX_SEQ_LENGTH, tokenizer)
predict_input_fn = run_classifier.input_fn_builder(
features=input_features, seq_length=MAX_SEQ_LENGTH, is_training=False, drop_remainder=False)
predictions = estimator.predict(predict_input_fn)
return [(sentence, prediction['probabilities'], labels[prediction['labels']]) for sentence, prediction in zip(in_sentences, predictions)]
pred_sentences = list(test['Text'])
predictions = getPrediction(pred_sentences)
x = np.array(predictions)
np.shape(x)
texts = []
predicted_sentiment = []
for i in range(len(predictions)):
texts.append(predictions[i][0])
predicted_sentiment.append(predictions[i][2])
output_dict = {'Text' : texts, 'predicted_label' : predicted_sentiment}
output_df = pd.DataFrame(output_dict)
output_df = output_df.replace({'predicted_label' : label_encoder})
if not os.path.exists('output'):
os.makedirs('output')
output_df.to_excel('output/output_bert.xlsx', index = False)
# ## Accuracy
score = accuracy_score(output_df['predicted_label'], test[LABEL_COLUMN])
print("The test accuracy is : {} %".format(score * 100))
skplot.plot_confusion_matrix(test[LABEL_COLUMN], output_df['predicted_label'])
print(classification_report(test[LABEL_COLUMN], output_df['predicted_label']))
| Data Models/BERT.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + pycharm={"is_executing": false}
import pandas as pd
from cyclum.hdfrw import hdf2mat
import numpy as np
from math import pi
pseudotime = hdf2mat("/home/shaoheng/Documents/data/McDavid/pc3-pseudotime.h5").values
cpt = pd.read_pickle('/home/shaoheng/Documents/data/McDavid/pc3_cpt.pkl').values
# + pycharm={"name": "#%%\n", "is_executing": false}
def empirical_precision(pseudotime, label, granularity: int = 100):
pseudotime = np.squeeze(np.array(pseudotime))
pseudotime = pseudotime % (2 * pi)
n = label.shape[0]
label = np.squeeze(np.array(label))
unique_label = np.unique(label)
onehot_label = np.repeat(unique_label.reshape([-1, 1]), n, axis=1) == label
splits = np.linspace(0, 2 * pi, granularity)
# 0---i-----j-----k---2pi:
# [k,2pi)U[0, i), [i, j), [j, k)
p = 0
for i in range(granularity): # 0 <= i < granularity
for j in range(i, granularity): # i <= j < granularity
for k in range(j, granularity): # j <= k < granularity
x = (pseudotime < splits[i]) | (pseudotime >= splits[k])
y = (pseudotime >= splits[i]) & (pseudotime < splits[j])
z = (pseudotime >= splits[j]) & (pseudotime < splits[k])
r = np.vstack([x, y, z])
p = max(p, np.sum(onehot_label[[0, 1, 2], :] & r) / n)
p = max(p, np.sum(onehot_label[[1, 2, 0], :] & r) / n)
p = max(p, np.sum(onehot_label[[2, 0, 1], :] & r) / n)
p = max(p, np.sum(onehot_label[[2, 1, 0], :] & r) / n)
p = max(p, np.sum(onehot_label[[1, 0, 2], :] & r) / n)
p = max(p, np.sum(onehot_label[[0, 2, 1], :] & r) / n)
return p
empirical_precision(pseudotime, cpt)
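# Quick sanity check on synthetic data (illustrative only, not part of the analysis):
# three well-separated clusters of pseudotimes should give a precision close to 1.
toy_rng = np.random.RandomState(0)
toy_label = np.repeat(['a', 'b', 'c'], 50)
toy_pseudotime = np.concatenate([toy_rng.uniform(0.0, 2.0, 50),
                                 toy_rng.uniform(2.2, 4.2, 50),
                                 toy_rng.uniform(4.4, 6.2, 50)])
print(empirical_precision(toy_pseudotime, toy_label, granularity=30))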
| tests/comparisons/empirical-precision.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: servtestenv
# language: python
# name: servtestenv
# ---
# ## Training Notebook
# +
import tensorflow as tf
from keras import backend as K
import tifffile as tiff
import numpy as np
import pandas as pd
import cv2
from shapely.geometry import MultiPolygon, Polygon
from shapely.wkt import loads as wkt_loads
import shapely.wkt
import shapely.affinity
from collections import defaultdict
size = 160
s = 835
smooth = 1e-12
def mask_to_polygons(mask, epsilon=5, min_area=1.):
"""
converts a mask into polygons.
"""
contours, hierarchy = cv2.findContours(((mask == 1) * 255).astype(np.uint8), cv2.RETR_CCOMP, cv2.CHAIN_APPROX_TC89_KCOS)
approx_contours = [cv2.approxPolyDP(cnt, epsilon, True)
for cnt in contours]
if not contours:
return MultiPolygon()
cnt_children = defaultdict(list)
child_contours = set()
assert hierarchy.shape[0] == 1
for idx, (_, _, _, parent_idx) in enumerate(hierarchy[0]):
if parent_idx != -1:
child_contours.add(idx)
cnt_children[parent_idx].append(approx_contours[idx])
# create actual polygons filtering by area (removes artifacts)
all_polygons = []
for idx, cnt in enumerate(approx_contours):
if idx not in child_contours and cv2.contourArea(cnt) >= min_area:
assert cnt.shape[1] == 1
poly = Polygon(
shell=cnt[:, 0, :],
holes=[c[:, 0, :] for c in cnt_children.get(idx, [])
if cv2.contourArea(c) >= min_area])
all_polygons.append(poly)
# approximating polygons might have created invalid ones, fix them
all_polygons = MultiPolygon(all_polygons)
if not all_polygons.is_valid:
all_polygons = all_polygons.buffer(0)
# Sometimes buffer() converts a simple Multipolygon to just a Polygon,
# need to keep it a Multi throughout
if all_polygons.type == 'Polygon':
all_polygons = MultiPolygon([all_polygons])
return all_polygons
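# Minimal illustration (not part of the original pipeline): a synthetic mask with a
# single filled square should come back as a MultiPolygon containing one polygon.
_demo_mask = np.zeros((20, 20), dtype=np.uint8)
_demo_mask[5:15, 5:15] = 1
_demo_polys = mask_to_polygons(_demo_mask, epsilon=1, min_area=1.)
print(len(_demo_polys.geoms))  # expected: 1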
def adjust_contrast(bands, lower_percent=2, higher_percent=98):
"""
to adjust the contrast of the image
bands is the image
"""
out = np.zeros_like(bands).astype(np.float32)
n = bands.shape[2]
for i in range(n):
a = 0 # np.min(band)
b = 1 # np.max(band)
c = np.percentile(bands[:, :, i], lower_percent)
d = np.percentile(bands[:, :, i], higher_percent)
t = a + (bands[:, :, i] - c) * (b - a) / (d - c)
t[t < a] = a
t[t > b] = b
out[:, :, i] = t
return out.astype(np.float32)
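# Tiny check (illustrative only): after the percentile stretch every band should
# lie within [0, 1].
_demo_bands = np.random.rand(16, 16, 3).astype(np.float32) * 100
_demo_stretched = adjust_contrast(_demo_bands)
print(float(_demo_stretched.min()), float(_demo_stretched.max()))  # ~0.0 and ~1.0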
#def M(image_id):
# # __author__ = amaia
# # https://www.kaggle.com/aamaia/dstl-satellite-imagery-feature-detection/rgb-using-m-bands-example
# zip_path = 'sixteen_band.zip'
# tgtImg = '{}_M.tif'.format(image_id)
# with zipfile.ZipFile(zip_path) as myzip:
# files_in_zip = myzip.namelist()
# for fname in files_in_zip:
# if fname.endswith(tgtImg):
# with myzip.open(fname) as myfile:
# img = tiff.imread(myfile)
# img = np.rollaxis(img, 0, 3)
# return img
def M(img_pat):
    # NOTE: the img_pat argument is currently unused; a fixed sample image is loaded instead.
    with open("vlj_img_1029.tiff", 'rb') as myfile:
img = tiff.imread(myfile)
img = np.rollaxis(img, 0, 3)
return img
class Dataset:
def __init__(self, images_dir, mask_dir):
self.ids = images_dir
self.images_fps = images_dir
self.masks_fps = mask_dir
def __getitem__(self, i):
# read data
image = np.load(self.images_fps[i])
mask = np.load(self.masks_fps[i])
image = np.stack(image, axis=-1).astype('float')
mask = np.stack(mask, axis=-1).astype('float')
#image = np.transpose(image, (1,0,2))
#mask = np.transpose(mask, (1,0,2))
image = np.transpose(image, (0,2,1))
mask = np.transpose(mask, (0,2,1))
return image, mask
def __len__(self):
return len(self.ids)
class Dataloder(tf.keras.utils.Sequence):
def __init__(self, dataset, batch_size=1, shuffle=False):
self.dataset = dataset
self.batch_size = batch_size
self.shuffle = shuffle
self.indexes = np.arange(len(dataset))
def __getitem__(self, i):
# collect batch data
start = i * self.batch_size
stop = (i + 1) * self.batch_size
data = []
for j in range(start, stop):
data.append(self.dataset[j])
batch = [np.stack(samples, axis=0) for samples in zip(*data)]
#print(len(batch))
return tuple(batch)
def __len__(self):
return len(self.indexes) // self.batch_size
def jaccard_coef(y_true, y_pred):
"""
Jaccard Index: Intersection over Union.
J(A,B) = |A∩B| / |A∪B|
           = |A∩B| / (|A| + |B| - |A∩B|)
"""
intersection = K.sum(y_true * y_pred, axis=[0, -1, -2])
total = K.sum(y_true + y_pred, axis=[0, -1, -2])
union = total - intersection
jac = (intersection + smooth) / (union+ smooth)
return K.mean(jac)
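# Sanity check for the metric (illustrative only): identical masks should score ~1,
# disjoint masks ~0. The evaluation line is left commented out because it depends on
# the active Keras backend/session.
_ones = np.ones((1, 4, 4, 2), dtype=np.float32)
_zeros = np.zeros((1, 4, 4, 2), dtype=np.float32)
# print(K.eval(jaccard_coef(_ones, _ones)), K.eval(jaccard_coef(_ones, _zeros)))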
def SegNet():
tf.random.set_seed(32)
classes= 10
img_input = tf.keras.layers.Input(shape=(size, size, 8))
x = img_input
# Encoder
x = tf.keras.layers.Conv2D(64, (3, 3), activation='relu', padding='same', kernel_initializer = tf.keras.initializers.he_normal(seed= 23))(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.Conv2D(64, (3, 3), activation='relu', padding='same', kernel_initializer = tf.keras.initializers.he_normal(seed= 43))(x)
x = tf.keras.layers.MaxPooling2D((2, 2), strides=(2, 2))(x)
x = tf.keras.layers.Dropout(0.25)(x)
x = tf.keras.layers.Conv2D(128, (3, 3), activation='relu', padding='same', kernel_initializer = tf.keras.initializers.he_normal(seed= 32))(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.Conv2D(128, (3, 3), activation='relu', padding='same', kernel_initializer = tf.keras.initializers.he_normal(seed= 41))(x)
x = tf.keras.layers.Conv2D(128, (3, 3), activation='relu', padding='same', kernel_initializer = tf.keras.initializers.he_normal(seed= 33))(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.MaxPooling2D((2, 2), strides=(2, 2))(x)
x = tf.keras.layers.Dropout(0.5)(x)
x = tf.keras.layers.Conv2D(256, (3, 3), activation='relu', padding='same', kernel_initializer = tf.keras.initializers.he_normal(seed= 35))(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.Conv2D(256, (3, 3), activation='relu', padding='same', kernel_initializer = tf.keras.initializers.he_normal(seed= 54))(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.Conv2D(256, (3, 3), activation='relu', padding='same', kernel_initializer = tf.keras.initializers.he_normal(seed= 39))(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.Dropout(0.5)(x)
#Decoder
x = tf.keras.layers.UpSampling2D(size=(2, 2))(x)
x = tf.keras.layers.Conv2D(128, kernel_size=3, activation='relu', padding='same', kernel_initializer = tf.keras.initializers.he_normal(seed= 45))(x)
x = tf.keras.layers.Conv2D(128, kernel_size=3, activation='relu', padding='same', kernel_initializer = tf.keras.initializers.he_normal(seed= 41))(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.Conv2D(128, kernel_size=3, activation='relu', padding='same', kernel_initializer = tf.keras.initializers.he_normal(seed= 49))(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.Dropout(0.25)(x)
x = tf.keras.layers.UpSampling2D(size=(2, 2))(x)
x = tf.keras.layers.Conv2D(64, kernel_size=3, activation='relu', padding='same', kernel_initializer = tf.keras.initializers.he_normal(seed= 18))(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.Conv2D(64, kernel_size=3, activation='relu', padding='same', kernel_initializer = tf.keras.initializers.he_normal(seed= 21))(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.Conv2D(classes, kernel_size=3, activation='relu', padding='same', kernel_initializer = tf.keras.initializers.he_normal(seed= 16))(x)
x = tf.keras.layers.Dropout(0.25)(x)
x = tf.keras.layers.Activation("softmax")(x)
model = tf.keras.Model(img_input, x)
model.compile(optimizer=tf.keras.optimizers.Adam(lr=1e-4),loss='binary_crossentropy', metrics=[jaccard_coef])
return model
# +
model = SegNet()
model.load_weights("model-weights.hdf5")
img = adjust_contrast(M(1)).copy()
res = model.predict(img.reshape((1, 160, 160, 8)))
threshold = 0.5
pred_binary_mask = res >= threshold
# Get all the predicted classes into a dataframe
DF = pd.DataFrame(columns=["image", "class", "poly", 'Multi'])
image, cl , ploy, t_l = [],[],[], []
i = 0
for j in range(10):
ab = mask_to_polygons(pred_binary_mask[0, :,:,j], epsilon=1)
t = shapely.wkt.dumps(ab)
t_l.append(t)
image.append(i+1)
cl.append(j+1)
ploy.append(len(ab))
df = pd.DataFrame(list(zip(image, cl, ploy, t_l)), columns = ['image', 'class', 'poly', 'Multi'])
DF = pd.concat([DF,df], ignore_index=True)
# -
DF
# +
import requests
import base64
with open("vlj_img_1029.tiff", "rb") as image_file:
#base64.b64encode(image_file.read()).decode('utf-8')
data = image_file.read()
r = requests.post("https://hxy1cn1sl8.execute-api.us-east-1.amazonaws.com/Prod/segment_tiff", data=data)
print(r.status_code)
# -
import pickle
df = pickle.loads(r.content)
df
import mysql.connector
mydb = mysql.connector.connect(
host="database-2.c0a84dqmckxu.us-east-1.rds.amazonaws.com",
user="admin",
password="<PASSWORD>",
database="images"
)
mycursor = mydb.cursor()
sql = "INSERT INTO images (id, fileName) VALUES (%s, %s)"
val = (uid, file_name)
mycursor.execute(sql, val)
mydb.commit()
mydb.close()
| serverless/predict-app/training.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# import numpy as np
# # # !/usr/bin/env python3
# # -*- coding: utf-8 -*-
# """
# Created on 20181219
# @author: zhangji
# Trajection of a ellipse, Jeffery equation.
# """
# # %pylab inline
# pylab.rcParams['figure.figsize'] = (25, 11)
# fontsize = 40
# import numpy as np
# import scipy as sp
# from scipy.optimize import leastsq, curve_fit
# from scipy import interpolate
# from scipy.interpolate import interp1d
# from scipy.io import loadmat, savemat
# # import scipy.misc
# import matplotlib
# from matplotlib import pyplot as plt
# from matplotlib import animation, rc
# import matplotlib.ticker as mtick
# from mpl_toolkits.axes_grid1.inset_locator import inset_axes, zoomed_inset_axes
# from mpl_toolkits.mplot3d import Axes3D, axes3d
# from sympy import symbols, simplify, series, exp
# from sympy.matrices import Matrix
# from sympy.solvers import solve
# from IPython.display import display, HTML
# from tqdm import tqdm_notebook as tqdm
# import pandas as pd
# import re
# from scanf import scanf
# import os
# import glob
# from codeStore import support_fun as spf
# from src.support_class import *
# from src import stokes_flow as sf
# rc('animation', html='html5')
# PWD = os.getcwd()
# font = {'size': 20}
# matplotlib.rc('font', **font)
# np.set_printoptions(linewidth=90, precision=5)
# %load_ext autoreload
# %autoreload 2
from tqdm import tqdm_notebook
import os
import glob
import natsort
import numpy as np
import scipy as sp
from scipy.optimize import leastsq, curve_fit
from scipy import interpolate, integrate
from scipy import spatial, signal
# from scipy.interpolate import interp1d
from scipy.io import loadmat, savemat
# import scipy.misc
from IPython.display import display, HTML
import pandas as pd
import pickle
import re
from scanf import scanf
import matplotlib
# matplotlib.use('agg')
from matplotlib import pyplot as plt
import matplotlib.colors as colors
from matplotlib import animation, rc
import matplotlib.ticker as mtick
from mpl_toolkits.axes_grid1.inset_locator import inset_axes, zoomed_inset_axes
from mpl_toolkits.mplot3d import Axes3D, axes3d
from mpl_toolkits.axes_grid1.axes_divider import make_axes_locatable
from mpl_toolkits.mplot3d.art3d import Line3DCollection
from matplotlib import cm
from tqdm.notebook import tqdm as tqdm_notebook
from tqdm import tqdm
from time import time
from src.support_class import *
from src import jeffery_model as jm
from codeStore import support_fun as spf
from codeStore import support_fun_table as spf_tb
# # %matplotlib notebook
# %matplotlib inline
rc('animation', html='html5')
fontsize = 40
PWD = os.getcwd()
# -
fig = plt.figure(figsize=(2, 2))
fig.patch.set_facecolor('white')
ax0 = fig.add_subplot(1, 1, 1)
# +
job_dir = 'ecoC01B05_wt0.04_psi_rada'
t_headle = '(.*?).pickle'
# +
n_load = 10000
rand_mode=False
t_dir = os.path.join(PWD, job_dir)
_ = spf_tb.load_rand_data_pickle_dir_v2(t_dir, t_headle, n_load=n_load, rand_mode=rand_mode)
ini_theta_list, ini_phi_list, ini_psi_list, std_eta_list, psi_max_phi_list, \
theta_autocorrelate_fre_list, phi_autocorrelate_fre_list, psi_autocorrelate_fre_list, \
eta_autocorrelate_fre_list, dx_list, dy_list, dz_list, pickle_path_list = _
# -
t_name = os.path.join(os.getcwd(), 'ecoC01B05_phase_Peclet', '%s.pickle' % job_dir)
with open(t_name, 'wb') as handle:
pickle.dump(_, handle, protocol=pickle.HIGHEST_PROTOCOL)
print('save to %s' % t_name)
# +
n_hist = 1000
figsize = np.array((16, 9)) * 0.5
dpi = 100
# use_autocorrelate_fre_list = theta_autocorrelate_fre_list
# use_autocorrelate_fre_list = phi_autocorrelate_fre_list
use_autocorrelate_fre_list = eta_autocorrelate_fre_list
tmax_fre_list = np.hstack([t1[0, 0] for t1 in use_autocorrelate_fre_list])
# tmax_fre_list = tmax_fre_list[tmax_fre_list < 0.04]
fig, axi = plt.subplots(1, 1, figsize=figsize, dpi=dpi)
t1 = axi.hist(tmax_fre_list, n_hist)
axi.set_yscale('log')
bin_edges = np.histogram_bin_edges(tmax_fre_list, n_hist)
case_idx0 = np.digitize(tmax_fre_list, bin_edges)
case_idx = np.ones_like(case_idx0) * -1
for i1, i0 in enumerate(np.unique(case_idx0)):
tidx = np.isclose(case_idx0, i0)
case_idx[tidx] = i1
assert np.all(case_idx >= 0)
print(np.vstack((np.unique(case_idx), np.bincount(case_idx))))
print()
for use_case, n_case in zip(np.unique(case_idx), np.bincount(case_idx)):
tidx = np.isclose(case_idx, use_case)
# np.mean(psi_max_phi_list[tidx][psi_max_phi_list[tidx] > np.pi])
print(use_case, n_case, np.mean(std_eta_list[tidx][:, 0] / np.pi), (std_eta_list[tidx][:, 1] / np.pi).max())
# +
tidx = np.isclose(case_idx, 11)
tpath = pickle_path_list[tidx][215]
with open(tpath, 'rb') as handle:
tpick = pickle.load(handle)
Table_t = tpick['Table_t']
Table_dt = tpick['Table_dt']
Table_X = tpick['Table_X']
Table_P = tpick['Table_P']
Table_P2 = tpick['Table_P2']
Table_theta = tpick['Table_theta']
Table_phi = tpick['Table_phi']
Table_psi = tpick['Table_psi']
Table_eta = tpick['Table_eta']
idx = Table_t > 5000
spf_tb.show_table_result_v2(Table_t[idx], Table_dt[idx], Table_X[idx], Table_P[idx], Table_P2[idx],
Table_theta[idx], Table_phi[idx], Table_psi[idx], Table_eta[idx])
# +
# tidx = np.isclose(case_idx, 0)
# tidx = np.isclose(case_idx, 2)
tidx = np.isclose(case_idx, 5)
figsize = np.array((16, 9)) * 0.5
dpi = 100
fig, axs = plt.subplots(2, 2, figsize=figsize, dpi=dpi)
for axi, use_autocorrelate_fre_list in zip(axs.ravel(), (theta_autocorrelate_fre_list, phi_autocorrelate_fre_list,
psi_autocorrelate_fre_list, eta_autocorrelate_fre_list)):
t1 = use_autocorrelate_fre_list[tidx][:, 0]
t2 = use_autocorrelate_fre_list[tidx][:, 1][t1[:, 0] / t1[:, 1] > 1]
print(axi.hist(t1[:, 0] / t1[:, 1], 10, log=True, ))
# print(plt.hist(dy_list[tidx], 10, log=True, ))
print('%d, %.4f, %.4f, %.4f±%.2e' % (tidx.sum(), dy_list[tidx].max(), dy_list[tidx].min(),
dy_list[tidx].mean(), dy_list[tidx].std()))
print('%d, %.4e, %.4e, %.4e±%.2e' % (tidx.sum(), dz_list[tidx].max(), dz_list[tidx].min(),
dz_list[tidx].mean(), dz_list[tidx].std()))
if t2.size > 0:
tpct = (t2[:, 1] / t2[:, 0]).max()
else:
tpct = 0
print('%.4f±%.4f, %f' % (np.mean(std_eta_list[tidx][:, 0] / np.pi), (std_eta_list[tidx][:, 1] / np.pi).max(), tpct))
# -
t1 = use_autocorrelate_fre_list[tidx][:, 0]
t2 = use_autocorrelate_fre_list[tidx][:, 1]
plt.semilogy(t1[:, 0] / t1[:, 1], t2[:, 1] / t2[:, 0], '.')
# +
use_autocorrelate_fre_list = phi_autocorrelate_fre_list
# use_autocorrelate_fre_list = eta_autocorrelate_fre_list
# tidx = np.isclose(case_idx, 2)
# tidx = np.isclose(case_idx, 0)
tidx = np.isclose(case_idx, 3)
t1 = use_autocorrelate_fre_list[tidx][:, 0]
t2 = use_autocorrelate_fre_list[tidx][:, 1][t1[:, 0] / t1[:, 1] > 1]
print(plt.hist(t1[:, 0] / t1[:, 1], 10, log=True, ))
print('%d, %.4f, %.4f, %.4f±%.2e' % (tidx.sum(), dy_list[tidx].max(), dy_list[tidx].min(),
dy_list[tidx].mean(), dy_list[tidx].std()))
if t2.size > 0:
tpct = (t2[:, 1] / t2[:, 0]).max()
else:
tpct = 0
print('%.4f±%.4f, %f' % (np.mean(std_eta_list[tidx][:, 0] / np.pi), (std_eta_list[tidx][:, 1] / np.pi).max(), tpct))
# -
use_case = 5
tidx = np.isclose(case_idx, use_case)
plt.hist(std_eta_list[tidx][:, 0] / np.pi)
dy_list.max()
t1 = psi_autocorrelate_fre_list[std_eta_list[:, 0] > np.pi * 0.8][:, 0]
t2 = psi_autocorrelate_fre_list[std_eta_list[:, 0] > np.pi * 0.8][:, 1]
t1
| head_Force/do_calculate_table/phase_map_v4_ecoC01B05_T0.04.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import geopandas
import contextily as ctx
from hana_ml import dataframe as dfh
# -
hana_cloud_endpoint="8e1a286a-21d7-404d-8d7a-8c77d2a77050.hana.trial-eu10.hanacloud.ondemand.com:443"
# +
hana_cloud_host, hana_cloud_port=hana_cloud_endpoint.split(":")
cchc=dfh.ConnectionContext(port=hana_cloud_port,
address=hana_cloud_host,
user='HANAML',
password='<PASSWORD>!',
encrypt=True
)
| Python-API/usecase-examples/multimodel-analysis-airroutes/00 Logon.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/", "height": 702} colab_type="code" executionInfo={"elapsed": 67468, "status": "ok", "timestamp": 1540890090718, "user": {"displayName": "\u5bae\u672c\u572d\u4e00\u90ce", "photoUrl": "https://lh5.googleusercontent.com/-5BLtx8oPSy8/AAAAAAAAAAI/AAAAAAAALtI/-tIwIsmAvCs/s64/photo.jpg", "userId": "00037817427736046144"}, "user_tz": -540} id="0U5ooEh4wR55" outputId="c2585939-ca87-42fa-b251-c97e9f4c24e3"
# # If you are using Colab, please use the following installs.
# # !pip install torch==0.4.1
# # !pip install torchvision==0.2.1
# # !pip install numpy==1.14.6
# # !pip install matplotlib==2.1.2
# # !pip install pillow==5.0.0
# # !pip install opencv-python==3.4.3.18
# -
# # Chapter 9: The torch.nn Package
# + [markdown] colab_type="text" id="9q3fQ5LrLpp8"
# # 9.2 Functions (torch.nn.functional)
# + colab={} colab_type="code" id="FsCpiZnGwfJt"
import torch
import torchvision
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import numpy as np
import torchvision.transforms as transforms
# %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# + [markdown] colab_type="text" id="2cpkm5Bvv0x-"
# ## torch.nn.functional.conv2d
#
#
# + colab={"base_uri": "https://localhost:8080/", "height": 35} colab_type="code" executionInfo={"elapsed": 466, "status": "ok", "timestamp": 1540890155063, "user": {"displayName": "\u5bae\u672c\u572d\u4e00\u90ce", "photoUrl": "https://lh5.googleusercontent.com/-5BLtx8oPSy8/AAAAAAAAAAI/AAAAAAAALtI/-tIwIsmAvCs/s64/photo.jpg", "userId": "00037817427736046144"}, "user_tz": -540} id="TywwoeEqv9l1" outputId="98becc3c-3273-4181-b7c0-8ea5c79a98ed"
#Using the conv2d function
# square kernels and equal stride
filters = torch.randn(8,4,3,3)
inputs = torch.randn(1,4,5,5)
F.conv2d(inputs, filters, padding=1).size()
# + [markdown] colab_type="text" id="CxJ-xLtLL36E"
# ## torch.nn.functional.avg_pool2d
# + colab={"base_uri": "https://localhost:8080/", "height": 35} colab_type="code" executionInfo={"elapsed": 488, "status": "ok", "timestamp": 1540890221188, "user": {"displayName": "\u5bae\u672c\u572d\u4e00\u90ce", "photoUrl": "https://lh5.googleusercontent.com/-5BLtx8oPSy8/AAAAAAAAAAI/AAAAAAAALtI/-tIwIsmAvCs/s64/photo.jpg", "userId": "00037817427736046144"}, "user_tz": -540} id="-qjV8kPcwqDq" outputId="db5d9db2-7ac2-4e40-9079-f642b1271cfa"
#Using the avg_pool2d function
input = torch.tensor([[[1,2,3,4,5,6], [1,2,3,4,5,6]]], dtype=torch.float)
F.avg_pool2d(input, kernel_size=(2,2), stride=2)
# -
| chapter9/section9_2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %load_ext autoreload
# %autoreload 2
# +
# default_exp imglmdb
# -
# # `imglmdb` for reading images from lmdb databases
#
# > API
#hide
from nbdev.showdoc import *
# export
import lmdb
import os.path
import pickle
import numpy
import logging
# export
class imglmdb:
def __init__(self, db_path, endianess="big"):
self.db_path = db_path
self.env = lmdb.open(db_path, subdir=os.path.isdir(db_path),
readonly=True, lock=False,readahead=False, meminit=False)
self.endianess = endianess
self.logger = logging.getLogger(__name__)
self.dropped = set()
with self.get_read_txn() as txn:
self.length = int.from_bytes(txn.get(b"__len__"), endianess)
self.names = txn.get(b'__names__').decode("utf-8").split(' ')
self.channels_of_interest = [i for i in range(len(self.names))]
self.targets = txn.get(b'__targets__', default=None)
if self.targets is not None:
                self.targets = pickle.loads(self.targets)
self.idx_byte_length = int(numpy.ceil(numpy.floor(numpy.log2(self.length))/8.))
self.logger.info("Initialized db (%s) with length %d, and channel names %s" % (self.db_path, self.length, " ".join(self.names)))
def set_channels_of_interest(self, arr):
self.channels_of_interest = arr
self.logger.info("Channels of interest set to %s" % " ".join([self.names[i] for i in arr]))
def get_read_txn(self):
return self.env.begin(write=False)
def get_write_txn(self):
return self.env.begin(write=True)
def get_masked_image(self, idx, only_coi = False, txn = None):
ret = self.get_image(idx, only_coi, txn)
if len(ret) > 2:
return numpy.multiply(ret[0], ret[1]), ret[2]
else:
return numpy.multiply(ret[0], ret[1])
def get_image(self, idx, only_coi = False, txn = None):
if idx in self.dropped:
raise ValueError("Index is dropped. Call `reset` to reset dropped elements.")
if txn is None:
with self.get_read_txn() as txn:
i, m = pickle.loads(txn.get(int(idx).to_bytes(self.idx_byte_length, "big")))
else:
i, m = pickle.loads(txn.get(int(idx).to_bytes(self.idx_byte_length, "big")))
if only_coi:
i = i[self.channels_of_interest]
m = m[self.channels_of_interest]
if self.targets is None:
return numpy.array(i), numpy.array(m)
return numpy.array(i), numpy.array(m), self.targets[idx]
def get_images(self, idx, only_coi = False, only_image=False, masked=False):
func = self.get_masked_image if masked else self.get_image
with self.get_read_txn() as txn:
ret = []
if only_image:
for i in idx:
try:
ret.append(func(i, only_coi, txn)[0])
except ValueError:
pass
else:
for i in idx:
try:
ret.append(func(i, only_coi, txn))
except ValueError:
pass
return ret
def __iter__(self):
self.pointer = 0
return self
def __next__(self):
if self.pointer < self.length:
while self.pointer in self.dropped: # skip virtually dropped instances
self.pointer += 1
self.pointer += 1
return self.get_image(self.pointer-1)
else:
raise StopIteration()
def drop(self, idx):
self.dropped.update(idx)
    def drop_commit(self):
        # NOTE: the environment is opened with readonly=True in __init__, so this write
        # transaction will fail unless the database is reopened with write access.
        with self.get_write_txn() as txn:
for idx in self.dropped:
txn.delete(
int(idx).to_bytes(self.idx_byte_length, "big")
)
def reset(self):
self.dropped = set()
def __del__(self):
self.env.close()
def __len__(self):
return self.length - len(self.dropped)
def __repr__(self):
return "db %s, length %d, channels %s" % (self.db_path.split("/")[-1], self.length, " ".join(self.names))
# +
# export
class multidbwrapper:
def __init__(self, db_paths, channels=[]):
self.db_paths = db_paths
self.dbs = []
self.db_start_index = []
self.length = 0
tmp_channels = []
self.__targets = []
for db_path in self.db_paths:
db = imglmdb(db_path)
self.length += len(db)
tmp_channels.append(db.names)
self.__targets.extend(db.targets)
self.__targets = numpy.array(self.__targets)
self.__label_offset = self.__targets.min()
self.__targets -= self.__label_offset
self.__classes = numpy.unique(self.__targets)
if all(len(i) == len(tmp_channels[0]) for i in tmp_channels):
if len(channels) > 0:
self.__channels_of_interest = channels
else:
self.__channels_of_interest = [i for i in range(len(tmp_channels[0]))]
else:
raise ValueError("Not all DBs contain the same amount of channels.")
@property
def targets(self):
return self.__targets
@property
def classes(self):
return self.__classes
@property
def dtype(self):
return numpy.float32
@property
def channels_of_interest(self):
return self.__channels_of_interest
@property
def label_offset(self):
return self.__label_offset
def __len__(self):
return self.length
def __str__(self):
return f"{self.length} instances from {len(self.dbs)} databases."
def _setup(self):
"""Private function used to setup databases. Required for multiprocessing.
"""
i = 0
for db_path in self.db_paths:
db = imglmdb(db_path)
db.set_channels_of_interest(self.channels_of_interest)
self.dbs.append(db)
self.db_start_index.append(i)
i += len(db)
self.db_start_index = numpy.array(self.db_start_index)
def get_image(self, index, only_coi=True):
"""Fetches instance from database based on index.
Arguments:
index {int} -- Index of instance to be fetched
Returns:
tuple -- image [, label]
"""
if len(self.dbs) == 0:
self._setup()
db_idx = sum(self.db_start_index - index <= 0)-1
db = self.dbs[db_idx]
start_idx = self.db_start_index[db_idx]
image, mask, label = db.get_image(index-start_idx, only_coi=only_coi)
return image, mask, label-self.label_offset
# -
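# Usage sketch -- the path below is a placeholder (an assumption, not shipped with
# this repo); the existence check keeps the cell runnable when no database is present.
_example_db_path = "data/example_images.lmdb"  # hypothetical location
if os.path.exists(_example_db_path):
    _db = imglmdb(_example_db_path)
    _db.set_channels_of_interest([0])              # restrict to the first channel
    _img, _mask = _db.get_image(0, only_coi=True)[:2]
    print(len(_db), _db.names, _img.shape)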
| notebooks/imglmdb.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + active=""
# <NAME>
# CIS 401 Research
# Place the Period Problem
# Process Punctuation
# -
import os
import codecs
import re
from string import punctuation
def unicode_apostrophes(doc):
''' (str) -> str
Translates unicode apostrophes to a generic apostrophe.
'''
d = {u'\u2019':'\'', u'\u2018':'\'', u'\u201C':'"', u'\u201D':'"'}
t = str.maketrans(d)
return doc.translate(t)
def reg_exp_special_periods(doc):
''' (str) -> str
Uses regular expressions to remove periods for special cases. A period is supposed to be a full stop in our context.
'''
doc = re.sub(r" \D\. ", ' ', doc) # handles middle initials
doc = re.sub(r"St\. ", 'St ', doc) # handling some titles
doc = re.sub(r"Mrs\. ", 'Mrs ', doc)
doc = re.sub(r"Ms\. ", 'Ms ', doc)
doc = re.sub(r"Mr\. ", 'Mr ', doc)
doc = re.sub(r"Dr\. ", 'Dr ', doc)
doc = re.sub(r"Sr\. ", 'Sr ', doc)
doc = re.sub(r"Jr\. ", 'Jr ', doc)
doc = re.sub(r"So\. ", 'sophomore ', doc)
doc = re.sub(r"Fr\. ", 'freshman ', doc)
doc = re.sub(r"vs\. ", 'vs ', doc) # handling some Latin
doc = re.sub(r"et al\. ", 'et al ', doc)
return doc
def handle_acronyms(doc):
''' (str) -> str
Takes out periods from an acronym.
'''
words = doc.split()
doc = ''
for i in range(len(words)):
if words[i].count('.') > 1:
words[i] = words[i].replace('.','')
doc = doc + words[i] + ' '
return doc
def reg_exp_fixes(doc):
''' (str) -> str
Uses regular expressions to modify some words for Google's Word2Vec model.
Most substitutions deal with numbers and decimal places.
'''
doc = re.sub(r"\((.*?)\)", '', doc) # remove phrases in parentheses
doc = re.sub(r"\[(.*?)\]", '', doc) # remove phrases in brackets
doc = re.sub(r"\.\.\.", '. ', doc)
doc = re.sub(r"\'s", '', doc) # handles possessives
doc = re.sub(r"\d+st", ' number ', doc) # handles numbers
doc = re.sub(r"\d+rd", ' number ', doc)
doc = re.sub(r"\d+nd", ' number ', doc)
doc = re.sub(r"\d+th", ' number ', doc)
doc = re.sub(r"\d+s", ' number ', doc)
doc = re.sub(r"\d+\.\d+", ' number ', doc)
doc = re.sub(r" \.\d+", ' number ', doc)
doc = re.sub(r"\d+", ' number ', doc)
return doc
def fix_punctuation(doc):
''' (str) -> str
Either removes punctuation or modifies it. Only sentence enders (!,?,.) and contractions remain.
'''
p = punctuation[1:3] + punctuation[7:12] + punctuation[14:20] + punctuation[22:]
d = {thing:'' for thing in p} # dictionary
d.update({'%':' percent ', '$':' dollars ',
'&':' ampersand ', '@':' at ',
'-':' ', u'\u2013':' ', u'\u2014':' ',
'!':' ! ', '?':' ? ', '.':' . '})
t = str.maketrans(d) # translation table
return doc.translate(t)
def strip_apostrophes(doc):
''' (str) -> str
Removes apostrophes that start or end a word. Does not affect contractions.
'''
words = doc.split()
words = [item.strip('\'') for item in words] # must split into words to strip apostrophes
doc = ''
for word in words:
doc = doc + word + ' '
return doc
def process_punctuation():
''' (None) -> None
Removes punctuation in .txt file that doesn't correspond to a contraction or a full stop.
Writes new .txt file to a new location in storage.
Calls strip_apostrophes, fix_punctuation, reg_exp_fixes, handle_acronyms, and reg_exp_special_periods.
'''
home = os.path.expanduser('~')
path = '\\Documents\\RULE\\UppsalaStudentCorpus\\USEtexts\\' # change file path accordingly
new_path = '\\Documents\\RULE\\UppsalaStudentCorpus\\USEdata\\' # change file path accordingly
for file in os.listdir(home + path):
with open(home + path + file, 'r') as f:
new_file = open(home + new_path + file, 'w')
document = unicode_apostrophes(f.read())
new_file.write(strip_apostrophes(fix_punctuation(handle_acronyms(reg_exp_fixes(reg_exp_special_periods(document))))))
new_file.close()
return None
process_punctuation()
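# Small demonstration of the cleaning pipeline on a single sentence (illustrative
# only; the file-based processing above is unchanged).
sample_sentence = u"Dr. Smith scored 95.5% on Oct. 3rd, didn\u2019t he?"
cleaned_sentence = strip_apostrophes(fix_punctuation(handle_acronyms(
    reg_exp_fixes(reg_exp_special_periods(unicode_apostrophes(sample_sentence))))))
print(cleaned_sentence)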
| PythonFiles/ProcessPunctuation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] deletable=true editable=true id="z67ODXlkbdE5" colab_type="text"
# <!--BOOK_INFORMATION-->
# <img align="left" style="padding-right:10px;" src="https://github.com/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/figures/PDSH-cover-small.png?raw=1">
#
# *This notebook contains an excerpt from the [Python Data Science Handbook](http://shop.oreilly.com/product/0636920034919.do) by <NAME>; the content is available [on GitHub](https://github.com/jakevdp/PythonDataScienceHandbook).*
#
# *The text is released under the [CC-BY-NC-ND license](https://creativecommons.org/licenses/by-nc-nd/3.0/us/legalcode), and code is released under the [MIT license](https://opensource.org/licenses/MIT). If you find this content useful, please consider supporting the work by [buying the book](http://shop.oreilly.com/product/0636920034919.do)!*
# + [markdown] deletable=true editable=true id="OU6-Z4_HbdE6" colab_type="text"
# <!--NAVIGATION-->
# < [Application: A Face Detection Pipeline](05.14-Image-Features.ipynb) | [Contents](Index.ipynb) | [Appendix: Figure Code](06.00-Figure-Code.ipynb) >
#
# <a href="https://colab.research.google.com/github/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/05.15-Learning-More.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open and Execute in Google Colaboratory"></a>
#
# + [markdown] id="bAxI5DHzbdE7" colab_type="text"
# # Further Machine Learning Resources
# + [markdown] deletable=true editable=true id="4PnLgBosbdE8" colab_type="text"
# This chapter has been a quick tour of machine learning in Python, primarily using the tools within the Scikit-Learn library.
# As long as the chapter is, it is still too short to cover many interesting and important algorithms, approaches, and discussions.
# Here I want to suggest some resources to learn more about machine learning for those who are interested.
# + [markdown] deletable=true editable=true id="TG2H_6E7bdE8" colab_type="text"
# ## Machine Learning in Python
#
# To learn more about machine learning in Python, I'd suggest some of the following resources:
#
# - [The Scikit-Learn website](http://scikit-learn.org): The Scikit-Learn website has an impressive breadth of documentation and examples covering some of the models discussed here, and much, much more. If you want a brief survey of the most important and often-used machine learning algorithms, this website is a good place to start.
#
# - *SciPy, PyCon, and PyData tutorial videos*: Scikit-Learn and other machine learning topics are perennial favorites in the tutorial tracks of many Python-focused conference series, in particular the PyCon, SciPy, and PyData conferences. You can find the most recent ones via a simple web search.
#
# - [*Introduction to Machine Learning with Python*](http://shop.oreilly.com/product/0636920030515.do): Written by <NAME> and <NAME>, this book includes a fuller treatment of the topics in this chapter. If you're interested in reviewing the fundamentals of Machine Learning and pushing the Scikit-Learn toolkit to its limits, this is a great resource, written by one of the most prolific developers on the Scikit-Learn team.
#
# - [*Python Machine Learning*](https://www.packtpub.com/big-data-and-business-intelligence/python-machine-learning): <NAME>'s book focuses less on Scikit-learn itself, and more on the breadth of machine learning tools available in Python. In particular, there is some very useful discussion on how to scale Python-based machine learning approaches to large and complex datasets.
# + [markdown] deletable=true editable=true id="uXT-5oD_bdE9" colab_type="text"
# ## General Machine Learning
#
# Of course, machine learning is much broader than just the Python world. There are many good resources to take your knowledge further, and here I will highlight a few that I have found useful:
#
# - [*Machine Learning*](https://www.coursera.org/learn/machine-learning): Taught by <NAME> (Coursera), this is a very clearly-taught free online course which covers the basics of machine learning from an algorithmic perspective. It assumes undergraduate-level understanding of mathematics and programming, and steps through detailed considerations of some of the most important machine learning algorithms. Homework assignments, which are algorithmically graded, have you actually implement some of these models yourself.
#
# - [*Pattern Recognition and Machine Learning*](http://www.springer.com/us/book/9780387310732): Written by <NAME>, this classic technical text covers the concepts of machine learning discussed in this chapter in detail. If you plan to go further in this subject, you should have this book on your shelf.
#
# - [*Machine Learning: a Probabilistic Perspective*](https://mitpress.mit.edu/books/machine-learning-0): Written by <NAME>, this is an excellent graduate-level text that explores nearly all important machine learning algorithms from a ground-up, unified probabilistic perspective.
#
# These resources are more technical than the material presented in this book, but to really understand the fundamentals of these methods requires a deep dive into the mathematics behind them.
# If you're up for the challenge and ready to bring your data science to the next level, don't hesitate to dive-in!
# + [markdown] deletable=true editable=true id="ygXzZLaJbdE9" colab_type="text"
# <!--NAVIGATION-->
# < [Application: A Face Detection Pipeline](05.14-Image-Features.ipynb) | [Contents](Index.ipynb) | [Appendix: Figure Code](06.00-Figure-Code.ipynb) >
#
# <a href="https://colab.research.google.com/github/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/05.15-Learning-More.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open and Execute in Google Colaboratory"></a>
#
| Learning Notes/Learning Notes ML - 15 Learning More.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + [markdown] toc=true
# <h1>Table of Contents<span class="tocSkip"></span></h1>
# <div class="toc"><ul class="toc-item"><li><span><a href="#By-using-the-scripting-interface" data-toc-modified-id="By-using-the-scripting-interface-1"><span class="toc-item-num">1 </span>By using the scripting interface</a></span><ul class="toc-item"><li><span><a href="#Time-varying-signal" data-toc-modified-id="Time-varying-signal-1.1"><span class="toc-item-num">1.1 </span>Time varying signal</a></span></li><li><span><a href="#Steady-signal" data-toc-modified-id="Steady-signal-1.2"><span class="toc-item-num">1.2 </span>Steady signal</a></span></li></ul></li><li><span><a href="#By-using-the-function-library" data-toc-modified-id="By-using-the-function-library-2"><span class="toc-item-num">2 </span>By using the function library</a></span><ul class="toc-item"><li><span><a href="#Time-varying-signal" data-toc-modified-id="Time-varying-signal-2.1"><span class="toc-item-num">2.1 </span>Time varying signal</a></span></li><li><span><a href="#Steady-signal" data-toc-modified-id="Steady-signal-2.2"><span class="toc-item-num">2.2 </span>Steady signal</a></span></li></ul></li></ul></div>
# -
#
# # How to compute acoustic Sharpness
# This tutorial explains how to use MOSQITO to compute the acoustic Sharpness of a signal. Two approaches are possible: the scripting interface and the function library. Users who just need to compute SQ metrics should preferably use the scripting interface; the function library approach is intended for users who would like to integrate MOSQITO functions into other software.
#
# ## By using the scripting interface
# ### Time varying signal
# An Audio object is first created by importing an audio file. In this example, the signal is imported from a .wav file. The tutorial [Audio signal basic operations](./tuto_signal_basic_operations.ipynb) gives more information about the syntax of the import and the other supported file types. It also explains how to plot the time signal, compute and plot its 1/3 octave band spectrum, compute its overall level, etc.
# +
# Add MOSQITO to the Python path
import sys
sys.path.append('..')
# %matplotlib notebook
# Import MOSQITO color sheme [Optional]
from mosqito import COLORS
# Import Audio class
from mosqito.classes.Audio import Audio
# Create an Audio object
woodpecker = Audio(
"../validations/loudness_zwicker/data/ISO_532-1/Annex B.5/Test signal 24 (woodpecker).wav",
calib=2 * 2 ** 0.5,
)
# -
# The acoustic sharpness is computed using the following command line. The function takes 2 input arguments:
# - "method", that can be set to "din", "bismarck", "aures" or "fastl" depending on the computation method chosen,
# - "skip", that corresponds to the cut of the transient effect at the beginning of the signal.
#
# The Loudness to be weighted is automatically computed by using the Zwicker method (see the corresponding [documentation](../documentation/loudness-time-varying.md) for more information)
woodpecker.compute_sharpness(method="din", skip=0.2)
# The preceding command computes the sharpness of the audio signal as a function of time. Its value can be plotted with the following command. The "time" argument indicates that the sharpness shall be plotted over time. The optional type_plot argument specifies the plot type (among "curve", "bargraph", "barchart" and "quiver"). The optional color_list argument specifies the color scheme used for the plots.
# %matplotlib notebook
woodpecker.sharpness["din"].plot_2D_Data(
"time",
type_plot="curve",
color_list=COLORS,
)
# ### Steady signal
# For a steady signal, the syntax is almost equivalent, see below.
# +
# Create an Audio object
bbnoise = Audio(
r"..\validations\sharpness\data\broadband_2500.wav",
is_stationary=True,
)
# Compute sharpness (method is set to "din" by default)
bbnoise.compute_sharpness(method="din")
# -
# ## By using the function library
# ### Time varying signal
# The commands below show how to compute the sharpness of a time varying signal by directly using the functions from MOSQITO.
# +
# Import useful packages
import numpy as np
import matplotlib.pyplot as plt
# Import MOSQITO functions
from mosqito.functions.shared.load import load
from mosqito.functions.sharpness.comp_sharpness import comp_sharpness
# Load signal
signal, fs = load(
"../validations/loudness_zwicker/data/ISO_532-1/Annex B.5/Test signal 24 (woodpecker).wav",
calib = 2 * 2**0.5
)
# Sharpness calculation
sharpness = comp_sharpness(False, signal, fs, method="din", skip=0.2)
# Plot
S = sharpness['values']
time = np.linspace(0,0.002*(S.size - 1),S.size)
plt.figure()
plt.plot(time, S)
plt.xlabel("Time [s]")
plt.ylabel("Sharpness, [acum]")
plt.show()
# -
# ### Steady signal
# The commands below show how to compute the sharpness of a steady signal by directly using the functions from MOSQITO.
# +
# Import useful packages
import numpy as np
import matplotlib.pyplot as plt
# Import MOSQITO functions
from mosqito.functions.shared.load import load
from mosqito.functions.sharpness.comp_sharpness import comp_sharpness
# Load signal
signal, fs = load(
r"..\validations\sharpness\data\broadband_2500.wav",
calib = 1
)
# Sharpness calculation
sharpness = comp_sharpness(True, signal, fs, method="din")
# -
# ---
from datetime import date
print("Tutorial generation date:", date.today().strftime("%B %d, %Y"))
| tutorials/tuto_sharpness.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] deletable=false editable=false nbgrader={"cell_type": "markdown", "checksum": "2b5ecc5d390fe3fdcc1d7048181fbcbb", "grade": false, "grade_id": "cell-3a49d0c736ae4826", "locked": true, "schema_version": 3, "solution": false, "task": false}
# # Project
#
# Welcome to the group project! The project is based on the [ACM RecSys 2021 Challenge](https://recsys-twitter.com/).
#
# - Detailed information about the task, submission and grading can be found in a [dedicates site on TUWEL](https://tuwel.tuwien.ac.at/mod/page/view.php?id=1217340).
# - Information about the dataset structure [on this site on TUWEL](https://tuwel.tuwien.ac.at/mod/page/view.php?id=1218810).
# -
team_name = "team_15"
team_members = [("<NAME>","01634838"),
("<NAME>","12037284"),
("<NAME>", "01302969"),
("<NAME>", "01304039"),
("<NAME>", "11843424")]
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "3c84ed38479c0195aaa2fa1ce3f7fece", "grade": false, "grade_id": "cell-07ef37bf8c0d782b", "locked": true, "schema_version": 3, "solution": false, "task": false}
print(team_name)
print(team_members)
# -
# ### Note:
# `evaluate_test_set` moved to Section **Item-Item Collaborative Filtering**.
# +
try:
    import pandas as pd
except ImportError:
    import sys
    if hasattr(sys, 'real_prefix'):
        # we are in a virtual env.
        # !pip3 install pandas
    else:
        # !pip3 install --user pandas
    import pandas as pd
import os
import re
import csv
import datetime
# -
path_to_data = '~/shared/data/project/training/'
val_path_to_data = '~/shared/data/project/validation/'
dataset_type = 'one_hour' # all_sorted, one_day, one_hour, one_week
val_dataset_type = "one_hour"
expanded_path = os.path.expanduser(path_to_data)
part_files = [os.path.join(expanded_path, f) for f in os.listdir(expanded_path) if dataset_type in f]
part_files = sorted(part_files, key = lambda x:x[-5:])
# +
all_features = ["text_tokens", "hashtags", "tweet_id", "present_media", "present_links", "present_domains",\
"tweet_type","language", "tweet_timestamp", "engaged_with_user_id", "engaged_with_user_follower_count",\
"engaged_with_user_following_count", "engaged_with_user_is_verified", "engaged_with_user_account_creation",\
"engaging_user_id", "enaging_user_follower_count", "enaging_user_following_count", "enaging_user_is_verified",\
"enaging_user_account_creation", "engagee_follows_engager", "reply", "retweet", "quote", "like"]
all_features_to_idx = dict(zip(all_features, range(len(all_features))))
# +
from sklearn.metrics import average_precision_score, log_loss
def calculate_ctr(gt):
positive = len([x for x in gt if x == 1])
ctr = positive/float(len(gt))
return ctr
def compute_rce(pred, gt):
cross_entropy = log_loss(gt, pred)
data_ctr = calculate_ctr(gt)
strawman_cross_entropy = log_loss(gt, [data_ctr for _ in range(len(gt))])
return (1.0 - cross_entropy/strawman_cross_entropy)*100.0
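# Quick illustration with synthetic numbers (not competition data): predictions that
# beat the constant-CTR baseline give RCE > 0, while predicting the baseline CTR
# itself gives an RCE of ~0.
_toy_gt = [1, 0, 0, 1, 0, 0, 0, 1]
_toy_good_pred = [0.9, 0.1, 0.2, 0.8, 0.1, 0.2, 0.1, 0.7]
_toy_baseline_pred = [calculate_ctr(_toy_gt)] * len(_toy_gt)
print(compute_rce(_toy_good_pred, _toy_gt), compute_rce(_toy_baseline_pred, _toy_gt))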
# +
val_expanded_path = os.path.expanduser(val_path_to_data)
val_part_files = [os.path.join(val_expanded_path, f) for f in os.listdir(val_expanded_path) if val_dataset_type in f]
val_part_files = sorted(val_part_files, key = lambda x:x[-5:])
val_part_files
val_data = pd.read_csv(val_part_files[0], delimiter='\x01', header=None, usecols=[2, 14, 20,21,22,23])
val_data.columns = ["tweet_id", "engaging_user_id", 'reply', 'retweet', 'quote', 'like']
val_data.reply = (~val_data.reply.isna()).astype("int")
val_data.retweet = (~val_data.retweet.isna()).astype("int")
val_data.quote = (~val_data.quote.isna()).astype("int")
val_data.like = (~val_data.like.isna()).astype("int")
val_data.to_csv('gt_validation.csv')
val_data
# -
# ## User-User Collaborative Filtering
#
# #### Authors: <NAME>, <NAME>
# +
all_features = ["text_tokens", "hashtags", "tweet_id", "present_media", "present_links", "present_domains",\
"tweet_type","language", "tweet_timestamp", "engaged_with_user_id", "engaged_with_user_follower_count",\
"engaged_with_user_following_count", "engaged_with_user_is_verified", "engaged_with_user_account_creation",\
"engaging_user_id", "enaging_user_follower_count", "enaging_user_following_count", "enaging_user_is_verified",\
"enaging_user_account_creation", "engagee_follows_engager", "reply_timestamp", "retweet_timestamp", "retweet_with_comment_timestamp", "like_timestamp"]
all_features_to_idx = dict(zip(all_features, range(len(all_features))))
# -
def load_data(filename):
data = pd.read_csv(filename, sep='\x01', names=all_features, index_col=False)
return data
# +
data = load_data(path_to_data + dataset_type)
# We choose first 5k rows in order to work faster with the data
data = data.head(5000)
# -
data.head()
# +
def columns_to_list(data, columns):
for col in columns:
data[col] = data[col].str.split('\t')
return data
def columns_to_timestamps(data, columns):
for col in columns:
data[col] = data[col].apply(lambda x: pd.Timestamp(x, unit='s'))
return data
cols_to_list = ['text_tokens', 'hashtags', 'present_media', 'present_links', 'present_domains']
data = columns_to_list(data, cols_to_list)
cols_to_timestamps = ['tweet_timestamp', 'enaging_user_account_creation', 'reply_timestamp', 'retweet_timestamp', 'retweet_with_comment_timestamp', 'like_timestamp']
data = columns_to_timestamps(data, cols_to_timestamps)
# -
pd.set_option('display.max_columns', None)
print(data.shape)
display(data.head(50))
# ### Splitting dataset into train and test
# Splitting the training set - one hour into train and test data. The training dataset is used for model training and the test dataset for testing the trained model
# +
from sklearn.model_selection import train_test_split
train_data, test_data = train_test_split(data, test_size= 0.20, random_state=42)
# -
train_data.head()
test_data.head()
# ### Evaluation
# +
def true_timestamp(t):
return int(not pd.isnull(t))
def labels(j):
to_copy = test_data.copy()
to_copy['labed'] = to_copy.apply(lambda row: true_timestamp(row[j]), axis=1)
return to_copy[['tweet_id', 'engaging_user_id', 'labed']]
def read_predictions(file):
filename = os.path.basename(file)
#print(filename)
if (filename.startswith('gt')):
to_sort = pd.read_csv(file, names=['tweet_id', 'engaging_user_id', 'labed'], header=0)
sort = to_sort.sort_values(['tweet_id', 'engaging_user_id', 'labed'])
elif (filename.startswith('pred')):
to_sort = pd.read_csv(file, names=['tweet_id', 'engaging_user_id', 'prediction'], header=0)
sort = to_sort.sort_values(['tweet_id', 'engaging_user_id', 'prediction'])
return sort
#ground truth for retweet
gt_retweet = labels('retweet_timestamp')
gt_retweet.to_csv('gt_retweet.csv')
print(read_predictions('gt_retweet.csv')[:10])
#ground truth for reply
gt_reply = labels('reply_timestamp')
gt_reply.to_csv('gt_reply.csv')
print(read_predictions('gt_reply.csv')[:10])
#ground truth for like
gt_like = labels('like_timestamp')
gt_like.to_csv('gt_like.csv')
print(read_predictions('gt_like.csv')[:10])
#ground truth for retweet with comment
gt_rc = labels('retweet_with_comment_timestamp')
gt_rc.to_csv('gt_rc.csv')
print(read_predictions('gt_rc.csv')[:10])
# -
# ### Create a Ratings Matrix
# One ratings matrix for each engagement type
# +
# creating arrays of unique tweet ids and unique user ids (engaging and engaged-with users)
uTID = data['tweet_id'].unique()
uTID.sort()
uUID = data['engaging_user_id'].append(data['engaged_with_user_id']).unique()
uUID.sort()
m = len(uUID)
n = len(uTID)
#creating internal ids for the users and the tweets
userId_to_userIDX = dict(zip(uUID, range(m)))
userIDX_to_userId = dict(zip(range(m), uUID))
tweetId_to_tweetIDX = dict(zip(uTID, range(n)))
tweetIDX_to_tweetId = dict(zip(range(n), uTID))
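# Sanity check of the id mappings (illustrative only): mapping an id to its internal
# index and back should return the original id.
_check_uid = uUID[0]
assert userIDX_to_userId[userId_to_userIDX[_check_uid]] == _check_uid
print(m, n)  # number of unique users and tweets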
# +
#creating a dataframe for the upcoming implementation of the ratings matrix
j = ['tweet_id', 'engaging_user_id', 'reply_timestamp', 'retweet_timestamp',
'retweet_with_comment_timestamp', 'like_timestamp']
ratings = pd.concat([data['engaging_user_id'].map(userId_to_userIDX),
data['tweet_id'].map(tweetId_to_tweetIDX),
data['reply_timestamp'].notnull(),
data['retweet_timestamp'].notnull(),
data['retweet_with_comment_timestamp'].notnull(),
data['like_timestamp'].notnull()], axis = 1)
ratings.columns = ['user', 'tweet', 'reply', 'retweet', 'retweet_with_comment', 'like']
ratings.sort_values(['user', 'tweet'], inplace = True)
ratings.head(n = 20)
# +
from scipy import sparse as sp
#creating the ratings matrices
RM_reply = sp.csr_matrix((ratings.reply[ratings.reply], (ratings.user[ratings.reply], ratings.tweet[ratings.reply])),
shape=(m, n))
RM_retweet = sp.csr_matrix((ratings.retweet[ratings.retweet], (ratings.user[ratings.retweet], ratings.tweet[ratings.retweet])),
shape=(m, n))
RM_retweet_wc = sp.csr_matrix((ratings.retweet_with_comment[ratings.retweet_with_comment], (ratings.user[ratings.retweet_with_comment] , ratings.tweet[ratings.retweet_with_comment])), shape=(m, n))
RM_like = sp.csr_matrix((ratings.like[ratings.like], (ratings.user[ratings.like], ratings.tweet[ratings.like])),
shape=(m, n))
display(RM_reply.shape, RM_reply.count_nonzero())
display(RM_retweet.shape, RM_retweet.count_nonzero())
display(RM_retweet_wc.shape, RM_retweet_wc.count_nonzero())
display(RM_like.shape, RM_like.count_nonzero())
# -
# ### User-User Similarity
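# The code below implements plain cosine similarity between two users' engagement rows:
#
# $$sim(u, v) = \frac{u \cdot v}{\lVert u \rVert \, \lVert v \rVert}$$
#
# with the convention that the similarity is 0 whenever either vector has zero norm.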
# +
from scipy.sparse.linalg import norm
def compute_pairwise_user_similarity(u_id, v_id, RM_type):
u = RM_type[u_id,:].copy()
v = RM_type[v_id,:].copy()
#cosine similarity formula from the slides based on the vector operations defined above
numerator = u.dot(v.T).A.item()
denominator = norm(u)*norm(v)
    if denominator == 0:
        similarity = 0.
    else:
        similarity = numerator/denominator
return similarity
# -
#testing the function above
display(compute_pairwise_user_similarity(15, 5256, RM_reply))
display(compute_pairwise_user_similarity(5256, 1642, RM_retweet))
display(compute_pairwise_user_similarity(1642, 5422, RM_retweet_wc))
display(compute_pairwise_user_similarity(5422, 15, RM_like))
# ### User to all Users Similarity
# +
import numpy as np
def compute_user_similarities(u_id, RM_type):
uU = np.empty((m,))
#computing similarities of user u_id with all of the other users
for v_id in range(m):
uU[v_id] = compute_pairwise_user_similarity(u_id, v_id, RM_type)
return uU
# +
# Test
uU = compute_user_similarities(15, RM_reply)
display(uU[1])
uU = compute_user_similarities(5256, RM_retweet)
display(uU[50])
uU = compute_user_similarities(1642, RM_retweet_wc)
display(uU[10])
uU = compute_user_similarities(5422, RM_like)
display(uU[10])
# -
# ### User Neighbourhood
# +
#transforming from sparse matrix to dictionary of keys for easier handling
RM_reply_dok = RM_reply.todok()
RM_retweet_dok = RM_retweet.todok()
RM_retweet_wc_dok = RM_retweet_wc.todok()
RM_like_dok = RM_like.todok()
k = 10
def create_user_neighborhood(u_id, i_id, RM_type, RM_type_dok):
nh = {} ## the neighborhood dict with (user id: similarity) entries
## nh should not contain u_id and only include users that have rated i_id; there should be at most k neighbors
uU = compute_user_similarities(u_id, RM_type)
uU_copy = uU.copy() ## so that we can modify it, but also keep the original
sorted_values = np.argsort(uU_copy)[::-1]
#counter for k neighbours
ik = 0
for i in sorted_values:
# checking if i gave a rating to item i_id and making sure i is different from itself
if (i, i_id) in RM_type_dok and i!=u_id:
nh[i] = uU_copy[i]
ik+=1
if ik == k:
break
return nh
# +
# Test neighborhood
nh = create_user_neighborhood(15, 595, RM_reply, RM_reply_dok)
display(nh)
nh = create_user_neighborhood(5256, 437, RM_retweet, RM_retweet_dok)
display(nh)
nh = create_user_neighborhood(1642, 27, RM_retweet_wc, RM_retweet_wc_dok)
display(nh)
nh = create_user_neighborhood(5422, 609, RM_like, RM_like_dok)
display(nh)
# -
# Unfortunately most user neighborhoods are empty.
# ### Predict Ratings
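# The prediction below is the similarity-weighted average of the neighbours' engagements:
#
# $$\hat{r}_{u,i} = \frac{\sum_{v \in N(u)} sim(u, v) \, r_{v,i}}{\sum_{v \in N(u)} \lvert sim(u, v) \rvert}$$
#
# and falls back to 0 when the neighbourhood is empty.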
def predict_internal_ids(u_id, i_id, RM_type, RM_type_dok):
if (u_id, i_id) in RM_type_dok:
print("user", u_id, "has engaged with item", i_id, "with", RM_type[u_id, i_id])
else:
print("user", u_id, "has not engaged with item", i_id)
print("k:", k)
nh = create_user_neighborhood(u_id, i_id, RM_type, RM_type_dok)
neighborhood_weighted_avg = 0.
numerator = 0.
denominator = 0.
    # iterate over (neighbour id, similarity) pairs; indexing nh with the full tuple would raise a KeyError
    for v, sim in nh.items():
        numerator += sim * RM_type[v, i_id]
        denominator += np.absolute(sim)
    if denominator == 0:
        neighborhood_weighted_avg = 0.
    else:
        neighborhood_weighted_avg = numerator/denominator
prediction = neighborhood_weighted_avg
return prediction
#test
predict_internal_ids(15, 595, RM_reply, RM_reply_dok)
def predict_external_ids(tweet_id, engaging_user_id, RM_type, RM_type_dok):
print("user", engaging_user_id, "has internal id ", userId_to_userIDX[engaging_user_id])
print("tweet", tweet_id, "has internal id ", tweetId_to_tweetIDX[tweet_id])
return predict_internal_ids(userId_to_userIDX[engaging_user_id],tweetId_to_tweetIDX[tweet_id], RM_type, RM_type_dok)
# +
#testing different external ids
print("Reply")
predict_external_ids("00F23FACF2C4F78E32E86C0E60971078", "CC9AAACEEC69EAC26ED1FE87409C4440", RM_reply, RM_reply_dok)
print("")
print("Retweet")
predict_external_ids("00F23FACF2C4F78E32E86C0E60971078", "CC9AAACEEC69EAC26ED1FE87409C4440", RM_retweet, RM_retweet_dok)
print("")
print("Retweet with Comment")
predict_external_ids("00F23FACF2C4F78E32E86C0E60971078", "CC9AAACEEC69EAC26ED1FE87409C4440", RM_retweet_wc, RM_retweet_wc_dok)
print("")
print("Like")
predict_external_ids("DE1604F4816F6B8BD85A9478AE9D32E9", "F343F23E25FF1D7041E31E0CF4D026AD", RM_like, RM_like_dok)
# -
# ## Item-Item Collaborative Filtering
# #### Author: <NAME>
from model import *
# %%time
iicf = IICF(path_to_data, "one_day")
# +
import os
import re
import csv
import datetime
def evaluate_test_set(path_to_data, dataset_type):
expanded_path = os.path.expanduser(path_to_data)
part_files = [os.path.join(expanded_path, f) for f in os.listdir(expanded_path) if dataset_type in f]
part_files = sorted(part_files, key = lambda x:x[-5:])
i = 0
with open('results.csv', 'w') as output:
for file in part_files:
with open(file, 'r') as f:
linereader = csv.reader(f, delimiter='\x01')
last_timestamp = None
for row in linereader:
i += 1
# custom feature parser
tweet_id, user_id, features, follow, tweet_timestamp = iicf.parse_input_features(row)
# predict all targets at once for speed
reply_pred, retweet_pred, quote_pred, fav_pred = iicf.predict(tweet_id, user_id, features, follow)
# print(str(tweet_timestamp))
# print(str(reply_pred)+" "+str(retweet_pred)+" "+str(quote_pred)+" "+str(fav_pred))
# keep output structure
output.write(f'{tweet_id},{user_id},{reply_pred},{retweet_pred},{quote_pred},{fav_pred}\n')
if i % 1000 == 0:
print(f"Predicted {i} rows.", end="\r")
print(f"Predicted {i} rows.")
# -
# %%time
evaluate_test_set(val_path_to_data, val_dataset_type)
results = pd.read_csv("results.csv", header=None)
results.columns = ["tweet_id", "user_id", "reply", "retweet", "quote", "like"]
results
print("Retweet scores:")
compute_rce(results.retweet, val_data.retweet), average_precision_score(val_data.retweet, results.retweet)
print("Quote scores:")
compute_rce(results.quote, val_data.quote), average_precision_score(val_data.quote, results.quote)
print("Like scores:")
compute_rce(results.like, val_data.like), average_precision_score(val_data.like, results.like)
del iicf # free up memory
# ## Content-Based Recommender
# #### Author: <NAME>
#
# **Unfinished Code**
# ```
# from sklearn.feature_extraction.text import TfidfVectorizer
# from sklearn.metrics.pairwise import linear_kernel
#
# vectorizer = TfidfVectorizer()
#
# tfidf_text_tokens = vectorizer.fit_transform(map(str, df.text_tokens))
#
#
# tweet_engaged_with_user_like = r_df.loc[(r_df['user'] == 4) & (r_df['like'] == True)]['tweet']
# tweet_engaged_with_user_reply = r_df.loc[(r_df['user'] == 4) & (r_df['reply'] == True)]['tweet']
# tweet_engaged_with_user_retweet = r_df.loc[(r_df['user'] == 4) & (r_df['retweet'] == True)]['tweet']
# tweet_engaged_with_user_retweet_wc = r_df.loc[(r_df['user'] == 4) & (r_df['retweet_wc'] == True)]['tweet']
#
#
# def get_tweet_ids_engaged_by_user_id(user_id):
# return np.array(r_df.loc[ r_df['user'] == user_id ].index)
#
# def get_item_vector(user_id):
# tweet_ids_engaged_by_user_id = get_tweet_ids_engaged_by_user_id(user_id)
# return tfidf_text_tokens[tweet_ids_engaged_by_user_id]
#
# def get_user_engagements(user_id, engagement_type):
# return np.array( r_df.loc[ r_df['user'] == user_id ][engagement_type] )
#
#
# import sklearn.preprocessing as pp
#
# def compute_user_profile_by_rating(user_ratings):
# user_rating_weight = tfidf_vector.T.multiply(user_ratings)
# user_profile = user_rating_weight.mean(axis=1).T
# return pp.normalize(user_profile)
#
# def compute_user_profile(user_id, engagement_type):
# user_ratings = get_user_engagements(user_id, engagement_type)
# return compute_user_profile_by_rating(user_ratings)
#
#
#
# user_id = 3
# tweet_ids_engaged_by_user_id = get_tweet_ids_engaged_by_user_id(user_id)
# tfidf_vector = get_item_vector(user_id)
# user_like_engagements = get_user_engagements(user_id, 'like')
#
# print(tweet_ids_engaged_by_user_id)
# print(user_like_engagements)
#
# user_profile = compute_user_profile(user_id, 'like')
#
#
# print(user_profile[user_profile.nonzero()])
#
# def recommend(user_profile, topN=20):
# sims = linear_kernel(user_profile, tfidf_text_tokens).flatten()
# sims = sims.argsort()[::-1]
# sim_item_ids = np.array(r_df.iloc[sims]['tweet'])
#
# return list(filter(
# (lambda item_id: item_id not in tweet_ids_engaged_by_user_id), sim_item_ids
# ))[:topN]
#
# recommendations = recommend(user_profile)
# print(recommendations)
#
#
# def map_tweetIDX_to_tweetID(ids):
# tweet_id_map = pd.DataFrame()
# tweet_id_map['tweet_id'] = df['tweet_id']
# tweet_id_map['tweet'] = df['tweet_id'].map(tweetId_to_tweetIDX)
# return tweet_id_map.loc[tweet_id_map['tweet'].isin(ids)]['tweet_id']
#
#
# recommended_tweet_ids = map_tweetIDX_to_tweetID(recommendations)
#
#
# columns = ['tweet_id', 'like_timestamp']
# gt_predictions = df.loc[df['tweet_id'].isin(recommended_tweet_ids)][columns]
# hit = gt_predictions['like_timestamp'].count()
# n = len(gt_predictions.index)
# ap = hit / n
# print(ap)
# ```
# ## Fairness
# #### Author: <NAME>
# +
def read_predictions_fairness(path):
pred = pd.read_csv(path, header=None)
return pred
def read_predictions(path, columns_flag=False):
if columns_flag:
names = ['tweet_id', 'engaging_user_id', 'reply', 'retweet', 'quote', 'like']
pred = pd.read_csv(path, header=None, names=names)
else:
pred = pd.read_csv(path)
return pred
# -
def parse_line(row):
tweet_id = row[all_features_to_idx['tweet_id']]
user_id = row[all_features_to_idx['engaging_user_id']]
follower_count= int(row[all_features_to_idx["engaged_with_user_follower_count"]])
following_count = int(row[all_features_to_idx["engaged_with_user_following_count"]])
verified = bool(row[all_features_to_idx["engaged_with_user_is_verified"]])
return tweet_id, user_id, follower_count, following_count, verified
expanded_path = os.path.expanduser(val_path_to_data)
part_files = [os.path.join(expanded_path, f) for f in os.listdir(expanded_path) if dataset_type in f]
part_files = sorted(part_files, key = lambda x:x[-5:])
# +
def get_tweet_ids(path):
tweet_ids = {}
i = 0
total_entries = 0
with open(path, 'r') as f:
linereader = csv.reader(f, delimiter='\x01')
for row in linereader:
tweet_id = row[all_features_to_idx["tweet_id"]]
#print(tweet_id)
if tweet_id not in tweet_ids:
tweet_ids[tweet_id] = i
i += 1
total_entries += 1
return tweet_ids
def get_user_ids(path):
user_ids = {}
i = 0
with open(path, 'r') as f:
linereader = csv.reader(f, delimiter='\x01')
for row in linereader:
user_id = row[all_features_to_idx["engaging_user_id"]]
#print(user_id)
if user_id not in user_ids:
user_ids[user_id] = i
i += 1
return user_ids
tweet_ids = get_tweet_ids(part_files[0])
user_ids = get_user_ids(part_files[0])
# +
def tweets_data(dataset_type):
    # collect plain dicts and build the frame once at the end;
    # appending to a DataFrame inside the loop is very slow (and DataFrame.append is deprecated)
    rows = []
    for file in part_files:
        with open(file, 'r') as f:
            linereader = csv.reader(f, delimiter='\x01')
            for i, row in enumerate(linereader):
                tweet_id, user_id, follower_count, following_count, verified = parse_line(row)
                rows.append({'tweet_id': tweet_ids[tweet_id], 'engaging_user_id': user_ids[user_id],
                             'follower_count': follower_count, 'following_count': following_count,
                             'verified': verified})
    return pd.DataFrame(rows, columns=['tweet_id', 'engaging_user_id', 'follower_count', 'following_count', 'verified'])
tweet_groups = tweets_data(val_dataset_type)
tweet_groups = tweets_data(val_dataset_type)
# -
# ### Group by popularity
# +
def group_by_followers(df):
    data = df.copy()
    # reset the index after sorting so that labels 0..n-1 follow the sorted order;
    # otherwise the .loc assignments below would group rows by their original order instead of by follower count
    data = data.sort_values(by='follower_count', ascending=False).reset_index(drop=True)
    data['group'] = np.zeros((len(data)), dtype=np.int32)
    fifth = round(len(data)/5)
    for i in range(0, fifth):
        data.loc[i, 'group'] = 0
    for i in range(fifth, 2*fifth):
        data.loc[i, 'group'] = 1
    for i in range(2*fifth, 3*fifth):
        data.loc[i, 'group'] = 2
    for i in range(3*fifth, 4*fifth):
        data.loc[i, 'group'] = 3
    for i in range(4*fifth, len(data)):
        data.loc[i, 'group'] = 4
    return data
groups = group_by_followers(tweet_groups)
# +
ground_truth = read_predictions("gt_validation.csv")
predictions = read_predictions("results.csv", True)
predictions['tweet_id'] = predictions['tweet_id'].map(tweet_ids)
predictions['engaging_user_id'] = predictions['engaging_user_id'].map(user_ids)
ground_truth['tweet_id'] = ground_truth['tweet_id'].map(tweet_ids)
ground_truth['engaging_user_id'] = ground_truth['engaging_user_id'].map(user_ids)
# +
from sklearn.metrics import average_precision_score, log_loss
def get_rce_fairness(c):
pred_col = {"reply": 2, "retweet": 3, "quote": 4, "like": 5}
col = pred_col[c]
preds = pd.merge(predictions, groups[['engaging_user_id', 'group']], how='inner', on = 'engaging_user_id')
gts = pd.merge(ground_truth, groups[['engaging_user_id', 'group']], how='inner', on = 'engaging_user_id')
rce = {}
average_precision = {}
accuracy = {}
print('Total rce = {0}, average precision = {1}'.format(compute_rce(preds[c], gts[c]), average_precision_score(gts[c], preds[c])))
print('RCE for {}:'.format(preds.columns[col]))
for i in range(5):
group_predictions = preds.loc[preds['group'] == i]
group_ground_truth = gts.loc[gts['group'] == i]
try:
rce[i] = compute_rce(group_predictions[c], group_ground_truth[c])
average_precision[i] = average_precision_score(group_ground_truth[c], group_predictions[c])
print("Group {0}: rce = {1}, average precision = {2}".format(i, rce[i], average_precision[i]))
except Exception as e:
print(e)
# -
col = 'reply'
get_rce_fairness(col)
col = 'retweet'
get_rce_fairness(col)
col = 'quote'
get_rce_fairness(col)
col = 'like'
get_rce_fairness(col)
# ### Group by user verification
groups_verification = tweet_groups[['tweet_id', 'engaging_user_id', 'verified']]
# +
from sklearn.metrics import average_precision_score, log_loss
def get_rce_fairness_verified(c):
pred_col = {"reply": 2, "retweet": 3, "quote": 4, "like": 5}
col = pred_col[c]
preds = pd.merge(predictions, groups_verification[['engaging_user_id', 'verified']], how='inner', on = 'engaging_user_id')
gts = pd.merge(ground_truth, groups_verification[['engaging_user_id', 'verified']], how='inner', on = 'engaging_user_id')
rce = {}
average_precision = {}
accuracy = {}
print('Total rce = {0}, average precision = {1}'.format(compute_rce(preds[c], gts[c]), average_precision_score(gts[c], preds[c])))
print('RCE for {}:'.format(predictions.columns[col]))
group_predictions_true = preds.loc[preds['verified'] == True]
group_ground_truth_true = gts.loc[gts['verified'] == True]
try:
rce_true = compute_rce(group_predictions_true[c], group_ground_truth_true[c])
average_precision_true = average_precision_score(group_ground_truth_true[c], group_predictions_true[c])
print("Verified accounts: rce = {0}, average precision = {1}".format(rce_true, average_precision_true))
except Exception as e:
print(e)
group_predictions_false = preds.loc[preds['verified'] == False]
group_ground_truth_false = gts.loc[gts['verified'] == False]
try:
rce_false = compute_rce(group_predictions_false[c], group_ground_truth_false[c])
average_precision_false = average_precision_score(group_ground_truth_false[c], group_predictions_false[c])
print("Un-verified accounts: rce = {0}, average precision = {1}".format(rce_false, average_precision_false))
except Exception as e:
pass
# -
col = 'reply'
get_rce_fairness_verified(col)
col = 'retweet'
get_rce_fairness_verified(col)
col = 'quote'
get_rce_fairness_verified(col)
col = 'like'
get_rce_fairness_verified(col)
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "bf000a0073acaf52bcde389fa20cf1d6", "grade": true, "grade_id": "cell-d807d29f081e031b", "locked": true, "points": 15, "schema_version": 3, "solution": false, "task": false}
# hidden
# + deletable=false editable=false nbgrader={"cell_type": "code", "checksum": "bdcfa030c94d59246d7322f527c9ef7e", "grade": true, "grade_id": "cell-adf5f6bdd4704e08", "locked": true, "points": 0, "schema_version": 3, "solution": false, "task": false}
| submission_files/project.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: nba
# language: python
# name: nba
# ---
# # Finding Games
#
# To find a single game, use the `LeagueGameFinder` class.
# While you can call it without any arguments and get ~30,000 games returned (I believe that's the max number of rows that nba.com will send in a response) across the NBA, WNBA, G-League, and international ball, it's a better idea to pass a team ID.
#
# See `nba_api.stats.static.teams` in the [Basics Notebook](Basics.ipynb) for more detail on getting a team ID.
#
# Let's try to find the last time the Celtics played the Raptors in 2017-18.
# That will be four steps:
# 1. Fetch all Celtics games.
# 2. Select just games from the 2017-18 season (SEASON_ID ending in 2017).
# 3. Select games where the opponent is the Raptors (MATCHUP contains 'TOR').
# 4. Order by date and select the last row.
#
# And last, we'll also get the play-by-play data from that game.
# ### Get All Celtics Games
# +
from nba_api.stats.static import teams
nba_teams = teams.get_teams()
# Select the dictionary for the Celtics, which contains their team ID
celtics = [team for team in nba_teams if team['abbreviation'] == 'BOS'][0]
celtics_id = celtics['id']
# +
from nba_api.stats.endpoints import leaguegamefinder
# Query for games where the Celtics were playing
gamefinder = leaguegamefinder.LeagueGameFinder(team_id_nullable=celtics_id)
# The first DataFrame of those returned is what we want.
games = gamefinder.get_data_frames()[0]
games.head()
# -
# As you can see above, the season ID is 5 digits.
# I believe the last 4 will always be the current season (2018 for the 2018-19 season).
# We can do a sanity check and look at how many games the Celtics have played in recent years.
games.groupby(games.SEASON_ID.str[-4:])[['GAME_ID']].count().loc['2015':]
# Note that some of these games are preseason and summer league, so these numbers aren't just regular season and playoffs.
# ### Filter to Games in the 2017-18 Season
# Subset the games to when the last 4 digits of SEASON_ID were 2017.
games_1718 = games[games.SEASON_ID.str[-4:] == '2017']
games_1718.head()
# ### Filter to Games Against the Raptors
# Subset the games to where MATCHUP contains 'TOR'.
raps_games_1718 = games_1718[games_1718.MATCHUP.str.contains('TOR')]
raps_games_1718.head()
# ### Sort by Game Date and Select the Last Row
last_raps_game = raps_games_1718.sort_values('GAME_DATE').iloc[-1]
last_raps_game
# There it is.
#
# We can see the game was on April 4th, was in Toronto, and ended in an 18-point Raptors victory.
# It can be confusing to read this, but the row is all relative to the Celtics (the team we queried).
# All the stats (points, rebounds, blocks, plus/minus) are theirs.
#
# If we wanted stats for both teams (a common use case), we'd need to get *both* rows for this game ID.
# There are always two, one with stats for each team.
# So back to the `LeagueGameFinder`!
game_id = last_raps_game.GAME_ID
game_id
# Get **all** the games so we can filter to an individual GAME_ID
result = leaguegamefinder.LeagueGameFinder()
all_games = result.get_data_frames()[0]
# Find the game_id we want
full_game = all_games[all_games.GAME_ID == game_id]
full_game
# Two rows, one with the Celtics' stats and one with the Raptors'.
# You may want to join these two rows into one, so you have stats for both teams in the same observation.
#
# Because this is a common use case, I wrote a function for it.
# This function will work for larger datasets too (even though we just have one game here);
# you can run it on any game DataFrames.
# +
import pandas as pd
def combine_team_games(df, keep_method='home'):
'''Combine a TEAM_ID-GAME_ID unique table into rows by game. Slow.
Parameters
----------
df : Input DataFrame.
keep_method : {'home', 'away', 'winner', 'loser', ``None``}, default 'home'
- 'home' : Keep rows where TEAM_A is the home team.
- 'away' : Keep rows where TEAM_A is the away team.
        - 'winner' : Keep rows where TEAM_A is the winning team.
        - 'loser' : Keep rows where TEAM_A is the losing team.
- ``None`` : Keep all rows. Will result in an output DataFrame the same
length as the input DataFrame.
Returns
-------
result : DataFrame
'''
# Join every row to all others with the same game ID.
joined = pd.merge(df, df, suffixes=['_A', '_B'],
on=['SEASON_ID', 'GAME_ID', 'GAME_DATE'])
# Filter out any row that is joined to itself.
result = joined[joined.TEAM_ID_A != joined.TEAM_ID_B]
# Take action based on the keep_method flag.
if keep_method is None:
# Return all the rows.
pass
elif keep_method.lower() == 'home':
# Keep rows where TEAM_A is the home team.
result = result[result.MATCHUP_A.str.contains(' vs. ')]
elif keep_method.lower() == 'away':
# Keep rows where TEAM_A is the away team.
result = result[result.MATCHUP_A.str.contains(' @ ')]
elif keep_method.lower() == 'winner':
result = result[result.WL_A == 'W']
elif keep_method.lower() == 'loser':
result = result[result.WL_A == 'L']
else:
raise ValueError(f'Invalid keep_method: {keep_method}')
return result
# Combine the game rows into one. By default, the home team will be TEAM_A.
game_df = combine_team_games(full_game)
game_df
# -
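# As a quick illustration of the `keep_method` flag, the same game can also be combined so that the winning team's stats land in the `_A` columns instead of the home team's:
# +
# Keep the winning team as TEAM_A instead of the home team
winner_df = combine_team_games(full_game, keep_method='winner')
winner_df
# -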
# ## Play-by-play Data for a Given Game
# With a game ID (which we found above), we can easily fetch play-by-play data for that game.
from nba_api.stats.endpoints import playbyplayv2
pbp = playbyplayv2.PlayByPlayV2(game_id)
pbp = pbp.get_data_frames()[0]
pbp.head()
| docs/examples/Finding Games.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="DSUEMY8usgwJ" colab_type="text"
# # Lambda School Data Science - Survival Analysis
# 
#
# https://xkcd.com/881/
#
# The aim of survival analysis is to analyze the effect of different risk factors and use them to predict the duration of time between one event ("birth") and another ("death").
# + id="Woo0ubML3USW" colab_type="code" colab={}
#Probability density function
#Example: Hazard function
#"instantaneous likelihood of failure" ex.lifetime mortality U curve
#Cumulative Distribution Function
#CDF is the integral of the PDF
#it is the area under the curve of sigmoid
#y P event less than or equal to x
#example: Survival function
#Survival analysis is applied to retention; also known as time to event
#birth, death are general terms for the interval
#birth: person issued welfare
#death: person improves income and is no longer eligible
#survival answers how long the interval they are on welfare
#'data censorship' occurs when subject doesn't trigger death event
#or literally dies
#logistic reg doesn't have a way to interpret censorship, and survival barely does
#defaulting end of period as death or dropping are both bad
#lifeline library
# + [markdown] id="Xp-QnVAgfud4" colab_type="text"
# # Lecture
#
# Survival analysis was first developed by actuaries and medical professionals to predict (as its name implies) how long individuals would survive. However, it has expanded to include many different applications.
# * it is referred to as **reliability analysis** in engineering
# * it can be referred to more generally as **time-to-event analysis**
#
# In the general sense, it can be thought of as a way to model anything with a finite duration - retention, churn, completion, etc. The culmination of this duration may have a "good" or "bad" (or "neutral") connotation, depending on the situation. However old habits die hard, so most often it is called survival analysis and the following definitions are still commonly used:
#
# * birth: the event that marks the beginning of the time period for observation
# * death: the event of interest, which then marks the end of the observation period for an individual
#
# ### Examples
# * Customer churn
# * birth event: customer subscribes to a service
# * death event: customer leaves the service
# * Employee retention
# * birth event: employee is hired
# * death event: employee quits
# * Engineering, part reliability
# * birth event: part is put in use
# * death event: part fails
# * Program completion
# * birth event: student begins PhD program
# * death event: student earns PhD
# * Response time
# * birth event: 911 call is made
# * death event: police arrive
# * Lambda School
# * birth event: student graduates LambdaSchool
# * death event: student gets a job!
#
# Take a moment and try to come up with your own specific example or two.
#
# #### So... if all we're predicting here is a length of time between two events, why can't we just use regular old Linear Regression?
# Well... if you have all the data, go for it. In some situations it may be reasonably effective.
#
# #### But, data for survival times are often highly skewed and, more importantly, we don't always get a chance to observe the "death" event. The current time or other factors interfere with our ability to observe the time of the event of interest. These observations are said to be _censored_.
#
# Additionally, the occurrence or non-occurrence of an event is binary - so, while the time is continuous, the event itself is in some ways similar to a binary event in logistic regression.
#
# ## Censorship in Data
#
# Suppose a new cancer treatment is developed. Researchers select 50 individuals for the study to undergo treatment and participate in post-treatment observation.
#
# ##### Birth Event = Participant begins trial
# ##### Death Event = Participant dies due to cancer or complications of cancer
# During the study:
# 1. Some participants die during the course of the study--triggering their death event
# 2. Some participants drop out or the researchers otherwise lose contact with them. The researchers have their data up until the time they dropped out, but they don't have a death event to record
# 3. Some participants are still alive at the end of the observation period. So again, researchers have their data up until some point, but there is no death event to record
#
# We only know the interval between the "birth" event and the "death" event for participants in category 1. All others we only know that they survived _up to_ a certain point.
#
# ### Dealing with Censored Data
#
# Without survival analysis, we could deal with censored data in two ways:
# * We could just treat the end of the observation period as the time of the death event
# * (Even worse) We could drop the censored data using the rationale that we have "incomplete data" for those observations
#
# But... both of these will underestimate survival rates for the purpose of the study. We **know** that all those individuals "survived" the "death event" past a certain point.
#
# Luckily, in the 1980s a pair of smarty pants named David (main author Cox and coauthor Oakes) did the hard math work to make it possible to incorporate additional features as predictive measures to survival time probabilities. (Fun fact, the one named Cox also came up with logistic regression with non-David coauthor, <NAME>.)
#
# ## lifelines
# It wasn't until 2014 that some other smart people made an implementation of survival analysis in Python called lifelines.
# It is built over Pandas and follows the same conventions for usage as scikit-learn.
#
# _Additional note: there is also a scikit-learn-compatible survival analysis package named scikit-survival that is imported by the name `sksurv`. It's much newer than lifelines, so it may/may not have a bunch of bugs... but if you're interested you can check it out in the future. (For comparison, scikit-learn originally came out in 2007 and Pandas came out in 2008)._
# + id="ZrCb7TULsgwP" colab_type="code" outputId="bf8f4a62-70a7-4673-81f5-1f40ee4203c8" colab={"base_uri": "https://localhost:8080/", "height": 356}
# !pip install lifelines
# + id="E8rpzq9zsgwn" colab_type="code" colab={}
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import lifelines
# + id="d51G4sPqsgww" colab_type="code" outputId="4ef6d6ca-a9fe-4c42-a943-ad8979a5b73e" colab={"base_uri": "https://localhost:8080/", "height": 195}
# lifelines comes with some datasets to get you started playing around with it.
# Most of the datasets are cleaned-up versions of real datasets. Here we will
# use their Leukemia dataset comparing 2 different treatments taken from
# http://web1.sph.emory.edu/dkleinb/allDatasets/surv2datasets/anderson.dat
from lifelines.datasets import load_leukemia
leukemia = load_leukemia()
leukemia.head()
#log white blood cell count - higher is worse
#status - survival or death
#t - number of months observed
# + [markdown] id="xIlaPncgsgw7" colab_type="text"
# ### You can use any Pandas DataFrame with lifelines.
# ### The only requirement is that the DataFrame includes features that describe:
# * a duration of time for the observation
# * a binary column regarding censorship (`1` if the death event was observed, `0` if the death event was not observed)
#
# Sometimes, you will have to engineer these features. How might you go about that? What information would you need?
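#
# One hedged sketch of that feature engineering: assume a hypothetical frame with a `start` timestamp and a nullable `end` timestamp (these columns are stand-ins for whatever "birth"/"death" events you actually record, not part of the leukemia data).
# +
import pandas as pd

# Hypothetical raw observations: `end` is NaT while the death event has not been observed yet.
raw = pd.DataFrame({
    'start': pd.to_datetime(['2019-01-01', '2019-02-15', '2019-03-01']),
    'end':   pd.to_datetime(['2019-06-01', None, '2019-04-20']),
})
observation_cutoff = pd.Timestamp('2019-07-01')

# The event flag is 1 only if we actually saw the death event; duration runs to the event
# if observed, otherwise to the end of the observation window (a censored observation).
raw['event'] = raw['end'].notnull().astype(int)
raw['duration_days'] = (raw['end'].fillna(observation_cutoff) - raw['start']).dt.days
raw
# -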
# + id="DQ936c5tsgw-" colab_type="code" outputId="ebbeca9c-66e0-466d-f548-1c147285d5a5" colab={"base_uri": "https://localhost:8080/", "height": 185}
leukemia.info()
# + id="MDvA8Z9rsgxL" colab_type="code" outputId="03763942-3161-45cf-db85-01d279e359f0" colab={"base_uri": "https://localhost:8080/", "height": 284}
leukemia.describe()
#mean t - mean of time measured, NOT mean survival
# + id="tDasOEocsgxQ" colab_type="code" outputId="e4c96ca8-e356-46d9-b52a-d4ffc1b8c1bc" colab={"base_uri": "https://localhost:8080/", "height": 376}
time = leukemia.t.values
event = leukemia.status.values
ax = lifelines.plotting.plot_lifetimes(time, event_observed=event)
#red - death event
#blue - survived interval
#we can predict loss function is strongly sloping down
ax.set_xlim(0, 40)
ax.grid(axis='x')
ax.set_xlabel("Time in Months")
ax.set_title("Lifelines for Survival of Leukemia Patients");
plt.plot();
# + [markdown] id="5oEzZ_aqsgxV" colab_type="text"
# ## Kaplan-Meier survival estimate
#
# The Kaplan-Meier method estimates survival probability from observed survival times. It results in a step function that changes value only at the time of each event, and confidence intervals can be computed for the survival probabilities.
#
# The KM survival curve, a plot of KM survival probability against time, provides a useful summary of the data.
# It can be used to estimate measures such as median survival time.
#
# It CANNOT account for risk factors and is NOT regression. It is *non-parametric* (it does not assume any particular underlying distribution).
#
# However it is a good way to visualize a survival dataset, and can be useful to compare the effects of a single categorical variable.
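#
# Under the hood, the Kaplan-Meier estimate at time $t$ is a running product over the observed event times:
#
# $$\hat{S}(t) = \prod_{t_i \le t} \left(1 - \frac{d_i}{n_i}\right)$$
#
# where $d_i$ is the number of death events at time $t_i$ and $n_i$ is the number of subjects still at risk just before $t_i$.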
# + id="PH92kuyPCDeG" colab_type="code" colab={}
#visualization (non parametric)
# + id="5XoM5PzGsgxX" colab_type="code" outputId="2b014b6d-4f2e-414c-d745-daa725506814" colab={"base_uri": "https://localhost:8080/", "height": 34}
kmf = lifelines.KaplanMeierFitter()
kmf.fit(time, event_observed=event)
# + id="ZIskdbM1qkKV" colab_type="code" outputId="095fccde-4994-413c-f6e1-a4b45a6977c3" colab={"base_uri": "https://localhost:8080/", "height": 171}
# !pip install -U matplotlib # Colab has matplotlib 2.2.3, we need >3
# + id="8lnpLxmhsgxc" colab_type="code" outputId="00fb915e-b127-4de7-8325-498b24ebc8ec" colab={"base_uri": "https://localhost:8080/", "height": 393}
kmf.survival_function_.plot()
#survival % over time
#acts like a cdf flipped on y - decay of likelihood
#starts at 100% alive - only ever goes down
#20% likely to be alive by 25 months
plt.title('Survival Function Leukemia Patients');
print(f'Median Survival: {kmf.median_} months after treatment')
# + id="4eSYFEfzgjyp" colab_type="code" outputId="804e67f3-0236-4be1-93b1-85260354e518" colab={"base_uri": "https://localhost:8080/", "height": 378}
kmf.survival_function_.plot.line()
# + id="udKT7uBAsgxi" colab_type="code" outputId="dfdfac91-fb02-4729-b6c9-db24c0ea8db2" colab={"base_uri": "https://localhost:8080/", "height": 410}
ax = plt.subplot(111)
#binned kmf used to compare survival between various treatment groups
#significant as they don't overlap
#treatment 1 bin
treatment = (leukemia["Rx"] == 1)
kmf.fit(time[treatment], event_observed=event[treatment], label="Treatment 1")
kmf.plot(ax=ax)
print(f'Median survival time with Treatment 1: {kmf.median_} months')
#treatment 2 group bin
kmf.fit(time[~treatment], event_observed=event[~treatment], label="Treatment 0")
kmf.plot(ax=ax)
print(f'Median survival time with Treatment 0: {kmf.median_} months')
plt.ylim(0, 1);
plt.title("Survival Times for Leukemia Treatments");
# + [markdown] id="Ej1Zb4IYsgxr" colab_type="text"
# ## Cox Proportional Hazards Model -- Survival Regression
# It assumes the ratio of death event risks (hazard) of two groups remains about the same over time.
# This ratio is called the hazards ratio or the relative risk.
#
# All Cox regression requires is an assumption that ratio of hazards is constant over time across groups.
#
# *The good news* — we don’t need to know anything about overall shape of risk/hazard over time
#
# *The bad news* — the proportionality assumption can be restrictive
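#
# Formally, the model factors each subject's hazard into a shared, unspecified baseline and a covariate effect:
#
# $$h(t \mid x) = h_0(t) \, e^{\beta_1 x_1 + \dots + \beta_p x_p}$$
#
# so the ratio of any two subjects' hazards depends only on their covariates, not on time. That is exactly the proportionality assumption above, and it is why the baseline shape $h_0(t)$ never needs to be specified to interpret the coefficients.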
# + id="Skypo6ABsgxs" colab_type="code" outputId="e6ec38a2-0c2c-4b31-b863-01b7bf03034c" colab={"base_uri": "https://localhost:8080/", "height": 440}
# Using Cox Proportional Hazards model
#assumptions: death risk between each subject is the same
cph = lifelines.CoxPHFitter()
cph.fit(leukemia, 't', event_col='status')
cph.print_summary()
#exp(coef) - as unit of coef iterates risk multiplies by exp(coef)
#every logWBC*1.68 you are 5.38 more likely to die
# + [markdown] id="sIUr2gT7sgxz" colab_type="text"
# ## Interpreting the Results
# `coef`: usually denoted with $b$, the coefficient
#
# `exp(coef)`: $e^{b}$, equals the estimate of the hazard ratio. Here, we can say that participants who received treatment 1 had ~4.5 times the hazard risk (risk of death) compared to those who received treatment 2. And for every unit the `logWBC` increased, the hazard risk increased >5 times.
#
# `se(coef)`: standard error of the coefficient (used for calculating z-score and therefore p-value)
#
# `z`: z-score $\frac{b}{se(b)}$
#
# `p`: p-value. derived from z-score. describes statistical significance. more specifically, it is the likelihood that the variable has no effect on the outcome
#
# `log(p)`: natural logarithm of p-value... used to more easily see differences in significance
#
# `lower/upper 0.95`: confidence levels for the coefficients. in this case, we can confidently say that the coefficient for `logWBC` is somewhere _between_ 1.02 and 2.34.
#
# `Signif. codes`: easily, visually identify significant variables! The more stars, the more solid (simply based on p-value). Here `logWBC` is highly significant, `Rx` is significant, and `sex` has no statistical significance
#
# `Concordance`: a measure of predictive power for classification problems (here looking at the `status` column); a value from 0 to 1 where values above 0.6 are considered good fits (the higher the better)
#
# `Likelihood ratio (LR) test`: this is a measure of how likely it is that the coefficients are not zero, and can compare the goodness of fit of a model versus an alternative null model. Is often actually calculated as a logarithm, resulting in the log-likelihood ratio statistic and allowing the distribution of the test statistic to be approximated with [Wilks' theorem](https://en.wikipedia.org/wiki/Wilks%27_theorem).
# + id="SHPFMpUqsgx0" colab_type="code" outputId="0829c9c3-2b09-4050-9208-16c028995853" colab={"base_uri": "https://localhost:8080/", "height": 378}
#white blood cell at diagnosis bins
cph.plot_covariate_groups(covariate='logWBC', values=np.arange(1.5,5,.5))
# + id="hd02Nni_sgx7" colab_type="code" outputId="a95d3a68-bcf5-4a65-9ced-bd3abb4eecf8" colab={"base_uri": "https://localhost:8080/", "height": 378}
cph.plot_covariate_groups(covariate='sex', values=[0,1])
#sex has considerably less spread on survival by bin than wbc
#this suggests the wbc is more relevant than sex
# + [markdown] id="-JRvblsIsgyB" colab_type="text"
# ### Remember how the Cox model assumes the ratio of death events between groups remains constant over time?
# Well we can check for that.
# + id="XEG4bSlUsgyC" colab_type="code" outputId="289d3d97-6337-40c1-a935-8d9a1cc03f6f" colab={"base_uri": "https://localhost:8080/", "height": 860}
cph.check_assumptions(leukemia)
#variable sex failed non-proportional test
# + id="ZO_dFbKesgyG" colab_type="code" outputId="a83db192-ed8c-48d3-bbfc-9f0deb1933f9" colab={"base_uri": "https://localhost:8080/", "height": 378}
# We can see that the sex variable is not very useful by plotting the coefficients
cph.plot()
# + id="D1rwy5wMsgyL" colab_type="code" outputId="dc25e7d4-dd46-4b32-b566-a33cb42a2b70" colab={"base_uri": "https://localhost:8080/", "height": 347}
# Let's do what the check_assumptions function suggested
cph = lifelines.CoxPHFitter()
cph.fit(leukemia, 't', event_col='status', strata=['sex'])
cph.print_summary()
cph.baseline_cumulative_hazard_.shape
# + [markdown] id="Qn4HZjeGsgyP" colab_type="text"
# Notice that this regression has `Likelihood ratio test = 74.90 on 2 df, log(p)=-37.45`, while the one that included `sex` had `Likelihood ratio test = 47.19 on 3 df, log(p)=-21.87`. The LRT is higher and log(p) is lower, meaning this is likely a better fitting model.
# + id="jL8JzzhIsgyQ" colab_type="code" outputId="a8fc4df7-d7e1-4ca2-a31e-fff221445b84" colab={"base_uri": "https://localhost:8080/", "height": 378}
cph.plot()
#model fit likelihood drastically went up after removing sex
#which was violating assumptions as per
#the effect of treatment looks worse
# + id="Py4DYu_XsgyU" colab_type="code" outputId="96a4ce6f-afdd-4eb7-e071-bc69a8eff7ff" colab={"base_uri": "https://localhost:8080/", "height": 1367}
cph.compute_residuals(leukemia, kind='score')
# + id="LMN1uGnQsgyY" colab_type="code" outputId="de3b9044-119a-4c85-b770-0b55713b6155" colab={"base_uri": "https://localhost:8080/", "height": 802}
cph.predict_cumulative_hazard(leukemia[:5])
# + id="bs_Npp6osgyc" colab_type="code" outputId="8548bbfa-1576-4bea-d612-46b0ab86d7e6" colab={"base_uri": "https://localhost:8080/", "height": 536}
surv_func = cph.predict_survival_function(leukemia[:5])
exp_lifetime = cph.predict_expectation(leukemia[:5])
plt.plot(surv_func)
exp_lifetime
# + id="z7NNzQ_5sgyh" colab_type="code" outputId="e0767338-21d5-4470-da08-70e1a30cb9f1" colab={"base_uri": "https://localhost:8080/", "height": 195}
# lifelines comes with some datasets to get you started playing around with it
# The Rossi dataset originally comes from Rossi et al. (1980),
# and is used as an example in Allison (1995).
# The data pertain to 432 convicts who were released from Maryland state prisons
# in the 1970s and who were followed up for one year after release. Half the
# released convicts were assigned at random to an experimental treatment in
# which they were given financial aid; half did not receive aid.
from lifelines.datasets import load_rossi
recidivism = load_rossi()
recidivism.head()
# Looking at the Rossi dataset, how long do you think the study lasted?
# All features are coded with numerical values, but which features do you think
# are actually categorical?
# + id="1VjqAV3ksgyk" colab_type="code" outputId="fd682c6e-d613-44d8-f3cf-1340bdfc2764" colab={"base_uri": "https://localhost:8080/", "height": 260}
recidivism.info()
# + id="LVq_ZOFgsgyq" colab_type="code" outputId="2f893d4a-d2a2-462c-85b2-84c6c56c2da4" colab={"base_uri": "https://localhost:8080/", "height": 300}
recidivism.describe()
# + [markdown] id="Wg1s1bpJsgyt" colab_type="text"
# ### These are the "lifelines" of the study participants as they attempt to avoid recidivism
# + id="XqhPJfltsgyv" colab_type="code" outputId="c6fa9349-34fe-4c0c-cb24-d7e02fdb24d5" colab={"base_uri": "https://localhost:8080/", "height": 376}
recidivism_sample = recidivism.sample(n=25)
duration = recidivism_sample.week.values
arrested = recidivism_sample.arrest.values
ax = lifelines.plotting.plot_lifetimes(duration, event_observed=arrested)
ax.set_xlim(0, 78)
ax.grid(axis='x')
ax.vlines(52, 0, 25, lw=2, linestyles='--')
ax.set_xlabel("Time in Weeks")
ax.set_title("Recidivism Rates");
plt.plot();
# + id="lz4mF5kHsgy1" colab_type="code" outputId="4a0151be-2fbc-4748-db10-da7a3c4acf94" colab={"base_uri": "https://localhost:8080/", "height": 34}
kmf = lifelines.KaplanMeierFitter()
duration = recidivism.week
arrested = recidivism.arrest
kmf.fit(duration, arrested)
# + id="RA1FMgDNsgy4" colab_type="code" outputId="7393ed69-ecca-4961-a60c-f4c3e7123f40" colab={"base_uri": "https://localhost:8080/", "height": 390}
kmf.survival_function_.plot()
plt.title('Survival Curve:\nRecidivism of Recently Released Prisoners');
# + id="PXveA6jesgy8" colab_type="code" outputId="1171a85f-1fdb-4d79-a742-63c444abb7be" colab={"base_uri": "https://localhost:8080/", "height": 376}
kmf.plot()
plt.title('Survival Function of Recidivism Data');
# + id="Mt3ucIbgsgzB" colab_type="code" outputId="f92e392f-c9d9-466b-a212-db6c4344f1d1" colab={"base_uri": "https://localhost:8080/", "height": 34}
print(f'Median time before recidivism: {kmf.median_} weeks')
# + id="c9e5lAV7sgzH" colab_type="code" outputId="3c5489df-d99b-4e8d-8246-4193603365a3" colab={"base_uri": "https://localhost:8080/", "height": 376}
kmf_w_aid = lifelines.KaplanMeierFitter()
kmf_no_aid = lifelines.KaplanMeierFitter()
ax = plt.subplot(111)
w_aid = (recidivism['fin']==1)
t = np.linspace(0, 70, 71)
kmf_w_aid.fit(duration[w_aid], event_observed=arrested[w_aid], timeline=t, label="Received Financial Aid")
ax = kmf_w_aid.plot(ax=ax)
#print("Median survival time of democratic:", kmf.median_)
kmf_no_aid.fit(duration[~w_aid], event_observed=arrested[~w_aid], timeline=t, label="No Financial Aid")
ax = kmf_no_aid.plot(ax=ax)
#print("Median survival time of non-democratic:", kmf.median_)
plt.ylim(.5,1)
plt.title("Recidivism for Participants Who Received Financial Aid \vs. Those Who Did Not");
# + id="zdM4GOAOsgzN" colab_type="code" outputId="a85fa30c-4695-4e93-ffa0-a15f84e733e8" colab={"base_uri": "https://localhost:8080/", "height": 500}
naf = lifelines.NelsonAalenFitter()
naf.fit(duration, arrested)
print(naf.cumulative_hazard_.head())
naf.plot()
# + id="uJVPVtmYsgzR" colab_type="code" outputId="e11867c7-9a27-4c1d-833c-d07e23ffcbd4" colab={"base_uri": "https://localhost:8080/", "height": 403}
naf_w_aid = lifelines.NelsonAalenFitter()
naf_no_aid = lifelines.NelsonAalenFitter()
naf_w_aid.fit(duration[w_aid], event_observed=arrested[w_aid], timeline=t, label="Received Financial Aid")
ax = naf_w_aid.plot(loc=slice(0, 50))
naf_no_aid.fit(duration[~w_aid], event_observed=arrested[~w_aid], timeline=t, label="No Financial Aid")
ax = naf_no_aid.plot(ax=ax, loc=slice(0, 50))
plt.title("Recidivism Cumulative Hazard\nfor Participants Who Received Financial Aid \nvs. Those Who Did Not");
plt.show()
# + id="5KbuDkxjsgzV" colab_type="code" outputId="986091a0-00af-4071-bb5a-2e3eefa81544" colab={"base_uri": "https://localhost:8080/", "height": 503}
cph = lifelines.CoxPHFitter()
cph.fit(recidivism, duration_col='week', event_col='arrest', show_progress=True)
cph.print_summary()
# + id="QoWVH0XDsgzX" colab_type="code" outputId="c4635a1e-63e3-41ad-9d85-e6a2881d932f" colab={"base_uri": "https://localhost:8080/", "height": 378}
cph.plot()
# + id="r8qPQeKVsgza" colab_type="code" outputId="7fb32283-e088-43fe-d690-80f0a04e0db7" colab={"base_uri": "https://localhost:8080/", "height": 378}
cph.plot_covariate_groups('fin', [0, 1])
# + id="PPSlBmp-sgzc" colab_type="code" outputId="2c8aeb86-0152-41ea-8321-a6f858f20f1d" colab={"base_uri": "https://localhost:8080/", "height": 378}
cph.plot_covariate_groups('prio', [0, 5, 10, 15])
# + id="8H4IMry_sgzf" colab_type="code" outputId="b7e4d43f-7d3b-4844-e7bc-3ebe2976342e" colab={"base_uri": "https://localhost:8080/", "height": 206}
r = cph.compute_residuals(recidivism, 'martingale')
r.head()
# + id="4gi-OTLMsgzs" colab_type="code" outputId="60c1ad75-7a54-4520-8171-3802f8cb7e57" colab={"base_uri": "https://localhost:8080/", "height": 503}
cph = lifelines.CoxPHFitter()
cph.fit(recidivism, duration_col='week', event_col='arrest', show_progress=True)
cph.print_summary()
# + id="ZNAc3sAVsgzv" colab_type="code" outputId="5b2f5264-ba3e-42d8-9f88-34f9186ff19e" colab={"base_uri": "https://localhost:8080/", "height": 361}
cph.plot();
# + id="jSWSmL8nsgzx" colab_type="code" outputId="93a2aeff-30aa-4fa0-9356-87c4a9a17419" colab={"base_uri": "https://localhost:8080/", "height": 361}
cph.plot_covariate_groups('prio', [0, 5, 10, 15]);
# + id="iODFHl9Usgzz" colab_type="code" outputId="130437c6-b2b1-473f-f7a3-165bf7379f9b" colab={"base_uri": "https://localhost:8080/", "height": 1579}
cph.check_assumptions(recidivism)
# + [markdown] id="XY-yJuKcsgz1" colab_type="text"
# ## The Intuition - Hazard and Survival Functions
#
# ### Hazard Function - the dangerous bathtub
#
# The hazard function represents the *instantaneous* likelihood of failure. It can be treated as a PDF (probability density function), and with real-world data comes in three typical shapes.
#
# 
#
# Increasing and decreasing failure rate are fairly intuitive - the "bathtub" shaped is perhaps the most surprising, but actually models many real-world situations. In fact, life expectancy in general, and most threats to it, assume this shape.
#
# What the "bathtub" means is that - threats are highest at youth (e.g. infant mortality), but then decrease and stabilize at maturity, only to eventually re-emerge in old age. Many diseases primarily threaten children and elderly, and middle aged people are also more robust to physical trauma.
#
# The "bathtub" is also suitable for many non-human situations - often with reliability analysis, mechanical parts either fail early (due to manufacturing defects), or they survive and have a relatively long lifetime to eventually fail out of age and use.
#
# ### Survival Function (aka reliability function) - it's just a (backwards) CDF
#
# Since the hazard function can be treated as a probability density function, it makes sense to think about the corresponding cumulative distribution function (CDF). But because we're modeling time to failure, it's actually more interesting to look at the CDF backwards - this is called the complementary cumulative distribution function.
#
# In survival analysis there's a special name for it - the survival function - and it gives the probability that the object being studied will survive beyond a given time.
#
# 
#
# As you can see they all start at 1 for time 0 - at the beginning, all things are alive. Then they all move down over time to eventually approach and converge to 0. The different shapes reflect the average/expected retention of a population subject to this function over time, and as such this is a particularly useful visualization when modeling overall retention/churn situations.
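#
# The hazard and survival views are tied together: the survival function is obtained by accumulating hazard over time,
#
# $$S(t) = \exp\left(-\int_0^t h(u) \, du\right)$$
#
# so, for example, a constant hazard produces exponential decay in the survival curve.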
#
# ### Ways to estimate/model survival analysis - terms to be aware of
# Key Components Necessary for these models - duration, and whether observation is censored.
#
# - Kaplan-Meier Estimator
# - Nelson-Aalen Estimator
# - Proportional Hazards (Cox Model, integrates covariates)
# - Additive Hazards Model (Aalen's Additive Model, when covariates are time-dependent)
#
# As with most statistics, these are all refinements of the general principles, with the math to back them up. Software packages will tend to select reasonable defaults, and allow you to use parameters to tune or select things. The math for these gets varied and deep - but feel free to [dive in](https://en.wikipedia.org/wiki/Survival_analysis) if you're curious!
# + id="y4MzWpoxInZP" colab_type="code" colab={}
#survival 3 - flattens out at high percent after short time passes
#example: website recidivism by minute
#survival 4 - flat until a breakout event where survival deflates quickly
#example: disease outbreaks
# + [markdown] id="Su5WdRa139yW" colab_type="text"
# ## Live! Let's try modeling heart attack survival
#
# https://archive.ics.uci.edu/ml/datasets/echocardiogram
# + id="4GcRG1ussgz1" colab_type="code" colab={}
# TODO - Live! (As time permits)
# + [markdown] id="8tMyVxHRa3cQ" colab_type="text"
# # Assignment - Customer Churn
#
# Treselle Systems, a data consulting service, [analyzed customer churn data using logistic regression](http://www.treselle.com/blog/customer-churn-logistic-regression-with-r/). For simply modeling whether or not a customer left this can work, but if we want to model the actual tenure of a customer, survival analysis is more appropriate.
#
# The "tenure" feature represents the duration that a given customer has been with them, and "churn" represents whether or not that customer left (i.e. the "event", from a survival analysis perspective). So, any situation where churn is "no" means that a customer is still active, and so from a survival analysis perspective the observation is censored (we have their tenure up to now, but we don't know their *true* duration until event).
#
# Your assignment is to [use their data](https://github.com/treselle-systems/customer_churn_analysis) to fit a survival model, and answer the following questions:
#
# - What features best model customer churn?
# - What would you characterize as the "warning signs" that a customer may discontinue service?
# - What actions would you recommend to this business to try to improve their customer retention?
#
# Please create at least *3* plots or visualizations to support your findings, and in general write your summary/results targeting an "interested layperson" (e.g. your hypothetical business manager) as your audience.
#
# This means that, as is often the case in data science, there isn't a single objective right answer - your goal is to *support* your answer, whatever it is, with data and reasoning.
#
# Good luck!
# + id="OIfMQKnSy8Zl" colab_type="code" outputId="87f6401d-352f-437d-eda8-c4021573d29c" colab={"base_uri": "https://localhost:8080/", "height": 342}
# Loading the data to get you started
churn_data = pd.read_csv(
'https://raw.githubusercontent.com/treselle-systems/'
'customer_churn_analysis/master/WA_Fn-UseC_-Telco-Customer-Churn.csv')
churn_data.head()
# + id="lmGBY5fX0bmu" colab_type="code" outputId="6b57713b-fcc2-4879-a213-30da3fa3d700" colab={"base_uri": "https://localhost:8080/", "height": 469}
churn_data.info() # A lot of these are "object" - some may need to be fixed...
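# One possible first step for the assignment (a sketch, not a full solution): lifelines needs a numeric duration and a 0/1 event flag, so `tenure` can serve as the duration and the churn column as the observed-event indicator. The exact column capitalization (`Churn` vs. `churn`) is an assumption here; check it against `churn_data.columns`.
# +
# Sketch: engineer the duration/event columns and fit a first Kaplan-Meier curve.
surv = churn_data.copy()
surv['event'] = (surv['Churn'] == 'Yes').astype(int)  # 1 = customer left, i.e. death event observed
surv['duration'] = surv['tenure']                     # months the customer has been with the company

kmf_churn = lifelines.KaplanMeierFitter()
kmf_churn.fit(surv['duration'], event_observed=surv['event'])
kmf_churn.survival_function_.plot()
plt.title('Customer Retention (sketch)');
# -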
# + [markdown] id="Vaqh-1Dqa4hz" colab_type="text"
# # Resources and stretch goals
#
# Resources:
# - [Wikipedia on Survival analysis](https://en.wikipedia.org/wiki/Survival_analysis)
# - [Wikipedia on Survival functions](https://en.wikipedia.org/wiki/Survival_function)
# - [Summary of survival analysis by a biostatistician](http://sphweb.bumc.bu.edu/otlt/MPH-Modules/BS/BS704_Survival/BS704_Survival_print.html)
# - [Another medical statistics article on survival analysis](https://www.sciencedirect.com/science/article/pii/S1756231716300639)
# - [Survival analysis using R lecture slides](http://www.stat.columbia.edu/~madigan/W2025/notes/survival.pdf)
#
# Stretch goals:
# - Make ~5 slides that summarize and deliver your findings, as if you were to present them in a business meeting
# - Revisit any of the data from the lecture material, and explore/dig deeper
# - Write your own Python functions to calculate a simple hazard or survival function, and try to generate and plot data with them
| module2-survival-analysis/LS_DS_232_Survival_Analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
#import datasets from sklearn library
from sklearn import datasets
data = datasets.load_iris()
#Import decision tree classification model and cross validation
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split, KFold
from sklearn.metrics import accuracy_score
from sklearn.metrics import matthews_corrcoef
# -
#Extract a holdout set at the very beginning
X_train_set, X_holdout, y_train_set, y_holdout = train_test_split(data.data, data.target,
stratify = data.target, random_state = 42, test_size = .20)
#Get input and output datasets values in X and Y variables
X = X_train_set
y = y_train_set
# +
#Initialize k-fold cross validation configurations
kf = KFold(n_splits=5)
scores = []
mcc_scores = []
dt = DecisionTreeClassifier(criterion='gini', max_depth = 2, \
min_samples_leaf = 0.10)
for train_index, test_index in kf.split(X):
#print("Train index: {0}, \nTest index: {1}".format(train_index, test_index))
X_train, X_test = X[train_index], X[test_index]
y_train, y_test = y[train_index], y[test_index]
dt.fit(X_train, y_train)
y_pred = dt.predict(X_test)
mcc_scores.append(matthews_corrcoef(y_test, y_pred))
    scores.append(dt.score(X_test, y_test))  # accuracy on the held-out fold, to match the "cross-validation scores" print below
print("\n" + ("*" * 100))
print("The cross-validation scores using custom method are \n{0}".format(scores))
print("The mcc scores are \n{0}".format(mcc_scores))
print("*" * 100)
# -
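# For comparison, scikit-learn's built-in `cross_val_score` runs the same split/fit/score loop in a single call; a minimal sketch using the same estimator and fold configuration:
# +
from sklearn.model_selection import cross_val_score

# Accuracy on each held-out fold, analogous to the custom loop above
builtin_scores = cross_val_score(
    DecisionTreeClassifier(criterion='gini', max_depth=2, min_samples_leaf=0.10),
    X, y, cv=KFold(n_splits=5))
print("The cross-validation scores using cross_val_score are \n{0}".format(builtin_scores))
# -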
| Dataset 1 - Heart Failure Prediction/Decision Tree Codes/.ipynb_checkpoints/test-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Try searching with time extent on new IOOS catalog
import requests, json
headers = {'Content-Type': 'application/xml'}
input='''
<csw:GetRecords xmlns:csw="http://www.opengis.net/cat/csw/2.0.2"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ogc="http://www.opengis.net/ogc"
xmlns:gml="http://www.opengis.net/gml" outputSchema="http://www.opengis.net/cat/csw/2.0.2"
outputFormat="application/xml" version="2.0.2" service="CSW" resultType="results"
maxRecords="1000"
xsi:schemaLocation="http://www.opengis.net/cat/csw/2.0.2 http://schemas.opengis.net/csw/2.0.2/CSW-discovery.xsd">
<csw:Query typeNames="csw:Record">
<csw:ElementSetName>full</csw:ElementSetName>
<csw:Constraint version="1.1.0">
<ogc:Filter>
<ogc:And>
<ogc:BBOX>
<ogc:PropertyName>ows:BoundingBox</ogc:PropertyName>
<gml:Envelope srsName="urn:ogc:def:crs:OGC:1.3:CRS84">
<gml:lowerCorner> -158.4 20.7</gml:lowerCorner>
<gml:upperCorner> -157.2 21.6</gml:upperCorner>
</gml:Envelope>
</ogc:BBOX>
<ogc:PropertyIsLessThanOrEqualTo>
<ogc:PropertyName>apiso:TempExtent_begin</ogc:PropertyName>
<ogc:Literal>2016-12-01T16:43:00Z</ogc:Literal>
</ogc:PropertyIsLessThanOrEqualTo>
<ogc:PropertyIsGreaterThanOrEqualTo>
<ogc:PropertyName>apiso:TempExtent_end</ogc:PropertyName>
<ogc:Literal>2014-12-01T16:43:00Z</ogc:Literal>
</ogc:PropertyIsGreaterThanOrEqualTo>
<ogc:PropertyIsLike wildCard="*" singleChar="?" escapeChar="\\">
<ogc:PropertyName>apiso:AnyText</ogc:PropertyName>
<ogc:Literal>*G1SST*</ogc:Literal>
</ogc:PropertyIsLike>
</ogc:And>
</ogc:Filter>
</csw:Constraint>
</csw:Query>
</csw:GetRecords>
''';
# ## First try old catalog
endpoint = 'http://www.ngdc.noaa.gov/geoportal/csw'
xml_string=requests.post(endpoint, data=input, headers=headers).text
print(xml_string[:2000])
# ## Now try new catalog
endpoint = 'https://dev-catalog.ioos.us/csw'
xml_string=requests.post(endpoint, data=input, headers=headers).text
print(xml_string[:2000])
# ## try new catalog without time range
input='''
<csw:GetRecords xmlns:csw="http://www.opengis.net/cat/csw/2.0.2"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ogc="http://www.opengis.net/ogc"
xmlns:gml="http://www.opengis.net/gml" outputSchema="http://www.opengis.net/cat/csw/2.0.2"
outputFormat="application/xml" version="2.0.2" service="CSW" resultType="results"
maxRecords="1000"
xsi:schemaLocation="http://www.opengis.net/cat/csw/2.0.2 http://schemas.opengis.net/csw/2.0.2/CSW-discovery.xsd">
<csw:Query typeNames="csw:Record">
<csw:ElementSetName>full</csw:ElementSetName>
<csw:Constraint version="1.1.0">
<ogc:Filter>
<ogc:And>
<ogc:BBOX>
<ogc:PropertyName>ows:BoundingBox</ogc:PropertyName>
<gml:Envelope srsName="urn:ogc:def:crs:OGC:1.3:CRS84">
<gml:lowerCorner> -158.4 20.7</gml:lowerCorner>
<gml:upperCorner> -157.2 21.6</gml:upperCorner>
</gml:Envelope>
</ogc:BBOX>
<ogc:PropertyIsLike wildCard="*" singleChar="?" escapeChar="\\">
<ogc:PropertyName>apiso:AnyText</ogc:PropertyName>
<ogc:Literal>*G1SST*</ogc:Literal>
</ogc:PropertyIsLike>
</ogc:And>
</ogc:Filter>
</csw:Constraint>
</csw:Query>
</csw:GetRecords>
''';
endpoint = 'https://dev-catalog.ioos.us/csw'
xml_string=requests.post(endpoint, data=input, headers=headers).text
print(xml_string[:2000])
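# A quick sanity check on either response (a sketch, assuming `xml_string` holds the last reply) is to parse the XML and read the counts reported on the `csw:SearchResults` element:
# +
import xml.etree.ElementTree as ET

CSW_NS = '{http://www.opengis.net/cat/csw/2.0.2}'
root = ET.fromstring(xml_string.encode('utf-8'))  # encode first: ElementTree rejects unicode with an XML encoding declaration
results = root.find('.//{0}SearchResults'.format(CSW_NS))
if results is not None:
    print('Records matched: {0}, returned: {1}'.format(
        results.get('numberOfRecordsMatched'), results.get('numberOfRecordsReturned')))
else:
    print('No csw:SearchResults element in the response (the request may have failed)')
# -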
| CSW/.ipynb_checkpoints/data.ioos.us_time_extent-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.0 64-bit (''anaconda3'': virtualenv)'
# language: python
# name: python37064bitanaconda3virtualenvdc59f3b7c1d64353bf7dcc6d7e32f36c
# ---
# +
# Import Modules
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
# %matplotlib inline
plt.style.use('dark_background')
# %load_ext autoreload
# %autoreload 2
pd.set_option('display.min_row', 25)
pd.set_option('display.max_column', 100)
pd.set_option('display.max_colwidth', 300)
# -
# ### Possible additional datasets:
#
# - Current Land Use Zoning Detail: https://data.seattle.gov/Land-Base/Current-Land-Use-Zoning-Detail/9nvb-wk9b
# - *Bike Rack Locations* (Contains lat&long): https://data.seattle.gov/Land-Base/Bike-Racks/pbej-cxb2
# - *Marked Crosswalks* (Contains various, including at signal or stop sign): https://data.seattle.gov/Land-Base/Marked-Crosswalks/dx75-5pzj
# - Seattle Light Poles Location (Contains lat&long): https://data.seattle.gov/Land-Base/Seattle-City-Light-Poles/f4y8-37gx
# - Traffic Signals (DOES NOT HAVE LAT&LONG) Contains if there is a bike signal or not): https://data.seattle.gov/Land-Base/Traffic-Signals/s63a-bkj8
# - *Traffic Circles* (Contains lat&long): https://data.seattle.gov/Land-Base/Traffic-Circles/hw9f-j7b8
# - *Radar Speed Signs* (Contains lat&long): https://data.seattle.gov/Land-Base/Radar-Speed-Signs/siht-4gsh
#
# ##### Unrelated:
# - Areaways: https://data.seattle.gov/Land-Base/Areaways/nmja-kgz6
# ## Import in all the datasets
streets = pd.read_csv('../data/Seattle_Streets.csv')
# Load data and set Datetime column
collisions = pd.read_csv('../data/Collisions.csv',
parse_dates={'Datetime': ['INCDTTM']},
infer_datetime_format=True)
# set datetime as index
collisions = collisions.set_index('Datetime').sort_index()
collisions.head()
collisions.describe()
# #### Bike Racks
bikeracks = pd.read_csv('../data/Bike_Racks.csv')
bikeracks.describe()
# remove duplicate locations (duplicate latitude/longitude pairs)
bikeracks = bikeracks[~bikeracks[['SHAPE_LAT', 'SHAPE_LNG']].duplicated()]
# #### Radar
radar = pd.read_csv('../data/Radar_Speed_Signs.csv')
radar.describe()
# no duplicate locations
radar[radar[['SHAPE_LAT', 'SHAPE_LNG']].duplicated()]
# #### Traffic Circles
traf_circles = pd.read_csv('../data/Traffic_Circles.csv')
traf_circles.describe()
# remove duplicate locations (duplicate latitude/longitude pairs)
traf_circles = traf_circles[~traf_circles[['SHAPE_LAT', 'SHAPE_LNG']].duplicated()]
# #### Marked Crosswalks
marked_cross = pd.read_csv('../data/Marked_Crosswalks.csv')
marked_cross.describe()
# remove duplicate locations (duplicate latitude/longitude pairs)
marked_cross = marked_cross[~marked_cross[['SHAPE_LAT', 'SHAPE_LNG']].duplicated()]
# ## Calculate Distance between 2 coordinates
import geopy.distance
# Testing out a pair of coordinates
import timeit
# +
coords_1 = (marked_cross['SHAPE_LAT'][0], marked_cross['SHAPE_LNG'][0])
coords_2 = (collisions['Y'][0], collisions['X'][0])
# %timeit geopy.distance.distance(coords_1, coords_2).miles
# -
# %timeit abs(coords_1[0] - coords_2[0])
from math import acos, cos, radians, sin
# %timeit 3959 * acos(cos(radians(coords_1[0])) * cos(radians(coords_2[0])) * cos(radians(coords_2[1]) - radians(coords_1[1])) + sin(radians(coords_1[0])) * sin(radians(coords_2[0])))
# ### Finding Nearest Point (in miles)
# +
# import in library to help display progress bar of iterables
from tqdm import tqdm
getattr(tqdm, '_instances', {}).clear()
# -
# make new dataset and get rid of nan values in collisions
new_collisions = collisions.copy()
# temporarily fill missing latitude/longitude values with 0 to avoid errors when calculating distances
new_collisions['Y'].fillna(value=0, inplace=True)
new_collisions['X'].fillna(value=0, inplace=True)
# For each dataset, create a new column with the latitude and longitude value as a tuple since it is the input type for the geopy.distance.distance
new_collisions['latlon_tup'] = list(zip(new_collisions['Y'], new_collisions['X']))
marked_cross['latlon_tup'] = list(zip(marked_cross['SHAPE_LAT'], marked_cross['SHAPE_LNG']))
radar['latlon_tup'] = list(zip(radar['SHAPE_LAT'], radar['SHAPE_LNG']))
bikeracks['latlon_tup'] = list(zip(bikeracks['SHAPE_LAT'], bikeracks['SHAPE_LNG']))
traf_circles['latlon_tup'] = list(zip(traf_circles['SHAPE_LAT'], traf_circles['SHAPE_LNG']))
# Define a function that calculates the distance from each collision to every location of a given feature and keeps the closest one.
#
# NOTE: As you can see in a later cell, it takes about an hour to run through everything and find the nearest radar sign to each collision. For now that will do, because the calculated distances are added onto a new copy of the collisions dataset and saved as a csv.
#
# WORK ON: finding a more optimal function, because the other additional datasets have significantly more rows than the radar data. It will not be a fun time running it all; see the sketch below for one faster alternative.
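# One possible speed-up (a sketch, not the approach used below): build a `BallTree` with the haversine metric from scikit-learn, so every collision is matched to its nearest feature location in one vectorised query instead of a Python loop per row. Coordinates go in as radians, and the returned distances (also radians) are multiplied by the Earth's radius in miles.
# +
from sklearn.neighbors import BallTree

EARTH_RADIUS_MILES = 3959

def nearest_feature_distance(collision_latlon, feature_latlon):
    """Distance in miles from each collision to the nearest feature location."""
    feature_rad = np.radians(np.array(list(feature_latlon)))
    collision_rad = np.radians(np.array(list(collision_latlon)))
    tree = BallTree(feature_rad, metric='haversine')
    dist, _ = tree.query(collision_rad, k=1)  # haversine distances in radians
    return dist[:, 0] * EARTH_RADIUS_MILES

# e.g. nearest_feature_distance(new_collisions['latlon_tup'], radar['latlon_tup'])
# (rows with the temporary (0, 0) placeholder would still need to be masked afterwards)
# -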
# +
# # Way too slow- takes hours
# def calc_latlon(collisions_latlon, feature_latlon):
# '''
# Input:
# collisions_latlon: Pd.Series of latlon tuples
# feature_latlon: Pd.Series of latlon tuples
# Returns:
# new column of nearest distance for feature
# '''
# dist_vals = []
# for i in tqdm(collisions_latlon):
# if i == (0,0):
# dist_vals.append(0)
# else:
# smallest = 100
# for x in feature_latlon:
# temp = geopy.distance.distance(i, x).miles
# if temp < smallest:
# smallest = temp
# dist_vals.append(smallest)
# return dist_vals
# +
## Faster but not accurate
# def calc_latlon_v2(collisions_latlon, feature_latlon):
# '''
# Input:
# collisions_latlon: Pd.Series of latlon tuples
# feature_latlon: Pd.Series of latlon tuples
# Returns:
# new column of nearest distance for feature
# '''
# dist_vals = []
# for i in tqdm(collisions_latlon):
# if i == (0,0):
# dist_vals.append(0)
# else:
# lat_diff = 1
# lng_diff = 1
# for x in feature_latlon:
# if (abs(i[0] - x[0]) < lat_diff) & (abs(i[1] - x[1]) < lng_diff):
# coords = x
# temp = geopy.distance.distance(i, coords).miles
# dist_vals.append(temp)
# return dist_vals
# -
def calc_latlon_v3(collisions_latlon, feature_latlon):
'''
Input:
collisions_latlon: Pd.Series of latlon tuples
feature_latlon: Pd.Series of latlon tuples
Returns:
new column of nearest distance for feature
'''
dist_vals = []
for i in tqdm(collisions_latlon):
if i == (0,0):
dist_vals.append(0)
else:
smallest = 100
for x in feature_latlon:
try:
temp = 3959 * acos(cos(radians(i[0])) * cos(radians(x[0])) * cos(radians(x[1]) - radians(i[1])) + sin(radians(i[0])) * sin(radians(x[0])))
except:
continue
if temp < smallest:
smallest = temp
dist_vals.append(smallest)
return dist_vals
bikerack_dist = calc_latlon_v3(new_collisions['latlon_tup'], bikeracks['latlon_tup'])
radar_dist = calc_latlon_v3(new_collisions['latlon_tup'], radar['latlon_tup'])
t_circles_dist = calc_latlon_v3(new_collisions['latlon_tup'], traf_circles['latlon_tup'])
marked_cross_dist = calc_latlon_v3(new_collisions['latlon_tup'], marked_cross['latlon_tup'])
new_collisions['bikerack_dist'] = bikerack_dist
new_collisions['radar_dist'] = radar_dist
new_collisions['t_circles_dist'] = t_circles_dist
new_collisions['marked_cross_dist'] = marked_cross_dist
new_collisions.head()
new_collisions.to_csv('../data/feateng_collisions.csv')
new_collisions.describe()
# Some simple EDA to see if there are big differences in the distance to the nearest radar sign across crash severities. So far, it does not seem like there are.
person_count = new_collisions[['PEDCOUNT', 'PERSONCOUNT', 'VEHCOUNT', 'SEVERITYCODE', 'SPEEDING', 'radar_dist']]
person_count.groupby('SEVERITYCODE').mean()
# Simple EDA of whether the crash involved speeding versus the average distance to the nearest radar sign. The thought was that if there is a radar nearby, speeding would be less likely. This will need more exploration as well.
speed_radar = new_collisions[['ADDRTYPE', 'SPEEDING', 'radar_dist']]
speed_radar.head()
speed_radar.groupby('SPEEDING').mean()
# average radar distance with no speeding recorded (NaN values for speeding)
speed_radar[speed_radar['SPEEDING'] != 'Y']['radar_dist'].mean()
speed_radar.groupby('ADDRTYPE').mean()
# ### Quick correlation heatmap
# +
fig, ax = plt.subplots(figsize=(15,10))
ax = sns.heatmap(new_collisions.corr(), annot=True)
# -
| notebooks/cindy_feature_eng.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import datetime
dataset = pd.read_csv('Reliance_Stock_Price_Train.csv',index_col="Date",parse_dates=True)
dataset.head()
dataset.isna().any()
dataset.info()
dataset['Open'].plot(figsize=(16,6))
# +
dataset["Close"] = dataset["Close"].astype(float)
# -
dataset["Volume"] = dataset["Volume"].astype(float)
# +
dataset.rolling(7).mean().head(20)
# -
dataset['Open'].plot(figsize=(16,6))
dataset.rolling(window=30).mean()['Close'].plot()
dataset['Close: 30 Day Mean'] = dataset['Close'].rolling(window=30).mean()
dataset[['Close','Close: 30 Day Mean']].plot(figsize=(16,6))
# +
dataset['Close'].expanding(min_periods=1).mean().plot(figsize=(16,6))
# -
training_set=dataset['Open']
training_set=pd.DataFrame(training_set)
# Feature Scaling
from sklearn.preprocessing import MinMaxScaler
sc = MinMaxScaler(feature_range = (0, 1))
training_set_scaled = sc.fit_transform(training_set)
print(training_set_scaled)
# +
# Creating a data structure with 60 timesteps and 1 output
X_train = []
y_train = []
for i in range(60, len(training_set_scaled)):  # slide a 60-step window over the scaled training data
X_train.append(training_set_scaled[i-60:i, 0])
y_train.append(training_set_scaled[i, 0])
X_train, y_train = np.array(X_train), np.array(y_train)
# Reshaping
X_train = np.reshape(X_train, (X_train.shape[0], X_train.shape[1], 1))
type(X_train)
print(X_train)
# +
# Part 2 - Building the RNN
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers import Dropout
from keras.utils.vis_utils import plot_model
# +
# Initialising the RNN
regressor = Sequential()
# +
# Adding the first LSTM layer and some Dropout regularisation
regressor.add(LSTM(units = 50, return_sequences = True, input_shape = (X_train.shape[1], 1)))
regressor.add(Dropout(0.2))
# Adding a second LSTM layer and some Dropout regularisation
regressor.add(LSTM(units = 50, return_sequences = True))
regressor.add(Dropout(0.2))
# Adding a third LSTM layer and some Dropout regularisation
regressor.add(LSTM(units = 50, return_sequences = True))
regressor.add(Dropout(0.2))
# Adding a fourth LSTM layer and some Dropout regularisation
regressor.add(LSTM(units = 50))
regressor.add(Dropout(0.2))
# Adding the output layer
regressor.add(Dense(units = 1))
# -
# +
# Compiling the RNN
regressor.compile(optimizer = 'adam', loss = 'mean_squared_error')
# -
print(regressor.summary())
# Fitting the RNN to the Training set
regressor.fit(X_train, y_train, epochs = 100, batch_size = 32)
plot_model(regressor, to_file='model_plot.png', show_shapes=True, show_layer_names=True)
# +
dataset_test = pd.read_csv('Reliance__Stock_Price_Test.csv',index_col="Date",parse_dates=True)
# -
real_stock_price = dataset_test.iloc[:, 1:2].values
dataset_test.head()
dataset_test.info()
dataset_test["Volume"] = dataset_test["Volume"].astype(float)
test_set=dataset_test['Openp']
test_set=pd.DataFrame(test_set)
test_set.info()
# +
dataset_total = pd.concat((dataset['Open'], dataset_test['Openp']), axis = 0)
inputs = dataset_total[len(dataset_total) - len(dataset_test) - 60:].values
inputs = inputs.reshape(-1,1)
inputs = sc.transform(inputs)
X_test = []
for i in range(60, 80):
X_test.append(inputs[i-60:i, 0])
X_test = np.array(X_test)
X_test = np.reshape(X_test, (X_test.shape[0], X_test.shape[1], 1))
predicted_stock_price = regressor.predict(X_test)
predicted_stock_price = sc.inverse_transform(predicted_stock_price)
# -
predicted_stock_price=pd.DataFrame(predicted_stock_price)
predicted_stock_price.info()
# +
# Visualising the results
plt.plot(real_stock_price, color = 'red', label = ' Reliance Stock Price real ')
plt.plot(predicted_stock_price, color = 'blue', label = ' Reliance Stock Price predicted')
plt.title('Reliance Stock Price Prediction')
plt.xlabel('Time')
plt.ylabel('Reliance Stock Price')
plt.legend()
plt.show()
# -
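# A quick numeric check to go with the plot (a sketch, assuming `real_stock_price` and `predicted_stock_price` above cover the same test days):
# +
from sklearn.metrics import mean_squared_error
rmse = np.sqrt(mean_squared_error(real_stock_price, predicted_stock_price))
print('Test RMSE: {:.2f}'.format(rmse))
# -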
| Stock Market RIL .ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# <a href='http://www.holoviews.org'><img src="../../assets/pn_hv_gv_bk_ds_pa.png" alt="HoloViz logos" width="40%;" align="left"/></a>
# <div style="float:right;"><h2>Exercises 3-4: Plotting</h2></div>
# ### Exercise 3
#
# In this exercise we will explore hvplot some more which we will build on in Exercise 4 to create a custom linked visualization.
#
# #### Loading the data as before
#
# We will be building a new visualization based on the same data we have cleaned and filtered in the rest of the tutorial. First we load the `DataFrame` of the `>=7` earthquakes:
# +
import numpy as np # noqa
import xarray as xr
import dask.dataframe as dd
import holoviews as hv
from holoviews import streams # noqa
import hvplot.dask # noqa
import hvplot.pandas # noqa
import hvplot.xarray # noqa: adds hvplot method to xarray objects
df = dd.read_parquet('../../data/earthquakes.parq')
df.time = df.time.astype('datetime64[ns]')
cleaned_df = df.copy()
cleaned_df['mag'] = df.mag.where(df.mag > 0)
cleaned_reindexed_df = cleaned_df.set_index(cleaned_df.time)
cleaned_reindexed_df = cleaned_reindexed_df.persist()
most_severe = cleaned_reindexed_df[cleaned_reindexed_df.mag >= 7].compute()
# -
# And next we load the population density raster data:
ds = xr.open_dataarray('../../data/gpw_v4_population_density_rev11_2010_2pt5_min.nc')
cleaned_ds = ds.where(ds.values != ds.nodatavals).sel(band=1)
cleaned_ds.name = 'population'
# #### Visualizing the raster data
#
# Start by using `hvplot.image` to visualize the whole of the population density data using the Datashader support in hvPlot. Grab a handle on this `Image` HoloViews object called `pop_density` and customize it using the `.opts` method, enabling a logarithmic color scale and a blue colormap (specifically the `'Blues'` colormap). At the end of the cell, display this object.
#
# <br><details><summary>Hint</summary><br>
#
# Don't forget to include `rasterize=True` in the `hvplot.image` call. You can use `logz=True`, `clim=(1, np.nan)` and `cmap='Blues'` in the `.opts` method call
#
# </details>
pop_density = ... # Use hvplot here to visualize the data in cleaned_ds and customize it with .opts
pop_density # Display it
# <details><summary>Solution</summary><br>
#
# ```python
# pop_density = cleaned_ds.hvplot.image(rasterize=True, logz=True, clim=(1, np.nan), cmap='Blues')
# pop_density
# ```
#
# <br></details>
# #### Visualizing the earthquake positions
#
# Now visualize the tabular data in `most_severe` by building a `hv.Points` object directly. This will be very similar to the approach shown in the tutorial but this time we want all the earthquakes to be marked with red crosses of size 100 (no need to use the `.opts` method this time). As above, get a handle on this object and display it where the handle is now called `quake_points`:
#
# <br><details><summary>Hint</summary><br>
#
# Don't forget to map the longitude and latitude dimensions to `x` and `y` in the call to `hvplot.points`.
#
# </details>
quake_points = ... # Use hvplot here to visualize the data in most_severe and customize it with .opts
quake_points
# <details><summary>Solution</summary><br>
#
# ```python
# quake_points = most_severe.hvplot.points(x='longitude', y='latitude', marker='+', size=100, color='red')
# quake_points
# ```
#
# <br></details>
# #### Enabling the `box_select` tool
#
# Now use `.opts` method to enable the Bokeh `box_select` tool on quake points.
#
# <br><details><summary>Hint</summary><br>
#
# The option is called `tools` and takes a list of tool names, in this case `'box_select'`.
#
# </details>
# <details><summary>Solution</summary><br>
#
# ```python
# quake_points.opts(tools=['box_select'])
# ```
#
# <br></details>
#
#
#
#
# #### Composing these visualizations together
#
# Now overlay `quake_points` over `pop_density` to check everything looks correct. Make sure the box select tool is working as expected.
# <details><summary>Solution</summary><br>
#
# ```python
# pop_density * quake_points
# ```
#
# <br></details>
# ### Exercise 4
#
# Using `Selection1D`, define a HoloViews stream called `selection_stream` using `quake_points` as a source.
selection_stream = ...
# <details><summary>Solution</summary><br>
#
# ```python
# selection_stream = streams.Selection1D(source=quake_points)
# ```
#
# <br></details>
#
# #### Highlighting earthquakes with circles
#
# Now we want to create a circle around *all* the selected points chosen by the box select tool, where each circle is centered at the latitude and longitude of a selected earthquake (10 degrees in diameter). A `hv.Ellipse` object can create a circle using the format `hv.Ellipse(x, y, diameter)` and we can build an overlay of circles from a list of `circles` using `hv.Overlay(circles)`. Using this information, complete the following callback for the `circle_marker` `DynamicMap`.
#
# <br><details><summary>Hint</summary><br>
#
# Each `circle` needs to be a `hv.Ellipse(longitude, latitude, 10)` where longitude and latitude correspond to the current earthquake row.
#
# </details>
# +
def circles_callback(index):
circles = []
if len(index) == 0:
return hv.Overlay([])
for i in index:
row = most_severe.iloc[i] # noqa
circle = ... # Define the appropriate Ellipse element here
circles.append(circle)
return hv.Overlay(circles)
# Uncomment when the above function is complete
# circle_marker = hv.DynamicMap(circles_callback, streams=[selection_stream])
# -
# <details><summary>Solution</summary><br>
#
# ```python
# def circles_callback(index):
# circles = []
# if len(index) == 0:
# return hv.Overlay([])
#
# for i in index:
# row = most_severe.iloc[i]
# circle = hv.Ellipse(row.longitude, row.latitude, 10)
# circles.append(circle)
#
# return hv.Overlay(circles)
#
# circle_marker = hv.DynamicMap(circles_callback, streams=[selection_stream])
# ```
#
# <br></details>
#
#
# Now test this works by overlaying `pop_density`, `quake_points`, and `circle_marker` together.
# <details><summary>Solution</summary><br>
#
# ```python
# pop_density * quake_points * circle_marker
# ```
# <br></details>
#
#
# #### Depth and magnitude scatter of selected earthquakes
#
# Now let us generate a scatter plot of depth against magnitude for the selected earthquakes. Define a `DynamicMap` called `depth_magnitude_scatter` that uses a callback called `depth_magnitude_callback`. This is a two-line function that returns a `Scatter` element generated by `.hvplot.scatter`.
#
# <br><details><summary>Hint</summary><br>
#
# The `index` argument of the callback can be passed straight to `most_severe.iloc` to get a filtered dataframe corresponding to the selected earthquakes.
#
# </details>
#
#
# <details><summary>Solution</summary><br>
#
# ```python
# def depth_magnitude_callback(index):
# selected = most_severe.iloc[index]
# return selected.hvplot.scatter(x='mag', y='depth')
#
# depth_magnitude_scatter = hv.DynamicMap(depth_magnitude_callback, streams=[selection_stream])
# ```
#
# <br></details>
# ### Final visualization
#
# Now overlay `pop_density`, `quake_points` and `circle_marker` and put this in a one column layout together with the linked plot, `depth_magnitude_scatter`.
# <details><summary>Solution</summary><br>
#
# ```python
# ((pop_density * quake_points * circle_marker) + depth_magnitude_scatter).cols(1)
# ```
#
# <br></details>
#
# <details><summary><b>Overall Solution</b></summary><br>
#
# After loading the data, the following (slightly cleaned up) version generates the whole visualization:
#
# ```python
# pop_density = cleaned_ds.hvplot.image(rasterize=True, logz=True, clim=(1, np.nan), cmap='Blues')
# quake_points = most_severe.hvplot.points(x='longitude', y='latitude', marker='+', size=100, color='red')
# quake_points.opts(tools=['box_select'])
#
# selection_stream = streams.Selection1D(source=quake_points)
#
# def circles_callback(index):
# circles = []
# if len(index) == 0:
# return hv.Overlay([])
# for i in index:
# row = most_severe.iloc[i]
# circle = hv.Ellipse(row.longitude, row.latitude, 10) # Define the appropriate Ellipse element here
# circles.append(circle)
#
# return hv.Overlay(circles)
#
# circle_marker = hv.DynamicMap(circles_callback, streams=[selection_stream])
#
# def depth_magnitude_callback(index):
# selected = most_severe.iloc[index]
# return selected.hvplot.scatter(x='mag', y='depth')
#
# depth_magnitude_scatter = hv.DynamicMap(depth_magnitude_callback, streams=[selection_stream])
#
# ((pop_density * quake_points * circle_marker) + depth_magnitude_scatter).cols(1)
# ```
#
# <br></details>
| examples/tutorial/exercises/Plotting.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %matplotlib inline
import collections
import copy
import glob
import json
import os
import re
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn
seaborn.set_style("whitegrid")
# -
base_dirpath = "/home/ubuntu/experiments/procgen_001"
EASY_GAME_RANGES = {
'coinrun': [0, 5, 10],
'starpilot': [0, 2.5, 64],
'caveflyer': [0, 3.5, 12],
'dodgeball': [0, 1.5, 19],
'fruitbot': [-12, -1.5, 32.4],
'chaser': [0, .5, 13],
'miner': [0, 1.5, 13],
'jumper': [0, 1, 10],
'leaper': [0, 1.5, 10],
'maze': [0, 5, 10],
'bigfish': [0, 1, 40],
'heist': [0, 3.5, 10],
'climber': [0, 2, 12.6],
'plunder': [0, 4.5, 30],
'ninja': [0, 3.5, 10],
'bossfight': [0, .5, 13],
'caterpillar': [0, 8.25, 24],
'gemjourney': [0, 1.1, 16],
'hovercraft': [0, 0.2, 18],
'safezone': [0, 0.2, 10],
}
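# Each entry is a [min score, blind-play score, max score] triple. For reference, a small sketch of the same min-max normalisation applied later in `with_normalized_results`:
# +
def normalize_return(env_name, raw_return):
    # 0.0 roughly corresponds to playing blind, 1.0 to the environment's max score
    _min_r, blind_r, max_r = EASY_GAME_RANGES[env_name]
    return (raw_return - blind_r) / (max_r - blind_r)

normalize_return('coinrun', 8.0)  # -> 0.6
# -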
# +
def recursive_underscore_join_list(lst):
result = ""
for i, item in enumerate(lst):
append = "_"
if isinstance(item, list):
item = recursive_underscore_join_list(item)
append = "__"
else:
item = str(item)
result += item
if i < len(lst) - 1:
result += append
return result
def flatten_string_key_dict(d):
result = dict()
for k, v in d.items():
if isinstance(v, dict):
v = flatten_string_key_dict(v)
for kk, vv in v.items():
flat_key = f"{k}_{kk}"
if isinstance(vv, list):
vv = recursive_underscore_join_list(vv)
result[flat_key] = vv
elif isinstance(v, list):
result[k] = recursive_underscore_join_list(v)
else:
result[k] = v
return result
def load_params_as_dataframe(filepath):
with open(filepath, "r") as infile:
params = json.load(infile)
params = flatten_string_key_dict(params)
return pd.DataFrame(params, index=[0])
def extract_relevant_info_from_row(row):
info = dict()
info["episode_reward_mean"] = row["episode_reward_mean"]
if "evaluation" in row:
info["evaluation_episode_reward_mean"] = row["evaluation"]["episode_reward_mean"]
return info
def load_results_as_dataframe(filepath):
data = []
with open(filepath, "r") as infile:
for row in infile:
parsed = json.loads(row)
parsed = extract_relevant_info_from_row(parsed)
data.append(parsed)
return pd.DataFrame(data)
class Run:
def __init__(self, params, results):
self.params = params
self.results = results
def as_row(self):
"""Returns a representation / summary of this run that can be used as a single row."""
row = self.params.copy()
if "evaluation_episode_reward_mean" in self.results:
row["evaluation_episode_reward_mean"] = list(self.results["evaluation_episode_reward_mean"].dropna())[-1]
return row
def __len__(self):
return len(self.results)
@property
def env(self):
return self.params["env_config_env_name"].values[0]
@classmethod
def from_filepaths(cls, params_filepath, results_filepath, min_num_iter=0):
assert os.path.exists(params_filepath)
assert os.path.exists(results_filepath)
params = load_params_as_dataframe(params_filepath)
results = load_results_as_dataframe(results_filepath)
if len(results) < min_num_iter:
return None
return cls(params, results)
def with_normalized_results(self):
norm_results = self.results.copy()
min_r, blind_r, max_r = EASY_GAME_RANGES[self.env]
norm_results = (norm_results - blind_r) / (max_r - blind_r)
return Run(self.params, norm_results)
# -
test_filepath = "/home/ubuntu/experiments/procgen_001/bigfish/bigfish_itr_0_cnn_lr_0.0005_filters_32_48_64_num_work_7_num_envs_per_125_rollout_len_16_minibatch_1750_rt_vf_loss_coeff_0.25_grad_clip_constant_5.0_1.0_intrins_reward_noop_10_phasic_32_2_simple_0_detach_False_w_data_aug_adapt_ent_0.9_0.02/custom_DataAugmentingPPOTrainer_custom_procgen_env_wrapper_0_2020-11-12_05-55-444gvfi1rl"
params_filepath = os.path.join(test_filepath, "params.json")
result_filepath = os.path.join(test_filepath, "result.json")
r = Run.from_filepaths(params_filepath, result_filepath)
print(r)
for kvp in r.params.items():
print(kvp[1])
runs = []
for root, dirs, filenames in os.walk(base_dirpath):
if "result.json" in filenames and "params.json" in filenames:
params_filepath = os.path.join(root, "params.json")
results_filepath = os.path.join(root, "result.json")
run = Run.from_filepaths(params_filepath, results_filepath)
if run is not None:
runs.append(run)
print(len(runs))
ENV_NAMES = list(set(r.env for r in runs))
print(ENV_NAMES)
# # Plot per-run performance
# +
def plot_run(run, ax, key, label):
y = run.results[key]
x = range(len(y))
ax.plot(x, y, alpha=0.8, label=label)
ax.set_title(run.env)
if label is not None:
ax.legend()
def plot_runs(runs, key="episode_reward_mean", label=None, make_figure=True):
if make_figure:
fig, axs = plt.subplots(4, 4, figsize=(16, 16))
else:
fig = plt.gcf()
axs = fig.get_axes()
axs = np.reshape(axs, (4, 4))
for run in runs:
ax_index = ENV_NAMES.index(run.env)
ax_row = ax_index // 4
ax_col = ax_index % 4
plot_run(run, axs[ax_row, ax_col], key, label=label)
# -
plot_runs(runs)
# # Plot aggregate performance
# +
class AggregateRun:
def __init__(self, runs, field, value):
self.runs = runs
self.field = field
self.value = value
def for_env(self, env):
return AggregateRun([r for r in self.runs if r.env == env], self.field, self.value)
def with_normalized_results(self):
return AggregateRun([r.with_normalized_results() for r in self.runs], self.field, self.value)
def __repr__(self):
return f"{self.field}: {self.value}"
def __len__(self):
return len(self.runs)
def aggregate_runs(runs, aggregation_key):
aggregate_runs = collections.defaultdict(list)
for run in runs:
if aggregation_key not in run.params:
aggregate_runs["missing"].append(run)
else:
key = tuple(str(v) for v in run.params[aggregation_key].values)
aggregate_runs[key].append(run)
aggregate_runs = [AggregateRun(v, aggregation_key, k) for (k, v) in aggregate_runs.items()]
return aggregate_runs
# -
aggregation_key = "aux_loss_num_sgd_iter"
agg_runs = aggregate_runs(runs, aggregation_key)
print(agg_runs)
# +
def plot_agg_run(agg_run, ax, key):
if len(agg_run) == 0:
return
min_len = min(len(r) for r in agg_run.runs)
values = [r.results[key][:min_len].dropna() for r in agg_run.runs]
if len(values) > 1:
mean = np.mean(values, axis=0)
values = np.array([np.array(v) for v in values])
stderr = np.std(values, axis=0) / np.sqrt(len(values))
ybelow = mean - stderr
yabove = mean + stderr
else:
mean = np.array(values).reshape(-1)
ybelow = None
yabove = None
x = range(len(mean))
ax.plot(x, mean, label=f"{agg_run.value}")
if ybelow is not None and yabove is not None:
ax.fill_between(x, y1=yabove, y2=ybelow, label=f"{agg_run.value}", alpha=0.5)
ax.set_title(agg_run.runs[0].env)
def plot_agg_runs(agg_runs, key="episode_reward_mean", average_envs=False):
if average_envs:
plt.figure(figsize=(8, 8))
ax = plt.gca()
for agg_run in agg_runs:
plot_agg_run(agg_run.with_normalized_results(), ax, key=key)
ax.legend()
plt.title(f"{key} averaged over envs")
else:
fig, axs = plt.subplots(4, 4, figsize=(16, 16))
for i, env_name in enumerate(ENV_NAMES):
ax_row = i // 4
ax_col = i % 4
ax = axs[ax_row, ax_col]
for agg_run in agg_runs:
plot_agg_run(agg_run.for_env(env_name), ax, key=key)
ax.legend()
fig.suptitle(key)
# -
# train: key="episode_reward_mean"
# eval: key="evaluation_episode_reward_mean"
plot_agg_runs(agg_runs, key="episode_reward_mean", average_envs=False)
plot_agg_runs(agg_runs, key="episode_reward_mean", average_envs=True)
plot_agg_runs(agg_runs, key="evaluation_episode_reward_mean", average_envs=False)
plot_agg_runs(agg_runs, key="evaluation_episode_reward_mean", average_envs=True)
# # Plot based on arbitrary subselect
# +
def run_matches_pairs(run, pairs):
for field, values in pairs.items():
if field not in run.params or run.params[field].iloc[0] not in values:
return False
return True
def subselect_runs(runs, pairs):
subselect = []
for run in runs:
if run_matches_pairs(run, pairs):
subselect.append(run)
return subselect
# +
baseline_pairs = {
"lr": [0.0005],
"model_custom_options_fc_size": [256],
"num_envs_per_worker": [64],
"grad_clip": [10],
"model_custom_options_fc_activation": ["relu"],
"env_config_env_wrapper_options_frame_stack_options_k": [2],
"model_custom_options_weight_init": ["default"],
}
pairs_label_list = [(baseline_pairs, "baseline")]
def copy_and_replace(pairs, pairs_to_replace):
ret = copy.deepcopy(pairs)
for (k,v) in pairs_to_replace.items():
ret[k] = v
return ret
pairs_label_list += [(copy_and_replace(baseline_pairs,
{"model_custom_options_fc_size": [512]}), "fc_size_512")]
pairs_label_list += [(copy_and_replace(baseline_pairs,
{"num_envs_per_worker": [128]}), "num_envs_128")]
pairs_label_list += [(copy_and_replace(baseline_pairs,
{"grad_clip": [1.0]}), "grad_clip_1.0")]
pairs_label_list += [(copy_and_replace(baseline_pairs,
{"model_custom_options_fc_activation": ["tanh"]}), "tanh")]
pairs_label_list += [(copy_and_replace(baseline_pairs,
{"env_config_env_wrapper_options_frame_stack_options_k": [3]}), "stack_3")]
pairs_label_list += [(copy_and_replace(baseline_pairs,
{"model_custom_options_weight_init": ["orthogonal"]}), "ortho_init")]
label_to_runs = {}
for (pairs, label) in pairs_label_list:
label_to_runs[label] = subselect_runs(runs, pairs)
print(f"label: {label}, num runs: {len(label_to_runs[label])}")
# -
for i, (label, runs_to_plot) in enumerate(label_to_runs.items()):
plot_runs(runs_to_plot, label=label, make_figure=i == 0)
# # Evaluation results
# +
def plot_run_eval(run, ax, count, label, normalize):
row = run.as_row()
if "evaluation_episode_reward_mean" not in row:
return 0
y = row["evaluation_episode_reward_mean"].iloc[0]
if normalize:
min_r, blind_r, max_r = EASY_GAME_RANGES[run.env]
y = (y - blind_r) / (max_r - blind_r)
x = count
ax.bar(x, y, width=0.5, alpha=0.7, label=f"{label}: {y:.4f}")
ax.set_title(run.env)
if label is not None:
ax.legend()
return 1
def plot_runs_eval(runs, ax_counts=None, label=None, make_figure=True, normalize=True):
if ax_counts is None:
ax_counts = collections.defaultdict(int)
if make_figure:
fig, axs = plt.subplots(4, 4, figsize=(16, 16))
else:
fig = plt.gcf()
axs = fig.get_axes()
axs = np.reshape(axs, (4, 4))
for run in runs:
ax_index = ENV_NAMES.index(run.env)
ax_row = ax_index // 4
ax_col = ax_index % 4
ax_count = ax_counts[ax_index]
ax_counts[ax_index] += plot_run_eval(run, axs[ax_row, ax_col], ax_count, label=label, normalize=normalize)
return ax_counts
# -
ax_counts = None
for i, (label, runs_to_plot) in enumerate(label_to_runs.items()):
ax_counts = plot_runs_eval(runs_to_plot, label=label, ax_counts=ax_counts, make_figure=i == 0)
def get_eval_scores(runs):
scores = []
for run in runs:
row = run.as_row()
if "evaluation_episode_reward_mean" not in row:
continue
y = row["evaluation_episode_reward_mean"].iloc[0]
min_r, blind_r, max_r = EASY_GAME_RANGES[run.env]
y = (y - blind_r) / (max_r - blind_r)
scores.append(y)
return scores
def hist_eval_scores(runs, label, make_figure=True):
if make_figure:
plt.figure(figsize=(12,6))
scores = get_eval_scores(runs)
plt.hist(scores, bins=5, alpha=0.4, width=0.025, label=f"{label}: {np.mean(scores):.4f}")
plt.legend()
plt.title("histogram of normalized evaluation scores")
for i, (label, runs_to_plot) in enumerate(label_to_runs.items()):
hist_eval_scores(runs_to_plot, label=label, make_figure=i == 0)
# `bs_1628_sub` and `baseline_sub` (batch-size subsets) are not defined in this notebook,
# so the two comparison calls are left commented out:
# hist_eval_scores(bs_1628_sub, label="batch_size_1628", make_figure=False)
# hist_eval_scores(baseline_sub, label="batch_size_814", make_figure=False)
# #### If we select the best run for each env, what's the normalized score?
best_scores = collections.defaultdict(float)
for run in runs:
row = run.as_row()
if "evaluation_episode_reward_mean" not in row:
continue
y = row["evaluation_episode_reward_mean"].iloc[0]
min_r, blind_r, max_r = EASY_GAME_RANGES[run.env]
y = (y - blind_r) / (max_r - blind_r)
best_scores[run.env] = max(best_scores[run.env], y)
scores = []
for env, score in best_scores.items():
scores.append(score)
print(f"{env}: {score:.2f}")
print(f"\nmean: {np.mean(scores):.2f}")
| notebooks/visualize_experiment_results.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: sig-mlops
# language: python
# name: sig-mlops
# ---
# # News Classifier
#
# Implementation of a news classifier model using Scikit Learn's Naive Bayes implementation.
# Since this model is implemented using Scikit Learn, we can deploy it using [one of Seldon's pre-built re-usable servers](https://docs.seldon.io/projects/seldon-core/en/latest/servers/sklearn.html).
# ## Training
#
# First we will train a machine learning model, which will help us classify news across multiple categories.
# ### Install dependencies
#
# We will need the following dependencies in order to run the Python code:
# +
# %%writefile ./src/requirements.txt
# You need the right versions for your model server:
# Model servers: https://docs.seldon.io/projects/seldon-core/en/latest/servers/overview.html
# For SKLearn you need a pickle and the following:
scikit-learn==0.20.3 # See https://docs.seldon.io/projects/seldon-core/en/latest/servers/sklearn.html
joblib==0.13.2
# For XGBoost you need v 0.82 and an xgboost export (not a pickle)
#xgboost==0.82
# For MLFlow you need the following, and a link to the built model:
#mlflow==1.1.0
#pandas==0.25
# For tensorflow, any models supported by tensorflow serving (less than v2.0)
# For testing
pytest==5.1.1
# -
# We can now install the dependencies using the make command:
# + language="bash"
# make install_dev
# -
# ### Download the ML data
#
# Now that we have all the dependencies we can proceed to download the data.
#
# We will download the news stories dataset, and we'll be attempting to classify across the four classes below.
# +
from sklearn.datasets import fetch_20newsgroups
categories = ["alt.atheism", "soc.religion.christian", "comp.graphics", "sci.med"]
twenty_train = fetch_20newsgroups(
subset="train", categories=categories, shuffle=True, random_state=42
)
twenty_test = fetch_20newsgroups(
subset="test", categories=categories, shuffle=True, random_state=42
)
# Printing the top 3 newstories
print("\n".join(twenty_train.data[0].split("\n")[:3]))
# -
# ### Train a model
#
# Now that we've downloaded the data, we can train the ML model using a simple pipeline with basic text pre-processors and a multinomial Naive Bayes classifier
# +
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline
text_clf = Pipeline(
[
("vect", CountVectorizer()),
("tfidf", TfidfTransformer()),
("clf", MultinomialNB()),
]
)
text_clf.fit(twenty_train.data, twenty_train.target)
# -
# ### Test single prediction
#
# Now that we've trained our model we can use it to predict from un-seen data.
#
# We can see below that the model is able to predict the first datapoint in the dataset correctly.
idx = 0
print(f"CONTENT:{twenty_test.data[idx][35:230]}\n\n-----------\n")
print(f"PREDICTED CLASS: {categories[twenty_test.target[idx]]}")
# ### Print accuracy
#
# We can print the accuracy of the model by running the test data and counting the number of correct classes.
# +
import numpy as np
predicted = text_clf.predict(twenty_test.data)
print(f"Accuracy: {np.mean(predicted == twenty_test.target):.2f}")
# -
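# Beyond overall accuracy, a per-class breakdown can be useful (a sketch using scikit-learn's `classification_report` on the `predicted` array from the previous cell):
# +
from sklearn.metrics import classification_report

print(classification_report(twenty_test.target, predicted, target_names=twenty_test.target_names))
# -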
# ## Deployment
#
# Now we want to be able to deploy the model we just trained. This will be as simple as updating the model binary.
# ### Save the trained model
#
# First we have to save the trained model in the `src/` folder.
# This is the binary that we will upload to our cloud storage (which acts as model registry) and which our wrapper will load.
# +
import joblib
joblib.dump(text_clf, "src/model.joblib")
# -
# ### Update your unit test
#
# We'll write a very simple unit test that make sure that the model loads and runs as expected.
# +
# %%writefile src/test_model.py
import numpy as np
from unittest import mock
import joblib
import os
EXPECTED_RESPONSE = np.array([3, 3])
def test_model(*args, **kwargs):
data = ["text 1", "text 2"]
m = joblib.load("model.joblib")
result = m.predict(data)
assert all(result == EXPECTED_RESPONSE)
# + language="bash"
# make test
# -
# ### Updating Integration Tests
#
# We can also now update the integration tests. This is another very simple step, where we'll want to test this model specifically.
#
# +
# %%writefile integration/test_e2e_seldon_model_server.py
from seldon_core.seldon_client import SeldonClient
from seldon_core.utils import seldon_message_to_json
import numpy as np
from subprocess import run
import time
import logging
API_AMBASSADOR = "localhost:8003"
def test_sklearn_server():
data = ["From: <EMAIL> (<NAME>)\nSubject: Re: HELP for Kidney Stones ..............\nOrganization: The Avant-Garde of the Now, Ltd.\nLines: 12\nNNTP-Posting-Host: ucsd.edu\n\nAs I recall from my bout with kidney stones, there isn't any\nmedication that can do anything about them except relieve the pain.\n\nEither they pass, or they have to be broken up with sound, or they have\nto be extracted surgically.\n\nWhen I was in, the X-ray tech happened to mention that she'd had kidney\nstones and children, and the childbirth hurt less.\n\nDemerol worked, although I nearly got arrested on my way home when I barfed\nall over the police car parked just outside the ER.\n\t- Brian\n",
'From: <EMAIL> (<NAME>)\nSubject: Re: Candida(yeast) Bloom, Fact or Fiction\nOrganization: Beth Israel Hospital, Harvard Medical School, Boston Mass., USA\nLines: 37\nNNTP-Posting-Host: enterprise.bih.harvard.edu\n\nIn article <1993Apr26.<EMAIL>>\n <EMAIL> writes:\n>are in a different class. The big question seems to be is it reasonable to \n>use them in patients with GI distress or sinus problems that *could* be due \n>to candida blooms following the use of broad-spectrum antibiotics?\n\nI guess I\'m still not clear on what the term "candida bloom" means,\nbut certainly it is well known that thrush (superficial candidal\ninfections on mucous membranes) can occur after antibiotic use.\nThis has nothing to do with systemic yeast syndrome, the "quack"\ndiagnosis that has been being discussed.\n\n\n>found in the sinus mucus membranes than is candida. Women have been known \n>for a very long time to suffer from candida blooms in the vagina and a \n>women is lucky to find a physician who is willing to treat the cause and \n>not give give her advise to use the OTC anti-fungal creams.\n\nLucky how? Since a recent article (randomized controlled trial) of\noral yogurt on reducing vaginal candidiasis, I\'ve mentioned to a \nnumber of patients with frequent vaginal yeast infections that they\ncould try eating 6 ounces of yogurt daily. It turns out most would\nrather just use anti-fungal creams when they get yeast infections.\n\n>yogurt dangerous). If this were a standard part of medical practice, as \n><NAME>. says it is, then the incidence of GI distress and vaginal yeast \n>infections should decline.\n\nAgain, this just isn\'t what the systemic yeast syndrome is about, and\nhas nothing to do with the quack therapies that were being discussed.\nThere is some evidence that attempts to reinoculate the GI tract with\nbacteria after antibiotic therapy don\'t seem to be very helpful in\nreducing diarrhea, but I don\'t think anyone would view this as a\nquack therapy.\n-- \n<NAME>\<EMAIL>vard.<EMAIL>\n']
labels = [2.0, 2.0]
sc = SeldonClient(
gateway="ambassador",
gateway_endpoint=API_AMBASSADOR,
deployment_name="seldon-model-server",
payload_type="ndarray",
namespace="seldon",
transport="rest")
sm_result = sc.predict(data=np.array(data))
logging.info(sm_result)
result = seldon_message_to_json(sm_result.response)
logging.info(result)
values = result.get("data", {}).get("ndarray", {})
assert (values == labels)
# -
# ### Now push your changes to trigger the pipeline
# Because Jenkins Classic has created a CI GitOps pipeline for our repo we just need to push our changes to run all the tests
#
# We can do this by running our good old git commands:
# + language="bash"
# git add .
# git push origin master
# -
# We can now see that the pipeline has been triggered by going to the Status page inside Jenkins pipeline:
# 
# Similarly we can actually see the logs of our running job by going to the Console Output page:
# 
# ## Managing your ML Application
#
# Now that we've deployed our MLOps repo, Argo CD will sync the model implementation repository charts with our Staging environment automatically.
# + language="bash"
# kubectl get pods -n staging
# -
# ### Test your application in the staging environment
# +
import numpy as np
from seldon_core.seldon_client import SeldonClient
# url = !kubectl get svc ambassador -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
sc = SeldonClient(
gateway="ambassador",
gateway_endpoint="localhost:80",
deployment_name="mlops-server",
payload_type="ndarray",
namespace="staging",
transport="rest",
)
response = sc.predict(data=np.array([twenty_test.data[0]]))
response.response.data
# + language="bash"
# curl -X POST -H 'Content-Type: application/json' \
# -d "{'data': {'names': ['text'], 'ndarray': ['Hello world this is a test']}}" \
# http://localhost/seldon/staging/news-classifier-server/api/v0.1/predictions
| examples/cicd/sig-mlops-jenkins-classic/models/news_classifier/README.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from datetime import datetime as dt
from sklearn.metrics import mean_squared_error
from sklearn.preprocessing import MinMaxScaler
from sklearn import metrics
# %matplotlib inline
import datetime
import keras
from keras import Sequential
from keras.layers import LSTM, Dense
import keras.backend as K
from keras.callbacks import EarlyStopping
import time
import tensorflow as tf
# -
from matplotlib import rc
# +
font = {'family' : 'sans-serif',
'size' : 24}
rc('font', **font)
# -
def periods_w_demand_new(df, sales):
nuls = []
not_nuls = []
nul_count = 0
notnul_count = 0
for i in df[sales]:
nuls.append(nul_count)
not_nuls.append(notnul_count)
if i == 0:
nul_count += 1
notnul_count = 0
else:
nul_count =0
notnul_count +=1
return pd.DataFrame([nuls, not_nuls], index=['nuls', 'notnuls']).T
class TimeSeriesModelling():
from sklearn import metrics
def __init__(self, df):
self.df = df
def set_date(self, date_index):
self.df[date_index] = pd.to_datetime(self.df[date_index])
self.df[date_index] = [x.date() for x in self.df[date_index]]
self.df.set_index(date_index, inplace=True)
def scale_data(self, df, scaler, values_index):
self.scaled = scaler.transform(df[values_index].values.reshape(-1,1))
return self.scaled
def difference_data(self, scaled):
self.scaled = pd.DataFrame(scaled, columns=['sales'])
self.diff = self.scaled.diff()
self.diff = self.diff.dropna()
return self.diff
    def de_difference(self, diff_n):
        origi = pd.concat([pd.DataFrame([0], columns=['sales']),
                           pd.DataFrame(self.scaled, columns=['sales']).iloc[2:, :]])
        origi.reset_index(inplace=True, drop=True)
        value = diff_n + origi
        return value.dropna()
def preprocessing(self, differenced, lagshift=1):
self.diff = differenced
for s in range(1,lagshift+1):
self.diff['shift_{}'.format(s)] = differenced['sales'].shift(s)
self.diff.dropna(inplace=True)
array_fit = (self.diff.values)
self.X, self.y = array_fit[:, 0:-1], array_fit[:, -1]
self.X = self.X.reshape(self.X.shape[0], 1, self.X.shape[1])
return self.X, self.y
def add_additional_feat(self, df, list_feat, scaler):
full_extra = df.reset_index(drop=True)
full_extra = full_extra.loc[(len(df) - len(self.X)):, list_feat]
full_extra.reset_index(inplace=True, drop=True)
#full_extra = scaler.transform(full_extra)
        self.X = pd.DataFrame(self.X.reshape(self.X.shape[0], self.X.shape[2]),
                              columns=['sales']).join(pd.DataFrame(full_extra))
#self.X = self.X.values.reshape(self.X.shape[0], 1, self.X.shape[1])
return self.X
def train_test_split(self, X, y, train_size=0.75):
self.X_train = X[:round((len(X)*train_size))]
self.X_test = X[round((len(X)*train_size)):]
self.y_train = y[:round((len(y)*train_size))]
self.y_test = y[round((len(y)*train_size)):]
return self.X_train, self.X_test, self.y_train, self.y_test
def plot_timeseries(self, values_index, train=False, test=False):
self.df[values_index].plot(kind='line', figsize=(15,5))
if train == True:
f, ax = plt.subplots(figsize=(15,5))
plt.plot(self.X_train[:,:,0].reshape(-1,1))
plt.show()
if test == True:
f, ax = plt.subplots(figsize=(15,5))
plt.plot(self.X_test[:,:,0].reshape(-1,1))
plt.show()
class BaselineModel:
def __init__(self, X_train, X_test):
self.X_train = X_train
self.X_test = X_test
def create_baseline_model(self, plot=True):
history = [x for x in self.X_train[:,:,0].reshape(-1,1)]
predictions = list()
for i in range(len(self.X_test[:,:,0].reshape(-1,1))):
# make prediction
predictions.append(history[-1])
# observation
history.append(self.X_test[:,:,0].reshape(-1,1)[i])
# report performance
rmse = np.sqrt(metrics.mean_squared_error(self.X_test[:,:,0].reshape(-1,1), predictions))
mse = metrics.mean_squared_error(self.X_test[:,:,0].reshape(-1,1), predictions)
print('RMSE: %.3f' % rmse)
print('MSE: %.3f' % mse)
self.baseline = predictions
# line plot of observed vs predicted
if plot == True:
plt.subplots(figsize=(18,8))
plt.plot(self.X_test[:,:,0].reshape(-1,1))
plt.plot(predictions)
plt.show()
# +
class ModelEvaluation:
def __init__(self, model):
self.model = model
def evaluate_model(self, X_train, X_test, y_train, y_test):
self.mse_test = self.model.evaluate(X_test, y_test, batch_size=1)
self.mse_train = self.model.evaluate(X_train, y_train, batch_size=1)
print('The model has a test MSE of {} and a train MSE of {}.'.format(self.mse_test, self.mse_train))
self.y_pred = self.model.predict(X_test, batch_size=1)
self.y_hat = self.model.predict(X_train, batch_size=1)
self.r2_test = metrics.r2_score(y_test, self.y_pred)
self.r2_train = metrics.r2_score(y_train, self.y_hat)
print('The model has a test R2 of {} and a train R2 of {}.'.format(self.r2_test, self.r2_train))
def plot_evaluation(self, y_train, y_test):
plt.subplots(figsize=(15,8))
plt.plot(y_train, c='darkorange')
plt.plot(self.y_hat, c='teal')
plt.title('Train dataset and predictions')
plt.show()
plt.subplots(figsize=(15,8))
plt.plot(y_test, c='tomato')
plt.plot(self.y_pred, c='indigo')
plt.title('Test dataset and predictions')
plt.show()
# -
def parameterize_output(x, threshold=0.5):
'''This small function just takes in a number predicted by the model,
and makes it discrete.
The function will round up or down with an equal probability, unless specified.
The aim would be to use the number of 0's in the dataset as a guideline for
rounding up or down. The proportion of 0's can be used to on a scale from 0 to 0.5
where 0 is all 0's and 0.5 is no zeros.
'''
import math
chance = np.random.uniform()
if chance <= threshold:
return np.round(x)
else:
return np.fix(x)
def is_sales(sales):
return pd.DataFrame([0 if x != 0 else 1 for x in sales], columns = ['is_sales'])
df = pd.read_csv('../Group Project Stage/snapshot_full_df.csv')
df_train = df.copy()
df_train.sort_values(['store_key', 'sku_key', 'tran_date'], inplace=True)
df_train.shape
df_train.head()
df_train = df_train.iloc[:500000]
df_train.drop('Unnamed: 0', axis=1, inplace=True)
def preprocess_df(df):
df.loc[:,'weekday'] = df.loc[:,'tran_date'].dt.weekday_name
df.loc[:,'day'] = df.loc[:,'tran_date'].dt.day
df.loc[:,'month'] = df.loc[:,'tran_date'].dt.month
df.loc[:,'week'] = df.loc[:,'tran_date'].dt.week
cat = ['store_region', 'store_grading', 'sku_department',
'sku_subdepartment', 'sku_category', 'sku_subcategory',
'time_of_week', 'monthend', 'month', 'week', 'weekday']
df.loc[:,'time_of_week'] = ['Weekend' if x in ['Saturday', 'Sunday', 'Friday'] else 'Weekday' for x in df.loc[:,'weekday']]
df.loc[:,'monthend'] = ['Monthend' if x in [25, 26, 27, 28, 29, 30,
31, 1, 2, 3, 4, 5] else 'Not-Monthend' for x in df.loc[:,'day']]
df.drop(['day'], axis=1, inplace=True)
for i in cat:
df = df.join(pd.get_dummies(df[i], prefix=i))
df.drop(cat, axis=1, inplace=True)
df = df.reset_index(drop=True)
for i in range(2,12):
df.loc[:,'agg{}'.format(i)] = df.loc[:,'sales'].rolling(i).mean()
for i in range(10,11):
df.loc[:,'max{}'.format(i)] = df.loc[:,'sales'].rolling(i).max()
for i in range(10,11):
df.loc[:,'min{}'.format(i)] = df.loc[:,'sales'].rolling(i).min()
df.dropna(inplace=True)
extra = periods_w_demand_new(df, 'sales')
sales = is_sales(df['sales'])
df = df.reset_index(drop=True).join(extra).join(sales)
return df
df_train['tran_date'] = pd.to_datetime(df_train['tran_date'])
start = time.time()
full_sales = preprocess_df(df_train)
end = time.time()
print('It took a full {} minutes'.format((end-start)/60))
full_sales.head(1)
cat = full_sales.columns[4:]
# cat
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import StandardScaler
scaler = MinMaxScaler()
scaler.fit(full_sales['sales'].values.reshape(-1,1))
cat2 = ['selling_price', 'avg_discount', 'max10', 'min10',
'agg2', 'agg3', 'agg4', 'agg5', 'agg6', 'agg7',
'agg8', 'agg9', 'agg10', 'agg11', 'nuls', 'notnuls', 'is_sales']
scale_add = MinMaxScaler()
scale_add.fit(full_sales[cat2])
full_sales[cat2] = scale_add.transform(full_sales[cat2])
subset_df = full_sales#[full_sales['sku_key']==48676]
#
# +
ts = TimeSeriesModelling(subset_df)
#ts.set_date('tran_date')
df = ts.scale_data(subset_df, scaler=scaler, values_index='sales')
df = pd.DataFrame(df, columns=['sales'])
X, y = ts.preprocessing(df, lagshift=1)
X = ts.add_additional_feat(subset_df, cat, scale_add)
#X = X.reshape(X.shape[0], X.shape[2])
X_train, X_test, y_train, y_test = ts.train_test_split(X, y, 0.8)  # the fit and evaluation below need these splits
# -
X
X.shape
ts.plot_timeseries('sales')
# ## Model all features:
from keras.optimizers import Adam
early_stop = EarlyStopping(monitor='val_loss', patience=5, verbose=1)
K.clear_session()
adam = Adam(lr=0.0001)
model = Sequential()
model.add(Dense(128, input_shape=(473,), activation='tanh'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer=adam)
model.summary()
model.fit(X_train, y_train, batch_size=128, epochs=200, callbacks=[early_stop], validation_split=0.1)
me = ModelEvaluation(model)
me.evaluate_model(X_train, X_test, y_train, y_test)
me.plot_evaluation(y_train, y_test)
# ### Evaluate only one:
full_sales[['sku_key', 'store_key', 'sales']].groupby(['sku_key', 'store_key']).sum().sort_values('sales', ascending=False)
subset_df = full_sales[(full_sales['sku_key']==47593)&(full_sales['store_key'] == 4)]
subset_df.head()
subset_df.info()
ts = TimeSeriesModelling(subset_df)
df = ts.scale_data(subset_df, scaler=scaler, values_index='sales')
df = pd.DataFrame(df, columns=['sales'])
X, y = ts.preprocessing(df, lagshift=1)
X = ts.add_additional_feat(subset_df, cat, scale_add)
X = X.values  # feature matrix as a plain 2-D array for the dense network
X_train, X_test, y_train, y_test = ts.train_test_split(X, y, 0.8)
ts.plot_timeseries('sales')
X_train.shape
me = ModelEvaluation(model)
me.evaluate_model(X_train, X_test, y_train, y_test)
me.plot_evaluation(y_train, y_test)
model.save('all_features_ann.h5') # creates a HDF5 file 'my_model.h5'
from keras.models import load_model
model = load_model('all_features_ann.h5')
# # Blind predict
ts = TimeSeriesModelling(subset_df)
ts.set_date('tran_date')
subset_df.shape
# Split into training
train = subset_df.iloc[:-90]
test = subset_df.iloc[-90:]
train.tail(1)
test.head(1)
float(scaler.inverse_transform(model.predict(X[-1,:].reshape(1,X.shape[-1]))))
# +
# BLIND 90 Days
import datetime
predicted = []
zeros = ((train['sales'] == 0).sum() / len(train))
print('We have {} zeros'.format(zeros))
for i in range(90):
df = ts.scale_data(train, scaler=scaler, values_index='sales')
df = pd.DataFrame(df, columns=['sales'])
X, y = ts.preprocessing(df, lagshift=1)
X = ts.add_additional_feat(train, cat, scale_add)
X = X.reshape(X.shape[0], X.shape[2])
predict = model.predict(X[-1,:].reshape(1,X.shape[-1]))
changes = train.iloc[-1,:].copy()
changes = pd.DataFrame(changes).T
changes[cat2] = scale_add.inverse_transform(changes[cat2].values.reshape(1,-1))
#This is where the add features go
#Dates
next_date = pd.to_datetime(train.index[-1] + datetime.timedelta(days=1))
last_day = next_date.day
last_week = next_date.week
last_month = next_date.month
last_year = next_date.year
last_dayname = next_date.weekday_name
if last_day in [25, 26, 27, 28, 29, 30, 31, 1, 2, 3, 4, 5]:
changes.loc[:,'monthend_Not-Monthend'] = 0
changes.loc[:,'monthend_Monthend'] = 1
else:
changes.loc[:,'monthend_Not-Monthend'] = 1
changes.loc[:,'monthend_Monthend'] = 0
if last_dayname in ['Saturday', 'Sunday', 'Friday']:
changes.loc[:,'time_of_week_Weekday'] = 0
changes.loc[:,'time_of_week_Weekend'] = 1
else:
changes.loc[:,'time_of_week_Weekday'] = 1
changes.loc[:,'time_of_week_Weekend'] = 0
for i in range(1, 53):
changes['week_{}'.format(i)] = 0
if i == last_week:
changes['week_{}'.format(i)] = 1
for i in range(1, 13):
changes['month_{}'.format(i)] = 0
if i == last_month:
changes['month_{}'.format(i)] = 1
for i in ['weekday_Friday', 'weekday_Monday', 'weekday_Saturday',
'weekday_Sunday', 'weekday_Thursday', 'weekday_Tuesday',
'weekday_Wednesday']:
changes[i] = 0
if i == 'weekday_{}'.format(last_dayname):
changes.loc[:,i] = 1
random_nr = np.random.uniform()
if random_nr < zeros/10:
changes.loc[:,'sales'] = 0
predicted.append(0)
elif random_nr < zeros/2:
changes.loc[:,'sales'] = float(scaler.inverse_transform(predict).reshape(-1))/2
predicted.append(float(scaler.inverse_transform(predict).reshape(-1))/2)
else:
changes.loc[:,'sales'] = float(scaler.inverse_transform(predict).reshape(-1))
predicted.append(float(scaler.inverse_transform(predict).reshape(-1)))
#changes = pd.DataFrame(changes).T
changes.index = [next_date]
train = pd.concat([train, changes])
#Aggregates
for i in range(2,12):
train.loc[next_date,'agg{}'.format(i)] = train.iloc[-i:,2].rolling(i).mean()[-1]
train.dropna(inplace=True)
for i in range(10,11):
train.loc[next_date,'max{}'.format(i)] = train.iloc[-i:,2].rolling(i).max()[-1]
train.dropna(inplace=True)
for i in range(10,11):
train.loc[next_date,'min{}'.format(i)] = train.iloc[-i:,2].rolling(i).min()[-1]
train.dropna(inplace=True)
#Nuls
notnuls = []
nul_counter = 0
nulidx = -2
ntick = True
while ntick == True:
if np.fix(train.iloc[nulidx, 2]) == 0:
nul_counter += 1
nulidx -= 1
else:
ntick = False
nnul_counter = 0
nnulidx = -2
tick = True
while tick == True:
if np.fix(train.iloc[nnulidx, 2]) != 0:
nnul_counter += 1
nnulidx -= 1
else:
tick = False
is_sales = 0
if round(train.loc[next_date, 'sales']) == 0:
is_sales = 0
else:
is_sales = 1
train.loc[next_date, 'nuls'] = nul_counter
train.loc[next_date, 'notnuls'] = nnul_counter
train.loc[next_date, 'is_sales'] = is_sales
train.loc[next_date, cat2] = scale_add.transform(train.loc[next_date, cat2].values.reshape(1, -1)).reshape(-1)
# -
dates = pd.to_datetime('2012-12-12')
dates.day_name()
# +
train.reset_index(inplace=True)
train['index'] = pd.to_datetime(train['index'])
train.set_index('index', inplace=True)
test.reset_index(inplace=True)
test['tran_date'] = pd.to_datetime(test['tran_date'])
test.set_index('tran_date', inplace=True)
# -
train.columns[0:]
train.shape
# Looks like the problem lies in the scaling. The aggregates are scaled, but the sales are not. Fix this and the model could move forward.
#
train.tail()
test.shape
test.tail()
train['sales'] = train['sales'].astype('float')
# +
f, ax = plt.subplots(figsize=(12,8))
train['sales'].plot(ax=ax)
test['sales'].plot(ax=ax, color='r')
ax.set_title('55991 Sales')
#ax.set_xlim(right=pd.datetime(2018,1,1))
#ax.set_ylim(bottom=-5, top=50)
f.savefig('90day_pred_vanilla_correct_zeros.png')
# -
predicted
metrics.mean_squared_error(test['sales'].values, predicted)
sum(test['sales'].values)
sum(predicted)
# One of the first things we saw in this model is the model going rogue, i.e. it just drifts off fully negative or fully positive.
# What if we combine some sort of simulation into the model, which can pick, let's say, a zero for the predicted value with the same probability as its occurrence in the dataset? This would bring the model back to the values most common in the dataset.
# This could be combined with a general Monte Carlo approach, in which every prediction is updated to a Monte Carlo version, or similar.
#
# Other features that could be added to counter the going rogue are a rolling min and rolling max over a number of time periods.
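# The `parameterize_output` helper applied a few cells further down is not defined in this part of the notebook. A minimal sketch, assuming it simply clips predictions at or below the occurrence-based threshold to zero (an assumption, not necessarily the notebook's actual implementation):
def parameterize_output(value, threshold):
    # Hypothetical reconstruction: zero out a prediction at or below the threshold,
    # otherwise pass it through unchanged.
    return 0.0 if value <= threshold else float(value)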
plot_df = pd.DataFrame(train).join(test, lsuffix='train', rsuffix='test')
plot_df.head()
plot_df = plot_df[['salestrain', 'salestest']]
plot_df.columns = ['pred', 'true']
threshold = (1-plot_df['true'].value_counts()[0]/len(plot_df))
print('Parameterization threshold is {}'.format(threshold))
plot_df['pred'] = plot_df['pred'].apply(parameterize_output, args=[threshold])
pred = plot_df['pred'].apply(parameterize_output, args=[threshold]).values
plot_df.reset_index(inplace=True)
plot_df['index'] = pd.to_datetime(plot_df['index'])
plot_df.set_index('index', inplace=True)
plot_df.head()
# +
f, ax = plt.subplots(figsize=(18,8),)
plot_df.plot(ax=ax, color=['darkviolet', 'orange'], linewidth=2.5)
ax.set_title('True and predicted sales', fontdict=font)
ax.set_xlabel('Date', fontdict=font)
ax.set_ylabel('Sales', fontdict=font)
ax.axvline('2017-10-27', color='black')
plt.tick_params(labelsize=14)
plt.show()
f.savefig('55991_unsmoothed_new_14days.png')
# -
plot_df.index[-90].date()
test_df = plot_df[plot_df.index.date >= plot_df.index[-90].date()]
rmse = np.sqrt(metrics.mean_squared_error(test_df.iloc[:,1], test_df.iloc[:,0]))
mse = metrics.mean_squared_error(test_df.iloc[:,1], test_df.iloc[:,0])
print('RMSE: %.3f' % rmse)
print('MSE: %.3f' % mse)
test_df.iloc[:,0].sum()
rmse = np.sqrt(metrics.mean_squared_error(test_df.iloc[1:,1], test_df.iloc[:-1,1]))
mse = metrics.mean_squared_error(test_df.iloc[1:,1], test_df.iloc[:-1,1])
print('RMSE: %.3f' % rmse)
print('MSE: %.3f' % mse)
test_df.iloc[:,1].sum()
| notebooks/bcx_presentation_new_subset.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Imports required packages
import numpy as np
from PIL import Image
import os
# -
def scale_normalize_and_center_image(image):
"""
    Scale-normalizes the image so that its longer side is 150 pixels (preserving the
    aspect ratio) and centers it on a 256 x 256 pixel white canvas.
    Input:
        image: a PIL Image with a white (255) background
    Returns the modified image.
"""
image_data = np.asarray(image)
image_data_bw = np.where(image_data!=255)
cropBox = (min(image_data_bw[0]), min(image_data_bw[1]), max(image_data_bw[0]), max(image_data_bw[1]))
image_data_new = image_data[cropBox[0]:cropBox[2]+1, cropBox[1]:cropBox[3]+1]
new_image = Image.fromarray(image_data_new)
if new_image.width >= new_image.height:
horz_scale = 150./new_image.width
new_image_resized = new_image.resize((150, round(new_image.height*horz_scale)))
else:
vert_scale = 150./new_image.height
new_image_resized = new_image.resize((round(new_image.width*vert_scale), 150))
canvas = Image.new('L', (256, 256), color='white')
canvas.paste(
new_image_resized, ((canvas.width - new_image_resized.width)//2, (canvas.height - new_image_resized.height)//2))
del image_data, image_data_bw, cropBox, image_data_new, new_image, new_image_resized
return canvas
# +
# Iterates over all the image subdirectories in the specified directory and
# performs the transformation on each image
for subdir, dirs, files in os.walk('./data/5_Train_Images_Size_Normalized+Centered/train'):
for filename in files:
filepath = subdir + os.sep + filename
if filepath.endswith(".png"):
image=Image.open(filepath)
image = scale_normalize_and_center_image(image)
image.save(filepath)
del image
| Data-Centric_AI_Competition_DeepLearning.AI/Normalizing_Image_Size_and_Centering.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# #### Linear data
#
# - logistic regression
# - SVM (Support Vector Machine)
#
# ##### Non-linear data
#
# - KNN
# - Decision Tree
# - Random Forest
# - Naive Bayes classifier, etc.
# #### Decision Tree (used for both regression and classification)
# ***ID3 Algorithm***
# - uses entropy and information gain ---> the feature with the highest information gain is selected as the root node
# - entropy ---> uncertainty in the data
# - information gain ---> difference between the entropy before and after splitting the data
# ***CART Algorithm***
# - uses the Gini index ---> the feature with the lowest Gini value is selected as the root node (see the sketch below)
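# A small illustrative sketch (not part of the original notebook) of how entropy, information gain and Gini impurity are computed for a candidate split:
import numpy as np

def entropy(labels):
    # Shannon entropy of a label array: measures the uncertainty in the data.
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def gini(labels):
    # Gini impurity: chance of mislabelling a randomly drawn sample.
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def information_gain(parent, left, right):
    # Entropy before the split minus the weighted entropy after the split.
    n = len(parent)
    return entropy(parent) - (len(left) / n) * entropy(left) - (len(right) / n) * entropy(right)

# A split that separates the two classes perfectly has maximal gain and zero Gini impurity.
parent = np.array([0, 0, 1, 1])
print(information_gain(parent, np.array([0, 0]), np.array([1, 1])))  # 1.0
print(gini(np.array([0, 0])))                                        # 0.0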
from sklearn.datasets import load_iris
iris=load_iris()
iris.keys()
print(iris.DESCR)
import pandas as pd
iris_df=pd.DataFrame(iris.data,columns=iris.feature_names)
iris_df["target"]=iris.target
iris_df.head()
iris_df["target"].unique()
x=iris_df.drop('target',axis=1)
x.head()
y=iris_df["target"]
y.head()
from sklearn.model_selection import train_test_split
x_train,x_test,y_train,y_test=train_test_split(x,y,train_size=0.7,random_state=1)
from sklearn.tree import DecisionTreeClassifier
model=DecisionTreeClassifier()
model.fit(x_train,y_train)
y_pred=model.predict(x_test)
y_pred
from sklearn.metrics import accuracy_score,confusion_matrix,precision_score,recall_score
accuracy_score(y_test,y_pred)
confusion_matrix(y_test,y_pred)
help(precision_score)
from sklearn import tree
import matplotlib.pyplot as plt
plt.figure(figsize=(15,15))
tree.plot_tree(model)
plt.show()
from sklearn.tree import DecisionTreeClassifier
model=DecisionTreeClassifier(criterion='entropy',max_depth=3)
model.fit(x_train,y_train)
from sklearn import tree
import matplotlib.pyplot as plt
plt.figure(figsize=(15,15))
tree.plot_tree(model)
plt.show()
## Try to apply the heart disease dataset to the decision tree algorithm and compare the accuracy
df=pd.read_csv("C:/Users/HP/Downloads/archive(3)/heart.csv")
# ***Decision Tree Regressor***
df=pd.read_csv("https://raw.githubusercontent.com/AP-State-Skill-Development-Corporation/Datasets/master/Regression/age_salary_hours.csv")
df
df["Education"].unique()
plt.scatter(df["Age"],df["Annual Salary"])
df.isna().sum()
x=df.loc[:,["Age","Weekly hours"]]
x
y=df["Annual Salary"]
y
from sklearn.model_selection import train_test_split
x_train,x_test,y_train,y_test=train_test_split(x,y,train_size=0.7,random_state=1)
from sklearn.tree import DecisionTreeRegressor
model=DecisionTreeRegressor(max_depth=3)
model.fit(x_train,y_train)
y_pred=model.predict(x_test)
from sklearn.metrics import r2_score,mean_squared_error
r2_score(y_test,y_pred)*100
mean_squared_error(y_test,y_pred)
plt.figure(figsize=(15,15))
tree.plot_tree(model)
plt.show()
# ##### Task
# - apply the decision tree regressor to the insurance dataset
#
| Notebooks/Day_38_Decision_tree/Decision_Tree.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: MindSpore
# language: python
# name: mindspore
# ---
# # Improving Model Security with the NAD Algorithm
#
# [](https://gitee.com/mindspore/docs/blob/master/docs/notebook/mindspore_improve_model_security_nad.ipynb)
#
# ## Overview
#
# This tutorial introduces the model security protection capabilities provided by MindArmour and guides you through using MindArmour to give your AI models a degree of protection.
#
# AI algorithms are generally designed without the relevant security threats in mind, so an algorithm's output can be influenced by a malicious attacker and the AI system can be led to wrong conclusions. An attacker adds small perturbations, imperceptible to humans, to the original samples so that the deep learning model misclassifies them; this is called an adversarial example attack. MindArmour's model security module provides adversarial example generation, adversarial example detection, model defense, and attack/defense evaluation, offering important support for AI model security research and secure AI applications.
#
# - The adversarial example generation module lets security engineers quickly and efficiently generate adversarial examples for attacking AI models.
#
# - The adversarial example detection and defense modules let users detect and filter adversarial examples and strengthen the robustness of AI models against them.
#
# - The evaluation module provides a variety of metrics to comprehensively evaluate attack and defense performance on adversarial examples.
#
# Using adversarial attack and defense on an image classification task, with the FGSM attack and the NAD defense as examples, this tutorial shows how to use MindArmour for adversarial attack and defense.
#
# > This example targets CPU, GPU, and Ascend 910 AI processors. The complete sample code is available here: https://gitee.com/mindspore/mindarmour/blob/master/examples/model_security/model_defenses/mnist_defense_nad.py
# ## Preparation
#
# This example uses the LeNet5 network. It shows how the trained model performs on normal validation data and how it performs once adversarial examples are used. Before that, the following preparation is needed.
#
# 1. Download and install the MindArmour package matching your MindSpore version.
# +
import os
import mindspore
version = mindspore.__version__
ma_link = "https://ms-release.obs.cn-north-4.myhuaweicloud.com/{0}/MindArmour/x86_64/mindarmour-{0}-cp37-cp37m-linux_x86_64.whl".format(version)
os.system("pip install {}".format(ma_link))
# -
# 2. Prepare the MNIST dataset.
#
# The following sample code downloads the dataset and extracts it to the specified location.
# +
import os
import requests
requests.packages.urllib3.disable_warnings()
def download_dataset(dataset_url, path):
filename = dataset_url.split("/")[-1]
save_path = os.path.join(path, filename)
if os.path.exists(save_path):
return
if not os.path.exists(path):
os.makedirs(path)
res = requests.get(dataset_url, stream=True, verify=False)
with open(save_path, "wb") as f:
for chunk in res.iter_content(chunk_size=512):
if chunk:
f.write(chunk)
print("The {} file is downloaded and saved in the path {} after processing".format(os.path.basename(dataset_url), path))
train_path = "datasets/MNIST_Data/train"
test_path = "datasets/MNIST_Data/test"
download_dataset("https://mindspore-website.obs.myhuaweicloud.com/notebook/datasets/mnist/train-labels-idx1-ubyte", train_path)
download_dataset("https://mindspore-website.obs.myhuaweicloud.com/notebook/datasets/mnist/train-images-idx3-ubyte", train_path)
download_dataset("https://mindspore-website.obs.myhuaweicloud.com/notebook/datasets/mnist/t10k-labels-idx1-ubyte", test_path)
download_dataset("https://mindspore-website.obs.myhuaweicloud.com/notebook/datasets/mnist/t10k-images-idx3-ubyte", test_path)
# -
# The directory structure of the downloaded dataset files is as follows:
#
# ```text
# ./datasets/MNIST_Data
# ├── test
# │ ├── t10k-images-idx3-ubyte
# │ └── t10k-labels-idx1-ubyte
# └── train
# ├── train-images-idx3-ubyte
# └── train-labels-idx1-ubyte
# ```
# ## Building the Model to Be Attacked
#
# MNIST is used as the demonstration dataset, and a simple custom model serves as the model under attack.
#
# ### Importing the Required Packages
# +
import os
import numpy as np
from scipy.special import softmax
from mindspore import dataset as ds
from mindspore import dtype as mstype
import mindspore.dataset.vision.c_transforms as CV
import mindspore.dataset.transforms.c_transforms as C
from mindspore.dataset.vision import Inter
import mindspore.nn as nn
from mindspore.nn import SoftmaxCrossEntropyWithLogits
from mindspore import Model, Tensor, context
from mindspore.train.callback import LossMonitor
from mindarmour.adv_robustness.attacks import FastGradientSignMethod
from mindarmour.utils import LogUtil
from mindarmour.adv_robustness.evaluations import AttackEvaluate
context.set_context(mode=context.GRAPH_MODE, device_target="GPU")
LOGGER = LogUtil.get_instance()
LOGGER.set_level("INFO")
TAG = 'demo'
# -
# ### Loading the Dataset
#
# Load the MNIST dataset with the MnistDataset interface provided by MindSpore's dataset module.
# generate dataset for train or test
def generate_mnist_dataset(data_path, batch_size=32, repeat_size=1,
num_parallel_workers=1, index=True):
"""
create dataset for training or testing
"""
# define dataset
ds1 = ds.MnistDataset(data_path)
# define operation parameters
resize_height, resize_width = 32, 32
rescale = 1.0 / 255.0
shift = 0.0
# define map operations
resize_op = CV.Resize((resize_height, resize_width),
interpolation=Inter.LINEAR)
rescale_op = CV.Rescale(rescale, shift)
hwc2chw_op = CV.HWC2CHW()
type_cast_op = C.TypeCast(mstype.int32)
# apply map operations on images
if not index:
one_hot_enco = C.OneHot(10)
ds1 = ds1.map(operations=one_hot_enco, input_columns="label",
num_parallel_workers=num_parallel_workers)
type_cast_op = C.TypeCast(mstype.float32)
ds1 = ds1.map(operations=type_cast_op, input_columns="label",
num_parallel_workers=num_parallel_workers)
ds1 = ds1.map(operations=resize_op, input_columns="image",
num_parallel_workers=num_parallel_workers)
ds1 = ds1.map(operations=rescale_op, input_columns="image",
num_parallel_workers=num_parallel_workers)
ds1 = ds1.map(operations=hwc2chw_op, input_columns="image",
num_parallel_workers=num_parallel_workers)
# apply DatasetOps
buffer_size = 10000
ds1 = ds1.shuffle(buffer_size=buffer_size)
ds1 = ds1.batch(batch_size, drop_remainder=True)
ds1 = ds1.repeat(repeat_size)
return ds1
# ### Building the Model
#
# The LeNet model is used as an example here; you can also build and train your own model.
#
# 1. Define the LeNet model network.
# +
import mindspore.nn as nn
from mindspore.common.initializer import Normal
class LeNet5(nn.Cell):
"""Lenet network structure."""
# define the operator required
def __init__(self, num_class=10, num_channel=1):
super(LeNet5, self).__init__()
self.conv1 = nn.Conv2d(num_channel, 6, 5, pad_mode='valid')
self.conv2 = nn.Conv2d(6, 16, 5, pad_mode='valid')
self.fc1 = nn.Dense(16 * 5 * 5, 120, weight_init=Normal(0.02))
self.fc2 = nn.Dense(120, 84, weight_init=Normal(0.02))
self.fc3 = nn.Dense(84, num_class, weight_init=Normal(0.02))
self.relu = nn.ReLU()
self.max_pool2d = nn.MaxPool2d(kernel_size=2, stride=2)
self.flatten = nn.Flatten()
# use the preceding operators to construct networks
def construct(self, x):
x = self.max_pool2d(self.relu(self.conv1(x)))
x = self.max_pool2d(self.relu(self.conv2(x)))
x = self.flatten(x)
x = self.relu(self.fc1(x))
x = self.relu(self.fc2(x))
x = self.fc3(x)
return x
# -
# 2. Train the LeNet model, loading the data with the `generate_mnist_dataset` function defined above.
mnist_path = "./datasets/MNIST_Data/"
batch_size = 32
# train original model
ds_train = generate_mnist_dataset(os.path.join(mnist_path, "train"),
batch_size=batch_size, repeat_size=1,
index=False)
net = LeNet5()
loss = SoftmaxCrossEntropyWithLogits(sparse=False, reduction="mean")
opt = nn.Momentum(net.trainable_params(), 0.01, 0.9)
model = Model(net, loss, opt, metrics=None)
model.train(3, ds_train, callbacks=[LossMonitor(1875)],
dataset_sink_mode=False)
# 3. Test the model.
# +
# prediction accuracy before attack
# get test data
ds_test = generate_mnist_dataset(os.path.join(mnist_path, "test"),
batch_size=batch_size, repeat_size=1,
index=False)
inputs = []
labels = []
for data in ds_test.create_tuple_iterator():
inputs.append(data[0].asnumpy().astype(np.float32))
labels.append(data[1].asnumpy())
test_inputs = np.concatenate(inputs)
test_labels = np.concatenate(labels)
def get_net_acc(network, inputs_data, labels):
network.set_train(False)
test_logits = net(Tensor(inputs_data)).asnumpy()
tmp = np.argmax(test_logits, axis=1) == np.argmax(labels, axis=1)
accuracy = np.mean(tmp)
return accuracy
accuracy = get_net_acc(net, test_inputs, test_labels)
LOGGER.info(TAG, 'prediction accuracy before attacking is : %s', accuracy)
# -
# The test results show a classification accuracy above 97%.
# ## Adversarial Attack
#
# Before launching the adversarial attack, select 32 images to see what they look like before being attacked.
# +
import matplotlib.pyplot as plt
count = 1
# %matplotlib inline
for i in test_inputs[:32]:
plt.subplot(4, 8, count)
plt.imshow(np.squeeze(i), cmap='gray', interpolation='nearest')
plt.xticks([])
plt.axis("off")
count += 1
plt.show()
# -
# Call the FGSM interface (FastGradientSignMethod) provided by MindArmour to run an adversarial attack on the validation images.
#
# Then look at the 32 previously selected images to see how they change after the adversarial attack.
# +
# attacking
# get adv data
attack = FastGradientSignMethod(net, eps=0.3, loss_fn=loss)
adv_data = attack.batch_generate(test_inputs, test_labels)
count = 1
# %matplotlib inline
for i in adv_data[:32]:
plt.subplot(4, 8, count)
plt.imshow(np.squeeze(i), cmap='gray', interpolation='nearest')
plt.xticks([])
count += 1
plt.axis("off")
plt.show()
# -
# After the attack, the images show a lot of watermark-like background noise. Visually they are still clearly recognizable, but for the model this is not necessarily the case.
#
# Next, verify the model's ability to classify the attacked images.
# get accuracy of adv data on original model
adv_logits = net(Tensor(adv_data)).asnumpy()
adv_proba = softmax(adv_logits, axis=1)
tmp = np.argmax(adv_proba, axis=1) == np.argmax(test_labels, axis=1)
accuracy_adv = np.mean(tmp)
LOGGER.info(TAG, 'prediction accuracy after attacking is : %s', accuracy_adv)
attack_evaluate = AttackEvaluate(test_inputs.transpose(0, 2, 3, 1),
test_labels,
adv_data.transpose(0, 2, 3, 1),
adv_proba)
LOGGER.info(TAG, 'mis-classification rate of adversaries is : %s',
attack_evaluate.mis_classification_rate())
LOGGER.info(TAG, 'The average confidence of adversarial class is : %s',
attack_evaluate.avg_conf_adv_class())
LOGGER.info(TAG, 'The average confidence of true class is : %s',
attack_evaluate.avg_conf_true_class())
LOGGER.info(TAG, 'The average distance (l0, l2, linf) between original '
'samples and adversarial samples are: %s',
attack_evaluate.avg_lp_distance())
LOGGER.info(TAG, 'The average structural similarity between original '
'samples and adversarial samples are: %s',
attack_evaluate.avg_ssim())
# After the untargeted FGSM attack on the model:
#
# - the model accuracy drops from above 97% to below 10%;
#
# - the misclassification rate exceeds 90%, and the average confidence of the adversarial class for successful attacks (ACAC) is 0.70117253;
#
# - the average confidence of the true class for successfully attacked adversarial examples (ACTC) is 0.04269705;
#
# - the zero-norm, two-norm, and infinity-norm distances between the generated adversarial examples and the original samples are also reported, and the average structural similarity between each adversarial example and its original sample is 0.5092086;
#
# - generating one adversarial example takes 0.003125 s on average.
#
# The adversarial examples generated by the untargeted FGSM attack show almost no visible change, yet all of them successfully mislead the model into classifying them as other, incorrect classes.
# ## Adversarial Defense
#
# NaturalAdversarialDefense (NAD) is a simple and effective defense against adversarial examples. It uses adversarial training: adversarial examples are constructed during model training and mixed with the original samples, and the model is trained on both together. As training proceeds, the model's robustness against adversarial examples improves. The NAD algorithm uses FGSM as the attack algorithm to construct the adversarial examples, as the conceptual sketch below illustrates.
#
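# The core idea of adversarial training, which NAD encapsulates, can be summarized in a few lines. This is only a conceptual sketch under assumed placeholder callables (`model_forward`, `loss_fn`, `fgsm_attack`, `apply_gradients`), not the MindArmour/MindSpore API:
import numpy as np

def adversarial_training_epoch(batches, model_forward, loss_fn, fgsm_attack, apply_gradients):
    # Conceptual sketch only: every callable here is a hypothetical placeholder.
    for images, labels in batches:
        adv_images = fgsm_attack(images, labels)                 # craft FGSM adversarial samples
        mixed_x = np.concatenate([images, adv_images], axis=0)   # mix clean and adversarial inputs
        mixed_y = np.concatenate([labels, labels], axis=0)       # labels stay the same for both halves
        loss = loss_fn(model_forward(mixed_x), mixed_y)          # loss on the mixed batch
        apply_gradients(loss)                                    # update the model weights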
# ### Implementing the Defense
#
# Call the NAD defense interface (NaturalAdversarialDefense) provided by MindArmour.
# +
from mindarmour.adv_robustness.defenses import NaturalAdversarialDefense
# defense
net.set_train()
nad = NaturalAdversarialDefense(net, loss_fn=loss, optimizer=opt,
bounds=(0.0, 1.0), eps=0.3)
nad.batch_defense(test_inputs, test_labels, batch_size=32, epochs=10)
# get accuracy of test data on defensed model
net.set_train(False)
test_logits = net(Tensor(test_inputs)).asnumpy()
tmp = np.argmax(test_logits, axis=1) == np.argmax(test_labels, axis=1)
accuracy = np.mean(tmp)
LOGGER.info(TAG, 'accuracy of TEST data on defensed model is : %s', accuracy)
# get accuracy of adv data on defensed model
adv_logits = net(Tensor(adv_data)).asnumpy()
adv_proba = softmax(adv_logits, axis=1)
tmp = np.argmax(adv_proba, axis=1) == np.argmax(test_labels, axis=1)
accuracy_adv = np.mean(tmp)
attack_evaluate = AttackEvaluate(test_inputs.transpose(0, 2, 3, 1),
test_labels,
adv_data.transpose(0, 2, 3, 1),
adv_proba)
LOGGER.info(TAG, 'accuracy of adv data on defensed model is : %s',
np.mean(accuracy_adv))
LOGGER.info(TAG, 'defense mis-classification rate of adversaries is : %s',
attack_evaluate.mis_classification_rate())
LOGGER.info(TAG, 'The average confidence of adversarial class is : %s',
attack_evaluate.avg_conf_adv_class())
LOGGER.info(TAG, 'The average confidence of true class is : %s',
attack_evaluate.avg_conf_true_class())
# -
# ### Defense Results
#
# After defending with NAD, the model's misclassification rate on adversarial examples drops from above 90% to below 30%, so the model effectively defends against them. At the same time, the model's classification accuracy on the original test dataset remains at 97%.
| docs/notebook/mindspore_improve_model_security_nad.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # INTRACITY DRIVER
# +
# %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from sklearn import neural_network
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
# -
scaler = StandardScaler()
ann = neural_network.MLPRegressor(shuffle=True,
alpha=0.5,
hidden_layer_sizes=(100, 100),
max_iter=10000,
random_state=100,
verbose=False)
def crossValidate(X, y, clf):
X_train, X_test, y_train, y_test = train_test_split(X, y)
scaler.fit(X_train)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
clf.fit(X_train, y_train)
prediction = clf.predict(X_test)
print("CV performance")
print(200 * r2_score(y_test, prediction))
print("Train performance")
print(200 * r2_score(y_train, clf.predict(X_train)))
def testRUN_crCsv(X, y, clf):
scaler.fit(X)
X = scaler.transform(X)
clf.fit(X, y)
predict = clf.predict(scaler.transform(test))
predict = predict.round(decimals=2)
predict = predict.reshape(predict.shape[0], 1)
predict = np.concatenate([id_vec, predict], axis=1)
predict = pd.DataFrame(data=predict, columns=['ID', 'FARE'])
predict.to_csv("../answer.csv", index=False, header=True)
print("Done! - check answer.csv file")
train = pd.read_csv('../data/processed_train.csv')
test = pd.read_csv('../data/processed_test.csv')
id_vec = np.array(test.loc[:, test.columns == 'ID'])
train.describe()
# +
fig=plt.figure(figsize=(8, 8), dpi= 80, facecolor='w', edgecolor='k')
X_LABEL = 'VEHICLE_TYPE'
Y_LABEL = 'WAIT_TIME'
plt.xlabel(X_LABEL)
plt.ylabel(Y_LABEL)
# train.plot(x=X_LABEL,y=Y_LABEL)
# train.plot(kind='box', vert=False, positions=[1, 4, 5, 6, 8])
plt.scatter(train[Y_LABEL],train[X_LABEL])
# +
# Features to drop
drop_lab = ['ID', 'cooling','bus','mean_lat', 'mean_long', 'TIME_AM','YEAR','DAY','TIMESTAMP']
train.drop(drop_lab, axis=1, inplace=True)
test.drop(drop_lab, axis=1, inplace=True)
# -
X = train.drop(['FARE'], axis=1)
y = train['FARE']
# +
# Hyperparameter tuning
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import make_scorer
gs = GridSearchCV(ann,
param_grid={
'hidden_layer_sizes': [(8), (10),(10, 10, 10), (70, 50, 20), (15, 15, 15), (40, 40, 40)],
'random_state': [100, 1000, 10000],
'alpha': [0.01, 0.1, 1.0]
},
n_jobs=-1,
scoring=make_scorer(r2_score),
verbose=5)
gs.fit(X, y)
print("best estimator :\n",gs.best_estimator_)
print("Best parameters :\n",gs.best_params_)
print("CV RESULTS : \n",gs.cv_results_)
# -
# CROSS VALIDATION code
crossValidate(X, y, ann)
# Real testing
testRUN_crCsv(X, y, ann)
| notebooks_&_scripts/Driver.ipynb |
# ##### Copyright 2021 Google LLC.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# # shift_scheduling_sat
# <table align="left">
# <td>
# <a href="https://colab.research.google.com/github/google/or-tools/blob/master/examples/notebook/examples/shift_scheduling_sat.ipynb"><img src="https://raw.githubusercontent.com/google/or-tools/master/tools/colab_32px.png"/>Run in Google Colab</a>
# </td>
# <td>
# <a href="https://github.com/google/or-tools/blob/master/examples/python/shift_scheduling_sat.py"><img src="https://raw.githubusercontent.com/google/or-tools/master/tools/github_32px.png"/>View source on GitHub</a>
# </td>
# </table>
# First, you must install [ortools](https://pypi.org/project/ortools/) package in this colab.
# !pip install ortools
# +
# #!/usr/bin/env python3
# Copyright 2010-2021 Google LLC
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Creates a shift scheduling problem and solves it."""
from absl import app
from absl import flags
from ortools.sat.python import cp_model
from google.protobuf import text_format
FLAGS = flags.FLAGS
flags.DEFINE_string('output_proto', '',
'Output file to write the cp_model proto to.')
flags.DEFINE_string('params', 'max_time_in_seconds:10.0',
'Sat solver parameters.')
def negated_bounded_span(works, start, length):
"""Filters an isolated sub-sequence of variables assined to True.
Extract the span of Boolean variables [start, start + length), negate them,
and if there is variables to the left/right of this span, surround the span by
them in non negated form.
Args:
works: a list of variables to extract the span from.
start: the start to the span.
length: the length of the span.
Returns:
a list of variables which conjunction will be false if the sub-list is
assigned to True, and correctly bounded by variables assigned to False,
or by the start or end of works.
"""
sequence = []
# Left border (start of works, or works[start - 1])
if start > 0:
sequence.append(works[start - 1])
for i in range(length):
sequence.append(works[start + i].Not())
# Right border (end of works or works[start + length])
if start + length < len(works):
sequence.append(works[start + length])
return sequence
def add_soft_sequence_constraint(model, works, hard_min, soft_min, min_cost,
soft_max, hard_max, max_cost, prefix):
"""Sequence constraint on true variables with soft and hard bounds.
    This constraint looks at every maximal contiguous sequence of variables
    assigned to true. It forbids sequences of length < hard_min or > hard_max.
Then it creates penalty terms if the length is < soft_min or > soft_max.
Args:
model: the sequence constraint is built on this model.
works: a list of Boolean variables.
hard_min: any sequence of true variables must have a length of at least
hard_min.
soft_min: any sequence should have a length of at least soft_min, or a
linear penalty on the delta will be added to the objective.
min_cost: the coefficient of the linear penalty if the length is less than
soft_min.
soft_max: any sequence should have a length of at most soft_max, or a linear
penalty on the delta will be added to the objective.
hard_max: any sequence of true variables must have a length of at most
hard_max.
max_cost: the coefficient of the linear penalty if the length is more than
soft_max.
prefix: a base name for penalty literals.
Returns:
a tuple (variables_list, coefficient_list) containing the different
penalties created by the sequence constraint.
"""
cost_literals = []
cost_coefficients = []
# Forbid sequences that are too short.
for length in range(1, hard_min):
for start in range(len(works) - length + 1):
model.AddBoolOr(negated_bounded_span(works, start, length))
# Penalize sequences that are below the soft limit.
if min_cost > 0:
for length in range(hard_min, soft_min):
for start in range(len(works) - length + 1):
span = negated_bounded_span(works, start, length)
name = ': under_span(start=%i, length=%i)' % (start, length)
lit = model.NewBoolVar(prefix + name)
span.append(lit)
model.AddBoolOr(span)
cost_literals.append(lit)
# We filter exactly the sequence with a short length.
# The penalty is proportional to the delta with soft_min.
cost_coefficients.append(min_cost * (soft_min - length))
# Penalize sequences that are above the soft limit.
if max_cost > 0:
for length in range(soft_max + 1, hard_max + 1):
for start in range(len(works) - length + 1):
span = negated_bounded_span(works, start, length)
name = ': over_span(start=%i, length=%i)' % (start, length)
lit = model.NewBoolVar(prefix + name)
span.append(lit)
model.AddBoolOr(span)
cost_literals.append(lit)
# Cost paid is max_cost * excess length.
cost_coefficients.append(max_cost * (length - soft_max))
# Just forbid any sequence of true variables with length hard_max + 1
for start in range(len(works) - hard_max):
model.AddBoolOr(
[works[i].Not() for i in range(start, start + hard_max + 1)])
return cost_literals, cost_coefficients
def add_soft_sum_constraint(model, works, hard_min, soft_min, min_cost,
soft_max, hard_max, max_cost, prefix):
"""Sum constraint with soft and hard bounds.
This constraint counts the variables assigned to true from works.
    It forbids sums < hard_min or > hard_max.
Then it creates penalty terms if the sum is < soft_min or > soft_max.
Args:
model: the sequence constraint is built on this model.
works: a list of Boolean variables.
hard_min: any sequence of true variables must have a sum of at least
hard_min.
soft_min: any sequence should have a sum of at least soft_min, or a linear
penalty on the delta will be added to the objective.
min_cost: the coefficient of the linear penalty if the sum is less than
soft_min.
soft_max: any sequence should have a sum of at most soft_max, or a linear
penalty on the delta will be added to the objective.
hard_max: any sequence of true variables must have a sum of at most
hard_max.
max_cost: the coefficient of the linear penalty if the sum is more than
soft_max.
prefix: a base name for penalty variables.
Returns:
a tuple (variables_list, coefficient_list) containing the different
penalties created by the sequence constraint.
"""
cost_variables = []
cost_coefficients = []
sum_var = model.NewIntVar(hard_min, hard_max, '')
# This adds the hard constraints on the sum.
model.Add(sum_var == sum(works))
# Penalize sums below the soft_min target.
if soft_min > hard_min and min_cost > 0:
delta = model.NewIntVar(-len(works), len(works), '')
model.Add(delta == soft_min - sum_var)
# TODO(user): Compare efficiency with only excess >= soft_min - sum_var.
excess = model.NewIntVar(0, 7, prefix + ': under_sum')
model.AddMaxEquality(excess, [delta, 0])
cost_variables.append(excess)
cost_coefficients.append(min_cost)
# Penalize sums above the soft_max target.
if soft_max < hard_max and max_cost > 0:
delta = model.NewIntVar(-7, 7, '')
model.Add(delta == sum_var - soft_max)
excess = model.NewIntVar(0, 7, prefix + ': over_sum')
model.AddMaxEquality(excess, [delta, 0])
cost_variables.append(excess)
cost_coefficients.append(max_cost)
return cost_variables, cost_coefficients
def solve_shift_scheduling(params, output_proto):
"""Solves the shift scheduling problem."""
# Data
num_employees = 8
num_weeks = 3
shifts = ['O', 'M', 'A', 'N']
# Fixed assignment: (employee, shift, day).
# This fixes the first 2 days of the schedule.
fixed_assignments = [
(0, 0, 0),
(1, 0, 0),
(2, 1, 0),
(3, 1, 0),
(4, 2, 0),
(5, 2, 0),
(6, 2, 3),
(7, 3, 0),
(0, 1, 1),
(1, 1, 1),
(2, 2, 1),
(3, 2, 1),
(4, 2, 1),
(5, 0, 1),
(6, 0, 1),
(7, 3, 1),
]
# Request: (employee, shift, day, weight)
    # A negative weight indicates that the employee desires this assignment.
requests = [
# Employee 3 wants the first Saturday off.
(3, 0, 5, -2),
# Employee 5 wants a night shift on the second Thursday.
(5, 3, 10, -2),
# Employee 2 does not want a night shift on the first Friday.
(2, 3, 4, 4)
]
# Shift constraints on continuous sequence :
# (shift, hard_min, soft_min, min_penalty,
# soft_max, hard_max, max_penalty)
shift_constraints = [
# One or two consecutive days of rest, this is a hard constraint.
(0, 1, 1, 0, 2, 2, 0),
        # between 2 and 3 consecutive days of night shifts, 1 and 4 are
# possible but penalized.
(3, 1, 2, 20, 3, 4, 5),
]
# Weekly sum constraints on shifts days:
# (shift, hard_min, soft_min, min_penalty,
# soft_max, hard_max, max_penalty)
weekly_sum_constraints = [
# Constraints on rests per week.
(0, 1, 2, 7, 2, 3, 4),
# At least 1 night shift per week (penalized). At most 4 (hard).
(3, 0, 1, 3, 4, 4, 0),
]
# Penalized transitions:
# (previous_shift, next_shift, penalty (0 means forbidden))
penalized_transitions = [
# Afternoon to night has a penalty of 4.
(2, 3, 4),
# Night to morning is forbidden.
(3, 1, 0),
]
    # daily demands for work shifts (morning, afternoon, night) for each day
# of the week starting on Monday.
weekly_cover_demands = [
(2, 3, 1), # Monday
(2, 3, 1), # Tuesday
(2, 2, 2), # Wednesday
(2, 3, 1), # Thursday
(2, 2, 2), # Friday
(1, 2, 3), # Saturday
(1, 3, 1), # Sunday
]
# Penalty for exceeding the cover constraint per shift type.
excess_cover_penalties = (2, 2, 5)
num_days = num_weeks * 7
num_shifts = len(shifts)
model = cp_model.CpModel()
work = {}
for e in range(num_employees):
for s in range(num_shifts):
for d in range(num_days):
work[e, s, d] = model.NewBoolVar('work%i_%i_%i' % (e, s, d))
# Linear terms of the objective in a minimization context.
obj_int_vars = []
obj_int_coeffs = []
obj_bool_vars = []
obj_bool_coeffs = []
# Exactly one shift per day.
for e in range(num_employees):
for d in range(num_days):
model.Add(sum(work[e, s, d] for s in range(num_shifts)) == 1)
# Fixed assignments.
for e, s, d in fixed_assignments:
model.Add(work[e, s, d] == 1)
# Employee requests
for e, s, d, w in requests:
obj_bool_vars.append(work[e, s, d])
obj_bool_coeffs.append(w)
# Shift constraints
for ct in shift_constraints:
shift, hard_min, soft_min, min_cost, soft_max, hard_max, max_cost = ct
for e in range(num_employees):
works = [work[e, shift, d] for d in range(num_days)]
variables, coeffs = add_soft_sequence_constraint(
model, works, hard_min, soft_min, min_cost, soft_max, hard_max,
max_cost,
'shift_constraint(employee %i, shift %i)' % (e, shift))
obj_bool_vars.extend(variables)
obj_bool_coeffs.extend(coeffs)
# Weekly sum constraints
for ct in weekly_sum_constraints:
shift, hard_min, soft_min, min_cost, soft_max, hard_max, max_cost = ct
for e in range(num_employees):
for w in range(num_weeks):
works = [work[e, shift, d + w * 7] for d in range(7)]
variables, coeffs = add_soft_sum_constraint(
model, works, hard_min, soft_min, min_cost, soft_max,
hard_max, max_cost,
'weekly_sum_constraint(employee %i, shift %i, week %i)' %
(e, shift, w))
obj_int_vars.extend(variables)
obj_int_coeffs.extend(coeffs)
# Penalized transitions
for previous_shift, next_shift, cost in penalized_transitions:
for e in range(num_employees):
for d in range(num_days - 1):
transition = [
work[e, previous_shift, d].Not(), work[e, next_shift,
d + 1].Not()
]
if cost == 0:
model.AddBoolOr(transition)
else:
trans_var = model.NewBoolVar(
'transition (employee=%i, day=%i)' % (e, d))
transition.append(trans_var)
model.AddBoolOr(transition)
obj_bool_vars.append(trans_var)
obj_bool_coeffs.append(cost)
# Cover constraints
for s in range(1, num_shifts):
for w in range(num_weeks):
for d in range(7):
works = [work[e, s, w * 7 + d] for e in range(num_employees)]
# Ignore Off shift.
min_demand = weekly_cover_demands[d][s - 1]
worked = model.NewIntVar(min_demand, num_employees, '')
model.Add(worked == sum(works))
over_penalty = excess_cover_penalties[s - 1]
if over_penalty > 0:
name = 'excess_demand(shift=%i, week=%i, day=%i)' % (s, w,
d)
excess = model.NewIntVar(0, num_employees - min_demand,
name)
model.Add(excess == worked - min_demand)
obj_int_vars.append(excess)
obj_int_coeffs.append(over_penalty)
# Objective
model.Minimize(
sum(obj_bool_vars[i] * obj_bool_coeffs[i]
for i in range(len(obj_bool_vars))) +
sum(obj_int_vars[i] * obj_int_coeffs[i]
for i in range(len(obj_int_vars))))
if output_proto:
print('Writing proto to %s' % output_proto)
with open(output_proto, 'w') as text_file:
text_file.write(str(model))
# Solve the model.
solver = cp_model.CpSolver()
if params:
text_format.Parse(params, solver.parameters)
solution_printer = cp_model.ObjectiveSolutionPrinter()
status = solver.Solve(model, solution_printer)
# Print solution.
if status == cp_model.OPTIMAL or status == cp_model.FEASIBLE:
print()
header = ' '
for w in range(num_weeks):
header += 'M T W T F S S '
print(header)
for e in range(num_employees):
schedule = ''
for d in range(num_days):
for s in range(num_shifts):
if solver.BooleanValue(work[e, s, d]):
schedule += shifts[s] + ' '
print('worker %i: %s' % (e, schedule))
print()
print('Penalties:')
for i, var in enumerate(obj_bool_vars):
if solver.BooleanValue(var):
penalty = obj_bool_coeffs[i]
if penalty > 0:
print(' %s violated, penalty=%i' % (var.Name(), penalty))
else:
print(' %s fulfilled, gain=%i' % (var.Name(), -penalty))
for i, var in enumerate(obj_int_vars):
if solver.Value(var) > 0:
print(' %s violated by %i, linear penalty=%i' %
(var.Name(), solver.Value(var), obj_int_coeffs[i]))
print()
print('Statistics')
print(' - status : %s' % solver.StatusName(status))
print(' - conflicts : %i' % solver.NumConflicts())
print(' - branches : %i' % solver.NumBranches())
print(' - wall time : %f s' % solver.WallTime())
solve_shift_scheduling(FLAGS.params, FLAGS.output_proto)
| examples/notebook/examples/shift_scheduling_sat.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: python3
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Basic RigolWFM Usage
#
# **<NAME>**
#
# **March 2020**
# +
import sys
import numpy as np
import matplotlib.pyplot as plt
try:
import RigolWFM.wfm as rigol
except:
print("***** You need to install the module to read Rigol files first *****")
print("***** Execute the following line in a new cell, then retry *****")
print()
print("!{sys.executable} -m pip install RigolWFM")
repo = "https://github.com/scottprahl/RigolWFM/raw/master/wfm/"
# -
# ## Introduction
#
# The idea is to create a robust, fast parser for waveform `.wfm` files created by Rigol oscilloscopes. Specifically,
#
# ```python
# import matplotlib.pyplot as plt
# import RigolWFM.wfm as rigol
#
# model = "DS1102E'
# filename = "name.wfm"
# scope_data = rigol.Wfm.from_file(filename, model)
# description = scope_data.describe()
# print(description)
#
# url = "https://somewebsite.com/path/file.wfm"
# scope_data = rigol.Wfm.from_url(url, model)
# for ch in scope_data.channels:
# plt.plot(ch.times, ch.volts, label=ch.name)
# plt.legend()
# plt.show()
# ```
# ## Motivation
#
# The `.wfm` format offers a few nice advantages
#
# * saving onto a USB drive on the scope is fast
# * uploading the `.wfm` file back to the scope is (sometimes) possible
# * no need to interface to a computer
# * the files are small (one byte per point)
# * all the settings are contained in the file header
#
# The disadvantages are that different scopes (and often different firmware versions) have different formats. Worse, documentation from Rigol on these formats is sparse at best. Finally, the Rigol software for reading these files is clunky.
# ## Possible Scope Models
#
# This program currently covers six classes of scopes.
# ### DS1000C untested
#
# Support for these models is in the program, but parsing is completely untested.
#
# Handy Abbreviations: `C`, `1000C`, `DS1000C`
#
# Specific Models: `DS1000CD`, `DS1000C`, `DS1000MD`, `DS1000M`, `DS1302CA`, `DS1202CA`, `DS1102CA`, `DS1062CA`
# ### DS1000E validated
#
# Handy Abbreviations: `D`, `1000D`, `DS1000D`
#
# Specific Models: `DS1102D`, `DS1052D`
#
# Handy Abbreviations: `E`, `1000E`, `DS1000E`
#
# Specific Models: `DS1000E`, `DS1102E`, `DS1052E`
# ### DS1000Z tested, incorrect voltages
#
# Handy Abbreviations: `Z`, `1000Z`, `DS1000Z`,
#
# Specific Models: `DS1202Z`, `DS1074Z`, `DS1104Z`, `DS1074Z-S`,
# `DS1104Z-S`, `MSO1054Z`, `DS1054Z`,
# `MSO1074Z`, `MSO1104Z`, `DS1104Z`
# ### DS2000 tested
#
# Handy Abbreviations: `2`, `2000`, `DS2000`,
#
# Specific Models: `DS2102A`, `MSO2102A`, `MSO2102A-S`,
# `DS2202A`, `MSO2202A`, `MSO2202A-S`,
# `DS2302A`, `MSO2302A`, `MSO2302A-S`
#
# ### DS4000 validated
#
# Handy Abbreviations: `4`, `4000`, `DS4000`,
#
# Specific Models: `DS4054`, `DS4052`, `DS4034`, `DS4032`, `DS4024`,
# `DS4022`, `DS4014`, `DS4012`, `MSO4054`, `MSO4052`, `MSO4034`,
# `MSO4032`, `MSO4024`, `MSO4022`, `MSO4014`, `MSO4012`
# ### DS6000 untested
#
# Support for these models is in the program, but parsing is completely untested.
#
# Handy Abbreviations: `6`, `6000`, `DS6000`
#
# Specific Models: `DS6062`, `DS6064`, `DS6102`, `DS6104`
# ## The `Wfm` class
#
# This is a class with two basic methods to create objects from files and urls:
#
# * Wfm.from_file(file_name, model)
# * Wfm.from_url(url, model)
#
# where `model` describes the scope.
#
# It also has a methods to manipulate the data.
#
# * Wfm.describe()
# * Wfm.csv()
# * Wfm.plot()
#
# The first two return strings. The third produces a basic `matplotlib.pyplot` plot.
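# As a quick usage sketch (an assumption for illustration, not taken from the package docs): once a `Wfm` object `w` exists, the two string-returning methods can be written straight to disk.
#
# ```python
# with open("scope_description.txt", "w") as f:
#     f.write(w.describe())
# with open("scope_data.csv", "w") as f:
#     f.write(w.csv())
# ```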
# ## Example for a remote file
#
# First let's have look at the description of the internal file structure. We see that only channel 1 has been enabled.
# raw=true is needed because this is a binary file
wfm_url = repo + "DS1102E-D.wfm" + "?raw=true"
w = rigol.Wfm.from_url(wfm_url, 'E')
# ### Sample description
description = w.describe()
print(description)
# ### Sample Plot
w.plot()
plt.show()
# ### Sample `.csv` file
# +
s = w.csv()
# just show the first few entries
rows = s.split('\n')
for i in range(5):
print(rows[i])
# -
# ## Example for a local file
#
# You will need to adjust the path and filename for your computer
# +
path = "../wfm/"
filename = "DS1102E-D.wfm"
wfm_name = path + filename
w = rigol.Wfm.from_file(wfm_name, 'E')
description = w.describe()
print(description)
# -
| docs/0-Basics.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Data Wrangling with Spark
#
# This is the code used in the previous screencast. Run each code cell to understand what the code does and how it works.
#
# These first three cells import libraries, instantiate a SparkSession, and then read in the data set
# +
from pyspark.sql import SparkSession
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType
from pyspark.sql.types import IntegerType
from pyspark.sql.functions import desc
from pyspark.sql.functions import asc
from pyspark.sql.functions import sum as Fsum
import datetime
import numpy as np
import pandas as pd
# %matplotlib inline
import matplotlib.pyplot as plt
# -
spark = SparkSession \
.builder \
.appName("Wrangling Data") \
.getOrCreate()
path = "data/sparkify_log_small.json"
user_log = spark.read.json(path)
# # Data Exploration
#
# The next cells explore the data set.
user_log.take(5)
user_log.printSchema()
user_log.describe().show()
user_log.describe("artist").show()
user_log.describe("sessionId").show()
user_log.count()
user_log.select("page").dropDuplicates().sort("page").show()
user_log.select(["userId", "firstname", "page", "song"]).where(user_log.userId == "1046").collect()
# # Calculating Statistics by Hour
get_hour = udf(lambda x: datetime.datetime.fromtimestamp(x / 1000.0).hour)
user_log = user_log.withColumn("hour", get_hour(user_log.ts)) # adding new column
user_log.head()
songs_in_hour = user_log.filter(user_log.page == "NextSong").groupby(user_log.hour).count().orderBy(user_log.hour.cast("float"))
songs_in_hour.show()
songs_in_hour_pd = songs_in_hour.toPandas()
songs_in_hour_pd.hour = pd.to_numeric(songs_in_hour_pd.hour)
plt.scatter(songs_in_hour_pd["hour"], songs_in_hour_pd["count"])
plt.xlim(-1, 24);
plt.ylim(0, 1.2 * max(songs_in_hour_pd["count"]))
plt.xlabel("Hour")
plt.ylabel("Songs played");
# # Drop Rows with Missing Values
#
# As you'll see, it turns out there are no missing values in the userID or session columns. But there are userID values that are empty strings.
user_log_valid = user_log.dropna(how = "any", subset = ["userId", "sessionId"])
user_log_valid.count()
user_log.select("userId").dropDuplicates().sort("userId").show()
user_log_valid = user_log_valid.filter(user_log_valid["userId"] != "")
user_log_valid.count()
# # Users Downgrade Their Accounts
#
# Find when users downgrade their accounts and then flag those log entries. Then use a window function and cumulative sum to distinguish each user's data as either pre or post downgrade events.
user_log_valid.filter("page = 'Submit Downgrade'").show()
user_log.select(["userId", "firstname", "page", "level", "song"]).where(user_log.userId == "1138").collect()
flag_downgrade_event = udf(lambda x: 1 if x == "Submit Downgrade" else 0, IntegerType())
user_log_valid = user_log_valid.withColumn("downgraded", flag_downgrade_event("page"))
user_log_valid.head()
from pyspark.sql import Window
windowval = Window.partitionBy("userId").orderBy(desc("ts")).rangeBetween(Window.unboundedPreceding, 0)
user_log_valid = user_log_valid.withColumn("phase", Fsum("downgraded").over(windowval))
user_log_valid.select(["userId", "firstname", "ts", "page", "level", "phase"]).where(user_log.userId == "1138").sort("ts").collect()
| tutorials/spark_dataframe_functions.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/soumik12345/Several-Days-of-Cuda/blob/master/notebooks/Cuda_Workspace.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + colab={"base_uri": "https://localhost:8080/"} id="gaxLz4kCf1ga" outputId="8491af27-792c-4257-9263-10c598f0f592"
# !nvidia-smi -L
# + colab={"base_uri": "https://localhost:8080/", "height": 381} id="W_8bAlDJfora" outputId="26f11deb-7e09-4981-afc2-e4a83f4d2d41"
# Install colab_ssh on google colab
# !pip install colab_ssh --upgrade -q
from colab_ssh import launch_ssh_cloudflared, init_git_cloudflared
launch_ssh_cloudflared(password="<PASSWORD>")
# Optional: if you want to clone a github repository
init_git_cloudflared(
'https://github.com/soumik12345/Several-Days-of-Cuda')
# + id="7ZiniSNmfreu" outputId="ecde1679-bcc7-4408-cb36-48e29e2827d6" colab={"base_uri": "https://localhost:8080/", "height": 17} language="javascript"
# function ClickConnect(){
# console.log("Working");
# document.querySelector("colab-toolbar-button#connect").click()
# }
# setInterval(ClickConnect,60000)
# + id="Q-Wwiri2kGTw"
| notebooks/Cuda_Workspace.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Let's combine the BBF and Citaty Info datasets
# +
import jellyfish
import os
import pickle
import re
import pandas as pd
from joblib import Parallel, delayed
from tqdm.auto import tqdm
# -
bbf_path = '../results/bbf'
ci_path = '../results/citaty_info/qbq'
# ### Get datasets
with open(os.path.join(bbf_path, 'bbf.pickle'), 'rb') as f:
bbf = pickle.load(f)
# +
with open(os.path.join(ci_path, 'ci.pickle'), 'rb') as f:
ci = pickle.load(f)
ci = ci.reset_index(drop=True)
# -
df = pd.concat([ci, bbf])
df = df.reset_index(drop=True)
df = df.astype(object).where(pd.notnull(df), None)
df.submitted_date = df.submitted_date.astype(object).where(df.submitted_date.notnull(), None)
# ### Delete quotes with inappropriate length
df = df[df['quote'].apply(lambda x: len(x) > 10 and len(x) < 350)]
df.shape
df[df.target == 1].shape
# ### Prepare tags
def del_extra_word(sstr):
return re.sub(r' +', ' ', sstr.replace(' цитаты', ''))
for ind, row in df.iterrows():
df.at[ind, 'tags'] = [del_extra_word(tag) for tag in row.tags]
# ## Delete non-Russian alphabet characters
# +
# set(''.join(df.quote.values))
# -
alphabet = {
'0', '1', '2', '3', '4', '5', '6', '7', '8', '9',
' ', '!', '#', '%', '&', '*', '+', ',', '.', '/', '~',
':', ';', '=', '?', '@', '\\', '^', '{', '|', '}',
'"', '$', "'", '(', ')', '<', '>', '[', ']', '№',
'-', '_', '̶', '‐', '‑', '‒', '–', '—', '―', '−', '─',
'А', 'Б', 'В', 'Г', 'Д', 'Е', 'Ё', 'Ж', 'З', 'И', 'Й', 'К', 'Л', 'М', 'Н', 'О', 'П',
'Р', 'С', 'Т', 'У', 'Ф', 'Х', 'Ц', 'Ч', 'Ш', 'Щ', 'Ъ', 'Ы', 'Ь', 'Э', 'Ю', 'Я',
'а', 'б', 'в', 'г', 'д', 'е', 'ж', 'з', 'и', 'й', 'к', 'л', 'м', 'н', 'о', 'п',
'р', 'с', 'т', 'у', 'ф', 'х', 'ц', 'ч', 'ш', 'щ', 'ъ', 'ы', 'ь', 'э', 'ю', 'я', 'ё',
}
def clear_text(text):
clear_text = ''
for letter in text:
if letter in alphabet:
clear_text += letter
return clear_text
df['quote'] = df['quote'].apply(clear_text)
df['quote'] = df['quote'].str.strip()
df['quote'] = df['quote'].apply(lambda s: ' '.join(s.split()))
df['quote'] = df['quote'].apply(
lambda s: re.sub(r'''^[! ,?'"#%&*+.\/~:;=@\\\^{|}$\(\)\[\]<>№_]* ''', '', s)
)
# ### Strip whitespaces before punctuation
df['quote'] = df['quote'].apply(
lambda s: re.sub(r'''\s([! ,?'"#%&*+.\/~:;=@\\\^{|}$\(\)\[\]<>№_](?:\s|$))''', r'\1', s)
)
# # Delete duplicates
#
# ### Drop complete matches
df = df.sort_values(by=['quote'])
df.reset_index(drop=True, inplace=True);
df = df[df['quote'].apply(lambda x: len(x) > 10)]
df = df.drop_duplicates(subset='quote')
df.shape
# ### Drop near-duplicates
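# Near-duplicates are caught by comparing each quote with its neighbour in the sorted frame using the Jaro-Winkler similarity; anything above the 0.9 threshold used below is merged. A quick illustration with hypothetical example strings:
print(jellyfish.jaro_winkler_similarity('the meaning of life', 'the meaning of lif'))            # close to 1.0
print(jellyfish.jaro_winkler_similarity('the meaning of life', 'a completely different quote'))  # well below 0.9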
df = df.reset_index(drop=True)
def dropifjaro(ind):
r1 = df.iloc[ind]
r2 = df.iloc[ind + 1]
if (jellyfish.jaro_winkler_similarity(r1.quote, r2.quote) > 0.9):
df.at[ind, 'source'] = r1.source or r2.source
df.at[ind, 'tags'] = list(set(r1.tags + r2.tags))
df.at[ind, 'author'] = list(set(r1.author + r2.author))
df.at[ind, 'character'] = list(set(r1.character + r2.character))
df.at[ind, 'is_dialog'] = r1.is_dialog or r2.is_dialog
df.at[ind, 'target'] = r1.target or r2.target
return ind + 1
with Parallel(n_jobs=1, require='sharedmem') as parallel:
indexes4drop = parallel(
delayed(dropifjaro)(ind)
for ind in tqdm(df.index.values[:-1], total=df.shape[0])
)
df.drop(list(filter(None, indexes4drop)), inplace=True)
df.shape
df[df.target == 1].shape
# ## Save file
result_path = '../results'
with open(os.path.join(result_path, 'result.pickle'), 'wb') as f:
pickle.dump(df, f)
| notebooks/create_final_datasets.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Intro to MongoDB and the Nobel Prize Dataset - Exercise
# ## 1. Instantiate `MongoClient` and connect to the nobel database.
from pymongo import MongoClient
client = MongoClient()
# + tags=[]
db = client.nobel
print(db)
# -
# <br>
#
# ## 2. Obtain a document from the `laureates` collection and display its contents.
doc = db.laureates.find_one({})
doc
# <br>
#
# ## 3. Obtain a list of tuples containing the first and last name of each laureate in the database who died in USA. i.e. `(FirstName, LastName)`
cursor = db.laureates.find({"diedCountry": "USA"})
names = [(doc["firstname"], doc["surname"]) for doc in cursor]
names
# <br>
#
# ## 4. Did any of the above laureates receive more than one prize? Generate a list of tuples with their first name and the total number they were awarded.
cursor = db.laureates.find({"diedCountry": "USA"})
names = [(doc["firstname"], len(doc["prizes"])) for doc in cursor if len(doc["prizes"]) > 1]
names
| workshop/primer/03_MongoDB_part1/solved/exercise_01_solved.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.0 64-bit
# language: python
# name: python3
# ---
import pandas as pd
import spacy
import numpy as np
df = pd.read_csv(r'C:\Users\water\Downloads\WF_Sales_Mar_02_2022_Decatur_GA_30033.csv').drop(columns='Unnamed: 0')
df
df.columns
rules = pd.read_csv(r'C:\Users\water\Desktop\WF\WholeFoods-Datascraping-Project-Deployment\Deployment\rules.csv').drop(columns='Unnamed: 0')
rules
list(rules.loc[index, 'item_B'])[int(np.random.randint(9, size=1))]
index = rules[rules.item_A.str.contains(item, case=False)].index
list(rules.loc[index, 'item_B'])#[int(np.random.randint(9, size=1))]
# +
input = "yogurt, smoothie"
input_items = (input.split(', '))
original_df = df.copy()
# Optimized Cart Generator with default sorted values (prime_discount Descending)
shopping_cart = pd.DataFrame(columns=original_df.columns)
for i in range(len(input_items)):
try:
try:
            shopping_cart = pd.concat([shopping_cart,original_df.loc[original_df['product'].str.contains(input_items[i],case=False)].sort_values(['prime', 'prime_discount'], ascending=(True, False)).head(1)], join='inner')
except Exception as e:
print(e)
            shopping_cart = pd.concat([shopping_cart,original_df.loc[original_df['product'].str.contains(input_items[i].replace(' ','-'),case=False)].sort_values(['prime', 'prime_discount'], ascending=(True, False)).head(1)], join='inner')
except Exception as p:
print(p)
shopping_cart
# -
# If the recommendation button is clicked:
#
# +
# rules[rules.item_A.str.contains('Pasta', case=False)] # dataframe of rules containing the parsed_product of the generated shopping cart
# index = rules[rules.item_A.str.contains('Broth', case=False)].index # index of rules containing parsed_product of the generated shopping cart
# list(rules.loc[index, 'item_B'])[:5] # Category with the highest confidence to the first category
# -
rules[rules.item_A.str.contains('chocolate', case=False)].head(10)[['item_A', 'item_B', 'confidenceAtoB']]
rules.sort_values(by='freqAB', ascending=False).head(10)
# take list of parsed products from generated shopping cart
cart_category_list = list(shopping_cart.parsed_product)
recommendation_cart = pd.DataFrame(columns=original_df.columns)
for item in cart_category_list:
# initiate search of parsed product from first row of generated shopping cart
rules[rules.item_A.str.contains(item, case=False)] # dataframe of rules containing the parsed_product of the generated shopping cart
index = rules[rules.item_A.str.contains(item, case=False)].index # index of rules containing parsed_product of the generated shopping cart
    itemb = list(rules.loc[index, 'item_B'])[int(np.random.randint(9, size=1))] # picks one of the first 9 matching associations at random
recommendation_cart = pd.concat([recommendation_cart,df[df['parsed_product'] == itemb].sample(1)])
recommendation_cart
# +
if any(df['product'].str.contains(',')): # clean all products to remove text after commas ||for example (product, 8 oz) (, 8oz) gets removed||
ix = df[df['product'].str.contains(',')].index # this is done to optimize word embedding/parsing
for i in range(len(ix)):
df.loc[ix[i], 'product'] = ','.join(df.loc[ix[i], 'product'].split(',')[:-1])
#############################################################################
nlp = spacy.load('en_core_web_lg') # load pretrained model & Add stop words to optimize parsing
nlp.Defaults.stop_words |= {"2pk","3pk","4pk","5pk","6pk","7pk","8pk","9pk","10pk","11pk","12pk","14pk","20ct","5ct","6ct","B.","C","B12","%"," 1L","yd","sal","oz","cup","M", "8ct"}
#############################################################################
# Create parser to extract product items "PROPN"
# we will find the proper noun token with fewest heads as the top level proper noun
# if no proper noun is found, we will designate `MISC`. function provided by https://github.com/ianyu93
def parser(x):
# Convert text into Doc object
doc = nlp(x)
dict_ = {}
for token in doc:
if not token.is_stop:
# If part-of-speech tag is not proper noun or noun, skip
if token.pos_ in ['PROPN', 'NOUN']:
# Collect length of dependencies
text = token.text
dict_[text] = []
source = token
while source.head != source :
dict_[text].append(source.text)
source = source.head
if len(dict_) == 0:
return 'MISC'
# Retrieve text with lowest dependencies
return sorted([(k, v) for k, v in dict_.items()], key=lambda x: len(x[1]))[0][0]
df['parsed_product'] = df['product'].apply(lambda x: parser(x)) # apply parser to each product and output result to a new column
##############################################
# Manual Stemming / Cleaning
try:
count_edited_values = 0
ix = df[df['parsed_product'].str.contains('Yoghurt')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Yogurt'
ix = df[df['parsed_product'].str.contains('Yogurts')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Yogurt'
ix = df[df['parsed_product'].str.contains('Yoyos')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Yogurt'
ix = df[df['parsed_product'].str.contains('Avocados')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Avocado'
ix = df[df['parsed_product'].str.contains('Avacado')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Avocado'
ix = df[df['parsed_product'].str.contains('Almonds')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Almond'
ix = df[df['parsed_product'].str.contains('Bagels')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Bagel'
ix = df[df['parsed_product'].str.contains('Lentils')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Lentil'
ix = df[df['parsed_product'].str.contains('Packets')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Packet'
ix = df[df['parsed_product'].str.contains('Sausages')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Sausage'
ix = df[df['parsed_product'].str.contains('Tomatoes')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Tomato'
ix = df[df['parsed_product'].str.contains('Tortellni')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Tortellini'
ix = df[df['parsed_product'].str.contains('Tortillas')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Tortilla'
#
ix = df[df['category'].str.contains('supplements')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'supplement'
ix = df[df['parsed_product'].str.contains('Zolli')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Snack'
ix = df[df['parsed_product'].str.contains('Zero')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Drink'
ix = df[df['company'].str.contains("GT's Synergy Kombucha")].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Kombucha'
ix = df[df['company'].str.contains('Kor Shots')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Juice'
ix = df[df['company'].str.contains('Evolution Fresh')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Juice'
ix = df[df['company'].str.contains('California Olive Ranch')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Oil'
ix = df[df['company'].str.contains('Chobani')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Yogurt'
ix = df[df['company'].str.contains('Vega')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'supplement'
ix = df[df['company'].str.contains('WTRMLN WTR')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Juice'
    ix = df[df['parsed_product'].str.contains('Cream') & (df['category'] == 'frozen_foods')].index  # combine masks with & instead of chained boolean indexing
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Ice Cream'
    ix = df[df['product'].str.contains('Bar') & (df['company'] == 'KIND Snacks')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Bars'
ix = df[df['parsed_product'].str.contains('Bar')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Bars'
    ix = df[df['product'].str.contains('Milk') & (df['company'] == 'So Delicious')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Yogurt'
ix = df[df['company'].str.contains('La Quercia')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Bacon'
ix = df[df['product'].str.contains('Liquid Aminos')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Sauce'
ix = df[df['company'].str.contains('FROMAGER D AFFINOIS')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Cheese'
ix = df[df['company'].str.contains('Yogi')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Tea'
ix = df[df['company'].str.contains('North Country Smokehouse')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Bacon'
    ix = df[df['company'].str.contains('Brekki') & df['product'].str.contains('Oats')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Oats'
ix = df[df['company'].str.contains('Celestial Seasonings')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Tea'
    ix = df[df['company'].str.contains('Kite Hill') & df['product'].str.contains('Cheese')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Cheese'
ix = df[df['company'].str.contains('Icelandic Provisions')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Yogurt'
#
ix = df[df['company'].str.contains('Steaz')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Drink'
ix = df[df['company'].str.contains('Ca de Ambros')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Cheese'
ix = df[df['parsed_product'].str.contains('Tagliatelle')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Pasta'
ix = df[df['product'].str.contains('Soup')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Soup'
ix = df[df['parsed_product'] == 'O'].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Cereal'
ix = df[df['parsed_product'].str.contains('Cookie')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Cookies'
ix = df[df['parsed_product'].str.contains('Discs')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Chocolate'
ix = df[df['parsed_product'].str.contains('Disc|Chocolate')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Chocolate'
ix = df[df['product'].str.contains('Hummus')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Hummus'
ix = df[df['company'].str.contains('The Good Crisp Company')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Chips'
ix = df[df['product'].str.contains('Crisps')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Crisps'
ix = df[df['parsed_product'].str.contains('Cracker')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Crackers'
ix = df[df['parsed_product'].str.contains('Cake')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Cakes'
ix = df[df['company'].str.contains('PUR')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Gum'
ix = df[df['company'].str.contains('Sesmark')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Snack'
ix = df[df['product'].str.contains('Tilapia')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Tilapia'
ix = df[df['product'].str.contains('Shrimp')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Shrimp'
ix = df[df['product'].str.contains('Sauce')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Sauce'
ix = df[df['product'].str.contains('Shells')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Pasta'
ix = df[df['product'].str.contains('Penne')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Pasta'
ix = df[df['product'].str.contains('Girasoli')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Pasta'
ix = df[df['product'].str.contains('Ravioli')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Pasta'
ix = df[df['product'].str.contains('Mac & Cheese')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Pasta'
ix = df[df['product'].str.contains('Extra Virgin Olive Oil')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Oil'
ix = df[df['product'].str.contains('Olives')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Olives'
ix = df[df['product'].str.contains('Chicken', case=False)].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Chicken'
ix = df[df['product'].str.contains('Turkey', case=False)].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Turkey'
print("Cleaned " +str(count_edited_values) + " values")
ix = df[df['product'].str.contains('Broth')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Broth'
ix = df[df['product'].str.contains('Marinara')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Sauce'
ix = df[df['product'].str.contains('Juice')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Juice'
ix = df[df['product'].str.contains('Toothpaste')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Toothpaste'
ix = df[df['product'].str.contains('Medium Roast|Coffee|Brew')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Coffee'
ix = df[df['parsed_product'].str.contains('soap', case=False)].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Soap'
ix = df[df['product'].str.contains('Kombucha')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Kombucha'
ix = df[df['company'].str.contains('Essentia')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Water'
ix = df[df['product'].str.contains('Soap')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Soap'
    ix = df[df['company'].str.contains('AURA CACIA') & df['product'].str.contains('Oil')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Essential Oil'
ix = df[df['product'].str.contains('noodle', case=False)].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Noodles'
ix = df[df['product'].str.contains('Cavatappi|Spaghetti')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Pasta'
ix = df[df['product'].str.contains('Potato Chip')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Chips'
ix = df[df['product'].str.contains('Juice')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Juice'
ix = df[df['product'].str.contains('Soup')].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Soup'
ix = df[df['product'].str.contains('sparkling', case=False)].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Seltzer'
ix = df[df['product'].str.contains('flatbread', case=False)].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Flatbread'
ix = df[df['category'].str.contains('desserts', case=False)].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Dessert'
ix = df[df['product'].str.contains('Bread', case=True)].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Bread'
ix = df[df['product'].str.contains('Crisps', case=True)].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Chips'
ix = df[df['parsed_product'].str.contains('Mix', case=True)].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Snack'
ix = df[df['product'].str.contains('wash', case=False)].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Wash'
ix = df[df['product'].str.contains('shampoo', case=False)].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Shampoo'
ix = df[df['product'].str.contains('lotion', case=False)].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Lotion'
ix = df[df['product'].str.contains('Mac & Cheese', case=False)].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Pasta'
ix = df[df['parsed_product'] == 'Mac'].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Pasta'
print("Cleaned " +str(count_edited_values) + " values")
ix = df[df['product'].str.contains('chocolate', case=False)].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Chocolate'
ix = df[df['product'].str.contains('jerky', case=False)].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Jerky'
ix = df[df['product'].str.contains('Instant Oatmeal', case=True)].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Oats'
ix = df[df['parsed_product'].str.contains('Oatmeal', case=True)].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Oats'
ix = df[df['product'].str.contains('fusilli|macaroni', case=False)].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Pasta'
ix = df[df['product'].str.contains('yogurt greek', case=False)].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Yogurt'
snack_item_list = 'Chocolate, Sauce, Bars, Gum, Chips, Jerky, Cookies, Puffs, Flatbread, Bread, Hummus, Crackers, Honey, Milk, Dip, Pretzels'.split(', ')
ix = df[df['category'] == 'snacks_chips_salsas_dips'].index
    for i in range(len(ix)):
        # only count and relabel rows whose parsed product is not already a known snack item
        if df.loc[ix[i], 'parsed_product'] not in snack_item_list:
            count_edited_values += 1
            df.loc[ix[i], 'parsed_product'] = 'Snack'
beauty_item_list = 'Wash, Shampoo, Conditioner, Cleanser, Balm'.split(', ')
ix = df[df['category'] == 'beauty'].index
    for i in range(len(ix)):
        if df.loc[ix[i], 'parsed_product'] not in beauty_item_list:
            count_edited_values += 1
            df.loc[ix[i], 'parsed_product'] = 'Wash'
beverage_item_list = 'Juice, Coffee, Tea, Kombucha, Seltzer, Water, Smoothie, Lemonade, Shake'.split(', ')
ix = df[df['category'] == 'Beverages'].index
    for i in range(len(ix)):
        if df.loc[ix[i], 'parsed_product'] not in beverage_item_list:
            count_edited_values += 1
            df.loc[ix[i], 'parsed_product'] = 'Beverage'
body_care_item_list = 'Wash, Essential Oil, Soap, Toothpaste, Shampoo, Lotion, Deodorant, Spray'.split(', ')
ix = df[df['category'] == 'body_care'].index
    for i in range(len(ix)):
        if df.loc[ix[i], 'parsed_product'] not in body_care_item_list:
            count_edited_values += 1
            df.loc[ix[i], 'parsed_product'] = 'body_care'
ix = df[df['product'].str.contains('cookies', case=False)].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Cookies'
ix = df[df['product'].str.contains('kefir', case=False)].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Kefir'
ix = df[df['product'].str.contains('coconut milk', case=False)].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Milk'
ix = df[df['product'].str.contains('Almondmilk', case=True)].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Almondmilk'
ix = df[df['product'].str.contains('Chocolate Ice Cream|Mini Chocolate Sea Salt|Chocolate Chip Cookie Dough Ice Cream|Chocolate Chip Cookie Dough Non Dairy Frozen Dessert|Swiss Chocolate Gelato|Cornflake Chocolate Chip Marshmallow Ice Cream|Peanut Butter Chocolate Cookie Crush Ice Cream|Chocolate Fudge 4-Pack', case=True)].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Ice Cream'
ix = df[df['product'].str.contains('Chicken Meatballs', case=True)].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Chicken'
ix = df[df['product'].str.contains('Cereal', case=True)].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Cereal'
pantry_item_list = 'Chocolate, Pasta, Sauce, Snack, Soup, Cheese, Oil, Cereal, Broth, Rice, Tomato, Oats, Butter, Beans, Granola, Puffs, Seasoning, Honey, Noodles, Vinegar, Olives, Noodles, Hummus, Sweetener, Paste, Spread, Dip, Flour'.split(', ')
ix = df[df['category'] == 'pantry_essentials'].index
    for i in range(len(ix)):
        if df.loc[ix[i], 'parsed_product'] not in pantry_item_list:
            count_edited_values += 1
            df.loc[ix[i], 'parsed_product'] = 'Pantry'
ix = df[df['product'].str.contains('Lobster Bisque|chowder|beef chili', case=False)].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Soup'
ix = df[df['product'].str.contains('Cheese|Cubes|Gouda|Cheddar|Mozzarella|Fondue|Brie|Feta|Cremeux', case=False)].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Cheese'
ix = df[df['product'].str.contains('Drink|Beverage', case=False)].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Beverage'
ix = df[df['product'].str.contains('Yogurt', case=False)].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Yogurt'
dairy_item_list = 'Chocolate, Beverage, Pasta, Yogurt, Cheese, Bars, Milk, Butter, Almondmilk'.split(', ')
ix = df[df['category'] == 'dairy_eggs'].index
    for i in range(len(ix)):
        if df.loc[ix[i], 'parsed_product'] not in dairy_item_list:
            count_edited_values += 1
            df.loc[ix[i], 'parsed_product'] = 'Dairy'
frozen_item_list = 'Chocolate, Pasta, Yogurt, Broth, Bars, Chicken, Ice Cream, Dessert, Flatbread, Bread, Gelato'.split(', ')
ix = df[df['category'] == 'frozen_foods'].index
    for i in range(len(ix)):
        if df.loc[ix[i], 'parsed_product'] not in frozen_item_list:
            count_edited_values += 1
            df.loc[ix[i], 'parsed_product'] = 'Frozen'
meat_item_list = 'Chicken, Salami, Bacon, Turkey, Ham, Pork, Steak, Bison, Lamb, Beef, Sausage, Ribs'.split(', ')
ix = df[df['category'] == 'meat'].index
    for i in range(len(ix)):
        if df.loc[ix[i], 'parsed_product'] not in meat_item_list:
            count_edited_values += 1
            df.loc[ix[i], 'parsed_product'] = 'Meat'
ix = df[df['category'].str.contains('Produce', case=False)].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Produce'
ix = df[df['category'].str.contains('seafood', case=False)].index
for i in range(len(ix)):
count_edited_values += 1
df.loc[ix[i], 'parsed_product'] = 'Seafood'
except Exception as e:
print(e)
print("Cleaned " +str(count_edited_values) + " values")
original_df['parsed_product'] = df['parsed_product']
# -
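# A quick sanity check (a minimal sketch, assuming `df` from above): confirm that the manual cleaning collapsed `parsed_product` into a smaller set of labels.
df['parsed_product'].value_counts().head(20)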
| Jupyter Notebooks/recommendation_system.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.9.2 64-bit (''3.9.2'': pyenv)'
# language: python
# name: python3
# ---
# Import libraries
# +
import pandas as pd
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss, roc_auc_score, recall_score, precision_score, average_precision_score, f1_score, classification_report, accuracy_score, plot_roc_curve, plot_precision_recall_curve, plot_confusion_matrix
# -
# Load data & preprocessing
# https://www.kaggle.com/imnikhilanand/heart-attack-prediction/data
df = pd.read_csv('data/heart_attack.csv', na_values='?')
df.columns
# +
df = df.rename(columns={'num ': 'target'})
df['target'].value_counts(dropna=False)
# -
df.info()
df = df.drop(['slope', 'ca', 'thal'], axis=1)
df = df.dropna().copy()
df.info()
df
# +
df['cp'].value_counts(dropna=False)
df['restecg'].value_counts(dropna=False)
# +
df = pd.get_dummies(df, columns=['cp', 'restecg'], drop_first=True)
df
# +
numeric_cols = ['age', 'trestbps', 'chol', 'thalach', 'oldpeak']
cat_cols = list(set(df.columns) - set(numeric_cols) - {'target'})
cat_cols.sort()
print(numeric_cols)
print(cat_cols)
# -
# Train test split
# +
random_seed = 9
df_train, df_test = train_test_split(df, test_size=0.2, random_state=random_seed, stratify=df['target'])
print(df_train.shape)
print(df_test.shape)
print()
print(df_train['target'].value_counts(normalize=True))
print()
print(df_test['target'].value_counts(normalize=True))
# +
# Scale numerical variables
scaler = StandardScaler()
scaler.fit(df_train[numeric_cols])
def get_features_and_target_arrays(df, numeric_cols, cat_cols, scaler):
X_numeric_scaled = scaler.transform(df[numeric_cols])
X_categorical = df[cat_cols].to_numpy()
X = np.hstack((X_categorical, X_numeric_scaled))
y = df['target']
return X, y
X, y = get_features_and_target_arrays(df_train, numeric_cols, cat_cols, scaler)
# -
# Train the model
# +
clf = LogisticRegression(penalty='none') # logistic regression with no penalty term in the cost function.
clf.fit(X, y)
# -
# Evaluate the model
X_test, y_test = get_features_and_target_arrays(df_test, numeric_cols, cat_cols, scaler)
plot_roc_curve(clf, X_test, y_test)
plot_precision_recall_curve(clf, X_test, y_test)
test_prob = clf.predict_proba(X_test)[:, 1]
test_pred = clf.predict(X_test)
# +
print('Log loss = {:.5f}'.format(log_loss(y_test, test_prob)))
print('AUC = {:.5f}'.format(roc_auc_score(y_test, test_prob)))
print('Average Precision = {:.5f}'.format(average_precision_score(y_test, test_prob)))
print('\nUsing 0.5 as threshold:')
print('Accuracy = {:.5f}'.format(accuracy_score(y_test, test_pred)))
print('Precision = {:.5f}'.format(precision_score(y_test, test_pred)))
print('Recall = {:.5f}'.format(recall_score(y_test, test_pred)))
print('F1 score = {:.5f}'.format(f1_score(y_test, test_pred)))
print('\nClassification Report')
print(classification_report(y_test, test_pred))
# -
print('Confusion Matrix')
plot_confusion_matrix(clf, X_test, y_test)
# Interpret the results
coefficients = np.hstack((clf.intercept_, clf.coef_[0]))
pd.DataFrame(data={'variable': ['intercept'] + cat_cols + numeric_cols, 'coefficient': coefficients})
pd.DataFrame(data={'variable': numeric_cols, 'unit': np.sqrt(scaler.var_)})
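# Since the model is linear in the log-odds, exponentiating a coefficient gives the multiplicative change in the odds of the target per one-unit increase in that feature (one standard deviation for the scaled numeric columns). A minimal sketch, assuming `clf`, `cat_cols`, and `numeric_cols` from above:
pd.DataFrame(data={'variable': cat_cols + numeric_cols,
                   'odds_ratio': np.exp(clf.coef_[0])})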
| Logistic_Regression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# + [markdown] origin_pos=0
# # Naive Bayes
# :label:`sec_naive_bayes`
#
# Throughout the previous sections, we learned about the theory of probability and random variables. To put this theory to work, let us introduce the *naive Bayes* classifier. This uses nothing but probabilistic fundamentals to allow us to perform classification of digits.
#
# Learning is all about making assumptions. If we want to classify a new data example that we have never seen before we have to make some assumptions about which data examples are similar to each other. The naive Bayes classifier, a popular and remarkably clear algorithm, assumes all features are independent from each other to simplify the computation. In this section, we will apply this model to recognize characters in images.
#
# + origin_pos=3 tab=["tensorflow"]
# %matplotlib inline
import math
import tensorflow as tf
from d2l import tensorflow as d2l
d2l.use_svg_display()
# + [markdown] origin_pos=4
# ## Optical Character Recognition
#
# MNIST :cite:`LeCun.Bottou.Bengio.ea.1998` is one of the most widely used datasets. It contains 60,000 images for training and 10,000 images for validation. Each image contains a handwritten digit from 0 to 9. The task is to classify each image into the corresponding digit.
#
# The dataset can be downloaded and loaded with `tf.keras.datasets.mnist`,
# which returns the training set and the test set as NumPy arrays of
# images and labels.
# Each image is a grayscale image with both width and height of $28$. The dataset represents each pixel by an unsigned $8$-bit integer; we quantize the pixels into binary features to simplify the problem.
#
# + origin_pos=7 tab=["tensorflow"]
((train_images, train_labels),
 (test_images, test_labels)) = tf.keras.datasets.mnist.load_data()
# Quantize the raw 8-bit pixel values into binary features, as described above.
train_images = tf.floor(tf.constant(train_images / 128, dtype=tf.float32))
test_images = tf.floor(tf.constant(test_images / 128, dtype=tf.float32))
# + [markdown] origin_pos=8
# We can access a particular example, which contains the image and the corresponding label.
#
# + origin_pos=11 tab=["tensorflow"]
image, label = train_images[2], train_labels[2]
image.shape, label
# + [markdown] origin_pos=12
# Our example, stored here in the variable `image`, corresponds to an image with a height and width of $28$ pixels.
#
# + origin_pos=13 tab=["tensorflow"]
image.shape, image.dtype
# + [markdown] origin_pos=14
# Our code stores the label of each image as a scalar. Its type is an unsigned $8$-bit integer.
#
# + origin_pos=17 tab=["tensorflow"]
label, type(label)
# + [markdown] origin_pos=18
# We can also access multiple examples at the same time.
#
# + origin_pos=21 tab=["tensorflow"]
images = tf.stack([train_images[i] for i in range(10, 38)], axis=0)
labels = tf.constant([train_labels[i] for i in range(10, 38)])
images.shape, labels.shape
# + [markdown] origin_pos=22
# Let us visualize these examples.
#
# + origin_pos=23 tab=["tensorflow"]
d2l.show_images(images, 2, 9);
# + [markdown] origin_pos=24
# ## The Probabilistic Model for Classification
#
# In a classification task, we map an example into a category. Here an example is a grayscale $28\times 28$ image, and a category is a digit. (Refer to :numref:`sec_softmax` for a more detailed explanation.)
# One natural way to express the classification task is via the probabilistic question: what is the most likely label given the features (i.e., image pixels)? Denote by $\mathbf x\in\mathbb R^d$ the features of the example and $y\in\mathbb R$ the label. Here features are image pixels, where we can reshape a $2$-dimensional image to a vector so that $d=28^2=784$, and labels are digits.
# The probability of the label given the features is $p(y \mid \mathbf{x})$. If we are able to compute these probabilities, which are $p(y \mid \mathbf{x})$ for $y=0, \ldots,9$ in our example, then the classifier will output the prediction $\hat{y}$ given by the expression:
#
# $$\hat{y} = \mathrm{argmax} \> p(y \mid \mathbf{x}).$$
#
# Unfortunately, this requires that we estimate $p(y \mid \mathbf{x})$ for every value of $\mathbf{x} = x_1, ..., x_d$. Imagine that each feature could take one of $2$ values. For example, the feature $x_1 = 1$ might signify that the word apple appears in a given document and $x_1 = 0$ would signify that it does not. If we had $30$ such binary features, that would mean that we need to be prepared to classify any of $2^{30}$ (over 1 billion!) possible values of the input vector $\mathbf{x}$.
#
# Moreover, where is the learning? If we need to see every single possible example in order to predict the corresponding label then we are not really learning a pattern but just memorizing the dataset.
#
# ## The Naive Bayes Classifier
#
# Fortunately, by making some assumptions about conditional independence, we can introduce some inductive bias and build a model capable of generalizing from a comparatively modest selection of training examples. To begin, let us use Bayes theorem, to express the classifier as
#
# $$\hat{y} = \mathrm{argmax}_y \> p(y \mid \mathbf{x}) = \mathrm{argmax}_y \> \frac{p( \mathbf{x} \mid y) p(y)}{p(\mathbf{x})}.$$
#
# Note that the denominator is the normalizing term $p(\mathbf{x})$ which does not depend on the value of the label $y$. As a result, we only need to worry about comparing the numerator across different values of $y$. Even if calculating the denominator turned out to be intractable, we could get away with ignoring it, so long as we could evaluate the numerator. Fortunately, even if we wanted to recover the normalizing constant, we could. We can always recover the normalization term since $\sum_y p(y \mid \mathbf{x}) = 1$.
#
# Now, let us focus on $p( \mathbf{x} \mid y)$. Using the chain rule of probability, we can express the term $p( \mathbf{x} \mid y)$ as
#
# $$p(x_1 \mid y) \cdot p(x_2 \mid x_1, y) \cdot ... \cdot p( x_d \mid x_1, ..., x_{d-1}, y).$$
#
# By itself, this expression does not get us any further. We still must estimate roughly $2^d$ parameters. However, if we assume that *the features are conditionally independent of each other, given the label*, then suddenly we are in much better shape, as this term simplifies to $\prod_i p(x_i \mid y)$, giving us the predictor
#
# $$\hat{y} = \mathrm{argmax}_y \> \prod_{i=1}^d p(x_i \mid y) p(y).$$
#
# If we can estimate $p(x_i=1 \mid y)$ for every $i$ and $y$, and save its value in $P_{xy}[i, y]$, here $P_{xy}$ is a $d\times n$ matrix with $n$ being the number of classes and $y\in\{1, \ldots, n\}$, then we can also use this to estimate $p(x_i = 0 \mid y)$, i.e.,
#
# $$
# p(x_i = t_i \mid y) =
# \begin{cases}
# P_{xy}[i, y] & \text{for } t_i=1 ;\\
# 1 - P_{xy}[i, y] & \text{for } t_i = 0 .
# \end{cases}
# $$
#
# In addition, we estimate $p(y)$ for every $y$ and save it in $P_y[y]$, with $P_y$ a $n$-length vector. Then, for any new example $\mathbf t = (t_1, t_2, \ldots, t_d)$, we could compute
#
# $$\begin{aligned}\hat{y} &= \mathrm{argmax}_y \ p(y)\prod_{i=1}^d p(x_i = t_i \mid y) \\ &= \mathrm{argmax}_y \ P_y[y]\prod_{i=1}^d \ P_{xy}[i, y]^{t_i}\, \left(1 - P_{xy}[i, y]\right)^{1-t_i}\end{aligned}$$
# :eqlabel:`eq_naive_bayes_estimation`
#
# for any $y$. So our assumption of conditional independence has taken the complexity of our model from an exponential dependence on the number of features $\mathcal{O}(2^dn)$ to a linear dependence, which is $\mathcal{O}(dn)$.
#
#
# ## Training
#
# The problem now is that we do not know $P_{xy}$ and $P_y$. So we need to estimate their values given some training data first. This is *training* the model. Estimating $P_y$ is not too hard. Since we are only dealing with $10$ classes, we may count the number of occurrences $n_y$ for each of the digits and divide it by the total amount of data $n$. For instance, if digit 8 occurs $n_8 = 5,800$ times and we have a total of $n = 60,000$ images, the probability estimate is $p(y=8) = 0.0967$.
#
# + origin_pos=27 tab=["tensorflow"]
X = tf.stack([train_images[i] for i in range(len(train_images))], axis=0)
Y = tf.constant([train_labels[i] for i in range(len(train_labels))])
n_y = tf.Variable(tf.zeros(10))
for y in range(10):
n_y[y].assign(tf.reduce_sum(tf.cast(Y == y, tf.float32)))
P_y = n_y / tf.reduce_sum(n_y)
P_y
# + [markdown] origin_pos=28
# Now on to slightly more difficult things $P_{xy}$. Since we picked black and white images, $p(x_i \mid y)$ denotes the probability that pixel $i$ is switched on for class $y$. Just like before we can go and count the number of times $n_{iy}$ such that an event occurs and divide it by the total number of occurrences of $y$, i.e., $n_y$. But there is something slightly troubling: certain pixels may never be black (e.g., for well cropped images the corner pixels might always be white). A convenient way for statisticians to deal with this problem is to add pseudo counts to all occurrences. Hence, rather than $n_{iy}$ we use $n_{iy}+1$ and instead of $n_y$ we use $n_{y} + 1$. This is also called *Laplace Smoothing*. It may seem ad-hoc, however it may be well motivated from a Bayesian point-of-view.
#
# + origin_pos=31 tab=["tensorflow"]
n_x = tf.Variable(tf.zeros((10, 28, 28)))
for y in range(10):
n_x[y].assign(
tf.cast(tf.reduce_sum(X.numpy()[Y.numpy() == y], axis=0), tf.float32))
P_xy = (n_x + 1) / tf.reshape((n_y + 1), (10, 1, 1))
d2l.show_images(P_xy, 2, 5);
# + [markdown] origin_pos=32
# By visualizing these $10\times 28\times 28$ probabilities (for each pixel for each class) we could get some mean looking digits.
#
# Now we can use :eqref:`eq_naive_bayes_estimation` to predict a new image. Given $\mathbf x$, the following functions computes $p(\mathbf x \mid y)p(y)$ for every $y$.
#
# + origin_pos=35 tab=["tensorflow"]
def bayes_pred(x):
x = tf.expand_dims(x, axis=0) # (28, 28) -> (1, 28, 28)
p_xy = P_xy * x + (1 - P_xy) * (1 - x)
p_xy = tf.math.reduce_prod(tf.reshape(p_xy, (10, -1)), axis=1) # p(x|y)
return p_xy * P_y
image, label = tf.cast(train_images[0], tf.float32), train_labels[0]
bayes_pred(image)
# + [markdown] origin_pos=36
# This went horribly wrong! To find out why, let us look at the per pixel probabilities. They are typically numbers between $0.001$ and $1$. We are multiplying $784$ of them. At this point it is worth mentioning that we are calculating these numbers on a computer, hence with a fixed range for the exponent. What happens is that we experience *numerical underflow*, i.e., multiplying all the small numbers leads to something even smaller until it is rounded down to zero. We discussed this as a theoretical issue in :numref:`sec_maximum_likelihood`, but we see the phenomena clearly here in practice.
#
# As discussed in that section, we fix this by using the fact that $\log a b = \log a + \log b$, i.e., we switch to summing logarithms.
# Even if both $a$ and $b$ are small numbers, the logarithm values should be in a proper range.
#
# + origin_pos=39 tab=["tensorflow"]
a = 0.1
print('underflow:', a**784)
print('logarithm is normal:', 784 * tf.math.log(a).numpy())
# + [markdown] origin_pos=40
# Since the logarithm is an increasing function, we can rewrite :eqref:`eq_naive_bayes_estimation` as
#
# $$ \hat{y} = \mathrm{argmax}_y \ \log P_y[y] + \sum_{i=1}^d \Big[t_i\log P_{xy}[i, y] + (1-t_i) \log (1 - P_{xy}[i, y]) \Big].$$
#
# We can implement the following stable version:
#
# + origin_pos=43 tab=["tensorflow"]
log_P_xy = tf.math.log(P_xy)
log_P_xy_neg = tf.math.log(1 - P_xy)
log_P_y = tf.math.log(P_y)
def bayes_pred_stable(x):
x = tf.expand_dims(x, axis=0) # (28, 28) -> (1, 28, 28)
p_xy = log_P_xy * x + log_P_xy_neg * (1 - x)
p_xy = tf.math.reduce_sum(tf.reshape(p_xy, (10, -1)), axis=1) # p(x|y)
return p_xy + log_P_y
py = bayes_pred_stable(image)
py
# + [markdown] origin_pos=44
# We may now check if the prediction is correct.
#
# + origin_pos=47 tab=["tensorflow"]
tf.argmax(py, axis=0) == label
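# + [markdown]
# As a quick check, we can recover the normalized posterior from these unnormalized log scores. Since `py` holds $\log \big(p(\mathbf x \mid y)\, p(y)\big)$ for each class, a softmax over it yields $p(y \mid \mathbf x)$, using the fact that $\sum_y p(y \mid \mathbf{x}) = 1$. (A minimal sketch, assuming `py` from the cell above.)
#
# + tab=["tensorflow"]
posterior = tf.nn.softmax(py)  # normalize exp(log p(x, y)) over y to obtain p(y | x)
posterior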
# + [markdown] origin_pos=48
# If we now predict a few validation examples, we can see the Bayes
# classifier works pretty well.
#
# + origin_pos=51 tab=["tensorflow"]
def predict(X):
return [
tf.cast(tf.argmax(bayes_pred_stable(x), axis=0), tf.int32).numpy()
for x in X]
X = tf.stack([tf.cast(test_images[i], tf.float32) for i in range(10, 38)],
             axis=0)
y = tf.constant([test_labels[i] for i in range(10, 38)])
preds = predict(X)
d2l.show_images(X, 2, 9, titles=[str(d) for d in preds]);
# + [markdown] origin_pos=52
# Finally, let us compute the overall accuracy of the classifier.
#
# + origin_pos=55 tab=["tensorflow"]
X = tf.stack(
    [tf.cast(test_images[i], tf.float32) for i in range(len(test_images))],
    axis=0)
y = tf.constant(test_labels, dtype=tf.int32)
preds = tf.constant(predict(X), dtype=tf.int32)
tf.reduce_sum(tf.cast(preds == y, tf.float32)) / len(y)  # Validation accuracy
# + [markdown] origin_pos=56
# Modern deep networks achieve error rates of less than $0.01$. The relatively poor performance is due to the incorrect statistical assumptions that we made in our model: we assumed that each and every pixel is *independently* generated, depending only on the label. This is clearly not how humans write digits, and this wrong assumption led to the downfall of our overly naive (Bayes) classifier.
#
# ## Summary
# * Using Bayes' rule, a classifier can be made by assuming all observed features are independent.
# * This classifier can be trained on a dataset by counting the number of occurrences of combinations of labels and pixel values.
# * This classifier was the gold standard for decades for tasks such as spam detection.
#
# ## Exercises
# 1. Consider the dataset $[[0,0], [0,1], [1,0], [1,1]]$ with labels given by the XOR of the two elements $[0,1,1,0]$. What are the probabilities for a Naive Bayes classifier built on this dataset. Does it successfully classify our points? If not, what assumptions are violated?
# 1. Suppose that we did not use Laplace smoothing when estimating probabilities and a data example arrived at testing time which contained a value never observed in training. What would the model output?
# 1. The naive Bayes classifier is a specific example of a Bayesian network, where the dependence of random variables are encoded with a graph structure. While the full theory is beyond the scope of this section (see :cite:`Koller.Friedman.2009` for full details), explain why allowing explicit dependence between the two input variables in the XOR model allows for the creation of a successful classifier.
#
# + [markdown] origin_pos=59 tab=["tensorflow"]
# [Discussions](https://discuss.d2l.ai/t/1101)
#
| scripts/d21-en/tensorflow/chapter_appendix-mathematics-for-deep-learning/naive-bayes.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Demo of changing face on Misty II via rerobots API
#
# This is free software, released under the Apache License, Version 2.0.
# You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0
#
# To learn more about our services, go to https://help.rerobots.net/
#
# Copyright (c) 2019 rerobots, Inc.
API_TOKEN = '' # GET YOUR TOKEN FROM https://rerobots.net/tokens
# !pip install rerobots
# !pip install paramiko
# !pip install requests
# +
import time
import matplotlib.pyplot as plt
# %matplotlib inline
from rerobots.api import Instance
## Read more about this workspace type at
## https://help.rerobots.net/workspaces/fixed_misty2fieldtrial.html
instance = Instance(['fixed_misty2'], api_token=API_TOKEN)
# +
## Wait for instance to finish initializing
while True:
if instance.get_status() == 'READY':
break
time.sleep(2)
# -
instance.get_details()
# +
## Start the "cam" add-on, which supports video streaming
while True:
payload = instance.status_addon_cam()
if payload['status'] == 'active':
break
elif payload['status'] == 'notfound':
instance.activate_addon_cam()
time.sleep(2)
# +
## Display the Misty II robot as it is now.
## The image is displayed as an HTML <img> element
## in this Jupyter Notebook.
from IPython.display import display, HTML
payload = instance.get_snapshot_cam(coding='base64', dformat='jpeg')
if not payload['success']:
time.sleep(1)
payload = instance.get_snapshot_cam(coding='base64', dformat='jpeg')
assert payload['success']
display(HTML('<img src="data:image/jpeg;base64,{}">'.format(payload['data'])))
# -
instance.activate_addon_mistyproxy()
while True:
mproxy = instance.status_addon_mistyproxy()
if mproxy['status'] == 'active':
break
time.sleep(2)
BASEURL = mproxy['url'][1]
## Change facial expression
## https://docs.mistyrobotics.com/misty-ii/rest-api/api-reference/#displayimage
import requests
res = requests.post(BASEURL + '/api/images/display', json={'FileName': 'e_Admiration.jpg'})
assert res.ok
time.sleep(1) # time for action to finish...
# +
## Display the Misty II robot as it is now
## The image is displayed as an HTML <img> element
## in this Jupyter Notebook.
from IPython.display import display, HTML
payload = instance.get_snapshot_cam(coding='base64', dformat='jpeg')
display(HTML('<img src="data:image/jpeg;base64,{}">'.format(payload['data'])))
# -
## Tilt head down
## https://docs.mistyrobotics.com/misty-ii/rest-api/api-reference/#movehead
res = requests.post(BASEURL + '/api/head', json={
'Pitch': 30,
'Roll': 0,
'Yaw': 0,
'Velocity': 80,
'Units': 'degrees',
})
assert res.ok
time.sleep(3) # time for action to finish...
# +
## Display the Misty II robot as it is now
## The image is represented as type NumPy ndarray and
## displayed by Matplotlib.
payload = instance.get_snapshot_cam(dformat='ndarray')
fig = plt.figure(figsize=(7,12))
plt.imshow(payload['data'])
# +
# Done!
instance.terminate()
| faceprint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda root]
# language: python
# name: conda-root-py
# ---
# ## Eziocon
#
# A Software Development Kit for doing simple database operations using Python built-in objects.
#
#
#
# ### Examples : Oracle Data Base
#
# <br>
# <br>
# **Topics covered:**
#
# 1. How to prepare your data source to use the SDK effectively
# 2. Storyline walkthrough using the sample data created
# 3. Differences between the Oracle SDK and the MySQL SDK
# 4. Examples of the following functions:
#     * Insert
#     * Count
#     * Update
#     * Fetch one
#     * Fetch many
# ### Importing the Libraries
#if you are using oracle Database use the following SDK
from eziocon.oracle.sdk import oracle
# <br>
# #### Initialising the credentials for Oracle Database
host = 'your database host '
pwd = '<PASSWORD>'
username = 'your database username'
port = 'your database port'
sid = 'your Sid'
# #### Initialising the object for Oracle
#
# 1. Use the **setConnect** function to pass the credentials.
# 2. If the connection to the database is successful, the function sets the object variable **connect_check** to **True**.
# 3. Until connect_check is set to True, the other functions of the SDK cannot be used.
# 4. This function is common to the SDKs of both databases (i.e. Oracle and MySQL).
obj = oracle() #creating the object
obj.setConnect(username=username, hostname=host, port=port, password=pwd, sid=sid)  # pwd holds the password defined above
if obj.connect_check :
print("Database connection tested and working for now\n")
else:
print("Database cannot be connected \n")
# #### Let us assume we have the following empty table created in the Database
#
# <br>
#
# **Tablename** : Student
#
# 1. First_name : String : Varchar
# 2. Last_name : String : Varchar
# 3. Subject_Code : String : Varchar
# 4. Test_1_Score : Number : Numeric
# 5. Test_2_Score : Number : Numeric
# 6. Test_3_Score : Number : Numeric
# 7. CGPA : Float : Numeric
# | First_name | Last_name | Subject_Code |Test_1_Score|Test_2_Score| Test_3_Score | CGPA|
# | ------------- |:-------------:| -----: |-----: |-----: |-----: |-----: |
# | | | | | | | |
# <br>
#
# -----
#
# ** Count Operation : **
#
# ---
#
#
print(obj.count.__doc__)
obj.count(tablename="student")
# As the table is empty (no data has been inserted yet), we get a count of 0. <br>
#
#
# | First name | Last name | Subject Code |Test_1_Score|Test_2_Score| Test_3_Score | CGPA|
# | ------------- |:-------------:| -----: |-----: |-----: |-----: |-----: |
# | Steve | Rogers | IT1011| 34 | 34 | 39 | 9.1 |
# | Sam | Winchester | IT1004| 32 | 35 | 39 | 9.2 |
# | Sam | Winchester | IT1005| 34 | 36 | 39 | 9.5 |
# | Sam | Winchester | IT1006| 36 | 37 | 38 | 9.4 |
# | Sam | Winchester | IT1007| 38 | 38 | 40 | 9.8 |
# | Dean | Winchester | IT1004| 22 | 25 | 28 | 8.2 |
# | Dean | Winchester | IT1006| 26 | 27 | 28 | 8.4 |
# | Dean | Winchester | IT1007| 28 | 28 | 30 | 8.8 |
# | Tony | Stark | IT1006| 40 | 40 | 40 | 10.0 |
# | Tony | Stark | IT1007| 40 | 40 | 40 | 10.0 |
#
#
# -----
#
# ** Insert Operation : **
#
# ---
#
# 1. We have to enter the above records into the database to populate it.
# 2. There are two operations: inserting one record and inserting records in bulk.
# 3. We will insert Steve Rogers' record on its own, as it is a single record: the insert-one operation.
# 4. The rest of the records will be inserted in bulk: the insert-many operation.
# +
data = [['first_name','last_name','subject_code','test_1_score','test_2_score','test_3_score','cgpa'],
['Sam','Winchester','IT1004',32,35,39,9.2],
['Sam','Winchester','IT1006',34,36,39,9.5],
['Sam','Winchester','IT1006',36,37,38,9.4],
['Sam','Winchester','IT1007',38,38,34,9.8],
['Dean','Winchester','IT1004',22,25,28,8.2],
['Dean','Winchester','IT1006',26,27,28,8.4],
['Dean','Winchester','IT1007',28,28,30,8.8],
['Tony','Stark','IT1007',40,40,40,10],
['Tony','Stark','IT1006',40,40,40,10]
]
steve_record = ['Steve','Rogers','IT1011',34,34,39,9.1]
# -
#
# ** Let us look at the docString of the function : **
#
print(obj.insert.__doc__)
columns = data[0]
print(columns) #iterator of String
#parsing steve record into dictionary
record = {}
for col,value in zip(columns,steve_record):
record[col] = value
record # inserting using dictionary
obj.insert(tablename='student',objects=record)
# <br>
# **The record has been inserted successfully.**
#
# 1. We will verify it using the count function.
# 2. We will use the where-clause condition argument of the count function to get the number of records inserted.
#getting the total number of records in the table
obj.count(tablename='student')
obj.count(tablename='student',condition="first_name = 'Steve' and last_name = 'Rogers'")
# if a wrong condition is given (e.g. lowercase 'steve' instead of 'Steve'), the count should be zero
obj.count(tablename='student',condition="first_name = 'steve' and last_name = 'Rogers'")
# #### Bulk Insert :
#
#parsing records as list of dictionaries
columns = data[0] #getting first row as the column
final_records = []
for val in data[1:]:
record = {}
for col,value in zip(columns,val):
record[col]= value
final_records.append(record)
len(final_records) # list of dictionaries of 9 records
final_records[1:3] #sample view
# <br>
#
# performing bulk insert
obj.insert(tablename="student",objects=final_records)
# ** The records have been inserted successfully.**
#
# Let's check it by performing a count operation
print(obj.count(tablename='student'))
#
#
#
# -----
#
# ** Fetch Operation : **
#
# ---
print(obj.fetchMany.__doc__)
print(obj.fetchOne.__doc__)
# <br>
# <br>
#
# **We will fetch the data in the following ways, to give a general idea of how to use the SDK with different selection filters:**
#
#
# 1. Finding the students who have enrolled in a subject
# 2. Fetching rows whose last name is like a particular string
# 3. Fetching the students whose CGPA is above a given threshold in a given subject
# 4. Fetching the list of subjects a student has enrolled in, given the student's first and last name
# 5. Fetching any one record given the first name of the student
# 6. Fetching the first 3 records given the first and last name of the student
#
# <br>
#
# #### 1. Finding the students who have enrolled in a subject.
#
# <Br>
def get_student_names(subject_code):
where_clause = "subject_code ='" + subject_code + "'" #processing the query
return obj.fetchMany(columns=("first_name","last_name"),tablename="student",condition=where_clause)
get_student_names("IT1004")
get_student_names("IT1007")
# <br>
#
# #### 2. Fetching rows whose last name is like a particular string
#
# <br>
def get_student_names_Like(sub_string,column_name):
where_clause = column_name + " like " + "'%"+sub_string+"%'"
return obj.fetchMany(columns=("first_name","last_name",column_name),tablename="student",condition=where_clause)
get_student_names_Like("er","last_name")
get_student_names_Like("Winch","last_name")
get_student_names_Like("ny","First_name")
get_student_names_Like("star","First_name") #notice there is no string in the firstname matching with stark
# <br>
#
# #### 3. Fetching the students whose CGPA is above a given threshold in a given subject
#
# <br>
def get_student(cgpa,subject_code):
where_clause = "cgpa > " + str(cgpa) + " and subject_code = '" + subject_code + "'"
return obj.fetchMany(columns=("first_name","last_name"),tablename="student",condition=where_clause)
get_student(8.0,"IT1007")
get_student(9.0,"IT1004")
# <br>
#
# #### 4. Fetching the list of subjects a student has enrolled in, given the student's first and last name
#
# <br>
def get_subjects (first_name,last_name):
where_clause = "first_name = '" +first_name+"' and last_name = '" + last_name+"'"
return obj.fetchMany(columns= ["subject_code"] ,tablename="student",condition=where_clause,return_type=2)
get_subjects("Tony","Stark") #notice the return type = 2 : sends parsed List of Dictionaries
get_subjects("Sam","Winchester")
get_subjects("Dean","Winchester")
# <br>
#
# #### 5. Fetch any one record given the first name of the student
#
# <br>
columns
def get_any_one_record(first_name):
where_clause = " first_name ='" + first_name + "'"
return obj.fetchOne(columns=tuple("*"),tablename="student",condition=where_clause)
get_any_one_record("Dean")
get_any_one_record("Steve")
get_any_one_record("Sam")
get_any_one_record("Tony")
# <br>
# <br>
#
# #### 6. Fetch the first 3 records given the first and last name of the student
#
# We will use the `rows` parameter of `fetchMany`
def get_records(first_name , last_name):
where_clause = "first_name = '" +first_name+"' and last_name = '" + last_name+"'"
return obj.fetchMany(columns= columns ,tablename="student",condition=where_clause,rows=3)
get_records("Sam","Winchester")
#
# ---------
#
# ** Update Operation **
#
# ----
#
#
print(obj.update.__doc__)
# <bR>
# We will do the following update activities to see how the update function is used.
#
# 1. Change the lower-case first name 'dean' to 'Dean'
# 2. Change 'sam winchester' to 'Sam Winchester' (fixing a lower-case entry)
# 3. Update a given student's test scores for a given subject code
#
# <br>
#
#
# #### 1. Change the lower-case first name 'dean' to 'Dean'
updation = {"first_name":"Dean"} # {column : value } format
obj.update(tablename="student",updations=updation,condition=" first_name = 'dean'")
get_student_names_Like("ea","first_name")
# <br>
#
# #### 2. Change 'sam winchester' to 'Sam Winchester'
updation = {"first_name":"Sam","last_name":"Winchester"}
obj.update(tablename="student",updations=updation,condition=" first_name = 'sam'")
get_student_names_Like("am","first_name")
# <br>
#
# #### 3. Update a given student's test scores for a given subject code
def update_record(updation,first_name,last_name,subject_code):
sc = subject_code # just to make the below query line smaller
where_clause = "first_name = '" +first_name+"' and last_name = '" + last_name+"' and subject_code='" +sc+ "'"
return obj.update(tablename="student",updations=updation,condition=where_clause)
updation = {"test_1_score":35,"test_3_score":40,"cgpa":9.4} #lets say we want to update the following columns
update_record(updation,"Sam","Winchester","IT1004")
obj.fetchOne(columns,"student","first_name='Sam' and subject_code = 'IT1004'")
# <br>
#
# ** Record Successfully Updated **
# <br>
# <br>
#
#
#
# ----
#
# ## Well That's All for now Folks : Soon will be adding more wrappers and cool stuff
#
# <br>
#
# ## Stay Tuned !!
#
# ----
| examples/Example-Oracle.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="SsF4jFqvHyoX"
# This is a companion notebook for the book [Deep Learning with Python, Second Edition](https://www.manning.com/books/deep-learning-with-python-second-edition?a_aid=keras&a_bid=76564dff). For readability, it only contains runnable code blocks and section titles, and omits everything else in the book: text paragraphs, figures, and pseudocode.
#
# **If you want to be able to follow what's going on, I recommend reading the notebook side by side with your copy of the book.**
#
# This notebook was generated for TensorFlow 2.6.
# + [markdown] id="KbNXbAiVHyoc"
# # Introduction to Keras and TensorFlow
# + [markdown] id="64QLF5JGHyoc"
# ## What's TensorFlow?
# + [markdown] id="uTtwhMdvHyod"
# ## What's Keras?
# + [markdown] id="zGBfWs6CHyod"
# ## Keras and TensorFlow: A brief history
# + [markdown] id="rf_4WMLgHyoe"
# ## Setting up a deep-learning workspace
# + [markdown] id="XX-ecfCNHyoe"
# ### Jupyter notebooks: The preferred way to run deep-learning experiments
# + [markdown] id="2En2658OHyof"
# ### Using Colaboratory
# + [markdown] id="tbJXkEpbHyof"
# #### First steps with Colaboratory
# + [markdown] id="4vl_PLPjHyog"
# #### Installing packages with pip
# + [markdown] id="v3i42SJTHyoh"
# #### Using the GPU runtime
# + [markdown] id="fJKDA3f2Hyoh"
# ## First steps with TensorFlow
# + [markdown] id="kdM22GJ5Hyoh"
# #### Constant tensors and variables
# + [markdown] id="K2tTZSePHyoi"
# **All-ones or all-zeros tensors**
# + [markdown] id="KIqdwodfII9o"
#
# + id="eWyXv4TtHyoi" colab={"base_uri": "https://localhost:8080/"} outputId="2422bb04-7dab-4320-ebfa-cd6385d48a99"
import tensorflow as tf
x = tf.ones(shape=(2, 1))
print(x)
# + id="l2URsVPjHyoj" colab={"base_uri": "https://localhost:8080/"} outputId="1ee06ae8-948a-467d-a2c2-f9fca89b1459"
x = tf.zeros(shape=(2, 1))
print(x)
# + [markdown] id="afGQ6cmwHyoj"
# **Random tensors**
# + id="rNvK2WEuHyok" colab={"base_uri": "https://localhost:8080/"} outputId="240faf80-ceca-4e10-b5c5-4faae7bcad68"
x = tf.random.normal(shape=(3, 1), mean=0., stddev=1.)
print(x)
# + id="6ytwnpNpHyok" colab={"base_uri": "https://localhost:8080/"} outputId="9c2cd222-6815-4898-a742-64f58f04c2f7"
x = tf.random.uniform(shape=(3, 1), minval=0., maxval=1.)
print(x)
# + [markdown] id="aj-CqQU8ItWQ"
# A significant difference between NumPy arrays and TensorFlow tensors is that TensorFlow tensors aren’t assignable: they’re constant. For instance, in NumPy, you can do the following:
# + [markdown] id="r4201kDAHyok"
# **NumPy arrays are assignable**
# + id="elIhyNLZHyol"
import numpy as np
x = np.ones(shape=(2, 2))
x[0, 0] = 0.
# + [markdown] id="x5F-bPnLHyol"
# **Creating a TensorFlow variable**
# + id="yCzBJdtBHyol" colab={"base_uri": "https://localhost:8080/"} outputId="b8819eed-d8b8-4222-dd44-0dc7d9112d54"
v = tf.Variable(initial_value=tf.random.normal(shape=(3, 1)))
print(v)
# + [markdown] id="qocFef2LHyol"
# **Assigning a value to a TensorFlow variable**
# + id="nf4M0Dv-Hyom" colab={"base_uri": "https://localhost:8080/"} outputId="74debaa0-2a25-4517-98f4-0ea1823f436f"
v.assign(tf.ones((3, 1)))
# + [markdown] id="uGK2MH-DHyom"
# **Assigning a value to a subset of a TensorFlow variable**
# + id="I9_XKBRYHyom" colab={"base_uri": "https://localhost:8080/"} outputId="58e19f1f-32b9-48d5-fcf8-df36bbf0ab52"
v[0, 0].assign(3.)
# + [markdown] id="YhYwFpk8Hyom"
# **Using `assign_add`**
# + id="iPEmZBEPHyom" colab={"base_uri": "https://localhost:8080/"} outputId="29147acf-a1dc-4483-ae7b-63e161fcd16b"
v.assign_add(tf.ones((3, 1)))
# + [markdown] id="WbjK-fMqHyom"
# #### Tensor operations: Doing math in TensorFlow
# + [markdown] id="6z3E4dTQHyon"
# **A few basic math operations**
# + id="BGDAJ6BSHyon"
a = tf.ones((2, 2))
b = tf.square(a)
c = tf.sqrt(a)
d = b + c
e = tf.matmul(a, b)
e *= d
# + id="CPQYLICzI57P"
# + [markdown] id="ePtKIPKHHyon"
# #### A second look at the GradientTape API
# + [markdown] id="J5CtV4l-Hyon"
# **Using the `GradientTape`**
# + id="_O9b0RKNHyon"
input_var = tf.Variable(initial_value=5.)
with tf.GradientTape() as tape:
result = tf.square(input_var)
gradient = tape.gradient(result, input_var)
# + [markdown] id="yGjN9Cv4Hyoo"
# **Using `GradientTape` with constant tensor inputs**
# + id="VvCsU8XbHyoo"
input_const = tf.constant(3.)
with tf.GradientTape() as tape:
tape.watch(input_const)
result = tf.square(input_const)
gradient = tape.gradient(result, input_const)
# + [markdown] id="OPq7mC3yHyoo"
# **Using nested gradient tapes to compute second-order gradients**
# + id="KLqQOcObHyoo"
time = tf.Variable(10.)
with tf.GradientTape() as outer_tape:
with tf.GradientTape() as inner_tape:
position = 4.9 * time ** 2
speed = inner_tape.gradient(position, time)
acceleration = outer_tape.gradient(speed, time)
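# + [markdown]
# A quick sanity check on the nested tapes (a minimal sketch, assuming the cell above): with `position` $= 4.9\,t^2$, the analytic speed is $9.8\,t$ and the acceleration is the constant $9.8$, so at `time = 10` the two gradients should be 98.0 and 9.8.
# +
print(speed)         # expected: 98.0
print(acceleration)  # expected: 9.8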
# + [markdown] id="ik0yxaAvHyoo"
# #### An end-to-end example: A linear classifier in pure TensorFlow
# + [markdown] id="e-jVPG3uHyoo"
# **Generating two classes of random points in a 2D plane**
# + id="CMa7tTizHyop"
import numpy as np
num_samples_per_class = 1000
negative_samples = np.random.multivariate_normal(
mean=[0, 3],
cov=[[1, 0.5],[0.5, 1]],
size=num_samples_per_class)
positive_samples = np.random.multivariate_normal(
mean=[3, 0],
cov=[[1, 0.5],[0.5, 1]],
size=num_samples_per_class)
# + [markdown] id="fHzRpN2ZHyop"
# **Stacking the two classes into an array with shape (2000, 2)**
# + id="YAmJi9QnHyop"
inputs = np.vstack((negative_samples, positive_samples)).astype(np.float32)
# + [markdown] id="WCJTlANLHyop"
# **Generating the corresponding targets (0 and 1)**
# + id="S9rSvcNRHyoq"
targets = np.vstack((np.zeros((num_samples_per_class, 1), dtype="float32"),
np.ones((num_samples_per_class, 1), dtype="float32")))
# + [markdown] id="IgyOUkb2Hyoq"
# **Plotting the two point classes**
# + id="3-lrHxfQHyoq"
import matplotlib.pyplot as plt
plt.scatter(inputs[:, 0], inputs[:, 1], c=targets[:, 0])
plt.show()
# + [markdown] id="ClaC2vy8Hyoq"
# **Creating the linear classifier variables**
# + id="ZrpPpuSIHyor"
input_dim = 2
output_dim = 1
W = tf.Variable(initial_value=tf.random.uniform(shape=(input_dim, output_dim)))
b = tf.Variable(initial_value=tf.zeros(shape=(output_dim,)))
# + [markdown] id="HFW8Qj_pHyor"
# **The forward pass function**
# + id="YGlyiZwoHyor"
def model(inputs):
return tf.matmul(inputs, W) + b
# + [markdown] id="ZfFpH--GHyor"
# **The mean squared error loss function**
# + id="ekJnEFXKHyor"
def square_loss(targets, predictions):
per_sample_losses = tf.square(targets - predictions)
return tf.reduce_mean(per_sample_losses)
# + [markdown] id="qo9cUcBfHyor"
# **The training step function**
# + id="fwBiI_t6Hyos"
learning_rate = 0.1
def training_step(inputs, targets):
with tf.GradientTape() as tape:
predictions = model(inputs)
        loss = square_loss(targets, predictions)
grad_loss_wrt_W, grad_loss_wrt_b = tape.gradient(loss, [W, b])
W.assign_sub(grad_loss_wrt_W * learning_rate)
b.assign_sub(grad_loss_wrt_b * learning_rate)
return loss
# + [markdown] id="WjDfitriHyos"
# **The batch training loop**
# + id="ka1oEWs1Hyos"
for step in range(40):
loss = training_step(inputs, targets)
print(f"Loss at step {step}: {loss:.4f}")
# + id="E2--JOc-Hyos"
predictions = model(inputs)
plt.scatter(inputs[:, 0], inputs[:, 1], c=predictions[:, 0] > 0.5)
plt.show()
# + id="R781676MHyos"
x = np.linspace(-1, 4, 100)
y = - W[0] / W[1] * x + (0.5 - b) / W[1]
plt.plot(x, y, "-r")
plt.scatter(inputs[:, 0], inputs[:, 1], c=predictions[:, 0] > 0.5)
# + [markdown] id="oC-tNJ0wHyos"
# ## Anatomy of a neural network: Understanding core Keras APIs
# + [markdown] id="Jd-3JPYUHyot"
# ### Layers: The building blocks of deep learning
# + [markdown] id="etGdYM4PHyot"
# #### The base Layer class in Keras
# + [markdown] id="n0JrM3-xHyot"
# **A `Dense` layer implemented as a `Layer` subclass**
# + id="E-vyJbRqHyot"
from tensorflow import keras
class SimpleDense(keras.layers.Layer):
def __init__(self, units, activation=None):
super().__init__()
self.units = units
self.activation = activation
def build(self, input_shape):
input_dim = input_shape[-1]
self.W = self.add_weight(shape=(input_dim, self.units),
initializer="random_normal")
self.b = self.add_weight(shape=(self.units,),
initializer="zeros")
def call(self, inputs):
y = tf.matmul(inputs, self.W) + self.b
if self.activation is not None:
y = self.activation(y)
return y
# + id="M9QbH8gnHyot"
my_dense = SimpleDense(units=32, activation=tf.nn.relu)
input_tensor = tf.ones(shape=(2, 784))
output_tensor = my_dense(input_tensor)
print(output_tensor.shape)
# + [markdown] id="YsnZoIeUHyot"
# #### Automatic shape inference: Building layers on the fly
# + id="qaPmt1PWHyot"
from tensorflow.keras import layers
layer = layers.Dense(32, activation="relu")
# + id="s0F1a-RNHyou"
from tensorflow.keras import models
from tensorflow.keras import layers
model = models.Sequential([
layers.Dense(32, activation="relu"),
layers.Dense(32)
])
# + id="L7lAAJmaHyou"
model = keras.Sequential([
SimpleDense(32, activation="relu"),
SimpleDense(64, activation="relu"),
SimpleDense(32, activation="relu"),
SimpleDense(10, activation="softmax")
])
# + [markdown] id="LXkaPlzqHyou"
# ### From layers to models
# + [markdown] id="L4O_JCzhHyou"
# ### The "compile" step: Configuring the learning process
# + id="AzbJr07vHyou"
model = keras.Sequential([keras.layers.Dense(1)])
model.compile(optimizer="rmsprop",
loss="mean_squared_error",
metrics=["accuracy"])
# + id="3QrzQ_iwHyou"
model.compile(optimizer=keras.optimizers.RMSprop(),
loss=keras.losses.MeanSquaredError(),
metrics=[keras.metrics.BinaryAccuracy()])
# + [markdown] id="5XFdJjEHHyou"
# ### Picking a loss function
# + [markdown] id="kL99RfCBHyov"
# ### Understanding the fit() method
# + [markdown] id="jGeHXNJ4Hyov"
# **Calling `fit()` with NumPy data**
# + id="3N3JyPA1Hyov"
history = model.fit(
inputs,
targets,
epochs=5,
batch_size=128
)
# + id="Hn-ne-VZHyov"
history.history
# + [markdown] id="fzYOEd28Hyov"
# ### Monitoring loss and metrics on validation data
# + [markdown] id="xAAHqGsgHyov"
# **Using the `validation_data` argument**
# + id="azajvg37Hyow"
model = keras.Sequential([keras.layers.Dense(1)])
model.compile(optimizer=keras.optimizers.RMSprop(learning_rate=0.1),
loss=keras.losses.MeanSquaredError(),
metrics=[keras.metrics.BinaryAccuracy()])
indices_permutation = np.random.permutation(len(inputs))
shuffled_inputs = inputs[indices_permutation]
shuffled_targets = targets[indices_permutation]
num_validation_samples = int(0.3 * len(inputs))
val_inputs = shuffled_inputs[:num_validation_samples]
val_targets = shuffled_targets[:num_validation_samples]
training_inputs = shuffled_inputs[num_validation_samples:]
training_targets = shuffled_targets[num_validation_samples:]
model.fit(
training_inputs,
training_targets,
epochs=5,
batch_size=16,
validation_data=(val_inputs, val_targets)
)
# + [markdown] id="nY4nQy9_Hyow"
# ### Inference: Using a model after training
# + id="fbabh3VKHyow"
predictions = model.predict(val_inputs, batch_size=128)
print(predictions[:10])
# + [markdown] id="pExRxPW5Hyow"
# ## Summary
| chapter03_introduction_to_keras_and_tf_i.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # MNIST - Syft Duet - Data Scientist 🥁
# ## PART 1: Connect to a Remote Duet Server
#
# As the Data Scientist, you want to perform data science on data that is sitting in the Data Owner's Duet server in their Notebook.
#
# In order to do this, we must run the code that the Data Owner sends us, which importantly includes their Duet Session ID. The code will look like this, importantly with their real Server ID.
#
# ```
# import syft as sy
# duet = sy.duet('xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx')
# ```
#
# This will create a direct connection from my notebook to the remote Duet server. Once the connection is established all traffic is sent directly between the two nodes.
#
# Paste the code or Server ID that the Data Owner gives you and run it in the cell below. It will return your Client ID which you must send to the Data Owner to enter into Duet so it can pair your notebooks.
import syft as sy
duet = sy.join_duet(loopback=True)
# ### <img src="https://github.com/OpenMined/design-assets/raw/master/logos/OM/mark-primary-light.png" alt="he-black-box" width="100"/> Checkpoint 0 : Now STOP and run the Data Owner notebook until the next checkpoint.
# ## PART 2: Setting up a Model and our Data
# The majority of the code below has been adapted closely from the original PyTorch MNIST example which is available in the `original` directory with these notebooks.
# The `duet` variable is now your reference to a whole world of remote operations including supported libraries like torch.
#
# Lets take a look at the duet.torch attribute.
# ```
# duet.torch
# ```
duet.torch
# Lets create a model just like the one in the MNIST example. We do this in almost the exact same way as in PyTorch. The main difference is we inherit from sy.Module instead of nn.Module and we need to pass in a variable called torch_ref which we will use internally for any calls that would normally be to torch.
class SyNet(sy.Module):
def __init__(self, torch_ref):
super(SyNet, self).__init__(torch_ref=torch_ref)
self.conv1 = self.torch_ref.nn.Conv2d(1, 32, 3, 1)
self.conv2 = self.torch_ref.nn.Conv2d(32, 64, 3, 1)
self.dropout1 = self.torch_ref.nn.Dropout2d(0.25)
self.dropout2 = self.torch_ref.nn.Dropout2d(0.5)
self.fc1 = self.torch_ref.nn.Linear(9216, 128)
self.fc2 = self.torch_ref.nn.Linear(128, 10)
def forward(self, x):
x = self.conv1(x)
x = self.torch_ref.nn.functional.relu(x)
x = self.conv2(x)
x = self.torch_ref.nn.functional.relu(x)
x = self.torch_ref.nn.functional.max_pool2d(x, 2)
x = self.dropout1(x)
x = self.torch_ref.flatten(x, 1)
x = self.fc1(x)
x = self.torch_ref.nn.functional.relu(x)
x = self.dropout2(x)
x = self.fc2(x)
output = self.torch_ref.nn.functional.log_softmax(x, dim=1)
return output
# lets import torch and torchvision just as we normally would
import torch
import torchvision
# now we can create the model and pass in our local copy of torch
local_model = SyNet(torch)
# Next we can get our MNIST Test Set ready using our local copy of torch.
# +
# we need some transforms for the MNIST data set
local_transform_1 = torchvision.transforms.ToTensor() # this converts PIL images to Tensors
local_transform_2 = torchvision.transforms.Normalize(0.1307, 0.3081) # this normalizes the dataset
# compose our transforms
local_transforms = torchvision.transforms.Compose([local_transform_1, local_transform_2])
# -
# Lets define a few settings which are from the original MNIST example command-line args
args = {
"batch_size": 64,
"test_batch_size": 1000,
"epochs": 14,
"lr": 1.0,
"gamma": 0.7,
"no_cuda": False,
"dry_run": False,
"seed": 42, # the meaning of life
"log_interval": 10,
"save_model": True,
}
# +
from syft.util import get_root_data_path
# we will configure the test set here locally since we want to know if our Data Owner's
# private training dataset will help us reach new SOTA results for our benchmark test set
test_kwargs = {
"batch_size": args["test_batch_size"],
}
test_data = torchvision.datasets.MNIST(str(get_root_data_path()), train=False, download=True, transform=local_transforms)
test_loader = torch.utils.data.DataLoader(test_data,**test_kwargs)
test_data_length = len(test_loader.dataset)
print(test_data_length)
# -
# Now its time to send the model to our partner’s Duet Server.
# Note: You can load normal torch model weights before sending your model.
# Try training the model and saving it at the end of the notebook and then coming back and
# reloading the weights here, or you can train the same model once using the original script
# in `original` dir and load it here as well.
# +
# local_model.load("./duet_mnist.pt")
# -
model = local_model.send(duet)
# Lets create an alias for our partner’s torch called `remote_torch` so we can refer to the local torch as `torch` and any operation we want to do remotely as `remote_torch`. Remember, the return values from `remote_torch` are `Pointers`, not the real objects. They mostly act the same when using them with other `Pointers` but you can't mix them with local torch objects.
remote_torch = duet.torch
# lets ask to see if our Data Owner has CUDA
has_cuda = False
has_cuda_ptr = remote_torch.cuda.is_available()
has_cuda = bool(has_cuda_ptr.get(
request_block=True,
reason="To run test and inference locally",
timeout_secs=5, # change to something slower
))
print(has_cuda)
# +
use_cuda = not args["no_cuda"] and has_cuda
# now we can set the seed
remote_torch.manual_seed(args["seed"])
device = remote_torch.device("cuda" if use_cuda else "cpu")
print(f"Data Owner device is {device.type.get()}")
# -
# if we have CUDA lets send our model to the GPU
if has_cuda:
model.cuda(device)
else:
model.cpu()
# Lets get our params, setup an optimizer and a scheduler just the same as the PyTorch MNIST example
params = model.parameters()
optimizer = remote_torch.optim.Adadelta(params, lr=args["lr"])
scheduler = remote_torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=args["gamma"])
# Next we need a training loop so we can improve our remote model. Since we want to train on remote data we should first check if the model is remote since we will be using remote_torch in this function. To check if a model is local or remote simply use the `.is_local` attribute.
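# A quick sketch of that check (illustrative only): the un-sent local copy should
# report True, while the pointer we got back from `.send()` should report False.
print(local_model.is_local, model.is_local)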
def train(model, torch_ref, train_loader, optimizer, epoch, args, train_data_length):
# + 0.5 lets us math.ceil without the import
train_batches = round((train_data_length / args["batch_size"]) + 0.5)
print(f"> Running train in {train_batches} batches")
if model.is_local:
print("Training requires remote model")
return
model.train()
for batch_idx, data in enumerate(train_loader):
data_ptr, target_ptr = data[0], data[1]
optimizer.zero_grad()
output = model(data_ptr)
loss = torch_ref.nn.functional.nll_loss(output, target_ptr)
loss.backward()
optimizer.step()
loss_item = loss.item()
train_loss = duet.python.Float(0) # create a remote Float we can use for summation
train_loss += loss_item
if batch_idx % args["log_interval"] == 0:
local_loss = None
local_loss = loss_item.get(
reason="To evaluate training progress",
request_block=True,
timeout_secs=5
)
if local_loss is not None:
print("Train Epoch: {} {} {:.4}".format(epoch, batch_idx, local_loss))
else:
print("Train Epoch: {} {} ?".format(epoch, batch_idx))
if batch_idx >= train_batches - 1:
print("batch_idx >= train_batches, breaking")
break
if args["dry_run"]:
break
# Now we can define a simple test loop very similar to the original PyTorch MNIST example.
# This function should expect a remote model from our outer epoch loop, so internally we can call `get` to download the weights to do an evaluation on our machine with our local test set. Remember, if we have trained on private data, our model will require permission to download, so we should use request_block=True and make sure the Data Owner approves our requests. For the rest of this function, we will use local `torch` as we normally would.
def test_local(model, torch_ref, test_loader, test_data_length):
# download remote model
if not model.is_local:
local_model = model.get(
request_block=True,
reason="test evaluation",
timeout_secs=5
)
else:
local_model = model
# + 0.5 lets us math.ceil without the import
test_batches = round((test_data_length / args["test_batch_size"]) + 0.5)
print(f"> Running test_local in {test_batches} batches")
local_model.eval()
test_loss = 0.0
correct = 0.0
with torch_ref.no_grad():
for batch_idx, (data, target) in enumerate(test_loader):
output = local_model(data)
iter_loss = torch_ref.nn.functional.nll_loss(output, target, reduction="sum").item()
test_loss = test_loss + iter_loss
pred = output.argmax(dim=1)
total = pred.eq(target).sum().item()
correct += total
if args["dry_run"]:
break
if batch_idx >= test_batches - 1:
print("batch_idx >= test_batches, breaking")
break
accuracy = correct / test_data_length
print(f"Test Set Accuracy: {100 * accuracy}%")
# Finally just for demonstration purposes, we will get the built-in MNIST dataset but on the Data Owners side from `remote_torchvision`.
# +
# we need some transforms for the MNIST data set
remote_torchvision = duet.torchvision
transform_1 = remote_torchvision.transforms.ToTensor() # this converts PIL images to Tensors
transform_2 = remote_torchvision.transforms.Normalize(0.1307, 0.3081) # this normalizes the dataset
remote_list = duet.python.List() # create a remote list to add the transforms to
remote_list.append(transform_1)
remote_list.append(transform_2)
# compose our transforms
transforms = remote_torchvision.transforms.Compose(remote_list)
# The DO has kindly let us initialise a DataLoader for their training set
train_kwargs = {
"batch_size": args["batch_size"],
}
train_data_ptr = remote_torchvision.datasets.MNIST(str(get_root_data_path()), train=True, download=True, transform=transforms)
train_loader_ptr = remote_torch.utils.data.DataLoader(train_data_ptr,**train_kwargs)
# +
# normally we would not necessarily know the length of a remote dataset so lets ask for it
# so we can pass that to our training loop and know when to stop
def get_train_length(train_data_ptr):
train_data_length = len(train_data_ptr)
return train_data_length
try:
if train_data_length is None:
train_data_length = get_train_length(train_data_ptr)
except NameError:
train_data_length = get_train_length(train_data_ptr)
print(f"Training Dataset size is: {train_data_length}")
# -
# ## PART 3: Training
# +
import time
args["dry_run"] = True # comment to do a full train
print("Starting Training")
for epoch in range(1, args["epochs"] + 1):
epoch_start = time.time()
print(f"Epoch: {epoch}")
# remote training on model with remote_torch
train(model, remote_torch, train_loader_ptr, optimizer, epoch, args, train_data_length)
# local testing on model with local torch
test_local(model, torch, test_loader, test_data_length)
scheduler.step()
epoch_end = time.time()
print(f"Epoch time: {int(epoch_end - epoch_start)} seconds")
if args["dry_run"]:
break
print("Finished Training")
# -
if args["save_model"]:
model.get(
request_block=True,
reason="test evaluation",
timeout_secs=5
).save("./duet_mnist.pt")
# ## PART 4: Inference
# A model would be no fun without the ability to do inference. The following code shows some examples on how we can do this either remotely or locally.
# +
import matplotlib.pyplot as plt
def draw_image_and_label(image, label):
fig = plt.figure()
plt.tight_layout()
plt.imshow(image, cmap="gray", interpolation="none")
plt.title("Ground Truth: {}".format(label))
def prep_for_inference(image):
image_batch = image.unsqueeze(0).unsqueeze(0)
image_batch = image_batch * 1.0
return image_batch
# -
def classify_local(image, model):
if not model.is_local:
print("model is remote try .get()")
return -1, torch.Tensor([-1])
image_tensor = torch.Tensor(prep_for_inference(image))
output = model(image_tensor)
preds = torch.exp(output)
local_y = preds
local_y = local_y.squeeze()
pos = local_y == max(local_y)
index = torch.nonzero(pos, as_tuple=False)
class_num = index.squeeze()
return class_num, local_y
def classify_remote(image, model):
if model.is_local:
print("model is local try .send()")
return -1, remote_torch.Tensor([-1])
image_tensor_ptr = remote_torch.Tensor(prep_for_inference(image))
output = model(image_tensor_ptr)
preds = remote_torch.exp(output)
preds_result = preds.get(
request_block=True,
reason="To see a real world example of inference",
timeout_secs=10
)
if preds_result is None:
print("No permission to do inference, request again")
return -1, torch.Tensor([-1])
else:
# now we have the local tensor we can use local torch
local_y = torch.Tensor(preds_result)
local_y = local_y.squeeze()
pos = local_y == max(local_y)
index = torch.nonzero(pos, as_tuple=False)
class_num = index.squeeze()
return class_num, local_y
# +
# lets grab something from the test set
import random
total_images = test_data_length # 10000
index = random.randint(0, total_images)
print("Random Test Image:", index)
count = 0
batch = index // test_kwargs["batch_size"]
batch_index = index % int(total_images / len(test_loader))
for tensor_ptr in test_loader:
data, target = tensor_ptr[0], tensor_ptr[1]
if batch == count:
break
count += 1
print(f"Displaying {index} == {batch_index} in Batch: {batch}/{len(test_loader)}")
if batch_index > len(data):
batch_index = 0
image_1 = data[batch_index].reshape((28, 28))
label_1 = target[batch_index]
draw_image_and_label(image_1, label_1)
# -
# classify remote
class_num, preds = classify_remote(image_1, model)
print(f"Prediction: {class_num} Ground Truth: {label_1}")
print(preds)
local_model = model.get(
request_block=True,
reason="To run test and inference locally",
timeout_secs=5,
)
# classify local
class_num, preds = classify_local(image_1, local_model)
print(f"Prediction: {class_num} Ground Truth: {label_1}")
print(preds)
# +
# We can also download an image from the web and run inference on that
from PIL import Image, ImageEnhance
import PIL.ImageOps
import os
def classify_url_image(image_url):
filename = os.path.basename(image_url)
os.system(f'curl -O {image_url}')
im = Image.open(filename)
im = PIL.ImageOps.invert(im)
# im = im.resize((28,28), Image.ANTIALIAS)
im = im.convert('LA')
enhancer = ImageEnhance.Brightness(im)
im = enhancer.enhance(3)
print(im.size)
fig = plt.figure()
plt.tight_layout()
plt.imshow(im, cmap="gray", interpolation="none")
# classify local
class_num, preds = classify_local(image_1, local_model)
print(f"Prediction: {class_num}")
print(preds)
# +
# image_url = "https://raw.githubusercontent.com/kensanata/numbers/master/0018_CHXX/0/number-100.png"
# classify_url_image(image_url)
# -
| examples/duet/mnist/MNIST_Syft_Data_Scientist.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: bioinfo
# language: python
# name: bioinfo
# ---
import pandas as pd
df = pd.read_csv('count_matrix_data/filtered.csv', sep=',', index_col=0)
df.columns
X = df.values
X.shape
number_cells = X.shape[1]
number_peaks = X.shape[0]
import numpy as np
# Tracy-Widom centering and scaling constants for the largest eigenvalue
# of a white-noise Wishart matrix (Johnstone's approximation)
muTW = (np.sqrt(number_cells - 1) + np.sqrt(number_peaks)) ** 2
sigmaTW = (np.sqrt(number_cells - 1) + np.sqrt(number_peaks)) * (1/np.sqrt(number_cells - 1) + 1/np.sqrt(number_peaks))**(1/3)
# Gram matrix of the count data (cells x cells)
sigmaHatNaive = X.T @ X
sigmaHatNaive.shape
# significance threshold for the eigenvalues, based on the Tracy-Widom distribution
bd = 3.273 * sigmaTW + muTW
bd
# number of eigenvalues above the threshold -> estimate of the number of significant components
w, v = np.linalg.eigh(sigmaHatNaive)
np.sum(w > bd)
| NumberOfClusters.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Activity #1: Heat maps
# * we'll start with building up a heat map based on some small, randomly generate data
# * we'll use this methodology to make our plot interactive & then move on to using "real" data
# lets import our usual stuff
import pandas as pd
import bqplot
import numpy as np
import traitlets
import ipywidgets
# %matplotlib inline
# # Activity #2: Preliminary dashboarding
# * we'll use a random dataset to explore how to make dashboard-like plots that change when things are updated
# +
# now lets move on to making a preliminary
#dashboard for multi-dimensional datasets
# lets first start with some randomly generated data again
# -
# # Activity #3: Dashboarding with "real" data
# * now we'll move onto the UFO dataset and start messing around with creating a dashboard for this dataset
# lets start by loading the UFO dataset
ufos = pd.read_csv("/Users/jillnaiman/Downloads/ufo-scrubbed-geocoded-time-standardized-00.csv",
names = ["date", "city", "state", "country",
"shape", "duration_seconds", "duration",
"comment", "report_date",
"latitude", "longitude"],
parse_dates = ["date", "report_date"])
# ## Aside: downsampling
# * some folks reported having a tough time with interactivity of scatter plots with the UFO dataset
# * here we'll quickly go over some methods of downsampling that can be applied to decrease the size of our dataset
# you'll see the above takes a good long time to load on my computer
# the length of the dataset is quite large:
len(ufos)
# 80,000! So, to speed up our interactivity, we can
# randomly sample this dataset for plotting purposes
# lets down sample to 1000 samples:
nsamples = 1000
#nsamples = 5000
downSampleMask = np.random.randint(0,len(ufos)-1,nsamples)
downSampleMask
# so, downsample mask is now a list of random indicies for
# the UFO dataset
# the above doesn't exclude repeats, but we can take
# care of this with a different call:
downSampleMask = np.random.choice(range(len(ufos)-1),
nsamples, replace=False)
# lets update:
ufosDS = ufos.loc[downSampleMask]
len(ufosDS)
# so much shorter
# we can also see that this is saved as a dataframe:
ufosDS
# +
# lets make a super quick scatter plot to remind ourselves what this looks like:
x_sc = bqplot.LinearScale()
y_sc = bqplot.LinearScale()
x_ax = bqplot.Axis(scale = x_sc, label='Longitude')
y_ax = bqplot.Axis(scale = y_sc,
orientation = 'vertical',
label='Latitude')
#(1)
#scatters = bqplot.Scatter(x = ufosDS['longitude'],
# y = ufosDS['latitude'],
# scales = {'x': x_sc, 'y': y_sc})
# (2) recall we can also color by things like duration
c_sc = bqplot.ColorScale()
#c_ax = bqplot.ColorAxis(scale = c_sc, label='Duration in sec', orientation = 'vertical', side = 'right')
#scatters = bqplot.Scatter(x = ufosDS['longitude'],
# y = ufosDS['latitude'],
# color=ufosDS['duration_seconds'],
# scales = {'x': x_sc, 'y': y_sc, 'color':c_sc})
# (3) again, we recall that there is a large range in durations, so
# it makes sense that we have a muted color pattern - we want
# to use a log colorscale
# with bqplot we can do this with:
c_ax = bqplot.ColorAxis(scale = c_sc, label='log(sec)',
orientation = 'vertical', side = 'right')
scatters = bqplot.Scatter(x = ufosDS['longitude'],
y = ufosDS['latitude'],
color=np.log10(ufosDS['duration_seconds']),
scales = {'x': x_sc, 'y': y_sc, 'color':c_sc})
fig = bqplot.Figure(marks = [scatters], axes = [x_ax, y_ax, c_ax])
fig
# +
# now we are going to use our heatmap idea to plot this data again
# note this will shmear out a lot of the nice map stuff we see above
# don't worry! We'll talk about making maps in the next class or so
# what should we color by? lets do by duration
# to get this to work with our heatmap, we're going
# to have to do some rebinning
# right now, our data is all in 1 long list
# we need to rebin things in a 2d histogram where
# the x axis is long & y is lat
# ***START WITH 10 EACH**
nlong = 20
nlat = 20
#(1)
hist2d, long_edges, lat_edges = np.histogram2d(ufos['longitude'],
ufos['latitude'],
weights=ufos['duration_seconds'],
bins=[nlong,nlat])
# this returns the TOTAL duration of ufo events in each bin
hist2d
# (2)
# we can also normalize the histogram by passing density=True (the old `normed`
# argument is deprecated/removed in newer NumPy); note this gives a probability
# density over the bins, NOT the average duration per bin -- for a true average
# you would divide this weighted histogram by an unweighted count histogram
hist2d, long_edges, lat_edges = np.histogram2d(ufos['longitude'],
                                   ufos['latitude'],
                                   weights=ufos['duration_seconds'],
                                   density=True,
                                   bins = [nlong,nlat])
hist2d
# (3) ok, lets go back to total duration
hist2d, long_edges, lat_edges = np.histogram2d(ufos['longitude'],
ufos['latitude'],
weights=np.log10(ufos['duration_seconds']),
bins = [nlong,nlat])
# note that the sizes of the edges & the hist are different:
hist2d.shape, long_edges.shape, lat_edges.shape
# this is because the edges are bin edges, not centers
# to get bin centers we can do:
# lets do some fancy in-line forloops
long_centers = [(long_edges[i]+long_edges[i+1])*0.5 for i in range(len(long_edges)-1)]
lat_centers = [(lat_edges[i]+lat_edges[i+1])*0.5 for i in range(len(lat_edges)-1)]
long_centers, lat_centers
# (4) note: we might want to control where our bins are, we can do this by
# specifying bin edges ourselves
long_bins = np.linspace(-150, 150, nlong+1)
lat_bins = np.linspace(-40, 70, nlat+1)
long_bins, long_bins.shape
lat_bins, lat_bins.shape
hist2d, long_edges, lat_edges = np.histogram2d(ufos['longitude'],
ufos['latitude'],
weights=ufos['duration_seconds'],
bins = [long_bins,lat_bins])
# this is because the edges are bin edges, not centers
long_centers = [(long_edges[i]+long_edges[i+1])*0.5 for i in range(len(long_edges)-1)]
lat_centers = [(lat_edges[i]+lat_edges[i+1])*0.5 for i in range(len(lat_edges)-1)]
# (5)
# again, we want to take the log scale of things
# we're going to do this by taking the log of hist2d
# but there are some zero values in this hsitogram
# if we just take the log we get -inf
np.log10(hist2d)
# this can mess up our color scheme mapping
# (6) so we are going to "trick" our color scheme like so
hist2d[hist2d <= 0] = np.nan # set zeros to NaNs
# then take log
hist2d = np.log10(hist2d)
hist2d
# (7) finally, our histogram is actually
# transposed - this is just how numpy outputs it,
# lets put the world right side up with:
hist2d = hist2d.T
# +
# now that we have all that fancy binning out of the way,
# lets proceed as normal:
# add scales - colors, x & y
col_sc = bqplot.ColorScale(scheme="RdPu",
min=np.nanmin(hist2d),
max=np.nanmax(hist2d))
x_sc = bqplot.LinearScale()
y_sc = bqplot.LinearScale()
# create axis - for colors, x & y
c_ax = bqplot.ColorAxis(scale = col_sc,
orientation = 'vertical',
side = 'right')#,
#label='log(sec)')
x_ax = bqplot.Axis(scale = x_sc, label='Longitude')
y_ax = bqplot.Axis(scale = y_sc,
orientation = 'vertical',
label = 'Latitude')
heat_map = bqplot.GridHeatMap(color = hist2d,
row = lat_centers,
column = long_centers,
scales = {'color': col_sc,
'row': y_sc,
'column': x_sc},
interactions = {'click': 'select'},
anchor_style = {'fill':'blue'},
selected_style = {'opacity': 1.0},
unselected_style = {'opacity': 1.0})
#***GO BACK AND PLAY WITH BIN SIZES***
# (2) lets add a label again to pritn duration
# create label again
mySelectedLabel = ipywidgets.Label()
def get_data_value(change):
i,j = change['owner'].selected[0]
v = hist2d[i,j] # grab data value
mySelectedLabel.value = 'Total duration in log(sec) = ' + str(v) # set our label
# make sure we check out
heat_map.observe(get_data_value, 'selected')
fig = bqplot.Figure(marks = [heat_map], axes = [c_ax, y_ax, x_ax])
#(1)
#fig
#(2)
ipywidgets.VBox([mySelectedLabel,fig])
# +
# ok, now lets build up our dashboard
# again to also show how the duration of UFO sitings in each
# selected region changes with year
# we'll do this with the same methodology we applied before
# **copy paste above***
# (1)
# (I) For the heatmap
# add scales - colors, x & y
col_sc = bqplot.ColorScale(scheme="RdPu",
min=np.nanmin(hist2d),
max=np.nanmax(hist2d))
x_sc = bqplot.OrdinalScale()
y_sc = bqplot.OrdinalScale()
# create axis - for colors, x & y
c_ax = bqplot.ColorAxis(scale = col_sc,
orientation = 'vertical',
side = 'right')
x_ax = bqplot.Axis(scale = x_sc, label='Longitude')
y_ax = bqplot.Axis(scale = y_sc,
orientation = 'vertical',
label = 'Latitude')
heat_map = bqplot.GridHeatMap(color = hist2d,
row = lat_centers,
column = long_centers,
scales = {'color': col_sc,
'row': y_sc,
'column': x_sc},
interactions = {'click': 'select'},
anchor_style = {'fill':'blue'},
selected_style = {'opacity': 1.0},
unselected_style = {'opacity': 1.0})
fig = bqplot.Figure(marks = [heat_map], axes = [c_ax, y_ax, x_ax])
# (II) Scatter plot
# scales & ax in usual way
import datetime as dt
x_scl = bqplot.DateScale(min=dt.datetime(1950,1,1),max=dt.datetime(2020,1,1)) # note: for dates on x-axis
y_scl = bqplot.LogScale()
ax_xcl = bqplot.Axis(label='Date', scale=x_scl)
ax_ycl = bqplot.Axis(label='Duration in Sec', scale=y_scl,
orientation='vertical', side='left')
# for the lineplot of duration in a region as a function of year
# lets start with a default region & year
i,j = 0,0
longs = [long_edges[i], long_edges[i+1]]
lats = [lat_edges[j],lat_edges[j+1]]
region_mask = ( (ufos['latitude'] >= lats[0]) & (ufos['latitude']<=lats[1]) &\
(ufos['longitude'] >= longs[0]) & (ufos['longitude']<=longs[1]) )
# we can see this selects for the upper right point of our heatmap
lats, longs, ufos['latitude'][region_mask]
# lets plot the durations as a function of year there
duration_scatt = bqplot.Scatter(x = ufos['date'][region_mask],
y = ufos['duration_seconds'][region_mask],
scales={'x':x_scl, 'y':y_scl})
fig_dur = bqplot.Figure(marks = [duration_scatt], axes = [ax_xcl, ax_ycl])
# create label again
mySelectedLabel = ipywidgets.Label()
def get_data_value(change):
i,j = change['owner'].selected[0]
v = hist2d[i,j] # grab data value
mySelectedLabel.value = 'Total duration in log(sec) = ' + str(v) # set our label
# make sure we connect to heatmap
#heat_map.observe(get_data_value, 'selected')
# (2) now again, we want our scatter plot to react to changes
# to what we've selected so:
def get_data_value2(change):
i,j = change['owner'].selected[0]
v = hist2d[i,j] # grab data value
mySelectedLabel.value = 'Total duration in log(sec) = ' + str(v) # set our label
    # note!! i & j are swapped here to match up with hist & selection
longs = [long_edges[j], long_edges[j+1]]
lats = [lat_edges[i],lat_edges[i+1]]
region_mask = ( (ufos['latitude'] >= lats[0]) & (ufos['latitude']<=lats[1]) &\
(ufos['longitude'] >= longs[0]) & (ufos['longitude']<=longs[1]) )
duration_scatt.x = ufos['date'][region_mask]
duration_scatt.y = ufos['duration_seconds'][region_mask]
#print(i,j)
#print(longs,lats)
#print(ufos['date'][region_mask])
# make sure we connect to heatmap
heat_map.observe(get_data_value2, 'selected')
ipywidgets.VBox([mySelectedLabel, ipywidgets.HBox([fig,fig_dur])])
# note that when I select a deep purple place, my scatter plot is
# very laggy, this makes me think we should do this with a
# histogram/bar type plot
# +
# (I) For the heatmap
# add scales - colors, x & y
col_sc = bqplot.ColorScale(scheme="RdPu",
min=np.nanmin(hist2d),
max=np.nanmax(hist2d))
x_sc = bqplot.OrdinalScale()
y_sc = bqplot.OrdinalScale()
# create axis - for colors, x & y
c_ax = bqplot.ColorAxis(scale = col_sc,
orientation = 'vertical',
side = 'right')
x_ax = bqplot.Axis(scale = x_sc, label='Longitude')
y_ax = bqplot.Axis(scale = y_sc,
orientation = 'vertical',
label = 'Latitude')
heat_map = bqplot.GridHeatMap(color = hist2d,
row = lat_centers,
column = long_centers,
scales = {'color': col_sc,
'row': y_sc,
'column': x_sc},
interactions = {'click': 'select'},
anchor_style = {'fill':'blue'},
selected_style = {'opacity': 1.0},
unselected_style = {'opacity': 1.0})
fig = bqplot.Figure(marks = [heat_map], axes = [c_ax, y_ax, x_ax])
# (II) Bar plot
# scales & ax in usual way
x_scl = bqplot.LinearScale() # note we are back to linears
y_scl = bqplot.LinearScale()
ax_xcl = bqplot.Axis(label='Date', scale=x_scl)
ax_ycl = bqplot.Axis(label='Total duration in Sec', scale=y_scl,
orientation='vertical', side='left')
# for the lineplot of duration in a region as a function of year
# lets start with a default region & year
i,j = 0,0
longs = [long_edges[i], long_edges[i+1]]
lats = [lat_edges[j],lat_edges[j+1]]
region_mask = ( (ufos['latitude'] >= lats[0]) & (ufos['latitude']<=lats[1]) &\
(ufos['longitude'] >= longs[0]) & (ufos['longitude']<=longs[1]) )
# we can see this selects for the upper right point of our heatmap
lats, longs, ufos['latitude'][region_mask]
# lets plot the durations as a function of year there
ufos['year'] = ufos['date'].dt.year
dur, dur_edges = np.histogram(ufos['year'][region_mask],
weights=ufos['duration_seconds'][region_mask],
bins=10)
# like before with our histograms
dur_centers = [(dur_edges[i]+dur_edges[i+1])*0.5 for i in range(len(dur_edges)-1)]
# make histogram by hand, weighting by duration
duration_hist = bqplot.Bars(x=dur_centers, y=dur,
scales={'x':x_scl, 'y':y_scl})
fig_dur = bqplot.Figure(marks = [duration_hist], axes = [ax_xcl, ax_ycl])
# to what we've selected so:
def get_data_value(change):
i,j = change['owner'].selected[0]
v = hist2d[i,j] # grab data value
mySelectedLabel.value = 'Total duration in log(sec) = ' + str(v) # set our label
    # note!! i & j are swapped here to match up with hist & selection
longs = [long_edges[j], long_edges[j+1]]
lats = [lat_edges[i],lat_edges[i+1]]
region_mask = ( (ufos['latitude'] >= lats[0]) & (ufos['latitude']<=lats[1]) &\
(ufos['longitude'] >= longs[0]) & (ufos['longitude']<=longs[1]) )
if len(ufos['year'][region_mask]) > 0:
dur, dur_edges = np.histogram(ufos['year'][region_mask],
weights=ufos['duration_seconds'][region_mask],
bins=10)
dur_centers = [(dur_edges[i]+dur_edges[i+1])*0.5 for i in range(len(dur_edges)-1)]
duration_hist.x = dur_centers
duration_hist.y = dur
#else:
# duration_hist.x = np.arange(10); duration_hist.y = np.zeros(10)
# make sure we connect to heatmap
heat_map.observe(get_data_value, 'selected')
fig.layout.min_width = '500px'
fig_dur.layout.min_width = '700px'
plots = ipywidgets.HBox([fig,fig_dur])
myout = ipywidgets.VBox([mySelectedLabel, plots])
myout
# -
# ## Might not get to this...
# +
col_sc = bqplot.ColorScale(scheme="RdPu",
min=np.nanmin(hist2d),
max=np.nanmax(hist2d))
x_sc = bqplot.OrdinalScale()
y_sc = bqplot.OrdinalScale()
# create axis - for colors, x & y
c_ax = bqplot.ColorAxis(scale = col_sc,
orientation = 'vertical',
side = 'right')
x_ax = bqplot.Axis(scale = x_sc, label='Longitude')
y_ax = bqplot.Axis(scale = y_sc,
orientation = 'vertical',
label = 'Latitude')
heat_map = bqplot.GridHeatMap(color = hist2d,
row = lat_centers,
column = long_centers,
scales = {'color': col_sc,
'row': y_sc,
'column': x_sc},
interactions = {'click': 'select'},
anchor_style = {'fill':'blue'},
selected_style = {'opacity': 1.0},
unselected_style = {'opacity': 1.0})
fig = bqplot.Figure(marks = [heat_map], axes = [c_ax, y_ax, x_ax])
# (II) Bar plot for durations thorugh the years
# scales & ax in usual way
x_scl = bqplot.LinearScale() # note we are back to linears
y_scl = bqplot.LinearScale()
ax_xcl = bqplot.Axis(label='Date', scale=x_scl)
ax_ycl = bqplot.Axis(label='Total duration in Sec', scale=y_scl,
orientation='vertical', side='left')
# for the lineplot of duration in a region as a function of year
# lets start with a default region & year
i,j = 0,0
longs = [long_edges[i], long_edges[i+1]]
lats = [lat_edges[j],lat_edges[j+1]]
region_mask = ( (ufos['latitude'] >= lats[0]) & (ufos['latitude']<=lats[1]) &\
(ufos['longitude'] >= longs[0]) & (ufos['longitude']<=longs[1]) )
# we can see this selects for the upper right point of our heatmap
lats, longs, ufos['latitude'][region_mask]
# lets plot the durations as a function of year there
ufos['year'] = ufos['date'].dt.year
dur, dur_edges = np.histogram(ufos['year'][region_mask],
weights=ufos['duration_seconds'][region_mask],
bins=10)
# like before with our histograms
dur_centers = [(dur_edges[i]+dur_edges[i+1])*0.5 for i in range(len(dur_edges)-1)]
# make histogram by hand, weighting by duration
duration_hist = bqplot.Bars(x=dur_centers, y=dur,
scales={'x':x_scl, 'y':y_scl})
fig_dur = bqplot.Figure(marks = [duration_hist], axes = [ax_xcl, ax_ycl])
# (III) histogram for shape
x_ord = bqplot.OrdinalScale()
y_ord = bqplot.LinearScale()
ax_xord = bqplot.Axis(label='Shape', scale=x_ord)
ax_yord = bqplot.Axis(label='Freq', scale=y_ord,
orientation='vertical',
side='left')
# histogram using pandas
hist_ord = bqplot.Bars(x=ufos['shape'][region_mask].unique(),
y=ufos['shape'][region_mask].value_counts(),
scales={'x':x_ord, 'y':y_ord})
fig_shape = bqplot.Figure(marks=[hist_ord], axes=[ax_xord,ax_yord])
# to what we've selected so:
def get_data_value(change):
i,j = change['owner'].selected[0]
v = hist2d[i,j] # grab data value
mySelectedLabel.value = 'Total duration in log(sec) = ' + str(v) # set our label
    # note!! i & j are swapped here to match up with hist & selection
longs = [long_edges[j], long_edges[j+1]]
lats = [lat_edges[i],lat_edges[i+1]]
region_mask = ( (ufos['latitude'] >= lats[0]) & (ufos['latitude']<=lats[1]) &\
(ufos['longitude'] >= longs[0]) & (ufos['longitude']<=longs[1]) )
dur, dur_edges = np.histogram(ufos['year'][region_mask],
weights=ufos['duration_seconds'][region_mask],
bins=10)
dur_centers = [(dur_edges[i]+dur_edges[i+1])*0.5 for i in range(len(dur_edges)-1)]
duration_hist.x = dur_centers
duration_hist.y = dur
# also update shapes
#print(ufos['shape'][region_mask])
hist_ord.x = ufos['shape'][region_mask].unique()
hist_ord.y = ufos['shape'][region_mask].value_counts()
# make sure we connect to heatmap
heat_map.observe(get_data_value, 'selected')
# lets make all the sizes look nice
fig_dur.layout.max_width = '400px'
fig_dur.layout.max_height= '300px'
fig_shape.layout.max_width = '400px'
fig_shape.layout.max_height= '300px'
fig.layout.min_width = '800px' # add to both
# change layout
myout = ipywidgets.VBox([mySelectedLabel, ipywidgets.HBox([fig,ipywidgets.VBox([fig_shape,fig_dur])])])
#myout = ipywidgets.VBox([mySelectedLabel,
# ipywidgets.HBox([fig_shape,fig_dur]),
# fig])
myout
# -
| week07/spring2019_prep_notebook_week06_part1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from diffprof.load_bpl_histories import load_histories, TASSO
res = load_histories(TASSO, 'conc')
halo_ids, conc_histories, log_mahs, t_bpl, lgm_min = res
# +
ihalo = np.random.randint(0, log_mahs.shape[0])
from diffprof.fit_nfw_helpers import fit_lgconc
fit_results = fit_lgconc(t_bpl, conc_histories[ihalo, :], log_mahs[ihalo, :], lgm_min)
p_best, loss_best, methd, loss_data = fit_results
# -
from diffprof.nfw_evolution import lgc_vs_lgt
lgc_bestfit = lgc_vs_lgt(np.log10(t_bpl), *p_best)
fig, ax = plt.subplots(1, 1)
__=ax.plot(t_bpl, conc_histories[ihalo, :])
__=ax.plot(t_bpl, 10**lgc_bestfit)
| notebooks/demo_concentration_fitter.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from os.path import join
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# sklearn
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
from sklearn import neighbors
from sklearn import metrics
# %matplotlib inline
# -
abalone_data = pd.read_csv(join('data', 'abalone.csv'))  # load the data
print(abalone_data.shape)
abalone_data.head(10)
#Explore data
np_abalone_data = np.array(abalone_data)
print(np_abalone_data[0:5, :])
# Divide input and output variable
datax = np_abalone_data[:, 1:]
datay = np_abalone_data[:,0]
print(datax[0:5, :])
print(datay[0:5])
# trn-tst split
trnx, tstx, trny, tsty = train_test_split(datax, datay, test_size=0.3)
print(trnx.shape, tstx.shape, trny.shape, tsty.shape)
# scaling
scaler = MinMaxScaler()
scaler.fit(trnx)
trnx_scale = scaler.transform(trnx)
tstx_scale = scaler.transform(tstx)
print(np.min(trnx_scale[:,0]) , np.max(trnx_scale[:,0]))
print(np.min(tstx_scale[:,0]) , np.max(tstx_scale[:,0]))
k=6
knn_model = neighbors.KNeighborsClassifier(n_neighbors=k)
knn_model.fit(X=trnx, y=trny)
knn_pred_trn = knn_model.predict(X=trnx)
knn_pred_tst = knn_model.predict(X=tstx)
# predict train data and test data (lazy learning)
print(knn_pred_trn)
print(knn_pred_tst)
# train error and test error
print(metrics.accuracy_score(trny, knn_pred_trn))
print(metrics.accuracy_score(tsty, knn_pred_tst))
# Decision Tree
from sklearn import tree
tree_model = tree.DecisionTreeClassifier(max_depth=5, min_samples_split=4)
tree_model.fit(X=trnx, y=trny)
tree_pred_trn = tree_model.predict(X=trnx)
tree_pred = tree_model.predict(X=tstx)
print(metrics.accuracy_score(trny, tree_pred_trn))
print(metrics.accuracy_score(tsty, tree_pred))
tree_model.feature_importances_
# draw tree graph visualization
from sklearn.tree import export_graphviz
export_graphviz(tree_model, out_file ='tree.dot')
#tree.plot_tree(tree_model)
from sklearn.metrics import confusion_matrix
confusion_matrix(tsty, tree_model.predict(tstx))
from sklearn.tree import DecisionTreeClassifier
tree_model = DecisionTreeClassifier()
tree_model.fit(X=trnx, y=trny)
# +
# NN
from sklearn.neural_network import MLPClassifier
clf = MLPClassifier(hidden_layer_sizes = (8,) , max_iter=400)
clf.fit(trnx, trny)
tsty_hat = clf.predict(tstx)
# -
print(clf)
#print(clf.loss_curve_)
print(tsty[0:10])
print(tsty_hat[0:10])
clf2 = MLPClassifier(hidden_layer_sizes=(8,13,8,), max_iter=400)
clf2.fit(trnx, trny)
tsty_hat2 = clf2.predict(tstx)
print(tsty[0:10])
print(tsty_hat2[0:10])
from sklearn.metrics import accuracy_score
print(accuracy_score(tsty, tsty_hat), accuracy_score(tsty, tsty_hat2))
| data-science/hw-3-classification.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # DSCI 572 "lecture" 5
# Lecture outline:
#
# - Video recap (15 min)
# - True/false questions (25 min)
# - Break (5 min)
# - Activation functions (5 min)
# - Deep learing software; Keras (20 min)
# - Counting parameters (10 min)
# - Wrap-up
# +
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
from mpl_toolkits.mplot3d import Axes3D
# %matplotlib inline
# -
# #### Dependencies:
#
# - tensorflow: `conda install tensorflow`
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
plt.rcParams['font.size'] = 16
# ## Video recap (10 min)
#
# We define the function recursively:
#
# $$ x^{(l+1)} = h\left( W^{(l)} x^{(l)} + b^{(l)}\right) $$
#
# where $W^{(l)}$ is a matrix of parameters, $b^{(l)}$ is a vector of parameters.
#
# So what is $x^{(l)}$?
# * $x^{(0)}$ are the inputs
# * $x^{(L)}$ are the outputs, so we can say $\hat{y}=x^{(L)}$
# * we refer to $L-1$ as the _number of hidden layers_
#
# +
def example_1_layer_nn_predict(x,W,h): # x is a vector, W is a matrix
return h(W@x)
x = np.random.rand(5) # d = 5
W = np.random.rand(2,5) # transforming from 5 dimensions to 2 dimensions
h = lambda x: x**2 # just an example, not a typical choice...
example_1_layer_nn_predict(x,W,h)
# -
# Above, the size of the input is 5 and the size of the output is 2.
# +
def example_nn_predict(x,W,h): # x is a vector, W is a _list of matrices_
for W_l in W:
x = h(W_l@x)
return x
x = np.random.rand(5)
W1 = np.random.rand(2,5)
W2 = np.random.rand(1,2)
h = lambda x: x**2
print(x)
example_nn_predict(x, [W1,W2], h)
# -
hid = h(W1@x)
hid
h(W2@hid)
# Above, the size of the input is 5 and the size of the output is 1. This could now be used for regression, if we could somehow define a loss and find the $W$ matrices that minimize it...
#
# Also:
# - the $W^{(l)}$ do _not_ need to be square.
# - the $x^{(l)}$ for $0<l<L$ are "intermediate states"
# - there are called _hidden units_ or _hidden neurons_
# - the _values_ of these units are called _activations_
# - we often refer to the elements of $W$ as "weights" and the elements of $b$ as "biases"
# - we might not apply $h$ at the last layer... more details to come.
#
# 
#
# In the diagrams above, circles are states and arrows carry weights.
#
# Important note: neural nets map from $\mathbb{R}^d\rightarrow \mathbb{R}^k$ for some arbitrary $d$ and $k$. The outputs do not have to be scalars. We will make use of this later!
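# As a quick sketch, here is the same recursion with the bias vectors $b^{(l)}$
# included (the function name below is just for illustration):

def example_nn_predict_with_bias(x, W, b, h):  # W is a list of matrices, b a list of vectors
    for W_l, b_l in zip(W, b):
        x = h(W_l @ x + b_l)
    return x

b1 = np.random.randn(2)
b2 = np.random.randn(1)
example_nn_predict_with_bias(np.random.rand(5), [W1, W2], [b1, b2], h)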
# #### Vocabulary
#
# - deep learning
# - (artificial) neural net(work)
# - NN, ANN, CNN
# - layers
# - units, neurons, activations
# - hidden, visible
# - activation function, nonlinearity
# - ReLU, sigmoid
# - backprop(agation)
# ## True/False questions (25 min)
#
# 1. Neural networks can be used for both regression and classification.
# 2. For (fully connected) neural networks, the number of parameters $\geq$ the number of features.
# 3. Linear regression is a special case of a neural network.
# 4. Neural networks are non-parametric.
#
# <br><br><br><br><br><br><br><br><br>
# 1. Any neural network with 3 hidden layers will have more parameters than any neural network with 2 hidden layers.
# 2. With neural networks, we have a potentially large number of **discrete** hyperparamaters.
# 3. With neural networks, we have a potentially large number of **discrete** parameters.
# 4. Like linear regression or logistic regression, with neural networks we can interpret each feature's weight value as a measure of the feature's importance.
#
# <br><br><br><br><br><br><br><br><br>
# ## Break (5 min)
# ## Activation functions (5 min)
#
# - $h$ is called the _activation function_.
# - Question: why do we need $h$ at all?
# - Answer: if no $h$, then we are composing a bunch of linear functions, which just leaves us with a linear function.
# - Insight: if $h$ is nonlinear, then increasing the number of "layers" increases the complexity of the overall function.
#
# In neural networks, we choose $h$ to be an _elementwise_ nonlinear function. i.e.
#
# $$h(x)\equiv\left[\begin{array}{c}h(x_1)\\h(x_2)\\ \vdots \\ h(x_d) \end{array}\right]$$
#
# Activation functions tend to be continuous, but are [not always smooth or monotonic](https://arxiv.org/pdf/1710.05941.pdf).
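# Quick numeric sanity check of the answer above (small sketch with made-up shapes):
# without a nonlinearity, stacking two linear layers collapses into a single linear
# layer with weight matrix $W^{(2)}W^{(1)}$.
A = np.random.randn(3, 5)   # first "layer"
B = np.random.randn(2, 3)   # second "layer"
v = np.random.randn(5)
print(np.allclose(B @ (A @ v), (B @ A) @ v))  # True: no extra expressiveness without h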
x = np.linspace(-10,10,1000)
h_sigmoid = lambda x: 1/(1+np.exp(-x))
plt.plot(x,h_sigmoid(x))
plt.ylim(-0.1,1.1);
plt.xlabel('x');
plt.ylabel('h(x)');
plt.title("Sigmoid (aka logistic) activation function");
# Sometimes people also use the hyperbolic tangent. It's basically the same thing but has a range of $(-1,1)$ instead of $(0,1)$.
plt.plot(x, np.tanh(x));
plt.xlabel('x');
plt.ylabel('h(x)');
plt.title("tanh activation function");
# More recently, people use the ReLU (Rectified Linear Unit)
h_relu = lambda x: np.maximum(0,x)
plt.plot(x, h_relu(x));
plt.xlabel('x');
plt.ylabel('h(x)');
plt.title("ReLU activation function");
β = 0.5
plt.plot(x, x / (1+np.exp(-β*x)));
plt.xlabel('x');
plt.ylabel('h(x)');
plt.title("swish activation function");
# - Student question: why would you ever "split" a feature into multiple values? Isn't that redundant?
# - Answer: no, it can lead to more complex functions. Let's try a regression case with 1 feature:
# +
x = np.linspace(-1,1,1000)[None]
k = 100
W1 = np.random.randn(k,1)
b1 = np.random.randn()
W2 = np.random.randn(1,k)
b2 = np.random.randn()
hid = h_relu(W1@x+b1)
y = W2 @ hid + b2
plt.plot(np.squeeze(x), np.squeeze(y));
# -
hid
# ## Deep learning software (20 min)
#
# - There's been a lot of software released lately to take care of this for you.
# - Here is some historical information, for your reference.
#
# | Name | Host language | Released | Comments | Stars on GitHub (Jan 2019) |
# |--------|-------------|---------------|----------|-----------------|
# | [Torch](http://torch.ch) | Lua | 2002 | Used at Facebook | 8k |
# | [Theano](http://deeplearning.net/software/theano/) | Python | 2007 | From U. de Montréal, going out of fashion | 9k |
# | [Caffe](http://caffe.berkeleyvision.org) | Executable with Python wrapper | 2014 | Designed for CNNs, by UC Berkeley | 27k
# | [TensorFlow](https://www.tensorflow.org) | Python | 2015 | Created by Google for both prototyping and production | 118k
# | [Keras](https://keras.io) | Python | 2015 | A front-end on top of Theano or TensorFlow | 37k
# | [PyTorch](http://pytorch.org) | Python | 2017 | Automatic differentiation through arbitrary code like Autograd | 24k |
# | [Caffe 2](https://caffe2.ai/) | Python or C++ | 2017 | Facebook, open source, [_merged with PyTorch last year_](https://caffe2.ai/blog/2018/05/02/Caffe2_PyTorch_1_0.html) | repo archived, 8k
# | [TensorFlow 2.0](https://www.tensorflow.org) | Python | 2019 | Keras incorporated into TensorFlow; other improvements | 118k
#
# - The current big players are **TensorFlow** and **PyTorch**.
# - A lot of people I talk to say PyTorch is easier to use.
# - However, I designed this course before PyTorch was created!
# - This course uses TensorFlow, specifically tf.keras.
# - I may change it someday, or may not, we'll see.
# - TensorFlow is extremely popular, and it works for our purposes.
#
# - If interested, see [comparison of deep learning software](https://en.wikipedia.org/wiki/Comparison_of_deep_learning_software).
# - scikit-learn also has its [MLPRegressor](https://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPRegressor.html) and [MLPClassifier](https://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPClassifier.html).
# - These were shown in the lecture video.
# - They have very limited functionality and generally aren't recommended.
# - Even Keras is on the user-friendly-but-less-flexible side.
# - Fun fact: these classes were contributed to scikit-learn by a UBC graduate student.
# +
# generate synthetic 1D data
np.random.seed(5)
N = 200
X = np.random.rand(N,1)
y = np.sin(2*X) + np.random.randn(N,1)*0.03
plt.figure()
plt.plot(X,y,'.',markersize=10);
# +
model = Sequential()
model.add(Dense(10, input_dim=1, activation='tanh'))
model.add(Dense(15, activation='tanh'))
model.add(Dense(1, activation='linear'))
# Compile model
# This is where the magic happens!
model.compile(loss='mean_squared_error', optimizer="adam")
# Fit the model
# loss=model.evaluate(X, y,verbose=0)
# print(loss)
model.fit(X, y, epochs=1000, verbose=0)
# evaluate the model
loss = model.evaluate(X, y,verbose=0)
print(loss)
# -
# Note: in scikit-learn, we use one line of Python code to set up the model and set the hyperparameters. For example:
#
# ```python
# model = SVC(C=5, gamma=0.1)
# ```
#
# In Keras, we use multiple lines of Python code to set up the model. For example:
#
# ```python
# model = Sequential()
# model.add(Dense(10, input_dim=1, activation='tanh', kernel_initializer='lecun_uniform',))
# model.add(Dense(10, activation='tanh', kernel_initializer='lecun_uniform',))
# model.add(Dense(1, activation='linear', kernel_initializer='lecun_uniform',))
# ```
#
# - This design decision just makes the code more human-readable.
# - One line of code per layer of the network.
# - But there's nothing fundamental about it, one could also put the architecture/hypers on one line, in a file, etc.
plt.plot(X,y,'.',markersize=10,label="data")
grid = np.linspace(0,1,1000)[:,None]
plt.plot(grid, model.predict(grid),linewidth=5,label="model")
plt.legend();
# (Repeat from video) we can also look at regression surfaces with random weights:
# +
# random weights
model = Sequential()
model.add(Dense(50, input_dim=2, activation='tanh', kernel_initializer='lecun_uniform'))
model.add(Dense(1, activation='linear', kernel_initializer='lecun_uniform',))
model.compile(loss='mean_squared_error', optimizer='sgd')
fig = plt.figure()
ax = fig.add_subplot(projection='3d')  # fig.gca(projection=...) was removed in newer matplotlib
n = 100
X = np.linspace(-5, 5, n)
Y = np.linspace(-5, 5, n)
X, Y = np.meshgrid(X, Y)
inputs = np.append(X.flatten()[:,None], Y.flatten()[:,None],axis=1)
outputs = model.predict(inputs)
Z = np.reshape(outputs, [n,n])
# Plot the surface.
surf = ax.plot_surface(X, Y, Z, cmap=cm.coolwarm, linewidth=0)
# -
# ## Parameters and hyperparameters (20 min)
model = Sequential()
model.add(Dense(10, input_dim=1, activation='tanh', kernel_initializer='lecun_uniform',))
model.add(Dense(5, activation='tanh', kernel_initializer='lecun_uniform',))
model.add(Dense(1, activation='linear', kernel_initializer='lecun_uniform',))
# +
# if you install graphviz and pydot, the following will "draw" the network architecture
# however I find it more confusing than helpful
# from keras.utils.vis_utils import model_to_dot
# import graphviz
# graphviz.Source(model_to_dot(model, show_shapes=True))
# -
model.summary()
# Below: we can inspect the weights themselves, and print out their shapes:
for W in model.get_weights():
print(W.shape)
# WARNING: Keras uses the opposite notation that we use. Here, if the shape of $W$ is $1\times 10$ that means transforming from $1$ number to $10$ numbers. In our notation, it's the other way around: if $W$ is $1 \times 10$ that means transforming from $10$ numbers to $1$ number. You'll see both notations out in the wild.
#
# - The Keras notation is more intuitive in that the shapes are $(d_\text{in},d_\text{out})$ rather than $(d_\text{out},d_\text{in}$).
# - Our notation is more convenient because we can write $Wx$ instead of $W^Tx$ everywhere.
# - And this also makes it consistent with notation from CPSC 340.
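# Tiny illustration (sketch) with the model defined above: the first weight matrix
# Keras reports has shape (1, 10), i.e. (d_in, d_out); in the $Wx$ notation from the
# video recap it would be the transpose, with shape (10, 1).
W0 = model.get_weights()[0]
print(W0.shape, W0.T.shape)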
# #### Number of parameters
#
# - There's no agreed upon convention for the number of layers.
# - Let's say, for our purposes, that $x^{(0)}$ is an input and $x^{(L)}$ is an output, which means we have $L+1$ total layers and $L-1$ hidden layers.
# - Let $d_0,d_1,d_2,\ldots,d_L$ be the dimensionality of the layers.
# - So $d_0=d$ (input layer) and $d_L=k$ (output layer).
# - We get to pick the dimensionality of each of the hidden layers, i.e. $d_1,\ldots,d_{L-1}$; these are hyperparameters.
# #### In general, how many parameters do we have?
#
# - Weights: $d_0d_1 + d_1d_2 + \ldots + d_{L-1}d_L$
# - Biases: $d_1 + d_2 + \ldots + d_L$
# - Total:
#
# $$(d_0+1)d_1 + (d_1+1)d_2 + \ldots + (d_{L-1}+1)d_L$$
#
# - Again, this is just bookkeeping. But it's important to realize that **this is potentially a lot of parameters!!**
# - Example: in the 3Blue1Brown video, we had 13,002 parameters.
# - Example: let $L=3,d_0=d_1=d_2=d_3=1000$. Then we have millions of parameters!
# - Sometimes we have billions.
# - So we have a million-dimensional non-convex optimization problem. That's hard. More on this next class.
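# Quick check of the formula above (sketch): assuming the 3Blue1Brown network has
# layer sizes 784, 16, 16, 10, the formula reproduces the 13,002 parameters quoted above.
dims = [784, 16, 16, 10]
print(sum((d_in + 1) * d_out for d_in, d_out in zip(dims[:-1], dims[1:])))  # 13002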
model = Sequential()
model.add(Dense(1000, input_dim=1000)) # d0=1000, d1=1000
model.summary()
model.add(Dense(1000)) # d2=1000
model.summary()
model.add(Dense(1000)) # d3=1000
model.summary()
# ## Wrap-up
#
# #### Summary:
#
# - "Neural networks" just refers to a class of functions.
# - These functions are weird because they're defined recursively.
# - So you can think of them having an intermediate ("hidden") state.
# - Each layer is followed by an activation function, that's applied elementwise to each "unit".
# - A lot of deep learning software has emerged recently; we'll use Keras.
# - Compared to other methods we've studied, neural networks have a lot of parameters and a lot of hyperparameters.
#
# #### Preview of next class:
#
# - Today we talked about what a neural network is.
# - Next class we'll talk about how to train it.
# - It's a bit more complicated than just "apply gradient descent" because:
# - The loss is non-convex.
# - The number of parameters is potentially huge (searching a very high-dimensional space).
# - The number of training examples often needs to be huge (computation is slow).
# - Tomorrow in DSCI 573 you will (hopefully) be discussing regularization, which we'll be using in this class right away.
# - We need methods for combatting overfitting because the number of parameters can get so large.
| data/text/lecture5_neural-networks.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
from datetime import date
ventas = pd.read_csv('../data/raw/ventas.txt', sep='|')
ventas.rename(columns={'fecha':'fecha_venta'}, inplace=True)
ventas['fecha_venta'] = pd.to_datetime(ventas['fecha_venta'])
ventas['fecha_venta_norm'] = ventas['fecha_venta'].apply(lambda x : date(x.year,x.month,1))
# +
aggregation = {
'unidades': {
'unidades': 'sum',
'unidades_max': 'max',
'unidades_min': 'min',
'unidades_avg': 'mean',
'num_ventas' : 'count'
},
'fecha_venta':{
'fecha_venta_max': 'max',
'fecha_venta_min': 'min'
}
}
# -
ventas = ventas.groupby(['id_pos','fecha_venta_norm','canal']).agg(aggregation).reset_index()
ventas.head()
# Rename the columns
ventas.columns = ['id_pos' , 'fecha_venta_norm' , 'canal', 'unidades',
'unidades_max', 'unidades_min', 'unidades_avg', 'num_ventas',
'fecha_venta_max', 'fecha_venta_min'
]
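# Note (added): the nested-dict style of .agg() used above was deprecated and later removed
# in pandas (it raises a SpecificationError in pandas >= 1.0). A sketch of the equivalent
# named-aggregation form (pandas >= 0.25), which also avoids renaming the columns by hand,
# is left commented out here so the aggregation is not run twice:
# ventas = ventas.groupby(['id_pos', 'fecha_venta_norm', 'canal']).agg(
#     unidades=('unidades', 'sum'),
#     unidades_max=('unidades', 'max'),
#     unidades_min=('unidades', 'min'),
#     unidades_avg=('unidades', 'mean'),
#     num_ventas=('unidades', 'count'),
#     fecha_venta_max=('fecha_venta', 'max'),
#     fecha_venta_min=('fecha_venta', 'min'),
# ).reset_index()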
# Load Pos data
pos = pd.read_csv('../data/raw/pos.csv')
pos = pos[pos['id_pos'].isnull() != True]
pos = pos[pos['id_pos']!='Not Available']
ventas.shape
# Convert to int so we can join with the sales data
pos['id_pos'] = pos['id_pos'].astype(int)
train = pd.merge(ventas, pos, how='inner', on='id_pos')
# + active=""
# train.rename(columns={'fecha':'fecha_venta'}, inplace=True)
# -
print("Num. id_pos de ventas: ",ventas.id_pos.nunique())
print("Num. id_pos de train (ventas x pos) : ",train.id_pos.nunique())
print("Num. id_pos no encontrados: " , ventas.id_pos.nunique() - train.id_pos.nunique())
train.to_csv('../data/processed/train_aggr.csv', sep=';', index=False)
# NOTE: `submittion` is not defined in this notebook; it is assumed to be a submission
# DataFrame loaded elsewhere (e.g., from a sample-submission file).
submittion[submittion.id_pos.isin(pos.id_pos)]['id_pos'].nunique()
| notebooks/load_train_data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_python3
# language: python
# name: conda_python3
# ---
# +
# %matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
# http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html
from sklearn.decomposition import PCA
# http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html
import boto3
import sagemaker.amazon.common as smac
# -
# ## Kaggle Bike Sharing Demand Dataset Normalization
# Normalize 'temp','atemp','humidity','windspeed' and store the train and test files
# +
columns = ['count', 'season', 'holiday', 'workingday', 'weather', 'temp',
'atemp', 'humidity', 'windspeed', 'year', 'month', 'day', 'dayofweek','hour']
cols_normalize = ['temp','atemp','humidity','windspeed']
# -
df = pd.read_csv('train.csv', parse_dates=['datetime'])
df_test = pd.read_csv('test.csv', parse_dates=['datetime'])
# We need to convert datetime to numeric for training.
# Let's extract key features into separate numeric columns
def add_features(df):
df['year'] = df['datetime'].dt.year
df['month'] = df['datetime'].dt.month
df['day'] = df['datetime'].dt.day
df['dayofweek'] = df['datetime'].dt.dayofweek
df['hour'] = df['datetime'].dt.hour
add_features(df)
add_features(df_test)
df["count"] = df["count"].map(np.log1p)
df.head(2)
df_test.head(2)
# Normalize the dataset
scaler = StandardScaler()
# +
# Normalization parameters based on Training
scaler.fit(df[cols_normalize])
# -
def transform_data(scaler, df, columns):
transformed_data = scaler.transform(df[columns])
df_transformed = pd.DataFrame(transformed_data, columns=columns)
for col in df_transformed.columns:
df[col] = df_transformed[col]
transform_data(scaler, df, cols_normalize)
transform_data(scaler, df_test, cols_normalize)
df.head(2)
df_test.head(2)
# Store Original train and test data in normalized form
df.to_csv('train_normalized.csv',index=False, columns=columns)
df_test.to_csv('test_normalized.csv',index=False)
# +
# Store only the 4 numeric columns for PCA training and test
# Data needs to be normalized
# -
def write_recordio_file (filename, x, y=None):
with open(filename, 'wb') as f:
smac.write_numpy_to_dense_tensor(f, x, y)
# Store All Normalized data as RecordIO File for PCA Training in SageMaker
# Need to pass as an array to create RecordIO file
X = df[['temp','atemp','humidity','windspeed']].values
write_recordio_file('bike_train_numeric_columns.recordio',X)
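# Next step (added sketch): the RecordIO file must be uploaded to S3 before a SageMaker
# PCA training job can read it. The bucket and key names below are placeholders, not
# values from this notebook.
s3_bucket = 'my-sagemaker-bucket'   # placeholder
s3_key = 'biketrain/bike_train_numeric_columns.recordio'   # placeholder
boto3.resource('s3').Bucket(s3_bucket).Object(s3_key).upload_file('bike_train_numeric_columns.recordio')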
| 6 PCA/BikeSharingRegression/sdk1.7/biketrain_data_preparation_normalized.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .sos
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: SoS
# language: sos
# name: sos
# ---
# + kernel="SoS"
# %preview data
import pandas as pd
import numpy as np
data = pd.DataFrame(np.random.randn(6,4), columns=list('ABCD'))
# + kernel="R"
# %get data
data <- data + 1
data
# + kernel="JavaScript"
# %get data
data
# -
| development/docker-demo/examples/HomePage_Example_1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
import pandas as pd
import numpy as np
import matplotlib.pyplot as pl
from glob import glob
import os, time
import xavierUtils as xu
import datetime as dt
# %autosave 1000000
# ### Loading the data:
anoFiles = glob('../dados/despesas/baixado_2019-04-26/Ano-*.csv')
anoFiles.sort()
fileDate = pd.to_datetime(pd.Series([time.ctime(os.path.getmtime(anoFiles[-1]))]))[0]
despesas = pd.concat([pd.read_csv(a, sep=';') for a in anoFiles], ignore_index=True)
# despesas['numDia'] = despesas.datEmissao.str.split('T').str.get(0).str.split('-').str.get(-1)
colNames = despesas.columns.values
# ### Missing data:
xu.checkMissing(despesas)
print ', '.join(despesas.loc[despesas.ideCadastro.isnull()].txNomeParlamentar.unique())
# ### Checking deputy/entity identifiers:
def ColunasBiunivocas(base, colA, colB):
    # True when every value of colA maps to exactly one value of colB
    temp = base[[colA,colB]].groupby(colA)[colB].nunique()
    return len(temp)==np.sum(temp)
print ColunasBiunivocas(despesas.loc[despesas.ideCadastro.isnull()==False],'ideCadastro','nuDeputadoId')
print ColunasBiunivocas(despesas,'ideCadastro','nuDeputadoId')
print ColunasBiunivocas(despesas,'txNomeParlamentar','nuDeputadoId')
print ColunasBiunivocas(despesas,'nuDeputadoId','txNomeParlamentar')
# Names with more than one deputy ID:
numIdByDep = despesas.groupby('txNomeParlamentar')['nuDeputadoId'].nunique()
numIdByDep.loc[numIdByDep>1]
# IDs of the party leaderships:
despesas.loc[despesas.ideCadastro.isnull()].groupby('txNomeParlamentar')['nuDeputadoId'].unique()
# +
# CONCLUSION: nuDeputadoId is the best identifier; leaderships/parties are identified by the
# absence of ideCadastro.
# -
# ### Other columns:
xu.mapUnique(despesas)
# ### Checking the values:
# Negative document values are unused airline tickets, according to dadosabertos:
print ', '.join(despesas.loc[despesas.vlrDocumento<0].txtDescricao.unique())
# There are negative net values that are not airfare, and those look odd.
print ', '.join(despesas.loc[despesas.vlrLiquido<0].txtDescricao.unique())
#despesas.loc[(despesas.vlrLiquido<0) &
# (despesas.numSubCota!=999)].to_csv('../dados/despesas_com_vlrLiquido_estranhos.csv',
# index=False, sep=';')
# When a net value exists, there is no refund value:
despesas.loc[despesas.vlrLiquido!=0]['vlrRestituicao'].unique()
# When a refund value exists, there is no net value:
print '# Restituições com valor líquido não-nulo:',\
len(despesas.loc[(despesas.vlrRestituicao.isnull()==False)&(despesas.vlrLiquido!=0)])
# Refund values give back almost everything:
print ', '.join(despesas.loc[(despesas.vlrRestituicao.isnull()==False)]['txtDescricao'].unique())
despesas.loc[(despesas.nuDeputadoId==1810) & (despesas.numSubCota==3) & (despesas.numAno==2014) &
(despesas.vlrLiquido<0)]\
[['txNomeParlamentar','numAno','numMes','txtDescricao','vlrDocumento','vlrGlosa','vlrLiquido','vlrRestituicao']]
despesas.loc[(despesas.nuDeputadoId==620) & (despesas.numSubCota==10) & (despesas.numAno==2011) &
(despesas.vlrLiquido==0.19)]\
[['txNomeParlamentar','numAno','numMes','txtDescricao','vlrDocumento','vlrGlosa','vlrLiquido','vlrRestituicao']]
despesas.loc[(despesas.vlrLiquido<0) & (despesas.numSubCota!=999)][['nuDeputadoId','numSubCota','numAno']]
# ### Deflating (adjusting for inflation)
# +
# Loading the IPCA (inflation index):
ipca = pd.read_csv('../dados/economicos/ipca_2019-04-26.csv')
ipca['data'] = pd.to_datetime([str(ano)+'-'+str(mes).zfill(2)+'-01' for ano,mes in zip(ipca.ano.values,ipca.mes.values)])
# Joining the IPCA onto the expenses table:
despesas['dataCompetencia'] = pd.to_datetime([str(ano)+'-'+str(mes).zfill(2)+'-01'
for ano,mes in zip(despesas.numAno.values,despesas.numMes.values)])
despesas = despesas.join(ipca.set_index('data',drop=True)['indice'], on='dataCompetencia')
despesas.rename(axis='columns',mapper={'indice':'ipca'},inplace=True)
# -
# Deflate to January 2019:
ipcaRef = ipca.loc[(ipca.ano==2019)&(ipca.mes==1)]['indice'].values[0]
print ipcaRef
# Compute real (inflation-adjusted) values:
despesas['vlrLiqReal'] = despesas['vlrLiquido']/despesas['ipca']*ipcaRef
despesas['vlrRestReal'] = despesas['vlrRestituicao']/despesas['ipca']*ipcaRef
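# The two lines above apply the usual price-index adjustment (note added for clarity):
# real value = nominal value * (IPCA_ref / IPCA_month), with IPCA_ref taken at January 2019,
# so every expense is expressed in January 2019 reais.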
# ### Spending by type
gastosByTipo = despesas.loc[despesas.numMes<2018].groupby('txtDescricao')['vlrLiqReal'].sum()
gastosTotais = gastosByTipo.sum()
fracGastosByTipo = (gastosByTipo/gastosTotais).sort_values()
fracGastosByTipo
# +
gastosByTipoLabels = map(lambda s: unicode(s[0],'utf-8')+unicode(s[1:],'utf-8').lower(), fracGastosByTipo.index.values)
x = np.arange(1,len(fracGastosByTipo)+1)
pl.figure(figsize=(5,10))
pl.barh(x,100*fracGastosByTipo.values)
pl.yticks(x,gastosByTipoLabels)
pl.xlabel(u'% do total de gastos', fontsize=16)
pl.gca().tick_params(labelsize=14)
#xu.saveFigWdate('graficos/total-despesas-por-tipo.pdf')
pl.show()
# -
xu.one2oneQ(despesas,'txtDescricao','numSubCota')
despesas.groupby('txtDescricao')['numSubCota'].unique().sort_index()
# ### Computing the amount spent
# +
# Airfare:
#despSel = despesas.loc[despesas.numSubCota.isin([9,999,119])]
# Publicity:
#despSel = despesas.loc[despesas.numSubCota.isin([5])]
# Consulting:
#despSel = despesas.loc[despesas.numSubCota.isin([4])]
# Everything:
despSel = despesas.loc[despesas.dataCompetencia>'2009-06-01']
restituicoes = despSel.groupby(['numAno','numMes'])['vlrRestReal'].sum()
reembolsos = despSel.groupby(['numAno','numMes'])['vlrLiqReal'].sum()
gastosMensais = pd.DataFrame(restituicoes).join(reembolsos, how='outer')
gastosMensais['vlrGastoReal'] = gastosMensais.vlrLiqReal - gastosMensais.vlrRestReal
gReal = gastosMensais.vlrGastoReal.values
# -
from statsmodels.tsa.seasonal import seasonal_decompose
g1st = gastosMensais.index[0]
gTest = gReal[gastosMensais.index.get_level_values('numAno')<2018]
gastosIdx = pd.date_range(start=str(g1st[0])+'-'+str(g1st[1]).zfill(2)+'-01', periods=len(gTest),freq='M')
gastosTS = pd.Series(gTest, index=gastosIdx)
result = seasonal_decompose(gastosTS, model='additive')
fullIdx = pd.date_range(start=str(g1st[0])+'-'+str(g1st[1]).zfill(2)+'-01', periods=len(gReal),freq='M')
fullTS = pd.Series(gReal, index=fullIdx)
gastosGrid = pd.Series(fullIdx)
anoLegislatura = gastosGrid[(gastosGrid.dt.month.values==1)&(gastosGrid.dt.year.isin([2011,2015,2019]))]
# +
scaleTxt = {0:u'reais',3:u'milhares de reais',6:u'milhões de reais',9:u'bilhões de reais'}
scaleExp = (int(np.log10(np.mean(gReal)))/3*3)
scale = 10**scaleExp
ymin = gReal[gReal>0].min()/scale
ymax = gReal[gReal>0].max()/scale
deltay = ymax-ymin
gMean = np.mean(gTest)/scale
fig = pl.figure(figsize=(11,7))
# Observed:
ax1 = pl.subplot(4,1,1)
pl.text(0.93,0.9,'Gastos reais', horizontalalignment='right', verticalalignment='top', transform=pl.gca().transAxes,
fontsize=16)
pl.plot(fullTS/scale)
pl.ylim([ymin,ymax])
pl.gca().tick_params(labelsize=14)
pl.axhline(gMean,color='firebrick')
# Format x-axis:
pl.xticks(gastosGrid[gastosGrid.dt.month.values==1].values)
pl.grid(axis='x', linestyle='--')
pl.gca().tick_params(labelbottom=False)
# New legislative terms:
[pl.axvline(a, color='k') for a in anoLegislatura]
# Trend:
pl.subplot(4,1,2, sharex=ax1)
pl.text(0.93,0.9,u'Tendência', horizontalalignment='right', verticalalignment='top', transform=pl.gca().transAxes,
fontsize=16)
pl.plot(result.trend/scale)
pl.ylim([ymin,ymax])
pl.gca().tick_params(labelsize=14)
pl.axhline(gMean,color='firebrick')
# Format x-axis:
pl.xticks(fullIdx[fullIdx.month==1])
pl.grid(axis='x', linestyle='--')
pl.gca().tick_params(labelbottom=False)
# New legislative terms:
[pl.axvline(a, color='k') for a in anoLegislatura]
# Seasonal:
pl.subplot(4,1,3, sharex=ax1)
pl.text(0.93,0.9,'Sazonalidade', horizontalalignment='right', verticalalignment='top', transform=pl.gca().transAxes,
fontsize=16)
pl.plot(result.seasonal/scale)
pl.ylim([-deltay/2, deltay/2])
pl.gca().tick_params(labelsize=14)
# Format x-axis:
pl.xticks(fullIdx[fullIdx.month==1])
pl.grid(axis='x', linestyle='--')
pl.gca().tick_params(labelbottom=False)
pl.axhline(0,color='firebrick')
# New legislative terms:
[pl.axvline(a, color='k') for a in anoLegislatura]
# Residual:
pl.subplot(4,1,4, sharex=ax1)
pl.text(0.93,0.9,u'Resíduo', horizontalalignment='right', verticalalignment='top', transform=pl.gca().transAxes,
fontsize=16)
pl.plot(result.resid/scale)
pl.ylim([-deltay/2, deltay/2])
pl.gca().tick_params(labelsize=14)
# Format x-axis:
pl.xticks(fullIdx[fullIdx.month==1])
pl.grid(axis='x', linestyle='--')
pl.axhline(0,color='firebrick')
pl.xlabel('Ano', fontsize=16)
# New legislative terms:
[pl.axvline(a, color='k') for a in anoLegislatura]
# Shared x label:
axComum = fig.add_subplot(111, frameon=False)
pl.tick_params(labelcolor='none', top=False, bottom=False, left=False, right=False)
axComum.set_ylabel(u'Gastos em '+scaleTxt[scaleExp]+' de Jan/2019', fontsize=16)
# Final tweaks:
pl.gca().tick_params(labelsize=14)
pl.subplots_adjust(hspace=0.1)
#xu.saveFigWdate('graficos/despesas-reais-e-sazonalidade.pdf')
pl.show()
# -
result.seasonal
# ### Deflating and aggregating values by month and year
# +
#### ATTENTION!!! The expenses of the last month need to be corrected for the number of days it
#### actually covers, and the IPCA needs to be extrapolated for the months that are still missing!!
# -
from sklearn import datasets, linear_model
from sklearn.metrics import mean_squared_error, r2_score
# +
# Compute total expenses and group by year and month:
restituicoes = despesas.groupby(['numAno','numMes'])['vlrRestituicao'].sum()
reembolsos = despesas.groupby(['numAno','numMes'])['vlrLiquido'].sum()
gastosMensais = pd.DataFrame(restituicoes).join(reembolsos, how='outer')
dTotal = gastosMensais.vlrLiquido - gastosMensais.vlrRestituicao
dNum = despesas.groupby(['numAno','numMes']).size()
mesSeq = np.arange(1,len(dTotal)+1)
dTotalArr = np.array(dTotal)
dNumArr = np.array(dNum)
# -
pl.figure(figsize=(15,5))
pl.plot(mesSeq,dNumArr,'r-')
pl.axvline(0*12+1,color='black',linestyle='-')
pl.axvline(1*12+1,color='gray',linestyle='--')
pl.axvline(2*12+1,color='gray',linestyle='--')
pl.axvline(3*12+1,color='gray',linestyle='--')
pl.axvline(4*12+1,color='black',linestyle='-')
pl.axvline(5*12+1,color='gray',linestyle='--')
pl.axvline(6*12+1,color='gray',linestyle='--')
pl.axvline(7*12+1,color='gray',linestyle='--')
pl.axvline(8*12+1,color='black',linestyle='-')
pl.xlabel('# do mes a partir de Jan/2011')
pl.ylabel('# de despesas registradas')
pl.show()
pl.figure(figsize=(15,5))
pl.plot(mesSeq,dTotalArr,'r-')
pl.axvline(0*12+1,color='black',linestyle='-')
pl.axvline(1*12+1,color='gray',linestyle='--')
pl.axvline(2*12+1,color='gray',linestyle='--')
pl.axvline(3*12+1,color='gray',linestyle='--')
pl.axvline(4*12+1,color='black',linestyle='-')
pl.axvline(5*12+1,color='gray',linestyle='--')
pl.axvline(6*12+1,color='gray',linestyle='--')
pl.axvline(7*12+1,color='gray',linestyle='--')
pl.axvline(8*12+1,color='black',linestyle='-')
pl.xlabel('# do mes a partir de Jan/2011')
pl.ylabel('Despesas totais nominais')
pl.show()
# Loading the IPCA:
ipca = pd.read_csv('../dados/economicos/ipca_2019-04-15.csv')
ipca = ipca.reindex(index=ipca.index[::-1]).reset_index(drop=True)
ipcaIdx = ipca.loc[ipca.ano>=2011].indice
ipcaMesSeq = np.arange(1,1+len(ipcaIdx))
# Linear fit to the IPCA:
linReg = linear_model.LinearRegression()
linReg.fit(np.transpose([ipcaMesSeq]),ipcaIdx)
ipcaPred = linReg.predict(np.transpose([ipcaMesSeq]))
# Extrapolate the IPCA for the missing months:
ipcaExtrapMes = np.arange(ipcaMesSeq[-1],1+ipcaMesSeq[-1]+len(dTotalArr)-len(ipcaMesSeq))
ipcaExtrapIdx = linReg.predict(np.transpose([ipcaExtrapMes]))
ipcaExtrapIdx = ipcaExtrapIdx - ipcaExtrapIdx[0] + ipcaIdx.iloc[-1]
pl.plot(ipcaMesSeq,ipcaIdx,'b-',label='Real')
pl.plot(ipcaMesSeq,ipcaPred,'r-',label='Linear')
pl.plot(ipcaExtrapMes,ipcaExtrapIdx,'k-',label='Extrap.')
pl.legend()
pl.show()
ipcaFinal = np.concatenate((ipcaIdx,ipcaExtrapIdx[1:]))
pl.figure(figsize=(15,5))
pl.plot(mesSeq,dTotalArr/ipcaFinal*ipcaFinal[-1],'r-')
pl.axvline(0*12+1,color='black',linestyle='-')
pl.axvline(1*12+1,color='gray',linestyle='--')
pl.axvline(2*12+1,color='gray',linestyle='--')
pl.axvline(3*12+1,color='gray',linestyle='--')
pl.axvline(4*12+1,color='black',linestyle='-')
pl.axvline(5*12+1,color='gray',linestyle='--')
pl.axvline(6*12+1,color='gray',linestyle='--')
pl.axvline(7*12+1,color='gray',linestyle='--')
pl.axvline(8*12+1,color='black',linestyle='-')
pl.xlabel('# do mes a partir de Jan/2011')
pl.ylabel('Despesas totais reais')
pl.axhline(2e7,color='g')
pl.show()
from statsmodels.tsa.seasonal import seasonal_decompose
gastosTS = pd.Series(dTotalArr, index=pd.date_range(start='2011-01-01',periods=100,freq='M'))
result = seasonal_decompose(gastosTS/ipcaFinal*ipcaFinal[-1], model='additive')
result.plot()
pl.show()
# ### Spending by category
despesas.groupby('numSubCota')['txtDescricao'].unique().apply(lambda x: x[0])
hyperClasse = dict(zip(sorted([ 1, 3, 5, 9, 13, 14, 15, 4, 10, 11, 999, 8, 12,
122, 120, 119, 121, 123, 137]),['Escritorio de apoio','Transporte terrestre','Informacao',
'Divulgacao','Outros', 'Transporte aereo','Comunicacao',
'Comunicacao','Informacao','Alimentacao','Hospedagem',
'Transporte terrestre','Transporte aereo','Transporte terrestre',
'Transporte terrestre', 'Transporte terrestre',
'Transporte terrestre', 'Informacao','Transporte aereo']))
anoBase = 2017
nGastos = despesas.loc[despesas.numAno==anoBase].groupby('numSubCota').size()
restituicoes = despesas.loc[despesas.numAno==anoBase].groupby('numSubCota')['vlrRestituicao'].sum()
reembolsos = despesas.loc[despesas.numAno==anoBase].groupby('numSubCota')['vlrLiquido'].sum()
gastosByTipo = pd.DataFrame(restituicoes).join(reembolsos, how='outer')
# +
nGastos = despesas.groupby('numSubCota').size()
restituicoes = despesas.groupby('numSubCota')['vlrRestituicao'].sum()
reembolsos = despesas.groupby('numSubCota')['vlrLiquido'].sum()
gastosByTipo = pd.DataFrame(restituicoes).join(reembolsos, how='outer')
gastosByTipo['fracRestituido'] = gastosByTipo.vlrRestituicao/gastosByTipo.vlrLiquido
gastosByTipo['vlrFinal'] = gastosByTipo.vlrLiquido - gastosByTipo.vlrRestituicao
gastosByTipo['fracTotal'] = gastosByTipo.vlrFinal / gastosByTipo.vlrFinal.sum()
gastosByTipo['hyperClasse'] = np.vectorize(hyperClasse.get)(gastosByTipo.index.values)
# -
gastosByTipo.groupby('hyperClasse')['fracTotal'].sum()
fracTipoGasto = gastosByTipo.groupby('hyperClasse')['fracTotal'].sum().sort_values(ascending=True)
pl.pie(fracTipoGasto.values, startangle=90, labels=fracTipoGasto.index.values)
pl.show()
# # Scrap
# Function that was meant to mark special years and months; abandoned:
def seasonMarks(mes0,ano0,mes1,ano1):
nSteps = (ano1-ano0)*12 + (mes1-mes0)
despesas.numDia.fillna('01', inplace=True)
ultimasD = despesas.iloc[-500000:]
ultimasD = pd.to_datetime(ultimasD.numAno*10000+ultimasD.numMes*100+ultimasD.numDia.astype(int),
format='%Y%m%d', errors='coerce')
ultimasD.loc[ultimasD.isnull()==False].sort_values().iloc[-1]
# +
pl.figure(figsize=(15,5))
gReal = gastosMensais.vlrGastoReal.values
mesSeq = np.arange(1,len(gReal)+1)
pl.plot(mesSeq,gReal,'r-')
pl.axvline(0*12+1,color='gray',linestyle='--')
pl.axvline(1*12+1,color='black',linestyle='-')
pl.axvline(2*12+1,color='gray',linestyle='--')
pl.axvline(3*12+1,color='gray',linestyle='--')
pl.axvline(4*12+1,color='gray',linestyle='--')
pl.axvline(5*12+1,color='black',linestyle='-')
pl.axvline(6*12+1,color='gray',linestyle='--')
pl.axvline(7*12+1,color='gray',linestyle='--')
pl.axvline(8*12+1,color='gray',linestyle='--')
pl.axvline(9*12+1,color='black',linestyle='-')
pl.xlabel('# do mes a partir de Jan/2011')
pl.ylabel('Despesas totais reais')
pl.axhline(np.mean(gReal),color='g')
pl.show()
# -
| notebooks/analise_das_despesas.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
## Based on
# - https://towardsdatascience.com/an-introduction-to-t-sne-with-python-example-5a3a293108d1
# - https://www.scikit-yb.org/en/latest/quickstart.html
# - https://www.scikit-yb.org/en/latest/api/text/tsne.html
# +
## Install if needed
# #!pip install yellowbrick
# +
from sklearn.feature_extraction.text import TfidfVectorizer
from yellowbrick.text import TSNEVisualizer
from yellowbrick.datasets import load_hobbies
# -
# Load the data and create document vectors
# - Data details: https://www.scikit-yb.org/en/latest/api/datasets/hobbies.html
corpus = load_hobbies()
tfidf = TfidfVectorizer()
# Get right representation
X = tfidf.fit_transform(corpus.data)
y = corpus.target
# Create the visualizer and draw the vectors
tsne = TSNEVisualizer()
tsne.fit(X, y)
tsne.show()
# Create the visualizer and draw the vectors
tsne = TSNEVisualizer()
tsne.fit(X)
tsne.show()
# # Text data in news groups
# +
## Also see
# - https://scikit-learn.org/0.19/datasets/twenty_newsgroups.html
# -
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.datasets import fetch_20newsgroups
categories = ['alt.atheism', 'talk.religion.misc','comp.graphics', 'sci.space']
newsgroups_train = fetch_20newsgroups(subset='train', categories=categories)
tfidf = TfidfVectorizer()
# Get right representation
X = tfidf.fit_transform(newsgroups_train.data)
y = newsgroups_train.target
# Create the visualizer and draw the vectors
tsne = TSNEVisualizer()
tsne.fit(X,y)
tsne.show()
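# A small usage sketch (added, not in the original): to save the newsgroups plot to a file
# instead of only displaying it, finalize the visualizer and write the figure with matplotlib.
# The output filename is just an example.
import matplotlib.pyplot as plt
tsne = TSNEVisualizer()
tsne.fit(X, y)
tsne.finalize()
plt.savefig("tsne_newsgroups.png")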
| sample-code/l24-viz/Text t-sne.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.10.2 (''env'': venv)'
# language: python
# name: python3
# ---
import numpy as np
import scipy.optimize as optimize
# +
""" Optimization Algorithm """
""" New Matrix """
def newMat(x, Vt, k):
V_new = np.zeros((Vt.shape), dtype=np.cfloat)
if k==2:
V_new[0] = np.cos(x[0])
V_new[1] = (np.sin(x[0])) * np.exp(1j*x[1])
elif k==3:
V_new[0] = np.cos(x[0])
V_new[1] = (np.sin(x[0])) * (np.cos(x[1])) * np.exp(1j*x[2])
V_new[2] = (np.sin(x[0])) * (np.sin(x[1])) * np.exp(1j*x[3])
elif k==4:
V_new[0] = (np.cos(x[0])) * (np.cos(x[1]))
V_new[1] = (np.cos(x[0])) * (np.sin(x[1])) * np.exp(1j*x[3])
V_new[2] = (np.sin(x[0])) * (np.cos(x[2])) * np.exp(1j*x[4])
V_new[3] = (np.sin(x[0])) * (np.sin(x[2])) * np.exp(1j*x[5])
elif k==5:
V_new[0] = (np.cos(x[0])) * (np.cos(x[1]))
V_new[1] = (np.cos(x[0])) * (np.sin(x[1])) * np.exp(1j*x[3])
V_new[2] = (np.sin(x[0])) * (np.cos(x[2])) * np.exp(1j*x[4])
V_new[3] = (np.sin(x[0])) * (np.sin(x[2])) * (np.sin(x[6])) * np.exp(1j*x[5])
V_new[4] = (np.sin(x[0])) * (np.sin(x[2])) * (np.cos(x[6])) * np.exp(1j*x[7])
else:
V_new[0] = (np.cos(x[0])) * (np.cos(x[1]))
V_new[1] = (np.cos(x[0])) * (np.sin(x[1])) * np.exp(1j*x[3])
V_new[2] = (np.sin(x[0])) * (np.cos(x[2])) * np.exp(1j*x[4])
V_new[3] = (np.sin(x[0])) * (np.sin(x[2])) * (np.sin(x[6])) * np.exp(1j*x[5])
V_new[4] = (np.sin(x[0])) * (np.sin(x[2])) * (np.cos(x[6])) * np.exp(1j*x[7])
V_new[5] = (np.sin(x[0])) * (np.sin(x[2])) * (np.sin(x[6])) * (np.cos(x[8])) * np.exp(1j*x[9])
return V_new
""" Cost Function """
def costFn(x, Ut, Vt, A, k):
V_new = newMat(x, Vt, k)
Bp = np.dot(Ut, V_new)
loss = np.linalg.norm(A - Bp*np.conjugate(Bp))
return (loss)
# -
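# Quick sanity check (added, not part of the original notebook): the angle parametrization
# in newMat returns a unit-norm complex vector for k = 2..5, which is what makes
# |Ut @ V_new|^2 comparable to a column of A that has been normalized to sum to 1.
x_chk = np.random.rand(8)
for k_chk in [2, 3, 4, 5]:
    v_chk = newMat(x_chk, np.zeros(k_chk), k_chk)
    print(k_chk, np.isclose(np.linalg.norm(v_chk), 1.0))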
def calcResults(k, m, n):
print ("m = ",m,", n = ",n)
res = np.zeros((100,2))
for i in range(100):
A = np.random.rand(m, n)
A = A/A.sum(axis=0) # Optimize column-wise
# Classic Truncated SVD
U, L, V = np.linalg.svd(A, full_matrices=False)
Ut = U[:, :k]
Vt = V[:k]
Lt = L[:k]
At = np.dot(np.dot(Ut,np.diag(Lt)), Vt)
res[i][0] = (np.linalg.norm(A - At))
# Complex SVD
B = np.sqrt(A)
U, L, V = np.linalg.svd(B, full_matrices=False)
Ut = U[:, :k]
Vt = V[:k]
Lt = L[:k]
initial_guess = np.ones((2*(k-1),), dtype=np.longdouble)
V_new = np.zeros(Vt.shape, dtype=np.cfloat)
for col in range(Vt.shape[1]):
result = optimize.minimize(fun=costFn, x0=initial_guess, args=(Ut,Vt[:, col],A[:,col],k),
tol=1e-7, method='Nelder-Mead', options={'maxiter':1e+10})
V_new[:,col] = newMat(result.x, Vt[:, col], k)
Bp = np.dot(Ut, V_new)
res[i][1] = (np.linalg.norm(A - np.conjugate(Bp)*Bp))
if i%10==0: print(i, end=' ')
print('\n')
return res
res = calcResults(k=6, m=7, n=7)
print(res)
res.mean(axis=0)
| qsvd_complex_v.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Train classifier by layers
#
# This notebook trains a classifier that operates in two layers:
# - First we use an SVM classifier to label utterances with a high degree of certainty.
# - Afterwards we use heuristics to complete the labeling.
# ### Import and path definition
# +
import os
import pandas as pd
import numpy as np
import random
import pickle
import sys
import matplotlib.pyplot as plt
root_path = os.path.dirname(os.path.dirname(os.path.abspath(os.getcwd())))
sys.path.append(root_path)
from sklearn.svm import SVC
from sklearn.svm import LinearSVC
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from src import phase_classification as pc
data_path = os.path.join(root_path,'data')
tables_path = os.path.join(data_path,'tables')
# -
# ### Load data
WITH_STEMMING = True
#REMOVE_STOPWORDS = True
SEED = 10
NUM_TOPICS = 60
random.seed(SEED)
file_name = '[train]IBL_topic_distribution_by_utterance_minimum_5_words_with_stemming_{}_{}.xlsx'.format(WITH_STEMMING,NUM_TOPICS)
df_data = pd.read_excel(os.path.join(tables_path,'train',file_name))
# ### Data description
df_data.head()
the_keys = list(set(df_data['phase']))
total_samples = 0
class_samples = {}
for key in the_keys:
n = list(df_data.phase.values).count(key)
#print("key {}, total {}".format(key,n))
total_samples += n
class_samples[key] = n
print(total_samples)
proportions = []
for key in the_keys:
    proportion = round(class_samples[key]*1.0/total_samples,2)
    proportions.append(proportion)
    print("key {}, samples: {}, prop: {}".format(key,class_samples[key],proportion))
# ### Split data
filter_rows = list(range(60))+[67,68]
row_label = 60
dfs_train,dfs_val = pc.split_df_discussions(df_data,.2,SEED)
X_train,y_train = pc.get_joined_data_from_df(dfs_train,filter_rows,row_label)
X_val,y_val = pc.get_joined_data_from_df(dfs_val,filter_rows,row_label)
len(X_train)
dfs_all,_ = pc.split_df_discussions(df_data,.0,SEED)
X_all,y_all = pc.get_joined_data_from_df(dfs_all,filter_rows,row_label)
# ### Classify first layer
import importlib
importlib.reload(pc)
class_weight = {}
for key in the_keys:
class_weight[key] = 1000.0/class_samples[key]
svc = SVC(kernel='linear',random_state=SEED,max_iter=5000,probability=True,class_weight=class_weight)
svc.fit(X_train, y_train)
print('Accuracy of SVM classifier on training set: {:.2f}'
.format(svc.score(X_train, y_train)))
print('Accuracy of SVM classifier on validation set: {:.2f}'
     .format(svc.score(X_val, y_val)))
pred = svc.predict(X_val)
labels = ["Phase {}".format(i) for i in range(1,6)]
df = pd.DataFrame(confusion_matrix(y_val, pred),columns=["Predicted {}".format(i) for i in labels])
df.index = labels
#print(" ")
print(classification_report(y_val, pred))
df
pred_val = svc.predict_proba(X_all)
output_first_layer = [np.argmax(pred_val[i])+1 for i in range(len(pred_val))]
df_data['first_layer'] = output_first_layer
df_data.head()
df_data.to_excel(os.path.join(tables_path,'[one_layer]'+file_name))
with open(os.path.join(data_path,'classifier_svm_one_layer_one_utterance.pickle'),'wb') as f:
pickle.dump(svc,f)
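# Usage sketch (added): the pickled classifier can be loaded back later and re-applied to
# the same kind of topic-distribution features.
with open(os.path.join(data_path,'classifier_svm_one_layer_one_utterance.pickle'),'rb') as f:
    svc_loaded = pickle.load(f)
print('Reloaded classifier, validation accuracy: {:.2f}'.format(svc_loaded.score(X_val, y_val)))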
df = pd.DataFrame(confusion_matrix(y_all, output_first_layer),columns=["Predicted {}".format(i) for i in labels])
df.index = labels
print(classification_report(y_all, output_first_layer))
df
print('Agreement between svc.predict and the first-layer labels (argmax of predict_proba) on all data: {:.2f}'
     .format(svc.score(X_all, output_first_layer)))
| notebooks/2. Train classifier (one layer)/2-. Train classifier one layer [utterance level][60t].ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # A Linear Model for Bulldozers
# %load_ext autoreload
# %autoreload 2
# +
# %matplotlib inline
from fastai.imports import *
from fastai.structured import *
from pandas_summary import DataFrameSummary
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
from IPython.display import display
from sklearn import metrics
from sklearn.linear_model import LinearRegression, Ridge, RidgeCV, Lasso, LassoCV
# -
set_plot_sizes(12,14,16)
# ## Load in our data from last lesson
# +
PATH = "data/bulldozers/"
df_raw = pd.read_feather('tmp/raw')
# -
df_raw['age'] = df_raw.saleYear-df_raw.YearMade
df, y, nas, mapper = proc_df(df_raw, 'SalePrice', max_n_cat=10, do_scale=True)
def split_vals(a,n): return a[:n], a[n:]
n_valid = 12000
n_trn = len(df)-n_valid
y_train, y_valid = split_vals(y, n_trn)
raw_train, raw_valid = split_vals(df_raw, n_trn)
def rmse(x,y): return math.sqrt(((x-y)**2).mean())
# # Linear regression for Bulldozers
# ## Data scaling
df.describe().transpose()
X_train, X_valid = split_vals(df, n_trn)
m = LinearRegression().fit(X_train, y_train)
m.score(X_valid, y_valid)
m.score(X_train, y_train)
preds = m.predict(X_valid)
rmse(preds, y_valid)
plt.scatter(preds, y_valid, alpha=0.1, s=2);
# ## Feature selection from RF
keep_cols = list(np.load('tmp/keep_cols.npy'))
', '.join(keep_cols)
df_sub = df_raw[keep_cols+['age', 'SalePrice']]
df, y, nas, mapper = proc_df(df_sub, 'SalePrice', max_n_cat=10, do_scale=True)
X_train, X_valid = split_vals(df, n_trn)
m = LinearRegression().fit(X_train, y_train)
m.score(X_valid, y_valid)
rmse(m.predict(X_valid), y_valid)
from operator import itemgetter
sorted(list(zip(X_valid.columns, m.coef_)), key=itemgetter(1))
m = LassoCV().fit(X_train, y_train)
m.score(X_valid, y_valid)
rmse(m.predict(X_valid), y_valid)
m.alpha_
coefs = sorted(list(zip(X_valid.columns, m.coef_)), key=itemgetter(1))
coefs
skip = [n for n,c in coefs if abs(c)<0.01]
# +
df.drop(skip, axis=1, inplace=True)
# for n,c in df.items():
# if '_' not in n: df[n+'2'] = df[n]**2
# -
X_train, X_valid = split_vals(df, n_trn)
m = LassoCV().fit(X_train, y_train)
m.score(X_valid, y_valid)
rmse(m.predict(X_valid), y_valid)
coefs = sorted(list(zip(X_valid.columns, m.coef_)), key=itemgetter(1))
coefs
np.savez(f'{PATH}tmp/regr_resid', m.predict(X_train), m.predict(X_valid))
| courses/ml1/bulldozer_linreg.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Fisheries competition (The Nature Conservancy Fisheries Monitoring)
# In this notebook we're going to investigate a range of different architectures for the [Kaggle fisheries competition](https://www.kaggle.com/c/the-nature-conservancy-fisheries-monitoring/). The video states that vgg.py and ``vgg_ft()`` from utils.py have been updated to include VGG with batch normalization, but this is not the case. We've instead created a new file [vgg_bn.py](https://github.com/fastai/courses/blob/master/deeplearning1/nbs/vgg16bn.py) and an additional method ``vgg_ft_bn()`` (which is already in utils.py) which we use in this notebook.
# %matplotlib inline
import imp
import utils
imp.reload(utils)
from utils import *
from __future__ import division, print_function
#path = "data/fish/sample/"
path = "data/fish/"
batch_size=64
batches = get_batches(path+'train', batch_size=batch_size)
val_batches = get_batches(path+'valid', batch_size=batch_size*2, shuffle=False)
(val_classes, trn_classes, val_labels, trn_labels,
val_filenames, filenames, test_filenames) = get_classes(path)
test2_batches = get_batches(path+'test2', shuffle=False, batch_size=1)
# Sometimes it's helpful to have just the filenames, without the path.
raw_filenames = [f.split('/')[-1] for f in filenames]
raw_test_filenames = [f.split('/')[-1] for f in test_filenames]
raw_val_filenames = [f.split('/')[-1] for f in val_filenames]
raw_test2_filenames = [f.split('/')[-1] for f in test2_batches.filenames]
raw_test2_filenames
len(raw_test2_filenames)
# + [markdown] heading_collapsed=true
# ## Setting up the directory structure
#
# **This whole workflow can serve as a general template for image-classification projects**
# + [markdown] hidden=true
# We create the validation and sample sets in the usual way.
# -
# %pwd
# %cd data/fish
# + hidden=true
# %cd train
# %mkdir ../valid
# -
g = glob('*') # match all class folder names in the current directory
for d in g:
    os.mkdir('../valid/'+d) # create a folder for each class under valid
g = glob('*/*.jpg') # match all images under train
shuffle = np.random.permutation(g) # shuffle them
for i in range(500): # move 500 random images into the validation set
os.rename(shuffle[i],'../valid/'+shuffle[i])
os.mkdir('../sample')
os.mkdir('../sample/train')
os.mkdir('../sample/valid')
# Create the sample directory tree; the sample set is small, so training is fast and we can quickly check whether an idea/model works
g = glob('*')
for d in g:
os.mkdir('../sample/train/'+d)
os.mkdir('../sample/valid/'+d)
# +
# Copy files into the sample folder
from shutil import copyfile
g = glob('*/*jpg')
shuffle= np.random.permutation(g)
for i in range(400):
copyfile(shuffle[i], '../sample/train/' + shuffle[i])
# -
# %cd ../valid
# + hidden=true
g = glob('*/*.jpg')
shuf = np.random.permutation(g)
for i in range(200):
copyfile(shuf[i], '../sample/valid/' + shuf[i])
# %cd ..
# -
os.mkdir('results')
os.mkdir('sample/results')
# + hidden=true
# %cd ../..
# -
# **Additional processing:**
# %pwd
# %cd test2
g = glob('*/*.jpg')
for i in g:
x = i.split('\\')[1]
print(x)
    os.rename(i, 'unknown/test_stg2/' + x)  # move each image into the 'unknown' class folder
# %pwd
# %cd ../..
# %cd ..
# ## Basic VGG16 model
# We start with our usual VGG approach. We will be using VGG with batch normalization. We explained how to add batch normalization to VGG in the [imagenet_batchnorm notebook](https://github.com/fastai/courses/blob/master/deeplearning1/nbs/imagenet_batchnorm.ipynb). VGG with batch normalization is implemented in [vgg_bn.py](https://github.com/fastai/courses/blob/master/deeplearning1/nbs/vgg16bn.py), and there is a version of ``vgg_ft`` (our fine tuning function) with batch norm called ``vgg_ft_bn`` in [utils.py](https://github.com/fastai/courses/blob/master/deeplearning1/nbs/utils.py).
# ### Initialize the model
# First, we build a fine-tuned VGG model as a starting point.
import myvgg16bn
imp.reload(myvgg16bn)
from myvgg16bn import Vgg16BN
model = vgg_ft_bn(8) # only 8 classes: this essentially replaces the last layer with Dense(8, activation="softmax")
# Just different ways of loading the data: either get_data() or get_batches() works
trn = get_data(path+'train')
val = get_data(path+'valid')
test = get_data(path+'test')
test2 = get_data(path+'test2')
save_array(path+'results/trn.dat', trn)
save_array(path+'results/val.dat', val)
save_array(path+'results/test.dat', test)
save_array(path+'results/test2.dat', test2)
trn = load_array(path+'results/trn.dat')
val = load_array(path+'results/val.dat')
test = load_array(path+'results/test.dat')
test2 = load_array(path+'results/test2.dat')
gen = image.ImageDataGenerator()
1e-3
model.compile(optimizer=Adam(1e-3),loss='categorical_crossentropy', metrics=['accuracy']) # multi-class cross-entropy loss
model.summary()
trn.shape
val.shape
trn_labels.shape
model.fit(trn, trn_labels, batch_size=batch_size, epochs=10, validation_data=(val, val_labels))
model.save_weights(path+'results/ft1.h5')
# **The results are quite poor**
#
#
# ### Precompute the output of the convolutional layers
# We pre-compute the output of the last convolution layer of VGG, since we're unlikely to need to fine-tune those layers. (All following analysis will be done on just the pre-computed convolutional features.)
model.load_weights(path+'results/ft1.h5')
# +
# Find the last convolutional layer and split the model into convolutional and fully connected parts
for l,layer in enumerate(model.layers):
if(type(layer) is Conv2D):
print(l,layer,type(layer))
layer_index = [l for l, layer in enumerate(model.layers) if (type(layer)) is Conv2D]
print(layer_index)
print(layer_index[-1])
conv_layers = model.layers[:layer_index[-1]+1]
fc_layers = model.layers[layer_index[-1]+1:]
# +
# def split_at(model, layer_type):
# layers = model.layers
# layer_idx = [index for index,layer in enumerate(layers)
# if type(layer) is layer_type][-1]
# return layers[:layer_idx+1], layers[layer_idx+1:]
# -
conv_layers,fc_layers = split_at(model, Conv2D)
conv_model = Sequential(conv_layers)
# Build a model from just the conv layers; its predictions ("precomputed features") become the input to the fully connected layers
conv_feat = conv_model.predict(trn)
conv_val_feat = conv_model.predict(val)
conv_test_feat = conv_model.predict(test)
conv_test2_feat = conv_model.predict(test2)
save_array(path+'results/conv_val_feat.dat', conv_val_feat)
save_array(path+'results/conv_feat.dat', conv_feat)
save_array(path+'results/conv_test_feat.dat', conv_test_feat)
save_array(path+'results/conv_test2_feat.dat', conv_test2_feat)
conv_feat = load_array(path+'results/conv_feat.dat')
conv_val_feat = load_array(path+'results/conv_val_feat.dat')
conv_test_feat = load_array(path+'results/conv_test_feat.dat')
conv_val_feat.shape
conv_test_feat.shape
conv_test2_feat.shape
conv_feat.shape
# ### Train the model
# We can now create our first baseline model - a simple 3-layer FC net.
conv_layers[-1].output_shape[1:]
def get_bn_layers(p):
return [
        MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]), # the first layer of a Sequential model must specify input_shape
BatchNormalization(axis=1),
Dropout(p/4),
Flatten(),
Dense(512, activation='relu'),
BatchNormalization(),
Dropout(p),
Dense(512, activation='relu'),
BatchNormalization(),
Dropout(p/2),
Dense(8, activation='softmax')
]
p=0.6
bn_model = Sequential(get_bn_layers(p))
bn_model.compile(Adam(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy'])
bn_model.fit(conv_feat, trn_labels, batch_size=batch_size, epochs=30,
validation_data=(conv_val_feat, val_labels))
bn_model.optimizer.lr = 1e-4
bn_model.fit(conv_feat, trn_labels, batch_size=batch_size, epochs=10,
validation_data=(conv_val_feat, val_labels))
bn_model.save_weights(path+'models/conv_512_6.h5')
bn_model.evaluate(conv_val_feat, val_labels) # evaluate on the validation set
bn_model.load_weights(path+'models/conv_512_6.h5')
# ### Predict and submit directly
# Since we have to submit predictions for two test sets, my first approach was to predict each separately, write two CSVs, and paste them into one file by hand; can we find a way to predict both at once?
def do_clip(arr, mx):
return np.clip(arr, (1-mx)/7, mx)
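# Why clip (illustrative aside, added): Kaggle's multi-class log loss punishes confident wrong
# predictions very heavily, so capping the top probability at mx (and raising the rest to
# (1-mx)/7 for the 8 classes) bounds the worst-case per-image loss.
print(do_clip(np.array([[0.999, 0.001, 0., 0., 0., 0., 0., 0.]]), 0.82))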
conv_test_feat.shape
conv_test2_feat.shape
preds = bn_model.predict(conv_test_feat,verbose=1)
preds.shape
preds2 = bn_model.predict(conv_test2_feat,verbose=1)
preds2.shape
subm = do_clip(preds,0.82)
subm.shape
subm_name = path+'results/subm_bb.gz'
# classes = sorted(batches.class_indices, key=batches.class_indices.get)
classes = ['ALB', 'BET', 'DOL', 'LAG', 'NoF', 'OTHER', 'SHARK', 'YFT']
submission = pd.DataFrame(data = subm, columns=classes)
submission.insert(0, 'image', [a[8:] for a in raw_test_filenames])
submission.to_csv(subm_name, index=False, compression='gzip')
subm_name = path+'results/subm_bb_2.gz'
subm = do_clip(preds2,0.82)
subm.shape
submission = pd.DataFrame(data = subm, columns=classes)
submission.insert(0, 'image', [a for a in raw_test2_filenames])
submission.to_csv(subm_name, index=False, compression='gzip')
FileLink(subm_name)
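# Sketch answering the question above (added): both test sets can go into a single submission
# file by concatenating the two DataFrames. The 'test_stg2/' prefix on the stage-2 image names
# is an assumption about the stage-2 submission format.
sub1 = pd.DataFrame(data=do_clip(preds, 0.82), columns=classes)
sub1.insert(0, 'image', [a[8:] for a in raw_test_filenames])
sub2 = pd.DataFrame(data=do_clip(preds2, 0.82), columns=classes)
sub2.insert(0, 'image', ['test_stg2/' + a for a in raw_test2_filenames])
pd.concat([sub1, sub2]).to_csv(path+'results/subm_combined.gz', index=False, compression='gzip')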
# ## Multi-input
# These images come in different sizes, which most likely reflects which boat they came from (different boats use different cameras). Perhaps this creates some data leakage we can exploit for a better Kaggle leaderboard position? To find out, first we create an array with the dimensions of each image file:
path
# +
# Get the size of every image using PIL's Image.open
sizes = [PIL.Image.open(path+'train/'+f).size for f in filenames]
id2size = list(set(sizes))
size2id = {o:i for i,o in enumerate(id2size)}
# -
from collections import Counter
Counter(sizes)
len(id2size)
# Then we one-hot encode them (since we want to treat them as categorical) and normalize the data.
test_filenames
# Training data
trn_sizes_orig = to_categorical([size2id[o] for o in sizes],len(id2size))
# Validation data: image sizes
raw_val_sizes = [PIL.Image.open(path+'valid/'+f).size for f in val_filenames]
val_sizes = to_categorical([size2id[o] for o in raw_val_sizes], len(id2size))
# Test data
raw_test_sizes = [PIL.Image.open(path+'test/'+f).size for f in test_filenames]
test_sizes = to_categorical([size2id[o] for o in raw_test_sizes], len(id2size))
# Standardize: (value - mean) / std, using the training-set statistics
trn_sizes = (trn_sizes_orig - trn_sizes_orig.mean(axis=0)) / trn_sizes_orig.std(axis=0)
val_sizes = (val_sizes - trn_sizes_orig.mean(axis=0)) / trn_sizes_orig.std(axis=0)
test_sizes = (test_sizes - trn_sizes_orig.mean(axis=0)) / trn_sizes_orig.std(axis=0)
trn_sizes
test_sizes
# To use this additional "meta-data", we create a model with multiple input layers - `sz_inp` will be our input for the size information.
p=0.6
# +
inp = Input(conv_layers[-1].output_shape[1:])
sz_inp = Input((len(id2size),))
bn_inp = BatchNormalization()(sz_inp)
x = MaxPooling2D()(inp)
x = BatchNormalization(axis=1)(x)
x = Dropout(p/4)(x)
x = Flatten()(x)
x = Dense(512, activation='relu')(x)
x = BatchNormalization()(x)
x = Dropout(p)(x)
x = Dense(512, activation='relu')(x)
x = BatchNormalization()(x)
x = Dropout(p/2)(x)
x = keras.layers.concatenate([x,bn_inp])
x = Dense(8, activation='softmax')(x)
# -
# When we compile the model, we have to specify all the input layers in an array.
model = Model([inp, sz_inp], x)
model.compile(Adam(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy'])
# And when we train the model, we have to provide all the input layers' data in an array.
model.fit([conv_feat, trn_sizes], trn_labels, batch_size=batch_size, epochs=30,
validation_data=([conv_val_feat, val_sizes], val_labels))
# For comparison, here is how the plain `bn_model` does:
bn_model.optimizer.lr = 1e-4
bn_model.fit(conv_feat, trn_labels, batch_size=batch_size, epochs=8,
validation_data=(conv_val_feat, val_labels))
# The model did not show an improvement by using the leakage, other than in the early epochs. This is most likely because the information about what boat the picture came from is readily identified from the image itself, so the meta-data turned out not to add any additional information.
# ## Bounding boxes & multi-output
# ### Import / visualize the bounding boxes
# A kaggle user has created bounding box annotations for each fish in each training set image. You can download them [from here](https://www.kaggle.com/c/the-nature-conservancy-fisheries-monitoring/forums/t/25902/complete-bounding-box-annotation). We will see if we can utilize this additional information. First, we'll load in the data, and keep just the largest bounding box for each image.
import ujson as json
anno_classes = ['alb', 'bet', 'dol', 'lag', 'other', 'shark', 'yft']
# Define a function that downloads the bounding-box annotations
def get_annotations():
annot_urls = {
'5458/bet_labels.json': 'bd20591439b650f44b36b72a98d3ce27',
'5459/shark_labels.json': '94b1b3110ca58ff4788fb659eda7da90',
'5460/dol_labels.json': '91a25d29a29b7e8b8d7a8770355993de',
'5461/yft_labels.json': '9ef63caad8f076457d48a21986d81ddc',
'5462/alb_labels.json': '731c74d347748b5272042f0661dad37c',
'5463/lag_labels.json': '92d75d9218c3333ac31d74125f2b380a'
}
cache_subdir = os.path.abspath(os.path.join(path, 'annos'))
url_prefix = 'https://kaggle2.blob.core.windows.net/forum-message-attachments/147157/'
if not os.path.exists(cache_subdir):
os.makedirs(cache_subdir)
for url_suffix, md5_hash in annot_urls.items():
fname = url_suffix.rsplit('/', 1)[-1]
get_file(fname, url_prefix + url_suffix, cache_subdir=cache_subdir, md5_hash=md5_hash)
get_annotations()
bb_json = {}
for c in anno_classes:
if c == 'other': continue # no annotation file for "other" class
j = json.load(open('{}annos/{}_labels.json'.format(path, c), 'r'))
for l in j:
if 'annotations' in l.keys() and len(l['annotations'])>0:
bb_json[l['filename'].split('/')[-1]] = sorted(
l['annotations'], key=lambda x: x['height']*x['width'])[-1]
bb_json['img_04908.jpg']
file2idx = {o:i for i,o in enumerate(raw_filenames)}
val_file2idx = {o:i for i,o in enumerate(raw_val_filenames)}
# For any images that have no annotations, we'll create an empty bounding box.
empty_bbox = {'height': 0., 'width': 0., 'x': 0., 'y': 0.}
for f in raw_filenames:
if not f in bb_json.keys(): bb_json[f] = empty_bbox
for f in raw_val_filenames:
if not f in bb_json.keys(): bb_json[f] = empty_bbox
# Finally, we convert the dictionary into an array, and convert the coordinates to our resized 224x224 images.
bb_params = ['height', 'width', 'x', 'y']
def convert_bb(bb, size):
bb = [bb[p] for p in bb_params]
conv_x = (224. / size[0])
conv_y = (224. / size[1])
bb[0] = bb[0]*conv_y
bb[1] = bb[1]*conv_x
bb[2] = max(bb[2]*conv_x, 0)
bb[3] = max(bb[3]*conv_y, 0)
return bb
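# Illustration (added): in a 1280x720 image, a box of height 100 and width 200 at (x=640, y=360)
# is rescaled to 224x224 coordinates by multiplying width/x by 224/1280 and height/y by 224/720.
convert_bb({'height': 100., 'width': 200., 'x': 640., 'y': 360.}, (1280, 720))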
trn_bbox = np.stack([convert_bb(bb_json[f], s) for f,s in zip(raw_filenames, sizes)],
).astype(np.float32)
val_bbox = np.stack([convert_bb(bb_json[f], s)
for f,s in zip(raw_val_filenames, raw_val_sizes)]).astype(np.float32)
# Now we can check our work by drawing one of the annotations.
# +
def create_rect(bb, color='red'):
return plt.Rectangle((bb[2], bb[3]), bb[1], bb[0], color=color, fill=False, lw=3)
def show_bb(i):
bb = val_bbox[i]
plot(val[i])
plt.gca().add_patch(create_rect(bb))
# -
show_bb(15)
# ### Build / train the model
# Since we're not allowed (by the kaggle rules) to manually annotate the test set, we'll need to create a model that predicts the locations of the bounding box on each image. To do so, we create a model with multiple outputs: it will predict both the type of fish (the 'class'), and the 4 bounding box coordinates. We prefer this approach to only predicting the bounding box coordinates, since we hope that giving the model more context about what it's looking for will help it with both tasks.
p=0.6
inp = Input(conv_layers[-1].output_shape[1:])
x = MaxPooling2D()(inp)
x = BatchNormalization(axis=1)(x)
x = Dropout(p/4)(x)
x = Flatten()(x)
x = Dense(512, activation='relu')(x)
x = BatchNormalization()(x)
x = Dropout(p)(x)
x = Dense(512, activation='relu')(x)
x = BatchNormalization()(x)
x = Dropout(p/2)(x)
x_bb = Dense(4, name='bb')(x) # 多输出
x_class = Dense(8, activation='softmax', name='class')(x)
# Since we have multiple outputs, we need to provide them to the model constructor in an array, and we also need to say what loss function to use for each. We also weight the bounding box loss function down by 1000x since the scale of the cross-entropy loss and the MSE is very different.
model = Model([inp], [x_bb, x_class])
model.compile(Adam(lr=0.001), loss=['mse', 'categorical_crossentropy'], metrics=['accuracy'],
loss_weights=[.001, 1.])
model.fit(conv_feat, [trn_bbox, trn_labels], batch_size=batch_size, epochs=3,
validation_data=(conv_val_feat, [val_bbox, val_labels]))
model.optimizer.lr = 1e-5
model.fit(conv_feat, [trn_bbox, trn_labels], batch_size=batch_size, epochs=20,
validation_data=(conv_val_feat, [val_bbox, val_labels]))
# Excitingly, it turned out that the classification model is much improved by giving it this additional task. Let's see how well the bounding box model did by taking a look at its output.
pred = model.predict(conv_val_feat[0:10])
def show_bb_pred(i):
bb = val_bbox[i]
bb_pred = pred[0][i]
plt.figure(figsize=(6,6))
plot(val[i])
ax=plt.gca()
ax.add_patch(create_rect(bb_pred, 'yellow'))
ax.add_patch(create_rect(bb))
# The image shows that it can find fish that are tricky for us to see!
show_bb_pred(6)
model.evaluate(conv_val_feat, [val_bbox, val_labels])
model.save_weights(path+'models/bn_anno.h5')
model.load_weights(path+'models/bn_anno.h5')
# ## Larger images
# ### Set up the data
# Let's see if we get better results if we use larger images. We'll use 640x360, since it's the same shape as the most common size we saw earlier (1280x720), without being too big.
trn = get_data(path+'train', (360,640))
val = get_data(path+'valid', (360,640))
# The image shows that things are much clearer at this size.
plot(trn[0])
# Originally I tried to read in all 13,154 test images at once, but there was not enough memory; for now test1 only holds 1000 images
test = get_data(path+'test', (360,640))
save_array(path+'results/trn_640.dat', trn)
save_array(path+'results/val_640.dat', val)
save_array(path+'results/test_640.dat', test)
trn = load_array(path+'results/trn_640.dat')
val = load_array(path+'results/val_640.dat')
# We can now create our VGG model - we'll need to tell it we're not using the normal 224x224 images, which also means it won't include the fully connected layers (since they don't make sense for non-default sizes). We will also remove the last max pooling layer, since we don't want to throw away information yet.
vgg640 = Vgg16BN((360, 640)).model
vgg640.pop()
vgg640.input_shape, vgg640.output_shape
vgg640.compile(Adam(), 'categorical_crossentropy', metrics=['accuracy'])
# We can now pre-compute the output of the convolutional part of VGG.
conv_val_feat = vgg640.predict(val, batch_size=32, verbose=1)
conv_trn_feat = vgg640.predict(trn, batch_size=32, verbose=1)
save_array(path+'results/conv_val_640.dat', conv_val_feat)
save_array(path+'results/conv_trn_640.dat', conv_trn_feat)
conv_test_feat = vgg640.predict(test, batch_size=32, verbose=1)
save_array(path+'results/conv_test_640.dat', conv_test_feat)
conv_val_feat = load_array(path+'results/conv_val_640.dat')
conv_trn_feat = load_array(path+'results/conv_trn_640.dat')
conv_test_feat = load_array(path+'results/conv_test_640.dat')
# ### Fully convolutional net (FCN)
# Since we're using a larger input, the output of the final convolutional layer is also larger. So we probably don't want to put a dense layer there - that would be a *lot* of parameters! Instead, let's use a fully convolutional net (FCN); this also has the benefit that they tend to generalize well, and also seems like a good fit for our problem (since the fish are a small part of the image).
conv_layers,_ = split_at(vgg640, Conv2D) # drop the fully connected layers; they would have far too many parameters
# I'm not using any dropout, since I found I got better results without it.
nf=128; p=0.6
def get_lrg_layers():
return [
BatchNormalization(axis=1, input_shape=conv_layers[-1].output_shape[1:]),
Conv2D(nf,kernel_size=(3,3),strides=(1,1), activation='relu', padding='same'),
BatchNormalization(axis=1),
MaxPooling2D(),
Convolution2D(nf,kernel_size=(3,3),strides=(1,1), activation='relu', padding='same'),
BatchNormalization(axis=1),
MaxPooling2D(),
Conv2D(nf,kernel_size=(3,3),strides=(1,1), activation='relu', padding='same'),
BatchNormalization(axis=1),
MaxPooling2D((1,2)),
Conv2D(8,kernel_size=(3,3),strides=(1,1), padding='same'),
Dropout(p),
GlobalAveragePooling2D(),
Activation('softmax')
]
lrg_model = Sequential(get_lrg_layers())
lrg_model.summary()
lrg_model.compile(Adam(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy'])
lrg_model.fit(conv_trn_feat, trn_labels, batch_size=batch_size, epochs=2,
validation_data=(conv_val_feat, val_labels))
lrg_model.optimizer.lr=1e-5
lrg_model.fit(conv_trn_feat, trn_labels, batch_size=batch_size, epochs=20,
validation_data=(conv_val_feat, val_labels))
# When I submitted the results of this model to Kaggle, I got the best single model results of any shown here (ranked 22nd on the leaderboard as at Dec-6-2016.)
lrg_model.save_weights(path+'models/lrg_nmp.h5')
lrg_model.load_weights(path+'models/lrg_nmp.h5')
lrg_model.evaluate(conv_val_feat, val_labels)
lrg_model.predict(conv_test_feat,verbose=1)
# ### Submission
# Add the test2 data as well; memory is still the problem, and even with 24 GB of RAM it cannot all be read in
test = get_data(path+'test2', (360,640))
# Another benefit of this kind of model is that the last convolutional layer has to learn to classify each part of the image (since there's only an average pooling layer after). Let's create a function that grabs the output of this layer (which is the 4th-last layer of our model).
l = lrg_model.layers
conv_fn = K.function([l[0].input, K.learning_phase()], l[-4].output)
def get_cm(inp, label):
conv = conv_fn([inp,0])[0, label]
return scipy.misc.imresize(conv, (360,640), interp='nearest')
# We have to add an extra dimension to our input since the CNN expects a 'batch' (even if it's just a batch of one).
inp = np.expand_dims(conv_val_feat[0], 0)
np.round(lrg_model.predict(inp)[0],2)
plt.imshow(to_plot(val[0]))
cm = get_cm(inp, 0)
# The heatmap shows that (at very low resolution) the model is finding the fish!
plt.imshow(cm, cmap="cool")
# ### All convolutional net heatmap
# To create a higher resolution heatmap, we'll remove all the max pooling layers, and repeat the previous steps.
def get_lrg_layers():
return [
BatchNormalization(axis=1, input_shape=conv_layers[-1].output_shape[1:]),
Convolution2D(nf,3,3, activation='relu', border_mode='same'),
BatchNormalization(axis=1),
Convolution2D(nf,3,3, activation='relu', border_mode='same'),
BatchNormalization(axis=1),
Convolution2D(nf,3,3, activation='relu', border_mode='same'),
BatchNormalization(axis=1),
Convolution2D(8,3,3, border_mode='same'),
GlobalAveragePooling2D(),
Activation('softmax')
]
lrg_model = Sequential(get_lrg_layers())
lrg_model.summary()
lrg_model.compile(Adam(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy'])
lrg_model.fit(conv_trn_feat, trn_labels, batch_size=batch_size, nb_epoch=2,
validation_data=(conv_val_feat, val_labels))
lrg_model.optimizer.lr=1e-5
lrg_model.fit(conv_trn_feat, trn_labels, batch_size=batch_size, nb_epoch=6,
validation_data=(conv_val_feat, val_labels))
lrg_model.save_weights(path+'models/lrg_0mp.h5')
lrg_model.load_weights(path+'models/lrg_0mp.h5')
# #### Create heatmap
l = lrg_model.layers
conv_fn = K.function([l[0].input, K.learning_phase()], l[-3].output)
def get_cm2(inp, label):
conv = conv_fn([inp,0])[0, label]
return scipy.misc.imresize(conv, (360,640))
inp = np.expand_dims(conv_val_feat[0], 0)
plt.imshow(to_plot(val[0]))
cm = get_cm2(inp, 0)
cm = get_cm2(inp, 4)
plt.imshow(cm, cmap="cool")
plt.figure(figsize=(10,10))
plot(val[0])
plt.imshow(cm, cmap="cool", alpha=0.5)
# ### Inception mini-net
# Here's an example of how to create and use "inception blocks" - as you see, they use multiple different convolution filter sizes and concatenate the results together. We'll talk more about these next year.
def conv2d_bn(x, nb_filter, nb_row, nb_col, subsample=(1, 1)):
x = Convolution2D(nb_filter, nb_row, nb_col,
subsample=subsample, activation='relu', border_mode='same')(x)
return BatchNormalization(axis=1)(x)
def incep_block(x):
branch1x1 = conv2d_bn(x, 32, 1, 1, subsample=(2, 2))
branch5x5 = conv2d_bn(x, 24, 1, 1)
branch5x5 = conv2d_bn(branch5x5, 32, 5, 5, subsample=(2, 2))
branch3x3dbl = conv2d_bn(x, 32, 1, 1)
branch3x3dbl = conv2d_bn(branch3x3dbl, 48, 3, 3)
branch3x3dbl = conv2d_bn(branch3x3dbl, 48, 3, 3, subsample=(2, 2))
branch_pool = AveragePooling2D(
(3, 3), strides=(2, 2), border_mode='same')(x)
branch_pool = conv2d_bn(branch_pool, 16, 1, 1)
return merge([branch1x1, branch5x5, branch3x3dbl, branch_pool],
mode='concat', concat_axis=1)
inp = Input(vgg640.layers[-1].output_shape[1:])
x = BatchNormalization(axis=1)(inp)
x = incep_block(x)
x = incep_block(x)
x = incep_block(x)
x = Dropout(0.75)(x)
x = Convolution2D(8,3,3, border_mode='same')(x)
x = GlobalAveragePooling2D()(x)
outp = Activation('softmax')(x)
lrg_model = Model([inp], outp)
lrg_model.compile(Adam(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy'])
lrg_model.fit(conv_trn_feat, trn_labels, batch_size=batch_size, nb_epoch=2,
validation_data=(conv_val_feat, val_labels))
lrg_model.optimizer.lr=1e-5
lrg_model.fit(conv_trn_feat, trn_labels, batch_size=batch_size, nb_epoch=6,
validation_data=(conv_val_feat, val_labels))
lrg_model.fit(conv_trn_feat, trn_labels, batch_size=batch_size, nb_epoch=10,
validation_data=(conv_val_feat, val_labels))
lrg_model.save_weights(path+'models/lrg_nmp.h5')
lrg_model.load_weights(path+'models/lrg_nmp.h5')
# ## Pseudo-labeling
model = vgg_ft_bn(8)
model.load_weights(path+'results/ft1.h5')
conv_layers,fc_layers = split_at(model, Conv2D)
conv_feat = load_array(path+'results/conv_feat.dat')
conv_val_feat = load_array(path+'results/conv_val_feat.dat')
conv_test_feat = load_array(path+'results/conv_test_feat.dat')
p=0.6
len(id2size)
conv_layers[-1].output_shape[1:]
# +
inp = Input(conv_layers[-1].output_shape[1:])
sz_inp = Input((len(id2size),))
bn_inp = BatchNormalization()(sz_inp)
x = MaxPooling2D()(inp)
x = BatchNormalization(axis=1)(x)
x = Dropout(p/4)(x)
x = Flatten()(x)
x = Dense(512, activation='relu')(x)
x = BatchNormalization()(x)
x = Dropout(p)(x)
x = Dense(512, activation='relu')(x)
x = BatchNormalization()(x)
x = Dropout(p/2)(x)
x = keras.layers.concatenate([x,bn_inp])
x = Dense(8, activation='softmax')(x)
# -
conv_feat.shape
model = Model([inp, sz_inp], x)
model.compile(Adam(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit([conv_feat, trn_sizes], trn_labels, batch_size=batch_size, epochs=30,
validation_data=([conv_val_feat, val_sizes], val_labels))
model.optimizer.lr = 0.0001
model.fit([conv_feat, trn_sizes], trn_labels, batch_size=batch_size, epochs=30,
validation_data=([conv_val_feat, val_sizes], val_labels))
preds = model.predict([conv_test_feat, test_sizes], batch_size=batch_size*2)  # semi-supervised learning: use test-set predictions as pseudo-labels
preds[:1]  # first prediction: probabilities for the 8 classes
gen = image.ImageDataGenerator()
conv_test_feat.shape
test_batches = gen.flow(conv_test_feat, preds, batch_size=16)
val_batches = gen.flow(conv_val_feat, val_labels, batch_size=4)
batches = gen.flow(conv_feat, trn_labels, batch_size=44)
next(batches)[0][0].shape
next(batches)[0][1].shape
type(batches)
class MixIterator(object):
    """Combines several Keras data iterators: each step draws one batch from every
    iterator and concatenates them into a single (inputs, labels) batch, letting us mix
    labelled training data, pseudo-labelled test data and validation data.
    (The original list-of-lists handling is dropped; we only pass a flat list of iterators.)"""
    def __init__(self, iters):
        self.iters = iters
        self.n = sum(it.n for it in self.iters)
    def reset(self):
        for it in self.iters:
            it.reset()
    def __iter__(self):
        return self
    def next(self, *args, **kwargs):
        # pull one (x, y) batch from every iterator and concatenate them
        nexts = [next(it) for it in self.iters]
        n0 = np.concatenate([n[0] for n in nexts])
        n1 = np.concatenate([n[1] for n in nexts])
        return (n0, n1)
    __next__ = next  # Python 3 iterator protocol
mi = MixIterator([batches, test_batches, val_batches])
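# Quick sanity check (a sketch): one mixed batch should contain 44 training + 16
# pseudo-labelled test + 4 validation items concatenated together.
x_mix, y_mix = next(mi)
x_mix.shape, y_mix.shape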
# steps_per_epoch: roughly one pass over the combined data (mixed batch size = 44 + 16 + 4)
bn_model.fit_generator(mi, steps_per_epoch=mi.n // (44 + 16 + 4), epochs=8,
                       validation_data=(conv_val_feat, val_labels))
# ## Submit 1
def do_clip(arr, mx):
return np.clip(arr, (1-mx)/7, mx)
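# Why clip? The competition's multi-class log loss punishes confident wrong predictions very
# heavily, so capping the maximum probability (and raising the floor for the other classes)
# usually improves the score. A tiny illustration with made-up values:
do_clip(np.array([[0.999] + [0.001 / 7] * 7]), 0.82)  # max capped at 0.82, floor at (1 - 0.82) / 7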
lrg_model.evaluate(conv_val_feat, val_labels, batch_size*2)
preds = lrg_model.predict(conv_test_feat, batch_size=batch_size)
preds
preds.shape
# preds = preds[1]  # only needed when predict() returns a list (e.g. the multi-output bounding-box model)
preds
subm = do_clip(preds,0.82)
subm.shape
subm_name = path+'results/subm_bb.gz'
# classes = sorted(batches.class_indices, key=batches.class_indices.get)
classes = ['ALB', 'BET', 'DOL', 'LAG', 'NoF', 'OTHER', 'SHARK', 'YFT']
submission = pd.DataFrame(data = subm, columns=classes)
submission.insert(0, 'image', [a[8:] for a in raw_test_filenames])
submission.head()
submission.to_csv(subm_name, index=False, compression='gzip')
FileLink(subm_name)
| deeplearning1/nbs/lesson7.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Autoencoder network with MedNIST Dataset
#
# This notebook illustrates the use of an autoencoder in MONAI for the purpose of image deblurring/denoising.
#
# # Learning objectives
# This will go through the steps of:
# * Loading the data from a remote source
# * Using a lambda to create a dictionary of images
# * Using MONAI's in-built AutoEncoder
#
# [](https://colab.research.google.com/github/Project-MONAI/tutorials/blob/master/modules/autoencoder_mednist.ipynb)
# ## Setup environment
# !python -c "import monai" || pip install -q "monai-weekly[pillow, tqdm]"
# # 1. Imports and configuration
# + tags=[]
import logging
import os
import shutil
import sys
import tempfile
import random
import numpy as np
from tqdm import trange
import matplotlib.pyplot as plt
import torch
from skimage.util import random_noise
from monai.apps import download_and_extract
from monai.config import print_config
from monai.data import CacheDataset, DataLoader
from monai.networks.nets import AutoEncoder
from monai.transforms import (
AddChannelD,
Compose,
LoadImageD,
RandFlipD,
RandRotateD,
RandZoomD,
ScaleIntensityD,
EnsureTypeD,
Lambda,
)
from monai.utils import set_determinism
print_config()
# -
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
set_determinism(0)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Create a small visualisation function
def plot_ims(ims, shape=None, figsize=(10, 10), titles=None):
shape = (1, len(ims)) if shape is None else shape
plt.subplots(*shape, figsize=figsize)
for i, im in enumerate(ims):
plt.subplot(*shape, i + 1)
im = plt.imread(im) if isinstance(im, str) else torch.squeeze(im)
plt.imshow(im, cmap='gray')
if titles is not None:
plt.title(titles[i])
plt.axis('off')
plt.tight_layout()
plt.show()
# # 2. Get the data
#
# The MedNIST dataset was gathered from several sets from [TCIA](https://wiki.cancerimagingarchive.net/display/Public/Data+Usage+Policies+and+Restrictions),
# [the RSNA Bone Age Challenge](http://rsnachallenges.cloudapp.net/competitions/4),
# and [the NIH Chest X-ray dataset](https://cloud.google.com/healthcare/docs/resources/public-datasets/nih-chest).
#
# The dataset is kindly made available by [Dr. <NAME>., Ph.D.](https://www.mayo.edu/research/labs/radiology-informatics/overview) (Department of Radiology, Mayo Clinic)
# under the Creative Commons [CC BY-SA 4.0 license](https://creativecommons.org/licenses/by-sa/4.0/).
directory = os.environ.get("MONAI_DATA_DIRECTORY")
root_dir = tempfile.mkdtemp() if directory is None else directory
print(root_dir)
# + tags=[]
resource = "https://drive.google.com/uc?id=1QsnnkvZyJPcbRoV_ArW8SnE1OTuoVbKE"
md5 = "0bc7306e7427e00ad1c5526a6677552d"
compressed_file = os.path.join(root_dir, "MedNIST.tar.gz")
data_dir = os.path.join(root_dir, "MedNIST")
if not os.path.exists(data_dir):
download_and_extract(resource, compressed_file, root_dir, md5)
# +
# scan_type could be AbdomenCT BreastMRI CXR ChestCT Hand HeadCT
scan_type = "Hand"
im_dir = os.path.join(data_dir, scan_type)
all_filenames = [os.path.join(im_dir, filename)
for filename in os.listdir(im_dir)]
random.shuffle(all_filenames)
# Visualise a few of them
rand_images = np.random.choice(all_filenames, 8, replace=False)
plot_ims(rand_images, shape=(2, 4))
# -
# Split into training and testing
test_frac = 0.2
num_test = int(len(all_filenames) * test_frac)
num_train = len(all_filenames) - num_test
train_datadict = [{"im": fname} for fname in all_filenames[:num_train]]
test_datadict = [{"im": fname} for fname in all_filenames[-num_test:]]
print(f"total number of images: {len(all_filenames)}")
print(f"number of images for training: {len(train_datadict)}")
print(f"number of images for testing: {len(test_datadict)}")
# # 3. Create the image transform chain
#
# To train the autoencoder to de-blur/de-noise our images, we'll want to pass the degraded image into the encoder, but in the loss function, we'll do the comparison with the original, undegraded version. In this sense, the loss function will be minimised when the encode and decode steps manage to remove the degradation.
#
# Other than the fact that one version of the image is degraded and the other is not, we want them to be identical, meaning they need to be generated from the same transforms. The easiest way to do this is via dictionary transforms, where at the end, we have a lambda function that will return a dictionary containing the three images – the original, the Gaussian blurred and the noisy (salt and pepper).
# +
NoiseLambda = Lambda(lambda d: {
"orig": d["im"],
"gaus": torch.tensor(
random_noise(d["im"], mode='gaussian'), dtype=torch.float32),
"s&p": torch.tensor(random_noise(d["im"], mode='s&p', salt_vs_pepper=0.1)),
})
train_transforms = Compose(
[
LoadImageD(keys=["im"]),
AddChannelD(keys=["im"]),
ScaleIntensityD(keys=["im"]),
RandRotateD(keys=["im"], range_x=np.pi / 12, prob=0.5, keep_size=True),
RandFlipD(keys=["im"], spatial_axis=0, prob=0.5),
RandZoomD(keys=["im"], min_zoom=0.9, max_zoom=1.1, prob=0.5),
EnsureTypeD(keys=["im"]),
NoiseLambda,
]
)
test_transforms = Compose(
[
LoadImageD(keys=["im"]),
AddChannelD(keys=["im"]),
ScaleIntensityD(keys=["im"]),
EnsureTypeD(keys=["im"]),
NoiseLambda,
]
)
# -
# ### Create dataset and dataloader
#
# Hold data and present batches during training.
# + tags=[]
batch_size = 300
num_workers = 10
train_ds = CacheDataset(train_datadict, train_transforms,
num_workers=num_workers)
train_loader = DataLoader(train_ds, batch_size=batch_size,
shuffle=True, num_workers=num_workers)
test_ds = CacheDataset(test_datadict, test_transforms, num_workers=num_workers)
test_loader = DataLoader(test_ds, batch_size=batch_size,
shuffle=True, num_workers=num_workers)
# -
# Get image original and its degraded versions
def get_single_im(ds):
loader = torch.utils.data.DataLoader(
ds, batch_size=1, num_workers=10, shuffle=True)
itera = iter(loader)
return next(itera)
data = get_single_im(train_ds)
plot_ims([data['orig'], data['gaus'], data['s&p']],
titles=['orig', 'Gaussian', 's&p'])
def train(dict_key_for_training, max_epochs=10, learning_rate=1e-3):
model = AutoEncoder(
dimensions=2,
in_channels=1,
out_channels=1,
channels=(4, 8, 16, 32),
strides=(2, 2, 2, 2),
).to(device)
# Create loss fn and optimiser
loss_function = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), learning_rate)
epoch_loss_values = []
t = trange(
max_epochs,
desc=f"{dict_key_for_training} -- epoch 0, avg loss: inf", leave=True)
for epoch in t:
model.train()
epoch_loss = 0
step = 0
for batch_data in train_loader:
step += 1
inputs = batch_data[dict_key_for_training].to(device)
optimizer.zero_grad()
outputs = model(inputs)
loss = loss_function(outputs, batch_data['orig'].to(device))
loss.backward()
optimizer.step()
epoch_loss += loss.item()
epoch_loss /= step
epoch_loss_values.append(epoch_loss)
t.set_description(
f"{dict_key_for_training} -- epoch {epoch + 1}"
+ f", average loss: {epoch_loss:.4f}")
return model, epoch_loss_values
max_epochs = 50
training_types = ['orig', 'gaus', 's&p']
models = []
epoch_losses = []
for training_type in training_types:
model, epoch_loss = train(training_type, max_epochs=max_epochs)
models.append(model)
epoch_losses.append(epoch_loss)
plt.figure()
plt.title("Epoch Average Loss")
plt.xlabel("epoch")
for y, label in zip(epoch_losses, training_types):
x = list(range(1, len(y) + 1))
line, = plt.plot(x, y)
line.set_label(label)
plt.legend()
# +
data = get_single_im(test_ds)
recons = []
for model, training_type in zip(models, training_types):
im = data[training_type]
recon = model(im.to(device)).detach().cpu()
recons.append(recon)
plot_ims(
[data['orig'], data['gaus'], data['s&p']] + recons,
titles=['orig', 'Gaussian', 'S&P'] +
["recon w/\n" + x for x in training_types],
shape=(2, len(training_types)))
# -
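# +
# A quick quantitative check (a sketch): mean-squared error of each model's reconstruction
# against the clean image; lower is better.
for model, training_type in zip(models, training_types):
    with torch.no_grad():
        recon = model(data[training_type].to(device)).cpu()
    mse = torch.mean((recon - data["orig"]) ** 2).item()
    print(f"{training_type}: reconstruction MSE vs. original = {mse:.5f}")
# -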
# ### Cleanup data directory
#
# Remove directory if a temporary was used.
# + pycharm={"is_executing": true}
if directory is None:
shutil.rmtree(root_dir)
| modules/autoencoder_mednist.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/GoogleCloudPlatform/training-data-analyst/blob/master/courses/fast-and-lean-data-science/05_MNIST_Estimator_Tensorboard_playground.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="VXoIW4SOyY8y" colab_type="text"
# Please run this notebook on a GPU backend. Porting the model from Estimator to TPUEstimator is needed for it to work on TPU.
# + [markdown] id="xqLjB2cy5S7m" colab_type="text"
# ## MNIST with Tensorboard, using the Estimator API
#
# Fun with handwritten digits and tensorboard.
#
# This notebook will show you how to follow your training and validation curves in Tensorboard and what you can do to address the issues you see there.
# + [markdown] id="qpiJj8ym0v0-" colab_type="text"
# ### Imports
# + id="AoilhmYe1b5t" colab_type="code" outputId="96b4256e-3c8b-4065-f862-c1feb370a233" colab={"base_uri": "https://localhost:8080/", "height": 34}
import os, re, math, json, shutil, pprint, datetime
import PIL.Image, PIL.ImageFont, PIL.ImageDraw
import numpy as np
import tensorflow as tf
from matplotlib import pyplot as plt
from tensorflow.python.platform import tf_logging
print("Tensorflow version " + tf.__version__)
# + [markdown] id="aBDGQWkbLGvh" colab_type="text"
# ### Parameters
# + id="_V_VbLELLJCS" colab_type="code" colab={}
BATCH_SIZE = 32 #@param {type:"integer"}
BUCKET = '' #@param {type:"string"}
assert re.search(r'gs://.+', BUCKET), 'You need a GCS bucket for your Tensorboard logs. Head to http://console.cloud.google.com/storage and create one.'
training_images_file = 'gs://mnist-public/train-images-idx3-ubyte'
training_labels_file = 'gs://mnist-public/train-labels-idx1-ubyte'
validation_images_file = 'gs://mnist-public/t10k-images-idx3-ubyte'
validation_labels_file = 'gs://mnist-public/t10k-labels-idx1-ubyte'
# + [markdown] id="Lzd6Qi464PsA" colab_type="text"
# ### Colab-only auth
# + id="MPx0nvyUnvgT" colab_type="code" cellView="both" colab={}
# backend identification
IS_COLAB_BACKEND = 'COLAB_GPU' in os.environ # this is always set on Colab, the value is 0 or 1 depending on GPU presence
# Auth on Colab
# Little wrinkle: without auth, Colab will be extremely slow in accessing data from a GCS bucket, even a public one
if IS_COLAB_BACKEND:
from google.colab import auth
auth.authenticate_user()
# + id="qhdz68Xm3Z4Z" colab_type="code" cellView="form" colab={}
#@title visualization utilities [RUN ME]
"""
This cell contains helper functions used for visualization
and downloads only. You can skip reading it. There is very
little useful Keras/Tensorflow code here.
"""
# Matplotlib config
plt.rc('image', cmap='gray_r')
plt.rc('grid', linewidth=0)
plt.rc('xtick', top=False, bottom=False, labelsize='large')
plt.rc('ytick', left=False, right=False, labelsize='large')
plt.rc('axes', facecolor='F8F8F8', titlesize="large", edgecolor='white')
plt.rc('text', color='a8151a')
plt.rc('figure', facecolor='F0F0F0')
# Matplotlib fonts
MATPLOTLIB_FONT_DIR = os.path.join(os.path.dirname(plt.__file__), "mpl-data/fonts/ttf")
# pull a batch from the datasets. This code is not very nice, it gets much better in eager mode (TODO)
def dataset_to_numpy_util(training_dataset, validation_dataset, N):
# get one batch from each: 10000 validation digits, N training digits
unbatched_train_ds = training_dataset.apply(tf.data.experimental.unbatch())
v_images, v_labels = validation_dataset.make_one_shot_iterator().get_next()
t_images, t_labels = unbatched_train_ds.batch(N).make_one_shot_iterator().get_next()
# Run once, get one batch. Session.run returns numpy results
with tf.Session() as ses:
(validation_digits, validation_labels,
training_digits, training_labels) = ses.run([v_images, v_labels, t_images, t_labels])
# these were one-hot encoded in the dataset
validation_labels = np.argmax(validation_labels, axis=1)
training_labels = np.argmax(training_labels, axis=1)
return (training_digits, training_labels,
validation_digits, validation_labels)
# create digits from local fonts for testing
def create_digits_from_local_fonts(n):
font_labels = []
img = PIL.Image.new('LA', (28*n, 28), color = (0,255)) # format 'LA': black in channel 0, alpha in channel 1
font1 = PIL.ImageFont.truetype(os.path.join(MATPLOTLIB_FONT_DIR, 'DejaVuSansMono-Oblique.ttf'), 25)
font2 = PIL.ImageFont.truetype(os.path.join(MATPLOTLIB_FONT_DIR, 'STIXGeneral.ttf'), 25)
d = PIL.ImageDraw.Draw(img)
for i in range(n):
font_labels.append(i%10)
d.text((7+i*28,0 if i<10 else -4), str(i%10), fill=(255,255), font=font1 if i<10 else font2)
font_digits = np.array(img.getdata(), np.float32)[:,0] / 255.0 # black in channel 0, alpha in channel 1 (discarded)
font_digits = np.reshape(np.stack(np.split(np.reshape(font_digits, [28, 28*n]), n, axis=1), axis=0), [n, 28*28])
return font_digits, font_labels
# utility to display a row of digits with their predictions
def display_digits(digits, predictions, labels, title, n):
plt.figure(figsize=(13,3))
digits = np.reshape(digits, [n, 28, 28])
digits = np.swapaxes(digits, 0, 1)
digits = np.reshape(digits, [28, 28*n])
plt.yticks([])
plt.xticks([28*x+14 for x in range(n)], predictions)
for i,t in enumerate(plt.gca().xaxis.get_ticklabels()):
if predictions[i] != labels[i]: t.set_color('red') # bad predictions in red
plt.imshow(digits)
plt.grid(None)
plt.title(title)
# utility to display multiple rows of digits, sorted by unrecognized/recognized status
def display_top_unrecognized(digits, predictions, labels, n, lines):
idx = np.argsort(predictions==labels) # sort order: unrecognized first
for i in range(lines):
display_digits(digits[idx][i*n:(i+1)*n], predictions[idx][i*n:(i+1)*n], labels[idx][i*n:(i+1)*n],
"{} sample validation digits out of {} with bad predictions in red and sorted first".format(n*lines, len(digits)) if i==0 else "", n)
# utility to display training and validation curves
def display_training_curves(training, validation, title, subplot):
if subplot%10==1: # set up the subplots on the first call
plt.subplots(figsize=(10,10), facecolor='#F0F0F0')
plt.tight_layout()
ax = plt.subplot(subplot)
ax.grid(linewidth=1, color='white')
ax.plot(training)
ax.plot(validation)
ax.set_title('model '+ title)
ax.set_ylabel(title)
ax.set_xlabel('epoch')
ax.legend(['train', 'valid.'])
# + [markdown] id="Lz1Zknfk4qCx" colab_type="text"
# ### tf.data.Dataset: parse files and prepare training and validation datasets
# Please read the [best practices for building](https://www.tensorflow.org/guide/performance/datasets) input pipelines with tf.data.Dataset
# + id="ZE8dgyPC1_6m" colab_type="code" colab={}
def read_label(tf_bytestring):
label = tf.decode_raw(tf_bytestring, tf.uint8)
label = tf.reshape(label, [])
label = tf.one_hot(label, 10)
return label
def read_image(tf_bytestring):
image = tf.decode_raw(tf_bytestring, tf.uint8)
image = tf.cast(image, tf.float32)/256.0
image = tf.reshape(image, [28*28])
return image
def load_dataset(image_file, label_file):
imagedataset = tf.data.FixedLengthRecordDataset(image_file, 28*28, header_bytes=16)
imagedataset = imagedataset.map(read_image, num_parallel_calls=16)
labelsdataset = tf.data.FixedLengthRecordDataset(label_file, 1, header_bytes=8)
labelsdataset = labelsdataset.map(read_label, num_parallel_calls=16)
dataset = tf.data.Dataset.zip((imagedataset, labelsdataset))
return dataset
def get_training_dataset(image_file, label_file, batch_size):
dataset = load_dataset(image_file, label_file)
dataset = dataset.cache() # this small dataset can be entirely cached in RAM, for TPU this is important to get good performance from such a small dataset
dataset = dataset.shuffle(5000, reshuffle_each_iteration=True)
dataset = dataset.repeat() # Mandatory for Keras for now
dataset = dataset.batch(batch_size, drop_remainder=True) # drop_remainder is important on TPU, batch size must be fixed
dataset = dataset.prefetch(10) # fetch next batches while training on the current one
return dataset
def get_validation_dataset(image_file, label_file):
dataset = load_dataset(image_file, label_file)
dataset = dataset.cache() # this small dataset can be entirely cached in RAM, for TPU this is important to get good performance from such a small dataset
dataset = dataset.batch(10000, drop_remainder=True) # 10000 items in eval dataset, all in one batch
dataset = dataset.repeat() # Mandatory for Keras for now
return dataset
# instantiate the datasets
training_dataset = get_training_dataset(training_images_file, training_labels_file, BATCH_SIZE)
validation_dataset = get_validation_dataset(validation_images_file, validation_labels_file)
# In Estimator, we will need a function that returns the dataset
training_input_fn = lambda: get_training_dataset(training_images_file, training_labels_file, BATCH_SIZE)
validation_input_fn = lambda: get_validation_dataset(validation_images_file, validation_labels_file)
# + [markdown] id="_fXo6GuvL3EB" colab_type="text"
# ### Let's have a look at the data
# + colab_type="code" outputId="ab5b48d5-b38a-4c2b-8109-35a948d4554b" id="DaWNgUPKLz_9" colab={"base_uri": "https://localhost:8080/", "height": 181}
N = 24
(training_digits, training_labels,
validation_digits, validation_labels) = dataset_to_numpy_util(training_dataset, validation_dataset, N)
display_digits(training_digits, training_labels, training_labels, "training digits and their labels", N)
display_digits(validation_digits[:N], validation_labels[:N], validation_labels[:N], "validation digits and their labels", N)
font_digits, font_labels = create_digits_from_local_fonts(N)
# + [markdown] id="KIc0oqiD40HC" colab_type="text"
# ### Estimator model [WORK REQUIRED]
# If you are not sure what cross-entropy, dropout, softmax or batch-normalization mean, head here for a crash-course: [Tensorflow and deep learning without a PhD](https://github.com/GoogleCloudPlatform/tensorflow-without-a-phd/#featured-code-sample)
# + id="56y8UNFQIVwj" colab_type="code" colab={}
# This model trains to 99.4% (sometimes 99.5%) accuracy in 10 epochs
def model_fn(features, labels, mode):
is_training = (mode == tf.estimator.ModeKeys.TRAIN)
x = features
y = tf.reshape(x, [-1, 28, 28, 1])
###
# YOUR LAYERS HERE (remove this single dense layer and go wild)
# LAYERS YOU CAN TRY:
# y = tf.layers.Conv2D(filters=6, kernel_size=3, padding='same')(y)
# y = tf.layers.Dense(200)(y)
# y = tf.layers.MaxPooling2D(pool_size=2)(y)
  # y = tf.layers.Dropout(0.5)(y, training=is_training)
#
y = tf.layers.Flatten()(y)
y = tf.layers.Dense(200)(y)
###
logits = tf.layers.Dense(10)(y)
predictions = tf.nn.softmax(logits)
classes = tf.math.argmax(predictions, axis=-1)
if (mode != tf.estimator.ModeKeys.PREDICT):
loss = tf.losses.softmax_cross_entropy(labels, logits)
step = tf.train.get_or_create_global_step()
###
  # YOUR LEARNING RATE SCHEDULE HERE
#
# lr = 0.0001 + tf.train.exponential_decay(0.01, step, 2000, 1/math.e)
# tf.summary.scalar("learn_rate", lr) # you can visualize it in Tensorboard
###
optimizer = tf.train.AdamOptimizer(0.01)
# little wrinkle: batch norm uses running averages which need updating after each batch. create_train_op does it, optimizer.minimize does not.
train_op = tf.contrib.training.create_train_op(loss, optimizer)
#train_op = optimizer.minimize(loss, tf.train.get_or_create_global_step())
metrics = {'accuracy': tf.metrics.accuracy(classes, tf.math.argmax(labels, axis=-1))}
else:
loss = train_op = metrics = None # None of these can be computed in prediction mode because labels are not available
return tf.estimator.EstimatorSpec(
mode=mode,
predictions={"predictions": predictions, "classes": classes}, # name these fields as you like
loss=loss,
train_op=train_op,
eval_metric_ops=metrics
)
# little wrinkle: tf.keras.layers can normally be used in an Estimator but tf.keras.layers.BatchNormalization does not work
# in an Estimator environment. Using TF layers everywhere for consistency. tf.layers and tf.keras.layers are carbon copies of each other.
# + id="DeIxkrv9Wihg" colab_type="code" colab={}
# Called once when the model is saved. This function produces a Tensorflow
# graph of operations that will be prepended to your model graph. When
# your model is deployed as a REST API, the API receives data in JSON format,
# parses it into Tensors, then sends the tensors to the input graph generated by
# this function. The graph can transform the data so it can be sent into your
# model input_fn. You can do anything you want here as long as you do it with
# tf.* functions that produce a graph of operations.
def serving_input_fn():
# placeholder for the data received by the API (already parsed, no JSON decoding necessary,
  # but the JSON must contain one or multiple 'serving_input' key(s) with 28x28 greyscale images as content.)
inputs = {"serving_input": tf.placeholder(tf.float32, [None, 28, 28])} # the shape of this dict should match the shape of your JSON
features = inputs['serving_input'] # no transformation needed
return tf.estimator.export.TensorServingInputReceiver(features, inputs) # features are the features needed by your model_fn
# Return a ServingInputReceiver if your features are a dictionary of Tensors, TensorServingInputReceiver if they are a straight Tensor
# + [markdown] id="RxpRgF874-ix" colab_type="text"
# ### Train and validate the model
# + id="TTwH_P-ZJ_xx" colab_type="code" colab={}
EPOCHS = 3
steps_per_epoch = 60000 // BATCH_SIZE # 60,000 images in training dataset
MODEL_EXPORT_NAME = "mnist" # name for exporting saved model
tf_logging.set_verbosity(tf_logging.INFO)
now = datetime.datetime.now()
MODEL_DIR = BUCKET+"/mnistjobs/job" + "-{}-{:02d}-{:02d}-{:02d}:{:02d}:{:02d}".format(now.year, now.month, now.day, now.hour, now.minute, now.second)
training_config = tf.estimator.RunConfig(model_dir=MODEL_DIR, save_summary_steps=10, save_checkpoints_steps=steps_per_epoch, log_step_count_steps=steps_per_epoch/4)
export_latest = tf.estimator.LatestExporter(MODEL_EXPORT_NAME, serving_input_receiver_fn=serving_input_fn)
estimator = tf.estimator.Estimator(model_fn=model_fn, config=training_config)
train_spec = tf.estimator.TrainSpec(training_input_fn, max_steps=EPOCHS*steps_per_epoch)
eval_spec = tf.estimator.EvalSpec(validation_input_fn, steps=1, exporters=export_latest, throttle_secs=0) # no eval throttling: evaluates after each checkpoint
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
tf_logging.set_verbosity(tf_logging.WARN)
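# +
# Optional (a sketch): inspect the training and validation curves in TensorBoard. The command
# blocks the cell, so it is usually run from a separate terminal; TensorBoard can read the
# gs:// log directory once you are authenticated.
# !tensorboard --logdir {MODEL_DIR}
# -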
# + [markdown] id="9jFVovcUUVs1" colab_type="text"
# ### Visualize predictions
# + id="w12OId8Mz7dF" colab_type="code" outputId="5d4f55f7-105e-4d6a-d05c-30489b9ee7b7" colab={"base_uri": "https://localhost:8080/", "height": 571}
# recognize digits from local fonts
predictions = estimator.predict(lambda: tf.data.Dataset.from_tensor_slices(font_digits).batch(N),
yield_single_examples=False) # the returned value is a generator that will yield one batch of predictions per next() call
predicted_font_classes = next(predictions)['classes']
display_digits(font_digits, predicted_font_classes, font_labels, "predictions from local fonts (bad predictions in red)", N)
# recognize validation digits
predictions = estimator.predict(validation_input_fn,
yield_single_examples=False) # the returned value is a generator that will yield one batch of predictions per next() call
predicted_labels = next(predictions)['classes']
display_top_unrecognized(validation_digits, predicted_labels, validation_labels, N, 7)
# + [markdown] id="5tzVi39ShrEL" colab_type="text"
# ## Deploy the trained model to ML Engine
#
# Push your trained model to production on ML Engine for a serverless, autoscaled, REST API experience.
#
# You will need a GCS bucket and a GCP project for this.
# Models deployed on ML Engine autoscale to zero if not used. There will be no ML Engine charges after you are done testing.
# Google Cloud Storage incurs charges. Empty the bucket after deployment if you want to avoid these. Once the model is deployed, the bucket is not useful anymore.
# + [markdown] id="3Y3ztMY_toCP" colab_type="text"
# ### Configuration
# + colab_type="code" cellView="both" id="VoN13WTwtPVp" colab={}
PROJECT = "" #@param {type:"string"}
NEW_MODEL = True #@param {type:"boolean"}
MODEL_NAME = "estimator_mnist" #@param {type:"string"}
MODEL_VERSION = "v0" #@param {type:"string"}
assert PROJECT, 'For this part, you need a GCP project. Head to http://console.cloud.google.com/ and create one.'
export_path = os.path.join(MODEL_DIR, 'export', MODEL_EXPORT_NAME)
last_export = sorted(tf.gfile.ListDirectory(export_path))[-1]
export_path = os.path.join(export_path, last_export)
print('Saved model directory found: ', export_path)
# + [markdown] id="zy3T3zk0u2J0" colab_type="text"
# ### Deploy the model
# This uses the command-line interface. You can do the same thing through the ML Engine UI at https://console.cloud.google.com/mlengine/models
#
# + id="nGv3ITiGLPL3" colab_type="code" colab={}
# Create the model
if NEW_MODEL:
# !gcloud ml-engine models create {MODEL_NAME} --project={PROJECT} --regions=us-central1
# + id="o3QtUowtOAL-" colab_type="code" colab={}
# Create a version of this model (you can add --async at the end of the line to make this call non blocking)
# Additional config flags are available: https://cloud.google.com/ml-engine/reference/rest/v1/projects.models.versions
# You can also deploy a model that is stored locally by providing a --staging-bucket=... parameter
# !echo "Deployment takes a couple of minutes. You can watch your deployment here: https://console.cloud.google.com/mlengine/models/{MODEL_NAME}"
# !gcloud ml-engine versions create {MODEL_VERSION} --model={MODEL_NAME} --origin={export_path} --project={PROJECT} --runtime-version=1.10
# + [markdown] id="jE-k1Zn6kU2Z" colab_type="text"
# ### Test the deployed model
# Your model is now available as a REST API. Let us try to call it. The cells below use the "gcloud ml-engine"
# command line tool but any tool that can send a JSON payload to a REST endpoint will work.
# + id="zZCt0Ke2QDer" colab_type="code" colab={}
# prepare digits to send to online prediction endpoint
digits = np.concatenate((font_digits, validation_digits[:100-N]))
labels = np.concatenate((font_labels, validation_labels[:100-N]))
with open("digits.json", "w") as f:
for digit in digits:
# the format for ML Engine online predictions is: one JSON object per line
data = json.dumps({"serving_input": digit.tolist()}) # "serving_input" because that is what you defined in your serving_input_fn: {"serving_input": tf.placeholder(tf.float32, [None, 28, 28])}
f.write(data+'\n')
# + id="n6PqhQ8RQ8bp" colab_type="code" colab={}
# Request online predictions from deployed model (REST API) using the "gcloud ml-engine" command line.
# predictions = !gcloud ml-engine predict --model={MODEL_NAME} --json-instances digits.json --project={PROJECT} --version {MODEL_VERSION}
predictions = np.array([int(p.split('[')[0]) for p in predictions[1:]]) # first line is the name of the input layer: drop it, parse the rest
display_top_unrecognized(digits, predictions, labels, N, 100//N)
# + [markdown] id="XXSk0bENYB7-" colab_type="text"
# ## License
# + [markdown] id="hleIN5-pcr0N" colab_type="text"
#
#
# ---
#
#
# author: <NAME><br>
# twitter: @martin_gorner
#
#
# ---
#
#
# Copyright 2018 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#
# ---
#
#
# This is not an official Google product but sample code provided for an educational purpose
#
| courses/fast-and-lean-data-science/05_MNIST_Estimator_Tensorboard_playground.ipynb |