# MADMO <a href="https://mipt.ru/science/labs/laboratoriya-neyronnykh-sistem-i-glubokogo-obucheniya/"><img align="right" src="https://avatars1.githubusercontent.com/u/29918795?v=4&s=200" alt="DeepHackLab" style="position:relative;top:-40px;right:10px;height:100px;" /></a>

### MIPT Phystech School of Applied Mathematics and Computer Science
### Neural Networks and Deep Learning Lab (DeepHackLab)

The homework must be uploaded to the shared repository, into your personal folder.

## Homework 1
### Python Basics and the NumPy Package

---

```
import numpy as np
import random
import scipy.stats as sps
```

### Task 1

In the first task you are asked to multiply two square matrices in two ways -- without the ***numpy*** package and with it.

```
# To generate the matrices we use the random module -- it is used to create random objects.
# The sample function draws a random sample. It takes a tuple (i, j) as an argument,
# where i is the number of rows and j is the number of columns.
a = np.random.sample((1000, 1000))
b = np.random.sample((1000, 1000))

# print the rank of each matrix using np.linalg.matrix_rank.
# Use the shape attribute -- what does it show?
# ========
rank_a = np.linalg.matrix_rank(a)
print(rank_a)
print(a.shape)

print(np.linalg.matrix_rank(b))
print(b.shape)
# ========
#print(a)
#print(b)

# write matrix multiplication without NumPy here and output the result
def mult(a, b):
    rows_a = len(a)
    cols_a = len(a[0])
    rows_b = len(b)
    cols_b = len(b[0])
    if cols_a != rows_b:
        return 'Error'
    c = [[0 for row in range(cols_b)] for col in range(rows_a)]
    for i in range(rows_a):
        for j in range(cols_b):
            for k in range(cols_a):
                c[i][j] += a[i][k] * b[k][j]
    return c

def np_mult(a, b):
    # write matrix multiplication with NumPy here and output the result
    return np.dot(a, b)

%%time
# time the function without NumPy
mult(a, b)

%%time
# time the function with NumPy
np_mult(a, b)
```

### Task 2

Write a function that, for a given sequence $\{A_i\}_{i=1}^n$, builds the sequence $S_n$, where $S_k = \frac{A_1 + ... + A_k}{k}$. Again, do it both with the **NumPy** library and without it. Compare the speed and explain the result.

```
# function that solves the task with NumPy
def sec_av(A):
    return np.cumsum(A) / np.arange(1, len(A) + 1)

# function without NumPy
def stupid_sec_av(A):
    S = [0 for i in range(len(A))]
    S[0] = A[0]
    for i in range(len(A) - 1):
        S[i + 1] = A[i + 1] + S[i]
    numb = list(range(1, len(A) + 1))
    for i in range(len(A)):
        S[i] = S[i] / numb[i]
    return S

# define a sequence and check it with your functions.
# The first function should be roughly 50 times faster
A = sps.uniform.rvs(size=10 ** 7)

%time S1 = sec_av(A)
%time S2 = stupid_sec_av(A)

# check correctness:
np.abs(S1 - S2).sum()
```
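As a quick sanity check (not required by the task), it helps to verify both implementations on a sequence whose running averages are easy to compute by hand: for $A = (2, 4, 6)$ the expected result is $S = (2, 3, 4)$.

```
# hand-checkable example: S_1 = 2, S_2 = (2 + 4) / 2 = 3, S_3 = (2 + 4 + 6) / 3 = 4
A_small = [2, 4, 6]
print(sec_av(A_small))          # expected: [2. 3. 4.]
print(stupid_sec_av(A_small))   # expected: [2.0, 3.0, 4.0]
```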
### Task 3

Given an array $X$, build a new array in which all elements at odd indices are replaced by the number $a$ (1 if it is not specified). All elements of the original array at even indices must be cubed and written in reverse order relative to the positions of these elements. The array $X$ itself must remain unchanged. Finally, merge X with the transformed X and print the result in reverse order.

```
# function that solves the task with NumPy
def transformation(X, a=1):
    Y = X.copy()               # keep the original array X unchanged
    Y[1::2] = a
    Y[::2] = Y[::2][::-1] ** 3
    return Y

# function that solves the task without NumPy
def stupid_transformation(X):
    t_odd = []
    t_ev = []
    t_ev_inv = []
    Y = []
    t_odd = int(round(len(X) / 2)) * [1]
    for i in range(0, len(X), 2):
        t_ev = t_ev + [round(X[i] ** 3, 8)]
    for i in range(len(t_ev), 0, -1):
        t_ev_inv = t_ev_inv + [t_ev[i - 1]]
    for i in range(min(len(t_ev_inv), len(t_odd))):
        Y = Y + [t_ev_inv[i]] + [t_odd[i]]
    if len(t_ev_inv) > len(t_odd):
        Y = Y + [t_ev_inv[-1]]
    if len(t_ev_inv) < len(t_odd):
        Y = Y + [t_odd[-1]]
    return Y

X = sps.uniform.rvs(size=10 ** 7)

# here the NumPy code is roughly 20 times more efficient.
# if you decide to print the array without np, better look at its size first
%time S1 = transformation(X)
%time S2 = stupid_transformation(X)

# check correctness:
np.abs(S1 - S2).sum()
```

Why do the ***numpy*** methods turn out to be more efficient?

```
# They are written in C and operate on whole arrays at once instead of looping in the interpreter
```

## Extra Tasks

The extra tasks assume that you work out some ***numpy*** functions on your own in order to solve them. These tasks are not mandatory, but they can improve your rating (the exact rules for counting the extra tasks will be announced later).

### Task 4*

You are given a function of two variables, $f(x, y) = \sin(x)\cos(y)$ (it simply makes a nice 3D plot), as well as a function `draw_f()` for plotting $f(x, y)$, which takes as input a two-dimensional grid on which the function will be evaluated. You need to figure out how to build such grids (hint: it is one specific ***numpy*** function) and pass such a grid to the plotting function.

```
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
%matplotlib inline

def f(x, y):
    '''Function of two variables'''
    return np.sin(x) * np.cos(y)

def draw_f(grid_x, grid_y):
    '''Plot the function f(x, y)'''
    fig = plt.figure(figsize=(10, 8))
    ax = Axes3D(fig)
    ax.plot_surface(grid_x, grid_y, f(grid_x, grid_y), cmap='inferno')
    plt.show()

i = np.arange(-1, 1, 0.02)
grid_x, grid_y = np.meshgrid(i, i)
draw_f(grid_x, grid_y)
```

### Task 5*

Pick any image and put it into the folder with your code. When loaded, it has 3 dimensions: **(w, h, num_channels)**, where **w** is the image width in pixels, **h** is the image height in pixels, and **num_channels** is the number of channels *(R, G, B, alpha)*. You need to "unroll" the image into a one-dimensional array of size w \* h \* num_channels by writing **a single line of code**.

```
from matplotlib import pyplot as plt
%matplotlib inline

path_to_image = './image.png'
image_array = plt.imread(path_to_image)
plt.imshow(image_array);

flat_image_array = image_array.flatten()
print(len(flat_image_array))
```
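As a closing aside (not part of the assignment), the same "unrolling" can be spelled in several equivalent ways; a small sketch on a synthetic array of shape (w, h, num_channels) shows that they all produce a one-dimensional array of length w \* h \* num_channels.

```
import numpy as np

# synthetic stand-in for a loaded image: 200 x 300 pixels, 4 channels (RGBA)
fake_image = np.random.sample((200, 300, 4))

flat_a = fake_image.flatten()     # always returns a copy
flat_b = fake_image.ravel()       # returns a view when possible
flat_c = fake_image.reshape(-1)   # also returns a view when possible

print(flat_a.shape, flat_b.shape, flat_c.shape)   # (240000,) each
print(len(flat_a) == 200 * 300 * 4)               # True
```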
# Colab-pytorch-image-classification Original repo: [bentrevett/pytorch-image-classification](https://github.com/bentrevett/pytorch-image-classification) [SqueezeNet code](https://github.com/pytorch/vision/blob/master/torchvision/models/squeezenet.py): [pytorch/vision](https://github.com/pytorch/vision) My fork: [styler00dollar/Colab-image-classification](https://github.com/styler00dollar/Colab-image-classification) This colab is a combination of [this Colab](https://colab.research.google.com/github/bentrevett/pytorch-image-classification/blob/master/5_resnet.ipynb) and [my other Colab](https://colab.research.google.com/github/styler00dollar/Colab-image-classification/blob/master/5_(small)_ResNet.ipynb) to do SqueezeNet training. ``` !nvidia-smi ``` # DATASET CREATION ``` #@title Mount Google Drive from google.colab import drive drive.mount('/content/drive') print('Google Drive connected.') # copy data somehow !mkdir '/content/classification' !mkdir '/content/classification/images' !cp "/content/drive/MyDrive/classification_v2.7z" "/content/classification/images/classification.7z" %cd /content/classification/images !7z x "classification.7z" !rm -rf /content/classification/images/classification.7z #@title dataset creation TRAIN_RATIO = 0.90 #@param {type:"number"} import os import shutil from tqdm import tqdm #data_dir = os.path.join(ROOT, 'CUB_200_2011') data_dir = '/content/classification' #@param {type:"string"} images_dir = os.path.join(data_dir, 'images') train_dir = os.path.join(data_dir, 'train') test_dir = os.path.join(data_dir, 'test') if os.path.exists(train_dir): shutil.rmtree(train_dir) if os.path.exists(test_dir): shutil.rmtree(test_dir) os.makedirs(train_dir) os.makedirs(test_dir) classes = os.listdir(images_dir) for c in classes: class_dir = os.path.join(images_dir, c) images = os.listdir(class_dir) n_train = int(len(images) * TRAIN_RATIO) train_images = images[:n_train] test_images = images[n_train:] os.makedirs(os.path.join(train_dir, c), exist_ok = True) os.makedirs(os.path.join(test_dir, c), exist_ok = True) for image in tqdm(train_images): image_src = os.path.join(class_dir, image) image_dst = os.path.join(train_dir, c, image) shutil.copyfile(image_src, image_dst) for image in tqdm(test_images): image_src = os.path.join(class_dir, image) image_dst = os.path.join(test_dir, c, image) shutil.copyfile(image_src, image_dst) ``` # CALC MEANS & STDS ``` #@title print means and stds import torch import torchvision.transforms as transforms import torchvision.datasets as datasets from tqdm import tqdm train_data = datasets.ImageFolder(root = train_dir, transform = transforms.ToTensor()) means = torch.zeros(3) stds = torch.zeros(3) for img, label in tqdm(train_data): means += torch.mean(img, dim = (1,2)) stds += torch.std(img, dim = (1,2)) means /= len(train_data) stds /= len(train_data) print("\n") print(f'Calculated means: {means}') print(f'Calculated stds: {stds}') ``` # TRAIN ``` #@title import, seed, transforms, dataloader, functions, plot, model, parameter %cd /content/ from tqdm import tqdm import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim import torch.optim.lr_scheduler as lr_scheduler from torch.optim.lr_scheduler import _LRScheduler import torch.utils.data as data import torchvision.transforms as transforms import torchvision.datasets as datasets import torchvision.models as models from sklearn import decomposition from sklearn import manifold from sklearn.metrics import confusion_matrix from sklearn.metrics import 
ConfusionMatrixDisplay import matplotlib.pyplot as plt import numpy as np import copy from collections import namedtuple import os import random import shutil SEED = 1234 #@param {type:"number"} random.seed(SEED) np.random.seed(SEED) torch.manual_seed(SEED) torch.cuda.manual_seed(SEED) torch.backends.cudnn.deterministic = True train_dir = '/content/classification/train' #@param {type:"string"} test_dir = '/content/classification/test' #@param {type:"string"} pretrained_size = 256 #@param {type:"number"} pretrained_means = [0.6838, 0.6086, 0.6063] #@param {type:"raw"} pretrained_stds= [0.2411, 0.2403, 0.2306] #@param {type:"raw"} #https://github.com/mit-han-lab/data-efficient-gans/blob/master/DiffAugment_pytorch.py import torch import torch.nn.functional as F def DiffAugment(x, policy='', channels_first=True): if policy: if not channels_first: x = x.permute(0, 3, 1, 2) for p in policy.split(','): for f in AUGMENT_FNS[p]: x = f(x) if not channels_first: x = x.permute(0, 2, 3, 1) x = x.contiguous() return x def rand_brightness(x): x = x + (torch.rand(x.size(0), 1, 1, 1, dtype=x.dtype, device=x.device) - 0.5) return x def rand_saturation(x): x_mean = x.mean(dim=1, keepdim=True) x = (x - x_mean) * (torch.rand(x.size(0), 1, 1, 1, dtype=x.dtype, device=x.device) * 2) + x_mean return x def rand_contrast(x): x_mean = x.mean(dim=[1, 2, 3], keepdim=True) x = (x - x_mean) * (torch.rand(x.size(0), 1, 1, 1, dtype=x.dtype, device=x.device) + 0.5) + x_mean return x def rand_translation(x, ratio=0.125): shift_x, shift_y = int(x.size(2) * ratio + 0.5), int(x.size(3) * ratio + 0.5) translation_x = torch.randint(-shift_x, shift_x + 1, size=[x.size(0), 1, 1], device=x.device) translation_y = torch.randint(-shift_y, shift_y + 1, size=[x.size(0), 1, 1], device=x.device) grid_batch, grid_x, grid_y = torch.meshgrid( torch.arange(x.size(0), dtype=torch.long, device=x.device), torch.arange(x.size(2), dtype=torch.long, device=x.device), torch.arange(x.size(3), dtype=torch.long, device=x.device), ) grid_x = torch.clamp(grid_x + translation_x + 1, 0, x.size(2) + 1) grid_y = torch.clamp(grid_y + translation_y + 1, 0, x.size(3) + 1) x_pad = F.pad(x, [1, 1, 1, 1, 0, 0, 0, 0]) x = x_pad.permute(0, 2, 3, 1).contiguous()[grid_batch, grid_x, grid_y].permute(0, 3, 1, 2) return x def rand_cutout(x, ratio=0.5): cutout_size = int(x.size(2) * ratio + 0.5), int(x.size(3) * ratio + 0.5) offset_x = torch.randint(0, x.size(2) + (1 - cutout_size[0] % 2), size=[x.size(0), 1, 1], device=x.device) offset_y = torch.randint(0, x.size(3) + (1 - cutout_size[1] % 2), size=[x.size(0), 1, 1], device=x.device) grid_batch, grid_x, grid_y = torch.meshgrid( torch.arange(x.size(0), dtype=torch.long, device=x.device), torch.arange(cutout_size[0], dtype=torch.long, device=x.device), torch.arange(cutout_size[1], dtype=torch.long, device=x.device), ) grid_x = torch.clamp(grid_x + offset_x - cutout_size[0] // 2, min=0, max=x.size(2) - 1) grid_y = torch.clamp(grid_y + offset_y - cutout_size[1] // 2, min=0, max=x.size(3) - 1) mask = torch.ones(x.size(0), x.size(2), x.size(3), dtype=x.dtype, device=x.device) mask[grid_batch, grid_x, grid_y] = 0 x = x * mask.unsqueeze(1) return x AUGMENT_FNS = { 'color': [rand_brightness, rand_saturation, rand_contrast], 'translation': [rand_translation], 'cutout': [rand_cutout], } train_transforms = transforms.Compose([ transforms.Resize(pretrained_size), transforms.RandomRotation(5), transforms.RandomHorizontalFlip(0.5), transforms.RandomCrop(pretrained_size, padding = 10), transforms.ToTensor(), transforms.Normalize(mean = 
pretrained_means, std = pretrained_stds) ]) test_transforms = transforms.Compose([ transforms.Resize(pretrained_size), transforms.CenterCrop(pretrained_size), transforms.ToTensor(), transforms.Normalize(mean = pretrained_means, std = pretrained_stds) ]) train_data = datasets.ImageFolder(root = train_dir, transform = train_transforms) test_data = datasets.ImageFolder(root = test_dir, transform = test_transforms) VALID_RATIO = 0.90 #@param {type:"number"} n_train_examples = int(len(train_data) * VALID_RATIO) n_valid_examples = len(train_data) - n_train_examples train_data, valid_data = data.random_split(train_data, [n_train_examples, n_valid_examples]) valid_data = copy.deepcopy(valid_data) valid_data.dataset.transform = test_transforms print(f'Number of training examples: {len(train_data)}') print(f'Number of validation examples: {len(valid_data)}') print(f'Number of testing examples: {len(test_data)}') BATCH_SIZE = 32 #@param {type:"number"} train_iterator = data.DataLoader(train_data, shuffle = True, batch_size = BATCH_SIZE) valid_iterator = data.DataLoader(valid_data, batch_size = BATCH_SIZE) test_iterator = data.DataLoader(test_data, batch_size = BATCH_SIZE) def normalize_image(image): image_min = image.min() image_max = image.max() image.clamp_(min = image_min, max = image_max) image.add_(-image_min).div_(image_max - image_min + 1e-5) return image def plot_images(images, labels, classes, normalize = True): n_images = len(images) rows = int(np.sqrt(n_images)) cols = int(np.sqrt(n_images)) fig = plt.figure(figsize = (15, 15)) for i in range(rows*cols): ax = fig.add_subplot(rows, cols, i+1) image = images[i] if normalize: image = normalize_image(image) ax.imshow(image.permute(1, 2, 0).cpu().numpy()) label = classes[labels[i]] ax.set_title(label) ax.axis('off') N_IMAGES = 25 #@param {type:"number"} images, labels = zip(*[(image, label) for image, label in [train_data[i] for i in range(N_IMAGES)]]) classes = test_data.classes plot_images(images, labels, classes) def format_label(label): label = label.split('.')[-1] label = label.replace('_', ' ') label = label.title() label = label.replace(' ', '') return label test_data.classes = [format_label(c) for c in test_data.classes] classes = test_data.classes plot_images(images, labels, classes) #https://github.com/pytorch/vision/blob/master/torchvision/models/squeezenet.py import torch import torch.nn as nn import torch.nn.init as init #from .utils import load_state_dict_from_url from typing import Any #__all__ = ['SqueezeNet', 'squeezenet1_0', 'squeezenet1_1'] model_urls = { '1_0': 'https://download.pytorch.org/models/squeezenet1_0-a815701f.pth', '1_1': 'https://download.pytorch.org/models/squeezenet1_1-f364aa15.pth', } class Fire(nn.Module): def __init__( self, inplanes: int, squeeze_planes: int, expand1x1_planes: int, expand3x3_planes: int ) -> None: super(Fire, self).__init__() self.inplanes = inplanes self.squeeze = nn.Conv2d(inplanes, squeeze_planes, kernel_size=1) self.squeeze_activation = nn.ReLU(inplace=True) self.expand1x1 = nn.Conv2d(squeeze_planes, expand1x1_planes, kernel_size=1) self.expand1x1_activation = nn.ReLU(inplace=True) self.expand3x3 = nn.Conv2d(squeeze_planes, expand3x3_planes, kernel_size=3, padding=1) self.expand3x3_activation = nn.ReLU(inplace=True) def forward(self, x: torch.Tensor) -> torch.Tensor: x = self.squeeze_activation(self.squeeze(x)) return torch.cat([ self.expand1x1_activation(self.expand1x1(x)), self.expand3x3_activation(self.expand3x3(x)) ], 1) class SqueezeNet(nn.Module): def __init__( self, version: str 
= '1_0', num_classes: int = 1000 ) -> None: super(SqueezeNet, self).__init__() self.num_classes = num_classes if version == '1_0': self.features = nn.Sequential( nn.Conv2d(3, 96, kernel_size=7, stride=2), nn.ReLU(inplace=True), nn.MaxPool2d(kernel_size=3, stride=2, ceil_mode=True), Fire(96, 16, 64, 64), Fire(128, 16, 64, 64), Fire(128, 32, 128, 128), nn.MaxPool2d(kernel_size=3, stride=2, ceil_mode=True), Fire(256, 32, 128, 128), Fire(256, 48, 192, 192), Fire(384, 48, 192, 192), Fire(384, 64, 256, 256), nn.MaxPool2d(kernel_size=3, stride=2, ceil_mode=True), Fire(512, 64, 256, 256), ) elif version == '1_1': self.features = nn.Sequential( nn.Conv2d(3, 64, kernel_size=3, stride=2), nn.ReLU(inplace=True), nn.MaxPool2d(kernel_size=3, stride=2, ceil_mode=True), Fire(64, 16, 64, 64), Fire(128, 16, 64, 64), nn.MaxPool2d(kernel_size=3, stride=2, ceil_mode=True), Fire(128, 32, 128, 128), Fire(256, 32, 128, 128), nn.MaxPool2d(kernel_size=3, stride=2, ceil_mode=True), Fire(256, 48, 192, 192), Fire(384, 48, 192, 192), Fire(384, 64, 256, 256), Fire(512, 64, 256, 256), ) else: # FIXME: Is this needed? SqueezeNet should only be called from the # FIXME: squeezenet1_x() functions # FIXME: This checking is not done for the other models raise ValueError("Unsupported SqueezeNet version {version}:" "1_0 or 1_1 expected".format(version=version)) # Final convolution is initialized differently from the rest final_conv = nn.Conv2d(512, self.num_classes, kernel_size=1) self.classifier = nn.Sequential( nn.Dropout(p=0.5), final_conv, nn.ReLU(inplace=True), nn.AdaptiveAvgPool2d((1, 1)) ) for m in self.modules(): if isinstance(m, nn.Conv2d): if m is final_conv: init.normal_(m.weight, mean=0.0, std=0.01) else: init.kaiming_uniform_(m.weight) if m.bias is not None: init.constant_(m.bias, 0) def forward(self, x: torch.Tensor) -> torch.Tensor: x = self.features(x) x = self.classifier(x) return torch.flatten(x, 1) def _squeezenet(version: str, pretrained: bool, progress: bool, **kwargs: Any) -> SqueezeNet: model = SqueezeNet(version, **kwargs) if pretrained: arch = 'squeezenet' + version state_dict = load_state_dict_from_url(model_urls[arch], progress=progress) model.load_state_dict(state_dict) return model def squeezenet1_0(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> SqueezeNet: r"""SqueezeNet model architecture from the `"SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size" <https://arxiv.org/abs/1602.07360>`_ paper. Args: pretrained (bool): If True, returns a model pre-trained on ImageNet progress (bool): If True, displays a progress bar of the download to stderr """ return _squeezenet('1_0', pretrained, progress, **kwargs) def squeezenet1_1(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> SqueezeNet: r"""SqueezeNet 1.1 model from the `official SqueezeNet repo <https://github.com/DeepScale/SqueezeNet/tree/master/SqueezeNet_v1.1>`_. SqueezeNet 1.1 has 2.4x less computation and slightly fewer parameters than SqueezeNet 1.0, without sacrificing accuracy. 
Args: pretrained (bool): If True, returns a model pre-trained on ImageNet progress (bool): If True, displays a progress bar of the download to stderr """ return _squeezenet('1_1', pretrained, progress, **kwargs) """ #https://github.com/pytorch/vision/blob/master/torchvision/models/utils.py try: from torch.hub import load_state_dict_from_url except ImportError: from torch.utils.model_zoo import load_url as load_state_dict_from_url """ model_train = '1_1' #@param ["1_0", "1_1"] {type:"string"} if model_train == '1_0': model = SqueezeNet(num_classes=len(test_data.classes), version='1_0') #state_dict = load_state_dict_from_url(model_urls[model_train], # progress=True) #model.load_state_dict(state_dict) elif model_train == '1_1': model = SqueezeNet(num_classes=len(test_data.classes), version='1_1') #state_dict = load_state_dict_from_url(model_urls[model_train], # progress=True) #model.load_state_dict(state_dict) def count_parameters(model): return sum(p.numel() for p in model.parameters() if p.requires_grad) print(f'The model has {count_parameters(model):,} trainable parameters') START_LR = 1e-7 #@param {type:"number"} optimizer = optim.Adam(model.parameters(), lr=START_LR) device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') criterion = nn.CrossEntropyLoss() model = model.to(device) criterion = criterion.to(device) class LRFinder: def __init__(self, model, optimizer, criterion, device): self.optimizer = optimizer self.model = model self.criterion = criterion self.device = device torch.save(model.state_dict(), 'init_params.pt') def range_test(self, iterator, end_lr = 10, num_iter = 100, smooth_f = 0.05, diverge_th = 5): lrs = [] losses = [] best_loss = float('inf') lr_scheduler = ExponentialLR(self.optimizer, end_lr, num_iter) iterator = IteratorWrapper(iterator) for iteration in tqdm(range(num_iter)): loss = self._train_batch(iterator) #update lr lr_scheduler.step() lrs.append(lr_scheduler.get_lr()[0]) if iteration > 0: loss = smooth_f * loss + (1 - smooth_f) * losses[-1] if loss < best_loss: best_loss = loss losses.append(loss) if loss > diverge_th * best_loss: print("Stopping early, the loss has diverged") break #reset model to initial parameters model.load_state_dict(torch.load('init_params.pt')) return lrs, losses def _train_batch(self, iterator): self.model.train() self.optimizer.zero_grad() x, y = iterator.get_batch() x = x.to(self.device) y = y.to(self.device) y_pred, _ = self.model(x) loss = self.criterion(y_pred, y) loss.backward() self.optimizer.step() return loss.item() class ExponentialLR(_LRScheduler): def __init__(self, optimizer, end_lr, num_iter, last_epoch=-1): self.end_lr = end_lr self.num_iter = num_iter super(ExponentialLR, self).__init__(optimizer, last_epoch) def get_lr(self): curr_iter = self.last_epoch + 1 r = curr_iter / self.num_iter return [base_lr * (self.end_lr / base_lr) ** r for base_lr in self.base_lrs] class IteratorWrapper: def __init__(self, iterator): self.iterator = iterator self._iterator = iter(iterator) def __next__(self): try: inputs, labels = next(self._iterator) except StopIteration: self._iterator = iter(self.iterator) inputs, labels, *_ = next(self._iterator) return inputs, labels def get_batch(self): return next(self) def calculate_topk_accuracy(y_pred, y, k = 5): with torch.no_grad(): batch_size = y.shape[0] _, top_pred = y_pred.topk(k=1) top_pred = top_pred.t() correct = top_pred.eq(y.view(1, -1).expand_as(top_pred)) correct_1 = correct[:1].view(-1).float().sum(0, keepdim = True) #correct_k = correct[:k].view(-1).float().sum(0, 
keepdim = True) acc_1 = correct_1 / batch_size #acc_k = correct_k / batch_size acc_k = 0 return acc_1, acc_k def train(model, iterator, optimizer, criterion, scheduler, device, current_epoch): epoch_loss = 0 epoch_acc_1 = 0 epoch_acc_5 = 0 model.train() policy = 'color,translation,cutout' #@param {type:"string"} diffaug_activate = True #@param ["False", "True"] {type:"raw"} #https://stackoverflow.com/questions/45465031/printing-text-below-tqdm-progress-bar with tqdm(iterator, position=1, bar_format='{desc}') as desc: for (x, y) in tqdm(iterator, position=0): x = x.to(device) y = y.to(device) optimizer.zero_grad() if diffaug_activate == False: y_pred = model(x) else: y_pred = model(DiffAugment(x, policy=policy)) loss = criterion(y_pred, y) acc_1, acc_5 = calculate_topk_accuracy(y_pred, y) loss.backward() optimizer.step() scheduler.step() epoch_loss += loss.item() epoch_acc_1 += acc_1.item() #epoch_acc_5 += acc_5.item() epoch_loss /= len(iterator) epoch_acc_1 /= len(iterator) desc.set_description(f'Epoch: {current_epoch+1}') desc.set_description(f'\tTrain Loss: {epoch_loss:.3f} | Train Acc @1: {epoch_acc_1*100:6.2f}% | ' \ f'Train Acc @5: {epoch_acc_5*100:6.2f}%') return epoch_loss, epoch_acc_1, epoch_acc_5 def evaluate(model, iterator, criterion, device): epoch_loss = 0 epoch_acc_1 = 0 epoch_acc_5 = 0 model.eval() with torch.no_grad(): with tqdm(iterator, position=0, bar_format='{desc}', leave=True) as desc: for (x, y) in iterator: x = x.to(device) y = y.to(device) y_pred = model(x) loss = criterion(y_pred, y) acc_1, acc_5 = calculate_topk_accuracy(y_pred, y) epoch_loss += loss.item() epoch_acc_1 += acc_1.item() #epoch_acc_5 += acc_5.item() epoch_loss /= len(iterator) epoch_acc_1 /= len(iterator) #epoch_acc_5 /= len(iterator) desc.set_description(f'\tValid Loss: {epoch_loss:.3f} | Valid Acc @1: {epoch_acc_1*100:6.2f}% | ' \ f'Valid Acc @5: {epoch_acc_5*100:6.2f}%') return epoch_loss, epoch_acc_1, epoch_acc_5 def epoch_time(start_time, end_time): elapsed_time = end_time - start_time elapsed_mins = int(elapsed_time / 60) elapsed_secs = int(elapsed_time - (elapsed_mins * 60)) return elapsed_mins, elapsed_secs #@title lr_finder END_LR = 10 #@param {type:"number"} NUM_ITER = 100#@param {type:"number"} #100 lr_finder = LRFinder(model, optimizer, criterion, device) lrs, losses = lr_finder.range_test(train_iterator, END_LR, NUM_ITER) #@title plot_lr_finder def plot_lr_finder(lrs, losses, skip_start = 5, skip_end = 5): if skip_end == 0: lrs = lrs[skip_start:] losses = losses[skip_start:] else: lrs = lrs[skip_start:-skip_end] losses = losses[skip_start:-skip_end] fig = plt.figure(figsize = (16,8)) ax = fig.add_subplot(1,1,1) ax.plot(lrs, losses) ax.set_xscale('log') ax.set_xlabel('Learning rate') ax.set_ylabel('Loss') ax.grid(True, 'both', 'x') plt.show() plot_lr_finder(lrs, losses, skip_start = 30, skip_end = 30) #@title config FOUND_LR = 2e-4 #@param {type:"number"} """ params = [ {'params': model.conv1.parameters(), 'lr': FOUND_LR / 10}, {'params': model.bn1.parameters(), 'lr': FOUND_LR / 10}, {'params': model.layer1.parameters(), 'lr': FOUND_LR / 8}, {'params': model.layer2.parameters(), 'lr': FOUND_LR / 6}, {'params': model.layer3.parameters(), 'lr': FOUND_LR / 4}, {'params': model.layer4.parameters(), 'lr': FOUND_LR / 2}, {'params': model.fc.parameters()} ] """ #optimizer = optim.Adam(params, lr = FOUND_LR) optimizer = optim.Adam(model.parameters(), lr = FOUND_LR) EPOCHS = 100 #@param {type:"number"} STEPS_PER_EPOCH = len(train_iterator) TOTAL_STEPS = EPOCHS * STEPS_PER_EPOCH MAX_LRS = [p['lr'] 
for p in optimizer.param_groups] scheduler = lr_scheduler.OneCycleLR(optimizer, max_lr = MAX_LRS, total_steps = TOTAL_STEPS) #@title training without topk import time best_valid_loss = float('inf') best_valid_accuracy = 0 for epoch in range(EPOCHS): start_time = time.monotonic() train_loss, train_acc_1, train_acc_5 = train(model, train_iterator, optimizer, criterion, scheduler, device, epoch) valid_loss, valid_acc_1, valid_acc_5 = evaluate(model, valid_iterator, criterion, device) if valid_loss < best_valid_loss: best_valid_loss = valid_loss torch.save(model.state_dict(), 'best-validation-loss.pt') if best_valid_accuracy < valid_acc_1: best_valid_accuracy = valid_acc_1 torch.save(model.state_dict(), 'best-validation-accuracy.pt') end_time = time.monotonic() epoch_mins, epoch_secs = epoch_time(start_time, end_time) ``` ##################################################################################################### # TESTING ``` #@title Calc test loss model.load_state_dict(torch.load('best-validation-accuracy.pt')) print("best-validation-accuracy.pt") test_loss, test_acc_1, test_acc_5 = evaluate(model, test_iterator, criterion, device) print("-----------------------------") model.load_state_dict(torch.load('best-validation-loss.pt')) print("best-validation-loss.pt") test_loss, test_acc_1, test_acc_5 = evaluate(model, test_iterator, criterion, device) #@title plot_confusion_matrix def get_predictions(model, iterator): model.eval() images = [] labels = [] probs = [] with torch.no_grad(): for (x, y) in iterator: x = x.to(device) y_pred = model(x) y_prob = F.softmax(y_pred, dim = -1) top_pred = y_prob.argmax(1, keepdim = True) images.append(x.cpu()) labels.append(y.cpu()) probs.append(y_prob.cpu()) images = torch.cat(images, dim = 0) labels = torch.cat(labels, dim = 0) probs = torch.cat(probs, dim = 0) return images, labels, probs images, labels, probs = get_predictions(model, test_iterator) pred_labels = torch.argmax(probs, 1) def plot_confusion_matrix(labels, pred_labels, classes): fig = plt.figure(figsize = (50, 50)); ax = fig.add_subplot(1, 1, 1); cm = confusion_matrix(labels, pred_labels); cm = ConfusionMatrixDisplay(cm, display_labels = classes); cm.plot(values_format = 'd', cmap = 'Blues', ax = ax) fig.delaxes(fig.axes[1]) #delete colorbar plt.xticks(rotation = 90) plt.xlabel('Predicted Label', fontsize = 50) plt.ylabel('True Label', fontsize = 50) plot_confusion_matrix(labels, pred_labels, classes) #@title plot corrects = torch.eq(labels, pred_labels) incorrect_examples = [] for image, label, prob, correct in zip(images, labels, probs, corrects): if not correct: incorrect_examples.append((image, label, prob)) incorrect_examples.sort(reverse = True, key = lambda x: torch.max(x[2], dim = 0).values) def plot_most_incorrect(incorrect, classes, n_images, normalize = True): rows = int(np.sqrt(n_images)) cols = int(np.sqrt(n_images)) fig = plt.figure(figsize = (25, 20)) for i in range(rows*cols): ax = fig.add_subplot(rows, cols, i+1) image, true_label, probs = incorrect[i] image = image.permute(1, 2, 0) true_prob = probs[true_label] incorrect_prob, incorrect_label = torch.max(probs, dim = 0) true_class = classes[true_label] incorrect_class = classes[incorrect_label] if normalize: image = normalize_image(image) ax.imshow(image.cpu().numpy()) ax.set_title(f'true label: {true_class} ({true_prob:.3f})\n' \ f'pred label: {incorrect_class} ({incorrect_prob:.3f})') ax.axis('off') fig.subplots_adjust(hspace=0.4) N_IMAGES = 36 plot_most_incorrect(incorrect_examples, classes, N_IMAGES) #@title 
plot_representations def get_representations(model, iterator): model.eval() outputs = [] intermediates = [] labels = [] with torch.no_grad(): for (x, y) in iterator: x = x.to(device) y_pred, _ = model(x) outputs.append(y_pred.cpu()) labels.append(y) outputs = torch.cat(outputs, dim = 0) labels = torch.cat(labels, dim = 0) return outputs, labels outputs, labels = get_representations(model, train_iterator) def get_pca(data, n_components = 2): pca = decomposition.PCA() pca.n_components = n_components pca_data = pca.fit_transform(data) return pca_data def plot_representations(data, labels, classes, n_images = None): if n_images is not None: data = data[:n_images] labels = labels[:n_images] fig = plt.figure(figsize = (15, 15)) ax = fig.add_subplot(111) scatter = ax.scatter(data[:, 0], data[:, 1], c = labels, cmap = 'hsv') #handles, _ = scatter.legend_elements(num = None) #legend = plt.legend(handles = handles, labels = classes) output_pca_data = get_pca(outputs) plot_representations(output_pca_data, labels, classes) #@title get_tsne def get_tsne(data, n_components = 2, n_images = None): if n_images is not None: data = data[:n_images] tsne = manifold.TSNE(n_components = n_components, random_state = 0) tsne_data = tsne.fit_transform(data) return tsne_data output_tsne_data = get_tsne(outputs) plot_representations(output_tsne_data, labels, classes) #@title plot_filtered_images def plot_filtered_images(images, filters, n_filters = None, normalize = True): images = torch.cat([i.unsqueeze(0) for i in images], dim = 0).cpu() filters = filters.cpu() if n_filters is not None: filters = filters[:n_filters] n_images = images.shape[0] n_filters = filters.shape[0] filtered_images = F.conv2d(images, filters) fig = plt.figure(figsize = (30, 30)) for i in range(n_images): image = images[i] if normalize: image = normalize_image(image) ax = fig.add_subplot(n_images, n_filters+1, i+1+(i*n_filters)) ax.imshow(image.permute(1,2,0).numpy()) ax.set_title('Original') ax.axis('off') for j in range(n_filters): image = filtered_images[i][j] if normalize: image = normalize_image(image) ax = fig.add_subplot(n_images, n_filters+1, i+1+(i*n_filters)+j+1) ax.imshow(image.numpy(), cmap = 'bone') ax.set_title(f'Filter {j+1}') ax.axis('off'); fig.subplots_adjust(hspace = -0.7) N_IMAGES = 5 N_FILTERS = 7 images = [image for image, label in [train_data[i] for i in range(N_IMAGES)]] filters = model.conv1.weight.data plot_filtered_images(images, filters, N_FILTERS) #@title plot_filters #filters = model.conv1.weight.data def plot_filters(filters, normalize = True): filters = filters.cpu() n_filters = filters.shape[0] rows = int(np.sqrt(n_filters)) cols = int(np.sqrt(n_filters)) fig = plt.figure(figsize = (30, 15)) for i in range(rows*cols): image = filters[i] if normalize: image = normalize_image(image) ax = fig.add_subplot(rows, cols, i+1) ax.imshow(image.permute(1, 2, 0)) ax.axis('off') fig.subplots_adjust(wspace = -0.9) plot_filters(filters) ```
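The notebook saves the two best checkpoints but never shows how to classify a single new image with them. Below is a minimal inference sketch that reuses `model`, `test_transforms`, `test_data.classes`, and `device` defined above; the checkpoint name matches the training cell, while the image path is only a placeholder example.

```
#@title single-image inference (sketch)
from PIL import Image
import torch
import torch.nn.functional as F

def predict_image(image_path, model, transform, classes, device):
    '''Classify one image and return (label, confidence).'''
    image = Image.open(image_path).convert('RGB')
    x = transform(image).unsqueeze(0).to(device)   # add a batch dimension
    model.eval()
    with torch.no_grad():
        logits = model(x)
        probs = F.softmax(logits, dim=-1)
        conf, idx = probs.max(dim=-1)
    return classes[idx.item()], conf.item()

model.load_state_dict(torch.load('best-validation-accuracy.pt'))
label, conf = predict_image('/content/example.jpg', model, test_transforms,
                            test_data.classes, device)
print(f'predicted: {label} ({conf:.3f})')
```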
# From Variables to Classes
## A short Introduction

Python - as any programming language - has many extensions and libraries at its disposal. Basically, there are libraries for everything.

<center>But what are **libraries**? </center>

**Libraries** are a collection of methods (_small pieces of code where you put something in and get something else out_) which you can use to analyse your data, visualise your data, run models ... do anything you like.

As said, methods usually take _something_ as input. That _something_ is usually a **variable**. In the following, we will work our way from **variables** to **libraries**.

## Variables

Variables are one of the simplest types of objects in a programming language. An [object](https://en.wikipedia.org/wiki/Object_(computer_science)) is a value stored in the memory of your computer, marked by a specific identifier. Variables can have different types, such as [strings, numbers, and booleans](https://www.learnpython.org/en/Variables_and_Types). Unlike other programming languages, you do not need to declare the type of a variable, as variables are handled as objects in Python.

```python
x = 4.2            # floating point number
y = 'Hello World!' # string
z = True           # boolean
```

```
x = 4.2
print(type(x))
y = 'Hello World!'
print(type(y))
z = True
print(type(z))
```

We can use normal arithmetic operations on variables to get the results we want. With numbers, you can add, subtract, multiply, and divide, basically taking the values from the memory assigned to the variable names and performing calculations. Let's have a look at operations with numbers and strings. We leave booleans to the side for the moment. We will simply add the variables below.

```python
n1 = 7
n2 = 42
s1 = 'Looking good, '
s2 = 'you are.'
```

```
n1 = 7
n2 = 42
s1 = 'Looking good, '
s2 = 'you are.'

first_sum = n1 + n2
print(first_sum)

first_conc = s1 + s2
print(first_conc)
```

Variables can be more than just a number. If you think of an Excel spreadsheet, a variable can be the content of a single cell, or multiple cells can be combined in one variable (e.g. one column of an Excel table). So let's create a list - _a collection of variables_ - from `x`, `n1`, and `n2`. Lists in Python are created using `[ ]`. Now, if you want to calculate the sum of this list, it is really exhausting to sum up every item of this list manually.

```python
first_list = [x, n1, n2]

# a sum of a list could look like
second_sum = first_list[0] + first_list[1] + ... + first_list[n]
# where n is the last index of the list, e.g. 2 for first_list.
```

Actually, writing the second sum like this is the same as before. It would be great if this step of calculating the sum could be reused many times without writing it out. And this is what functions are for. For example, there already exists a sum function:

```python
sum(first_list)
```

```
first_list = [x, n1, n2]
second_sum = first_list[0] + first_list[1] + first_list[2]
print('manual sum {}'.format(second_sum))

# This can also be done with a function
print('sum function {}'.format(sum(first_list)))
```

## Functions

The `sum()` method we used above is a **function**. Functions (later we will call them methods) are pieces of code which take an input, perform some kind of operation, and (_optionally_) return an output.
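Besides `sum()`, Python already ships a number of such ready-made functions that take a list as input and return a single value; a quick sketch with the same list:

```
# a few more built-in functions applied to first_list = [x, n1, n2]
print(len(first_list))   # number of elements -> 3
print(min(first_list))   # smallest value     -> 4.2
print(max(first_list))   # largest value      -> 42
```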
In Python, functions are written like:

```python
def func(input):
    """
    Description of the function's content    # called the function header
    """
    some kind of operation on input          # called the function body
    return output
```

As an example, we write a `sumup` function which sums up a list.

```
def sumup(inp):
    """
    input: inp - list/array with floating point or integer numbers
    return: sumd - scalar value of the summed up list
    """
    val = 0
    for i in inp:
        val = val + i
    return val

# let's compare the implemented standard sum function with the new sumup function
sum1 = sum(first_list)
sum2 = sumup(first_list)
print("The python sum function yields {}, \nand our sumup function yields {}.".format(*(sum1, sum2)))

# summing up the numbers from 1 to 100
import numpy as np
ar_2_sum = np.linspace(1, 100, 100, dtype='i')

print("the sum of the array is: {}".format(sumup(ar_2_sum)))
```

As we see above, functions are quite practical and save a lot of time. Further, they help structure your code. Some functions are directly available in Python without any libraries or other external software. In the example above, however, you might have noticed that we `import`ed a library called `numpy`. In such libraries, functions are bundled into one package, with the advantage that you don't need to import each single function separately. Imagine you move and have to pack all your belongings: you can think of libraries as packing things with a similar purpose into the same box (= library).

## Functions to Methods as part of classes

When we talk about functions in the context of classes, we usually call them methods. But what are **classes**? [Classes](https://docs.python.org/3/tutorial/classes.html) are ways to bundle functionality together - logically, functionality with a similar purpose (or some other kind of similarity). One example could be: think of **apples**. Apples are now a class. You can apply methods to this class, such as `eat()` or `cut()`. Or more sophisticated methods, such as the various recipes using apples collected in a cookbook. The `eat()` method is straightforward. But the `cut()` method may be more interesting, since there are various ways to cut an apple. Let's assume there are two apples to be cut differently. In Python, once you have assigned a class to a variable, you have created an **instance** of that class. Then, methods are applied to that instance using the dot notation.

```python
Golden_Delicious = apple()
Yoya = apple()

Golden_Delicious.cut(4)
Yoya.cut(8)
```

The two apples Golden Delicious and Yoya are _instances_ of the class apple - real _incarnations_ of the abstract concept _apple_. The Golden Delicious is cut into 4 pieces, while the Yoya is cut into 8 pieces.

This is similar to more complex libraries, such as `scikit-learn`. In one exercise, you used the command:

```python
from sklearn.cluster import KMeans
```

which simply imports the **class** `KMeans` from the library module `sklearn.cluster`. `KMeans` comprises several methods for clustering, which you can use by calling them similarly to the apple example before. For this, you need to create an _instance_ of the `KMeans` class.

```python
...
kmeans_inst = KMeans(n_clusters=n_clusters) # first we create the instance of the KMeans class called kmeans_inst
kmeans_inst.fit(data)                       # then we apply a method to the instance kmeans_inst
...
```
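The `apple` class in the snippets above is only used, never defined. A minimal sketch of what such a class could look like is shown below; the attribute and the printed messages are purely illustrative.

```
class apple:
    '''A toy class to illustrate instances and methods.'''

    def __init__(self):
        self.pieces = 1   # a whole apple to start with

    def cut(self, n):
        '''Cut the apple into n pieces.'''
        self.pieces = n
        print('This apple is now cut into {} pieces.'.format(n))

    def eat(self):
        '''Eat whatever is left of the apple.'''
        self.pieces = 0
        print('This apple has been eaten.')

Golden_Delicious = apple()   # create two instances of the class
Yoya = apple()
Golden_Delicious.cut(4)      # call the cut() method on each instance
Yoya.cut(8)
```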
An example with `KMeans`:

```
# here we just create the data for clustering
from sklearn.datasets import make_blobs
import matplotlib.pyplot as plt
%matplotlib inline

X, y = make_blobs(n_samples=100, centers=3, cluster_std=0.5, random_state=0)
plt.scatter(X[:,0], X[:,1], s=70)

# now we create an instance of the KMeans class
from sklearn.cluster import KMeans

nr_of_clusters = 3                              # because we see 3 clusters in the plot above
kmeans_inst = KMeans(n_clusters=nr_of_clusters) # create the instance kmeans_inst
kmeans_inst.fit(X)                              # apply a method to the instance

y_predict = kmeans_inst.predict(X)              # apply another method to the instance and save it in another variable

# let's plot the predicted cluster centers colored in the cluster color
plt.scatter(X[:, 0], X[:, 1], c=y_predict, s=50, cmap='Accent')
centers = kmeans_inst.cluster_centers_          # apply the method to find the new centers of the determined clusters
plt.scatter(centers[:, 0], centers[:, 1], c='red', s=200, alpha=0.6);  # plot the cluster centers
```

## Summary

This short presentation is meant to make you familiar with the concepts of variables, functions, methods and classes. All of them are objects!

* Variables are normally declared by the user and link a value stored in the memory of your PC to a variable name. They are usually the input of functions.
* Functions are pieces of code taking an input and performing some operation on said input. Optionally, they directly return an output value.
* To facilitate the use of functions, they are sometimes bundled as methods within classes. Classes in turn can build up whole libraries in Python.
* Similar to real book libraries, Python libraries contain a collection of _recipes_ which can be applied to your data.
* In terms of apples: you own different kinds of apples. A book about apple dishes (_class_) from the library contains different recipes (_methods_) which can be used for your different apples (_instances of the class_).

## Further links

* [Data Science Handbook](https://jakevdp.github.io/PythonDataScienceHandbook/)
* [Python for Geosciences](https://github.com/koldunovn/python_for_geosciences)
* [Introduction to Python for Geoscientists](http://ggorman.github.io/Introduction-to-programming-for-geoscientists/)
* [Full Video course on Object Oriented Programming](https://www.youtube.com/watch?v=ZDa-Z5JzLYM&list=PL-osiE80TeTsqhIuOqKhwlXsIBIdSeYtc)
``` import pandas as pd import numpy as np pd.set_option('display.max_columns', None) pd.set_option('display.max_rows', None) pd.set_option('display.max_colwidth', -1) %matplotlib inline from sidecar import Sidecar from ipywidgets import IntSlider sc = Sidecar(title='Sidecar Output') sl = IntSlider(description='Some slider') # Remove unrelated columns form data and get their name folder_path = '../../../datalcdem/data/optima/dementia_18July/' patient_df = pd.read_csv(folder_path + 'optima_patients.csv') display(patient_df.head(5)) #patient_df[['MMS1', 'MMS2']].hist() patient_com_df = pd.read_csv(folder_path + 'optima_patients_comorbidities.csv').groupby(by=['patient_id', 'EPISODE_DATE'], as_index=False).agg(lambda x: x.tolist())[['patient_id', 'EPISODE_DATE', 'Comorbidity_cui']] display(patient_com_df.head(10)) patient_filt_df = pd.read_csv(folder_path + 'optima_patients_filtered.csv') #display(patient_filt_df.head(5)) patient_treat_df = pd.read_csv(folder_path + 'optima_patients_treatments.csv').groupby(by=['patient_id', 'EPISODE_DATE'], as_index=False).agg(lambda x: x.tolist())[['patient_id', 'EPISODE_DATE', 'Medication_cui']] display(patient_treat_df.head(5)) len(set(patient_com_df['patient_id'].tolist())), len(set(patient_treat_df['patient_id'].tolist())) patient_treat_df['EPISODE_DATE'] = pd.to_datetime(patient_treat_df['EPISODE_DATE']) patient_com_df['EPISODE_DATE'] = pd.to_datetime(patient_com_df['EPISODE_DATE']) patient_com_treat_df = pd.merge(patient_com_df, patient_treat_df,on=['patient_id', 'EPISODE_DATE'], how='outer') #pd.concat([patient_com_df, patient_treat_df], keys=['patient_id', 'EPISODE_DATE'], ignore_index=True, sort=False) #patient_com_df.append(patient_treat_df, sort=False) #pd.concat([patient_com_df, patient_treat_df], axis=0, sort=False) #patient_treat_com_df = patient_treat_df.join(patient_com_df, on=['patient_id', 'EPISODE_DATE'], how='outer') print (patient_com_treat_df.shape) print (len(set(patient_com_treat_df['patient_id'].tolist()))) patient_com_treat_df.sort_values(by=['patient_id', 'EPISODE_DATE'],axis=0, inplace=True, ascending=True) patient_com_treat_df.reset_index(drop=True, inplace=True) patient_com_treat_df.head(10) patient_com_treat_df.to_csv('../../../datalcdem/data/optima/optima_ahmad/patient_com_treat_df.csv') folder_path = '../../../datalcdem/data/optima/dementia_18July/' df_datarequest = pd.read_excel(folder_path+'Data_Request_Jan_2019_final.xlsx') df_datarequest.head(5) df_datarequest_mmse = df_datarequest[['GLOBAL_PATIENT_DB_ID', 'Age At Episode', 'EPISODE_DATE', 'CAMDEX SCORES: MINI MENTAL SCORE']] df_datarequest_mmse_1 = df_datarequest_mmse.rename(columns={'GLOBAL_PATIENT_DB_ID':'patient_id'}) df_datarequest_mmse_1.head(10) #patient_com_treat_df.astype('datetime') patient_com_treat_df['EPISODE_DATE'] = pd.to_datetime(patient_com_treat_df['EPISODE_DATE']) print (df_datarequest_mmse_1.dtypes, patient_com_treat_df.dtypes) patient_com_treat_df = pd.merge(patient_com_treat_df,df_datarequest_mmse_1,on=['patient_id', 'EPISODE_DATE'], how='left') patient_com_treat_df.shape, patient_com_treat_df.head(10) len(set (patient_com_treat_df['patient_id'].tolist())) patient_com_treat_df.sort_values(by=['patient_id', 'EPISODE_DATE'],axis=0, inplace=True, ascending=True) patient_com_treat_df.head(20) patient_com_treat_df.reset_index(inplace=True, drop=True) patient_com_treat_df.head(5) def setLineNumber(lst): lst_dict = {ide:0 for ide in lst} lineNumber_list = [] for idx in lst: if idx in lst_dict: lst_dict[idx] = lst_dict[idx] + 1 
lineNumber_list.append(lst_dict[idx]) return lineNumber_list patient_com_treat_df['lineNumber'] = setLineNumber(patient_com_treat_df['patient_id'].tolist()) patient_com_treat_df.tail(20) df = patient_com_treat_df id_dict = {i:0 for i in df['patient_id'].tolist()} for x in df['patient_id'].tolist(): if x in id_dict: id_dict[x]=id_dict[x]+1 line_updated = [int(j) for i in id_dict.values() for j in range(1,i+1)] print (line_updated[0:10]) df.update(pd.Series(line_updated, name='lineNumber'),errors='ignore') display(df.head(20)) #patients merging based on id and creating new columns r = df['lineNumber'].max() print ('Max line:',r) l = [df[df['lineNumber']==i] for i in range(1, int(r+1))] print('Number of Dfs to merge: ',len(l)) df_new = pd.DataFrame() tmp_id = [] for i, df_l in enumerate(l): df_l = df_l[~df_l['patient_id'].isin(tmp_id)] for j, df_ll in enumerate(l[i+1:]): #df_l = df_l.merge(df_ll, on='id', how='left', suffix=(str(j), str(j+1))) #suffixe is not working #print (j) df_l = df_l.join(df_ll.set_index('patient_id'), on='patient_id', rsuffix='_'+str(j+1)) tmp_id = tmp_id + df_l['patient_id'].tolist() #display(df_l) df_new = df_new.append(df_l, ignore_index=True, sort=False) display(df_new.head(20)) display(df_new[['patient_id']+[col for col in df_new.columns if 'line' in col or 'DATE' in col]].head(10)) fltr_linnum = ['_'+str(i) for i in range(10, 27)] print (fltr_linnum) df_new.drop(columns=[col for col in df_new.columns for i in fltr_linnum if i in col],inplace=True) df_new.to_csv(folder_path+'dementialTreatmentLine_preData_line_episode.csv', index=False) df_new = df_new.drop([col for col in df_new.columns if 'lineNumber' in col or 'EPISODE_DATE' in col], axis=1).reset_index(drop=True) df_new.to_csv(folder_path+'dementialTreatmentLine_preData.csv', index=False) # Calculate matching intial episode in the data ''' df_episode= pd.read_csv('../../../datalcdem/data/optima/dementialTreatmentLine_preData_line_episode.csv') df_patients = pd.read_csv('../../../datalcdem/data/optima/patients.csv') display(df_episode.columns, df_patients.columns) df_pat_ep = pd.merge(df_episode[['patient_id', 'EPISODE_DATE']], df_patients[['patient_id', 'epDateInicial', 'mmseInicial']]) df_episode.shape, df_patients.shape, df_pat_ep.shape df_pat_ep['dateEqual']=df_pat_ep['EPISODE_DATE']==df_pat_ep['epDateInicial'] display(sum(df_pat_ep['dateEqual'].tolist())) df_pat_ep.head(10) display(sum(df_pat_ep['mmseInicial']<24))''' df_new.head(10) # Take Some other features from API df_patient_api = pd.read_csv(folder_path+'patients.csv') display(df_patient_api.head(10)) df_patient_api = df_patient_api[['patient_id', 'gender', 'dementia', 'smoker', 'alcohol', 'education', 'bmi', 'weight', 'apoe']] display(df_patient_api.head(10)) display(df_new.head(10)) df_patient_new = df_patient_api.merge(df_new, on=['patient_id'], how='inner') df_patient_new.head(10) df_patient_new.to_csv(folder_path+'patients_new.csv', index=False) def removeNANvalues(lst): return lst[~numpy.isnan(lst)] comorbidity_cui_lst = df_patient_new[[col for col in df_patient_new.columns if 'Comorbidity_cui' in col]].values.flatten() medication_cui_lst = df_patient_new[[col for col in df_patient_new.columns if 'Medication_cui' in col]].values.flatten() x = x[~numpy.isnan(x)] medication_cui_lst [val for lst_1 in medication_cui.flatten() for val in lst_1] ```
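The cells above repeatedly use the same pattern: collapse episode-level rows into per-(patient_id, EPISODE_DATE) lists and then outer-merge the comorbidity and treatment tables. Below is a minimal, self-contained sketch of that pattern on synthetic data (all patient ids, dates, and CUI codes are invented), which may make the intent easier to follow.

```
import pandas as pd

# tiny synthetic stand-ins for the comorbidity and treatment tables
com = pd.DataFrame({'patient_id':      [1, 1, 2],
                    'EPISODE_DATE':    ['2010-01-01', '2010-01-01', '2011-05-02'],
                    'Comorbidity_cui': ['C001', 'C002', 'C003']})
treat = pd.DataFrame({'patient_id':     [1, 2],
                      'EPISODE_DATE':   ['2010-01-01', '2012-03-03'],
                      'Medication_cui': ['M010', 'M020']})

# 1) collapse multiple rows per episode into a list, as done for the real tables above
com_g = (com.groupby(['patient_id', 'EPISODE_DATE'], as_index=False)
            .agg(lambda x: x.tolist()))
treat_g = (treat.groupby(['patient_id', 'EPISODE_DATE'], as_index=False)
                .agg(lambda x: x.tolist()))

# 2) outer-merge so episodes with only comorbidities or only medications are kept
merged = pd.merge(com_g, treat_g, on=['patient_id', 'EPISODE_DATE'], how='outer')
merged = merged.sort_values(['patient_id', 'EPISODE_DATE']).reset_index(drop=True)
print(merged)
```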
``` import numpy as np import matplotlib.pyplot as plt from sklearn import datasets from tqdm import tqdm from collections import defaultdict from itertools import combinations import re import time ### You may import any Python's standard library here (Do not import other external libraries) ### pass_test1_1_1 = False pass_test1_1_2 = False pass_test1_2 = False pass_test2_1 = False pass_test2_2 = False ``` ## Implementing LSH algorithm ### 0. Dataset #### 0.1 Import 20-news dataset ``` newsgroup_dataset = datasets.fetch_20newsgroups(data_home='./dataset/', subset='train', remove=('headers', 'footers', 'quotes'), download_if_missing=True) raw_documents = newsgroup_dataset['data'][:] len(raw_documents) raw_documents[0] ``` #### 0.2 Preprocess the documents ``` K = 5 # number of word tokens to shinlge def preprocess(documents): processed_words = defaultdict(list) cnt = 0 for doc in documents: # first, filter out some uncesseary symbols like punctuations doc = re.sub('\/|\-|\'|\@|\%|\$|\#|\,|\(|\)|\}|\"|\{|\?|\.|\!|\;|\:', '', doc) # second, split the document into the words for word in doc.split(): # third, let word to be the lower-case if word.isalpha(): processed_words[cnt].append(word.lower()) # fourth, filter out the articles that has less than k shingles if len(processed_words[cnt]) < K: continue else: processed_words[cnt] = ' '.join(processed_words[cnt]) cnt += 1 return list(processed_words.values()) documents = preprocess(raw_documents) del raw_documents len(documents) documents[0] ``` ### 1. Shingling ``` ######################################################################################################################## # Programming 1 [15pt] # # In this section, you will implement the shingling algorithm to convert the document into the characteristic matrix. # # However, since storing the whole characteristic matrix in the form of a dense matrix is expensivein terms of space, # # your implementation should store the characteristic matrix in the form of a dictionary. 
# # # # i) get the all unique shingles from the documents [10pt] # # ii) create the dictionary that maps each document to the list of shingles [5pt] # # # # Note that, shingling is divided into 2-steps just for the readability of the algorithm # # # ######################################################################################################################## ``` #### 1.1 Get Shingles from the documents ``` def get_shingles(documents): ###################################################################################### # Programming 1.1 [10pt] # # Implement 'get_shingles' function to get 1-singles from the preprocessed documents # # You should especially be take care of your algorithm's computational efficiency # # # # Parameters: # # documents (dict) # # # # Returns: # # shingles (set) set of tuples where each element is a k-shingle # # ex) shingles = {('its', 'hard', 'to', 'say', 'whether'), # # ('known', 'bugs', 'in', 'the', 'warning') ...} # ###################################################################################### shingles = set() for doc in documents: doc_split = doc.split() for i in range(len(doc_split) - (K-1)): shingles.add(tuple(doc_split[i:i+K])) return shingles start = time.time() shingles = get_shingles(documents) end = time.time() # Check whether your implementation is correct [5pt] if len(shingles) == 1766049: pass_test1_1_1 = True print('Test1 passed') # Check whether your implementation is efficient enough [5pt] # With 4-lines of my implementations, it took 4.8 seconds with i7-8700 cpu if (end - start) < 20: pass_test1_1_2 = True print('Test2 passed') print(end - start) ``` #### 1.2 Build document to shingles dictionary ``` def build_doc_to_shingle_dictionary(documents, shingles): ################################################################################################################################ # Programming 1.2 [5pt] # # Implement 'build_doc_to_shingle_dictionary' function to convert documents into shingle dictionary with documents & shingles # # You need to construct and utilize a shingle2idx dictionary that maps each shingle into the uniuqe integer index. # # # # Parameters: # # documents (dict) # # shingles (set) # # # # Returns: # # doc_to_shingles (dict) # # key: index of the documents # # value: list of the shingle indexes # # ex) doc_to_shingles = {0: [1705196, 422880, 491967, ...], # # 1: [863922, 1381606, 1524066, ...], # # ... } # ################################################################################################################################ doc_to_shingles = {} shingle2idx = {} for idx, shingle in enumerate(shingles): shingle2idx[shingle] = idx for idx, doc in enumerate(documents): shingle_list = [shingle2idx[s] for s in get_shingles([doc])] doc_to_shingles[idx] = shingle_list return doc_to_shingles doc_to_shingles = build_doc_to_shingle_dictionary(documents, shingles) # Check whether your implementation is correct [5pt] if len(doc_to_shingles) == 10882 and len(doc_to_shingles[0]) == 84: pass_test1_2 = True print('Test passed') ``` ### 2. Min-Hashing ``` ############################################################################################################################ # Programming 2 [25pt] # # In this section, you will implement the min-hashing algorithm to convert the characteristic matrix into the signatures. 
# # # # i) implement the jaccard-similarity algorithm [5pt] # # ii) implement the min-hash algorithm to create the signatures for the documents [20pt] # # # ############################################################################################################################ ``` #### 2.1 Generate Prime numbers for Universal Hashing ``` def is_prime(n): for i in range(2,int(np.sqrt(n))+1): if not n % i: return False return True def generate_prime_numbers(M, N): # this function generate the M prime numbers where each prime number is greater than N primes = [] cnt = 0 n = N + 1 while cnt < M: if is_prime(n): primes.append(n) cnt += 1 n += 1 return primes # Test prime number generation generate_prime_numbers(M = 3, N = 3) ``` #### 2.2 Jaccard Similarity ``` def jaccard_similarity(s1, s2): ################################################################################## # Programming 2.2 [5pt] # # Implement the jaccard similarity algorithm to get the similarity of two sets # # # # Parameters # # s1 (set) # # s2 (set) # # Returns # # similarity (float) # ################################################################################## similarity = len(s1&s2) / len(s1|s2) return similarity s1 = {1, 3, 4} s2 = {3, 4, 6} if (jaccard_similarity(s1, s2) - 0.5) < 1e-3: pass_test2_1 = True print('Test passed') ``` #### 2.3 Min Hash ``` M = 100 # Number of Hash functions to use N = len(shingles) # First we will create M universal hashing functions # You can also modify or implement your own hash functions for implementing min_hash function class Hash(): def __init__(self, M, N): self.M = M self.N = N self.p = generate_prime_numbers(M, N) self.a = np.random.choice(9999, M) self.b = np.random.choice(9999, M) def __call__(self, x): return np.mod(np.mod((self.a * x + self.b), self.p), self.N) def __len__(self): return M #primes = generate_prime_numbers(M, N) hash_functions = Hash(M, N) def min_hash(doc_to_shingles, hash_functions): ########################################################################################### # Programming 2.3 [20pt] # # Implement the min-hash algorithm to create the signatures for the documents # # It would take about ~10 minutes to finish computation, # # while would take ~20 seconds if you parallelize your hash functions # # # # Parameters # # doc_to_shingles: (dict) dictionary that maps each document to the list of shingles # # hash_functions: [list] list of hash functions # # Returns # # signatures (np.array) numpy array of size (M, C) where C is the number of documents # # # ########################################################################################### C = len(doc_to_shingles) M = len(hash_functions) signatures = np.array(np.ones((M, C)) * 999999999999, dtype = np.int) for doc_id in range(C): shingles = doc_to_shingles[doc_id] for shingle in shingles: hash_shingle = hash_functions(shingle) signatures[:,doc_id] = np.where(hash_shingle < signatures[:,doc_id], hash_shingle, signatures[:,doc_id]) return signatures def compare(signatures, doc_to_shingles, trials = 10000): M, C = signatures.shape diff_list = [] for t in tqdm(range(trials)): doc1, doc2 = np.random.choice(C, 2, replace = False) shingle1, shingle2 = set(doc_to_shingles[doc1]), set(doc_to_shingles[doc2]) sig1, sig2 = signatures[:,doc1], signatures[:,doc2] true_sim = jaccard_similarity(shingle1, shingle2) approx_sim = sum(np.equal(sig1, sig2)) / M diff_list.append(abs(true_sim - approx_sim)) return diff_list start = time.time() signatures = min_hash(doc_to_shingles, hash_functions) end = 
time.time() diff_list = compare(signatures, doc_to_shingles) # Check whether your implementation is correct [20pt] # Average difference of document's jaccard similarity between characteristic matrix and signatures should be at most 1% # With 10 random seeds, difference was around 1e-5 ~ 1e-6% if np.mean(diff_list) < 0.01: pass_test2_2 = True print('Test passed') ``` #### 2.4 Qualitive Analysis ``` print('Document 3542') print(documents[3542]) print('-------------') print('Document 8033') print(documents[8033]) print('-------------') print('true jaccard similarity:' ,jaccard_similarity(set(doc_to_shingles[3542]), set(doc_to_shingles[8033]))) print('approx jaccard similarity:',sum(np.equal(signatures[:,3542], signatures[:,8033])) / M) print('Do you think signature well reflects the characteristic matrix?') ``` ### 3. Locality Sensitive Hashing ``` ######################################################################################################################## # Programming 3 [35pt] # # In this section, you will implement the Min-Hash based Locality Sensitive Hashing algorithm to convert signatures # # into the similar document pair candidates # # Finally, we will test our results based on the precision, recall and F1 score # # # # 1) get the similar document pair candidates [20pt] # # 2) calculate precision, recall, and f1 score [10pt] # # # ######################################################################################################################## ``` #### 3.1 Min-Hash based LSH ``` def lsh(signatures, b, r): ######################################################################################################### # Programming 3.1 [20pt] # # Implement the min-hash based LSH algorithm to find the candidate pairs of the similar documents. # # In the implementation, use python's dictionary to make your hash table, # # where each column is hashed into a bucket. # # Convert each column vector (within a band) into the tuple and use it as a key of the dictionary. 
# # # # Parameters # # signatures: (np.array) numpy array of size (M, C) where # # M is the number of min-hash functions, C is the number of documents # # b: (int) the number of bands # # r: (int) the number of rows per each band # # # # Requirements # # 1) M should be equivalent to b * r # # # # Returns # # candidatePairs (Set[Tuple[int, int]]) set of the pairs of indexes of candidate document pairs # # # ######################################################################################################### M = signatures.shape[0] # The number of min-hash functions C = signatures.shape[1] # The number of documents assert M == b * r candidatePairs = set() # TODO: Write down your code here for num_b in range(b): bucket = {} bands = signatures[num_b*r:(num_b+1)*r] for col in range(C): if tuple(bands[:,col]) in bucket.keys(): bucket[tuple(bands[:,col])].append(col) else: bucket[tuple(bands[:,col])] = [col] for value in bucket.values(): if len(value) >= 2: combi = combinations(value, 2) candidatePairs.update(list(combi)) #import ipdb; ipdb.set_trace() ### Implementation End ### return candidatePairs # You can test your implementation here b = 10 n = 0 tmpPairs = list(lsh(signatures, b, M // b)) print(f"b={b}") print(f"# of candidate pairs = {len(tmpPairs)}") samplePair = tmpPairs[n] shingle1, shingle2 = set(doc_to_shingles[samplePair[0]]), set(doc_to_shingles[samplePair[1]]) print(f"{n}th sample pair: {samplePair}") print(f"Jaccard similarity: {jaccard_similarity(shingle1, shingle2)}") print('-------------') print(documents[samplePair[0]]) print('-------------') print(documents[samplePair[1]]) print('-------------') ``` #### 3.2 Compute the precision, recall, and F1-score ``` # Compute the number of condition positives, which is the number of every document pair whose Jaccard similarity is greater than or equal to the threshold s = 0.8 # similarity threshold for checking condition positives numConditionPositives = 151 # This is the computed result when s=0.8, but I gave it to you to save your time. computeConditionPositives = False # If you want to calculate it, then change it to True. It will take 30 minutes to compute. 
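# Note: computing the condition positives by brute force compares every pair of
# documents, i.e. C*(C-1)/2 Jaccard computations (~59 million pairs for the
# 10,882 documents used here), which is why the precomputed value above is used
# unless computeConditionPositives is set to True.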
if computeConditionPositives: numConditionPositives = 0 numDocs = len(documents) for i in tqdm(range(numDocs)): shingle1 = set(doc_to_shingles[i]) for j in range(i+1, numDocs): shingle2 = set(doc_to_shingles[j]) true_sim = jaccard_similarity(shingle1, shingle2) if true_sim >= s: numConditionPositives += 1 print(f"The number of condition positives: {numConditionPositives} when s={s}") def query_analysis(signatures, b, s, numConditionPositives): ########################################################################################################### # Programming 3.2 [10pt] # # Calculate the query time, precision, recall, and F1 score for the given configuration # # # # Parameters # # signatures: (np.array) numpy array of size (M, C) where # # M is the number of min-hash functions, C is the number of documents # # b: (int) the number of bands # # s: (float) similarity threshold for checking condition positives # # numConditionPositives: (int) the number of condition positives # # # # Requirements # # 1) b should be the divisor of M # # 2) 0 <= s <= 1 # # # # Returns # # query time: (float) the execution time of the codes which find the similar document candidate pairs # # precision: (float) # # recall: (float) # # f1: (float) F1-Score # # # ########################################################################################################### M = signatures.shape[0] # The number of min-hash functions assert M % b == 0 # TODO: Write down your code here TP = 0 t = time.time() candidatePairs = lsh(signatures, b, M // b) query_time = time.time() - t for pair in candidatePairs: shingle1, shingle2 = set(doc_to_shingles[pair[0]]), set(doc_to_shingles[pair[1]]) if jaccard_similarity(shingle1, shingle2) >= s: TP += 1 precision = TP / len(candidatePairs) recall = TP / numConditionPositives f1 = 2 * precision * recall / (precision + recall) ### Implementation End ### return query_time, precision, recall, f1 # Return the list of every divisor of given integer def find_divisors(x): divisors = list() for i in range(1, x + 1): if x % i == 0: divisors.append(i) return divisors b_list = find_divisors(M) query_time_list = list() precision_list = list() recall_list = list() f1_list = list() for b in tqdm(b_list): query_time, precision, recall, f1 = query_analysis(signatures, b, s, numConditionPositives) query_time_list.append(query_time) precision_list.append(precision) recall_list.append(recall) f1_list.append(f1) print("b: ", b_list) print("Query times: ", query_time_list) print("Precisions: ", precision_list) print("Recalls: ", recall_list) print("F1 scores: ", f1_list) plt.title(f"Query time (s={s})") plt.xlabel("b") plt.ylabel("Query time [sec]") plt.plot(b_list, query_time_list) plt.show() plt.title(f"Precision (s={s})") plt.xlabel("b") plt.ylabel("Precision") plt.plot(b_list, precision_list) plt.show() plt.title(f"Recall (s={s})") plt.xlabel("b") plt.ylabel("Recall") plt.plot(b_list, recall_list) plt.show() plt.title(f"F1 Score (s={s})") plt.xlabel("b") plt.ylabel("F1 Score") plt.plot(b_list, f1_list) plt.show() # Check whether the test passed test_msg = {True: "Passed", False: "Failed"} print("-----Test results-----") print(f"[Test 1.1 (1)]: {test_msg[pass_test1_1_1]}") print(f"[Test 1.1 (2)]: {test_msg[pass_test1_1_2]}") print(f"[Test 1.2]: {test_msg[pass_test1_2]}") print(f"[Test 2.1]: {test_msg[pass_test2_1]}") print(f"[Test 2.2]: {test_msg[pass_test2_2]}") print("----------------------") ```
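The sweep over b above is the usual recall/precision trade-off of min-hash LSH: with M = b * r signature rows, a pair with true Jaccard similarity s lands in the same bucket in at least one band with probability 1 - (1 - s^r)^b, and the similarity threshold where that S-curve rises steeply is roughly (1/b)^(1/r). The sketch below only illustrates that formula (it is not part of the graded code; `candidate_probability` and the sampled b values are illustrative) and uses M = 100 as in this notebook.

```
import numpy as np

def candidate_probability(s, b, r):
    """Probability that a pair with Jaccard similarity s shares a bucket in at least one of b bands of r rows."""
    return 1.0 - (1.0 - s ** r) ** b

M = 100  # number of min-hash functions, as above
for b in (4, 10, 25, 50):  # a few of the divisors of M swept over above
    r = M // b
    threshold = (1.0 / b) ** (1.0 / r)  # similarity where the S-curve rises steeply
    print(f"b={b:3d}, r={r:3d}: approx threshold {threshold:.2f}, "
          f"P(candidate | s=0.8) = {candidate_probability(0.8, b, r):.3f}")
```

Small b (large r) pushes the threshold up and favors precision; large b (small r) pulls it down and favors recall, which matches the trend you should see in the precision and recall plots above.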
``` import sys sys.path.append("..") import numpy as np np.seterr(divide="ignore") import logging import pickle import glob from sklearn.metrics import roc_curve from sklearn.metrics import roc_auc_score from sklearn.preprocessing import RobustScaler from sklearn.utils import check_random_state from scipy import interp from recnn.preprocessing import rewrite_content from recnn.preprocessing import permute_by_pt from recnn.preprocessing import extract from recnn.preprocessing import sequentialize_by_pt from recnn.preprocessing import randomize %matplotlib inline import matplotlib.pyplot as plt plt.rcParams["figure.figsize"] = (6, 6) ``` # Plotting functions ``` from recnn.preprocessing import sequentialize_by_pt def load_tf(filename_train, preprocess=None, n_events_train=-1): # Make training data print("Loading training data...") fd = open(filename_train, "rb") X, y = pickle.load(fd) fd.close() y = np.array(y) if n_events_train > 0: indices = check_random_state(123).permutation(len(X))[:n_events_train] X = [X[i] for i in indices] y = y[indices] print("\tfilename = %s" % filename_train) print("\tX size = %d" % len(X)) print("\ty size = %d" % len(y)) # Preprocessing print("Preprocessing...") X = [rewrite_content(jet) for jet in X] if preprocess: X = [preprocess(jet) for jet in X] X = [extract(permute_by_pt(jet)) for jet in X] tf = RobustScaler().fit(np.vstack([jet["content"] for jet in X])) return tf def load_test(tf, filename_test, preprocess=None, cropping=True): # Make test data print("Loading test data...") fd = open(filename_test, "rb") X, y = pickle.load(fd) fd.close() y = np.array(y) print("\tfilename = %s" % filename_test) print("\tX size = %d" % len(X)) print("\ty size = %d" % len(y)) # Preprocessing print("Preprocessing...") X = [rewrite_content(jet) for jet in X] if preprocess: X = [preprocess(jet) for jet in X] X = [extract(permute_by_pt(jet)) for jet in X] for jet in X: jet["content"] = tf.transform(jet["content"]) if not cropping: return X, y # Cropping X_ = [j for j in X if 250 < j["pt"] < 300 and 50 < j["mass"] < 110] y_ = [y[i] for i, j in enumerate(X) if 250 < j["pt"] < 300 and 50 < j["mass"] < 110] X = X_ y = y_ y = np.array(y) print("\tX size = %d" % len(X)) print("\ty size = %d" % len(y)) # Weights for flatness in pt w = np.zeros(len(y)) X0 = [X[i] for i in range(len(y)) if y[i] == 0] pdf, edges = np.histogram([j["pt"] for j in X0], density=True, range=[250, 300], bins=50) pts = [j["pt"] for j in X0] indices = np.searchsorted(edges, pts) - 1 inv_w = 1. / pdf[indices] inv_w /= inv_w.sum() w[y==0] = inv_w X1 = [X[i] for i in range(len(y)) if y[i] == 1] pdf, edges = np.histogram([j["pt"] for j in X1], density=True, range=[250, 300], bins=50) pts = [j["pt"] for j in X1] indices = np.searchsorted(edges, pts) - 1 inv_w = 1. 
/ pdf[indices] inv_w /= inv_w.sum() w[y==1] = inv_w return X, y, w from recnn.recnn import grnn_transform_simple from recnn.recnn import grnn_predict_simple from recnn.recnn import grnn_predict_gated from recnn.recnn import grnn_predict_simple_join def predict(X, filename, func=grnn_predict_simple): fd = open(filename, "rb") params = pickle.load(fd) fd.close() y_pred = func(params, X) return y_pred def evaluate_models(X, y, w, pattern, func=grnn_predict_simple): rocs = [] fprs = [] tprs = [] for filename in glob.glob(pattern): print("Loading %s" % filename), y_pred = predict(X, filename, func=func) # Roc rocs.append(roc_auc_score(y, y_pred, sample_weight=w)) fpr, tpr, _ = roc_curve(y, y_pred, sample_weight=w) fprs.append(fpr) tprs.append(tpr) print("ROC AUC = %.4f" % rocs[-1]) print("Mean ROC AUC = %.4f" % np.mean(rocs)) return rocs, fprs, tprs def build_rocs(prefix_train, prefix_test, model_pattern, preprocess=None, gated=False): tf = load_tf("../data/w-vs-qcd/final/%s-train.pickle" % prefix_train, preprocess=preprocess) X, y, w = load_test(tf, "../data/w-vs-qcd/final/%s-test.pickle" % prefix_test, preprocess=preprocess) if not gated: rocs, fprs, tprs = evaluate_models(X, y, w, "../models/jet-study-2/model-w-s-%s-[0-9]*.pickle" % model_pattern) else: rocs, fprs, tprs = evaluate_models(X, y, w, "../models/jet-study-2/model-w-g-%s-[0-9]*.pickle" % model_pattern, func=grnn_predict_gated) return rocs, fprs, tprs def remove_outliers(rocs, fprs, tprs): inv_fprs = [] base_tpr = np.linspace(0.05, 1, 476) for fpr, tpr in zip(fprs, tprs): inv_fpr = interp(base_tpr, tpr, 1. / fpr) inv_fprs.append(inv_fpr) inv_fprs = np.array(inv_fprs) scores = inv_fprs[:, 225] p25 = np.percentile(scores, 1 / 6. * 100.) p75 = np.percentile(scores, 5 / 6. * 100) robust_mean = np.mean([scores[i] for i in range(len(scores)) if p25 <= scores[i] <= p75]) robust_std = np.std([scores[i] for i in range(len(scores)) if p25 <= scores[i] <= p75]) indices = [i for i in range(len(scores)) if robust_mean - 3*robust_std <= scores[i] <= robust_mean + 3*robust_std] new_r, new_f, new_t = [], [], [] for i in indices: new_r.append(rocs[i]) new_f.append(fprs[i]) new_t.append(tprs[i]) return new_r, new_f, new_t def report_score(rocs, fprs, tprs, label, latex=False, input="particles", short=False): inv_fprs = [] base_tpr = np.linspace(0.05, 1, 476) for fpr, tpr in zip(fprs, tprs): inv_fpr = interp(base_tpr, tpr, 1. / fpr) inv_fprs.append(inv_fpr) inv_fprs = np.array(inv_fprs) mean_inv_fprs = inv_fprs.mean(axis=0) if not latex: print("%32s\tROC AUC=%.4f+-%.2f\t1/FPR@TPR=0.5=%.2f+-%.2f" % (label, np.mean(rocs), np.std(rocs), np.mean(inv_fprs[:, 225]), np.std(inv_fprs[:, 225]))) else: if not short: print("%10s \t& %30s \t& %.4f $\pm$ %.4f \t& %.1f $\pm$ %.1f \\\\" % (input, label, np.mean(rocs), np.std(rocs), np.mean(inv_fprs[:, 225]), np.std(inv_fprs[:, 225]))) else: print("%30s \t& %.4f $\pm$ %.4f \t& %.1f $\pm$ %.1f \\\\" % (label, np.mean(rocs), np.std(rocs), np.mean(inv_fprs[:, 225]), np.std(inv_fprs[:, 225]))) def plot_rocs(rocs, fprs, tprs, label="", color="r", show_all=False): inv_fprs = [] base_tpr = np.linspace(0.05, 1, 476) for fpr, tpr in zip(fprs, tprs): inv_fpr = interp(base_tpr, tpr, 1. 
/ fpr) inv_fprs.append(inv_fpr) if show_all: plt.plot(base_tpr, inv_fpr, alpha=0.1, color=color) inv_fprs = np.array(inv_fprs) mean_inv_fprs = inv_fprs.mean(axis=0) plt.plot(base_tpr, mean_inv_fprs, color, label="%s" % label) def plot_show(filename=None): plt.xlabel("Signal efficiency") plt.ylabel("1 / Background efficiency") plt.xlim([0.1, 1.0]) plt.ylim(1, 500) plt.yscale("log") plt.legend(loc="best") plt.grid() if filename: plt.savefig(filename) plt.show() ``` # Count parameters ``` def count(params): def _count(thing): if isinstance(thing, list): c = 0 for stuff in thing: c += _count(stuff) return c elif isinstance(thing, np.ndarray): return np.prod(thing.shape) c = 0 for k, v in params.items(): c += _count(v) return c # Simple vs gated fd = open("../models/jet-study-2/model-w-s-antikt-kt-1.pickle", "rb") params = pickle.load(fd) fd.close() print("Simple =", count(params)) fd = open("../models/jet-study-2/model-w-g-antikt-kt-1.pickle", "rb") params = pickle.load(fd) fd.close() print("Gated =", count(params)) # double # Simple vs gated fd = open("../models/jet-study-2/model-w-sd-antikt-kt-1.pickle", "rb") params = pickle.load(fd) fd.close() print("Simple =", count(params)) fd = open("../models/jet-study-2/model-w-gd-antikt-kt-1.pickle", "rb") params = pickle.load(fd) fd.close() print("Gated =", count(params)) ``` # Embedding visualization ``` prefix_train = "antikt-kt" prefix_test = prefix_train tf = load_tf("../data/w-vs-qcd/final/%s-train.pickle" % prefix_train) X, y, w = load_test(tf, "../data/w-vs-qcd/final/%s-test.pickle" % prefix_test) fd = open("../models/jet-study-2/model-w-s-antikt-kt-1.pickle", "rb") params = pickle.load(fd) fd.close() Xt = grnn_transform_simple(params, X[:5000]) from sklearn.manifold import TSNE Xtt = TSNE(n_components=2).fit_transform(Xt) for i in range(5000): plt.scatter(Xtt[i, 0], Xtt[i, 1], color="b" if y[i] == 1 else "r", alpha=0.5) plt.show() from sklearn.decomposition import PCA Xtt = PCA(n_components=2).fit_transform(Xt) for i in range(5000): plt.scatter(Xtt[i, 0], Xtt[i, 1], color="b" if y[i] == 1 else "r", alpha=0.5) plt.show() ``` # Generate all ROCs ``` for pattern, gated in [ # Simple ## Particles ("antikt-kt", False), ("antikt-cambridge", False), ("antikt-antikt", False), ("antikt-random", False), ("antikt-seqpt", False), ("antikt-seqpt-reversed", False), ## Towers ("antikt-kt-delphes", False), ("antikt-cambridge-delphes", False), ("antikt-antikt-delphes", False), ("antikt-random-delphes", False), ("antikt-seqpt-delphes", False), ("antikt-seqpt-reversed-delphes", False), ## Images ("antikt-kt-images", False), # Gated ## Particles ("antikt-kt", True), ("antikt-antikt", True), ("antikt-seqpt", True), ("antikt-seqpt-reversed", True), ("antikt-cambridge", True), ("antikt-random", True), ## Towers ("antikt-kt-delphes", True), ("antikt-antikt-delphes", True), ("antikt-seqpt-delphes", True), ("antikt-seqpt-reversed-delphes", True), ("antikt-cambridge-delphes", True), ("antikt-random-delphes", True), ## Images ("antikt-kt-images", True) ]: r, f, t = build_rocs(pattern, pattern, pattern, gated=gated) # Save fd = open("../models/jet-study-2/rocs/rocs-%s-%s.pickle" % ("s" if not gated else "g", pattern), "wb") pickle.dump((r, f, t), fd) fd.close() # sd/gd == contatenate embeddings of h1_L + h1_R for pattern, gated in [ # Simple ## Particles ("antikt-kt", False), ## Towers ("antikt-kt-delphes", False), ## Images ("antikt-kt-images", False), # Gated ## Particles ("antikt-kt", True), ## Towers ("antikt-kt-delphes", True), ## Images ("antikt-kt-images", True) 
]: r, f, t = build_rocs(pattern, pattern, pattern, gated=gated) # Save fd = open("../models/jet-study-2/rocs/rocs-%s-%s.pickle" % ("sd" if not gated else "gd", pattern), "wb") pickle.dump((r, f, t), fd) fd.close() ``` # Table ``` for pattern, gated, label in [ # Simple ## Particles ("antikt-kt", False, "RNN $k_t$"), ("antikt-cambridge", False, "RNN C/A"), ("antikt-antikt", False, "RNN anti-$k_t$"), ("antikt-random", False, "RNN random"), ("antikt-seqpt", False, "RNN asc-$p_T$"), ("antikt-seqpt-reversed", False, "RNN desc-$p_T$"), ## Towers ("antikt-kt-delphes", False, "RNN $k_t$"), ("antikt-cambridge-delphes", False, "RNN C/A"), ("antikt-antikt-delphes", False, "RNN anti-$k_t$"), ("antikt-random-delphes", False, "RNN random"), ("antikt-seqpt-delphes", False, "RNN asc-$p_T$"), ("antikt-seqpt-reversed-delphes", False, "RNN desc-$p_T$"), ## Images ("antikt-kt-images", False, "RNN $k_t$"), # Gated ## Particles ("antikt-kt", True, "RNN $k_t$ (gated)"), ("antikt-cambridge", True, "RNN C/A (gated)"), ("antikt-antikt", True, "RNN anti-$k_t$ (gated)"), ("antikt-random", True, "RNN random (gated)"), ("antikt-seqpt", True, "RNN asc-$p_T$ (gated)"), ("antikt-seqpt-reversed", True, "RNN desc-$p_T$ (gated)"), ## Towers ("antikt-kt-delphes", True, "RNN $k_t$ (gated)"), ("antikt-cambridge-delphes", True, "RNN C/A (gated)"), ("antikt-antikt-delphes", True, "RNN anti-$k_t$ (gated)"), ("antikt-random-delphes", True, "RNN random (gated)"), ("antikt-seqpt-delphes", True, "RNN asc-$p_T$ (gated)"), ("antikt-seqpt-reversed-delphes", True, "RNN desc-$p_T$ (gated)"), # Images ("antikt-kt-images", False, "RNN $k_t$"), ("antikt-kt-images", True, "RNN $k_t$ (gated)") ]: fd = open("../models/jet-study-2/rocs/rocs-%s-%s.pickle" % ("s" if not gated else "g", pattern), "rb") r, f, t = pickle.load(fd) fd.close() r, f, t = remove_outliers(r, f, t) report_score(r, f, t, label=label, latex=True, input="particles" if "delphes" not in pattern and "images" not in pattern else "towers") for pattern, gated, label in [ # Simple ## Particles ("antikt-kt", False, "RNN $k_t$"), ## Towers ("antikt-kt-delphes", False, "RNN $k_t$"), ## Images ("antikt-kt-images", False, "RNN $k_t$"), # Gated ## Particles ("antikt-kt", True, "RNN $k_t$ (gated)"), ## Towers ("antikt-kt-delphes", True, "RNN $k_t$ (gated)"), # Images ("antikt-kt-images", True, "RNN $k_t$ (gated)") ]: fd = open("../models/jet-study-2/rocs/rocs-%s-%s.pickle" % ("sd" if not gated else "gd", pattern), "rb") r, f, t = pickle.load(fd) fd.close() r, f, t = remove_outliers(r, f, t) report_score(r, f, t, label=label, latex=True, input="particles" if "delphes" not in pattern and "images" not in pattern else "towers") ``` # Plots ``` # Simple vs gated for pattern, gated, label, color in [ ("antikt-kt", False, "RNN $k_t$ (simple)", "r"), ("antikt-kt", True, "RNN $k_t$ (gated)", "b") ]: fd = open("../models/jet-study-2/rocs/rocs-%s-%s.pickle" % ("s" if not gated else "g", pattern), "rb") r, f, t = pickle.load(fd) fd.close() r, f, t = remove_outliers(r, f, t) plot_rocs(r, f, t, label=label, color=color) report_score(r, f, t, label=label) plot_show() # Topologies (particles, simple) for pattern, gated, label, color in [ ("antikt-kt", False, "$k_t$", "r"), ("antikt-cambridge", False, "C/A", "g"), ("antikt-antikt", False, "anti-$k_t$", "b"), ("antikt-seqpt", False, "asc-$p_T$", "c"), ("antikt-seqpt-reversed", False, "desc-$p_T$", "m"), ("antikt-random", False, "random", "orange") ]: fd = open("../models/jet-study-2/rocs/rocs-%s-%s.pickle" % ("s" if not gated else "g", pattern), "rb") r, f, t 
= pickle.load(fd) fd.close() r, f, t = remove_outliers(r, f, t) plot_rocs(r, f, t, label=label, color=color) report_score(r, f, t, label=label) plot_show() # Topologies (towers, simple) for pattern, gated, label, color in [ ("antikt-kt-delphes", False, "RNN $k_t$", "r"), ("antikt-cambridge-delphes", False, "RNN C/A", "g"), ("antikt-antikt-delphes", False, "RNN anti-$k_t$", "b"), ("antikt-seqpt-delphes", False, "RNN asc-$p_T$", "c"), ("antikt-seqpt-reversed-delphes", False, "RNN desc-$p_T$", "m"), ("antikt-random-delphes", False, "RNN random", "orange") ]: fd = open("../models/jet-study-2/rocs/rocs-%s-%s.pickle" % ("s" if not gated else "g", pattern), "rb") r, f, t = pickle.load(fd) fd.close() r, f, t = remove_outliers(r, f, t) plot_rocs(r, f, t, label=label, color=color) report_score(r, f, t, label=label) plot_show() # Topologies (particles, gated) for pattern, gated, label, color in [ ("antikt-kt", True, "RNN $k_t$", "r"), ("antikt-antikt", True, "RNN anti-$k_t$", "b"), ("antikt-seqpt", True, "RNN asc-$p_T$", "c"), ("antikt-seqpt-reversed", True, "RNN desc-$p_T$", "m"), ]: fd = open("../models/jet-study-2/rocs/rocs-%s-%s.pickle" % ("s" if not gated else "g", pattern), "rb") r, f, t = pickle.load(fd) fd.close() r, f, t = remove_outliers(r, f, t) plot_rocs(r, f, t, label=label, color=color) report_score(r, f, t, label=label) plot_show() # Topologies (towers, gated) for pattern, gated, label, color in [ ("antikt-kt-delphes", True, "RNN $k_t$", "r"), ("antikt-antikt-delphes", True, "RNN anti-$k_t$", "b"), ("antikt-seqpt-delphes", True, "RNN asc-$p_T$", "c"), ("antikt-seqpt-reversed-delphes", True, "RNN desc-$p_T$", "m"), ]: fd = open("../models/jet-study-2/rocs/rocs-%s-%s.pickle" % ("s" if not gated else "g", pattern), "rb") r, f, t = pickle.load(fd) fd.close() r, f, t = remove_outliers(r, f, t) plot_rocs(r, f, t, label=label, color=color) report_score(r, f, t, label=label) plot_show() # Particles vs towers vs images (simple) for pattern, gated, label, color in [ ("antikt-kt", False, "particles", "r"), ("antikt-kt-delphes", False, "towers", "g"), ("antikt-kt-images", False, "images", "b"), ]: fd = open("../models/jet-study-2/rocs/rocs-%s-%s.pickle" % ("s" if not gated else "g", pattern), "rb") r, f, t = pickle.load(fd) fd.close() r, f, t = remove_outliers(r, f, t) plot_rocs(r, f, t, label=label, color=color) report_score(r, f, t, label=label) plot_show(filename="particles-towers-images.pdf") # Particles vs towers vs images (gated) for pattern, gated, label, color in [ ("antikt-kt", True, "particles", "r"), ("antikt-kt-delphes", True, "towers", "g"), ("antikt-kt-images", True, "images", "b"), ]: fd = open("../models/jet-study-2/rocs/rocs-%s-%s.pickle" % ("s" if not gated else "g", pattern), "rb") r, f, t = pickle.load(fd) fd.close() r, f, t = remove_outliers(r, f, t) plot_rocs(r, f, t, label=label, color=color) report_score(r, f, t, label=label) plot_show() ``` # Trimming ``` for pattern_train, pattern_test, gated in [ ("antikt-kt", "antikt-kt", False), ("antikt-kt", "antikt-kt-trimmed", False), ("antikt-kt-trimmed", "antikt-kt-trimmed", False), ("antikt-kt-trimmed", "antikt-kt", False), ]: r, f, t = build_rocs(pattern_train, pattern_test, pattern_train, gated=gated) # Save fd = open("../models/jet-study-2/rocs/rocs-%s-%s-%s.pickle" % ("s" if not gated else "g", pattern_train, pattern_test), "wb") pickle.dump((r, f, t), fd) fd.close() for pattern_train, pattern_test, gated, label, color in [ ("antikt-kt", "antikt-kt", False, "$k_t$ on $k_t$", "b"), ("antikt-kt", "antikt-kt-trimmed", False, 
"$k_t$ on $k_t$-trimmed", "c"), ("antikt-kt-trimmed", "antikt-kt-trimmed", False, "$k_t$-trimmed on $k_t$-trimmed", "r"), ("antikt-kt-trimmed", "antikt-kt", False, "$k_t$-trimmed on $k_t$", "orange"), ]: fd = open("../models/jet-study-2/rocs/rocs-%s-%s-%s.pickle" % ("s" if not gated else "g", pattern_train, pattern_test), "rb") r, f, t = pickle.load(fd) fd.close() r, f, t = remove_outliers(r, f, t) plot_rocs(r, f, t, label=label, color=color) report_score(r, f, t, label=label) plot_show() ``` # Colinear splits ``` from functools import partial from recnn.preprocessing import sequentialize_by_pt preprocess_seqpt = partial(sequentialize_by_pt, reverse=False) preprocess_seqpt_rev = partial(sequentialize_by_pt, reverse=True) for pattern_train, pattern_test, gated, preprocess in [ # kt ("antikt-kt", "antikt-kt-colinear1", False, None), ("antikt-kt", "antikt-kt-colinear10", False, None), ("antikt-kt", "antikt-kt-colinear1-max", False, None), ("antikt-kt", "antikt-kt-colinear10-max", False, None), # asc-pt ("antikt-seqpt", "antikt-kt-colinear1", False, preprocess_seqpt), ("antikt-seqpt", "antikt-kt-colinear10", False, preprocess_seqpt), ("antikt-seqpt", "antikt-kt-colinear1-max", False, preprocess_seqpt), ("antikt-seqpt", "antikt-kt-colinear10-max", False, preprocess_seqpt), # desc-pt ("antikt-seqpt-reversed", "antikt-kt-colinear1", False, preprocess_seqpt_rev), ("antikt-seqpt-reversed", "antikt-kt-colinear10", False, preprocess_seqpt_rev), ("antikt-seqpt-reversed", "antikt-kt-colinear1-max", False, preprocess_seqpt_rev), ("antikt-seqpt-reversed", "antikt-kt-colinear10-max", False, preprocess_seqpt_rev), ]: r, f, t = build_rocs(pattern_train, pattern_test, pattern_train, gated=gated, preprocess=preprocess) # Save fd = open("../models/jet-study-2/rocs/rocs-%s-%s-%s.pickle" % ("s" if not gated else "g", pattern_train, pattern_test), "wb") pickle.dump((r, f, t), fd) fd.close() for pattern_train, pattern_test, gated, label in [ # kt ("antikt-kt", "antikt-kt-colinear1", False, "$k_t$ colinear1"), ("antikt-kt", "antikt-kt-colinear10", False, "$k_t$ colinear10"), ("antikt-kt", "antikt-kt-colinear1-max", False, "$k_t$ colinear1-max"), ("antikt-kt", "antikt-kt-colinear10-max", False, "$k_t$ colinear10-max"), # asc-pt ("antikt-seqpt", "antikt-kt-colinear1", False, "asc-$p_T$ colinear1"), ("antikt-seqpt", "antikt-kt-colinear10", False, "asc-$p_T$ colinear10"), ("antikt-seqpt", "antikt-kt-colinear1-max", False, "asc-$p_T$ colinear1-max"), ("antikt-seqpt", "antikt-kt-colinear10-max", False, "asc-$p_T$ colinear10-max"), # desc-pt ("antikt-seqpt-reversed", "antikt-kt-colinear1", False, "desc-$p_T$ colinear1"), ("antikt-seqpt-reversed", "antikt-kt-colinear10", False, "desc-$p_T$ colinear10"), ("antikt-seqpt-reversed", "antikt-kt-colinear1-max", False, "desc-$p_T$ colinear1-max"), ("antikt-seqpt-reversed", "antikt-kt-colinear10-max", False, "desc-$p_T$ colinear10-max"), ]: fd = open("../models/jet-study-2/rocs/rocs-%s-%s-%s.pickle" % ("s" if not gated else "g", pattern_train, pattern_test), "rb") r, f, t = pickle.load(fd) fd.close() r, f, t = remove_outliers(r, f, t) report_score(r, f, t, label=label, latex=True, short=True) ``` # Soft particles ``` from functools import partial from recnn.preprocessing import sequentialize_by_pt preprocess_seqpt = partial(sequentialize_by_pt, reverse=False) preprocess_seqpt_rev = partial(sequentialize_by_pt, reverse=True) for pattern_train, pattern_test, gated, preprocess in [ ("antikt-kt", "antikt-kt-soft", False, None), ("antikt-seqpt", "antikt-kt-soft", False, 
preprocess_seqpt), ("antikt-seqpt-reversed", "antikt-kt-soft", False, preprocess_seqpt_rev), ]: r, f, t = build_rocs(pattern_train, pattern_test, pattern_train, gated=gated, preprocess=preprocess) # Save fd = open("../models/jet-study-2/rocs/rocs-%s-%s-%s.pickle" % ("s" if not gated else "g", pattern_train, pattern_test), "wb") pickle.dump((r, f, t), fd) fd.close() for pattern_train, pattern_test, gated, label in [ ("antikt-kt", "antikt-kt-soft", False, "$k_t$ soft"), ("antikt-seqpt", "antikt-kt-soft", False, "asc-$p_T$ soft"), ("antikt-seqpt-reversed", "antikt-kt-soft", False, "desc-$p_T$ soft"), ]: fd = open("../models/jet-study-2/rocs/rocs-%s-%s-%s.pickle" % ("s" if not gated else "g", pattern_train, pattern_test), "rb") r, f, t = pickle.load(fd) fd.close() r, f, t = remove_outliers(r, f, t) report_score(r, f, t, label=label, latex=True, short=True) ``` # Learning curve ``` for pattern, gated, n_events in [ # ("antikt-kt", False, 6000), # ("antikt-seqpt-reversed", False, 6000), ("antikt-kt", True, 6000), ("antikt-seqpt-reversed", True, 6000), # ("antikt-kt", False, 15000), # ("antikt-seqpt-reversed", False, 15000), ("antikt-kt", True, 15000), ("antikt-seqpt-reversed", True, 15000), ]: tf = load_tf("../data/w-vs-qcd/final/%s-train.pickle" % pattern, n_events_train=n_events) X, y, w = load_test(tf, "../data/w-vs-qcd/final/%s-test.pickle" % pattern) if not gated: rocs, fprs, tprs = evaluate_models(X, y, w, "../models/jet-study-2/model-w-s-%s-%d-[0-9]*.pickle" % (pattern, n_events)) else: rocs, fprs, tprs = evaluate_models(X, y, w, "../models/jet-study-2/model-w-g-%s-%d-[0-9]*.pickle" % (pattern, n_events), func=grnn_predict_gated) # Save fd = open("../models/jet-study-2/rocs/rocs-%s-%s-%d.pickle" % ("s" if not gated else "g", pattern, n_events), "wb") pickle.dump((rocs, fprs, tprs), fd) fd.close() for pattern, label, color in [ ("s-antikt-kt", "$k_t$ 100k", "r"), ("s-antikt-kt-15000", "$k_t$ 10k", "g"), ("s-antikt-kt-6000", "$k_t$ 1k", "b"), ("s-antikt-seqpt-reversed", "desc-$p_T$ 100k", "r--"), ("s-antikt-seqpt-reversed-15000", "desc-$p_T$ 10k", "g--"), ("s-antikt-seqpt-reversed-6000", "desc-$p_T$ 1k", "b--"), ]: fd = open("../models/jet-study-2/rocs/rocs-%s.pickle" % pattern, "rb") r, f, t = pickle.load(fd) fd.close() r, f, t = remove_outliers(r, f, t) plot_rocs(r, f, t, label=label, color=color) report_score(r, f, t, label=label) plot_show() for pattern, label, color in [ ("g-antikt-kt", "$k_t$ 100k", "r"), ("g-antikt-kt-15000", "$k_t$ 10k", "g"), ("g-antikt-kt-6000", "$k_t$ 1k", "b"), ("g-antikt-seqpt-reversed", "desc-$p_T$ 100k", "r--"), ("g-antikt-seqpt-reversed-15000", "desc-$p_T$ 10k", "g--"), ("g-antikt-seqpt-reversed-6000", "desc-$p_T$ 1k", "b--"), ]: fd = open("../models/jet-study-2/rocs/rocs-%s.pickle" % pattern, "rb") r, f, t = pickle.load(fd) fd.close() r, f, t = remove_outliers(r, f, t) plot_rocs(r, f, t, label=label, color=color) report_score(r, f, t, label=label) plot_show() ``` # Tau21 ``` import h5py f = h5py.File("../data/w-vs-qcd/h5/w_100000_j1p0_sj0p30_delphes_jets_images.h5", "r")["auxvars"] tau1 = f["tau_1"] tau2 = f["tau_2"] tau21 = np.true_divide(tau2, tau1) pt = f["pt_trimmed"] mass = f["mass_trimmed"] mask = (f["mass_trimmed"] < 110) & (f["mass_trimmed"] > 50) & (f["pt_trimmed"] < 300) & (f["pt_trimmed"] > 250) #mask = mask & np.isfinite(tau21) & (tau21 != 0.) 
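# tau21 = tau_2 / tau_1 is the N-subjettiness ratio, a standard cut-based
# W-tagging observable used here as a baseline; the mass/pt window above is the
# same cropping applied to the RNN test sets (250 < pt < 300, 50 < mass < 110).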
signal_tau21 = tau21[mask] signal_pt = pt[mask] signal_mass = mass[mask] f = h5py.File("../data/w-vs-qcd/h5/qcd_100000_j1p0_sj0p30_delphes_jets_images.h5", "r")["auxvars"] tau1 = f["tau_1"] tau2 = f["tau_2"] tau21 = np.true_divide(tau2, tau1) pt = f["pt_trimmed"] mass = f["mass_trimmed"] mask = (f["mass_trimmed"] < 110) & (f["mass_trimmed"] > 50) & (f["pt_trimmed"] < 300) & (f["pt_trimmed"] > 250) #mask = mask & np.isfinite(tau21) & (tau21 != 0.) bkg_tau21 = tau21[mask] bkg_pt = pt[mask] bkg_mass = mass[mask] plt.hist(bkg_mass, histtype="step", bins=40, normed=1) plt.hist(signal_mass, histtype="step", bins=40, normed=1) tau21 = np.concatenate((signal_tau21, bkg_tau21)) pts = np.concatenate((signal_pt, bkg_pt)) masss = np.concatenate((signal_mass, bkg_mass)) X = np.hstack([tau21.reshape(-1,1), masss.reshape(-1,1)]) y = np.concatenate((np.ones(len(signal_tau21)), np.zeros(len(bkg_tau21)))) w = np.zeros(len(y)) pdf, edges = np.histogram(pts[y == 0], density=True, range=[250, 300], bins=50) indices = np.searchsorted(edges, pts[y == 0]) - 1 inv_w = 1. / pdf[indices] inv_w /= inv_w.sum() w[y==0] = inv_w pdf, edges = np.histogram(pts[y == 1], density=True, range=[250, 300], bins=50) indices = np.searchsorted(edges, pts[y == 1]) - 1 inv_w = 1. / pdf[indices] inv_w /= inv_w.sum() w[y==1] = inv_w X_train, X_test, y_train, y_test, w_train, w_test = train_test_split(X, y, w, train_size=0.5) def evaluate_models(X, y, w): rocs = [] fprs = [] tprs = [] y_pred = X # Roc rocs.append(roc_auc_score(y, y_pred, sample_weight=w)) fpr, tpr, _ = roc_curve(y, y_pred, sample_weight=w) fprs.append(fpr) tprs.append(tpr) return rocs, fprs, tprs r, f, t = evaluate_models(-tau21, y, w) plot_rocs(r, f, t, label="tau21") report_score(r, f, t, label="tau21") r, f, t = evaluate_models(masss, y, w) plot_rocs(r, f, t, label="mass") report_score(r, f, t, label="mass") plot_show() clf = ExtraTreesClassifier(n_estimators=1000, min_samples_leaf=100, max_features=1) clf.fit(X_train, y_train) r, f, t = evaluate_models(-clf.predict_proba(X_test)[:, 0], y_test, w_test) plot_rocs(r, f, t, label="tau21+mass") report_score(r, f, t, label="tau21+mass") plot_show() ```
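Throughout these plots, background rejection is reported as 1/FPR at a fixed signal efficiency by interpolating each ROC curve onto `base_tpr = np.linspace(0.05, 1, 476)` and reading off index 225, which corresponds to TPR = 0.5 (0.05 + 225 × 0.002). A small standalone helper such as the sketch below (an illustration only; `rejection_at_efficiency` and the toy ROC curve are not part of the original code) makes that working point explicit and avoids the hard-coded index.

```
import numpy as np

def rejection_at_efficiency(fpr, tpr, target_tpr=0.5):
    """Interpolate a ROC curve and return 1/FPR at the requested signal efficiency."""
    fpr = np.asarray(fpr)
    tpr = np.asarray(tpr)
    fpr_at_target = np.interp(target_tpr, tpr, fpr)  # tpr is monotonically increasing
    return 1.0 / fpr_at_target

# Example with a toy ROC curve:
tpr = np.linspace(0.0, 1.0, 101)
fpr = tpr ** 3  # a fake, better-than-random classifier
print(rejection_at_efficiency(fpr, tpr, target_tpr=0.5))  # 1 / 0.125 = 8.0
```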
``` import numpy as np from keras.models import Model from keras.layers import Input from keras.layers.pooling import AveragePooling3D from keras import backend as K import json from collections import OrderedDict def format_decimal(arr, places=6): return [round(x * 10**places) / 10**places for x in arr] DATA = OrderedDict() ``` ### AveragePooling3D **[pooling.AveragePooling3D.0] input 4x4x4x2, pool_size=(2, 2, 2), strides=None, padding='valid', data_format='channels_last'** ``` data_in_shape = (4, 4, 4, 2) L = AveragePooling3D(pool_size=(2, 2, 2), strides=None, padding='valid', data_format='channels_last') layer_0 = Input(shape=data_in_shape) layer_1 = L(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) np.random.seed(290) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['pooling.AveragePooling3D.0'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'expected': {'data': data_out_formatted, 'shape': data_out_shape} } ``` **[pooling.AveragePooling3D.1] input 4x4x4x2, pool_size=(2, 2, 2), strides=(1, 1, 1), padding='valid', data_format='channels_last'** ``` data_in_shape = (4, 4, 4, 2) L = AveragePooling3D(pool_size=(2, 2, 2), strides=(1, 1, 1), padding='valid', data_format='channels_last') layer_0 = Input(shape=data_in_shape) layer_1 = L(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) np.random.seed(291) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['pooling.AveragePooling3D.1'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'expected': {'data': data_out_formatted, 'shape': data_out_shape} } ``` **[pooling.AveragePooling3D.2] input 4x5x2x3, pool_size=(2, 2, 2), strides=(2, 1, 1), padding='valid', data_format='channels_last'** ``` data_in_shape = (4, 5, 2, 3) L = AveragePooling3D(pool_size=(2, 2, 2), strides=(2, 1, 1), padding='valid', data_format='channels_last') layer_0 = Input(shape=data_in_shape) layer_1 = L(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) np.random.seed(282) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['pooling.AveragePooling3D.2'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'expected': {'data': data_out_formatted, 'shape': data_out_shape} } ``` **[pooling.AveragePooling3D.3] input 4x4x4x2, pool_size=(3, 3, 3), strides=None, padding='valid', data_format='channels_last'** ``` 
data_in_shape = (4, 4, 4, 2) L = AveragePooling3D(pool_size=(3, 3, 3), strides=None, padding='valid', data_format='channels_last') layer_0 = Input(shape=data_in_shape) layer_1 = L(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) np.random.seed(283) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['pooling.AveragePooling3D.3'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'expected': {'data': data_out_formatted, 'shape': data_out_shape} } ``` **[pooling.AveragePooling3D.4] input 4x4x4x2, pool_size=(3, 3, 3), strides=(3, 3, 3), padding='valid', data_format='channels_last'** ``` data_in_shape = (4, 4, 4, 2) L = AveragePooling3D(pool_size=(3, 3, 3), strides=(3, 3, 3), padding='valid', data_format='channels_last') layer_0 = Input(shape=data_in_shape) layer_1 = L(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) np.random.seed(284) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['pooling.AveragePooling3D.4'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'expected': {'data': data_out_formatted, 'shape': data_out_shape} } ``` **[pooling.AveragePooling3D.5] input 4x4x4x2, pool_size=(2, 2, 2), strides=None, padding='same', data_format='channels_last'** ``` data_in_shape = (4, 4, 4, 2) L = AveragePooling3D(pool_size=(2, 2, 2), strides=None, padding='same', data_format='channels_last') layer_0 = Input(shape=data_in_shape) layer_1 = L(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) np.random.seed(285) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['pooling.AveragePooling3D.5'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'expected': {'data': data_out_formatted, 'shape': data_out_shape} } ``` **[pooling.AveragePooling3D.6] input 4x4x4x2, pool_size=(2, 2, 2), strides=(1, 1, 1), padding='same', data_format='channels_last'** ``` data_in_shape = (4, 4, 4, 2) L = AveragePooling3D(pool_size=(2, 2, 2), strides=(1, 1, 1), padding='same', data_format='channels_last') layer_0 = Input(shape=data_in_shape) layer_1 = L(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) np.random.seed(286) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = 
format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['pooling.AveragePooling3D.6'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'expected': {'data': data_out_formatted, 'shape': data_out_shape} } ``` **[pooling.AveragePooling3D.7] input 4x5x4x2, pool_size=(2, 2, 2), strides=(1, 2, 1), padding='same', data_format='channels_last'** ``` data_in_shape = (4, 5, 4, 2) L = AveragePooling3D(pool_size=(2, 2, 2), strides=(1, 2, 1), padding='same', data_format='channels_last') layer_0 = Input(shape=data_in_shape) layer_1 = L(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) np.random.seed(287) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['pooling.AveragePooling3D.7'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'expected': {'data': data_out_formatted, 'shape': data_out_shape} } ``` **[pooling.AveragePooling3D.8] input 4x4x4x2, pool_size=(3, 3, 3), strides=None, padding='same', data_format='channels_last'** ``` data_in_shape = (4, 4, 4, 2) L = AveragePooling3D(pool_size=(3, 3, 3), strides=None, padding='same', data_format='channels_last') layer_0 = Input(shape=data_in_shape) layer_1 = L(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) np.random.seed(288) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['pooling.AveragePooling3D.8'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'expected': {'data': data_out_formatted, 'shape': data_out_shape} } ``` **[pooling.AveragePooling3D.9] input 4x4x4x2, pool_size=(3, 3, 3), strides=(3, 3, 3), padding='same', data_format='channels_last'** ``` data_in_shape = (4, 4, 4, 2) L = AveragePooling3D(pool_size=(3, 3, 3), strides=(3, 3, 3), padding='same', data_format='channels_last') layer_0 = Input(shape=data_in_shape) layer_1 = L(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) np.random.seed(289) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['pooling.AveragePooling3D.9'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'expected': {'data': data_out_formatted, 'shape': data_out_shape} } ``` **[pooling.AveragePooling3D.10] input 2x3x3x4, pool_size=(3, 3, 3), 
strides=(2, 2, 2), padding='valid', data_format='channels_first'** ``` data_in_shape = (2, 3, 3, 4) L = AveragePooling3D(pool_size=(3, 3, 3), strides=(2, 2, 2), padding='valid', data_format='channels_first') layer_0 = Input(shape=data_in_shape) layer_1 = L(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) np.random.seed(290) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['pooling.AveragePooling3D.10'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'expected': {'data': data_out_formatted, 'shape': data_out_shape} } ``` **[pooling.AveragePooling3D.11] input 2x3x3x4, pool_size=(3, 3, 3), strides=(1, 1, 1), padding='same', data_format='channels_first'** ``` data_in_shape = (2, 3, 3, 4) L = AveragePooling3D(pool_size=(3, 3, 3), strides=(1, 1, 1), padding='same', data_format='channels_first') layer_0 = Input(shape=data_in_shape) layer_1 = L(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) np.random.seed(291) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['pooling.AveragePooling3D.11'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'expected': {'data': data_out_formatted, 'shape': data_out_shape} } ``` **[pooling.AveragePooling3D.12] input 3x4x4x3, pool_size=(2, 2, 2), strides=None, padding='valid', data_format='channels_first'** ``` data_in_shape = (3, 4, 4, 3) L = AveragePooling3D(pool_size=(2, 2, 2), strides=None, padding='valid', data_format='channels_first') layer_0 = Input(shape=data_in_shape) layer_1 = L(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) np.random.seed(292) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['pooling.AveragePooling3D.12'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'expected': {'data': data_out_formatted, 'shape': data_out_shape} } ``` ### export for Keras.js tests ``` print(json.dumps(DATA)) ```
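The output shapes printed above follow the standard pooling arithmetic: with `'valid'` padding each spatial dimension becomes floor((n − pool) / stride) + 1, and with `'same'` padding it becomes ceil(n / stride). The sketch below (an illustration independent of the Keras code above; `pooled_dim` is a made-up helper) reproduces a few of the shapes used in these fixtures.

```
import math

def pooled_dim(n, pool, stride, padding):
    """Output length of one spatial dimension after pooling."""
    if padding == 'valid':
        return (n - pool) // stride + 1
    elif padding == 'same':
        return math.ceil(n / stride)
    raise ValueError(padding)

# pool_size=(2, 2, 2), strides default to pool_size, padding='valid': 4 -> 2
print([pooled_dim(n, 2, 2, 'valid') for n in (4, 4, 4)])   # [2, 2, 2]
# pool_size=(3, 3, 3), strides=(3, 3, 3), padding='same': 4 -> 2
print([pooled_dim(n, 3, 3, 'same') for n in (4, 4, 4)])    # [2, 2, 2]
# pool_size=(2, 2, 2), strides=(1, 2, 1), padding='same' on input 4x5x4
print([pooled_dim(*args, 'same') for args in ((4, 2, 1), (5, 2, 2), (4, 2, 1))])  # [4, 3, 4]
```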
# Debugging strategies

In this notebook, we'll talk about what happens when you get an error message (it will happen often!) and some steps you can take to resolve it.

Run the code in the next cell.

```
x = 10

if x > 20
    print(f'{x} is greater than 20!')
```

The "traceback" message shows you a couple of useful things:
- What line the error is on: `line 3`
- The class of error: `SyntaxError` (v common)
- Exactly _where_ the error occurred -- see where the `^` symbol is pointing?

What's the problem?

#### Googling

If it's not immediately clear what's wrong -- if you're not even sure what a `SyntaxError` is -- I might start by Googling the error message, the word "python" and maybe some keywords for what I was trying to do when I got the error. Something like [`"SyntaxError: invalid syntax" python if statement`](https://www.google.com/search?q=%22SyntaxError%3A+invalid+syntax%22+python+if+statement)

Click through the first couple of links -- you'll become _very_ familiar with StackOverflow -- and see if you spot the problem. If you're still stuck, maybe it's time to ...

#### Read the docs

My next stop would be the Python documentation to find some examples of the thing I'm trying to do. [Here's the page outlining how to write an `if` statement in Python](https://docs.python.org/3/tutorial/controlflow.html).

From there, I would copy the example code, run it, compare it line by line with my code and see what's different. If I'm _still_ stuck, I might see if there are other keywords to search on and take another run at Google.

#### Use `print()` liberally

The `print()` function can be a lifesaver -- it can show you _what_ a value is before you try to do something to it, and whether it matches up with your expectations of what that value should be, and thereby give you a clue about why your script is failing.

An example can help clarify this idea.

**Scenario:** Your newsroom is handing out longevity bonuses. (Congratulations!) Each employee's bonus will be the number of years they've been with the company, times 50. So we're going to loop over our staff data, held in a list of dictionaries, and calculate each person's bonus.

```
staff = [
    {'name': 'Fran', 'years_of_service': 2, 'job': 'Reporter'},
    {'name': 'Graham', 'years_of_service': 7, 'job': 'Reporter'},
    {'name': 'Pat', 'years_of_service': 4, 'job': 'Web Producer'},
    {'name': 'John', 'years_of_service': '26', 'job': 'Managing Editor'},
    {'name': 'Sue', 'years_of_service': 33, 'job': 'Executive Editor'}
]

for person in staff:
    name = person['name']
    bonus = person['years_of_service'] * 50
    print(f'{name} is getting a bonus of {bonus}')
```

We didn't get an exception, but something is _clearly_ wrong with John's bonus. What's going on?

Maybe you spot the error already. If not, we might Google something like ["python multiply numbers repeating"](https://www.google.com/search?q=python+multiply+numbers+repeating) -- which leads us to [this StackOverflow answer](https://stackoverflow.com/questions/20401871/want-to-multiply-not-repeat-variable). Is that what's going on here?

Let's add a `print()` statement before we do the multiplication and use the [`type()`](https://docs.python.org/3/library/functions.html#type) function to check the value that we're pulling out of each dictionary.

```
for person in staff:
    name = person['name']
    print(name, type(person['years_of_service']))
    bonus = person['years_of_service'] * 50
    print(f'{name} is getting a bonus of {bonus}')
```

Aha! John's value for `years_of_service` has been stored as a string, not an integer.
Let's fix that by using the [`int()`](https://docs.python.org/3/library/functions.html#int) function to coerce the value to an integer.

```
for person in staff:
    name = person['name']
    bonus = int(person['years_of_service']) * 50
    print(f'{name} is getting a bonus of {bonus}')
```

Winner winner, chicken dinner.

Here are some more debugging exercises for you to work through. See if you can figure out what's wrong and fix them.

```
print(Hello, Pittsburgh!)

desk = {
    'wood': 'fir',
    'color': 'black',
    'height_in': 36,
    'width_in': 48,
    'length_in': 68
}

print(desk['drawer_count'])

students = ['Kelly', 'Larry', 'José', 'Frank', 'Sarah', 'Sue']

for student in students:
    if student = 'Kelly':
        print('It's Kelly!')
    elif student == 'José':
        print("It's José!")

import cvs

with open('../../../data/eels.csv', 'r') as o:
    reader = csv.DictReader(o)
    for row in Reader:
        print(row)
```

### Further reading

- [Python's tutorial on errors and exceptions](https://docs.python.org/3/tutorial/errors.html)
- [Software Carpentry post on understanding Python errors](https://anenadic.github.io/2014-11-10-manchester/novice/python/07-errors.html)
- [How to read a traceback](http://cs.franklin.edu/~ansaria/traceback.html)
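One more habit that pairs well with `print()` debugging: wrap a suspect block in `try`/`except` and print the full traceback yourself, so a long loop over messy data doesn't die on the first bad record. A minimal sketch (the `records` list here is made up, not the staff data above):

```
import traceback

records = [{'years_of_service': 2}, {'years_of_service': 'twenty-six'}]

for record in records:
    try:
        bonus = int(record['years_of_service']) * 50
        print(f'bonus: {bonus}')
    except ValueError:
        # show exactly which record failed and why, then keep going
        print(f'skipping bad record: {record}')
        traceback.print_exc()
```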
``` import subprocess try: import dgl except: subprocess.check_call(["python", '-m', 'pip', 'install', 'dgl-cu110']) import dgl import os import dgl.data from dgl.data import DGLDataset import torch import torch.nn as nn import torch.nn.functional as F import pandas as pd import numpy as np import tqdm from sklearn.linear_model import LinearRegression from dgl.data.utils import save_graphs from dgl.data.utils import load_graphs from dgl.nn.pytorch.conv import ChebConv import copy import matplotlib.pyplot as plt os.chdir("/content/drive/MyDrive/Winter_Research") max_pixs = 128309 ``` ## Make the Dataset ``` CA_x, CA_y = [], [] KS_x, KS_y = [], [] MT_x, MT_y = [], [] TX_x, TX_y = [], [] OH_x, OH_y = [], [] states = {"CA" : [CA_x, CA_y, "Roi_1"], "KS" : [KS_x, KS_y, "Roi_2"], "MT" : [MT_x, MT_y, "Roi_3"], "TX" : [TX_x, TX_y, "Roi_4"], "OH" : [OH_x, OH_y, "Roi_5"]} ``` #### Load into RAM ``` master_df = pd.read_csv("Sentinel2_Traffic/Traffic_Data/5_state_traffic.csv") master_df = master_df.set_index("Unnamed: 0") CA_x, CA_y = [], [] KS_x, KS_y = [], [] MT_x, MT_y = [], [] TX_x, TX_y = [], [] OH_x, OH_y = [], [] states = {"Cali" : [CA_x, CA_y, "Roi_1"], "KS" : [KS_x, KS_y, "Roi_2"], "MT" : [MT_x, MT_y, "Roi_3"], "TX" : [TX_x, TX_y, "Roi_4"], "Ohio" : [OH_x, OH_y, "Roi_5"]} j = 0 for st in ["Cali", "KS", "MT", "TX", "Ohio"]: # for st in ["TX"]: # path_check = "R/" + states[st][2] + "/greedy_a/" path = "new_roi/" + st # + "/sent_cloud_90p_raw/" # imgs_check = os.listdir(path_check) imgs = os.listdir(path) # for img, img_check in zip(imgs, imgs_check): for img in imgs: date = img[len(st):len(st) + 10] # print(date) # break try: photo = pd.read_csv(path + '/' + img) except: continue # photo_check = np.loadtxt(path_check + img_check).reshape(-1, 7, 3) # cali_pixs = 72264 # # kansas_pixs = 69071 # # mont_pixs = 72099 # # texas_pixs = 71764 # ohio_pixs = 62827 if photo.shape[0] < 50000: continue if date in list(master_df.index): if st == "Cali": lookup_st = "CA" elif st == "Ohio": lookup_st = "OH" else: lookup_st = st if not pd.isna(master_df.loc[date][lookup_st]): states[st][0].append(photo) states[st][1].append(master_df.loc[date][lookup_st]) print(j, st, photo.shape) j += 1 def gen_around(x, y): return [(x, y), (x, y + 10), (x, y - 10), (x + 10, y), (x - 10, y), (x + 10, y + 10), (x + 10, y - 10), (x - 10, y + 10), (x - 10, y - 10)] def gen_around_strict(x, y): return [(x, y), (x, y + 10), (x, y - 10), (x + 10, y), (x - 10, y)] def neighbors(road, coords, x, y, diagonal=True): neigh = [] if diagonal: cand = gen_around(x, y) else: cand = gen_around_strict(x, y) for pix in cand: if pix[0] in coords: if pix[1] in coords[pix[0]]: neigh.append(coords[pix[0]][pix[1]]['idx']) return neigh def src_dst(road, coords, diagonal=True): src, dst, values = [], [] , [] for row in range(road.shape[0]): x = road["x"][row] y = road["y"][row] idx = coords[x][y]['idx'] val = coords[x][y]['val'] # if val[0] != road[row][:3][0]: # assert(False) for c in neighbors(road, coords, x, y, diagonal): src.append(idx) dst.append(c) values.append(val) return src, dst #, values device = torch.cuda.current_device() class RoadDataset(DGLDataset): def __init__(self, states): self.states = states super().__init__(name='road_graphs') def process(self): self.graphs = [] self.labels = [] self.state = [] for st in self.states.keys(): # for st in ["TX"]: print(st) for i in range(len(self.states[st][0])): print(i) img = states[st][0][i] coords = {} vals = [] print(img.shape[0]) for j in range(img.shape[0]): # print(img[j].shape) lon = 
img["x"][j].astype(int) # print(lon) lat = img["y"][j].astype(int) val = [img["B2"][j], img["B3"][j], img["B4"][j]] vals.append(val) if lon not in coords: coords[lon] = {} coords[lon][lat] = {'idx' : j, 'val' : val} src, dst = src_dst(img, coords) #src, dst, values = src_dst(img, coords) # print(np.mean(src), np.mean(dst), np.mean(values)) graph = dgl.graph((src, dst), num_nodes=img.shape[0]) graph.ndata['feat'] = torch.from_numpy(np.array(vals)) #graph = graph.add_self_loop(graph) graph = graph.to(device) self.graphs.append(graph) self.labels.append(self.states[st][1][i]) self.state.append(st) # assert(False) def __getitem__(self, i): return self.graphs[i], self.labels[i], self.state[i] def __len__(self): return len(self.graphs) class RoadDatasetLoad(DGLDataset): def __init__(self, states): self.states = states super().__init__(name='road_graphs') def process(self): self.graphs = load_graphs("graphs/data_new.bin")[0] self.labels = np.loadtxt("graphs/labels_new.csv") self.state = np.loadtxt("graphs/states_new.csv", dtype=np.str) def __getitem__(self, i): return self.graphs[i], self.labels[i]#, self.state[i] def __len__(self): return len(self.graphs) Road_Graphs = RoadDataset(states) dataset = Road_Graphs dataset[100] # Road_Graphs = RoadDataset(states) save_graphs('graphs/data_new.bin', dataset.graphs) labels = np.array(dataset.labels) states = np.array(dataset.state) np.savetxt("graphs/labels_new.csv", labels) np.savetxt('graphs/states_new.csv', states, fmt="%s") Road_Load = RoadDatasetLoad(states) dataset_save = dataset # Generate a synthetic dataset with 10000 graphs, ranging from 10 to 500 nodes. # dataset = dgl.data.GINDataset('PROTEINS', self_loop=True) dataset = Road_Load ``` ## Train the Model ``` # X = dataset[:][0] # y = dataset[:][1] print(dataset.state[0:37]) print(dataset.state[37:64]) print(dataset.state[64:88]) print(dataset.state[88:119]) print(dataset.state[119:124]) from dgl.dataloading import GraphDataLoader from torch.utils.data.sampler import SubsetRandomSampler from torch.utils.data import DataLoader state_val = False one_sample = False state = "TX" lookup_state = {"CA" : 0, "KS" : 1, "MT" : 2, "TX" : 3, "OH" : 4} state_idxs = [(0, 37), (37, 64), (64, 88), (88, 119), (119, 124)] num_examples = len(dataset) if state_val: x = torch.arange(num_examples) start = state_idxs[lookup_state[state]][0] end = state_idxs[lookup_state[state]][1] test_sample = x[start + 3: end] val_sample = x[start : start + 3] train_sample = torch.cat((x[:start], x[end:])) train_sample = train_sample[torch.randperm(train_sample.shape[0])] print(train_sample) else: num_train = int(num_examples * 0.7) num_val = int(num_examples * 0.85) x = torch.randperm(num_examples) train_sample = x[:num_train] val_sample = x[num_train: num_val] test_sample = x[num_val:] train_sampler = SubsetRandomSampler(train_sample) val_sampler = SubsetRandomSampler(val_sample) test_sampler = SubsetRandomSampler(test_sample) train_dataloader = GraphDataLoader( dataset, sampler=train_sampler, batch_size=16, drop_last=False) val_dataloader = GraphDataLoader( dataset, sampler=val_sampler, batch_size=16, drop_last=False) test_dataloader = GraphDataLoader( dataset, sampler=test_sampler, batch_size=16, drop_last=False) # print(train_sample, val_sample, test_sample) it = iter(test_dataloader) batch = next(it) print(batch) batched_graph, labels = batch print('Number of nodes for each graph element in the batch:', batched_graph.batch_num_nodes()) print('Number of edges for each graph element in the batch:', 
batched_graph.batch_num_edges()) # Recover the original graph elements from the minibatch graphs = dgl.unbatch(batched_graph) print('The original graphs in the minibatch:') print(graphs) print(labels) from dgl.nn import GraphConv, DenseGraphConv, GATConv class GCN(nn.Module): def __init__(self, in_feats, conv_hidden, lin_hidden): super(GCN, self).__init__() self.conv_layers = nn.ModuleList() self.LR = nn.LeakyReLU(0.2) self.lin_layers = nn.ModuleList() self.conv_layers.append(GraphConv(in_feats, conv_hidden[0])) for i in range(1, len(conv_hidden)): self.conv_layers.append(GraphConv(conv_hidden[i - 1], conv_hidden[i])) for i in range(1, len(lin_hidden) - 1): self.lin_layers.append(nn.Linear(lin_hidden[i - 1], lin_hidden[i])) #self.lin_layers.append(nn.BatchNorm1d(lin_hidden[i])) self.lin_layers.append(nn.Linear(lin_hidden[-2], lin_hidden[-1])) def forward(self, g, in_feat): output = in_feat for layer in self.conv_layers: output = self.LR(layer(g, output)) # print(torch.mean(output)) graphs = dgl.unbatch(g) flat_arr = torch.zeros((g.batch_size, max_pixs)) prev = 0 # print("Before", torch.mean(output)) for i in range(len(batched_graph.batch_num_nodes())): end = prev + int(batched_graph.batch_num_nodes()[i].item()) entry = output[prev: end] entry = entry / int(g.batch_num_nodes()[i].item()) pad_val = int(torch.mean(entry).item()) pad_length = (max_pixs - entry.shape[0]) // 2 entry = torch.nn.functional.pad(entry.flatten(), (pad_length, pad_length), value=pad_val) flat_arr[i][:entry.shape[0]] = entry prev = end flat_arr = flat_arr.to(device) #print("After", torch.mean(flat_arr)) output = flat_arr for i, layer in enumerate(self.lin_layers): output = layer(output) if i != (len(self.lin_layers) - 1): output = self.LR(output) #print(flat_arr.shape) # g.ndata['h'] = h # print(dgl.mean_nodes(g, 'h')) # assert(False) return output #dgl.mean_nodes(g, 'h') # # Create the model with given dimensions model = GCN(3, [10, 10, 1], [max_pixs,1000, 500, 100, 50, 10, 1]) # model = GCN(3, 16, 1) model.cuda() criterion = nn.MSELoss() #model.to('cuda:0') optimizer = torch.optim.Adam(model.parameters(), lr=0.01) del criterion del optimizer del model torch.cuda.empty_cache() def init_weights(m): if type(m) == nn.Linear: torch.nn.init.xavier_uniform(m.weight) m.bias.data.fill_(0.01) model.apply(init_weights) best_model = model min_val = 1e9 j = 0 for epoch in range(100): loss_tot = 0 loss = 0 batches = 0 model.train() for batched_graph, labels in train_dataloader: batched_graph = batched_graph.to(device) labels = labels.to(device) pred = model(batched_graph, batched_graph.ndata['feat'].float()) # print(pred, labels) labels = labels.to(device) loss = criterion(pred, labels.reshape(labels.shape[0], 1).float()) loss_tot += loss.item() batches += 1 optimizer.zero_grad() loss.backward() optimizer.step() if j % 10 == 0: print("Train Loss:", loss_tot / batches) num_tests = 0 loss_i = 0 with torch.no_grad(): model.eval() for batched_graph, labels in val_dataloader: batched_graph = batched_graph.to(device) labels = labels.to(device) pred = model(batched_graph, batched_graph.ndata['feat'].float()) loss_i += criterion(pred, labels.reshape(labels.shape[0], 1).float()).item() # x.extend([x[0] for x in pred.cpu().detach().numpy().tolist()]) # y.extend([x[0] for x in labels.reshape(labels.shape[0], 1).cpu().detach().numpy().tolist()]) # print(type(pred)) num_tests += 1 val_loss = loss_i / num_tests if j % 10 == 0: print('Val loss:', val_loss) # val_loss.append(loss_v.item()) if val_loss < min_val: print("new_best:", val_loss) 
min_val = val_loss best_model = copy.deepcopy(model) j += 1 # num_correct = 0 num_tests = 0 x = [] y = [] loss = 0 with torch.no_grad(): for batched_graph, labels in test_dataloader: # print(batched_graph) batched_graph = batched_graph.to(device) labels = labels.to(device) pred = best_model(batched_graph, batched_graph.ndata['feat'].float()) loss += criterion(pred, labels.reshape(labels.shape[0], 1).float()).item() x.extend([x[0] for x in pred.cpu().detach().numpy().tolist()]) y.extend([x[0] for x in labels.reshape(labels.shape[0], 1).cpu().detach().numpy().tolist()]) num_tests += 1 print('Test loss:', loss / num_tests) x_temp = y y_temp = x # print(y_temp) # for i in range(len(y_temp)): # if y_temp[i] < 600: # y_temp.pop(i) # x_temp.pop(i) # break x_plot = np.array(y_temp) y_plot = np.array(x_temp) new_x = np.array(x_plot).reshape(-1,1) new_y = np.array(y_plot) fit = LinearRegression().fit(new_x, new_y) score = fit.score(new_x, new_y) plt.xlabel("Prediction") plt.ylabel("Actual Traffic") print(score) plt.scatter(new_x, new_y) axes = plt.gca() x_vals = np.array(axes.get_xlim()) y_vals = x_vals plt.plot(x_vals, y_vals, '--') pre_y = fit.predict(new_x) # plt.plot plt.plot(new_x, pre_y) plt.plot(x_vals, y_vals, '--') # plt.savefig("GCN_MSE_143_r_881.png") plt.show() y labels class ChebNet(nn.Module): def __init__(self, k, in_feats, hiddens, out_feats): super(ChebNet, self).__init__() self.pool = nn.MaxPool1d(2) self.layers = nn.ModuleList() self.readout = MaxPooling() # Input layer self.layers.append( ChebConv(in_feats, hiddens[0], k)) for i in range(1, len(hiddens)): self.layers.append( ChebConv(hiddens[i - 1], hiddens[i], k)) self.cls = nn.Sequential( nn.Linear(hiddens[-1], out_feats), nn.LogSoftmax() ) def forward(self, g_arr, feat): for g, layer in zip(g_arr, self.layers): feat = self.pool(layer(g, feat, [2] * g.batch_size).transpose(-1, -2).unsqueeze(0))\ .squeeze(0).transpose(-1, -2) return self.cls(self.readout(g_arr[-1], feat)) ```
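For comparison, DGL also offers a much simpler graph-level readout than the fixed-size padding used in `GCN.forward`: averaging node embeddings with `dgl.mean_nodes` (the commented-out `dgl.mean_nodes(g, 'h')` above hints at this). The sketch below is a hypothetical alternative regressor, not the model trained above; the two `GraphConv` layers, the hidden size of 16, and the single-output head are illustrative choices.

```
# Minimal sketch (assumption): a graph-level regressor that mean-pools node
# embeddings with dgl.mean_nodes instead of padding every graph to max_pixs.
import torch
import torch.nn as nn
import dgl
from dgl.nn import GraphConv

class MeanPoolGCN(nn.Module):
    def __init__(self, in_feats, hidden_feats):
        super(MeanPoolGCN, self).__init__()
        self.conv1 = GraphConv(in_feats, hidden_feats)
        self.conv2 = GraphConv(hidden_feats, hidden_feats)
        self.head = nn.Linear(hidden_feats, 1)  # one traffic value per graph
        self.act = nn.LeakyReLU(0.2)

    def forward(self, g, feat):
        h = self.act(self.conv1(g, feat))
        h = self.act(self.conv2(g, h))
        g.ndata['h'] = h
        return self.head(dgl.mean_nodes(g, 'h'))  # shape: (batch_size, 1)

# Hypothetical usage with the existing dataloader:
# model = MeanPoolGCN(3, 16).to(device)
# pred = model(batched_graph, batched_graph.ndata['feat'].float())
```

Because the pooled vector has a fixed size regardless of how many pixels a road graph contains, this variant avoids the `max_pixs` padding and the per-graph loop entirely.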
``` !wget http://www.cs.cornell.edu/~cristian/data/cornell_movie_dialogs_corpus.zip !unzip cornell_movie_dialogs_corpus.zip -d dialogues import torch from torch.jit import script, trace import torch.nn as nn from torch import optim import torch.nn.functional as F import csv import random import re import os import unicodedata import codecs from io import open import itertools import math device = torch.device("cuda" if torch.cuda.is_available() else "cpu") device corpus_name = "cornell movie-dialogs corpus" corpus = os.path.join("dialogues", corpus_name) def printLines(file, n=10): with open(file, 'rb') as datafile: lines = datafile.readlines() for line in lines[:n]: print(line) printLines(os.path.join(corpus, "movie_lines.txt")) printLines(os.path.join(corpus, "movie_conversations.txt")) # load movie_lines.txt def loadLines(fileName, fields): lines = {} with open(fileName, 'r', encoding='iso-8859-1') as f: for line in f: values = line.split(" +++$+++ ") lineObj = {} for i, field in enumerate(fields): lineObj[field] = values[i] lines[lineObj['lineID']] = lineObj return lines # # load movie_conversations.txt def loadConversations(fileName, lines, fields): conversations = [] with open(fileName, 'r', encoding='iso-8859-1') as f: for line in f: values = line.split(" +++$+++ ") convObj = {} for i, field in enumerate(fields): convObj[field] = values[i] lineIds = eval(convObj["utteranceIDs"]) convObj["lines"] = [] for lineId in lineIds: convObj["lines"].append(lines[lineId]) conversations.append(convObj) return conversations def extractSentencePairs(conversations): qa_pairs = [] for conversation in conversations: for i in range(len(conversation["lines"]) - 1): # We ignore the last line (no answer for it) inputLine = conversation["lines"][i]["text"].strip() targetLine = conversation["lines"][i+1]["text"].strip() if inputLine and targetLine: qa_pairs.append([inputLine, targetLine]) return qa_pairs MOVIE_LINES_FIELDS = ["lineID", "characterID", "movieID", "character", "text"] MOVIE_CONVERSATIONS_FIELDS = ["character1ID", "character2ID", "movieID", "utteranceIDs"] lines = loadLines(os.path.join(corpus, "movie_lines.txt"), MOVIE_LINES_FIELDS) conversations = loadConversations(os.path.join(corpus, "movie_conversations.txt"), lines, MOVIE_CONVERSATIONS_FIELDS) lines['L194'] conversations[:1] datafile = os.path.join(corpus, "formatted_movie_lines.txt") delimiter = '\t' delimiter = str(codecs.decode(delimiter, "unicode_escape")) with open(datafile, 'w', encoding='utf-8') as outputfile: writer = csv.writer(outputfile, delimiter=delimiter, lineterminator='\n') for pair in extractSentencePairs(conversations): writer.writerow(pair) print("\nSample lines from file:") printLines(datafile) PAD_token = 0 # padding short sentences SOS_token = 1 # Start-of-sentence EOS_token = 2 # End-of-sentence class Voc: def __init__(self, name): self.name = name self.trimmed = False self.word2index = {} self.word2count = {} self.index2word = {PAD_token: "PAD", SOS_token: "SOS", EOS_token: "EOS"} self.num_words = 3 # Count SOS, EOS, PAD def addSentence(self, sentence): for word in sentence.split(' '): self.addWord(word) def addWord(self, word): if word not in self.word2index: self.word2index[word] = self.num_words self.word2count[word] = 1 self.index2word[self.num_words] = word self.num_words += 1 else: self.word2count[word] += 1 def trim(self, min_count): if self.trimmed: return self.trimmed = True keep_words = [] for k, v in self.word2count.items(): if v >= min_count: keep_words.append(k) print('keep_words {} / {} = 
{:.4f}'.format( len(keep_words), len(self.word2index), len(keep_words) / len(self.word2index) )) # Reinitialize self.word2index = {} self.word2count = {} self.index2word = {PAD_token: "PAD", SOS_token: "SOS", EOS_token: "EOS"} self.num_words = 3 for word in keep_words: self.addWord(word) MAX_LENGTH = 10 def unicodeToAscii(s): return ''.join( c for c in unicodedata.normalize('NFD', s) if unicodedata.category(c) != 'Mn' ) def normalizeString(s): s = unicodeToAscii(s.lower().strip()) s = re.sub(r"([.!?])", r" \1", s) s = re.sub(r"[^a-zA-Z.!?]+", r" ", s) s = re.sub(r"\s+", r" ", s).strip() return s def readFile(datafile): lines = open(datafile, encoding='utf-8').read().strip().split('\n') pairs = [[normalizeString(s) for s in l.split('\t')] for l in lines] return pairs def filterPair(p): return len(p[0].split(' ')) < MAX_LENGTH and len(p[1].split(' ')) < MAX_LENGTH def filterPairs(pairs): return [pair for pair in pairs if filterPair(pair)] def loadPrepareData(corpus_name, datafile): pairs = readFile(datafile) print("Read {!s} sentence pairs".format(len(pairs))) pairs = filterPairs(pairs) print("Trimmed to {!s} sentence pairs".format(len(pairs))) print("Counting words...") voc = Voc(corpus_name) for pair in pairs: voc.addSentence(pair[0]) voc.addSentence(pair[1]) print("Counted words:", voc.num_words) return voc, pairs voc, pairs = loadPrepareData(corpus_name, datafile) print("\nsanity check\npairs:") for pair in pairs[:10]: print(pair) MIN_COUNT = 3 def trimRareWords(voc, pairs, MIN_COUNT): voc.trim(MIN_COUNT) # Filter pairs with trimmed words keep_pairs = [] for pair in pairs: input_sentence = pair[0] output_sentence = pair[1] keep_input = True keep_output = True for word in input_sentence.split(' '): if word not in voc.word2index: keep_input = False break for word in output_sentence.split(' '): if word not in voc.word2index: keep_output = False break if keep_input and keep_output: keep_pairs.append(pair) print("Trimmed from {} pairs to {}, {:.4f} of total".format(len(pairs), len(keep_pairs), len(keep_pairs) / len(pairs))) return keep_pairs pairs = trimRareWords(voc, pairs, MIN_COUNT) def indexesFromSentence(voc, sentence): return [voc.word2index[word] for word in sentence.split(' ')] + [EOS_token] def zeroPadding(l, fillvalue=PAD_token): return list(itertools.zip_longest(*l, fillvalue=fillvalue)) def binaryMatrix(l, value=PAD_token): m = [] for i, seq in enumerate(l): m.append([]) for token in seq: if token == PAD_token: m[i].append(0) else: m[i].append(1) return m def inputVar(l, voc): indexes_batch = [indexesFromSentence(voc, sentence) for sentence in l] lengths = torch.tensor([len(indexes) for indexes in indexes_batch]) padList = zeroPadding(indexes_batch) padVar = torch.LongTensor(padList) return padVar, lengths def outputVar(l, voc): indexes_batch = [indexesFromSentence(voc, sentence) for sentence in l] max_target_len = max([len(indexes) for indexes in indexes_batch]) padList = zeroPadding(indexes_batch) mask = binaryMatrix(padList) mask = torch.ByteTensor(mask) padVar = torch.LongTensor(padList) return padVar, mask, max_target_len def batch2TrainData(voc, pair_batch): pair_batch.sort(key=lambda x: len(x[0].split(" ")), reverse=True) input_batch, output_batch = [], [] for pair in pair_batch: input_batch.append(pair[0]) output_batch.append(pair[1]) inp, lengths = inputVar(input_batch, voc) output, mask, max_target_len = outputVar(output_batch, voc) return inp, lengths, output, mask, max_target_len small_batch_size = 5 batches = batch2TrainData(voc, [random.choice(pairs) for _ in 
range(small_batch_size)]) input_variable, lengths, target_variable, mask, max_target_len = batches print("input_variable:", input_variable) print("lengths:", lengths) print("target_variable:", target_variable) print("mask:", mask) print("max_target_len:", max_target_len) ``` ## Bi directional GRU encoder ``` class EncoderRNN(nn.Module): def __init__(self, hidden_size, embedding, n_layers=1, dropout=0): super(EncoderRNN, self).__init__() self.n_layers = n_layers self.hidden_size = hidden_size self.embedding = embedding self.gru = nn.GRU(hidden_size, hidden_size, n_layers, dropout=(0 if n_layers == 1 else dropout), bidirectional=True) def forward(self, input_seq, input_lengths, hidden=None): embedded = self.embedding(input_seq) packed = torch.nn.utils.rnn.pack_padded_sequence(embedded, input_lengths) outputs, hidden = self.gru(packed, hidden) outputs, _ = torch.nn.utils.rnn.pad_packed_sequence(outputs) outputs = outputs[:, :, :self.hidden_size] + outputs[:, : ,self.hidden_size:] return outputs, hidden ``` ### Attention module ``` class Attn(torch.nn.Module): def __init__(self, method, hidden_size): super(Attn, self).__init__() self.method = method if self.method not in ['dot', 'general', 'concat']: raise ValueError(self.method, "is not an appropriate attention method.") self.hidden_size = hidden_size if self.method == 'general': self.attn = torch.nn.Linear(self.hidden_size, hidden_size) elif self.method == 'concat': self.attn = torch.nn.Linear(self.hidden_size * 2, hidden_size) self.v = torch.nn.Parameter(torch.FloatTensor(hidden_size)) def dot_score(self, hidden, encoder_output): return torch.sum(hidden * encoder_output, dim=2) def general_score(self, hidden, encoder_output): energy = self.attn(encoder_output) return torch.sum(hidden * energy, dim=2) def concat_score(self, hidden, encoder_output): energy = self.attn(torch.cat((hidden.expand(encoder_output.size(0), -1, -1), encoder_output), 2)).tanh() return torch.sum(self.v * energy, dim=2) def forward(self, hidden, encoder_outputs): if self.method == 'general': attn_energies = self.general_score(hidden, encoder_outputs) elif self.method == 'concat': attn_energies = self.concat_score(hidden, encoder_outputs) elif self.method == 'dot': attn_energies = self.dot_score(hidden, encoder_outputs) attn_energies = attn_energies.t() return F.softmax(attn_energies, dim=1).unsqueeze(1) class AttnDecoderRNN(nn.Module): def __init__(self, attn_model, embedding, hidden_size, output_size, n_layers=1, dropout=0.1): super(AttnDecoderRNN, self).__init__() self.attn_model = attn_model self.hidden_size = hidden_size self.output_size = output_size self.n_layers = n_layers self.dropout = dropout self.embedding = embedding self.embedding_dropout = nn.Dropout(dropout) self.gru = nn.GRU(hidden_size, hidden_size, n_layers, dropout=(0 if n_layers == 1 else dropout)) self.concat = nn.Linear(hidden_size * 2, hidden_size) self.out = nn.Linear(hidden_size, output_size) self.attn = Attn(attn_model, hidden_size) def forward(self, input_step, last_hidden, encoder_outputs): embedded = self.embedding(input_step) embedded = self.embedding_dropout(embedded) rnn_output, hidden = self.gru(embedded, last_hidden) attn_weights = self.attn(rnn_output, encoder_outputs) context = attn_weights.bmm(encoder_outputs.transpose(0, 1)) rnn_output = rnn_output.squeeze(0) context = context.squeeze(1) concat_input = torch.cat((rnn_output, context), 1) concat_output = torch.tanh(self.concat(concat_input)) output = self.out(concat_output) output = F.softmax(output, dim=1) return output, hidden def 
maskNLLLoss(inp, target, mask): nTotal = mask.sum() crossEntropy = -torch.log(torch.gather(inp, 1, target.view(-1, 1))) loss = crossEntropy.masked_select(mask).mean() loss = loss.to(device) return loss, nTotal.item() def train(input_variable, lengths, target_variable, mask, max_target_len, encoder, decoder, encoder_optimizer, decoder_optimizer, batch_size, clip): encoder_optimizer.zero_grad() decoder_optimizer.zero_grad() input_variable = input_variable.to(device) lengths = lengths.to(device) target_variable = target_variable.to(device) mask = mask.to(device) loss = 0 print_losses = [] n_totals = 0 encoder_outputs, encoder_hidden = encoder(input_variable, lengths) decoder_input = torch.LongTensor([[SOS_token for _ in range(batch_size)]]) decoder_input = decoder_input.to(device) decoder_hidden = encoder_hidden[:decoder.n_layers] use_teacher_forcing = True if random.random() < teacher_forcing_ratio else False if use_teacher_forcing: for t in range(max_target_len): decoder_output, decoder_hidden = decoder(decoder_input, decoder_hidden, encoder_outputs) decoder_input = target_variable[t].view(1, -1) mask_loss, nTotal = maskNLLLoss(decoder_output, target_variable[t], mask[t]) loss += mask_loss print_losses.append(mask_loss.item() * nTotal) n_totals += nTotal else: for t in range(max_target_len): decoder_output, decoder_hidden = decoder(decoder_input, decoder_hidden, encoder_outputs) _, topi = decoder_output.topk(1) decoder_input = torch.LongTensor([[topi[i][0] for i in range(batch_size)]]) decoder_input = decoder_input.to(device) mask_loss, nTotal = maskNLLLoss(decoder_output, target_variable[t], mask[t]) loss += mask_loss print_losses.append(mask_loss.item() * nTotal) n_totals += nTotal loss.backward() _ = torch.nn.utils.clip_grad_norm_(encoder.parameters(), clip) _ = torch.nn.utils.clip_grad_norm_(decoder.parameters(), clip) encoder_optimizer.step() decoder_optimizer.step() return sum(print_losses) / n_totals def trainIters(model_name, voc, pairs, encoder, decoder, encoder_optimizer, decoder_optimizer, encoder_n_layers, decoder_n_layers, save_dir, n_iteration, batch_size, print_every, save_every, clip, corpus_name, loadFilename): training_batches = [batch2TrainData(voc, [random.choice(pairs) for _ in range(batch_size)]) for _ in range(n_iteration)] print('Initializing ...') start_iteration = 1 print_loss = 0 if loadFilename: start_iteration = checkpoint['iteration'] + 1 print("Training...") for iteration in range(start_iteration, n_iteration + 1): training_batch = training_batches[iteration - 1] input_variable, lengths, target_variable, mask, max_target_len = training_batch # Run a training iteration with batch loss = train(input_variable, lengths, target_variable, mask, max_target_len, encoder, decoder, encoder_optimizer, decoder_optimizer, batch_size, clip) print_loss += loss if iteration % print_every == 0: print_loss_avg = print_loss / print_every print("Iteration: {}; Percent complete: {:.1f}%; Average loss: {:.4f}".format(iteration, iteration / n_iteration * 100, print_loss_avg)) print_loss = 0 if (iteration % save_every == 0): directory = os.path.join(save_dir, model_name, corpus_name, '{}-{}_{}'.format(encoder_n_layers, decoder_n_layers, hidden_size)) if not os.path.exists(directory): os.makedirs(directory) torch.save({ 'iteration': iteration, 'en': encoder.state_dict(), 'de': decoder.state_dict(), 'en_opt': encoder_optimizer.state_dict(), 'de_opt': decoder_optimizer.state_dict(), 'loss': loss, 'voc_dict': voc.__dict__, 'embedding': embedding.state_dict() }, os.path.join(directory, 
'{}_{}.tar'.format(iteration, 'checkpoint'))) class GreedySearchDecoder(nn.Module): def __init__(self, encoder, decoder): super(GreedySearchDecoder, self).__init__() self.encoder = encoder self.decoder = decoder def forward(self, input_seq, input_length, max_length): encoder_outputs, encoder_hidden = self.encoder(input_seq, input_length) decoder_hidden = encoder_hidden[:decoder.n_layers] decoder_input = torch.ones(1, 1, device=device, dtype=torch.long) * SOS_token all_tokens = torch.zeros([0], device=device, dtype=torch.long) all_scores = torch.zeros([0], device=device) for _ in range(max_length): decoder_output, decoder_hidden = self.decoder(decoder_input, decoder_hidden, encoder_outputs) decoder_scores, decoder_input = torch.max(decoder_output, dim=1) all_tokens = torch.cat((all_tokens, decoder_input), dim=0) all_scores = torch.cat((all_scores, decoder_scores), dim=0) decoder_input = torch.unsqueeze(decoder_input, 0) return all_tokens, all_scores def evaluate(encoder, decoder, searcher, voc, sentence, max_length=MAX_LENGTH): indexes_batch = [indexesFromSentence(voc, sentence)] lengths = torch.tensor([len(indexes) for indexes in indexes_batch]) input_batch = torch.LongTensor(indexes_batch).transpose(0, 1) input_batch = input_batch.to(device) lengths = lengths.to(device) tokens, scores = searcher(input_batch, lengths, max_length) decoded_words = [voc.index2word[token.item()] for token in tokens] return decoded_words def evaluateInput(encoder, decoder, searcher, voc): input_sentence = '' while(1): try: input_sentence = input('> ') # Check if it is quit case if input_sentence == 'q' or input_sentence == 'quit': break input_sentence = normalizeString(input_sentence) output_words = evaluate(encoder, decoder, searcher, voc, input_sentence) output_words[:] = [x for x in output_words if not (x == 'EOS' or x == 'PAD')] print('Bot:', ' '.join(output_words)) except KeyError: print("Error: Encountered unknown word.") model_name = 'cb_model' attn_model = 'dot' #attn_model = 'general' #attn_model = 'concat' hidden_size = 500 encoder_n_layers = 2 decoder_n_layers = 2 dropout = 0.1 batch_size = 64 # Set checkpoint to load from; set to None if starting from scratch loadFilename = None checkpoint_iter = 4000 #loadFilename = os.path.join(save_dir, model_name, corpus_name, # '{}-{}_{}'.format(encoder_n_layers, decoder_n_layers, hidden_size), # '{}_checkpoint.tar'.format(checkpoint_iter)) # Load model if a loadFilename is provided if loadFilename: # If loading on same machine the model was trained on checkpoint = torch.load(loadFilename) # If loading a model trained on GPU to CPU #checkpoint = torch.load(loadFilename, map_location=torch.device('cpu')) encoder_sd = checkpoint['en'] decoder_sd = checkpoint['de'] encoder_optimizer_sd = checkpoint['en_opt'] decoder_optimizer_sd = checkpoint['de_opt'] embedding_sd = checkpoint['embedding'] voc.__dict__ = checkpoint['voc_dict'] save_dir = os.path.join("dialogues", "save") print('Building encoder and decoder ...') embedding = nn.Embedding(voc.num_words, hidden_size) if loadFilename: embedding.load_state_dict(embedding_sd) # Initialize encoder & decoder models encoder = EncoderRNN(hidden_size, embedding, encoder_n_layers, dropout) decoder = AttnDecoderRNN(attn_model, embedding, hidden_size, voc.num_words, decoder_n_layers, dropout) if loadFilename: encoder.load_state_dict(encoder_sd) decoder.load_state_dict(decoder_sd) # Use appropriate device encoder = encoder.to(device) decoder = decoder.to(device) print('Models built and ready to go!') clip = 50.0 
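# Note (added comment): `clip` caps the gradient norm inside train() via
# torch.nn.utils.clip_grad_norm_, which helps keep the GRU gradients from exploding,
# while teacher_forcing_ratio (set just below) is the probability of feeding the
# decoder the ground-truth previous token instead of its own last prediction.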
teacher_forcing_ratio = 1.0 learning_rate = 0.0001 decoder_learning_ratio = 5.0 n_iteration = 4000 print_every = 100 save_every = 500 encoder.train() decoder.train() print('Building optimizers ...') encoder_optimizer = optim.Adam(encoder.parameters(), lr=learning_rate) decoder_optimizer = optim.Adam(decoder.parameters(), lr=learning_rate * decoder_learning_ratio) if loadFilename: encoder_optimizer.load_state_dict(encoder_optimizer_sd) decoder_optimizer.load_state_dict(decoder_optimizer_sd) print("Starting Training!") trainIters(model_name, voc, pairs, encoder, decoder, encoder_optimizer, decoder_optimizer, encoder_n_layers, decoder_n_layers, save_dir, n_iteration, batch_size, print_every, save_every, clip, corpus_name, loadFilename) encoder.eval() decoder.eval() searcher = GreedySearchDecoder(encoder, decoder) evaluateInput(encoder, decoder, searcher, voc) ```
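The checkpoints written by `trainIters` can later be reloaded to chat without retraining. The snippet below is a hedged sketch, assuming a checkpoint from the run above exists at the path pattern used in `trainIters` (the iteration number comes from `checkpoint_iter`); it mirrors the loading branch already present earlier in the notebook.

```
# Sketch (assumption: a checkpoint from the training run above exists on disk).
loadFilename = os.path.join(save_dir, model_name, corpus_name,
                            '{}-{}_{}'.format(encoder_n_layers, decoder_n_layers, hidden_size),
                            '{}_checkpoint.tar'.format(checkpoint_iter))
checkpoint = torch.load(loadFilename, map_location=device)

voc.__dict__ = checkpoint['voc_dict']
embedding = nn.Embedding(voc.num_words, hidden_size)
embedding.load_state_dict(checkpoint['embedding'])

encoder = EncoderRNN(hidden_size, embedding, encoder_n_layers, dropout)
decoder = AttnDecoderRNN(attn_model, embedding, hidden_size, voc.num_words,
                         decoder_n_layers, dropout)
encoder.load_state_dict(checkpoint['en'])
decoder.load_state_dict(checkpoint['de'])
encoder = encoder.to(device).eval()
decoder = decoder.to(device).eval()

searcher = GreedySearchDecoder(encoder, decoder)
reply = evaluate(encoder, decoder, searcher, voc, normalizeString("how are you?"))
print('Bot:', ' '.join(w for w in reply if w not in ('EOS', 'PAD')))
```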
# When can we start watching? --- Henry Rachootin - December 2018 MIT License: https://opensource.org/licenses/MIT --- BitTorrent allows people to download movies without staying strictly within the confines of the law, but because of the peer to peer nature of the download, the file will not download sequentially. The VLC player can play the incomplete movie, but if it encounters a missing piece while streaming it will fail. Our pirate pirate friend is downloading _Avengers: Infinity War_, which is 149 minutes long and 12.91 GB. The torrent downloads in 4 MB pieces. If we start watching the movie when their torrent client says it is $x$ percent downloaded, What is the probability that we can get $t$ seconds into the movie without VLC failing on a missing piece? ``` # Configure Jupyter so figures appear in the notebook %matplotlib inline # Configure Jupyter to display the assigned value after an assignment %config InteractiveShell.ast_node_interactivity='last_expr_or_assign' import numpy as np from scipy.stats import poisson from math import ceil,exp,floor from thinkbayes2 import Suite import thinkplot import pandas as pd from itertools import product ``` First we will just define some values. Let's define $T$ to be the runtime of the movie in seconds and $N$ to be the number of 4 MB pieces in the movie. From these, we can define $t_p$, the runtime of a single 4 MB piece as $\frac{T}{N}$. ``` T = 149*60 #movie runtime in seconds N = ceil(12.91*1000/4) #number of 4MB pieces in the whole movie t_p = T/N #runtime of a single 4MB piece print(f"The runtime of a single piece is {t_p:.2f} seconds") ``` Let's now consider where we are going with this calculation. When watching the movie, we need to have the next piece every 2.77 seconds. If we assume that each piece is equally likely to be downloaded, we can define a function $P_p(t)$ which tells us the probability of having a specific piece after $t$ seconds, and that will be the probability of having the next piece. We will find the actual form of $P_p(t)$ later. We want to find $P(t)$, the probability of making it $t$ seconds into the movie without missing a piece. Let's define $n(t)=\lceil\frac{t}{t_p}\rceil$ to be the number of pieces needed to get $t$ seconds into the movie. We need to have each of those $n$ pieces at the time that they are played, and we have a function to tell us the probability that we will have them at that time. We can then say that $$P(t)=\prod_{i=0}^{n(t)} P_p(i~t_p).$$ As for the actual form of $P_p(t)$, we will first find the distribution of the number of pieces downloaded at time $t$. Let's define the probability distribution $P_n(n,t)$, the probability of having $n$ pieces downloaded at time $t$. If we model piece arrival as a Poisson process, we can define $P_n(n,t)$ as $$P_n(n,t)=\text{poisson}(n;\lambda t)$$ where $\lambda$ is the unknown mean piece arrival rate in pieces per second. We will find a distribution for $\lambda$ using real data. If we further assume that each piece is equally likely to be downloaded at any time, we can define $P_p(t)$ by the law of total probability as $$P_p(t)=\sum_{n=n_0}^{N} \frac{n}{N}P_n(n-n_0,t)$$ where $n_0$ is the number of pieces downloaded when we start watching the movie, which we can just approximate as $\left\lfloor\frac{xN}{100}\right\rfloor$, were $x$ is still the percent downloaded at the start. 
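As a quick illustration of these quantities (an added sanity check, not part of the original derivation): with $t_p \approx 2.77$ s, reaching one minute into the movie takes $n(60) = \lceil 60 / t_p \rceil = 22$ pieces, and starting at $x = 95\%$ gives roughly $n_0 = \lfloor 0.95\,N \rfloor = 3066$ of the $N = 3228$ pieces.

```
# Added sanity check: piece counts for reaching t = 60 s when starting at x = 95%.
n_t = ceil(60 / t_p)         # pieces needed for the first minute -> 22
n_0 = floor(95 * N / 100)    # pieces already downloaded at 95%   -> 3066
print(n_t, n_0, N)
```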
Of course, whatever probabilities we get out of that will be dependent on $\lambda$, so we will have to sum them over our probability distribution for $\lambda$, once we have that. We will use a grid algorithm to find that $\lambda$ distribution, by starting with a uniform prior for a number of sample $\lambda$ values and updating it with measured interarrival times, remembering that the likelihood of an interarrival time $t$ is $\lambda e^{\lambda t}$ for a poisson process. ``` #wireshark dump data = pd.read_csv('torrent pieces.csv') #this finds the piece packets data = data[data.Info=="Piece[Malformed Packet]"] #extract the time each piece arrived at times = np.array(data.Time) #dump the initial times, they don't represent the long term behavior times = times[45:] interTimes = np.diff(times) class Lambda(Suite): def Likelihood(self, inter, lam): #poisson process interarrival likelihood return lam*exp(-lam*inter) #start with a uniform distribution for lambda lamPrior = np.linspace(0.5,1.8,25) lam = Lambda(lamPrior) thinkplot.Pdf(lam,label='prior') lam.UpdateSet(interTimes) thinkplot.Pdf(lam,label='posterior') thinkplot.decorate(title="PMF for $\lambda$",xlabel="$\lambda$ (pieces/s)",ylabel="PMF") ``` And we can implement all the functions we defined above: ``` def P_n(n,t,lam): """probability of having exactly n pieces at time t for rate lambda""" return poisson.pmf(n,lam*t) def P_p(t,n_0,lam): """probability of having a specific piece at time t for rate lambda""" #all the numbers of pieces there could be ns = np.array(range(n_0,N+1)) #the probabilities of having them ps = P_n(ns-n_0,t,lam) #the total probability #(since we are cutting off the poisson distribution at N #this is not always 1) P = np.sum(ps) if(P==0): #if lam*t is so large that we have cut off the whole poisson distribution, we can #just assume that we will have downloaded the whole movie return 1 return np.sum(ns*ps)/(N*P) def P(t,n_0,lam): """probability of getting to time t without missing a piece""" #total pieces we will need nt = ceil(t/t_p) #times we need each piece at ts = np.array(range(nt))*t_p #probabilitis of having each piece in time ps = np.array([P_p(t,n_0,lam) for t in ts]) #total probability return np.product(ps) ``` With those done, we can make our final $P(t,x)$ function, which will give us the probability of getting to time $t$ if we start at $x$ percent downloaded with our derived distribution for $\lambda$. ``` def PWatch(t,x): """Probability of getting to time t with initial download percentage x""" #intial piece number approximation n0 = floor(x*N/100) Ptot = 0 #law of total probability for l,p in lam.Items(): Ptot += p*P(t,n0,l) return Ptot ``` Unfortunately that function is prohibitively slow. We can speed it up quite a lot by improving our $P_p$ function to be less accurate but much faster. We will approximate it by $$P_p(t)=\frac{\min(\lambda t+n_0,N)}{N}$$ which is just assuming that we get one piece every $\lambda$ seconds. This ignores the uncertanty of the poisson distribution, but is much faster to calculate since it does not involve a sum. ``` def P_p_fast(t,n_0,lam): return min(lam*t+n_0,N)/N testLam = lam.Mean() ts = np.linspace(0,4000) ps = np.array([P_p(t,0,testLam) for t in ts]) psFast = np.array([P_p_fast(t,0,testLam) for t in ts]) thinkplot.plot(ts,ps,label='Correct') thinkplot.plot(ts,psFast,label='Fast') thinkplot.decorate(title='Probability of having a specific piece over time', xlabel='time (s)', ylabel='probability') ``` From the graph we can see that this is an ok approximation. 
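To make the speed gap concrete, the two implementations can be timed directly on the same inputs; the cell below is an added, rough comparison (absolute timings will vary by machine) using `ts` and `testLam` from the previous cell.

```
# Added rough timing of the exact vs. approximate piece-probability functions.
import time

start = time.time()
_ = [P_p(t, 0, testLam) for t in ts]
exact_s = time.time() - start

start = time.time()
_ = [P_p_fast(t, 0, testLam) for t in ts]
fast_s = time.time() - start

print(f"exact: {exact_s:.3f} s, approximate: {fast_s:.5f} s")
```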
With that done, we can start making graphs and answering the original question.

```
P_p = P_p_fast #use the fast function from now on

ts = np.linspace(0,500)
xs = [50,90,95,99]
for x in xs:
    ps = [PWatch(t,x) for t in ts]
    thinkplot.plot(ts,ps,label=f'start at {x}%')
thinkplot.decorate(title='Probability of getting to different times in the movie',
                   xlabel='Time (s)',
                   ylabel='Probability')
```

That graph is zoomed in near the start of the movie, but here's what it looks like over the whole runtime:

```
ts = np.linspace(0,T)
xs = [50,90,95,99]
for x in xs:
    ps = [PWatch(t,x) for t in ts]
    thinkplot.plot(ts,ps,label=f'start at {x}%')
thinkplot.decorate(title='Probability of getting to different times in the movie',
                   xlabel='Time (s)',
                   ylabel='Probability')
```

So we can see there is a definite falling-off period, and after that we will probably finish the movie. With that in mind, we can ask what the probability of finishing the movie will be for different starting percentages.

```
xs = np.linspace(0,100)
ps = [PWatch(T,x) for x in xs]
thinkplot.plot(xs,ps)
thinkplot.decorate(title='Probability of finishing movie',
                   xlabel='Starting percent downloaded',
                   ylabel='Probability of finishing movie')
```

Here's the nonzero portion of that graph:

```
xs = np.linspace(90,100)
ps = [PWatch(T,x) for x in xs]
thinkplot.plot(xs,ps)
thinkplot.decorate(title='Probability of finishing movie',
                   xlabel='Starting percent downloaded',
                   ylabel='Probability of finishing movie')
```

So we can see that we need to wait until about 90% has downloaded before we have any real chance of finishing, and that the probability then picks up rather quickly between 95% and 100%.
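As a small follow-up, the same `PWatch` function can be scanned to estimate the smallest starting percentage that gives a chosen chance of finishing; the 90% target below is an arbitrary illustration.

```
# Added illustration: smallest starting percentage with at least a 90% chance
# of finishing, found by scanning PWatch over a grid of x values.
target = 0.9
for x in np.linspace(90, 100, 201):
    if PWatch(T, x) >= target:
        print(f"Starting at about {x:.2f}% gives a {target:.0%} chance of finishing")
        break
```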
# Notebook para o PAN - Atribuição Autoral - 2018 ``` %matplotlib inline #python basic libs from __future__ import print_function from tempfile import mkdtemp from shutil import rmtree import os; from os.path import join as pathjoin; import re; import glob; import json; import codecs; from collections import defaultdict; import pprint; from pprint import pprint from time import time import logging #data analysis libs import numpy as np; import pandas as pd; import matplotlib.pyplot as plt; import random; #machine learning libs #feature extraction from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer #preprocessing and transformation from sklearn.preprocessing import normalize, MaxAbsScaler, MinMaxScaler; from sklearn.preprocessing import LabelBinarizer; from sklearn.decomposition import PCA; from sklearn.metrics.pairwise import cosine_similarity; from sklearn.base import BaseEstimator, ClassifierMixin #classifiers from sklearn.svm import LinearSVC, SVC from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier from sklearn.linear_model import LogisticRegression from sklearn.neural_network import MLPClassifier from sklearn.feature_selection import RFE,SelectFpr,SelectPercentile, chi2; # from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier from sklearn.ensemble import VotingClassifier from sklearn.model_selection import GridSearchCV from sklearn.pipeline import Pipeline #model valuation from sklearn.model_selection import train_test_split; from sklearn.metrics import roc_auc_score, f1_score, precision_score, recall_score, accuracy_score; import seaborn as sns; sns.set(color_codes=True); from pandas.plotting import scatter_matrix import platform; print(platform.platform()) print("NumPy", np.__version__) import scipy; print("SciPy", scipy.__version__) import sklearn; print("Scikit-Learn", sklearn.__version__) print("seaborn", sns.__version__) ``` ### paths configuration ``` baseDir = '/Users/joseeleandrocustodio/Dropbox/mestrado/02 - Pesquisa/code'; inputDir= pathjoin(baseDir,'pan18aa'); outputDir= pathjoin(baseDir,'out',"oficial"); if not os.path.exists(outputDir): os.mkdir(outputDir); ``` ## loading the dataset ``` def readCollectionsOfProblems(path): # Reading information about the collection infocollection = path+os.sep+'collection-info.json' with open(infocollection, 'r') as f: problems = [ { 'problem': attrib['problem-name'], 'language': attrib['language'], 'encoding': attrib['encoding'], } for attrib in json.load(f) ] return problems; problems = readCollectionsOfProblems(inputDir); problems[0] def readProblem(path, problem): # Reading information about the problem infoproblem = path+os.sep+problem+os.sep+'problem-info.json' candidates = [] with open(infoproblem, 'r') as f: fj = json.load(f) unk_folder = fj['unknown-folder'] for attrib in fj['candidate-authors']: candidates.append(attrib['author-name']) return unk_folder, candidates; def read_files(path,label): # Reads all text files located in the 'path' and assigns them to 'label' class files = glob.glob(pathjoin(path,label,'*.txt')) texts=[] for i,v in enumerate(files): f=codecs.open(v,'r',encoding='utf-8') texts.append((f.read(),label, os.path.basename(v))) f.close() return texts for index,problem in enumerate(problems): unk_folder, candidates_folder = readProblem(inputDir, problem['problem']); problem['candidates_folder_count'] = len(candidates_folder); problem['candidates'] = []; for candidate in candidates_folder: problem['candidates'].extend(read_files(pathjoin(inputDir, 
problem['problem']),candidate)); problem['unknown'] = read_files(pathjoin(inputDir, problem['problem']),unk_folder); pd.DataFrame(problems) #******************************************************************************************************* import warnings from sklearn.metrics import f1_score, precision_score, recall_score, accuracy_score from sklearn.preprocessing import LabelEncoder def eval_measures(gt, pred): """Compute macro-averaged F1-scores, macro-averaged precision, macro-averaged recall, and micro-averaged accuracy according the ad hoc rules discussed at the top of this file. Parameters ---------- gt : dict Ground truth, where keys indicate text file names (e.g. `unknown00002.txt`), and values represent author labels (e.g. `candidate00003`) pred : dict Predicted attribution, where keys indicate text file names (e.g. `unknown00002.txt`), and values represent author labels (e.g. `candidate00003`) Returns ------- f1 : float Macro-averaged F1-score precision : float Macro-averaged precision recall : float Macro-averaged recall accuracy : float Micro-averaged F1-score """ actual_authors = list(gt.values()) encoder = LabelEncoder().fit(['<UNK>'] + actual_authors) text_ids, gold_authors, silver_authors = [], [], [] for text_id in sorted(gt): text_ids.append(text_id) gold_authors.append(gt[text_id]) try: silver_authors.append(pred[text_id]) except KeyError: # missing attributions get <UNK>: silver_authors.append('<UNK>') assert len(text_ids) == len(gold_authors) assert len(text_ids) == len(silver_authors) # replace non-existent silver authors with '<UNK>': silver_authors = [a if a in encoder.classes_ else '<UNK>' for a in silver_authors] gold_author_ints = encoder.transform(gold_authors) silver_author_ints = encoder.transform(silver_authors) # get F1 for individual classes (and suppress warnings): with warnings.catch_warnings(): warnings.simplefilter('ignore') f1 = f1_score(gold_author_ints, silver_author_ints, labels=list(set(gold_author_ints)), average='macro') precision = precision_score(gold_author_ints, silver_author_ints, labels=list(set(gold_author_ints)), average='macro') recall = recall_score(gold_author_ints, silver_author_ints, labels=list(set(gold_author_ints)), average='macro') accuracy = accuracy_score(gold_author_ints, silver_author_ints) return f1,precision,recall,accuracy def evaluate(ground_truth_file,predictions_file): # Calculates evaluation measures for a single attribution problem gt = {} with open(ground_truth_file, 'r') as f: for attrib in json.load(f)['ground_truth']: gt[attrib['unknown-text']] = attrib['true-author'] pred = {} with open(predictions_file, 'r') as f: for attrib in json.load(f): if attrib['unknown-text'] not in pred: pred[attrib['unknown-text']] = attrib['predicted-author'] f1,precision,recall,accuracy = eval_measures(gt,pred) return f1, precision, recall, accuracy from sklearn.base import BaseEstimator from scipy.sparse import issparse class DenseTransformer(BaseEstimator): """Convert a sparse array into a dense array.""" def __init__(self, return_copy=True): self.return_copy = return_copy self.is_fitted = False def transform(self, X, y=None): """ Return a dense version of the input array. Parameters ---------- X : {array-like, sparse matrix}, shape = [n_samples, n_features] Training vectors, where n_samples is the number of samples and n_features is the number of features. y : array-like, shape = [n_samples] (default: None) Returns --------- X_dense : dense version of the input X array. 
""" if issparse(X): return X.toarray() elif self.return_copy: return X.copy() else: return X def fit(self, X, y=None): """ Mock method. Does nothing. Parameters ---------- X : {array-like, sparse matrix}, shape = [n_samples, n_features] Training vectors, where n_samples is the number of samples and n_features is the number of features. y : array-like, shape = [n_samples] (default: None) Returns --------- self """ self.is_fitted = True return self def fit_transform(self, X, y=None): """ Return a dense version of the input array. Parameters ---------- X : {array-like, sparse matrix}, shape = [n_samples, n_features] Training vectors, where n_samples is the number of samples and n_features is the number of features. y : array-like, shape = [n_samples] (default: None) Returns --------- X_dense : dense version of the input X array. """ return self.transform(X=X, y=y) ``` ### examinando o parametro min_df isoladamente ``` def runML(problem): print ("\nProblem: %s, language: %s, " %(problem['problem'],problem['language'])) train_docs, train_labels, _ = zip(*problem['candidates']) problem['training_docs_size'] = len(train_docs); test_docs, _, test_filename = zip(*problem['unknown']) pipeline = Pipeline([ ('vect', TfidfVectorizer(analyzer='char', min_df=0.05, max_df=1.0, norm='l1', ngram_range=(3,5), sublinear_tf=True, smooth_idf=True, lowercase =False)), ('dense', DenseTransformer()), ('scaler', MaxAbsScaler()), ('transf', PCA(0.999)), ('clf', LogisticRegression(random_state=0,multi_class='multinomial', solver='newton-cg')), ]) # uncommenting more parameters will give better exploring power but will # increase processing time in a combinatorial way parameters = { 'vect__min_df':(2,0.01,0.05,0.1) } grid_search = GridSearchCV(pipeline, parameters, cv=5, n_jobs=-1, verbose=False, scoring='f1_macro') print("Performing grid search...") t0 = time() grid_search.fit(train_docs, train_labels) print("done in %0.3fs" % (time() - t0)) print("Best score: %0.3f" % grid_search.best_score_) print("Best parameters set:") best_parameters = grid_search.best_estimator_.get_params() for param_name in sorted(parameters.keys()): print("\t%s: %r" % (param_name, best_parameters[param_name])) train_pred=grid_search.predict(train_docs); test_pred=grid_search.predict(test_docs); # Writing output file out_data=[] for i,v in enumerate(test_pred): out_data.append({'unknown-text': test_filename[i],'predicted-author': v}) answerFile = pathjoin(outputDir,'answers-'+problem['problem']+'.json'); with open(answerFile, 'w') as f: json.dump(out_data, f, indent=4) #evaluation train f1,precision,recall,accuracy=evaluate( pathjoin(inputDir, problem['problem'], 'ground-truth.json'), answerFile) return { 'problem-name' : problem['problem'], "language" : problem['language'], 'AuthorCount' : len(set(train_labels)), "train_doc_size": len(train_docs), "train_caract_per_doc": sum([len(l) for l in train_docs])/len(train_docs), "test_doc_size" : len(test_docs), "test_caract_per_doc": sum([len(l) for l in test_docs])/len(test_docs), 'macro-f1' : round(f1,3), 'macro-precision': round(precision,3), 'macro-recall' : round(recall,3), 'micro-accuracy' : round(accuracy,3), }, grid_search.cv_results_, best_parameters; result = []; cv_result = []; best_parameters = []; for problem in problems: r, c, b = runML(problem); result.append(r); cv_result.append(c); b['problem'] = problem['problem']; best_parameters.append(b); pd.DataFrame(best_parameters)[['problem','vect__min_df']] ``` ### analisando os demais parametros ``` def runML(problem): print ("\nProblem: 
%s, language: %s, " %(problem['problem'],problem['language'])) train_docs, train_labels, _ = zip(*problem['candidates']) problem['training_docs_size'] = len(train_docs); test_docs, _, test_filename = zip(*problem['unknown']) pipeline = Pipeline([ ('vect', TfidfVectorizer(analyzer='char', min_df=0.01, max_df=1.0, norm='l1', lowercase =False, sublinear_tf=True)), ('dense', DenseTransformer()), ('scaler', MaxAbsScaler()), ('transf', PCA()), ('clf', LogisticRegression(random_state=0,multi_class='multinomial', solver='newton-cg')), ]) # uncommenting more parameters will give better exploring power but will # increase processing time in a combinatorial way parameters = { 'vect__ngram_range':((2,3),(2,4),(2,5),(3,5)), 'vect__sublinear_tf':(True, False), 'vect__norm':('l1','l2'), 'transf__n_components': (0.1,0.25,0.5,0.75,0.9,0.99), } grid_search = GridSearchCV(pipeline, parameters, cv=3, n_jobs=-1, verbose=False, scoring='f1_macro') print("Performing grid search...") t0 = time() grid_search.fit(train_docs, train_labels) print("done in %0.3fs" % (time() - t0)) print("Best score: %0.3f" % grid_search.best_score_) print("Best parameters set:") best_parameters = grid_search.best_estimator_.get_params() for param_name in sorted(parameters.keys()): print("\t%s: %r" % (param_name, best_parameters[param_name])) train_pred=grid_search.predict(train_docs); test_pred=grid_search.predict(test_docs); # Writing output file out_data=[] for i,v in enumerate(test_pred): out_data.append({'unknown-text': test_filename[i],'predicted-author': v}) answerFile = pathjoin(outputDir,'answers-'+problem['problem']+'.json'); with open(answerFile, 'w') as f: json.dump(out_data, f, indent=4) #evaluation train f1,precision,recall,accuracy=evaluate( pathjoin(inputDir, problem['problem'], 'ground-truth.json'), answerFile) return { 'problem-name' : problem['problem'], "language" : problem['language'], 'AuthorCount' : len(set(train_labels)), "train_doc_size": len(train_docs), "train_caract_per_doc": sum([len(l) for l in train_docs])/len(train_docs), "test_doc_size" : len(test_docs), "test_caract_per_doc": sum([len(l) for l in test_docs])/len(test_docs), 'macro-f1' : round(f1,3), 'macro-precision': round(precision,3), 'macro-recall' : round(recall,3), 'micro-accuracy' : round(accuracy,3), }, grid_search.cv_results_,best_parameters; result = []; cv_result = []; best_parameters = []; for problem in problems: r, c, b = runML(problem); result.append(r); cv_result.append(c); b['problem'] = problem['problem']; best_parameters.append(b); df=pd.DataFrame(result)[['problem-name', "language", 'AuthorCount', "train_doc_size","train_caract_per_doc", "test_doc_size", "test_caract_per_doc", 'macro-f1','macro-precision','macro-recall' ,'micro-accuracy']] df print(df[["macro-f1"]].reset_index().to_latex(index=False).replace(" "," ")) languages={ 'en':'inglesa', 'sp':'espanhola', 'it':'italiana', 'pl':'polonesa', 'fr':'francesa' } cv_result2 = []; dfCV = pd.DataFrame(); for i, c in enumerate(cv_result): temp = pd.DataFrame(c); temp['problem'] = i+1; temp['language'] = languages[problems[i]['language']] dfCV = dfCV.append(temp); for p in ['param_transf__n_components', 'mean_test_score','std_test_score','mean_train_score', 'split0_test_score','split0_train_score', 'split1_test_score','split1_train_score', 'split2_test_score','split2_train_score']: dfCV[p]=dfCV[p].astype(np.float32); dfCV =dfCV[[ 'problem', 'language', 'rank_test_score', 'param_transf__n_components', 'param_vect__ngram_range', 'param_vect__sublinear_tf', 'param_vect__norm', 
'mean_test_score', 'std_test_score', 'mean_train_score', 'split0_test_score','split0_train_score', 'split1_test_score','split1_train_score', 'split2_test_score','split2_train_score', 'mean_score_time', 'mean_fit_time', 'std_fit_time', 'std_score_time', 'std_train_score', ]]; dfCV.rename(columns={ 'param_transf__n_components':'PCA_componentes', 'param_vect__ngram_range':'ngram_range', 'param_vect__sublinear_tf':'sublinear_tf', 'param_vect__smooth_idf':'smooth_idf', 'param_vect__norm':'norm' },inplace=True); #print('\',\n\''.join(dfCV.columns)) dfCV.to_csv('PANAA2018_CHAR.csv', index=False) dfCV = pd.read_csv('PANAA2018_CHAR.csv', na_values='') (dfCV[dfCV.rank_test_score == 1])[ ['problem', 'language', 'rank_test_score', 'mean_test_score', 'std_test_score', 'ngram_range', 'sublinear_tf', 'norm', 'PCA_componentes'] ].sort_values(by=[ 'problem', 'mean_test_score', 'ngram_range', 'sublinear_tf', 'PCA_componentes' ], ascending=[True, False,False,False,False]) dfCV.pivot_table( index=['problem','language','PCA_componentes'], columns=['norm','sublinear_tf', 'ngram_range'], values='mean_test_score' ) pd.options.display.precision = 3 print(u"\\begin{table}[h]\n\\centering\n\\caption{Medida F1 para os parâmetros }") print(re.sub(r'[ ]{2,}',' ',dfCV[dfCV.PCA_componentes >= 0.99].pivot_table( index=['problem','language','sublinear_tf','norm'], columns=['ngram_range'], values='mean_test_score' ).to_latex())) print ("\label{tab:modelocaracter}") print(r"\end{table}") ```
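Stripped of the grid search, the core model evaluated above reduces to a character n-gram TF-IDF followed by multinomial logistic regression. The sketch below is a simplified illustration for a single problem: it fixes one reasonable parameter setting (3–5 character n-grams, `min_df=0.01`) instead of the per-problem optima found by `GridSearchCV`, and writes the answers file in the same format as `runML`.

```
# Simplified sketch: fit the core pipeline on one problem and write its answers file.
problem = problems[0]
train_docs, train_labels, _ = zip(*problem['candidates'])
test_docs, _, test_filename = zip(*problem['unknown'])

clf = Pipeline([
    ('vect', TfidfVectorizer(analyzer='char', ngram_range=(3, 5), min_df=0.01,
                             sublinear_tf=True, norm='l2', lowercase=False)),
    ('clf', LogisticRegression(multi_class='multinomial', solver='newton-cg', random_state=0)),
])
clf.fit(train_docs, train_labels)
pred = clf.predict(test_docs)

out_data = [{'unknown-text': f, 'predicted-author': p} for f, p in zip(test_filename, pred)]
with open(pathjoin(outputDir, 'answers-' + problem['problem'] + '.json'), 'w') as f:
    json.dump(out_data, f, indent=4)
```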
##### Copyright 2018 The TensorFlow Authors. ``` #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. #@title MIT License # # Copyright (c) 2017 François Chollet # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the "Software"), # to deal in the Software without restriction, including without limitation # the rights to use, copy, modify, merge, publish, distribute, sublicense, # and/or sell copies of the Software, and to permit persons to whom the # Software is furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL # THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING # FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER # DEALINGS IN THE SOFTWARE. ``` # Save and restore models <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/tutorials/keras/save_and_restore_models"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/keras/save_and_restore_models.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/keras/save_and_restore_models.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> </table> Model progress can be saved during—and after—training. This means a model can resume where it left off and avoid long training times. Saving also means you can share your model and others can recreate your work. When publishing research models and techniques, most machine learning practitioners share: * code to create the model, and * the trained weights, or parameters, for the model Sharing this data helps others understand how the model works and try it themselves with new data. Caution: Be careful with untrusted code—TensorFlow models are code. See [Using TensorFlow Securely](https://github.com/tensorflow/tensorflow/blob/master/SECURITY.md) for details. ### Options There are different ways to save TensorFlow models—depending on the API you're using. This guide uses [tf.keras](https://www.tensorflow.org/guide/keras), a high-level API to build and train models in TensorFlow. 
For other approaches, see the TensorFlow [Save and Restore](https://www.tensorflow.org/guide/saved_model) guide or [Saving in eager](https://www.tensorflow.org/guide/eager#object_based_saving). ## Setup ### Installs and imports Install and import TensorFlow and dependencies: ``` !pip install h5py pyyaml ``` ### Get an example dataset We'll use the [MNIST dataset](http://yann.lecun.com/exdb/mnist/) to train our model to demonstrate saving weights. To speed up these demonstration runs, only use the first 1000 examples: ``` from __future__ import absolute_import, division, print_function import os !pip install tf-nightly-2.0-preview import tensorflow as tf keras = tf.keras tf.__version__ (train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.mnist.load_data() train_labels = train_labels[:1000] test_labels = test_labels[:1000] train_images = train_images[:1000].reshape(-1, 28 * 28) / 255.0 test_images = test_images[:1000].reshape(-1, 28 * 28) / 255.0 ``` ### Define a model Let's build a simple model we'll use to demonstrate saving and loading weights. ``` # Returns a short sequential model def create_model(): model = tf.keras.models.Sequential([ keras.layers.Dense(512, activation=tf.keras.activations.relu, input_shape=(784,)), keras.layers.Dropout(0.2), keras.layers.Dense(10, activation=tf.keras.activations.softmax) ]) model.compile(optimizer='adam', loss=tf.keras.losses.sparse_categorical_crossentropy, metrics=['accuracy']) return model # Create a basic model instance model = create_model() model.summary() ``` ## Save checkpoints during training The primary use case is to automatically save checkpoints *during* and at *the end* of training. This way you can use a trained model without having to retrain it, or pick-up training where you left of—in case the training process was interrupted. `tf.keras.callbacks.ModelCheckpoint` is a callback that performs this task. The callback takes a couple of arguments to configure checkpointing. ### Checkpoint callback usage Train the model and pass it the `ModelCheckpoint` callback: ``` checkpoint_path = "training_1/cp.ckpt" checkpoint_dir = os.path.dirname(checkpoint_path) # Create checkpoint callback cp_callback = tf.keras.callbacks.ModelCheckpoint(checkpoint_path, save_weights_only=True, verbose=1) model = create_model() model.fit(train_images, train_labels, epochs = 10, validation_data = (test_images,test_labels), callbacks = [cp_callback]) # pass callback to training ``` This creates a single collection of TensorFlow checkpoint files that are updated at the end of each epoch: ``` !ls {checkpoint_dir} ``` Create a new, untrained model. When restoring a model from only weights, you must have a model with the same architecture as the original model. Since it's the same model architecture, we can share weights despite that it's a different *instance* of the model. Now rebuild a fresh, untrained model, and evaluate it on the test set. An untrained model will perform at chance levels (~10% accuracy): ``` model = create_model() loss, acc = model.evaluate(test_images, test_labels) print("Untrained model, accuracy: {:5.2f}%".format(100*acc)) ``` Then load the weights from the checkpoint, and re-evaluate: ``` model.load_weights(checkpoint_path) loss,acc = model.evaluate(test_images, test_labels) print("Restored model, accuracy: {:5.2f}%".format(100*acc)) ``` ### Checkpoint callback options The callback provides several options to give the resulting checkpoints unique names, and adjust the checkpointing frequency. 
Train a new model, and save uniquely named checkpoints once every 5-epochs: ``` # include the epoch in the file name. (uses `str.format`) checkpoint_path = "training_2/cp-{epoch:04d}.ckpt" checkpoint_dir = os.path.dirname(checkpoint_path) cp_callback = tf.keras.callbacks.ModelCheckpoint( checkpoint_path, verbose=1, save_weights_only=True, # Save weights, every 5-epochs. period=5) model = create_model() model.fit(train_images, train_labels, epochs = 50, callbacks = [cp_callback], validation_data = (test_images,test_labels), verbose=0) ``` Now, look at the resulting checkpoints and choose the latest one: ``` ! ls {checkpoint_dir} latest = tf.train.latest_checkpoint(checkpoint_dir) latest ``` Note: the default tensorflow format only saves the 5 most recent checkpoints. To test, reset the model and load the latest checkpoint: ``` model = create_model() model.load_weights(latest) loss, acc = model.evaluate(test_images, test_labels) print("Restored model, accuracy: {:5.2f}%".format(100*acc)) ``` ## What are these files? The above code stores the weights to a collection of [checkpoint](https://www.tensorflow.org/guide/saved_model#save_and_restore_variables)-formatted files that contain only the trained weights in a binary format. Checkpoints contain: * One or more shards that contain your model's weights. * An index file that indicates which weights are stored in a which shard. If you are only training a model on a single machine, you'll have one shard with the suffix: `.data-00000-of-00001` ## Manually save weights Above you saw how to load the weights into a model. Manually saving the weights is just as simple, use the `Model.save_weights` method. ``` # Save the weights model.save_weights('./checkpoints/my_checkpoint') # Restore the weights model = create_model() model.load_weights('./checkpoints/my_checkpoint') loss,acc = model.evaluate(test_images, test_labels) print("Restored model, accuracy: {:5.2f}%".format(100*acc)) ``` ## Save the entire model The entire model can be saved to a file that contains the weight values, the model's configuration, and even the optimizer's configuration (depends on set up). This allows you to checkpoint a model and resume training later—from the exact same state—without access to the original code. Saving a fully-functional model is very useful—you can load them in TensorFlow.js ([HDF5](https://js.tensorflow.org/tutorials/import-keras.html), [Saved Model](https://js.tensorflow.org/tutorials/import-saved-model.html)) and then train and run them in web browsers, or convert them to run on mobile devices using TensorFlow Lite ([HDF5](https://www.tensorflow.org/lite/convert/python_api#exporting_a_tfkeras_file_), [Saved Model](https://www.tensorflow.org/lite/convert/python_api#exporting_a_savedmodel_)) ### As an HDF5 file Keras provides a basic save format using the [HDF5](https://en.wikipedia.org/wiki/Hierarchical_Data_Format) standard. For our purposes, the saved model can be treated as a single binary blob. ``` model = create_model() # You need to use a keras.optimizer to restore the optimizer state from an HDF5 file. model.compile(optimizer='adam', loss=tf.keras.losses.sparse_categorical_crossentropy, metrics=['accuracy']) model.fit(train_images, train_labels, epochs=5) # Save entire model to a HDF5 file model.save('my_model.h5') ``` Now recreate the model from that file: ``` # Recreate the exact same model, including weights and optimizer. 
new_model = keras.models.load_model('my_model.h5')
new_model.summary()
```

Check its accuracy:

```
loss, acc = new_model.evaluate(test_images, test_labels)
print("Restored model, accuracy: {:5.2f}%".format(100*acc))
```

This technique saves everything:

* The weight values
* The model's configuration (architecture)
* The optimizer configuration

Keras saves models by inspecting the architecture. Currently, it is not able to save TensorFlow optimizers (from `tf.train`). When using those you will need to re-compile the model after loading, and you will lose the state of the optimizer.

## What's Next

That was a quick guide to saving and loading with `tf.keras`.

* The [tf.keras guide](https://www.tensorflow.org/guide/keras) shows more about saving and loading models with `tf.keras`.
* See [Saving in eager](https://www.tensorflow.org/guide/eager#object_based_saving) for saving during eager execution (a minimal sketch of this object-based approach follows below).
* The [Save and Restore](https://www.tensorflow.org/guide/saved_model) guide has low-level details about TensorFlow saving.
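The object-based saving mentioned in the second bullet can also be driven directly with `tf.train.Checkpoint`. The cell below is only a rough sketch, not part of the original guide: it assumes the `create_model()` helper and the train/test data defined earlier, and a TF version recent enough to ship `tf.train.Checkpoint` (such as the 2.0 preview installed above); details may vary between versions.

```
# Sketch of object-based saving with tf.train.Checkpoint (assumptions noted above).
os.makedirs('./object_ckpts', exist_ok=True)

ckpt_model = create_model()
ckpt_model.fit(train_images, train_labels, epochs=1, verbose=0)

checkpoint = tf.train.Checkpoint(model=ckpt_model)
checkpoint.save('./object_ckpts/ckpt')   # writes numbered checkpoint files

# Rebuild the same architecture and restore the tracked variables into it.
restored_model = create_model()
tf.train.Checkpoint(model=restored_model).restore(
    tf.train.latest_checkpoint('./object_ckpts'))

loss, acc = restored_model.evaluate(test_images, test_labels)
print("Object-based restore, accuracy: {:5.2f}%".format(100*acc))
```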
<h1 style="text-align:center;text-decoration: underline">Stream Analytics Tutorial</h1> <h1>Overview</h1> <p>Welcome to the stream analytics tutorial for EpiData. In this tutorial we will perform near real-time stream analytics on sample weather data acquired from a simulated wireless sensor network.</p> <h2>Package and Module Imports</h2> <p>As a first step, we will import packages and modules required for this tutorial. Since <i>EpiData Context (ec)</i> is required to use the application, it is implicitly imported. Sample functions for near real-time analytics are avaialable in <i>EpiData Analytics</i> package. Other packages and modules, such as <i>datetime</i>, <i>pandas</i> and <i>matplotlib</i>, can also be imported at this time.</p> ``` #from epidata.context import ec from epidata.analytics import * %matplotlib inline from datetime import datetime, timedelta import pandas as pd import time import pylab as pl from IPython import display import json ``` <h2>Stream Analysis</h2> <h3>Function Definition</h3> <p>EpiData supports development and deployment of custom algorithms via Jupyter Notebook. Below, we define python functions for substituting extreme outliers and aggregating temperature measurements. These functions can be operated on near real-time and historic data. In this tutorial, we will apply the functions on near real-time data available from Kafka 'measurements' and 'measurements_cleansed' topics</p> ``` import pandas as pd import math, numbers def substitute_demo(df, meas_names, method="rolling", size=3): """ Substitute missing measurement values within a data frame, using the specified method. """ df["meas_value"].replace(250, np.nan, inplace=True) for meas_name in meas_names: if (method == "rolling"): if ((size % 2 == 0) and (size != 0)): size += 1 if df.loc[df["meas_name"]==meas_name].size > 0: indices = df.loc[df["meas_name"] == meas_name].index[df.loc[df["meas_name"] == meas_name]["meas_value"].apply( lambda x: not isinstance(x, basestring) and (x == None or np.isnan(x)))] substitutes = df.loc[df["meas_name"]==meas_name]["meas_value"].rolling( window=size, min_periods=1, center=True).mean() df["meas_value"].fillna(substitutes, inplace=True) df.loc[indices, "meas_flag"] = "substituted" df.loc[indices, "meas_method"] = "rolling average" else: raise ValueError("Unsupported substitution method: ", repr(method)) return df import pandas as pd import numpy as np import json def subgroup_statistics(row): row['start_time'] = np.min(row["ts"]) row["stop_time"] = np.max(row["ts"]) row["meas_summary_name"] = "statistics" row["meas_summary_value"] = json.dumps({'count': row["meas_value"].count(), 'mean': row["meas_value"].mean(), 'std': row["meas_value"].std(), 'min': row["meas_value"].min(), 'max': row["meas_value"].max()}) row["meas_summary_description"] = "descriptive statistics" return row def meas_statistics_demo(df, meas_names, method="standard"): """ Compute statistics on measurement values within a data frame, using the specified method. 
""" if (method == "standard"): df_grouped = df.loc[df["meas_name"].isin(meas_names)].groupby(["company", "site", "station", "sensor"], as_index=False) df_summary = df_grouped.apply(subgroup_statistics).loc[:, ["company", "site", "station", "sensor", "start_time", "stop_time", "event", "meas_name", "meas_summary_name", "meas_summary_value", "meas_summary_description"]].drop_duplicates() else: raise ValueError("Unsupported summary method: ", repr(method)) return df_summary ``` <h3>Transformations and Streams</h3> <p>The analytics algorithms are executed on near real-time data through transformations. A transformation specifies the function, its parameters and destination. The destination can be one of the database tables, namely <i>'measurements_cleansed'</i> or <i>'measurements_summary'</i>, or another Kafka topic.</p> <p>Once the transformations are defined, they are initiated via <i>ec.create_stream(transformations, data_source, batch_duration)</i> function call.</p> ``` #Stop current near real-time processing ec.stop_streaming() # Define tranformations and steam operations op1 = ec.create_transformation(substitute_demo, [["Temperature", "Wind_Speed"], "rolling", 3], "measurements_substituted") ec.create_stream([op1], "measurements") op2 = ec.create_transformation(identity, [], "measurements_cleansed") op3 = ec.create_transformation(meas_statistics, [["Temperature", "Wind_Speed"], "standard"], "measurements_summary") ec.create_stream([op2, op3],"measurements_substituted") # Start near real-time processing ec.start_streaming() ``` <h3>Data Ingestion</h3> <p>We can now start data ingestion from simulated wireless sensor network. To do so, you can download and run the <i>sensor_data_with_outliers.py</i> example shown in the image below.</p> <img src="./static/jupyter_tree.png"> <h3>Data Query and Visualization</h3> <p>We query the original and processed data from Kafka queue using Kafka Consumer. 
The data obtained from the quey is visualized using Bokeh charts.</p> ``` from bokeh.io import push_notebook, show, output_notebook from bokeh.layouts import row, column from bokeh.plotting import figure from bokeh.models import ColumnDataSource from kafka import KafkaConsumer import json from pandas.io.json import json_normalize output_notebook() plot1 = figure(plot_width=750, plot_height=200, x_axis_type='datetime', y_range=(30, 300)) plot2 = figure(plot_width=750, plot_height=200, x_axis_type='datetime', y_range=(30, 300)) df_kafka_init = pd.DataFrame(columns = ["ts", "meas_value"]) test_data_1 = ColumnDataSource(data=df_kafka_init.to_dict(orient='list')) test_data_2 = ColumnDataSource(data=df_kafka_init.to_dict(orient='list')) meas_name = "Temperature" plot1.circle("ts", "meas_value", source=test_data_1, legend=meas_name, line_color='orangered', line_width=1.5) line1 = plot1.line("ts", "meas_value", source=test_data_1, legend=meas_name, line_color='orangered', line_width=1.5) plot1.legend.location = "top_right" plot2.circle("ts", "meas_value", source=test_data_2, legend=meas_name, line_color='blue', line_width=1.5) line2 = plot2.line("ts", "meas_value", source=test_data_2, legend=meas_name, line_color='blue', line_width=1.5) plot2.legend.location = "top_right" consumer = KafkaConsumer() consumer.subscribe(['measurements', 'measurements_substituted']) delay = .1 handle = show(column(plot1, plot2), notebook_handle=True) for message in consumer: topic = message.topic measurements = json.loads(message.value) df_kafka = json_normalize(measurements) df_kafka["meas_value"] = np.nan if "meas_value" not in measurements else measurements["meas_value"] df_kafka = df_kafka.loc[df_kafka["meas_name"]==meas_name] df_kafka = df_kafka[["ts", "meas_value"]] df_kafka["ts"] = df_kafka["ts"].apply(lambda x: pd.to_datetime(x, unit='ms').tz_localize('UTC').tz_convert('US/Pacific')) if (not df_kafka.empty): if (topic == 'measurements'): test_data_1.stream(df_kafka.to_dict(orient='list')) if (topic == 'measurements_substituted'): test_data_2.stream(df_kafka.to_dict(orient='list')) push_notebook(handle=handle) time.sleep(delay) ``` <p>Another way to query and visualize processed data is using <i>ec.query_measurements_cleansed(..) and ec.query_measurements_summary(..)</i> functions. 
For our example, we specify parameters that match the sample data set, and query the aggregated values using the <i>ec.query_measurements_summary(..)</i> function call.</p>

```
# QUERY MEASUREMENTS_CLEANSED TABLE
primary_key = {"company": "EpiData", "site": "San_Jose", "station": "WSN-1", "sensor": ["Temperature_Probe", "RH_Probe", "Anemometer"]}
start_time = datetime.strptime('8/19/2017 00:00:00', '%m/%d/%Y %H:%M:%S')
stop_time = datetime.strptime('8/20/2017 00:00:00', '%m/%d/%Y %H:%M:%S')

df_cleansed = ec.query_measurements_cleansed(primary_key, start_time, stop_time)
print "Number of records:", df_cleansed.count()

df_cleansed_local = df_cleansed.toPandas()
df_cleansed_local[df_cleansed_local["meas_name"]=="Temperature"].tail(10).sort_values(by="ts", ascending=False)

# QUERY MEASUREMENTS_SUMMARY TABLE
primary_key = {"company": "EpiData", "site": "San_Jose", "station": "WSN-1", "sensor": ["Temperature_Probe"]}
start_time = datetime.strptime('8/19/2017 00:00:00', '%m/%d/%Y %H:%M:%S')
stop_time = datetime.strptime('8/20/2017 00:00:00', '%m/%d/%Y %H:%M:%S')

last_index = -1
summary_result = pd.DataFrame()

df_summary = ec.query_measurements_summary(primary_key, start_time, stop_time)
df_summary_local = df_summary.toPandas()
summary_keys = df_summary_local[["company", "site", "station", "sensor", "start_time", "stop_time", "meas_name", "meas_summary_name"]]
summary_result = df_summary_local["meas_summary_value"].apply(json.loads).apply(pd.Series)
summary_combined = pd.concat([summary_keys, summary_result], axis=1)
summary_combined.tail(5)
```

<h3>Stop Stream Analytics</h3>
<p>The transformations can be stopped at any time via the <i>ec.stop_streaming()</i> function call.</p>

```
# Stop current near real-time processing
ec.stop_streaming()
```

<h2>Next Steps</h2>
<p>Congratulations, you have successfully performed near real-time analytics on sample data acquired by a simulated wireless sensor network. The next step is to explore the various capabilities of EpiData by creating your own custom analytics application!</p>
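<p>As a starting point for such a custom application, one convenient pattern is to keep the same DataFrame-in/DataFrame-out signature used by <i>substitute_demo</i> above, so the function can later be wrapped in a transformation. The sketch below is illustrative only; the <i>meas_*</i> column names follow this tutorial and the threshold value is arbitrary.</p>

```
import numpy as np
import pandas as pd

def flag_outliers_demo(df, meas_names, threshold=3.0):
    """
    Flag measurement values that are more than `threshold` standard deviations
    away from the mean of their measurement name. Same DataFrame-in/DataFrame-out
    pattern as substitute_demo above.
    """
    for meas_name in meas_names:
        subset = df.loc[df["meas_name"] == meas_name, "meas_value"]
        if subset.empty:
            continue
        mean, std = subset.mean(), subset.std()
        if not np.isfinite(std) or std == 0:
            continue
        outliers = subset.index[np.abs(subset - mean) > threshold * std]
        df.loc[outliers, "meas_flag"] = "outlier"
        df.loc[outliers, "meas_method"] = "z-score"
    return df

# The function could then be registered like the examples above, e.g.:
# op = ec.create_transformation(flag_outliers_demo, [["Temperature"], 3.0], "measurements_cleansed")
```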
``` # %load train.py #!/usr/bin/env python # In[1]: from t_cnn import tcnn model = tcnn(num_classes = 2,pretrained = True, model_root = '/home/krf/model/BALL/') import senet import os import numpy as np import torch from torchvision.datasets import ImageFolder from utils import TransformImage import shutil import time import torch import torch.nn as nn import torch.nn.parallel import torch.backends.cudnn as cudnn import torch.optim as optim from torch.optim import lr_scheduler import torch.utils.data import torchvision.transforms as transforms import torchvision.datasets as datasets from tensorboardX import SummaryWriter DATA_DIR = "/home/krf/dataset/BALL/" traindir = DATA_DIR + "train" valdir = DATA_DIR +"val" os.environ["CUDA_VISIBLE_DEVICES"] = "2" BATCH_SIZE = 32 WORKERS = 4 START = 0 EPOCHS = 1200 PRINT_FREQ = 20 # In[31]: #model = senet.se_resnext50_32x4d(num_classes = 2) #通过随机变化来进行数据增强 train_tf = TransformImage( model, random_crop=False, random_hflip=True, random_vflip=True, random_rotate=True, preserve_aspect_ratio=True ) train_loader = torch.utils.data.DataLoader( # datasets.ImageFolder(traindir, transforms.Compose([ # # transforms.RandomSizedCrop(max(model.input_size)), # transforms.RandomHorizontalFlip(), # transforms.ToTensor(), # normalize, # ])), datasets.ImageFolder(traindir,train_tf), batch_size=BATCH_SIZE, shuffle=True, num_workers=WORKERS, pin_memory=True) val_tf = TransformImage( model, preserve_aspect_ratio=True) val_loader = torch.utils.data.DataLoader( datasets.ImageFolder(valdir,val_tf), batch_size=BATCH_SIZE, shuffle=False, num_workers=WORKERS, pin_memory=True) # In[29]: def train(train_loader, model, criterion, optimizer, epoch,scheduler): # switch to train mode model.train() # end = time.time() for i, (input, target) in enumerate(train_loader): # measure data loading time # data_time.update(time.time() - end) input = input.cuda() target = target.cuda(async=True) input_var = torch.autograd.Variable(input.float()) target_var = torch.autograd.Variable(target) #print(input_var.type()) # compute output output = model(input_var) loss = criterion(output, target_var) #print(output.data) # # compute gradient and do SGD step optimizer.zero_grad() loss.backward() optimizer.step() # # measure elapsed time # batch_time.update(time.time() - end) # end = time.time() meters = trainMeter.update(output,target,loss,input.size(0)) if i % PRINT_FREQ == 0: print('Epoch: [{0}][{1}/{2}]\t' 'Loss {loss:.5f}\t' 'Acc {Acc:.5f}\t' 'Precision {P:.5f}\t' 'Recall {R:.5f}\t' 'F1 {F1:.5f}'.format( epoch,i, len(train_loader), loss=meters[4], Acc=meters[3],P=meters[0],R=meters[1],F1=meters[2])) step = epoch*len(train_loader) + i writer.add_scalar('TRAIN/Precision', meters[0], step) writer.add_scalar('TRAIN/Recall', meters[1], step) writer.add_scalar('TRAIN/F1', meters[2], step) writer.add_scalar('TRAIN/Acc', meters[3], step) writer.add_scalar('TRAIN/loss',meters[4], step) scheduler.step(meters[4]) def validate(val_loader, model, criterion,epoch): # switch to evaluate mode model.eval() # end = time.time() meters = [] for i, (input, target) in enumerate(val_loader): target = target.cuda() input = input.cuda() input_var = torch.autograd.Variable(input, volatile=True) target_var = torch.autograd.Variable(target, volatile=True) # compute output output = model(input_var) loss = criterion(output, target_var) meters = valMeter.update(output,target,loss,input.size(0)) if i % PRINT_FREQ == 0: print('Test: [{0}/{1}]\t' 'Loss {loss:.5f}\t' 'Acc {Acc:.5f}\t' 'Precision {P:.5f}\t' 'Recall {R:.5f}\t' 'F1 
{F1:.5f}'.format( i, len(val_loader), loss=meters[4], Acc=meters[3],P=meters[0],R=meters[1],F1=meters[2])) step = epoch * len(val_loader) + i writer.add_scalar('VAL/Precision', meters[0], step) writer.add_scalar('VAL/Recall', meters[1], step) writer.add_scalar('VAL/F1', meters[2], step) writer.add_scalar('VAL/Acc', meters[3], step) writer.add_scalar('VAL/loss',meters[4], step) print(' * Acc {Acc:.5f} F1 {F1:.5f}' .format(Acc=meters[3],F1=meters[2])) writer.add_scalar('VAL/EPOCH_F1', meters[2], epoch) return meters[2] def save_checkpoint(state, is_best, filename='checkpoint.pth.tar'): torch.save(state, filename) if is_best: shutil.copyfile(filename, 'tcnn5_model_best.pth.tar') class ModelMeter(object): def __init__(self): self.reset() def reset(self): self.losses = AverageMeter() self.top1 = AverageMeter() self.TP = 1e-8 self.TN = 1e-8 self.FN = 1e-8 self.FP = 1e-8 self.P=1e-8 self.R=1e-8 self.F1=1e-8 self.Acc=1e-8 def update(self, output,target,loss, n=1): _, pred = output.data.topk(1, 1, True, True) pred = pred.t() # print(pred,target.data) # TP predict 和 label 同时为1 self.TP += ((pred == 1) & (target.data == 1)).cpu().numpy().sum() # TN predict 和 label 同时为0 self.TN += ((pred == 0) & (target.data == 0)).cpu().numpy().sum() # FN predict 0 label 1 self.FN += ((pred == 0) & (target.data == 1)).cpu().numpy().sum() # FP predict 1 label 0 self.FP += ((pred == 1) & (target.data == 0)).cpu().numpy().sum() # print(self.TP,self.TN,self.FN,self.FP) # zes=torch.autograd.Variable(torch.zeros(target.size()).type(torch.cuda.LongTensor))#全0变量 # ons=torch.autograd.Variable(torch.ones(target.size()).type(torch.cuda.LongTensor))#全1变量 # print(zes,ons) # train_correct01 = ((pred==zes)&(target.data.squeeze(1)==ons)).sum()#原标签为1,预测为 0 的总数 # train_correct10 = ((pred==ons)&(target.data.squeeze(1)==zes)).sum()#原标签为0,预测为1  的总数 # train_correct11 = ((pred_y==ons)&(target.data.squeeze(1)==ons)).sum() # train_correct00 = ((pred_y==zes)&(target.data.squeeze(1)==zes)).sum() # self.FN += train_correct01.data[0] # self.FP += train_correct10.data[0] # self.TP += train_correct11.data[0] # self.TN += train_correct00.data[0] self.P = self.TP / (self.TP + self.FP) self.R = self.TP / (self.TP + self.FN) self.F1 = 2 * self.R * self.P / (self.R + self.P) self.Acc = (self.TP + self.TN) / (self.TP + self.TN + self.FP + self.FN) self.losses.update(loss.data[0],n) return [self.P,self.R,self.F1,self.Acc,self.losses.avg] class AverageMeter(object): """Computes and stores the average and current value""" def __init__(self): self.reset() def reset(self): self.val = 1e-8 self.avg = 1e-8 self.sum = 1e-8 self.count = 0 def update(self, val, n=1): self.val = val self.sum += val * n self.count += n self.avg = self.sum / self.count def adjust_learning_rate(lr,optimizer, epoch): if epoch >= 300 and epoch % 100 == 0 : lr /= 10 for param_group in optimizer.param_groups: param_group['lr'] = lr print("adjuct learning rate to {}".format(lr)) return lr def accuracy(output, target, topk=(1,)): """Computes the precision@k for the specified values of k""" maxk = max(topk) batch_size = target.size(0) _, pred = output.topk(maxk, 1, True, True) pred = pred.t() correct = pred.eq(target.view(1, -1).expand_as(pred)) res = [] for k in topk: correct_k = correct[:k].view(-1).float().sum(0) res.append(correct_k.mul_(100.0 / batch_size)) return res # In[ ]: # 加载模型,解决命名和维度不匹配问题,解决多个gpu并行 def load_state_keywise(model, model_path): model_dict = model.state_dict() print("=> loading checkpoint '{}'".format(model_path)) checkpoint = 
torch.load(model_path,map_location='cpu') START = checkpoint['epoch'] best_F1 = checkpoint['best_prec1'] #model.load_state_dict(checkpoint['state_dict']) pretrained_dict = checkpoint['state_dict']#torch.load(model_path, map_location='cpu') key = list(pretrained_dict.keys())[0] # 1. filter out unnecessary keys # 1.1 multi-GPU ->CPU if (str(key).startswith('module.')): pretrained_dict = {k[7:]: v for k, v in pretrained_dict.items() if k[7:] in model_dict and v.size() == model_dict[k[7:]].size()} else: pretrained_dict = {k: v for k, v in pretrained_dict.items() if k in model_dict and v.size() == model_dict[k].size()} # 2. overwrite entries in the existing state dict model_dict.update(pretrained_dict) # 3. load the new state dict model.load_state_dict(model_dict) print("start at epoch {}, best_f1={}".format(START,best_F1)) return model,START,best_F1 criterion = nn.CrossEntropyLoss().cuda() lr = 1e-4 optimizer = torch.optim.SGD(model.parameters(), lr = lr,momentum = 0.9,weight_decay=1e-6) scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, patience=10, verbose=True) best_f1 = 0 model,START,best_f1 = load_state_keywise(model,'tcnn5_checkpoint-Copy1.pth.tar') # model = model.cuda() model = torch.nn.DataParallel(model).cuda() # TP = 0,TN = 0,FN = 0, FP = 0 writer = SummaryWriter() trainMeter = ModelMeter() valMeter = ModelMeter() for epoch in range(START,EPOCHS): # train for one epoch lr = optimizer.param_groups[0]['lr'] writer.add_scalar('LR', lr, epoch) train(train_loader, model, criterion, optimizer, epoch, scheduler) # evaluate on validation set F1 = validate(val_loader, model, criterion,epoch) # remember best prec@1 and save checkpoint is_best = F1 > best_f1 best_f1 = max(F1, best_f1) save_checkpoint({ 'epoch': epoch + 1, 'arch': "T-CNN", 'state_dict': model.state_dict(), 'best_prec1': best_f1, }, is_best,filename='tcnn5_checkpoint.pth.tar') # export scalar data to JSON for external processing writer.export_scalars_to_json("./test.json") writer.close() # import torch.utils.model_zoo as model_zoo # model_dict = model_zoo.load_url('https://download.pytorch.org/models/alexnet-owt-4df8aa71.pth', '/home/krf/model/BALL/') # print(model_dict.keys()) # model = tcnn(2) # pre_dict = model.state_dict() # print(pre_dict.keys()) # k1 = ['features.0.weight', 'features.0.bias', 'features.3.weight', 'features.3.bias', 'features.6.weight', 'features.6.bias', 'features.8.weight', 'features.8.bias', 'features.10.weight', 'features.10.bias', 'classifier.1.weight', 'classifier.1.bias', 'classifier.4.weight', 'classifier.4.bias', 'classifier.6.weight', 'classifier.6.bias'] # k2 = ['conv1.0.weight', 'conv1.0.bias', 'conv2.0.weight', 'conv2.0.bias', 'conv3.0.weight', 'conv3.0.bias', 'conv4.0.weight', 'conv4.0.bias', 'conv5.0.weight', 'conv5.0.bias', 'fc1.weight', 'fc1.bias', 'classifier.1.weight', 'classifier.1.bias', 'classifier.3.weight', 'classifier.3.bias'] # for i in range(len(k1)): # pretrained_dict[k2[i]] = model_dict[k1[i]] # # 2. overwrite entries in the existing state dict # model_dict.update(pretrained_dict) ```
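Once training finishes, the best checkpoint (`tcnn5_model_best.pth.tar`) can be reloaded for single-image inference. The cell below is only a sketch: it assumes the local `t_cnn` and `utils` modules behave as they are used above (in particular that a `TransformImage` instance can be called on a PIL image), and the image path is hypothetical.

```
# Sketch: single-image inference with the best checkpoint saved above.
import torch
from PIL import Image
from t_cnn import tcnn
from utils import TransformImage

# Rebuild the architecture exactly as at the top of this script.
infer_model = tcnn(num_classes = 2, pretrained = True, model_root = '/home/krf/model/BALL/')
infer_model, _, _ = load_state_keywise(infer_model, 'tcnn5_model_best.pth.tar')
infer_model = infer_model.cuda()
infer_model.eval()

# Preprocess one image the same way as the validation loader (path is hypothetical).
val_tf = TransformImage(infer_model, preserve_aspect_ratio=True)
img = Image.open('/home/krf/dataset/BALL/val/class_0/example.jpg').convert('RGB')
input_var = torch.autograd.Variable(val_tf(img).unsqueeze(0).cuda())

# Forward pass and predicted class index, following the topk pattern used above.
output = infer_model(input_var)
_, pred = output.data.topk(1, 1, True, True)
print('predicted class index:', pred[0][0])
```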
``` import geemap geemap.show_youtube('OwjSJnGWKJs') ``` ## Update the geemap package If you run into errors with this notebook, please uncomment the line below to update the [geemap](https://github.com/giswqs/geemap#installation) package to the latest version from GitHub. Restart the Kernel (Menu -> Kernel -> Restart) to take effect. ``` # geemap.update_package() ``` ## Create an interactive map ### Use the Drawing tool to draw a rectangle on the map ``` Map = geemap.Map() Map ``` ## Generate a Landsat timelapse animation ``` import os out_dir = os.path.join(os.path.expanduser("~"), 'Downloads') if not os.path.exists(out_dir): os.makedirs(out_dir) label = 'Urban Growth in Las Vegas' Map.add_landsat_ts_gif(label=label, start_year=1985, bands=['Red', 'Green', 'Blue'], font_color='white', frames_per_second=10, progress_bar_color='blue') ``` ## Create Landsat timeseries ``` import os import ee import geemap Map = geemap.Map() Map ``` You and define an roi or draw a rectangle on the map ``` roi = ee.Geometry.Polygon( [[[-115.471773, 35.892718], [-115.471773, 36.409454], [-114.271283, 36.409454], [-114.271283, 35.892718], [-115.471773, 35.892718]]], None, False) # roi = Map.draw_last_feature collection = geemap.landsat_timeseries(roi=roi, start_year=1985, end_year=2019, start_date='06-10', end_date='09-20') print(collection.size().getInfo()) first_image = collection.first() vis = { 'bands': ['NIR', 'Red', 'Green'], 'min': 0, 'max': 4000, 'gamma': [1, 1, 1] } Map.addLayer(first_image, vis, 'First image') ``` ## Download ImageCollection as a GIF ``` # Define arguments for animation function parameters. video_args = { 'dimensions': 768, 'region': roi, 'framesPerSecond': 10, 'bands': ['NIR', 'Red', 'Green'], 'min': 0, 'max': 4000, 'gamma': [1, 1, 1] } work_dir = os.path.join(os.path.expanduser("~"), 'Downloads') if not os.path.exists(work_dir): os.makedirs(work_dir) out_gif = os.path.join(work_dir, "landsat_ts.gif") geemap.download_ee_video(collection, video_args, out_gif) ``` ## Add animated text to GIF ``` geemap.show_image(out_gif) texted_gif = os.path.join(work_dir, "landsat_ts_text.gif") geemap.add_text_to_gif(out_gif, texted_gif, xy=('3%', '5%'), text_sequence=1985, font_size=30, font_color='#ffffff', add_progress_bar=False) label = 'Urban Growth in Las Vegas' geemap.add_text_to_gif(texted_gif, texted_gif, xy=('2%', '88%'), text_sequence=label, font_size=30, font_color='#ffffff', progress_bar_color='cyan') geemap.show_image(texted_gif) ```
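If you want to sanity-check the generated animation outside of geemap, the file can be inspected with plain Pillow. This is an optional sketch that only assumes the `texted_gif` path created above and that Pillow is installed:

```
from PIL import Image

# Quick inspection of the annotated GIF produced above.
with Image.open(texted_gif) as gif:
    print('frame size:', gif.size)
    print('number of frames:', getattr(gif, 'n_frames', 1))
    print('per-frame duration (ms):', gif.info.get('duration'))
```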
``` # Copyright © 2020, Johan Vonk # SPDX-License-Identifier: MIT %matplotlib inline import numpy as np import pandas as pd import math import matplotlib.pyplot as plt from sklearn.manifold import MDS from sklearn.metrics import pairwise_distances import paho.mqtt.client as mqtt from threading import Timer import json from config import username, password import seaborn as sns measured=np.array([ [0, 37.9, 92.2, 95.2, 56.6, 95.5, 73.5, 56.7, 121.2, 73.9], [0, 0, 54.7, 71.8, 44.4, 59.4, 41.6, 21.9, 89.5, 46.8], [0, 0, 0, 60.3, 67.6, 27.3, 45.8, 42.3, 65.1, 43.5], [0, 0, 0, 0, 40.4, 87.1, 94.8, 78.9, 125.4, 25.4], [0, 0, 0, 0, 0, 86.9, 81.3, 61.5, 123.0, 28.0], [0, 0, 0, 0, 0, 0, 29.1, 39.1, 28.3, 67.2], [0, 0, 0, 0, 0, 0, 0, 20.6, 48.6, 70.0], [0, 0, 0, 0, 0, 0, 0, 0, 67.6, 53.5], [0, 0, 0, 0, 0, 0, 0, 0, 0, 105.5], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], ]) measured*=0.0254 measured+=measured.T model = MDS(n_components=2, metric=True, dissimilarity='precomputed', random_state=1, n_init=1000, max_iter=1000) positions = model.fit_transform(measured) positions -= positions[8] positions[:, 1]*=-1 theta=np.radians(221)+math.atan2(positions[5,1],positions[5,0]) positions=positions.dot([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]]) positions[:,0]-=positions[3,0] angles=np.radians([18,9,-18,135,156,-59,-23,77,-90,62]) plt.quiver(positions[:,0], positions[:,1], np.cos(angles), np.sin(angles)) devices=pd.DataFrame(columns=("name", "address", "version", "date")) df=pd.DataFrame(columns=("TIMESTAMP","SCANNER","ADVERTISER","TX POWER","RSSI","DISTANCE","ANGLE")) class RepeatTimer(Timer): def run(self): while not self.finished.wait(self.interval): self.function(*self.args, **self.kwargs) def switch_devices(client, devices): for device,payload in zip(devices["name"],np.random.choice(['scan', 'adv'],len(devices))): client.publish("blescan/ctrl/"+device, payload=payload) def on_connect(client, userdata, flags, rc): client.subscribe("blescan/data/#") client.publish("blescan/ctrl", payload="who") client.publish("blescan/ctrl", payload="int 2") def on_message(client, userdata, msg): source=msg.topic.rsplit('/', 1)[-1] data = json.loads(msg.payload.decode('ASCII').replace('""','"')) if "name" in data and data["name"] not in devices["name"].values: devices.loc[len(devices)]=[data["name"],data["address"],data["version"],data["date"]] elif "RSSI" in data and data["address"] in devices["address"].values and source in devices["name"].values: sc_pos=positions[int(source.replace("esp32-",""))-1] advertiser=devices[devices['address']==data['address']]['name'].values[0] ad_pos=positions[int(advertiser.replace("esp32-",""))-1] dx=sc_pos[0]-ad_pos[0] dy=sc_pos[1]-ad_pos[1] df.loc[len(df)]=[pd.Timestamp.now(),source,advertiser,data["txPwr"],data["RSSI"],math.sqrt(dx**2+dy**2),(math.atan2(dy,dx)-angles[int(advertiser.replace("esp32-",""))-1]+2*np.pi)%(2*np.pi)] client=mqtt.Client("reader") client.on_connect = on_connect client.on_message = on_message client.connect('mqtt.vonk', 1883) client.username_pw_set(username=username,password=password) timer = RepeatTimer(60, switch_devices, args=(client,devices)) try: client.loop_start() timer.start() except KeyboardInterrupt: client.loop_stop() timer.cancel() d=df.copy() d['TIMESTAMP']=pd.to_datetime(d['TIMESTAMP'],errors='coerce') d['SCANNER']=d['SCANNER'].astype(str) d['ADVERTISER']=d['ADVERTISER'].astype(str) d['TX POWER']=pd.to_numeric(d['TX POWER'],errors='coerce').astype('int8') d['RSSI']=pd.to_numeric(d['RSSI'],errors='coerce').astype('int8') 
d['DISTANCE']=pd.to_numeric(d['DISTANCE'],errors='coerce') d['ANGLE']=pd.to_numeric(d['ANGLE'],errors='coerce') angle_shift=(1-np.cos(2*d['ANGLE']))/d['ANGLE']*3-0.855 d['HUMAN PREDICTION']=10**((11.5511+d['TX POWER']-d['RSSI']-angle_shift)/10/2) d['HUMAN PREDICTION']=pd.to_numeric(d['HUMAN PREDICTION'],errors='coerce') d['HUMAN SLE']=np.log((d['DISTANCE']+1)/(d['HUMAN PREDICTION']+1))**2 d['HUMAN SLE']=pd.to_numeric(d['HUMAN SLE'],errors='coerce') print('Received {0:.5} messages per second.'.format(len(df)/(df.iloc[-1]["TIMESTAMP"]-df.iloc[0]["TIMESTAMP"]).total_seconds())) print("Human distance and angle mean squared log error is {0:.5}.".format(np.sum(d['HUMAN SLE'])/len(d))) plot_data=d.query('`HUMAN PREDICTION`>0 and `HUMAN PREDICTION`<4') sns.jointplot(x="DISTANCE", y="HUMAN PREDICTION", data=plot_data, kind="hex") d['DISTANCE PREDICTION']=10**((11.5511+d['TX POWER']-d['RSSI'])/10/2) d['DISTANCE PREDICTION']=pd.to_numeric(d['DISTANCE PREDICTION'],errors='coerce') d['DISTANCE SLE']=np.log((d['DISTANCE']+1)/(d['DISTANCE PREDICTION']+1))**2 d['DISTANCE SLE']=pd.to_numeric(d['DISTANCE SLE'],errors='coerce') print("Distance-only mean squared log error is {0:.5}.".format(np.sum(d['DISTANCE SLE'])/len(d))) plot_data=d.query('`DISTANCE PREDICTION`>0 and `DISTANCE PREDICTION`<4') sns.jointplot(x="DISTANCE", y="DISTANCE PREDICTION", data=plot_data, kind="hex") import tensorflow as tf from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler power=10**((11.5511+d['TX POWER']-d['RSSI'])/20) cos_2angle=np.cos(2*d['ANGLE']) sin_2angle=np.sin(2*d['ANGLE']) cos_angle=np.cos(d['ANGLE']) sin_angle=np.sin(d['ANGLE']) X = pd.DataFrame([power,cos_2angle,sin_2angle,cos_angle,sin_angle]).T y = np.ravel(d['DISTANCE']) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42) scaler = StandardScaler().fit(X_train) X_train = scaler.transform(X_train) X_test = scaler.transform(X_test) model = Sequential() model.add(Dense(8, kernel_initializer='normal', activation='relu', input_shape=(5,))) model.add(Dense(8, kernel_initializer='normal', activation='relu')) model.add(Dense(1, kernel_initializer='normal')) model.compile(loss='mean_squared_logarithmic_error', optimizer='sgd', metrics=['mse']) model.summary() history = model.fit(X_train, y_train, epochs=36, batch_size=32, verbose=1, validation_data=(X_test, y_test)) model.save('model') X_predict=scaler.transform(X) d['PREDICTION']=model.predict(X_predict, verbose=1) d['SLE']=np.log((d['DISTANCE']+1)/(d['PREDICTION']+1))**2 d.to_csv(f"pact_{d.iloc[0]['TIMESTAMP']:%Y%m%dT%H%M%S}.csv") plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.title('model loss (msle)') plt.ylabel('loss') plt.xlabel('epoch') plt.legend(['train', 'test'], loc='upper left') print("ML model mean squared log error is {0:.5}.".format(np.sum(d['SLE'])/len(d))) print("False positive rate is {0:.3%}.".format(len(d.query('DISTANCE>0.9144 and PREDICTION<=0.9144'))/len(d))) print("False negative rate is {0:.3%}.".format(len(d.query('DISTANCE<=0.9144 and PREDICTION>0.9144'))/len(d))) print("True positive rate is {0:.3%}.".format(len(d.query('DISTANCE<=0.9144 and PREDICTION<=0.9144'))/len(d))) print("True negative rate is {0:.3%}.".format(len(d.query('DISTANCE>0.9144 and PREDICTION>0.9144'))/len(d))) plot_data=d.query('`PREDICTION`>0 and `PREDICTION`<4') sns.jointplot(x="DISTANCE", y="PREDICTION", data=plot_data, 
kind="hex") yard_power=0.9144 n_points=1000 angles=2*np.pi/n_points*np.arange(0, n_points) X_angles = scaler.transform(pd.DataFrame([np.full(len(angles),yard_power),np.cos(2*angles),np.sin(2*angles),np.cos(angles),np.sin(angles)]).T) result_angles=np.log10(model.predict(X_angles, verbose=1).flatten())*20 result_angles-=result_angles.max()-30 import plotly.express as px px.line_polar(r=result_angles, theta=angles*180/np.pi, line_close=True) #df[df['TIMESTAMP'] <= df['TIMESTAMP'].iloc[0]+pd.Timedelta(2,'D')] graph=df.sample(n=100) px.line(graph['TIMESTAMP'],graph['RSSI']) df ```
##### Copyright 2019 The TensorFlow Hub Authors. Licensed under the Apache License, Version 2.0 (the "License"); ``` # Copyright 2019 The TensorFlow Hub Authors. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # ============================================================================== ``` # Multilingual Universal Sentence Encoder Q&A Retrieval <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/hub/tutorials/retrieval_with_tf_hub_universal_encoder_qa"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/retrieval_with_tf_hub_universal_encoder_qa.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/hub/blob/master/examples/colab/retrieval_with_tf_hub_universal_encoder_qa.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/hub/examples/colab/retrieval_with_tf_hub_universal_encoder_qa.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> This is a demo for using [Univeral Encoder Multilingual Q&A model](https://tfhub.dev/google/universal-sentence-encoder-multilingual-qa/3) for question-answer retrieval of text, illustrating the use of **question_encoder** and **response_encoder** of the model. We use sentences from [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) paragraphs as the demo dataset, each sentence and its context (the text surrounding the sentence) is encoded into high dimension embeddings with the **response_encoder**. These embeddings are stored in an index built using the [simpleneighbors](https://pypi.org/project/simpleneighbors/) library for question-answer retrieval. On retrieval a random question is selected from the [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) dataset and encoded into high dimension embedding with the **question_encoder** and query the simpleneighbors index returning a list of approximate nearest neighbors in semantic space. ## Setup ``` %%capture #@title Setup Environment # Install the latest Tensorflow version. 
!pip install -q tensorflow_text !pip install -q simpleneighbors[annoy] !pip install -q nltk !pip install -q tqdm #@title Setup common imports and functions import json import nltk import os import pprint import random import simpleneighbors import urllib from IPython.display import HTML, display from tqdm.notebook import tqdm import tensorflow.compat.v2 as tf import tensorflow_hub as hub from tensorflow_text import SentencepieceTokenizer nltk.download('punkt') def download_squad(url): return json.load(urllib.request.urlopen(url)) def extract_sentences_from_squad_json(squad): all_sentences = [] for data in squad['data']: for paragraph in data['paragraphs']: sentences = nltk.tokenize.sent_tokenize(paragraph['context']) all_sentences.extend(zip(sentences, [paragraph['context']] * len(sentences))) return list(set(all_sentences)) # remove duplicates def extract_questions_from_squad_json(squad): questions = [] for data in squad['data']: for paragraph in data['paragraphs']: for qas in paragraph['qas']: if qas['answers']: questions.append((qas['question'], qas['answers'][0]['text'])) return list(set(questions)) def output_with_highlight(text, highlight): output = "<li> " i = text.find(highlight) while True: if i == -1: output += text break output += text[0:i] output += '<b>'+text[i:i+len(highlight)]+'</b>' text = text[i+len(highlight):] i = text.find(highlight) return output + "</li>\n" def display_nearest_neighbors(query_text, answer_text=None): query_embedding = model.signatures['question_encoder'](tf.constant([query_text]))['outputs'][0] search_results = index.nearest(query_embedding, n=num_results) if answer_text: result_md = ''' <p>Random Question from SQuAD:</p> <p>&nbsp;&nbsp;<b>%s</b></p> <p>Answer:</p> <p>&nbsp;&nbsp;<b>%s</b></p> ''' % (query_text , answer_text) else: result_md = ''' <p>Question:</p> <p>&nbsp;&nbsp;<b>%s</b></p> ''' % query_text result_md += ''' <p>Retrieved sentences : <ol> ''' if answer_text: for s in search_results: result_md += output_with_highlight(s, answer_text) else: for s in search_results: result_md += '<li>' + s + '</li>\n' result_md += "</ol>" display(HTML(result_md)) ``` Run the following code block to download and extract the SQuAD dataset into: * **sentences** is a list of (text, context) tuples - each paragraph from the SQuAD dataset are splitted into sentences using nltk library and the sentence and paragraph text forms the (text, context) tuple. * **questions** is a list of (question, answer) tuples. Note: You can use this demo to index the SQuAD train dataset or the smaller dev dataset (1.1 or 2.0) by selecting the **squad_url** below. 
``` #@title Download and extract SQuAD data squad_url = 'https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v1.1.json' #@param ["https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v2.0.json", "https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v2.0.json", "https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json", "https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v1.1.json"] squad_json = download_squad(squad_url) sentences = extract_sentences_from_squad_json(squad_json) questions = extract_questions_from_squad_json(squad_json) print("%s sentences, %s questions extracted from SQuAD %s" % (len(sentences), len(questions), squad_url)) print("\nExample sentence and context:\n") sentence = random.choice(sentences) print("sentence:\n") pprint.pprint(sentence[0]) print("\ncontext:\n") pprint.pprint(sentence[1]) print() ``` The following code block setup the tensorflow graph **g** and **session** with the [Univeral Encoder Multilingual Q&A model](https://tfhub.dev/google/universal-sentence-encoder-multilingual-qa/3)'s **question_encoder** and **response_encoder** signatures. ``` #@title Load model from tensorflow hub module_url = "https://tfhub.dev/google/universal-sentence-encoder-multilingual-qa/3" #@param ["https://tfhub.dev/google/universal-sentence-encoder-multilingual-qa/3", "https://tfhub.dev/google/universal-sentence-encoder-qa/3"] model = hub.load(module_url) ``` The following code block compute the embeddings for all the text, context tuples and store them in a [simpleneighbors](https://pypi.org/project/simpleneighbors/) index using the **response_encoder**. ``` #@title Compute embeddings and build simpleneighbors index batch_size = 100 encodings = model.signatures['response_encoder']( input=tf.constant([sentences[0][0]]), context=tf.constant([sentences[0][1]])) index = simpleneighbors.SimpleNeighbors( len(encodings['outputs'][0]), metric='angular') print('Computing embeddings for %s sentences' % len(sentences)) slices = zip(*(iter(sentences),) * batch_size) num_batches = int(len(sentences) / batch_size) for s in tqdm(slices, total=num_batches): response_batch = list([r for r, c in s]) context_batch = list([c for r, c in s]) encodings = model.signatures['response_encoder']( input=tf.constant(response_batch), context=tf.constant(context_batch) ) for batch_index, batch in enumerate(response_batch): index.add_one(batch, encodings['outputs'][batch_index]) index.build() print('simpleneighbors index for %s sentences built.' % len(sentences)) ``` On retrieval, the question is encoded using the **question_encoder** and the question embedding is used to query the simpleneighbors index. ``` #@title Retrieve nearest neighbors for a random question from SQuAD num_results = 25 #@param {type:"slider", min:5, max:40, step:1} query = random.choice(questions) display_nearest_neighbors(query[0], query[1]) ```
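Because the encoder is multilingual, the index built above can also be queried with your own question, in any of the supported languages, by reusing the `display_nearest_neighbors` helper defined earlier. The query string below is just an illustrative example:

```
#@title Ask your own question
my_query = "¿Quién escribió El Quijote?"  #@param {type:"string"}
display_nearest_neighbors(my_query)
```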
COPYRIGHT © 2018 Kiran Arun <[email protected]> ### Setup ``` # install dependencies !rm -r Neural_Networks-101-demo !git clone -b explanations https://github.com/KiranArun/Neural_Networks-101-demo.git !python3 /content/Neural_Networks-101-demo/scripts/setup.py helper_funcs tensorboard # run tensorboard get_ipython().system_raw('tensorboard --logdir=/content/logdir/ --host=0.0.0.0 --port=6006 &') get_ipython().system_raw('./ngrok http 6006 &') ! curl -s http://localhost:4040/api/tunnels | python3 -c "import sys, json; print('Tensorboard Link:', json.load(sys.stdin)['tunnels'][0]['public_url'])" ``` # MNIST Handwriten Digits Classifier ``` import tensorflow as tf import numpy as np import matplotlib.pyplot as plt import os from math import ceil,floor import helper_funcs as helper # this is the directory where we will keep and external files, eg. data, logs model_root_dir = '/content/' # get data mnist = helper.MNIST_data(model_root_dir+'MNIST_data/',shuffle=False) image_dims = (28,28) input_size = 28**2 num_classes = 10 batch_size = 100 learning_rate = 0.1 epochs = 2 iterations = ceil(mnist.number_train_samples/batch_size) hidden_size = 256 embedding_size = 10 model_logdir = model_root_dir+'logdir/' LABELS = os.path.join(os.getcwd(), model_logdir+"labels_1024.tsv") SPRITES = os.path.join(os.getcwd(), model_logdir+"sprite_1024.png") hparam_str = 'fc2,lr_%f' % (learning_rate) previous_runs = list(f for f in os.listdir(model_logdir) if f.startswith('run')) if len(previous_runs) == 0: run_number = 1 else: run_number = max([int(s[4:6]) for s in previous_runs]) + 1 LOGDIR = '%srun_%02d,' % (model_logdir, run_number)+hparam_str tf.reset_default_graph() with tf.name_scope('input'): X_placeholder = tf.placeholder(shape=[None, input_size], dtype=tf.float32, name='X_placeholder') Y_placeholder = tf.placeholder(shape=[None, num_classes], dtype=tf.int64, name='Y_placeholder') with tf.name_scope('input_reshaped'): X_image = tf.reshape(X_placeholder, shape=[-1,*image_dims, 1]) tf.summary.image('input', X_image, 3) def variable_summaries(var): with tf.name_scope('summaries'): mean = tf.reduce_mean(var) tf.summary.scalar('mean', mean) stddev = tf.sqrt(tf.reduce_mean(tf.square(var - mean))) tf.summary.scalar('stddev', stddev) tf.summary.scalar('max', tf.reduce_max(var)) tf.summary.scalar('min', tf.reduce_min(var)) tf.summary.histogram('histogram', var) with tf.name_scope('hidden_layer'): with tf.name_scope('Weights'): W1 = tf.Variable(tf.truncated_normal(shape=[input_size, hidden_size]), dtype=tf.float32, name='W1') variable_summaries(W1) with tf.name_scope('biases'): b1 = tf.Variable(tf.constant(0.1,shape=[hidden_size]), dtype=tf.float32, name='b1') variable_summaries(b1) with tf.name_scope('output'): hidden_output = tf.nn.relu(tf.matmul(X_placeholder, W1) + b1) with tf.name_scope('output_layer'): with tf.name_scope('Weights'): W2 = tf.Variable(tf.truncated_normal(shape=[hidden_size, num_classes]), dtype=tf.float32, name='W2') variable_summaries(W2) with tf.name_scope('biases'): b2 = tf.Variable(tf.constant(0.1,shape=[num_classes]), dtype=tf.float32, name='b2') variable_summaries(b2) with tf.name_scope('output'): Y_predictions = tf.matmul(hidden_output, W2) + b2 embedding_input = Y_predictions with tf.name_scope('loss'): cross_entropy = tf.nn.softmax_cross_entropy_with_logits_v2(labels=Y_placeholder, logits=Y_predictions, name='cross_entropy') loss = tf.reduce_mean(cross_entropy) tf.summary.scalar('loss', loss) with tf.name_scope('train'): train_step = 
tf.train.GradientDescentOptimizer(learning_rate).minimize(loss) with tf.name_scope('accuracy'): with tf.name_scope('correct_predictions'): correct_prediction = tf.equal(tf.argmax(Y_predictions, 1), tf.argmax(Y_placeholder, 1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) tf.summary.scalar('accuracy', accuracy) sess = tf.InteractiveSession() summ = tf.summary.merge_all() embedding = tf.Variable(tf.zeros([1024, embedding_size]), name="embedding") assignment = embedding.assign(embedding_input) saver = tf.train.Saver() sess.run(tf.global_variables_initializer()) writer = tf.summary.FileWriter(LOGDIR) writer.add_graph(sess.graph) config = tf.contrib.tensorboard.plugins.projector.ProjectorConfig() embedding_config = config.embeddings.add() embedding_config.tensor_name = embedding.name embedding_config.sprite.image_path = SPRITES embedding_config.metadata_path = LABELS embedding_config.sprite.single_image_dim.extend([*image_dims]) tf.contrib.tensorboard.plugins.projector.visualize_embeddings(writer, config) losses = np.array([]) for epoch in range(epochs): print('New epoch', str(epoch+1)+'/'+str(epochs)) for iteration in range(iterations): batch_xs, batch_ys = mnist.get_batch(iteration, batch_size) _, _loss, _summary = sess.run([train_step, loss, summ], feed_dict={ X_placeholder: batch_xs, Y_placeholder: batch_ys }) if (iteration+1) % (iterations/5) == 0: _accuracy = sess.run(accuracy, feed_dict={X_placeholder : mnist.validation_images, Y_placeholder : mnist.validation_labels }) print('step', str(iteration+1)+'/'+str(iterations), 'loss', _loss, 'accuracy', str(round(100*_accuracy,2))+'%') if iteration % 10 == 0: writer.add_summary(_summary, (epoch*iterations)+iteration) losses = np.append(losses, _loss) sess.run(assignment, feed_dict={X_placeholder: mnist.test_images[:1024], Y_placeholder: mnist.test_labels[:1024]}) saver.save(sess, os.path.join(LOGDIR, "model.ckpt"), (epoch*iterations)+iteration) fig, ax = plt.subplots(figsize=(10,6)) ax.plot(losses) ax.grid(True) _accuracy = sess.run(accuracy, feed_dict={X_placeholder : mnist.test_images, Y_placeholder : mnist.test_labels }) print(str(round(100*_accuracy,2))+'%') ```
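To see what the trained classifier actually predicts, the optional cell below (a sketch, assuming the `sess`, `Y_predictions` and `mnist` objects defined above are still available) plots a few test digits together with the predicted and true labels:

```
# Show a few test digits with the network's predictions (sketch).
n_show = 8
logits = sess.run(Y_predictions, feed_dict={X_placeholder: mnist.test_images[:n_show]})
predictions = np.argmax(logits, axis=1)
labels = np.argmax(mnist.test_labels[:n_show], axis=1)

fig, axes = plt.subplots(1, n_show, figsize=(2 * n_show, 2))
for i, ax in enumerate(axes):
    ax.imshow(mnist.test_images[i].reshape(*image_dims), cmap='gray')
    ax.set_title('pred %d / true %d' % (predictions[i], labels[i]))
    ax.axis('off')
plt.show()
```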
**Chapter 16 – Reinforcement Learning** This notebook contains all the sample code and solutions to the exercices in chapter 16. # Setup First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures: ``` # To support both python 2 and python 3 from __future__ import division, print_function, unicode_literals # Common imports import numpy as np import numpy.random as rnd import os import sys # to make this notebook's output stable across runs rnd.seed(42) # To plot pretty figures and animations %matplotlib nbagg import matplotlib import matplotlib.animation as animation import matplotlib.pyplot as plt plt.rcParams['axes.labelsize'] = 14 plt.rcParams['xtick.labelsize'] = 12 plt.rcParams['ytick.labelsize'] = 12 # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "rl" def save_fig(fig_id, tight_layout=True): path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png") print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format='png', dpi=300) ``` # Introduction to OpenAI gym In this notebook we will be using [OpenAI gym](https://gym.openai.com/), a great toolkit for developing and comparing Reinforcement Learning algorithms. It provides many environments for your learning *agents* to interact with. Let's start by importing `gym`: ``` import gym ``` Next we will load the MsPacman environment, version 0. ``` env = gym.make('MsPacman-v0') ``` Let's initialize the environment by calling is `reset()` method. This returns an observation: ``` obs = env.reset() ``` Observations vary depending on the environment. In this case it is an RGB image represented as a 3D NumPy array of shape [width, height, channels] (with 3 channels: Red, Green and Blue). In other environments it may return different objects, as we will see later. ``` obs.shape ``` An environment can be visualized by calling its `render()` method, and you can pick the rendering mode (the rendering options depend on the environment). In this example we will set `mode="rgb_array"` to get an image of the environment as a NumPy array: ``` img = env.render(mode="rgb_array") ``` Let's plot this image: ``` plt.figure(figsize=(5,4)) plt.imshow(img) plt.axis("off") save_fig("MsPacman") plt.show() ``` Welcome back to the 1980s! :) In this environment, the rendered image is simply equal to the observation (but in many environments this is not the case): ``` (img == obs).all() ``` Let's create a little helper function to plot an environment: ``` def plot_environment(env, figsize=(5,4)): plt.close() # or else nbagg sometimes plots in the previous cell plt.figure(figsize=figsize) img = env.render(mode="rgb_array") plt.imshow(img) plt.axis("off") plt.show() ``` Let's see how to interact with an environment. Your agent will need to select an action from an "action space" (the set of possible actions). Let's see what this environment's action space looks like: ``` env.action_space ``` `Discrete(9)` means that the possible actions are integers 0 through 8, which represents the 9 possible positions of the joystick (0=center, 1=up, 2=right, 3=left, 4=down, 5=upper-right, 6=upper-left, 7=lower-right, 8=lower-left). Next we need to tell the environment which action to play, and it will compute the next step of the game. 
Let's go left for 110 steps, then lower left for 40 steps: ``` env.reset() for step in range(110): env.step(3) #left for step in range(40): env.step(8) #lower-left ``` Where are we now? ``` plot_environment(env) ``` The `step()` function actually returns several important objects: ``` obs, reward, done, info = env.step(0) ``` The observation tells the agent what the environment looks like, as discussed earlier. This is a 210x160 RGB image: ``` obs.shape ``` The environment also tells the agent how much reward it got during the last step: ``` reward ``` When the game is over, the environment returns `done=True`: ``` done ``` Finally, `info` is an environment-specific dictionary that can provide some extra information about the internal state of the environment. This is useful for debugging, but your agent should not use this information for learning (it would be cheating). ``` info ``` Let's play one full game (with 3 lives), by moving in random directions for 10 steps at a time, recording each frame: ``` frames = [] n_max_steps = 1000 n_change_steps = 10 obs = env.reset() for step in range(n_max_steps): img = env.render(mode="rgb_array") frames.append(img) if step % n_change_steps == 0: action = env.action_space.sample() # play randomly obs, reward, done, info = env.step(action) if done: break ``` Now show the animation (it's a bit jittery within Jupyter): ``` def update_scene(num, frames, patch): patch.set_data(frames[num]) return patch, def plot_animation(frames, repeat=False, interval=40): plt.close() # or else nbagg sometimes plots in the previous cell fig = plt.figure() patch = plt.imshow(frames[0]) plt.axis('off') return animation.FuncAnimation(fig, update_scene, fargs=(frames, patch), frames=len(frames), repeat=repeat, interval=interval) video = plot_animation(frames) plt.show() ``` Once you have finished playing with an environment, you should close it to free up resources: ``` env.close() ``` To code our first learning agent, we will be using a simpler environment: the Cart-Pole. # A simple environment: the Cart-Pole The Cart-Pole is a very simple environment composed of a cart that can move left or right, and pole placed vertically on top of it. The agent must move the cart left or right to keep the pole upright. ``` env = gym.make("CartPole-v0") obs = env.reset() obs ``` The observation is a 1D NumPy array composed of 4 floats: they represent the cart's horizontal position, its velocity, the angle of the pole (0 = vertical), and the angular velocity. Let's render the environment... unfortunately we need to fix an annoying rendering issue first. ## Fixing the rendering issue Some environments (including the Cart-Pole) require access to your display, which opens up a separate window, even if you specify the `rgb_array` mode. In general you can safely ignore that window. However, if Jupyter is running on a headless server (ie. without a screen) it will raise an exception. One way to avoid this is to install a fake X server like Xvfb. 
You can start Jupyter using the `xvfb-run` command: $ xvfb-run -s "-screen 0 1400x900x24" jupyter notebook If Jupyter is running on a headless server but you don't want to worry about Xvfb, then you can just use the following rendering function for the Cart-Pole: ``` from PIL import Image, ImageDraw try: from pyglet.gl import gl_info openai_cart_pole_rendering = True # no problem, let's use OpenAI gym's rendering function except Exception: openai_cart_pole_rendering = False # probably no X server available, let's use our own rendering function def render_cart_pole(env, obs): if openai_cart_pole_rendering: # use OpenAI gym's rendering function return env.render(mode="rgb_array") else: # rendering for the cart pole environment (in case OpenAI gym can't do it) img_w = 600 img_h = 400 cart_w = img_w // 12 cart_h = img_h // 15 pole_len = img_h // 3.5 pole_w = img_w // 80 + 1 x_width = 2 max_ang = 0.2 bg_col = (255, 255, 255) cart_col = 0x000000 # Blue Green Red pole_col = 0x669acc # Blue Green Red pos, vel, ang, ang_vel = obs img = Image.new('RGB', (img_w, img_h), bg_col) draw = ImageDraw.Draw(img) cart_x = pos * img_w // x_width + img_w // x_width cart_y = img_h * 95 // 100 top_pole_x = cart_x + pole_len * np.sin(ang) top_pole_y = cart_y - cart_h // 2 - pole_len * np.cos(ang) draw.line((0, cart_y, img_w, cart_y), fill=0) draw.rectangle((cart_x - cart_w // 2, cart_y - cart_h // 2, cart_x + cart_w // 2, cart_y + cart_h // 2), fill=cart_col) # draw cart draw.line((cart_x, cart_y - cart_h // 2, top_pole_x, top_pole_y), fill=pole_col, width=pole_w) # draw pole return np.array(img) def plot_cart_pole(env, obs): plt.close() # or else nbagg sometimes plots in the previous cell img = render_cart_pole(env, obs) plt.imshow(img) plt.axis("off") plt.show() plot_cart_pole(env, obs) ``` Now let's look at the action space: ``` env.action_space ``` Yep, just two possible actions: accelerate towards the left or towards the right. Let's push the cart left until the pole falls: ``` obs = env.reset() while True: obs, reward, done, info = env.step(0) if done: break plt.close() # or else nbagg sometimes plots in the previous cell img = render_cart_pole(env, obs) plt.imshow(img) plt.axis("off") save_fig("cart_pole_plot") ``` Notice that the game is over when the pole tilts too much, not when it actually falls. Now let's reset the environment and push the cart to right instead: ``` obs = env.reset() while True: obs, reward, done, info = env.step(1) if done: break plot_cart_pole(env, obs) ``` Looks like it's doing what we're telling it to do. Now how can we make the poll remain upright? We will need to define a _policy_ for that. This is the strategy that the agent will use to select an action at each step. It can use all the past actions and observations to decide what to do. # A simple hard-coded policy Let's hard code a simple strategy: if the pole is tilting to the left, then push the cart to the left, and _vice versa_. Let's see if that works: ``` frames = [] n_max_steps = 1000 n_change_steps = 10 obs = env.reset() for step in range(n_max_steps): img = render_cart_pole(env, obs) frames.append(img) # hard-coded policy position, velocity, angle, angular_velocity = obs if angle < 0: action = 0 else: action = 1 obs, reward, done, info = env.step(action) if done: break video = plot_animation(frames) plt.show() ``` Nope, the system is unstable and after just a few wobbles, the pole ends up too tilted: game over. We will need to be smarter than that! 
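To put a rough number on how unstable this is, the optional sketch below replays the same hard-coded policy for a few episodes and reports the average number of steps before the episode ends. It reuses only the `env` created above; exact numbers will vary from run to run.

```
# Average episode length of the hard-coded policy over a few episodes (sketch).
n_episodes = 20
episode_lengths = []
for episode in range(n_episodes):
    obs = env.reset()
    for step in range(1000):
        position, velocity, angle, angular_velocity = obs
        action = 0 if angle < 0 else 1
        obs, reward, done, info = env.step(action)
        if done:
            break
    episode_lengths.append(step)
print("mean steps before failure:", np.mean(episode_lengths))
```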
# Neural Network Policies Let's create a neural network that will take observations as inputs, and output the action to take for each observation. To choose an action, the network will first estimate a probability for each action, then select an action randomly according to the estimated probabilities. In the case of the Cart-Pole environment, there are just two possible actions (left or right), so we only need one output neuron: it will output the probability `p` of the action 0 (left), and of course the probability of action 1 (right) will be `1 - p`. ``` import tensorflow as tf from tensorflow.contrib.layers import fully_connected # 1. Specify the network architecture n_inputs = 4 # == env.observation_space.shape[0] n_hidden = 4 # it's a simple task, we don't need more than this n_outputs = 1 # only outputs the probability of accelerating left initializer = tf.contrib.layers.variance_scaling_initializer() # 2. Build the neural network X = tf.placeholder(tf.float32, shape=[None, n_inputs]) hidden = fully_connected(X, n_hidden, activation_fn=tf.nn.elu, weights_initializer=initializer) outputs = fully_connected(hidden, n_outputs, activation_fn=tf.nn.sigmoid, weights_initializer=initializer) # 3. Select a random action based on the estimated probabilities p_left_and_right = tf.concat(axis=1, values=[outputs, 1 - outputs]) action = tf.multinomial(tf.log(p_left_and_right), num_samples=1) init = tf.global_variables_initializer() ``` In this particular environment, the past actions and observations can safely be ignored, since each observation contains the environment's full state. If there were some hidden state then you may need to consider past actions and observations in order to try to infer the hidden state of the environment. For example, if the environment only revealed the position of the cart but not its velocity, you would have to consider not only the current observation but also the previous observation in order to estimate the current velocity. Another example is if the observations are noisy: you may want to use the past few observations to estimate the most likely current state. Our problem is thus as simple as can be: the current observation is noise-free and contains the environment's full state. You may wonder why we are picking a random action based on the probability given by the policy network, rather than just picking the action with the highest probability. This approach lets the agent find the right balance between _exploring_ new actions and _exploiting_ the actions that are known to work well. Here's an analogy: suppose you go to a restaurant for the first time, and all the dishes look equally appealing so you randomly pick one. If it turns out to be good, you can increase the probability to order it next time, but you shouldn't increase that probability to 100%, or else you will never try out the other dishes, some of which may be even better than the one you tried. Let's randomly initialize this policy neural network and use it to play one game: ``` n_max_steps = 1000 frames = [] with tf.Session() as sess: init.run() obs = env.reset() for step in range(n_max_steps): img = render_cart_pole(env, obs) frames.append(img) action_val = action.eval(feed_dict={X: obs.reshape(1, n_inputs)}) obs, reward, done, info = env.step(action_val[0][0]) if done: break env.close() ``` Now let's look at how well this randomly initialized policy network performed: ``` video = plot_animation(frames) plt.show() ``` Yeah... pretty bad. The neural network will have to learn to do better. 
First let's see if it is capable of learning the basic policy we used earlier: go left if the pole is tilting left, and go right if it is tilting right. The following code defines the same neural network but we add the target probabilities `y`, and the training operations (`cross_entropy`, `optimizer` and `training_op`): ``` import tensorflow as tf from tensorflow.contrib.layers import fully_connected tf.reset_default_graph() n_inputs = 4 n_hidden = 4 n_outputs = 1 learning_rate = 0.01 initializer = tf.contrib.layers.variance_scaling_initializer() X = tf.placeholder(tf.float32, shape=[None, n_inputs]) y = tf.placeholder(tf.float32, shape=[None, n_outputs]) hidden = fully_connected(X, n_hidden, activation_fn=tf.nn.elu, weights_initializer=initializer) logits = fully_connected(hidden, n_outputs, activation_fn=None) outputs = tf.nn.sigmoid(logits) # probability of action 0 (left) p_left_and_right = tf.concat(axis=1, values=[outputs, 1 - outputs]) action = tf.multinomial(tf.log(p_left_and_right), num_samples=1) cross_entropy = tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=logits) optimizer = tf.train.AdamOptimizer(learning_rate) training_op = optimizer.minimize(cross_entropy) init = tf.global_variables_initializer() saver = tf.train.Saver() ``` We can make the same net play in 10 different environments in parallel, and train for 1000 iterations. We also reset environments when they are done. ``` n_environments = 10 n_iterations = 1000 envs = [gym.make("CartPole-v0") for _ in range(n_environments)] observations = [env.reset() for env in envs] with tf.Session() as sess: init.run() for iteration in range(n_iterations): target_probas = np.array([([1.] if obs[2] < 0 else [0.]) for obs in observations]) # if angle<0 we want proba(left)=1., or else proba(left)=0. action_val, _ = sess.run([action, training_op], feed_dict={X: np.array(observations), y: target_probas}) for env_index, env in enumerate(envs): obs, reward, done, info = env.step(action_val[env_index][0]) observations[env_index] = obs if not done else env.reset() saver.save(sess, "./my_policy_net_basic.ckpt") for env in envs: env.close() def render_policy_net(model_path, action, X, n_max_steps = 1000): frames = [] env = gym.make("CartPole-v0") obs = env.reset() with tf.Session() as sess: saver.restore(sess, model_path) for step in range(n_max_steps): img = render_cart_pole(env, obs) frames.append(img) action_val = action.eval(feed_dict={X: obs.reshape(1, n_inputs)}) obs, reward, done, info = env.step(action_val[0][0]) if done: break env.close() return frames frames = render_policy_net("./my_policy_net_basic.ckpt", action, X) video = plot_animation(frames) plt.show() ``` Looks like it learned the policy correctly. Now let's see if it can learn a better policy on its own. # Policy Gradients To train this neural network we will need to define the target probabilities `y`. If an action is good we should increase its probability, and conversely if it is bad we should reduce it. But how do we know whether an action is good or bad? The problem is that most actions have delayed effects, so when you win or lose points in a game, it is not clear which actions contributed to this result: was it just the last action? Or the last 10? Or just one action 50 steps earlier? This is called the _credit assignment problem_. The _Policy Gradients_ algorithm tackles this problem by first playing multiple games, then making the actions in good games slightly more likely, while actions in bad games are made slightly less likely. 
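To make the credit assignment step concrete: each reward is credited to earlier actions with an exponentially decaying weight. Hand-computing the discounted returns for the same toy rewards that the code below evaluates (`[10, 0, -50]` with a discount rate of 0.8) gives:

```
# Hand-computed discounted returns for rewards [10, 0, -50] with rate 0.8,
# working backwards from the last step (this is what discount_rewards computes):
#   step 2:  -50
#   step 1:    0 + 0.8 * (-50) = -40
#   step 0:   10 + 0.8 * (-40) = -22
rate = 0.8
returns = [10 + rate * (0 + rate * -50), 0 + rate * -50, -50]
print(returns)   # [-22.0, -40.0, -50]
```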
First we play, then we go back and think about what we did. ``` import tensorflow as tf from tensorflow.contrib.layers import fully_connected tf.reset_default_graph() n_inputs = 4 n_hidden = 4 n_outputs = 1 learning_rate = 0.01 initializer = tf.contrib.layers.variance_scaling_initializer() X = tf.placeholder(tf.float32, shape=[None, n_inputs]) hidden = fully_connected(X, n_hidden, activation_fn=tf.nn.elu, weights_initializer=initializer) logits = fully_connected(hidden, n_outputs, activation_fn=None) outputs = tf.nn.sigmoid(logits) # probability of action 0 (left) p_left_and_right = tf.concat(axis=1, values=[outputs, 1 - outputs]) action = tf.multinomial(tf.log(p_left_and_right), num_samples=1) y = 1. - tf.to_float(action) cross_entropy = tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=logits) optimizer = tf.train.AdamOptimizer(learning_rate) grads_and_vars = optimizer.compute_gradients(cross_entropy) gradients = [grad for grad, variable in grads_and_vars] gradient_placeholders = [] grads_and_vars_feed = [] for grad, variable in grads_and_vars: gradient_placeholder = tf.placeholder(tf.float32, shape=grad.get_shape()) gradient_placeholders.append(gradient_placeholder) grads_and_vars_feed.append((gradient_placeholder, variable)) training_op = optimizer.apply_gradients(grads_and_vars_feed) init = tf.global_variables_initializer() saver = tf.train.Saver() def discount_rewards(rewards, discount_rate): discounted_rewards = np.zeros(len(rewards)) cumulative_rewards = 0 for step in reversed(range(len(rewards))): cumulative_rewards = rewards[step] + cumulative_rewards * discount_rate discounted_rewards[step] = cumulative_rewards return discounted_rewards def discount_and_normalize_rewards(all_rewards, discount_rate): all_discounted_rewards = [discount_rewards(rewards, discount_rate) for rewards in all_rewards] flat_rewards = np.concatenate(all_discounted_rewards) reward_mean = flat_rewards.mean() reward_std = flat_rewards.std() return [(discounted_rewards - reward_mean)/reward_std for discounted_rewards in all_discounted_rewards] discount_rewards([10, 0, -50], discount_rate=0.8) discount_and_normalize_rewards([[10, 0, -50], [10, 20]], discount_rate=0.8) env = gym.make("CartPole-v0") n_games_per_update = 10 n_max_steps = 1000 n_iterations = 250 save_iterations = 10 discount_rate = 0.95 with tf.Session() as sess: init.run() for iteration in range(n_iterations): print("\rIteration: {}".format(iteration), end="") all_rewards = [] all_gradients = [] for game in range(n_games_per_update): current_rewards = [] current_gradients = [] obs = env.reset() for step in range(n_max_steps): action_val, gradients_val = sess.run([action, gradients], feed_dict={X: obs.reshape(1, n_inputs)}) obs, reward, done, info = env.step(action_val[0][0]) current_rewards.append(reward) current_gradients.append(gradients_val) if done: break all_rewards.append(current_rewards) all_gradients.append(current_gradients) all_rewards = discount_and_normalize_rewards(all_rewards, discount_rate=discount_rate) feed_dict = {} for var_index, gradient_placeholder in enumerate(gradient_placeholders): mean_gradients = np.mean([reward * all_gradients[game_index][step][var_index] for game_index, rewards in enumerate(all_rewards) for step, reward in enumerate(rewards)], axis=0) feed_dict[gradient_placeholder] = mean_gradients sess.run(training_op, feed_dict=feed_dict) if iteration % save_iterations == 0: saver.save(sess, "./my_policy_net_pg.ckpt") env.close() frames = render_policy_net("./my_policy_net_pg.ckpt", action, X, 
n_max_steps=1000) video = plot_animation(frames) plt.show() ``` # Markov Chains ``` transition_probabilities = [ [0.7, 0.2, 0.0, 0.1], # from s0 to s0, s1, s2, s3 [0.0, 0.0, 0.9, 0.1], # from s1 to ... [0.0, 1.0, 0.0, 0.0], # from s2 to ... [0.0, 0.0, 0.0, 1.0], # from s3 to ... ] n_max_steps = 50 def print_sequence(start_state=0): current_state = start_state print("States:", end=" ") for step in range(n_max_steps): print(current_state, end=" ") if current_state == 3: break current_state = rnd.choice(range(4), p=transition_probabilities[current_state]) else: print("...", end="") print() for _ in range(10): print_sequence() ``` # Markov Decision Process ``` transition_probabilities = [ [[0.7, 0.3, 0.0], [1.0, 0.0, 0.0], [0.8, 0.2, 0.0]], # in s0, if action a0 then proba 0.7 to state s0 and 0.3 to state s1, etc. [[0.0, 1.0, 0.0], None, [0.0, 0.0, 1.0]], [None, [0.8, 0.1, 0.1], None], ] rewards = [ [[+10, 0, 0], [0, 0, 0], [0, 0, 0]], [[0, 0, 0], [0, 0, 0], [0, 0, -50]], [[0, 0, 0], [+40, 0, 0], [0, 0, 0]], ] possible_actions = [[0, 1, 2], [0, 2], [1]] def policy_fire(state): return [0, 2, 1][state] def policy_random(state): return rnd.choice(possible_actions[state]) def policy_safe(state): return [0, 0, 1][state] class MDPEnvironment(object): def __init__(self, start_state=0): self.start_state=start_state self.reset() def reset(self): self.total_rewards = 0 self.state = self.start_state def step(self, action): next_state = rnd.choice(range(3), p=transition_probabilities[self.state][action]) reward = rewards[self.state][action][next_state] self.state = next_state self.total_rewards += reward return self.state, reward def run_episode(policy, n_steps, start_state=0, display=True): env = MDPEnvironment() if display: print("States (+rewards):", end=" ") for step in range(n_steps): if display: if step == 10: print("...", end=" ") elif step < 10: print(env.state, end=" ") action = policy(env.state) state, reward = env.step(action) if display and step < 10: if reward: print("({})".format(reward), end=" ") if display: print("Total rewards =", env.total_rewards) return env.total_rewards for policy in (policy_fire, policy_random, policy_safe): all_totals = [] print(policy.__name__) for episode in range(1000): all_totals.append(run_episode(policy, n_steps=100, display=(episode<5))) print("Summary: mean={:.1f}, std={:1f}, min={}, max={}".format(np.mean(all_totals), np.std(all_totals), np.min(all_totals), np.max(all_totals))) print() ``` # Q-Learning Q-Learning will learn the optimal policy by watching the random policy play. 
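Concretely, for every observed transition $(s, a, r, s')$ the code in the next cell applies the standard Q-Learning update

$$Q(s, a) \leftarrow (1 - \alpha)\,Q(s, a) + \alpha\left(r + \gamma \max_{a'} Q(s', a')\right)$$

where $\alpha$ is the learning rate (`alpha = 0.01` below) and $\gamma$ is the discount rate (`gamma = 0.99` below). Given enough exploration of every state-action pair, the Q-values converge towards those of the optimal policy even though the behaviour policy here is completely random.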
``` n_states = 3 n_actions = 3 n_steps = 20000 alpha = 0.01 gamma = 0.99 exploration_policy = policy_random q_values = np.full((n_states, n_actions), -np.inf) for state, actions in enumerate(possible_actions): q_values[state][actions]=0 env = MDPEnvironment() for step in range(n_steps): action = exploration_policy(env.state) state = env.state next_state, reward = env.step(action) next_value = np.max(q_values[next_state]) # greedy policy q_values[state, action] = (1-alpha)*q_values[state, action] + alpha*(reward + gamma * next_value) def optimal_policy(state): return np.argmax(q_values[state]) q_values all_totals = [] for episode in range(1000): all_totals.append(run_episode(optimal_policy, n_steps=100, display=(episode<5))) print("Summary: mean={:.1f}, std={:1f}, min={}, max={}".format(np.mean(all_totals), np.std(all_totals), np.min(all_totals), np.max(all_totals))) print() ``` # Learning to play MsPacman using Deep Q-Learning ``` env = gym.make("MsPacman-v0") obs = env.reset() obs.shape env.action_space ``` ## Preprocessing Preprocessing the images is optional but greatly speeds up training. ``` mspacman_color = np.array([210, 164, 74]).mean() def preprocess_observation(obs): img = obs[1:176:2, ::2] # crop and downsize img = img.mean(axis=2) # to greyscale img[img==mspacman_color] = 0 # Improve contrast img = (img - 128) / 128 - 1 # normalize from -1. to 1. return img.reshape(88, 80, 1) img = preprocess_observation(obs) plt.figure(figsize=(11, 7)) plt.subplot(121) plt.title("Original observation (160×210 RGB)") plt.imshow(obs) plt.axis("off") plt.subplot(122) plt.title("Preprocessed observation (88×80 greyscale)") plt.imshow(img.reshape(88, 80), interpolation="nearest", cmap="gray") plt.axis("off") save_fig("preprocessing_plot") plt.show() ``` ## Build DQN ``` tf.reset_default_graph() from tensorflow.contrib.layers import convolution2d, fully_connected input_height = 88 input_width = 80 input_channels = 1 conv_n_maps = [32, 64, 64] conv_kernel_sizes = [(8,8), (4,4), (3,3)] conv_strides = [4, 2, 1] conv_paddings = ["SAME"]*3 conv_activation = [tf.nn.relu]*3 n_hidden_inputs = 64 * 11 * 10 # conv3 has 64 maps of 11x10 each n_hidden = 512 hidden_activation = tf.nn.relu n_outputs = env.action_space.n initializer = tf.contrib.layers.variance_scaling_initializer() learning_rate = 0.01 def q_network(X_state, scope): prev_layer = X_state conv_layers = [] with tf.variable_scope(scope) as scope: for n_maps, kernel_size, stride, padding, activation in zip(conv_n_maps, conv_kernel_sizes, conv_strides, conv_paddings, conv_activation): prev_layer = convolution2d(prev_layer, num_outputs=n_maps, kernel_size=kernel_size, stride=stride, padding=padding, activation_fn=activation, weights_initializer=initializer) conv_layers.append(prev_layer) last_conv_layer_flat = tf.reshape(prev_layer, shape=[-1, n_hidden_inputs]) hidden = fully_connected(last_conv_layer_flat, n_hidden, activation_fn=hidden_activation, weights_initializer=initializer) outputs = fully_connected(hidden, n_outputs, activation_fn=None) trainable_vars = {var.name[len(scope.name):]: var for var in tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope=scope.name)} return outputs, trainable_vars X_state = tf.placeholder(tf.float32, shape=[None, input_height, input_width, input_channels]) actor_q_values, actor_vars = q_network(X_state, scope="q_networks/actor") # acts critic_q_values, critic_vars = q_network(X_state, scope="q_networks/critic") # learns copy_ops = [actor_var.assign(critic_vars[var_name]) for var_name, actor_var in 
actor_vars.items()] copy_critic_to_actor = tf.group(*copy_ops) with tf.variable_scope("train"): X_action = tf.placeholder(tf.int32, shape=[None]) y = tf.placeholder(tf.float32, shape=[None, 1]) q_value = tf.reduce_sum(critic_q_values * tf.one_hot(X_action, n_outputs), axis=1, keep_dims=True) cost = tf.reduce_mean(tf.square(y - q_value)) global_step = tf.Variable(0, trainable=False, name='global_step') optimizer = tf.train.AdamOptimizer(learning_rate) training_op = optimizer.minimize(cost, global_step=global_step) init = tf.global_variables_initializer() saver = tf.train.Saver() actor_vars from collections import deque replay_memory_size = 10000 replay_memory = deque([], maxlen=replay_memory_size) def sample_memories(batch_size): indices = rnd.permutation(len(replay_memory))[:batch_size] cols = [[], [], [], [], []] # state, action, reward, next_state, continue for idx in indices: memory = replay_memory[idx] for col, value in zip(cols, memory): col.append(value) cols = [np.array(col) for col in cols] return cols[0], cols[1], cols[2].reshape(-1, 1), cols[3], cols[4].reshape(-1, 1) eps_min = 0.05 eps_max = 1.0 eps_decay_steps = 50000 import sys def epsilon_greedy(q_values, step): epsilon = max(eps_min, eps_max - (eps_max-eps_min) * step/eps_decay_steps) if rnd.rand() < epsilon: return rnd.randint(n_outputs) # random action else: return np.argmax(q_values) # optimal action n_steps = 100000 # total number of training steps training_start = 1000 # start training after 1,000 game iterations training_interval = 3 # run a training step every 3 game iterations save_steps = 50 # save the model every 50 training steps copy_steps = 25 # copy the critic to the actor every 25 training steps discount_rate = 0.95 skip_start = 90 # Skip the start of every game (it's just waiting time). batch_size = 50 iteration = 0 # game iterations checkpoint_path = "./my_dqn.ckpt" done = True # env needs to be reset with tf.Session() as sess: if os.path.isfile(checkpoint_path): saver.restore(sess, checkpoint_path) else: init.run() while True: step = global_step.eval() if step >= n_steps: break iteration += 1 print("\rIteration {}\tTraining step {}/{} ({:.1f}%)".format(iteration, step, n_steps, step * 100 / n_steps), end="") if done: # game over, start again obs = env.reset() for skip in range(skip_start): # skip boring game iterations at the start of each game obs, reward, done, info = env.step(0) state = preprocess_observation(obs) # Actor evaluates what to do q_values = actor_q_values.eval(feed_dict={X_state: [state]}) action = epsilon_greedy(q_values, step) # Actor plays obs, reward, done, info = env.step(action) next_state = preprocess_observation(obs) # Let's memorize what happened replay_memory.append((state, action, reward, next_state, 1.0 - done)) state = next_state if iteration < training_start or iteration % training_interval != 0: continue # Critic learns X_state_val, X_action_val, rewards, X_next_state_val, continues = sample_memories(batch_size) next_q_values = actor_q_values.eval(feed_dict={X_state: X_next_state_val}) y_val = rewards + continues * discount_rate * np.max(next_q_values, axis=1, keepdims=True) training_op.run(feed_dict={X_state: X_state_val, X_action: X_action_val, y: y_val}) # Regularly copy critic to actor if step % copy_steps == 0: copy_critic_to_actor.run() # And save regularly if step % save_steps == 0: saver.save(sess, checkpoint_path) ``` # Exercise solutions Coming soon...
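As a quick illustration in the meantime (not one of the exercise solutions), here is a minimal sketch of watching the trained actor play greedily, reusing the objects defined above; it assumes the `./my_dqn.ckpt` checkpoint produced by the training loop exists:

```
# Minimal evaluation sketch (assumes training above has produced ./my_dqn.ckpt).
# The actor picks greedy actions, i.e. no epsilon-greedy exploration.
frames = []
n_max_steps = 10000

with tf.Session() as sess:
    saver.restore(sess, checkpoint_path)
    obs = env.reset()
    for step in range(n_max_steps):
        state = preprocess_observation(obs)
        q_values_val = actor_q_values.eval(feed_dict={X_state: [state]})
        action = np.argmax(q_values_val)            # greedy action
        obs, reward, done, info = env.step(action)
        frames.append(obs)                           # keep the raw RGB frame
        if done:
            break

video = plot_animation(frames)
plt.show()
```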
[![AnalyticsDojo](https://github.com/rpi-techfundamentals/spring2019-materials/blob/master/fig/final-logo.png?raw=1)](http://rpi.analyticsdojo.com)
<center><h1>Basic Text Feature Creation in Python</h1></center>
<center><h3><a href = 'http://rpi.analyticsdojo.com'>rpi.analyticsdojo.com</a></h3></center>

# Basic Text Feature Creation in Python

```
!wget https://raw.githubusercontent.com/rpi-techfundamentals/spring2019-materials/master/input/train.csv
!wget https://raw.githubusercontent.com/rpi-techfundamentals/spring2019-materials/master/input/test.csv

import numpy as np
import pandas as pd

train = pd.read_csv('train.csv')
test = pd.read_csv('test.csv')

#Print to standard output, and see the results in the "log" section below after running your script
train.head()

#Print to standard output, and see the results in the "log" section below after running your script
train.describe()

train.dtypes

#Let's look at the age field. We can see "NaN" (which indicates missing values).
train["Age"]

#Now let's recode.
medianAge = train["Age"].median()
print("The median age is:", medianAge, "years old.")
train["Age"] = train["Age"].fillna(medianAge)

#Option 2: all in one shot!
train["Age"] = train["Age"].fillna(train["Age"].median())
train["Age"]

#For recoding data, we can use what we know of selecting rows and columns
train["Embarked"] = train["Embarked"].fillna("S")
train.loc[train["Embarked"] == "S", "EmbarkedRecode"] = 0
train.loc[train["Embarked"] == "C", "EmbarkedRecode"] = 1
train.loc[train["Embarked"] == "Q", "EmbarkedRecode"] = 2

# We can also use something called a lambda function
# You can read more about the lambda function here.
# http://www.python-course.eu/lambda.php
gender_fn = lambda x: 0 if x == 'male' else 1
train['Gender'] = train['Sex'].map(gender_fn)

#or we can do it in one shot
train['NameLength'] = train['Name'].map(lambda x: len(x))
train['Age2'] = train['Age'].map(lambda x: x*x)
train

#We can start to create small helper functions that look for a string.
def has_title(name):
    for s in ['Mr.', 'Mrs.', 'Miss.', 'Dr.', 'Sir.']:
        if name.find(s) >= 0:
            return True
    return False

#Now we are using that separate function in another function.
title_fn = lambda x: 1 if has_title(x) else 0

#Finally, we call the function on the Name column of each dataframe.
train['Title'] = train['Name'].map(title_fn)
test['Title'] = test['Name'].map(title_fn)
test

#Writing to file
#Note: this assumes the test file includes a 'Survived' column.
submission = pd.DataFrame(test.loc[:, ['PassengerId', 'Survived']])

#Any files you save will be available in the output tab below
submission.to_csv('submission.csv', index=False)
```
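An optional variation on the title feature: instead of a binary flag, the title token itself can be pulled out with a regular expression. This is a sketch that assumes the names follow the usual `"Last, Title. First"` pattern in this dataset; the `TitleText` column name is just for illustration.

```
#Optional variation (assumes names look like "Braund, Mr. Owen Harris"):
#extract the title token itself rather than a 0/1 flag.
train['TitleText'] = train['Name'].str.extract(r',\s*([^\.]+)\.', expand=False)
test['TitleText'] = test['Name'].str.extract(r',\s*([^\.]+)\.', expand=False)

#Frequency of each title, useful for deciding which titles to keep or group.
train['TitleText'].value_counts()
```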
<a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W1D5_DimensionalityReduction/W1D5_Tutorial3.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # Neuromatch Academy: Week 1, Day 5, Tutorial 3 # Dimensionality Reduction and reconstruction __Content creators:__ Alex Cayco Gajic, John Murray __Content reviewers:__ Roozbeh Farhoudi, Matt Krause, Spiros Chavlis, Richard Gao, Michael Waskom --- # Tutorial Objectives In this notebook we'll learn to apply PCA for dimensionality reduction, using a classic dataset that is often used to benchmark machine learning algorithms: MNIST. We'll also learn how to use PCA for reconstruction and denoising. Overview: - Perform PCA on MNIST - Calculate the variance explained - Reconstruct data with different numbers of PCs - (Bonus) Examine denoising using PCA You can learn more about MNIST dataset [here](https://en.wikipedia.org/wiki/MNIST_database). ``` # @title Video 1: PCA for dimensionality reduction from IPython.display import YouTubeVideo video = YouTubeVideo(id="oO0bbInoO_0", width=854, height=480, fs=1) print("Video available at https://youtube.com/watch?v=" + video.id) video ``` --- # Setup Run these cells to get the tutorial started. ``` # Imports import numpy as np import matplotlib.pyplot as plt # @title Figure Settings import ipywidgets as widgets # interactive display %config InlineBackend.figure_format = 'retina' plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle") # @title Helper Functions def plot_variance_explained(variance_explained): """ Plots eigenvalues. Args: variance_explained (numpy array of floats) : Vector of variance explained for each PC Returns: Nothing. """ plt.figure() plt.plot(np.arange(1, len(variance_explained) + 1), variance_explained, '--k') plt.xlabel('Number of components') plt.ylabel('Variance explained') plt.show() def plot_MNIST_reconstruction(X, X_reconstructed): """ Plots 9 images in the MNIST dataset side-by-side with the reconstructed images. Args: X (numpy array of floats) : Data matrix each column corresponds to a different random variable X_reconstructed (numpy array of floats) : Data matrix each column corresponds to a different random variable Returns: Nothing. """ plt.figure() ax = plt.subplot(121) k = 0 for k1 in range(3): for k2 in range(3): k = k + 1 plt.imshow(np.reshape(X[k, :], (28, 28)), extent=[(k1 + 1) * 28, k1 * 28, (k2 + 1) * 28, k2 * 28], vmin=0, vmax=255) plt.xlim((3 * 28, 0)) plt.ylim((3 * 28, 0)) plt.tick_params(axis='both', which='both', bottom=False, top=False, labelbottom=False) ax.set_xticks([]) ax.set_yticks([]) plt.title('Data') plt.clim([0, 250]) ax = plt.subplot(122) k = 0 for k1 in range(3): for k2 in range(3): k = k + 1 plt.imshow(np.reshape(np.real(X_reconstructed[k, :]), (28, 28)), extent=[(k1 + 1) * 28, k1 * 28, (k2 + 1) * 28, k2 * 28], vmin=0, vmax=255) plt.xlim((3 * 28, 0)) plt.ylim((3 * 28, 0)) plt.tick_params(axis='both', which='both', bottom=False, top=False, labelbottom=False) ax.set_xticks([]) ax.set_yticks([]) plt.clim([0, 250]) plt.title('Reconstructed') plt.tight_layout() def plot_MNIST_sample(X): """ Plots 9 images in the MNIST dataset. Args: X (numpy array of floats) : Data matrix each column corresponds to a different random variable Returns: Nothing. 
""" fig, ax = plt.subplots() k = 0 for k1 in range(3): for k2 in range(3): k = k + 1 plt.imshow(np.reshape(X[k, :], (28, 28)), extent=[(k1 + 1) * 28, k1 * 28, (k2+1) * 28, k2 * 28], vmin=0, vmax=255) plt.xlim((3 * 28, 0)) plt.ylim((3 * 28, 0)) plt.tick_params(axis='both', which='both', bottom=False, top=False, labelbottom=False) plt.clim([0, 250]) ax.set_xticks([]) ax.set_yticks([]) plt.show() def plot_MNIST_weights(weights): """ Visualize PCA basis vector weights for MNIST. Red = positive weights, blue = negative weights, white = zero weight. Args: weights (numpy array of floats) : PCA basis vector Returns: Nothing. """ fig, ax = plt.subplots() cmap = plt.cm.get_cmap('seismic') plt.imshow(np.real(np.reshape(weights, (28, 28))), cmap=cmap) plt.tick_params(axis='both', which='both', bottom=False, top=False, labelbottom=False) plt.clim(-.15, .15) plt.colorbar(ticks=[-.15, -.1, -.05, 0, .05, .1, .15]) ax.set_xticks([]) ax.set_yticks([]) plt.show() def add_noise(X, frac_noisy_pixels): """ Randomly corrupts a fraction of the pixels by setting them to random values. Args: X (numpy array of floats) : Data matrix frac_noisy_pixels (scalar) : Fraction of noisy pixels Returns: (numpy array of floats) : Data matrix + noise """ X_noisy = np.reshape(X, (X.shape[0] * X.shape[1])) N_noise_ixs = int(X_noisy.shape[0] * frac_noisy_pixels) noise_ixs = np.random.choice(X_noisy.shape[0], size=N_noise_ixs, replace=False) X_noisy[noise_ixs] = np.random.uniform(0, 255, noise_ixs.shape) X_noisy = np.reshape(X_noisy, (X.shape[0], X.shape[1])) return X_noisy def change_of_basis(X, W): """ Projects data onto a new basis. Args: X (numpy array of floats) : Data matrix each column corresponding to a different random variable W (numpy array of floats) : new orthonormal basis columns correspond to basis vectors Returns: (numpy array of floats) : Data matrix expressed in new basis """ Y = np.matmul(X, W) return Y def get_sample_cov_matrix(X): """ Returns the sample covariance matrix of data X. Args: X (numpy array of floats) : Data matrix each column corresponds to a different random variable Returns: (numpy array of floats) : Covariance matrix """ X = X - np.mean(X, 0) cov_matrix = 1 / X.shape[0] * np.matmul(X.T, X) return cov_matrix def sort_evals_descending(evals, evectors): """ Sorts eigenvalues and eigenvectors in decreasing order. Also aligns first two eigenvectors to be in first two quadrants (if 2D). Args: evals (numpy array of floats) : Vector of eigenvalues evectors (numpy array of floats) : Corresponding matrix of eigenvectors each column corresponds to a different eigenvalue Returns: (numpy array of floats) : Vector of eigenvalues after sorting (numpy array of floats) : Matrix of eigenvectors after sorting """ index = np.flip(np.argsort(evals)) evals = evals[index] evectors = evectors[:, index] if evals.shape[0] == 2: if np.arccos(np.matmul(evectors[:, 0], 1 / np.sqrt(2) * np.array([1, 1]))) > np.pi / 2: evectors[:, 0] = -evectors[:, 0] if np.arccos(np.matmul(evectors[:, 1], 1 / np.sqrt(2)*np.array([-1, 1]))) > np.pi / 2: evectors[:, 1] = -evectors[:, 1] return evals, evectors def pca(X): """ Performs PCA on multivariate data. 
Eigenvalues are sorted in decreasing order Args: X (numpy array of floats) : Data matrix each column corresponds to a different random variable Returns: (numpy array of floats) : Data projected onto the new basis (numpy array of floats) : Vector of eigenvalues (numpy array of floats) : Corresponding matrix of eigenvectors """ X = X - np.mean(X, 0) cov_matrix = get_sample_cov_matrix(X) evals, evectors = np.linalg.eigh(cov_matrix) evals, evectors = sort_evals_descending(evals, evectors) score = change_of_basis(X, evectors) return score, evectors, evals def plot_eigenvalues(evals, limit=True): """ Plots eigenvalues. Args: (numpy array of floats) : Vector of eigenvalues Returns: Nothing. """ plt.figure() plt.plot(np.arange(1, len(evals) + 1), evals, 'o-k') plt.xlabel('Component') plt.ylabel('Eigenvalue') plt.title('Scree plot') if limit: plt.show() ``` --- # Section 1: Perform PCA on MNIST The MNIST dataset consists of a 70,000 images of individual handwritten digits. Each image is a 28x28 pixel grayscale image. For convenience, each 28x28 pixel image is often unravelled into a single 784 (=28*28) element vector, so that the whole dataset is represented as a 70,000 x 784 matrix. Each row represents a different image, and each column represents a different pixel. Enter the following cell to load the MNIST dataset and plot the first nine images. ``` from sklearn.datasets import fetch_openml mnist = fetch_openml(name='mnist_784') X = mnist.data plot_MNIST_sample(X) ``` The MNIST dataset has an extrinsic dimensionality of 784, much higher than the 2-dimensional examples used in the previous tutorials! To make sense of this data, we'll use dimensionality reduction. But first, we need to determine the intrinsic dimensionality $K$ of the data. One way to do this is to look for an "elbow" in the scree plot, to determine which eigenvalues are signficant. ## Exercise 1: Scree plot of MNIST In this exercise you will examine the scree plot in the MNIST dataset. **Steps:** - Perform PCA on the dataset and examine the scree plot. - When do the eigenvalues appear (by eye) to reach zero? (**Hint:** use `plt.xlim` to zoom into a section of the plot). ``` help(pca) help(plot_eigenvalues) ################################################# ## TO DO for students: perform PCA and plot the eigenvalues ################################################# # perform PCA # score, evectors, evals = ... # plot the eigenvalues # plot_eigenvalues(evals, limit=False) # plt.xlim(...) # limit x-axis up to 100 for zooming # to_remove solution # perform PCA score, evectors, evals = pca(X) # plot the eigenvalues with plt.xkcd(): plot_eigenvalues(evals, limit=False) plt.xlim([0, 100]) # limit x-axis up to 100 for zooming ``` --- # Section 2: Calculate the variance explained The scree plot suggests that most of the eigenvalues are near zero, with fewer than 100 having large values. Another common way to determine the intrinsic dimensionality is by considering the variance explained. This can be examined with a cumulative plot of the fraction of the total variance explained by the top $K$ components, i.e., \begin{equation} \text{var explained} = \frac{\sum_{i=1}^K \lambda_i}{\sum_{i=1}^N \lambda_i} \end{equation} The intrinsic dimensionality is often quantified by the $K$ necessary to explain a large proportion of the total variance of the data (often a defined threshold, e.g., 90%). ## Exercise 2: Plot the explained variance In this exercise you will plot the explained variance. 
**Steps:** - Fill in the function below to calculate the fraction variance explained as a function of the number of principal componenets. **Hint:** use `np.cumsum`. - Plot the variance explained using `plot_variance_explained`. **Questions:** - How many principal components are required to explain 90% of the variance? - How does the intrinsic dimensionality of this dataset compare to its extrinsic dimensionality? ``` help(plot_variance_explained) def get_variance_explained(evals): """ Calculates variance explained from the eigenvalues. Args: evals (numpy array of floats) : Vector of eigenvalues Returns: (numpy array of floats) : Vector of variance explained """ ################################################# ## TO DO for students: calculate the explained variance using the equation ## from Section 2. # Comment once you've filled in the function raise NotImplementedError("Student excercise: calculate explaine variance!") ################################################# # cumulatively sum the eigenvalues csum = ... # normalize by the sum of eigenvalues variance_explained = ... return variance_explained ################################################# ## TO DO for students: call the function and plot the variance explained ################################################# # calculate the variance explained variance_explained = ... # Uncomment to plot the variance explained # plot_variance_explained(variance_explained) # to_remove solution def get_variance_explained(evals): """ Plots eigenvalues. Args: (numpy array of floats) : Vector of eigenvalues Returns: Nothing. """ # cumulatively sum the eigenvalues csum = np.cumsum(evals) # normalize by the sum of eigenvalues variance_explained = csum / np.sum(evals) return variance_explained # calculate the variance explained variance_explained = get_variance_explained(evals) with plt.xkcd(): plot_variance_explained(variance_explained) ``` --- # Section 3: Reconstruct data with different numbers of PCs Now we have seen that the top 100 or so principal components of the data can explain most of the variance. We can use this fact to perform *dimensionality reduction*, i.e., by storing the data using only 100 components rather than the samples of all 784 pixels. Remarkably, we will be able to reconstruct much of the structure of the data using only the top 100 components. To see this, recall that to perform PCA we projected the data $\bf X$ onto the eigenvectors of the covariance matrix: \begin{equation} \bf S = X W \end{equation} Since $\bf W$ is an orthogonal matrix, ${\bf W}^{-1} = {\bf W}^T$. So by multiplying by ${\bf W}^T$ on each side we can rewrite this equation as \begin{equation} {\bf X = S W}^T. \end{equation} This now gives us a way to reconstruct the data matrix from the scores and loadings. To reconstruct the data from a low-dimensional approximation, we just have to truncate these matrices. Let's call ${\bf S}_{1:K}$ and ${\bf W}_{1:K}$ as keeping only the first $K$ columns of this matrix. Then our reconstruction is: \begin{equation} {\bf \hat X = S}_{1:K} ({\bf W}_{1:K})^T. \end{equation} ## Exercise 3: Data reconstruction Fill in the function below to reconstruct the data using different numbers of principal components. **Steps:** * Fill in the following function to reconstruct the data based on the weights and scores. Don't forget to add the mean! * Make sure your function works by reconstructing the data with all $K=784$ components. The two images should look identical. 
``` help(plot_MNIST_reconstruction) def reconstruct_data(score, evectors, X_mean, K): """ Reconstruct the data based on the top K components. Args: score (numpy array of floats) : Score matrix evectors (numpy array of floats) : Matrix of eigenvectors X_mean (numpy array of floats) : Vector corresponding to data mean K (scalar) : Number of components to include Returns: (numpy array of floats) : Matrix of reconstructed data """ ################################################# ## TO DO for students: Reconstruct the original data in X_reconstructed # Comment once you've filled in the function raise NotImplementedError("Student excercise: reconstructing data function!") ################################################# # Reconstruct the data from the score and eigenvectors # Don't forget to add the mean!! X_reconstructed = ... return X_reconstructed K = 784 ################################################# ## TO DO for students: Calculate the mean and call the function, then plot ## the original and the recostructed data ################################################# # Reconstruct the data based on all components X_mean = ... X_reconstructed = ... # Plot the data and reconstruction # plot_MNIST_reconstruction(X, X_reconstructed) # to_remove solution def reconstruct_data(score, evectors, X_mean, K): """ Reconstruct the data based on the top K components. Args: score (numpy array of floats) : Score matrix evectors (numpy array of floats) : Matrix of eigenvectors X_mean (numpy array of floats) : Vector corresponding to data mean K (scalar) : Number of components to include Returns: (numpy array of floats) : Matrix of reconstructed data """ # Reconstruct the data from the score and eigenvectors # Don't forget to add the mean!! X_reconstructed = np.matmul(score[:, :K], evectors[:, :K].T) + X_mean return X_reconstructed K = 784 # Reconstruct the data based on all components X_mean = np.mean(X, 0) X_reconstructed = reconstruct_data(score, evectors, X_mean, K) # Plot the data and reconstruction with plt.xkcd(): plot_MNIST_reconstruction(X, X_reconstructed) ``` ## Interactive Demo: Reconstruct the data matrix using different numbers of PCs Now run the code below and experiment with the slider to reconstruct the data matrix using different numbers of principal components. **Steps** * How many principal components are necessary to reconstruct the numbers (by eye)? How does this relate to the intrinsic dimensionality of the data? * Do you see any information in the data with only a single principal component? ``` # @title # @markdown Make sure you execute this cell to enable the widget! def refresh(K=100): X_reconstructed = reconstruct_data(score, evectors, X_mean, K) plot_MNIST_reconstruction(X, X_reconstructed) plt.title('Reconstructed, K={}'.format(K)) _ = widgets.interact(refresh, K=(1, 784, 10)) ``` ## Exercise 4: Visualization of the weights Next, let's take a closer look at the first principal component by visualizing its corresponding weights. **Steps:** * Enter `plot_MNIST_weights` to visualize the weights of the first basis vector. * What structure do you see? Which pixels have a strong positive weighting? Which have a strong negative weighting? What kinds of images would this basis vector differentiate? * Try visualizing the second and third basis vectors. Do you see any structure? What about the 100th basis vector? 500th? 700th? 
``` help(plot_MNIST_weights) ################################################# ## TO DO for students: plot the weights calling the plot_MNIST_weights function ################################################# # Plot the weights of the first principal component # plot_MNIST_weights(...) # to_remove solution # Plot the weights of the first principal component with plt.xkcd(): plot_MNIST_weights(evectors[:, 0]) ``` --- # Summary * In this tutorial, we learned how to use PCA for dimensionality reduction by selecting the top principal components. This can be useful as the intrinsic dimensionality ($K$) is often less than the extrinsic dimensionality ($N$) in neural data. $K$ can be inferred by choosing the number of eigenvalues necessary to capture some fraction of the variance. * We also learned how to reconstruct an approximation of the original data using the top $K$ principal components. In fact, an alternate formulation of PCA is to find the $K$ dimensional space that minimizes the reconstruction error. * Noise tends to inflate the apparent intrinsic dimensionality, however the higher components reflect noise rather than new structure in the data. PCA can be used for denoising data by removing noisy higher components. * In MNIST, the weights corresponding to the first principal component appear to discriminate between a 0 and 1. We will discuss the implications of this for data visualization in the following tutorial. --- # Bonus: Examine denoising using PCA In this lecture, we saw that PCA finds an optimal low-dimensional basis to minimize the reconstruction error. Because of this property, PCA can be useful for denoising corrupted samples of the data. ## Exercise 5: Add noise to the data In this exercise you will add salt-and-pepper noise to the original data and see how that affects the eigenvalues. **Steps:** - Use the function `add_noise` to add noise to 20% of the pixels. - Then, perform PCA and plot the variance explained. How many principal components are required to explain 90% of the variance? How does this compare to the original data? ``` help(add_noise) ################################################################### # Insert your code here to: # Add noise to the data # Plot noise-corrupted data # Perform PCA on the noisy data # Calculate and plot the variance explained ################################################################### np.random.seed(2020) # set random seed X_noisy = ... # score_noisy, evectors_noisy, evals_noisy = ... # variance_explained_noisy = ... # plot_MNIST_sample(X_noisy) # plot_variance_explained(variance_explained_noisy) # to_remove solution np.random.seed(2020) # set random seed X_noisy = add_noise(X, .2) score_noisy, evectors_noisy, evals_noisy = pca(X_noisy) variance_explained_noisy = get_variance_explained(evals_noisy) with plt.xkcd(): plot_MNIST_sample(X_noisy) plot_variance_explained(variance_explained_noisy) ``` ## Exercise 6: Denoising Next, use PCA to perform denoising by projecting the noise-corrupted data onto the basis vectors found from the original dataset. By taking the top K components of this projection, we can reduce noise in dimensions orthogonal to the K-dimensional latent space. **Steps:** - Subtract the mean of the noise-corrupted data. - Project the data onto the basis found with the original dataset (`evectors`, not `evectors_noisy`) and take the top $K$ components. - Reconstruct the data as normal, using the top 50 components. - Play around with the amount of noise and K to build intuition. 
```
###################################################################
# Insert your code here to:
#   Subtract the mean of the noise-corrupted data
#   Project onto the original basis vectors evectors
#   Reconstruct the data using the top 50 components
#   Plot the result
###################################################################

X_noisy_mean = ...
projX_noisy = ...
X_reconstructed = ...
# plot_MNIST_reconstruction(X_noisy, X_reconstructed)

# to_remove solution
X_noisy_mean = np.mean(X_noisy, 0)
projX_noisy = np.matmul(X_noisy - X_noisy_mean, evectors)
X_reconstructed = reconstruct_data(projX_noisy, evectors, X_noisy_mean, 50)

with plt.xkcd():
  plot_MNIST_reconstruction(X_noisy, X_reconstructed)
```
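As an optional sanity check (not part of the tutorial), the hand-written `pca` function can be compared against scikit-learn's implementation, which is already available since the MNIST data was loaded through `sklearn`. The cumulative fraction of variance explained should agree up to small numerical differences.

```
# Optional sanity check: compare our eigenvalue-based variance explained with
# scikit-learn's PCA on the clean data.
from sklearn.decomposition import PCA

K = 100
sk_pca = PCA(n_components=K).fit(X)

sk_variance_explained = np.cumsum(sk_pca.explained_variance_ratio_)
print(np.allclose(variance_explained[:K], sk_variance_explained, rtol=1e-3))
```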
# Quick Start **A tutorial on Renormalized Mutual Information** We describe in detail the implementation of RMI estimation in the very simple case of a Gaussian distribution. Of course, in this case the optimal feature is given by the Principal Component Analysis ``` import numpy as np # parameters of the Gaussian distribution mu = [0,0] sigma = [[1, 0.5],[0.5,2]] # extract the samples N_samples = 100000 samples = np.random.multivariate_normal(mu, sigma, N_samples ) ``` Visualize the distribution with a 2D histogram ``` import matplotlib.pyplot as plt plt.figure() plt.hist2d(*samples.T, bins=100, cmap=plt.cm.binary) plt.gca().set_aspect("equal") plt.xlabel("$x_1$") plt.ylabel("$x_2$") plt.title("$P_x(x)$") plt.show() ``` ## Estimate Renormalized Mutual Information of a feature Now we would like to find a one-dimensional function $f(x_1,x_2)$ to describe this 2d distribution. ### Simplest feature For example, we could consider ignoring one of the variables: ``` def f(x): # feature # shape [N_samples, N_features=1] return x[:,0][...,None] def grad_f(x): # gradient # shape [N_samples, N_features=1, N_x=2] grad_f = np.zeros([len(x),1,2]) grad_f[...,0] = 1 return grad_f def feat_and_grad(x): return f(x), grad_f(x) ``` Let's plot it on top of the distribution ``` # Range of the plot xmin = -4 xmax = 4 # Number of points in the grid N = 100 # We evaluate the feature on a grid x_linspace = np.linspace(xmin, xmax, N) x1_grid, x2_grid = np.meshgrid(x_linspace, x_linspace, indexing='ij') x_points = np.array([x1_grid.flatten(), x2_grid.flatten()]).T feature = f(x_points) gradient = grad_f(x_points) plt.figure() plt.title("Feature contours") plt.xlabel(r"$x_1$") plt.ylabel(r"$x_2$") plt.gca().set_aspect('equal') # Draw the input distribution on the background plt.hist2d(*samples.T, bins=100, cmap=plt.cm.binary) # Draw the contour lines of the extracted feature plt.xlim([-4,4]) plt.ylim([-4,4]) plt.contour(x1_grid, x2_grid, feature.reshape([N,N]),15, linewidths=4, cmap=plt.cm.Blues) plt.colorbar() plt.show() ``` $f(x)=x_1$ is clearly a linear function that ignores $x_2$ and increases in the $x_1$ direction **How much information does it give us on $x$?** If we used common mutual information, it would be $\infty$, because $f$ is a deterministic function, and $H(y|x) = -\log \delta(0)$. Let's estimate the renormalized mutual information ``` import rmi.estimation as inf samples = np.random.multivariate_normal(mu, sigma, N_samples ) feature = f(samples) gradient = grad_f(samples) RMI = inf.RenormalizedMutualInformation(feature, gradient) print("Renormalized Mutual Information (x,f(x)): %2.2f" % RMI) ``` Please note that we perform the plot by calculating the feature on a uniform grid. But, to estimate RMI, the feature should be calculated on x **sampled** from the $x$ distribution. In particular, we have ``` p_y, delta_y = inf.produce_P(feature) entropy = inf.Entropy(p_y, delta_y) fterm = inf.RegTerm(gradient) print("Entropy\t %2.2f" % entropy) print("Fterm\t %2.2f" % fterm) print("Renormalized Mutual Information (x,f(x)): %2.2f" % (entropy + fterm)) ``` Renormalized Mutual Information is the sum of the two terms - Entropy - RegTerm ### Reparametrization invariance Do we gain information if we increase the variance of the feature? For example, let's rescale our feature. 
Clearly the information on $x$ should remain the same ``` scale_factor = 4 feature *= scale_factor gradient *= scale_factor RMI = inf.RenormalizedMutualInformation(feature, gradient) print("Renormalized Mutual Information (x,f(x)): %2.2f" % RMI) p_y, delta_y = inf.produce_P(feature) entropy = inf.Entropy(p_y, delta_y) fterm = inf.RegTerm(gradient) print("Entropy\t %2.2f" % entropy) print("Fterm\t %2.2f" % fterm) ``` Let's try even a non-linear transformation. As long as it is invertible, we will get the same RMI ``` # For example y_lin = np.linspace(-4,4,100) f_lin = y_lin**3 + 5*y_lin plt.figure() plt.title("Reparametrization function") plt.plot(y_lin, f_lin) plt.show() feature_new = feature**3 + 5*feature gradient_new = 3*feature[...,None]**2*gradient +5*gradient# chain rule... RMI = inf.RenormalizedMutualInformation(feature_new, gradient_new, 2000) print("Renormalized Mutual Information (x,f(x)): %2.2f" % RMI) p_y, delta_y = inf.produce_P(feature_new) entropy = inf.Entropy(p_y, delta_y) fterm = inf.RegTerm(gradient_new) print("Entropy\t %2.2f" % entropy) print("Fterm\t %2.2f" % fterm) ``` In this case, we have to increase the number of bins to calculate the Entropy with reasonable accuracy. The reason is that the feature now spans a quite larger range but changes very rapidly in the few bins around zero (but we use a uniform binning when estimating the entropy). ``` plt.hist(feature_new,1000) plt.show() ``` And if we instead appliead a **non-invertible** transformation? The consequence is clear: we will **lose information**. Consider for example: ``` feature_new = feature**2 gradient_new = 2*feature[...,None]*gradient # chain rule... RMI_2 = inf.RenormalizedMutualInformation(feature_new, gradient_new, 2000) print("Renormalized Mutual Information (x,f(x)): %2.2f" % RMI_2) p_y, delta_y = inf.produce_P(feature_new) entropy = inf.Entropy(p_y, delta_y) fterm = inf.RegTerm(gradient_new) print("Entropy\t %2.2f" % entropy) print("Fterm\t %2.2f" % fterm) plt.hist(feature_new,1000) plt.show() ``` The careful observer will be able to guess how much information we have lost in this case: our feature is centered in zero and we squared it. We lose the sign, and on average the half of the samples have one sign and the half the other sign. One bit of information is lost. The difference is $\log 2$! ``` deltaRMI = RMI - RMI_2 print("delta RMI %2.3f" %deltaRMI) print("log 2 = %2.3f" % np.log(2)) ``` ### Another feature Let's take another linear feature, for example, this time in the other direction ``` def f(x): # feature # shape [N_samples, N_features=1] return x[:,1][...,None] def grad_f(x): # gradient # shape [N_samples, N_features=1, N_x=2] grad_f = np.zeros([len(x),1,2]) grad_f[...,1] = 1 return grad_f def feat_and_grad(x): return f(x), grad_f(x) feature = f(x_points) gradient = grad_f(x_points) plt.figure() plt.title("Feature contours") plt.xlabel(r"$x_1$") plt.ylabel(r"$x_2$") plt.gca().set_aspect('equal') # Draw the input distribution on the background samples = np.random.multivariate_normal(mu, sigma, N_samples ) plt.hist2d(*samples.T, bins=100, cmap=plt.cm.binary) # Draw the contour lines of the extracted feature plt.xlim([-4,4]) plt.ylim([-4,4]) plt.contour(x1_grid, x2_grid, feature.reshape([N,N]),15, linewidths=4, cmap=plt.cm.Blues) plt.colorbar() plt.show() feature = f(samples) gradient = grad_f(samples) RMI = inf.RenormalizedMutualInformation(feature, gradient) print("Renormalized Mutual Information (x,f(x)): %2.2f" % RMI) ``` This feature seems to better describe our input. 
This is reasonable: it lies closer to the direction of larger fluctuation of the distribution. What is the best linear feature that we can take? ``` # Let's define a linear feature def linear(x, th): """ linear increasing in the direction given by angle th. Args: x (array_like): [N_samples, 2] array of samples th (float): direction of the feature in which it increases Returns: feature (array_like): [N_samples, 1] feature grad_feature (array_like): [N_samples, 1, N_x] gradient of the feature """ Feature = x[:, 0]*np.cos(th) + x[:, 1]*np.sin(th) Grad1 = np.full(np.shape(x)[0], np.cos(th)) Grad2 = np.full(np.shape(x)[0], np.sin(th)) return Feature, np.array([Grad1, Grad2]).T samples = np.random.multivariate_normal(mu, sigma, N_samples ) th_lin = np.linspace(0,np.pi, 30) rmis = [] for th in th_lin: feature, grad = linear(samples, th) rmi = inf.RenormalizedMutualInformation(feature,grad) rmis.append([th,rmi]) rmis = np.array(rmis) plt.figure() plt.title("Best linear feature") plt.xlabel("$\theta$") plt.ylabel(r"$RMI(x,f_\theta(x))$") plt.plot(rmis[:,0], rmis[:,1]) plt.show() best_theta = th_lin[np.argmax(rmis[:,1])] ``` Let's plot the feature with the parameter that gives the largest Renormalized Mutual Information ``` feature, gradient = linear(x_points,best_theta) plt.figure() plt.title("Feature contours") plt.xlabel(r"$x_1$") plt.ylabel(r"$x_2$") plt.gca().set_aspect('equal') # Draw the input distribution on the background samples = np.random.multivariate_normal(mu, sigma, N_samples ) plt.hist2d(*samples.T, bins=100, cmap=plt.cm.binary) # Draw the contour lines of the extracted feature plt.xlim([-4,4]) plt.ylim([-4,4]) plt.contour(x1_grid, x2_grid, feature.reshape([N,N]),15, linewidths=4, cmap=plt.cm.Blues) plt.colorbar() plt.show() feature, gradient = linear(samples,best_theta) RMI = inf.RenormalizedMutualInformation(feature, gradient) print("Renormalized Mutual Information (x,f(x)): %2.2f" % RMI) ``` This is the same feature that we would get if we considered the first Principal Component of PCA. This is the only case in which this is possible: PCA can only extract linear features, and in particular, since it only takes into account the covariance matrix of the distribution, it can provide the best feature only for a Gaussian (which is identified by its mean and covariance matrix) ``` import rmi.pca samples = np.random.multivariate_normal(mu, sigma, N_samples ) g_pca = rmi.pca.pca(samples,1) eigenv = g_pca.w[0] angle_pca = np.arctan(eigenv[1]/eigenv[0]) feature, gradient = linear(samples,angle_pca) RMI = inf.RenormalizedMutualInformation(feature, gradient) print("Renormalized Mutual Information (x,f(x)): %2.2f" % RMI) print("best found angle %2.2f" %best_theta) print("pca direction %2.2f" %angle_pca) ``` We recall that in this very special case, and as long as the proposed feature is only rotated (without changing the scale), the simple maximization of the Feature Entropy would have given the same result. Again, this only holds for linear features, and in particular for those whose gradient vector is not affected by a change of parameters). As soon as we use a non-linear feature, just looking at the entropy of the feature is not enough anymore - entropy is not reparametrization invariant. Also, given an arbitrary deterministic feature function, RMI is the only quantity that allows to estimate it's dependence with its arguments ## Feature Optimization Let's try now to optimize a neural network to extract a feature. 
In this case, as we already discussed, we will still get a linear feature ``` import rmi.neuralnets as nn # Define the layout of the neural network # The cost function is implicit when choosing the model RMIOptimizer rmi_optimizer = nn.RMIOptimizer( layers=[ nn.K.layers.Dense(30, activation="relu",input_shape=(2,)), nn.K.layers.Dense(1) ]) # Compile the network === choose the optimizer to use during the training rmi_optimizer.compile(optimizer=nn.tf.optimizers.Adam(1e-3)) # Print the table with the structure of the network rmi_optimizer.summary() # Define an objects that handles the training rmi_net = nn.Net(rmi_optimizer) # Perform the training of the neural network batchsize = 1000 N_train = 5000 def get_batch(): return np.random.multivariate_normal(mu, sigma, batchsize) rmi_net.fit_generator(get_batch, N_train) # Plot the training history (value of RMI) # The large fluctuations can be reduced by increasing the batchsize rmi_net.plot_history() ``` Calculate the feature on the input points: just apply the object `rmi_net`! ``` feature = rmi_net(x_points) plt.figure() plt.title("Feature contours") plt.xlabel(r"$x_1$") plt.ylabel(r"$x_2$") plt.gca().set_aspect('equal') # Draw the input distribution on the background samples = np.random.multivariate_normal(mu, sigma, N_samples ) plt.hist2d(*samples.T, bins=100, cmap=plt.cm.binary) # Draw the contour lines of the extracted feature plt.xlim([-4,4]) plt.ylim([-4,4]) plt.contour(x1_grid, x2_grid, feature.reshape([N,N]),15, linewidths=4, cmap=plt.cm.Blues) plt.colorbar() plt.show() ``` To calculate also the gradient of the feature, one can use the function `get_feature_and_grad` ``` feature, gradient = rmi_net.get_feature_and_grad(samples) RMI = inf.RenormalizedMutualInformation(feature, gradient) print("Renormalized Mutual Information (x,f(x)): %2.2f" % RMI) ``` ## Tradeoff between simplicity and compression When optimizing renormalized mutual information to obtain a **meaningful feature** (in the sense of representation learning), one should avoid to employ too powerful networks. A good feature should set a convenient tradeoff between its **"simplicity"** (i.e. number of parameters, or how "smooth" the feature is) and its **information content** (i.e. how much the input space is compressed in a smaller dimension). In other words, useful representations should be "well-behaved", even at the price of reducing their renormalized mutual information. 
We can show this idea in a straight forward example ``` # Let's define a linear feature def cheating_feature(x): Feature = x[:, 0]*np.cos(best_theta) + x[:, 1]*np.sin(best_theta) step_size = 3 step_width = 1/12 step_argument = x[:, 0]*np.cos(best_theta+np.pi/2) + x[:, 1]*np.sin(best_theta+np.pi/2) Feature +=step_size*np.tanh(step_argument/step_width) Grad1 = np.full(x.shape[0], np.cos(best_theta)) Grad2 = np.full(x.shape[0], np.sin(best_theta)) Grad1 += step_size/step_width*np.cos(best_theta+np.pi/2)/np.cosh(step_argument/step_width)**2 Grad2 += step_size/step_width*np.sin(best_theta+np.pi/2)/np.cosh(step_argument/step_width)**2 return Feature, np.array([Grad1, Grad2]).T samples = np.random.multivariate_normal(mu, sigma, N_samples ) feature, gradient = cheating_feature(x_points) plt.figure() plt.title("Feature contours") plt.xlabel(r"$x_1$") plt.ylabel(r"$x_2$") plt.gca().set_aspect('equal') # Draw the input distribution on the background samples = np.random.multivariate_normal(mu, sigma, N_samples ) plt.hist2d(*samples.T, bins=100, cmap=plt.cm.binary) # Draw the contour lines of the extracted feature plt.xlim([-4,4]) plt.ylim([-4,4]) plt.contour(x1_grid, x2_grid, feature.reshape([N,N]),15, linewidths=4, cmap=plt.cm.Blues) plt.colorbar() plt.show() feature, gradient = cheating_feature(samples) RMI = inf.RenormalizedMutualInformation(feature, gradient) print("Renormalized Mutual Information (x,f(x)): %2.2f" % RMI) p_y, delta_y = inf.produce_P(feature) entropy = inf.Entropy(p_y, delta_y) fterm = inf.RegTerm(gradient) print("Entropy\t %2.2f" % entropy) print("Fterm\t %2.2f" % fterm) ``` This feature has a larger mutual information than the linear one. It is still increasing in the direction of largest variance of $x$. However, it contains a _jump_ in the orthogonal direction. This jump allows to encode a "bit" of additional information (about the orthogonal coordinate), allowing to unambiguously distinguish whether $x$ was extracted on the left or right side of the Gaussian. In principle, one can add an arbitrary number of jumps until the missing coordinate can be identified with arbitrary precision. This feature would have an arbitrary high renormalized mutual information (as it should be, since it contains more information on $x$). However, such a non-smooth feature is definitely not useful for feature extraction! One can avoid these extremely compressed representations by encouraging simpler features (like smooth, or a neural network with a limited number of layers for example). ``` # Histogram of the feature # The continuous value of x encodes one coordinate, # the two peaks of the distribution provide additional information # on the second coordinate! plt.hist(feature,1000) plt.show() ``` ## Conclusions This technique can be applied to - estimate the information that a deterministic feature $f(x)$ carries about a (higher-dimensional) $x$ - in other words, to estimate how useful is a given "macroscopic" quantity to describe a system? - extract non-linear representations in an unsupervised way, by optimizinng Renormalized Mutual Information. For more examples: - see the notebooks with the spiral-shaped distribution for an example with a non-Gaussian input distribution - see the Wave Packet and Liquid Drop notebooks for proof-of-concept applications in physics (or in general for higher-dimensional input spaces and extraction of a two-dimensional feature) At the moment, only one-dimensional or two-dimensional features can be extracted with the neural network class. 
This is due to the implementation of the Entropy estimation, which is currently based on a histogram and therefore scales poorly to higher dimensions. An alternative (differentiable) way of estimating the Entropy would allow this technique to be extended to features with more than two dimensions.
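To make that limitation concrete, here is a minimal sketch of a histogram-based differential entropy estimate for a one-dimensional feature. It illustrates the general approach (and why the number of bins explodes with the feature dimension); it is not the exact implementation used in `rmi.estimation`.

```
import numpy as np

def histogram_entropy(samples, n_bins=100):
    """Differential entropy of a 1D feature, estimated from a histogram:
    h(y) ~ -sum_i P_i * ln(P_i) + ln(bin_width), with P_i the probability
    mass falling in bin i."""
    counts, edges = np.histogram(samples, bins=n_bins)
    delta = edges[1] - edges[0]          # uniform bin width
    p = counts / counts.sum()            # probability mass per bin
    p = p[p > 0]                         # avoid log(0)
    return -np.sum(p * np.log(p)) + np.log(delta)

# Example: the entropy of a standard Gaussian is 0.5*ln(2*pi*e) ~ 1.419
y = np.random.normal(size=100000)
print(histogram_entropy(y))
```

In $d$ dimensions a comparable resolution requires roughly `n_bins**d` bins, which is why the histogram approach restricts the current implementation to one- or two-dimensional features.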
---
# UPDATE This notebook is no longer being used. Please look at the most recent version, NLSS_V2 found in the same directory. My project looks at the Northernlion Live Super Show, a thrice a week Twitch stream which has been running since 2013. Unlike a video service like Youtube, the live nature of Twitch allows for a more conversational 'live comment stream' to accompany a video. My goal is to gather statistics about the episodes and pair this with a list of all the comments corrosponding to a video. Using this I will attempt to recognize patterns in Twitch comments based on the video statistics. ``` # Every returned Out[] is displayed, not just the last one. from IPython.core.interactiveshell import InteractiveShell InteractiveShell.ast_node_interactivity = "all" import nltk import pandas as pd import numpy as np ``` A text file containing basic information about every NLSS episode must be organized into something usable ``` with open(r'data\NLSS_Dockets.txt') as f: file = f.read() shows = file.split('\n\n') #split into every show shows[:5] ``` This text file was taken from a webpage and so it contains links to Nick's livestream. Let's get rid of this since it's not needed. ``` index = 0 for s in shows: shows[index] = s.replace(' Nick View', '') index+=1 shows[-10:] ``` Now I need to split up each show into their meaningful parts. Let's start with the games played on each episode. ``` games = [] for s in shows: g = s.split('\n') #Text files has games on second line games.append(g[1]) games ``` I'll have to clean these up later to make sure all the games are spelled consistantly. ``` #Number of dockets, not individual games len(games) ``` Now lets take a look at the first lines of the text file which contain the date of the show and the people who joined the show that day. They are seperated in the file by (). ``` date_crew = [] for s in shows: dc = s.split('\n')[0] date_crew.append(dc) print(date_crew) ``` I'm going to use regex to split these up. ``` import re date = [] crew = [] for entry in date_crew: foo = re.search(r'\((.*)\)', entry).group(1) d = foo.split(r')')[0] date.append(d) c = foo.split(r'(')[-1] crew.append(c) ``` Now I'll start creating a data frame of this information ``` date_df = pd.DataFrame(date, columns = ["Date"]) date_df.head() games_df = pd.DataFrame(games, columns = ["Docket"]) games_df.head() crew_df = pd.DataFrame(crew, columns = ["Crew"]) crew_df.head() ``` Now combine them ``` nlss_df = pd.DataFrame() nlss_df['Date'] = date_df['Date'] nlss_df['Crew'] = crew_df['Crew'] nlss_df['Docket'] = games_df['Docket'] nlss_df.head() nlss_df.describe() ``` I noticed that some lines had a link called "(continued)" in the games list. I want to get rid of these. While I'm at it, let's make the games docket contain the games as lists. ``` improved = [] #For each docket for d in nlss_df['Docket']: #Split docket into list of games d = d.split(r',') #For each game for g in d: #If game matches string to remove if g == r" (continued)" or g == r" (Continued)": #Remove game d.remove(g) improved.append(d) nlss_df['Docket'] = improved nlss_df.head() nlss_df['Crew'] ``` I want to split on "w/" so each crew member is individual item. I'm also going to put them into a list. 
``` improved = [] #For each cast of crew for e in nlss_df['Crew']: #Split cast into list of members e = e.split(r',') #For each member for m in e: #If member contains a /w if r'w/' in m: both = m.split(r'w/') e.remove(m) e.extend(both) improved.append(e) improved[:20] ``` Strip extra spaces ``` fullstripped = [] for entry in improved: stripped = [] for member in entry: member = member.strip(' ') stripped.append(member) fullstripped.append(stripped) fullstripped[:10] ``` Let's make the names consistant. Luckily I know the aliases that are used. Let's see what we're working with. ``` names = [] for entry in fullstripped: for user in entry: if user not in names: names.append(user) print(names) ``` Translated: Northernlion, RockLeeSmile, CobaltStreak, AlpacaPatrol, LastGreyWolf, HCJustin, BaerTaffy, JSmithOTI, Sinvicta, DanGheesling, MALF, FlackBlag, TotalBiscuit, LovelyMomo, Blueman, BaerTaffy, MathasGames, Crendor, BananasaurusRex, NOTREAL, AlpacaPatrol, DanGheesling, BananasaurusRex, MALF, Arumba, BaerTaffy, CobaltStreak, MALF, Magresta, Northernlion, JSmithOTI, RockLeeSmile, LovelyMomo, MikeBithell, RedPandaGamer, OhmWrecker, PrescriptionPixel, Green9090 ``` foo = "Northernlion, RockLeeSmile, CobaltStreak, AlpacaPatrol, LastGreyWolf, HCJustin, BaerTaffy, JSmithOTI, Sinvicta, DanGheesling, MALF, FlackBlag, TotalBiscuit, LovelyMomo, Blueman, BaerTaffy, MathasGames, Crendor, BananasaurusRex, NOTREAL, AlpacaPatrol, DanGheesling, NOTREAL, BananasaurusRex, MALF, Arumba, BaerTaffy, CobaltStreak, MALF, Magresta, Northernlion, JSmithOTI, RockLeeSmile, LovelyMomo, MikeBithell, RedPandaGamer, OhmWrecker, PrescriptionPixel, Green9090" translated = foo.split(", ") translated guests = [] for cast in fullstripped: guests.append([translated[names.index(user)] for user in cast]) #Replace first names with second names guests[0] ``` Looking better. Let's swap it into our DF. ``` nlss_df['Crew'] = guests nlss_df.head() ``` # Adding more stats File from https://sullygnome.com/channel/Northernlion/365/streams This version can only go back 365 days. Can we create a column for date that matches nlss_df format? If so, we can combine overlapping stats. I also have a larger CSV which I'm working on in FullCSV.ipynb. I will combine these once formated correctly. ``` import os import glob print(os.getcwd()) allFiles = glob.glob(r"data\*.csv") stream_df = pd.DataFrame() l = [] for foo in allFiles: stream_df = pd.read_csv(foo,index_col=None, header=0) l.append(stream_df) stream_df = pd.concat(l) #stream_df = pd.read_csv(r'StreamStats365.csv') stream_df formatted = [] order = [1,0,2] for date in stream_df['Stream start time']: dmy = date.split(' ')[1:-1] #Date/Month/Year dmy[0] = dmy[0][:-2] #Remove day suffixes mdy = [dmy[i] for i in order] formatted.append(str(mdy[0] + " " + mdy[1] + ", " + mdy[2])) formatted[:15] stream_df["Date"] = formatted stream_df = stream_df.reset_index() stream_df.index = stream_df["index"] stream_df = stream_df.drop('index', axis=1) stream_df.head() ``` There was a day where an extra non-NLSS stream happened. It messes up our ordering so let's remove it. 
``` stream_df[stream_df["Date"]=='January 4, 2017'] stream_df = stream_df[stream_df['Unnamed: 0'] != 0] stream_df[stream_df["Date"]=='January 4, 2017'] combined = nlss_df.merge(stream_df) #drop eronious columns combined = combined.drop('Games', 1) combined = combined.drop('Unnamed: 0', 1) nlss_df.head() combined.head() result = pd.concat([nlss_df, combined], axis=1) #Removes repeat columns result = result.T.groupby(level=0).first().T #Reorder columns result = result[['Date','Crew','Docket','Stream start time','End time','Stream length','Avg Viewers','Peak viewers','Followers gained','Followers per hour','Views','Views per hour']] nlss_df = result nlss_df.loc[50] nlss_df[70:85] nlss_df.loc[nlss_df['Date']=="Wednesday 8th February 2017 22:15"] ``` # Let's Explore Our stats have been compiled. Now let's look around. Which show had most peak viewers? ``` mpv = nlss_df.loc[nlss_df['Peak viewers'].idxmax()] print("Date:", mpv["Date"]) print("Peak viewers:", mpv['Peak viewers']) print("Peak percentage:", (mpv['Peak viewers']/mpv['Views'])*100) print("Total viewers:", mpv['Views']) print("Games:", mpv['Docket']) nlss_df.loc[nlss_df['Peak viewers'].idxmax()] nlss_df.head() ``` # NLSS Dataframe ``` len(nlss_df) nlss_df.head() nlss_df.tail() ``` 8/25/2017 - 2/25/2013 # Current Goals I'm working on an ipynb called FullCSV. I got this by contacting the owner of Sullygnome.com. This CSV goes back further than the ones I've been working with and so will be helpful to use. However, there are differences in how it is formated compared to the CSVs used here. I'm currently working to set the dates up in the same format so I can add in the stats from FullCSV not found here. My new major issue is rather recent. Twitch recently (earlier this week as of writing) [changed their API](https://blog.twitch.tv/the-new-twitch-api-be3fb2b078e6), breaking most existing apps using it. This new update requires new types of authenication. I'm working on learning the new API, however my formerly working Twitch comment downloader is not in working order yet. As far as I can tell, none of the comment downloaders I found on Github currently work, but a few developers are working on updating them to the newest API. I converted a file of Twitch emote commands from Twitch's API site. I will attempt to find user channel specific emote commands and combine it with list. I can then use this master list to search for or omit emotes from analysis. # Sharability of Data I've been in contact with the people who created the NLSS Docket list and the CSVs. They both gave me permission to use and publicly release the data in my project. I will work on organizing a cleaned up folder of my data to publish on Github.
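For the date-format issue described under Current Goals above, one possible approach is sketched below. It is only a sketch, not the project's final code: `fullcsv_df` and its `'Stream start time'` column are assumed names for the larger CSV being prepared in FullCSV.ipynb.
```
# Parse both date columns into datetimes and merge on the normalized dates.
# `fullcsv_df` and its column name are assumptions for illustration.
nlss_df['Date_parsed'] = pd.to_datetime(nlss_df['Date'], errors='coerce')
fullcsv_df['Date_parsed'] = pd.to_datetime(
    fullcsv_df['Stream start time'], errors='coerce'
).dt.normalize()  # drop the time of day so the dates line up
merged = nlss_df.merge(fullcsv_df, on='Date_parsed', how='left')
```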
---
# Boltzmann Machines Notebook ini berdasarkan kursus __Deep Learning A-Z™: Hands-On Artificial Neural Networks__ di Udemy. [Lihat Kursus](https://www.udemy.com/deeplearning/). ## Informasi Notebook - __notebook name__: `taruma_udemy_boltzmann` - __notebook version/date__: `1.0.0`/`20190730` - __notebook server__: Google Colab - __python version__: `3.6` - __pytorch version__: `1.1.0` ``` #### NOTEBOOK DESCRIPTION from datetime import datetime NOTEBOOK_TITLE = 'taruma_udemy_boltzmann' NOTEBOOK_VERSION = '1.0.0' NOTEBOOK_DATE = 1 # Set 1, if you want add date classifier NOTEBOOK_NAME = "{}_{}".format( NOTEBOOK_TITLE, NOTEBOOK_VERSION.replace('.','_') ) PROJECT_NAME = "{}_{}{}".format( NOTEBOOK_TITLE, NOTEBOOK_VERSION.replace('.','_'), "_" + datetime.utcnow().strftime("%Y%m%d_%H%M") if NOTEBOOK_DATE else "" ) print(f"Nama Notebook: {NOTEBOOK_NAME}") print(f"Nama Proyek: {PROJECT_NAME}") #### System Version import sys, torch print("versi python: {}".format(sys.version)) print("versi pytorch: {}".format(torch.__version__)) #### Load Notebook Extensions %load_ext google.colab.data_table #### Download dataset # ref: https://grouplens.org/datasets/movielens/ !wget -O boltzmann.zip "https://sds-platform-private.s3-us-east-2.amazonaws.com/uploads/P16-Boltzmann-Machines.zip" !unzip boltzmann.zip #### Atur dataset path DATASET_DIRECTORY = 'Boltzmann_Machines/' def showdata(dataframe): print('Dataframe Size: {}'.format(dataframe.shape)) return dataframe ``` # STEP 1-5 DATA PREPROCESSING ``` # Importing the libraries import numpy as np import pandas as pd import torch import torch.nn as nn import torch.nn.parallel import torch.optim as optim import torch.utils.data from torch.autograd import Variable movies = pd.read_csv(DATASET_DIRECTORY + 'ml-1m/movies.dat', sep='::', header=None, engine='python', encoding='latin-1') showdata(movies).head(10) users = pd.read_csv(DATASET_DIRECTORY + 'ml-1m/users.dat', sep='::', header=None, engine='python', encoding='latin-1') showdata(users).head(10) ratings = pd.read_csv(DATASET_DIRECTORY + 'ml-1m/ratings.dat', sep='::', header=None, engine='python', encoding='latin-1') showdata(ratings).head(10) # Preparing the training set and the test set training_set = pd.read_csv(DATASET_DIRECTORY + 'ml-100k/u1.base', delimiter='\t') training_set = np.array(training_set, dtype='int') test_set = pd.read_csv(DATASET_DIRECTORY + 'ml-100k/u1.test', delimiter='\t') test_set = np.array(test_set, dtype='int') # Getting the number of users and movies nb_users = int(max(max(training_set[:, 0]), max(test_set[:, 0]))) nb_movies = int(max(max(training_set[:, 1]), max(test_set[:, 1]))) # Converting the data into an array with users in lines and movies in columns def convert(data): new_data = [] for id_users in range(1, nb_users+1): id_movies = data[:, 1][data[:, 0] == id_users] id_ratings = data[:, 2][data[:, 0] == id_users] ratings = np.zeros(nb_movies) ratings[id_movies - 1] = id_ratings new_data.append(list(ratings)) return new_data training_set = convert(training_set) test_set = convert(test_set) # Converting the data into Torch tensors training_set = torch.FloatTensor(training_set) test_set = torch.FloatTensor(test_set) training_set. 
``` # STEP 6 ``` # Converting the ratings into binary ratings 1 (Liked) or 0 (Not Liked) training_set[training_set == 0] = -1 training_set[training_set == 1] = 0 training_set[training_set == 2] = 0 training_set[training_set >= 3] = 1 test_set[test_set == 0] = -1 test_set[test_set == 1] = 0 test_set[test_set == 2] = 0 test_set[test_set >= 3] = 1 training_set ``` # STEP 7 - 10 Building RBM Object ``` # Creating the architecture of the Neural Network # nv = number visible nodes, nh = number hidden nodes class RBM(): def __init__(self, nv, nh): self.W = torch.randn(nh, nv) self.a = torch.randn(1, nh) self.b = torch.randn(1, nv) def sample_h(self, x): wx = torch.mm(x, self.W.t()) activation = wx + self.a.expand_as(wx) p_h_given_v = torch.sigmoid(activation) return p_h_given_v, torch.bernoulli(p_h_given_v) def sample_v(self, y): wy = torch.mm(y, self.W) activation = wy + self.b.expand_as(wy) p_v_given_h = torch.sigmoid(activation) return p_v_given_h, torch.bernoulli(p_v_given_h) def train(self, v0, vk, ph0, phk): self.W += (torch.mm(v0.t(), ph0) - torch.mm(vk.t(), phk)).t() self.b += torch.sum((v0 - vk), 0) self.a += torch.sum((ph0 - phk), 0) ``` # STEP 11 ``` nv = len(training_set[0]) nh = 100 batch_size = 100 rbm = RBM(nv, nh) ``` # STEP 12-13 ``` # Training the RBM nb_epochs = 10 for epoch in range(1, nb_epochs + 1): train_loss = 0 s = 0. for id_user in range(0, nb_users - batch_size, batch_size): vk = training_set[id_user:id_user+batch_size] v0 = training_set[id_user:id_user+batch_size] ph0,_ = rbm.sample_h(v0) for k in range(10): _,hk = rbm.sample_h(vk) _,vk = rbm.sample_v(hk) vk[v0<0] = v0[v0<0] phk,_ = rbm.sample_h(vk) rbm.train(v0, vk, ph0, phk) train_loss += torch.mean(torch.abs(v0[v0>=0] - vk[v0>=0])) s += 1. print('epoch: '+str(epoch)+' loss: '+str(train_loss/s)) ``` # STEP 14 ``` # Testing the RBM test_loss = 0 s = 0. for id_user in range(nb_users): v = training_set[id_user:id_user+1] vt = test_set[id_user:id_user+1] if len(vt[vt>=0]) > 0: _,h = rbm.sample_h(v) _,v = rbm.sample_v(h) test_loss += torch.mean(torch.abs(vt[vt>=0] - v[vt>=0])) s += 1. print('test loss: '+str(test_loss/s)) ```
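As a small follow-up (not part of the course material), the trained RBM can also be used to rank unrated movies for a single user by reconstructing the visible layer; a minimal sketch:
```
# Sketch: rank movies the user has not rated yet (-1 in the input)
# using the reconstruction probabilities P(v = 1 | h) of the trained RBM.
user_id = 0
v = training_set[user_id:user_id + 1]
_, h = rbm.sample_h(v)
p_v, _ = rbm.sample_v(h)                     # probabilities for every movie
unseen = torch.nonzero(v[0] < 0).flatten()   # movie indices without a rating
top = unseen[torch.topk(p_v[0][unseen], 10).indices]
print(top)                                   # 10 suggested movie indices
```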
---
As a demonstration, create an ARMA22 model drawing innovations from there different distributions, a bernoulli, normal and inverse normal. Then build a keras/tensorflow model for the 1-d scattering transform to create "features", use these features to classify which model for the innovations was used. ``` from blusky.blusky_models import build_model_1d import matplotlib.pylab as plt import numpy as np from scipy.stats import bernoulli, norm, norminvgauss def arma22(N, alpha, beta, rnd, eps=0.5): inov = rnd.rvs(2*N) x = np.zeros(2*N) # arma22 mode for i in range(2,N*2): x[i] = (alpha[0] * x[i-1] + alpha[1]*x[i-2] + beta[0] * inov[i-1] + beta[1] * inov[i-2] + eps * inov[i]) return x[N:] N = 512 k = 10 alpha = [0.99, -0.1] beta = [0.2, 0.0] eps = 1 series = np.zeros((24*k, N)) y = np.zeros(24*k) for i in range(8*k): series[i, :] = arma22(N, alpha, beta, norm(1.0), eps=eps) y[i] = 0 for i in range(8*k, 16*k): series[i, :] = arma22(N, alpha, beta, norminvgauss(1,0.5), eps=eps) y[i] = 1 for i in range(16*k, 24*k): series[i, :] = arma22(N, alpha, beta, bernoulli(0.5), eps=eps)*2 y[i] = 2 plt.plot(series[3*k,:200], '-r') plt.plot(series[8*k,:200]) plt.plot(series[-3*k,:200]) plt.legend(['normal', 'inverse normal', 'bernoulli']) #Hold out data: k = 8 hodl_series = np.zeros((24*k, N)) hodl_y = np.zeros(24*k) for i in range(8*k): hodl_series[i, :] = arma22(N, alpha, beta, norm(1.0), eps=eps) hodl_y[i] = 0 for i in range(8*k, 16*k): hodl_series[i, :] = arma22(N, alpha, beta, norminvgauss(1,0.5), eps=eps) hodl_y[i] = 1 for i in range(16*k, 24*k): hodl_series[i, :] = arma22(N, alpha, beta, bernoulli(0.5), eps=eps)*2 hodl_y[i] = 2 # hold out data plt.plot(hodl_series[0,:200], '-r') plt.plot(hodl_series[8*k,:200]) plt.plot(hodl_series[16*k,:200]) plt.legend(['normal', 'inverse normal', 'bernoulli']) ``` The scattering transform reduces the timeseries to a set of features, which we use for classification. The seperation between the series is more obvious looking at the log- of the features (see below). A support vector machine has an easy time classifying these processes. ``` base_model = build_model_1d(N, 7,6, concatenate=True) result = base_model.predict(hodl_series) plt.semilogy(np.mean(result[:,0,:], axis=0), '-r') plt.semilogy(np.mean(result[8*k:16*k,0,:], axis=0), '-b') plt.semilogy(np.mean(result[16*k:,0,:], axis=0), '-g') from sklearn.svm import SVC from sklearn.metrics import classification_report model = build_model_1d(N, 7, 6, concatenate=True) result = np.log(model.predict(series)) X = result[:,0,:] rdf = SVC() rdf.fit(X,y) hodl_result = np.log(model.predict(hodl_series)) hodl_X = hodl_result[:,0,:] y_pred = rdf.predict(hodl_X) cls1 = classification_report(hodl_y, y_pred) print(cls1) ``` Blusky build_model_1d creates a regular old keras model, which you can use like another, think VGG16 etc. The order (order < J) defines the depth of the network. If you want a deeper network, increase this parameter. Here we attach a set of fully connected layers to classify like we did previously with the SVM. Dropping in a batch normalization here, seeems to be important for regularizong the problem. 
``` from tensorflow.keras import Input, Model import tensorflow.keras.backend as K from tensorflow.keras.callbacks import EarlyStopping from tensorflow.keras.layers import BatchNormalization, Dense, Flatten, Lambda from tensorflow.keras.utils import to_categorical early_stopping = EarlyStopping(monitor="val_loss", patience=50, verbose=True, restore_best_weights=True) J = 7 order = 6 base_model = build_model_1d(N, J, order, concatenate=True) dnn = Flatten()(base_model.output) # let's add the "log" here like we did above dnn = Lambda(lambda x : K.log(x))(dnn) dnn = BatchNormalization()(dnn) dnn = Dense(32, activation='linear', name='dnn1')(dnn) dnn = Dense(3, activation='softmax', name='softmax')(dnn) deep_model_1 = Model(inputs=base_model.input, outputs=dnn) deep_model_1.compile(optimizer='rmsprop', loss='categorical_crossentropy') history_1 = deep_model_1.fit(series, to_categorical(y), validation_data=(hodl_series, to_categorical(hodl_y)), callbacks=[early_stopping], epochs=200) y_pred = deep_model_1.predict(hodl_series) cls_2 = classification_report(hodl_y, np.argmax(y_pred, axis=1)) base_model.output plt.plot(history_1.history['loss'][-100:]) plt.plot(history_1.history['val_loss'][-100:]) print(cls_2) ```
---
# Apply Signature Analysis to Cell Morphology Features Gregory Way, 2020 Here, I apply [`singscore`](https://bioconductor.org/packages/devel/bioc/vignettes/singscore/inst/doc/singscore.html) ([Foroutan et al. 2018](https://doi.org/10.1186/s12859-018-2435-4)) to our Cell Painting profiles. This notebook largely follows the [package vignette](https://bioconductor.org/packages/devel/bioc/vignettes/singscore/inst/doc/singscore.html). I generate two distinct signatures. 1. Comparing Clone A and E resistant clones to sensitive wildtype cell lines. * Clones A and E both have a confirmed _PSMB5_ mutation which is known to cause bortezomib resistance. 2. Derived from comparing four other resistant clones to four other sensitive wildtype clones. * We do not know the resistance mechanism in these four resistant clones. However, we can hypothesize that the mechanisms are similar based on single sample enrichment using the potential PSMB5 signature. To review how I derived these signatures see `0.build-morphology-signatures.ipynb`. ``` suppressPackageStartupMessages(library(singscore)) suppressPackageStartupMessages(library(dplyr)) suppressPackageStartupMessages(library(ggplot2)) seed <- 1234 num_permutations <- 1000 set.seed(seed) ``` ## Load Clone A/E (_PSMB5_ Mutations) Signature ``` sig_cols <- readr::cols( feature = readr::col_character(), estimate = readr::col_double(), adj.p.value = readr::col_double() ) sig_file <- file.path("results", "cloneAE_signature_tukey.tsv") psmb_signature_scores <- readr::read_tsv(sig_file, col_types=sig_cols) head(psmb_signature_scores, 2) # Extract features that are up and down in the signature up_features <- psmb_signature_scores %>% dplyr::filter(estimate > 0) %>% dplyr::pull(feature) down_features <- psmb_signature_scores %>% dplyr::filter(estimate < 0) %>% dplyr::pull(feature) ``` ## Load Four Clone Dataset ``` col_types <- readr::cols( .default = readr::col_double(), Metadata_Plate = readr::col_character(), Metadata_Well = readr::col_character(), Metadata_plate_map_name = readr::col_character(), Metadata_clone_number = readr::col_character(), Metadata_clone_type = readr::col_character(), Metadata_plate_ID = readr::col_character(), Metadata_plate_filename = readr::col_character(), Metadata_treatment = readr::col_character(), Metadata_batch = readr::col_character() ) # Do not load the feature selected data profile_dir <- file.path("..", "2.describe-data", "data", "merged") profile_file <- file.path(profile_dir, "combined_four_clone_dataset.csv") fourclone_data_df <- readr::read_csv(profile_file, col_types = col_types) print(dim(fourclone_data_df)) head(fourclone_data_df, 2) # Generate unique sample names (for downstream merging of results) sample_names <- paste( fourclone_data_df$Metadata_clone_number, fourclone_data_df$Metadata_Plate, fourclone_data_df$Metadata_Well, fourclone_data_df$Metadata_batch, sep = "_" ) fourclone_data_df <- fourclone_data_df %>% dplyr::mutate(Metadata_unique_sample_name = sample_names) ``` ## Apply `singscore` ``` # Convert the four clone dataset into a feature x sample matrix without metadata features_only_df <- t(fourclone_data_df %>% dplyr::select(!starts_with("Metadata_"))) # Apply the `rankGenes()` method to get feature rankings per feature for each sample rankData <- rankGenes(features_only_df) colnames(rankData) <- fourclone_data_df$Metadata_unique_sample_name print(dim(rankData)) head(rankData, 3) # Using the rank dataframe, up, and down features, get the sample scores scoredf <- simpleScore(rankData, upSet = up_features, downSet 
= down_features) # Merge scores with metadata features full_result_df <- dplyr::bind_cols( fourclone_data_df %>% dplyr::select(starts_with("Metadata_")), scoredf ) print(dim(full_result_df)) head(full_result_df, 2) ``` ## Perform Permutation Testing to Determine Significance of Observation ``` # Generate a null distribution of scores by randomly shuffling ranks permuteResult <- generateNull( upSet = up_features, downSet = down_features, rankData = rankData, centerScore = TRUE, knownDirection = TRUE, B = num_permutations, seed = seed, useBPPARAM = NULL ) # Calculate p values and add to list pvals <- getPvals(permuteResult, scoredf) pval_tidy <- broom::tidy(pvals) colnames(pval_tidy) <- c("names", "Metadata_permuted_p_value") full_result_df <- full_result_df %>% dplyr::left_join( pval_tidy, by = c("Metadata_unique_sample_name" = "names") ) # Are there differences in quantiles across batch? batch_info <- gsub("^.*_", "", rownames(t(permuteResult))) batch_permute <- t(permuteResult) %>% dplyr::as_tibble() %>% dplyr::mutate(batch = batch_info) permute_bounds <- list() for (batch_id in unique(batch_permute$batch)) { subset_permute <- batch_permute %>% dplyr::filter(batch == !!batch_id) %>% dplyr::select(!batch) min_val <- quantile(as.vector(as.matrix(subset_permute)), 0.005) max_val <- quantile(as.vector(as.matrix(subset_permute)), 0.995) permute_bounds[[batch_id]] <- c(batch_id, min_val, max_val) } do.call(rbind, permute_bounds) ``` ## Visualize Results ``` min_val <- quantile(as.vector(as.matrix(permuteResult)), 0.05) max_val <- quantile(as.vector(as.matrix(permuteResult)), 0.95) apply_psmb_signature_gg <- ggplot(full_result_df, aes(y = TotalScore, x = Metadata_clone_number)) + geom_boxplot(aes(fill = Metadata_treatment), outlier.alpha = 0) + geom_point( aes(fill = Metadata_treatment, group = Metadata_treatment), position = position_dodge(width=0.75), size = 0.9, alpha = 0.7, shape = 21) + scale_fill_manual(name = "Treatment", labels = c("bortezomib" = "Bortezomib", "DMSO" = "DMSO"), values = c("bortezomib" = "#9e0ba3", "DMSO" = "#fcba03")) + theme_bw() + annotate("rect", ymin = min_val, ymax = max_val, xmin = 0, xmax = length(unique(full_result_df$Metadata_clone_number)) + 1, alpha = 0.2, color = "red", linetype = "dashed", fill = "grey") + xlab("") + ylab("PSMB5 Signature Score") + theme(axis.text.x = element_text(angle=90)) + facet_wrap("Metadata_batch~Metadata_plate_ID", nrow=3) + theme(strip.text = element_text(size = 8, color = "black"), strip.background = element_rect(colour = "black", fill = "#fdfff4")) output_fig <- file.path("figures", "signature", "psmb5_signature_apply_fourclone.png") ggsave(output_fig, dpi = 500, height = 5, width = 10) apply_psmb_signature_gg summarized_mean_result_df <- full_result_df %>% dplyr::group_by( Metadata_batch, Metadata_plate_map_name, Metadata_clone_number, Metadata_treatment, Metadata_clone_type ) %>% dplyr::mutate(mean_score = mean(TotalScore)) %>% dplyr::select( Metadata_batch, Metadata_plate_map_name, Metadata_clone_number, Metadata_clone_type, Metadata_treatment, mean_score ) %>% dplyr::distinct() %>% tidyr::spread(key = "Metadata_treatment", value = "mean_score") %>% dplyr::mutate(treatment_score_diff = DMSO - bortezomib) head(summarized_mean_result_df) apply_psmb_signature_diff_gg <- ggplot(summarized_mean_result_df, aes(y = treatment_score_diff, x = Metadata_clone_number, fill = Metadata_clone_type)) + geom_boxplot(outlier.alpha = 0) + geom_jitter( width = 0.2, size = 2, alpha = 0.7, shape = 21) + scale_fill_manual(name = "Clone Type", labels 
= c("resistant" = "Resistant", "wildtype" = "Wildtype"), values = c("resistant" = "#9e0ba3", "wildtype" = "#fcba03")) + theme_bw() + xlab("") + ylab("Difference PSMB5 Signature Score\nDMSO - Bortezomib") + theme(axis.text.x = element_text(angle=90)) + theme(strip.text = element_text(size = 8, color = "black"), strip.background = element_rect(colour = "black", fill = "#fdfff4")) output_fig <- file.path("figures", "signature", "psmb5_signature_apply_fourclone_difference.png") ggsave(output_fig, dpi = 500, height = 4.5, width = 6) apply_psmb_signature_diff_gg ``` ## Load Four Clone Signature (Generic Resistance) ``` sig_file <- file.path("results", "fourclone_signature_tukey.tsv") resistance_signature_scores <- readr::read_tsv(sig_file, col_types=sig_cols) head(resistance_signature_scores, 2) # Extract features that are up and down in the signature up_resistance_features <- resistance_signature_scores %>% dplyr::filter(estimate > 0) %>% dplyr::pull(feature) down_resistance_features <- resistance_signature_scores %>% dplyr::filter(estimate < 0) %>% dplyr::pull(feature) ``` ## Load Clone A/E Dataset ``` # Do not load the feature selected data profile_file <- file.path(profile_dir, "combined_cloneAcloneE_dataset.csv") cloneae_cols <- readr::cols( .default = readr::col_double(), Metadata_CellLine = readr::col_character(), Metadata_Plate = readr::col_character(), Metadata_Well = readr::col_character(), Metadata_batch = readr::col_character(), Metadata_plate_map_name = readr::col_character(), Metadata_clone_type = readr::col_character() ) cloneAE_data_df <- readr::read_csv(profile_file, col_types = cloneae_cols) print(dim(cloneAE_data_df)) head(cloneAE_data_df, 2) # Generate unique sample names (for downstream merging of results) cloneae_sample_names <- paste( cloneAE_data_df$Metadata_CellLine, cloneAE_data_df$Metadata_Plate, cloneAE_data_df$Metadata_Well, cloneAE_data_df$Metadata_batch, sep = "_" ) cloneAE_data_df <- cloneAE_data_df %>% dplyr::mutate(Metadata_unique_sample_name = cloneae_sample_names) ``` ## Apply `singscore` ``` # Convert the four clone dataset into a feature x sample matrix without metadata features_only_res_df <- t(cloneAE_data_df %>% dplyr::select(!starts_with("Metadata_"))) # Apply the `rankGenes()` method to get feature rankings per feature for each sample rankData_res <- rankGenes(features_only_res_df) colnames(rankData_res) <- cloneAE_data_df$Metadata_unique_sample_name print(dim(rankData_res)) head(rankData_res, 3) # Using the rank dataframe, up, and down features, get the sample scores scoredf_res <- simpleScore(rankData_res, upSet = up_resistance_features, downSet = down_resistance_features) # Merge scores with metadata features full_res_result_df <- dplyr::bind_cols( cloneAE_data_df %>% dplyr::select(starts_with("Metadata_")), scoredf_res ) print(dim(full_res_result_df)) head(full_res_result_df, 2) ``` ## Perform Permutation Testing ``` # Generate a null distribution of scores by randomly shuffling ranks permuteResult_res <- generateNull( upSet = up_resistance_features, downSet = down_resistance_features, rankData = rankData_res, centerScore = TRUE, knownDirection = TRUE, B = num_permutations, seed = seed, useBPPARAM = NULL ) # Calculate p values and add to list pvals_res <- getPvals(permuteResult_res, scoredf_res) pval_res_tidy <- broom::tidy(pvals) colnames(pval_res_tidy) <- c("names", "Metadata_permuted_p_value") full_res_result_df <- full_res_result_df %>% dplyr::left_join( pval_res_tidy, by = c("Metadata_unique_sample_name" = "names") ) ``` ## Visualize Signature 
Results ``` min_val <- quantile(as.vector(as.matrix(permuteResult_res)), 0.05) max_val <- quantile(as.vector(as.matrix(permuteResult_res)), 0.95) append_dose <- function(string) paste0("Dose: ", string, "nM") apply_res_signature_gg <- ggplot(full_res_result_df, aes(y = TotalScore, x = Metadata_CellLine)) + geom_boxplot(aes(fill = Metadata_clone_type), outlier.alpha = 0) + geom_point( aes(fill = Metadata_clone_type, group = Metadata_clone_type), position = position_dodge(width=0.75), size = 0.9, alpha = 0.7, shape = 21) + scale_fill_manual(name = "Clone Type", labels = c("resistant" = "Resistant", "wildtype" = "WildType"), values = c("resistant" = "#f5b222", "wildtype" = "#4287f5")) + theme_bw() + annotate("rect", ymin = min_val, ymax = max_val, xmin = 0, xmax = length(unique(full_res_result_df$Metadata_CellLine)) + 1, alpha = 0.2, color = "red", linetype = "dashed", fill = "grey") + xlab("") + ylab("Generic Resistance Signature Score") + theme(axis.text.x = element_text(angle=90)) + facet_grid("Metadata_Dosage~Metadata_batch", labeller = labeller(Metadata_Dosage = as_labeller(append_dose))) + theme(strip.text = element_text(size = 8, color = "black"), strip.background = element_rect(colour = "black", fill = "#fdfff4")) output_fig <- file.path("figures", "signature", "generic_resistance_signature_apply_cloneAE.png") ggsave(output_fig, dpi = 500, height = 5, width = 5) apply_res_signature_gg full_res_result_df$Metadata_Dosage <- factor( full_res_result_df$Metadata_Dosage, levels = unique(sort(full_res_result_df$Metadata_Dosage)) ) full_res_result_df <- full_res_result_df %>% dplyr::mutate(Metadata_group = paste0(Metadata_batch, Metadata_CellLine)) ggplot(full_res_result_df, aes(x = Metadata_Dosage, y = TotalScore, color = Metadata_CellLine, group = Metadata_group)) + geom_point(size = 1) + geom_smooth(aes(fill = Metadata_clone_type), method = "loess", lwd = 0.5) + facet_wrap("~Metadata_batch", nrow = 2) + theme_bw() + scale_fill_manual(name = "Clone Type", labels = c("resistant" = "Resistant", "wildtype" = "WildType"), values = c("resistant" = "#f5b222", "wildtype" = "#4287f5")) + ylab("Generic Resistance Signature Score") + annotate("rect", ymin = min_val, ymax = max_val, xmin = 0, xmax = length(unique(full_res_result_df$Metadata_CellLine)) + 2, alpha = 0.2, color = "red", linetype = "dashed", fill = "grey") + theme(strip.text = element_text(size = 8, color = "black"), strip.background = element_rect(colour = "black", fill = "#fdfff4")) output_fig <- file.path("figures", "signature", "generic_resistance_signature_apply_cloneAE_xaxis_dosage.png") ggsave(output_fig, dpi = 500, height = 5, width = 5) ```
---
# Generating an ROC Curve This notebook is meant to be be an introduction to generating an ROC curve for multi-class prediction problems and the code comes directly from an [Scikit-Learn demo](http://scikit-learn.org/stable/auto_examples/model_selection/plot_roc.html). Please issue a comment on my Github account if you would like to suggest any changes to this notebook. ``` %matplotlib inline import numpy as np import matplotlib.pyplot as plt from sklearn import svm, datasets from sklearn.metrics import roc_curve, auc from sklearn.cross_validation import train_test_split from sklearn.preprocessing import label_binarize from sklearn.multiclass import OneVsRestClassifier from scipy import interp # Import some data to play with iris = datasets.load_iris() X = iris.data y = iris.target # Binarize the output y = label_binarize(y, classes=[0, 1, 2]) n_classes = y.shape[1] # Add noisy features to make the problem harder random_state = np.random.RandomState(0) n_samples, n_features = X.shape X = np.c_[X, random_state.randn(n_samples, 200 * n_features)] # shuffle and split training and test sets X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.5, random_state=0) # Learn to predict each class against the other classifier = OneVsRestClassifier(svm.SVC(kernel='linear', probability=True, random_state=random_state)) y_score = classifier.fit(X_train, y_train).decision_function(X_test) # Compute ROC curve and ROC area for each class fpr = dict() tpr = dict() roc_auc = dict() for i in range(n_classes): fpr[i], tpr[i], _ = roc_curve(y_test[:, i], y_score[:, i]) roc_auc[i] = auc(fpr[i], tpr[i]) # Compute micro-average ROC curve and ROC area fpr["micro"], tpr["micro"], _ = roc_curve(y_test.ravel(), y_score.ravel()) roc_auc["micro"] = auc(fpr["micro"], tpr["micro"]) ############################################################################## # Plot of a ROC curve for a specific class plt.figure() plt.plot(fpr[2], tpr[2], label='ROC curve (area = %0.2f)' % roc_auc[2]) plt.plot([0, 1], [0, 1], 'k--') plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('Receiver operating characteristic example') plt.legend(loc="lower right") plt.show() ############################################################################## # Plot ROC curves for the multiclass problem # Compute macro-average ROC curve and ROC area # First aggregate all false positive rates all_fpr = np.unique(np.concatenate([fpr[i] for i in range(n_classes)])) # Then interpolate all ROC curves at this points mean_tpr = np.zeros_like(all_fpr) for i in range(n_classes): mean_tpr += interp(all_fpr, fpr[i], tpr[i]) # Finally average it and compute AUC mean_tpr /= n_classes fpr["macro"] = all_fpr tpr["macro"] = mean_tpr roc_auc["macro"] = auc(fpr["macro"], tpr["macro"]) # Plot all ROC curves plt.figure() plt.plot(fpr["micro"], tpr["micro"], label='micro-average ROC curve (area = {0:0.2f})' ''.format(roc_auc["micro"]), linewidth=2) plt.plot(fpr["macro"], tpr["macro"], label='macro-average ROC curve (area = {0:0.2f})' ''.format(roc_auc["macro"]), linewidth=2) for i in range(n_classes): plt.plot(fpr[i], tpr[i], label='ROC curve of class {0} (area = {1:0.2f})' ''.format(i, roc_auc[i])) plt.plot([0, 1], [0, 1], 'k--') plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('Some extension of Receiver operating characteristic to multi-class') plt.legend(loc="lower right") plt.show() ```
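Note that this demo relies on some older APIs. On current library versions two of the imports above need updating (a small compatibility note, not part of the original demo):
```
# sklearn.cross_validation was removed in scikit-learn 0.20,
# and scipy no longer re-exports interp; the drop-in replacements are:
from sklearn.model_selection import train_test_split
from numpy import interp
```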
---
# In-Class Coding Lab: Strings The goals of this lab are to help you to understand: - String slicing for substrings - How to use Python's built-in String functions in the standard library. - Tokenizing and Parsing Data - How to create user-defined functions to parse and tokenize strings # Strings ## Strings are immutable sequences Python strings are immutable sequences.This means we cannot change them "in part" and there is impicit ordering. The characters in a string are zero-based. Meaning the index of the first character is 0. We can leverage this in a variety of ways. For example: ``` x = input("Enter something: ") print ("You typed:", x) print ("number of characters:", len(x) ) print ("First character is:", x[0]) print ("Last character is:", x[-1]) ## They're sequences, so you can loop definately: print("Printing one character at a time: ") for ch in x: print(ch) # print a character at a time! ``` ## Slices as substrings Python lists and sequences use **slice notation** which is a clever way to get substring from a given string. Slice notation requires two values: A start index and the end index. The substring returned starts at the start index, and *ends at the position before the end index*. It ends at the position *before* so that when you slice a string into parts you know where you've "left off". For example: ``` state = "Mississippi" print (state[0:4]) # Miss print (state[4:len(state)]) # issippi ``` In this next example, play around with the variable `split` adjusting it to how you want the string to be split up. Re run the cell several times with different values to get a feel for what happens. ``` state = "Mississippi" split = 4 # TODO: play around with this number left = state[0:split] right = state[split:len(state)] print(left, right) state = "Mississippi" split = 2 # TODO: play around with this number left = state[0:split] right = state[split:len(state)] print(left, right) state = "Mississippi" split = 8 # TODO: play around with this number left = state[0:split] right = state[split:len(state)] print(left, right) state = "Mississippi" split = 5 # TODO: play around with this number left = state[0:split] right = state[split:len(state)] print(left, right) ``` ### Slicing from the beginning or to the end If you omit the begin or end slice, Python will slice from the beginnning of the string or all the way to the end. So if you say `x[:5]` its the same as `x[0:5]` For example: ``` state = "Ohio" print(state[0:2], state[:2]) # same! print(state[2:len(state)], state[2:]) # same ``` ### Now Try It! Split the string `"New Hampshire"` into two sub-strings one containing `"New"` the other containing `"Hampshire"` (without the space). ``` ## TODO: Write code here state = "NewHampshire" split = 3 left = state[0:split] right = state[split:len(state)] print(left, right) ``` ## Python's built in String Functions Python includes several handy built-in string functions (also known as *methods* in object-oriented parlance). To get a list of available functions, use the `dir()` function on any string variable, or on the type `str` itself. ``` print ( dir(str)) ``` Let's suppose you want to learn how to use the `count` function. There are 2 ways you can do this. 1. search the web for `python 3 str count` or 1. bring up internal help `help(str.count)` Both have their advantages and disadvanges. I would start with the second one, and only fall back to a web search when you can't figure it out from the Python documenation. 
Here's the documentation for `count` ``` help(str.count) ``` You'll notice in the help output it says S.count() this indicates this function is a method function. this means you invoke it like this `variable.count()`. ### Now Try It Try to use the count() function method to count the number of `'i'`'s in the string `'Mississippi`: ``` state = 'Mississippi' #TODO: use state.count state.count("i") print(state.count('i')) ``` ### TANGENT: The Subtle difference between function and method. You'll notice sometimes we call our function alone, other times it's attached to a variable, as was the case in the example above. when we say `state.count('i')` the period (`.`) between the variable and function indicates this function is a *method function*. The key difference between a the two is a method is attached to a variable. To call a method function you must say `variable.function()` whereas when you call a function its just `function()`. The variable associated with the method call is usually part of the function's context. Here's an example: ``` name = "Larry" print( len(name) ) # a function call len(name) stands on its own. Gets length of 'Larry' print( name.__len__() ) # a method call name.__len__() does the name thing for its variable 'Larry' ``` ### Now Try It Try to figure out which built in string function to use to accomplish this task. Write some code to find the text `'is'` in some text. The program shoud output the first position of `'is'` in the text. Examples: ``` When: text = 'Mississippi' then position = 1 When: text = "This is great" then position = 2 When: text = "Burger" then position = -1 ``` ``` print(dir(str)) help(str.find) # TODO: Write your code here text = input("Enter some text: ") text.find('is') print("when text =", text,"then position =",text.find('is')) text = input("Enter some text: ") text.find('is') print("when text =", text,"then position =",text.find('is')) text = input("Enter some text: ") text.find('is') print("when text =", text,"then position =",text.find('is')) ``` ### Now Try It **Is that a URL?** Try to write a rudimentary URL checker. The program should input a text string and then use the `startswith` function to check if the string begins with `"http://"` or `"https://"` If it does we can assume it is a URL. ``` ## TODO: write code here: url = input("Enter a URL: ") if url.startswith('http://'): print("We can assume this is a URL") elif url.startswith('https://'): print("We can assume this is a URL") else: print("This is not a URL") url = input("Enter a URL: ") if url.startswith('http://'): print("We can assume this is a URL") elif url.startswith('https://'): print("We can assume this is a URL") else: print("This is not a URL") url = input("Enter a URL: ") if url.startswith('http://'): print("We can assume this is a URL") elif url.startswith('https://'): print("We can assume this is a URL") else: print("This is not a URL") ```
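As an aside (not part of the original exercise), `startswith` also accepts a tuple of prefixes, which makes the URL check a bit more compact:
```
# str.startswith accepts a tuple, so both schemes can be checked in one call
url = input("Enter a URL: ")
if url.startswith(("http://", "https://")):
    print("We can assume this is a URL")
else:
    print("This is not a URL")
```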
---
# CH. 8 - Market Basket Analysis ## Activities #### Activity 8.01: Load and Prep Full Online Retail Data ``` import matplotlib.pyplot as plt import mlxtend.frequent_patterns import mlxtend.preprocessing import numpy import pandas online = pandas.read_excel( io="./Online Retail.xlsx", sheet_name="Online Retail", header=0 ) online['IsCPresent'] = ( online['InvoiceNo'] .astype(str) .apply(lambda x: 1 if x.find('C') != -1 else 0) ) online1 = ( online .loc[online["Quantity"] > 0] .loc[online['IsCPresent'] != 1] .loc[:, ["InvoiceNo", "Description"]] .dropna() ) invoice_item_list = [] for num in list(set(online1.InvoiceNo.tolist())): tmp_df = online1.loc[online1['InvoiceNo'] == num] tmp_items = tmp_df.Description.tolist() invoice_item_list.append(tmp_items) online_encoder = mlxtend.preprocessing.TransactionEncoder() online_encoder_array = online_encoder.fit_transform(invoice_item_list) online_encoder_df = pandas.DataFrame( online_encoder_array, columns=online_encoder.columns_ ) ## COL in different order online_encoder_df.loc[ 20125:20135, online_encoder_df.columns.tolist()[100:110] ] ``` #### Activity 8.02: Apriori on the Complete Online Retail Data Set ``` mod_colnames_minsupport = mlxtend.frequent_patterns.apriori( online_encoder_df, min_support=0.01, use_colnames=True ) mod_colnames_minsupport.loc[0:6] mod_colnames_minsupport[ mod_colnames_minsupport['itemsets'] == frozenset( {'10 COLOUR SPACEBOY PEN'} ) ] mod_colnames_minsupport['length'] = ( mod_colnames_minsupport['itemsets'].apply(lambda x: len(x)) ) ## item set order different mod_colnames_minsupport[ (mod_colnames_minsupport['length'] == 2) & (mod_colnames_minsupport['support'] >= 0.02) & (mod_colnames_minsupport['support'] < 0.021) ] mod_colnames_minsupport.hist("support", grid=False, bins=30) plt.xlabel("Support of item") plt.ylabel("Number of items") plt.title("Frequency distribution of Support") plt.show() ``` #### Activity 8.03: Find the Association Rules on the Complete Online Retail Data Set ``` rules = mlxtend.frequent_patterns.association_rules( mod_colnames_minsupport, metric="confidence", min_threshold=0.6, support_only=False ) rules.loc[0:6] print("Number of Associations: {}".format(rules.shape[0])) rules.plot.scatter("support", "confidence", alpha=0.5, marker="*") plt.xlabel("Support") plt.ylabel("Confidence") plt.title("Association Rules") plt.show() rules.hist("lift", grid=False, bins=30) plt.xlabel("Lift of item") plt.ylabel("Number of items") plt.title("Frequency distribution of Lift") plt.show() rules.hist("leverage", grid=False, bins=30) plt.xlabel("Leverage of item") plt.ylabel("Number of items") plt.title("Frequency distribution of Leverage") plt.show() plt.hist(rules[numpy.isfinite(rules['conviction'])].conviction.values, bins = 30) plt.xlabel("Conviction of item") plt.ylabel("Number of items") plt.title("Frequency distribution of Conviction") plt.show() ```
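As a small follow-up (not part of the original activity), the rules can be ranked by lift to surface the strongest associations found above:
```
# Show the ten association rules with the highest lift
top_rules = rules.sort_values("lift", ascending=False).head(10)
print(top_rules[["antecedents", "consequents", "support", "confidence", "lift"]])
```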
---
# Welcome to Jupyter Notebooks! Author: Shelley Knuth Date: 23 August 2019 Purpose: This is a general purpose tutorial to designed to provide basic information about Jupyter notebooks ## Outline 1. General information about notebooks 1. Formatting text in notebooks 1. Formatting mathematics in notebooks 1. Importing graphics 1. Plotting ## General Information about Notebooks ### What is a Jupyter Notebook? It's an interactive web platform that allows one to create and edit live code, add text descriptions, and visualizations in a document that can be easily shared and displayed. ### How to work with a Notebook To run a cell, hit "shift" and "enter" at the same time Don't be alarmed if your notebook runs for awhile - indicated by [*] Sometimes takes awhile ### Different cell types Code, Markdown are the two I use most frequently ### Exercise Write one sentence on what you are planning to do this weekend in a cell. ### Opening, saving notebooks Opening: File -> New Notebook -> Python 3 Saving: File -> Save as -> Save and Checkpoint (Ctrl + S) Printing: File -> Print Preview Download: File -> Download as PDF (or others) ## Keyboard shortcuts Toggle between edit and command mode with Esc and Enter, respectively. Once in command mode: Scroll up and down your cells with your Up and Down keys. Press A or B to insert a new cell above or below the active cell. M will transform the active cell to a Markdown cell. Y will set the active cell to a code cell. D + D (D twice) will delete the active cell. Z will undo cell deletion. Hold Shift and press Up or Down to select multiple cells at once. With multiple cells selected, Shift + M will merge your selection. Ctrl + Shift + -, in edit mode, will split the active cell at the cursor. You can also click and Shift + Click in the margin to the left of your cells to select them. (from https://www.dataquest.io/blog/jupyter-notebook-tutorial/) ## Formatting text in notebooks Jupyter notebooks are __really__ cool! Jupyter notebooks are _really_ cool! Two spaces after text gives you a newline! ### Headings # Jupyter notebooks are really cool! ## Do you know what else is cool? ### Turtles! #### And Bon Jovi! ### Code The best program to use for this is the `grep` command ### Text color and size The sky is <font color = blue, size = 30>blue!</font> Sometimes the <font color = blue>color</font> doesn't turn out <font size=30> WELL</font> ### Indent or list your text > This is how! - This is how! - This is how! 1. This is how! * This is also how! * This is also how! ### Hyperlinks Sometimes copy and paste is just fine too! https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet [I'm an inline-style link](https://www.google.com) [I'm a reference-style link][Arbitrary case-insensitive reference text] [I'm a relative reference to a repository file](../blob/master/LICENSE) [You can use numbers for reference-style link definitions][1] Or leave it empty and use the [link text itself]. URLs and URLs in angle brackets will automatically get turned into links. http://www.example.com or <http://www.example.com> and sometimes example.com (but not on Github, for example). Some text to show that the reference links can follow later. 
[arbitrary case-insensitive reference text]: https://www.mozilla.org [1]: http://slashdot.org [link text itself]: http://www.reddit.com (from https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet) ## Mathematical Equations in Notebooks $F=ma$ This is an equation, $x=y+z$, where $y=10$ and $z=20$ ### Superscripts and Subscripts $y = x^3 + x^2 + 3x$ $F_g = m g$ ### Grouping $6.022\times 10^{23}$ . ### Greek Letters $\pi = 3.1415926$ $\Omega = 10$ $\delta$ ### Special Symbols $\pm$, $\gg$, $\ll$, $\infty$ $i = \sqrt{-1}$ $\int_a^b$ ## Fractions and Derivatives Fractions $\frac{1}{2}$ Derivatives $\frac{dm}{dt}$, $\frac{\partial m}{\partial t}$ ### Matrices $$\begin{matrix} a & b \\ c & d \end{matrix}$$ ### Exercise Write out an equation where the total derivative of x over y is equal to the square root of 10 added to 7/8 pi $\frac{dx}{dy} = \sqrt{10} + \pi$ ### Exercise Write out an equation where x sub j is equal to a 2x2 matrix containing 10, 20, 30, and 40 $x_j = \begin{matrix} 10 & 20 \\ 30 & 40 \end{matrix}$ ## Importing Graphics Easy way: Drag and drop! Or "Edit -> Insert image" when in Markdown Harder ways: Python: ``` from IPython.display import Image Image("bonjovi.jpg") ``` HTML: <img src="bonjovi.jpg"> ## Basic Programming with Python ### Print statements ``` print("Hello, World!") ``` Look at how the input changed (to the left of the cell). Look at the output! (This and several cells from https://www.dataquest.io/blog/jupyter-notebook-tutorial/) Anything run in the kernal persists in the notebook Can run code and import libraries in the cells ### Variables in Python * Variables are not declared * Variables are created at assignment time * Variable type determined implicitly via assignment x=2 Int x=2.0 . Float Z="hello" str (single or double quotes) z=True Boolean Note capital "T" or "F" * Can convert types using conversion functions: int(), float(), str(), bool() * Python is case sensitive * Check variable type using type function (from https://github.com/ResearchComputing/Python_Spring_2019/blob/master/session1_overview/session1_slides.pdf) ``` z=10.0 print('z is: ', type(z) ) x=int(43.4) print(x) ``` Arithmetic in Python respects the order of operations * Addition: + * Subtraction: - * Multiplication: * * Division: / (returns float) * Floor Division: // (returns int or float; rounds down) * Mod: % (3%2 -> 1) * Exponentiation: ** 2**4 -> 16) Can concatenate strings using "+" (from https://github.com/ResearchComputing/Python_Spring_2019/blob/master/session1_overview/session1_slides.pdf) ``` x='hello '+'there' print(x) ``` ### Lists Multiple Values can be grouped into a list ``` - lists - basic plotting with matplotlib - arrays and numpy; doing calculations - plotting using numpy - importing data from csv files mylist=[1, 2, 10] print(mylist) ``` * List elements accessed with [] notation * Element numbering starts at 0 ``` print(mylist[1]) ``` * Lists can contain different variable types ``` mylist=[1, 'two', 10.0] print(mylist) ```
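A few more list operations, as extra examples in the same spirit (these are not from the original notes):
```
mylist = [1, 'two', 10.0]
mylist.append(True)     # add an element to the end
print(mylist[0:2])      # slicing works on lists just like on strings
print(len(mylist))      # number of elements in the list
```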
---
<a href="https://colab.research.google.com/github/DingLi23/s2search/blob/pipelining/pipelining/pdp-exp1/pdp-exp1_cslg-rand-5000_plotting.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ### Experiment Description Produce PDP for a randomly picked data from cslg. > This notebook is for experiment \<pdp-exp1\> and data sample \<cslg-rand-5000\>. ### Initialization ``` %load_ext autoreload %autoreload 2 import numpy as np, sys, os in_colab = 'google.colab' in sys.modules # fetching code and data(if you are using colab if in_colab: !rm -rf s2search !git clone --branch pipelining https://github.com/youyinnn/s2search.git sys.path.insert(1, './s2search') %cd s2search/pipelining/pdp-exp1/ pic_dir = os.path.join('.', 'plot') if not os.path.exists(pic_dir): os.mkdir(pic_dir) ``` ### Loading data ``` sys.path.insert(1, '../../') import numpy as np, sys, os, pandas as pd from s2search_score_pdp import pdp_based_importance, apply_order sample_name = 'cslg-rand-5000' f_list = ['title', 'abstract', 'venue', 'authors', 'year', 'n_citations'] pdp_xy = {} pdp_metric = pd.DataFrame(columns=['feature_name', 'pdp_range', 'pdp_importance']) for f in f_list: file = os.path.join('.', 'scores', f'{sample_name}_pdp_{f}.npz') if os.path.exists(file): data = np.load(file) sorted_pdp_data = apply_order(data) feature_pdp_data = [np.mean(pdps) for pdps in sorted_pdp_data] pdp_xy[f] = { 'y': feature_pdp_data, 'numerical': True } if f == 'year' or f == 'n_citations': pdp_xy[f]['x'] = np.sort(data['arr_1']) else: pdp_xy[f]['y'] = feature_pdp_data pdp_xy[f]['x'] = list(range(len(feature_pdp_data))) pdp_xy[f]['numerical'] = False pdp_metric.loc[len(pdp_metric.index)] = [f, np.max(feature_pdp_data) - np.min(feature_pdp_data), pdp_based_importance(feature_pdp_data, f)] pdp_xy[f]['weird'] = feature_pdp_data[len(feature_pdp_data) - 1] > 30 print(pdp_metric.sort_values(by=['pdp_importance'], ascending=False)) ``` ### PDP ``` import matplotlib.pyplot as plt categorical_plot_conf = [ { 'xlabel': 'Title', 'ylabel': 'Scores', 'pdp_xy': pdp_xy['title'] }, { 'xlabel': 'Abstract', 'pdp_xy': pdp_xy['abstract'] }, { 'xlabel': 'Authors', 'pdp_xy': pdp_xy['authors'] }, { 'xlabel': 'Venue', 'pdp_xy': pdp_xy['venue'], 'zoom': { 'inset_axes': [0.15, 0.45, 0.47, 0.47], 'x_limit': [4900, 5050], 'y_limit': [-9, 7], 'connects': [True, True, False, False] } }, ] numerical_plot_conf = [ { 'xlabel': 'Year', 'ylabel': 'Scores', 'pdp_xy': pdp_xy['year'] }, { 'xlabel': 'Citation Count', 'pdp_xy': pdp_xy['n_citations'], 'zoom': { 'inset_axes': [0.4, 0.2, 0.47, 0.47], 'x_limit': [-100, 500], 'y_limit': [-7.5, -6.2], 'connects': [True, False, False, True] } } ] def pdp_plot(confs, title): fig, axes = plt.subplots(nrows=1, ncols=len(confs), figsize=(20, 5), dpi=100) subplot_idx = 0 # plt.suptitle(title, fontsize=20, fontweight='bold') # plt.autoscale(False) for conf in confs: axess = axes if len(confs) == 1 else axes[subplot_idx] axess.plot(conf['pdp_xy']['x'], conf['pdp_xy']['y']) axess.grid(alpha = 0.4) if ('ylabel' in conf): axess.set_ylabel(conf.get('ylabel'), fontsize=20, labelpad=10) axess.set_xlabel(conf['xlabel'], fontsize=16, labelpad=10) if not (conf['pdp_xy']['weird']): if (conf['pdp_xy']['numerical']): axess.set_ylim([-9, -6]) pass else: axess.set_ylim([-15, 10]) pass if 'zoom' in conf: axins = axess.inset_axes(conf['zoom']['inset_axes']) axins.plot(conf['pdp_xy']['x'], conf['pdp_xy']['y']) axins.set_xlim(conf['zoom']['x_limit']) axins.set_ylim(conf['zoom']['y_limit']) 
axins.grid(alpha=0.3) rectpatch, connects = axess.indicate_inset_zoom(axins) connects[0].set_visible(conf['zoom']['connects'][0]) connects[1].set_visible(conf['zoom']['connects'][1]) connects[2].set_visible(conf['zoom']['connects'][2]) connects[3].set_visible(conf['zoom']['connects'][3]) subplot_idx += 1 pdp_plot(categorical_plot_conf, "PDPs for four categorical features") plt.savefig(os.path.join('.', 'plot', f'{sample_name}-categorical.png'), facecolor='white', transparent=False, bbox_inches='tight') # second fig pdp_plot(numerical_plot_conf, "PDPs for two numerical features") plt.savefig(os.path.join('.', 'plot', f'{sample_name}-numerical.png'), facecolor='white', transparent=False, bbox_inches='tight') ```
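For reference, one common way to turn a PDP curve into a single importance number is to measure how much the curve varies across the feature grid; a minimal sketch of that idea (not necessarily what `pdp_based_importance` in `s2search_score_pdp` implements):
```
import numpy as np

def pdp_importance_sketch(pdp_values):
    # A flat PDP means the feature barely moves the score;
    # a large spread means the feature is influential.
    return np.std(pdp_values)
```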
---
# Alzhippo Pr0gress ##### Possible Tasks - **Visualizing fibers** passing through ERC and hippo, for both ipsi and contra cxns (4-figs) (GK) - **Dilate hippocampal parcellations**, to cover entire hippocampus by nearest neighbour (JV) - **Voxelwise ERC-to-hippocampal** projections + clustering (Both) ## Visulaizating fibers 1. Plot group average connectome 2. Find representative subject X (i.e. passes visual inspection match to the group) 3. Visualize fibers with parcellation 4. Repeat 3. on dilated parcellation 5. If connections appear more symmetric in 4., regenerate graphs with dilated parcellation ### 1. Plot group average connectome ``` import numpy as np import networkx as nx import nibabel as nib import scipy.stats as stats import matplotlib.pyplot as plt from nilearn import plotting import os import seaborn as sns import pandas %matplotlib notebook def matrixplotter(data, log=True, title="Connectivity between ERC and Hippocampus"): plotdat = np.log(data + 1) if log else data plt.imshow(plotdat) labs = ['ERC-L', 'Hippo-L-noise', 'Hippo-L-tau', 'ERC-R', 'Hippo-R-noise', 'Hippo-R-tau'] plt.xticks(np.arange(0, 6), labs, rotation=40) plt.yticks(np.arange(0, 6), labs) plt.title(title) plt.colorbar() plt.show() avg = np.load('../data/connection_matrix.npy') matrixplotter(np.mean(avg, axis=2)) ``` ### 2. Find representative subject ``` tmp = np.reshape(avg.T, (355, 36)) tmp[0] corrs = np.corrcoef(tmp)[-1] corrs[corrs == 1] = 0 bestfit = int(np.where(corrs == np.max(corrs))[0]) print("Most similar graph: {}".format(bestfit)) dsets = ['../data/graphs/BNU1/combined_erc_hippo_labels/', '../data/graphs/BNU3/', '../data/graphs/HNU1/'] files = [os.path.join(d,f) for d in dsets for f in os.listdir(d)] graph_fname = files[bestfit] gx = nx.read_weighted_edgelist(graph_fname) adjx = np.asarray(nx.adjacency_matrix(gx).todense()) matrixplotter(adjx) print(graph_fname) ``` **N.B.**: The fibers from the subject/session shown above were SCP'd from the following location on Compute Canada's Cedar machine by @gkiar. They are too large for a git repository, but they were downloaded to the `data/fibers/` directory from the root of this project. Please @gkiar him if you'd like access to this file, in lieu of better public storage: > /project/6008063/gkiar/ndmg/connectomics/ndmg-d/HNU1/fibers/sub-0025444_ses-2_dwi_fibers.npz ### 3. Visualize fibers with parcellation Because I don't have VTK/Dipy locally, this was done in Docker with the script in `./code/npz2trackviz.py` and submitted to the scheduler with `./code/npzdriver.sh`. The command to run this in Docker, from the base directory of this project was: docker run -ti \ -v /Users/greg/code/gkiar/alzhippo/data/:/data \ -v /Users/greg/code/gkiar/alzhippo/code/:/proj \ --entrypoint python2.7 \ bids/ndmg:v0.1.0 \ /proj/npz2trackviz.py /data/fibers/sub-0025444_ses-2_dwi_fibers.npz /data/combined_erc_hippo_labels.nii.gz The resulting `.trk` files were viewed locally with [TrackVis](http://www.trackvis.org/) to make the screenshot below.
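For the parcellation-dilation task listed at the top of this notebook, one possible nearest-neighbour approach is sketched below. File paths and the mask file are assumptions, and this is only a starting point rather than an agreed implementation.
```
# Sketch: assign every unlabeled voxel the label of its nearest labeled voxel.
import numpy as np
import nibabel as nib
from scipy import ndimage

lab_img = nib.load('../data/combined_erc_hippo_labels.nii.gz')  # assumed local path
labels = lab_img.get_fdata().astype(int)

# The distance transform of the background returns, for each voxel, the indices
# of the closest labeled voxel; indexing with them does the nearest-neighbour fill.
_, inds = ndimage.distance_transform_edt(labels == 0, return_indices=True)
filled = labels[tuple(inds)]

# Restrict the dilation to a whole-hippocampus mask (hypothetical file name):
# mask = nib.load('../data/hippocampus_mask.nii.gz').get_fdata() > 0
# dilated = np.where(mask | (labels > 0), filled, 0)
```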
# Course introduction ## A. Overview ### Am I ready to take this course? Yes. Probably. Some programming experience will help, but is not required. If you have no programming experience, I strongly encourage you to go through the first handful of modules on the [Codecademy Python course](https://www.codecademy.com/learn/learn-python) as soon as possible. While that course utilizes Python 2, we will be using Python 3 in our course here. BUT...many of the basics are identical between the two versions. There are <b>a lot</b> of online resources that you can use to supplement things we learn in class. Some examples are: * [python.org Tutorial](https://docs.python.org/3/tutorial/index.html) * [Learn Python](https://www.learnpython.org/) * [Google](http://google.com) (Just type in your question and follow the first [stackoverflow](http://stackoverflow.com) link. This is surprisingly effective; do this first.) ### What computational resources do I need for class? You will need a laptop that will provide you access to the course (i.e. internet access) and a Python environment to follow along. ### How is this for geosciences specifically? The goal of this class is to provide information for all fields in geoscience. To that end, I will try to cover topics from geology, geography, atmospheric sciences, and oceanography. Specifically, I will focus on 1D timeseries, and 2D geospatial (i.e., on a map) analysis. If you have any topics you would like to cover, please let me know, and I will do my best to accommodate. ## Class setup ### Class format We will go through course materials during class time. You should bring a computer to class so that you can follow along and participate in exercises. Also, course materials are interactive, so you can learn by running code snippets as we go and asking questions. Much like learning a new spoken language, hands-on coding is one the <b>best</b> ways to learn a new language. ### Course materials The course materials are available in the [class repository](https://github.com/snifflesnrumjum/python4geosciences). They are in the form of [Jupyter notebooks](http://jupyter.org/). More information on notebooks in the next section. You'll do your work either on your own computer, in a Google Colab notebook, or through the VOAL provided by Texas A&M University. To access the VOAL when off campus, you need to first set up a VPN connection. Set this up for your computer by visiting `https://connect.tamu.edu` and follow instructions there. You'll need to sign in with your NetID, and click on the little blue link that says "AnyConnect VPN" if and when you find that "Web-based installation was unsuccessful" to install Cisco AnyConnect (you will no longer use the web-based installer after this). When you open the Cisco application on your computer, you will need to fill in "connect.tamu.edu" in the little box, then use your NetID and university password to connect. Then you can run this application to use your computer as if you are on campus. ### Course textbook There is no textbook for the course. But if you'd like an outside resource, here are three recommendations: 1. Learning Python by Mark Lutz (available electronically through TAMU Library http://library.tamu.edu/) 2. Beginning Python by Magnus Lie Hetland (available electronically through TAMU Library http://library.tamu.edu/) 3. Allen Downey has written a number of books on Python and related scientific subjects. And as a bonus, they are free (digital versions): http://greenteapress.com/wp/. 
In particular you would want to check out Think Python (2nd edition). ## B. Jupyter notebooks This file format makes it easy to seamlessly combine text and code. The text can be plain or formatted with [Markdown](https://daringfireball.net/projects/markdown/). The code can be written in over 40 languages including Python, R, and Scala. Most importantly, the code can be interacted with when the notebook is opened in a local (that is, on your computer) iPython server. Alternatively, it can simply be viewed through a github repository (like [this very notebook](https://github.com/snifflesnrumjum/python4geosciences/blob/master/materials/0_intro.ipynb)) or through [nbviewer](http://nbviewer.ipython.org/). You'll be able to run class materials (in the form of Jupyter notebooks) on your own computer, on Google Colab or the VOAL via your web browser as well as create and work on homework assignments. If you prefer, you are welcome to run Python on your own computer, but you will need to do that mostly on your own. If you go that route, I recommend using Python 3 (which we will be using in class) and a distribution from [Anaconda](https://www.anaconda.com/products/individual). ### Create a new notebook Start up your local notebook server in your new repo and create a new Jupyter notebook from the local server page. ### Choose syntax for a cell Notebooks are built of cells. Cells can have multiple built-in formats including code and Markdown for text. You can select the desired format from a dropdown menu at the top. If you want to type words, use "Markdown"; if you want to write code, choose "code". ### Move between cells To run a given cell, type `[shift-enter]` which active in that cell. You can run all of the cells with Cell > Run all; other variations are available in that drop down menu. ### Homework We'll discuss homework soon and go through details. It will be in the form of Jupyter notebooks and will be submitted through the Canvas LMS. --- The material below is bonus for any students that are interested in using a terminal window on their own computer for running Python. We may go through it in class. ## Command-line interface A command-line interface is a way to interact with your computer using text instead of a Graphical User Interface (GUI), a GUI being visually based with icons etc. We will use these in this class. On a Macintosh or Linux machine, this is a terminal window. On a PC this is often called a command prompt. Here are some commonly-used commands: * `cd [path]`: change directory from current location to [path]. `cd ..` can be used to move up a single directory, and `cd ../..` moves up two directories, etc. * `pwd`: print working directory, as in write out the current location in the terminal window. * `ls`: list files in current directory. `ls -l` list files in long format to include more information, `ls -a` to list all files even those that are usually not shown because the have a `.` in front, `ls -h` to show file sizes in human readable format. Flags can always be combined to use multiple options at once, as in `ls -ah` to show all files in human readable format. * [tab]: Tab completion. You can always push tab in the terminal window to see available options. As you have some letters entered and push tab, the options will be limited to those that fit the pattern you have started. * `mkdir [dirname]`: make directory called dirname. * `rm [filename]`: remove a file called filename. To remove a directory called dirname, use `rm -r [dirname]`. 
## Short git and GitHub tutorial (optional) Class materials are available on a [GitHub](http://github.org) repository. GitHub is a way to share and access code online which has been version-controlled using git. Version control allows changes in code to be tracked over time; this is important for reproducibility, retrieving code in case of accidents, and working on code in groups. Git is one way to version control your code — other methods include subversion (svn), cvs, and mercurial. More information on this is provided below. Remember: you can always google to learn more! Google is an infinite resource that you can ask at any time of the day. Here we summarize a brief overview of how to use git. GitHub has a [cheatsheet](https://education.github.com/git-cheat-sheet-education.pdf) available. To get changes in a file in a local version of a repository tracked, saved, and then shared with the internet (on github), do the following: * `git add` to initially tell the system to track your file and subsequently to tell the system that you want to take into account new changes to the file (you can also add more than one file in this process). Then * `git commit -m [commit note]` to save the changes. Then * `git push` to share the changes with the version of your repository on github. Now you should be able to look at your repo on github and see your updated file there. **GitHub Desktop** After you have made your repository on GitHub (the website), you should clone it to GitHub Desktop (which is on your local machine). (This should be very easy if you are properly signed into your github account in GitHub Desktop.) Then to get your file tracked and pushed to GitHub (the website): * While inspecting the relevant repository, any untracked files or changes to tracked files are shown in the middle window (it is white with horizontal lines). To do the equivalent of `git add`, you should check the box of the file. * To commit your changes, fill out the form at the bottom of the same window. One short window leaves space for a "Summary" of your changes, and if you have more to say you can put it in the "Description" box. * To push your local changes out to GitHub online, use the Sync button on the upper right hand corner of the window. As a side note, this sync button is also how you should pull down changes to a repository you are following (the equivalent of `git pull`). Note that you do not want to have a directory covered by two git repositories. So for file structure for this class, for example, you might want to have one directory for the class ("Python_x89") which contains two version-controlled subdirectories: the course materials (python4geosciences) and your homework repository ("homework"). That will keep everything properly separated. ![XKCD](https://imgs.xkcd.com/comics/git.png) [Git as explained by XKCD](https://xkcd.com/1597/) ### `git status` Type this in a git-monitored subdirectory on your computer to see the status of files that are under git version control and which files are not being monitored. ### `git add` Use this to add local files to your git repository. `git add [filename]`. ### `git commit` Use this to save to your repository local file changes. First you need to add the file with `git add`, then you can `git commit -m [commit message]`, where the commit message is a concise and useful note to explain what changes you have made. 
You may skip the `git add` step if you would like to commit at once all changes that have been made (you can see what changes would be committed by first consulting `git status`) with `git commit -am [commit message]`. The `-a` flag stands for "all" as in commit all changes. ### `git push` To move your local changes to github so that they are saved and also for sharing with others, you need to `git push`. After making whatever commits you want to make, you can run this command to finalize your changes on github. ### `git merge` If changes have been made in multiple places (say, by another person working on the same code base, or by yourself on two different machines) between `git push`es, git will try to merge if possible — which it will do if the changes don't overlap and therefore don't require your input. If the changes do overlap, you will have to merge the two versions of the code together. You probably won't need to do this in this class. ### Get set up with GitHub You'll need an account on github if you don't already have one, and to have git installed on your computer. We will interact with github through a terminal window in class and through the website. You may also download the [GitHub Desktop application](https://desktop.github.com/) to use if you prefer, though we won't be able to help you much with the details of it. ### Create a new repo In your account on the github webpage, click on the Repositories tab and then the green "New" button in the upper right. You'll need to keep this repo public. After creating this new repo, follow the instructions on the next page for quick setup or to create a new repository on the command line. This makes it so that you have the repo both on github and on your local machine. ### Clone a repo In a terminal window in the location you want to save the repo materials, type: `git clone [repo address, e.g. https://github.com/snifflesnrumjum/python4geosciences]`.
# Optimization

> Marcos Duarte
> Laboratory of Biomechanics and Motor Control ([http://demotu.org/](http://demotu.org/))
> Federal University of ABC, Brazil

<div style="text-align: right"> <i>If there occur some changes in nature, the amount of action necessary for this change must be as small as possible.</i> <br>Maupertuis (sec XVIII) </div>

**Optimization is the process of finding the best value from possible alternatives with regard to a certain criterion** ([Wikipedia](http://en.wikipedia.org/wiki/Mathematical_optimization)). Typically, the best value is the one that maximizes or minimizes the criterion. In this context, to solve a (mathematical) optimization problem is to find the maximum or minimum (a.k.a., a stationary point) of a function (and we can use maximum or minimum interchangeably because the maximum of a function is the minimum of the negative of that function).

To solve an optimization problem, we first have to model the problem and define the objective, the variables, and the constraints of the problem. In optimization, these terms are usually defined as:

1. Objective function (also called cost, loss, utility, or fitness function): a function describing what we want to optimize.
2. Design variable(s): variables that will be manipulated to optimize the cost function.
3. Constraint functions: a set of constraints, equalities or inequalities, that restricts the possible values of the design variables (the candidate or feasible solutions, forming the feasible set).

A feasible solution that minimizes (or maximizes) the objective function is called an optimal solution. The optimization problem is the calculation of the minimum or maximum values of an objective function over a set of **unknown** possible values of the design variables. Even in the case of a finite number of possible values of the objective function and design variables (e.g., after discretization and a manual or a grid search), in general the evaluation of the objective function is computationally expensive and should be avoided. Of note, if searching is the only option, a random search is in fact more efficient than a manual or a grid search! See [Bergstra, Bengio (2012)](http://jmlr.csail.mit.edu/papers/volume13/bergstra12a/bergstra12a.pdf).

A typical optimization problem: the [Knapsack problem](https://en.wikipedia.org/wiki/Knapsack_problem). Read more about that in [Introduction to Optimization](http://neos-guide.org/content/optimization-introduction) from the [NEOS Guide](http://neos-guide.org/).

## Some jargon in mathematical optimization

- **Linear versus nonlinear optimization**: linear optimization refers to the case where the objective function and the constraints are linear mathematical functions. When the objective function is linear, an optimal solution is always found at the constraint boundaries and a local optimum is also a global optimum. See [Wikipedia 1](https://en.wikipedia.org/wiki/Linear_programming) and [Wikipedia 2](https://en.wikipedia.org/wiki/Nonlinear_programming).
- **Constrained versus unconstrained optimization**: in constrained optimization the design variables must satisfy explicit constraints, whereas in unconstrained optimization there are no constraints.
- **Convex optimization**: the field of optimization that deals with finding the minimum of convex functions (or the maximum of concave functions) over a convex constraint set. The convexity of a function facilitates the optimization because a local minimum must be a global minimum and first-order conditions (the first derivatives) are sufficient conditions for finding the optimal solution.
Note that although convex optimization is a particular case of nonlinear optimization, it is a relatively simple optimization problem, with robust and mature methods of solution. See [Wikipedia](https://en.wikipedia.org/wiki/Convex_optimization). - **Multivariate optimization**: optimization of a function of several variables. - **Multimodal optimization**: optimization of a function with several local minima to find the multiple (locally) optimal solutions, as opposed to a single best solution. - **Multi-objective optimization**: optimization involving more than one objective function to be optimized simultaneously. - **Optimal control**: finding a control law for a given system such that a certain optimality criterion is achieved. See [Wikipedia](https://en.wikipedia.org/wiki/Optimal_control). - **Quadratic programming**: optimization of a quadratic function subject to linear constraints. See [Wikipedia](https://en.wikipedia.org/wiki/Quadratic_programming). - **Simplex algorithm**: linear optimization algorithm that begins at a starting vertex and moves along the edges of the polytope (the feasible region) until it reaches the vertex of the optimum solution. See [Wikipedia](https://en.wikipedia.org/wiki/Simplex_algorithm). ## Maxima and minima In mathematics, the maximum and minimum of a function are the largest and smallest values that the function takes at a point either within a neighborhood (local) or on the function entire domain (global) ([Wikipedia](http://en.wikipedia.org/wiki/Maxima_and_minima)). For a function of one variable, if the maximum or minimum of a function is not at the limits of the domain and if at least the first and second derivatives of the function exist, a maximum and minimum can be found as the point where the first derivative of the function is zero. If the second derivative on that point is positive, then it's a minimum, if it is negative, it's a maximum. <div class='center-align'><figure><img src='./../images/maxmin.png' width=350 alt='minima and maxima of a function'/> <figcaption><center><i>Figure. Maxima and minima of a function of one variable.</i></center></figcaption> </figure></div> - Note that the requirement that the second derivative on the extremum to be positive for a minimum or negative for a maximum is sufficient but not a necessary condition. For instance, the function $f(x)=x^4$ has an extremum in $x=0$ since $f'(x)=4x^3$ and $f'(0)=0$, but its second derivative at $x=0$ is also zero: $f''(x)=12x^2;\: f''(0)=0$. In fact, the requirement is that the first non-zero derivative on that point should be positive for a minimum or negative for a maximum: $f''''(0)=24$; the extremum is a minimum. Let's now apply optimization to solve a problem with a univariate function. ``` # import Python libraries import numpy as np %matplotlib inline import matplotlib import matplotlib.pyplot as plt import sympy as sym from sympy.plotting import plot import pandas as pd from IPython.display import display from IPython.core.display import Math ``` ### Example 1: Maximum volume of a cardboard box We want to make a box from a square cardboard with side $a$ such that its volume should be maximum. What is the optimal distance where the square cardboard should be cut and folded to make a box with maximum volume? <div class='center-align'><figure><img src='./../images/box.png' width=450 alt='box optimization'/> <figcaption><center><i>Figure. A box to be made from a cardboard such that its volume should be maximum. 
Where we should cut?</i></center></figcaption> </figure></div> If the distance where to cut and fold the cardboard is $b$, see figure above, the volume of the box will be: \begin{equation} \begin{array}{l l} V(b) = b(a-2b)(a-2b) \\ \\ V(b) = a^2b - 4ab^2 + 4b^3 \end{array} \label{} \end{equation} In the context of optimization: **The expression for $V$ is the cost function, $b$ is the design variable, and the constraint is that feasible values of $b$ are in the interval $]0, \dfrac{a}{2}[$, i.e., $b>0$ and $b<\dfrac{a}{2}$.** The first and second derivatives of $V$ w.r.t. $b$ are: \begin{equation} \begin{array}{l l} \dfrac{\mathrm{d}V}{\mathrm{d}b} = a^2 - 8ab + 12b^2 \\ \\ \dfrac{\mathrm{d}^2 V}{\mathrm{d}b^2} = - 8a + 24b \end{array} \label{} \end{equation} We have to find the values for $b$ where the first derivative of $V$ is zero (the extrema) and then use the expression for the second derivative of $V$ to find whether each of these extrema is a minimum (positive value) or a maximum (negative value). Let's use Sympy for that: ``` a, b = sym.symbols('a b') V = b*(a - 2*b)*(a - 2*b) Vdiff = sym.expand(sym.diff(V, b)) roots = sym.solve(Vdiff, b) display(Math(sym.latex('Roots:') + sym.latex(roots))) roots ``` Discarding the solution $b=\dfrac{a}{2}$ (where $V=0$, which is a minimum), $b=\dfrac{a}{6}$ results in the maximum volume. We can check that by plotting the volume of the cardboard box for $a=1$ and $b: [0,\:0.5]$: ``` plot(V.subs({a: 1}), (b, 0, .5), xlabel='b', ylabel='V') display(Math(sym.latex('V_{a=1}^{max}(b=%s)=%s' %(roots[0].evalf(n=4, subs={a: 1}), V.evalf(n=3, subs={a: 1, b: roots[0]}))))) ``` - Note that although the problem above is a case of nonlinear constrained optimization, because the objective function is univariate, well-conditioned and the constraints are linear inequalities, the optimization is simple. Unfortunately, this is seldom the case. ## Curve fitting as an optimization problem Curve fitting is the process of fitting a model, expressed in terms of a mathematical function, that depends on adjustable parameters to a series of data points and once adjusted, that curve has the best fit to the data points. The general approach to the fitting procedure involves the definition of a merit function that measures the agreement between data and model. The model parameters are then adjusted to yield the best-fit parameters as a problem of minimization (an optimization problem, where the merit function is the cost function). A classical solution, termed least-squares fitting, is to find the best fit by minimizing the sum of the squared differences between data points and the model function (the sum of squared residuals as the merit function). For more on curve fitting see the video below and the notebook [Curve fitting](http://nbviewer.jupyter.org/github/demotu/BMC/blob/master/notebooks/CurveFitting.ipynb). ``` from IPython.display import YouTubeVideo YouTubeVideo('Rxp7o7_RxII', width=480, height=360, rel=0) ``` ## Gradient descent Gradient descent is a first-order iterative optimization algorithm for finding the minimum of a function ([Wikipedia](https://en.wikipedia.org/wiki/Gradient_descent)). In the gradient descent algorithm, a local minimum of a function is found starting from an initial point and taking steps proportional to the negative of the derivative of the function (gradient) at the current point and we evaluate if the current point is lower than then the previous point until a local minimum in reached (hopefully). 
It follows that, if \begin{equation} x_{n+1} = x_n - \gamma \nabla f(x) \label{} \end{equation} for $\gamma$ small enough, then $f(x_{n}) \geq f(x_{n+1})$. This process is repeated iteratively until the step size (which is proportional to the gradient!) is below a required precision (hopefully the sequence $x_{n}$ converges to the desired local minimum). ### Example 2: Minimum of a function by gradient descent From https://en.wikipedia.org/wiki/Gradient_descent: Calculate the minimum of $f(x)=x^4-3x^3+2$. ``` # From https://en.wikipedia.org/wiki/Gradient_descent # The local minimum of $f(x)=x^4-3x^3+2$ is at x=9/4 cur_x = 6 # The algorithm starts at x=6 gamma = 0.01 # step size multiplier precision = 0.00001 step_size = 1 # initial step size max_iters = 10000 # maximum number of iterations iters = 0 # iteration counter f = lambda x: x**4 - 3*x**3 + 2 # lambda function for f(x) df = lambda x: 4*x**3 - 9*x**2 # lambda function for the gradient of f(x) while (step_size > precision) & (iters < max_iters): prev_x = cur_x cur_x -= gamma*df(prev_x) step_size = abs(cur_x - prev_x) iters+=1 print('True local minimum at {} with function value {}.'.format(9/4, f(9/4))) print('Local minimum by gradient descent at {} with function value {}.'.format(cur_x, f(cur_x))) ``` ## Multivariate optimization When there is more than one design variable (the cost function depends on more than one variable), it's a multivariate optimization. The general idea of finding minimum and maximum values where the derivatives are zero still holds for a multivariate function. The second derivative of a multivariate function can be described by the Hessian matrix: \begin{equation} \mathbf{H} = \begin{bmatrix}{\dfrac {\partial ^{2}f}{\partial x_{1}^{2}}}&{\dfrac {\partial ^{2}f}{\partial x_{1}\,\partial x_{2}}}&\cdots &{\dfrac {\partial ^{2}f}{\partial x_{1}\,\partial x_{n}}}\\[2.2ex]{\dfrac {\partial ^{2}f}{\partial x_{2}\,\partial x_{1}}}&{\dfrac {\partial ^{2}f}{\partial x_{2}^{2}}}&\cdots &{\dfrac {\partial ^{2}f}{\partial x_{2}\,\partial x_{n}}}\\[2.2ex]\vdots &\vdots &\ddots &\vdots \\[2.2ex]{\dfrac {\partial ^{2}f}{\partial x_{n}\,\partial x_{1}}}&{\dfrac {\partial ^{2}f}{\partial x_{n}\,\partial x_{2}}}&\cdots &{\dfrac {\partial ^{2}f}{\partial x_{n}^{2}}} \end{bmatrix} \label{} \end{equation} Let's see now a classical problem in biomechanics where optimization is useful and there is more than one design variable. ## The distribution problem in biomechanics Using the inverse dynamics approach in biomechanics, we can determine the net force and torque acting on a joint if we know the external forces on the segments and the kinematics and inertial properties of the segments. But with this approach we are unable to determine the individual muscles forces that created such torque, as expressed in the following equation: \begin{equation} M_{total} = M_1 + M_2 + \dots + M_n = r_1F_1 + r_2F_2 + \dots + r_nF_n \label{} \end{equation} where $r_i$ is the moment arm of the force $F_i$ that generates a torque $M_i$, a parcel of the (known) total torque $M_{total}$. Even if we know the moment arm of each muscle (e.g., from cadaveric data or from image analysis), the equation above has $n$ unknowns. Because there is more than one muscle that potentially created such torque, there are more unknowns than equations, and the problem is undetermined. So, the problem is how to find how the torque is distributed among the muscles of that joint. 
One solution is to consider that we (biological systems) optimize our effort in order to minimize energy expenditure, stresses on our tissues, fatigue, etc. The principle of least action, stated in the opening of this text, is an allusion that optimization might be ubiquitous in nature. With this rationale, let's solve the distribution problem in biomechanics using optimization and find the minimum force of each muscle necessary to complete a given task. The following cost functions have been proposed to solve the distribution problem in biomechanics: \begin{equation} \begin{array}{l l} \displaystyle\sum_{i=1}^N F_i \quad &\text{e.g., Seireg and Arkivar (1973)} \\ \displaystyle\sum_{i=1}^N F_i^2 \quad & \\ \displaystyle\sum_{i=1}^N \left(\dfrac{F_i}{pcsa_i}\right)^2 \quad &\text{e.g., Crowninshield and Brand (1981)} \\ \displaystyle\sum_{i=1}^N \left(\dfrac{F_i}{M_{max,i}}\right)^3 \quad &\text{e.g., Herzog (1987)} \end{array} \label{} \end{equation} Where $pcsa_i$ is the physiological cross-sectional area of muscle $i$ and $M_{max,i}$ is the maximum torque muscle $i$ can produce. Each muscle force $F_i$ is a design variable and the following constraints must be satisfied: \begin{equation} \begin{array}{l l} 0 \leq F_i \leq F_{max} \\ \displaystyle\sum_{i=1}^N r_i \times F_i = M \end{array} \label{} \end{equation} Let's apply this concept to solve a distribution problem in biomechanics. ### Muscle force estimation Consider the following main flexors of the elbow joint (see figure below): biceps long head, biceps short head, and brachialis. Suppose that the elbow net joint torque determined using inverse dynamics is 20 Nm (flexor). How much each of these muscles contributed to the net torque? <div class='center-align'><figure><img src='./../images/elbowflexors.png' alt='Elbow flexors'/> <figcaption><center><i>Figure. A view in OpenSim of the arm26 model showing three elbow flexors (Biceps long and short heads and Brachialis).</i></center></figcaption> </figure></div> For the optimization, we will need experimental data for the moment arm, maximum moment, and *pcsa* of each muscle. Let's import these data from the OpenSim arm26 model: ``` # time elbow_flexion BIClong BICshort BRA r_ef = np.loadtxt('./../data/r_elbowflexors.mot', skiprows=7) f_ef = np.loadtxt('./../data/f_elbowflexors.mot', skiprows=7) ``` The maximum isometric force of these muscles are defined in the arm26 model as: Biceps long head: 624.3 N, Biceps short head: 435.56 N, and Brachialis: 987.26 N. Let's compute the mamimum torques that each muscle could produce considering a static situation at the different elbow flexion angles: ``` m_ef = r_ef*1 m_ef[:, 2:] = r_ef[:, 2:]*f_ef[:, 2:] ``` And let's visualize these data: ``` labels = ['Biceps long head', 'Biceps short head', 'Brachialis'] fig, ax = plt.subplots(nrows=1, ncols=3, sharex=True, figsize=(10, 4)) ax[0].plot(r_ef[:, 1], r_ef[:, 2:]) #ax[0].set_xlabel('Elbow angle $(\,^o)$') ax[0].set_title('Moment arm (m)') ax[1].plot(f_ef[:, 1], f_ef[:, 2:]) ax[1].set_xlabel('Elbow angle $(\,^o)$', fontsize=16) ax[1].set_title('Maximum force (N)') ax[2].plot(m_ef[:, 1], m_ef[:, 2:]) #ax[2].set_xlabel('Elbow angle $(\,^o)$') ax[2].set_title('Maximum torque (Nm)') ax[2].legend(labels, loc='best', framealpha=.5) ax[2].set_xlim(np.min(r_ef[:, 1]), np.max(r_ef[:, 1])) plt.tight_layout() plt.show() ``` These data don't have the *pcsa* value of each muscle. 
We will estimate the *pcsa* considering that the amount of maximum muscle force generated per area is constant and equal to 50N/cm$^2$. Consequently, the *pcsa* (in cm$^2$) for each muscle is: ``` a_ef = np.array([624.3, 435.56, 987.26])/50 # 50 N/cm2 print(a_ef) ``` ### Static versus dynamic optimization In the context of biomechanics, we can solve the distribution problem separately for each angle (instant) of the elbow; we will refer to that as static optimization. However, there is no guarantee that when we analyze all these solutions across the range of angles, they will be the best solution overall. One reason is that static optimization ignores the time history of the muscle force. Dynamic optimization refers to the optimization over a period of time. For such, we will need to input a cost function spanning the entire period of time at once. Dynamic optimization usually has a higher computational cost than static optimization. For now, we will solve the present problem using static optimization. ### Solution of the optimization problem For the present case, we are dealing with a problem of minimization, multidimensional (function of several variables), nonlinear, constrained, and we can't assume that the cost function is convex. Numerical optimization is hardly a simple task. There are many different algorithms and public and commercial software for performing optimization. For instance, look at [NEOS Server](http://www.neos-server.org/neos/), a free internet-based service for solving numerical optimization problems. We will solve the present problem using the [scipy.optimize](http://docs.scipy.org/doc/scipy/reference/optimize.html#module-scipy.optimize) package which provides several optimization algorithms. We will use the function `minimize`: ```python scipy.optimize.minimize(fun, x0, args=(), method=None, jac=None, hess=None, hessp=None, bounds=None, constraints=(), tol=None, callback=None, options=None) """Minimization of scalar function of one or more variables.""" ``` Now, let's write Python functions for each cost function: ``` from scipy.optimize import minimize def cf_f1(x): """Cost function: sum of forces.""" return x[0] + x[1] + x[2] def cf_f2(x): """Cost function: sum of forces squared.""" return x[0]**2 + x[1]**2 + x[2]**2 def cf_fpcsa2(x, a): """Cost function: sum of squared muscle stresses.""" return (x[0]/a[0])**2 + (x[1]/a[1])**2 + (x[2]/a[2])**2 def cf_fmmax3(x, m): """Cost function: sum of cubic forces normalized by moments.""" return (x[0]/m[0])**3 + (x[1]/m[1])**3 + (x[2]/m[2])**3 ``` Let's also define the Jacobian for each cost function (which is an optional parameter for the optimization): ``` def cf_f1d(x): """Derivative of cost function: sum of forces.""" dfdx0 = 1 dfdx1 = 1 dfdx2 = 1 return np.array([dfdx0, dfdx1, dfdx2]) def cf_f2d(x): """Derivative of cost function: sum of forces squared.""" dfdx0 = 2*x[0] dfdx1 = 2*x[1] dfdx2 = 2*x[2] return np.array([dfdx0, dfdx1, dfdx2]) def cf_fpcsa2d(x, a): """Derivative of cost function: sum of squared muscle stresses.""" dfdx0 = 2*x[0]/a[0]**2 dfdx1 = 2*x[1]/a[1]**2 dfdx2 = 2*x[2]/a[2]**2 return np.array([dfdx0, dfdx1, dfdx2]) def cf_fmmax3d(x, m): """Derivative of cost function: sum of cubic forces normalized by moments.""" dfdx0 = 3*x[0]**2/m[0]**3 dfdx1 = 3*x[1]**2/m[1]**3 dfdx2 = 3*x[2]**2/m[2]**3 return np.array([dfdx0, dfdx1, dfdx2]) ``` Let's define initial values: ``` M = 20 # desired torque at the elbow iang = 69 # which will give the closest value to 90 degrees r = r_ef[iang, 2:] f0 = f_ef[iang, 2:] a = 
a_ef m = m_ef[iang, 2:] x0 = f_ef[iang, 2:]/10 # far from the correct answer for the sum of torques print('M =', M) print('x0 =', x0) print('r * x0 =', np.sum(r*x0)) ``` Inequality constraints (such as boundaries in our problem) can be entered with the parameter `bounds` to the `minimize` function: ``` bnds = ((0, f0[0]), (0, f0[1]), (0, f0[2])) ``` Equality constraints (such as the sum of torques should equals the desired torque in our problem), as well as inequality constraints, can be entered with the parameter `constraints` to the `minimize` function (and we can also opt to enter the Jacobian of these constraints): ``` # use this in combination with the parameter bounds: cons = ({'type': 'eq', 'fun' : lambda x, r, f0, M: np.array([r[0]*x[0] + r[1]*x[1] + r[2]*x[2] - M]), 'jac' : lambda x, r, f0, M: np.array([r[0], r[1], r[2]]), 'args': (r, f0, M)}) # to enter everything as constraints: cons = ({'type': 'eq', 'fun' : lambda x, r, f0, M: np.array([r[0]*x[0] + r[1]*x[1] + r[2]*x[2] - M]), 'jac' : lambda x, r, f0, M: np.array([r[0], r[1], r[2]]), 'args': (r, f0, M)}, {'type': 'ineq', 'fun' : lambda x, r, f0, M: f0[0]-x[0], 'jac' : lambda x, r, f0, M: np.array([-1, 0, 0]), 'args': (r, f0, M)}, {'type': 'ineq', 'fun' : lambda x, r, f0, M: f0[1]-x[1], 'jac' : lambda x, r, f0, M: np.array([0, -1, 0]), 'args': (r, f0, M)}, {'type': 'ineq', 'fun' : lambda x, r, f0, M: f0[2]-x[2], 'jac' : lambda x, r, f0, M: np.array([0, 0, -1]), 'args': (r, f0, M)}, {'type': 'ineq', 'fun' : lambda x, r, f0, M: x[0], 'jac' : lambda x, r, f0, M: np.array([1, 0, 0]), 'args': (r, f0, M)}, {'type': 'ineq', 'fun' : lambda x, r, f0, M: x[1], 'jac' : lambda x, r, f0, M: np.array([0, 1, 0]), 'args': (r, f0, M)}, {'type': 'ineq', 'fun' : lambda x, r, f0, M: x[2], 'jac' : lambda x, r, f0, M: np.array([0, 0, 1]), 'args': (r, f0, M)}) ``` Although more verbose, if all the Jacobians of the constraints are also informed, this alternative seems better than informing bounds for the optimization process (less error in the final result and less iterations). Given the characteristics of the problem, if we use the function `minimize` we are limited to the SLSQP (Sequential Least SQuares Programming) solver. 
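For comparison, here is a minimal sketch of the first alternative mentioned above (the `bounds` parameter combined with only the torque equality constraint); it assumes the variables `x0`, `r`, `M`, `bnds` and the cost function `cf_f2`/`cf_f2d` defined earlier:

```
# Sketch: same problem, but using `bounds` for the force limits and keeping
# only the equality constraint for the net joint torque.
cons_eq = ({'type': 'eq',
            'fun': lambda x: np.sum(r*x) - M,
            'jac': lambda x: r},)
res_b = minimize(fun=cf_f2, x0=x0, jac=cf_f2d, bounds=bnds,
                 constraints=cons_eq, method='SLSQP', options={'disp': True})
print(res_b.x, np.sum(r*res_b.x) - M)  # estimated forces and residual torque error
```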
Finally, let's run the optimization for the four different cost functions and find the optimal muscle forces: ``` f1r = minimize(fun=cf_f1, x0=x0, args=(), jac=cf_f1d, constraints=cons, method='SLSQP', options={'disp': True}) f2r = minimize(fun=cf_f2, x0=x0, args=(), jac=cf_f2d, constraints=cons, method='SLSQP', options={'disp': True}) fpcsa2r = minimize(fun=cf_fpcsa2, x0=x0, args=(a,), jac=cf_fpcsa2d, constraints=cons, method='SLSQP', options={'disp': True}) fmmax3r = minimize(fun=cf_fmmax3, x0=x0, args=(m,), jac=cf_fmmax3d, constraints=cons, method='SLSQP', options={'disp': True}) ``` Let's compare the results for the different cost functions: ``` dat = np.vstack((np.around(r*100,1), np.around(a,1), np.around(f0,0), np.around(m,1))) opt = np.around(np.vstack((f1r.x, f2r.x, fpcsa2r.x, fmmax3r.x)), 1) er = ['-', '-', '-', '-', np.sum(r*f1r.x)-M, np.sum(r*f2r.x)-M, np.sum(r*fpcsa2r.x)-M, np.sum(r*fmmax3r.x)-M] data = np.vstack((np.vstack((dat, opt)).T, er)).T rows = ['$\text{Moment arm}\;[cm]$', '$pcsa\;[cm^2]$', '$F_{max}\;[N]$', '$M_{max}\;[Nm]$', '$\sum F_i$', '$\sum F_i^2$', '$\sum(F_i/pcsa_i)^2$', '$\sum(F_i/M_{max,i})^3$'] cols = ['Biceps long head', 'Biceps short head', 'Brachialis', 'Error in M'] df = pd.DataFrame(data, index=rows, columns=cols) print('\nComparison of different cost functions for solving the distribution problem') df ``` ## Comments The results show that the estimations for the muscle forces depend on the cost function used in the optimization. Which one is correct? This is a difficult question and it's dependent on the goal of the actual task being modeled. Glitsch and Baumann (1997) investigated the effect of different cost functions on the optimization of walking and running and the predicted muscles forces were compared with the electromyographic activity of the corresponding muscles of the lower limb. They found that, among the analyzed cost functions, the minimization of the sum of squared muscle stresses resulted in the best similarity with the actual electromyographic activity. In general, one should always test different algorithms and different initial values before settling for the solution found. Downey (2011), Kitchin (2013), and Kiusalaas (2013) present more examples on numerical optimization. The [NEOS Guide](http://neos-guide.org/) is a valuable source of information on this topic and [OpenOpt](http://openopt.org/) is a good alternative software for numerical optimization in Python. ## Exercises 1. Find the extrema in the function $f(x)=x^3-7.5x^2+18x-10$ analytically and determine if they are minimum or maximum. 2. Find the minimum in the $f(x)=x^3-7.5x^2+18x-10$ using the gradient descent algorithm. 2. Regarding the distribution problem for the elbow muscles presented in this text: a. Test different initial values for the optimization. b. Test other values for the elbow angle where the results are likely to change. 3. In an experiment to estimate forces of the elbow flexors, through inverse dynamics it was found an elbow flexor moment of 10 Nm. Consider the following data for maximum force (F0), moment arm (r), and pcsa (A) of the brachialis, brachioradialis, and biceps brachii muscles: F0 (N): 1000, 250, 700; r (cm): 2, 5, 4; A (cm$^2$): 33, 8, 23, respectively (data from Robertson et al. (2013)). a. Use static optimization to estimate the muscle forces. b. Test the robustness of the results using different initial values for the muscle forces. c. Compare the results for different cost functions. 
## References - Bergstra B, Bengio Y (2012) [Random Search for Hyper-Parameter Optimization](http://jmlr.csail.mit.edu/papers/volume13/bergstra12a/bergstra12a.pdf). Journal of Machine Learning Research, 13, 281-305. - Crowninshield RD, Brand RA (1981) [A physiologically based criterion of muscle force prediction in locomotion](http://www.ncbi.nlm.nih.gov/pubmed/7334039). Journal of Biomechanics, 14, 793–801. - Downey AB (2014) [Physical Modeling in MATLAB](http://greenteapress.com/wp/physical-modeling-in-matlab-2e/). 2nd edition. Green Tea Press. - Herzog W (1987) [Individual muscle force estimations using a non-linear optimal design](http://www.ncbi.nlm.nih.gov/pubmed/3682873). J Neurosci Methods, 21, 167-179. - Glitsch U, Baumann W (1997) [The three-dimensional determination of internal loads in the lower extremity](http://www.ncbi.nlm.nih.gov/pubmed/9456380). Journal of Biomechanics, 30, 1123–1131. - Kitchin J (2013) [pycse - Python Computations in Science and Engineering](http://kitchingroup.cheme.cmu.edu/pycse/). - Kiusalaas (2013) [Numerical methods in engineering with Python 3](http://books.google.com.br/books?id=aJkXoxxoCoUC). 3rd edition. Cambridge University Press. - Nigg BM and Herzog W (2006) [Biomechanics of the Musculo-skeletal System](https://books.google.com.br/books?id=hOIeAQAAIAAJ&dq=editions:ISBN0470017678). 3rd Edition. Wiley. - Robertson G, Caldwell G, Hamill J, Kamen G (2013) [Research Methods in Biomechanics](http://books.google.com.br/books?id=gRn8AAAAQBAJ). 2nd Edition. Human Kinetics. - Seireg A, Arvikar RJ (1973) [A mathematical model for evaluation of forces in lower extremeties of the musculo-skeletal system](http://www.ncbi.nlm.nih.gov/pubmed/4706941). Journal of Biomechanics, 6, 313–322, IN19–IN20, 323–326.
___ <a href='http://www.pieriandata.com'><img src='../Pierian_Data_Logo.png'/></a> ___ <center><em>Copyright Pierian Data</em></center> <center><em>For more information, visit us at <a href='http://www.pieriandata.com'>www.pieriandata.com</a></em></center> # Pandas Data Visualization Exercises This is just a quick exercise to review the various plots we showed earlier. Use <tt>df3.csv</tt> to replicate the following plots. <div class="alert alert-danger" style="margin: 10px"><strong>IMPORTANT NOTE!</strong> Make sure you don't run the cells directly above the example output shown, <br>otherwise you will end up writing over the example output!</div> ``` # RUN THIS CELL import pandas as pd import matplotlib.pyplot as plt %matplotlib inline df3 = pd.read_csv('df3.csv') print(len(df3)) print(df3.head()) ``` So <tt>df3</tt> has 500 records and 3 columns. The data represents factory production numbers and reported numbers of defects on certain days of the week. ### 1. Recreate this scatter plot of 'produced' vs 'defective'. Note the color and size of the points. Also note the figure size. See if you can figure out how to stretch it in a similar fashion. ``` # 1. Recrie este gráfico de dispersão de 'produzido' vs 'defeituoso'. Observe a cor e o tamanho dos pontos. # Observe também o tamanho da figura. Veja se você pode descobrir como esticá-lo de forma semelhante. # CODE HERE df3.plot.scatter(x='produced', y='defective', c='red', figsize=(12,3), s=20) # DON'T WRITE HERE ``` ### 2. Create a histogram of the 'produced' column. ``` # 2. Crie um histograma da coluna 'produzida'. df3['produced'].plot.hist() # DON'T WRITE HERE ``` ### 3. Recreate the following histogram of 'produced', tightening the x-axis and adding lines between bars. ``` # 3. Recrie o seguinte histograma de 'produzido', apertando o eixo x e adicionando linhas entre as barras. df3['produced'].plot.hist(edgecolor='k').autoscale(axis='x', tight=True) # DON'T WRITE HERE ``` ### 4. Create a boxplot that shows 'produced' for each 'weekday' (hint: this is a groupby operation) ``` # 4. Crie um boxplot que mostre 'produzido' para cada 'dia da semana' (dica: esta é uma operação groupby) df3[['weekday','produced']].boxplot(by='weekday', figsize=(12,5)) # DON'T WRITE HERE ``` ### 5. Create a KDE plot of the 'defective' column ``` # 5. Crie um gráfico KDE da coluna 'defeituoso' df3['defective'].plot.kde() # DON'T WRITE HERE ``` ### 6. For the above KDE plot, figure out how to increase the linewidth and make the linestyle dashed.<br>(Note: You would usually <em>not</em> dash a KDE plot line) ``` # 6. Para o gráfico do KDE acima, descubra como aumentar a largura da linha e tornar o estilo de linha tracejada. # (Nota: Você normalmente não traçou uma linha de plotagem do KDE) df3['defective'].plot.kde(ls='--', lw=5) # DON'T WRITE HERE ``` ### 7. Create a <em>blended</em> area plot of all the columns for just the rows up to 30. (hint: use .loc) ``` # 7. Crie um gráfico de área combinada de todas as colunas apenas para as linhas até 30. (dica: use .loc) ax = df3.loc[0:30].plot.area(stacked=False, alpha=0.4) ax.legend(loc=0) # DON'T WRITE HERE ``` ## Bonus Challenge! <strong>Notice how the legend in our previous figure overlapped some of actual diagram.<br> Can you figure out how to display the legend outside of the plot as shown below?</strong> ``` ax = df3.loc[0:30].plot.area(stacked=False, alpha=0.4) ax.legend(loc=0, bbox_to_anchor=(1.3,0.5)) # DON'T WRITE HERE ``` # Great Job!
# Disk Stacking [link](https://www.algoexpert.io/questions/Disk%20Stacking) ## My Solution ``` def diskStacking(disks): # Write your code here. disks.sort(key=lambda x: x[1]) globalMaxHeight = 0 prevDiskIdx = [None for _ in range(len(disks) + 1)] opt = [0 for _ in range(len(disks) + 1)] for i in range(len(disks)): opt[i + 1] = disks[i][2] for i in range(len(opt)): maxHeight = opt[i] for j in range(i): if disks[i - 1][0] > disks[j][0] \ and disks[i - 1][1] > disks[j][1] \ and disks[i - 1][2] > disks[j][2]: height = opt[j + 1] + disks[i - 1][2] if height > maxHeight: maxHeight = height prevDiskIdx[i] = j + 1 opt[i] = maxHeight if maxHeight > globalMaxHeight: globalMaxHeight = maxHeight globalMaxHeightIdx = i res = [] idx = globalMaxHeightIdx while prevDiskIdx[idx] != None: res.append(idx - 1) idx = prevDiskIdx[idx] if idx != 0: res.append(idx - 1) return [disks[res[i]] for i in reversed(range(len(res)))] def diskStacking(disks): # Write your code here. disks.sort(key=lambda x: x[1]) globalMaxHeight = 0 prevDiskIdx = [None for _ in disks] opt = [disk[2] for disk in disks] for i in range(len(opt)): for j in range(i): if isStackable(disks, i, j): height = opt[j] + disks[i][2] if height > opt[i]: opt[i] = height prevDiskIdx[i] = j if opt[i] > globalMaxHeight: globalMaxHeight = opt[i] globalMaxHeightIdx = i res = [] idx = globalMaxHeightIdx while idx is not None: res.append(idx) idx = prevDiskIdx[idx] return [disks[res[i]] for i in reversed(range(len(res)))] def isStackable(disks, lower, upper): return disks[lower][0] > disks[upper][0] \ and disks[lower][1] > disks[upper][1] \ and disks[lower][2] > disks[upper][2] ``` ## Expert Solution ``` # O(n^2) time | O(n) space def diskStacking(disks): disks.sort(key=lambda disk:disk[2]) heights = [disk[2] for disk in disks] sequences = [None for disk in disks] maxHeightIdx = 0 for i in range(1, len(disks)): currentDisk = disks[i] for j in range(0, i): otherDisk = disks[j] if areValidDimensions(otherDisk, currentDisk): if heights[i] <= currentDisk[2] + heights[j]: heights[i] = currentDisk[2] + heights[j] sequences[i] = j if heights[i] >= heights[maxHeightIdx]: maxHeightIdx = i return buildSequence(disks, sequences, maxHeightIdx) def areValidDimensions(o, c): return o[0] < c[0] and o[1] < c[1] and o[2] < c[2] def buildSequence(array, sequences, currentIdx): sequence = [] while currentIdx is not None: sequence.append(array[currentIdx]) currentIdx = sequences[currentIdx] return list(reversed(sequence)) ``` ## Thoughts - the difference between my solution 1 and 2 is the way considering the edge case that solution 1 set null input as an edge case and store the optimal value of it at `opt[0]` as 0. - why need a first sort: - if we don't sort it, the inital order is abitrarily. in an extreme situation, the largest disk which is in the optimal solution may at index 0. our solution will not get an correct answer. - if we sort it first, we at least make sure the correct result could come from the sorted array of input.
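As a quick sanity check, the implementations above can be run on the sample input commonly used for this problem (the expected output below is an assumption based on that sample, not something taken from this notebook):

```
disks = [[2, 1, 2], [3, 2, 3], [2, 2, 8], [2, 3, 4], [1, 3, 1], [4, 4, 5]]
print(diskStacking(disks))
# expected: [[2, 1, 2], [3, 2, 3], [4, 4, 5]] -- a stack of total height 10
```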
``` import re from Bio import SeqIO from Bio.Seq import Seq from Bio.SeqRecord import SeqRecord from Bio.Alphabet import IUPAC from Bio.SeqFeature import SeqFeature, FeatureLocation #first 6 aas of each domain #from uniprot: NL63 (Q6Q1S2), 229e(P15423), oc43 (P36334), hku1 (Q0ZME7) #nl63 s1 domain definition: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2693060/ s1_domains = {'nl63': 'FFTCNS', '229e': 'CQTTNG', 'oc43': 'AVIGDL', 'hku1': 'AVIGDF'} s2_domains = {'nl63': 'SSDNGI', '229e': 'IIAVQP', 'oc43': 'AITTGY', 'hku1': 'SISASY'} rdrp_domains_start = {'oc43': 'SKDTNF'} rdrp_domains_end = {'oc43': 'RSAVMQ'} def write_gene_reference(gene_seq, gene_id, gene_name, gene_description, cov_type, outfile): gene_record = SeqRecord(gene_seq, id= gene_id, name= gene_name, description= gene_description) source_feature = SeqFeature(FeatureLocation(0, len(gene_seq)), type='source', qualifiers={'organsism':cov_type, "mol_type":"genomic RNA"}) gene_record.features.append(source_feature) cds_feature = SeqFeature(FeatureLocation(0, len(gene_seq)), type='CDS', qualifiers={'translation':gene_seq.translate()}) gene_record.features.append(cds_feature) SeqIO.write(gene_record, outfile, 'genbank') def make_s1_s2_reference(cov): spike_reference = '../'+str(cov)+'/config/'+str(cov)+'_spike_reference.gb' with open(spike_reference, "r") as handle: for record in SeqIO.parse(handle, "genbank"): nt_seq = record.seq aa_seq = record.seq.translate() s1_regex = re.compile(f'{s1_domains[cov]}.*(?={s2_domains[cov]})') s1_aa = s1_regex.search(str(aa_seq)).group() s1_aa_coords = [(aa.start(0), aa.end(0)) for aa in re.finditer(s1_regex, str(aa_seq))][0] s1_nt_coords = [s1_aa_coords[0]*3, s1_aa_coords[1]*3] s1_nt_seq = nt_seq[s1_nt_coords[0]: s1_nt_coords[1]] s2_regex = re.compile(f'{s2_domains[cov]}.*') s2_aa = s2_regex.search(str(aa_seq)).group() s2_aa_coords = [(aa.start(0), aa.end(0)) for aa in re.finditer(s2_regex, str(aa_seq))][0] s2_nt_coords = [s2_aa_coords[0]*3, s2_aa_coords[1]*3] s2_nt_seq = nt_seq[s2_nt_coords[0]: s2_nt_coords[1]] write_gene_reference(s1_nt_seq, record.id, str(cov)+'_S1', 'spike s1 subdomain', cov, '../'+str(cov)+'/config/'+str(cov)+'_s1_reference.gb') write_gene_reference(s2_nt_seq, record.id, str(cov)+'_S2', 'spike s2 subdomain', cov, '../'+str(cov)+'/config/'+str(cov)+'_s2_reference.gb') # covs = ['oc43', '229e', 'nl63', 'hku1'] covs = ['229e'] for cov in covs: make_s1_s2_reference(cov) def make_rdrp_reference(cov): replicase_reference = '../'+str(cov)+'/config/'+str(cov)+'_replicase1ab_reference.gb' with open(replicase_reference, "r") as handle: for record in SeqIO.parse(handle, "genbank"): nt_seq = record.seq aa_seq = record.seq.translate() rdrp_regex = re.compile(f'{rdrp_domains_start[cov]}.*{rdrp_domains_end[cov]}') rdrp_aa = rdrp_regex.search(str(aa_seq)).group() rdrp_aa_coords = [(aa.start(0), aa.end(0)) for aa in re.finditer(rdrp_regex, str(aa_seq))][0] rdrp_nt_coords = [rdrp_aa_coords[0]*3, rdrp_aa_coords[1]*3] rdrp_nt_seq = nt_seq[rdrp_nt_coords[0]: rdrp_nt_coords[1]] write_gene_reference(rdrp_nt_seq, record.id, str(cov)+'_rdrp', 'rna-dependent rna polymerase', cov, '../'+str(cov)+'/config/'+str(cov)+'_rdrp_reference.gb') ```
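`make_rdrp_reference` is defined above but never called; presumably it would be invoked the same way as `make_s1_s2_reference`. Since only `'oc43'` has RdRp start/end motifs defined, a call might look like the sketch below (an assumption — the original notebook stops before running it):

```
# Hypothetical usage of the RdRp helper defined above; only 'oc43' has entries
# in rdrp_domains_start/rdrp_domains_end, so it is the only valid input here.
make_rdrp_reference('oc43')
```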
``` %matplotlib inline ``` # Active Contour Model The active contour model is a method to fit open or closed splines to lines or edges in an image [1]_. It works by minimising an energy that is in part defined by the image and part by the spline's shape: length and smoothness. The minimization is done implicitly in the shape energy and explicitly in the image energy. In the following two examples the active contour model is used (1) to segment the face of a person from the rest of an image by fitting a closed curve to the edges of the face and (2) to find the darkest curve between two fixed points while obeying smoothness considerations. Typically it is a good idea to smooth images a bit before analyzing, as done in the following examples. We initialize a circle around the astronaut's face and use the default boundary condition ``boundary_condition='periodic'`` to fit a closed curve. The default parameters ``w_line=0, w_edge=1`` will make the curve search towards edges, such as the boundaries of the face. .. [1] *Snakes: Active contour models*. Kass, M.; Witkin, A.; Terzopoulos, D. International Journal of Computer Vision 1 (4): 321 (1988). DOI:`10.1007/BF00133570` ``` import numpy as np import matplotlib.pyplot as plt from skimage.color import rgb2gray from skimage import data from skimage.filters import gaussian from skimage.segmentation import active_contour img = data.astronaut() img = rgb2gray(img) s = np.linspace(0, 2*np.pi, 400) r = 100 + 100*np.sin(s) c = 220 + 100*np.cos(s) init = np.array([r, c]).T snake = active_contour(gaussian(img, 3), init, alpha=0.015, beta=10, gamma=0.001, coordinates='rc') fig, ax = plt.subplots(figsize=(7, 7)) ax.imshow(img, cmap=plt.cm.gray) ax.plot(init[:, 1], init[:, 0], '--r', lw=3) ax.plot(snake[:, 1], snake[:, 0], '-b', lw=3) ax.set_xticks([]), ax.set_yticks([]) ax.axis([0, img.shape[1], img.shape[0], 0]) plt.show() ``` Here we initialize a straight line between two points, `(5, 136)` and `(424, 50)`, and require that the spline has its end points there by giving the boundary condition `boundary_condition='fixed'`. We furthermore make the algorithm search for dark lines by giving a negative `w_line` value. ``` img = data.text() r = np.linspace(136, 50, 100) c = np.linspace(5, 424, 100) init = np.array([r, c]).T snake = active_contour(gaussian(img, 1), init, boundary_condition='fixed', alpha=0.1, beta=1.0, w_line=-5, w_edge=0, gamma=0.1, coordinates='rc') fig, ax = plt.subplots(figsize=(9, 5)) ax.imshow(img, cmap=plt.cm.gray) ax.plot(init[:, 1], init[:, 0], '--r', lw=3) ax.plot(snake[:, 1], snake[:, 0], '-b', lw=3) ax.set_xticks([]), ax.set_yticks([]) ax.axis([0, img.shape[1], img.shape[0], 0]) plt.show() ```
``` import pandas as pd import numpy as np import sklearn.neighbors from sklearn.model_selection import train_test_split from sklearn.neighbors import KNeighborsClassifier from sklearn.metrics import accuracy_score import matplotlib.pyplot as plt from sklearn.metrics import confusion_matrix from sklearn.preprocessing import StandardScaler scaler = StandardScaler() from sklearn.neighbors import NearestCentroid import math from sklearn.linear_model import LogisticRegression import seaborn as sn import matplotlib.pyplot as plt df = pd.read_csv('./heart_failure_clinical_records_dataset.csv') df = df[['creatinine_phosphokinase', 'serum_creatinine', 'serum_sodium', 'platelets', 'DEATH_EVENT']] df ``` # Question1 ``` df0 = df[df.DEATH_EVENT == 0] df0Feature = df0[['creatinine_phosphokinase', 'serum_creatinine', 'serum_sodium', 'platelets']] df1 = df[df.DEATH_EVENT == 1] df1Feature = df1[['creatinine_phosphokinase', 'serum_creatinine', 'serum_sodium', 'platelets']] # corr matrix for death_event 0 sn.heatmap(df0Feature.corr(), annot=True) plt.show() # corr matrix for death_event 1 sn.heatmap(df1Feature.corr(), annot=True) plt.show() ``` # Question2 Group3--> X = serum sodium, Y = serum creatinine ``` def residual(yTest, yPredict): temp = 0 for (a, b) in zip(yTest, yPredict): temp += (a-b) * (a - b) return temp zeroSet = df0[[ 'serum_creatinine', 'serum_sodium']] oneSet = df1[[ 'serum_creatinine', 'serum_sodium']] # for death_evetn = 0 # simple linear regression x = zeroSet['serum_sodium'] y = zeroSet['serum_creatinine'] degree = 1 xTrain, xTest, yTrain, yTest = train_test_split(x, y, test_size= 0.5, random_state = 0) weights = np.polyfit(xTrain, yTrain, degree) model = np.poly1d(weights) yPredict = model(xTest) print(weights) print(residual(yTest, yPredict)) plt.figure() plt.scatter(xTest, yTest, color = 'green', label = 'True Data') plt.scatter(xTest, yPredict, color = 'red', label = 'Predict') plt.grid() plt.legend() plt.title('linear') # quadratic x = zeroSet['serum_sodium'] y = zeroSet['serum_creatinine'] degree = 2 xTrain, xTest, yTrain, yTest = train_test_split(x, y, test_size= 0.5, random_state = 0) weights = np.polyfit(xTrain, yTrain, degree) model = np.poly1d(weights) yPredict = model(xTest) print(weights) print(residual(yTest, yPredict)) plt.figure() plt.scatter(xTest, yTest, color = 'green', label = 'True Data') plt.scatter(xTest, yPredict, color = 'red', label = 'Predict') plt.grid() plt.legend() plt.title('quadratic') # cubic x = zeroSet['serum_sodium'] y = zeroSet['serum_creatinine'] degree = 3 xTrain, xTest, yTrain, yTest = train_test_split(x, y, test_size= 0.5, random_state = 0) weights = np.polyfit(xTrain, yTrain, degree) model = np.poly1d(weights) yPredict = model(xTest) print(weights) print(residual(yTest, yPredict)) plt.figure() plt.scatter(xTest, yTest, color = 'green', label = 'True Data') plt.scatter(xTest, yPredict, color = 'red', label = 'Predict') plt.grid() plt.legend() plt.title('cubic') # GLM x = zeroSet['serum_sodium'] x = np.log(x) y = zeroSet['serum_creatinine'] degree = 1 xTrain, xTest, yTrain, yTest = train_test_split(x, y, test_size= 0.5, random_state = 0) weights = np.polyfit(xTrain, yTrain, degree) model = np.poly1d(weights) yPredict = model(xTest) print(weights) print(residual(yTest, yPredict)) plt.figure() plt.scatter(xTest, yTest, color = 'green', label = 'True Data') plt.scatter(xTest, yPredict, color = 'red', label = 'Predict') plt.grid() plt.legend() plt.title('y = alog(x) + b') # GLM x = zeroSet['serum_sodium'] x = np.log(x) y = 
zeroSet['serum_creatinine'] y = np.log(y) degree = 1 xTrain, xTest, yTrain, yTest = train_test_split(x, y, test_size= 0.5, random_state = 0) weights = np.polyfit(xTrain, yTrain, degree) model = np.poly1d(weights) yPredict = model(xTest) print(weights) print(residual(np.exp(yTest), np.exp(yPredict))) plt.figure() plt.scatter(xTest, yTest, color = 'green', label = 'True Data') plt.scatter(xTest, yPredict, color = 'red', label = 'Predict') plt.grid() plt.legend() plt.title('log(y) = alog(x) + b') # for death_evetn = 1 # simple linear regression x = oneSet['serum_sodium'] y = oneSet['serum_creatinine'] degree = 1 xTrain, xTest, yTrain, yTest = train_test_split(x, y, test_size= 0.5, random_state = 0) weights = np.polyfit(xTrain, yTrain, degree) model = np.poly1d(weights) yPredict = model(xTest) print(weights) print(residual(yTest, yPredict)) plt.figure() plt.scatter(xTest, yTest, color = 'green', label = 'True Data') plt.scatter(xTest, yPredict, color = 'red', label = 'Predict') plt.grid() plt.legend() plt.title('linear') # quadratic x = oneSet['serum_sodium'] y = oneSet['serum_creatinine'] degree = 2 xTrain, xTest, yTrain, yTest = train_test_split(x, y, test_size= 0.5, random_state = 0) weights = np.polyfit(xTrain, yTrain, degree) model = np.poly1d(weights) yPredict = model(xTest) print(weights) print(residual(yTest, yPredict)) plt.figure() plt.scatter(xTest, yTest, color = 'green', label = 'True Data') plt.scatter(xTest, yPredict, color = 'red', label = 'Predict') plt.grid() plt.legend() plt.title('quadratic') # cubic x = oneSet['serum_sodium'] y = oneSet['serum_creatinine'] degree = 3 xTrain, xTest, yTrain, yTest = train_test_split(x, y, test_size= 0.5, random_state = 0) weights = np.polyfit(xTrain, yTrain, degree) model = np.poly1d(weights) yPredict = model(xTest) print(weights) print(residual(yTest, yPredict)) plt.figure() plt.scatter(xTest, yTest, color = 'green', label = 'True Data') plt.scatter(xTest, yPredict, color = 'red', label = 'Predict') plt.grid() plt.legend() plt.title('cubic') # GLM x = oneSet['serum_sodium'] x = np.log(x) y = oneSet['serum_creatinine'] degree = 1 xTrain, xTest, yTrain, yTest = train_test_split(x, y, test_size= 0.5, random_state = 0) weights = np.polyfit(xTrain, yTrain, degree) model = np.poly1d(weights) yPredict = model(xTest) print(weights) print(residual(yTest, yPredict)) plt.figure() plt.scatter(xTest, yTest, color = 'green', label = 'True Data') plt.scatter(xTest, yPredict, color = 'red', label = 'Predict') plt.grid() plt.legend() plt.title('y = alog(x) + b') # GLM x = oneSet['serum_sodium'] x = np.log(x) y = oneSet['serum_creatinine'] y = np.log(y) degree = 1 xTrain, xTest, yTrain, yTest = train_test_split(x, y, test_size= 0.5, random_state = 0) weights = np.polyfit(xTrain, yTrain, degree) model = np.poly1d(weights) yPredict = model(xTest) print(weights) print(residual(np.exp(yTest), np.exp(yPredict))) plt.figure() plt.scatter(xTest, yTest, color = 'green', label = 'True Data') plt.scatter(xTest, yPredict, color = 'red', label = 'Predict') plt.grid() plt.legend() plt.title('log(y) = alog(x) + b') ```
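The five model fits for each `DEATH_EVENT` group above repeat the same split/fit/score/plot pattern, with only the polynomial degree and the log transforms changing. A possible refactor is sketched below; it assumes the `residual` helper and the imports already defined in this notebook and is not part of the original analysis:

```
def fit_and_score(x, y, degree=1, logx=False, logy=False, seed=0):
    """Fit a polynomial on (optionally log-transformed) data; return the weights
    and the residual sum of squares computed on the original scale of y."""
    xt = np.log(x) if logx else x
    yt = np.log(y) if logy else y
    xTrain, xTest, yTrain, yTest = train_test_split(xt, yt, test_size=0.5, random_state=seed)
    weights = np.polyfit(xTrain, yTrain, degree)
    yPredict = np.poly1d(weights)(xTest)
    if logy:  # compare on the original scale, as done above for the log(y) model
        yTest, yPredict = np.exp(yTest), np.exp(yPredict)
    return weights, residual(yTest, yPredict)

# e.g. the cubic model for the DEATH_EVENT == 0 group:
# fit_and_score(zeroSet['serum_sodium'], zeroSet['serum_creatinine'], degree=3)
```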
``` import numpy as np import matplotlib.pyplot as plt from matplotlib.ticker import FormatStrFormatter import cv2 import glob import h5py from skimage.morphology import disk from scipy.stats import pearsonr from scipy.ndimage import gaussian_filter %matplotlib inline %load_ext autoreload %autoreload 2 # for plot figures plt.rcParams['svg.fonttype'] = 'none' def adjust_spines(ax, spines): for loc, spine in ax.spines.items(): if loc in spines: spine.set_position(('outward', 2)) else: spine.set_color('none') if 'left' in spines: ax.yaxis.set_ticks_position('left') else: ax.yaxis.set_ticks([]) if 'bottom' in spines: ax.xaxis.set_ticks_position('bottom') else: ax.xaxis.set_ticks([]) #import data movie_name = "../data/image_twilight_bgr.h5" #read movie real data, real means: after spectral calibration, before gamma correction for the screen def read_sunriseset_from_h5(filename): h5f = h5py.File(filename,'r') img_sunrises=h5f['sunrises_bgr_real'][:] img_sunsets=h5f['sunsets_bgr_real'][:] h5f.close() return img_sunrises,img_sunsets img_sunrises,img_sunsets=read_sunriseset_from_h5(movie_name) print (img_sunrises.shape) print (img_sunsets.shape) #show one example, image real value plt.imshow(img_sunrises[5][...,::-1]) #to better visulaize image, use gamma correction to transfer image real to image view def img_real2view(img): gamma_correction=lambda x:np.power(x,1.0/2.2) img_shape=img.shape # gray image if np.size(img_shape)==2: #uint8 if np.max(img)>1: temp_view=np.zeros_like(img,dtype=np.float32) temp_view=np.float32(img)/255.0#float32, 1.0 temp_view=gamma_correction(temp_view) temp_view2=np.zeros_like(img,dtype=np.uint8) temp_view2=np.uint8(temp_view*255) return temp_view2 #float if np.max(img)<2: return gamma_correction(img) #color image if np.size(img_shape)==3: #uint8 if np.max(img)>1: temp_view=np.zeros_like(img,dtype=np.float32) temp_view=np.float32(img)/255.0#1.0 temp_view=gamma_correction(temp_view) temp_view2=np.zeros_like(img,dtype=np.uint8) temp_view2=np.uint8(temp_view*255)#255 return temp_view2 #float if np.max(img)<2: return gamma_correction(img) #show one example, image view value plt.imshow(img_real2view(img_sunrises[1])[...,::-1]) ``` ### Functions ``` #function: gaussian kernel 1d #input: sigma: std # order: A positive order corresponds to convolution with # that derivative of a Gaussian, use 0 here # radius: radius of the filter def my_gaussian_kernel1d(sigma, order, radius): """ Computes a 1D Gaussian convolution kernel. 
""" if order < 0: raise ValueError('order must be non-negative') p = np.polynomial.Polynomial([0, 0, -0.5 / (sigma * sigma)]) x = np.arange(-radius, radius + 1) phi_x = np.exp(p(x), dtype=np.double) phi_x /= phi_x.sum() if order > 0: q = np.polynomial.Polynomial([1]) p_deriv = p.deriv() for _ in range(order): # f(x) = q(x) * phi(x) = q(x) * exp(p(x)) # f'(x) = (q'(x) + q(x) * p'(x)) * phi(x) q = q.deriv() + q * p_deriv phi_x *= q(x) return phi_x #function: gaussian filter 2d def my_gaussian_kernel2d(sigma,order,radius): g_ker_1d=my_gaussian_kernel1d(sigma, order, radius) g_ker_2d=np.outer(g_ker_1d, g_ker_1d) g_ker_2d /=g_ker_2d.sum() return g_ker_2d #function: my difference of gaussian kernel 1d #input: centersigma is the center sigma, surround sigma=1.5*centersigma, centersigma=RFradius # radius: defalt 3*centersigma #output: kernel size length: 1+3*centersigma*2 def my_DOG_kernel1d(centersigma,order,radius): surroundsigma=1.5*centersigma center_kernel1d=my_gaussian_kernel1d(centersigma,order,radius) surround_kernel1d=my_gaussian_kernel1d(surroundsigma,order,radius) out_kernel1d=center_kernel1d-surround_kernel1d return out_kernel1d #function: my difference of gaussian kernel 2d, mimic retina center-surround onoff #input: centersigma is the center sigma, surround sigma=1.5*centersigma # radius: kernelradius, defalt 3*centersigma #output: kernel size length: 1+3*centersigma*2 def my_DOG_kernel2d(centersigma,order,radius): surroundsigma=1.5*centersigma center_kernel2d=my_gaussian_kernel2d(centersigma,order,radius) surround_kernel2d=my_gaussian_kernel2d(surroundsigma,order,radius) out_kernel2d=center_kernel2d-surround_kernel2d return out_kernel2d #function, calculate ONOFF for single pixel #input: #img: gray image, float, 1.0 (when phase srambled image, may be a little larger than 1.0) #(xx,yy): center coordinate, xx: along height, yy: along width, RFradius: radius of center #output: #onoff value def ONOFF_single(img,xx,yy,centersigma): surroundsigma=np.round(1.5*centersigma) kernelradius=3*centersigma temp=img[xx-kernelradius:xx+kernelradius+1,yy-kernelradius:yy+kernelradius+1] center_kernel2d=my_gaussian_kernel2d(centersigma,0,kernelradius) surround_kernel2d=my_gaussian_kernel2d(surroundsigma,0,kernelradius) centermean=np.sum(temp*center_kernel2d) surroundmean=np.sum(temp*surround_kernel2d) onoff=(centermean-surroundmean)/(centermean+surroundmean+1e-8) return onoff #input: #centersigma is the center sigma #img: image or image region, float #output: onoff_img, float, -1.0 to 1.0 def onoff_wholeimg(img,centersigma): kernelradius=3*centersigma onoff_img=np.zeros((img.shape[0],img.shape[1])) for ii in np.arange(kernelradius,img.shape[0]-kernelradius-1): for jj in np.arange(kernelradius,img.shape[1]-kernelradius-1): onoff_img[ii,jj]=ONOFF_single(img,ii,jj,centersigma) if img.shape[0]==437: mask_con=np.zeros((437,437),np.uint8) cv2.circle(mask_con,(218,218),radius=218-kernelradius,color=255,thickness=-1) mask_con=np.float32(mask_con/255.0) onoff_img=np.multiply(onoff_img,mask_con) return onoff_img #input: onoff_seed: random seed for contrast calculation #onoff_num: random pick numbers #centersigma is the center sigma #img: image or image region, float 1.0 (when phase srambled, may be a little larger than 1.0) #output: the onoff value distribution def onoff_random(onoff_seed,onoff_num,centersigma,img): kernelradius=3*centersigma np.random.seed(onoff_seed+866) walk_height=np.random.choice(np.arange(kernelradius,img.shape[0]-kernelradius-1),onoff_num,replace=False) np.random.seed(onoff_seed+899) 
walk_width=np.random.choice(np.arange(kernelradius,img.shape[1]-kernelradius-1),onoff_num,replace=False) onoffs=np.zeros(onoff_num) for ii in range(onoff_num): onoffs[ii]=ONOFF_single(img,walk_height[ii],walk_width[ii],centersigma) return onoffs #input: onoff_seed: random seed for contrast calculation #onoff_num: total random pick numbers=numberofimages* each_random_pick_numbers #centersigma is the center sigma #imgs: images, all gray images, float 1.0 (when phase srambled, may be a little larger than 1.0) # format like: numberofimages*height*width #output: the onoff value distribution def onoff_random_imgs(onoff_seed,onoff_num,centersigma,imgs): num_imgs=imgs.shape[0] onoffs=[] for ii in range(num_imgs): onoffs.append(onoff_random(onoff_seed+ii,int(np.round(onoff_num/num_imgs)),centersigma,imgs[ii])) onoffs=np.array(onoffs) onoffs=onoffs.flatten() return onoffs #input: onoff_seed: random seed for onoff and local contrast(rms2) calculation #onoff_num: random pick numbers #centersigma is the center sigma for onoff #RFradius for local contrast(rms2) #img: image or image region, float 1.0 (when phase srambled, may be a little larger than 1.0) #output: the onoff and local contrast (rms2) value distribution def onoff_rms2_random(onoff_seed,onoff_num,centersigma,RFradius,img): kernelradius=3*centersigma np.random.seed(onoff_seed+1866) walk_height=np.random.choice(np.arange(kernelradius,img.shape[0]-kernelradius-1),onoff_num,replace=False) np.random.seed(onoff_seed+2899) walk_width=np.random.choice(np.arange(kernelradius,img.shape[1]-kernelradius-1),onoff_num,replace=False) onoffs=np.zeros(onoff_num) rms2s=np.zeros(onoff_num) tempdisk=np.float64(disk(RFradius)) for ii in range(onoff_num): onoffs[ii]=ONOFF_single(img,walk_height[ii],walk_width[ii],centersigma) temp=img[walk_height[ii]-RFradius:walk_height[ii]+RFradius+1,\ walk_width[ii]-RFradius:walk_width[ii]+RFradius+1] temp=temp[np.nonzero(tempdisk)] rms2s[ii]=np.std(temp,ddof=1)/(np.mean(temp)+1e-8) return onoffs,rms2s #input: onoff_seed: random seed for contrast calculation #onoff_num: total random pick numbers=numberofimages* each_random_pick_numbers #centersigma is the center sigma for onoff #RFradius for local contrast(rms2) #imgs: images, all gray images, float 1.0 (when phase srambled, may be a little larger than 1.0) # format like: numberofimages*height*width #output: the onoff and local contrast (rms2) value distribution def onoff_rms2_random_imgs(onoff_seed,onoff_num,centersigma,RFradius,imgs): num_imgs=imgs.shape[0] onoffs=[] rms2s=[] for ii in range(num_imgs): temp_onoff,temp_rms2=onoff_rms2_random(onoff_seed+ii,int(np.round(onoff_num/num_imgs)),\ centersigma,RFradius,imgs[ii]) onoffs.append(temp_onoff) rms2s.append(temp_rms2) onoffs=np.array(onoffs) onoffs=onoffs.flatten() rms2s=np.array(rms2s) rms2s=rms2s.flatten() return onoffs,rms2s #function, get the rms2 image of one image, input: #img: image or image region, float, 1.0, could be a little larger than 1.0 for phase scrambled image #RFradius: the radius of the crop area to be estimated the rms2 #output: rms2_img, float, nonnegative def rms2_wholeimg(img,RFradius): tempdisk=np.float64(disk(RFradius)) rms2_img=np.zeros((img.shape[0],img.shape[1])) for ii in np.arange(RFradius,img.shape[0]-RFradius-1): for jj in np.arange(RFradius,img.shape[1]-RFradius-1): temp=img[ii-RFradius:ii+RFradius+1,jj-RFradius:jj+RFradius+1] temp=temp[np.nonzero(tempdisk)]#circular kernel rms2_img[ii,jj]=np.std(temp,ddof=1)/(np.mean(temp)+1e-8) if img.shape[0]==437:#whole image frame, not crop 
mask_con=np.zeros((437,437),np.uint8) cv2.circle(mask_con,(218,218),radius=218-RFradius,color=255,thickness=-1) mask_con=np.float32(mask_con/255.0) rms2_img=np.multiply(rms2_img,mask_con) return rms2_img #input: onoff_seed: random seed for local contrast(rms2) calculation #onoff_num: random pick numbers #RFradius for local contrast(rms2) #img: image or image region, float 1.0 (when phase srambled, may be a little larger than 1.0) #output: the local contrast (rms2) value distribution def rms2_random(onoff_seed,onoff_num,RFradius,img): np.random.seed(onoff_seed+1866) walk_height=np.random.choice(np.arange(RFradius,img.shape[0]-RFradius-1),onoff_num) np.random.seed(onoff_seed+2899) walk_width=np.random.choice(np.arange(RFradius,img.shape[1]-RFradius-1),onoff_num) rms2s=np.zeros(onoff_num) tempdisk=np.float64(disk(RFradius)) for ii in range(onoff_num): temp=img[walk_height[ii]-RFradius:walk_height[ii]+RFradius+1,\ walk_width[ii]-RFradius:walk_width[ii]+RFradius+1] temp=temp[np.nonzero(tempdisk)] rms2s[ii]=np.std(temp,ddof=1)/(np.mean(temp)+1e-8) return rms2s #bootstrapping #apply bootstrapping to estimate standard deviation (error) #statistics can be offratios, median, mean #for offratios, be careful with the threshold #data: for statistics offratios, median, mean: numpy array with shape (sample_size,1) #num_exp: number of experiments, with replacement def bootstrap(statistics,data,num_exp=10000,seed=66): if statistics == 'offratios': def func(x): return len(x[np.where(x<0)])/len(x[np.where(x>0)]) elif statistics == 'median': def func(x): return np.median(x) elif statistics == 'mean': def func(x): return np.mean(x) sta_boot=np.zeros((num_exp)) num_data=len(data) for ii in range(num_exp): np.random.seed(seed+ii) tempind=np.random.choice(num_data,num_data,replace=True) sta_boot[ii]=func(data[tempind]) return np.percentile(sta_boot,2.5),np.percentile(sta_boot,97.5) ``` ### Sunrise: Intensity profile in the dome, radiated from the sun ``` def createLineIterator(P1, P2, img): """ Produces and array that consists of the coordinates and intensities of each pixel in a line between two points Parameters: -P1: a numpy array that consists of the coordinate of the first point (x,y) -P2: a numpy array that consists of the coordinate of the second point (x,y) -img: the image being processed Returns: -it: a numpy array that consists of the coordinates and intensities of each pixel in the radii (shape: [numPixels, 3], row = [x,y,intensity]) """ #define local variables for readability imageH = img.shape[0] imageW = img.shape[1] P1X = P1[0] P1Y = P1[1] P2X = P2[0] P2Y = P2[1] #difference and absolute difference between points #used to calculate slope and relative location between points dX = P2X - P1X dY = P2Y - P1Y dXa = np.abs(dX) dYa = np.abs(dY) #predefine numpy array for output based on distance between points itbuffer = np.empty(shape=(np.maximum(dYa,dXa),3),dtype=np.float32) itbuffer.fill(np.nan) #Obtain coordinates along the line using a form of Bresenham's algorithm negY = P1Y > P2Y negX = P1X > P2X if P1X == P2X: #vertical line segment itbuffer[:,0] = P1X if negY: itbuffer[:,1] = np.arange(P1Y - 1,P1Y - dYa - 1,-1) else: itbuffer[:,1] = np.arange(P1Y+1,P1Y+dYa+1) elif P1Y == P2Y: #horizontal line segment itbuffer[:,1] = P1Y if negX: itbuffer[:,0] = np.arange(P1X-1,P1X-dXa-1,-1) else: itbuffer[:,0] = np.arange(P1X+1,P1X+dXa+1) else: #diagonal line segment steepSlope = dYa > dXa if steepSlope: #slope = dX.astype(np.float32)/dY.astype(np.float32) slope = dX/dY if negY: itbuffer[:,1] = 
np.arange(P1Y-1,P1Y-dYa-1,-1) else: itbuffer[:,1] = np.arange(P1Y+1,P1Y+dYa+1) itbuffer[:,0] = (slope*(itbuffer[:,1]-P1Y)).astype(np.int) + P1X else: #slope = dY.astype(np.float32)/dX.astype(np.float32) slope = dY/dX if negX: itbuffer[:,0] = np.arange(P1X-1,P1X-dXa-1,-1) else: itbuffer[:,0] = np.arange(P1X+1,P1X+dXa+1) itbuffer[:,1] = (slope*(itbuffer[:,0]-P1X)).astype(np.int) + P1Y #Remove points outside of image colX = itbuffer[:,0] colY = itbuffer[:,1] itbuffer = itbuffer[(colX >= 0) & (colY >=0) & (colX<imageW) & (colY<imageH)] #Get intensities from img ndarray itbuffer[:,2] = img[itbuffer[:,1].astype(np.uint),itbuffer[:,0].astype(np.uint)] return itbuffer #show line temp=img_real2view(img_sunrises[0]) lineeg=cv2.line(temp,(198,233),(53,161),(0,0,255),5) plt.imshow(lineeg[...,::-1]) #one example point1=(198,233) point2=(53,161) temp=createLineIterator(point1, point2, img_sunrises[0,...,0]) print (temp.shape) #intensity profile point1s=[[198,233],[198,233],[201,222]] point2s=[[53,161],[53,161],[56,150]] intenpro=np.zeros((3,2,145),np.uint8)#3 time points, 2 color channel (UV and G),135 pixels for ii in range(3): for jj in range(2): intenpro[ii,jj]=createLineIterator(point1s[ii], point2s[ii], img_sunrises[ii*2,...,jj])[:,2] intenpro=intenpro/255.0 #plot intensity profile in 3 time points fig, ax = plt.subplots(nrows=1, ncols=1,figsize=(3,3)) ax.plot(intenpro[0,0],color='purple',linestyle='-',label='UV; Time 0') ax.plot(intenpro[1,0],color='purple',linestyle='--',label='UV; Time 2') ax.plot(intenpro[2,0],color='purple',linestyle=':',label='UV; Time 4') ax.plot(intenpro[0,1],color='g',linestyle='-',label='G; Time 0') ax.plot(intenpro[1,1],color='g',linestyle='--',label='G; Time 2') ax.plot(intenpro[2,1],color='g',linestyle=':',label='G; Time 4') ax.legend(loc='best',fontsize=16) ax.set_xticks([0,75,150]) ax.set_xticklabels(([0,35,70])) ax.set_ylim([0,1.0]) ax.set_yticks([0,0.5,1.0]) ax.set_xlabel('RF (degree)', fontsize=16) ax.set_ylabel('Intensity', fontsize=16) adjust_spines(ax, ['left', 'bottom']) handles, labels = ax.get_legend_handles_labels() lgd = ax.legend(handles, labels, loc='center left',frameon=False, bbox_to_anchor=(1, 0.5)) #plot intensity profile in 3 time points fig, ax = plt.subplots(nrows=1, ncols=1,figsize=(3,3)) ax.plot(intenpro[0,0],color='blueviolet',linestyle='-',label='UV; Time 0') ax.plot(intenpro[1,0],color='violet',linestyle='-',label='UV; Time 2') ax.plot(intenpro[2,0],color='purple',linestyle='-',label='UV; Time 4') ax.plot(intenpro[0,1],color='lime',linestyle='-',label='G; Time 0') ax.plot(intenpro[1,1],color='g',linestyle='-',label='G; Time 2') ax.plot(intenpro[2,1],color='yellowgreen',linestyle='-',label='G; Time 4') ax.legend(loc='best',fontsize=16) ax.set_xticks([0,75,150]) ax.set_xticklabels(([0,35,70])) ax.set_ylim([0,1.0]) ax.set_yticks([0,0.5,1.0]) ax.set_xlabel('RF (degree)', fontsize=16) ax.set_ylabel('Intensity', fontsize=16) adjust_spines(ax, ['left', 'bottom']) handles, labels = ax.get_legend_handles_labels() lgd = ax.legend(handles, labels, loc='center left',frameon=False, bbox_to_anchor=(1, 0.5)) ``` ### Sunrise: Dome and tree intensity change along time points ``` temp=img_real2view(img_sunrises[5]) recteg=cv2.rectangle(temp,(168,35),(228,55),(255,255,255),1) plt.imshow(recteg[...,::-1]) #dome intensity domeinten_median=np.zeros((6,2)) #6 time points, 2 color channel (UV and G) domeinten_std=np.zeros((6,2)) domeinten_lowq_higq=np.zeros((6,2,2)) #6 time points, 2 color channel (UV and G), low and high quantiles(percentiles) for ii in range(6): 
for jj in range(2): temp=img_sunrises[ii,35:55,168:228,jj]/255 domeinten_median[ii,jj]=np.median(temp) domeinten_std[ii,jj]=np.std(temp) low_perc,high_perc=bootstrap('median',temp,num_exp=10000,seed=66) domeinten_lowq_higq[ii,jj,0] = domeinten_median[ii,jj]-low_perc #low domeinten_lowq_higq[ii,jj,1] =-domeinten_median[ii,jj]+high_perc #high #tree intensity treeinten_median=np.zeros((6,2))#6 time points, 2 color channel (UV and G) treeinten_std=np.zeros((6,2)) treeinten_lowq_higq=np.zeros((6,2,2)) #6 time points, 2 color channel (UV and G), low and high quantiles(percentiles) for ii in range(6): for jj in range(2): temp=img_sunrises[ii,80:100,230:280,jj]/255 treeinten_median[ii,jj]=np.median(temp) treeinten_std[ii,jj]=np.std(temp) low_perc,high_perc=bootstrap('median',temp,num_exp=10000,seed=6666) treeinten_lowq_higq[ii,jj,0] = treeinten_median[ii,jj]-low_perc #low treeinten_lowq_higq[ii,jj,1] =-treeinten_median[ii,jj]+high_perc #high #median, errorbar: 2.5-97.5 percentils timepoints=[0,1,2,3,4,5] fig, ax = plt.subplots(nrows=1, ncols=1,figsize=(3,3)) ax.errorbar(timepoints,domeinten_median[:,0],yerr=(domeinten_lowq_higq[:,0,0],domeinten_lowq_higq[:,0,1]),marker='o',\ color='purple',linestyle='-', label='Dome UV',alpha=1.0, capsize=4) ax.errorbar(timepoints,domeinten_median[:,1],yerr=(domeinten_lowq_higq[:,1,0],domeinten_lowq_higq[:,1,1]),marker='o',\ color='g', linestyle='-', label='Dome G',alpha=1.0, capsize=4) ax.errorbar(timepoints,treeinten_median[:,0],yerr=(treeinten_lowq_higq[:,0,0],treeinten_lowq_higq[:,0,1]),marker='o',\ color='purple',linestyle='--',label='Tree UV',alpha=1.0, capsize=4) ax.errorbar(timepoints,treeinten_median[:,1],yerr=(treeinten_lowq_higq[:,1,0],treeinten_lowq_higq[:,1,1]),marker='o',\ color='g', linestyle='--',label='Tree G',alpha=1.0, capsize=4) ax.legend(loc='best',fontsize=16) ax.set_xticks([0,1,2,3,4,5]) ax.set_ylim([0,0.09]) ax.set_yticks([0,0.03,0.06,0.09]) ax.set_xlabel('Time point', fontsize=16) ax.set_ylabel('Intensity median', fontsize=16) adjust_spines(ax, ['left', 'bottom']) handles, labels = ax.get_legend_handles_labels() lgd = ax.legend(handles, labels, loc='center left',frameon=False, bbox_to_anchor=(1, 0.5)) ``` ### Sunrise: Crms change along time points ``` #pick a rectangular area for the tree, not close to the sun, near the edge temp=img_real2view(img_sunrises[5]) recteg=cv2.rectangle(temp,(160,50),(340,100),(0,0,255),5) plt.imshow(recteg[...,::-1]) #RF: 2,10 degrees RFradius=np.array([2,12]) onoff_num=100 #Crms rms2_time=np.zeros((6,2,2,onoff_num))#6 time points, 2 color channel (UV and G),2 RFs, 100 data rms2_means=np.zeros((6,2,2))#6 time points, 2 color channel (UV and G),2 RFs rms2_stds=np.zeros((6,2,2)) rms2_lowq_higq=np.zeros((6,2,2,2)) #the last channel: low and high quantiles(percentiles) for ii in range(6): for jj in range(2): for kk in range(2): temp=img_sunrises[ii,50:100,160:340,jj]/255 temprms2s=rms2_random(566+ii*10,onoff_num,RFradius[kk],temp) rms2_time[ii,jj,kk]=temprms2s rms2_means[ii,jj,kk]=np.mean(temprms2s) rms2_stds[ii,jj,kk]=np.std(temprms2s) low_perc,high_perc=bootstrap('mean',temprms2s,num_exp=10000,seed=888) rms2_lowq_higq[ii,jj,kk,0] = rms2_means[ii,jj,kk]-low_perc #low rms2_lowq_higq[ii,jj,kk,1] =-rms2_means[ii,jj,kk]+high_perc #high #mean, errorbar: 2.5-97.5 percentiles timepoints=[0,1,2,3,4,5] fig, ax = plt.subplots(nrows=1, ncols=1,figsize=(3,3)) ax.errorbar(timepoints,rms2_means[:,0,1],yerr=(rms2_lowq_higq[:,0,1,0],rms2_lowq_higq[:,0,1,1]),marker='o',\ color='purple',linestyle='-',label='UV; RF=10',alpha=1.0, 
capsize=4) ax.errorbar(timepoints,rms2_means[:,1,1],yerr=(rms2_lowq_higq[:,1,1,0],rms2_lowq_higq[:,1,1,1]),marker='o',\ color='g', linestyle='-',label='G; RF=10',alpha=1.0, capsize=4) ax.legend(loc='best',fontsize=16) ax.set_xticks([0,1,2,3,4,5]) ax.set_yticks([0,0.2,0.4]) ax.set_xlabel('Time point', fontsize=16) ax.set_ylabel('Crms mean', fontsize=16) adjust_spines(ax, ['left', 'bottom']) handles, labels = ax.get_legend_handles_labels() lgd = ax.legend(handles, labels, loc='center left',frameon=False, bbox_to_anchor=(1, 0.5)) ``` ### Sunset: Conoff and Crms of tree ``` #pick a rectangular area for the tree temp=img_real2view(img_sunsets[1]) recteg=cv2.rectangle(temp,(130,50),(340,200),(0,0,255),5) plt.imshow(recteg[...,::-1]) RFradius=np.array([2,7,12,16]) onoff_num=200 #upper visual field, UV channel upper_UV_RF_rms2s=np.zeros((4,onoff_num)) for ii in range(4): temp=img_sunsets[1,50:200,130:340,0]/255 upper_UV_RF_rms2s[ii]=rms2_random(566+ii*10,onoff_num,RFradius[ii],temp) #upper visual field, G channel upper_G_RF_rms2s=np.zeros((4,onoff_num)) for ii in range(4): temp=img_sunsets[1,50:200,130:340,1]/255 upper_G_RF_rms2s[ii]=rms2_random(566+ii*10,onoff_num,RFradius[ii],temp) #calculate rms2medians RFradius=np.array([2,7,12,16]) #upper visual field, UV channel upper_UV_RF_rms2medians=np.zeros(4) upper_UV_RF_rms2stds=np.zeros(4) upper_UV_RF_rms2lowqs=np.zeros(4) #lower_quartile upper_UV_RF_rms2higqs=np.zeros(4) #upper_quartile for ii in range(4): upper_UV_RF_rms2medians[ii]=np.median(upper_UV_RF_rms2s[ii]) upper_UV_RF_rms2stds[ii]=np.std(upper_UV_RF_rms2s[ii]) low_perc,high_perc=bootstrap('median',upper_UV_RF_rms2s[ii],num_exp=10000,seed=66) upper_UV_RF_rms2lowqs[ii] = upper_UV_RF_rms2medians[ii]-low_perc upper_UV_RF_rms2higqs[ii] =-upper_UV_RF_rms2medians[ii]+high_perc #upper visual field, G channel upper_G_RF_rms2medians=np.zeros(4) upper_G_RF_rms2stds=np.zeros(4) upper_G_RF_rms2lowqs=np.zeros(4) #lower_quartile upper_G_RF_rms2higqs=np.zeros(4) #upper_quartile for ii in range(4): upper_G_RF_rms2medians[ii]=np.median(upper_G_RF_rms2s[ii]) upper_G_RF_rms2stds[ii]=np.std(upper_G_RF_rms2s[ii]) low_perc,high_perc=bootstrap('median',upper_G_RF_rms2s[ii],num_exp=10000,seed=66) upper_G_RF_rms2lowqs[ii] = upper_G_RF_rms2medians[ii]-low_perc upper_G_RF_rms2higqs[ii] =-upper_G_RF_rms2medians[ii]+high_perc #median, errorbar: 2.5-97.5 percentiles RFs=np.array([2,6,10,14]) fig, ax = plt.subplots(nrows=1, ncols=1,figsize=(3,3)) ax.errorbar(RFs,upper_UV_RF_rms2medians,yerr=(upper_UV_RF_rms2lowqs,upper_UV_RF_rms2higqs),marker='o',\ color='purple',linestyle='-',label='Upper UV',alpha=1.0, capsize=4) ax.errorbar(RFs,upper_G_RF_rms2medians, yerr=(upper_G_RF_rms2lowqs,upper_G_RF_rms2higqs), marker='o',\ color='g', linestyle='-',label='Upper G', alpha=1.0, capsize=4) ax.legend(loc='best',fontsize=16) ax.set_xticks([2,6,10,14]) ax.set_yticks([0,0.2,0.4,0.6]) ax.set_xlabel('RF (degree)', fontsize=16) ax.set_ylabel('Crms median', fontsize=16) adjust_spines(ax, ['left', 'bottom']) handles, labels = ax.get_legend_handles_labels() lgd = ax.legend(handles, labels, loc='center left',frameon=False, bbox_to_anchor=(1, 0.5)) ```
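As a hedged sanity check (an addition, not part of the original analysis), the percentile intervals returned by the `bootstrap('mean', ...)` helper defined earlier can be compared against the textbook normal-approximation confidence interval for a mean; on a large random sample the two should roughly agree:
```
# Compare the bootstrap percentile CI with the normal-approximation CI for a mean.
rng = np.random.RandomState(0)
sample = rng.rand(500)

boot_low, boot_high = bootstrap('mean', sample, num_exp=2000, seed=0)

se = sample.std(ddof=1) / np.sqrt(len(sample))
norm_low, norm_high = sample.mean() - 1.96 * se, sample.mean() + 1.96 * se

print("bootstrap 2.5-97.5 percentiles:", (boot_low, boot_high))
print("normal-approximation 95% CI   :", (norm_low, norm_high))
```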
<h1><center>How to export 🤗 Transformers Models to ONNX?</center></h1> [ONNX](http://onnx.ai/) is an open format for machine learning models. It allows you to save your neural network's computation graph in a framework-agnostic way, which might be particularly helpful when deploying deep learning models. Indeed, businesses might have other requirements _(languages, hardware, ...)_ for which the training framework might not be the best suited in inference scenarios. In that context, having a representation of the actual computation graph that can be shared across various business units and logic across an organization might be a desirable component. Along with the serialization format, ONNX also provides a runtime library which allows efficient and hardware-specific execution of the ONNX graph. This is done through the [onnxruntime](https://microsoft.github.io/onnxruntime/) project and already includes collaborations with many hardware vendors to seamlessly deploy models on various platforms. Through this notebook we'll walk you through the process of converting a PyTorch or TensorFlow transformers model to the [ONNX](http://onnx.ai/) format and leveraging [onnxruntime](https://microsoft.github.io/onnxruntime/) to run inference tasks on models from 🤗 __transformers__. ## Exporting 🤗 transformers model to ONNX --- Exporting models _(either PyTorch or TensorFlow)_ is easily achieved through the conversion tool provided as part of the 🤗 __transformers__ repository. Under the hood the process is essentially the following: 1. Allocate the model from transformers (**PyTorch or TensorFlow**) 2. Forward dummy inputs through the model so that **ONNX** can record the set of operations executed 3. Optionally define dynamic axes on input and output tensors 4. Save the graph along with the network parameters ``` import sys !{sys.executable} -m pip install --upgrade git+https://github.com/huggingface/transformers !{sys.executable} -m pip install --upgrade torch==1.6.0+cpu torchvision==0.7.0+cpu -f https://download.pytorch.org/whl/torch_stable.html !{sys.executable} -m pip install --upgrade onnxruntime==1.4.0 !{sys.executable} -m pip install -i https://test.pypi.org/simple/ ort-nightly !{sys.executable} -m pip install --upgrade onnxruntime-tools !rm -rf onnx/ from pathlib import Path from transformers.convert_graph_to_onnx import convert # Handles all the above steps for you convert(framework="pt", model="bert-base-cased", output=Path("onnx/bert-base-cased.onnx"), opset=11) # Tensorflow # convert(framework="tf", model="bert-base-cased", output="onnx/bert-base-cased.onnx", opset=11) ``` ## How to leverage the runtime for inference over an ONNX graph --- As mentioned in the introduction, **ONNX** is a serialization format and many side projects can load the saved graph and run the actual computations from it. Here, we'll focus on the official [onnxruntime](https://microsoft.github.io/onnxruntime/). The runtime is implemented in C++ for performance reasons and provides API/bindings for C++, C, C#, Java and Python. In the case of this notebook, we will use the Python API to highlight how to load a serialized **ONNX** graph and run inference workloads on various backends through **onnxruntime**.
**onnxruntime** is available on pypi: - onnxruntime: ONNX + MLAS (Microsoft Linear Algebra Subprograms) - onnxruntime-gpu: ONNX + MLAS + CUDA ``` !pip install transformers onnxruntime-gpu onnx psutil matplotlib ``` ## Preparing for an Inference Session --- Inference is done using a specific backend definition which turns on hardware specific optimizations of the graph. Optimizations are basically of three kinds: - **Constant Folding**: Convert static variables to constants in the graph - **Deadcode Elimination**: Remove nodes never accessed in the graph - **Operator Fusing**: Merge multiple instruction into one (Linear -> ReLU can be fused to be LinearReLU) ONNX Runtime automatically applies most optimizations by setting specific `SessionOptions`. Note:Some of the latest optimizations that are not yet integrated into ONNX Runtime are available in [optimization script](https://github.com/microsoft/onnxruntime/tree/master/onnxruntime/python/tools/transformers) that tunes models for the best performance. ``` # # An optional step unless # # you want to get a model with mixed precision for perf accelartion on newer GPU # # or you are working with Tensorflow(tf.keras) models or pytorch models other than bert # !pip install onnxruntime-tools # from onnxruntime_tools import optimizer # # Mixed precision conversion for bert-base-cased model converted from Pytorch # optimized_model = optimizer.optimize_model("bert-base-cased.onnx", model_type='bert', num_heads=12, hidden_size=768) # optimized_model.convert_model_float32_to_float16() # optimized_model.save_model_to_file("bert-base-cased.onnx") # # optimizations for bert-base-cased model converted from Tensorflow(tf.keras) # optimized_model = optimizer.optimize_model("bert-base-cased.onnx", model_type='bert_keras', num_heads=12, hidden_size=768) # optimized_model.save_model_to_file("bert-base-cased.onnx") # optimize transformer-based models with onnxruntime-tools from onnxruntime_tools import optimizer from onnxruntime_tools.transformers.onnx_model_bert import BertOptimizationOptions # disable embedding layer norm optimization for better model size reduction opt_options = BertOptimizationOptions('bert') opt_options.enable_embed_layer_norm = False opt_model = optimizer.optimize_model( 'onnx/bert-base-cased.onnx', 'bert', num_heads=12, hidden_size=768, optimization_options=opt_options) opt_model.save_model_to_file('bert.opt.onnx') from os import environ from psutil import cpu_count # Constants from the performance optimization available in onnxruntime # It needs to be done before importing onnxruntime environ["OMP_NUM_THREADS"] = str(cpu_count(logical=True)) environ["OMP_WAIT_POLICY"] = 'ACTIVE' from onnxruntime import GraphOptimizationLevel, InferenceSession, SessionOptions, get_all_providers from contextlib import contextmanager from dataclasses import dataclass from time import time from tqdm import trange def create_model_for_provider(model_path: str, provider: str) -> InferenceSession: assert provider in get_all_providers(), f"provider {provider} not found, {get_all_providers()}" # Few properties that might have an impact on performances (provided by MS) options = SessionOptions() options.intra_op_num_threads = 1 options.graph_optimization_level = GraphOptimizationLevel.ORT_ENABLE_ALL # Load the model as a graph and prepare the CPU backend session = InferenceSession(model_path, options, providers=[provider]) session.disable_fallback() return session @contextmanager def track_infer_time(buffer: [int]): start = time() yield end = time() 
buffer.append(end - start) @dataclass class OnnxInferenceResult: model_inference_time: [int] optimized_model_path: str ``` ## Forwarding through our optimized ONNX model running on CPU --- When the model is loaded for inference over a specific provider, for instance **CPUExecutionProvider** as above, an optimized graph can be saved. This graph will might include various optimizations, and you might be able to see some **higher-level** operations in the graph _(through [Netron](https://github.com/lutzroeder/Netron) for instance)_ such as: - **EmbedLayerNormalization** - **Attention** - **FastGeLU** These operations are an example of the kind of optimization **onnxruntime** is doing, for instance here gathering multiple operations into bigger one _(Operator Fusing)_. ``` from transformers import BertTokenizerFast tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased") cpu_model = create_model_for_provider("onnx/bert-base-cased.onnx", "CPUExecutionProvider") # Inputs are provided through numpy array model_inputs = tokenizer("My name is Bert", return_tensors="pt") inputs_onnx = {k: v.cpu().detach().numpy() for k, v in model_inputs.items()} # Run the model (None = get all the outputs) sequence, pooled = cpu_model.run(None, inputs_onnx) # Print information about outputs print(f"Sequence output: {sequence.shape}, Pooled output: {pooled.shape}") ``` # Benchmarking PyTorch model _Note: PyTorch model benchmark is run on CPU_ ``` from transformers import BertModel PROVIDERS = { ("cpu", "PyTorch CPU"), # Uncomment this line to enable GPU benchmarking # ("cuda:0", "PyTorch GPU") } results = {} for device, label in PROVIDERS: # Move inputs to the correct device model_inputs_on_device = { arg_name: tensor.to(device) for arg_name, tensor in model_inputs.items() } # Add PyTorch to the providers model_pt = BertModel.from_pretrained("bert-base-cased").to(device) for _ in trange(10, desc="Warming up"): model_pt(**model_inputs_on_device) # Compute time_buffer = [] for _ in trange(100, desc=f"Tracking inference time on PyTorch"): with track_infer_time(time_buffer): model_pt(**model_inputs_on_device) # Store the result results[label] = OnnxInferenceResult( time_buffer, None ) ``` ## Benchmarking PyTorch & ONNX on CPU _**Disclamer: results may vary from the actual hardware used to run the model**_ ``` PROVIDERS = { ("CPUExecutionProvider", "ONNX CPU"), # Uncomment this line to enable GPU benchmarking # ("CUDAExecutionProvider", "ONNX GPU") } for provider, label in PROVIDERS: # Create the model with the specified provider model = create_model_for_provider("onnx/bert-base-cased.onnx", provider) # Keep track of the inference time time_buffer = [] # Warm up the model model.run(None, inputs_onnx) # Compute for _ in trange(100, desc=f"Tracking inference time on {provider}"): with track_infer_time(time_buffer): model.run(None, inputs_onnx) # Store the result results[label] = OnnxInferenceResult( time_buffer, model.get_session_options().optimized_model_filepath ) %matplotlib inline import matplotlib import matplotlib.pyplot as plt import numpy as np import os # Compute average inference time + std time_results = {k: np.mean(v.model_inference_time) * 1e3 for k, v in results.items()} time_results_std = np.std([v.model_inference_time for v in results.values()]) * 1000 plt.rcdefaults() fig, ax = plt.subplots(figsize=(16, 12)) ax.set_ylabel("Avg Inference time (ms)") ax.set_title("Average inference time (ms) for each provider") ax.bar(time_results.keys(), time_results.values(), yerr=time_results_std) plt.show() ``` # 
Quantization support from transformers Quantization enables the use of integer (_instead of floating-point_) arithmetic to run neural network models faster. From a high-level point of view, quantization maps the float32 range of values onto int8 with as little loss in model performance as possible. Hugging Face provides a conversion tool as part of the transformers repository to easily export quantized models to ONNX Runtime. For more information, please refer to the following: - [Hugging Face Documentation on ONNX Runtime quantization support](https://huggingface.co/transformers/master/serialization.html#quantization) - [Intel's Explanation of Quantization](https://nervanasystems.github.io/distiller/quantization.html) With this method, the accuracy of the model remains at roughly the same level as the full-precision model. If you want to see benchmarks of model performance, we recommend reading the [ONNX Runtime notebook](https://github.com/microsoft/onnxruntime/blob/master/onnxruntime/python/tools/quantization/notebooks/Bert-GLUE_OnnxRuntime_quantization.ipynb) on the subject. # Benchmarking PyTorch quantized model ``` import torch # Quantize model_pt_quantized = torch.quantization.quantize_dynamic( model_pt.to("cpu"), {torch.nn.Linear}, dtype=torch.qint8 ) # Warm up model_pt_quantized(**model_inputs) # Benchmark PyTorch quantized model time_buffer = [] for _ in trange(100): with track_infer_time(time_buffer): model_pt_quantized(**model_inputs) results["PyTorch CPU Quantized"] = OnnxInferenceResult( time_buffer, None ) ``` # Benchmarking ONNX quantized model ``` from transformers.convert_graph_to_onnx import quantize # Transformers allows you to easily convert a float32 model to quantized int8 with ONNX Runtime quantized_model_path = quantize(Path("bert.opt.onnx")) # Then you just have to load it through ONNX Runtime as you would normally do quantized_model = create_model_for_provider(quantized_model_path.as_posix(), "CPUExecutionProvider") # Warm up the overall model to have a fair comparison outputs = quantized_model.run(None, inputs_onnx) # Evaluate performance time_buffer = [] for _ in trange(100, desc=f"Tracking inference time on CPUExecutionProvider with quantized model"): with track_infer_time(time_buffer): outputs = quantized_model.run(None, inputs_onnx) # Store the result results["ONNX CPU Quantized"] = OnnxInferenceResult( time_buffer, quantized_model_path ) ``` ## Show the inference performance of each provider ``` %matplotlib inline import matplotlib import matplotlib.pyplot as plt import numpy as np import os # Compute average inference time + std time_results = {k: np.mean(v.model_inference_time) * 1e3 for k, v in results.items()} time_results_std = np.std([v.model_inference_time for v in results.values()]) * 1000 plt.rcdefaults() fig, ax = plt.subplots(figsize=(16, 12)) ax.set_ylabel("Avg Inference time (ms)") ax.set_title("Average inference time (ms) for each provider") ax.bar(time_results.keys(), time_results.values(), yerr=time_results_std) plt.show() ```
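To make the int8 mapping described above concrete, here is a small illustrative sketch (my own simplification, not the actual transformers/onnxruntime implementation) of per-tensor symmetric quantization and dequantization:
```
import numpy as np

# Toy per-tensor symmetric quantization: map float32 values onto int8 via one scale.
w = np.random.randn(4, 4).astype(np.float32)           # pretend this is a weight matrix
scale = np.abs(w).max() / 127.0                         # one scale for the whole tensor
w_int8 = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
w_dequant = w_int8.astype(np.float32) * scale           # back to float for comparison

print("max absolute quantization error:", np.abs(w - w_dequant).max())
```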
# Lecture 1. Introduction to CUDA Programming Model and Toolkit. Welcome to the GPU short course exercises! During this course we will introduce the syntax of CUDA programming in Python using the `numba.cuda` package. During the first lecture we will focus on optimizing the following function, which adds two vectors together using a single CPU core. ``` import numpy as np def add_vectors(a, b): assert len(a) == len(b) n = len(a) result = [None]*n for i in range(n): result[i] = a[i] + b[i] return result add_vectors([1, 2, 3, 4], [4, 5, 6, 7]) ``` Let's measure the time needed to execute `add_vectors` for large arrays: ``` a = np.random.rand(2**24) # ~ 1e7 elements b = np.random.rand(2**24) %%timeit -n 2 add_vectors(a, b) ``` In the following sections we will show you how to optimize the above implementation by reimplementing vector addition on the GPU. ## Exercise 1.1. CUDA kernels and CUDA threads. #### 1.1.1. One-dimensional grid. Let's do all the necessary Python imports first. ``` import math from numba import cuda import numpy as np ``` The `numba.cuda` package makes it possible to write CUDA kernels directly in Python. We will describe Numba in more detail later. For now, it will be enough to understand that: - the code that is executed by each GPU core separately is called a *GPU kernel*, - in Numba, a GPU kernel is a Python function with the `@cuda.jit` decorator. The code below creates a CUDA kernel which simply does nothing (*NOP*). ``` @cuda.jit def my_first_gpu_kernel(): pass ``` The following line launches the code to be executed by 64 thread blocks, each with 256 threads: ``` my_first_gpu_kernel[64, 256]() ``` OK, now that we know the syntax for writing and launching CUDA kernel code, we can proceed with porting the `add_vectors` function to the GPU device. The key is to note that vector element additions are *independent tasks* - that is, for each `i` and `j`, the result of `a[i]+b[i]` does not depend on `a[j]+b[j]` and vice versa. So, let's move the loop body (`result[i] = a[i] + b[i]`) from `add_vectors` into a new GPU kernel implementation: ``` @cuda.jit def add_vectors_kernel(result, a, b): i = cuda.blockIdx.x*cuda.blockDim.x + cuda.threadIdx.x # Make sure not to go outside the grid area! if i >= len(result): return result[i] = a[i] + b[i] ``` The above code is executed by each CUDA thread separately. The fields `blockIdx`, `blockDim` and `threadIdx` allow us to determine the position of the current thread in the whole grid of threads executed by the CUDA device: - `blockIdx` is the identifier of the currently executed block of threads in the grid, - `blockDim` is the size of a single block of threads, - `threadIdx` is the identifier of the currently executed thread within a single block. A grid of threads can have one, two or three dimensions: the grid shape (in blocks) is given by the `(cuda.gridDim.z, cuda.gridDim.y, cuda.gridDim.x)` tuple, and the block shape (in threads) by `(cuda.blockDim.z, cuda.blockDim.y, cuda.blockDim.x)`. Each thread in a block has coordinates `(cuda.threadIdx.z, cuda.threadIdx.y, cuda.threadIdx.x)`. Each block in a grid has coordinates `(cuda.blockIdx.z, cuda.blockIdx.y, cuda.blockIdx.x)`. The `x` coordinate changes the fastest: two adjacent threads in the same block differ in the value of the `x` coordinate by 1. Grid and block dimensions can be specified via the `grid_size` and `block_size` launch parameters: a single scalar value means that a 1-D grid will be used, a pair of values imposes a 2-D grid, and three values a 3-D grid. Now, we would like to run the above kernel for each `result[i]`.
Let's assume for a moment that we want the above CUDA kernel to be executed by 256 threads in parallel - i.e. one block will consists of 256 threads. To cover the entire input array, the kernel has to be executed by $\left\lceil \frac{n}{256} \right\rceil$ blocks of threads. ``` def add_vectors_gpu(a, b): assert len(a) == len(b) # Create output array in the GPU memory. result = cuda.device_array(shape=a.shape, dtype=a.dtype) block_size = 256 grid_size = math.ceil(len(a)/block_size) add_vectors_kernel[grid_size, block_size](result, a, b) return result.copy_to_host() %%timeit add_vectors_gpu(a, b) ``` Congratulations! Your very first GPU kernel, i.e. `add_vectors_gpu` function, executes much faster than its CPU counterpart. Of course, writing CUDA kernels is not the only part of preparing GPU processing pipeline. One of the other important things to consider is the heterogenous nature of the CPU-GPU processing: is the data transfer between GPU and the host computer. #### 1.1.2. Two-dimensional grid. In the previous example, the grid of threads was defined in a single dimension, i.e. the variables `grid_size` and `block_size` were a single scalar value. This time we will implement a function, which adds two **matrices**, and we will use a 2-D grid of threads for this purpose. Lets implement `add_matrices` for CPU first. ``` import itertools import numpy as np def add_matrices(a, b): a = np.array(a) b = np.array(b) assert a.shape == b.shape height, width = a.shape result = np.zeros(a.shape, dtype=a.dtype) for i, j in itertools.product(range(height), range(width)): result[i, j] = a[i, j] + b[i, j] return result add_matrices( # a = [[ 1, 2, 3, 4], [ 4, 5, 6, 7]], # b = [[-1, -2, -3, -4], [ 1, 1, 1, 1]]) A = np.random.rand(2**12, 2**12) # ~ 1e7 elements B = np.random.rand(2**12, 2**12) C = add_matrices(A, B) np.testing.assert_equal(C, A+B) %%timeit -n 2 add_matrices(A, B) ``` Similarly to the `add_vectors_kernel` implementation, the `add_matrices_kernel` will compute a single matrix element. This time, we will use `y` coordinate to address matrix elements in the second dimension: ``` @cuda.jit def add_matrices_kernel(result, a, b): i = cuda.blockIdx.x*cuda.blockDim.x + cuda.threadIdx.x j = cuda.blockIdx.y*cuda.blockDim.y + cuda.threadIdx.y height, width = result.shape # Make sure we are not accessing data outside available space! if i >= width or j >= height: return result[j, i] = a[j, i] + b[j, i] ``` Now, we must also pass the second dimension of the grid to the GPU kernel invocation parameters, as the implementation assumes a 2-D grid layout. The parameters `grid_size` and `block_size` now has to pairs of integer values: ``` def add_matrices_gpu(a, b): assert a.shape == b.shape # Create output array in the GPU memory. result = cuda.device_array(shape=a.shape, dtype=a.dtype) height, width = a.shape block_size = (16, 16) grid_size = (math.ceil(width/block_size[0]), math.ceil(height/block_size[1])) add_matrices_kernel[grid_size, block_size](result, a, b) return result.copy_to_host() add_matrices( # a = [[ 1, 2, 3, 4], [ 4, 5, 6, 7]], # b = [[-1, -2, -3, -4], [ 1, 1, 1, 1]]) ``` Let's test the implementation first: ``` C = add_matrices_gpu(A, B) np.testing.assert_equal(C, A+B) ``` Now lets compare CPU and GPU processing time: ``` %%timeit add_matrices_gpu(A, B) ``` We leave the implementation of adding two 3D arrays as homework for the students. #### 1.1.3. 
See also - CUDA kernels in Numba: introduction: https://numba.readthedocs.io/en/0.52.0/cuda/kernels.html#introduction - Matrix multiplication example: https://numba.readthedocs.io/en/0.52.0/cuda/examples.html#matrix-multiplication ## Exercise 1.2. Transferring data to and from GPU memory. In the previous examples, we passed the `numpy` arrays directly to the GPU kernel code. The `numpy` arrays were stored in the host PC's operating memory. As GPU computing can be performed only on data which is located in GPU memory, the Numba package implicitly transferred the data from the PC's memory to GPU global memory first. The data transfers can be run explicitly, if necessary: ``` block_size = 256 grid_size = math.ceil(len(a)/block_size) # Create an array for the result in the GPU global memory. result_gpu = cuda.device_array(shape=a.shape, dtype=a.dtype) # Here are the explicit data transfers from host PC memory to GPU global memory: a_gpu = cuda.to_device(a) b_gpu = cuda.to_device(b) add_vectors_kernel[grid_size, block_size](result_gpu, a_gpu, b_gpu) # After the computations are done, transfer the results to the host PC memory. result = result_gpu.copy_to_host() ``` Data transfer to and from GPU memory is only possible with GPU global memory. The following functions are available in Numba: - create an array in the GPU global memory: `numba.cuda.device_array` or `numba.cuda.device_array_like`, - host PC to GPU global memory transfer: `numba.cuda.to_device`, - GPU global memory to host PC memory transfer: `gpu_array.copy_to_host`, where `gpu_array` is a GPU array. The complete list of Numba's functions for data transfer to and from the GPU is available here: https://numba.readthedocs.io/en/0.52.0/cuda/memory.html#data-transfer The advantage of heterogeneous programming with CUDA is that computing performed on the GPU can be done in parallel with the operations performed by the CPU -- both CPU and GPU are separate processing devices that can work simultaneously. In other words, CUDA kernel invocations are **asynchronous**: when we invoke a GPU kernel, the only job the CPU does is to enqueue the kernel to be executed on the GPU, then it returns immediately. (In fact, this is not always true for kernels written in Numba - the first launch of a CUDA kernel may also require Python code compilation. This topic will be discussed later in this lecture.) For example, the following GPU kernel call takes much less time than we've seen so far: ``` %%timeit -n 100 add_vectors_kernel[grid_size, block_size](result_gpu, a_gpu, b_gpu) ``` The difference is that we did not transfer data from the GPU to the CPU. Let's try now with the GPU -> CPU transfer: ``` %%timeit -n 100 add_vectors_kernel[grid_size, block_size](result_gpu, a_gpu, b_gpu) result = result_gpu.copy_to_host() ``` The difference is due to the fact that the data transfer is a blocking operation - it waits for all queued operations to be performed, then performs the transfer. To explicitly wait for the queued kernels to finish without transferring the result data to the host PC, run `cuda.default_stream().synchronize()`. ``` %%timeit -n 100 add_vectors_kernel[grid_size, block_size](result_gpu, a_gpu, b_gpu) cuda.default_stream().synchronize() ``` CUDA streams will be covered in more detail later in this short course.
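Before moving on, here is a minimal hand-timing sketch (an added aside, using only the API already shown above): because kernel launches are asynchronous, the clock must only be read after an explicit synchronize, otherwise we would measure just the launch overhead.
```
import time

start = time.perf_counter()
add_vectors_kernel[grid_size, block_size](result_gpu, a_gpu, b_gpu)
cuda.default_stream().synchronize()   # wait for the kernel to actually finish
elapsed = time.perf_counter() - start
print(f"kernel wall time: {elapsed * 1e3:.3f} ms")
```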
### 1.2.1 See also - CUDA Toolkit documentation: device memory management (CUDA C++): https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#device-memory - The complete list of Numba's functions for data transfer to and from GPU: https://numba.readthedocs.io/en/0.52.0/cuda/memory.html#data-transfer ## Exercise 1.3. CUDA Toolkit and Numba package. Nvidia provides several tools in its Toolkit that help with the implementation and testing of GPU code. In this exercise we will show you how to: - check what parameters your hardware has, e.g. programmatically determine how many GPUs are available, how much of each kind of memory they have, and so on, - debug and memcheck your Python CUDA kernels, - profile CUDA code execution time. We will also introduce the Numba Python package, which we will use throughout the whole course, in more detail. ### Exercise 1.3.1. CUDA device diagnostics. The most basic diagnostic tool for GPU cards is `nvidia-smi`, which displays the current status of all available GPU cards. (NOTE: the `nvidia-smi` tool is not available on Nvidia Jetson processors. For SoC chips, please use the built-in `tegrastats` or install [`jtop`](https://pypi.org/project/jetson-stats/).) ``` ! nvidia-smi ``` `nvidia-smi` outputs information about: - the installed NVIDIA driver and CUDA Toolkit, - for each available GPU: - temperature, memory usage and GPU utilization, - processes that are currently running on that GPU. `nvidia-smi` is a command line tool, so use it in your shell to quickly check the state of your GPU. The CUDA SDK also provides a programmatic way to access the device description at application run-time, e.g. to check that we are not exceeding the available GPU global memory. This device description is exposed as the SDK's *device properties*. To get the device properties, we will use the `cupy` package, which exposes the CUDA SDK interface in Python in a convenient way. Let's first check how many GPU cards we have: ``` import cupy as cp cp.cuda.runtime.getDeviceCount() ``` Now, let's check: - what the name of the device is and what its compute capability is, - what the GPU clock frequency is, - how much global, shared and constant memory our GPU card has. ``` device_props = cp.cuda.runtime.getDeviceProperties(0) print(f"Device: {device_props['name']} (cc {device_props['major']}.{device_props['minor']})") print(f"GPU clock frequency: {device_props['clockRate']/1e3} MHz") print("Available memory: ") print(f"- global memory: {device_props['totalGlobalMem']/2**20} MiB") print(f"- shared memory per thread block: {device_props['sharedMemPerBlock']} B") print(f"- constant memory: {device_props['totalConstMem']} B") ``` The complete list of device properties is available [here](https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__DEVICE.html#group__CUDART__DEVICE_1g1bf9d625a931d657e08db2b4391170f0). ### Exercise 1.3.2. Numba. - Numba is a just-in-time (JIT) compiler for Python. - It generates machine code from Python bytecode using the LLVM compiler library, which results in a significant speed-up. - It works best on code that uses NumPy arrays and functions, and loops. - Numba can target NVIDIA CUDA and (experimentally) AMD ROC GPUs. In other words, it allows for (relatively) easy creation of Python code executed on the GPU, which results (potentially) in a significant speed-up - see the short sketch below.
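As a short illustration of the JIT idea on the CPU side (an aside added here, not part of the original lecture), the same kind of loop-heavy NumPy code can be compiled with Numba's `@njit` decorator:
```
from numba import njit
import numpy as np

@njit
def sum_of_squares(x):
    total = 0.0
    for i in range(x.shape[0]):   # explicit loop: this is where Numba shines
        total += x[i] * x[i]
    return total

x = np.random.rand(10**6)
sum_of_squares(x)                 # the first call compiles the function
%timeit sum_of_squares(x)         # subsequent calls run the cached machine code
```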
Numba documentation is available here: https://numba.pydata.org/numba-doc/latest/index.html The thing that needs to be stressed here is to note, that Numba is a JIT compiler - that means it compiles a given function to machine code *lazily*, **on the first function call**. Compilation is performed only once - the first time a given function is run. After that, a cached version of machine code is used. Let's see how long it will take to execute the brand new kernel the first time: ``` @cuda.jit def increment(x): i = cuda.blockIdx.x*cuda.blockDim.x + cuda.threadIdx.x x[i] += 2 %time increment[1, 5](np.arange(5)) ``` The second kernel execution takes: ``` %time increment[1, 5](np.arange(5)) ``` ### Exercise 1.3.3. CUDA-MEMCHECK and debugging Numba code. #### 1.3.3.1 CUDA-MEMCHECK CUDA-MEMCHECK is an tool available in CUDA SDK, which gives the possibility to check if CUDA application makes any of the following errors: - misaligned and out of bounds memory access errors, - shared memory data races, - unintialized accesses to global memory. Let's to debug below Python script in order to detect any memory issues it may cause. According to Numba [documentation](https://numba.pydata.org/numba-doc/latest/user/troubleshoot.html#debug-info), we can pass `debug=True` parameter to the `@cuda.jit` decorator in order to get some more information about the analyzed kernel. Let's do that, save the below cell to Python script, and run CUDA-MEMCHECK for the Python interpreter. ``` %%writefile 1_3_3_memcheck.py import os import numpy as np from numba import cuda import math @cuda.jit(debug=True) def add_vectors_invalid(result, a, b): pass i = cuda.blockIdx.x*cuda.blockDim.x + cuda.threadIdx.x # What are we missing here? result[i] = a[i] + b[i] a = np.arange(255) b = np.arange(255) result = cuda.device_array(a.shape, dtype=a.dtype) add_vectors_invalid[1, 256](result, a, b) result_host = result.copy_to_host() ``` The only thing the above cell does is saving the Python code to the `1_3_3_memcheck.py` script in the current directory (you can check it using `! pwd` command). Now, we can run `cuda-memcheck` along with the Python interpreter in order to see if there any issues with the script. ``` ! cuda-memcheck --show-backtrace no python 1_3_3_memcheck.py ``` As we can see, CUDA-MEMCHECK detected, the `add_vectors_invalid` kernel was not properly executed by thread `255`. What is causing the issue? #### 1.3.3.2 Debugging Numba kernels CUDA SDK toolkit includes debuggers that can be run on the GPU kernel code, in case it's necessary to trace the cause of the issue. A list of CUDA debuggers, that can be run on C++ CUDA kernel code is available here: [Linux](https://docs.nvidia.com/cuda/cuda-gdb/index.html), [Windows](https://docs.nvidia.com/nsight-visual-studio-edition/cuda-debugger/). For the application that uses Python with Numba to generate GPU machine code, user have an opportunity to run Python debugger (`pdb`) directly on the kernel code using the CUDA simulator. More details about the simulator can be found [here](https://numba.pydata.org/numba-doc/dev/cuda/simulator.html). Let's use CUDA simulator to print debug data directly in the kernel code. In order to be able to run CUDA simulator, set `NUMBA_ENABLE_CUDASIM` environment variable to value `1`. ``` %%writefile 1_3_3_numba_debugger.py # Turn on CUDA simulator. 
import os os.environ['NUMBA_ENABLE_CUDASIM'] = '1' import numpy as np from numba import cuda import math @cuda.jit def add_vectors(result, a, b): i = cuda.blockIdx.x*cuda.blockDim.x + cuda.threadIdx.x # What are we missing here? result[i] = a[i] + b[i] if i == 10: print(f"{result[i]} = {a[i]} + {b[i]}") # or use PDB: import pdb; pdb.set_trace() a = np.arange(255) b = np.arange(255) result = cuda.device_array(a.shape, dtype=a.dtype) add_vectors[1, 255](result, a, b) result_host = result.copy_to_host() ! python 1_3_3_numba_debugger.py ``` ### Exercise 1.3.4. Profiling GPU code. Sometimes, to better understand and optimize the performance of a GPU application, it is necessary to perform a dynamic program analysis and to measure specific metrics, for example the execution time and memory requirements of a particular CUDA kernel. The utility that performs such analysis is usually called a *code profiler*. NVIDIA provides a number of tools that enable code profiling. Some of them allow you to perform inspections from the command line, while others provide a graphical user interface that clearly presents various code metrics. In this exercise we will introduce the tools available in the CUDA ecosystem. NOTE: Currently, NVIDIA is migrating to the new profiling tools *NVIDIA Nsight Systems* and *NVIDIA Nsight Compute*. We will extend this exercise with examples of their use in the future. #### 1.3.4.1. NVPROF `nvprof` is a CUDA SDK tool that allows you to acquire profiling data directly from the command line. Documentation for the tool is available here. We will use `nvprof` for the rest of this course. Let's try profiling our `add_vectors_gpu` code first. ``` %%writefile 1_3_4_nvprof_add_vectors.py import math from numba import cuda import numpy as np @cuda.jit def add_vectors_kernel(result, a, b): i = cuda.blockIdx.x*cuda.blockDim.x + cuda.threadIdx.x # Make sure not to go outside the grid area! if i >= len(result): return result[i] = a[i] + b[i] y = cuda.device_array(4) add_vectors_kernel[1, 4](y, np.array([1, 2, 3, 4]), np.array([4, 5, 6, 7])) result = y.copy_to_host() np.testing.assert_equal(result, [5, 7, 9, 11]) ``` The usage is the following: ``` nvprof [options] [application] [application-arguments] ``` For example, to run the above Python script: ``` ! nvprof python 1_3_4_nvprof_add_vectors.py ``` By default, `nvprof` outputs all GPU and API call activity. We are mainly interested in the CUDA GPU tracing -- we can turn off API calls by using the `--trace gpu` option. ``` ! nvprof --trace gpu python 1_3_4_nvprof_add_vectors.py ``` CUDA GPU activity includes: - CUDA kernel executions, - GPU to host memory transfers (`DtoH`) and host to GPU memory transfers (`HtoD`). Let's add one more kernel to the above code: ``` %%writefile 1_3_4_nvprof_increment_add_vectors.py import math from numba import cuda import numpy as np @cuda.jit def increment(a): i = cuda.blockIdx.x*cuda.blockDim.x + cuda.threadIdx.x a[i] += 1 @cuda.jit def add_vectors_kernel(result, a, b): i = cuda.blockIdx.x*cuda.blockDim.x + cuda.threadIdx.x # Make sure not to go outside the grid area! result[i] = a[i] + b[i] result_gpu = cuda.device_array(2**24) a_gpu = cuda.to_device(np.random.rand(2**24)) b_gpu = cuda.to_device(np.random.rand(2**24)) block_size = 256 grid_size = math.ceil(len(result_gpu)/block_size) increment[grid_size, block_size](a_gpu) add_vectors_kernel[grid_size, block_size](result_gpu, a_gpu, b_gpu) result = result_gpu.copy_to_host()
```
! nvprof --trace gpu python 1_3_4_nvprof_increment_add_vectors.py
```

In the above case, `nvprof` should display the execution times for both kernels.

#### 1.3.4.2. NVIDIA Visual Profiler

NVIDIA Visual Profiler (NVVP) allows you to inspect the collected profiling data in a graphical timeline view. Let's export the profiling results to a file that can be loaded by NVVP. We can use `nvprof` for this purpose; just add the `--export-profile` parameter.

```
! nvprof --trace gpu --export-profile nvvp_example.nvvp -f python 1_3_4_nvprof_increment_add_vectors.py
```

Next, let's load the profiling results into NVVP:

1. Open NVVP.
2. Press File -> Open and choose the exported file.

As a result, you should see an image similar to the one below. The generated graph presents the execution of each individual GPU activity over time. CUDA kernel launches are placed on the `Compute` lane, in the order in which they are executed: `increment` first, then `add_vectors_kernel`. CUDA memory transfers are on the `MemCpy` lanes (variable `a`, then variable `b`, both `HtoD`, then `result`, `DtoH`).

![nvvp_example.png](nvvp_example.png)
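If NVVP is not available, a similar per-activity view (start time, duration, grid size and transfer size of every kernel launch and memory copy) can be printed directly in the terminal with nvprof's GPU-trace mode. This command is an optional aside, not part of the original exercise:

```
! nvprof --print-gpu-trace python 1_3_4_nvprof_increment_add_vectors.py
```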
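As a forward-looking aside (our addition; the exercise itself defers Nsight examples to a future revision), the same trace can also be collected with NVIDIA Nsight Systems on newer CUDA toolkits:

```
! nsys profile python 1_3_4_nvprof_increment_add_vectors.py
```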
4q+jcPX06fpvHTpmvstW/oy+nTJNnKdXkvCDxer+gr7vHvXWZt1Subt+tV1zOvfqpINdX3t95e+PMzr36qe2+9uMbb9tbrs01PJwAAAAAAAOARj4JOt5U3rWiUX9nLnYbka/jLzM7UxPvXa8ZjnXXZw6slSakrX5RC+3g8QMsylZ7pyuuSK5war+x/LdNUWv7Us9XhducFnZI07qxOhVs2ilxgMiXdJSN/XVTMZTplmpIMW6OfGipJ+u7+HwqXm9bxpLPgIXNZ1Q86091+mr2p/CDe33k8ty7Mr/On1E13l/NLXYFmzZpp06bNCg8PV1hYmJKTkyVJQUHBSklJ0R8rV6lvn97V2JOyjRo1SvPmzStz2bBhw7xWpzy2h1NC27JP6PMkr171QwLPp7y2azSFdU2Nvbzi8HfOh7dVuLw2lXVNzs6dO9d63foadua6fYtN7eoybX327+s16ea3NH7adH327xu8PJVsXr1JN79ZatmlVwz0+lSypfavMGTNe6WtD1PXPvPqp/JRtu68dUrhbffeerGef/U9r4SdXnt9JucEAAAAAAAAPOJR0Jmbf/3Ewv5JQzKKpUaGDMNSQJCPXFluXfr3LTKsbAUF5GjToaOKaeP5AE13Xng5cURnPfduQUhm509Ye5wh6eKRXZWe6ZZVzfDBNE0dy8gt/Pm1GcW7AG+Zek7hudu6DDgaCpfllLtEeFS02dZd7Pv8jk539YPO1Fw/7Tj0u6Tikw4XCHYW3Hp8umPZhmRIaS7Pg84OHTroyJGjWrpkqdq0bau2bdtKsrVhw0atWrVSLeKby+ms/v6UdNZZZ0lSqZPpw4YN0/Dhw71WpzyRAQ4dTMlSmyYhxboby+t43H0wVU0iAqpdLyLAoYPHstUmOrjUtsuy62CqYmpQr1GAU/uTs9QuJqTY7eXVrun+1bbE6rTQe0nJkPNUn8Y21/SRadn6+NW86wCb+a93RX/OrWDabtvD6UxL1iupsnqeKlkv21Xw4m4XW6eulBVyFrjz1ileCTu99fpMRycAAAAAAADgGY/OPObkB50++XmRLck4nh0Vdj86ZCooKO9Ep20ZMhWg1AyzWhNKuk1LaRluSbbGntlRhZWN45eyMpQfVmXlSjKqPXWtaVpKy3TLMGz996PFatM4SZI0/ux4SdILMxboqslD8oPO6tU4lbjdTplWRdPVHv/+8s9GFn5/hkqfjK6KDLev9h7aV+ayoKBg+Tvyfk8MSVd+PK5w2XuXzFF6NaZVNAxDAwcO0Jo1wdq8eZMWLlwkt9ul2NhY9e7dR06nQzNmzNC2rVv17R8PF97v43/t9LhWgZIn009UyClJUUGGtiWmq02TkMpXlrRux2G1iAqsdr3oQEPbE9PVJrpq181ct/2QWkZXP3iMCTG09WBaYdBZ3nS5BcHn2h2H1Sq6+vtXU3M+vK1Ow8yKlAw3T1TI+e6Y9yVJe/dI8S1a1Juw0206NH9W6e7K4hzlLjGMsv50o/bqeepE1/PEM69+qj371ui1p8ufOvbOW6folvse0DOvyqthZ7Ven8k5AQAAAAAAAI941tGZ323nLBp0FvlX+f86DEu28oIjy85bcjQ1V+2rMUDTNJWWmVskUZVeeqt4x8Tt14/KH5AtGUa1p661LEvZuccvNNa1VYjatYxUdGRBt1+GsnLNwnVRMbflLBZsSsWnqy3Z7fnXCxP0whdxuvq7KwsDC09kmz5q0aR5mcuOpCeX6vIsqDflk7Hqalbv5LZhGOrZs4d69uxR5vIZM2bop59/loKkv01vraem7dTkO1rXOOx0u/N+T0/ElLUFmgZLq/ZnaFPCMXWKC69w3VVbE7XzQLKGd4uqdr3YYGn1gQxt2p+qTs3CKlx35dZE7dyfpOHdq3c9UElqFmLo933p2rjvmLo0r3j/Vubv34iu1d8/b4jxr3ydulAX1+gsae+ePfUm7Bw4cOgJrTd27ISTup4nKgs5C7z29JO65b4HJNVsCtuavj7T0QkAAAAAAAB4xqOgM7tE0FkWX4cp6XgoYTryTtpl5HjWkVJ4f9NSepa7MEx958NFhZ2WBUHkC2/N07WX551QtI3qh5ATh3fS5ws3Ff68IaWFNqQUX+ePNVsL10XF3KZTVn5H52d/WShJMm1Dn/1loWxJln389uu+Gi2p5mFn2OqBZd8u6ZkX3z1+QzMVq7e+2aeSzvO4XmXefPNN3XDDDYU/eyvsPJEBZ4FGQQ7FRzi0bFOiLMtWl/gISaWndl21NVHzftupDTuP6Mr8buhq1Qs0FB9maPmmRFmmVVivpJVbEzXv1x3asPOIpgxtWe16UUEOtYp0asmmg7Isq9wpa1duTdT833Zq487DmlKD/TuZnejpastTl2HnvCWba3T/UYUzGFTNJ9/+UqN6l5zbv17Xq477v39emWdYenDRi3pi2F+8tm5lavT67OE0xQAAAAAAAMCpzsOpa0sEnQVTxxoF/5oydLzFyJZLDvkVu6+nLNtWRra7zGUd2zQu/L7oOpZZvROF/TpGqF/HM6p1X0/4+9ff6/pVRU5OdpXWc1sOFTRtFj4iti1L+Z3AJU7ovrs5U+5QQ43G7lDSHM8v6HrXtVUPKq/+7tMa16uqN998U5PvaF1Yr9lLzbT/9v21Vq82dYjykWWaWrwuQVv2JatjXLjio0Nly9bug6lat+Owdh1I1oadR2Rblu5++zf987q+1a7XPspHpmnpx7V59TrEhatFkXprdxzSjoRkbdh5WJZl6c43f9XzN/Srdr1O0b6yTEs/5Nfr1DxC8dGhUmG9w9p5IFkbdx6RZdq66+3f9FwN9u9kVR86OgvUZdh5zs2TJEkdX80LxANu2StJGrLqTEnSvm2vSpJmXPyeJGnJ1Xm/u379qvf/5clezxN3zntGLaP9NLLTaVp98JDunPe0nh91X43XrW3knAAAAAAAAIBnPAs6XQ4ZkgpPURrKv+ZhwZm549fgMuSWbD85jOP3rQ7bspSVUxBi5m2soJOzUdjx4WflHO/6tOz6Pq3sqXEm07Id+v6ryq7bls+QkgxbLl9DvrUYOhZVV/XMBhpyFugU469GgQ7tOJyl2b8c1f4jGTJNU00iAtQiKlDDukXpirPjdddbv8m2bd315q96ribhY4yfGgUaefVWHNX+o2kyXbaiI/zVKjpQI7pH66qhLXTnm79KUo3Dzi6x/moc5NCOw9mavWK7Eo5mFu5fq6hAjegWpSlF9q+m9apjzvszT2g9T9WXjs4CBWFnQ1Xyj0JQuTUJ29UkrKV+3LFdQb6+WpOwyyvr1j4eawAAAAAAAMATHgWdvv6BWvzvG7S4GoV8/QOrcS/pnL7NteC3HcVuK2tK2fV/7ih2H286752JkqRvrv28Sj9X5lQ5Z+3Jdds+/e4LHc211fjndpJUrWlrPVUX9YIfPChJNZq2tj5oEuarJmG+OqNtaLnrPHd9X9355q+yVfPwsXi9mDLXef6Gfl4LO2PCfBUT5qsBVdg/b9TzxImY8rOmOnfufEJDzqo8f+v6Op01YRjVm/r9VNahcZwSj6arQ6MI/XkkW
R0ax3ll3dp2qrw/AAAAAAAAALzFkDRwz549S+t6IPWZt4NOPz8/bw/xhMrNzfX6Nq/+7srC709E6Hii602+o3Xh9w095PTUnW/+KkOqUVenp/UknbDg8UTXQ/2TkJCg4ODgYrd5+xqdGRkZiosrP4DjGp2l/WX2k9qfcljxkbF6bmzFU9F6sm5taujvDwAAAAAAAIATKTY2dhBBZx3w9W3YJzJdLu8HnQDQUJUVdHpbZUEnTg4N/f0BAAAAAAAAcCI1bRo7yKOpa+EdNtfgAgBUEdfoPHXw/gAAAAAAAADwDEFnXeCkNQCcNHx8fGSappxOZ61s37Is+TKl6amB9wcAAAAAAACARwg66wCnMQHg5BEeHq6UlBRJ8nrYaZqmXC6XIiMjvbpd1E+8PwAAAAAAAAA8Q9BZF+jYAICTRkBAgCIiIpSalqacnByvbtvX11eRkZHy9/f36nZRT/H+AAAAAAAAAPAIQWcd4DwmAJxcAgICFBAQUNfDQAPH+wMAAAAAAADAMwSddYIzmQAAoCTeHwAAAAAAAACeIOisAzYtGwAAoATeHwAAAAAAAACecdT1AAAAAAAAAAAAAADAU3R01gE6NgAAQEm8PwAAAAAAAAA8Q0cnAAAAAAAAAAAAgAaHoBMAAAAAAAAAAABAg0PQCQAAAAAAAAAAAKDBqfQanbm5uUpLSzsRYwHqldjYWMXGxtb1MAAAAAAAAAAAAE4ptm3ryJEjys7OrnC9coNOwzCUnJws27bVoUMHBQUFeX2QQH2WmpqqtWvXyrZtBQQEyMen0r8LAAAAAAAAAAAAQA0FBQUVNqNVFHiWmdwYhqHU1FS1bduWgBOnrLCwMHXv3l1paWnavXu3DMOQ0+ms62EBAAAAAAAAAICTlK+vb10PoU65XC5JUmZmpnbs2FEYeJYXdpYZdKampqp169YKDAys3dECDUBoaKhatmypPXv20NUJAAAAAAAAAABQSwzDKPZzVlaWEhMTFRMTo3379pVa31HqBkfeTYScwHGhoaEyDEOWZdX1UAAAAAAAAAAAAE4ZmZmZkqSAgIBSy0oFnWlpaWrdunXtjwpoYFq0aFHYMg0AAAAAAAAAAIATIzExUVFRUaVuLxV02rZNNydQhpCQkLoeAgAAAAAAAAAAwCknMzOz1LS2UgVT1wIojecHAAAAAAAAAABA/VAqtSkrDQWQh+cHAAAAAAAAAABA/UDQCXiA5wcAAAAAAAAAAED9wNS1gAd4fgAAAAAAAAAAANQPPiVvoGMNKJ9hGDxHAAAAAAAAAAAAaoGnGYxHQWeOy6WEQ0e1eccerdu4SWnWPq1evkn/feU1NQoP9Xy0QANDyAkAAAAAAAAAAFA/VDnovP3hp/TV198o8UCCsjOyFBUboHadIhToE62MzCyCTpwSDMOQbdt1PQwAAAAAAAAAAIBTXpWv0ZmakqzEvTsly1ZIaJD6Dm+mrn0aqfPpATp4cGutDxTVZO3RrPsm68YZW2TW9VhOAlyjEwAAAAAAAAAAoH6ockfnwP79tHjed7JtqduIGAUEuBUS6K+gAIeyjR1KSemiiIhIj4rbx1Zr1mdbFD9+kvpGMSVorbAO6bevv9SSiffopOpDtNO0c/lP2h4+WMNPC1NVfnsKfrf9/Pw0a9YsjR07VpK0aNEiTZgwQampqXmbrqBjk6lrAQAAAAAAAABAfdKyZcsqrbd79+4GWa8ipdrTygtyunbqoAA/PwXFNdbmHcfUNDJEwYFhCo/0l4+fqeRj+z0ubqcv02t3vKAfkizPR45Tm+s3/XPSRD0897BHAa5t25o1a5YmT56sOXPmaNGiRZo8ebLmzp1bpSlpCToBAAAAAAAAAEB9Y9t2hV8NvV55qjx1bduW8fILD5MdHa32Pbor4ZhbWS5/OYObKyXHUJYylZGRUaWirl8f0fCh1+lfqwLUqnUbhW57XTefc5bu+SGnZnsDVMDPz09z5szR2LFj9fHHH2vChAkaM2aMvvrqKw0YMECLFi2Sn59fhdtg6loAAAAAAAAAAHCqe/TRR2UYRplfjz766AkbR5U7OptENVJYdBPt+2Wl0o+6FRHbWzsOO3TIitN+M0qp8tO+QwerVNSM7K/zB/pp1esv6+tt8/Tvl5fL6jNeg2LdkrVPn99+jk5v31QRgb7yDYxUqwFX6qUVKbKVqdlTY+Tf6zGtK7zgpK39b4xQYOPLNSutkuXHEjTrztHq26GpwgN85Rcco87Db9Cry46oyj2lrt2a/feLNaB9tEKCGqnN4Kl6/fdjsmXr0KwpahHYVXcvSc9f90+9cHaE4iZ/ogOWqS3TJ6pzbJgCfP0U2rS7zn/oOyUUjLNgv9vFKjzQV76BjdVx9N369/QHNXlge0UHByq0WU9NenaJkgqCcKua+1PuPpTBOqjvHp6oQZ2bKTzAT/7hLXXJe/vztl/Rdqo6NnO/vn/qUvVvGa6AgAi16n+Znl504Pj1RCuqL5eW39VOTsOQYQTqok8yK3zoinZyjh07Vjk5OcrJySkMOSdPnqzFixdXuA06OgEAAAAAAAAAQH1TXuhY8OVtjzzySLndnI888ojX65WnytfolKTTe3bSMedRHbHCZSW41LhxB23a61SrDk2UYQcozZ0la99edW4eX2HRgPZj9X/3RemZ8T8q/cph2rapva649y86M8KQzB3a8NPPSuz6hD54o7cCMrbp22fv112T7lbHjW/pzNFnK/CjH/TT/gfVLd4hKVU/L/xdPoNe1uCQIPlWtDw4Sf9etFgJXR7T+6/1VkDmTv3wxuO6a9Qapfz0kx7s5V/J4crQkr+N1cUft9Q9L87Sq82TtOCpW3XHhLvUasNbGjP+Of3rot664qYndOGKx9Xo7Vv0+LbReuPjSWrqMBQ08AY9/cGdiouwlbDoOd3x0BTd3XOrPpwYIcNOztvvHk/rk3dOl1/Sr3rtr/frtvtO1/WPP60PnwhR4uzH9H8PXKXHB/ypFwf7SXaS1nm8P5XsQ1iJ1e3D+u2bb7Wj3d/13mtnKtKdLGeHGDkq205wVcaWpV8eHqvzXzF06VPv6+nTbK1/72E9cN65ylq8VI/2Daig/iZJvup51yz996p4OeRQeIvACh+9op2cOTnFu4cnTJiguXPnasCAARVuo7ZeEAAAAAAAAAAAAKrjRFwL80TxNIPxKOjs0Kqz3v/gC7Ud11/J6dlSgFMRMfH6Y/shWU0y1LxRsHZbLmmnW51bt66grFvrX7lVzxybrB9fulgLRp6hW54/V78+3kd50Zyh0E5na/TwvvLRUA2J26UFAz/QnJUujRp6rs7yvVlzFhzRzVObyMj4SXN+dGvA48PVyJCMCpcnSTIU1nmoxp7TVz6SRo7qIaPvIL30wlzd/v4FCq1g1HbSl3rhzQSNfO1HPXxRlAxJvV7brbntH9NnS17VmLFNNOH5f+nC3lM0bdoRRc7ZrFGvf6hJTfMaZ8O7jdIF3fK21adXpNZ/2lvvrtgk98Qz5Fuw3x0GacTZfeWjQYrf9rlmP9tF42+YqJF+kgZKP314gZYs2SFzcCc58+/j
yf5Uvg9lhaMOhXcdoXHD+hb+wlS6nVGVjy0k+Ws99+9N6v7Aar15a97+DB3cXmnrT9dzz3+rv358kcLLqa/cvH8CYzuoa9e2pVuTyxEQEFDm7dnZ2UpOTq70/oScAAAAAAAAAAAA9UOVr9EpSd07d1L6oUPaOW+2wsJylLTvT+UkJWjH9vX6aP43mvHtl9qdcUwH3ClyuVwVlPVRt/vmasO3d6l7YEfdNnODFv69IOQszdmqnVo5jupIkiUjaowmDXVqydfzdNSWMn/+WguyztSk85vJIVW6vJSAnho1JFapq37XVndFh0pyb1mtdRlp+vba5goMCFBAQICC2t+pJblpOnAgVbYko8mFeu6ZMTrw0Qyt7P+onp/UNL9ujnZ89ZAuHtRZ8VERatziXL22xa2c7Ozy9lpNmzeVkX5YhwtmZPWJVVyMlJpSzjSzVdifquxDVVRrOyXG5t78h9Zkxmvw2W3zQ1tJPu119uA4Zaz+XVsqeTw8VdH0tEWnta0I1+gEAAAAAAAAAACoHzzq6Ozcvo2at2mjPdt36cAvv6ppZ4d2L92iuJ5nys/HrazcRH3y2XT169VEAVkXa2D3QeVXdjRSs9i8b/1imimqgkEaPr7ykSXTkmREaezkc+S88TPNOTxBTWbOVuaQp3R+bP64K1pezoUrDYdDsqsQ8dm25Giqy2bM1f29ix46h0KaNpIhSXayVv+0WpmhYdKvM/XNzmt0Y1unzPXP65JLX5Hzun/p3Zf6KMbYoulXX6qvKijn6+crw86RadmSDMnwk5+vIcuyKgwkK9yfquxDVVS6nbKv11p8bHaVg9WyN+bZ6kWnp120aJEmTJig7OxszZo1q8JpbYuVpKMTAAAAAAAAAACgXvAo6IyKDNfnH76vK66/Wdv/3KTolqFqEufWofXz1eGM7srN3KXspINa9buPVixeoEvPekLTJl/o5SEbanzuVTo/6HJ98N57avKtW2NeHa8mRlWXl2Bu17LlBxR4Wne1LXU0ivNp311d/F/Wmq2m2lzeVX6l1rB1dM49uvmjJnrohy8UcOfZeuCm6Rrx3c1q+v/s3Xd8Tfcfx/H3zZY9Zcgg9l6lRmvUVqtF7VKlVarmr8OqotU9tDq0WjWKqlnUCKVqFLWrqBqJiJAIsbLuvb8/SIzMazQ3vJ6Px308bs74ns8555Pj5n58v9+927XP9Ii+GNtTjbwNksldJb3vQe/AW8/nWq9Is8mcx3PIm1zbMeYem13p6qpc6FNtWH9Exlqlr/bqTPtHv/0eLefK1VXKLpt2JMlQSM6FpPMJ52VSFl2Ts5CYmJhR5OzcubNWrFihhIQEde7cWXPmzFHLli2VkpKSYxsUOgEAAAAAAAAAAKxDptJebkNzVi1XUmuX/KieLw7XxnWr5FPURaEVU3Q+cqv8i3rIxt9BprRUlS7WWI3r1Lw3Ubs1Ue9Ofmr8+v9k7/u0FrX0vLlzX47rjTqx5H29VbKzaoVIB2aP01u7i6n3uy2vzQeZPYNvOw1+dqJavNdJ3exG6pm6oXK8FKW/zhZXrx615ZYYodGD5sh72AYNqVZJtl+M1oKao/XSty216KGKKmn+VF9PmKEinSrIzy5ax87fUX/GvJ2PjY/8vM06/us8rfynpFqWzOUc4hfq2ZrPanPrH7X1k8bZzlma27Vwz0tsXm00fEAZPfZWZz3vOk7dy5u1d/piDdafAAAgAElEQVRoTfyrnAZ/1irn+2FXStUqOWnS7Lf12cMvqKL5uM74tVGn2jn3SjUYDHJwcNC6detUu3ZtScroyZlbkVNi6FoAAAAAAAAAAABr6RhmUY/OdP4+Xloy/Su98ta7+uqLr+Tk7KvQypdlNqeokIubmpYdpGdadbmHRSEn1e3bW5W+eEMpvZ5XQ2dL1hvk5BCviIl99NbxJHmWbqgXZn+icQ1c83BcdzV8b5UW+b6iN74brA5vJEoeIarU4W092f0hHf34FU3TM1o2tOrV+UbLDND7L36nem+M1Yq9n2r6Zyf10jtD1GbSeRkdXOUdUFo1i3tZOgLrLXI5H5ui6jZ6qH5+YYr+93VbNX23dg7nUFtukszmvAwpm9O1SC905natnVVr3DItdh6mke90V/PTkn/llnptyYd6uWahXE7bRx3enqzfeo3Q6x0WKtW9hJqPq6mOtb2vz/d5C3M2w/m2bNkyx+FqbzqslfziAgAAAAAAAAAA5Ifjx4/ndwgZDJLqREZGbkxf4O7uLg+P3Po2Xvf13AUaOWaEAsuaVLlmOQ14fKwerlzlXsR654x7Ne6hmvqh+W/aN7FG5ipvQWPN52PNsd2B8+fPKyEhIb/DAAAAAAAAAAAA96EHfWRJk8mU7bqiRYsqKioq4+fQ0NC6t9Wj80Z9Oz0pfx9fLVu+QmP7DlOgn49F+wMFCT06AQAAAAAAAAAArIPFc3RmpU3jemrTuN5dCSg/mU9/oxahfbUyq1FMbXzVe1m0pjZ3+M/jgvWwsbGh2AkAAAAAAAAAAHAPWFqDyTR0rbe3t1xcXO52XAVD2lkd2R+pxKx6xRrs5BNeXiFuFLkeZJcuXVJcXFx+hwEAAAAAAAAAAO5DD3pnK7PZnO26sLCwuz907X3FzlvhlbzzOwpYsQf69wMAAAAAAAAAAMCKZBqn9kGf5BTICb8fAAAAAAAAAAAA1iFT1YYea0D2+P0AAAAAAAAAAACwDhQ6AQvw+wEAAAAAAAAAAGAdKHQCFuD3AwAAAAAAAAAAwDrY3bqAOQiB7NnY2FDsBAAAAAAAAAAAuAcsrcHQoxOwAL8fAAAAAAAAAAAA1oFCJ2ABfj8AAAAAAAAAAACsQ6ZCZ2xsbH7EARQIUVFR+R0CAAAAAAAAAAAAlMUcnQEBAfkRB1AghIaG5ncIAAAAAAAAAAAAUBY9OgEAAAAAAAAAAADA2mXq0XmjyMjI/yoOAPe5+HMXVbVSufwOAwAAAAAAAAAA3Cfo0QngP3HgnyP5HQIAAAAAAAAAALiPUOgEAAAAAAAAAAAAUOBQ6AQAAAAAAAAAAABQ4FDoBAAAAAAAAAAAAFDgUOgEAAAAAAAAAAAAUOBQ6AQAAAAAAAAAAABQ4NjldwAAAOtgvnBQy6ZN18rdp5TiXky12vdW97pBss92h4s6tORTTfo1SP3f76ly/IsCK5fnHDee1d5f5mnx+t06Fp+qQv6lVbdDT3Ws7ifb/AgcsEDe8zxGm3+YpaXbDykm0SznwsVVs21PdakTmP1zH7ACFn9ekaS0aK3+8C39cLaBxrzZUcV4mMOKWZLjaX99r2HvrlGC+doCm8JqPvIddS1JksO6WfwsT43TvjXLtGrLXv17qpAee3Ws2hclz2G98prjxiNz9dr4ZTplvHm5fY0B+mLgw3L4zyIGLGPJczzl5Gb9OHORNh86o5RCgSpfv6OefqKKvHmM4y7ia2kAgGQ+ry3TPtUvl5vqxbcekdvRRfrsm0+10H+cniqR6aO4Eg6s1ZL5y7XtxGVdcgjKl5ABi1iQ4+akaB044aZHe49UP780HV0xVVOmzFTwu4P1iIchn04AyAN
LnuW2Xgp9qLmea95X3vaXFblumj6bNlfFKw1SXVfyHFbKos8r6ftc1J6Zn2pZtI0Mhf7bcAGLWZrjyVeUEvaExo9spSAbSQaDbGz51hBWzuI8P6qlH07SBud6atd5iHoF+8jdhTyHFbMgx22LPqlxk1vLlL5r3O+a/M5KedYpw38+hPWy5DluPK6lk7/TgXIvatygcnKMWaspH3ypaYXf1pB6nuIvT9wtFDoBADInbNf6PYX06CuPq2xhW8mvo1ptGqZ5Gw7qyRIVMv1jceX0GTnW6acxThEaOSeHho1ntH3ut5q74ZDiUh3kEdxIL4zqoNL864P/mCU5bnCpqI79Kmb87NOkln5Z95tizpokj1u+VCHHYUUse5Y7qUjZCpLMMialydYg2Xn7y9chiz81yXNYCUs/r0hmnds2QzOOVNdz7WP18fJsGibHYSUsy3Gzki5eksnVU54O9rLP6ZtCchxWxLI8T9OxpVMV4fqUXn+xtrzIcxQAFuW4jb2cnK8VhszntXnxz4qu2EsvVPPIXAAix2ElLMrx1BidOO2i0j3KycfRTgp7WNVCftTK0+dkkufNo2aR47gDpAkAQMaTkYo2F9GjQdc+YhicFRLqq0v7TyjBXEF+N33CtlVQva7qLCltW0SO7ab9tVjTfrdV61c/Uj0/sxLPJsuN/3yLfGBZjt+0p07v2aOTHuXUIShz8pLjsCaW53ma9n47SO+vuyAVCleroW1UKovxschzWAtLc9x8/g/NmntGj7zURyVPT8m2XXIc1sKyHDfrQuIFGeN2afmiKwoILatqlYvJI4tvechxWBOL8jztkNZviJbRcanG9ftGiQYPFavZRj26NlCY083tkuewFrf7t2fa8Qgt3eevFm9UlVsW25DjsBYW5bhjKVUrn6rZ8xeqYp+WCotbprUniqt+t5BMUwOR47gTNvkdAADACiQnKdnGUU4ZX3Ab5FjISUpOUpI5px1zZnB0kkNqgk6duiCzk4f8gwvLmXEpkB9uK8dNStg1U5MWXlS9Z9urrGPmLchxWBWL89xOFZ+ZpK8+najBTR217uPJ+vV05g3Jc1gNS3LcfFl7589XTK1uahFmn+OwWOQ4rIZFz3GDXCu01tMtysn5SqQ2zXxLL4+fpwOXeY7DylmQ5+ZzUTpx0VdVnnxR4z79SpPf6Koih2bq8yX/Ku2WZslzWI3b+dvTfFm7V6zTpRqt1SAg66/ryXFYDUty3OCtWh1bKfhEhL5752UNfzdCyVWbq1Zg5gomOY47QaETACA5OsnRlKSklPQFZiVfSZIcneR0Bx8qbEt10JBnKihu4QQNHjZB36w6pAt3UDgFbpvFOW7S2e3f6Z2v/1HZ54erSzmXLL8kJ8dhVW7nWW6wlZNHEVVt10n13Q/p9z/P6NYUJs9hNSzIcePJCC3YE662LcKVRUflm5DjsBoWPccNcitaXfUaNlPbrv00Ynw/VTu3Ugs3JfAch3WzIM/NqalKlaN8gvzl5mAnJ/9qalonSPH7/9YZ083bkuewGrfxmdx8cYc27HJUjUfLyynrTchxWA9LcjzlkH6a/IscOr6hDz6cpI/HdlPwvi/18bIoGW/ZlBzHnaDQCQCQbVCoihhOKjL62v+LNV9W1PE4uQSH5DwPSm4MTgqp00XDJn6k8T3CdeKnTzVnd/JdiRmwhGU5btaVgz/po2+PqEK/l9Wtsmf2PYHIcViRO3uW28hgkGQ2ZfqCnDyHtch7jpsUt2e3os5v0xcvPavevXvruc+3KTlmucYP/U57M3UDIsdhHe7kOW5wCVGor5R4PlGmTCvJcVgPS/Lc4O4tT5sExZ5JvbbErNSUVMneIfO8tOQ5rITlz3KzLu/boQOFKqla8RxmmSPHYSUsyXHj0a3aeiZcdeoEyE62civWQG3r+urEvkNKzNT7kxzH7aPQCQCQwesh1at0Rb/PX6b9p+IU9ceP+vlvH9V9tJTsJJkT92jepMlacvBy5i/Ac2A+F6lDJxJ0OcUg18AQ+RVK1uXLaRa1AdwNFuW46aQiZkfI0KSv2pcppLTUVKWmpinNmDlzyXFYE0vy3HT5X23d9JciT59Vwulj2rFkrn47G6pqlf0y/YFAnsNa5D3Hk1S4xWhN/e5bffvt1deU/jXkGNhSoz98RhVv+Q6RHIe1sOzzymn9ve0vHT99VglxJ/RXxHz9etxT5coFZZrzihyHNbEkz+VcQbUrG7Rj2VL9nZCkS1Eb9PPvCSr5cBX53Nr7kzyHlbD8+xWjjh34V+bwUgrLoc5JjsNaWJLjhsIhKmL7rzavO6SEpBRditmpdTtOyzMkWK48x3EX5fD4BAA8MAweqt1roOK/m67PRy1Wilsx1e75otqVsJckma+c1tFDh+RULVkq7ZzHRs26cvw3TZ/6m6LPp8jG2VfFanRTz4eyHgIUuKcsyHFz0L86GJmkY0fG6PmF6Q3YKrDVaL31VPgNXx6S47AyluS5V6z2RizU99/G6bKc5B1aSU1f6qwWRTJ9PU6ew3rweQX3O0ue4yFntH/tTK3/N1YX0uzlHlRaD/ceovZlbx2smRyHlbHoWe6lmk+/qDPfz9IXLy/TJXs/lXusv557rPAt+Uuew4pY+nnFfF4nYy7Js6S/HLNtlByHFbEgxw2lH1Xv/nH6/sfP9MrcCzIW8lXxh7rqxfalZH9To+Q47oxBUp3IyMiN6QtCQkIyVkZGRuZHTADuQxu37VGX9q3yOwwAAAAAAAAAAFBARUVFZbwPDQ2ty9C1AAAAAAAAAAAAAAocCp0AgNu2cOHC3DcCCjByHA8C8hz3O3Ic9ztyHA8C8hz3O3Ic9ztyHPcShU4AwG3jQwrud+Q4HgTkOe535Djud+Q4HgTkOe535Djud+Q47iUKnQD+E2VKhud3CAAAAAAAAAAA4D5il98BAHgwlC4ZrstXkvI7DNxlrVq15r7ivkaO40FAnuN+R47jfkeO40FAnuN+R47jfkeO4045F3LKdp1BUp3IyMiN6QtCQkIyVkZGRt7TwAA8OHz9Cud3CAAAAAAAAAAAoIC5sdAZFRWV8T40NLRujj06489d1IF/jty7yADc18qUDFfpG4asXbw8Ih+jAQAAAAAAAAAABUmZkuGqWqlctutzLHRWKFtSFcqWvOtBAXhwpKYZJV0tcnZo0yyfowEAAAAAAAAAAPcLm/wOAAAAAAAAAAAAAAAsRaETAAAAAAAAAAAAQIFDoRMAAAAAAAAAAABAgUOhEwAAAAAAAAAAAECBY2fpDp98s9ii7Qf1aWvpIXATs5LiTyjBPliB7ob8DgYAAAAAAAAAAACwChYXOiWpZY8Oedpu+Yyfbqf5AsGcMENPFB2jsMU7VGdyUQ3z/UlHvmgihztt2JikJLOTnNLvTEqEhlTpokuTozW9jeOdto57LPnQbA17fox+3BatK651NX7dCg0uY5vltub4hXrmocFKeXurZnXyF2VsAAAAAAAAAACAvLvtoWv9HXN/Zcv4l96r7ycXJ0c5OhaSm2+IytfrpNdm7NI58+1G9N8yFHKVi72LXF3s5epaSC
4uzndeqLq8UN2DqmrMttS7EeLtM8Vr01tNFVxsoH5NyeM+qYc0Z3BrVS/qIzcXLwVXbKZBPxxUkqXHTv5bkx8vIpdKo/VnmqU75zPzSc0aNlDLAl7T6sMndXzHdPUskXWRU5IMbuXVuvezalvJkyInAAAAAAAAAACAhfJnjk5zkhITUlRt7FZFn4rSoW3L9EkXZy0b2ELPzjopU74EZSHbwgr0D1JQgL0KBwUqKChA2Ze08sicqpSU/Kz0punkb59rYJOaeuqTnTpryY2wD1LldoM0dd1+HT38h6Z3MmvOwBGaH2/B+ZjjtXJYJ32wz6ZgFv5SD2rXXrPqduqqygGe8g4KlFdOfaYdSqn96DHqVJaeugAAAAAAAAAAAJbKn0LnNfYunvL28lVgsUpq/PwEvVDzkrZs2Ks0435N7VlfFUJ85OrspsLVRmlTqmSKXa/3utdRMR83uQeUU9OXpmvvxevtmU7/pg97PqLifm5y8w5R1ZeWKsEsGU9G6M3OdVQqwFOewVX0xLjVijFJSjumRf9roXIB7nLx8Fd43df1e0oOy29kV1G93h+pVkGOKtvtbY15IuzqxUzbo8kda6lMsI/cnF3lHVpd7V75UQcuS9IlLXs2WL4d5yjhWv3PnDBbHfzC9dK65KsLjEf0cX0XOTg4yLnqG9qdJklXFDGssop4uskjsJyaDf1JR26IJ8frkrZbn7Z/WGVCfOXu7Cz3LPa/UeKRY3Lt8r02TGoiy8pvrirboLGqFC0sT9dCsrORHIqUUGihvJYsTYpZMESDt7fV1HGP5P3Y2Z7fJS19poj8Ov2oc+nbJkWof5i/uv50Ppf7JOm2cjBNqcZL+vEpdzk4OMjR7xktS84h9pTVeiHEV08vSc7Tfcoyv9OyjjP7nL9+HDdnF3mGPKSuoydq2BM1VMzXXe7+ZdTslZ91wnjtMmTXDgAAAAAAAAAAQD677UJny26f5viyhDk1QYd++Vyzd9qqTPmSsjNGa/vavQp/baP+iTys7fNfVBWbQ/qsWwdNOtdGX286oH0r3lDZLUPUdtBSxZslGQ9pctcO+jiupT5bt18Hd63W1AEPy8N4UJO6ddHUtG76ftsB7fjuSSV+3ktDfzqjpIi39OJMe72w8rBiIvdo9ZReqmAvJWez/CYGd1VoUV9htpJruWZqWOxa1z1TrPZtOaTSI37X/n/+1sbvn5Xrz8/p8cHLlGB20aOtG8p+4zKtS7y6edKWtfrDob6a1rhW2rMtpoGr45WYmKiEP0aqkp0kOahK/9natP+gdsx+WjZzXtDQ2TEy6+p553hdTKe1f+s/KjN6iw78e0jbZnSTZt+w/03sVKbXu5rY5xEF3dZkoyla1S9Yrj7F1fSjS3pm8qt6xDlve5pP/aThrx1V9y9Hqra7Bf05sz2/86rT4hEZNkVoy7XiZequCK27VEct6rvncp8k3U4OSpJc9dTsM0pMTFTiiSlqkdeKbW73Kbv8NmURpyH7nDebTmv/1sMqN3a7/j1+SBverqg/P/xYfz/8vlbu/lvbv2uthK8G6M01V6S0HNrJ+x0CAAAAAAAAAAC4J+5Zj87YnHqySZJStXlEZfl4ecjVLVDVn/tF3s9/r6n9wq8FZSPP4GIK9PVXaLEAOez9Qd9uK6PBH/9PjUsXUWiVjnr7zU4yLfxaP58xK+3a+iGfvqoW5UMUFFpG1Ur7ybR3tr7fUUYD3u6n2iEBCm/ysoa3ln5d/odSXFxVKClah/6Jk8nVX8XLF5OnQbLJZnne2cgjOFwhRUJUtmF/ffFee6X89I2WnTXLvXEXtbKL0LzVCTIrVTvXblBa/VZ6xCV9X4NsHZzk5OQkRwe7a0O42sq3eDmFBQWpeIOX9Hwjg7Zv3qtUKeO8s7su6fG4B4QoKDBYpRsP0QuNr+9/dzmo6RfHFBe5S/MHOGtqx+76+mgeuv+Zz2nVG2N1sNMHGlrF6TaGrc3q/PbJpVE71U9ZraWbkySlae/yXxT3yBNq6mvI2C+7+5S+3pIczIjG/tr9c7S38Bcs+/uUXX5n/buSfc5fnTPVIFe/QBX2K6KKT/VSC/8UuYZXV6kiRVSqWW89USZR+/fHKCXXdgAAAAAAAAAAAPJPTjMI5mj5rIF5KGbmxF5Vhy3R9J5hKuTqo8J+rsroNJnFsKrGk1E6ZVdUJUKuz4TpEF5CIcatioo1yngySjF2YQovcnNpyRgTpZjk7RpT1VNj0xeajLJtHKeUum9o4eQJGjnuMRV/pazaD3lTb/WvI99Hsll+m2Vh5xKlFWKcr6hYk1SukXp18lS7GYt1qk0lrVh9XvVHPCaPPLdmL09vF12JvyxzHq6LPHPe/64z2Ms1oJweHzVRz8xvpJmLjqnPkPAcC35pf3+pcSura8S2Giok6Y7S6obzk3dzdXpsqP43f4Peq+ethYtOq8GIx1U4m0rqTffJLfN6y6/13TmP9PucVX5nJaecv2QucvPGBnd5ukv/XkqSWa4yGNzl6W5Q8pVkpeXYjpTnUYkBAAAAAAAAAADugTvq0envmPMrN04+YQovFqYiNxY5s2EbGCz/tOP698T1HoKpx47ohE2QQvxtZVs4QL5pUTp68uYehLaFA+TnVF8fHUzUhQsXrr4uXda5xb3ka+OmCl3f0eLdh7X144e07/VOevWXS5Ihm+W3KfVEpE7a+CvQz0aSo2r166cKm6Zo6o8/aUl8E3Vu5nW1F6ONnWwNyUrOpdJnMFyvMOV2XXLb/96xkY1BMptMuRRUjTq+8hftjV2onkXd5ebmJt/uC3Xx0AdqULy/VmUzl2hOMs7P4KXmPdvIvHSOflkzR/PPt1TPlj7Z9hi9+T5ldjvX+k7cdJ+zye8s48wp5zOdvI2uHub6XTIYrv5kWTsAAAAAAAAAAAD/rXs2dO3dZlepq56p9rc+GfKB1hyKVtTu+Roxco7Uro9a+RlkV7mTupXbrY8GvasV+6J0KuaY9uw5ruTKndS17Da9N+gTrf7rhGJPR+vwjp06dkkyxezRpr+ide6yQV6lK6qo62WdS0yRMZvlee8BmaK/1yzRlkMnFLVnkV5//UelNeuq5teqQ7bhPTW8XYw+HPilElv3UJP07pz2YSoRHK9183/RoZPROvjnAZ3Opa6V23X5r5jPbdX82Wu052i0oo/u1M9vjdR3JyqpTYuiyrkEaKviQ9Yr8dLFjGJa3Mwn5FpqmNb9+7ma3tZcode5PtZbXT0Wa+gLP0hP9VZj9xvX5nyfbpWf1zq7/M6q/G6XQ85beszs2jGfXqkxT3XV2xvOy5TNe+bxBAAAAAAAAAAA91KBKXTKtrQGzv5R/d0W6NlapVW+6Ujtfeh9LZ7U+mrvMvtK+t+8OerrvEgvNCir8NIPq8PopYqyqaSXf/pBPe3mql+DsipWrIIa9HxHa2LSdGHX9xrUoqKK+Piq6KNvKqbF+xrb1lMXs1zuZcHckWad2/KJetQtozINBuu3YiP04+ROCkhvwOCp5oOeVbipqHo+10AZ03PaVdULE/vJ95feqlaivBo9/4125Vacyu26/EdMZw9r1
RcvqkXVUipVpYWGrfbViz/O0ZBytz068t3hUEN9+lZSQny4nu1bSzd3NM7lPt0qP691dvltzHrbrHM+q41zPmZ27ZguHNH2TRu14/hFGbN5T6ETAAAAAAAAAADcSwZJdSIjIzemLwgJCclYmZqammmHT75ZbNEBBvVpe/vRFUQpq/VC8S66NDla09vcOn5vqi4kXJGNMUpLR3bVyCuva/P3T+o/7HiJdDneJ9xNqWlXC6yLl0eoQ5tm+RwNAAAAAAAAAAAoSOztr0+AGRUVlfE+NDS0rsXd7R64wuXdlLZPnzRvpIkHXFSh9cuaNfmJAlPkNCfMVY8qg7Um05yZNvLpNlM73m+o7EaZza9981tBjh0AAAAAAAAAAMDaWdyjEw8o82XFn0xQUharbJx9FeiVQ6/I/No3vxXk2O8ienQCAAAAAAAAAIDbdVd7dOIBZXCWTxHngrVvfivIsQMAAAAAAAAAAFg5m/wOAAAAAAAAAAAAAAAsRaETAAAAAAAAAAAAQIFDoRMAAAAAAAAAAABAgZPjHJ1nz579r+IAcJ9yc/fIeM8zBQAAAAAAAAAAWMLf3z/bdfToBAAAAAAAAAAAAFDgUOgEgHssISwsv0Mo8LiGAAAAAABYD/5OhyXIFwD3EoVOAAAAAAAAAAAAAAUOhU4AAAAAAAAAAAAABQ6FTosk68w/u3X4rDm/A7kHzEpOiFbshf/g3IxXFHf6vFLu/ZEAAAAAAAAAAABwn7KKQueMBRsU7+Kf32Eo5fB8vdK2hkqFBCmw9ECtvPSPvmxXQfXf3K5USUr5XW+27arJu1IlSeaEpXqxahU9t/CMCnzpM2W9RtZroDd+T73HBzLp2JSnVLPPbMWY7vGhAAAAAAAAAAAAcN+yy+8AbovxgD5r10bv/HlBaWYb2bv6KKR0DTV7erAGdawoD8NttGk+pXmjX9WqwmO14M9WCnG0VSFno3yf6C4VC5FtFrsYXMqoWbduSivvods5ZC4B6fyeufrow++0bMtBnbriIN8SNdSi51AN715d3lZRor5dBb4sDAAAAAAAAAAAgHxWMAud5mQlnktRpVdWa0avIKWei9GBtV9q1CtP6Yh+1bdPBVjeVTX1sPYeMOvhCR1UobBjxuLqPV9V9ez2cSih1sP/d5snkROzEjdPUPvuM2Vo+7Imzm2gkm6X9O9vM/XuWx3U7uB3WjyhgbzufnUVAAAAAAAAAAAAKBCsql+gpcPX2jl7yNPDW/5h5VW/5wj1rnZZ27bsV5rxoGYOaK26lUooJDhUpRu+qa2pkunMJn3Wr7mqlghVWJk6av/qXO2/lN5amtLSLmvRs2EKCAhQYKmBWpV8RjM6h6vR+/uVllUAKes0vGJJ9f8lRUrbp697NVGtiiUVFhyssLJ11GHUEh27YSJKU/wWffHi43qoVKiCQ8JVrkZjdft8V+a20/7W16O/UVzzDzXng2fVuHJxhYVX0mO93tGcSW10ccZofbH7vFYOLK9Szy7S+fT9ktfrf1VK67kliZIk46n1+rBPc9UsU0zFKtTX0++tU6xJUlbXJ02SkrRuTD2VLxaqolnEn217StHmt5qqaqlQBYeUUNWm/TV198WMfpspx5drfLd6qlA0RMUqNtCAucdktOhOAwAAAAAAAAAAADezqkKndLXYGe/iLzvvvBc9zanndXjNVM3fa6uSZcJlZ4zRrt/2K2zwL9q+50+tmdZXFW0O65vnemnK+Rb6eMUf2jjvFZX6c6S6v7ZSCRkjqbqo3deHdPz4cR3f96EaO1gQuClOB/88opLDV+qPP7dqzZcdpPnDNWpB7NWCn/GIpj3fU5/FNNDbS/7Qjm2rNKJilLbtj9WtU1Uaj63R6kPeat6liXxu6rVpkGfDzmrhd1xrIk6oRqNaMmxbrz+vXF2bune9Nl56WI3ruklphzXl+b6aaeyoyWv+0K+ftVLi1AEatSRe5qyuj50k2ati7ylasXmrfhegoboAACAASURBVP2mk2wW3BB/Tu3JXmU6f6B563dqz9YFGhyyWeNe+15HjJJS92pSnwFaaN9VX0Zs1cYFE9U23JILCwAAAAAAAAAAAGRmNUPXLp/xU6ZlPZ58NIc90rR9fD2VmGhWalKKDF6lVb/nZL3Zq6hsdEySjTwDw+Tv7SB5S2l7v9OsnSXVb+2Lqh9uK6mtXh/5u1Y8O10rxjRVF/errdrYOcnR8VohzuKpJG3kVjhYAf4Okv8LeqbBV3pl236ldvaXzYH5mrG9qJ5fPVSNS9pKSlERL3sZkjO3Yjobp7OGwgryz2JmUJvCCiosnY07J9dnW6pu8gSt3Jasx+rZ6u/VEYqvNUANvQ1K2ztfc3aXVJ+1vVSjiK1U5CUNbD5NAyK2K7mlY6bro5T9kmzlU7SMQgIcpIDn1LP+59fj359De+2aySu8orwkST7q0v0xvd//kI4apbD9S7Twn0p6fupzeiTURpK/mtYI0tu/WHptAQAAAAAAAAAAgOusotCZc0EzO3aqOOAHfd45WIVcveXr4yL79FUpmbc2xUYr1i5URYOvFw/tw4opyLhD0aeNkvvtRJ5zfJ6ezkpKuHz1+DFRirELV4mwLIqXt7Dx9pW3+bROxhqlkrdsbzqtk6clb18v2XpWULt6o/T6z5v1Rm0vLV1+Ro8MaSpfg5R6OlqnUnZpYoNiejt9X7NJtvXO6rICLY8/p/ZSo/XbB6P0/oKtOhqXJPtCtrpgaiaTJFNcrOLtghQSYHWdhwEAAAAAAAAAAFCAFejqk6N3sIqGhSjwxiJnNmwKB6lwWpSORV8fKDYt6rhO2virSOHci4+3w2C4Pu6swctXXmknFX361oFqM7Mt+pgalzqrFbNWKe6mXqVmJaydreVnwtSocUnZGjzUqEsLadUCRWxYoKUXmqhTEy8ZJNn4FpavYx1N2HpckZGRV19RJ3R0Vhd530b8ObVnv3K8XpqRqqem/q6/jhzW9snt5Je+X+EgFU47piORzMoJAAAAAAAAAACAu6dAFzotYVe+g7pWPqQpIz/Xb4djFL3vZ42fsEB6vIea3TwRZvYMheTsdFlH9v2jRAvrdvYVHlfLkF36asy32hJ5Rqf+2axth69kPTquXTk9N/5Z+a4api5Dv1HEzn905PBurZ32sjq/tERu3cfphcpXS7suj3TTk26/aPSwn6S23dTALf18n1CHUjv12YivtO7ASZ2Ji9GRPXsUedmyuDNCyqk9Y5pMZslsTlFyUoqMMii9RmpXrr26Vv5XX/zvHS3fF60zZ07qxJlszhsAAAAAAAAAAADIowem0CnbEur79bfq7fazBjWrqTrt39T+quM0c2Jzeeexzin7aur8/KM688Vz+vDPVMuO71hdw795T40Tv9dzDaurTsdxWhVjkoNDVn1RDXKvPVrzF47TIwnz9Grnxqr/WAcNmx6t6iMXaNGEhvJKj9mhqno8XU4JZ4uqe4+H5JARa3kNnDZFnW0XalibWqpata7a9J+k307dZs/KHNpza/6KJjx+Qd90rKkyxUuo0vMr5FAqVJ4GSXYl9dzU6RpQZJNe
b19bVarUUa95V1S+Yphc8nrdAQAAAAAAAAAAgFsYJNWJjIzcmL4gJCQkY2VsbGx+xHSfSlX88eNKcvaVu5N04div+mDACJ0ZsE7fd/IXNT/cr9zcPSRJi5dH6LFHauRzNPkjISxMXseP53cYBRrXEAAAAAAA68Hf6bAE+QLgTvn7+2e8j4qKyngfGhpa1y4/AnogGU9o1cS+em/dcZ25mCbngHJ6tPNkvdOBIicAAAAAAAAAAABgKQqd/xXbYury5a/qkt9xAAAAAAAAAAAAAPeBB2eOTgAAAAAAAAAAAAD3DQqdAAAAAAAAAAAAAAocCp0AcI8x2fqd4xoCAAAAAGA9+DsdliBfANxLFDoBAAAAAAAAAAAAFDh2Oa309vb+r+IAcJ9KTTNmvOeZAgAAAAAAAAAA7hZ6dAIAAAAAAAAAAAAocCh0AgAAAAAAAAAAAChwKHQCAAAAAAAAAAAAKHAodAIAAAAAAAAAAAAocOws3eGTbxZbtP2gPm0zLzTHa/oTxfVChFE2tj56+qdDmtzEIesGUrfotWrN9dmxNJlsauntPas0MIz6LAAAAAAAAAAAAPAgs7jQKUkte3TI03bLZ/yUzRqzTCZbPTxxtyIGhuXcrdS+libuPae3zk5Xu5LfyWy2NFoAAAAAAAAAAAAA95vb7hrp75j7yxLmuA16t1N1hXi6y7dEQw2YdUBJed9b5/f8oBFP1VWpQA85O3vIv0RNtR40V4eNueya9qdGV3KWg4PDTS+3NtMUly9FVaMOvFNbbqE9tShTACadnNVBge5N9PkJkySzEtcPVyXPshq4OkEZW1/cqFereKrcSxFKSInW6nefUcOygfJw81JIlTZ6bfExpd7QasragQpzuuH8C5XRy5uvb3Fn9wYAAAAAAAAAAAC4+6xjDFjzKc0d0EUfn++gmTv/1vqJZbThxc4a/0deymlmJW4aq+YNX9JKl476YPFW7duzUUu+GKpGlQLlk6cztNfDYzfrWGSkIq+9Dk/vLB/DHZ7XbTEq5sRJpcbO05gPt95cULy4Tm+/sUzxaWd0Os4kySD3+m/oqxedNfPFEVp51izpgn4f319fmfrqywmN5HnxL23Y76unv1ijPfvW6ZPHTuvLPkM0J/Z6EdV8KVHJlUdra0KiEhMTlXhuj96qZX9t5Z3cGwAAAAAAAAAAAODesIpCp+nkIn23yl1Pjx2u+sWKqHyH8Xq5QZRmTv9dybntbNyvyUM/1ommn2vZt4PVumZphZeooIebdNbgZ+vJ61qx0ngyQm92rqNSAZ7yDK6iJ8atVozpejP2br4KCAjIePl7OskgyRS7Xu91r6NiPm5yDyinpi9N196L1489tWd9VQjxkauzmwpXG6VNqZLp9G/6sOcjKu7nJjfvEFV9aakSzLnHcFWyYmMSFfxoHaV+N1FzotM3MOno9xM11+VRPeJ7VrFn0pe7qPbIrzXU+0cNfGWZjq4fp/5f26n/V2NVz90gg3dTjZv2np5pUE5hYZXVbsBTqpjyrw6dSO/qatbFs+eU5huoQGcnOTk5ycnRQXaGu3BvAAAAAAAAAAAAgHvktubolKSW3T7Ncf3yWQPz3Fba33u131RGT5e91ovQ4KmKVUJ17te/FGNqrKI5lGON/y7X0r881HpCWwVk1wMz7aAmdeuiqT5jNWtbe/kfmKK+PXppaJld+uHJHAIzHtJn3TpoktMwfbdpnkpd2aQPnuuntoO8te2bVvIxRmv72r0Kf22TVj/lpdQLZvnaHNLkrh30seNgfbVutiq7XdKpKz7yMB7UJ9nF8JSfMkI3ndWZs1LQ42P1rPtT+mDydnV5q6YcL63TR5MOqflb36v4W09qd1ySJIer+xSqppenvKqIhn1Ud2mKQgev0qjaLlldCB1dsVIH/BvqjTLpt96k+NNxSj22TB+OP6+SVRqoVbPqCnC883sDAAAAAAAAAAAA3Cv3rEwVa0F3P/PlC7ps6yJXp/QlBrm4ukoXL+hiLvNkmuLPKF4BKhJ4c83WdGqTZk35WX9dltL2ztb3O8powNv9VDskQOFNXtbw1tKvy/+4NjRsqraMqi5/Pz/5+fnJL6CaRm5OVdreH/TttjIa/PH/1Lh0EYVW6ai33+wk08Kv9fOZ9MBs5BlcTIG+/gotFiCHa/sM+fRVtSgfoqDQMqpW2k+mXGNID/ys4s9K3n7l9NQrPaUZH2nxGaOi53ykeZ59NbxVuLw9zYo/Ha8bO4M6lm6uJsUvKv6cr6o9XFxOupVRJ5cNUccJ8er11etqkFEHtZF345c1aXBDuV/cox+GNFbFBqP123nzHd8bAAAAAAAAAAAA4F657R6dy2cNtKiYmRODs5ucjSd1MUmSoySZdfHiRcnVTa65zJNp4+MnH51W7GmjVN42Y7nx6CKN/98eDW3VSiViohSTvF1jqnpqbPoGJqNsG8fpkjlQkr2qDFusWT2Dr1Z+DXZy87eXcU2UTtkVVYmQ6+06hJdQiHGromKNkmfmeIwnoxRjF6bwIjfXkI05xiAVSj9P83klJNrLw8tFhWr018BKNfXJV3O0e8EO1R3+jSo4OmmPh3Q+4bxMSq9UJ2nne/30UVxrDe/xtyYPfE2tNn+hlhmTjBp1YlF/tey/Q42//VlvN/S+3oNUBvlWbaueVa/+NOKVJepd7WlN+OF5rXgh+I7uDQAAAAAAAAAAAHCv3FGPTn/HnF95ZVe2osrZHNCe/SlXF5gTtHdnpDzLV1BgLhHahj+mRuFx+nnmKsVn08PQtnCA/Jzq66ODibpw4cLV16XLOre4l3yvFeucvIIVFhZ29RVaRN6Okm1gsPzTjuvfE9f7TqYeO6ITNkEK8bfN9li+aVE6etKUaXluMUiSTBd1/qKTXJ0Nkk2IOr/UTjHv9tNnlztr0BP+MshJri62SjyXqPTTvfLnu+r3QZw6fT5ZEz7+WoM95+qlV5crzixJZiX+/rqefGG7Gk9bpvdbBOZ40w1eFVSpqFmnY8/IpDu7NwAAAAAAAAAAAMC9YhWlKpugdurV9IJmjH1fvx6O1N55o/Tu+mB1f7qucq2X2lXTwIlPq9C8Pmrz0hSt2nlYkVHHdOjfU7qSvknlTupadpveG/SJVv91QrGno3V4x04du5RL05W66plqf+uTIR9ozaFoRe2erxEj50jt+qiVX9bdGe0qd1K3crv10aB3tWJflE7FHNOePceVnNcYkhOVmFJILs5X23drPEivPfmoOo0YoDqFJMkgZ9dCunAu8erQtcm79dHAjxXf8WO92dRbBucaemXyQHnMG6rXV5+T2XhAX7z8uWz6f62x9dyVkpSkpKRkpaRdK5Maj2r9wrXafSRa0cf/0povX9c3uwLUsGFp2d3pvQEAAAAAAAAAAADuEasodMoQoM6TZ2uQx0/qUb28Grx2UI9OmqNRD2eeaTKLneXX8lOtXTZKVY58ob5NqqpMmcp6bOQOFalfR8WdJdlX0ss//aCednPVr0FZFStWQQ16vqM1Mcacm7YtrYGzf1R/twV6tlZplW86Unsfel+LJ7W+uRfmjewr6X/
z5qiv8yK90KCswks/rA6jlyrKJm8xmC4k6pKcMwqdsi2jvtOWa8rT4bK9dr4ubs5KOX9eV2TS8e9f08eRTTT+jebyurZLoRrD9W4Ps2a8+qG2n/pDG/de1I7xteTr7i53d3e5u3up2tg/lSbJfPGIfv1qiFrXKK2S5evr2W8vqOWXizShvvNduDcAAAAAAAAAAADAvWGQVCcyMnJj+oKQkJCMlampqZl2+OSbxRYdYFCftpkXmuM0rW1pTW+yQxEDw/JUbTWfna52Jb9Toz/X6KWi1lGfBZC71LSrxfzFyyPUoU2zfI4GAAAAAAAAAAAUJPb29hnvo6KiMt6HhobWtbO0sSwLl7clRZtHVJbPWF/1mLtfkxo7ZL1Z6h8aVeNxTT6eouTU6mp0l44OAAAAAAAAAAAAoOCyuNB5Vxh81WvJBfXKy7b2D2vCrjhNuMchAQAAAAAAAAAAACg4GAMWAAAAAAAAAAAAQIFDoRMAAAAAAAAAAABAgUOhEwAAAAAAAAAAAECBQ6ETAAAAAAAAAAAAQIFjl9PKs2fP/ldxALhPubl7ZLznmQIAAAAAAAAAACzh7++f7Tp6dAIAAAAAAAAAAAAocCh0AgAAAAAAAAAAAChwKHQCAAAAAAAAAAAAKHAodAIAAAAAAAAAAAAocOz+6wPOWLDBou17PPnoPYqkIEvR2fhkefm4yZDfoVxjSozXeScfeTnkdyQAAAAAAAAAAAB4EPznhU5JatmjQ562Wz7jp3scSQ6MyUo2O8rxblyhu9mWpEtb3tIT/eP06q+fqoWHFZQ6zWe17H+N9VHwV1o6uqac8zseAAAAAAAAAAAA3Pfybehaf8fcXzkzK/GveRrXu7lqlCmqkJBwVajTWn0/26Lz5jsM7spS9SvXQBN3pt5hQxa0ZUrQ1o/aq0LVV/V7Sg7bpf2tr1+fI6/nh6qph0E5XoeU3/VqlaLq8eN53XhJ0naOV92iT+rbkybJeECftS6lkKBABQaFqFiZamrYfoAm/rhbCaZMQSp6eicVD66pERuvXF9s8FaL4X3lMvN1fXMwzaLLAwAAAAAAAAAAANyOAjpHp1kXtr6jjm1Har1LO43/IUK/b1ih6eO7qmZRb7ncaSdHc5pSU++0WprXttJ0avO3evXJxur95V6dy+WwlzdM1fTYZhrQLVy2uV2HPMWXrMRzKar08krt3bdDm5ZP0+i2XvpjQls1G7BQJ4w3bJuyU99O2SpHj/Oa/+USxd4Qq13J7urfNFrfT92oK5kOAgAAAAAAAAAAANxdBbPQaTyoqaO+1KnG72vOpH5qXq2EwoqWUrVGXdS3VSnZSTKd2aTP+jVX1RKhCitTR+1fnav9l67tn7ZPX/dqoloVSyosOFhhZeuow6glOnZjT0rjMX3ZOkQBAQEKbvCu9qVJxlPr9WGf5qpZppiKVaivp99bp1iTSTEL+6pK+e76Icokyaz4Xwbqoap9tCDGlG1bN7pwLFLO7Sdr2dsNlPMUl8n6Y9lqpTVsq0dd83Yd8srOxUve3n4KDK+kx56eoDkzB8jjl9F685eEa71BzTq/eqrmnm2ise93lOdv32j2oRuroO6q37qBklcv09aceqQCAAAAAAAAAAAAd0G+zNEpSS27fZrj+uWzBma7zng0QisPeKjZyObyy6r3pvGwvnmul6Y4DdCnK75T8StbNXnoMHV/zVNrPmkmL1OcDv55RCWHr9ai5k66eGCOXu03XKMqPKwZnf1lkCTbMPVdsEajqtvLYLCVvQ7ry+f7aqb3K/pqTWv5/fO9BvcboFElf9OUdm9qwurmeuXlWarzkbfeH7VRD73xi9oF2kiXs2jrpqtup5JdxmqMpJSl3+Z80YzHtGv3BZXpUUGOebkOd8CpQld1rvqp3l2xVcmtmsnJFKNF01fJofVUtW7iq+PFp+uH6X+o35t15HRtn0KVqqnM+R+067hR9Uva3t2AAAAAAAAAAAAAgBtYbY/O2OTs15kS4pSgwgrwz7qYlrZ/vmbtLKl+b76o+iUCFVyxrV4f+YTMy6ZrRXz6eKs2ciscrAD/IJWo/4KeaWDQrm37dX0mTYNs7R3l6OgoBwc7GffP15zdJdVndC/VKFJYRRu8pIHNpQ0R25VsKKzHx09U48MT1Knda9pSZ4ImtPa/4eLe3NZt1ySNsTp1xl7+/p4y5OE63BEbX/kXttWV+HhdNkvGf37UrC0+avVkbRWyK6MnnqygUwtnKOKGCVEN3gHytzujmFhjDg0DAAAAAAAAAAAAdy7fenQunzUwx2JmTmy8fOWlOJ2OM0rKXOQzxUYr1i5URYOvr7MPK6Yg4w5FnzZK7rfuYSdPT2clJVzO9pim09E6lbJLExsU09vpC80m2dY7q8tmycnnUXVo5K6fpl9Wi1dqyOcu97C8dkCZTIaM8nRu10EGGxkMktF4c+HRbDTJKBvZ5BSjKU6xp40qVMRHzoYU7Zo7TweCH9fEalcH1w1v84SqvfeOZi87rce7XusFKxvZGEwyme7CqQIAAAAAAAAAAAA5yNcenf6OOb+yY1v0UdUrGqeVc9cqwZx5vU3hIBVOi9Kx6OsVt7So4zpp468ihbPu/Wgw3FD1s7GTjSFZyTfMNWnjW1i+jnU0YetxRUZGXn1FndDRWV3kbZAubnlPY5YX0wsvVtKWN17TwvT5ObNo67bZeMvPO0XxcRdlzsN1kE1hFQkw68jBwzf0VDXr3KGDOu0UqEDP7G9/0r4fNGeHmxo0qyHHK5s0Z+FRpUZNV/dKZVW2bFmVb/aedqZd0oY5C3X82qmaL8QpLtVbfj5W21EYAAAAAAAAAAAA94mCWZGyq6TnxnRWoSWD1OXl7xSx8x8dPXpYezcv1fdL9iitfAd1rXxIU0Z+rt8Oxyh6388aP2GB9HgPNctLV0u7EBULOquNP0fo8KkYHd79j86Ve0IdSu3UZyO+0roDJ3UmLkZH9uxR5GVJFzfr3f/NU+D/3tdrr36gMVX+0OsjflKMKeu24m63x6NduCqUc9TBvQeuFi5zuQ5JtuFq3amWEma8openrdfefw7qz2UfatB7v8u/fWc9Uuh606kX4nX69CmdOLxTEdNGqXP3z5TQfJxGtfTWhbU/allCVQ1fsF5r16699lqvVRObqtDOnzT/4NUeo6n79+iAU3lVLJZvHYUBAAAAAAAAAADwgCiYhU4Z5NP4bS2a+z9Vivpew59qpHr1mqrToE8VsTtGCYYS6vv1t+rt9rMGNaupOu3f1P6q4zRzYnN552VIWbuK6j36GXlHvKSGD9VRu6EztDelvAZOm6LOtgs1rE0tVa1aV236T9Jvp5K078sxmus9UOO7hMrWJkgdxg1X+U0TNXH1OZmzaiv7EXJz4ay6zR9V0q8rtDMlD9fBZKOwp7/QjFcr6eiX/dTqscbqNOoXOXT4Qj+MqSsXSTI4ysPLQXvfa6GqVR7SI4/31rhFZ1R9xCKt/Ly9QmzitOLHCKlxXz1TvYgCAwMzXqU7Pqf2gQe1YN4upShJ235Zq9T6zVW7UM5nAQAAAAAAAAAAANwpg6Q6kZ
GRG9MXhISEZKyMjY296wecsWCDRdv3ePLRux5DgZa0RWMa9dGJIWs0tUP63Jj5zxQzRz0bfaoS09bq9Zo5jDuMB46bu4ckafHyCD32SI18jgYAAAAAAAAAABQk/v7+Ge+joqIy3oeGhtb9z8cYpXB5h5we1sDX6qnZG+9obZMP1MjDCkqd5gSteus97X9svD6qQZETAAAAAAAAAAAA9x6TKRY4Bvk9PlFzw84p2BqKnJJk8FK9odM016eCfK0kJAAAAAAAAAAAANzfKHQWRAYPlazokd9R3MS5WEWVyO8gAAAAAAAAAAAA8MD4P3v3HR5Ftf9x/L2bbBKSQDopEHoJvUkRpAhIRywoomJBRRFREa9eBftVuILXih0LXguKqEhRKSJcmhRp0kMLJKSHJEA2uzPz+yNLSGgGf2DY+Hk9zzzP7syZM9/ZXY9hv/s9x17eAYiIiIiIiIiIiIiIiIiInCslOkVERERERERERERERETE6yjRKSIiIiIiIiIiIiIiIiJe56xrdIaHh/9VcYhIBeVyG8WPNaaIiIiIiIiIiIiIiMj5oopOEREREREREREREREREfE6Z63oFBEREZG/zsovBpR3CHIedbhhdnmHICJobL1YaYwUEREREZHzQYlOERERkYuEYZjlHYKISIWjsVVERERERKTiUqJTRERE5CLhNqzyDkFEpMLR2CoiIiIiIlJxKdEpZ+EkI91JRFQVbOUUgXk4nexKUUT4lVMAIiIifyHDVNWRiMj5prFVRERERESk4qoQic5X3/+OfsMGU9/fddZ2495Z8af6f/7uS//Ued4uf+kT9Lw1jWfXfsiVYWVIdRoFFFgBBJyvT5WVwTf3tuOFmp+y5IWOBJ2nbkVERC5Wbre+jP9jgQTGNCewcBMZWUfKOxgR8QIaW/+IxlUREREREfFe9vIO4K8WGupgWJ8EhvVJYEDn2lzeLp62zeNoXD+C0FAHoaGOUm1Pz2Dbvy+lco1b+Tbj5GmQTJI/HUxslSt488CF+Ae1xeGNn/LYdR2pHxNCUFAY1Rt3ZeikpWSfzxmZ3Jt5/ZFpRDwwnv6V1/FE8xC6v76PkndkZU1jUEgt7l9cCEe/4ea4Vjy5+uzJ5lKM35nUNYqgAH/8A4IIjalD61638uR/15JpArZIBj1xH8FT/8EbW9zn8eZEREQuToZhnbJR6wl63/0FTeICT3PcTmCLN+kz4jVqBtpOe773bKFU7z+fvr17YDv5NajzDH1HvER8ABg0pHav52hQvQrmOV+jMtHdZ9Dn7p/pP3IRfe+aTY8b3qD1ZdcTWdn/vN+TiFwcNK5eyHFVY6SIiIiIiJSvClHRWVbP330pj721jO9XJjKwQ13CgvwBOFJgFLfJySlK1IWGOsjMcjJxZKfT9GSQciAZV+oGnvzPvfR5oT0Bxw/lL2biM3PIdDckLcOE6uczl2yRu/xp+gx8E9egcfxnVl+aRpqkb1/FqoJIKp/H+WWPLJrCe8kDeH14PXxYW4bQXBQWnuM/Vq0CcrMLaf3USr6+Kw5XzkE2/fxfJj3WnS9+fJcFHw2hRsKdjO33CmOmLGb0lJ4E/rnbERER8Qpu4+QfSdnxqxSB3R5NjUuvYdf+j8kt2SSoJw1bJ+BjS8Lhb+HO8eaqJQvTAsuycBsmJX/iZJkWFmAYJm7DxPTscxsm5/bXhx2fgBBsKR/zv+WrMX2CCAhtTEzjobQd0pd9P45lfWL6OfYpIhe70mOrxlU4n+OqiIiIiIhI+apQic6dTscfTl87YWQnHnljSalk51GnSf4xZ3Gb0FAHaelHmXRflzP04iQ1JZfqnTvi+nACX4yayW3V7IDJno8nMD2oM5dFbiM1/cQ/io3kBUx86Ek+XryFNN9aXD5iEm+Ov4JYcwOvDxnBlF8TOZh5DJ+IRvS77VpiN89k5tLtZPrEceltk5j6wkCqs4UpD73Cwb5TWT11MNGexGbtOo1oB+DeyJShI3h9xU4OZjlxRDaky9BHmfjU9SQEHmHOHQ25NXcyO7+8gTAbWNmfc12DccR9tZXXuvmXuL8Cln0zB1fvd+leGShrMaWxm1e6BvEK4NtkHCtWP0WzzF94aexjvDlvE5mOeDpc/09eeuEWmgWfOM0RHE5kZDT2yGiq1WtN1zahdLt8LOO+68m0ayLoMbgXzlHfsPw/Penpf8ari4iIeL1T15Gz4QgKh/wDHA29hoa1Z7Jy52HPMQfhLYYRXZBEviMEP/8S59tjiG0/ikYJ+xrzowAAIABJREFUrQkJ9sGZuoTtv7xGYmo+2KpSvevjNKpdl6DgEHw5Qv6B2exMdBHVuAdVo6ri49zPoXUvsXbNRgqLQ4mgapv7aNqiI6FBPrjztrHrxzFsSQmlWufHaFS3HsGVQ/GxcjmSspRdy98mMdlJTK+v6Vh1Bgs+/ZDDFoCdkPYf0LPZBpZ88DIn/lwyPTNHmBimyYmfoUHRt+4WlmlimBaW5UN45y+4ujNAIQfm9mXF9gKwRRLddjRNm19KaKBFQfoKEv/3OtuTMj1f3Hu+zC/YS/qB9UV/4uxbxr5NsznU723a9niA9ORx7Duir/lFKpLSY6vGVeA8jqsiIiIiIiLlq8JNXbvT6WCn00EuZ5p2Fl68rwspKXl8vzIRgNDAorY5OS5CQx0kp+SdJckJmFmkZ0Fc36f556W/8p8pa3ACHFnMy6/toM/4x+kWlU9aRkFRe/d2XrtpKFPdN/Hx6m2s+/Aact+8jYdmpGOZaWz5dReNn15D4r4dLJ3YjLX/eYWt7Sfz44atrPlwINnvjOL5hccwEucy+/cQBt42sDjJWTquVDav3EHDx//Hlp1bWfbxHQR/P4L+D84h2wqi88DLcSybw+LcouYFKxexyq8rvdqelD00drNmbR5N27Y8UalaFj61GT0/k9zcXLJXjaO5bQdv3DSY13Ku5L3l29j8wzM0WjmGQQ/MJvMs/yqu1PJ2bml7mJ9mraAACGzVjiaH17Jmj3Hmk0RERCoAt9s6abPhE1AFK2066zbmE9t2EEGm55h/Dxo2C+Xg8qkcOlYJvwB/zzn+hHWcTIcmlUj+5TEWTv8XuwouoeXAUUTZLdzuYIKrtcQ/7ROWzRjNz7OnkRM2hFaXtcW55U2Wf/0oq7ceo2qnJ0iI8fX06UeVdpPo1L4ueesmsOSrR1i5dCYpmW7c7spUjm9NQMZ0Vswcwy/fv0OSuz0trp5Mg3CD1D3rMcJaEx5g8/QVQmhsDdwH1pFeWPp+TQuwfLFwlN6soj9ZDcPTDoPs1WOZ9+HNzP3wdtbuLMDt9iOkw2Q6ta1Jzq/PsnjGc2zNqE3CoEkkRDpKX8M86bUuTGb30q/J9u9IrbpRGKe8D39uE5GLg8bVCzuuaowUEREREZHyVGEqOud+MuOUfQ/cOeiM7f/z4OWMnjyf7ymq7ISiSs79+3N4/R89z34xM4vMLAiPasz1j97KS4Nf5rsx/6XTrJf5KvQufhpQh1/etMhMy8SkCuamz/l4XQKj1t7DpfE+EP8IDw98h9vnrqLgKn/ARnBULFWj/Kl6/W30f
eJbkuu0oUG1YIgZztUJ7/HDlhTcldPJJIa42LO9bXZCqtchvpo/VLuXtyatpfGw95kzoR839xzKAN87+Gp+NlcNDua3RUtxd32ey4JO6sJIITnVj5jYME7kU12seLwFEU+WzLAaOAvDubP4uQ0fvwACAooSx+71n/HB6gQeXPsPetbzAa5j4vM/892Q9/h+Qn9uCz/TLUQRF+3D0cx0jlhQKSKOGN9DHEwxIMHn7O+NiIiIFzNOmbrWF0dAIOaxNA7++jnpzW6nQfx0Vu52EtZsCLH53/LT1n1UbwehAVUwjcNYAZ1p2DyKlJ9GsmFrDgAZh6sSc/cdxMdN4sBuE9OycGVu4ODeLVisJy+kG3Ed9rB/3SKSDSDJIqrRi0TGxWDu24cV0JGE1rXJWT6MZSv3la7isXn6S19DUuIWLOBg4g7MW9+l4SXt2LpwKWnmw8TVqMK2jVngSCAyxiRj8Qacpe63qB+feuO5Zsz407w4qzENE8OwsLBw5+0lK/XgiVgCOpHQsibZy4exYnVRjMn79uMT8TGNL+nI798twuW5BpaFYZxU3ZS5lawCO3FhcZhGCt48WaWIlFZ6bNW4euKFOR/jqoiIiIiISPmqEInOsyU0z+b1h69g5IR50KEuyem57N6TwVv/7PPHJ1qHyc51EBIWRKW29zK6eTtefecLNsxcR6eH36epfwAbQ+Bw9mFMwEhJIsW5hidbhfL08T5MA5+eGRyxqpXu21aF0CqQeKQAi2BstiqEVrHhPObEFhFFBGkcSjOgSdkSfoH1GhJvfE1SqgmNe3DbkFCu+uQ7Dl3ZnB/mH6br490JOfUGMU0btlJVow5ajZ3Fx8OqnSgDPvwN93R744zXNpKTOORbi3rxJ2L1q1OPeONXklINOFOi00wnOdWgUvUogmwAdux2k1Nm8xMREalgTpm61haEwx9cuXm4c1ax+fdb6Na2FxsOpNKgZSwpS74ivdCfKCc4AoIxTBNbWH1C/QIJ7jeLof2Od2TH7mPiDgwsmqIQsDxTGVqY5OdmgCMUPx8TwwW40zlyBKr6B3n6TCDMkUrSviTcp8R4cn9A4TYO7Mukcc0Ego5OZ8/eh2hfvz2O9XMojGxBpGM723dnnnS/RVMsmvvfZf7PK0slIX1q3UvPy/BMsei5hmWdeAzYwxoSenKM5j5SktJpXjeBYBaQYZ5lGkfPfWAVHdOfHSIVR6mxRuMqcD7H1fPyFomIiIiIiPxpFSLR+f/hNk78c8/tKuNilGY+h/MDCA60gT2eG+6/ignX3cMbcSOYfXU0NpwEB/mQm5OLBfhUjSEqoCvPbZnHiLiT5pwtnH9S53ZPgvHEb3pttqJnPnW606POM3zyyY883W0gEaebvvYkrgP7SbZHExtlB/zpcM89NO3wLlO/7MaszCt4unfJqs3jIUQSHelkbXoeFv7FxwMialK3bs3iRKeVVZXA4wftvvjYnDhPLHWKT2x1ot0/k3jAhLpFZ7n27uaAPY746DMnao+t/5Bpv1bhipEdCACs3DQyCiNpG1XhZloWEREp5ZSp/GxB+PpBYcFR3O6jJK38mpy7bqBl92xqGD+wYGMaLlcETqcNP79ADLcFBlhWBrtm3c/GlJJfa5u48nJwu8M907daGG6raM3KQjeWzYFpeGKwuXAZgIWnz6L/4ZvGaaYbtFmn9AdW8Zftbnc2ezeuoO2VVxAX8APpNdoSmLaQ/VkGpbuyME2wnEmk7ttUqkrINzgXqOyZYhFMCyzPVJPFX8ifNkaLouKmotjcZtE1MIvuq+RffrbwRoQHGBxOS8JVol8R8X6lxi2Nq8D5HFf/1FsiIiIiIiJy3vztM0eGu0Si85Tp4s7AmUtuYSWCPFm+yj0f4LFrOjPk8VF0rARgIzC4Enk5uZiAb4sh3NhoNZMeeJX5vx8gNe0gu9b9xt4j5xisb2tGT7iFSjPuZOCot5i7ehuJidtZv2Qm7874jWMAFLJ14SxW7jhA0sZveeqpL3H3vpE+kUWx+tS5lYevSuE/o98md+Awrji1nBN869GyWQC/r/+97FMROWpSr3omi7+ex47kg2xfu42spjdye+utvDrmJRbuOEjShq95fNwXcNWdDIg6kV4tzE3n0KFk9m9fzdx3HmLAoMlkDpzMC1dFYgMKN61jc6XmtKr7t8/Li4hIBWd4KmuKN6sSDn9wO4/iNk3cad+yaVcYdVo0IXv15xxwmhjmEZyF4PAPxDRNXBk7yXaHEx5mIzttN5nF215yj7mLqxUtT+VO0XWKpnM1S1zbsiiu7nFl7CDLHU10jbji6p+S26n9xRFVLRJ32g5y3AZHt88k8Vhr6jdtQ/V6tcjZupgs4/T9FFdblnodiuKzTBPDOIbLdeJ+j7dxpW8h0xVNdI3YE/upTtX4KNyHtpDlPss1bHHU7XYdYQXL2LYlFfdp7vHPbCJycdC4emHHVY2RIiIiIiJSnpToNE/85NUs4z+2zLxcjhBYnOjEJ4G7PprLu7fUoahO0UZQ5UAKDx8uSj46mvPIjM+41Xc693RrRO3aTel2679ZWOrXwGVhI6rf6yya8wSt9r3LyL5tad6iA/3unMTcdQfJ8vyyNmflqwzrlEBCtwdZUvtxvpwyhJjjeUVbKH0euIM6Zi1uHdGNk5fnLBJElysvp+DH71ntPG2DU/m2YuSEe4icN5zW9ZrQ4+73WV/QkNGff8m9lWdyR4eGNOk1jk2XTOa71wYSaQNsAYSE+/Hbs52oXasezTpfx2NfptL+XwtZMW0oNX0AjrFi1o+4egyic+A5vlwiIiJexu02S29GJXz8wFVw1LMvk50/v8Hmte+wavVeXG4Tt7sQp9OFzS8Im2Hizl3I+rUHCO00gcsv60u1Gq2IqduXxs2aYjdM3IZnnUqz5HWK/h4yi5+bmBR9Ae52m7hzF7F+zUEiukyia8dexMW3IqZ+f+Ij/Tz92QlqcDPNW3QjtlY3Gg98gVYxB9m2cilH3SbuY6vZvC6JqEvH0yRmN7s27/bEXjoGw5MEOPV1KHp9DLeJ27WH1EOFhDS9jUb12hBTbwC1YyvjzvuZdav2Et55Ape16Ux0fGcaDXiBVlV3s3HpL0VxHL9GpdpUjW9NdK3O1G4zkm7DP6JznRy2fjORbdnu0tf260aXB37mut5tsZXleYlNRC4OGlcv4LiqMVJERERERMqZSuSACVN/Oaf29mr38GPOPWdp4aDji9vJKbHHt1pvxn/em/GntL2Ct5IySjRswVO/ZZe8GPf8mM2Jq/kQ0/k+pnS+jyknd1U4H/Dn0nELmXal/0kHXeRlH8NuJDH79a/IGfQU97ZwnCF+G2H972XoUzfwytdj6XhjG57bePjUVuG38N3hW4qfx/V/kYX9XyzdqPLlPPrpCh493WV8mvDw4nQePkMUAObBL3nlSz9umtGX0DJM1SsiIuLNSv4ACwB7AL4OcDmP4fYcMw58xeIDJRuZOJ0FEFoZH9PCsPLZP/ce5uQ/QLs2/6B392AoOETG5lfZuc7C8EyJX1Qp5JkS0XNZ07Q8MXimTSxuk0/SDyOZ
ffRB2rd9nL49AzCO7CFx7ioSD4GFhWGEUL3Lk7QJ9ceZsZpNX7zIysQjnjXhXKSt+oykjk8Qf2Aa29MMjJPnhrVZJaqdrFJrydms4zFZGEYOO+e9QOzg0bQd2h17wX72LdjE9n27SZ5/H7MLx3Jpl+cZEARHDy1lzbTJrN1fUHSLNhcFRw5jNbuD/sPvwDSOcCw7kdTED5kz/Sv2Zhw79U0xLSxsnsosy7MW31me/4n3XUQurFJjq8ZVz6HzNK5qjBQRERERkXJmAzru379/2fEd8fHxxQddrjJPXCoXg8L5jKw7lCNTDp6a6HT/xr869WDCtiCaDnyE16bcR/uQs2UOLVJn3sqlj1Ziyq9v0zesHLKMVibf39GOB83/sOLDQVRVotMruTw/F/9u7gIGX9m7nKMREbm4vTGuTXmHcO7s9elw/2ckbL+dafM2c8YaHd9WdBnzCiELruH73zL/ygjLzX3Pry3vEEQELxxb/ybjqsZIEREREREpK4fjROFeUlJS8eMaNWp0UkVnReJ3UnVoSb6tGL8q6zQVpWdiI/rqV5lbJ4ca5ZHkBLBF0H3c18yNbKEkp4iI/C1455plx9eSK7ke3HEBhMTVw88WSEynx2hc8Ckz1qdT1mXRRUTOB+8bWzWuioiIiIiIlJUSnXJmtjASWoaVawhBdVvSsFwjEBER+et45Zplds90jJ5150rdgb0mDQdP5bI4NzmJM5n7wTscdHrhPYqIV/O6sVXjqoiIiIiISJkp0SkiIiJykThljU5vYG5nyb9bsuS0x7awdPIlLP2rYxIRKcHrxlaNqyIiIiIiImWmRKeIiIjIReKRlzaWdwgiIhWOxlYREREREZGKy17eAYiIiIiIiIiIiIiIiIiInKuzVnRmZWX9VXGISAVVuUpI8WONKSIiIiIiIiIiIiIici6io6PPeEwVnSIiIiIiIiIiIiIiIiLidbRGp4iIiIiIiIiIiIiIXJR2/HRLeYcgf6BBr2nlHYL8jSnRKSIiIiIiIiIiIiIiFyXDMMs7BBG5iCnRKSIiIiIiIiIiIiIiFyW3YZV3CCJyEVOiU86ikKxMJ2ERlbGVYxRmbiaHAyII8yvHIERERERERERERETkL2eYqugUkTOrEInOT2Yupd+wwUQcST1ru5e+2v6n+h97XcM/dZ63O7LyBa6+N4N//vw6fUPKmOo0nDgtf/zP1yfLymLOP3rycvV3mP1EOwLPU7ciIiIiIiIiIiIicvFzu5Xo/GOBBMY0J7BwExlZR8o7GJG/lL28A/irhYY6GNYngWF9EhjQuTaXt4unbfM4GtePIDTUQWioo1Tb0zPY+VpvarQYxZysk8vmTQ59dTuNal7DB8kXYgC2yP39K54d3oe2CbWIj69D044DueuNlRw+nxX87q2899QXhN39EL2Ct/PGwAbEx8USGxdP7YTWXH7tKCZ8uYHskrd4bDb3NO7GhN9c53CdDbzQuRaD3kuiZFdWznRuqtWKfy4Ppu/DdxH036d4f7v7fN2diIiIiIiIiIiIiHgBw7BO2aj1BL3v/oImcYGnOW4nsMWb9BnxGjUDbac933u2ykR3n0Gfu3+m/8hF9L1rNj1ueIPWl11PZGX/E+1oSO1ez9GgehXMcohTpDz9rRKdY69rSFaWk+9XJgIQFuRPkL9vqWlZc3KKknShoQ4ys5xnqOY0SE1OxZ32HROnrMNZ8tCRZbwy6SeyjUzSM893otMi79d/c92gcfwSdBXPfbaA/y39gWnP3Ui7WuEEncf5ZY8uncq01N6MuqkOPpaT3JxCmj/yI5s2r2P53I94YlAYq/41iN6jvuGAcTw8Ny7X+R/UfOvfzL29DvLx1GUcO++9i4iIiIiIiIiIiMjFym2YJ23gUykCuz2aGpdeQ6B10vGAy2nYOgEfWwgOf+s053vTZscnIARbysf876vR/G/Ws2z8bQ2uqKG0HfI2zWpFYHjamoBlls/9ipSnCjF17XGZQdF/OH3tP4Y0YuJnm/l+ZSIDO9QlLMifo06T/GMn0pWhoQ7S0o/y2I1Nz9BLIWmpucR1aIv7s5eZeec0hsbaAZN9X7zMN4EdaB+xq1Si0zj0C6+On8AX/9tOum8NOt/6DJPGdiPa3Mx7d45h6tq9JGcX4BPWgCuGDiR622xmr9hJlj2Wtjc+zWvj+xDHdqaOf5tDPV9jwWtXEuVJbNas1YDWAO7fmTpiDO/9upvkHCeOiPp0vOZ+nnzkKupXOsr8+9txb/6zrJl6DSE2sA5/zfC2zxP94Uomdiq5AKaTVXPm4778ZToHA55CSt+gMMLDo7CHRxFbpzkdW4Yw8MoneH5eN94cEFaUMDb28vbAeN4GfBMe4ocFj9A4ezlvPvEsUxdsIcu3Opdc9QDPPTGExkFlfWer0HVgN5yPzuHXwq501VqdIiIiIiIiIiIiIn8Lp67RacMRFA75Bzgaeg0Na89k5c7DnmMOwlsMI7ogiXxHCH7+Jc63xxDbfhSNEloTEuyDM3UJ2395jcTUfLBVpXrXx2lUuy5BwSH4coT8A7PZmegiqnEPqkZVxce5n0PrXmLtmo0UFocSQdU299G0RUdCg3xw521j149j2JISSrXOj9Gobj2CK4fiY+VyJGUpu5a/TWKyk5heX9Ox6gwWfPqhZ6ZGOyHtP6Bnsw0s+eBl0otv2ZPALNhL+oH1RV/V71vGvk2zOdTvbdr2eID05HHsK7CwLB/CO3/B1Z0BCjkwty8rtheALZLotqNp2vxSQgMtCtJXkPi/19melIlqMaUiqHAVnZlB0WQGReMbHn3GNv+8sSkpKXnFlZ2hgUVT1ObkuAgNdZCckneWJCdgZpOZBTFXPMoD7dbx5vvriwa2o8t4693d9Bz7EJdF5JOR6Umeunfx7t138V/jOqYsXMXPbwwgd+ooxs/KxDIz2L52Dw0fXci6Db8y58nGbHjrLXa0eZavlqxk4ZTeHP7oUf6zpABjzwJ+3BZC76F9ipOcpeNKZ+vqROo+NJcVq1cyb8rNBP04hhsen89hK5AOvTvju3I+S/OKmjvXLGWNoxOXtzopc2jsZf2GPBJaNcX/LK91QNMbuaFVLot++PVEVatPTe6auYt9+/ax+6eHaGLbxfsjbuPdw3155YdVLPvqURqsHcfNj/1I9jmMopWatybh8HrW7zP+uLGIiIiIiIiIiIiIVAhut3XSZsMnoApW2nTWbcwntu0ggkzPMf8eNGwWysHlUzl0rBJ+Af6ec/wJ6ziZDk0qkfzLYyyc/i92FVxCy4GjiLJbuN3BBFdriX/aJyybMZqfZ08jJ2wIrS5ri3PLmyz/+lFWbz1G1U5PkBDj6+nTjyrtJtGpfV3y1k1gyVePsHLpTFIy3bjdlakc35qAjOmsmDmGX75/hyR3e1pcPZkG4Qape9ZjhLUmPMDm6SuE0NgauA+sI72w9P2aFmCe9DoUJrN76ddk+3ekVt0oDLeFiUH26rHM+/Bm5n54O2t3FuB2+xHSYTKd2tYk59dnWTzjObZm1CZh0CQSIh2neW3/3CZSnipMRefcT2acsm/YNZ3
#### 1.3.4.3. NVIDIA Nsight Compute and Systems

In preparation...
```
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd

# Johns Hopkins CSSE time series of cumulative US cases and deaths
cases = pd.read_csv('https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_confirmed_US.csv')
deaths = pd.read_csv('https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_deaths_US.csv')
print(cases.head())
print(deaths.head())

# Keep only California, index by county (Admin2), and transpose so that rows
# are dates and columns are counties; drop the metadata columns/rows.
cases_CA = cases[cases["Province_State"] == "California"]
cases_CA_indexed = cases_CA.set_index("Admin2")
cases_CA_T = cases_CA_indexed.T
cases_clean = cases_CA_T.drop(['UID','iso2','iso3','code3','FIPS','Province_State','Country_Region','Lat','Long_','Combined_Key'])
deaths_clean = deaths[deaths["Province_State"] == "California"].set_index("Admin2").T.drop(['UID','iso2','iso3','code3','FIPS','Province_State','Country_Region','Lat','Long_','Combined_Key']).drop("Population", axis=0)

counties = ['Alameda', 'San Francisco', 'San Mateo', 'Santa Clara']

plot = cases_clean[counties].plot()
plot.set_title("COVID-19 cases in Bay Area Counties")

plot = deaths_clean[counties].plot(figsize=(20,10))
plot.set_title("COVID-19 deaths in Bay Area Counties")

# Daily increases, smoothed with a 7-day rolling average
cases_diff = cases_clean.diff().rolling(window=7).mean()
deaths_diff = deaths_clean.diff().rolling(window=7).mean()

# County population table, used to normalize to per-million figures
pop = pd.read_csv('https://gist.githubusercontent.com/NillsF/7923a8c7f27ca98ec75b7e1529f259bb/raw/3bedefbe2e242addba3fb47cbcd239fbed16cd54/california.csv')
pop["CTYNAME"] = pop["CTYNAME"].str.replace(" County", "")
pop2 = pop.drop('GrowthRate', axis=1).set_index('CTYNAME')

# Cases and deaths per million inhabitants
cases_pm = cases_clean.copy()
for c in pop2.index.tolist():
    cases_pm[c] = cases_pm[c] / pop2.loc[c, :]['Pop']
cases_pm = cases_pm * 1000000

deaths_pm = deaths_clean.copy()
for c in pop2.index.tolist():
    deaths_pm[c] = deaths_pm[c] / pop2.loc[c, :]['Pop']
deaths_pm = deaths_pm * 1000000

cases_pm_diff = cases_pm.diff().rolling(window=7).mean()
deaths_pm_diff = deaths_pm.diff().rolling(window=7).mean()

plot = cases_diff[counties].plot(figsize=(20,10))
plot.set_title("7 day moving avg of new COVID-19 cases")

plot = cases_pm_diff[counties].plot(figsize=(20,10))
plot.set_title("7 day moving avg of new COVID-19 cases per million inhabitants")

plot = deaths_pm_diff[counties].plot(figsize=(20,10))
plot.set_title("7 day moving avg of daily COVID-19 deaths per million inhabitants")

plot = cases_pm.sort_values(axis=1, by='7/20/20', ascending=False).iloc[:, :10].plot(figsize=(20,10))
plot.set_title("Top 10 counties by COVID-19 cases per million inhabitants")

plot = deaths_pm.sort_values(axis=1, by='7/20/20', ascending=False).iloc[:, :10].plot(figsize=(20,10))
plot.set_title("Top 10 counties by COVID-19 deaths per million inhabitants")

plot = cases_pm_diff.sort_values(axis=1, by='7/20/20', ascending=False).iloc[:, :10].plot(figsize=(20,10))
plot.set_title("Top 10 counties by 7 day rolling avg COVID-19 case increases per million inhabitants")

plot = deaths_pm_diff.sort_values(axis=1, by='7/20/20', ascending=False).iloc[:, :10].plot(figsize=(20,10))
plot.set_title("Top 10 counties by 7 day rolling avg COVID-19 daily deaths per million inhabitants")
```
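The two `for` loops above normalize one county column at a time. For reference, pandas can do the same per-million scaling in a single vectorized step by aligning the population `Series` index against the frame's columns. This is a minimal sketch, assuming the `cases_clean`, `deaths_clean`, and `pop2` frames built above; the `_vec` names are introduced here for illustration, and any column without a matching county in `pop2` comes out as `NaN` instead of being left untouched:

```
# Vectorized equivalent of the per-county loops: divide every county column
# by its population (Series index aligned against the DataFrame columns).
cases_pm_vec = cases_clean.astype(float).div(pop2['Pop'], axis='columns') * 1_000_000
deaths_pm_vec = deaths_clean.astype(float).div(pop2['Pop'], axis='columns') * 1_000_000

# Smoothed daily increases per million, as before.
cases_pm_diff_vec = cases_pm_vec.diff().rolling(window=7).mean()
deaths_pm_diff_vec = deaths_pm_vec.diff().rolling(window=7).mean()
```

The `NaN` columns produced for non-county entries (anything missing from `pop2`) are usually easier to spot and filter than the un-normalized raw counts the loop version would silently leave behind.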
# **CS224W - Colab 2** In Colab 2, we will work to construct our own graph neural network using PyTorch Geometric (PyG) and then apply that model on two Open Graph Benchmark (OGB) datasets. These two datasets will be used to benchmark your model's performance on two different graph-based tasks: 1) node property prediction, predicting properties of single nodes and 2) graph property prediction, predicting properties of entire graphs or subgraphs. First, we will learn how PyTorch Geometric stores graphs as PyTorch tensors. Then, we will load and inspect one of the Open Graph Benchmark (OGB) datasets by using the `ogb` package. OGB is a collection of realistic, large-scale, and diverse benchmark datasets for machine learning on graphs. The `ogb` package not only provides data loaders for each dataset but also model evaluators. Lastly, we will build our own graph neural network using PyTorch Geometric. We will then train and evaluate our model on the OGB node property prediction and graph property prediction tasks. **Note**: Make sure to **sequentially run all the cells in each section**, so that the intermediate variables / packages will carry over to the next cell We recommend you save a copy of this colab in your drive so you don't lose progress! Have fun and good luck on Colab 2 :) # Device You might need to use a GPU for this Colab to run quickly. Please click `Runtime` and then `Change runtime type`. Then set the `hardware accelerator` to **GPU**. # Setup As discussed in Colab 0, the installation of PyG on Colab can be a little bit tricky. First let us check which version of PyTorch you are running ``` import torch import os print("PyTorch has version {}".format(torch.__version__)) ``` Download the necessary packages for PyG. Make sure that your version of torch matches the output from the cell above. In case of any issues, more information can be found on the [PyG's installation page](https://pytorch-geometric.readthedocs.io/en/latest/notes/installation.html). ``` # Install torch geometric if 'IS_GRADESCOPE_ENV' not in os.environ: !pip install torch-scatter -f https://pytorch-geometric.com/whl/torch-1.9.0+cu111.html !pip install torch-sparse -f https://pytorch-geometric.com/whl/torch-1.9.0+cu111.html !pip install torch-geometric !pip install ogb ``` # 1) PyTorch Geometric (Datasets and Data) PyTorch Geometric has two classes for storing and/or transforming graphs into tensor format. One is `torch_geometric.datasets`, which contains a variety of common graph datasets. Another is `torch_geometric.data`, which provides the data handling of graphs in PyTorch tensors. In this section, we will learn how to use `torch_geometric.datasets` and `torch_geometric.data` together. ## PyG Datasets The `torch_geometric.datasets` class has many common graph datasets. Here we will explore its usage through one example dataset. ``` from torch_geometric.datasets import TUDataset if 'IS_GRADESCOPE_ENV' not in os.environ: root = './enzymes' name = 'ENZYMES' # The ENZYMES dataset pyg_dataset= TUDataset(root, name) # You will find that there are 600 graphs in this dataset print(pyg_dataset) ``` ## Question 1: What is the number of classes and number of features in the ENZYMES dataset? (5 points) ``` def get_num_classes(pyg_dataset): # TODO: Implement a function that takes a PyG dataset object # and returns the number of classes for that dataset. num_classes = 0 ############# Your code here ############ ## (~1 line of code) ## Note ## 1. Colab autocomplete functionality might be useful. 
num_classes = pyg_dataset.num_classes ######################################### return num_classes def get_num_features(pyg_dataset): # TODO: Implement a function that takes a PyG dataset object # and returns the number of features for that dataset. num_features = 0 ############# Your code here ############ ## (~1 line of code) ## Note ## 1. Colab autocomplete functionality might be useful. num_features = pyg_dataset.num_features ######################################### return num_features if 'IS_GRADESCOPE_ENV' not in os.environ: num_classes = get_num_classes(pyg_dataset) num_features = get_num_features(pyg_dataset) print("{} dataset has {} classes".format(name, num_classes)) print("{} dataset has {} features".format(name, num_features)) ``` ## PyG Data Each PyG dataset stores a list of `torch_geometric.data.Data` objects, where each `torch_geometric.data.Data` object represents a graph. We can easily get the `Data` object by indexing into the dataset. For more information such as what is stored in the `Data` object, please refer to the [documentation](https://pytorch-geometric.readthedocs.io/en/latest/modules/data.html#torch_geometric.data.Data). ## Question 2: What is the label of the graph with index 100 in the ENZYMES dataset? (5 points) ``` def get_graph_class(pyg_dataset, idx): # TODO: Implement a function that takes a PyG dataset object, # an index of a graph within the dataset, and returns the class/label # of the graph (as an integer). label = -1 ############# Your code here ############ ## (~1 line of code) graph = pyg_dataset[idx] label = int(graph.y) ######################################### return label # Here pyg_dataset is a dataset for graph classification if 'IS_GRADESCOPE_ENV' not in os.environ: graph_0 = pyg_dataset[0] print(graph_0) idx = 100 label = get_graph_class(pyg_dataset, idx) print('Graph with index {} has label {}'.format(idx, label)) ``` ## Question 3: How many edges does the graph with index 200 have? (5 points) ``` def get_graph_num_edges(pyg_dataset, idx): # TODO: Implement a function that takes a PyG dataset object, # the index of a graph in the dataset, and returns the number of # edges in the graph (as an integer). You should not count an edge # twice if the graph is undirected. For example, in an undirected # graph G, if two nodes v and u are connected by an edge, this edge # should only be counted once. num_edges = 0 ############# Your code here ############ ## Note: ## 1. You can't return the data.num_edges directly - WHY NOT? ## 2. We assume the graph is undirected ## 3. Look at the PyG dataset built in functions ## (~4 lines of code) num_edges = pyg_dataset[idx].num_edges/2 num_edges_2 = pyg_dataset[idx]['edge_index'].size()[1]/2 assert num_edges == num_edges_2 ######################################### return num_edges if 'IS_GRADESCOPE_ENV' not in os.environ: idx = 200 num_edges = get_graph_num_edges(pyg_dataset, idx) print('Graph with index {} has {} edges'.format(idx, num_edges)) ``` # 2) Open Graph Benchmark (OGB) The Open Graph Benchmark (OGB) is a collection of realistic, large-scale, and diverse benchmark datasets for machine learning on graphs. Its datasets are automatically downloaded, processed, and split using the OGB Data Loader. The model performance can then be evaluated by using the OGB Evaluator in a unified manner. ## Dataset and Data OGB also supports PyG dataset and data classes. Here we take a look on the `ogbn-arxiv` dataset. 
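Before loading it, here is a minimal, hand-built illustration of what a `Data` object holds (the three-node graph below is purely hypothetical; the real `ogbn-arxiv` graph is loaded in the next cell). Note that an undirected edge is stored twice in `edge_index`, once per direction, which is why Question 3 halves `num_edges`.

```
import torch
from torch_geometric.data import Data

# Hypothetical 3-node undirected graph with edges 0-1 and 1-2,
# each stored in both directions, as PyG expects
edge_index = torch.tensor([[0, 1, 1, 2],
                           [1, 0, 2, 1]], dtype=torch.long)
x = torch.randn(3, 4)               # 3 nodes, 4 features each
y = torch.tensor([0, 1, 0])         # one label per node

toy = Data(x=x, edge_index=edge_index, y=y)
print(toy)                          # Data(x=[3, 4], edge_index=[2, 4], y=[3])
print(toy.num_nodes, toy.num_edges) # 3 4 -> 4 directed entries = 2 undirected edges
```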
``` import torch_geometric.transforms as T from ogb.nodeproppred import PygNodePropPredDataset if 'IS_GRADESCOPE_ENV' not in os.environ: dataset_name = 'ogbn-arxiv' # Load the dataset and transform it to sparse tensor dataset = PygNodePropPredDataset(name=dataset_name, transform=T.ToSparseTensor()) print('The {} dataset has {} graph'.format(dataset_name, len(dataset))) # Extract the graph data = dataset[0] print(data) ``` ## Question 4: How many features are in the ogbn-arxiv graph? (5 points) ``` def graph_num_features(data): # TODO: Implement a function that takes a PyG data object, # and returns the number of features in the graph (as an integer). num_features = 0 ############# Your code here ############ ## (~1 line of code) num_features = data.num_features ######################################### return num_features if 'IS_GRADESCOPE_ENV' not in os.environ: num_features = graph_num_features(data) print('The graph has {} features'.format(num_features)) ``` # 3) GNN: Node Property Prediction In this section we will build our first graph neural network using PyTorch Geometric. Then we will apply it to the task of node property prediction (node classification). Specifically, we will use GCN as the foundation for your graph neural network ([Kipf et al. (2017)](https://arxiv.org/pdf/1609.02907.pdf)). To do so, we will work with PyG's built-in `GCNConv` layer. ## Setup ``` import torch import pandas as pd import torch.nn.functional as F print(torch.__version__) # The PyG built-in GCNConv from torch_geometric.nn import GCNConv import torch_geometric.transforms as T from ogb.nodeproppred import PygNodePropPredDataset, Evaluator ``` ## Load and Preprocess the Dataset ``` if 'IS_GRADESCOPE_ENV' not in os.environ: dataset_name = 'ogbn-arxiv' dataset = PygNodePropPredDataset(name=dataset_name, transform=T.ToSparseTensor()) data = dataset[0] # Make the adjacency matrix to symmetric data.adj_t = data.adj_t.to_symmetric() device = 'cpu' # If you use GPU, the device should be cuda print('Device: {}'.format(device)) data = data.to(device) split_idx = dataset.get_idx_split() train_idx = split_idx['train'].to(device) print(graph_num_features(data)) ``` ## GCN Model Now we will implement our GCN model! Please follow the figure below to implement the `forward` function. ![test](https://drive.google.com/uc?id=128AuYAXNXGg7PIhJJ7e420DoPWKb-RtL) ``` class GCN(torch.nn.Module): def __init__(self, input_dim, hidden_dim, output_dim, num_layers, dropout, return_embeds=False): # TODO: Implement a function that initializes self.convs, # self.bns, and self.softmax. super(GCN, self).__init__() # A list of GCNConv layers self.convs = None # A list of 1D batch normalization layers self.bns = None # The log softmax layer self.softmax = None ############# Your code here ############ ## Note: ## 1. You should use torch.nn.ModuleList for self.convs and self.bns ## 2. self.convs has num_layers GCNConv layers ## 3. self.bns has num_layers - 1 BatchNorm1d layers ## 4. You should use torch.nn.LogSoftmax for self.softmax ## 5. The parameters you can set for GCNConv include 'in_channels' and ## 'out_channels'. For more information please refer to the documentation: ## https://pytorch-geometric.readthedocs.io/en/latest/modules/nn.html#torch_geometric.nn.conv.GCNConv ## 6. 
The only parameter you need to set for BatchNorm1d is 'num_features' ## For more information please refer to the documentation: ## https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm1d.html ## (~10 lines of code) self.num_layers = num_layers self.convs = torch.nn.ModuleList() # convolutional layers self.bns = torch.nn.ModuleList() # batch normalization layers in_size = [input_dim] + (num_layers - 1)*[hidden_dim] # [input_dim, hidden_dim, ... , hidden_dim, output_dim] out_size = (num_layers - 1)*[hidden_dim] + [output_dim] # [hidden_dim, ... , hidden_dim, output_dim] for i in range(num_layers): self.convs.append(GCNConv(in_channels = in_size[i], out_channels = out_size[i])) if i < num_layers - 1: self.bns.append(torch.nn.BatchNorm1d(out_size[i])) self.softmax = torch.nn.LogSoftmax(1) # along feature vectors ######################################### # Probability of an element getting zeroed self.dropout = dropout # Skip classification layer and return node embeddings self.return_embeds = return_embeds def reset_parameters(self): for conv in self.convs: conv.reset_parameters() for bn in self.bns: bn.reset_parameters() def forward(self, x, adj_t): # TODO: Implement a function that takes the feature tensor x and # edge_index tensor adj_t and returns the output tensor as # shown in the figure. out = None ############# Your code here ############ ## Note: ## 1. Construct the network as shown in the figure ## 2. torch.nn.functional.relu and torch.nn.functional.dropout are useful ## For more information please refer to the documentation: ## https://pytorch.org/docs/stable/nn.functional.html ## 3. Don't forget to set F.dropout training to self.training ## 4. If return_embeds is True, then skip the last softmax layer ## (~7 lines of code) for i in range(self.num_layers): x = self.convs[i](x, adj_t) # GCN convolutional if i < self.num_layers - 1: x = self.bns[i](x) x = F.relu(x) x = F.dropout(x, p = self.dropout, training = self.training) # last layer if self.return_embeds: out = x if not self.return_embeds: out = self.softmax(x) ######################################### return out def train(model, data, train_idx, optimizer, loss_fn): # TODO: Implement a function that trains the model by # using the given optimizer and loss_fn. model.train() loss = 0 ############# Your code here ############ ## Note: ## 1. Zero grad the optimizer ## 2. Feed the data into the model ## 3. Slice the model output and label by train_idx ## 4. Feed the sliced output and label to loss_fn ## (~4 lines of code) optimizer.zero_grad() output = model(data['x'], data['adj_t'])[train_idx] label = (data.y.to(device)[train_idx]).flatten() loss = loss_fn(output,label) ######################################### loss.backward() optimizer.step() return loss.item() # Test function here @torch.no_grad() def test(model, data, split_idx, evaluator, save_model_results=False): # TODO: Implement a function that tests the model by # using the given split_idx and evaluator. model.eval() # The output of model on all data out = None ############# Your code here ############ ## (~1 line of code) ## Note: ## 1. 
No index slicing here out = model(data['x'], data['adj_t']) ######################################### y_pred = out.argmax(dim=-1, keepdim=True) train_acc = evaluator.eval({ 'y_true': data.y[split_idx['train']], 'y_pred': y_pred[split_idx['train']], })['acc'] valid_acc = evaluator.eval({ 'y_true': data.y[split_idx['valid']], 'y_pred': y_pred[split_idx['valid']], })['acc'] test_acc = evaluator.eval({ 'y_true': data.y[split_idx['test']], 'y_pred': y_pred[split_idx['test']], })['acc'] if save_model_results: print ("Saving Model Predictions") data = {} data['y_pred'] = y_pred.view(-1).cpu().detach().numpy() df = pd.DataFrame(data=data) # Save locally as csv df.to_csv('ogbn-arxiv_node.csv', sep=',', index=False) return train_acc, valid_acc, test_acc # Please do not change the args if 'IS_GRADESCOPE_ENV' not in os.environ: args = { 'device': device, 'num_layers': 3, 'hidden_dim': 256, 'dropout': 0.5, 'lr': 0.01, 'epochs': 100, } args if 'IS_GRADESCOPE_ENV' not in os.environ: model = GCN(data.num_features, args['hidden_dim'], dataset.num_classes, args['num_layers'], args['dropout']).to(device) evaluator = Evaluator(name='ogbn-arxiv') import copy if 'IS_GRADESCOPE_ENV' not in os.environ: # reset the parameters to initial random value model.reset_parameters() optimizer = torch.optim.Adam(model.parameters(), lr=args['lr']) loss_fn = F.nll_loss best_model = None best_valid_acc = 0 for epoch in range(1, 1 + args["epochs"]): loss = train(model, data, train_idx, optimizer, loss_fn) result = test(model, data, split_idx, evaluator) train_acc, valid_acc, test_acc = result if valid_acc > best_valid_acc: best_valid_acc = valid_acc best_model = copy.deepcopy(model) print(f'Epoch: {epoch:02d}, ' f'Loss: {loss:.4f}, ' f'Train: {100 * train_acc:.2f}%, ' f'Valid: {100 * valid_acc:.2f}% ' f'Test: {100 * test_acc:.2f}%') ``` ## Question 5: What are your `best_model` validation and test accuracies?(20 points) Run the cell below to see the results of your best of model and save your model's predictions to a file named *ogbn-arxiv_node.csv*. You can view this file by clicking on the *Folder* icon on the left side pannel. As in Colab 1, when you sumbit your assignment, you will have to download this file and attatch it to your submission. ``` if 'IS_GRADESCOPE_ENV' not in os.environ: best_result = test(best_model, data, split_idx, evaluator, save_model_results=True) train_acc, valid_acc, test_acc = best_result print(f'Best model: ' f'Train: {100 * train_acc:.2f}%, ' f'Valid: {100 * valid_acc:.2f}% ' f'Test: {100 * test_acc:.2f}%') ``` # 4) GNN: Graph Property Prediction In this section we will create a graph neural network for graph property prediction (graph classification). 
## Load and preprocess the dataset ``` from ogb.graphproppred import PygGraphPropPredDataset, Evaluator from torch_geometric.data import DataLoader from tqdm.notebook import tqdm if 'IS_GRADESCOPE_ENV' not in os.environ: # Load the dataset dataset = PygGraphPropPredDataset(name='ogbg-molhiv') device = 'cuda' if torch.cuda.is_available() else 'cpu' print('Device: {}'.format(device)) split_idx = dataset.get_idx_split() # Check task type print('Task type: {}'.format(dataset.task_type)) # Load the dataset splits into corresponding dataloaders # We will train the graph classification task on a batch of 32 graphs # Shuffle the order of graphs for training set if 'IS_GRADESCOPE_ENV' not in os.environ: train_loader = DataLoader(dataset[split_idx["train"]], batch_size=32, shuffle=True, num_workers=0) valid_loader = DataLoader(dataset[split_idx["valid"]], batch_size=32, shuffle=False, num_workers=0) test_loader = DataLoader(dataset[split_idx["test"]], batch_size=32, shuffle=False, num_workers=0) if 'IS_GRADESCOPE_ENV' not in os.environ: # Please do not change the args args = { 'device': device, 'num_layers': 5, 'hidden_dim': 256, 'dropout': 0.5, 'lr': 0.001, 'epochs': 30, } args ``` ## Graph Prediction Model ### Graph Mini-Batching Before diving into the actual model, we introduce the concept of mini-batching with graphs. In order to parallelize the processing of a mini-batch of graphs, PyG combines the graphs into a single disconnected graph data object (*torch_geometric.data.Batch*). *torch_geometric.data.Batch* inherits from *torch_geometric.data.Data* (introduced earlier) and contains an additional attribute called `batch`. The `batch` attribute is a vector mapping each node to the index of its corresponding graph within the mini-batch: batch = [0, ..., 0, 1, ..., n - 2, n - 1, ..., n - 1] This attribute is crucial for keeping track of which graph each node belongs to and can be used, e.g., to average the node embeddings for each graph individually to compute graph-level embeddings (a short standalone sketch of the pooling operators appears at the end of this notebook). ### Implementation Now, we have all of the tools to implement a GCN Graph Prediction model! We will reuse the existing GCN model to generate `node_embeddings` and then use `Global Pooling` over the nodes to create graph-level embeddings that can be used to predict properties for each graph. Remember that the `batch` attribute will be essential for performing Global Pooling over our mini-batch of graphs. ``` from ogb.graphproppred.mol_encoder import AtomEncoder from torch_geometric.nn import global_add_pool, global_mean_pool ### GCN to predict graph property class GCN_Graph(torch.nn.Module): def __init__(self, hidden_dim, output_dim, num_layers, dropout): super(GCN_Graph, self).__init__() # Load encoders for Atoms in molecule graphs self.node_encoder = AtomEncoder(hidden_dim) # Node embedding model # Note that the input_dim and output_dim are set to hidden_dim self.gnn_node = GCN(hidden_dim, hidden_dim, hidden_dim, num_layers, dropout, return_embeds=True) self.pool = None ############# Your code here ############ ## Note: ## 1.
Initialize self.pool as a global mean pooling layer ## For more information please refer to the documentation: ## https://pytorch-geometric.readthedocs.io/en/latest/modules/nn.html#global-pooling-layers ######################################### # Output layer self.linear = torch.nn.Linear(hidden_dim, output_dim) def reset_parameters(self): self.gnn_node.reset_parameters() self.linear.reset_parameters() def forward(self, batched_data): # TODO: Implement a function that takes as input a # mini-batch of graphs (torch_geometric.data.Batch) and # returns the predicted graph property for each graph. # # NOTE: Since we are predicting graph level properties, # your output will be a tensor with dimension equaling # the number of graphs in the mini-batch # Extract important attributes of our mini-batch x, edge_index, batch = batched_data.x, batched_data.edge_index, batched_data.batch embed = self.node_encoder(x) out = None ############# Your code here ############ ## Note: ## 1. Construct node embeddings using existing GCN model ## 2. Use the global pooling layer to aggregate features for each individual graph ## For more information please refer to the documentation: ## https://pytorch-geometric.readthedocs.io/en/latest/modules/nn.html#global-pooling-layers ## 3. Use a linear layer to predict each graph's property ## (~3 lines of code) ######################################### return out def train(model, device, data_loader, optimizer, loss_fn): # TODO: Implement a function that trains your model by # using the given optimizer and loss_fn. model.train() loss = 0 for step, batch in enumerate(tqdm(data_loader, desc="Iteration")): batch = batch.to(device) if batch.x.shape[0] == 1 or batch.batch[-1] == 0: pass else: ## ignore nan targets (unlabeled) when computing training loss. is_labeled = batch.y == batch.y ############# Your code here ############ ## Note: ## 1. Zero grad the optimizer ## 2. Feed the data into the model ## 3. Use `is_labeled` mask to filter output and labels ## 4. You may need to change the type of label to torch.float32 ## 5. 
Feed the output and label to the loss_fn ## (~3 lines of code) ######################################### loss.backward() optimizer.step() return loss.item() # The evaluation function def eval(model, device, loader, evaluator, save_model_results=False, save_file=None): model.eval() y_true = [] y_pred = [] for step, batch in enumerate(tqdm(loader, desc="Iteration")): batch = batch.to(device) if batch.x.shape[0] == 1: pass else: with torch.no_grad(): pred = model(batch) y_true.append(batch.y.view(pred.shape).detach().cpu()) y_pred.append(pred.detach().cpu()) y_true = torch.cat(y_true, dim = 0).numpy() y_pred = torch.cat(y_pred, dim = 0).numpy() input_dict = {"y_true": y_true, "y_pred": y_pred} if save_model_results: print ("Saving Model Predictions") # Create a pandas dataframe with a two columns # y_pred | y_true data = {} data['y_pred'] = y_pred.reshape(-1) data['y_true'] = y_true.reshape(-1) df = pd.DataFrame(data=data) # Save to csv df.to_csv('ogbg-molhiv_graph_' + save_file + '.csv', sep=',', index=False) return evaluator.eval(input_dict) if 'IS_GRADESCOPE_ENV' not in os.environ: model = GCN_Graph(args['hidden_dim'], dataset.num_tasks, args['num_layers'], args['dropout']).to(device) evaluator = Evaluator(name='ogbg-molhiv') import copy if 'IS_GRADESCOPE_ENV' not in os.environ: model.reset_parameters() optimizer = torch.optim.Adam(model.parameters(), lr=args['lr']) loss_fn = torch.nn.BCEWithLogitsLoss() best_model = None best_valid_acc = 0 for epoch in range(1, 1 + args["epochs"]): print('Training...') loss = train(model, device, train_loader, optimizer, loss_fn) print('Evaluating...') train_result = eval(model, device, train_loader, evaluator) val_result = eval(model, device, valid_loader, evaluator) test_result = eval(model, device, test_loader, evaluator) train_acc, valid_acc, test_acc = train_result[dataset.eval_metric], val_result[dataset.eval_metric], test_result[dataset.eval_metric] if valid_acc > best_valid_acc: best_valid_acc = valid_acc best_model = copy.deepcopy(model) print(f'Epoch: {epoch:02d}, ' f'Loss: {loss:.4f}, ' f'Train: {100 * train_acc:.2f}%, ' f'Valid: {100 * valid_acc:.2f}% ' f'Test: {100 * test_acc:.2f}%') ``` ## Question 6: What are your `best_model` validation and test ROC-AUC scores? (20 points) Run the cell below to see the results of your best of model and save your model's predictions over the validation and test datasets. The resulting files are named *ogbn-arxiv_graph_valid.csv* and *ogbn-arxiv_graph_test.csv*. Again, you can view these files by clicking on the *Folder* icon on the left side pannel. As in Colab 1, when you sumbit your assignment, you will have to download these files and attatch them to your submission. ``` if 'IS_GRADESCOPE_ENV' not in os.environ: train_acc = eval(best_model, device, train_loader, evaluator)[dataset.eval_metric] valid_acc = eval(best_model, device, valid_loader, evaluator, save_model_results=True, save_file="valid")[dataset.eval_metric] test_acc = eval(best_model, device, test_loader, evaluator, save_model_results=True, save_file="test")[dataset.eval_metric] print(f'Best model: ' f'Train: {100 * train_acc:.2f}%, ' f'Valid: {100 * valid_acc:.2f}% ' f'Test: {100 * test_acc:.2f}%') ``` ## Question 7 (Optional): Experiment with the two other global pooling layers in Pytorch Geometric. # Submission You will need to submit four files on Gradescope to complete this notebook. 1. Your completed *CS224W_Colab2.ipynb*. From the "File" menu select "Download .ipynb" to save a local copy of your completed Colab. 
**PLEASE DO NOT CHANGE THE NAME!** The autograder depends on the .ipynb file being called "CS224W_Colab2.ipynb". 2. *ogbn-arxiv_node.csv* 3. *ogbg-molhiv_graph_valid.csv* 4. *ogbg-molhiv_graph_test.csv* Download the csv files by selecting the *Folder* icon on the left panel. To submit your work, zip the files downloaded in steps 1-4 above and submit to gradescope. **NOTE:** DO NOT rename any of the downloaded files.
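As a starting point for Question 7 above: all three global pooling layers share the same call signature, so they can be swapped into `GCN_Graph` directly. A minimal standalone sketch with toy tensors (the shapes here are illustrative, not tied to ogbg-molhiv):

```
import torch
from torch_geometric.nn import global_mean_pool, global_add_pool, global_max_pool

# Toy mini-batch: 5 nodes with 3-dimensional embeddings belonging to 2 graphs
x = torch.randn(5, 3)
batch = torch.tensor([0, 0, 0, 1, 1])    # node -> graph index, as in Batch.batch

print(global_mean_pool(x, batch).shape)  # torch.Size([2, 3])
print(global_add_pool(x, batch).shape)   # torch.Size([2, 3])
print(global_max_pool(x, batch).shape)   # torch.Size([2, 3])
```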
``` import matplotlib import matplotlib.pyplot as plt import numpy as np import os import pandas as pd from sklearn.metrics import confusion_matrix,balanced_accuracy_score,roc_auc_score,roc_curve,recall_score,precision_score from p4tools import io ResultsPath = '../../Data/SummaryResults/' FiguresPath = '../../Data/Figures/' if not os.path.isdir(FiguresPath): os.mkdir(FiguresPath) NumRepeats=3 ResultsList=[] for Rep in range(NumRepeats): ResultsList.append(pd.read_csv(ResultsPath+'TileClassifier_LORO_final_repeat'+str(Rep)+'.csv')) Y_true=ResultsList[-1]['GroundTruth'].astype('uint8') Y_pred=ResultsList[0]['ClassifierConf'].values for Rep in range(1,NumRepeats): Y_pred=Y_pred+ResultsList[Rep]['ClassifierConf'].values Y_pred=Y_pred/NumRepeats Results_df = ResultsList[-1] Results_df['ClassifierConf']=Y_pred Recall95PC_Threshold=0.5 AUC = roc_auc_score(Y_true, Y_pred) conf_matrix = confusion_matrix(Y_true,Y_pred>Recall95PC_Threshold,labels=[0,1]) Sensitivity = conf_matrix[1,1]/(conf_matrix[1,0]+conf_matrix[1,1]) Specificity = conf_matrix[0,0]/(conf_matrix[0,0]+conf_matrix[0,1]) Precision = conf_matrix[1,1]/(conf_matrix[1,1]+conf_matrix[0,1]) Balanced_accuracy = balanced_accuracy_score(Y_true, Y_pred>Recall95PC_Threshold) print('Number of tiles classified in Leave-One-Region-Out Cross-Validation= ',Results_df.shape[0]) print('') print('Confusion matrix = ') print(conf_matrix) print('') print('sensitivity=',round(100*Sensitivity,2),'%') print('Specificity=',round(100*Specificity,2),'%') print('Precision=',round(100*Precision,2),'%') print('AUC=',round(AUC,3)) print('Balanced Accuracy =',round(100*Balanced_accuracy)) Recall95PC_Threshold=0.24 AUC = roc_auc_score(Y_true, Y_pred) conf_matrix = confusion_matrix(Y_true,Y_pred>Recall95PC_Threshold,labels=[0,1]) Sensitivity = conf_matrix[1,1]/(conf_matrix[1,0]+conf_matrix[1,1]) Specificity = conf_matrix[0,0]/(conf_matrix[0,0]+conf_matrix[0,1]) Precision = conf_matrix[1,1]/(conf_matrix[1,1]+conf_matrix[0,1]) Balanced_accuracy = balanced_accuracy_score(Y_true, Y_pred>Recall95PC_Threshold) print('Number of tiles classified in Leave-One-Region-Out Cross-Validation= ',Results_df.shape[0]) print('') print('Confusion matrix = ') print(conf_matrix) print('') print('sensitivity=',round(100*Sensitivity,2),'%') print('Specificity=',round(100*Specificity,2),'%') print('Precision=',round(100*Precision,2),'%') print('AUC=',round(AUC,3)) print('Balanced Accuracy =',round(100*Balanced_accuracy)) fig = plt.figure(figsize=(10,10)) fpr, tpr, thresholds = roc_curve(Y_true, Y_pred) plt.plot(1-fpr,tpr,linewidth=3) plt.xlabel('Specificity',fontsize=20) plt.ylabel('Recall',fontsize=20) plt.plot(Specificity,Sensitivity,'og',linewidth=30,markersize=20) plt.text(0.5, 0.5, 'AUC='+str(round(AUC,2)), fontsize=20) plt.text(0.2, 0.9, '95% recall point at', fontsize=20,color='green') plt.text(0.2, 0.85, '54% specificity', fontsize=20,color='green') matplotlib.rc('xtick', labelsize=20) matplotlib.rc('ytick', labelsize=20) fig.tight_layout() plt.savefig(FiguresPath+'Figure14.pdf') plt.show() #regions region_names_df = io.get_region_names() region_names_df = region_names_df.set_index('obsid') region_names_df.at['ESP_012620_0975','roi_name'] = 'Buffalo' region_names_df.at['ESP_012277_0975','roi_name'] = 'Buffalo' region_names_df.at['ESP_012348_0975','roi_name'] = 'Taichung' #other meta data ImageResults_df = io.get_meta_data() ImageResults_df = ImageResults_df.set_index('OBSERVATION_ID') ImageResults_df = pd.concat([ImageResults_df, region_names_df], axis=1, sort=False) 
ImageResults_df=ImageResults_df.dropna() UniqueP4Regions = ImageResults_df['roi_name'].unique() print("Number of P4 regions = ",len(UniqueP4Regions)) BAs=[] for ToLeaveOut in UniqueP4Regions: This_df = Results_df[Results_df['Region']==ToLeaveOut] y_true = This_df['GroundTruth'].values y_pred = This_df['ClassifierConf'].values Balanced_accuracy_cl = balanced_accuracy_score(y_true, y_pred>0.5) BAs.append(Balanced_accuracy_cl) regions_sorted=[x for y, x in sorted(zip(BAs,UniqueP4Regions))] fig=plt.figure(figsize=(15,15)) plt.bar(regions_sorted,100*np.array(sorted(BAs))) ax=fig.gca() ax.set_xticks(np.arange(0,len(regions_sorted))) ax.set_xticklabels(regions_sorted,rotation=90,fontsize=20) ax.set_ylabel('Balanced Accuracy (%)',fontsize=30) matplotlib.rc('xtick', labelsize=30) matplotlib.rc('ytick', labelsize=30) fig.tight_layout() plt.savefig(FiguresPath+'Figure15.pdf') plt.show() #regions region_names_df = io.get_region_names() region_names_df = region_names_df.set_index('obsid') region_names_df.at['ESP_012620_0975','roi_name'] = 'Buffalo' region_names_df.at['ESP_012277_0975','roi_name'] = 'Buffalo' region_names_df.at['ESP_012348_0975','roi_name'] = 'Taichung' #other meta data ImageResults_df = io.get_meta_data() ImageResults_df = ImageResults_df.set_index('OBSERVATION_ID') ImageResults_df = pd.concat([ImageResults_df, region_names_df], axis=1, sort=False) ImageResults_df=ImageResults_df.dropna() UniqueP4Regions = ImageResults_df['roi_name'].unique() print("Number of P4 regions = ",len(UniqueP4Regions)) Recalls=[] for ToLeaveOut in UniqueP4Regions: This_df = Results_df[Results_df['Region']==ToLeaveOut] y_true = This_df['GroundTruth'].values y_pred = This_df['ClassifierConf'].values Recall_cl = recall_score(y_true, y_pred>Recall95PC_Threshold) Recalls.append(Recall_cl) regions_sorted=[x for y, x in sorted(zip(Recalls,UniqueP4Regions))] Precisions=[] for ToLeaveOut in regions_sorted: This_df = Results_df[Results_df['Region']==ToLeaveOut] y_true = This_df['GroundTruth'].values y_pred = This_df['ClassifierConf'].values Precision_cl = precision_score(y_true, y_pred>Recall95PC_Threshold) Precisions.append(Precision_cl) fig=plt.figure(figsize=(15,15)) plt.bar(regions_sorted,100*np.array(sorted(Recalls))) plt.bar(regions_sorted,100*np.array(Precisions),alpha=0.5,color='g') ax=fig.gca() ax.set_xticks(np.arange(0,len(regions_sorted))) ax.set_xticklabels(regions_sorted,rotation=90,fontsize=20) ax.set_ylabel('Recall (%), Precision (%)',fontsize=30) matplotlib.rc('xtick', labelsize=30) matplotlib.rc('ytick', labelsize=30) plt.ylim([0,110]) fig.tight_layout() fig.legend(['Recall','Precision'],fontsize=20,loc='upper right') plt.savefig(FiguresPath+'Figure15_new.pdf') plt.show() sorted(Recalls) ```
# Example 1: Detecting an obvious outlier ``` import numpy as np from isotree import IsolationForest ### Random data from a standard normal distribution np.random.seed(1) n = 100 m = 2 X = np.random.normal(size = (n, m)) ### Will now add obvious outlier point (3, 3) to the data X = np.r_[X, np.array([3, 3]).reshape((1, m))] ### Fit a small isolation forest model iso = IsolationForest(ntrees = 10, ndim = 2, nthreads = 1) iso.fit(X) ### Check which row has the highest outlier score pred = iso.predict(X) print("Point with highest outlier score: ", X[np.argsort(-pred)[0], ]) ``` # Example 2: Plotting outlier and density regions ``` import numpy as np, pandas as pd from isotree import IsolationForest import matplotlib.pyplot as plt from pylab import rcParams %matplotlib inline rcParams['figure.figsize'] = 10, 8 np.random.seed(1) group1 = pd.DataFrame({ "x" : np.random.normal(loc=-1, scale=.4, size = 1000), "y" : np.random.normal(loc=-1, scale=.2, size = 1000), }) group2 = pd.DataFrame({ "x" : np.random.normal(loc=+1, scale=.2, size = 1000), "y" : np.random.normal(loc=+1, scale=.4, size = 1000), }) X = pd.concat([group1, group2], ignore_index=True) ### Now add an obvious outlier which is within the 1d ranges ### (As an interesting test, remove it and see what happens, ### or check how its score changes when using sub-sampling) X = X.append(pd.DataFrame({"x" : [-1], "y" : [1]}), ignore_index = True) ### Single-variable Isolatio Forest iso_simple = IsolationForest(ndim=1, ntrees=100, penalize_range=False, prob_pick_pooled_gain=0) iso_simple.fit(X) ### Extended Isolation Forest iso_ext = IsolationForest(ndim=2, ntrees=100, penalize_range=False, prob_pick_pooled_gain=0) iso_ext.fit(X) ### SCiForest iso_sci = IsolationForest(ndim=2, ntrees=100, ntry=10, penalize_range=True, prob_pick_avg_gain=1, prob_pick_pooled_gain=0) iso_sci.fit(X) ### Fair-Cut Forest iso_fcf = IsolationForest(ndim=2, ntrees=100, penalize_range=False, prob_pick_avg_gain=0, prob_pick_pooled_gain=1) iso_fcf.fit(X) ### Plot as a heatmap pts = np.linspace(-3, 3, 250) space = np.array( np.meshgrid(pts, pts) ).reshape((2, -1)).T Z_sim = iso_simple.predict(space) Z_ext = iso_ext.predict(space) Z_sci = iso_sci.predict(space) Z_fcf = iso_fcf.predict(space) space_index = pd.MultiIndex.from_arrays([space[:, 0], space[:, 1]]) def plot_space(Z, space_index, X): df = pd.DataFrame({"z" : Z}, index = space_index) df = df.unstack() df = df[df.columns.values[::-1]] plt.imshow(df, extent = [-3, 3, -3, 3], cmap = 'hot_r') plt.scatter(x = X['x'], y = X['y'], alpha = .15, c = 'navy') plt.suptitle("Outlier and Density Regions", fontsize = 20) plt.subplot(2, 2, 1) plot_space(Z_sim, space_index, X) plt.title("Isolation Forest", fontsize=15) plt.subplot(2, 2, 2) plot_space(Z_ext, space_index, X) plt.title("Extended Isolation Forest", fontsize=15) plt.subplot(2, 2, 3) plot_space(Z_sci, space_index, X) plt.title("SCiForest", fontsize=15) plt.subplot(2, 2, 4) plot_space(Z_fcf, space_index, X) plt.title("Fair-Cut Forest", fontsize=15) plt.show() print("(Note that the upper-left corner has an outlier point,\n\ and that there is a slight slide in the axes of the heat colors and the points)") ``` # Example 3: calculating pairwise distances ``` import numpy as np, pandas as pd from isotree import IsolationForest from scipy.spatial.distance import cdist ### Generate random multivariate-normal data np.random.seed(1) n = 1000 m = 10 ### This is a random PSD matrix to use as covariance S = np.random.normal(size = (m, m)) S = S.T.dot(S) mu = np.random.normal(size = m, 
scale = 2) X = np.random.multivariate_normal(mu, S, n) ### Fitting the model iso = IsolationForest(prob_pick_avg_gain=0, prob_pick_pooled_gain=0) iso.fit(X) ### Calculate approximate distance D_sep = iso.predict_distance(X, square_mat = True) ### Compare against other distances D_euc = cdist(X, X, metric = "euclidean") D_cos = cdist(X, X, metric = "cosine") D_mah = cdist(X, X, metric = "mahalanobis") ### Correlations print("Correlations between different distance metrics") pd.DataFrame( np.corrcoef([D_sep.reshape(-1), D_euc.reshape(-1), D_cos.reshape(-1), D_mah.reshape(-1)]), columns = ['SeparaionDepth', 'Euclidean', 'Cosine', 'Mahalanobis'], index = ['SeparaionDepth', 'Euclidean', 'Cosine', 'Mahalanobis'] ) ``` # Example 4: imputing missing values ``` import numpy as np from isotree import IsolationForest ### Generate random multivariate-normal data np.random.seed(1) n = 1000 m = 5 ### This is a random PSD matrix to use as covariance S = np.random.normal(size = (m, m)) S = S.T.dot(S) mu = np.random.normal(size = m) X = np.random.multivariate_normal(mu, S, n) ### Set some values randomly as missing values_NA = (np.random.random(size = n * m) <= .15).reshape((n, m)) X_na = X.copy() X_na[values_NA] = np.nan ### Fitting the model iso = IsolationForest(build_imputer=True, prob_pick_pooled_gain=1, ntry=10) iso.fit(X_na) ### Impute missing values X_imputed = iso.transform(X_na) print("MSE for imputed values w/model: %f\n" % np.mean((X[values_NA] - X_imputed[values_NA])**2)) ### Comparison against simple mean imputation X_means = np.nanmean(X_na, axis = 0) X_imp_mean = X_na.copy() for cl in range(m): X_imp_mean[np.isnan(X_imp_mean[:,cl]), cl] = X_means[cl] print("MSE for imputed values w/means: %f\n" % np.mean((X[values_NA] - X_imp_mean[values_NA])**2)) ```
# 决策树 ----- ``` # 准备工作 # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib import matplotlib.pyplot as plt plt.rcParams['axes.labelsize'] = 14 plt.rcParams['xtick.labelsize'] = 12 plt.rcParams['ytick.labelsize'] = 12 # Where to save the figures PROJECT_ROOT_DIR = ".." CHAPTER_ID = "decision_trees" def image_path(fig_id): return os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id) def save_fig(fig_id, tight_layout=True): print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(image_path(fig_id) + ".png", format='png', dpi=300) ``` # 训练与可视化 ``` from sklearn.datasets import load_iris from sklearn.tree import DecisionTreeClassifier iris = load_iris() X = iris.data[:, 2:] # petal length and width y = iris.target tree_clf = DecisionTreeClassifier(max_depth=2, random_state=42) tree_clf.fit(X, y) from sklearn.tree import export_graphviz export_graphviz(tree_clf, out_file=image_path("iris_tree.dot"), feature_names=iris.feature_names[2:], class_names = iris.target_names, rounded=True, filled=True, ) ``` 根据上面得到的dot文件,可以使用`$ dot -Tpng iris_tree.dot -o iris_tree.png `命令转换为图片,如下: ![iris_tree.png](attachment:iris_tree.png) 上图可以看到树是的预测过程。假设想分类鸢尾花, 可以从根节点开始。 首先看花瓣宽度, 如果小于2.45cm, 分入左边节点(深度1,左)。这种情况下,叶子节点不同继续询问,可以直接预测为Setosa鸢尾花。 如果宽度大于2.45cm, 移到右边子节点继续判断。由于不是叶子节点,因此继续判断, 花萼宽度如果小于1.75cm,则很大可能是Versicolor花(深度2, 左)。否则,可能是Virginica花(深度2, 右)。 其中参数含义如下:sample表示训练实例的个数。比如右节点中有100个实例, 花瓣宽度大于2.45cm。(深度1) 其中54个花萼宽度小于1.75cm。value表示实例中每个类别的分分类个数。 gini系数表示实例的杂乱程度。如果等于0, 表示所有训练实例都属于同一个类别。如上setosa花分类。 公式可以计算第i个节点的gini分数。$G_i = 1 - \sum_{k=1}^{n} p_{i,k}^{2}$ P(i,k)表示k实例在i节点中的分布比例。 比如2层左节点的gini等于:$1-(0/50)^{2}-(49/50)^{2}-(5/50)^{2} = 0.168$。 注意:sklearn中使用CART,生成二叉树。但是像ID3可以生成多个孩子的决策树。 ``` from matplotlib.colors import ListedColormap def plot_decision_boundary(clf, X, y, axes=[0, 7.5, 0, 3], iris=True, legend=False, plot_training=True): x1s = np.linspace(axes[0], axes[1], 100) x2s = np.linspace(axes[2], axes[3], 100) x1, x2 = np.meshgrid(x1s, x2s) X_new = np.c_[x1.ravel(), x2.ravel()] y_pred = clf.predict(X_new).reshape(x1.shape) custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x1, x2, y_pred, alpha=0.3, cmap=custom_cmap, linewidth=10) if not iris: custom_cmap2 = ListedColormap(['#7d7d58','#4c4c7f','#507d50']) plt.contour(x1, x2, y_pred, cmap=custom_cmap2, alpha=0.8) if plot_training: plt.plot(X[:, 0][y==0], X[:, 1][y==0], "yo", label="Iris-Setosa") plt.plot(X[:, 0][y==1], X[:, 1][y==1], "bs", label="Iris-Versicolor") plt.plot(X[:, 0][y==2], X[:, 1][y==2], "g^", label="Iris-Virginica") plt.axis(axes) if iris: plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) else: plt.xlabel(r"$x_1$", fontsize=18) plt.ylabel(r"$x_2$", fontsize=18, rotation=0) if legend: plt.legend(loc="lower right", fontsize=14) plt.figure(figsize=(8, 4)) plot_decision_boundary(tree_clf, X, y) plt.plot([2.45, 2.45], [0, 3], "k-", linewidth=2) plt.plot([2.45, 7.5], [1.75, 1.75], "k--", linewidth=2) plt.plot([4.95, 4.95], [0, 1.75], "k:", linewidth=2) plt.plot([4.85, 4.85], [1.75, 3], "k:", linewidth=2) plt.text(1.40, 1.0, "Depth=0", fontsize=15) plt.text(3.2, 1.80, "Depth=1", fontsize=13) plt.text(4.05, 0.5, "(Depth=2)", fontsize=11) save_fig("decision_tree_decision_boundaries_plot") plt.show() ``` 上图显示了该决策树的决策边界。垂直线表示决策树的根节点(深度0), 花瓣长度等于2.45cm。 由于左边gini为0,只有一种分类,不再进一步分类判断。但是右边不是很纯,因此深度1的右边节点根据花萼宽度1.75cm进一步判断。 由于最大深度为2,决策树停止后面的判断。但是可以设置max_depth为3, 
然后,两个深度2节点将各自添加另一个决策边界(由虚线表示)。 补充:可以看到决策树的过程容易理解,称之为白盒模型。与之不同的是,随机森林和神经网络一般称为黑盒模型。 它们预测效果很好,可以很容易地检查其计算结果, 来做出这些预测。但却难以解释为什么这样预测。 决策树提供了很好的和简单的分类规则,甚至可以在需要时手动分类。 # 进行预测和计算可能性 ``` tree_clf.predict_proba([[5, 1.5]]) tree_clf.predict([[5, 1.5]]) ``` ### CART:分类回归树 sklearn使用CART算法对训练决策树(增长树)。思想很简单:首先将训练集分为两个子集,根据特征k和阈值$t_k$(比如花瓣长度小于2.45cm)。重要的是怎么选出这个特征。 通过对每一组最纯的子集(k, $t_k$),根据大小的权重进行搜索。最小化如下损失函数: #### CART分类的损失函数 $J(k, t_k) = \frac{m_{left}}{m}G_{left} + \frac{m_{right}}{m}G_{right} $ ![gini.png](attachment:gini.png) 最小化如上函数,一旦成功分为两个子集, 就可以使用相同逻辑递归进行切分。当到达给定的最大深度时(max_depth)停止,或者不能再继续切分(数据集很纯,无法减少杂质)。 如下超参数控制其他的停止条件(min_samples_split, min_sample_leaf, min_weight_fraction_leaf, max_leaf_nodes). ### 计算复杂度 默认的,经常使用gini impurity测量标准,但是也可以使用entropy impuirty来测量。 ![gini_2.png](attachment:gini_2.png) ``` not_widest_versicolor = (X[:, 1] != 1.8) | (y==2) X_tweaked = X[not_widest_versicolor] y_tweaked = y[not_widest_versicolor] tree_clf_tweaked = DecisionTreeClassifier(max_depth=2, random_state=40) tree_clf_tweaked.fit(X_tweaked, y_tweaked) plt.figure(figsize=(8, 4)) plot_decision_boundary(tree_clf_tweaked, X_tweaked, y_tweaked, legend=False) plt.plot([0, 7.5], [0.8, 0.8], "k-", linewidth=2) plt.plot([0, 7.5], [1.75, 1.75], "k-", linewidth=2) plt.text(1.0, 0.9, "Depth=0", fontsize=15) plt.text(1.0, 1.80, "Depth=0", fontsize=13) save_fig("decision_tree_instability_plot") plt.show() ``` ### 限制超参数 如下情况所示,防止过拟合数据,需要限制决策树的自由度,这个过程也叫正则(限制)。 决策树的max_depth超参数来控制拟合程度,默认不限制。可以减少max_depth来限制模型,减少过拟合的风险。 ``` from sklearn.datasets import make_moons Xm, ym = make_moons(n_samples=100, noise=0.25, random_state=53) deep_tree_clf1 = DecisionTreeClassifier(random_state=42) deep_tree_clf2 = DecisionTreeClassifier(min_samples_leaf=4, random_state=42) deep_tree_clf1.fit(Xm, ym) deep_tree_clf2.fit(Xm, ym) plt.figure(figsize=(11, 4)) plt.subplot(121) plot_decision_boundary(deep_tree_clf1, Xm, ym, axes=[-1.5, 2.5, -1, 1.5], iris=False) plt.title("No restrictions", fontsize=16) plt.subplot(122) plot_decision_boundary(deep_tree_clf2, Xm, ym, axes=[-1.5, 2.5, -1, 1.5], iris=False) plt.title("min_samples_leaf = {}".format(deep_tree_clf2.min_samples_leaf), fontsize=14) save_fig("min_samples_leaf_plot") plt.show() ``` DecisionTreeClassifier有如下超参数:min_samples_split表示切分数据时包含的最小实例, min_samples_leaf表示一个叶子节点必须拥有的最小样本数目, min_weight_fraction_leaf(与min_samples_leaf相同,但表示为加权实例总数的一小部分), max_leaf_nodes(最大叶节点数)和max_features(在每个节点上分配的最大特性数), 增加min_*超参数或减少max_*超参数将使模型规范化。 其他算法开始时对决策树进行无约束训练,之后删除没必要的特征,称为减枝。 如果一个节点的所有子节点所提供的纯度改善没有统计学意义,则认为其和其子节点是不必要的。 ``` angle = np.pi / 180 * 20 rotation_maxtrix = np.array([[np.cos(angle), -np.sin(angle)], [np.sin(angle), np.cos(angle)]]) Xr = X.dot(rotation_maxtrix) tree_clf_r = DecisionTreeClassifier(random_state=42) tree_clf_r.fit(Xr, y) plt.figure(figsize=(8, 3)) plot_decision_boundary(tree_clf_r, Xr, y, axes=[0.5, 7.5, -1.0, 1], iris=False) plt.show() ``` # 不稳定性 目前为止,决策树有很多好处:它们易于理解和解释,易于使用,用途广泛,而且功能强大。 但是也有一些限制。首先, 决策树喜欢正交决策边界(所有的分割都垂直于一个轴),这使得它们对训练集的旋转很敏感。如下右图所示,旋转45度之后,尽管分类的很好,但是不会得到更大推广。其中的一种解决办法是PCA(后面介绍)。 更普遍的,决策树对训练集中的微小变化很敏感。比如上图中移除一个实例的分类结果又很大的不同。 随机森林可以通过对许多树进行平均预测来限制这种不稳定, 对异常值,微小变化更加适用。 ``` np.random.seed(6) Xs = np.random.rand(100, 2) - 0.5 ys = (Xs[:, 0] > 0).astype(np.float32) * 2 angle = np.pi / 4 rotation_matrix = np.array([[np.cos(angle), -np.sin(angle)], [np.sin(angle), np.cos(angle)]]) Xsr = Xs.dot(rotation_matrix) tree_clf_s = DecisionTreeClassifier(random_state=42) tree_clf_s.fit(Xs, ys) tree_clf_sr = DecisionTreeClassifier(random_state=42) tree_clf_sr.fit(Xsr, ys) 
plt.figure(figsize=(11, 4)) plt.subplot(121) plot_decision_boundary(tree_clf_s, Xs, ys, axes=[-0.7, 0.7, -0.7, 0.7], iris=False) plt.subplot(122) plot_decision_boundary(tree_clf_sr, Xsr, ys, axes=[-0.7, 0.7, -0.7, 0.7], iris=False) save_fig("sensitivity_to_rotation_plot") plt.show() ``` ### 回归树 ``` import numpy as np # 带噪声的2阶训练集 np.random.seed(42) m = 200 X = np.random.rand(m ,1) y = 4 * (X - 0.5) ** 2 y = y + np.random.randn(m, 1) / 10 from sklearn.tree import DecisionTreeRegressor tree_reg = DecisionTreeRegressor(max_depth=2, random_state=42) tree_reg.fit(X, y) ``` 该回归决策树最大深度为2, dot后如下: ![regression_tree.png](attachment:regression_tree.png) 与分类树非常类似。 主要的不同在于,分类树根据每个节点预测每个分类。 比如当x1 = 0.6时进行预测。从根开始遍历树,最终到达叶节点,该节点预测值=0.1106。 这个预测仅仅是与此叶节点相关的110个训练实例的平均目标值。这个预测的结果是一个平均平方误差(MSE),在这110个实例中等于0.0151。 请注意,每个区域的预测值始终是该区域实例的平均目标值。该算法以一种使大多数训练实例尽可能接近预测值的方式来分割每个区域。 ``` from sklearn.tree import DecisionTreeRegressor tree_reg1 = DecisionTreeRegressor(random_state=42, max_depth=2) tree_reg2 = DecisionTreeRegressor(random_state=42, max_depth=3) tree_reg1.fit(X, y) tree_reg2.fit(X, y) def plot_regression_predictions(tree_reg, X, y, axes=[0, 1, -0.2, 1], ylabel="$y$"): x1 = np.linspace(axes[0], axes[1], 500).reshape(-1, 1) y_pred = tree_reg.predict(x1) plt.axis(axes) if ylabel: plt.ylabel(ylabel, fontsize=18, rotation=0) plt.plot(X, y, "b.") plt.plot(x1, y_pred, "r.-", linewidth=2, label=r"$\hat{y}$") plt.figure(figsize=(11, 4)) plt.subplot(121) plot_regression_predictions(tree_reg1, X, y) for split, style in ((0.1973, "k-"), (0.0917, "k--"), (0.7718, "k--")): plt.plot([split, split], [-0.2, 1], style, linewidth=2) plt.text(0.21, 0.65, "Depth=0", fontsize=15) plt.text(0.01, 0.2, "Depth=1", fontsize=13) plt.text(0.65, 0.8, "Depth=1", fontsize=13) plt.legend(loc="upper center", fontsize=18) plt.title("max_depth=2", fontsize=14) plt.subplot(122) plot_regression_predictions(tree_reg2, X, y, ylabel=None) for split, style in ((0.1973, "k-"), (0.0917, "k--"), (0.7718, "k--")): plt.plot([split, split], [-0.2, 1], style, linewidth=2) for split in (0.0458, 0.1298, 0.2873, 0.9040): plt.plot([split, split], [-0.2, 1], "k:", linewidth=1) plt.text(0.3, 0.5, "Depth=2", fontsize=13) plt.title("max_depth=3", fontsize=14) save_fig("tree_regression_plot") plt.show() # 画出分类图 export_graphviz( tree_reg1, out_file=image_path("regression_tree.dot"), feature_names=["x1"], rounded=True, filled=True ) tree_reg1 = DecisionTreeRegressor(random_state=42) tree_reg2 = DecisionTreeRegressor(random_state=42, min_samples_leaf=10) tree_reg1.fit(X, y) tree_reg2.fit(X, y) x1 = np.linspace(0, 1, 500).reshape(-1, 1) y_pred1 = tree_reg1.predict(x1) y_pred2 = tree_reg2.predict(x1) plt.figure(figsize=(11, 4)) plt.subplot(121) plt.plot(X, y, "b.") plt.plot(x1, y_pred1, "r.-", linewidth=2, label=r"$\hat{y}$") plt.axis([0, 1, -0.2, 1.1]) plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", fontsize=18, rotation=0) plt.legend(loc="upper center", fontsize=18) plt.title("No restrictions", fontsize=14) plt.subplot(122) plt.plot(X, y, "b.") plt.plot(x1, y_pred2, "r.-", linewidth=2, label=r"$\hat{y}$") plt.axis([0, 1, -0.2, 1.1]) plt.xlabel("$x_1$", fontsize=18) plt.title("min_samples_leaf={}".format(tree_reg2.min_samples_leaf), fontsize=14) save_fig("tree_regression_regularization_plot") ``` ![regression_tree.png](attachment:regression_tree.png) 如上图所示, 回归树根据最小化mse来切分数据集。 决策树在处理回归任务时倾向于过度拟合。 如上左图,超参数默认,不加约束时容易过拟合。通过设置min_samples_leaf 将模型更合理的约束。 # 课后习题 #### 1. 无约束情况下,一百万个实例的训练集训练得到的决策树的大约深度是多少? #### 2. 节点的gini impurity一般是小于还是大于其父节点?一般情况下这样,还是一直都这样? #### 3. 
如果决策树过拟合, 减少max_depth是一个好方法吗? #### 4. 如果决策树欠拟合,缩放输入的特征是一个好方法吗? #### 5. 如果在一个包含100万个实例的训练集上训练决策树需要一个小时,那么在包含1000万个实例的训练集上训练另一个决策树需要花费多少时间呢? #### 6. 如果您的训练集包含100,000个实例,将设置presort=True 可以加快训练吗? #### 7. 训练并调节决策树模型,使用moons数据集。 a. 使用 make_moons(n_samples=10000, noise=0.4)生成数据集。 b. 使用train_test_split(). 切分数据集。 c. 使用网格搜索并进行交叉验证,去找到最合适的超参数。尝试max_leaf_nodes参数。 d. 使用全部数据进行训练,并在测试集上估计性能。应该在85到87之间。 #### 8. 生成森林。 a. 继续上一题, 生成训练集的1000个子集所谓验证集。随机选择100实例。 b. 每一个子集训练一棵树,使用上述得到的最合适超参数。在测试集上评估这1000棵树。 由于它们是在较小的集合上进行训练的,所以这些决策树可能比第一个决策树更糟糕,只实现了大约80%的精度。 c. 对于每个测试集实例,生成1,000个决策树的预测,并且只保留最频繁的预测(您可以使用SciPy的mode()函数来实现这一点)。这给了对测试集的多数投票预测. d. 评估测试集的这些预测:您应该获得比第一个模型稍高的精度(大约高0.5到1.5%)。恭喜你,你训练了一个随机森林分类器!
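A minimal sketch of the majority-vote step described in exercise 8c (names are illustrative; `all_preds` stands for an array of shape (n_trees, n_test) holding each tree's class predictions):

```
import numpy as np
from scipy.stats import mode

rng = np.random.RandomState(42)
# Hypothetical predictions of 1000 trees for 8 test instances (classes 0/1)
all_preds = rng.randint(0, 2, size=(1000, 8))

# Majority vote across trees (axis 0): keep the most frequent class per instance
majority_pred, vote_counts = mode(all_preds, axis=0)
print(np.ravel(majority_pred))
```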
# TAG MATRIX DATA SCRAPER - author: Richard Castro - sticky_rank: 1 - toc: true - badges: true - comments: false - categories: [Matrix] - image: images/scraper.jpg ``` import pandas as pd import requests import datetime date=datetime.datetime.now().strftime("%Y-%m-%d") import dash import dash_core_components as dcc df=pd.read_csv("https://raw.githubusercontent.com/pcm-dpc/COVID-19/master/dati-province/dpc-covid19-ita-province.csv") df.to_csv('../data/clients/CHEP/'+date+'-Italy.csv') ``` # CANADA STUFF ``` #SOURCE - https://github.com/eebrown/data2019nCoV df=pd.read_csv('https://raw.githubusercontent.com/eebrown/data2019nCoV/master/data-raw/covid19.csv') df=df.rename(columns={'pruid':'uid', 'prname':'province'}) col=['uid', 'province','date', 'numconf', 'numprob', 'numdeaths', 'numtotal', 'numtested', 'numrecover', 'percentrecover', 'ratetested', 'numtoday', 'percentoday', 'ratetotal', 'ratedeaths', 'numdeathstoday', 'percentdeath', 'numtestedtoday', 'numrecoveredtoday', 'percentactive', 'numactive', 'rateactive', 'numtotal_last14', 'ratetotal_last14', 'numdeaths_last14', 'ratedeaths_last14'] df_ca=df[col] df_ca.set_index('date', inplace=True) df_ca.to_csv('../data/sources/canada/'+date+'-eeBrown.csv') #SOURCE - https://www12.statcan.gc.ca/census-recensement/index-eng.cfm df=pd.read_csv('https://www12.statcan.gc.ca/census-recensement/2016/dp-pd/hlt-fst/pd-pl/Tables/CompFile.cfm?Lang=Eng&T=301&OFT=FULLCSV') df_cacen=df df_cacen.to_csv('../data/sources/canada/'+date+'-ca_census.csv') #SOURCE - ISHABERRY #PROVINCE LEVEL CASE DATA df=pd.read_csv('https://raw.githubusercontent.com/ishaberry/Covid19Canada/master/timeseries_prov/cases_timeseries_prov.csv') df.rename(columns={'date_report':'date'}, inplace=True) df.set_index('date') df_Isha=df df_Isha.to_csv('../data/sources/canada/'+date+'Isha_Prov_Cases.csv') #SOURCE - ISHABERRY #HEALTH REGION LEVEL CASE DATA df=pd.read_csv('https://raw.githubusercontent.com/ishaberry/Covid19Canada/master/timeseries_hr/cases_timeseries_hr.csv') df.rename(columns={'date_report':'date'}, inplace=True) df.set_index('date') df_Isha=df df_Isha.to_csv('../data/sources/canada/'+date+'Isha_HR_Cases.csv') #SOURCE - ISHABERRY #PROVINCE LEVEL TEST DATA df=pd.read_csv('https://raw.githubusercontent.com/ishaberry/Covid19Canada/master/timeseries_prov/testing_timeseries_prov.csv') df.rename(columns={'date_testing':'date'}, inplace=True) df.set_index('date') df_Isha=df df_Isha.to_csv('../data/sources/canada/'+date+'Isha_Province_Testing.csv') ``` # World-o-Meter ``` #WORLD O METER DATA #NEW YORK COUNTY DATA import datetime date=datetime.datetime.now().strftime("%Y-%m-%d") web=requests.get('https://www.worldometers.info/coronavirus/usa/new-york') ny=pd.read_html(web.text) ny=ny[1] ny.columns=map(str.lower, ny.columns) ny.to_csv('../data/sources/worldometer/'+date+'-NY-County-Data.csv') #CALIFORNIA COUNTY DATA cad=requests.get('https://www.worldometers.info/coronavirus/usa/california') ca=pd.read_html(cad.text) ca=ca[1] ca.columns=map(str.lower, ca.columns) ca.to_csv('../data/sources/worldometer/'+date+'-CA-County-Data.csv') #NEW JERSEY COUNTY DATA njd=requests.get('https://www.worldometers.info/coronavirus/usa/new-jersey') nj=pd.read_html(njd.text) nj=nj[1] nj.columns=map(str.lower, nj.columns) nj.to_csv('../data/sources/worldometer/'+date+'-NJ-County-Data.csv') #OHIO COUNTY DATA ohd=requests.get('https://www.worldometers.info/coronavirus/usa/ohio/') oh=pd.read_html(ohd.text) oh=oh[1] oh.columns=map(str.lower, oh.columns) 
oh.to_csv('../data/sources/worldometer/'+date+'-OH-County-Data.csv') #SOUTH CAROLINA COUNTY DATA scd=requests.get('https://www.worldometers.info/coronavirus/usa/south-carolina/') sc=pd.read_html(scd.text) sc=sc[1] sc.columns=map(str.lower, sc.columns) sc.to_csv('../data/sources/worldometer/'+date+'-SC-County-Data.csv') #PA COUNTY DATA pad=requests.get('https://www.worldometers.info/coronavirus/usa/pennsylvania/') pa=pd.read_html(pad.text) pa=pa[1] pa.columns=map(str.lower, pa.columns) pa.to_csv('../data/sources/worldometer/'+date+'-PA-County-Data.csv') #WASHINGTON COUNTY DATA wad=requests.get('https://www.worldometers.info/coronavirus/usa/washington/') wa=pd.read_html(wad.text) wa=wa[1] wa.columns=map(str.lower, wa.columns) wa.to_csv('../data/sources/worldometer/'+date+'-WA-County-Data.csv') #US STATE LEVEL DATA we=requests.get('https://www.worldometers.info/coronavirus/country/us/') us=pd.read_html(we.text) us=us[1] us.to_csv('../data/sources/worldometer/'+date+'-US-State-Data.csv') ``` # rt live ``` rtlive=pd.read_csv('https://d14wlfuexuxgcm.cloudfront.net/covid/rt.csv') rtlive.to_csv('../_data/data_sources/rtlive/rtlive'+date+'.csv') ``` # Mobility Reports <ul> <li>Google Mobility Reports</li> <li>Apple Mobility Reports</li> </ul> ``` #GOOGLE AND APPLE MOBILITY DATA BY COUNTY #apple=pd.read_csv('https://covid19-static.cdn-apple.com/covid19-mobility-data/2014HotfixDev8/v3/en-us/applemobilitytrends-2020-08-08.csv') #apple.to_csv('../_data/Data_Sources/Mobility_Reports/apple.csv') google=pd.read_csv('https://www.gstatic.com/covid19/mobility/Global_Mobility_Report.csv') google.to_csv('../_data/Data_Sources/google/google.csv') ``` # WORLD-O-METER DATASETS <ul> <li>NEW YORK <li>CALIFORNIA <li>NEW JERSEY <li>PA <li>SOUTH CAROLINA <li>OHIO <li>WASHINGTON STATE </ul> ``` healthDepartment=requests.get('https://data.ct.gov/Health-and-Human-Services/COVID-19-Tests-Cases-and-Deaths-By-Town-/28fr-iqnx/data') hd=pd.read_html(healthDepartment.text) ``` # COUNTY HEALTH DEPARTMANT DATASETS ``` #HEALTH DEPARTMENTS DATA flData=pd.read_csv('https://opendata.arcgis.com/datasets/222c9d85e93540dba523939cfb718d76_0.csv?outSR=%7B%22latestWkid%22%3A4326%2C%22wkid%22%3A4326%7D') flData.to_csv('../_data/Data_Sources/Fl-Data.csv') miData=pd.read_excel('https://www.michigan.gov/documents/coronavirus/Covid-19_Tests_by_County_2020-08-08_698830_7.xlsx') miData.to_csv('../_data/Data_Sources/Health-Department-Data/MI-Tests-County.csv') miData2=pd.read_excel('https://www.michigan.gov/documents/coronavirus/Cases_by_County_and_Date_2020-08-08_698828_7.xlsx') miData2.to_csv('../_data/Data_Sources/Health-Department-Data/MI-Cases-County.csv') miData3=pd.read_excel('https://www.michigan.gov/documents/coronavirus/Cases_and_Deaths_by_County_2020-08-08_698827_7.xlsx') miData3.to_csv('../_data/Data_Sources/Health-Department-Data/MI-Deaths-Cases-County.csv') miData4=pd.read_csv('https://raw.githubusercontent.com/jeffcore/covid-19-usa-by-state/master/COVID-19-Cases-USA-By-County.csv') miData4.to_csv('../_data/Data_Sources/Health-Department-Data/COVID-19-Cases-USA-By-County.csv') miData5=pd.read_csv('https://raw.githubusercontent.com/jeffcore/covid-19-usa-by-state/master/COVID-19-Deaths-USA-By-County.csv') miData5.to_csv('../_data/Data_Sources/Health-Department-Data/COVID-19-Deaths-USA-By-County.csv') miData6=pd.read_csv('https://raw.githubusercontent.com/jeffcore/covid-19-usa-by-state/master/COVID-19-Cases-USA-By-State.csv') miData6.to_csv('../_data/Data_Sources/Health-Department-Data/COVID-19-Cases-USA-By-State.csv') 
miData7=pd.read_csv('https://raw.githubusercontent.com/jeffcore/covid-19-usa-by-state/master/COVID-19-Deaths-USA-By-State.csv') miData7.to_csv('../_data/Data_Sources/Health-Department-Data/COVID-19-Deaths-USA-By-State.csv') #ANOTHER MODULE from bs4 import BeautifulSoup url = 'https://covidactnow.org/us/fl/county/taylor_county?s=846164' soup = BeautifulSoup(requests.get(url).text, 'html.parser') soup.findAll("table")[0].findAll("tr")[0] ``` # COVID TRACKER DATA ``` #COVID TRACKER DATA da1=pd.read_html('https://covidtracking.com/data/state/alabama') da1[1].to_csv('../_data/Data_Sources/CovidTracker/Alabama.csv') da2=pd.read_html('https://covidtracking.com/data/state/alaska') da2[1].to_csv('../_data/Data_Sources/CovidTracker/Alaska.csv') da3=pd.read_html('https://covidtracking.com/data/state/arizona') da3[1].to_csv('../_data/Data_Sources/CovidTracker/Arizona.csv') AR_COVIDTRACKER=pd.read_html('https://covidtracking.com/data/state/arkansas') AR_COVIDTRACKER[1].to_csv('../_data/data_sources/covidtracker/'+date+'-ARKANSAS.csv') CA_COVIDTRACKER=pd.read_html('https://covidtracking.com/data/state/california') CA_COVIDTRACKER[1].to_csv('../_data/data_sources/covidtracker/'+date+'-CALIFORNIA.csv') GA_COVIDTRACKER=pd.read_html('https://covidtracking.com/data/state/GEORGIA') GA_COVIDTRACKER[1].to_csv('../_data/data_sources/covidtracker/'+date+'-GEORGIA.csv') KS_COVIDTRACKER=pd.read_html('https://covidtracking.com/data/state/KANSAS') KS_COVIDTRACKER[1].to_csv('../_data/data_sources/covidtracker/'+date+'-KANSAS.csv') FL_COVIDTRACKER=pd.read_html('https://covidtracking.com/data/state/FLORIDA') FL_COVIDTRACKER[1].to_csv('../_data/data_sources/covidtracker/'+date+'-FLORIDA.csv') IL_COVIDTRACKER=pd.read_html('https://covidtracking.com/data/state/ILLINOIS') IL_COVIDTRACKER[1].to_csv('../_data/data_sources/covidtracker/'+date+'-ILLINOIS.csv') OH_COVIDTRACKER=pd.read_html('https://covidtracking.com/data/state/OHIO') OH_COVIDTRACKER[1].to_csv('../_data/data_sources/covidtracker/'+date+'-OHIO.csv') TN_COVIDTRACKER=pd.read_html('https://covidtracking.com/data/state/TENNESSEE') TN_COVIDTRACKER[1].to_csv('../_data/data_sources/covidtracker/'+date+'-TENNESSEE.csv') NE_COVIDTRACKER=pd.read_html('https://covidtracking.com/data/state/NEBRASKA') NE_COVIDTRACKER[1].to_csv('../_data/data_sources/covidtracker/'+date+'-NEBRASKA.csv') PA_COVIDTRACKER=pd.read_html('https://covidtracking.com/data/state/PENNSYLVANIA') PA_COVIDTRACKER[1].to_csv('../_data/data_sources/covidtracker/'+date+'-PENNSYLVANIA.csv') NC_COVIDTRACKER=pd.read_html('https://covidtracking.com/data/state/NORTH-CAROLINA') NC_COVIDTRACKER[1].to_csv('../_data/data_sources/covidtracker/'+date+'-NORTHCAROLINA.csv') KY_COVIDTRACKER=pd.read_html('https://covidtracking.com/data/state/KENTUCKY') KY_COVIDTRACKER[1].to_csv('../_data/data_sources/covidtracker/'+date+'-KENTUCKY.csv') CO_COVIDTRACKER=pd.read_html('https://covidtracking.com/data/state/COLORADO') CO_COVIDTRACKER[1].to_csv('../_data/data_sources/covidtracker/'+date+'-COLORADO.csv') NJ_COVIDTRACKER=pd.read_html('https://covidtracking.com/data/state/NEW-JERSEY') NJ_COVIDTRACKER[1].to_csv('../_data/data_sources/covidtracker/'+date+'-NEWJERSEY.csv') MN_COVIDTRACKER=pd.read_html('https://covidtracking.com/data/state/MINNESOTA') MN_COVIDTRACKER[1].to_csv('../_data/data_sources/covidtracker/'+date+'-MINNESOTA.csv')
MI_COVIDTRACKER=pd.read_html('https://covidtracking.com/data/state/MICHIGAN') MI_COVIDTRACKER[1].to_csv('../_data/data_sources/covidtracker/'+date+'-MICHIGAN.csv') ``` # NEW YORK TIMES DATA ``` #NEW YORK TIMES DATA MASK_NYT=pd.read_csv('https://raw.githubusercontent.com/nytimes/covid-19-data/master/mask-use/mask-use-by-county.csv') MASK_NYT.to_csv('../_data/data_sources/NYT/MASKUSAGE-'+date+'.csv') CASESDEATHSC_NYT=pd.read_csv('https://raw.githubusercontent.com/nytimes/covid-19-data/master/us-counties.csv') CASESDEATHSC_NYT.to_csv('../_data/data_sources/NYT/CASES-DEATHS-COUNTY-'+date+'.csv') CASESDEATHS_NYT=pd.read_csv('https://raw.githubusercontent.com/nytimes/covid-19-data/master/us-states.csv') CASESDEATHS_NYT.to_csv('../_data/data_sources/NYT/CASES-DEATHS-STATE-'+date+'.csv') CASESDEATHSCD_NYT=pd.read_csv('https://raw.githubusercontent.com/nytimes/covid-19-data/master/live/us-counties.csv') CASESDEATHSCD_NYT.to_csv('../_data/data_sources/NYT/CASES-DEATHS-COUNTY-DAILY-'+date+'.csv') CASESDEATHSD_NYT=pd.read_csv('https://raw.githubusercontent.com/nytimes/covid-19-data/master/live/us-states.csv') CASESDEATHSD_NYT.to_csv('../_data/data_sources/NYT/CASES-DEATHS-STATE-DAILY-'+date+'.csv') EXDEATHS_NYT=pd.read_csv('https://raw.githubusercontent.com/nytimes/covid-19-data/master/excess-deaths/deaths.csv') EXDEATHS_NYT.to_csv('../_data/data_sources/NYT/EXCESS-DEATHS-CITY-'+date+'.csv') ``` # linlab ``` LINLAB=pd.read_csv('https://raw.githubusercontent.com/lin-lab/COVID19-Viz/master/clean_data/rt_table_export.csv') LINLAB.to_csv('../data/sources/LINLAB/'+date+'-RTCOUNTYLEVEL.csv') wew=requests.get('https://www.geonames.org/statistics/') us=pd.read_html(wew.text) us[1].to_csv('../_data/data_sources/worldCountryPopulations.csv') ```
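The COVID Tracker section above repeats the same fetch-and-save pair once per state; the same work can be driven by a list of state slugs. A sketch assuming the URL pattern and output folder used above (the state list is illustrative, not exhaustive):

```
# Illustrative loop over covidtracking.com state pages (same pattern as above)
states = ['arkansas', 'california', 'georgia', 'kansas', 'florida',
          'illinois', 'ohio', 'tennessee', 'nebraska', 'pennsylvania']

for state in states:
    tables = pd.read_html('https://covidtracking.com/data/state/' + state)
    tables[1].to_csv('../_data/data_sources/covidtracker/' + date + '-' + state.upper() + '.csv')
```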
github_jupyter
Import all libraries ``` import requests import json import time import multiprocessing import psycopg2 ``` Functions for loading the data and counting rows in meanwhile ``` # Postgres def load_data_postgres(data): start = time.time() for line in data: response_insert = requests.post('http://localhost:3001/postgres/write/crypto',json=line).text end = time.time() print("postgres",end-start) # Firebase def load_data_firebase(data): start = time.time() prev_time = start for line in data: response_insert = requests.post('http://localhost:3001/firebase/write/crypto',json=line).text end = time.time() print("firebase",end-start) def time_measurement(num,l): start = time.time() prev_time = start query_list_stat_all=[] for i in range(num): time.sleep(20) time_now = time.time()-start # print or write to log here prev_time = time.time() response_write = int(requests.get('http://localhost:3001/postgres/read/crypto').text) #running queries after 20 sec quer_stat = query_postgres(l) quer_stat.append(response_write) time_now = time.time()-start quer_stat.append(time_now) print("No of rows {} and time {}".format(response_write,time_now)) print("query_stats",quer_stat) print("\n\n") query_list_stat_all.append(quer_stat) return query_list_stat_all def query_postgres(query_list): time_list = [] # row_count = 0 q_all_start = time.time() for i,que in enumerate(query_list): temp = {} temp['query'] = que start = time.time() response_query = requests.post('http://localhost:3001/postgres/query/crypto',json=temp) temp["id"] = i+1 # if i==0: # print(response_query) # row_count = json.loads(response_query.text)['rowCount'] end = time.time() temp["query_time"] = end-start # temp['row_count'] = row_count time_list.append(temp) q_all_end = time.time() final_list = [time_list,(q_all_end-q_all_start)] # time_list.append(q_all_end-q_all_start) # temp={} # temp['all_query_time'] = # time_list.append(temp) print("time taken to run all the queries", q_all_end-q_all_start) return final_list def query_firebase(query_list): q_all_start = time.time() for i,que in enumerate(query_list): temp = {} temp['query'] = que start = time.time() # post request response_query = requests.post('http://localhost:3001/firebase/query/crypto',json=temp) print(response_query) end = time.time() print("time taken for query {} is {}".format(i+1,(end-start))) q_all_end = time.time() print("time taken to run all the queries", q_all_end-q_all_start) # l = ["""select count(*) from crypto_tab;""", # """ select * from crypto_tab order by bitcoin_info->>'Date' DESC limit 100;""", # """select count(*) from crypto_tab group by bitcoin_info->>'Symbol';""", # """select * from crypto_tab where (bitcoin_info->>'Volume')::float > 2;""" # """select * from crypto_tab where bitcoin_info->>'Date' = '2018-12-07 21:52:00';""", # """select * from crypto_tab where (bitcoin_info->>'High')::float > 3800 and bitcoin_info->>'Symbol' = 'BTCUSD';""" # ] # query_postgres(l) l = ["""select count(*) from crypto_tab;""", """ select * from crypto_tab order by bitcoin_info->>'Date' DESC limit 100;""", """select * from crypto_tab where bitcoin_info->>'Date' = '2020-12-31 21:52:00';""", """select * from crypto_tab where (bitcoin_info->>'High')::float > 3800 and (bitcoin_info->>'High')::float < 4500; """, """select * from crypto_tab where (bitcoin_info->>'Volume')::float > 2;""" ] # query_postgres(l) ``` Select which data to load ``` # # BITCOIN DATA # json_file_path= './data/bitcoin_data_sample.json' # with open(json_file_path) as json_file: # bitcoin_data = json.load(json_file) # # 
ETHEREUM DATA # json_file_path= 'data/trajectory_data_60k.json' # with open(json_file_path) as json_file: # ethereum_data = json.load(json_file) # # LITECOIN DATA # json_file_path= './bitcoin_merged_data.json' # with open(json_file_path) as json_file: # data = json.load(json_file) # crpto DATA # json_file_path= './data/bitcoin_merged_data.json' # with open(json_file_path) as json_file: # crypto_data = json.load(json_file) # crypto_updated = crypto_data*2 load_data_postgres(crypto_data) # p1 = multiprocessing.Process(target=load_data_postgres, args=(crypto_data, )) # p2 = multiprocessing.Process(target=load_data_postgres, args=(bitcoin_data, )) # p3 = multiprocessing.Process(target=load_data_postgres, args=(bitcoin_data, )) # p4 = multiprocessing.Process(target=time_measurement, args=(len(bitcoin_data), )) # # starting process 1 # p1.start() # # starting process 2 # p2.start() # # starting process 3 # p3.start() # # starting process 4 # p4.start() # # wait until process 1 is finished # p1.join() # # wait until process 2 is finished # p2.join() # # wait until process 3 is finished # p3.join() # # wait until process 4 is finished # p4.join() # load_data_postgres(ethereum_data) # time_list = query_postgres(l) # time_measurement(len(data),l) ``` Run process for Firebase ``` p1 = multiprocessing.Process(target=load_data_postgres, args=(bitcoin_data, )) p2 = multiprocessing.Process(target=load_data_postgres, args=(bitcoin_data, )) p3 = multiprocessing.Process(target=load_data_postgres, args=(bitcoin_data, )) p4 = multiprocessing.Process(target=time_measurement, args=(len(bitcoin_data), )) # starting process 1 p1.start() # starting process 2 p2.start() # starting process 3 p3.start() # starting process 4 p4.start() # wait until process 1 is finished p1.join() # wait until process 2 is finished p2.join() # wait until process 3 is finished p3.join() # wait until process 4 is finished p4.join() ``` Run process for PostgreSQL ``` load_data_postgres(data) # BITCOIN DATA json_file_path= './data/trajectory_data_60k.json' with open(json_file_path) as json_file: data = json.load(json_file) len(data) sql = """insert into trajectory_path(path_info) values(%s)""" sql_count = """select count(*) from trajectory_path""" start_time = time.time() prev_time = start_time prv_count = 0 with psycopg2.connect(host='localhost', port=5432, database='applicationdb', user='postgres', password='4258') as conn: with conn.cursor() as cur: # q_all_start = time.time() # for i,que in enumerate(l): # temp = {} # temp['query'] = que # start = time.time() # # post request << # cur.execute(que) # # post request >> # end = time.time() # print("time taken for query {} is {}".format(i+1,(end-start))) # q_all_end = time.time() # print("time taken to run all the queries", q_all_end-q_all_start) #start_time = time.time() print(start_time) counter = 0 for line in data: line = str(line) line = line.replace('\'','\"') try: x = cur.execute(sql, (line,)) except (Exception, psycopg2.DatabaseError) as error: print(error) print(time.time()-start_time) continue dt = time.time() - prev_time if dt > 1: # print or write to log here prev_time = time.time() cur.execute(sql_count) result = cur.fetchone() print(result[0]) conn.commit() # except (Exception, psycopg2.DatabaseError) as error: # print(error) # print(time.time()-start_time) end_time = time.time() print(end_time) print("time taken to load the data is {} seconds".format(end_time-start_time)) ```
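Row-by-row inserts (whether through the HTTP endpoint or the cursor loop above) pay one round trip per record. As a possible alternative for the PostgreSQL load, `psycopg2.extras.execute_values` batches many rows per statement, and `json.dumps` produces valid JSON without the manual quote replacement. This is only a sketch, assuming the same `trajectory_path` table, local credentials and JSON file used above.

```
import json
import time
import psycopg2
from psycopg2.extras import execute_values

with open('./data/trajectory_data_60k.json') as json_file:
    data = json.load(json_file)

# json.dumps always emits double-quoted, valid JSON, so no string replacement is needed
rows = [(json.dumps(line),) for line in data]

start_time = time.time()
with psycopg2.connect(host='localhost', port=5432, database='applicationdb',
                      user='postgres', password='4258') as conn:
    with conn.cursor() as cur:
        # insert in pages of 1000 rows per statement instead of one row per execute()
        execute_values(cur, "insert into trajectory_path(path_info) values %s",
                       rows, page_size=1000)
    conn.commit()
print("bulk load took {:.1f} seconds".format(time.time() - start_time))
```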
github_jupyter
```
import numpy as np
from sklearn.datasets import fetch_olivetti_faces
import matplotlib.pyplot as plt

np.random.seed(42)
%matplotlib inline

data = fetch_olivetti_faces()
x = data.data
y = data.target
print(x.shape)
print(y.shape)

plt.imshow(x[0].reshape(64, 64), cmap='gray')

# Looking at a random set of images
fig = plt.figure(figsize=(9, 9))
cols = 4
rows = 5
for ind in range(1, cols*rows+1):
    img = x[np.random.randint(x.shape[0])].reshape(64, 64)
    fig.add_subplot(rows, cols, ind)
    plt.imshow(img, cmap='gray')
    plt.axis("off")
plt.show()

x.shape

# Splitting into train and test sets while keeping the class proportions equal
from sklearn.model_selection import StratifiedShuffleSplit

split_test = StratifiedShuffleSplit(n_splits=1, test_size=0.1, random_state=42)
for train_valid_ind, test_ind in split_test.split(x, y):
    x_train_valid, x_test = x[train_valid_ind], x[test_ind]
    y_train_valid, y_test = y[train_valid_ind], y[test_ind]

split_valid = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
for train_ind, valid_ind in split_valid.split(x_train_valid, y_train_valid):
    x_train, x_valid = x_train_valid[train_ind], x_train_valid[valid_ind]
    y_train, y_valid = y_train_valid[train_ind], y_train_valid[valid_ind]
```

### PCA Reduction

```
from sklearn.decomposition import PCA

pca = PCA(n_components=0.99)
x_train_pca = pca.fit_transform(x_train)
x_valid_pca = pca.transform(x_valid)

def plot_faces(faces, label, n_rows = 4, n_cols = 5):
    plt.figure(figsize=(8, 5))
    for index, (face, label) in enumerate(zip(faces, label)):
        plt.subplot(n_rows, n_cols, index+1)
        plt.imshow(face.reshape(64, 64), cmap='gray')
        plt.axis("off")
        plt.title(label)
    plt.show()
```

### Modifying Images

```
from scipy import ndimage

# rotate, flip and darken the images
# flipping and darkening follow the reference solution, since they turned out to be easier
x_transformed = []
for face in x_train[:20]:
    transform = ndimage.rotate(face.reshape(64, 64), angle=np.random.choice([90, 180]), mode='constant')[:,::-1]
    transform[:, 1:-1] *= np.random.choice([1, 0.3])
    x_transformed.append(transform)
x_transformed = np.array(x_transformed)

def error(pca, x):
    x_pca = pca.transform(x)
    x_reconstruct = pca.inverse_transform(x_pca)
    return np.square(x_reconstruct - x).mean(axis=-1)

error(pca, x_train[:20]).mean()

error(pca, x_transformed.reshape(-1, 4096)).mean()
```

### The reconstruction error is not large

```
plot_faces(x_transformed, y_train[:20])

x_transformed_pca = pca.transform(x_transformed.reshape(-1, 4096))
plot_faces(pca.inverse_transform(x_transformed_pca), y_train[:20])
```

### All reconstructed images look similar because PCA has captured only one face alignment (that of the upright training images), while the transformed images have varying alignments
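One way to read the two error values above is as an anomaly score: faces whose reconstruction error sits far above the range seen on the training set are likely rotated, flipped or darkened. The cell below is a sketch that reuses the `error`, `pca`, `x_train` and `x_transformed` objects defined in this notebook; the 3-sigma cut-off is just a common heuristic, not part of the original analysis.

```
# Flag faces whose PCA reconstruction error is unusually high
train_errors = error(pca, x_train)
threshold = train_errors.mean() + 3 * train_errors.std()   # 3-sigma cut-off (heuristic)

transformed_errors = error(pca, x_transformed.reshape(-1, 4096))
flagged = transformed_errors > threshold
print("{} of {} transformed faces exceed the threshold {:.5f}".format(
    flagged.sum(), len(flagged), threshold))
```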
github_jupyter
``` %matplotlib inline ``` Creating Extensions Using numpy and scipy ========================================= **Author**: `Adam Paszke <https://github.com/apaszke>`_ **Updated by**: `Adam Dziedzic <https://github.com/adam-dziedzic>`_ In this tutorial, we shall go through two tasks: 1. Create a neural network layer with no parameters. - This calls into **numpy** as part of its implementation 2. Create a neural network layer that has learnable weights - This calls into **SciPy** as part of its implementation ``` import torch from torch.autograd import Function ``` Parameter-less example ---------------------- This layer doesn’t particularly do anything useful or mathematically correct. It is aptly named BadFFTFunction **Layer Implementation** ``` from numpy.fft import rfft2, irfft2 class BadFFTFunction(Function): @staticmethod def forward(ctx, input): numpy_input = input.detach().numpy() result = abs(rfft2(numpy_input)) return input.new(result) @staticmethod def backward(ctx, grad_output): numpy_go = grad_output.numpy() result = irfft2(numpy_go) return grad_output.new(result) # since this layer does not have any parameters, we can # simply declare this as a function, rather than as an nn.Module class def incorrect_fft(input): return BadFFTFunction.apply(input) ``` **Example usage of the created layer:** ``` input = torch.randn(8, 8, requires_grad=True) result = incorrect_fft(input) print(result) result.backward(torch.randn(result.size())) print(input) ``` Parametrized example -------------------- In deep learning literature, this layer is confusingly referred to as convolution while the actual operation is cross-correlation (the only difference is that filter is flipped for convolution, which is not the case for cross-correlation). Implementation of a layer with learnable weights, where cross-correlation has a filter (kernel) that represents weights. The backward pass computes the gradient wrt the input and the gradient wrt the filter. 
``` from numpy import flip import numpy as np from scipy.signal import convolve2d, correlate2d from torch.nn.modules.module import Module from torch.nn.parameter import Parameter class ScipyConv2dFunction(Function): @staticmethod def forward(ctx, input, filter, bias): # detach so we can cast to NumPy input, filter, bias = input.detach(), filter.detach(), bias.detach() result = correlate2d(input.numpy(), filter.numpy(), mode='valid') result += bias.numpy() ctx.save_for_backward(input, filter, bias) return torch.as_tensor(result, dtype=input.dtype) @staticmethod def backward(ctx, grad_output): grad_output = grad_output.detach() input, filter, bias = ctx.saved_tensors grad_output = grad_output.numpy() grad_bias = np.sum(grad_output, keepdims=True) grad_input = convolve2d(grad_output, filter.numpy(), mode='full') # the previous line can be expressed equivalently as: # grad_input = correlate2d(grad_output, flip(flip(filter.numpy(), axis=0), axis=1), mode='full') grad_filter = correlate2d(input.numpy(), grad_output, mode='valid') return torch.from_numpy(grad_input), torch.from_numpy(grad_filter).to(torch.float), torch.from_numpy(grad_bias).to(torch.float) class ScipyConv2d(Module): def __init__(self, filter_width, filter_height): super(ScipyConv2d, self).__init__() self.filter = Parameter(torch.randn(filter_width, filter_height)) self.bias = Parameter(torch.randn(1, 1)) def forward(self, input): return ScipyConv2dFunction.apply(input, self.filter, self.bias) ``` **Example usage:** ``` module = ScipyConv2d(3, 3) print("Filter and bias: ", list(module.parameters())) input = torch.randn(10, 10, requires_grad=True) output = module(input) print("Output from the convolution: ", output) output.backward(torch.randn(8, 8)) print("Gradient for the input map: ", input.grad) ``` **Check the gradients:** ``` from torch.autograd.gradcheck import gradcheck moduleConv = ScipyConv2d(3, 3) input = [torch.randn(20, 20, dtype=torch.double, requires_grad=True)] test = gradcheck(moduleConv, input, eps=1e-6, atol=1e-4) print("Are the gradients correct: ", test) ```
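Since `ScipyConv2d` exposes its filter and bias as `Parameter`s, it can be optimized like any other module. The sketch below runs a few SGD steps against a random target (purely illustrative) to confirm that the parameters receive gradients and get updated.

```
import torch

module = ScipyConv2d(3, 3)
optimizer = torch.optim.SGD(module.parameters(), lr=0.01)

input = torch.randn(10, 10)
target = torch.randn(8, 8)   # 'valid' correlation of a 10x10 input with a 3x3 filter gives 8x8

for step in range(5):
    optimizer.zero_grad()
    output = module(input)
    loss = ((output - target) ** 2).mean()
    loss.backward()
    optimizer.step()
    print("step {}: loss = {:.4f}".format(step, loss.item()))
```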
github_jupyter
##### Copyright 2018 The TensorFlow Authors. Licensed under the Apache License, Version 2.0 (the "License"); ``` #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" } # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ``` # TensorFlow 2.0 での tf.function と AutoGraph <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/beta/guide/autograph"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/ja/beta/guide/autograph.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/ja/beta/guide/autograph.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> </table> Note: これらのドキュメントは私たちTensorFlowコミュニティが翻訳したものです。コミュニティによる 翻訳は**ベストエフォート**であるため、この翻訳が正確であることや[英語の公式ドキュメント](https://www.tensorflow.org/?hl=en)の 最新の状態を反映したものであることを保証することはできません。 この翻訳の品質を向上させるためのご意見をお持ちの方は、GitHubリポジトリ[tensorflow/docs](https://github.com/tensorflow/docs)にプルリクエストをお送りください。 コミュニティによる翻訳やレビューに参加していただける方は、 [[email protected] メーリングリスト](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-ja)にご連絡ください。 TensorFlow 2.0 では Eager Execution の使いやすさとTensorFlow 1.0 のパワーとを同時に提供します。この統合の中核となるのは `tf.function` です。これは Python の構文のサブセットを移植可能でハイパフォーマンスな TensorFlow のグラフに変換します。 `tf.function`の魅力的な特徴に AutoGraph があります。これはグラフを Python の構文そのものを用いて記述できるようにします。 AutoGraph で利用可能な Python の機能の一覧は、[AutoGraph Capabilities and Limitations (Autograph の性能と制限事項)](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/autograph/LIMITATIONS.md) で確認できます。また、`tf.function`の詳細については RFC [TF 2.0: Functions, not Sessions](https://github.com/tensorflow/community/blob/master/rfcs/20180918-functions-not-sessions-20.md) を参照してください。AutoGraph の詳細については `tf.autograph` を参照してください。 このチュートリアルでは `tf.function` と AutoGraph の基本的な特徴についてひととおり確認します。 ## セットアップ TensorFlow 2.0 Preview Nightly をインポートして、TF 2.0 モードを有効にしてください。 ``` from __future__ import absolute_import, division, print_function, unicode_literals import numpy as np !pip install tensorflow==2.0.0-beta1 import tensorflow as tf ``` ## `tf.function` デコレータ `tf.function`を用いてある関数にアノテーションを付けたとしても、一般の関数と変わらずに呼び出せます。一方、実行時にはその関数はグラフへとコンパイルされます。これにより、より高速な実行や、 GPU や TPU での実行、SavedModel へのエクスポートといった利点が得られます。 ``` @tf.function def simple_nn_layer(x, y): return tf.nn.relu(tf.matmul(x, y)) x = tf.random.uniform((3, 3)) y = tf.random.uniform((3, 3)) simple_nn_layer(x, y) ``` アノテーションの結果を調べてみると、 TensorFlow ランタイムとのやり取りのすべてを処理する特別な呼び出し可能オブジェクトを確認できます。 ``` simple_nn_layer ``` 記述したコードで複数の関数を利用していたとしても、すべての関数にアノテーションを付ける必要はありません。アノテーションをつけた関数から呼び出されるすべての関数は、グラフモードで実行されます。 ``` def linear_layer(x): return 2 * x + 1 @tf.function def deep_net(x): return tf.nn.relu(linear_layer(x)) deep_net(tf.constant((1, 2, 3))) ``` グラフが大量の軽量な演算から構成される場合、関数は Eager Execution で実行するコードよりも高速になる場合があります。しかし、 graph が少量の (畳み込み演算のような) 
計算に時間のかかる演算からなる場合、高速化はそれほど見込めないでしょう。 ``` import timeit conv_layer = tf.keras.layers.Conv2D(100, 3) @tf.function def conv_fn(image): return conv_layer(image) image = tf.zeros([1, 200, 200, 100]) # warm up conv_layer(image); conv_fn(image) print("Eager conv:", timeit.timeit(lambda: conv_layer(image), number=10)) print("Function conv:", timeit.timeit(lambda: conv_fn(image), number=10)) print("Note how there's not much difference in performance for convolutions") lstm_cell = tf.keras.layers.LSTMCell(10) @tf.function def lstm_fn(input, state): return lstm_cell(input, state) input = tf.zeros([10, 10]) state = [tf.zeros([10, 10])] * 2 # warm up lstm_cell(input, state); lstm_fn(input, state) print("eager lstm:", timeit.timeit(lambda: lstm_cell(input, state), number=10)) print("function lstm:", timeit.timeit(lambda: lstm_fn(input, state), number=10)) ``` ## Python の制御フローの利用 `tf.function`の内部でデータに依存した制御フローを用いる場合、Pythonの制御フロー構文を用いることができます。AutoGraph はそれらの構文を TensorFlow の Ops に書き換えます。たとえば、 `Tensor` に依存する `if` 文は、`tf.cond()` に変換されます。 次の例では `x` は `Tensor` です。ですが、 `if` 文は期待するどおりに動作しています。 ``` @tf.function def square_if_positive(x): if x > 0: x = x * x else: x = 0 return x print('square_if_positive(2) = {}'.format(square_if_positive(tf.constant(2)))) print('square_if_positive(-2) = {}'.format(square_if_positive(tf.constant(-2)))) ``` Note: この例ではスカラー値を用いた単純な条件を用いています。実利用する場合、典型的には<a href="#batching">バッチ処理</a>が用いられます。 AutoGraph は `while`, `for`, `if`, `break`, `continue`, `return` といった典型的なPythonの構文をサポートしています。また、これらを入れ子にして利用する場合もサポートしています。つまり、`Tensor`を返す式を`while` 文や `if` 文の条件式として用いることが可能です。また、`for` 文で `Tensor` の要素に渡って反復することも可能です。 ``` @tf.function def sum_even(items): s = 0 for c in items: if c % 2 > 0: print(c) continue s += c return s sum_even(tf.constant([10, 12, 15, 20])) ``` より高度な使い方をするユーザーのために、AutoGraph は低レベルAPIも提供しています。次の例では AutoGraph が生成したコードを確認できます。 ``` print(tf.autograph.to_code(sum_even.python_function)) ``` 次はより複雑な制御フローの例です。 ``` @tf.function def fizzbuzz(n): msg = tf.constant('') for i in tf.range(n): if tf.equal(i % 3, 0): tf.print('Fizz') elif tf.equal(i % 5, 0): tf.print('Buzz') else: tf.print(i) fizzbuzz(tf.constant(15)) ``` ## Keras での AutoGraph の利用 `tf.function` はオブジェクトのメソッドに対しても利用できます。たとえば、カスタムしたKeras モデルにデコレーターを適用できます、典型的には `call` 関数にアノテーションを付けることで実現できるでしょう。より詳細が必要な場合、`tf.keras` を確認してください。 ``` class CustomModel(tf.keras.models.Model): @tf.function def call(self, input_data): if tf.reduce_mean(input_data) > 0: return input_data else: return input_data // 2 model = CustomModel() model(tf.constant([-2, -4])) ``` ## 副作用 Eager モードのように、通常の場合 `tf.function` の中で、`tf.assign` や `tf.print` といった副作用のある命令を実行できます。また、実行時の順序を保つために、処理順について必要な依存関係を書き加えます。 ``` v = tf.Variable(5) @tf.function def find_next_odd(): v.assign(v + 1) if tf.equal(v % 2, 0): v.assign(v + 1) find_next_odd() v ``` ## 例: シンプルなモデルの学習 AutoGraph はこれまで見てきたよりもずっと多くの演算を TensorFlow の内部で実行できます。たとえば、学習のためのループ処理は単に制御フローなので、実際にそれを TensorFlow に持ち込んで処理できます。 ### データのダウンロード ``` def prepare_mnist_features_and_labels(x, y): x = tf.cast(x, tf.float32) / 255.0 y = tf.cast(y, tf.int64) return x, y def mnist_dataset(): (x, y), _ = tf.keras.datasets.mnist.load_data() ds = tf.data.Dataset.from_tensor_slices((x, y)) ds = ds.map(prepare_mnist_features_and_labels) ds = ds.take(20000).shuffle(20000).batch(100) return ds train_dataset = mnist_dataset() ``` ### モデルの定義 ``` model = tf.keras.Sequential(( tf.keras.layers.Reshape(target_shape=(28 * 28,), input_shape=(28, 28)), tf.keras.layers.Dense(100, activation='relu'), tf.keras.layers.Dense(100, activation='relu'), 
tf.keras.layers.Dense(10))) model.build() optimizer = tf.keras.optimizers.Adam() ``` ### 学習のためのループ処理の定義 ``` compute_loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) compute_accuracy = tf.keras.metrics.SparseCategoricalAccuracy() def train_one_step(model, optimizer, x, y): with tf.GradientTape() as tape: logits = model(x) loss = compute_loss(y, logits) grads = tape.gradient(loss, model.trainable_variables) optimizer.apply_gradients(zip(grads, model.trainable_variables)) compute_accuracy(y, logits) return loss @tf.function def train(model, optimizer): train_ds = mnist_dataset() step = 0 loss = 0.0 accuracy = 0.0 for x, y in train_ds: step += 1 loss = train_one_step(model, optimizer, x, y) if tf.equal(step % 10, 0): tf.print('Step', step, ': loss', loss, '; accuracy', compute_accuracy.result()) return step, loss, accuracy step, loss, accuracy = train(model, optimizer) print('Final step', step, ': loss', loss, '; accuracy', compute_accuracy.result()) ``` ## バッチ処理 実際のアプリケーションにおいて、処理をバッチにまとめることはパフォーマンスの観点から重要です。AutoGraphを用いるのにもっとも適しているコードは、制御フローを _バッチ_ の単位で決定するようなコードです。もし、個々の _要素_ の単位で制御を決定する場合、パフォーマンスを保つために batch API を試してみてください。 一例として、次の Python コードががあったとします。 ``` def square_if_positive(x): return [i ** 2 if i > 0 else i for i in x] square_if_positive(range(-5, 5)) ``` TensorFlowに同等の処理を行わせる場合、次のように記述したくなるかもしれません。 (これは実際には動作します!) ``` @tf.function def square_if_positive_naive(x): result = tf.TensorArray(tf.int32, size=x.shape[0]) for i in tf.range(x.shape[0]): if x[i] > 0: result = result.write(i, x[i] ** 2) else: result = result.write(i, x[i]) return result.stack() square_if_positive_naive(tf.range(-5, 5)) ``` しかし、この場合、次のように書くこともできます。 ``` def square_if_positive_vectorized(x): return tf.where(x > 0, x ** 2, x) square_if_positive_vectorized(tf.range(-5, 5)) ```
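素朴な実装とベクトル化した実装のどちらが速いかは、前述の畳み込みや LSTM の例と同様に `timeit` で確認できます。以下はあくまで一例であり、上のセルで定義した 2 つの関数を前提とします(実際の数値は環境に依存します)。

```
import timeit

x = tf.range(-5, 5)

# warm up (最初の呼び出しでトレースが行われる)
square_if_positive_naive(x)
square_if_positive_vectorized(x)

print("naive:     ", timeit.timeit(lambda: square_if_positive_naive(x), number=100))
print("vectorized:", timeit.timeit(lambda: square_if_positive_vectorized(x), number=100))
```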
github_jupyter
# SageMaker Pipelines to Train a BERT-Based Text Classifier In this lab, we will do the following: * Define a set of Workflow Parameters that can be used to parametrize a Workflow Pipeline * Define a Processing step that performs cleaning and feature engineering, splitting the input data into train and test data sets * Define a Training step that trains a model on the pre-processed train data set * Define a Processing step that evaluates the trained model's performance on the test data set * Define a Register Model step that creates a model package from the estimator and model artifacts used in training * Define a Conditional step that measures a condition based on output from prior steps and conditionally executes the Register Model step * Define and create a Pipeline in a Workflow DAG, with the defined parameters and steps defined * Start a Pipeline execution and wait for execution to complete # Terminology Amazon SageMaker Pipelines support the following steps: * Pipelines - A Directed Acyclic Graph of steps and conditions to orchestrate SageMaker jobs and resource creation. * Processing Job steps - A simplified, managed experience on SageMaker to run data processing workloads, such as feature engineering, data validation, model evaluation, and model interpretation. * Training Job steps - An iterative process that teaches a model to make predictions by presenting examples from a training dataset. * Conditional step execution - Provides conditional execution of branches in a pipeline. * Registering Models - Creates a model package resource in the Model Registry that can be used to create deployable models in Amazon SageMaker. * Parametrized Pipeline executions - Allows pipeline executions to vary by supplied parameters. * Transform Job steps - A batch transform to preprocess datasets to remove noise or bias that interferes with training or inference from your dataset, get inferences from large datasets, and run inference when you don't need a persistent endpoint. # Our BERT Pipeline In the Processing Step, we perform Feature Engineering to create BERT embeddings from the `review_body` text using the pre-trained BERT model, and split the dataset into train, validation and test files. To optimize for Tensorflow training, we saved the files in TFRecord format. In the Training Step, we fine-tune the BERT model to our Customer Reviews Dataset and add a new classification layer to predict the `star_rating` for a given `review_body`. In the Evaluation Step, we take the trained model and a test dataset as input, and produce a JSON file containing classification evaluation metrics. In the Condition Step, we decide whether to register this model if the accuracy of the model, as determined by our evaluation step exceeded some value. ![](./img/bert_sagemaker_pipeline.png) The pipeline that we create follows a typical Machine Learning Application pattern of pre-processing, training, evaluation, and model registration: ![A typical ML Application pipeline](img/pipeline-full.png) # Release Resources ``` %%html <p><b>Shutting down your kernel for this notebook to release resources.</b></p> <button class="sm-command-button" data-commandlinker-command="kernelmenu:shutdown" style="display:none;">Shutdown Kernel</button> <script> try { els = document.getElementsByClassName("sm-command-button"); els[0].click(); } catch(err) { // NoOp } </script> %%javascript try { Jupyter.notebook.save_checkpoint(); Jupyter.notebook.session.delete(); } catch(err) { // NoOp } ```
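The notebook above only describes the pipeline and then releases resources. As a rough illustration of the "Workflow Parameters" idea, the sketch below uses the SageMaker Python SDK v2 `sagemaker.workflow` module; the parameter names and default values are made up for illustration, and the processing/training/condition step objects referenced in the comment are not defined in this notebook.

```
from sagemaker.workflow.parameters import ParameterInteger, ParameterString

# Illustrative parameters; real names/defaults depend on the actual pipeline definition
processing_instance_count = ParameterInteger(name="ProcessingInstanceCount", default_value=1)
processing_instance_type = ParameterString(name="ProcessingInstanceType", default_value="ml.c5.2xlarge")
model_approval_status = ParameterString(name="ModelApprovalStatus", default_value="PendingManualApproval")

# These parameters would then be passed to the Pipeline object together with the
# ProcessingStep / TrainingStep / ConditionStep instances (not shown here), e.g.:
# pipeline = Pipeline(name="BERT-pipeline",
#                     parameters=[processing_instance_count,
#                                 processing_instance_type,
#                                 model_approval_status],
#                     steps=[processing_step, training_step, condition_step])
```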
github_jupyter
``` #These dictionaries describe the local hour of the satellite local_times = {"aquaDay":"13:30", "terraDay":"10:30", "terraNight":"22:30", "aquaNight":"01:30" } # and are used to load the correct file for dealing with the date-line. min_hours = {"aquaDay":2, "terraDay":-1, "aquaNight":-1, "terraNight":11} max_hours = {"aquaDay":24, "terraDay":22, "aquaNight":13, "terraNight":24} import xarray as xr import pandas as pd import numpy as np import matplotlib.pyplot as plt #Data loader for the satellite data, #returns a complete global map (regular lat-lon) #but sparsely filled, with only one stripe of data #showing where the satellite passed that hour def get_satellite_slice(date : str, utc_hour : int, satellite='aquaDay', latitude_bound = None #Recommend only using |lat| < 70 degrees ): #Due to crossing of the datetime, some times will be saved different date if utc_hour < min_hours[satellite]: file_date = str((np.datetime64(date) - np.timedelta64(1,'D'))) elif utc_hour > max_hours[satellite]: file_date = str((np.datetime64(date) + np.timedelta64(1,'D'))) else: file_date = date #print ('the UTC hour is', utc_hour) #print ('the file date is', file_date) #Open .tif file sat_xr = xr.open_rasterio(f'{satellite_folder}/{satellite}_errorGTE03K_04km_{file_date}.tif') #Rename spatial dimensions sat_xr = sat_xr.rename({'x':'longitude','y':'latitude'}) #Create time delta to change local to UTC time_delta = pd.to_timedelta(sat_xr.longitude.data/15,unit='H') #Convert local satellite time to UTC and round to nearest hour time = (pd.to_datetime([file_date + " " + "13:30"]*time_delta.shape[0]) - time_delta).round('H') #display(time) #print(time) #Select desired hour dt = np.datetime64(f'{date} {utc_hour:02}:00:00') right_time = np.expand_dims(time == dt,axis=(0,1)) #print ('right time', right_time.sum()) if right_time.sum() == 0: print("Warning: Correct time not found in dataset, likely problem in file selection") #Make subset subset = np.logical_and(np.isfinite(sat_xr),right_time) if subset.sum() == 0: print(f"Warning: No valid data found for {date} {utc_hour:02}h") if latitude_bound is not None: #print(f"Subsetting < {latitude_bound}") subset = np.logical_and(subset,np.expand_dims(np.abs(sat_xr.latitude) < latitude_bound,axis=(0,-1))) #Select valid data test_subset = sat_xr.where(subset).load() sat_xr.close() sat_xr = None #display(test_subset) #display(test_subset[0,::-1,:]) #display(test_subset.squeeze('band')) return test_subset[0,::-1,:] satellite_folder = '/network/group/aopp/predict/TIP016_PAXTON_RPSPEEDY/ML4L/ECMWF_files/raw/' test_data = get_satellite_slice('2018-01-03',11,latitude_bound=70) plt.figure(dpi=200) #Make grid for plotting purposes xv, yv = np.meshgrid(test_data.longitude, test_data.latitude, indexing='xy') #Contour plot plt.contourf(xv,yv,test_data,levels=np.arange(230,320,10)) plt.xlim(0,100) plt.ylim(-80,-60) plt.colorbar() blob = [True,False] sum(blob) def get_era_data(date : str, utc_hour : int, field = 't2m'): print ('date = ', '_'.join(date.split('-'))) month = '_'.join(date.split('-')[:-1]) print ('month', month) ds_era = xr.open_dataset(f'{era_folder}/sfc_unstructured_{month}.grib',engine='cfgrib') #Grab correct field da = ds_era[field] # time_str = f"{date} {utc_hour:02}:00:00" print (time_str) da = da.sel(time=time_str) #Relabel longitude coordinate to be consistent with MODIS da = da.assign_coords({"longitude": (((da.longitude + 180) % 360) - 180)}) #Load data, perhaps this is too early? 
da = da.load() #Close file, attempt to not have memory leaks ds_era.close() ds_era = None return da era_folder = '/network/group/aopp/predict/TIP016_PAXTON_RPSPEEDY/ML4L/ECMWF_files/raw/' da = get_era_data('2019-01-01',10) ``` --- ``` root = '/network/group/aopp/predict/TIP016_PAXTON_RPSPEEDY/ML4L/ECMWF_files/raw/' f = 'aquaDay_errorGTE03K_04km_2019-01-01.tif' sat_xr = xr.open_rasterio(root+f) #load the temperature data array sat_xr = sat_xr.rename({'x':'longitude','y':'latitude'}) time_delta = pd.to_timedelta(sat_xr.longitude.data/15,unit='H') file_date = '2019-01-01' local_times = "13:30" time = (pd.to_datetime([file_date + " " + local_times]*time_delta.shape[0]) - time_delta).round('H') time #Select desired hour date = '2019-01-01' utc_hour = 10 dt = np.datetime64(f'{date} {utc_hour:02}:00:00') right_time = np.expand_dims(time == dt,axis=(0,1)) if right_time.sum() == 0: print("Warning: Correct time not found in dataset, likely problem in file selection") #Make subset subset = np.logical_and(np.isfinite(sat_xr),right_time) subset ```
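Because each hourly slice only contains the narrow stripe the satellite observed during that hour, it can be useful to check how much valid data each UTC hour contributes before pairing it with the ERA fields. The cell below is a sketch that reuses `get_satellite_slice` as defined above; the date is just an example and it assumes the neighbouring daily files are present in the data folder.

```
import numpy as np

check_date = '2018-01-03'
valid_counts = {}
for hour in range(24):
    slice_ = get_satellite_slice(check_date, hour, latitude_bound=70)
    valid_counts[hour] = int(np.isfinite(slice_.values).sum())

for hour, n in valid_counts.items():
    print("{} {:02d}:00 UTC -> {} valid pixels".format(check_date, hour, n))
```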
github_jupyter
```
class Person:
    def __init__(self, fname, lname):
        self.firstname = fname
        self.lastname = lname

    def printname(self):
        print(self.firstname, self.lastname)

class Student(Person):
    def __init__(self, fname, lname, year):
        super().__init__(fname, lname)
        self.graduationyear = year

x = Student("naz", "kashaf", 2021)
print(x.graduationyear)
```
# Question breakdown
In this code we will create a child class that accesses its parent class.

# Create a class "Student" with the following parameters: i. Name ii. Batch No iii. Address

## Create a new child class "DevnationStudents"
1. Call the parent class constructor and keep the "courses" and "marks" lists empty to start with
2. Add only those courses to the list which are not already in it
3. Marks are added as dicts
4. The marks list should not contain duplicate entries for the same course

```
class Student:
    def __init__(self, name, batch_no, address):
        self.name = name
        self.batch_no = batch_no
        self.address = address

    def set_name(self, name):
        self.name = name

    def set_batch_no(self, batch_no):
        self.batch_no = batch_no

    def set_address(self, address):
        self.address = address

    def get_name(self):
        return self.name

    def get_batch_no(self):
        return self.batch_no

    def get_address(self):
        return self.address

class Devnationstudents(Student):
    def __init__(self, name, batch_no, address, courses=None, marks=None):
        super().__init__(name, batch_no, address)
        # avoid shared mutable default arguments; per the task, marks starts out empty
        self.courses = list(courses) if courses else []
        self.marks = []

    def set_courses(self, courses):
        for course in courses:
            if course not in self.courses:
                self.courses.append(course)

    def check_point(self, point):
        # True if marks for this course have already been recorded
        for entry in self.marks:
            if entry["course"] == point:
                return True
        return False

    def set_marks(self, marks):
        for entry in marks:
            if entry["course"] in self.courses and not self.check_point(entry["course"]):
                self.marks.append(entry)

    def get_courses(self):
        return self.courses

    def get_marks(self):
        return self.marks

x = Devnationstudents("kashaf", "naz", "swl")
print(x)

out_put = Devnationstudents("kashaf", 4, "Swl", ['data science','full stack'], [30, 40 , 20])
out_put.get_courses()

out_put.set_courses(["Python", "R" ,'full stack'])
out_put.get_courses()
```
github_jupyter
# GradientBoostingRegressor with MinMaxScaler This Code template is for regression analysis using a simple GradientBoostingRegressor based on the Gradient Boosting Ensemble Learning Technique using MinMaxScalerfor Feature Rescaling. ### Required Packages ``` import warnings import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as se from sklearn.preprocessing import MinMaxScaler from sklearn.model_selection import train_test_split from sklearn.ensemble import GradientBoostingRegressor from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error warnings.filterwarnings('ignore') ``` ### Initialization Filepath of CSV file ``` #filepath file_path = "" ``` List of features which are required for model training . ``` #x_values features=[] ``` Target feature for prediction. ``` #y_value target = '' ``` ### Data Fetching Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools. We will use panda's library to read the CSV file using its storage path.And we use the head function to display the initial row or entry. ``` df=pd.read_csv(file_path) df.head() ``` ### Feature Selections It is the process of reducing the number of input variables when developing a predictive model. Used to reduce the number of input variables to both reduce the computational cost of modelling and, in some cases, to improve the performance of the model. We will assign all the required input features to X and target/outcome to Y. ``` X = df[features] Y = df[target] ``` ### Data Preprocessing Since the majority of the machine learning models in the Sklearn library doesn't handle string category data and Null value, we have to explicitly remove or replace null values. The below snippet have functions, which removes the null value if any exists. And convert the string classes data in the datasets by encoding them to integer classes. ``` def NullClearner(df): if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])): df.fillna(df.mean(),inplace=True) return df elif(isinstance(df, pd.Series)): df.fillna(df.mode()[0],inplace=True) return df else:return df def EncodeX(df): return pd.get_dummies(df) ``` Calling preprocessing functions on the feature and target set. ``` x=X.columns.to_list() for i in x: X[i]=NullClearner(X[i]) X=EncodeX(X) Y=NullClearner(Y) X.head() ``` #### Correlation Map In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns. ``` f,ax = plt.subplots(figsize=(18, 18)) matrix = np.triu(X.corr()) se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix) plt.show() ``` ### Data Splitting The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data. ``` X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size = 0.2, random_state = 123)#performing datasplitting ``` ## Data Rescaling MinMaxScaler subtracts the minimum value in the feature and then divides by the range, where range is the difference between the original maximum and original minimum. 
We will fit an object of MinMaxScaler to train data then transform the same data via fit_transform(X_train) method, following which we will transform test data via transform(X_test) method. ``` minmax_scaler = MinMaxScaler() X_train = minmax_scaler.fit_transform(X_train) X_test = minmax_scaler.transform(X_test) ``` ### Model Gradient Boosting builds an additive model in a forward stage-wise fashion; it allows for the optimization of arbitrary differentiable loss functions. In each stage a regression tree is fit on the negative gradient of the given loss function. #### Model Tuning Parameters 1. loss : {‘ls’, ‘lad’, ‘huber’, ‘quantile’}, default=’ls’ > Loss function to be optimized. ‘ls’ refers to least squares regression. ‘lad’ (least absolute deviation) is a highly robust loss function solely based on order information of the input variables. ‘huber’ is a combination of the two. ‘quantile’ allows quantile regression (use `alpha` to specify the quantile). 2. learning_ratefloat, default=0.1 > Learning rate shrinks the contribution of each tree by learning_rate. There is a trade-off between learning_rate and n_estimators. 3. n_estimators : int, default=100 > The number of trees in the forest. 4. criterion : {‘friedman_mse’, ‘mse’, ‘mae’}, default=’friedman_mse’ > The function to measure the quality of a split. Supported criteria are ‘friedman_mse’ for the mean squared error with improvement score by Friedman, ‘mse’ for mean squared error, and ‘mae’ for the mean absolute error. The default value of ‘friedman_mse’ is generally the best as it can provide a better approximation in some cases. 5. max_depth : int, default=3 > The maximum depth of the individual regression estimators. The maximum depth limits the number of nodes in the tree. Tune this parameter for best performance; the best value depends on the interaction of the input variables. 6. max_features : {‘auto’, ‘sqrt’, ‘log2’}, int or float, default=None > The number of features to consider when looking for the best split: 7. random_state : int, RandomState instance or None, default=None > Controls both the randomness of the bootstrapping of the samples used when building trees (if <code>bootstrap=True</code>) and the sampling of the features to consider when looking for the best split at each node (if `max_features < n_features`). 8. verbose : int, default=0 > Controls the verbosity when fitting and predicting. 9. n_iter_no_change : int, default=None > <code>n_iter_no_change</code> is used to decide if early stopping will be used to terminate training when validation score is not improving. By default it is set to None to disable early stopping. If set to a number, it will set aside <code>validation_fraction</code> size of the training data as validation and terminate training when validation score is not improving in all of the previous <code>n_iter_no_change</code> numbers of iterations. The split is stratified. 10. tol : float, default=1e-4 > Tolerance for the early stopping. When the loss is not improving by at least tol for <code>n_iter_no_change</code> iterations (if set to a number), the training stops. ``` # Build Model here model = GradientBoostingRegressor(random_state = 123) model.fit(X_train, y_train) ``` #### Model Accuracy We will use the trained model to make a prediction on the test set.Then use the predicted value for measuring the accuracy of our model. > **score**: The **score** function returns the coefficient of determination <code>R<sup>2</sup></code> of the prediction. 
``` print("Accuracy score {:.2f} %\n".format(model.score(X_test,y_test)*100)) ``` > **r2_score**: The **r2_score** function computes the percentage variablility explained by our model, either the fraction or the count of correct predictions. > **mae**: The **mean abosolute error** function calculates the amount of total error(absolute average distance between the real data and the predicted data) by our model. > **mse**: The **mean squared error** function squares the error(penalizes the model for large errors) by our model. ``` y_pred=model.predict(X_test) print("R2 Score: {:.2f} %".format(r2_score(y_test,y_pred)*100)) print("Mean Absolute Error {:.2f}".format(mean_absolute_error(y_test,y_pred))) print("Mean Squared Error {:.2f}".format(mean_squared_error(y_test,y_pred))) ``` #### Feature Importances The Feature importance refers to techniques that assign a score to features based on how useful they are for making the prediction. ``` plt.figure(figsize=(8,6)) n_features = len(X.columns) plt.barh(range(n_features), model.feature_importances_, align='center') plt.yticks(np.arange(n_features), X.columns) plt.xlabel("Feature importance") plt.ylabel("Feature") plt.ylim(-1, n_features) ``` #### Prediction Plot First, we make use of a plot to plot the actual observations, with x_train on the x-axis and y_train on the y-axis. For the regression line, we will use x_train on the x-axis and then the predictions of the x_train observations on the y-axis. ``` plt.figure(figsize=(14,10)) plt.plot(range(20),y_test[0:20], color = "blue") plt.plot(range(20),model.predict(X_test[0:20]), color = "red") plt.legend(["Actual","prediction"]) plt.title("Predicted vs True Value") plt.xlabel("Record number") plt.ylabel(target) plt.show() ``` #### Creator: Ganapathi Thota , Github: [Profile](https://github.com/Shikiz)
github_jupyter
# Plagiarism Text Data In this project, you will be tasked with building a plagiarism detector that examines a text file and performs binary classification; labeling that file as either plagiarized or not, depending on how similar the text file is when compared to a provided source text. The first step in working with any dataset is loading the data in and noting what information is included in the dataset. This is an important step in eventually working with this data, and knowing what kinds of features you have to work with as you transform and group the data! So, this notebook is all about exploring the data and noting patterns about the features you are given and the distribution of data. > There are not any exercises or questions in this notebook, it is only meant for exploration. This notebook will note be required in your final project submission. --- ## Read in the Data The cell below will download the necessary data and extract the files into the folder `data/`. This data is a slightly modified version of a dataset created by Paul Clough (Information Studies) and Mark Stevenson (Computer Science), at the University of Sheffield. You can read all about the data collection and corpus, at [their university webpage](https://ir.shef.ac.uk/cloughie/resources/plagiarism_corpus.html). > **Citation for data**: Clough, P. and Stevenson, M. Developing A Corpus of Plagiarised Short Answers, Language Resources and Evaluation: Special Issue on Plagiarism and Authorship Analysis, In Press. [Download] ``` !wget https://s3.amazonaws.com/video.udacity-data.com/topher/2019/January/5c4147f9_data/data.zip !unzip data # import libraries import pandas as pd import numpy as np import os ``` This plagiarism dataset is made of multiple text files; each of these files has characteristics that are is summarized in a `.csv` file named `file_information.csv`, which we can read in using `pandas`. ``` csv_file = 'data/file_information.csv' plagiarism_df = pd.read_csv(csv_file) # print out the first few rows of data info plagiarism_df.head(10) ``` ## Types of Plagiarism Each text file is associated with one **Task** (task A-E) and one **Category** of plagiarism, which you can see in the above DataFrame. ### Five task types, A-E Each text file contains an answer to one short question; these questions are labeled as tasks A-E. * Each task, A-E, is about a topic that might be included in the Computer Science curriculum that was created by the authors of this dataset. * For example, Task A asks the question: "What is inheritance in object oriented programming?" ### Four categories of plagiarism Each text file has an associated plagiarism label/category: 1. `cut`: An answer is plagiarized; it is copy-pasted directly from the relevant Wikipedia source text. 2. `light`: An answer is plagiarized; it is based on the Wikipedia source text and includes some copying and paraphrasing. 3. `heavy`: An answer is plagiarized; it is based on the Wikipedia source text but expressed using different words and structure. Since this doesn't copy directly from a source text, this will likely be the most challenging kind of plagiarism to detect. 4. `non`: An answer is not plagiarized; the Wikipedia source text is not used to create this answer. 5. `orig`: This is a specific category for the original, Wikipedia source text. We will use these files only for comparison purposes. > So, out of the submitted files, the only category that does not contain any plagiarism is `non`. In the next cell, print out some statistics about the data. 
``` # print out some stats about the data print('Number of files: ', plagiarism_df.shape[0]) # .shape[0] gives the rows # .unique() gives unique items in a specified column print('Number of unique tasks/question types (A-E): ', (len(plagiarism_df['Task'].unique()))) print('Unique plagiarism categories: ', (plagiarism_df['Category'].unique())) ``` You should see the number of text files in the dataset as well as some characteristics about the `Task` and `Category` columns. **Note that the file count of 100 *includes* the 5 _original_ wikipedia files for tasks A-E.** If you take a look at the files in the `data` directory, you'll notice that the original, source texts start with the filename `orig_` as opposed to `g` for "group." > So, in total there are 100 files, 95 of which are answers (submitted by people) and 5 of which are the original, Wikipedia source texts. Your end goal will be to use this information to classify any given answer text into one of two categories, plagiarized or not-plagiarized. ### Distribution of Data Next, let's look at the distribution of data. In this course, we've talked about traits like class imbalance that can inform how you develop an algorithm. So, here, we'll ask: **How evenly is our data distributed among different tasks and plagiarism levels?** Below, you should notice two things: * Our dataset is quite small, especially with respect to examples of varying plagiarism levels. * The data is distributed fairly evenly across task and plagiarism types. ``` # Show counts by different tasks and amounts of plagiarism # group and count by task counts_per_task=plagiarism_df.groupby(['Task']).size().reset_index(name="Counts") print("\nTask:") display(counts_per_task) # group by plagiarism level counts_per_category=plagiarism_df.groupby(['Category']).size().reset_index(name="Counts") print("\nPlagiarism Levels:") display(counts_per_category) # group by task AND plagiarism level counts_task_and_plagiarism=plagiarism_df.groupby(['Task', 'Category']).size().reset_index(name="Counts") print("\nTask & Plagiarism Level Combos :") display(counts_task_and_plagiarism) ``` It may also be helpful to look at this last DataFrame, graphically. Below, you can see that the counts follow a pattern broken down by task. Each task has one source text (original) and the highest number on `non` plagiarized cases. ``` import matplotlib.pyplot as plt % matplotlib inline # counts group = ['Task', 'Category'] counts = plagiarism_df.groupby(group).size().reset_index(name="Counts") plt.figure(figsize=(8,5)) plt.bar(range(len(counts)), counts['Counts'], color = 'blue') ``` ## Up Next This notebook is just about data loading and exploration, and you do not need to include it in your final project submission. In the next few notebooks, you'll use this data to train a complete plagiarism classifier. You'll be tasked with extracting meaningful features from the text data, reading in answers to different tasks and comparing them to the original Wikipedia source text. You'll engineer similarity features that will help identify cases of plagiarism. Then, you'll use these features to train and deploy a classification model in a SageMaker notebook instance.
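To give a flavour of the similarity features mentioned above (a sketch only; the real feature engineering happens in the later notebooks), an n-gram containment score can be computed with `CountVectorizer`: the fraction of an answer's n-grams that also occur in the source text. The two sentences in the example are made up.

```
from sklearn.feature_extraction.text import CountVectorizer
import numpy as np

def containment(answer_text, source_text, n=1):
    '''Fraction of the answer's n-grams that also appear in the source text.'''
    vectorizer = CountVectorizer(analyzer='word', ngram_range=(n, n))
    ngram_counts = vectorizer.fit_transform([answer_text, source_text]).toarray()
    intersection = np.minimum(ngram_counts[0], ngram_counts[1]).sum()
    return intersection / ngram_counts[0].sum()

# toy example with made-up sentences
print(containment("inheritance lets a class reuse the code of another class",
                  "in object oriented programming a class can reuse code through inheritance"))
```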
github_jupyter
#### New to Plotly? Plotly's Python library is free and open source! [Get started](https://plot.ly/python/getting-started/) by downloading the client and [reading the primer](https://plot.ly/python/getting-started/). <br>You can set up Plotly to work in [online](https://plot.ly/python/getting-started/#initialization-for-online-plotting) or [offline](https://plot.ly/python/getting-started/#initialization-for-offline-plotting) mode, or in [jupyter notebooks](https://plot.ly/python/getting-started/#start-plotting-online). <br>We also have a quick-reference [cheatsheet](https://images.plot.ly/plotly-documentation/images/python_cheat_sheet.pdf) (new!) to help you get started! #### Version Check Note: Tables are available in version <b>2.1.0+</b><br> Run `pip install plotly --upgrade` to update your Plotly version ``` import plotly plotly.__version__ ``` #### Import CSV Data ``` import pandas as pd import re import plotly.plotly as py import plotly.graph_objs as go df = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/Mining-BTC-180.csv') # remove min:sec:millisec from dates for i, row in enumerate(df['Date']): p = re.compile(' 00:00:00') datetime = p.split(df['Date'][i])[0] df.iloc[i, 1] = datetime table = go.Table( header=dict( values=list(df.columns), line = dict(color='rgb(50, 50, 50)'), align = ['left'] * 5, fill = dict(color='#EDFAFF') ), cells=dict( values=[df.iloc[j] for j in range(10)], line = dict(color='rgb(50, 50, 50)'), align = ['left'] * 5, fill = dict(color='#f5f5fa') ) ) py.iplot([table]) ``` #### Table and Right Aligned Plots In Plotly there is no native way to insert a Plotly Table into a Subplot. To do this, create your own `Layout` object and defining multiple `xaxis` and `yaxis` to split up the chart area into different domains. Then for the traces you wish to insert in your final chart, set their `xaxis` and `yaxis` individually to map to the domains definied in the `Layout`. See the example below to see how to align 3 Scatter plots to the right and a Table on the top. 
``` import plotly.plotly as py import plotly.graph_objs as go table_trace1 = go.Table( domain=dict(x=[0, 0.5], y=[0, 1.0]), columnwidth = [30] + [33, 35, 33], columnorder=[0, 1, 2, 3, 4], header = dict(height = 50, values = [['<b>Date</b>'],['<b>Number<br>transactions</b>'], ['<b>Output-volume(BTC)</b>'], ['<b>Market-Price</b>']], line = dict(color='rgb(50, 50, 50)'), align = ['left'] * 5, font = dict(color=['rgb(45, 45, 45)'] * 5, size=14), fill = dict(color='#d562be')), cells = dict(values = [df.iloc[j][1:5] for j in range(25)], line = dict(color='#506784'), align = ['left'] * 5, font = dict(color=['rgb(40, 40, 40)'] * 5, size=12), format = [None] + [", .2f"] * 2 + [',.4f'], prefix = [None] * 2 + ['$', u'\u20BF'], suffix=[None] * 4, height = 27, fill = dict(color=['rgb(235, 193, 238)', 'rgba(228, 222, 249, 0.65)'])) ) trace1=go.Scatter( x=df['Date'], y=df['Hash-rate'], xaxis='x1', yaxis='y1', mode='lines', line=dict(width=2, color='#9748a1'), name='hash-rate-TH/s' ) trace2=go.Scatter( x=df['Date'], y=df['Mining-revenue-USD'], xaxis='x2', yaxis='y2', mode='lines', line=dict(width=2, color='#b04553'), name='mining revenue' ) trace3=go.Scatter( x=df['Date'], y=df['Transaction-fees-BTC'], xaxis='x3', yaxis='y3', mode='lines', line=dict(width=2, color='#af7bbd'), name='transact-fee' ) axis=dict( showline=True, zeroline=False, showgrid=True, mirror=True, ticklen=4, gridcolor='#ffffff', tickfont=dict(size=10) ) layout1 = dict( width=950, height=800, autosize=False, title='Bitcoin mining stats for 180 days', margin = dict(t=100), showlegend=False, xaxis1=dict(axis, **dict(domain=[0.55, 1], anchor='y1', showticklabels=False)), xaxis2=dict(axis, **dict(domain=[0.55, 1], anchor='y2', showticklabels=False)), xaxis3=dict(axis, **dict(domain=[0.55, 1], anchor='y3')), yaxis1=dict(axis, **dict(domain=[0.66, 1.0], anchor='x1', hoverformat='.2f')), yaxis2=dict(axis, **dict(domain=[0.3 + 0.03, 0.63], anchor='x2', tickprefix='$', hoverformat='.2f')), yaxis3=dict(axis, **dict(domain=[0.0, 0.3], anchor='x3', tickprefix=u'\u20BF', hoverformat='.2f')), plot_bgcolor='rgba(228, 222, 249, 0.65)', annotations=[ dict( showarrow=False, text='The last 20 records', xref='paper', yref='paper', x=0, y=1.01, xanchor='left', yanchor='bottom', font=dict(size=15) ) ] ) fig1 = dict(data=[table_trace1, trace1, trace2, trace3], layout=layout1) py.iplot(fig1) ``` #### Vertical Table and Graph Subplot ``` import plotly.plotly as py import plotly.graph_objs as go table_trace2 = go.Table( domain=dict(x=[0, 1], y=[0, 1.0]), columnwidth = [30] + [33, 35, 33], columnorder=[0, 1, 2, 3, 4], header = dict(height = 50, values = [['<b>Date</b>'],['<b>Hash Rate, TH/sec</b>'], ['<b>Mining revenue</b>'], ['<b>Transaction fees</b>']], line = dict(color='rgb(50, 50, 50)'), align = ['left'] * 5, font = dict(color=['rgb(45, 45, 45)'] * 5, size=14), fill = dict(color='#d562be')), cells = dict(values = [df['Date'][-20:], df['Hash-rate'][-20:], df['Mining-revenue-USD'][-20:], df['Transaction-fees-BTC'][-20:]], line = dict(color='#506784'), align = ['left'] * 5, font = dict(color=['rgb(40, 40, 40)'] * 5, size=12), format = [None] + [", .2f"] * 2 + [',.4f'], prefix = [None] * 2 + ['$', u'\u20BF'], suffix=[None] * 4, height = 27, fill = dict(color=['rgb(235, 193, 238)', 'rgba(228, 222, 249, 0.65)'])) ) trace4=go.Scatter( x=df['Date'], y=df['Hash-rate'], xaxis='x1', yaxis='y1', mode='lines', line=dict(width=2, color='#9748a1'), name='hash-rate-TH/s' ) trace5=go.Scatter( x=df['Date'], y=df['Mining-revenue-USD'], xaxis='x2', yaxis='y2', mode='lines', 
line=dict(width=2, color='#b04553'), name='mining revenue' ) trace6=go.Scatter( x=df['Date'], y=df['Transaction-fees-BTC'], xaxis='x3', yaxis='y3', mode='lines', line=dict(width=2, color='#af7bbd'), name='transact-fee' ) axis=dict( showline=True, zeroline=False, showgrid=True, mirror=True, ticklen=4, gridcolor='#ffffff', tickfont=dict(size=10) ) layout2 = dict( width=950, height=800, autosize=False, title='Bitcoin mining stats for 180 days', margin = dict(t=100), showlegend=False, xaxis1=dict(axis, **dict(domain=[0, 1], anchor='y1', showticklabels=False)), xaxis2=dict(axis, **dict(domain=[0, 1], anchor='y2', showticklabels=False)), xaxis3=dict(axis, **dict(domain=[0, 1], anchor='y3')), yaxis1=dict(axis, **dict(domain=[2 * 0.21 + 0.02 + 0.02, 0.68], anchor='x1', hoverformat='.2f')), yaxis2=dict(axis, **dict(domain=[0.21 + 0.02, 2 * 0.21 + 0.02], anchor='x2', tickprefix='$', hoverformat='.2f')), yaxis3=dict(axis, **dict(domain=[0.0, 0.21], anchor='x3', tickprefix=u'\u20BF', hoverformat='.2f')), plot_bgcolor='rgba(228, 222, 249, 0.65)', annotations=[ dict( showarrow=False, text='The last 20 records', xref='paper', yref='paper', x=0.415, y=1.01, xanchor='left', yanchor='bottom', font=dict(size=15) ) ] ) fig2 = dict(data=[table_trace2, trace4, trace5, trace6], layout=layout2) py.iplot(fig2) ``` #### Reference See https://plot.ly/python/reference/#table for more information regarding chart attributes! <br> For examples of Plotly Tables, see: https://plot.ly/python/table/ ``` from IPython.display import display, HTML display(HTML('<link href="//fonts.googleapis.com/css?family=Open+Sans:600,400,300,200|Inconsolata|Ubuntu+Mono:400,700" rel="stylesheet" type="text/css" />')) display(HTML('<link rel="stylesheet" type="text/css" href="http://help.plot.ly/documentation/all_static/css/ipython-notebook-custom.css">')) ! pip install git+https://github.com/plotly/publisher.git --upgrade import publisher publisher.publish( 'table-subplots.ipynb', 'python/table-subplots/', 'Table and Chart Subplots', 'How to create a subplot with tables and charts in Python with Plotly.', title = 'Table and Chart Subplots | plotly', has_thumbnail='true', thumbnail='table_subplots.jpg', language='python', display_as='multiple_axes', order=11) ```
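For what it's worth, in newer Plotly versions (4 and later) tables can be placed in subplots directly, so the manual domain bookkeeping above is no longer required. The cell below is a rough sketch assuming `plotly>=4` and reusing the `df` loaded earlier; it is not part of the original version-2 workflow documented above.

```
import plotly.graph_objects as go
from plotly.subplots import make_subplots

fig = make_subplots(
    rows=2, cols=1,
    specs=[[{"type": "table"}], [{"type": "xy"}]],
    vertical_spacing=0.08)

# table in the top cell of the grid
fig.add_trace(go.Table(
    header=dict(values=["Date", "Hash-rate", "Mining-revenue-USD"]),
    cells=dict(values=[df['Date'][-10:], df['Hash-rate'][-10:], df['Mining-revenue-USD'][-10:]])),
    row=1, col=1)

# line chart in the bottom cell
fig.add_trace(go.Scatter(x=df['Date'], y=df['Hash-rate'], mode='lines', name='hash-rate-TH/s'),
              row=2, col=1)

fig.show()
```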
github_jupyter
### Name: Anjum Rohra # Overview ![it_sector-2.jpg](attachment:it_sector-2.jpg) Being a popular finance journalist of Europe, everyone is waiting for the IT Salary Survey report you release every 3 years. The IT Sector is booming and the younger aspirants keep themselves updated with the trends by the beautiful visualizations your report contains. Given the survey data from 2018 - 2020, it’s time to put your creative hat on and lay out insightful visualizations for the masses. ### Importing the required libraries ``` import plotly.express as px import pandas as pd import dash import dash_core_components as dcc import dash_html_components as html from dash.dependencies import Output, Input ``` ### Importing data ``` data_18 = pd.read_csv('Survey_2018.csv') data_19 = pd.read_csv('Survey_2019.csv') data_20 = pd.read_csv('Survey_2020.csv') ``` ### Imputing the missing values #### 2020 imputation ``` data_20 = data_20.rename(columns = {'Position ': 'Position'}) data_20.isnull().sum() data_20['Age'].fillna(data_20['Age'].mean(), inplace=True) data_20['Gender'].fillna('Male', inplace=True) data_20['Position'].fillna('Software Engineer', inplace=True) data_20['Position'].replace({'Account Managet': 'Account Manager', 'agile master ':'Agile Coach', 'Data analyst ':'Data Analyst', 'Data Analyst ':'Data Analyst', 'Dana Analyst':'Data Analyst', 'Fullstack engineer, ну или Software engineer': 'Fullstack engineer, Software engineer', 'Software Architekt':'Software Architect', 'DatabEngineer':'Data Engineer', 'data engineer':'Data Engineer', 'support engineer':'Support Engineer', 'Systemadministrator':'System Administrator', 'Team lead':'Team Lead', 'Tech Leader':'Tech Lead', 'Technical Lead':'Tech Lead', 'Security engineer':'Security Engineer' }, inplace=True) data_20['Your main technology / programming language'].replace({'Javascript / Typescript':'JavaScript/TypeScript','JavaScript / TypeScript':'JavaScript/TypeScript','TypeScript, JavaScript':'JavaScript/TypeScript','JavaScript/Typescript':'JavaScript/TypeScript','JavaScript / typescript':'JavaScript/TypeScript','Javascript/Typescript':'JavaScript/TypeScript','JavaScript, TypeScript':'JavaScript/TypeScript','kotlin':'Kotlin','Javascript':'JavaScript','JavaScript ':'JavaScript','Javascript ':'JavaScript','javascript':'JavaScript','Typescript':'TypeScript','typescript':'TypeScript','Typescript ':'TypeScript','python':'Python','Python ':'Python','pythin':'Python','python ':'Python','Pyrhon':'Python','scala':'Scala','Java / Scala':'Java/Scala','python, scala':'Scala / Python','C, C++':'C/C++','C++/c':'C/C++','C++, C#':'C/C++','c/c++':'C/C++','--':'Java','-':'Java'}, inplace=True) data_20['Your main technology / programming language'].fillna('Java', inplace=True) data_20['Total years of experience'].replace({'1,5':'1.5','2,5':'2.5','1 (as QA Engineer) / 11 in total':'11','15, thereof 8 as CTO':'15','6 (not as a data scientist, but as a lab scientist)':'6','383':'38.3','less than year':'0'}, inplace=True) data_20['Total years of experience'].mode() data_20['Total years of experience'].unique() data_20['Total years of experience']=data_20['Total years of experience'].astype('float64') data_20['Total years of experience'].median() data_20['Total years of experience'].fillna(data_20['Total years of experience'].median(), inplace=True) data_20['Years of experience in Germany'].unique() data_20['Years of experience in Germany'].replace({'⁰':'0','-':'10','< 1':'<1','1,7':'1.7','2,5':'2.5','0,5':'0.5','1,5':'1.5','4,5':'4.5','3,5':'3.5','4 month':'4 months','less 
than year':'<1'}, inplace=True) data_20['Years of experience in Germany'].mode() data_20['Years of experience in Germany'].fillna('2', inplace=True) data_20['Years of experience in Germany'].unique() data_18.drop('Timestamp',axis=1,inplace=True) data_19.drop('Zeitstempel',axis=1,inplace=True) data_20.drop('Timestamp',axis=1,inplace=True) data_20['Seniority level'].fillna('Senior', inplace=True) data_20['Other technologies/programming languages you use often'].fillna('Javascript / Typescript', inplace=True) data_20['Yearly bonus + stocks in EUR'].fillna('0', inplace=True) data_20['Annual bonus+stocks one year ago. Only answer if staying in same country'].fillna('0', inplace=True) data_20['Number of vacation days'].fillna('30', inplace=True) data_20['Employment status'].fillna('Full-time employee', inplace=True) data_20['Contract duration'].fillna('Unlimited contract', inplace=True) data_20['Main language at work'].fillna('English', inplace=True) data_20['Company size'].fillna('1000+', inplace=True) data_20['Company type'].fillna('Product', inplace=True) data_20['Have you lost your job due to the coronavirus outbreak?'].fillna('No', inplace=True) data_20['Have you received additional monetary support from your employer due to Work From Home? If yes, how much in 2020 in EUR'].fillna('0', inplace=True) data_20['Annual brutto salary (without bonus and stocks) one year ago. Only answer if staying in the same country'].fillna(data_20['Annual brutto salary (without bonus and stocks) one year ago. Only answer if staying in the same country'].median(), inplace=True) data_20['Have you been forced to have a shorter working week (Kurzarbeit)? If yes, how many hours per week'].fillna(data_20['Have you been forced to have a shorter working week (Kurzarbeit)? If yes, how many hours per week'].mean(), inplace=True) data_20.isnull().sum() ``` #### 2018 imputation ``` data_18.isnull().sum() data_18['Gender'].fillna('M', inplace=True) data_18['City'].fillna('Berlin', inplace=True) data_18['Position'].fillna('Java Developer', inplace=True) data_18['Your level'].fillna('Senior', inplace=True) data_18['Are you getting any Stock Options?'].fillna('No', inplace=True) data_18['Main language at work'].fillna('English', inplace=True) data_18['Company size'].fillna('100-1000', inplace=True) data_18['Company type'].fillna('Product', inplace=True) data_18['Age'].fillna(data_18['Age'].mean(), inplace=True) data_18['Years of experience'].fillna(data_18['Years of experience'].mean(), inplace=True) data_18['Current Salary'].fillna(data_18['Current Salary'].mean(), inplace=True) data_18['Salary one year ago'].fillna(data_18['Salary one year ago'].mean(), inplace=True) data_18['Salary two years ago'].fillna(data_18['Salary two years ago'].mean(), inplace=True) data_18['Gender'].value_counts() data_18['Gender'].replace({'M':'Male','F':'Female'}, inplace=True) data_18['Company type'].unique() data_18['Company type'].replace({'Consulting Company':'Consulting','Consultancy':'Consulting','Consulting Company':'Consulting','Consult':'Consulting','IT Consulting ':'IT Consulting','IT Consultancy ':'IT Consulting','IT Consultants':'IT Consulting','Outsorce':'Outsourcing','Outsource':'Outsourcing','E-Commerce firm':'E-Commerce','e-commerce':'E-Commerce','Ecommerce':'E-Commerce'}, inplace=True) data_18.isnull().sum() ``` #### 2019 imputation ``` data_19.drop('0',axis=1,inplace=True) data_19.isnull().sum() data_19['Seniority level'].fillna('Senior', inplace=True) data_19['Position (without seniority)'].fillna('Backend Developer', 
inplace=True) data_19['Your main technology / programming language'].fillna('Python', inplace=True) data_19['Main language at work'].fillna('English', inplace=True) data_19['Company name '].fillna('Zalando', inplace=True) data_19['Company size'].fillna('100-1000', inplace=True) data_19['Company type'].fillna('Product', inplace=True) data_19['Contract duration'].fillna('unlimited', inplace=True) data_19['Company business sector'].fillna('Commerce', inplace=True) data_19['Age'].fillna(data_19['Age'].mean(), inplace=True) data_19['Years of experience'].fillna(data_19['Years of experience'].mean(), inplace=True) data_19['Yearly brutto salary (without bonus and stocks)'].fillna(data_19['Yearly brutto salary (without bonus and stocks)'].mean(), inplace=True) data_19['Yearly bonus'].fillna(data_19['Yearly bonus'].mean(), inplace=True) data_19['Yearly stocks'].fillna(data_19['Yearly stocks'].mean(), inplace=True) data_19['Yearly brutto salary (without bonus and stocks) one year ago. Only answer if staying in same country'].fillna(data_19['Yearly brutto salary (without bonus and stocks) one year ago. Only answer if staying in same country'].mean(), inplace=True) data_19['Yearly bonus one year ago. Only answer if staying in same country'].fillna(data_19['Yearly bonus one year ago. Only answer if staying in same country'].mean(), inplace=True) data_19['Yearly stocks one year ago. Only answer if staying in same country'].fillna(data_19['Yearly stocks one year ago. Only answer if staying in same country'].mean(), inplace=True) data_19['Number of vacation days'].fillna(data_19['Number of vacation days'].mean(), inplace=True) data_19['Number of home office days per month'].fillna(data_19['Number of home office days per month'].mean(), inplace=True) data_19['Seniority level'].value_counts() data_19['Position (without seniority)'].unique() data_19['Your main technology / programming language'].unique() data_19['Main language at work'].unique() data_19['Company name '].unique() data_19['Company name '].replace({'google':'Google','check24':'Check24','CHECK24':'Check24','Here':'HERE'}, inplace=True) data_19['Company business sector'].unique() data_19.isnull().sum() ``` ## Data Visualization with Dash Application ``` app = dash.Dash(__name__) app.layout = html.Div([ html.Div([ dcc.Dropdown(id='years', multi=False, clearable=False, options=[{'label':x, 'value':x} for x in sorted(['2018','2019','2020'])], value="2018") ],style={'width':'50%'}), html.Div([ dcc.Dropdown(id='parameters', multi=False, clearable=False, options=[{'label':x, 'value':x} for x in sorted(['City','Gender','Seniority level','Main language at work','Current Salary'])], value="Gender") ],style={'width':'50%'}), # html.Div([ # dcc.Graph(id='my-pieplot', figure={}) # ]), html.Div([ dcc.Graph(id='my-plot', figure={}) ]) ]) # Callback - app interactivity section------------------------------------ @app.callback( #Output(component_id='my-pieplot', component_property='figure'), Output(component_id='my-plot', component_property='figure'), Input(component_id='years', component_property='value'), Input(component_id='parameters', component_property='value') ) def update_graph(year_chosen, parameter): print(year_chosen) print(parameter) if (year_chosen == '2018'): if (parameter == 'City'): city_18 = data_18['City'].value_counts().head(10) city_18 = pd.DataFrame(city_18) fig = px.bar(data_frame=city_18,y='City',color='City') fig.update_xaxes(title_text="<b>Cities</b>") fig.update_layout(title_text="<b>Respondent's City Analysis (2018)</b>") 
fig.update_yaxes(title_text="<b>Employee count</b> ", secondary_y=False) elif (parameter == 'Gender'): df = data_18 fig=px.pie(data_frame=df,names='Gender') fig.update_traces(textinfo = 'label+percent') fig.update_layout(title_text="<b>Gender ratio of the respondents in 2018</b>") elif (parameter == 'Seniority level'): fig=px.pie(data_frame=data_18,names='Your level') fig.update_traces(textinfo = 'label+percent') fig.update_layout(title_text="<b>Seniority levels in 2018</b>") elif (parameter == 'Main language at work'): language = data_18['Main language at work'].value_counts() language = pd.DataFrame(language) fig = px.bar(data_frame=language,y='Main language at work',color='Main language at work') fig.update_xaxes(title_text="<b>Languages</b>") fig.update_layout(title_text="<b>Language spoken at work (2018)</b>") fig.update_yaxes(title_text="<b>Employee count</b> ", secondary_y=False) elif (parameter == 'Current Salary'): experience = data_18.groupby('Years of experience') mean_salary_18 = experience['Current Salary'].mean() mean_salary_18=pd.DataFrame(mean_salary_18) mean_salary_18 = mean_salary_18.reset_index(drop=False) fig = px.line(data_frame=mean_salary_18,x='Years of experience',y='Current Salary') fig.update_layout(title_text="<b>Salary info for the year 2018</b>") elif (year_chosen == '2019'): if (parameter == 'City'): city_19 = data_19['City'].value_counts().head(10) city_19 = pd.DataFrame(city_19) fig = px.bar(data_frame=city_19,y='City',color='City') fig.update_xaxes(title_text="<b>Cities</b>") fig.update_layout(title_text="<b>Respondent's City Analysis (2019)</b>") fig.update_yaxes(title_text="<b>Employee count</b> ", secondary_y=False) elif (parameter == 'Gender'): df = data_19 fig=px.pie(data_frame=df,names='Gender') fig.update_traces(textinfo = 'label+percent') fig.update_layout(title_text="<b>Gender ratio of the respondents in 2019</b>") elif (parameter == 'Seniority level'): fig=px.pie(data_frame=data_19,names='Seniority level') fig.update_traces(textinfo = 'label+percent') fig.update_layout(title_text="<b>Seniority levels in 2019</b>") elif (parameter == 'Main language at work'): language = data_19['Main language at work'].value_counts() language = pd.DataFrame(language) fig = px.bar(data_frame=language,y='Main language at work',color='Main language at work') fig.update_xaxes(title_text="<b>Languages</b>") fig.update_layout(title_text="<b>Language spoken at work (2019)</b>") fig.update_yaxes(title_text="<b>Employee count</b> ", secondary_y=False) elif (parameter == 'Current Salary'): experience = data_19.groupby('Years of experience') mean_salary_19 = experience['Yearly brutto salary (without bonus and stocks)'].mean() mean_salary_19=pd.DataFrame(mean_salary_19) mean_salary_19 = mean_salary_19.reset_index(drop=False) fig = px.line(data_frame=mean_salary_19,x='Years of experience',y='Yearly brutto salary (without bonus and stocks)') fig.update_layout(title_text="<b>Salary info for the year 2019</b>") elif (year_chosen == '2020'): if (parameter == 'City'): city_20 = data_20['City'].value_counts().head(10) city_20 = pd.DataFrame(city_20) fig = px.bar(data_frame=city_20,y='City',color='City') fig.update_xaxes(title_text="<b>Cities</b>") fig.update_layout(title_text="<b>Respondent's City Analysis (2020)</b>") fig.update_yaxes(title_text="<b>Employee count</b> ", secondary_y=False) elif (parameter == 'Gender'): df = data_20 fig=px.pie(data_frame=df,names='Gender') fig.update_traces(textinfo = 'label+percent') fig.update_layout(title_text="<b>Gender ratio of the respondents 
in 2020</b>") elif (parameter == 'Seniority level'): fig=px.pie(data_frame=data_20,names='Seniority level') fig.update_traces(textinfo = 'label+percent') fig.update_layout(title_text="<b>Seniority levels in 2020</b>") elif (parameter == 'Main language at work'): language = data_20['Main language at work'].value_counts() language = pd.DataFrame(language) fig = px.bar(data_frame=language,y='Main language at work',color='Main language at work') fig.update_xaxes(title_text="<b>Languages</b>") fig.update_layout(title_text="<b>Language spoken at work (2020)</b>") fig.update_yaxes(title_text="<b>Employee count</b> ", secondary_y=False) elif (parameter == 'Current Salary'): experience = data_20.groupby('Total years of experience') mean_salary_20 = experience['Yearly brutto salary (without bonus and stocks) in EUR'].mean() mean_salary_20=pd.DataFrame(mean_salary_20) mean_salary_20 = mean_salary_20.reset_index(drop=False) fig = px.line(data_frame=mean_salary_20,x='Total years of experience',y='Yearly brutto salary (without bonus and stocks) in EUR') fig.update_layout(title_text="<b>Salary info for the year 2020</b>") return fig if __name__=='__main__': app.run_server(debug=False, port=8001) ```
```
from pymongo import MongoClient

client = MongoClient(host='localhost', port=27017)
db = client['test']

cursor = db.score.find()
for data in list(cursor):
    print(data['a'])

# (collection.save() is deprecated; insert_one()/replace_one() is the modern equivalent)
db.score.save({'a': 6, 'exam': 6})

# rewind the exhausted cursor back to the first document
cursor.rewind()
for data in list(cursor):
    print(data['a'])

# update data
db.score.update( { 'a' : 5 }, { '$set' : { 'exam' : 10 } } )
cursor.rewind()
for data in list(cursor):
    print(data)

# delete a single document
db.score.delete_one( { 'a' : 5 } )
cursor.rewind()
for data in list(cursor):
    print(data)
```
### Exercise - store the results retrieved from Interpark Tour in MongoDB

- Collection: tour
- Fields: image path (img), detail page link (link), product name (title), product description (desc), price (price)

### Running Chrome headless

```
options = webdriver.ChromeOptions()
options.add_argument("headless")
options.add_argument("window-size=1920x1080")
driver = webdriver.Chrome(executable_path='C:/Code/chromedriver.exe', options=options)
```

```
from selenium import webdriver as wd

driver = wd.Chrome(executable_path='C:/Code/chromedriver.exe')

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import time
from pymongo import MongoClient

client = MongoClient(host='localhost', port=27017)
db = client['test']

driver.get('http://tour.interpark.com')
driver.implicitly_wait(10)

# search for '달랏' (Dalat)
driver.find_element_by_id('SearchGNBText').send_keys('달랏')
driver.find_element_by_css_selector('button.search-btn').click()

WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.CLASS_NAME, 'oTravelBox'))
)
driver.find_element_by_css_selector('.oTravelBox .moreBtn').click()

for page in range(1, 2):
    driver.execute_script("searchModule.SetCategoryList(%s, '')" % page)
    time.sleep(2)
    boxItems = driver.find_elements_by_css_selector('.panelZone > .oTravelBox > .boxList > li')
    for li in boxItems:
        tour_dict = {}
        tour_dict['이미지'] = li.find_element_by_css_selector('img.img').get_attribute('src')   # image path
        tour_dict['링크'] = li.find_element_by_css_selector('a').get_attribute('onclick')       # detail page link
        tour_dict['상품명'] = li.find_element_by_css_selector('h5.proTit').text                  # product name
        tour_dict['추가설명'] = li.find_element_by_css_selector('.proSub').text                  # product description
        tour_dict['가격'] = li.find_element_by_css_selector('strong.proPrice').text              # price
        db.tour.save(tour_dict)
        # for info in li.find_elements_by_css_selector('.info-row .proInfo'):
        #     print(info.text)
        # print('=' * 20)

driver.close()

cursor = db.tour.find()
for data in list(cursor):
    print(data)

from pymongo import MongoClient

client = MongoClient(host='localhost', port=27017)
db = client['test']
db.tour.count()

import pandas as pd

cursor = db.tour.find()
result = []
for data in cursor:
    result.append([data['상품명'], data['가격']])
pd.DataFrame(result, columns=['상품명', '가격'])
```
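The exercise above asks for the fields img, link, title, desc, and price, while the scraping loop stores Korean keys and uses `collection.save()`, which was removed in PyMongo 4. A small sketch of the same storage step using the field names from the exercise and `insert_one()`; it assumes the `boxItems` elements and the `db` handle from the cells above:

```
# Sketch: store each product under the field names requested in the exercise
# (img, link, title, desc, price) using insert_one() instead of the removed save().
for li in boxItems:
    doc = {
        'img': li.find_element_by_css_selector('img.img').get_attribute('src'),
        'link': li.find_element_by_css_selector('a').get_attribute('onclick'),
        'title': li.find_element_by_css_selector('h5.proTit').text,
        'desc': li.find_element_by_css_selector('.proSub').text,
        'price': li.find_element_by_css_selector('strong.proPrice').text,
    }
    db.tour.insert_one(doc)
```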
```
%matplotlib inline
import glob
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import tensorflow.contrib.learn as skflow
from sklearn import metrics

datadir = '/home/bonnin/dev/cifar-10-batches-bin/'

plt.ion()
G = glob.glob(datadir + '*.bin')
A = np.fromfile(G[0], dtype=np.uint8).reshape([10000, 3073])
labels = A[:, 0]
images = A[:, 1:].reshape([10000, 3, 32, 32]).transpose(0, 2, 3, 1)
plt.imshow(images[15])
print(labels[11])

images_unroll = A[:, 1:]

def dense_to_one_hot(labels_dense, num_classes=10):
    '''Convert class labels from scalars to one-hot vectors.'''
    num_labels = labels_dense.shape[0]
    index_offset = np.arange(num_labels) * num_classes
    labels_one_hot = np.zeros((num_labels, num_classes))
    labels_one_hot.flat[index_offset + labels_dense.ravel()] = 1
    return labels_one_hot

labels_hot = dense_to_one_hot(labels, num_classes=10)

sess = tf.InteractiveSession()

#classifier = skflow.TensorFlowLinearClassifier(n_classes=10, batch_size=100, steps=1000, learning_rate=0.1)
#classifier.fit(images_unroll, labels)
#score = metrics.accuracy_score(labels, classifier.predict(images_unroll))
#print('Accuracy: {0:f}'.format(score))
#W = classifier.weights_
#sx, sy = (16, 32)
#f, con = plt.subplots(sx, sy, sharex='col', sharey='row')
#for xx in range(sx):
#    for yy in range(sy):
#        con[xx, yy].pcolormesh(...)

# The data need to be read in as [n_samples, height, width, channels];
# each row A[i, 1:] holds the 32*32*3 RGB bytes of one image.
def max_pool_2x2(tensor_in):
    return tf.nn.max_pool(tensor_in, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

def conv_model(X, y):
    X = tf.reshape(X, [-1, 32, 32, 3])
    with tf.variable_scope('conv_layer1'):
        h_conv1 = skflow.ops.conv2d(X, n_filters=16, filter_shape=[5, 5], bias=True, activation=tf.nn.relu)
        h_pool1 = max_pool_2x2(h_conv1)
    with tf.variable_scope('conv_layer2'):
        h_conv2 = skflow.ops.conv2d(h_pool1, n_filters=16, filter_shape=[5, 5], bias=True, activation=tf.nn.relu)
        h_pool2 = max_pool_2x2(h_conv2)
    # needs work
    h_pool2_flat = tf.reshape(h_pool2, [-1, 8 * 8 * 16])
    h_fc1 = skflow.ops.dnn(h_pool2_flat, [96, 48], activation=tf.nn.relu, dropout=0.5)
    return skflow.models.logistic_regression(h_fc1, y)

images = np.array(images, dtype=np.float32)
classifier = skflow.TensorFlowEstimator(model_fn=conv_model, n_classes=10, batch_size=100, steps=2000, learning_rate=0.01)
%time classifier.fit(images, labels, logdir='/tmp/cnn_train/')
%time score = metrics.accuracy_score(labels, classifier.predict(images))
print('Accuracy: {0:f}'.format(score))

# Examining the fitted weights of the first convolutional layer
print('1st Convolutional Layer weights and Bias')
print(classifier.get_tensor_value('conv_layer1/convolution/filters:0'))
print(classifier.get_tensor_value('conv_layer1/convolution/filters:1'))
```
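Note that `tensorflow.contrib.learn` (skflow) only exists in old TensorFlow 1.x releases; `tf.contrib` was removed entirely in TensorFlow 2.x, so the estimator above will not run there. A rough `tf.keras` equivalent of the same two-convolutional-layer model is sketched below, reusing the `images` and `labels` arrays prepared above (an illustration of the architecture, not the original pipeline):

```
# Rough tf.keras equivalent of conv_model (TensorFlow 2.x sketch).
# Assumes `images` is a float32 array of shape (10000, 32, 32, 3) and
# `labels` holds integer class ids 0-9, as prepared earlier in the notebook.
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Input(shape=(32, 32, 3)),
    keras.layers.Conv2D(16, 5, padding='same', activation='relu'),
    keras.layers.MaxPooling2D(2),
    keras.layers.Conv2D(16, 5, padding='same', activation='relu'),
    keras.layers.MaxPooling2D(2),
    keras.layers.Flatten(),
    keras.layers.Dense(96, activation='relu'),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(48, activation='relu'),
    keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(images / 255.0, labels, batch_size=100, epochs=5)
```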
## Crypto Arbitrage In this Challenge, you'll take on the role of an analyst at a high-tech investment firm. The vice president (VP) of your department is considering arbitrage opportunities in Bitcoin and other cryptocurrencies. As Bitcoin trades on markets across the globe, can you capitalize on simultaneous price dislocations in those markets by using the powers of Pandas? For this assignment, you’ll sort through historical trade data for Bitcoin on two exchanges: Bitstamp and Coinbase. Your task is to apply the three phases of financial analysis to determine if any arbitrage opportunities exist for Bitcoin. This aspect of the Challenge will consist of 3 phases. 1. Collect the data. 2. Prepare the data. 3. Analyze the data. ### Import the required libraries and dependencies. ``` import pandas as pd from pathlib import Path %matplotlib inline ``` ## Collect the Data To collect the data that you’ll need, complete the following steps: Instructions. 1. Using the Pandas `read_csv` function and the `Path` module, import the data from `bitstamp.csv` file, and create a DataFrame called `bitstamp`. Set the DatetimeIndex as the Timestamp column, and be sure to parse and format the dates. 2. Use the `head` (and/or the `tail`) function to confirm that Pandas properly imported the data. 3. Repeat Steps 1 and 2 for `coinbase.csv` file. ### Step 1: Using the Pandas `read_csv` function and the `Path` module, import the data from `bitstamp.csv` file, and create a DataFrame called `bitstamp`. Set the DatetimeIndex as the Timestamp column, and be sure to parse and format the dates. ``` # Read in the CSV file called "bitstamp.csv" using the Path module. # The CSV file is located in the Resources folder. # Set the index to the column "Date" # Set the parse_dates and infer_datetime_format parameters bitstamp = pd.read_csv( Path("Resources/bitstamp.csv"), index_col = "Timestamp", parse_dates = True, infer_datetime_format = True ) ``` ### Step 2: Use the `head` (and/or the `tail`) function to confirm that Pandas properly imported the data. ``` # Use the head (and/or tail) function to confirm that the data was imported properly. # YOUR CODE HERE display(bitstamp.head()) display(bitstamp.tail()) ``` ### Step 3: Repeat Steps 1 and 2 for `coinbase.csv` file. ``` # Read in the CSV file called "coinbase.csv" using the Path module. # The CSV file is located in the Resources folder. # Set the index to the column "Timestamp" # Set the parse_dates and infer_datetime_format parameters coinbase = pd.read_csv( Path("Resources/coinbase.csv"), index_col = "Timestamp", parse_dates = True, infer_datetime_format = True ) # Use the head (and/or tail) function to confirm that the data was imported properly. display(coinbase.head()) display(coinbase.tail()) ``` ## Prepare the Data To prepare and clean your data for analysis, complete the following steps: 1. For the bitstamp DataFrame, replace or drop all `NaN`, or missing, values in the DataFrame. 2. Use the `str.replace` function to remove the dollar signs ($) from the values in the Close column. 3. Convert the data type of the Close column to a `float`. 4. Review the data for duplicated values, and drop them if necessary. 5. Repeat Steps 1–4 for the coinbase DataFrame. ### Step 1: For the bitstamp DataFrame, replace or drop all `NaN`, or missing, values in the DataFrame. 
``` # For the bitstamp DataFrame, replace or drop all NaNs or missing values in the DataFrame # Displays the null count display(bitstamp.isnull().sum()) # Uses fillna() to replace all null values with 0 bitstamp = bitstamp.fillna(0) # Verifies nulls have been replaced display(bitstamp.isnull().sum()) ``` ### Step 2: Use the `str.replace` function to remove the dollar signs ($) from the values in the Close column. ``` # Display head to show before $ removed display(bitstamp.head()) # Use the str.replace function to remove the dollar sign, $ bitstamp.loc[:,"Close"] = bitstamp.loc[:,"Close"].str.replace("$","") # Display head to show after has been removed. display(bitstamp.head()) ``` ### Step 3: Convert the data type of the Close column to a `float`. ``` # Convert the Close data type to a float # display type before change display(bitstamp["Close"].dtypes) # change the column type to float bitstamp.loc[:, "Close"] = bitstamp.loc[:, "Close"].astype("float") # display the new type display(bitstamp["Close"].dtypes) ``` ### Step 4: Review the data for duplicated values, and drop them if necessary. ``` # Review the data for duplicate values, and drop them if necessary # Count the number of duplicates display(bitstamp.duplicated().sum()) # Drop the duplicates bitstamp = bitstamp.drop_duplicates() # Count the number of duplicates display(bitstamp.duplicated().sum()) ``` ### Step 5: Repeat Steps 1–4 for the coinbase DataFrame. ``` # Step 1 - Remove the Nulls # Displays the null count display(coinbase.isnull().sum()) # Uses fillna() to replace all null values with 0 coinbase = coinbase.fillna(0) # Verifies nulls have been replaced display(coinbase.isnull().sum()) # Step 2 - Remove $ from Close column # Display head to show before $ removed display(coinbase.head()) # Use the str.replace function to remove the dollar sign, $ coinbase.loc[:,"Close"] = coinbase.loc[:,"Close"].str.replace("$","") # Display head to show after has been removed. display(coinbase.head()) # Step 3 - Convert the Close data type to a float # Display type before change display(coinbase["Close"].dtypes) # Change the column type to float coinbase.loc[:, "Close"] = coinbase.loc[:, "Close"].astype("float") # Display the new type display(coinbase["Close"].dtypes) # Step 4 - Review the data for duplicate values, and drop them if necessary # Count the number of duplicates display(coinbase.duplicated().sum()) # Drop the duplicates coinbase = coinbase.drop_duplicates() # Count the number of duplicates display(coinbase.duplicated().sum()) ``` ## Analyze the Data Your analysis consists of the following tasks: 1. Choose the columns of data on which to focus your analysis. 2. Get the summary statistics and plot the data. 3. Focus your analysis on specific dates. 4. Calculate the arbitrage profits. ### Step 1: Choose columns of data on which to focus your analysis. Select the data you want to analyze. Use `loc` or `iloc` to select the following columns of data for both the bitstamp and coinbase DataFrames: * Timestamp (index) * Close ``` # Use loc or iloc to select `Timestamp (the index)` and `Close` from bitstamp DataFrame bitstamp_sliced = bitstamp.iloc[:,[3]] # Review the first five rows of the DataFrame bitstamp_sliced.head() # Use loc or iloc to select `Timestamp (the index)` and `Close` from coinbase DataFrame coinbase_sliced = coinbase.iloc[:,[3]] # Review the first five rows of the DataFrame coinbase_sliced.head() ``` ### Step 2: Get summary statistics and plot the data. 
Sort through the time series data associated with the bitstamp and coinbase DataFrames to identify potential arbitrage opportunities. To do so, complete the following steps: 1. Generate the summary statistics for each DataFrame by using the `describe` function. 2. For each DataFrame, create a line plot for the full period of time in the dataset. Be sure to tailor the figure size, title, and color to each visualization. 3. In one plot, overlay the visualizations that you created in Step 2 for bitstamp and coinbase. Be sure to adjust the legend and title for this new visualization. 4. Using the `loc` and `plot` functions, plot the price action of the assets on each exchange for different dates and times. Your goal is to evaluate how the spread between the two exchanges changed across the time period that the datasets define. Did the degree of spread change as time progressed? ``` # Generate the summary statistics for the bitstamp DataFrame bitstamp_sliced.describe() # Generate the summary statistics for the coinbase DataFrame coinbase_sliced.describe() # Create a line plot for the bitstamp DataFrame for the full length of time in the dataset # Be sure that the figure size, title, and color are tailored to each visualization bitstamp_sliced.plot(figsize=(10, 5), title="Bitstamp Closing Prices", color="blue") # Create a line plot for the coinbase DataFrame for the full length of time in the dataset # Be sure that the figure size, title, and color are tailored to each visualization coinbase_sliced.plot(figsize=(10, 5), title="Coinbase Closing Prices", color="orange") # Overlay the visualizations for the bitstamp and coinbase DataFrames in one plot # The plot should visualize the prices over the full lenth of the dataset # Be sure to include the parameters: legend, figure size, title, and color and label coinbase_sliced["Close"].plot(legend = True, figsize=(15, 7), title="Bitstamp v. Coinbase", color="orange", label = "Coinbase") bitstamp_sliced["Close"].plot(legend = True, figsize=(15, 7), color="blue", label = "Bitstamp") # Using the loc and plot functions, create an overlay plot that visualizes # the price action of both DataFrames for a one month period early in the dataset # Be sure to include the parameters: legend, figure size, title, and color and label coinbase_sliced["Close"].loc["2018-01-01" : "2018-01-31"].plot(legend = True, figsize=(15, 7), title="Jan. 2018 - Bitstamp v. Coinbase", color="orange", label = "Coinbase") bitstamp_sliced["Close"].loc["2018-01-01" : "2018-01-31"].plot(legend = True, figsize=(15, 7), color="blue", label = "Bitstamp") # Using the loc and plot functions, create an overlay plot that visualizes # the price action of both DataFrames for a one month period later in the dataset # Be sure to include the parameters: legend, figure size, title, and color and label coinbase_sliced["Close"].loc["2018-03-01" : "2018-03-31"].plot(legend = True, figsize=(15, 7), title="March 2018 - Bitstamp v. Coinbase", color="orange", label = "Coinbase") bitstamp_sliced["Close"].loc["2018-03-01" : "2018-03-31"].plot(legend = True, figsize=(15, 7), color="blue", label = "Bitstamp") ``` **Question** Based on the visualizations of the different time periods, has the degree of spread change as time progressed? **Answer** Yes. Early in the timeframe, there were more frequent and more severe divergences between the price of Bitcoin on Bitstamp and Coinbase. ### Step 3: Focus Your Analysis on Specific Dates Focus your analysis on specific dates by completing the following steps: 1. 
Select three dates to evaluate for arbitrage profitability. Choose one date that’s early in the dataset, one from the middle of the dataset, and one from the later part of the time period. 2. For each of the three dates, generate the summary statistics and then create a box plot. This big-picture view is meant to help you gain a better understanding of the data before you perform your arbitrage calculations. As you compare the data, what conclusions can you draw? ``` # Create an overlay plot that visualizes the two dataframes over a period of one day early in the dataset. # Be sure that the plots include the parameters `legend`, `figsize`, `title`, `color` and `label` coinbase_sliced["Close"].loc["2018-01-24" : "2018-01-24"].plot(legend = True, figsize=(15, 7), title="Jan. 25, 2018 - Bitstamp v. Coinbase", color="orange", label = "Coinbase") bitstamp_sliced["Close"].loc["2018-01-24" : "2018-01-24"].plot(legend = True, figsize=(15, 7), color="blue", label = "Bitstamp") # Using the early date that you have selected, calculate the arbitrage spread # by subtracting the bitstamp lower closing prices from the coinbase higher closing prices arbitrage_spread_early = bitstamp_sliced['Close'].loc['2018-01-24'] - coinbase_sliced['Close'].loc['2018-01-24'] # Generate summary statistics for the early DataFrame arbitrage_spread_early.describe() # Visualize the arbitrage spread from early in the dataset in a box plot arbitrage_spread_early.plot(kind = "box", legend = True, figsize=(15, 7), title="Arbitrage Spread - Early", color="blue") # Create an overlay plot that visualizes the two dataframes over a period of one day from the middle of the dataset. # Be sure that the plots include the parameters `legend`, `figsize`, `title`, `color` and `label` coinbase_sliced["Close"].loc["2018-02-23" : "2018-02-23"].plot(legend = True, figsize=(15, 7), title="Feb. 25, 2018 - Bitstamp v. Coinbase", color="orange", label = "Coinbase") bitstamp_sliced["Close"].loc["2018-02-23" : "2018-02-23"].plot(legend = True, figsize=(15, 7), color="blue", label = "Bitstamp") # Using the date in the middle that you have selected, calculate the arbitrage spread # by subtracting the bitstamp lower closing prices from the coinbase higher closing prices arbitrage_spread_middle = bitstamp_sliced['Close'].loc['2018-02-23'] - coinbase_sliced['Close'].loc['2018-02-23'] # Generate summary statistics arbitrage_spread_middle.describe() # Visualize the arbitrage spread from the middle of the dataset in a box plot arbitrage_spread_middle.plot(kind = "box", legend = True, figsize=(15, 7), title="Arbitrage Spread - Middle", color="blue") # Create an overlay plot that visualizes the two dataframes over a period of one day from late in the dataset. # Be sure that the plots include the parameters `legend`, `figsize`, `title`, `color` and `label` coinbase_sliced["Close"].loc["2018-03-23" : "2018-03-23"].plot(legend = True, figsize=(15, 7), title="March 25, 2018 - Bitstamp v. 
Coinbase", color="orange", label = "Coinbase") bitstamp_sliced["Close"].loc["2018-03-23" : "2018-03-23"].plot(legend = True, figsize=(15, 7), color="blue", label = "Bitstamp") # Using the date from the late that you have selected, calculate the arbitrage spread # by subtracting the bitstamp lower closing prices from the coinbase higher closing prices arbitrage_spread_late = bitstamp_sliced['Close'].loc['2018-03-23'] - coinbase_sliced['Close'].loc['2018-03-23'] # Generate summary statistics for the late DataFrame arbitrage_spread_late.describe() # Visualize the arbitrage spread from late in the dataset in a box plot arbitrage_spread_late.plot(kind = "box", legend = True, figsize=(15, 7), title="Arbitrage Spread - Late", color="blue") ``` ### Step 4: Calculate the Arbitrage Profits Calculate the potential profits for each date that you selected in the previous section. Your goal is to determine whether arbitrage opportunities still exist in the Bitcoin market. Complete the following steps: 1. For each of the three dates, measure the arbitrage spread between the two exchanges by subtracting the lower-priced exchange from the higher-priced one. Then use a conditional statement to generate the summary statistics for each arbitrage_spread DataFrame, where the spread is greater than zero. 2. For each of the three dates, calculate the spread returns. To do so, divide the instances that have a positive arbitrage spread (that is, a spread greater than zero) by the price of Bitcoin from the exchange you’re buying on (that is, the lower-priced exchange). Review the resulting DataFrame. 3. For each of the three dates, narrow down your trading opportunities even further. To do so, determine the number of times your trades with positive returns exceed the 1% minimum threshold that you need to cover your costs. 4. Generate the summary statistics of your spread returns that are greater than 1%. How do the average returns compare among the three dates? 5. For each of the three dates, calculate the potential profit, in dollars, per trade. To do so, multiply the spread returns that were greater than 1% by the cost of what was purchased. Make sure to drop any missing values from the resulting DataFrame. 6. Generate the summary statistics, and plot the results for each of the three DataFrames. 7. Calculate the potential arbitrage profits that you can make on each day. To do so, sum the elements in the profit_per_trade DataFrame. 8. Using the `cumsum` function, plot the cumulative sum of each of the three DataFrames. Can you identify any patterns or trends in the profits across the three time periods? (NOTE: The starter code displays only one date. You'll want to do this analysis for two additional dates). #### 1. For each of the three dates, measure the arbitrage spread between the two exchanges by subtracting the lower-priced exchange from the higher-priced one. Then use a conditional statement to generate the summary statistics for each arbitrage_spread DataFrame, where the spread is greater than zero. *NOTE*: For illustration, only one of the three dates is shown in the starter code below. 
``` # For the date early in the dataset, measure the arbitrage spread between the two exchanges # by subtracting the lower-priced exchange from the higher-priced one # Determines the arbitrage spread at three different times - early, middle, and late arbitrage_spread_early = bitstamp_sliced['Close'].loc['2018-01-24'] - coinbase_sliced['Close'].loc['2018-01-24'] arbitrage_spread_middle = bitstamp_sliced['Close'].loc['2018-02-23'] - coinbase_sliced['Close'].loc['2018-02-23'] arbitrage_spread_late = bitstamp_sliced['Close'].loc['2018-03-23'] - coinbase_sliced['Close'].loc['2018-03-23'] # Filters for positive returns at three different times - early, middle, and late spread_return_early_positive = arbitrage_spread_early[arbitrage_spread_early>0] display(spread_return_early_positive.describe()) spread_return_middle_positive = arbitrage_spread_middle[arbitrage_spread_middle>0] display(spread_return_middle_positive.describe()) spread_return_late_positive = arbitrage_spread_late[arbitrage_spread_late>0] display(spread_return_late_positive.describe()) ``` #### 2. For each of the three dates, calculate the spread returns. To do so, divide the instances that have a positive arbitrage spread (that is, a spread greater than zero) by the price of Bitcoin from the exchange you’re buying on (that is, the lower-priced exchange). Review the resulting DataFrame. ``` # For the date early in the dataset, calculate the spread returns by dividing the instances when the arbitrage spread is positive (> 0) # by the price of Bitcoin from the exchange you are buying on (the lower-priced exchange). spread_return_early= spread_return_early_positive / coinbase_sliced['Close'].loc['2018-01-24'] spread_return_middle= spread_return_middle_positive / coinbase_sliced['Close'].loc['2018-02-23'] spread_return_late= spread_return_late_positive / coinbase_sliced['Close'].loc['2018-03-23'] # Review the spread return DataFrame display(spread_return_early.describe()) display(spread_return_middle.describe()) display(spread_return_late.describe()) ``` #### 3. For each of the three dates, narrow down your trading opportunities even further. To do so, determine the number of times your trades with positive returns exceed the 1% minimum threshold that you need to cover your costs. ``` # For the date early in the dataset, determine the number of times your trades with positive returns # exceed the 1% minimum threshold (.01) that you need to cover your costs profitable_trades_early = spread_return_early[spread_return_early > .01] profitable_trades_middle = spread_return_middle[spread_return_middle > .01] profitable_trades_late = spread_return_late[spread_return_late > .01] # Review the first five profitable trades display(profitable_trades_early.head()) display(profitable_trades_middle.head()) display(profitable_trades_late.head()) ``` #### 4. Generate the summary statistics of your spread returns that are greater than 1%. How do the average returns compare among the three dates? ``` # For the date early in the dataset, generate the summary statistics for the profitable trades # or you trades where the spread returns are are greater than 1% display(profitable_trades_early.describe()) display(profitable_trades_middle.describe()) display(profitable_trades_late.describe()) ``` #### 5. For each of the three dates, calculate the potential profit, in dollars, per trade. To do so, multiply the spread returns that were greater than 1% by the cost of what was purchased. Make sure to drop any missing values from the resulting DataFrame. 
```
# For the date early in the dataset, calculate the potential profit per trade in dollars
# Multiply the profitable trades by the cost of the Bitcoin that was purchased
profit_early = spread_return_early[spread_return_early > .01] * coinbase_sliced['Close'].loc['2018-01-24']
profit_middle = spread_return_middle[spread_return_middle > .01] * coinbase_sliced['Close'].loc['2018-02-23']
profit_late = spread_return_late[spread_return_late > .01] * coinbase_sliced['Close'].loc['2018-03-23']

# Drop any missing values from the profit DataFrame
profit_per_trade_early = profit_early.dropna()
profit_per_trade_middle = profit_middle.dropna()
profit_per_trade_late = profit_late.dropna()

# View the early profit DataFrame
display(profit_per_trade_early.head())
display(profit_per_trade_middle.head())
display(profit_per_trade_late.head())
```

#### 6. Generate the summary statistics, and plot the results for each of the three DataFrames.

```
# Generate the summary statistics for the early profit per trade DataFrame
display(profit_per_trade_early.describe())
display(profit_per_trade_middle.describe())
display(profit_per_trade_late.describe())

# Plot the results for the early profit per trade DataFrame
profit_per_trade_early.plot(legend = True, figsize=(15, 7), title="Profit from Trades Made on Jan. 24, 2018", color="green", label = "Profit")
```

#### 7. Calculate the potential arbitrage profits that you can make on each day. To do so, sum the elements in the profit_per_trade DataFrame.

```
# Calculate the sum of the potential profits for the early profit per trade DataFrame
display(profit_per_trade_early.sum())
display(profit_per_trade_middle.sum())
display(profit_per_trade_late.sum())
```

#### 8. Using the `cumsum` function, plot the cumulative sum of each of the three DataFrames. Can you identify any patterns or trends in the profits across the three time periods?

```
# Use the cumsum function to calculate the cumulative profits over time for the early profit per trade DataFrame
cumulative_profit_early = profit_per_trade_early.cumsum()

# Plot the cumulative sum of profits for the early profit per trade DataFrame
cumulative_profit_early.plot()
```

**Question:** After reviewing the profit information across each date from the different time periods, can you identify any patterns or trends?

**Answer:** The clearest pattern is that arbitrage opportunities do not last very long, so it is important to act on them early: over time the spread narrows and eventually disappears. This is consistent with the idea of efficient marketplaces, in which the market finds and exploits inefficiencies, thereby eliminating them. That does not mean such opportunities never exist; they are simply rarer in established markets than in emerging ones, and even in emerging markets they do not last long once the spread or volume is profitable enough to attract traders.
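Following up on step 8: the starter code above only computes the cumulative profit for the early date. A minimal sketch of the same calculation for the other two dates, reusing the `profit_per_trade_middle` and `profit_per_trade_late` Series built in step 5 (run each plot in its own cell to get two separate figures):

```
# Cumulative profits for the middle and late dates
cumulative_profit_middle = profit_per_trade_middle.cumsum()
cumulative_profit_late = profit_per_trade_late.cumsum()

cumulative_profit_middle.plot(figsize=(10, 5), title="Cumulative Profit, 2018-02-23 (middle date)")
cumulative_profit_late.plot(figsize=(10, 5), title="Cumulative Profit, 2018-03-23 (late date)")
```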
Design MERFISH probes using the example inputs from Jeff Moffitt. The original MATLAB design pipeline can be found at https://github.com/ZhuangLab/MERFISH_analysis. # Prepare inputs ``` # Download the input data # This is for the UNIX-like operating systems. If you are using Windows, just download the files accordingly. !mkdir temporary_data !wget http://zhuang.harvard.edu/merfish/MERFISHData/MERFISH_Examples2.zip -O temporary_data/MERFISH_Examples2.zip !unzip -o temporary_data/MERFISH_Examples2.zip -d temporary_data # Make a path for output !mkdir temporary_data/MERFISH_Examples2/outputs # Define all the input files you need in this script import os ref_folder = r'\\10.245.74.212\Chromatin_NAS_2\Libraries\CTP-10_Aire\MERFISH_designer' #os.makedirs(os.path.join(ref_folder, 'MERFISH_Examples2', 'outputs')) codebook_file = os.path.join(ref_folder, r'MERFISH_Examples2\codebook.csv') transcripts_fasta_file = os.path.join(ref_folder, r'MERFISH_Examples2\transcripts.fasta') fpkm_tracking_file = os.path.join(ref_folder, r'MERFISH_Examples2\isoforms.fpkm_tracking') readout_fasta_file = os.path.join(ref_folder, r'MERFISH_Examples2\readouts.fasta') forward_primer_file = r'\\10.245.74.212\Chromatin_NAS_2\Libraries\Primers\forward_primers.fasta' reverse_primer_file = r'\\10.245.74.212\Chromatin_NAS_2\Libraries\Primers\reverse_primers.fasta' ncRNA_file = os.path.join(ref_folder, r'MERFISH_Examples2\Homo_sapiens.GRCh38.ncrna.fa') # Define the output files ottable_transcriptome_file = os.path.join(ref_folder, r'MERFISH_Examples2\outputs\ottable_transcriptome.pkl') selected_primers_file = os.path.join(ref_folder, r'MERFISH_Examples2\outputs\selected_primers.csv') probe_output_file = os.path.join(ref_folder, r'MERFISH_Examples2\outputs\designed_probes.csv') transcript_level_report_file = os.path.join(ref_folder, r'MERFISH_Examples2\outputs\transcript_level_report.csv') print(transcripts_fasta_file) ``` # Initialize data structures ``` # Import the modules import os import sys from IPython.display import display import MERFISH_probe_design import MERFISH_probe_design.IO.file_io as fio import MERFISH_probe_design.probe_design.probe_dict as p_d import MERFISH_probe_design.probe_design.OTTable_dict as ot import MERFISH_probe_design.probe_design.readout_sequences as rs import MERFISH_probe_design.probe_design.probe_selection as ps import MERFISH_probe_design.probe_design.quality_check as qc from MERFISH_probe_design.probe_design import filters from MERFISH_probe_design.probe_design import plot from MERFISH_probe_design.probe_design import primer_design ``` ## load transcriptome ``` # Load the transcriptome as a pandas data frame transcriptome = fio.load_transcriptome(transcripts_fasta_file, fpkm_tracking_file) # Make sure that the transcriptome data frame has the standard column names. # The standard columns are: transcript_id, sequence, gene_id, gene_short_name and FPKM. # Also remove the non-standard columns for clarity. transcriptome = qc.check_and_standardize_transcriptome(transcriptome, remove_non_standard_columns=True) transcriptome # Let's have a look at what's inside the transcriptome # Load the transcriptome as a pandas data frame transcriptome = fio.load_transcriptome(transcripts_fasta_file, )#fpkm_tracking_file) # Make sure that the transcriptome data frame has the standard column names. # The standard columns are: transcript_id, sequence, gene_id, gene_short_name and FPKM. # Also remove the non-standard columns for clarity. 
transcriptome = qc.check_and_standardize_transcriptome(transcriptome, remove_non_standard_columns=True) transcriptome # Let's have a look at what's inside the transcriptome # Load the codebook cb_version, cb_name, bit_names, barcode_table = fio.load_merlin_codebook(codebook_file) gene_ids = list(barcode_table['name'][barcode_table['id'] != '']) # Get the non-blank gene names transcript_ids = set(barcode_table['id'][barcode_table['id'] != '']) # Get the non-blank transcript ids barcode_table # Let's have a look at the barcode table # Initialize the probe dictionary which is the carrier of the probes throught the design process. probe_dict = p_d.init_probe_dict(gene_ids, transcriptome, 'gene_short_name', K=30) p_d.print_probe_dict(probe_dict) # The probe_dict is just a dictionary of dictionary of pandas data frames. # Let's have a look at the data frame of an example transcript. probe_dict['VPS13D']['ENST00000613099.4'] # Select the transcripts that we want to target # The target transcripts are already defined in the codebook probe_dict = p_d.select_transcripts_by_ids(probe_dict, transcript_ids) p_d.print_probe_dict(probe_dict) # We excluded all the transcripts that are not our direct targets # Initialize the off-target counting tables # OTTable for rRNA/tRNAs ncRNAs = fio.load_fasta_into_df(ncRNA_file) ottable_rtRNAs = ot.get_OTTable_for_rtRNAs(ncRNAs, 15) # OTTables for the genes we target gene_ottable_dict = ot.get_gene_OTTables(transcriptome, gene_ids, 'gene_short_name', 17) %%time # OTTable for the transcriptome. # Let's save this big table to save time when we need it again if os.path.exists(ottable_transcriptome_file): ottable_transcriptome = ot.OTTable.load_pkl(ottable_transcriptome_file) else: ottable_transcriptome = ot.get_OTTable_for_transcriptome(transcriptome, 17) ottable_transcriptome.save_pkl(ottable_transcriptome_file) ``` # Select target regions ``` # Calculate and plot the GC contents of the target regions filters.calc_gc_for_probe_dict(probe_dict, column_key_seq='target_sequence', column_key_write='target_GC') plot.plot_hist(probe_dict, column_key='target_GC') # Filter GC cotent and plot the GC content after filtering filters.filter_probe_dict_by_metric(probe_dict, 'target_GC', lower_bound=43, upper_bound=63) plot.plot_hist(probe_dict, column_key='target_GC') # Calculate and filter the melting temperature using the method in JM's MATLAB code. # Alternatively, you can use the newer Tm calculation method that are commented out # in the two subsequent cells. 
filters.calc_tm_JM_for_probe_dict(probe_dict, monovalentSalt=0.3, probe_conc=5e-9, column_key_seq='target_sequence', column_key_write='target_Tm') plot.plot_hist(probe_dict, column_key='target_Tm') filters.filter_probe_dict_by_metric(probe_dict, 'target_Tm', lower_bound=66, upper_bound=76) plot.plot_hist(probe_dict, column_key='target_Tm') ## Calculate and plot the melting-temperatures (Tm) #filters.calc_tm_for_probe_dict(probe_dict, Na_conc=300, fmd_percentile=30, probe_conc=0.05, # column_key_seq='target_sequence', column_key_write='target_Tm') #plot.plot_hist(probe_dict, column_key='target_Tm') ## Filter Tm and plot Tm distribution after filtering #filters.filter_probe_dict_by_metric(probe_dict, 'target_Tm', lower_bound=46, upper_bound=56) #plot.plot_hist(probe_dict, column_key='target_Tm') # Calculate and plot the off-targets to rRNA/tRNAs ot.calc_OTs(probe_dict, ottable_rtRNAs, 'target_sequence', 'target_OT_rtRNA', 15) plot.plot_hist(probe_dict, 'target_OT_rtRNA', y_max=400) # Filter out probes that have any rRNA/tRNA off-targets filters.filter_probe_dict_by_metric(probe_dict, 'target_OT_rtRNA', upper_bound=0.5) plot.plot_hist(probe_dict, 'target_OT_rtRNA') # Get the FPKMs of the transcripts transcript_fpkms = dict(zip(list(transcriptome['transcript_id']), list(transcriptome['FPKM']))) # Calculate the specificities and isoform specificities of the target regions ot.calc_specificity(probe_dict, ottable_transcriptome, gene_ottable_dict, transcript_fpkms, 'target_sequence', 'target_specificity', 'target_isospecificity', 17) plot.plot_hist(probe_dict, 'target_specificity') plot.plot_hist(probe_dict, 'target_isospecificity') # Filter the specificities and isoform specificities of the target regions filters.filter_probe_dict_by_metric(probe_dict, 'target_specificity', lower_bound=0.75) filters.filter_probe_dict_by_metric(probe_dict, 'target_isospecificity', lower_bound=0.75) plot.plot_hist(probe_dict, 'target_specificity') plot.plot_hist(probe_dict, 'target_isospecificity') ``` # Design readout sequences ``` # Load the readout sequences into a data frame readout_seqs = fio.load_fasta_into_df(readout_fasta_file) rs.append_on_bit_ids_to_readout_sequences(readout_seqs, bit_names) readout_seqs # Add the readout sequences. Here we randomly add 3 readout sequences to each probe. # Add an "A" between the concatenated sequences. rs.add_readout_seqs_to_probes_random(probe_dict, readout_seqs, barcode_table, 3, spacer='', gene_id_key='name', n_threads=8) # Filter out probes that have off-targets to rRNA/tRNAs ot.calc_OTs(probe_dict, ottable_rtRNAs, 'target_readout_sequence', 'target_readout_OT_rtRNA', 15) plot.plot_hist(probe_dict, 'target_readout_OT_rtRNA', y_max=400) filters.filter_probe_dict_by_metric(probe_dict, 'target_readout_OT_rtRNA', upper_bound=0.5) plot.plot_hist(probe_dict, 'target_readout_OT_rtRNA') # NOTE: This step is optional since JM didn't have this step. # Calculate how many more off-targets to the transcriptome are introduced due to the readout sequences. # The off-target counts are weighted down by the FPKMs of the on-target transcripts ot.calc_OT_diffs(probe_dict, ottable_transcriptome, gene_ottable_dict, transcript_fpkms, 'target_sequence', 'target_readout_sequence', 'readout_OT_increase', 17) plot.plot_hist(probe_dict, 'readout_OT_increase', y_max=400) # Filter out the probes with extra off-targets due to the readouts # Require the new weighted off-targets to be minor compared to the on-target weight. 
filters.filter_probe_dict_by_metric(probe_dict, 'readout_OT_increase', upper_bound=0.25 * (30 - 17 + 1)) plot.plot_hist(probe_dict, 'readout_OT_increase') ``` # Select probes ``` %%time # Select probes by a stochastic greedy algorithms that optimizes the on-bit coverage # and minimizes the overlapping between probes. ps.select_probes_greedy_stochastic(probe_dict, N_probes_per_transcript=92, N_on_bits=4, N_threads=16) # Let's plot the probe coverage of an example transcript seq_len = len(transcriptome[transcriptome['transcript_id'] == 'ENST00000380152.7'].iloc[0]['sequence']) plot.plot_sequence_coverage(probe_dict['BRCA2']['ENST00000380152.7'], seq_len) ps ``` # Primer design ``` # Load the primer candidates into data frames forward_primers, reverse_primers = fio.load_primers(forward_primer_file, reverse_primer_file) display(forward_primers) display(reverse_primers) # Selet primers # Make an off-target from the current probe sequences. ottable_target_readout = ot.get_OTTable_for_probe_dictionary(probe_dict, 'target_readout_sequence', 15) # Calculate the off-targets for the primer sequences and their reverse-complements # Usually, there shouln't be any off-targets ot.calc_OTs_df(forward_primers, ottable_target_readout, 'sequence', 'sequence_OT', 15) ot.calc_OTs_df(forward_primers, ottable_target_readout, 'sequence_rc', 'sequence_rc_OT', 15) ot.calc_OTs_df(reverse_primers, ottable_target_readout, 'sequence', 'sequence_OT', 15) ot.calc_OTs_df(reverse_primers, ottable_target_readout, 'sequence_rc', 'sequence_rc_OT', 15) # Select primers with lowest OTs forward_primers = primer_design.randomly_select_primers_with_lowest_OT(forward_primers) reverse_primers = primer_design.randomly_select_primers_with_lowest_OT(reverse_primers) # Now each primer table should only a single row of the selected primer display(forward_primers) display(reverse_primers) # Save the selected primers forward_primers.append(reverse_primers, ignore_index=True).to_csv(selected_primers_file) # Add the primer sequences # NOTE: the sequence after primer addition should be (reverse_primer)-(target_readouts)-(forward_primer_rc) primer_design.add_primer_sequences(probe_dict, reverse_primers.iloc[0]['sequence'], forward_primers.iloc[0]['sequence_rc'], input_column='target_readout_sequence', output_column='target_readout_primer_sequence') # Notice that the T7 promoter (the first 17 bases of the reverse primer) will be lost after in vitro transcription # create a column of the T7 transcribed sequences for the subsequent quality check primer_design.add_primer_sequences(probe_dict, reverse_primers.iloc[0]['sequence'][17:], forward_primers.iloc[0]['sequence_rc'], input_column='target_readout_sequence', output_column='target_readout_primer_sequence_t7_transcribed') ``` # Quality check ``` # Filter out probes that have off-targets to rRNA/tRNAs ot.calc_OTs(probe_dict, ottable_rtRNAs, 'target_readout_primer_sequence_t7_transcribed', 'target_readout_primer_t7_transcribed_OT_rtRNA', 15) plot.plot_hist(probe_dict, 'target_readout_primer_t7_transcribed_OT_rtRNA', y_max=400) filters.filter_probe_dict_by_metric(probe_dict, 'target_readout_primer_t7_transcribed_OT_rtRNA', upper_bound=0.5) plot.plot_hist(probe_dict, 'target_readout_primer_t7_transcribed_OT_rtRNA') # Calculate how many more off-targets to the transcriptome are introduced due to the primer sequences. 
# The off-target counts are weighted down by the FPKMs of the on-target transcripts ot.calc_OT_diffs(probe_dict, ottable_transcriptome, gene_ottable_dict, transcript_fpkms, 'target_readout_sequence', 'target_readout_primer_sequence_t7_transcribed', 'primer_OT_increase', 17) plot.plot_hist(probe_dict, 'primer_OT_increase', y_max=400) # Filter out the probes with extra off-targets due to the primers # Require the new weighted off-targets to be minor compared to the on-target weight. filters.filter_probe_dict_by_metric(probe_dict, 'primer_OT_increase', upper_bound=0.25 * (30 - 17 + 1)) plot.plot_hist(probe_dict, 'primer_OT_increase') %%time # Filter out the probes that self complement or complement with other probes. # Iterately remove the probes with high numbers of cis/trans-complementarity # This filtering strategy is a compromise between speed and the number of probes to keep while True: # Make a OTTable from the reverse-complement sequences of the probes. ottable_probes_rc = ot.get_OTTable_for_probe_dictionary(probe_dict, 'target_readout_primer_sequence', 15, rc=True) # The off-targets in this table indicates cis/trans-complementarity ot.calc_OTs(probe_dict, ottable_probes_rc, 'target_readout_primer_sequence', 'probe_cis_trans_OT', 15) max_ot = max(plot.get_values_from_probe_dict(probe_dict, 'probe_cis_trans_OT')) if max_ot == 0: break # Remove probes that have any cis/trans-complementarity filters.filter_probe_dict_by_metric(probe_dict, 'probe_cis_trans_OT', upper_bound=max_ot - 0.5) plot.plot_hist(probe_dict, 'probe_cis_trans_OT') # Also get the reverse-complementary sequences of the designed probes p_d.get_rc_sequences(probe_dict, 'target_readout_primer_sequence', 'target_readout_primer_sequence_rc') # Write the designed probes p_d.probe_dict_to_df(probe_dict).to_csv(probe_output_file, index=False) # Write the transcript level report transcript_level_report = qc.generate_transcript_level_report(probe_dict, transcriptome) display(transcript_level_report) transcript_level_report.to_csv(transcript_level_report_file, index=False) ```
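As an optional follow-up, the probe table written above can also be exported in FASTA format, which is often convenient for ordering oligos or for quick BLAST-style spot checks. A minimal sketch, under the assumption that the exported CSV contains the `target_readout_primer_sequence` column created earlier; the FASTA path and the record naming scheme are hypothetical:

```
# Sketch: dump the final probe sequences to a FASTA file.
# `probe_output_file` and `ref_folder` are defined at the top of this notebook;
# the output path and record names below are made up for illustration.
import pandas as pd

probes = pd.read_csv(probe_output_file)
fasta_file = os.path.join(ref_folder, r'MERFISH_Examples2\outputs\designed_probes.fasta')

with open(fasta_file, 'w') as f:
    for i, row in probes.iterrows():
        f.write(f">probe_{i}\n{row['target_readout_primer_sequence']}\n")
```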
``` %gui qt %matplotlib qt from glob import glob import os, sys # only necessary if using without installing sys.path.append("..") from xmcd_projection import * from skimage.io import imread, imsave from PIL import Image import meshio import trimesh ``` ### Get file paths ``` msh_file = "DW_MeshConv4nm_25nmSep.vtu" mag_file = "DW_MeshConv4nm_25nmSep.csv" # scale for the points scale = 1e9 ``` ## Generate raytracing - skip if generated ``` # get the mesh, scale the points to nm msh = Mesh.from_file(msh_file, scale=scale) ``` #### Make sure that the projection vector is correct and that the structure is oriented well ``` # get the projection vector p = get_projection_vector(90, 0) # direction of xrays n = [0, 1, 1] # normal to the projection plane x0 = [-100, 0, 0] # point on the projection plane # prepare raytracing object raytr = RayTracing(msh, p, n=n, x0=x0) struct = raytr.struct struct_projected = raytr.struct_projected vis = MeshVisualizer(struct, struct_projected) vis.set_camera(dist=2e5) vis.show() ``` ## If raytracing file generated - skip if not ``` # load raytracing if exists raytr = np.load("raytracing.npy", allow_pickle=True).item() struct = raytr.struct struct_projected = raytr.struct_projected ``` ## Generate and save raytracing ``` raytr.get_piercings() # np.save("raytracing.npy", raytr, allow_pickle=True) ``` ## Get the xmcd #### Get magnetisation, fix vertex shuffling Note: sometimes if the mesh file has multiple parts, the paraview export and the original mesh coordinates are not in the same order. I add a function to fix that when necessary ``` magnetisation, mag_points = load_mesh_magnetisation(mag_file, scale=scale) shuffle_file = "shuffle_indx.npy" try: shuffle_indx = np.load(shuffle_file) except FileNotFoundError: print('File not found. 
Generating shuffle indx') shuffle_indx = msh.get_shuffle_indx(mag_points) np.save(shuffle_file, shuffle_indx) magnetisation = magnetisation[shuffle_indx, :] magnetisation, mag_points = load_mesh_magnetisation(mag_file, scale=scale) shuffle_indx = msh.get_shuffle_indx(mag_points) magnetisation = magnetisation[shuffle_indx, :] ``` ### Get the colours and XMCD values ``` xmcd_value = raytr.get_xmcd(magnetisation) mag_colors = get_struct_face_mag_color(struct, magnetisation) azi=90 center_struct = [0, 0, 0] dist_struct = 1e4 center_peem = [100, -200, 0] dist_peem = 8e4 vis = MeshVisualizer(struct, struct_projected, projected_xmcd=xmcd_value, struct_colors=mag_colors) vis.show(azi=azi, center=center_peem, dist=dist_peem) Image.fromarray(vis.get_image_np()) ``` #### View different parts of the image separately #### Both ``` vis.update_colors(xmcd_value, mag_colors) vis.view_both(azi=azi, center=center_peem, dist=dist_peem) Image.fromarray(vis.get_image_np()) ``` #### Projection ``` vis.view_projection(azi=azi, center=center_peem, dist=dist_peem) Image.fromarray(vis.get_image_np()) ``` #### Structure ``` center_struct = [75, 50, 0] dist_struct = 1e4 vis.view_struct(azi=azi, center=center_struct, dist=dist_struct) Image.fromarray(vis.get_image_np()) ``` #### Blurred image ``` vis.view_projection(azi=azi, center=center_peem, dist=dist_peem) Image.fromarray((vis.get_blurred_image(desired_background=0.7)*255).astype(np.uint8)) ``` #### Saving one render ``` vis.view_both(azi=azi, center=center_peem, dist=dist_peem) vis.save_render('mumax_shadow.png') vis.view_projection() blurred = vis.get_blurred_image(desired_background=0.7) imsave('mumax_shadow_blurred.png', (blurred*255).astype(np.uint8), check_contrast=False) vis.view_struct(azi=azi, center=center_struct, dist=dist_struct) vis.save_render('mumax_structure_view.png') ```
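#### Sketch: how a shuffle index can be computed

The note above about ParaView exports and the original mesh not sharing the same vertex order is essentially a point-matching problem. Below is a rough, hedged sketch of how a shuffle index like the one returned by `msh.get_shuffle_indx` could be computed with a KD-tree; this is an illustration on synthetic points, not the actual implementation inside `xmcd_projection`.

```
import numpy as np
from scipy.spatial import cKDTree

def get_shuffle_index(mesh_points, exported_points):
    """Return indices such that exported_points[indices] lines up with mesh_points."""
    tree = cKDTree(exported_points)
    distances, indices = tree.query(mesh_points)  # nearest exported point for every mesh point
    if np.max(distances) > 1e-6:
        print("Warning: some points did not match exactly; check units/scaling.")
    return indices

# Synthetic check: shuffle some points and recover the permutation.
rng = np.random.default_rng(0)
pts = rng.random((10, 3))
perm = rng.permutation(10)
idx = get_shuffle_index(pts, pts[perm])
assert np.allclose(pts[perm][idx], pts)
```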
github_jupyter
# Dask Overview Dask is a flexible library for parallel computing in Python that makes scaling out your workflow smooth and simple. On the CPU, Dask uses Pandas (NumPy) to execute operations in parallel on DataFrame (array) partitions. Dask-cuDF extends Dask where necessary to allow its DataFrame partitions to be processed by cuDF GPU DataFrames as opposed to Pandas DataFrames. For instance, when you call dask_cudf.read_csv(…), your cluster’s GPUs do the work of parsing the CSV file(s) with underlying cudf.read_csv(). Dask also supports array based workflows using CuPy. ## When to use Dask If your workflow is fast enough on a single GPU or your data comfortably fits in memory on a single GPU, you would want to use cuDF or CuPy. If you want to distribute your workflow across multiple GPUs, have more data than you can fit in memory on a single GPU, or want to analyze data spread across many files at once, you would want to use Dask. One additional benefit Dask provides is that it lets us easily spill data between device and host memory. This can be very useful when we need to do work that would otherwise cause out of memory errors. In this brief notebook, you'll walk through an example of using Dask on a single GPU. Because we're using Dask, the same code in this notebook would work on two, eight, 16, or 100s of GPUs. # Creating a Local Cluster The easiest way to scale workflows on a single node is to use the `LocalCUDACluster` API. This lets us create a GPU cluster, using one worker per GPU by default. In this case, we'll pass the following arguments. - `CUDA_VISIBLE_DEVICES`, to limit our cluster to a single GPU (for demonstration purposes). - `device_memory_limit`, to illustrate how we can spill data between GPU and CPU memory. Artificial memory limits like this reduce our performance if we don't actually need them, but can let us accomplish much larger tasks when we do. - `rmm_pool_size`, to use the RAPIDS Memory Manager to allocate one big chunk of memory upfront rather than having our operations call `cudaMalloc` all the time under the hood. This improves performance, and is generally a best practice. ``` from dask.distributed import Client, fire_and_forget, wait from dask_cuda import LocalCUDACluster from dask.utils import parse_bytes import dask cluster = LocalCUDACluster( CUDA_VISIBLE_DEVICES="0,1", device_memory_limit=parse_bytes("3GB"), rmm_pool_size=parse_bytes("16GB"), ) client = Client(cluster) client ``` Click the **Dashboard** link above to view your Dask dashboard. ## cuDF DataFrames to Dask DataFrames Dask lets scale our cuDF workflows. We'll walk through a couple of examples below, and then also highlight how Dask lets us spill data from GPU to CPU memory. First, we'll create a dataframe with CPU Dask and then send it to the GPU ``` import cudf import dask_cudf ddf = dask_cudf.from_dask_dataframe(dask.datasets.timeseries()) ddf.head() ``` ### Example One: Groupby-Aggregations ``` ddf.groupby(["id", "name"]).agg({"x":['sum', 'mean']}).head() ``` Run the code above again. If you look at the task stream in the dashboard, you'll notice that we're creating the data every time. That's because Dask is lazy. We need to `persist` the data if we want to cache it in memory. ``` ddf = ddf.persist() wait(ddf); ddf.groupby(["id", "name"]).agg({"x":['sum', 'mean']}).head() ``` This is the same API as cuDF, except it works across many GPUs. ### Example Two: Rolling Windows We can also do things like rolling window calculations with Dask and GPUs. 
``` ddf.head() rolling = ddf[['x','y']].rolling(window=3) type(rolling) rolling.mean().head() ``` ## Larger than GPU Memory Workflows What if we needed to scale up even more, but didn't have enough GPU memory? Dask handles spilling for us, so we don't need to worry about it. The `device_memory_limit` parameter we used while creating the LocalCluster determines when we should start spilling. In this case, we'll start spilling when we've used about 4GB of GPU memory. Let's create a larger dataframe to use as an example. ``` ddf = dask_cudf.from_dask_dataframe(dask.datasets.timeseries(start="2000-01-01", end="2003-12-31", partition_freq='60d')) ddf = ddf.persist() len(ddf) print(f"{ddf.memory_usage(deep=True).sum().compute() / 1e9} GB of data") ddf.head() ``` Let's imagine we have some downstream operations that require all the data from a given unique identifier in the same partition. We can repartition our data based on the `name` column using the `shuffle` API. Repartitioning our large dataframe will spike GPU memory higher than 4GB, so we'll need to spill to CPU memory. ``` ddf = ddf.shuffle(on="id") ddf = ddf.persist() len(ddf) ``` Watch the Dask Dashboard while this runs. You should see a lot of tasks in the stream like `disk-read` and `disk-write`. Setting a `device_memory_limit` tells dask to spill to CPU memory and potentially disk (if we overwhelm CPU memory). This lets us do these large computations even when we're almost out of memory (though in this case, we faked it). # Dask Custom Functions Dask DataFrames also provide a `map_partitions` API, which is very useful for parallelizing custom logic that doesn't quite fit perfectly or doesn't need to be used with the Dask dataframe API. Dask will `map` the function to every partition of the distributed dataframe. Now that we have all the rows of each `id` collected in the same partitions, what if we just wanted to sort **within each partition**. Avoiding global sorts is usually a good idea if possible, since they're very expensive operations. ``` sorted_ddf = ddf.map_partitions(lambda x: x.sort_values("id")) len(sorted_ddf) ``` We could also do something more complicated and wrap it into a function. Let's do a rolling window on the two value columns after sorting by the id column. ``` def sort_and_rolling_mean(df): df = df.sort_values("id") df = df.rolling(3)[["x", "y"]].mean() return df result = ddf.map_partitions(sort_and_rolling_mean) result = result.persist() wait(result); # let's look at a random partition result.partitions[12].head() ``` Pretty cool. When we're using `map_partitions`, the function is executing on the individual cuDF DataFrames that make up our Dask DataFrame. This means we can do any cuDF operation, run CuPy array manipulations, or anything else we want. # Dask Delayed Dask also provides a `delayed` API, which is useful for parallelizing custom logic that doesn't quite fit into the DataFrame API. Let's imagine we wanted to run thousands of regressions models on different combinations of two features. We can do this experiment super easily with dask.delayed. 
``` from cuml.linear_model import LinearRegression from dask import delayed import dask import numpy as np from itertools import combinations # Setup data np.random.seed(12) nrows = 1000000 ncols = 50 df = cudf.DataFrame({f"x{i}": np.random.randn(nrows) for i in range(ncols)}) df['y'] = np.random.randn(nrows) feature_combinations = list(combinations(df.columns.drop("y"), 2)) feature_combinations[:10] len(feature_combinations) # Many calls to linear regression, parallelized with Dask @delayed def fit_ols(df, feature_cols, target_col="y"): clf = LinearRegression() clf.fit(df[list(feature_cols)], df[target_col]) return feature_cols, clf.coef_, clf.intercept_ # scatter the data to the workers beforehand data_future = client.scatter(df, broadcast=True) results = [] for features in feature_combinations: # note how i'm passing the scattered data future res = fit_ols(data_future, features) results.append(res) res = dask.compute(results) res = res[0] print("Features\t\tCoefficients\t\t\tIntercept") for i in range(5): print(res[i][0], res[i][1].values, res[i][2], sep="\t") ``` # Handling Parquet Files Dask and cuDF provide accelerated Parquet readers and writers, and it's useful to take advantage of these tools. To start, let's write out our DataFrame `ddf` to Parquet files using the `to_parquet` API and delete it from memory. ``` print(ddf.npartitions) ddf.to_parquet("ddf.parquet") del ddf ``` Let's take a look at what happened. ``` !ls ddf.parquet | head ``` We end up with many parquet files, and one metadata file. Dask will write one file per partition. Let's read the data back in with `dask_cudf.read_parquet`. ``` ddf = dask_cudf.read_parquet("ddf.parquet/") ddf ``` Why do we have more partitions than files? It turns out, Dask's readers do things like chunk our data by default. Additionally, the `_metadata` file helps provide guidelines for reading the data. But, we can still read them on a per-file basis if want by using a `*` wildcard in the filepath and ignoring the metadata. ``` ddf = dask_cudf.read_parquet("ddf.parquet/*.parquet") ddf ``` Let's now write one big parquet file and then read it back in. We can `repartition` our dataset down to a single partition. ``` ddf.repartition(npartitions=1).to_parquet("big_ddf.parquet") dask_cudf.read_parquet("big_ddf.parquet/") ``` We still get lots of partitions? We can control the splitting behavior using the `split_row_groups` parameter. ``` dask_cudf.read_parquet("big_ddf.parquet/", split_row_groups=False) ``` In general, we want to avoid massive partitions. The sweet spot is probably around 2-3 GB of data per partition for a 32GB V100. # Understanding Persist and Compute Before we close, it's worth coming back to the concepts of `persist` and `compute`. We've seen them several times, but haven't gone into depth. Most Dask operations are lazy. This is a common pattern in distributed computing, but is likely unfamiliar to those who primarily use single-machine libraries like pandas and cuDF. As a result, you'll usually need to call an **eager** operation like `len` or `persist` to actually trigger work. In general, you should avoid calling `compute` except when collecting small datasets or scalars. When we spin up a cluster, we're interacting with our cluster in what we call the `Client` Python process. When we created a `Client` object above, this is what we did. Calling `compute` brings all of the results back to a single GPU cuDF DataFrame in the client process, not in any of the worker processes. 
This means we're not using the same memory pool, so we could go out of memory if we're not careful. For those of you with Spark experience, you can think of `persist` as triggering work and caching the dataframe in distributed memory and `compute` as collecting the data or results into a single GPU dataframe (cuDF) on the driver. ### Should I Persist My Data? Persisting is generally a good idea if the data needs to be accessed multiple times, to avoid repeated computation. However, if the size of your data would lead to memory pressure, this could cause spilling, which hurts performance. As a best practice, we recommend persisting only when necessary or when you're using an eager operation in the middle of your workflow (to avoid repeating computation). Note that calling `df.head` is an eager operation, which will trigger some computation. If you're going to be doing exploratory data analysis or visually inspecting the data, you would want to persist beforehand. # Summary RAPIDS lets us scale up and take advantage of GPU acceleration. Dask lets us scale out to multiple machines. Dask supports both cuDF DataFrames and CuPy arrays, with generally the same APIs as the single-machine libraries. We encourage you to read the Dask [documentation](https://docs.dask.org/en/latest/) to learn more, and also look at our [10 Minute Guide to cuDF and Dask cuDF](https://docs.rapids.ai/api/cudf/nightly/10min.html)
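As a compact illustration of the persist/compute distinction described above, the sketch below reuses the `ddf` created earlier in this notebook (assumed to still be in memory) and shows the three states a computation can be in: lazy, persisted on the workers, and collected into the client.

```
# Lazy: only builds a task graph; no work happens yet.
lazy_result = ddf.groupby("name")["x"].mean()

# persist: runs the graph on the cluster and keeps the (still distributed) result
# in worker memory, so later operations can reuse it without recomputation.
distributed_result = lazy_result.persist()
wait(distributed_result)

# compute: runs the graph and pulls the full result back into the client process
# as a single cuDF object -- only safe for small outputs like this aggregation.
local_result = lazy_result.compute()

print(type(distributed_result))
print(type(local_result))
```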
github_jupyter
<table class="ee-notebook-buttons" align="left"> <td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/FeatureCollection/distance.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td> <td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/FeatureCollection/distance.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td> <td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/FeatureCollection/distance.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td> </table> ## Install Earth Engine API and geemap Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://geemap.org). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`. The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet. ``` # Installs geemap package import subprocess try: import geemap except ImportError: print('Installing geemap ...') subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap']) import ee import geemap ``` ## Create an interactive map The default basemap is `Google Maps`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/basemaps.py) can be added using the `Map.add_basemap()` function. ``` Map = geemap.Map(center=[40,-100], zoom=4) Map ``` ## Add Earth Engine Python script ``` # Add Earth Engine dataset # Collection.distance example. # Computes the distance to the nearest feature in a collection. # Construct a FeatureCollection from a list of geometries. fc = ee.FeatureCollection([ ee.Geometry.Point(-72.94411, 41.32902), ee.Geometry.Point(-72.94411, 41.33402), ee.Geometry.Point(-72.94411, 41.33902), # The geometries do not need to be the same type. ee.Geometry.LineString( -72.93411, 41.30902, -72.93411, 41.31902, -72.94411, 41.31902) ]) # Compute distance from the dfeatures, to a max of 1000 meters. distance = fc.distance(1000, 100) Map.setCenter(-72.94, 41.32, 13) Map.addLayer(distance, {'min': 0, 'max': 1000, 'palette': ['yellow', 'red']}, 'distance') Map.addLayer(fc, {}, 'Features') ``` ## Display Earth Engine data layers ``` Map.addLayerControl() # This line is not needed for ipyleaflet-based Map. Map ```
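## Export the distance image (optional)

If the computed distance raster needs to be kept, one option is a standard Earth Engine batch export. This is only a sketch: the description, folder name, and scale are placeholder choices, and the region is simply the bounding box of the feature collection defined above.

```
# Export the distance image computed above to Google Drive (placeholder settings).
export_task = ee.batch.Export.image.toDrive(
    image=distance,
    description='distance_to_features',   # arbitrary task name
    folder='earthengine_exports',         # hypothetical Drive folder
    region=fc.geometry().bounds(),        # bounding box around the feature collection
    scale=30,                             # metres per pixel
    maxPixels=1e9,
)
export_task.start()
print(export_task.status())
```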
github_jupyter
# Hypothesis: Are digitised practices causing more failures? ## Hypothesis We believe that practices undergoing Lloyd Gerge digitisation have an increased failure rate. We will know this to be true when we look at their data for the last three months, and see that either their failures have increased, or that in general their failures are higher than average. ## Context From the months of May-Aug 2021, we see a steady increase of TPP-->EMIS Large message general failures. A general hypothesis is that this is due to record sizes increasing, which could be due to Lloyd George digitisation. This has prompted a more general hypothesis to identify whether digitisation is impacting failure rates. ## Scope - Generate a transfer outcomes table for each of the below CCGs split down for May, June, July: - Sunderland - Fylde and Wyre - Chorley and South Ribble - Blackpool - Birmingham and Solihull - Show technical failure rate for each month, for each practice in the CCG - Separate out outcomes for transfers in, and transfers out - Do this for practices as a sender and as a requester ``` import pandas as pd import numpy as np import paths from data.practice_metadata import read_asid_metadata asid_lookup=read_asid_metadata("prm-gp2gp-ods-metadata-preprod", "v2/2021/8/organisationMetadata.json") transfer_file_location = "s3://prm-gp2gp-transfer-data-preprod/v4/" transfer_files = [ "2021/5/transfers.parquet", "2021/6/transfers.parquet", "2021/7/transfers.parquet" ] transfer_input_files = [transfer_file_location + f for f in transfer_files] transfers_raw = pd.concat(( pd.read_parquet(f) for f in transfer_input_files )) transfers = transfers_raw\ .join(asid_lookup.add_prefix("requesting_"), on="requesting_practice_asid", how="left")\ .join(asid_lookup.add_prefix("sending_"), on="sending_practice_asid", how="left")\ transfers['month']=transfers['date_requested'].dt.to_period('M') def generate_monthly_outcome_breakdown(transfers, columns): total_transfers = ( transfers .groupby(columns) .size() .to_frame("Total Transfers") ) transfer_outcomes=pd.pivot_table( transfers, index=columns, columns=["status"], aggfunc='size' ) transfer_outcomes_pc = ( transfer_outcomes .div(total_transfers["Total Transfers"],axis=0) .multiply(100) .round(2) .add_suffix(" %") ) failed_transfers = ( transfers .assign(failed_transfer=transfers["status"] != "INTEGRATED_ON_TIME") .groupby(columns) .agg({'failed_transfer': 'sum'}) .rename(columns={'failed_transfer': 'ALL_FAILURE'}) ) failed_transfers_pc = ( failed_transfers .div(total_transfers["Total Transfers"],axis=0) .multiply(100) .round(2) .add_suffix(" %") ) return pd.concat([ total_transfers, transfer_outcomes, failed_transfers, transfer_outcomes_pc, failed_transfers_pc, ],axis=1).fillna(0) ``` ## Generate national transfer outcomes ``` national_metrics_monthly=generate_monthly_outcome_breakdown(transfers, ["month"]) national_metrics_monthly ``` ## Generate digitised CCG transfer outcomes ``` ccgs_to_investigate = [ "NHS SUNDERLAND CCG", 'NHS FYLDE AND WYRE CCG', 'NHS CHORLEY AND SOUTH RIBBLE CCG', 'NHS BLACKPOOL CCG', 'NHS BIRMINGHAM AND SOLIHULL CCG' ] is_requesting_ccg_of_interest = transfers.requesting_ccg_name.isin(ccgs_to_investigate) is_sending_ccg_of_interest = transfers.sending_ccg_name.isin(ccgs_to_investigate) requesting_transfers_of_interest = transfers[is_requesting_ccg_of_interest] sending_transfers_of_interest = transfers[is_sending_ccg_of_interest] ``` ### Requesting CCGs (Digitised) ``` requesting_ccgs_monthly=generate_monthly_outcome_breakdown( 
transfers=requesting_transfers_of_interest, columns=["requesting_ccg_name", "month"] ) requesting_ccgs_monthly ``` ### Sending CCGs (Digitised) ``` sending_ccgs_monthly=generate_monthly_outcome_breakdown( transfers=sending_transfers_of_interest, columns=["sending_ccg_name", "month"] ) sending_ccgs_monthly ``` ### Requesting practices (digitised) ``` requesting_practices_monthly=generate_monthly_outcome_breakdown( transfers=requesting_transfers_of_interest, columns=["requesting_ccg_name", "requesting_practice_name", "requesting_practice_ods_code", "requesting_supplier", "month"] ) requesting_practices_monthly ``` ### Sending practices (digitised) ``` sending_practices_monthly=generate_monthly_outcome_breakdown( transfers=sending_transfers_of_interest, columns=["sending_ccg_name", "sending_practice_name", "sending_practice_ods_code", "sending_supplier", "month"] ) sending_practices_monthly ``` ## Looking at failure rate trends by CCG when requesting a record ``` barplot_config = { 'color': ['lightsteelblue', 'cornflowerblue', 'royalblue'], 'edgecolor':'black', 'kind':'bar', 'figsize': (15,6), 'rot': 30 } def requesting_ccg_barplot(column_name, title): ( pd .concat({'All CCGs': national_metrics_monthly}, names=['requesting_ccg_name']) .append(requesting_ccgs_monthly) .unstack() .plot( y=column_name, title=title, **barplot_config ) ) requesting_ccg_barplot('ALL_FAILURE %', 'Total Failure Percentage (Digitised CCGs - Requesting)') requesting_ccg_barplot('TECHNICAL_FAILURE %', 'Technical Failure Percentage (Digitised CCGs - Requesting)') requesting_ccg_barplot('PROCESS_FAILURE %', 'Process Failure Percentage (Digitised CCGs - Requesting)') requesting_ccg_barplot('UNCLASSIFIED_FAILURE %', 'Unlassified Failure Percentage (Digitised CCGs - Requesting)') ``` ## Looking at failure rate trends by CCG when sending a record ``` def sending_ccg_barplot(column_name, title): ( pd .concat({'All CCGs': national_metrics_monthly}, names=['sending_ccg_name']) .append(sending_ccgs_monthly) .unstack() .plot( y=column_name, title=title, **barplot_config ) ) sending_ccg_barplot('ALL_FAILURE %', 'Total Failure Percentage (Digitised CCGs - Sending)') sending_ccg_barplot('TECHNICAL_FAILURE %', 'Technical Failure Percentage (Digitised CCGs - Sending)') sending_ccg_barplot('PROCESS_FAILURE %', 'Process Failure Percentage (Digitised CCGs - Sending)') sending_ccg_barplot('UNCLASSIFIED_FAILURE %', 'Unlassified Failure Percentage (Digitised CCGs - Sending)') ``` ## Write CCG transfer outcomes by sending and requesting practice to Excel ``` with pd.ExcelWriter('PRMT-2332-Digitisation-Failure-Rates-May-July-2021.xlsx') as writer: national_metrics_monthly.to_excel(writer, sheet_name="National Baseline") requesting_ccgs_monthly.to_excel(writer, sheet_name="Digitised CCGs (Req)") sending_ccgs_monthly.to_excel(writer, sheet_name="Digitised CCGs (Send)") requesting_practices_monthly.to_excel(writer, sheet_name="Digitised Practices (Req)") sending_practices_monthly.to_excel(writer, sheet_name="Digitised Practices (Send)") ```
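To make the monthly trends easier to scan than the raw tables above, the sketch below computes the month-over-month change in the failure percentages for the requesting digitised CCGs. It assumes the column names produced by `generate_monthly_outcome_breakdown` (e.g. 'TECHNICAL_FAILURE %' and 'ALL_FAILURE %'), which are the same columns plotted above.

```
# Month-over-month change in failure percentages, per digitised CCG (requesting side).
failure_columns = ['TECHNICAL_FAILURE %', 'ALL_FAILURE %']

requesting_ccg_trends = (
    requesting_ccgs_monthly[failure_columns]
    .groupby(level='requesting_ccg_name')
    .diff()                      # change vs the previous month within each CCG
    .add_suffix(' (m/m change)')
)
display(requesting_ccg_trends)
```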
github_jupyter
# Data Collection Using Web Scraping

## To solve this problem we will need the following data:

● List of neighborhoods in Pune.
● Latitude and longitude coordinates of those neighborhoods.
● Venue data for each neighborhood.

## Sources

● For the list of neighborhoods, I used Wikipedia (https://en.wikipedia.org/wiki/Category:Neighbourhoods_in_Pune)
● For latitude and longitude coordinates: Python Geocoder package (https://geocoder.readthedocs.io/)
● For venue data: Foursquare API (https://foursquare.com/)

## Methods to extract data from Sources

To extract the data we will use Python packages such as requests, beautifulsoup and geocoder. We will use the Requests and BeautifulSoup packages for web scraping (https://en.wikipedia.org/wiki/Category:Neighbourhoods_in_Pune) to get the list of neighborhoods in Pune, and the geocoder package to get the latitude and longitude coordinates of each neighborhood. Then we will use Folium to plot these neighborhoods on the map. After that, we will use the Foursquare API to get the venue data for those neighborhoods. The Foursquare API provides many categories of venue data, but we are particularly interested in the supermarket category in order to solve the business problem.

## Imports

```
import numpy as np # library to handle data in a vectorized manner
import pandas as pd # library for data analysis
pd.set_option("display.max_columns", None)
pd.set_option("display.max_rows", None)

import json # library to handle JSON files

from geopy.geocoders import Nominatim # convert an address into latitude and longitude values

!pip install geocoder
import geocoder # to get coordinates

!pip install requests
import requests # library to handle requests
from bs4 import BeautifulSoup # library to parse HTML and XML documents

from pandas.io.json import json_normalize # transform JSON file into a pandas dataframe

print("Libraries imported.")
```

## Collecting the neighborhood data using the Requests, BeautifulSoup, and Geocoder libraries

```
data = requests.get("https://en.wikipedia.org/wiki/Category:Neighbourhoods_in_Pune").text

# parse data from the html into a beautifulsoup object
soup = BeautifulSoup(data, 'html.parser')

# create a list to store neighborhood data
neighborhood_List = []

# append the data into the list
for row in soup.find_all("div", class_="mw-category")[0].findAll("li"):
    neighborhood_List.append(row.text)

# create a new DataFrame from the list
Pune_df = pd.DataFrame({"Neighborhood": neighborhood_List})
Pune_df.tail()

# define a function to get coordinates
def get_cord(neighborhood):
    coords = None
    # loop until you get the coordinates
    while(coords is None):
        g = geocoder.arcgis('{}, Pune, Maharashtra'.format(neighborhood))
        coords = g.latlng
    return coords

# create a list and store the coordinates
coords = [ get_cord(neighborhood) for neighborhood in Pune_df["Neighborhood"].tolist() ]
coords[:10]

df_coords = pd.DataFrame(coords, columns=['Latitude', 'Longitude'])

# merge the coordinates into the original dataframe
Pune_df['Latitude'] = df_coords['Latitude']
Pune_df['Longitude'] = df_coords['Longitude']

# check the neighborhoods and the coordinates
print(Pune_df.shape)
Pune_df.head(10)

# save the DataFrame as CSV file
Pune_df.to_csv("Pune_df.csv", index=False)
```

## Collecting the neighborhood venue data using the Foursquare API

```
# define Foursquare Credentials and Version
CLIENT_ID = '5HUDVH14DMECWUAFI2MICONBTTDPW1CCL1C4TFGE3FEHEUHJ' # your Foursquare ID
CLIENT_SECRET = 'R0WIH5UIW2SADKBUW4B4WMY2QWBBT0Q02IURAXQXVJZMTDIV' # your Foursquare Secret
VERSION = '20180605' # Foursquare API version

print('Your credentials:')
print('CLIENT_ID: ' + CLIENT_ID)
print('CLIENT_SECRET:' + CLIENT_SECRET)

radius = 3000
LIMIT = 150

venues = []

for lat, long, neighborhood in zip(Pune_df['Latitude'], Pune_df['Longitude'], Pune_df['Neighborhood']):
    # create the API request URL
    url = "https://api.foursquare.com/v2/venues/explore?client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}".format(
        CLIENT_ID, CLIENT_SECRET, VERSION, lat, long, radius, LIMIT)
    # make the GET request
    results = requests.get(url).json()["response"]['groups'][0]['items']
    # return only relevant information for each nearby venue
    for venue in results:
        venues.append((
            neighborhood,
            lat,
            long,
            venue['venue']['name'],
            venue['venue']['location']['lat'],
            venue['venue']['location']['lng'],
            venue['venue']['categories'][0]['name']))

# convert the venues list into a new DataFrame
venues_df = pd.DataFrame(venues)

# define the column names
venues_df.columns = ['Neighborhood', 'Latitude', 'Longitude', 'VenueName', 'VenueLatitude', 'VenueLongitude', 'VenueCategory']

print(venues_df.shape)
venues_df.head()

print('There are {} unique categories.'.format(len(venues_df['VenueCategory'].unique())))

# print out the list of categories
venues_df['VenueCategory'].unique()

venues_df.to_csv("venues_df.csv")
```
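As a next step towards the supermarket question stated at the top of this notebook, the sketch below one-hot encodes the venue categories and ranks neighborhoods by how frequent supermarkets are among their venues. 'Supermarket' is assumed to be the exact Foursquare category label; the guard handles the case where it is spelled differently or absent from this sample.

```
# One-hot encode the venue categories and rank neighborhoods by supermarket frequency.
venues_onehot = pd.get_dummies(venues_df[['VenueCategory']], prefix="", prefix_sep="")
venues_onehot['Neighborhood'] = venues_df['Neighborhood']

# mean of the one-hot columns = frequency of each category per neighborhood
neighborhood_freq = venues_onehot.groupby('Neighborhood').mean()

if 'Supermarket' in neighborhood_freq.columns:
    print(neighborhood_freq['Supermarket'].sort_values(ascending=False).head(10))
else:
    print("No 'Supermarket' category found in this venue sample.")
```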
github_jupyter
# Monte Carlo Simulation

This notebook provides an introduction to Monte Carlo simulation using a toy example built on the famous casino game **Roulette**. During the game, players (bettors) bet on an integer, a colour or a range, and win if their bet was correct. A roulette wheel usually has 37 slots: one green (0), eighteen "red" (1-18 in our toy numbering) and eighteen "black" (19-36). In our case the bettor bets on the range 1-18 (equivalent to betting on red in this toy model) and wins if the randomly tossed ball lands on a slot in that range; otherwise the bettor loses. On a win the budget is increased by the amount of the bet; on a loss it is reduced by that amount.

In this example we will compare two bettors:

- **Simple bettor** - has a fixed budget, decides beforehand how many periods to play, and always bets the same amount regardless of any other factor.
- **Smart bettor** - has a fixed budget, decides beforehand how many periods to play, returns to the initial bet after a win, but doubles the bet after a loss.

The simulation is expected to show that the **simple bettor** is a clear loser, while the **smart bettor**, if the budget is large enough, can be a clear winner (in average terms).

As the ball is tossed randomly, we will need a random integer generator; we will also plot the results, so plotting libraries will be handy. We will start by developing the **spinner** function, then write separate functions for the simple and smart bettors, and simulate each of them many (e.g. 100) times to see what happens.

Note: both functions contain a component that keeps the budget from going negative. It is active in the simple bettor but commented out in the smart one, so you can see what happens when no condition is placed on the budget (i.e. the budget can even become negative).
To make it more realistic, you are encouraged to uncomment the following component in **smart bettor** and see what happens:

```
# if budget<=0:
#     break
```

```
import random # to generate random integers
import matplotlib.pyplot as plt # to plot simulation
import seaborn as sns # to plot distribution

plt.rc('figure', figsize=(20.0, 10.0)) # make the default plots bigger

# the random spinner: slots are numbered 0-36, where 0 is green,
# 1-18 count as "red" (a win for our bettor) and 19-36 as "black" (a loss)
def spinner():
    slot = random.randint(0, 36)
    if slot == 0:
        return "lost"
    elif 1 <= slot <= 18:
        return "won"
    elif 19 <= slot <= 36:
        return "lost"

# the simple bettor
def simple_bettor(budget, bet, periods):
    X_axis = []
    Y_axis = []
    currentPeriod = 1
    while currentPeriod <= periods:
        result = spinner()
        if result == "won":
            budget = budget + bet
        elif result == "lost":
            budget = budget - bet
        if budget <= 0:
            break
        X_axis.append(currentPeriod)
        Y_axis.append(budget)
        currentPeriod = currentPeriod + 1
    plt.plot(X_axis, Y_axis)
    return Y_axis[-1]

# the smart/doubler bettor
def smart_bettor(budget, bet, periods):
    X_axis = []
    Y_axis = []
    currentPeriod = 1
    initial_bet = bet
    while currentPeriod <= periods:
        result = spinner()
        if result == "won":
            budget = budget + bet
            bet = initial_bet
        elif result == "lost":
            budget = budget - bet
            bet = bet * 2
        #if budget<=0:
        #    break
        X_axis.append(currentPeriod)
        Y_axis.append(budget)
        currentPeriod = currentPeriod + 1
    plt.subplot(121)
    plt.plot(X_axis, Y_axis)
    return Y_axis[-1]

# the simulation of multiple possible futures (for simple)
futures = 1
while futures < 101:
    simple_bettor(10000, 100, 1000)
    futures = futures + 1
plt.title('Simple bettor')
plt.ylabel('Budget')
plt.xlabel('Periods')
plt.show()

# the simulation of multiple possible futures (for smart)
futures = 1
outcomes = []
while futures < 101:
    outcomes.append(smart_bettor(10000, 100, 1000))
    futures = futures + 1
plt.title('Smart bettor')
plt.ylabel('Budget')
plt.xlabel('Periods')
plt.subplot(122)
sns.distplot(outcomes, bins=25, vertical=True)
#plt.subplots_adjust(wspace=0.5)
plt.show()
```
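A quick numerical summary of the smart bettor's simulated futures complements the plots above; it assumes the `outcomes` list from the previous cell is still in memory, and uses the initial budget of 10000 from the simulation.

```
import numpy as np  # not imported earlier in this notebook

outcomes_arr = np.array(outcomes)

print("Futures simulated:      ", len(outcomes_arr))
print("Mean final budget:      ", outcomes_arr.mean())
print("Median final budget:    ", np.median(outcomes_arr))
print("Share ending in profit: ", (outcomes_arr > 10000).mean())  # initial budget was 10000
print("Worst final budget:     ", outcomes_arr.min())
```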
github_jupyter
# Tabulate results ``` import os import sys from typing import Tuple import pandas as pd from tabulate import tabulate from tqdm import tqdm sys.path.append('../src') from read_log_file import read_log_file LOG_HOME_DIR = os.path.join('../logs_v1/') assert os.path.isdir(LOG_HOME_DIR) MODEL_NAMES = ['logistic_regression', 'transformer_encoder', 'bert-base-uncased', 'bert-base-multilingual-cased'] SETUPS = ['zero', 'few50', 'few100', 'few150', 'few200', 'full', 'trg'] def get_best_score_from_dict(di: dict) -> dict: """Get max value from a dict""" keys_with_max_val = [] # find max value max_val = -float('inf') for k, v in di.items(): if v > max_val: max_val = v # find all keys with max value for k, v in di.items(): if v == max_val: keys_with_max_val.append(k) return { 'k': keys_with_max_val, 'v': max_val, } def create_best_results_df(langs: str) -> Tuple[pd.DataFrame, pd.DataFrame]: results_dict = {} for model_name in MODEL_NAMES: results_dict[model_name] = {} log_dir = os.path.join(LOG_HOME_DIR, langs, model_name) log_filenames = os.listdir(log_dir) for fname in log_filenames: results_dict[model_name][fname] = read_log_file( log_file_path=os.path.join(log_dir, fname), plot=False, verbose=False, )['best_val_metrics']['f1'] best_results_dict = {'Setup': SETUPS} best_hparams_dict = {'Setup': SETUPS} best_results_dict.update({model_name: [] for model_name in MODEL_NAMES}) best_hparams_dict.update({model_name: [] for model_name in MODEL_NAMES}) for model_name in MODEL_NAMES: for setup in SETUPS: best_score = get_best_score_from_dict( {k: v for k, v in results_dict[model_name].items() if k.startswith(f'{setup}_')} ) best_results_dict[model_name].append( best_score['v'] ) best_hparams_dict[model_name].append( best_score['k'] ) best_results_df = pd.DataFrame(best_results_dict) best_hparams_df = pd.DataFrame(best_hparams_dict) return best_results_df, best_hparams_df def highlight_best_score(df: pd.DataFrame) -> pd.DataFrame: """Highlight best score in each row""" return df.style.apply(lambda x: ['background: red' if isinstance(v, float) and v == max(x.iloc[1:]) else '' for v in x], axis=1) def tabulate_markdown(df: pd.DataFrame) -> str: """Tabulate in markdown format and bold best scores in each row""" df = df.round(4) for model_name in MODEL_NAMES: df[model_name] = df[model_name].astype(str) for idx in range(len(df)): max_val = max(float(df.iloc[idx][model_name]) for model_name in MODEL_NAMES) for model_name in MODEL_NAMES: cell_val = float(df.iloc[idx][model_name]) if cell_val == max_val: df.at[idx, model_name] = f'**{cell_val}**' else: df.at[idx, model_name] = f'{cell_val}' return tabulate(df, headers='keys', showindex=False, tablefmt='github') best_results_dfs_dict = {} best_hparams_dfs_dict = {} for langs in tqdm(['enbg', 'enar', 'bgen', 'bgar', 'aren', 'arbg']): best_results_dfs_dict[langs], best_hparams_dfs_dict[langs] = create_best_results_df(langs) ``` ## en-bg ``` highlight_best_score(best_results_dfs_dict['enbg']) print(tabulate_markdown(best_results_dfs_dict['enbg'])) best_hparams_dfs_dict['enbg'] ``` ## en-ar ``` highlight_best_score(best_results_dfs_dict['enar']) print(tabulate_markdown(best_results_dfs_dict['enar'])) best_hparams_dfs_dict['enar'] ``` ## bg-en ``` highlight_best_score(best_results_dfs_dict['bgen']) print(tabulate_markdown(best_results_dfs_dict['bgen'])) best_hparams_dfs_dict['bgen'] ``` ## bg-ar ``` highlight_best_score(best_results_dfs_dict['bgar']) print(tabulate_markdown(best_results_dfs_dict['bgar'])) best_hparams_dfs_dict['bgar'] ``` ## ar-en ``` 
highlight_best_score(best_results_dfs_dict['aren']) print(tabulate_markdown(best_results_dfs_dict['aren'])) best_hparams_dfs_dict['aren'] ``` ## ar-bg ``` highlight_best_score(best_results_dfs_dict['arbg']) print(tabulate_markdown(best_results_dfs_dict['arbg'])) best_hparams_dfs_dict['arbg'] ```
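If the tables are going to be pasted into a report or README, the small sketch below writes every language pair's best-result table to a single markdown file; the output file name is an arbitrary placeholder.

```
# Dump all best-result tables (one per language pair) into one markdown file.
with open('best_results_tables.md', 'w') as f:
    for langs, df in best_results_dfs_dict.items():
        f.write(f'## {langs}\n\n')
        f.write(tabulate_markdown(df))
        f.write('\n\n')

print('Wrote', len(best_results_dfs_dict), 'tables to best_results_tables.md')
```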
github_jupyter
# Preparation ``` # dependencies import pandas as pd import numpy as np import missingno as msno import matplotlib.pyplot as plt import re from sklearn.model_selection import train_test_split from textwrap import wrap from sklearn.preprocessing import StandardScaler import warnings warnings.filterwarnings("ignore") import math %matplotlib inline # import data shelter_outcomes = pd.read_csv("C:/Users/sulem/OneDrive/Desktop/machin learnign/Project3/aac_shelter_outcomes.csv") # filter animal type for just cats cats = shelter_outcomes[shelter_outcomes['animal_type'] == 'Cat'] #print(cats.head()) # remove age_upon_outcome and recalculate to standard units (days) age = cats.loc[:,['datetime', 'date_of_birth']] # convert to datetime age.loc[:,'datetime'] = pd.to_datetime(age['datetime']) age.loc[:,'date_of_birth'] = pd.to_datetime(age['date_of_birth']) # calculate cat age in days cats.loc[:,'age'] = (age.loc[:,'datetime'] - age.loc[:,'date_of_birth']).dt.days # get dob info cats['dob_month'] = age.loc[:, 'date_of_birth'].dt.month cats['dob_day'] = age.loc[:, 'date_of_birth'].dt.day cats['dob_dayofweek'] = age.loc[:, 'date_of_birth'].dt.dayofweek # get month from datetime cats['month'] = age.loc[:,'datetime'].dt.month # get day of month cats['day'] = age.loc[:,'datetime'].dt.day # get day of week cats['dayofweek'] = age.loc[:, 'datetime'].dt.dayofweek # get hour of day cats['hour'] = age.loc[:, 'datetime'].dt.hour # get quarter cats['quarter'] = age.loc[:, 'datetime'].dt.quarter # clean up breed attribute # get breed attribute for processing # convert to lowercase, remove mix and strip whitespace # remove space in 'medium hair' to match 'longhair' and 'shorthair' # split on either space or '/' breed = cats.loc[:, 'breed'].str.lower().str.replace('mix', '').str.replace('medium hair', 'mediumhair').str.strip().str.split('/', expand=True) cats['breed'] = breed[0] cats['breed1'] = breed[1] # clean up color attribute # convert to lowercase # strip spaces # split on '/' color = cats.loc[:, 'color'].str.lower().str.strip().str.split('/', expand=True) cats['color'] = color[0] cats['color1'] = color[1] # clean up sex_upon_outcome sex = cats['sex_upon_outcome'].str.lower().str.strip().str.split(' ', expand=True) sex[0].replace('spayed', True, inplace=True) sex[0].replace('neutered', True, inplace=True) sex[0].replace('intact', False, inplace=True) sex[1].replace(np.nan, 'unknown', inplace=True) cats['spayed_neutered'] = sex[0] cats['sex'] = sex[1] # add in domesticated attribute cats['domestic'] = np.where(cats['breed'].str.contains('domestic'), 1, 0) # combine outcome and outcome subtype into a single attribute cats['outcome_subtype'] = cats['outcome_subtype'].str.lower().str.replace(' ', '-').fillna('unknown') cats['outcome_type'] = cats['outcome_type'].str.lower().str.replace(' ', '-').fillna('unknown') cats['outcome'] = cats['outcome_type'] + '_' + cats['outcome_subtype'] # drop unnecessary columns cats.drop(columns=['animal_id', 'name', 'animal_type', 'age_upon_outcome', 'date_of_birth', 'datetime', 'monthyear', 'sex_upon_outcome', 'outcome_subtype', 'outcome_type'], inplace=True) #print(cats['outcome'].value_counts()) cats.head() cats.drop(columns=['breed1'], inplace=True) # Breed, Color, Color1, Spayed_Netured and Sex attributes need to be one hot encoded cats_ohe = pd.get_dummies(cats, columns=['breed', 'color', 'color1', 'spayed_neutered', 'sex']) cats_ohe.head() out_t={'euthanasia_suffering' : 0, 'died_in-kennel' : 0, 'return-to-owner_unknown' : 0, 'transfer_partner' : 1, 'euthanasia_at-vet' : 2, 
'adoption_foster' : 3, 'died_in-foster' : 0, 'transfer_scrp' : 4, 'euthanasia_medical' : 0, 'transfer_snr' : 0, 'died_enroute' : 0, 'rto-adopt_unknown' : 0, 'missing_in-foster' : 0, 'adoption_offsite' : 0, 'adoption_unknown' :5,'euthanasia_rabies-risk' : 0, 'unknown_unknown' : 0, 'adoption_barn' : 0, 'died_unknown' : 0, 'died_in-surgery' : 0, 'euthanasia_aggressive' : 0, 'euthanasia_unknown' : 0, 'missing_unknown' : 0, 'missing_in-kennel' : 0, 'missing_possible-theft' : 0, 'died_at-vet' : 0, 'disposal_unknown' : 0, 'euthanasia_underage' : 0, 'transfer_barn' : 0} #output is converted from string to catogries 0 to 5 represent each output # separate outcome from data outcome = cats_ohe['outcome'] cats_ohe.drop(columns=['outcome']) print(cats_ohe.head()) # split the data X_train, X_test, y_train, y_test = train_test_split(cats_ohe, outcome, test_size=0.2, random_state=0) X_train.drop(columns=['outcome'], inplace=True) y_train = [out_t[item] for item in y_train] #print(X_train.shape, X_test.shape, y_train.shape, y_test.shape) x_train_ar=X_train.values y_target_ar=np.asarray(y_train) x_train_ar = StandardScaler().fit(x_train_ar).transform(x_train_ar) print(x_train_ar.shape) print(y_target_ar.shape) unique, counts = np.unique(y_target_ar, return_counts=True) np.asarray((unique, counts)) plt.pie(np.asarray(( counts)), labels=np.unique(y_target_ar), startangle=90, autopct='%.1f%%') plt.show() ``` # Evaluation # Modeling # Exceptional Work ``` # Example adapted from https://github.com/rasbt/python-machine-learning-book/blob/master/code/ch12/ch12.ipynb # Original Author: Sebastian Raschka # This is the optional book we use in the course, excellent intuitions and straightforward programming examples # please note, however, that this code has been manipulated to reflect our assumptions and notation. 
import numpy as np from scipy.special import expit import pandas as pd import sys # start with a simple base classifier, which can't be fit or predicted # it only has internal classes to be used by classes that will subclass it class TwoLayerPerceptronBase(object): def __init__(self, n_hidden=30, C=0.0, epochs=500, eta=0.001, random_state=None,phi='sig'): np.random.seed(random_state) self.n_hidden = n_hidden self.l2_C = C self.epochs = epochs self.eta = eta self.phi=phi @staticmethod def _encode_labels(y): """Encode labels into one-hot representation""" onehot = pd.get_dummies(y).values.T return onehot def _initialize_weights(self): """Initialize weights with small random numbers.""" W1_num_elems = (self.n_features_ + 1)*self.n_hidden W1 = np.random.uniform(-1.0, 1.0,size=W1_num_elems) W1 = W1.reshape(self.n_hidden, self.n_features_ + 1) # reshape to be W W2_num_elems = (self.n_hidden + 1)*self.n_output_ W2 = np.random.uniform(-1.0, 1.0, size=W2_num_elems) W2 = W2.reshape(self.n_output_, self.n_hidden + 1) return W1, W2 @staticmethod def _sigmoid(z,phi): """Use scipy.special.expit to avoid overflow""" # 1.0 / (1.0 + np.exp(-z)) if phi=='sig': return expit(z) if phi=='lin': return z if phi=='silu': return expit(z)*z if phi=='relu': bol= z>=0 #z=bol*z return np.maximum(0,z.copy()) @staticmethod def _add_bias_unit(X, how='column'): """Add bias unit (column or row of 1s) to array at index 0""" if how == 'column': ones = np.ones((X.shape[0], 1)) X_new = np.hstack((ones, X)) elif how == 'row': ones = np.ones((1, X.shape[1])) X_new = np.vstack((ones, X)) return X_new @staticmethod def _L2_reg(lambda_, W1, W2): """Compute L2-regularization cost""" # only compute for non-bias terms return (lambda_/2.0) * np.sqrt(np.mean(W1[:, 1:] ** 2) + np.mean(W2[:, 1:] ** 2)) def _cost(self,A3,Y_enc,W1,W2): '''Get the objective function value''' cost = np.mean((Y_enc-A3)**2) L2_term = self._L2_reg(self.l2_C, W1, W2) return cost + L2_term def _feedforward(self, X, W1, W2): """Compute feedforward step """ A1 = self._add_bias_unit(X, how='column') A1 = A1.T Z1 = W1 @ A1 A2 = self._sigmoid(Z1,self.phi) A2 = self._add_bias_unit(A2, how='row') Z2 = W2 @ A2 A3 = self._sigmoid(Z2,'sig') return A1, Z1, A2, Z2, A3 def _div(b,A_,phi): if phi=='sig': return A_*(1-A_) if phi=='lin': return 1 if phi=='silu': return (expit(A_)*A_)+(expit(A_)*(1-expit(A_)*A_)) if phi=='relu': bol= A_>=0 return 1 def _get_gradient(self, A1, A2, A3, Z1, Z2, Y_enc, W1, W2): """ Compute gradient step using backpropagation. 
""" # vectorized backpropagation Z1_with_bias = self._add_bias_unit(Z1,how='row') Z2_with_bias = self._add_bias_unit(Z2,how='row') V2 = -2*(Y_enc-A3)*self._div(A3,self.phi) # last layer sensitivity V1 = self._div(A2,self.phi)*(W2.T @ V2) # back prop the sensitivity if self.phi=='relu': #print(Z2_with_bias.shape) #print(V2.shape) V1[Z1_with_bias<=0] = 0 V2[Z2<=0] = 0 grad2 = V2 @ A2.T # no bias on final layer grad1 = V1[1:,:] @ A1.T # dont back prop sensitivity of bias # regularize weights that are not bias terms grad1[:, 1:] += W1[:, 1:] * self.l2_C grad2[:, 1:] += W2[:, 1:] * self.l2_C return grad1, grad2 def predict(self, X): """Predict class labels""" _, _, _, _, A3 = self._feedforward(X, self.W1, self.W2) y_pred = np.argmax(A3, axis=0) return y_pred from sklearn.metrics import accuracy_score # just start with the vectorized version and minibatch class TLPMiniBatch(TwoLayerPerceptronBase): def __init__(self, alpha=0.0, decrease_const=0.0, shuffle=True, minibatches=1, **kwds): # need to add to the original initializer self.alpha = alpha self.decrease_const = decrease_const self.shuffle = shuffle self.minibatches = minibatches # but keep other keywords super().__init__(**kwds) def fit(self, X, y, print_progress=False): """ Learn weights from training data. With mini-batch""" X_data, y_data = X.copy(), y.copy() Y_enc = self._encode_labels(y) # init weights and setup matrices self.n_features_ = X_data.shape[1] self.n_output_ = Y_enc.shape[0] self.W1, self.W2 = self._initialize_weights() delta_W1_prev = np.zeros(self.W1.shape) delta_W2_prev = np.zeros(self.W2.shape) self.cost_ = [] self.score_ = [] # get starting acc self.score_.append(accuracy_score(y_data,self.predict(X_data))) for i in range(self.epochs): # adaptive learning rate self.eta /= (1 + self.decrease_const*i) if print_progress>0 and (i+1)%print_progress==0: sys.stderr.write('\rEpoch: %d/%d' % (i+1, self.epochs)) sys.stderr.flush() if self.shuffle: idx_shuffle = np.random.permutation(y_data.shape[0]) X_data, Y_enc, y_data = X_data[idx_shuffle], Y_enc[:, idx_shuffle], y_data[idx_shuffle] mini = np.array_split(range(y_data.shape[0]), self.minibatches) mini_cost = [] for idx in mini: # feedforward A1, Z1, A2, Z2, A3 = self._feedforward(X_data[idx], self.W1, self.W2) cost = self._cost(A3,Y_enc[:, idx],self.W1,self.W2) mini_cost.append(cost) # this appends cost of mini-batch only # compute gradient via backpropagation grad1, grad2 = self._get_gradient(A1=A1, A2=A2, A3=A3, Z1=Z1, Z2=Z2, Y_enc=Y_enc[:, idx], W1=self.W1,W2=self.W2) # momentum calculations delta_W1, delta_W2 = self.eta * grad1, self.eta * grad2 self.W1 -= (delta_W1 + (self.alpha * delta_W1_prev)) self.W2 -= (delta_W2 + (self.alpha * delta_W2_prev)) delta_W1_prev, delta_W2_prev = delta_W1, delta_W2 self.cost_.append(mini_cost) self.score_.append(accuracy_score(y_data,self.predict(X_data))) return self %%time params = dict(n_hidden=100, C=.0001, # tradeoff L2 regularizer epochs=200, # iterations eta=0.001, # learning rate random_state=1, phi='lin') nn_mini = TLPMiniBatch(**params, alpha=0.001,# momentum calculation decrease_const=0.0001, # decreasing eta minibatches=50, # minibatch size shuffle=True) nn_mini.fit(x_train_ar, y_target_ar, print_progress=50) yhat = nn_mini.predict(x_train_ar) print('Accuracy:',accuracy_score(y_target_ar,yhat)) ```
github_jupyter
<a href="https://colab.research.google.com/github/PacktPublishing/Hands-On-Computer-Vision-with-PyTorch/blob/master/Chapter15/Handwriting_transcription.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ``` !wget https://www.dropbox.com/s/l2ul3upj7dkv4ou/synthetic-data.zip !unzip -qq synthetic-data.zip !pip install torch_snippets torch_summary editdistance from torch_snippets import * from torchsummary import summary import editdistance device = 'cuda' if torch.cuda.is_available() else 'cpu' fname2label = lambda fname: stem(fname).split('@')[0] images = Glob('synthetic-data') vocab = 'QWERTYUIOPASDFGHJKLZXCVBNMqwertyuiopasdfghjklzxcvbnm' B,T,V = 64, 32, len(vocab) H,W = 32, 128 class OCRDataset(Dataset): def __init__(self, items, vocab=vocab, preprocess_shape=(H,W), timesteps=T): super().__init__() self.items = items self.charList = {ix+1:ch for ix,ch in enumerate(vocab)} self.charList.update({0: '`'}) self.invCharList = {v:k for k,v in self.charList.items()} self.ts = timesteps def __len__(self): return len(self.items) def sample(self): return self[randint(len(self))] def __getitem__(self, ix): item = self.items[ix] image = cv2.imread(item, 0) label = fname2label(item) return image, label def collate_fn(self, batch): images, labels, label_lengths, label_vectors, input_lengths = [], [], [], [], [] for image, label in batch: images.append(torch.Tensor(self.preprocess(image))[None,None]) label_lengths.append(len(label)) labels.append(label) label_vectors.append(self.str2vec(label)) input_lengths.append(self.ts) images = torch.cat(images).float().to(device) label_lengths = torch.Tensor(label_lengths).long().to(device) label_vectors = torch.Tensor(label_vectors).long().to(device) input_lengths = torch.Tensor(input_lengths).long().to(device) return images, label_vectors, label_lengths, input_lengths, labels def str2vec(self, string, pad=True): string = ''.join([s for s in string if s in self.invCharList]) val = list(map(lambda x: self.invCharList[x], string)) if pad: while len(val) < self.ts: val.append(0) return val def preprocess(self, img, shape=(32,128)): target = np.ones(shape)*255 try: H, W = shape h, w = img.shape fx = H/h fy = W/w f = min(fx, fy) _h = int(h*f) _w = int(w*f) _img = cv2.resize(img, (_w,_h)) target[:_h,:_w] = _img except: ... 
return (255-target)/255 def decoder_chars(self, pred): decoded = "" last = "" pred = pred.cpu().detach().numpy() for i in range(len(pred)): k = np.argmax(pred[i]) if k > 0 and self.charList[k] != last: last = self.charList[k] decoded = decoded + last elif k > 0 and self.charList[k] == last: continue else: last = "" return decoded.replace(" "," ") def wer(self, preds, labels): c = 0 for p, l in zip(preds, labels): c += p.lower().strip() != l.lower().strip() return round(c/len(preds), 4) def cer(self, preds, labels): c, d = [], [] for p, l in zip(preds, labels): c.append(editdistance.eval(p, l) / len(l)) return round(np.mean(c), 4) def evaluate(self, model, ims, labels, lower=False): model.eval() preds = model(ims).permute(1,0,2) # B, T, V+1 preds = [self.decoder_chars(pred) for pred in preds] return {'char-error-rate': self.cer(preds, labels), 'word-error-rate': self.wer(preds, labels), 'char-accuracy' : 1 - self.cer(preds, labels), 'word-accuracy' : 1 - self.wer(preds, labels)} from sklearn.model_selection import train_test_split trn_items, val_items = train_test_split(Glob('synthetic-data'), test_size=0.2, random_state=22) trn_ds = OCRDataset(trn_items) val_ds = OCRDataset(val_items) trn_dl = DataLoader(trn_ds, batch_size=B, collate_fn=trn_ds.collate_fn, drop_last=True, shuffle=True) val_dl = DataLoader(val_ds, batch_size=B, collate_fn=val_ds.collate_fn, drop_last=True) from torch_snippets import Reshape, Permute class BasicBlock(nn.Module): def __init__(self, ni, no, ks=3, st=1, padding=1, pool=2, drop=0.2): super().__init__() self.ks = ks self.block = nn.Sequential( nn.Conv2d(ni, no, kernel_size=ks, stride=st, padding=padding), nn.BatchNorm2d(no, momentum=0.3), nn.ReLU(inplace=True), nn.MaxPool2d(pool), nn.Dropout2d(drop) ) def forward(self, x): return self.block(x) class Ocr(nn.Module): def __init__(self, vocab): super().__init__() self.model = nn.Sequential( BasicBlock( 1, 128), BasicBlock(128, 128), BasicBlock(128, 256, pool=(4,2)), Reshape(-1, 256, 32), Permute(2, 0, 1) # T, B, D ) self.rnn = nn.Sequential( nn.LSTM(256, 256, num_layers=2, dropout=0.2, bidirectional=True), ) self.classification = nn.Sequential( nn.Linear(512, vocab+1), nn.LogSoftmax(-1), ) def forward(self, x): x = self.model(x) x, lstm_states = self.rnn(x) y = self.classification(x) return y def ctc(log_probs, target, input_lengths, target_lengths, blank=0): loss = nn.CTCLoss(blank=blank, zero_infinity=True) ctc_loss = loss(log_probs, target, input_lengths, target_lengths) return ctc_loss model = Ocr(len(vocab)).to(device) !pip install torch_summary from torchsummary import summary summary(model, torch.zeros((1,1,32,128)).to(device)) def train_batch(data, model, optimizer, criterion): model.train() imgs, targets, label_lens, input_lens, labels = data optimizer.zero_grad() preds = model(imgs) loss = criterion(preds, targets, input_lens, label_lens) loss.backward() optimizer.step() results = trn_ds.evaluate(model, imgs.to(device), labels) return loss, results @torch.no_grad() def validate_batch(data, model): model.eval() imgs, targets, label_lens, input_lens, labels = data preds = model(imgs) loss = criterion(preds, targets, input_lens, label_lens) return loss, val_ds.evaluate(model, imgs.to(device), labels) model = Ocr(len(vocab)).to(device) criterion = ctc optimizer = optim.AdamW(model.parameters(), lr=3e-3) n_epochs = 50 log = Report(n_epochs) for ep in range( n_epochs): # if ep in lr_schedule: optimizer = AdamW(ocr.parameters(), lr=lr_schedule[ep]) N = len(trn_dl) for ix, data in enumerate(trn_dl): pos = ep + 
(ix+1)/N loss, results = train_batch(data, model, optimizer, criterion) # scheduler.step() ca, wa = results['char-accuracy'], results['word-accuracy'] log.record(pos=pos, trn_loss=loss, trn_char_acc=ca, trn_word_acc=wa, end='\r') val_results = [] N = len(val_dl) for ix, data in enumerate(val_dl): pos = ep + (ix+1)/N loss, results = validate_batch(data, model) ca, wa = results['char-accuracy'], results['word-accuracy'] log.record(pos=pos, val_loss=loss, val_char_acc=ca, val_word_acc=wa, end='\r') log.report_avgs(ep+1) print() for jx in range(5): img, label = val_ds.sample() _img = torch.Tensor(val_ds.preprocess(img)[None,None]).to(device) pred = model(_img)[:,0,:] pred = trn_ds.decoder_chars(pred) print(f'Pred: `{pred}` :: Truth: `{label}`') print() log.plot_epochs(['trn_word_acc','val_word_acc'], title='Training and validation word accuracy') ```
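For completeness, here is a small hedged inference sketch that transcribes a single image file with the trained model, reusing the dataset's `preprocess` and `decoder_chars` helpers; the example path is a placeholder.

```
import cv2

@torch.no_grad()
def transcribe(model, image_path, ds=val_ds):
    model.eval()
    img = cv2.imread(image_path, 0)                        # read as grayscale
    x = torch.Tensor(ds.preprocess(img)[None, None]).to(device)
    pred = model(x)[:, 0, :]                               # T x (V+1) log-probs
    return ds.decoder_chars(pred)

# Example (placeholder path):
# print(transcribe(model, 'synthetic-data/hello@0.png'))
```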
github_jupyter
## Dependencies ``` import json, glob from tweet_utility_scripts import * from tweet_utility_preprocess_roberta_scripts_aux import * from transformers import TFRobertaModel, RobertaConfig from tokenizers import ByteLevelBPETokenizer from tensorflow.keras import layers from tensorflow.keras.models import Model ``` # Load data ``` test = pd.read_csv('/kaggle/input/tweet-sentiment-extraction/test.csv') print('Test samples: %s' % len(test)) display(test.head()) ``` # Model parameters ``` input_base_path = '/kaggle/input/276-tweet-train-5fold-roberta-avg-last4-onecy-exp3/' with open(input_base_path + 'config.json') as json_file: config = json.load(json_file) config vocab_path = input_base_path + 'vocab.json' merges_path = input_base_path + 'merges.txt' base_path = '/kaggle/input/qa-transformers/roberta/' # vocab_path = base_path + 'roberta-base-vocab.json' # merges_path = base_path + 'roberta-base-merges.txt' config['base_model_path'] = base_path + 'roberta-base-tf_model.h5' config['config_path'] = base_path + 'roberta-base-config.json' model_path_list = glob.glob(input_base_path + '*.h5') model_path_list.sort() print('Models to predict:') print(*model_path_list, sep='\n') ``` # Tokenizer ``` tokenizer = ByteLevelBPETokenizer(vocab_file=vocab_path, merges_file=merges_path, lowercase=True, add_prefix_space=True) ``` # Pre process ``` test['text'].fillna('', inplace=True) test['text'] = test['text'].apply(lambda x: x.lower()) test['text'] = test['text'].apply(lambda x: x.strip()) x_test, x_test_aux, x_test_aux_2 = get_data_test(test, tokenizer, config['MAX_LEN'], preprocess_fn=preprocess_roberta_test) ``` # Model ``` module_config = RobertaConfig.from_pretrained(config['config_path'], output_hidden_states=True) def model_fn(MAX_LEN): input_ids = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='input_ids') attention_mask = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='attention_mask') base_model = TFRobertaModel.from_pretrained(config['base_model_path'], config=module_config, name='base_model') _, _, hidden_states = base_model({'input_ids': input_ids, 'attention_mask': attention_mask}) h12 = hidden_states[-1] h11 = hidden_states[-2] h10 = hidden_states[-3] h09 = hidden_states[-4] avg_hidden = layers.Average()([h12, h11, h10, h09]) logits = layers.Dense(2, use_bias=False, name='qa_outputs')(avg_hidden) start_logits, end_logits = tf.split(logits, 2, axis=-1) start_logits = tf.squeeze(start_logits, axis=-1, name='y_start') end_logits = tf.squeeze(end_logits, axis=-1, name='y_end') model = Model(inputs=[input_ids, attention_mask], outputs=[start_logits, end_logits]) return model ``` # Make predictions ``` NUM_TEST_IMAGES = len(test) test_start_preds = np.zeros((NUM_TEST_IMAGES, config['MAX_LEN'])) test_end_preds = np.zeros((NUM_TEST_IMAGES, config['MAX_LEN'])) for model_path in model_path_list: print(model_path) model = model_fn(config['MAX_LEN']) model.load_weights(model_path) test_preds = model.predict(get_test_dataset(x_test, config['BATCH_SIZE'])) test_start_preds += test_preds[0] test_end_preds += test_preds[1] ``` # Post process ``` test['start'] = test_start_preds.argmax(axis=-1) test['end'] = test_end_preds.argmax(axis=-1) test['selected_text'] = test.apply(lambda x: decode(x['start'], x['end'], x['text'], config['question_size'], tokenizer), axis=1) # Post-process test["selected_text"] = test.apply(lambda x: ' '.join([word for word in x['selected_text'].split() if word in x['text'].split()]), axis=1) test['selected_text'] = test.apply(lambda x: x['text'] if (x['selected_text'] == 
'') else x['selected_text'], axis=1) test['selected_text'].fillna(test['text'], inplace=True) ``` # Visualize predictions ``` test['text_len'] = test['text'].apply(lambda x : len(x)) test['label_len'] = test['selected_text'].apply(lambda x : len(x)) test['text_wordCnt'] = test['text'].apply(lambda x : len(x.split(' '))) test['label_wordCnt'] = test['selected_text'].apply(lambda x : len(x.split(' '))) test['text_tokenCnt'] = test['text'].apply(lambda x : len(tokenizer.encode(x).ids)) test['label_tokenCnt'] = test['selected_text'].apply(lambda x : len(tokenizer.encode(x).ids)) test['jaccard'] = test.apply(lambda x: jaccard(x['text'], x['selected_text']), axis=1) display(test.head(10)) display(test.describe()) ``` # Test set predictions ``` submission = pd.read_csv('/kaggle/input/tweet-sentiment-extraction/sample_submission.csv') submission['selected_text'] = test['selected_text'] submission.to_csv('submission.csv', index=False) submission.head(10) ```
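The `jaccard` helper used in the *Visualize predictions* section above comes from the star-imported `tweet_utility_scripts` module and is not shown in this notebook. For reference, a minimal sketch of the word-level Jaccard similarity commonly used as this competition's metric is given below; the empty-string convention is an assumption and the imported helper may differ in its details.

```
# Minimal word-level Jaccard similarity (sketch; the notebook's imported helper may differ).
def jaccard_sketch(str1, str2):
    a = set(str1.lower().split())
    b = set(str2.lower().split())
    if len(a) == 0 and len(b) == 0:
        return 0.5  # convention often used for two empty strings
    c = a.intersection(b)
    return float(len(c)) / (len(a) + len(b) - len(c))

print(jaccard_sketch('i love this movie', 'love this movie'))  # 0.75
```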
``` import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import numpy as np import statsmodels.formula.api as smf import statsmodels.api as sm from statsmodels.graphics.regressionplots import influence_plot import sklearn startup=pd.read_csv("50_Startups.csv") startup startup.describe() startup.head() startup.info() startup1=startup.rename({'R&D Spend':'RDS','Administration':'ADMS','Marketing Spend':'MKTS'},axis=1) startup1 startup1[startup1.duplicated()] startup.corr() sns.set_style(style='darkgrid') sns.pairplot(startup1) model=smf.ols("Profit~RDS+ADMS+MKTS",data=startup1).fit() model.params model.tvalues , np.round(model.pvalues,5) (model.rsquared , model.rsquared_adj) slr_a=smf.ols("Profit~ADMS",data=startup1).fit() slr_a.tvalues , slr_a.pvalues slr_m=smf.ols("Profit~MKTS",data=startup1).fit() slr_m.tvalues , slr_m.pvalues mlr_am=smf.ols("Profit~ADMS+MKTS",data=startup1).fit() mlr_am.tvalues , mlr_am.pvalues rsq_r=smf.ols("RDS~ADMS+MKTS",data=startup1).fit().rsquared vif_r=1/(1-rsq_r) rsq_a=smf.ols("ADMS~RDS+MKTS",data=startup1).fit().rsquared vif_a=1/(1-rsq_a) rsq_m=smf.ols("MKTS~RDS+ADMS",data=startup1).fit().rsquared vif_m=1/(1-rsq_m) # Putting the values in Dataframe format d1={'Variables':['RDS','ADMS','MKTS'],'VIF':[vif_r,vif_a,vif_m]} Vif_df=pd.DataFrame(d1) Vif_df sm.qqplot(model.resid,line='q') plt.title("Normal Q-Q plot of residuals") plt.show() sm.qqplot(model.resid,line='q') plt.title("Normal Q-Q plot of residuals") plt.show() list(np.where(model.resid<-30000)) def standard_values(vals) : return (vals-vals.mean())/vals.std() plt.scatter(standard_values(model.fittedvalues),standard_values(model.resid)) plt.title('Residual Plot') plt.xlabel('standardized fitted values') plt.ylabel('standardized residual values') plt.show() fig=plt.figure(figsize=(15,8)) sm.graphics.plot_regress_exog(model,'RDS',fig=fig) plt.show() fig=plt.figure(figsize=(15,8)) sm.graphics.plot_regress_exog(model,'ADMS',fig=fig) plt.show() fig=plt.figure(figsize=(15,8)) sm.graphics.plot_regress_exog(model,'MKTS',fig=fig) plt.show() (c,_)=model.get_influence().cooks_distance c fig=plt.figure(figsize=(20,7)) plt.stem(np.arange(len(startup1)),np.round(c,5)) plt.xlabel('Row Index') plt.ylabel('Cooks Distance') plt.show() np.argmax(c) , np.max(c) influence_plot(model) plt.show() k=startup1.shape[1] n=startup1.shape[0] leverage_cutoff = (3*(k+1))/n leverage_cutoff startup1[startup1.index.isin([49])] startup2=startup1.drop(startup1.index[[49]],axis=0).reset_index(drop=True) startup2 model2 = smf.ols("Profit~RDS+ADMS+MKTS",data=startup2).fit() sm.graphics.plot_partregress_grid(model) model2=smf.ols("Profit~RDS+ADMS+MKTS",data=startup2).fit() (c,_)=model2.get_influence().cooks_distance c np.argmax(c) , np.max(c) startup2=startup2.drop(startup2.index[[np.argmax(c)]],axis=0).reset_index(drop=True) startup2 final_model=smf.ols("Profit~RDS+ADMS+MKTS",data=startup2).fit() final_model.rsquared , final_model.aic print("model accuracy is improved to",final_model.rsquared) final_model.rsquared startup2 new_data=pd.DataFrame({'RDS':70000,"ADMS":90000,"MKTS":140000},index=[0]) new_data final_model.predict(new_data) pred_y=final_model.predict(startup2) pred_y df={'Prep_Models':['Model','Final_Model'],'Rsquared':[model.rsquared,final_model.rsquared]} table=pd.DataFrame(df) print('FINAL MODEL :-') table ```
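As a cross-check on the collinearity diagnostics computed earlier in this notebook, the sketch below recomputes the VIF values with statsmodels' built-in `variance_inflation_factor`. It reuses `startup1`, `pd` and `sm` from above; the intercept row is dropped so only the three predictors remain, and the values should match `vif_r`, `vif_a` and `vif_m`.

```
# Recompute VIF with statsmodels' helper as a sanity check on the manual auxiliary regressions.
from statsmodels.stats.outliers_influence import variance_inflation_factor

X = sm.add_constant(startup1[['RDS', 'ADMS', 'MKTS']])
vif_check = pd.DataFrame({
    'Variables': X.columns,
    'VIF': [variance_inflation_factor(X.values, i) for i in range(X.shape[1])]
})
vif_check[vif_check['Variables'] != 'const']  # drop the intercept row
```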
## Importing Modules ``` #%matplotlib notebook from tqdm import tqdm %matplotlib inline #Module to handle regular expressions import re #manage files import os #Library for emoji import emoji #Import pandas and numpy to handle data import pandas as pd import numpy as np #import libraries for accessing the database import psycopg2 from sqlalchemy import create_engine from postgres_credentials import * #import libraries for visualization import matplotlib.pyplot as plt import seaborn as sns from wordcloud import WordCloud from PIL import Image #Import nltk to check english lexicon import nltk from nltk.tokenize import word_tokenize from nltk.corpus import ( wordnet, stopwords ) #import libraries for tokenization and ML import json; import keras; import keras.preprocessing.text as kpt; #from keras.preprocessing.text import Tokenizer; import sklearn from sklearn.preprocessing import Normalizer from sklearn.feature_extraction.text import ( CountVectorizer, TfidfVectorizer ) from sklearn.model_selection import train_test_split from sklearn.metrics import accuracy_score #Import all libraries for creating a deep neural network #Sequential is the standard type of neural network with stackable layers from keras.models import ( Sequential, model_from_json ) #Dense: Standard layers with every node connected, dropout: avoids overfitting from keras.layers import Dense, Dropout, Activation; #To anotate database from pycorenlp import StanfordCoreNLP #Querying the database def query_database(tabletweets): engine = create_engine("postgresql+psycopg2://%s:%s@%s:%d/%s" %(usertwitter, passwordtwitter, hosttwitter, porttwitter, dbnametwitter)) table = pd.read_sql_query("select * from %s" %tabletweets,con=engine, index_col="id") return table ``` ## Preprocessing the text Before we dig into analyzing the public opinion on 'Avengers', there is an important step that we need to take: preprocessing the tweet text. But what does this mean? Text preprocessing includes a basic text cleaning following a set of simple rules commonly used but also, advanced techniques that takes into account syntactic and lexical information. ``` #preprocess text in tweets by removing links, @UserNames, blank spaces, etc. 
def preprocessing_text(table): #put everythin in lowercase table['tweet'] = table['tweet'].str.lower() #Replace rt indicating that was a retweet table['tweet'] = table['tweet'].str.replace('rt', '') #Replace occurences of mentioning @UserNames table['tweet'] = table['tweet'].replace(r'@\w+', '', regex=True) #Replace links contained in the tweet table['tweet'] = table['tweet'].replace(r'http\S+', '', regex=True) table['tweet'] = table['tweet'].replace(r'www.[^ ]+', '', regex=True) #remove numbers table['tweet'] = table['tweet'].replace(r'[0-9]+', '', regex=True) #replace special characters and puntuation marks table['tweet'] = table['tweet'].replace(r'[!"#$%&()*+,-./:;<=>?@[\]^_`{|}~]', '', regex=True) return table #Replace elongated words by identifying those repeated characters and then remove them and compare the new word with the english lexicon def in_dict(word): if wordnet.synsets(word): #if the word is in the dictionary, we'll return True return True def replace_elongated_word(word): regex = r'(\w*)(\w+)\2(\w*)' repl = r'\1\2\3' if in_dict(word): return word new_word = re.sub(regex, repl, word) if new_word != word: return replace_elongated_word(new_word) else: return new_word def detect_elongated_words(row): regexrep = r'(\w*)(\w+)(\2)(\w*)' words = [''.join(i) for i in re.findall(regexrep, row)] for word in words: if not in_dict(word): row = re.sub(word, replace_elongated_word(word), row) return row def stop_words(table): #We need to remove the stop words stop_words_list = stopwords.words('english') table['tweet'] = table['tweet'].str.lower() table['tweet'] = table['tweet'].apply(lambda x: ' '.join([word for word in x.split() if word not in (stop_words_list)])) return table def replace_antonyms(word): #We get all the lemma for the word for syn in wordnet.synsets(word): for lemma in syn.lemmas(): #if the lemma is an antonyms of the word if lemma.antonyms(): #we return the antonym return lemma.antonyms()[0].name() return word def handling_negation(row): #Tokenize the row words = word_tokenize(row) speach_tags = ['JJ', 'JJR', 'JJS', 'NN', 'VB', 'VBD', 'VBG', 'VBN', 'VBP'] #We obtain the type of words that we have in the text, we use the pos_tag function tags = nltk.pos_tag(words) #Now we ask if we found a negation in the words tags_2 = '' if "n't" in words and "not" in words: tags_2 = tags[min(words.index("n't"), words.index("not")):] words_2 = words[min(words.index("n't"), words.index("not")):] words = words[:(min(words.index("n't"), words.index("not")))+1] elif "n't" in words: tags_2 = tags[words.index("n't"):] words_2 = words[words.index("n't"):] words = words[:words.index("n't")+1] elif "not" in words: tags_2 = tags[words.index("not"):] words_2 = words[words.index("not"):] words = words[:words.index("not")+1] for index, word_tag in enumerate(tags_2): if word_tag[1] in speach_tags: words = words+[replace_antonyms(word_tag[0])]+words_2[index+2:] break return ' '.join(words) def cleaning_table(table): #This function will process all the required cleaning for the text in our tweets table = preprocessing_text(table) table['tweet'] = table['tweet'].apply(lambda x: detect_elongated_words(x)) table['tweet'] = table['tweet'].apply(lambda x: handling_negation(x)) table = stop_words(table) return table ``` ## Data Visualization After we have cleaned our data but before we start building our model for sentiment analysis, we can perform an exploratory data analysis to see what are the most frequent words that appear in our 'Avengers' tweets. 
For this part, we will show graphs regarding tweets labelled as positive separated from those labelled as negative. ``` #Vectorization for Data Visualization def vectorization(table): #CountVectorizer will convert a collection of text documents to a matrix of token counts #Produces a sparse representation of the counts #Initialize vector = CountVectorizer() #We fit and transform the vector created frequency_matrix = vector.fit_transform(table.tweet) #Sum all the frequencies for each word sum_frequencies = np.sum(frequency_matrix, axis=0) #Now we use squeeze to remove single-dimensional entries from the shape of an array that we got from applying np.asarray to #the sum of frequencies. frequency = np.squeeze(np.asarray(sum_frequencies)) #Now we get into a dataframe all the frequencies and the words that they correspond to frequency_df = pd.DataFrame([frequency], columns=vector.get_feature_names()).transpose() return frequency_df def word_cloud(tweets): #We get the directory that we are working on file = os.getcwd() #We read the mask image into a numpy array avengers_mask = np.array(Image.open(os.path.join(file, "avengers.png"))) #Now we store the tweets into a series to be able to process #tweets_list = pd.Series([t for t in tweet_table.tweet]).str.cat(sep=' ') #We generate the wordcloud using the series created and the mask word_cloud = WordCloud(width=2000, height=1000, max_font_size=200, background_color="black", max_words=2000, mask=avengers_mask, contour_width=1, contour_color="steelblue", colormap="nipy_spectral", stopwords=["avengers"]) word_cloud.generate(tweets) #wordcloud = WordCloud(width=1600, height=800,max_font_size=200).generate(tweets_list) #Now we plot both figures, the wordcloud and the mask #plt.figure(figsize=(15,15)) plt.figure(figsize=(10,10)) plt.imshow(word_cloud, interpolation="hermite") plt.axis("off") #plt.imshow(avengers_mask, cmap=plt.cm.gray, interpolation="bilinear") #plt.axis("off") plt.show() def graph(word_frequency, sent): labels = word_frequency[0][1:51].index title = "Word Frequency for %s" %sent #Plot the figures plt.figure(figsize=(10,5)) plt.bar(np.arange(50), word_frequency[0][1:51], width = 0.8, color = sns.color_palette("bwr"), alpha=0.5, edgecolor = "black", capsize=8, linewidth=1); plt.xticks(np.arange(50), labels, rotation=90, size=14); plt.xlabel("50 more frequent words", size=14); plt.ylabel("Frequency", size=14); #plt.title('Word Frequency for %s', size=18) %sent; plt.title(title, size=18) plt.grid(False); plt.gca().spines["top"].set_visible(False); plt.gca().spines["right"].set_visible(False); plt.show() def regression_graph(table): table = table[1:] #We set the style of seaborn sns.set_style("whitegrid") #Initialize the figure plt.figure(figsize=(6,6)) #we obtain the points from matplotlib scatter points = plt.scatter(table["Positive"], table["Negative"], c=table["Positive"], s=75, cmap="bwr") #graph the colorbar plt.colorbar(points) #we graph the regplot from seaborn sns.regplot(x="Positive", y="Negative",fit_reg=False, scatter=False, color=".1", data=table) plt.xlabel("Frequency for Positive Tweets", size=14) plt.ylabel("Frequency for Negative Tweets", size=14) plt.title("Word frequency in Positive vs. Negative Tweets", size=14) plt.grid(False) sns.despine() ``` ## Preparing data for model After visualizing our data, the next step is to split our dataset into training and test sets. For doing so, we'll take advantage of the train_test_split functionality of sklearn package. 
We will take 20% of the dataset for testing following the 20–80% rule. From the remaining 80% used for the training set, we'll save a part for validation of our model. ``` #Split Data into training and test dataset def splitting(table): X_train, X_test, y_train, y_test = train_test_split(table.tweet, table.sentiment, test_size=0.2, shuffle=True) return X_train, X_test, y_train, y_test ``` m ``` #Tokenization for analysis def tokenization_tweets(dataset, features): tokenization = TfidfVectorizer(max_features=features) tokenization.fit(dataset) dataset_transformed = tokenization.transform(dataset).toarray() return dataset_transformed ``` ## Train model ``` #Create a Neural Network #Create the model def train(X_train_mod, y_train, features, shuffle, drop, layer1, layer2, epoch, lr, epsilon, validation): model_nn = Sequential() model_nn.add(Dense(layer1, input_shape=(features,), activation='relu')) model_nn.add(Dropout(drop)) model_nn.add(Dense(layer2, activation='sigmoid')) model_nn.add(Dropout(drop)) model_nn.add(Dense(3, activation='softmax')) optimizer = keras.optimizers.Adam(lr=lr, beta_1=0.9, beta_2=0.999, epsilon=epsilon, decay=0.0, amsgrad=False) model_nn.compile(loss='sparse_categorical_crossentropy', optimizer=optimizer, metrics=['accuracy']) model_nn.fit(np.array(X_train_mod), y_train, batch_size=32, epochs=epoch, verbose=1, validation_split=validation, shuffle=shuffle) return model_nn ``` ## Test model ``` def test(X_test, model_nn): prediction = model_nn.predict(X_test) return prediction ``` ## Main code ``` if __name__ == "__main__": tabletweets = "tweets_avengers" tweet_table = query_database(tabletweets) tweet_table = cleaning_table(tweet_table) if __name__ == "__main__": #First we draw a word cloud #For All tweets word_cloud(pd.Series([t for t in tweet_table.tweet]).str.cat(sep=' ')) #For positive tweets word_cloud(pd.Series([t for t in tweet_table[tweet_table.sentiment == "Positive"].tweet]).str.cat(sep=' ')) #For negative tweets word_cloud(pd.Series([t for t in tweet_table[tweet_table.sentiment == "Negative"].tweet]).str.cat(sep=' ')) if __name__ == "__main__": #Get the frequency word_frequency = vectorization(tweet_table).sort_values(0, ascending = False) word_frequency_pos = vectorization(tweet_table[tweet_table['sentiment'] == 'Positive']).sort_values(0, ascending = False) word_frequency_neg = vectorization(tweet_table[tweet_table['sentiment'] == 'Negative']).sort_values(0, ascending = False) #Graph with frequency words all, positive and negative tweets and get the frequency graph(word_frequency, 'all') graph(word_frequency_pos, 'positive') graph(word_frequency_neg, 'negative') if __name__ == "__main__": #Concatenate word frequency for positive and negative table_regression = pd.concat([word_frequency_pos, word_frequency_neg], axis=1, sort=False) table_regression.columns = ["Positive", "Negative"] regression_graph(table_regression) if __name__ == "__main__": tabletweets = "tweets_avengers_labeled" tweet_table = query_database(tabletweets) if __name__ == "__main__": tweet_table['sentiment'] = tweet_table['sentiment'].apply(lambda x: 2 if x == 'Positive' else (0 if x == 'Negative' else 1)) if __name__ == "__main__": X_train, X_test, y_train, y_test = splitting(tweet_table) def model1(X_train, y_train): features = 3500 shuffle = True drop = 0.5 layer1 = 512 layer2 = 256 epoch = 5 lr = 0.001 epsilon = None validation = 0.1 X_train_mod = tokenization_tweets(X_train, features) model = train(X_train_mod, y_train, features, shuffle, drop, layer1, layer2, epoch, lr, epsilon, 
validation) return model; model1(X_train, y_train) def model2(X_train, y_train): features = 3000 shufle = True drop = 0.5 layer1 = 512 layer2 = 256 epoch = 5 lr = 0.001 epsilon = None validation = 0.1 X_train_mod = tokenization_tweets(X_train, features) model = train(X_train_mod, y_train, features, shufle, drop, layer1, layer2, epoch, lr, epsilon, validation) return model; model2(X_train, y_train) def model3(X_train, y_train): features = 3500 shufle = True drop = 0.5 layer1 = 512 layer2 = 256 epoch = 5 lr = 0.002 epsilon = None validation = 0.1 X_train_mod = tokenization_tweets(X_train, features) model = train(X_train_mod, y_train, features, shufle, drop, layer1, layer2, epoch, lr, epsilon, validation) return model; model_final = model3(X_train, y_train) def model4(X_train, y_train): features = 5000 shufle = True drop = 0.5 layer1 = 512 layer2 = 256 epoch = 2 lr = 0.005 epsilon = None validation = 0.1 X_train_mod = tokenization_tweets(X_train, features) model = train(X_train_mod, y_train, features, shufle, drop, layer1, layer2, epoch, lr, epsilon, validation) return model; model4(X_train, y_train) def model5(X_train, y_train): features = 3500 shufle = True drop = 0.5 layer1 = 512 layer2 = 256 epoch = 5 lr = 0.002 epsilon = 1e-5 validation = 0.1 X_train_mod = tokenization_tweets(X_train, features) model = train(X_train_mod, y_train, features, shufle, drop, layer1, layer2, epoch, lr, epsilon, validation) return model; model5(X_train, y_train) def model6(X_train, y_train): features = 3500 shufle = True drop = 0.5 layer1 = 512 layer2 = 256 epoch = 5 lr = 0.002 epsilon = 1e-8 validation = 0.1 X_train_mod = tokenization_tweets(X_train, features) model = train(X_train_mod, y_train, features, shufle, drop, layer1, layer2, epoch, lr, epsilon, validation) return model; model6(X_train, y_train) def model7(X_train, y_train): features = 3500 shufle = True drop = 0.5 layer1 = 512 layer2 = 256 epoch = 6 lr = 0.002 epsilon = 1e-8 validation = 0.1 X_train_mod = tokenization_tweets(X_train, features) model = train(X_train_mod, y_train, features, shufle, drop, layer1, layer2, epoch, lr, epsilon, validation) return model; #model7(X_train, y_train) def model8(X_train, y_train): features = 3500 shufle = True drop = 0.5 layer1 = 512 layer2 = 256 epoch = 5 lr = 0.002 epsilon = 1e-9 validation = 0.1 X_train_mod = tokenization_tweets(X_train, features) model = train(X_train_mod, y_train, features, shufle, drop, layer1, layer2, epoch, lr, epsilon, validation) return model; model8(X_train, y_train) def model9(X_train, y_train): features = 3500 shufle = False drop = 0.5 layer1 = 512 layer2 = 256 epoch = 5 lr = 0.002 epsilon = 1e-9 validation = 0.1 X_train_mod = tokenization_tweets(X_train, features) model = train(X_train_mod, y_train, features, shufle, drop, layer1, layer2, epoch, lr, epsilon, validation) return model; model9(X_train, y_train) def model10(X_train, y_train): features = 3500 shufle = True drop = 0.5 layer1 = 512 layer2 = 256 epoch = 5 lr = 0.002 epsilon = 1e-9 validation = 0.2 X_train_mod = tokenization_tweets(X_train, features) model = train(X_train_mod, y_train, features, shufle, drop, layer1, layer2, epoch, lr, epsilon, validation) return model; model10(X_train, y_train) def model11(X_train, y_train): features = 3000 shufle = True drop = 0.5 layer1 = 512 layer2 = 256 epoch = 5 lr = 0.002 epsilon = 1e-9 validation = 0.2 X_train_mod = tokenization_tweets(X_train, features) model = train(X_train_mod, y_train, features, shufle, drop, layer1, layer2, epoch, lr, epsilon, validation) return model; 
model11(X_train, y_train) def save_model(model): model_json = model.to_json() with open('model.json', 'w') as json_file: json_file.write(model_json) model.save_weights('model.h5') model_final = model7(X_train, y_train) save_model(model_final) if __name__ == "__main__": tabletweetsnew = "tweets_predict_avengers" tweet_table_new = query_database(tabletweetsnew) tweet_table_new = cleaning_table(tweet_table_new) if __name__ == "__main__": X_new = tokenization_tweets(tweet_table_new.tweet, 3500) new_prediction = model_final.predict(X_new) if __name__ == "__main__": labels = ['Negative', 'Neutral', 'Positive'] sentiments = [labels[np.argmax(pred)] for pred in new_prediction] tweet_table_new["sentiment"] = sentiments sizes = [sentiments.count('Negative'), sentiments.count('Neutral'), sentiments.count('Positive')] explode = (0, 0, 0.1) labels = 'Negative', 'Neutral', 'Positive' plt.figure(figsize=(5,5)) plt.pie(sizes, explode=explode, colors="bwr", labels=labels, autopct='%1.1f%%', shadow=True, startangle=90, wedgeprops={'alpha':0.8}) plt.axis('equal') plt.show() if __name__ == "__main__": engine = create_engine("postgresql+psycopg2://%s:%s@%s:%d/%s" %(usertwitter, passwordtwitter, hosttwitter, porttwitter, dbnametwitter)) tweet_table_new.to_sql("tweets_avengers_new_labeled", con=engine, if_exists="append") ``` ### Extra analysis for interaction network ``` if __name__ == "__main__": tweet_table_interaction = pd.read_csv("tweets_final.csv") tweet_table_interaction.rename(columns = {"text": "tweet"}, inplace=True) tweet_table_interaction = cleaning_table(tweet_table_interaction) X_interaction = tokenization_tweets(tweet_table_interaction.tweet, 3500) if __name__ == "__main__": # Open json file of saved model json_file = open('model.json', 'r') loaded_model_json = json_file.read() json_file.close() # Create a model model = model_from_json(loaded_model_json) # Weight nodes with saved values model.load_weights('model.h5') if __name__ == "__main__": int_prediction = model.predict(X_interaction) labels = ['Negative', 'Neutral', 'Positive'] sentiments = [labels[np.argmax(pred)] for pred in int_prediction] tweet_table_interaction["sentiment"] = sentiments tweet_table_interaction.to_csv("tweets_final_sentiment.csv") ```
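One step the cells above never take is scoring the network on the held-out split returned by `splitting()` (only the Keras validation split is monitored during training). A hedged sketch of such an evaluation is shown below; because the notebook's `tokenization_tweets()` fits a fresh TF-IDF vectorizer on whatever text it receives, the sketch fits a vectorizer on `X_train` only and reuses it for `X_test` so that the feature columns seen at test time match the ones the network was trained on.

```
# Hedged sketch: evaluate on the held-out split with train/test features from the same vectorizer.
if __name__ == "__main__":
    vectorizer = TfidfVectorizer(max_features=3500)
    X_train_feats = vectorizer.fit_transform(X_train).toarray()
    X_test_feats = vectorizer.transform(X_test).toarray()
    n_features = X_train_feats.shape[1]  # may be < 3500 if the vocabulary is smaller
    eval_model = train(X_train_feats, y_train, n_features, True, 0.5, 512, 256, 5, 0.002, 1e-9, 0.1)
    y_pred = np.argmax(eval_model.predict(X_test_feats), axis=1)
    print('Held-out accuracy:', accuracy_score(y_test, y_pred))
```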
# QA Sentiment Analysis: Critical Thinking 9.2.2 # Naive Bayes ### By JEFFREY BLACK # Introduction Question: "How does increasing the sample size affect a t test? Why does it affect a t test in this manner?" Answer: "In the long run, it means that the obtained t is more likely to be significant. In terms of the formula used to calculate t, increasing the sample size will decrease the standard error of the difference between means. This , in turn, will increase the size of the obtained t. A larger obtained t means that the obtained value is more likely to exceed the critical value and be significant." *** # Importing Packages ``` import numpy as np import pandas as pd import os from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.naive_bayes import MultinomialNB from sklearn.metrics import (f1_score,precision_score,recall_score, confusion_matrix) from sklearn.ensemble import RandomForestClassifier from sklearn.model_selection import RandomizedSearchCV, train_test_split # for hyperparameter tuning ``` *** # Loading and Preprocessing the Data ``` CTC_9_2_2 = pd.read_excel("/Users/jeffreyblack/Desktop/NLPProject/QA_CTC.xlsx", sheet_name = 'CTC_9_2_2') CTC_9_2_2 X_train, X_test, y_train, y_test = train_test_split(CTC_9_2_2['Answers'] , CTC_9_2_2['Grade'], test_size=0.20, random_state=42) ``` *** # Feature Extraction ### Convert reviews into vectors using the bag-of-words model Note: I did not remove stop-words ``` def extract_features(x_train, x_test): # This function extracts document features for input documents, x # Source: # https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer vectorizer = TfidfVectorizer(max_features=10000, ngram_range = (1,3)) train = vectorizer.fit_transform(x_train) test = vectorizer.transform(x_test) test.toarray() print((vectorizer.get_feature_names())) return train, test ``` Calling the TF-IDF Vectorizer to extract the features for the training and test predictors. ``` feats_train, feats_test = extract_features(X_train, X_test) # training and test set features ``` *** # Model Training: Naive Bayes ### Fit the training data using Multinomial Naive Bayes classifier ``` def build_NB_classifier(x, y): # This function builds a Multinomial Naive Bayes classifier with input (x,y): # Source: # https://scikit-learn.org/stable/modules/generated/sklearn.naive_bayes.MultinomialNB.html clf = MultinomialNB() clf.fit(x, y) return clf nb_clf = build_NB_classifier(feats_train, y_train) ``` ## Hyperparameter Tuning I decided to use Random Search Cross Validation in Scikit-Learn to determine the best hyperparameters needed for tuning the Naive Bayes classifier model. The RandomizedSearchCV allowed me to define a grid of hyperparameter randes and randomly sample from the grid, while performing K-fold cross validation with each combination of values. ``` # Additive (Laplace/Lidstone) smoothing parameter (0 for no smoothing). alpha = [0, 1.0] # Whether to learn class prior probabilities or not. If false, a uniform prior will be used. fit_prior = [True, False] # Prior probabilities of the classes. If specified the priors are not adjusted according to the data. 
class_prior = [None, [0.05, 0.95],[0.1, 0.9],[0.2, 0.8],[0.25, 0.85], [0.3, 0.7],[0.35, 0.75], [0.4, 0.6],[0.45, 0.65]] # Create the random grid random_grid = {'alpha': alpha, 'fit_prior': fit_prior, 'class_prior': class_prior} print(random_grid) # Use the random grid to search for best hyperparameters # First create the base model to tune nb = MultinomialNB() # Random search of parameters, using 3 fold cross validation, # search across 100 different combinations, and use all available cores nb_random = RandomizedSearchCV(estimator = nb, param_distributions = random_grid, cv=3, scoring='f1_weighted', n_iter=1000, return_train_score = True) # Fit the random search model nb_random.fit(feats_train, y_train) # finding the best parameters nb_random.best_params_ ``` Using the output above, I tuned the Multinomial Naive Bayes classifier below. ``` def build_NB_classifier_tuned(x, y): # This function builds a Multinomial Naive Bayes classifier with input (x,y): # Source: # https://scikit-learn.org/stable/modules/generated/sklearn.naive_bayes.MultinomialNB.html clf = MultinomialNB(fit_prior = False, class_prior = None, alpha = 1.0) clf.fit(x, y) return clf nb_clf_tuned = build_NB_classifier_tuned(feats_train, y_train) ``` *** # Model Evaluation Functions I used 3 evaluation metrics: recall, precision, and F1-score. I also used a confusion matrix to visualize false-positive, false-negative, true-positive, and true-negative. ``` def recall_evaluator(x, y_truth, clf): # Function to evalute model performance, using recall: # Source: # https://scikit-learn.org/stable/modules/generated/sklearn.metrics.recall_score.html#sklearn.metrics.recall_score result = 0.0 result = recall_score(y_true = y_truth, y_pred = clf.predict(x), average='weighted') return result def precision_evaluator(x, y_truth, clf): # Function to evalute model performance, using precision: # Source: # https://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_score.html#sklearn.metrics.precision_score result = 0.0 result = precision_score(y_true = y_truth, y_pred = clf.predict(x), average='weighted') return result def f1_evaluator(x, y_truth, clf): # Function to evalute model performance, using F1-score: # Source: # https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html#sklearn.metrics.f1_score result = 0.0 result = f1_score(y_true = y_truth, y_pred = clf.predict(x), average='weighted') return result ``` *** ## Summary Results of Naive Bayes ### Original model evaluation: ``` recall_nb_score = recall_evaluator(feats_test, y_test, nb_clf) precision_nb_score = precision_evaluator(feats_test, y_test, nb_clf) f1_nb_score = f1_evaluator(feats_test, y_test, nb_clf) pred_nb = nb_clf.predict(feats_test) print('Naive Bayes Recall: ', recall_nb_score) print('Naive Bayes Precision: ', precision_nb_score) print('Naive Bayes F1: ', f1_nb_score) print("Confusion Matrix for Naive Bayes Classifier:") print(confusion_matrix(y_test, pred_nb)) ``` ### After hyperparameter tuning: ``` recall_nb_tuned_score = recall_evaluator(feats_test, y_test, nb_clf_tuned) precision_nb_tuned_score = precision_evaluator(feats_test, y_test, nb_clf_tuned) f1_nb_tuned_score = f1_evaluator(feats_test, y_test, nb_clf_tuned) pred_nb_tuned = nb_clf_tuned.predict(feats_test) print('Naive Bayes Recall: ', recall_nb_tuned_score) print('Naive Bayes Precision: ', precision_nb_tuned_score) print('Naive Bayes F1: ', f1_nb_tuned_score) print("Confusion Matrix for Naive Bayes Classifier:") print(confusion_matrix(y_test, pred_nb_tuned)) ```
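For a per-class view to complement the weighted recall, precision and F1 reported above, scikit-learn's `classification_report` can be printed for both the baseline and the tuned classifier, reusing the predictions already computed in this notebook.

```
# Per-class precision/recall/F1 for the baseline and tuned Naive Bayes classifiers.
from sklearn.metrics import classification_report

print('Baseline Naive Bayes:')
print(classification_report(y_test, pred_nb))
print('Tuned Naive Bayes:')
print(classification_report(y_test, pred_nb_tuned))
```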
<a href="https://colab.research.google.com/github/ziatdinovmax/atomai/blob/master/atomai_atomstat.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # Multivariate analysis of ferroic distortions with *atomstat* module Prepared by Maxim Ziatdinov E-mail: [email protected] In this notebook we show how the atomic coordinates derived via a pre-trained neural network from the atom-resolved image can be used to explore the extant atomic displacement patterns in the material and build the collection of the building blocks for the distorted lattice. For more details see our paper in Appl. Phys. Lett. 115, 052902 (2019). ## Install AtomAI Installation: ``` !pip install atomai ``` Import modules: ``` import atomai as aoi import numpy as np ``` Download the trained weights and test image: ``` download_link_model = 'https://drive.google.com/uc?id=18hXcw0tZ_fALtI2Fir1fHirAt27tRqj4' download_link_img = 'https://drive.google.com/uc?id=1peHF1lvpOKlOSMjREB2aSscyolrQQhoh' !gdown -q $download_link_model -O 'simple_model.tar' !gdown -q $download_link_img -O 'test_img.npy' ``` ## Ferroic blocks analysis with atomstat First we need to load the trained model. To do this, we specify a path to file with the trained weights and model specifics. We are going to use the weights trained in the [atomai-atomnet notebook](https://colab.research.google.com/github/ziatdinovmax/atomai/blob/master/examples/notebooks/atomai_atomnet.ipynb#scrollTo=XGxhL7ha1Y3R). ``` # Path to file with trained weights model_dict_path = '/content/simple_model.tar' # load the weights into the model skeleton model = aoi.load_model(model_dict_path) ``` Make a prediction with the loaded model: ``` # Load experimental data expdata = np.load('test_img.npy') # Get NN output with coordinates and classes nn_output, coordinates = model.predict(expdata) ``` Here we are going to use *atomstat* module to get local image descriptors first (i.e. stack of subimages around one of the atom types) and then perform different types of statistical analysis on them. This is similar to what we did in *Applied Physics Letters 115, 052902 (2019)* (although here we are going to use a different model and the image was downsized by a factor of 2 to allow faster inference, without using a GPU). Get local descriptors, which are subimages centered on one of the sublattices: ``` imstack = aoi.stat.imlocal(nn_output, coordinates, window_size=32, coord_class=1) ``` Compute PCA scree plot to estimate the number of components/sources for the multivariate analysis below: ``` imstack.pca_scree_plot(plot_results=True); ``` Do PCA analysis and plot results: ``` pca_results = imstack.imblock_pca(4, plot_results=True) ``` Do ICA analysis and plot results: ``` ica_results = imstack.imblock_ica(4, plot_results=True) ``` Do NMF analysis and plot results: ``` nmf_results = imstack.imblock_nmf(4, plot_results=True) ```
## Load Estonian weather service - https://www.ilmateenistus.ee/teenused/ilmainfo/ilmatikker/ ``` import requests import datetime import xml.etree.ElementTree as ET import pandas as pd from pandas.api.types import is_string_dtype from pandas.api.types import is_numeric_dtype import geopandas as gpd import fiona from fiona.crs import from_epsg import numpy as np from shapely.geometry import Point import matplotlib.pyplot as plt %matplotlib inline req = requests.get("http://www.ilmateenistus.ee/ilma_andmed/xml/observations.php") print(req.encoding) print(req.headers['content-type']) tree = ET.fromstring(req.content.decode(req.encoding) ) print(tree.tag) print(tree.attrib) ts = tree.attrib['timestamp'] print(datetime.datetime.fromtimestamp(int(ts))) data = {'stations' : [], 'wmocode': [], 'precipitations': [], 'airtemperature': [], 'windspeed': [], 'waterlevel': [], 'watertemperature': [], 'geometry': [] } counter = 0 for station in tree.findall('station'): counter += 1 # print(station.tag, child.attrib) # < name > Virtsu < /name > – jaama nimi. name = station.find('name').text data['stations'].append(name) # < wmocode > 26128 < /wmocode > – jaama WMO kood. wmocode = station.find('wmocode').text data['wmocode'].append(wmocode) try: # < longitude > 23.51355555534363 < /longitude > – jaama asukoha koordinaat. lon = station.find('longitude').text # < latitude > 58.572674999100215 < /latitude > – jaama asukoha koordinaat. lat = station.find('latitude').text coords = Point(float(lon), float(lat)) data['geometry'].append(coords) except ValueError as ve: pass # < phenomenon > Light snowfall < /phenomenon > – jaamas esinev ilmastikunähtus, selle puudumisel pilvisuse aste (kui jaamas tehakse manuaalseid pilvisuse mõõtmisi). Täielik nimekiri nähtustest on allpool olevas tabelis. # < visibility > 34.0 < /visibility > – nähtavus (km). # < precipitations > 0 < /precipitations > – sademed (mm) viimase tunni jooksul. Lume, lörtsi, rahe ja teiste taoliste sademete hulk on samuti esitatud vee millimeetritena. 1 cm lund ~ 1 mm vett. precip = station.find('precipitations').text data['precipitations'].append(precip) # < airpressure > 1005.4 < /airpressure > – õhurõhk (hPa). Normaalrõhk on 1013.25 hPa. # < relativehumidity > 57 < /relativehumidity > – suhteline õhuniiskus (%). # < airtemperature > -3.6 < /airtemperature > – õhutemperatuur (°C). temp = station.find('airtemperature').text data['airtemperature'].append(temp) # < winddirection > 101 < /winddirection > – tuule suund (°). # < windspeed > 3.2 < /windspeed > – keskmine tuule kiirus (m/s). wind = station.find('windspeed').text data['windspeed'].append(wind) # < windspeedmax > 5.1 < /windspeedmax > – maksimaalne tuule kiirus ehk puhangud (m/s). 
# < waterlevel > -49 < /waterlevel > – veetase (cm Kroonlinna nulli suhtes) waterlevel = station.find('waterlevel').text data['waterlevel'].append(waterlevel) # < waterlevel_eh2000 > -28 < waterlevel_eh2000/ > – veetase (cm Amsterdami nulli suhtes) # waterlevel_eh2000 = station.find('waterlevel_eh2000').text # < watertemperature > -0.2 < /watertemperature > – veetemperatuur (°C) watertemp = station.find('watertemperature').text data['watertemperature'].append(watertemp) print(counter) df = pd.DataFrame(data) for field in ['precipitations','airtemperature','windspeed','waterlevel','watertemperature']: if field in df.columns: if is_string_dtype(df[field]): df[field] = df[field].astype(float) display(df.head(5)) geo_df = gpd.GeoDataFrame(df, crs=from_epsg(4326), geometry='geometry') geo_df.plot() water_df = geo_df.dropna(subset=['precipitations']) water_df.plot(column='precipitations', legend=True) geo_df_3301 = geo_df.dropna(subset=['precipitations']).to_crs(epsg=3301) geo_df_3301['x'] = geo_df_3301['geometry'].apply(lambda p: p.x) geo_df_3301['y'] = geo_df_3301['geometry'].apply(lambda p: p.y) display(geo_df_3301.head(5)) geo_df_3301.to_file('ilmateenistus_precip_stations.shp', encoding='utf-8') ``` ## IDW in Python from scratch blogpost https://www.geodose.com/2019/09/creating-idw-interpolation-from-scratch-python.html - IDW Algorithm Implementation in Python - IDW Interpolation Algorithm Based on Block Radius Sampling Point - IDW Interpolation based on Minimum Number of Sampling Point ``` geo_df_3301.dtypes from idw_basic import idw_rblock, idw_npoint x_idw_list1, y_idw_list1, z_head1 = idw_rblock(x=geo_df_3301['x'].astype(float).values.tolist(), y=geo_df_3301['y'].astype(float).values.tolist(), z=geo_df_3301['precipitations'].values.tolist(), grid_side_length=200, search_radius=50000, p=1.5) display(len(x_idw_list1)) display(len(y_idw_list1)) display(len(z_head1)) display(np.array(z_head1).shape) plt.matshow(z_head1, origin='lower') plt.colorbar() plt.show() ``` _idw_npoint_ might take very long, due to ierative search radius increase to find at least n nearest neighbours ``` x_idw_list2, y_idw_list2, z_head2 = idw_npoint(x=geo_df_3301['x'].astype(float).values.tolist(), y=geo_df_3301['y'].astype(float).values.tolist(), z=geo_df_3301['airtemperature'].values.tolist(), grid_side_length=100, n_points=3, p=1.5, rblock_iter_distance=50000) display(len(x_idw_list2)) display(len(y_idw_list2)) display(len(z_head2)) display(np.array(z_head2).shape) plt.matshow(z_head2, origin='lower') plt.colorbar() plt.show() ``` ## Inverse distance weighting (IDW) in Python with a KDTree By Copyright (C) 2016 Paul Brodersen <[email protected]> under GPL-3.0 code: https://github.com/paulbrodersen/inverse_distance_weighting Inverse distance weighting is an interpolation method that computes the score of query points based on the scores of their k-nearest neighbours, weighted by the inverse of their distances. As each query point is evaluated using the same number of data points, this method allows for strong gradient changes in regions of high sample density while imposing smoothness in data sparse regions. 
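As a concrete illustration of this weighting scheme, the short sketch below computes a single IDW estimate by hand from three neighbouring samples (the distances and values are made-up numbers, independent of the weather data).

```
# Hand-computed IDW estimate at one query point: w_i = 1 / d_i**p, z_hat = sum(w_i*z_i) / sum(w_i)
import numpy as np

dists = np.array([1.0, 2.0, 4.0])       # distances to the 3 nearest sample points
values = np.array([10.0, 20.0, 40.0])   # observed values at those points
p = 2
weights = 1.0 / dists**p
z_hat = np.sum(weights * values) / np.sum(weights)
print(z_hat)  # ~13.3 -- the closest sample dominates
```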
uses: - numpy - scipy.spatial (for cKDTree) ``` import numpy as np import matplotlib.pyplot as plt %matplotlib inline import idw_knn XY_obs_coords = np.vstack([geo_df_3301['x'].values, geo_df_3301['y'].values]).T z_arr = geo_df_3301['precipitations'].values display(XY_obs_coords.shape) display(z_arr.shape) # returns a function that is trained (the tree setup) for the interpolation on the grid idw_tree = idw_knn.tree(XY_obs_coords, z_arr) all_dist_m = geo_df_3301['x'].max() - geo_df_3301['x'].min() dist_km_x = all_dist_m / 1000 display(dist_km_x) all_dist_m_y = geo_df_3301['y'].max() - geo_df_3301['y'].min() dist_km_y = all_dist_m_y / 1000 display(dist_km_y) # prepare grids # number of target interpolation grid shape along x and y axis, e.g. 150*100 raster pixels nx=int(dist_km_x) ny=int(dist_km_y) # preparing the "output" grid x_spacing = np.linspace(geo_df_3301['x'].min(), geo_df_3301['x'].max(), nx) y_spacing = np.linspace(geo_df_3301['y'].min(), geo_df_3301['y'].max(), ny) # preparing the target grid x_y_grid_pairs = np.meshgrid(x_spacing, y_spacing) x_y_grid_pairs_list = np.reshape(x_y_grid_pairs, (2, -1)).T display(f"x_y_grid_pairs {len(x_y_grid_pairs)}") display(f"x_y_grid_pairs_list reshaped {x_y_grid_pairs_list.shape}") # now interpolating onto the target grid z_arr_interp = idw_tree(x_y_grid_pairs_list) display(f"z_arr_interp {z_arr_interp.shape}") # plot fig, (ax1, ax2) = plt.subplots(1,2, sharex=True, sharey=True, figsize=(10,3)) ax1.scatter(XY_obs_coords[:,0], XY_obs_coords[:,1], c=geo_df_3301['precipitations'], linewidths=0) ax1.set_title('Observation samples') ax2.contourf(x_spacing, y_spacing, z_arr_interp.reshape((ny,nx))) ax2.set_title('Interpolation') plt.show() z_arr_interp.shape plt.matshow(z_arr_interp.reshape((ny,nx)), origin='lower') plt.colorbar() plt.show() display(f"x_spacing {x_spacing.shape}") display(f"y_spacing {y_spacing.shape}") # is a x_y_grid_pair a list of two ndarrays, each is fully spatial 100x150 fields, one holds the x coords the other the y coords x_mg = np.meshgrid(x_spacing, y_spacing) display(f"x_mg {type(x_mg)} {len(x_mg)} len0 {type(x_mg[0])} {len(x_mg[0])} {x_mg[0].shape} len1 {type(x_mg[1])} {len(x_mg[1])} {x_mg[0].shape}") # the yget reshaped into two long flattened arrays the joint full list of target x y pairs representing all grid locations x_mg_interp_prep = np.reshape(x_mg, (2, -1)).T display(f"x_mg_interp_prep {type(x_mg_interp_prep)} {len(x_mg_interp_prep)} {x_mg_interp_prep.shape}") ``` ## Interpolation in Python with Radial Basis Function - https://stackoverflow.com/a/3114117 ``` from scipy.interpolate import Rbf def scipy_idw(x, y, z, xi, yi): interp = Rbf(x, y, z, function='linear') return interp(xi, yi) def plot(x,y,z,grid): plt.figure() grid_flipped = np.flipud(grid) plt.imshow(grid, extent=(x.min(), x.max(), y.min(), y.max()), origin='lower') # plt.hold(True) plt.scatter(x,y,c=z) plt.colorbar() # nx, ny = 50, 50 x=geo_df_3301['x'].astype(float).values y=geo_df_3301['y'].astype(float).values z=geo_df_3301['precipitations'].values xi = np.linspace(x.min(), x.max(), nx) yi = np.linspace(y.min(), y.max(), ny) xi, yi = np.meshgrid(xi, yi) xi, yi = xi.flatten(), yi.flatten() grid2 = scipy_idw(x,y,z,xi,yi) grid2 = grid2.reshape((ny, nx)) plot(x,y,z,grid2) plt.title("Scipy's Rbf with function=linear") # plot fig, (ax1, ax2, ax3) = plt.subplots(1,3, sharex=True, sharey=True, figsize=(10,3)) ax1.scatter(x,y, c=z, linewidths=0) ax1.set_title('Observation samples') ax2.contourf(np.linspace(x.min(), x.max(), nx), np.linspace(y.min(), 
y.max(), ny), grid2) ax2.set_title('Interpolation contours') ax3.imshow(np.flipud(grid2), extent=(x.min(), x.max(), y.min(), y.max())) ax3.set_title('RBF pixels') plt.show() ``` ## surface/contour/mesh plotting of interpolated grids https://matplotlib.org/3.1.0/gallery/images_contours_and_fields/pcolormesh_levels.html#sphx-glr-gallery-images-contours-and-fields-pcolormesh-levels-py ``` from matplotlib.colors import BoundaryNorm from matplotlib.ticker import MaxNLocator from matplotlib import cm nbins=15 levels = MaxNLocator(nbins=nbins).tick_values(z_arr_interp.min(), z_arr_interp.max()) # pick the desired colormap, sensible levels, and define a normalization # instance which takes data values and translates those into levels. cmap = plt.get_cmap('viridis') norm = BoundaryNorm(levels, ncolors=cmap.N, clip=True) # plot fig, (ax1, ax2) = plt.subplots(1,2, sharex=True, sharey=True, figsize=(10,3)) im = ax1.pcolormesh(x_idw_list1, y_idw_list1, np.array(z_head1), cmap=cmap, norm=norm) fig.colorbar(im, ax=ax1) ax1.set_title('pcolormesh with normalisation (nbins={})'.format(nbins)) im2 = ax2.pcolormesh(x_idw_list1, y_idw_list1, np.array(z_head1), cmap=cm.viridis) fig.colorbar(im2, ax=ax2) ax2.set_title('pcolormesh without explicit normalisation') plt.show() # plot fig, (ax1, ax2) = plt.subplots(1,2, sharex=True, sharey=True, figsize=(10,3)) cf = ax1.contourf(x_spacing, y_spacing, z_arr_interp.reshape((ny,nx)), levels=levels, cmap=cmap) fig.colorbar(cf, ax=ax1) ax1.set_title('contourf with {} levels'.format(nbins)) cf2 = ax2.contourf(x_spacing, y_spacing, z_arr_interp.reshape((ny,nx)), cmap=cm.viridis) fig.colorbar(cf2, ax=ax2) ax2.set_title('contourf with defaut levels') plt.show() z_arr_interp.reshape((ny,nx)).shape ``` ## Writing interpolated array to a raster file - GeoTiff raster with GDAL Python ``` from fiona.crs import from_epsg import pyproj import osgeo.osr import gdal gdal.UseExceptions() # wkt_projection = CRS("EPSG:3301") -> techniclly should tae crs from the geodataframe crs = pyproj.Proj(from_epsg(3301)) srs = osgeo.osr.SpatialReference() srs.ImportFromProj4(crs.srs) wkt_projection = srs.ExportToWkt() # # KDTree z_arr_interp # ncols = nx nrows = ny cell_unit_sizeX = (geo_df_3301['x'].max() - geo_df_3301['x'].min()) / ncols cell_unit_sizeY = (geo_df_3301['y'].max() - geo_df_3301['y'].min()) / nrows testnp = z_arr_interp.reshape((ny,nx)) xllcorner = geo_df_3301['x'].min() xulcorner = geo_df_3301['x'].min() yllcorner = geo_df_3301['y'].min() yulcorner = geo_df_3301['y'].max() nodata_value = -9999 driver = gdal.GetDriverByName("GTiff") dataset = driver.Create("kdtree_precip_rasterout1.tif", ncols, nrows, 1, gdal.GDT_Float32 ) dataset.SetProjection(wkt_projection) dataset.SetGeoTransform((xulcorner,cell_unit_sizeX,0,yulcorner,0,-cell_unit_sizeY)) dataset.GetRasterBand(1).WriteArray(np.flipud(testnp)) band = dataset.GetRasterBand(1) band.SetNoDataValue(nodata_value) dataset.FlushCache() # dereference band to avoid gotcha described previously band = None dataset = None # # RBF grid2 # testnp = grid2.reshape((ny,nx)) ncols = nx nrows = ny cell_unit_sizeX = (geo_df_3301['x'].max() - geo_df_3301['x'].min()) / ncols cell_unit_sizeY = (geo_df_3301['y'].max() - geo_df_3301['y'].min()) / nrows xllcorner = geo_df_3301['x'].min() xulcorner = geo_df_3301['x'].min() yllcorner = geo_df_3301['y'].min() yulcorner = geo_df_3301['y'].max() nodata_value = -9999 driver = gdal.GetDriverByName("GTiff") dataset = driver.Create("rbf_precip_rasterout1.tif", ncols, nrows, 1, gdal.GDT_Float32 ) 
dataset.SetProjection(wkt_projection) dataset.SetGeoTransform((xulcorner,cell_unit_sizeX,0,yulcorner,0,-cell_unit_sizeY)) dataset.GetRasterBand(1).WriteArray(np.flipud(testnp)) band = dataset.GetRasterBand(1) band.SetNoDataValue(nodata_value) dataset.FlushCache() # dereference band to avoid gotcha described previously band = None dataset = None ncols = 200 nrows = 200 cell_unit_sizeX = (geo_df_3301['x'].max() - geo_df_3301['x'].min()) / ncols cell_unit_sizeY = (geo_df_3301['y'].max() - geo_df_3301['y'].min()) / nrows xllcorner = geo_df_3301['x'].min() xulcorner = geo_df_3301['x'].min() yllcorner = geo_df_3301['y'].min() yulcorner = geo_df_3301['y'].max() nodata_value = -9999 driver = gdal.GetDriverByName("GTiff") # dataset = driver.Create("%s"%(OutputFile), NROWS, NCOLS, 1, gdal.GDT_Float32 ) dataset = driver.Create("idw_basic_precip_rasterout1.tif", ncols, nrows, 1, gdal.GDT_Float32 ) dataset.SetProjection(wkt_projection) dataset.SetGeoTransform((xulcorner,cell_unit_sizeX,0,yulcorner,0,-cell_unit_sizeY)) dataset.GetRasterBand(1).WriteArray(np.flipud(np.array(z_head1))) band = dataset.GetRasterBand(1) band.SetNoDataValue(nodata_value) dataset.FlushCache() # dereference band to avoid gotcha described previously band = None dataset = None ``` ## Point Query RasterStats - https://pythonhosted.org/rasterstats/manual.html#basic-example ``` from rasterstats import point_query xm = gpd.read_file('ilmateenistus_precip_stations.shp', encoding="utf-8") pts_kd = point_query('ilmateenistus_precip_stations.shp', "kdtree_precip_rasterout1.tif") pts_rbf = point_query('ilmateenistus_precip_stations.shp', "rbf_precip_rasterout1.tif") pts_idw = point_query('ilmateenistus_precip_stations.shp', "idw_basic_precip_rasterout1.tif") xm['pcp_kdtree'] = pts_kd xm['pcp_rbf'] = pts_rbf xm['pcp_idw'] = pts_idw xm = xm[['precipitat','pcp_kdtree','pcp_rbf','pcp_idw']].dropna() from sklearn.metrics import mean_squared_error, r2_score x_l = [] for rst in ['pcp_kdtree', 'pcp_rbf', 'pcp_idw']: rmse = np.sqrt(mean_squared_error(xm['precipitat'], xm[rst])) r2 = r2_score(xm['precipitat'], xm[rst]) x_l.append({ 'name': rst, 'rmse': rmse, 'r2': r2}) pd.DataFrame(x_l) ```
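Note that the RMSE and R² values above are computed at the same stations that were used to build the rasters, so they are optimistic. A fairer estimate for the KDTree IDW interpolator is a leave-one-out check, sketched below; it reuses `XY_obs_coords`, `z_arr`, `mean_squared_error` and the `idw_knn` module from earlier cells, and assumes rebuilding the tree once per station is cheap for a network of this size.

```
# Leave-one-out validation of the KDTree IDW interpolator: predict each station from the others.
loo_preds = np.zeros_like(z_arr)
for i in range(len(z_arr)):
    mask = np.arange(len(z_arr)) != i
    tree_i = idw_knn.tree(XY_obs_coords[mask], z_arr[mask])
    loo_preds[i] = tree_i(XY_obs_coords[i:i + 1])[0]

loo_rmse = np.sqrt(mean_squared_error(z_arr, loo_preds))
print('Leave-one-out RMSE (KDTree IDW):', loo_rmse)
```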
# Distributed Training of Mask-RCNN in Amazon SageMaker using EFS This notebook is a step-by-step tutorial on distributed training of [Mask R-CNN](https://arxiv.org/abs/1703.06870) implemented in the [TensorFlow](https://www.tensorflow.org/) framework. Mask R-CNN is also referred to as a heavyweight object detection model, and it is part of [MLPerf](https://www.mlperf.org/training-results-0-6/). Concretely, we will describe the steps for training [TensorPack Faster-RCNN/Mask-RCNN](https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN) and [AWS Samples Mask R-CNN](https://github.com/aws-samples/mask-rcnn-tensorflow) in [Amazon SageMaker](https://aws.amazon.com/sagemaker/) using an [Amazon EFS](https://aws.amazon.com/efs/) file-system as the data source. The outline of steps is as follows: 1. Stage the COCO 2017 dataset in [Amazon S3](https://aws.amazon.com/s3/) 2. Copy the COCO 2017 dataset from S3 to the Amazon EFS file-system mounted on this notebook instance 3. Build the Docker training image and push it to [Amazon ECR](https://aws.amazon.com/ecr/) 4. Configure data input channels 5. Configure hyper-parameters 6. Define training metrics 7. Define the training job and start training Before we get started, let us initialize two Python variables ```aws_region``` and ```s3_bucket``` that we will use throughout the notebook: ``` aws_region = # aws-region-code e.g. us-east-1 s3_bucket = # your-s3-bucket-name ``` ## Stage COCO 2017 dataset in Amazon S3 We use the [COCO 2017 dataset](http://cocodataset.org/#home) for training. We download the COCO 2017 training and validation datasets to this notebook instance, extract the files from the dataset archives, and upload the extracted files to your Amazon [S3 bucket](https://docs.aws.amazon.com/AmazonS3/latest/gsg/CreatingABucket.html). The ```prepare-s3-bucket.sh``` script executes this step. ``` !cat ./prepare-s3-bucket.sh ``` Using your *Amazon S3 bucket* as argument, run the cell below. If you have already uploaded the COCO 2017 dataset to your Amazon S3 bucket, you may skip this step. ``` %%time !./prepare-s3-bucket.sh {s3_bucket} ``` ## Copy COCO 2017 dataset from S3 to Amazon EFS Next, we copy the COCO 2017 dataset from S3 to the EFS file-system. The ```prepare-efs.sh``` script executes this step. ``` !cat ./prepare-efs.sh ``` If you have already copied the COCO 2017 dataset from S3 to your EFS file-system, skip this step. ``` %%time !./prepare-efs.sh {s3_bucket} ``` ## Build and push SageMaker training images For this step, the [IAM Role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) attached to this notebook instance needs full access to the Amazon ECR service. If you created this notebook instance using the ```./stack-sm.sh``` script in this repository, the IAM Role attached to this notebook instance is already set up with full access to the ECR service. Below, we have a choice of two different implementations: 1. The [TensorPack Faster-RCNN/Mask-RCNN](https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN) implementation supports a maximum per-GPU batch size of 1, and does not support mixed precision. It can be used with mainstream TensorFlow releases. 2. [AWS Samples Mask R-CNN](https://github.com/aws-samples/mask-rcnn-tensorflow) is an optimized implementation that supports a maximum batch size of 4 and supports mixed precision. This implementation uses custom TensorFlow ops. 
The required custom TensorFlow ops are available in [AWS Deep Learning Container](https://github.com/aws/deep-learning-containers/blob/master/available_images.md) images in ```tensorflow-training``` repository with image tag ```1.15.2-gpu-py36-cu100-ubuntu18.04```, or later. It is recommended that you build and push both SageMaker training images and use either image for training later. ### TensorPack Faster-RCNN/Mask-RCNN Use ```./container/build_tools/build_and_push.sh``` script to build and push the TensorPack Faster-RCNN/Mask-RCNN training image to Amazon ECR. ``` !cat ./container/build_tools/build_and_push.sh ``` Using your *AWS region* as argument, run the cell below. ``` %%time ! ./container/build_tools/build_and_push.sh {aws_region} ``` Set ```tensorpack_image``` below to Amazon ECR URI of the image you pushed above. ``` tensorpack_image = # mask-rcnn-tensorpack-sagemaker ECR URI ``` ### AWS Samples Mask R-CNN Use ```./container-optimized/build_tools/build_and_push.sh``` script to build and push the AWS Samples Mask R-CNN training image to Amazon ECR. ``` !cat ./container-optimized/build_tools/build_and_push.sh ``` Using your *AWS region* as argument, run the cell below. ``` %%time ! ./container-optimized/build_tools/build_and_push.sh {aws_region} ``` Set ```aws_samples_image``` below to Amazon ECR URI of the image you pushed above. ``` aws_samples_image = # mask-rcnn-tensorflow-sagemaker ECR URI ``` ## SageMaker Initialization First we upgrade SageMaker to 2.3.0 API. If your notebook is already using latest Sagemaker 2.x API, you may skip the next cell. ``` ! pip install --upgrade pip ! pip install sagemaker==2.3.0 ``` We have staged the data and we have built and pushed the training docker image to Amazon ECR. Now we are ready to start using Amazon SageMaker. ``` %%time import os import time import boto3 import sagemaker from sagemaker import get_execution_role from sagemaker.estimator import Estimator role = get_execution_role() # provide a pre-existing role ARN as an alternative to creating a new role print(f'SageMaker Execution Role:{role}') client = boto3.client('sts') account = client.get_caller_identity()['Account'] print(f'AWS account:{account}') session = boto3.session.Session() region = session.region_name print(f'AWS region:{region}') ``` Next, we set the Amazon ECR image URI used for training. You saved this URI in a previous step. ``` training_image = # set to tensorpack_image or aws_samples_image print(f'Training image: {training_image}') ``` ## Define SageMaker Data Channels Next, we define the *train* and *log* data channels using EFS file-system. To do so, we need to specify the EFS file-system id, which is shown in the output of the command below. ``` !df -kh | grep 'fs-' | sed 's/\(fs-[0-9a-z]*\).*/\1/' ``` Set the EFS ```file_system_id``` below to the ouput of the command shown above. In the cell below, we define the `train` data input channel. ``` from sagemaker.inputs import FileSystemInput # Specify EFS ile system id. file_system_id = # 'fs-xxxxxxxx' print(f"EFS file-system-id: {file_system_id}") # Specify directory path for input data on the file system. # You need to provide normalized and absolute path below. file_system_directory_path = '/mask-rcnn/sagemaker/input/train' print(f'EFS file-system data input path: {file_system_directory_path}') # Specify the access mode of the mount of the directory associated with the file system. # Directory must be mounted 'ro'(read-only). 
file_system_access_mode = 'ro' # Specify your file system type file_system_type = 'EFS' train = FileSystemInput(file_system_id=file_system_id, file_system_type=file_system_type, directory_path=file_system_directory_path, file_system_access_mode=file_system_access_mode) ``` Below we create the log output directory and define the `log` data output channel. ``` # Specify directory path for log output on the EFS file system. # You need to provide normalized and absolute path below. # For example, '/mask-rcnn/sagemaker/output/log' # Log output directory must not exist file_system_directory_path = f'/mask-rcnn/sagemaker/output/log-{int(time.time())}' # Create the log output directory. # EFS file-system is mounted on '$HOME/efs' mount point for this notebook. home_dir=os.environ['HOME'] local_efs_path = os.path.join(home_dir,'efs', file_system_directory_path[1:]) print(f"Creating log directory on EFS: {local_efs_path}") assert not os.path.isdir(local_efs_path) ! sudo mkdir -p -m a=rw {local_efs_path} assert os.path.isdir(local_efs_path) # Specify the access mode of the mount of the directory associated with the file system. # Directory must be mounted 'rw'(read-write). file_system_access_mode = 'rw' log = FileSystemInput(file_system_id=file_system_id, file_system_type=file_system_type, directory_path=file_system_directory_path, file_system_access_mode=file_system_access_mode) data_channels = {'train': train, 'log': log} ``` Next, we define the model output location in S3. Set ```s3_bucket``` to your S3 bucket name prior to running the cell below. The model checkpoints, logs and Tensorboard events will be written to the log output directory on the EFS file system you created above. At the end of the model training, they will be copied from the log output directory to the `s3_output_location` defined below. ``` prefix = "mask-rcnn/sagemaker" #prefix in your bucket s3_output_location = f's3://{s3_bucket}/{prefix}/output' print(f'S3 model output location: {s3_output_location}') ``` ## Configure Hyper-parameters Next we define the hyper-parameters. Note, some hyper-parameters are different between the two implementations. The batch size per GPU in TensorPack Faster-RCNN/Mask-RCNN is fixed at 1, but is configurable in AWS Samples Mask-RCNN. The learning rate schedule is specified in units of steps in TensorPack Faster-RCNN/Mask-RCNN, but in epochs in AWS Samples Mask-RCNN. The detault learning rate schedule values shown below correspond to training for a total of 24 epochs, at 120,000 images per epoch. 
<table align='left'> <caption>TensorPack Faster-RCNN/Mask-RCNN Hyper-parameters</caption> <tr> <th style="text-align:center">Hyper-parameter</th> <th style="text-align:center">Description</th> <th style="text-align:center">Default</th> </tr> <tr> <td style="text-align:center">mode_fpn</td> <td style="text-align:left">Flag to indicate use of Feature Pyramid Network (FPN) in the Mask R-CNN model backbone</td> <td style="text-align:center">"True"</td> </tr> <tr> <td style="text-align:center">mode_mask</td> <td style="text-align:left">A value of "False" means Faster-RCNN model, "True" means Mask R-CNN moodel</td> <td style="text-align:center">"True"</td> </tr> <tr> <td style="text-align:center">eval_period</td> <td style="text-align:left">Number of epochs period for evaluation during training</td> <td style="text-align:center">1</td> </tr> <tr> <td style="text-align:center">lr_schedule</td> <td style="text-align:left">Learning rate schedule in training steps</td> <td style="text-align:center">'[240000, 320000, 360000]'</td> </tr> <tr> <td style="text-align:center">batch_norm</td> <td style="text-align:left">Batch normalization option ('FreezeBN', 'SyncBN', 'GN', 'None') </td> <td style="text-align:center">'FreezeBN'</td> </tr> <tr> <td style="text-align:center">images_per_epoch</td> <td style="text-align:left">Images per epoch </td> <td style="text-align:center">120000</td> </tr> <tr> <td style="text-align:center">data_train</td> <td style="text-align:left">Training data under data directory</td> <td style="text-align:center">'coco_train2017'</td> </tr> <tr> <td style="text-align:center">data_val</td> <td style="text-align:left">Validation data under data directory</td> <td style="text-align:center">'coco_val2017'</td> </tr> <tr> <td style="text-align:center">resnet_arch</td> <td style="text-align:left">Must be 'resnet50' or 'resnet101'</td> <td style="text-align:center">'resnet50'</td> </tr> <tr> <td style="text-align:center">backbone_weights</td> <td style="text-align:left">ResNet backbone weights</td> <td style="text-align:center">'ImageNet-R50-AlignPadding.npz'</td> </tr> <tr> <td style="text-align:center">load_model</td> <td style="text-align:left">Pre-trained model to load</td> <td style="text-align:center"></td> </tr> <tr> <td style="text-align:center">config:</td> <td style="text-align:left">Any hyperparamter prefixed with <b>config:</b> is set as a model config parameter</td> <td style="text-align:center"></td> </tr> </table> <table align='left'> <caption>AWS Samples Mask-RCNN Hyper-parameters</caption> <tr> <th style="text-align:center">Hyper-parameter</th> <th style="text-align:center">Description</th> <th style="text-align:center">Default</th> </tr> <tr> <td style="text-align:center">mode_fpn</td> <td style="text-align:left">Flag to indicate use of Feature Pyramid Network (FPN) in the Mask R-CNN model backbone</td> <td style="text-align:center">"True"</td> </tr> <tr> <td style="text-align:center">mode_mask</td> <td style="text-align:left">A value of "False" means Faster-RCNN model, "True" means Mask R-CNN moodel</td> <td style="text-align:center">"True"</td> </tr> <tr> <td style="text-align:center">eval_period</td> <td style="text-align:left">Number of epochs period for evaluation during training</td> <td style="text-align:center">1</td> </tr> <tr> <td style="text-align:center">lr_epoch_schedule</td> <td style="text-align:left">Learning rate schedule in epochs</td> <td style="text-align:center">'[(16, 0.1), (20, 0.01), (24, None)]'</td> </tr> <tr> <td 
style="text-align:center">batch_size_per_gpu</td> <td style="text-align:left">Batch size per gpu ( Minimum 1, Maximum 4)</td> <td style="text-align:center">4</td> </tr> <tr> <td style="text-align:center">batch_norm</td> <td style="text-align:left">Batch normalization option ('FreezeBN', 'SyncBN', 'GN', 'None') </td> <td style="text-align:center">'FreezeBN'</td> </tr> <tr> <td style="text-align:center">images_per_epoch</td> <td style="text-align:left">Images per epoch </td> <td style="text-align:center">120000</td> </tr> <tr> <td style="text-align:center">data_train</td> <td style="text-align:left">Training data under data directory</td> <td style="text-align:center">'train2017'</td> </tr> <tr> <td style="text-align:center">backbone_weights</td> <td style="text-align:left">ResNet backbone weights</td> <td style="text-align:center">'ImageNet-R50-AlignPadding.npz'</td> </tr> <tr> <td style="text-align:center">load_model</td> <td style="text-align:left">Pre-trained model to load</td> <td style="text-align:center"></td> </tr> <tr> <td style="text-align:center">config:</td> <td style="text-align:left">Any hyperparamter prefixed with <b>config:</b> is set as a model config parameter</td> <td style="text-align:center"></td> </tr> </table> ``` hyperparameters = { "mode_fpn": "True", "mode_mask": "True", "eval_period": 1, "batch_norm": "FreezeBN" } ``` ## Define Training Metrics Next, we define the regular expressions that SageMaker uses to extract algorithm metrics from training logs and send them to [AWS CloudWatch metrics](https://docs.aws.amazon.com/en_pv/AmazonCloudWatch/latest/monitoring/working_with_metrics.html). These algorithm metrics are visualized in SageMaker console. ``` metric_definitions=[ { "Name": "fastrcnn_losses/box_loss", "Regex": ".*fastrcnn_losses/box_loss:\\s*(\\S+).*" }, { "Name": "fastrcnn_losses/label_loss", "Regex": ".*fastrcnn_losses/label_loss:\\s*(\\S+).*" }, { "Name": "fastrcnn_losses/label_metrics/accuracy", "Regex": ".*fastrcnn_losses/label_metrics/accuracy:\\s*(\\S+).*" }, { "Name": "fastrcnn_losses/label_metrics/false_negative", "Regex": ".*fastrcnn_losses/label_metrics/false_negative:\\s*(\\S+).*" }, { "Name": "fastrcnn_losses/label_metrics/fg_accuracy", "Regex": ".*fastrcnn_losses/label_metrics/fg_accuracy:\\s*(\\S+).*" }, { "Name": "fastrcnn_losses/num_fg_label", "Regex": ".*fastrcnn_losses/num_fg_label:\\s*(\\S+).*" }, { "Name": "maskrcnn_loss/accuracy", "Regex": ".*maskrcnn_loss/accuracy:\\s*(\\S+).*" }, { "Name": "maskrcnn_loss/fg_pixel_ratio", "Regex": ".*maskrcnn_loss/fg_pixel_ratio:\\s*(\\S+).*" }, { "Name": "maskrcnn_loss/maskrcnn_loss", "Regex": ".*maskrcnn_loss/maskrcnn_loss:\\s*(\\S+).*" }, { "Name": "maskrcnn_loss/pos_accuracy", "Regex": ".*maskrcnn_loss/pos_accuracy:\\s*(\\S+).*" }, { "Name": "mAP(bbox)/IoU=0.5", "Regex": ".*mAP\\(bbox\\)/IoU=0\\.5:\\s*(\\S+).*" }, { "Name": "mAP(bbox)/IoU=0.5:0.95", "Regex": ".*mAP\\(bbox\\)/IoU=0\\.5:0\\.95:\\s*(\\S+).*" }, { "Name": "mAP(bbox)/IoU=0.75", "Regex": ".*mAP\\(bbox\\)/IoU=0\\.75:\\s*(\\S+).*" }, { "Name": "mAP(bbox)/large", "Regex": ".*mAP\\(bbox\\)/large:\\s*(\\S+).*" }, { "Name": "mAP(bbox)/medium", "Regex": ".*mAP\\(bbox\\)/medium:\\s*(\\S+).*" }, { "Name": "mAP(bbox)/small", "Regex": ".*mAP\\(bbox\\)/small:\\s*(\\S+).*" }, { "Name": "mAP(segm)/IoU=0.5", "Regex": ".*mAP\\(segm\\)/IoU=0\\.5:\\s*(\\S+).*" }, { "Name": "mAP(segm)/IoU=0.5:0.95", "Regex": ".*mAP\\(segm\\)/IoU=0\\.5:0\\.95:\\s*(\\S+).*" }, { "Name": "mAP(segm)/IoU=0.75", "Regex": ".*mAP\\(segm\\)/IoU=0\\.75:\\s*(\\S+).*" }, { "Name": 
"mAP(segm)/large", "Regex": ".*mAP\\(segm\\)/large:\\s*(\\S+).*" }, { "Name": "mAP(segm)/medium", "Regex": ".*mAP\\(segm\\)/medium:\\s*(\\S+).*" }, { "Name": "mAP(segm)/small", "Regex": ".*mAP\\(segm\\)/small:\\s*(\\S+).*" } ] ``` ## Define SageMaker Training Job Next, we use SageMaker [Estimator](https://sagemaker.readthedocs.io/en/stable/estimators.html) API to define a SageMaker Training Job. We recommned using 32 GPUs, so we set ```instance_count=4``` and ```instance_type='ml.p3.16xlarge'```, because there are 8 Tesla V100 GPUs per ```ml.p3.16xlarge``` instance. We recommend using 100 GB [Amazon EBS](https://aws.amazon.com/ebs/) storage volume with each training instance, so we set ```volume_size = 100```. We run the training job in your private VPC, so we need to set the ```subnets``` and ```security_group_ids``` prior to running the cell below. You may specify multiple subnet ids in the ```subnets``` list. The subnets included in the ```sunbets``` list must be part of the output of ```./stack-sm.sh``` CloudFormation stack script used to create this notebook instance. Specify only one security group id in ```security_group_ids``` list. The security group id must be part of the output of ```./stack-sm.sh``` script. For ```instance_type``` below, you have the option to use ```ml.p3.16xlarge``` with 16 GB per-GPU memory and 25 Gbs network interconnectivity, or ```ml.p3dn.24xlarge``` with 32 GB per-GPU memory and 100 Gbs network interconnectivity. The ```ml.p3dn.24xlarge``` instance type offers significantly better performance than ```ml.p3.16xlarge``` for Mask R-CNN distributed TensorFlow training. ``` # Give Amazon SageMaker Training Jobs Access to FileSystem Resources in Your Amazon VPC. security_group_ids = # ['sg-xxxxxxxx'] subnets = # [ 'subnet-xxxxxxx', 'subnet-xxxxxxx', 'subnet-xxxxxxx' ] sagemaker_session = sagemaker.session.Session(boto_session=session) mask_rcnn_estimator = Estimator(image_uri=training_image, role=role, instance_count=4, instance_type='ml.p3.16xlarge', volume_size = 100, max_run = 400000, output_path=s3_output_location, sagemaker_session=sagemaker_session, hyperparameters = hyperparameters, metric_definitions = metric_definitions, subnets=subnets, security_group_ids=security_group_ids) ``` Finally, we launch the SageMaker training job. See ```Training Jobs``` in SageMaker console to monitor the training job. ``` import time job_name=f'mask-rcnn-efs-{int(time.time())}' print(f"Launching Training Job: {job_name}") # set wait=True below if you want to print logs in cell output mask_rcnn_estimator.fit(inputs=data_channels, job_name=job_name, logs="All", wait=False) ```
# Plotting and Programming in Python (Continued) ## Plotting ``` %matplotlib inline import matplotlib.pyplot as plt time = [0, 1, 2, 3] position = [0, 100, 200, 300] plt.plot(time, position) plt.xlabel('Time (hr)') plt.ylabel('Position (km)') ``` ## Plot directly from Pandas DataFrame ``` import pandas as pd data = pd.read_csv('./gapminder_gdp_oceania.csv', index_col='country') # so we want to keep the (year) part only for clarity when plotting GDP vs. years # To do this we use strip(), which removes from the string the characters stated in the argument # This method works on strings, so we call str before strip() years = data.columns.str.strip('gdpPercap_') # Convert year values to integers, saving results back to dataframe data.columns = years.astype(int) # note astype() --> casting function data.loc['Australia'].plot() # More examples: # GDP Per Capita data.T.plot() # line by default plt.ylabel('GDP per capita') plt.xlabel('Year') plt.title('GDP per Capita in Oceania') # MANY styles of plots are available plt.style.use('ggplot') data.T.plot(kind='bar') # line, bar, barh, hist, box, area, pie, scatter, hexbin plt.ylabel('GDP per capita') # Plotting data using the matplotlib.plot() function directly years = data.columns gdp_australia = data.loc['Australia'] plt.plot(years, gdp_australia, 'g--') # format string 'g--' sets the color (green) and line style (dashed) plt.title('Annual GDP in Australia', fontsize=15) plt.ylabel('GDP') plt.xlabel('Year') ``` ## Can plot many sets of data together ``` # Select two countries' worth of data. gdp_australia = data.loc['Australia'] gdp_nz = data.loc['New Zealand'] # Plot with differently-colored markers. plt.plot(years, gdp_australia, 'b-', label='Australia') plt.plot(years, gdp_nz, 'y-', label='New Zealand') # Create legend. plt.legend(loc='upper left') # location parameter plt.xlabel('Year') plt.ylabel('GDP per capita ($)') plt.title('GDP per capita ($) in Oceania') # Scatterplot examples: plt.scatter(gdp_australia, gdp_nz) data.T.plot.scatter(x = 'Australia', y = 'New Zealand') # Transpose --> so country indices are now values # Minima and Maxima data_europe = pd.read_csv('./gapminder_gdp_europe.csv', index_col='country') # Note: use of strip technique to clean up labels years = data_europe.columns.str.strip('gdpPercap_') data_europe.columns = years; data_europe.min().plot(label='min') data_europe.max().plot(label='max') plt.legend(loc='best') plt.xticks(rotation=50) # rotate tick labels # Correlations data_asia = pd.read_csv('./gapminder_gdp_asia.csv', index_col='country') data_asia.describe().T.plot(kind='scatter', x='min', y='max') # Variability of Max is much higher than Min --> take a look at Max variable data_asia = pd.read_csv('./gapminder_gdp_asia.csv', index_col='country') years = data_asia.columns.str.strip('gdpPercap_') data_asia.columns = years data_asia.max().plot() plt.xticks(rotation=80) print(data_asia.idxmax()) # Remember idxmax function (max value for each index) # More Correlations # Create a plot showing correlation between GDP and life expectancy for 2007 data_all = pd.read_csv('./gapminder_all.csv', index_col='country') data_all.plot(kind='scatter', x='gdpPercap_2007', y='lifeExp_2007', s=data_all['pop_2007']/1e6) # change size of plotted points plt.title('Life Expectancy vs. GDP in 2007', fontsize=16) ``` ## Save your plot to a file ``` # fig = plt.gcf() --> get current figure data.T.plot(kind='line') # must get the current figure AFTER it has been plotted fig = plt.gcf() plt.legend(loc='upper left') plt.xlabel('Year') plt.ylabel('GDP per capita') fig.savefig('my_figure.png') ```
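A variation that does not depend on grabbing the "current" figure is to keep the ```Axes``` object that pandas returns and take the figure from it. This is just a sketch of the same save step; the ```dpi``` and ```bbox_inches``` values here are arbitrary choices.
```
# Sketch: save the figure via the Axes object returned by pandas
ax = data.T.plot(kind='line')
ax.set_xlabel('Year')
ax.set_ylabel('GDP per capita')
fig = ax.get_figure()
fig.savefig('my_figure_hires.png', dpi=200, bbox_inches='tight')  # dpi/bbox_inches are arbitrary
```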
# Decorators A decorator is a function that takes an object as one of its arguments and returns something. In Python, decorators can be applied to anything: functions, classes, and methods. The main goal of decorators is to change the behavior of an object without changing the object itself. This is a very flexible feature of the language. Functions are decorated with the following syntax ```Python @decorator def function(): ... ``` This notation is equivalent to the following definition: ```Python def function(): ... function = decorator(function) ``` In this case, the result of calling ```decorator``` is bound back to the name ```function```. Decorators can be used, for example, to measure the execution time of functions, count calls, cache results, warn about the use of deprecated functions, trace execution, or support design by contract. Let's look at an example that measures the execution time of a function. ``` import time def timeit(f): def inner(*args, **kwargs): start = time.time() res = f(*args, **kwargs) end = time.time() print(f'{end - start} seconds') return res return inner @timeit def my_sum(*args, **kwargs): """Sum function""" return sum(*args, **kwargs) res = my_sum([i for i in range(int(1e5))]) ``` This implementation has several problems: - there is no way to switch the timing off; - the output always goes to the standard output stream (```sys.stdout```); - the docstring and the attributes of the decorated function are lost. ``` print(f'{my_sum.__name__ = }') print(f'{my_sum.__doc__ = }') help(my_sum) ``` Since functions in Python are objects, they can be modified at run time, and this is the key to solving the last problem: we can copy the needed attributes of the decorated function. To avoid copying every attribute by hand, the standard library already provides this functionality in the ```functools``` module. ``` from functools import wraps def timeit(f): @wraps(f) def inner(*args, **kwargs): start = time.time() res = f(*args, **kwargs) end = time.time() print(f'{end - start} seconds') return res return inner @timeit def my_sum(*args, **kwargs): """Sum function""" return sum(*args, **kwargs) print(f'{my_sum.__name__ = }') print(f'{my_sum.__doc__ = }') help(my_sum) ``` # Parameterized decorators The decorator we have implemented has a very limited range of use, so let's extend it. Disabling the decorator can be implemented with a global variable, for example ```dec_enabled```, which is ```True``` when the decorator is active and ```False``` otherwise. Writing not only to the standard output stream (```sys.stdout```) but also to the error stream (```sys.stderr```) or to a file can be supported by passing arguments. Adding arguments to decorators makes the task slightly more involved. ```python @decorator(arg) def foo(): ... ``` In this case an extra step appears, namely evaluating the decorator itself. ```python def foo(): ... dec = decorator(x) # new step foo = dec(foo) ``` The argument-passing problem can be solved in several ways. The first of them, and not the best one, is to add one more nested function.
``` import sys dec_enabled = True def timeit(file): def dec(func): @wraps(func) def inner(*args, **kwargs): start = time.time() res = func(*args, **kwargs) end = time.time() print(f'{end - start} seconds', file=file) return res return inner if dec_enabled else func return dec @timeit(sys.stderr) def my_sum(*args, **kwargs): """Sum function""" return sum(*args, **kwargs) res = my_sum([i for i in range(int(1e5))]) print(res) ``` This variant works when the function is decorated as ```@timeit(sys.stderr)```. However, constantly writing decorators with three levels of nesting is not the Pythonic way. Instead, we can write, once, a decorator for decorators that allows passing arguments (yes, a decorator for a decorator). ``` from functools import update_wrapper def with_args(dec): @wraps(dec) def wrapper(*args, **kwargs): def decorator(func): res = dec(func, *args, **kwargs) update_wrapper(res, func) return res return decorator return wrapper ``` The ```with_args``` function takes a decorator and wraps it in a ```wrapper```, inside of which a new decorator is created. The original decorator is left unchanged. ``` dec_enabled = True @with_args def timeit(func, file): def inner(*args, **kwargs): start = time.time() res = func(*args, **kwargs) end = time.time() print(f'{end - start} seconds', file=file) return res return inner if dec_enabled else func @timeit(sys.stderr) def my_sum(*args, **kwargs): """Sum function""" return sum(*args, **kwargs) res = my_sum([i for i in range(int(1e5))]) print(res) ``` However, this is still too complicated. It is much more convenient to also allow the decorator to be called without arguments. Let's try using keyword-only arguments. ``` dec_enabled = True def timeit(func=None, *, file=sys.stderr): if func is None: def dec(func): return timeit(func, file=file) return dec if dec_enabled else func @wraps(func) def inner(*args, **kwargs): start = time.time() res = func(*args, **kwargs) end = time.time() print(f'{end - start} seconds', file=file) return res return inner if dec_enabled else func ``` Now the ```timeit``` decorator can be called in two ways. First, without passing any arguments; in that case the output goes to the default stream (```sys.stderr``` in this implementation). Remembering that the decorator expands to ```f = timeit(f)```, we can see that the ```func``` argument receives the function ```f```, so the first condition is not met and the ```inner``` wrapper is created. ``` dec_enabled = True @timeit def my_sum(*args, **kwargs): """Sum function""" return sum(*args, **kwargs) res = my_sum([i for i in range(int(1e5))]) print(res) ``` Second, by passing ```sys.stderr``` or a file as the keyword argument ```file```. In this case the decorator is called explicitly as ```timeit(file=sys.stderr)``` without the ```func``` argument, so ```func``` is ```None```, the first condition holds, and the ```dec``` wrapper is created. ``` dec_enabled = True @timeit(file=sys.stderr) def my_sum(*args, **kwargs): """Sum function""" return sum(*args, **kwargs) res = my_sum([i for i in range(int(1e5))]) print(res) ``` Thanks to the ```dec_enabled``` variable, the timing can be switched off entirely. In that case there is no overhead from calling extra functions. Several decorators can be applied to a single function at once; the order in which they run depends on the order in which they are applied to the function. Let's look at the hamburger example.
``` def with_bun(f): @wraps(f) def inner(): print('-' * 8) f() print('-' * 8) return inner def with_vegetables(f): @wraps(f) def inner(): print(' onion') f() print(' tomato') return inner def with_sauce(f): @wraps(f) def inner(): print(' sauce') f() return inner ``` Let's define the main function and decorate it. ``` @with_bun @with_vegetables @with_sauce def burger(): print(' cutlet') burger() ``` If we write this decoration out explicitly, we get the following sequence of calls: ``` def burger(): print(' cutlet') burger = with_sauce(burger) burger = with_vegetables(burger) burger = with_bun(burger) burger() ``` The bottom-most (innermost) decorator is applied first. If the order of decoration is changed, the result changes accordingly. Here are a couple more decorator examples. A decorator that traces function calls: ``` def trace(function=None, *, file=sys.stderr): if function is None: def dec(function): return trace(function, file=file) return dec if dec_enabled else function @wraps(function) def inner(*args, **kwargs): print(f'{function.__name__}, {args}, {kwargs}') return function(*args, **kwargs) return inner if dec_enabled else function @trace def foo(): print('Nothing') foo() ``` A decorator that checks whether a user is logged in (in simplified form). ``` def is_authenticated(user): return user in ('monty', 'guido') def login_required(function=None, login_url=''): def user_passes_test(view_func): @wraps(view_func) def wrapped(user, *args, **kwargs): if is_authenticated(user): return view_func(user, *args, **kwargs) print(f'User {user} redirected to the login page: {login_url}') return wrapped if function: return user_passes_test(function) return user_passes_test @login_required(login_url='localhost/login') def foo(user): print(f'{user = }') foo('monty') foo('guido') foo('pyuty') ``` # Useful links - [Decorators with parameters?](https://stackoverflow.com/questions/5929107/decorators-with-parameters) - [Reuven M. Lerner - Practical decorators - PyCon 2019](https://www.youtube.com/watch?v=MjHpMCIvwsY&feature=youtu.be)
<a href="https://colab.research.google.com/github/john-s-butler-dit/Numerical-Analysis-Python/blob/master/Chapter%2008%20-%20Heat%20Equations/801_Heat%20Equation-%20FTCS.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # The Explicit Forward Time Centered Space (FTCS) Difference Equation for the Heat Equation #### John S Butler [email protected] [Course Notes](https://johnsbutler.netlify.com/files/Teaching/Numerical_Analysis_for_Differential_Equations.pdf) [Github](https://github.com/john-s-butler-dit/Numerical-Analysis-Python) ## Overview This notebook will implement the explicit Forward Time Centered Space (FTCS) Difference method for the Heat Equation. ## The Heat Equation The Heat Equation is the first order in time ($t$) and second order in space ($x$) Partial Differential Equation [1-3]: \begin{equation} \frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2},\end{equation} The equation describes heat transfer on a domain \begin{equation} \Omega = \{ t \geq 0\leq x \leq 1\}. \end{equation} with an initial condition at time $t=0$ for all $x$ and boundary condition on the left ($x=0$) and right side ($x=1$). ## Forward Time Centered Space (FTCS) Difference method This notebook will illustrate the Forward Time Centered Space (FTCS) Difference method for the Heat Equation with the __initial conditions__ \begin{equation} u(x,0)=2x, \ \ 0 \leq x \leq \frac{1}{2}, \end{equation} \begin{equation} u(x,0)=2(1-x), \ \ \frac{1}{2} \leq x \leq 1, \end{equation} and __boundary condition__ \begin{equation}u(0,t)=0, u(1,t)=0. \end{equation} ``` # LIBRARY # vector manipulation import numpy as np # math functions import math # THIS IS FOR PLOTTING %matplotlib inline import matplotlib.pyplot as plt # side-stepping mpl backend import warnings warnings.filterwarnings("ignore") ``` ## Discete Grid The region $\Omega$ is discretised into a uniform mesh $\Omega_h$. In the space $x$ direction into $N$ steps giving a stepsize of \begin{equation}h=\frac{1-0}{N},\end{equation} resulting in \begin{equation}x[i]=0+ih, \ \ \ i=0,1,...,N,\end{equation} and into $N_t$ steps in the time $t$ direction giving a stepsize of \begin{equation} k=\frac{1-0}{N_t}\end{equation} resulting in \begin{equation}t[j]=0+jk, \ \ \ j=0,...,15.\end{equation} The Figure below shows the discrete grid points for $N=10$ and $Nt=100$, the known boundary conditions (green), initial conditions (blue) and the unknown values (red) of the Heat Equation. ``` N=10 Nt=1000 h=1/N k=1/Nt r=k/(h*h) time_steps=15 time=np.arange(0,(time_steps+.5)*k,k) x=np.arange(0,1.0001,h) X, Y = np.meshgrid(x, time) fig = plt.figure() plt.plot(X,Y,'ro'); plt.plot(x,0*x,'bo',label='Initial Condition'); plt.plot(np.ones(time_steps+1),time,'go',label='Boundary Condition'); plt.plot(x,0*x,'bo'); plt.plot(0*time,time,'go'); plt.xlim((-0.02,1.02)) plt.xlabel('x') plt.ylabel('time (ms)') plt.legend(loc='center left', bbox_to_anchor=(1, 0.5)) plt.title(r'Discrete Grid $\Omega_h,$ h= %s, k=%s'%(h,k),fontsize=24,y=1.08) plt.show(); ``` ## Discrete Initial and Boundary Conditions The discrete initial conditions are \begin{equation} w[i,0]=2x[i], \ \ 0 \leq x[i] \leq \frac{1}{2} \end{equation} \begin{equation}w[i,0]=2(1-x[i]), \ \ \frac{1}{2} \leq x[i] \leq 1 \end{equation} and the discrete boundary conditions are \begin{equation} w[0,j]=0, w[10,j]=0, \end{equation} where $w[i,j]$ is the numerical approximation of $U(x[i],t[j])$. 
The Figure below plots values of $w[i,0]$ for the initial (blue) and boundary (green) conditions at $t[0]=0.$ ``` w=np.zeros((N+1,time_steps+1)) b=np.zeros(N-1) # Initial Condition for i in range (1,N): w[i,0]=2*x[i] if x[i]>0.5: w[i,0]=2*(1-x[i]) # Boundary Condition for k in range (0,time_steps): w[0,k]=0 w[N,k]=0 fig = plt.figure(figsize=(8,4)) plt.plot(x,w[:,0],'o:',label='Initial Condition') plt.plot(x[[0,N]],w[[0,N],0],'go',label='Boundary Condition t[0]=0') #plt.plot(x[N],w[N,0],'go') plt.xlim([-0.1,1.1]) plt.ylim([-0.1,1.1]) plt.title('Initial and Boundary Condition',fontsize=24) plt.xlabel('x') plt.ylabel('w') plt.legend(loc='best') plt.show() ``` ## The Explicit Forward Time Centered Space (FTCS) Difference Equation The explicit Forward Time Centered Space (FTCS) difference equation of the Heat Equation is derived by discretising \begin{equation} \frac{\partial u_{ij}}{\partial t} = \frac{\partial^2 u_{ij}}{\partial x^2},\end{equation} around $(x_i,t_{j})$ giving the difference equation \begin{equation} \frac{w_{ij+1}-w_{ij}}{k}=\frac{w_{i+1j}-2w_{ij}+w_{i-1j}}{h^2}, \end{equation} rearranging the equation we get \begin{equation} w_{ij+1}=rw_{i-1j}+(1-2r)w_{ij}+rw_{i+1j}, \end{equation} for $i=1,...,9$ where $r=\frac{k}{h^2}$. This gives the formula for the unknown term $w_{ij+1}$ at the $(ij+1)$ mesh points in terms of $x[i]$ along the jth time row. Hence we can calculate the unknown pivotal values of $w$ along the first row of $j=1$ in terms of the known boundary conditions. This can be written in matrix form \begin{equation}\mathbf{w}_{j+1}=A\mathbf{w}_{j} +\mathbf{b}_{j} \end{equation} for which $A$ is a $9\times9$ matrix: \begin{equation} \left(\begin{array}{c} w_{1j+1}\\ w_{2j+1}\\ w_{3j+1}\\ w_{4j+1}\\ w_{5j+1}\\ w_{6j+1}\\ w_{7j+1}\\ w_{8j+1}\\ w_{9j+1}\\ \end{array}\right) =\left(\begin{array}{ccccccccc} 1-2r&r&0&0&0&0&0&0&0\\ r&1-2r&r&0&0&0&0&0&0\\ 0&r&1-2r&r&0&0&0&0&0\\ 0&0&r&1-2r&r&0&0&0&0\\ 0&0&0&r&1-2r&r&0&0&0\\ 0&0&0&0&r&1-2r&r&0&0\\ 0&0&0&0&0&r&1-2r&r&0\\ 0&0&0&0&0&0&r&1-2r&r\\ 0&0&0&0&0&0&0&r&1-2r\\ \end{array}\right) \left(\begin{array}{c} w_{1j}\\ w_{2j}\\ w_{3j}\\ w_{4j}\\ w_{5j}\\ w_{6j}\\ w_{7j}\\ w_{8j}\\ w_{9j}\\ \end{array}\right)+ \left(\begin{array}{c} rw_{0j}\\ 0\\ 0\\ 0\\ 0\\ 0\\ 0\\ 0\\ rw_{10j}\\ \end{array}\right). \end{equation} It is assumed that the boundary values $w_{0j}$ and $w_{10j}$ are known for $j=1,2,...$, and $w_{i0}$ for $i=0,...,10$ is the initial condition. The Figure below shows the values of the $9\times 9$ matrix in colour plot form for $r=\frac{k}{h^2}$. ``` A=np.zeros((N-1,N-1)) for i in range (0,N-1): A[i,i]=1-2*r # DIAGONAL for i in range (0,N-2): A[i+1,i]=r # LOWER DIAGONAL A[i,i+1]=r # UPPER DIAGONAL fig = plt.figure(figsize=(6,4)); #plt.matshow(A); plt.imshow(A,interpolation='none'); plt.xticks(np.arange(N-1), np.arange(1,N-0.9,1)); plt.yticks(np.arange(N-1), np.arange(1,N-0.9,1)); clb=plt.colorbar(); clb.set_label('Matrix elements values'); #clb.set_clim((-1,1)); plt.title('Matrix r=%s'%(np.round(r,3)),fontsize=24) fig.tight_layout() plt.show(); ``` ## Results To numerically approximate the solution at $t[1]$ the matrix equation becomes \begin{equation} \mathbf{w}_{1}=A\mathbf{w}_{0} +\mathbf{b}_{0} \end{equation} where all the right hand side is known. To approximate the solution at time $t[2]$ we use the matrix equation \begin{equation} \mathbf{w}_{2}=A\mathbf{w}_{1} +\mathbf{b}_{1}.
\end{equation} Each set of numerical solutions $w[i,j]$ for all $i$ at the previous time step is used to approximate the solution $w[i,j+1]$. The Figure below shows the numerical approximation $w[i,j]$ of the Heat Equation using the FTCS method at $x[i]$ for $i=0,...,10$ and time steps $t[j]$ for $j=1,...,15$. The left plot shows the numerical approximation $w[i,j]$ as a function of $x[i]$ with each color representing the different time steps $t[j]$. The right plot shows the numerical approximation $w[i,j]$ as colour plot as a function of $x[i]$, on the $x[i]$ axis and time $t[j]$ on the $y$ axis. For $r>\frac{1}{2}$ the method is unstable resulting a solution that oscillates unnaturally between positive and negative values for each time step. ``` fig = plt.figure(figsize=(12,6)) plt.subplot(121) for j in range (1,time_steps+1): b[0]=r*w[0,j-1] b[N-2]=r*w[N,j-1] w[1:(N),j]=np.dot(A,w[1:(N),j-1]) plt.plot(x,w[:,j],'o:',label='t[%s]=%s'%(j,np.round(time[j],4))) plt.xlabel('x') plt.ylabel('w') #plt.legend(loc='bottom', bbox_to_anchor=(0.5, -0.1)) plt.legend(bbox_to_anchor=(-.4, 1), loc=2, borderaxespad=0.) plt.subplot(122) plt.imshow(w.transpose()) plt.xticks(np.arange(len(x)), x) plt.yticks(np.arange(len(time)), np.round(time,4)) plt.xlabel('x') plt.ylabel('time') clb=plt.colorbar() clb.set_label('Temperature (w)') plt.suptitle('Numerical Solution of the Heat Equation r=%s'%(np.round(r,3)),fontsize=24,y=1.08) fig.tight_layout() plt.show() ``` ## Local Trunction Error The local truncation error of the classical explicit difference approach to \begin{equation} \frac{\partial U}{\partial t} - \frac{\partial^2 U}{\partial x^2}=0, \end{equation} with \begin{equation} F_{ij}(w)=\frac{w_{ij+1}-w_{ij}}{k}-\frac{w_{i+1j}-2w_{ij}+w_{i-1j}}{h^2}=0, \end{equation} is \begin{equation} T_{ij}=F_{ij}(U)=\frac{U_{ij+1}-U_{ij}}{k}-\frac{U_{i+1j}-2U_{ij}+U_{i-1j}}{h^2}, \end{equation} By Taylors expansions we have \begin{eqnarray*} U_{i+1j}&=&U((i+1)h,jk)=U(x_i+h,t_j)\\ &=&U_{ij}+h\left(\frac{\partial U}{\partial x} \right)_{ij}+\frac{h^2}{2}\left(\frac{\partial^2 U}{\partial x^2} \right)_{ij}+\frac{h^3}{6}\left(\frac{\partial^3 U}{\partial x^3} \right)_{ij} +...\\ U_{i-1j}&=&U((i-1)h,jk)=U(x_i-h,t_j)\\ &=&U_{ij}-h\left(\frac{\partial U}{\partial x} \right)_{ij}+\frac{h^2}{2}\left(\frac{\partial^2 U}{\partial x^2} \right)_{ij}-\frac{h^3}{6}\left(\frac{\partial^3 U}{\partial x^3} \right)_{ij} +...\\ U_{ij+1}&=&U(ih,(j+1)k)=U(x_i,t_j+k)\\ &=&U_{ij}+k\left(\frac{\partial U}{\partial t} \right)_{ij}+\frac{k^2}{2}\left(\frac{\partial^2 U}{\partial t^2} \right)_{ij}+\frac{k^3}{6}\left(\frac{\partial^3 U}{\partial t^3} \right)_{ij} +... \end{eqnarray*} substitution into the expression for $T_{ij}$ then gives \begin{eqnarray*} T_{ij}&=&\left(\frac{\partial U}{\partial t} - \frac{\partial^2 U}{\partial x^2} \right)_{ij}+\frac{k}{2}\left(\frac{\partial^2 U}{\partial t^2} \right)_{ij} -\frac{h^2}{12}\left(\frac{\partial^4 U}{\partial x^4} \right)_{ij}\\ & & +\frac{k^2}{6}\left(\frac{\partial^3 U}{\partial t^3} \right)_{ij} -\frac{h^4}{360}\left(\frac{\partial^6 U}{\partial x^6} \right)_{ij}+ ... \end{eqnarray*} But $U$ is the solution to the differential equation so \begin{equation} \left(\frac{\partial U}{\partial t} - \frac{\partial^2 U}{\partial x^2} \right)_{ij}=0,\end{equation} the principal part of the local truncation error is \begin{equation} \frac{k}{2}\left(\frac{\partial^2 U}{\partial t^2} \right)_{ij}-\frac{h^2}{12}\left(\frac{\partial^4 U}{\partial x^4} \right)_{ij}. 
\end{equation} Hence the truncation error is \begin{equation} T_{ij}=O(k)+O(h^2). \end{equation} ## Stability Analysis To investigating the stability of the fully explicit FTCS difference method of the Heat Equation, we will use the von Neumann method. The FTCS difference equation is: \begin{equation}\frac{1}{k}(w_{pq+1}-w_{pq})=\frac{1}{h_x^2}(w_{p-1q}-2w_{pq}+w_{p+1q}),\end{equation} approximating \begin{equation}\frac{\partial U}{\partial t}=\frac{\partial^2 U}{\partial x^2}\end{equation} at $(ph,qk)$. Substituting $w_{pq}=e^{i\beta x}\xi^{q}$ into the difference equation gives: \begin{equation}e^{i\beta ph}\xi^{q+1}-e^{i\beta ph}\xi^{q}=r\{e^{i\beta (p-1)h}\xi^{q}-2e^{i\beta ph}\xi^{q}+e^{i\beta (p+1)h}\xi^{q} \} \end{equation} where $r=\frac{k}{h_x^2}$. Divide across by $e^{i\beta (p)h}\xi^{q}$ leads to \begin{equation} \xi-1=r(e^{i\beta (-1)h} -2+e^{i\beta h}),\end{equation} \begin{equation}\xi= 1+r (2\cos(\beta h)-2),\end{equation} \begin{equation}\xi=1-4r(\sin^2(\beta\frac{h}{2})).\end{equation} Hence \begin{equation}\left| 1-4r(\sin^2(\beta\frac{h}{2}) )\right|\leq 1\end{equation} for this to hold \begin{equation} 4r(\sin^2(\beta\frac{h}{2}) )\leq 2 \end{equation} which means \begin{equation} r\leq \frac{1}{2}.\end{equation} therefore the equation is conditionally stable as $0 < \xi \leq 1$ for $r<\frac{1}{2}$ and all $\beta$ . ## References [1] G D Smith Numerical Solution of Partial Differential Equations: Finite Difference Method Oxford 1992 [2] Butler, J. (2019). John S Butler Numerical Methods for Differential Equations. [online] Maths.dit.ie. Available at: http://www.maths.dit.ie/~johnbutler/Teaching_NumericalMethods.html [Accessed 14 Mar. 2019]. [3] Wikipedia contributors. (2019, February 22). Heat equation. In Wikipedia, The Free Encyclopedia. Available at: https://en.wikipedia.org/w/index.php?title=Heat_equation&oldid=884580138 [Accessed 14 Mar. 2019]. ``` ```
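As a quick numerical check of the bound $r\leq \frac{1}{2}$ derived above, the sketch below reruns the FTCS update for two mesh choices, one inside and one outside the stable range, and reports the largest value after 50 steps; the particular values of $N$ and $N_t$ are chosen only for illustration.
```
# Quick check of the stability bound r <= 1/2 (the N and Nt values are illustrative only).
import numpy as np

def ftcs_max(N, Nt, steps=50):
    h, k = 1/N, 1/Nt
    r = k/(h*h)
    x = np.linspace(0, 1, N+1)
    w = np.where(x <= 0.5, 2*x, 2*(1-x))    # initial condition from above
    for _ in range(steps):
        w[1:-1] = r*w[:-2] + (1-2*r)*w[1:-1] + r*w[2:]
        w[0] = w[-1] = 0.0                  # boundary conditions
    return r, np.abs(w).max()

for N, Nt in [(10, 250), (10, 150)]:        # r = 0.4 (stable), r = 0.667 (unstable)
    r, wmax = ftcs_max(N, Nt)
    print(f'r = {r:.3f}, max |w| after 50 steps = {wmax:.3e}')
```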
## DS/CMPSC 410 MiniProject #3 ### Spring 2021 ### Instructor: John Yen ### TA: Rupesh Prajapati and Dongkuan Xu ### Learning Objectives - Be able to apply thermometer encoding to encode numerical variables into binary variable format. - Be able to apply k-means clustering to the Darknet dataset based on both thermometer encoding and one-hot encoding. - Be able to use external labels (e.g., mirai, zmap, and masscan) to evaluate the result of k-means clustering. - Be able to investigate characteristics of a cluster using one-hot encoded feature. ### Total points: 100 - Exercise 1: 5 points - Exercise 2: 5 points - Exercise 3: 5 points - Exercise 4: 15 points - Exercise 5: 5 points - Exercise 6: 10 points - Exercise 7: 5 points - Exercise 8: 5 points - Exercise 9: 10 points - Exercise 10: 5 points - Exercise 11: 10 points - Exercise 12: 20 points ### Due: 5 pm, April 23, 2021 ``` import pyspark import csv from pyspark import SparkContext from pyspark.sql import SparkSession from pyspark.sql.types import StructField, StructType, StringType, LongType from pyspark.sql.functions import col, column from pyspark.sql.functions import expr from pyspark.sql.functions import split from pyspark.sql.functions import array_contains from pyspark.sql import Row from pyspark.ml import Pipeline from pyspark.ml.feature import OneHotEncoder, StringIndexer, VectorAssembler, IndexToString, PCA from pyspark.ml.clustering import KMeans from pyspark.ml.evaluation import ClusteringEvaluator import pandas as pd import numpy as np import math ss = SparkSession.builder.master("local").appName("ClusteringTE").getOrCreate() ``` ## Exercise 1 (5 points) Complete the path for input file in the code below and enter your name in this Markdown cell: - Name: Kangdong Yuan ``` Scanners_df = ss.read.csv("/storage/home/kky5082/ds410/Lab10/sampled_profile.csv", header= True, inferSchema=True ) ``` ## We can use printSchema() to display the schema of the DataFrame Scanners_df to see whether it was inferred correctly. ``` Scanners_df.printSchema() Scanners_df.where(col('mirai')).count() ``` # Part A: One Hot Encoding ## This part is identical to that of Miniproject Deliverable #2 We want to apply one hot encoding to the set of ports scanned by scanners. - A.1 Like Mini Project deliverable 1 and 2, we first convert the feature "ports_scanned_str" to a feature that is an Array of ports - A.2 We then calculate the total number of scanners for each port - A.3 We identify the top n port to use for one-hot encoding (You choose the number n). - A.4 Generate one-hot encoded feature for these top n ports. ``` # Scanners_df.select("ports_scanned_str").show(30) Scanners_df2=Scanners_df.withColumn("Ports_Array", split(col("ports_scanned_str"), "-") ) # Scanners_df2.persist().show(10) ``` ## A.1 We only need the column ```Ports_Array``` to calculate the top ports being scanned ``` Ports_Scanned_RDD = Scanners_df2.select("Ports_Array").rdd # Ports_Scanned_RDD.persist().take(5) ``` ## Because each port number in the Ports_Array column for each row occurs only once, we can count the total occurance of each port number through flatMap. 
``` Ports_list_RDD = Ports_Scanned_RDD.map(lambda row: row[0] ) # Ports_list_RDD.persist() Ports_list2_RDD = Ports_Scanned_RDD.flatMap(lambda row: row[0] ) Port_count_RDD = Ports_list2_RDD.map(lambda x: (x, 1)) # Port_count_RDD.take(2) Port_count_total_RDD = Port_count_RDD.reduceByKey(lambda x,y: x+y, 1) # Port_count_total_RDD.persist().take(5) Sorted_Count_Port_RDD = Port_count_total_RDD.map(lambda x: (x[1], x[0])).sortByKey( ascending = False) # Sorted_Count_Port_RDD.persist().take(50) ``` ## Exercise 2 (5%) Select top_ports to be the number of top ports you want to use for one-hot encoding. I recommend a number between 20 and 40. ``` top_ports=30 Sorted_Ports_RDD= Sorted_Count_Port_RDD.map(lambda x: x[1]) Top_Ports_list = Sorted_Ports_RDD.take(top_ports) # Top_Ports_list # Scanners_df3=Scanners_df2.withColumn(FeatureName, array_contains("Ports_Array", Top_Ports_list[0])) # Scanners_df3.show(10) ``` ## A.4 Generate Hot-One Encoded Feature for each of the top ports in the Top_Ports_list - Iterate through the Top_Ports_list so that each top port is one-hot encoded. ## Exercise 3 (5 %) Complete the following PySpark code for encoding the n ports using One Hot Encoding, where n is specified by the variable ```top_ports``` ``` for i in range(0, top_ports - 1): # "Port" + Top_Ports_list[i] is the name of each new feature created through One Hot Encoding Scanners_df3 = Scanners_df2.withColumn("Port" + Top_Ports_list[i], array_contains("Ports_Array", Top_Ports_list[i])) Scanners_df2 = Scanners_df3 Scanners_df2.printSchema() ``` # Part B Thermometer Encoding of Numerical Variables ## We encode the numerical variable numports (number of ports being scanned) using thermometer encoding ``` pow(2,15) Scanners_df3=Scanners_df2.withColumn("TE_numports_0", col("numports") > 0) Scanners_df2 = Scanners_df3 Scanners_df3.count() Scanners_df3.where(col('TE_numports_0')).count() ``` # Exercise 4 (15%) Complete the following pyspark code to use the column "numports" to create 16 additional columns as follows: - TE_numports_0 : True, if the scanner scans more than 0 ports, otherwise False. - TE_numports_1 : True, if the scanner scans more than 2**0 (1) port, otherwise False. - TE_numports_2 : True, if the scanner scans more than 2**1 (2) ports, otherwise False. - TE_numports_3 : True, if the scanner scans more than 2**2 (4) ports, otherwise False ... - TE_numports_15 : True, if the scanner scans more than 2**14 ports, otherwise False - TE_numports_16 : True, if the scanner scans more than 2**15 (32768) ports, otherwise False ``` for i in range(0, 16): # "TE_numports_" + str(i+1) is the name of each new feature created for each Bin in Thermometer Encoding Scanners_df3 = Scanners_df2.withColumn("TE_numports_" + str(i+1), col("numports") > pow(2,i)) Scanners_df2 = Scanners_df3 Scanners_df2.printSchema() ``` # Exercise 5 (5 points) What is the total number of scanners that scan more than 2^15 (i.e., 32768) ports? Complete the code below using Scanners_df2 to find out the answer. ``` HFScanners_df2 = Scanners_df2.where(col('TE_numports_15')) HFScanners_df2.count() ``` # Exercise 6 (10 points) Complete the following code to use k-means to cluster the scanners using the following - thermometer encoding of 'numports' numerical feature - one-hot encoding of top k ports (k chosen by you in Exercise 2). 
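Before assembling these features in the cells below, here is a small, self-contained illustration of what thermometer encoding produces for a single scanner; the ```numports``` values used here are made up for the example, not taken from the dataset.
```
# Toy illustration of thermometer encoding (values below are made up, not from the dataset).
example_numports = [1, 3, 20, 40000]
thresholds = [pow(2, i) for i in range(0, 16)]   # 1, 2, 4, ..., 32768: same bins as TE_numports_1..16

for n in example_numports:
    te = [n > 0] + [n > t for t in thresholds]   # TE_numports_0 plus the 16 power-of-two bins
    print(n, ''.join('1' if b else '0' for b in te))
# A scanner that probes more ports turns on a longer prefix of 1s, so Euclidean
# distance in k-means reflects how different the port counts are.
```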
## Specify Parameters for k Means Clustering ``` km = KMeans(featuresCol="features", predictionCol="prediction").setK(50).setSeed(123) km.explainParams() input_features = [] for i in range(0, top_ports - 1): input_features.append( "Port"+Top_Ports_list[i] ) for i in range(0, 15): input_features.append( "TE_numports_" + str(i)) print(input_features) va = VectorAssembler().setInputCols(input_features).setOutputCol("features") data= va.transform(Scanners_df2) data.persist() kmModel=km.fit(data) kmModel predictions = kmModel.transform(data) predictions.persist() Cluster1_df=predictions.where(col("prediction")==0) Cluster1_df.persist().count() ``` ## Exercise 7 (5 points) Complete the following code to find the size of all of the clusters generated. ``` summary = kmModel.summary summary.clusterSizes ``` # Exercise 8 (5 points) Complete the following code to find the Silhouette Score of the clustering result. ``` evaluator = ClusteringEvaluator() silhouette = evaluator.evaluate(predictions) print('Silhouette Score of the Clustering Result is ', silhouette) centers = kmModel.clusterCenters() centers[0] print("Cluster Centers:") i=0 for center in centers: print("Cluster ", str(i+1), center) i = i+1 ``` # Part C Percentage of Mirai Malwares in Each Cluster # Exercise 9 (10 points) Complete the following code to compute the percentage of Mirai Malwares, Zmap, and Masscan in each cluster. ``` cluster_eval_df = pd.DataFrame( columns = ['cluster ID', 'size', 'cluster center', 'mirai_ratio', 'zmap_ratio', 'masscan_ratio'] ) for i in range(0, 50): cluster_i = predictions.where(col('prediction')==i) cluster_i_size = cluster_i.count() cluster_i_mirai_count = cluster_i.where(col('mirai')).count() cluster_i_mirai_ratio = cluster_i_mirai_count/cluster_i_size if cluster_i_mirai_count > 0: print("Cluster ", i, "; Mirai Ratio:", cluster_i_mirai_ratio, "; Cluster Size: ", cluster_i_size) cluster_i_zmap_ratio = (cluster_i.where(col('zmap')).count())/cluster_i_size cluster_i_masscan_ratio = (cluster_i.where(col('masscan')).count())/cluster_i_size cluster_eval_df.loc[i]=[i, cluster_i_size, centers[i], cluster_i_mirai_ratio, cluster_i_zmap_ratio, cluster_i_masscan_ratio ] ``` # Exercise 10 (5 points) Identify all of the clusters that have a large percentage of Mirai malware. For example, you can choose clusters with at least 80% of Mirai ratio. If you use a different threshold (other than 80%), describe the threshold you used and the rational of your choice. ## Answer to Exercise 10: ## if I choose 80% as threshold - Cluster 5 ; Mirai Ratio: 0.8424333084018948 ; Cluster Size: 16044 - Cluster 37 ; Mirai Ratio: 0.8878737541528239 ; Cluster Size: 1204 ... ``` # You can filter predictions DataFrame (Spark) to get all scanners in a cluster. # For example, the code below selects scanners in cluster 5. However, you should # replace 5 with the ID of the cluster you want to investigate. cluster_selected = predictions.where((col('prediction')==5) | (col('prediction')==37)) # If you prefer to use Pandas dataframe, you can use the following to convert a cluster to a Pandas dataframe cluster_selected_df = cluster_selected.select("*").toPandas() cluster_selected.printSchema() ``` # Exercise 11 (10 points) Complete the following code to find out, for each of the clusters you identified in Exercise 10, - (1) (5 points) determine whether they scan a common port, and - (2) (5 points) what is the port number if most of them in a cluster scan a common port. 
You can use the code below to find out which top ports are scanned by the scanners in a cluster. ``` # You fill in the ??? based on the cluster you want to investigate. cluster_5= predictions.where(col('prediction')==5) cluster_37= predictions.where(col('prediction')==37) for i in range(0, top_ports -1): port_num = "Port" + Top_Ports_list[i] port_i_count = cluster_5.where(col(port_num)).count() if port_i_count > 0: print("Scanners of Port ", Top_Ports_list[i], " = ", port_i_count) for i in range(0, top_ports -1): port_num = "Port" + Top_Ports_list[i] port_i_count = cluster_37.where(col(port_num)).count() if port_i_count > 0: print("Scanners of Port ", Top_Ports_list[i], " = ", port_i_count) ``` # Answer to Exercise 11 - (1) (5 points) Yes, they scan a common port: the scanners in cluster 5 scan port 23, and the scanners in cluster 37 also scan port 23, so port 23 is the common port. - (2) (5 points) The top port in cluster 5 is port 23, scanned by 16044 scanners. The top port in cluster 37 is port 2323, scanned by 1204 scanners; port 23 is also a common port in that cluster. # Exercise 12 (20 points) Based on the results above and those of mini project deliverable #2, answer the following questions: - (a) Why is the clustering result of mini project #3 better than that of #2? (5 points) - (b) Based on your answer of (a), what is the general lesson you learned for solving clustering problems? (5 points) - (c) Did you find anything interesting and/or surprising using Mirai labels to evaluate the clustering result? (5 points) - (d) Based on your answer of (c), what is the general lesson you learned regarding evaluating clustering? (5 points) # Answer to Exercise 12: - (a) Because mini project #3 uses thermometer encoding, whereas mini project #2 mixes numerical variables with (one-hot encoded) categorical variables, and thermometer encoding is meant to improve on that. However, in the actual run, project #3 got a worse score than project #2. I think we could use hyperparameter tuning to improve the performance of k-means. - (b) To improve clustering performance, I need to avoid mixing numerical variables with (one-hot encoded) categorical variables, avoid unnormalized numerical variables, and avoid a high-dimensional feature space. Moreover, I can use thermometer encoding to improve my k-means clustering. - (c) I found that the Mirai labels give me a better way to do clustering validation: they provide an accurate ratio that tells me the validation score. - (d) External labels are used when we propose a new clustering technique and want to validate it, or when we want to compare it to existing techniques. In these cases, we take datasets for which we know the ground truth and check whether our clustering technique can produce clustering solutions similar to it. So we can use external validity to improve our clustering validation.
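Following up on the hyperparameter-tuning point in the answer to (a), one simple next step is to sweep the number of clusters and compare silhouette scores. This is only a sketch that reuses the assembled ```data``` DataFrame and the imports from the cells above; the candidate values of k are arbitrary, and re-fitting k-means several times can take a while.
```
# Sketch: sweep the number of clusters and compare silhouette scores.
# Reuses `data` (assembled features), KMeans and ClusteringEvaluator from above.
for k in [10, 25, 50, 75, 100]:
    km_k = KMeans(featuresCol="features", predictionCol="prediction").setK(k).setSeed(123)
    preds_k = km_k.fit(data).transform(data)
    score = ClusteringEvaluator().evaluate(preds_k)
    print(f"k = {k}: silhouette = {score:.4f}")
```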
<a href="https://cognitiveclass.ai/"> <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Ad/CCLog.png" width="200" align="center"> </a> <h1>Dictionaries in Python</h1> <p><strong>Welcome!</strong> This notebook will teach you about the dictionaries in the Python Programming Language. By the end of this lab, you'll know the basics dictionary operations in Python, including what it is, and the operations on it.</p> <div class="alert alert-block alert-info" style="margin-top: 20px"> <a href="http://cocl.us/topNotebooksPython101Coursera"> <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Ad/TopAd.png" width="750" align="center"> </a> </div> <h2>Table of Contents</h2> <div class="alert alert-block alert-info" style="margin-top: 20px"> <ul> <li> <a href="#dic">Dictionaries</a> <ul> <li><a href="content">What are Dictionaries?</a></li> <li><a href="key">Keys</a></li> </ul> </li> <li> <a href="#quiz">Quiz on Dictionaries</a> </li> </ul> <p> Estimated time needed: <strong>20 min</strong> </p> </div> <hr> <h2 id="Dic">Dictionaries</h2> <h3 id="content">What are Dictionaries?</h3> A dictionary consists of keys and values. It is helpful to compare a dictionary to a list. Instead of the numerical indexes such as a list, dictionaries have keys. These keys are the keys that are used to access values within a dictionary. <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/DictsList.png" width="650" /> An example of a Dictionary <code>Dict</code>: ``` # Create the dictionary Dict = {"key1": 1, "key2": "2", "key3": [3, 3, 3], "key4": (4, 4, 4), ('key5'): 5, (0, 1): 6} Dict ``` The keys can be strings: ``` # Access to the value by the key Dict["key1"] ``` Keys can also be any immutable object such as a tuple: ``` # Access to the value by the key Dict[(0, 1)] ``` Each key is separated from its value by a colon "<code>:</code>". Commas separate the items, and the whole dictionary is enclosed in curly braces. An empty dictionary without any items is written with just two curly braces, like this "<code>{}</code>". ``` # Create a sample dictionary release_year_dict = {"Thriller": "1982", "Back in Black": "1980", \ "The Dark Side of the Moon": "1973", "The Bodyguard": "1992", \ "Bat Out of Hell": "1977", "Their Greatest Hits (1971-1975)": "1976", \ "Saturday Night Fever": "1977", "Rumours": "1977"} release_year_dict ``` In summary, like a list, a dictionary holds a sequence of elements. Each element is represented by a key and its corresponding value. Dictionaries are created with two curly braces containing keys and values separated by a colon. For every key, there can only be one single value, however, multiple keys can hold the same value. Keys can only be strings, numbers, or tuples, but values can be any data type. It is helpful to visualize the dictionary as a table, as in the following image. The first column represents the keys, the second column represents the values. 
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/DictsStructure.png" width="650" /> <h3 id="key">Keys</h3> You can retrieve the values based on the names: ``` # Get value by keys release_year_dict['Thriller'] ``` This corresponds to: <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/DictsKeyOne.png" width="500" /> Similarly for <b>The Bodyguard</b> ``` # Get value by key release_year_dict['The Bodyguard'] ``` <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/DictsKeyTwo.png" width="500" /> Now let you retrieve the keys of the dictionary using the method <code>release_year_dict()</code>: ``` # Get all the keys in dictionary release_year_dict.keys() ``` You can retrieve the values using the method <code>values()</code>: ``` # Get all the values in dictionary release_year_dict.values() ``` We can add an entry: ``` # Append value with key into dictionary release_year_dict['Graduation'] = '2007' release_year_dict ``` We can delete an entry: ``` # Delete entries by key del(release_year_dict['Thriller']) del(release_year_dict['Graduation']) release_year_dict ``` We can verify if an element is in the dictionary: ``` # Verify the key is in the dictionary 'The Bodyguard' in release_year_dict ``` <hr> <h2 id="quiz">Quiz on Dictionaries</h2> <b>You will need this dictionary for the next two questions:</b> ``` # Question sample dictionary soundtrack_dic = {"The Bodyguard":"1992", "Saturday Night Fever":"1977"} soundtrack_dic ``` a) In the dictionary <code>soundtrack_dict</code> what are the keys ? ``` # Write your code below and press Shift+Enter to execute soundtrack_dic.keys() ``` Double-click __here__ for the solution. <!-- Your answer is below: soundtrack_dic.keys() # The Keys "The Bodyguard" and "Saturday Night Fever" --> b) In the dictionary <code>soundtrack_dict</code> what are the values ? ``` # Write your code below and press Shift+Enter to execute soundtrack_dic.values() ``` Double-click __here__ for the solution. <!-- Your answer is below: soundtrack_dic.values() # The values are "1992" and "1977" --> <hr> <b>You will need this dictionary for the following questions:</b> The Albums <b>Back in Black</b>, <b>The Bodyguard</b> and <b>Thriller</b> have the following music recording sales in millions 50, 50 and 65 respectively: a) Create a dictionary <code>album_sales_dict</code> where the keys are the album name and the sales in millions are the values. ``` # Write your code below and press Shift+Enter to execute album_sales_dict = {"Back in Black":50, "The Bodyguard":50, "Thriller":65} album_sales_dict ``` Double-click __here__ for the solution. <!-- Your answer is below: album_sales_dict = {"The Bodyguard":50, "Back in Black":50, "Thriller":65} --> b) Use the dictionary to find the total sales of <b>Thriller</b>: ``` # Write your code below and press Shift+Enter to execute album_sales_dict["Thriller"] ``` Double-click __here__ for the solution. <!-- Your answer is below: album_sales_dict["Thriller"] --> c) Find the names of the albums from the dictionary using the method <code>keys</code>: ``` # Write your code below and press Shift+Enter to execute album_sales_dict.keys() ``` Double-click __here__ for the solution. 
<!-- Your answer is below: album_sales_dict.keys() --> d) Find the names of the recording sales from the dictionary using the method <code>values</code>: ``` # Write your code below and press Shift+Enter to execute album_sales_dict.values() ``` Double-click __here__ for the solution. <!-- Your answer is below: album_sales_dict.values() --> <hr> <h2>The last exercise!</h2> <p>Congratulations, you have completed your first lesson and hands-on lab in Python. However, there is one more thing you need to do. The Data Science community encourages sharing work. The best way to share and showcase your work is to share it on GitHub. By sharing your notebook on GitHub you are not only building your reputation with fellow data scientists, but you can also show it off when applying for a job. Even though this was your first piece of work, it is never too early to start building good habits. So, please read and follow <a href="https://cognitiveclass.ai/blog/data-scientists-stand-out-by-sharing-your-notebooks/" target="_blank">this article</a> to learn how to share your work. <hr> <div class="alert alert-block alert-info" style="margin-top: 20px"> <h2>Get IBM Watson Studio free of charge!</h2> <p><a href="https://cocl.us/bottemNotebooksPython101Coursera"><img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Ad/BottomAd.png" width="750" align="center"></a></p> </div> <h3>About the Authors:</h3> <p><a href="https://www.linkedin.com/in/joseph-s-50398b136/" target="_blank">Joseph Santarcangelo</a> is a Data Scientist at IBM, and holds a PhD in Electrical Engineering. His research focused on using Machine Learning, Signal Processing, and Computer Vision to determine how videos impact human cognition. Joseph has been working for IBM since he completed his PhD.</p> Other contributors: <a href="www.linkedin.com/in/jiahui-mavis-zhou-a4537814a">Mavis Zhou</a> <hr> <p>Copyright &copy; 2018 IBM Developer Skills Network. This notebook and its source code are released under the terms of the <a href="https://cognitiveclass.ai/mit-license/">MIT License</a>.</p>
In this assignment, you'll continue working with the U.S. Education Dataset from Kaggle. The data gives detailed state level information on several facets of education on an annual basis. To learn more about the data and the column descriptions, you can view the Kaggle link above. Access this data using the Thinkful database using these credentials: * postgres_user = 'dsbc_student' * postgres_pw = '7*.8G9QH21' * postgres_host = '142.93.121.174' * postgres_port = '5432' * postgres_db = 'useducation' Don't forget to apply the most suitable missing value filling techniques from the previous checkpoint to the data. Provide the answers to the following only after you've addressed missing values! To complete this assignment, submit a link to a Jupyter notebook containing your solutions to the following tasks: 1. Consider the two variables: TOTAL_REVENUE and TOTAL_EXPENDITURE. Do these variables have outlier values? 2. If you detect outliers in the TOTAL_REVENUE and TOTAL_EXPENDITURE variables, apply the techniques you learned in this checkpoint to eliminate them and validate that there's no outlier values after you handled them. 3. Create another variable by subtracting the original TOTAL_EXPENDITURE from TOTAL_REVENUE (before you eliminated the outliers). You can think of it as a kind of budget deficit in education. Do you find any outlier values in this new variable? 4. If so, eliminate them using the technique you think most suitable. 5. Now create another variable by subtracting the TOTAL_EXPENDITURE from TOTAL_REVENUE. This time, use the outlier eliminated versions of TOTAL_EXPENDITURE from TOTAL_REVENUE. In this newly created variable, can you find any outliers? If so, eliminate them. 6. Compare some basic descriptive statistics of the budget variables you end up with in the 3rd and the 4th questions. Do you see any differences? 7. If our variable of interest is the budget deficit variable, which method do you think is the appropriate in dealing with the outliers in this variable: the method in the 3rd question or the one in the 4th question? ``` import matplotlib.pyplot as plt import numpy as np import pandas as pd from sqlalchemy import create_engine import warnings warnings.filterwarnings('ignore') postgres_user = 'dsbc_student' postgres_pw = '7*.8G9QH21' postgres_host = '142.93.121.174' postgres_port = '5432' postgres_db = 'useducation' engine = create_engine('postgresql://{}:{}@{}:{}/{}'.format( postgres_user, postgres_pw, postgres_host, postgres_port, postgres_db)) education_df = pd.read_sql_query('select * from useducation',con=engine) # no need for an open connection, # as we're only doing a single query engine.dispose() fill_list = ["STATE_REVENUE", "LOCAL_REVENUE", "TOTAL_EXPENDITURE", "INSTRUCTION_EXPENDITURE", "SUPPORT_SERVICES_EXPENDITURE", "OTHER_EXPENDITURE", "CAPITAL_OUTLAY_EXPENDITURE", "GRADES_PK_G", "GRADES_KG_G", "GRADES_4_G", "GRADES_8_G", "GRADES_12_G", "GRADES_1_8_G", "GRADES_9_12_G", "GRADES_ALL_G"] states = education_df["STATE"].unique() for state in states: education_df.loc[education_df["STATE"] == state, fill_list] = education_df.loc[education_df["STATE"] == state, fill_list].interpolate() # we drop the null values after interpolation education_df.dropna(inplace=True) ``` ## 1. Consider the two variables: TOTAL_REVENUE and TOTAL_EXPENDITURE. Do these variables have outlier values? 
``` education_df.info() education_df.head() ``` __Time series data, I can interpolate the missing values__ Z-Score Test ``` from scipy.stats import zscore z_scores = zscore(education_df['TOTAL_REVENUE']) for threshold in range(1,10): print("The score threshold is: {}".format(threshold)) print("The indices of the outliers:") print(np.where(z_scores > threshold)) print("Number of outliers is: {}".format(len((np.where(z_scores > threshold)[0])))) z_scores = zscore(education_df['TOTAL_EXPENDITURE']) for threshold in range(1,10): print("The score threshold is: {}".format(threshold)) print("The indices of the outliers:") print(np.where(z_scores > threshold)) print("Number of outliers is: {}".format(len((np.where(z_scores > threshold)[0])))) ``` According to Zscores both have outliers ## 2. If you detect outliers in the TOTAL_REVENUE and TOTAL_EXPENDITURE variables, apply the techniques you learned in this checkpoint to eliminate them and validate that there's no outlier values after you handled them. ``` from scipy.stats.mstats import winsorize winsorized_revenue = winsorize(education_df["TOTAL_REVENUE"], (0, 0.05)) winsorized_expenditure = winsorize(education_df["TOTAL_EXPENDITURE"], (0, 0.05)) z_scores = zscore(winsorized_revenue) for threshold in range(1,10): print("The score threshold is: {}".format(threshold)) print("The indices of the outliers:") print(np.where(z_scores > threshold)) print("Number of outliers is: {}".format(len((np.where(z_scores > threshold)[0])))) z_scores = zscore(winsorized_expenditure) for threshold in range(1,10): print("The score threshold is: {}".format(threshold)) print("The indices of the outliers:") print(np.where(z_scores > threshold)) print("Number of outliers is: {}".format(len((np.where(z_scores > threshold)[0])))) ``` After the outlier threshold of 3 (75%) we lose our outliers, Winsorization worked. ## 3. Create another variable by subtracting the original TOTAL_EXPENDITURE from TOTAL_REVENUE (before you eliminated the outliers). You can think of it as a kind of budget deficit in education. Do you find any outlier values in this new variable? ``` education_df['Deficit'] = education_df['TOTAL_REVENUE'] - education_df['TOTAL_EXPENDITURE'] plt.boxplot(education_df['Deficit'], whis = 5) ``` appears so! ## 4. If so, eliminate them using the technique you think most suitable. ``` winsorized_budget = winsorize(education_df['Deficit'], (0.05, 0.05)) plt.boxplot(winsorized_budget, whis = 5) ``` Looks like outliers were taken care of ## 5. Now create another variable by subtracting the TOTAL_EXPENDITURE from TOTAL_REVENUE. This time, use the outlier eliminated versions of TOTAL_EXPENDITURE from TOTAL_REVENUE. In this newly created variable, can you find any outliers? If so, eliminate them. ``` education_df['winsordeficit'] = winsorized_revenue - winsorized_expenditure plt.boxplot(education_df['winsordeficit'], whis=5) winsorizedbudget2 = winsorize(education_df['winsordeficit'], (0.05, 0.05)) plt.boxplot(winsorizedbudget2, whis=5) ``` ## 6. Compare some basic descriptive statistics of the budget variables you end up with in the 3rd and the 4th questions. Do you see any differences? ``` education_df.describe() ``` There are some pretty substantial differences between the two ## 7. If our variable of interest is the budget deficit variable, which method do you think is the appropriate in dealing with the outliers in this variable: the method in the 3rd question or the one in the 4th question? 
For a more accurate representation of the budget deficit, it is best to compute the variable from the original data and then deal with its outliers directly, i.e., the method in the 3rd question, because subtracting the already winsorized variables distorts the deficit values before the outliers are even assessed.
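To make this comparison concrete, a small follow-up cell (a sketch that assumes the cells above have been run, so `winsorized_budget` and `winsorizedbudget2` already exist) places the descriptive statistics of the two deficit variables side by side:

```
import pandas as pd

# "deficit_then_winsorize": deficit built from the original variables, winsorized afterwards.
# "winsorize_then_deficit": deficit built from the already winsorized revenue and expenditure.
comparison = pd.DataFrame({
    "deficit_then_winsorize": pd.Series(winsorized_budget),
    "winsorize_then_deficit": pd.Series(winsorizedbudget2),
})
print(comparison.describe())
```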
github_jupyter
<a href="https://colab.research.google.com/github/Ayanlola2002/deep-learning-flower-identifier/blob/master/Image_Classifier_Project.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # Developing an AI application Going forward, AI algorithms will be incorporated into more and more everyday applications. For example, you might want to include an image classifier in a smart phone app. To do this, you'd use a deep learning model trained on hundreds of thousands of images as part of the overall application architecture. A large part of software development in the future will be using these types of models as common parts of applications. In this project, you'll train an image classifier to recognize different species of flowers. You can imagine using something like this in a phone app that tells you the name of the flower your camera is looking at. In practice you'd train this classifier, then export it for use in your application. We'll be using [this dataset](http://www.robots.ox.ac.uk/~vgg/data/flowers/102/index.html) of 102 flower categories, you can see a few examples below. <img src='https://i.imgur.com/6n64KAw.png' width=500px> The project is broken down into multiple steps: * Load and preprocess the image dataset * Train the image classifier on your dataset * Use the trained classifier to predict image content We'll lead you through each part which you'll implement in Python. When you've completed this project, you'll have an application that can be trained on any set of labeled images. Here your network will be learning about flowers and end up as a command line application. But, what you do with your new skills depends on your imagination and effort in building a dataset. For example, imagine an app where you take a picture of a car, it tells you what the make and model is, then looks up information about it. Go build your own dataset and make something new. First up is importing the packages you'll need. It's good practice to keep all the imports at the beginning of your code. As you work through this notebook and find you need to import a package, make sure to add the import up here. 
## Install PyTorch ``` # http://pytorch.org/ from os.path import exists from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag()) cuda_output = !ldconfig -p|grep cudart.so|sed -e 's/.*\.\([0-9]*\)\.\([0-9]*\)$/cu\1\2/' accelerator = cuda_output[0] if exists('/dev/nvidia0') else 'cpu' !pip install -q http://download.pytorch.org/whl/{accelerator}/torch-0.4.1-{platform}-linux_x86_64.whl torchvision import torch ``` ## Download the dataset ``` !wget -O cat_to_name.json "https://raw.githubusercontent.com/GabrielePicco/deep-learning-flower-identifier/master/cat_to_name.json" !wget "https://s3.amazonaws.com/content.udacity-data.com/courses/nd188/flower_data.zip" !unzip flower_data.zip ``` ## Import the test environment ``` !git clone https://github.com/GabrielePicco/deep-learning-flower-identifier !pip install requests !pip install airtable import sys sys.path.insert(0, 'deep-learning-flower-identifier') from test_model_pytorch_facebook_challenge import publish_evaluated_model, calc_accuracy ``` ## Import ``` !pip install --no-cache-dir -I pillow # Imports here %matplotlib inline import time import os import json import copy import matplotlib.pyplot as plt import seaborn as sns import numpy as np from PIL import Image from collections import OrderedDict import torch from torch import nn, optim from torch.optim import lr_scheduler from torch.autograd import Variable from torchvision import datasets, models, transforms from google.colab import files ``` ## Load the data Here you'll use `torchvision` to load the data ([documentation](http://pytorch.org/docs/0.3.0/torchvision/index.html)). You can [download the data here](https://s3.amazonaws.com/content.udacity-data.com/courses/nd188/flower_data.zip). The dataset is split into two parts, training and validation. For the training, you'll want to apply transformations such as random scaling, cropping, and flipping. This will help the network generalize leading to better performance. If you use a pre-trained network, you'll also need to make sure the input data is resized to 224x224 pixels as required by the networks. The validation set is used to measure the model's performance on data it hasn't seen yet. For this you don't want any scaling or rotation transformations, but you'll need to resize then crop the images to the appropriate size. The pre-trained networks available from `torchvision` were trained on the ImageNet dataset where each color channel was normalized separately. For both sets you'll need to normalize the means and standard deviations of the images to what the network expects. For the means, it's `[0.485, 0.456, 0.406]` and for the standard deviations `[0.229, 0.224, 0.225]`, calculated from the ImageNet images. These values will shift each color channel to be centered at 0 and range from -1 to 1. 
``` data_dir = './flower_data' train_dir = os.path.join(data_dir, 'train') valid_dir = os.path.join(data_dir, 'valid') dirs = {'train': train_dir, 'valid': valid_dir} size = 224 data_transforms = data_transforms = { 'train': transforms.Compose([ transforms.RandomRotation(45), transforms.RandomResizedCrop(size), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]), 'valid': transforms.Compose([ transforms.Resize(size + 32), transforms.CenterCrop(size), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]), } image_datasets = {x: datasets.ImageFolder(dirs[x], transform=data_transforms[x]) for x in ['train', 'valid']} dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=32, shuffle=True) for x in ['train', 'valid']} dataset_sizes = {x: len(image_datasets[x]) for x in ['train', 'valid']} class_names = image_datasets['train'].classes ``` ### Label mapping You'll also need to load in a mapping from category label to category name. You can find this in the file `cat_to_name.json`. It's a JSON object which you can read in with the [`json` module](https://docs.python.org/2/library/json.html). This will give you a dictionary mapping the integer encoded categories to the actual names of the flowers. ``` with open('cat_to_name.json', 'r') as f: cat_to_name = json.load(f) ``` # Building and training the classifier Now that the data is ready, it's time to build and train the classifier. As usual, you should use one of the pretrained models from `torchvision.models` to get the image features. Build and train a new feed-forward classifier using those features. We're going to leave this part up to you. If you want to talk through it with someone, chat with your fellow students! You can also ask questions on the forums or join the instructors in office hours. Refer to [the rubric](https://review.udacity.com/#!/rubrics/1663/view) for guidance on successfully completing this section. Things you'll need to do: * Load a [pre-trained network](http://pytorch.org/docs/master/torchvision/models.html) (If you need a starting point, the VGG networks work great and are straightforward to use) * Define a new, untrained feed-forward network as a classifier, using ReLU activations and dropout * Train the classifier layers using backpropagation using the pre-trained network to get the features * Track the loss and accuracy on the validation set to determine the best hyperparameters We've left a cell open for you below, but use as many as you need. Our advice is to break the problem up into smaller parts you can run separately. Check that each part is doing what you expect, then move on to the next. You'll likely find that as you work through each part, you'll need to go back and modify your previous code. This is totally normal! When training make sure you're updating only the weights of the feed-forward network. You should be able to get the validation accuracy above 70% if you build everything right. Make sure to try different hyperparameters (learning rate, units in the classifier, epochs, etc) to find the best model. Save those hyperparameters to use as default values in the next part of the project. 
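As a reference point before the full training cell below, here is a minimal sketch of a classifier head that also includes dropout, as suggested in the checklist above. The input size 25088 and output size 102 match the cell below; the hidden size 4096 is also taken from it, while the dropout probability 0.5 is an assumption, not a value from this notebook.

```
from collections import OrderedDict
from torch import nn

# A VGG-style feed-forward head with ReLU and dropout (sketch only).
classifier_with_dropout = nn.Sequential(OrderedDict([
    ('fc1', nn.Linear(25088, 4096)),
    ('relu', nn.ReLU()),
    ('drop', nn.Dropout(p=0.5)),   # assumed dropout probability
    ('fc2', nn.Linear(4096, 102)),
    ('output', nn.LogSoftmax(dim=1)),
]))
# model.classifier = classifier_with_dropout  # would replace the head defined in the next cell
```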
``` model = models.vgg19(pretrained=True) # freeze all pretrained model parameters for param in model.parameters(): param.requires_grad_(False) print(model) classifier = nn.Sequential(OrderedDict([ ('fc1', nn.Linear(25088, 4096)), ('relu', nn.ReLU()), ('fc2', nn.Linear(4096, 102)), ('output', nn.LogSoftmax(dim=1)) ])) model.classifier = classifier def train_model(model, criteria, optimizer, scheduler, num_epochs=25, device='cuda'): model.to(device) since = time.time() best_model_wts = copy.deepcopy(model.state_dict()) best_acc = 0.0 for epoch in range(num_epochs): print('Epoch {}/{}'.format(epoch, num_epochs - 1)) print('-' * 10) # Each epoch has a training and validation phase for phase in ['train', 'valid']: if phase == 'train': scheduler.step() model.train() # Set model to training mode else: model.eval() # Set model to evaluate mode running_loss = 0.0 running_corrects = 0 # Iterate over data. for inputs, labels in dataloaders[phase]: inputs = inputs.to(device) labels = labels.to(device) # zero the parameter gradients optimizer.zero_grad() # forward # track history if only in train with torch.set_grad_enabled(phase == 'train'): outputs = model(inputs) _, preds = torch.max(outputs, 1) loss = criteria(outputs, labels) # backward + optimize only if in training phase if phase == 'train': loss.backward() optimizer.step() # statistics running_loss += loss.item() * inputs.size(0) running_corrects += torch.sum(preds == labels.data) epoch_loss = running_loss / dataset_sizes[phase] epoch_acc = running_corrects.double() / dataset_sizes[phase] print('{} Loss: {:.4f} Acc: {:.4f}'.format( phase, epoch_loss, epoch_acc)) # deep copy the model if phase == 'valid' and epoch_acc > best_acc: best_acc = epoch_acc best_model_wts = copy.deepcopy(model.state_dict()) print() time_elapsed = time.time() - since print('Training complete in {:.0f}m {:.0f}s'.format( time_elapsed // 60, time_elapsed % 60)) print('Best val Acc: {:4f}'.format(best_acc)) # load best model weights model.load_state_dict(best_model_wts) return model # Criteria NLLLoss which is recommended with Softmax final layer criteria = nn.NLLLoss() # Observe that all parameters are being optimized optimizer = optim.Adam(model.classifier.parameters(), lr=0.001) # Decay LR by a factor of 0.1 every 4 epochs sched = lr_scheduler.StepLR(optimizer, step_size=4, gamma=0.1) # Number of epochs eps=5 device = "cuda" if torch.cuda.is_available() else "cpu" model_ft = train_model(model, criteria, optimizer, sched, eps, device) ``` ## Save the checkpoint Now that your network is trained, save the model so you can load it later for making predictions. You probably want to save other things such as the mapping of classes to indices which you get from one of the image datasets: `image_datasets['train'].class_to_idx`. You can attach this to the model as an attribute which makes inference easier later on. ```model.class_to_idx = image_datasets['train'].class_to_idx``` Remember that you'll want to completely rebuild the model later so you can use it for inference. Make sure to include any information you need in the checkpoint. If you want to load the model and keep training, you'll want to save the number of epochs as well as the optimizer state, `optimizer.state_dict`. You'll likely want to use this trained model in the next part of the project, so best to save it now. 
``` model_file_name = 'classifier.pth' model.class_to_idx = image_datasets['train'].class_to_idx model.cpu() torch.save({'arch': 'vgg19', 'state_dict': model.state_dict(), 'class_to_idx': model.class_to_idx}, model_file_name) ``` ## Loading the checkpoint At this point it's good to write a function that can load a checkpoint and rebuild the model. That way you can come back to this project and keep working on it without having to retrain the network. ``` def load_model(checkpoint_path): chpt = torch.load(checkpoint_path) pretrained_model = getattr(models, chpt['arch']) if callable(pretrained_model): model = pretrained_model(pretrained=True) for param in model.parameters(): param.requires_grad = False else: print("Sorry base architecture not recognized") model.class_to_idx = chpt['class_to_idx'] # Create the classifier classifier = nn.Sequential(OrderedDict([ ('fc1', nn.Linear(25088, 4096)), ('relu', nn.ReLU()), ('fc2', nn.Linear(4096, 102)), ('output', nn.LogSoftmax(dim=1)) ])) # Put the classifier on the pretrained network model.classifier = classifier model.load_state_dict(chpt['state_dict']) return model model = load_model('classifier.pth') calc_accuracy(model, input_image_size=224, testset_path=valid_dir) ``` # Inference for classification Now you'll write a function to use a trained network for inference. That is, you'll pass an image into the network and predict the class of the flower in the image. Write a function called `predict` that takes an image and a model, then returns the top $K$ most likely classes along with the probabilities. It should look like ```python probs, classes = predict(image_path, model) print(probs) print(classes) > [ 0.01558163 0.01541934 0.01452626 0.01443549 0.01407339] > ['70', '3', '45', '62', '55'] ``` First you'll need to handle processing the input image such that it can be used in your network. ## Image Preprocessing You'll want to use `PIL` to load the image ([documentation](https://pillow.readthedocs.io/en/latest/reference/Image.html)). It's best to write a function that preprocesses the image so it can be used as input for the model. This function should process the images in the same manner used for training. First, resize the images where the shortest side is 256 pixels, keeping the aspect ratio. This can be done with the [`thumbnail`](http://pillow.readthedocs.io/en/3.1.x/reference/Image.html#PIL.Image.Image.thumbnail) or [`resize`](http://pillow.readthedocs.io/en/3.1.x/reference/Image.html#PIL.Image.Image.thumbnail) methods. Then you'll need to crop out the center 224x224 portion of the image. Color channels of images are typically encoded as integers 0-255, but the model expected floats 0-1. You'll need to convert the values. It's easiest with a Numpy array, which you can get from a PIL image like so `np_image = np.array(pil_image)`. As before, the network expects the images to be normalized in a specific way. For the means, it's `[0.485, 0.456, 0.406]` and for the standard deviations `[0.229, 0.224, 0.225]`. You'll want to subtract the means from each color channel, then divide by the standard deviation. And finally, PyTorch expects the color channel to be the first dimension but it's the third dimension in the PIL image and Numpy array. You can reorder dimensions using [`ndarray.transpose`](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.ndarray.transpose.html). The color channel needs to be first and retain the order of the other two dimensions. 
``` def process_image(image_path): ''' Scales, crops, and normalizes a PIL image for a PyTorch model, returns an Numpy array ''' # Open the image from PIL import Image img = Image.open(image_path) # Resize if img.size[0] > img.size[1]: img.thumbnail((10000, 256)) else: img.thumbnail((256, 10000)) # Crop left_margin = (img.width-224)/2 bottom_margin = (img.height-224)/2 right_margin = left_margin + 224 top_margin = bottom_margin + 224 img = img.crop((left_margin, bottom_margin, right_margin, top_margin)) # Normalize img = np.array(img)/255 mean = np.array([0.485, 0.456, 0.406]) #provided mean std = np.array([0.229, 0.224, 0.225]) #provided std img = (img - mean)/std # Move color channels to first dimension as expected by PyTorch img = img.transpose((2, 0, 1)) return img ``` To check your work, the function below converts a PyTorch tensor and displays it in the notebook. If your `process_image` function works, running the output through this function should return the original image (except for the cropped out portions). ``` def imshow(image, ax=None, title=None): if ax is None: fig, ax = plt.subplots() if title: plt.title(title) # PyTorch tensors assume the color channel is first # but matplotlib assumes is the third dimension image = image.transpose((1, 2, 0)) # Undo preprocessing mean = np.array([0.485, 0.456, 0.406]) std = np.array([0.229, 0.224, 0.225]) image = std * image + mean # Image needs to be clipped between 0 and 1 image = np.clip(image, 0, 1) ax.imshow(image) return ax ``` ## Class Prediction Once you can get images in the correct format, it's time to write a function for making predictions with your model. A common practice is to predict the top 5 or so (usually called top-$K$) most probable classes. You'll want to calculate the class probabilities then find the $K$ largest values. To get the top $K$ largest values in a tensor use [`x.topk(k)`](http://pytorch.org/docs/master/torch.html#torch.topk). This method returns both the highest `k` probabilities and the indices of those probabilities corresponding to the classes. You need to convert from these indices to the actual class labels using `class_to_idx` which hopefully you added to the model or from an `ImageFolder` you used to load the data ([see here](#Save-the-checkpoint)). Make sure to invert the dictionary so you get a mapping from index to class as well. Again, this method should take a path to an image and a model checkpoint, then return the probabilities and classes. ```python probs, classes = predict(image_path, model) print(probs) print(classes) > [ 0.01558163 0.01541934 0.01452626 0.01443549 0.01407339] > ['70', '3', '45', '62', '55'] ``` ``` def predict(image_path, model, top_num=5): # Process image img = process_image(image_path) # Numpy -> Tensor image_tensor = torch.from_numpy(img).type(torch.FloatTensor) # Add batch of size 1 to image model_input = image_tensor.unsqueeze(0) image_tensor.to('cpu') model_input.to('cpu') model.to('cpu') # Probs probs = torch.exp(model.forward(model_input)) # Top probs top_probs, top_labs = probs.topk(top_num) top_probs = top_probs.detach().numpy().tolist()[0] top_labs = top_labs.detach().numpy().tolist()[0] # Convert indices to classes idx_to_class = {val: key for key, val in model.class_to_idx.items()} top_labels = [idx_to_class[lab] for lab in top_labs] top_flowers = [cat_to_name[idx_to_class[lab]] for lab in top_labs] return top_probs, top_labels, top_flowers ``` ## Sanity Checking Now that you can use a trained model for predictions, check to make sure it makes sense. 
Even if the validation accuracy is high, it's always good to check that there aren't obvious bugs. Use `matplotlib` to plot the probabilities for the top 5 classes as a bar graph, along with the input image. It should look like this: <img src='https://i.imgur.com/KvRmrUp.png' width=300px> You can convert from the class integer encoding to actual flower names with the `cat_to_name.json` file (should have been loaded earlier in the notebook). To show a PyTorch tensor as an image, use the `imshow` function defined above. ``` def plot_solution(image_path, model): # Set up plot plt.figure(figsize = (6,10)) ax = plt.subplot(2,1,1) # Set up title flower_num = image_path.split('/')[3] title_ = cat_to_name[flower_num] # Plot flower img = process_image(image_path) imshow(img, ax, title = title_); # Make prediction probs, labs, flowers = predict(image_path, model) # Plot bar chart plt.subplot(2,1,2) sns.barplot(x=probs, y=flowers, color=sns.color_palette()[0]); plt.show() image_path = os.path.join(valid_dir, '28/image_05265.jpg') plot_solution(image_path, model) ``` ## Download the model Alternatively, download the file from the google colaboratory explorer menu ``` #files.download('classifier.pth') ``` ## Publish the result on the Airtable shared leaderboard ``` publish_evaluated_model(model, input_image_size=224, username="@Slack.Username", model_name="VGG19", optim="Adam", criteria="NLLLoss", scheduler="StepLR", epoch=5) ```
github_jupyter
``` import numpy as np import pandas as pd import matplotlib.pyplot as plt data = pd.read_csv("iris.data") data.head() data = pd.read_csv("iris.data", header= None) column_name = ['sepal length', 'sepal width', 'petal length', 'petal width', 'class'] data.columns = column_name data.head() ``` ``` # Split the dataset by class values, returns a dictionary def separate_by_class(dataset): separated = dict() for i in range(len(dataset)): vector = dataset[i] class_value = vector[-1] if (class_value not in separated): separated[class_value] = list() separated[class_value].append(vector) return separated # Test separating data by class dataset = [[3.393533211,2.331273381,0], [3.110073483,1.781539638,0], [1.343808831,3.368360954,0], [3.582294042,4.67917911,0], [2.280362439,2.866990263,0], [7.423436942,4.696522875,1], [5.745051997,3.533989803,1], [9.172168622,2.511101045,1], [7.792783481,3.424088941,1], [7.939820817,0.791637231,1]] separated = separate_by_class(dataset) for label in separated: print(label) for row in separated[label]: print(row) x = data[['sepal length', 'sepal width']] x.head() # function to calculate average(mean) def mean(data): return sum(data)/ float(len(data)) from math import sqrt # function to calculate the standard deviation def stdev(data): avg = mean(data) variance = sum([(x-avg)**2 for x in data]) / float(len(data)-1) return sqrt(variance) t = [1,2,3,5, 6.7,8, 9.3] a = stdev(t) print(a) # Calculate the mean, stdev and count for each column in a dataset def summarize_dataset(dataset): summaries = [(mean(column), stdev(column), len(column)) for column in zip(*dataset)] del(summaries[-1]) return summaries a = summarize_dataset(dataset) a # Split dataset by class then calculate statistics for each row def summarize_by_class(dataset): separated = separate_by_class(dataset) summaries = dict() for class_value, rows in separated.items(): summaries[class_value] = summarize_dataset(rows) return summaries summary = summarize_by_class(dataset) for label in summary: print(label) for row in summary[label]: print(row) from math import pi from math import exp # Calculate the Gaussian probability distribution function for x def calculate_probability(x, mean, stdev): exponent = exp(-((x-mean)**2 / (2 * stdev**2 ))) return (1 / (sqrt(2 * pi) * stdev)) * exponent # Test Gaussian PDF print(calculate_probability(1.0, 1.0, 1.0)) print(calculate_probability(2.0, 1.0, 1.0)) print(calculate_probability(0.0, 1.0, 1.0)) # Calculate the probabilities of predicting each class for a given row def calculate_class_probabilities(summaries, row): total_rows = sum([summaries[label][0][2] for label in summaries]) probabilities = dict() for class_value, class_summaries in summaries.items(): probabilities[class_value] = summaries[class_value][0][2]/float(total_rows) for i in range(len(class_summaries)): mean, stdev, _ = class_summaries[i] probabilities[class_value] *= calculate_probability(row[i], mean, stdev) return probabilities summaries = summarize_by_class(dataset) probabilities = calculate_class_probabilities(summaries, dataset[0]) print(probabilities) def variance1(data, ddof=0): n = len(data) mean = sum(data) / n return sum((x - mean) ** 2 for x in data) / (n - ddof) ``` # Iris ``` # Convert string column to float def str_column_to_float(dataset, column): for row in dataset: row[column] = float(row[column].strip()) # Convert string column to integer def str_column_to_int(dataset, column): class_values = [row[column] for row in dataset] unique = set(class_values) lookup = dict() for i, value in 
enumerate(unique): lookup[value] = i for row in dataset: row[column] = lookup[row[column]] return lookup # Split a dataset into k folds def cross_validation_split(dataset, n_folds): dataset_split = list() dataset_copy = list(dataset) fold_size = int(len(dataset) / n_folds) for _ in range(n_folds): fold = list() while len(fold) < fold_size: index = randrange(len(dataset_copy)) fold.append(dataset_copy.pop(index)) dataset_split.append(fold) return dataset_split # Calculate accuracy percentage def accuracy_metric(actual, predicted): correct = 0 for i in range(len(actual)): if actual[i] == predicted[i]: correct += 1 return correct / float(len(actual)) * 100.0 # Evaluate an algorithm using a cross validation split def evaluate_algorithm(dataset, algorithm, n_folds, *args): folds = cross_validation_split(dataset, n_folds) scores = list() for fold in folds: train_set = list(folds) train_set.remove(fold) train_set = sum(train_set, []) test_set = list() for row in fold: row_copy = list(row) test_set.append(row_copy) row_copy[-1] = None predicted = algorithm(train_set, test_set, *args) actual = [row[-1] for row in fold] accuracy = accuracy_metric(actual, predicted) scores.append(accuracy) return scores # Split the dataset by class values, returns a dictionary def separate_by_class(dataset): separated = dict() for i in range(len(dataset)): vector = dataset[i] class_value = vector[-1] if (class_value not in separated): separated[class_value] = list() separated[class_value].append(vector) return separated # Calculate the mean of a list of numbers def mean(numbers): return sum(numbers)/float(len(numbers)) # Calculate the standard deviation of a list of numbers def stdev(numbers): avg = mean(numbers) variance = sum([(x-avg)**2 for x in numbers]) / float(len(numbers)-1) return sqrt(variance) # Calculate the mean, stdev and count for each column in a dataset def summarize_dataset(dataset): summaries = [(mean(column), stdev(column), len(column)) for column in zip(*dataset)] del(summaries[-1]) return summaries # Split dataset by class then calculate statistics for each row def summarize_by_class(dataset): separated = separate_by_class(dataset) summaries = dict() for class_value, rows in separated.items(): summaries[class_value] = summarize_dataset(rows) return summaries # Calculate the Gaussian probability distribution function for x def calculate_probability(x, mean, stdev): exponent = exp(-((x-mean)**2 / (2 * stdev**2 ))) return (1 / (sqrt(2 * pi) * stdev)) * exponent # Calculate the probabilities of predicting each class for a given row def calculate_class_probabilities(summaries, row): total_rows = sum([summaries[label][0][2] for label in summaries]) probabilities = dict() for class_value, class_summaries in summaries.items(): probabilities[class_value] = summaries[class_value][0][2]/float(total_rows) for i in range(len(class_summaries)): mean, stdev, _ = class_summaries[i] probabilities[class_value] *= calculate_probability(row[i], mean, stdev) return probabilities # Predict the class for a given row def predict(summaries, row): probabilities = calculate_class_probabilities(summaries, row) best_label, best_prob = None, -1 for class_value, probability in probabilities.items(): if best_label is None or probability > best_prob: best_prob = probability best_label = class_value return best_label # Naive Bayes Algorithm def naive_bayes(train, test): summarize = summarize_by_class(train) predictions = list() for row in test: output = predict(summarize, row) predictions.append(output) return(predictions) # Load 
a CSV file def load_csv(filename): dataset = list() with open(filename, 'r') as file: csv_reader = reader(file) for row in csv_reader: if not row: continue dataset.append(row) return dataset from csv import reader from random import seed from random import randrange seed(1) filename = 'iris.data' dataset = load_csv(filename) for i in range(len(dataset[0])-1): str_column_to_float(dataset, i) # convert class column to integers str_column_to_int(dataset, len(dataset[0])-1) # evaluate algorithm n_folds = 5 scores = evaluate_algorithm(dataset, naive_bayes, n_folds) print('Scores: %s' % scores) print('Mean Accuracy: %.3f%%' % (sum(scores)/float(len(scores)))) v = variance1(scores) print('Variance of classifier: ', v) ```
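After the cross-validated evaluation, a short usage sketch shows how the same functions classify a single new measurement. It assumes the cells above have been run, so `dataset` already holds float features with an integer class label in the last column; the sample values themselves are hypothetical.

```
# Fit on the full dataset and classify one hypothetical iris measurement.
summaries = summarize_by_class(dataset)

# sepal length, sepal width, petal length, petal width (made-up values for illustration)
new_flower = [5.1, 3.5, 1.4, 0.2]
predicted_label = predict(summaries, new_flower)
print('Predicted class label (integer code assigned by str_column_to_int):', predicted_label)
```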
github_jupyter
# Stochastic Variational GP Regression ## Overview In this notebook, we'll give an overview of how to use SVGP stochastic variational regression ((https://arxiv.org/pdf/1411.2005.pdf)) to rapidly train using minibatches on the `3droad` UCI dataset with hundreds of thousands of training examples. This is one of the more common use-cases of variational inference for GPs. If you are unfamiliar with variational inference, we recommend the following resources: - [Variational Inference: A Review for Statisticians](https://arxiv.org/abs/1601.00670) by David M. Blei, Alp Kucukelbir, Jon D. McAuliffe. - [Scalable Variational Gaussian Process Classification](https://arxiv.org/abs/1411.2005) by James Hensman, Alex Matthews, Zoubin Ghahramani. ``` import tqdm import math import torch import gpytorch from matplotlib import pyplot as plt # Make plots inline %matplotlib inline ``` For this example notebook, we'll be using the `song` UCI dataset used in the paper. Running the next cell downloads a copy of the dataset that has already been scaled and normalized appropriately. For this notebook, we'll simply be splitting the data using the first 80% of the data as training and the last 20% as testing. **Note**: Running the next cell will attempt to download a **~136 MB** file to the current directory. ``` import urllib.request import os from scipy.io import loadmat from math import floor # this is for running the notebook in our testing framework smoke_test = ('CI' in os.environ) if not smoke_test and not os.path.isfile('../elevators.mat'): print('Downloading \'elevators\' UCI dataset...') urllib.request.urlretrieve('https://drive.google.com/uc?export=download&id=1jhWL3YUHvXIaftia4qeAyDwVxo6j1alk', '../elevators.mat') if smoke_test: # this is for running the notebook in our testing framework X, y = torch.randn(1000, 3), torch.randn(1000) else: data = torch.Tensor(loadmat('../elevators.mat')['data']) X = data[:, :-1] X = X - X.min(0)[0] X = 2 * (X / X.max(0)[0]) - 1 y = data[:, -1] train_n = int(floor(0.8 * len(X))) train_x = X[:train_n, :].contiguous() train_y = y[:train_n].contiguous() test_x = X[train_n:, :].contiguous() test_y = y[train_n:].contiguous() if torch.cuda.is_available(): train_x, train_y, test_x, test_y = train_x.cuda(), train_y.cuda(), test_x.cuda(), test_y.cuda() ``` ## Creating a DataLoader The next step is to create a torch `DataLoader` that will handle getting us random minibatches of data. This involves using the standard `TensorDataset` and `DataLoader` modules provided by PyTorch. In this notebook we'll be using a fairly large batch size of 1024 just to make optimization run faster, but you could of course change this as you so choose. ``` from torch.utils.data import TensorDataset, DataLoader train_dataset = TensorDataset(train_x, train_y) train_loader = DataLoader(train_dataset, batch_size=1024, shuffle=True) test_dataset = TensorDataset(test_x, test_y) test_loader = DataLoader(test_dataset, batch_size=1024, shuffle=False) ``` ## Creating a SVGP Model For most variational/approximate GP models, you will need to construct the following GPyTorch objects: 1. A **GP Model** (`gpytorch.models.ApproximateGP`) - This handles basic variational inference. 1. A **Variational distribution** (`gpytorch.variational._VariationalDistribution`) - This tells us what form the variational distribution q(u) should take. 1. 
A **Variational strategy** (`gpytorch.variational._VariationalStrategy`) - This tells us how to transform a distribution q(u) over the inducing point values to a distribution q(f) over the latent function values for some input x. Here, we use a `VariationalStrategy` with `learn_inducing_points=True`, and a `CholeskyVariationalDistribution`. These are the most straightforward and common options. #### The GP Model The `ApproximateGP` model is GPyTorch's simplest approximate inference model. It approximates the true posterior with a distribution specified by a `VariationalDistribution`, which is most commonly some form of MultivariateNormal distribution. The model defines all the variational parameters that are needed, and keeps all of this information under the hood. The components of a user built `ApproximateGP` model in GPyTorch are: 1. An `__init__` method that constructs a mean module, a kernel module, a variational distribution object and a variational strategy object. This method should also be responsible for construting whatever other modules might be necessary. 2. A `forward` method that takes in some $n \times d$ data `x` and returns a MultivariateNormal with the *prior* mean and covariance evaluated at `x`. In other words, we return the vector $\mu(x)$ and the $n \times n$ matrix $K_{xx}$ representing the prior mean and covariance matrix of the GP. ``` from gpytorch.models import ApproximateGP from gpytorch.variational import CholeskyVariationalDistribution from gpytorch.variational import VariationalStrategy class GPModel(ApproximateGP): def __init__(self, inducing_points): variational_distribution = CholeskyVariationalDistribution(inducing_points.size(0)) variational_strategy = VariationalStrategy(self, inducing_points, variational_distribution, learn_inducing_locations=True) super(GPModel, self).__init__(variational_strategy) self.mean_module = gpytorch.means.ConstantMean() self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel()) def forward(self, x): mean_x = self.mean_module(x) covar_x = self.covar_module(x) return gpytorch.distributions.MultivariateNormal(mean_x, covar_x) inducing_points = train_x[:500, :] model = GPModel(inducing_points=inducing_points) likelihood = gpytorch.likelihoods.GaussianLikelihood() if torch.cuda.is_available(): model = model.cuda() likelihood = likelihood.cuda() ``` ### Training the Model The cell below trains the model above, learning both the hyperparameters of the Gaussian process **and** the parameters of the neural network in an end-to-end fashion using Type-II MLE. Unlike when using the exact GP marginal log likelihood, performing variational inference allows us to make use of stochastic optimization techniques. For this example, we'll do one epoch of training. Given the small size of the neural network relative to the size of the dataset, this should be sufficient to achieve comparable accuracy to what was observed in the DKL paper. The optimization loop differs from the one seen in our more simple tutorials in that it involves looping over both a number of training iterations (epochs) *and* minibatches of the data. However, the basic process is the same: for each minibatch, we forward through the model, compute the loss (the `VariationalELBO` or ELBO), call backwards, and do a step of optimization. ``` num_epochs = 1 if smoke_test else 4 model.train() likelihood.train() optimizer = torch.optim.Adam([ {'params': model.parameters()}, {'params': likelihood.parameters()}, ], lr=0.01) # Our loss object. 
We're using the VariationalELBO mll = gpytorch.mlls.VariationalELBO(likelihood, model, num_data=train_y.size(0)) epochs_iter = tqdm.notebook.tqdm(range(num_epochs), desc="Epoch") for i in epochs_iter: # Within each iteration, we will go over each minibatch of data minibatch_iter = tqdm.notebook.tqdm(train_loader, desc="Minibatch", leave=False) for x_batch, y_batch in minibatch_iter: optimizer.zero_grad() output = model(x_batch) loss = -mll(output, y_batch) minibatch_iter.set_postfix(loss=loss.item()) loss.backward() optimizer.step() ``` ### Making Predictions The next cell gets the predictive covariance for the test set (and also technically gets the predictive mean, stored in `preds.mean()`). Because the test set is substantially smaller than the training set, we don't need to make predictions in mini batches here, although this can be done by passing in minibatches of `test_x` rather than the full tensor. ``` model.eval() likelihood.eval() means = torch.tensor([0.]) with torch.no_grad(): for x_batch, y_batch in test_loader: preds = model(x_batch) means = torch.cat([means, preds.mean.cpu()]) means = means[1:] print('Test MAE: {}'.format(torch.mean(torch.abs(means - test_y.cpu())))) ```
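The text above mentions the predictive covariance, but the loop only stores the means. A minimal sketch (assuming the cells above have been run) also collects the per-point variances; note these are latent-function variances, and passing the model output through `likelihood(...)` would additionally include the observation noise.

```
model.eval()
likelihood.eval()

means, variances = [], []
with torch.no_grad():
    for x_batch, y_batch in test_loader:
        preds = model(x_batch)                   # latent distribution q(f) at x_batch
        means.append(preds.mean.cpu())
        variances.append(preds.variance.cpu())   # use likelihood(preds) to add observation noise
means = torch.cat(means)
variances = torch.cat(variances)
print('Test MAE: {}'.format(torch.mean(torch.abs(means - test_y.cpu()))))
```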
github_jupyter
``` import torch from abc import ABC, abstractmethod import numpy as np def calc_out_shape(input_matrix_shape, out_channels, kernel_size, stride, padding): batch_size, channels_count, input_height, input_width = input_matrix_shape output_height = (input_height + 2 * padding - (kernel_size - 1) - 1) // stride + 1 output_width = (input_width + 2 * padding - (kernel_size - 1) - 1) // stride + 1 return batch_size, out_channels, output_height, output_width class ABCConv2d(ABC): def __init__(self, in_channels, out_channels, kernel_size, stride): self.in_channels = in_channels self.out_channels = out_channels self.kernel_size = kernel_size self.stride = stride def set_kernel(self, kernel): self.kernel = kernel @abstractmethod def __call__(self, input_tensor): pass def create_and_call_conv2d_layer(conv2d_layer_class, stride, kernel, input_matrix): out_channels = kernel.shape[0] in_channels = kernel.shape[1] kernel_size = kernel.shape[2] layer = conv2d_layer_class(in_channels, out_channels, kernel_size, stride) layer.set_kernel(kernel) return layer(input_matrix) class Conv2d(ABCConv2d): def __init__(self, in_channels, out_channels, kernel_size, stride): self.conv2d = torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride, padding=0, bias=False) def set_kernel(self, kernel): self.conv2d.weight.data = kernel def __call__(self, input_tensor): return self.conv2d(input_tensor) def test_conv2d_layer(conv2d_layer_class, batch_size=2, input_height=4, input_width=4, stride=2): kernel = torch.tensor( [[[[0., 1, 0], [1, 2, 1], [0, 1, 0]], [[1, 2, 1], [0, 3, 3], [0, 1, 10]], [[10, 11, 12], [13, 14, 15], [16, 17, 18]]]]) in_channels = kernel.shape[1] input_tensor = torch.arange(0, batch_size * in_channels * input_height * input_width, out=torch.FloatTensor()) \ .reshape(batch_size, in_channels, input_height, input_width) custom_conv2d_out = create_and_call_conv2d_layer( conv2d_layer_class, stride, kernel, input_tensor) conv2d_out = create_and_call_conv2d_layer( Conv2d, stride, kernel, input_tensor) return torch.allclose(custom_conv2d_out, conv2d_out) \ and (custom_conv2d_out.shape == conv2d_out.shape) class Conv2dMatrixV2(ABCConv2d): # Функция преобразования кернела в нужный формат. def _convert_kernel(self): matrix_zeros = np.zeros((self.kernel.size()[0],self.kernel.size()[1] * self.kernel.size()[2]*self.kernel.size()[3])) j, k = 0, 0 for image in self.kernel[0].numpy(): for i in range(len(image)): matrix_zeros[:, k:k+self.kernel.size()[2]] = image[i] k += self.kernel.size()[2] j+=1 converted_kernel = torch.from_numpy(matrix_zeros).float()# Реализуйте преобразование кернела. return converted_kernel # Функция преобразования входа в нужный формат. def _convert_input(self, torch_input, output_height, output_width): matrix_zeros = np.zeros((torch_input.size()[1] *self.kernel.size()[2]*self.kernel.size()[3], torch_input.size()[0] )) for i in range(matrix_zeros.shape[1]): j = 0 for core in torch_input[i].numpy(): core_reshape = core.reshape(1, core.shape[0]*core.shape[1]) k = 0 for g in range(self.kernel.size()[2]): matrix_zeros[j:j+self.kernel.size()[2], i] = core_reshape[:,k:k+self.kernel.size()[2]] # np.flip(core_reshape, 1) j += self.kernel.size()[2] k += self.kernel.size()[2]+1 converted_input = torch.from_numpy(matrix_zeros).float() # Реализуйте преобразование входа. 
return converted_input def __call__(self, torch_input): batch_size, out_channels, output_height, output_width\ = calc_out_shape( input_matrix_shape=torch_input.shape, out_channels=self.kernel.shape[0], kernel_size=self.kernel.shape[2], stride=self.stride, padding=0) converted_kernel = self._convert_kernel() converted_input = self._convert_input(torch_input, output_height, output_width) conv2d_out_alternative_matrix_v2 = converted_kernel @ converted_input return conv2d_out_alternative_matrix_v2.transpose(0, 1).view(torch_input.shape[0], self.out_channels, output_height, output_width).transpose(1, 3).transpose(2, 3) # Проверка происходит автоматически вызовом следующего кода # (раскомментируйте для самостоятельной проверки, # в коде для сдачи задания должно быть закомментировано): print(test_conv2d_layer(Conv2dMatrixV2)) ```
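As an optional cross-check of the idea implemented above (convolution as a single matrix product), PyTorch's built-in `torch.nn.functional.unfold` performs the same im2col flattening of the input. This is not part of the required solution, just a sketch that reuses `calc_out_shape` from the cells above.

```
import torch
import torch.nn.functional as F

def conv2d_via_unfold(x, kernel, stride):
    # im2col: cols has shape (N, C*k*k, L), where L is the number of sliding-window positions
    k = kernel.shape[2]
    cols = F.unfold(x, kernel_size=k, stride=stride)
    # kernel reshaped to (out_channels, C*k*k); matmul broadcasts over the batch dimension
    out = kernel.view(kernel.shape[0], -1) @ cols          # (N, out_channels, L)
    n, o, h, w = calc_out_shape(x.shape, kernel.shape[0], k, stride, padding=0)
    return out.view(n, o, h, w)

# quick self-check against the reference convolution
x = torch.arange(0., 2 * 3 * 4 * 4).reshape(2, 3, 4, 4)
kernel = torch.randn(1, 3, 3, 3)
print(torch.allclose(conv2d_via_unfold(x, kernel, stride=2),
                     F.conv2d(x, kernel, stride=2)))
```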
github_jupyter
# MRCA estimation ------- You can access your data via the dataset number. For example, ``handle = open(get(42), 'r')``. To save data, write your data to a file, and then call ``put('filename.txt')``. The dataset will then be available in your galaxy history. Notebooks can be saved to Galaxy by clicking the large green button at the top right of the IPython interface.<br> More help and informations can be found on the project [website](https://github.com/bgruening/galaxy-ipython). ## Inputs ------ This notebook expects two inputs from Galaxy history: 1. a comma separated list of accession numbers and corresponding collection dates 2. a phylogenetic tree (in newick format) in which OTU labels correspond to accession numbers from input 1 Here is an example of input 1: ``` Accession,Collection_Date MT049951,2020-01-17 MT019531,2019-12-30 MT019529,2019-12-23 MN975262,2020-01-11 MN996528,2019-12-30 MT019532,2019-12-30 MT019530,2019-12-30 MN994468,2020-01-22 ``` ``` # Set history items for datasets containing accession/dates and a maximum likelihood tree: # These numbers correspond to numbers of Galaxy datasets acc_date = 1 tree = 116 !pip install --upgrade pip==20.0.2 !pip install --upgrade statsmodels==0.11.0 !pip install --upgrade pandas==0.24.2 from Bio import Phylo as phylo from matplotlib import pyplot as plt import pandas as pd import datetime import statsmodels.api as sm import statsmodels.formula.api as smf %matplotlib inline # Get accessions and dates acc_path = get(acc_date) # Get ML tree tree_path = get(tree) !mv {acc_path} acc_date.csv !mv {tree_path} tree.nwk col_dates = pd.read_csv('acc_date.csv') col_dates tree = next( phylo.parse( 'tree.nwk', "newick" ) ) plt.rcParams['figure.figsize'] = [15, 50] phylo.draw( tree ) def root_to_tip( tree, date_df ): accum = [] def tree_walker( clade, total_branch_length ): for child in clade.clades: if child.is_terminal: if child.name is not None: date = date_df[date_df['Accession']==child.name]['Collection_Date'].to_string(index=False) accum.append( ( child.name, date, total_branch_length + child.branch_length ) ) tree_walker( child, total_branch_length + child.branch_length ) tree_walker( tree.clade, 0 ) return pd.DataFrame( accum, columns=["name","date","distance_to_root"] ) for clade in list( tree.find_clades() ): tree.root_with_outgroup( clade ) df = root_to_tip( tree, col_dates ) df['date'] = pd.to_datetime(df['date']) df['date_as_numeric'] = [d.year + (d.dayofyear-1)/365 for d in df['date']] plt.rcParams['figure.figsize'] = [15, 10] df.plot( x="date", y="distance_to_root" ) df['date_as_numeric'] = [d.year + (d.dayofyear-1)/365 for d in df['date']] ``` ## MRCA timing is ... ``` import datetime def decimal_to_calendar (decimal): years = int (decimal) d = datetime.datetime (years, 1,1) + datetime.timedelta (days = int ((decimal-years)*365)) return d model = smf.ols(formula='distance_to_root ~ date_as_numeric ', data=df) results = model.fit() print( results.summary() ) print ("Root predicted at {}".format(decimal_to_calendar(-results.params.Intercept/results.params.date_as_numeric))) ```
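Since the root date comes from the slope and intercept of the regression above, a quick visual check of the fit helps judge how much to trust it. This sketch assumes `df` and `results` from the cells above and only overlays the fitted line on the root-to-tip distances.

```
import numpy as np

# Scatter of root-to-tip distance against collection date, with the OLS fit on top.
plt.rcParams['figure.figsize'] = [15, 10]
plt.scatter(df['date_as_numeric'], df['distance_to_root'], label='tips')
x_vals = np.linspace(df['date_as_numeric'].min(), df['date_as_numeric'].max(), 100)
plt.plot(x_vals,
         results.params.Intercept + results.params.date_as_numeric * x_vals,
         color='red', label='OLS fit')
plt.xlabel('collection date (decimal year)')
plt.ylabel('distance to root')
plt.legend()
plt.show()
```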
github_jupyter
**Notes for the docker container:** Docker command for running the notebook locally. Note: replace `<path to my directory>` with the path of the directory you want to map to `/datos` inside the docker container.

```
docker run --rm -v <path to my directory>:/datos --name jupyterlab_numerical -p 8888:8888 -d palmoreck/jupyterlab_numerical:1.1.0
```

Password for jupyterlab: `qwerty`

Stop the docker container:

```
docker stop jupyterlab_numerical
```

Documentation for the docker image `palmoreck/jupyterlab_numerical:1.1.0` is at this [link](https://github.com/palmoreck/dockerfiles/tree/master/jupyterlab/numerical).

---

Notebook generated from [link1](https://drive.google.com/file/d/1zCIHNAxe5Shc36Qo0XjehHgwrafKSJ_t/view), [link2](https://drive.google.com/file/d/1RMwUXEN_SOHKue-J9Cx3Ldvj9bejLjiM/view).

```
!pip3 install --user -q cvxpy
import os
cur_directory = os.getcwd()
dir_alg_python = '/algoritmos/Python'
os.chdir(cur_directory + dir_alg_python)
import math
import numpy as np
from utils_2nd_version import compute_error
from algorithms_for_cieco_2nd_version import path_following_method_infeasible_init_point_2nd_version, \
                                              path_following_method_infeasible_init_point
```

# The implementation is ready for initial points that are infeasible for $Ax=b$, but not yet for the inequality constraints $f_i(x) <0 \quad \forall i=1,\dots,m$

# First example

$$ \min \quad x_1^2 + x_2^2 + x_3^2 + x_4^2 -2x_1-3x_4$$

$$\text{subject to: } $$

$$ \begin{array}{c} 2x_1 + x_2 + x_3 + 4x_4 = 7 \\ x_1 + x_2 + 2x_3 + x_4 = 6 \end{array} $$

$$x_1, x_2, x_3, x_4 \geq 0$$

## Infeasible for Ax=b

```
from utils_2nd_version import constraint_inequalities_funcs_eval

fo = lambda x: x[0]**2 + x[1]**2 + x[2]**2 + x[3]**2-2*x[0]-3*x[3]

const = {0: lambda x: -x[0],
         1: lambda x: -x[1],
         2: lambda x: -x[2],
         3: lambda x: -x[3]
         }

A= np.array([[2,1,1,4],
             [1,1,2,1]])
b=np.array([7,6])

x_ast=np.array([1.1232876712328763,0.6506849315068493,
                1.8287671232876714,0.5684931506849317])

x_0 = np.array([-5,-5,-5,-5],dtype=float) #with -5,-5,-5,-5 return one entry of x negative
nu_0 = np.array([0,0], dtype=float)

p_ast=fo(x_ast)
p_ast

tol_outer_iter = 1e-6
tol=1e-8
tol_backtracking=1e-8
maxiter=30
mu=10

x = path_following_method_infeasible_init_point_2nd_version(fo, A, b, const,
                                                            x_0, nu_0, tol,
                                                            tol_backtracking = tol_backtracking,
                                                            x_ast = x_ast, p_ast=p_ast,
                                                            maxiter=maxiter, mu=mu,
                                                            tol_outer_iter = tol_outer_iter
                                                            )
x

A@x
```

# Comparison with [cvxpy](https://github.com/cvxgrp/cvxpy)

```
import cvxpy as cp

x1 = cp.Variable()
x2 = cp.Variable()
x3 = cp.Variable()
x4 = cp.Variable()

# Create two constraints.
constraints = [2*x1+x2+x3+4*x4-7 == 0,x1+x2+2*x3+x4-6 == 0,x1>=0,x2>=0,x3>=0,x4>=0]

# Form objective.
obj = cp.Minimize(x1**2+x2**2+x3**2+x4**2-2*x1-3*x4)

# Form and solve problem.
prob = cp.Problem(obj, constraints)
prob.solve()  # Returns the optimal value.
print("status:", prob.status) print("optimal value", prob.value) print("optimal var", x1.value, x2.value, x3.value,x4.value) ``` # Segundo ejemplo $$\min 2x_1 + 5x_2$$ $$\text{sujeto a: }$$ $$ \begin{array}{c} 6-x_1-x_2 \leq 0 \\ -18 + x_1 +2x_2 \leq 0\\ x_1, x_2 \geq 0 \end{array} $$ ``` fo = lambda x: 2*x[0] + 5*x[1] const = {0: lambda x: 6-x[0]-x[1], 1: lambda x: -18+x[0]+2*x[1], 2: lambda x: -x[0], 3: lambda x: -x[1] } A=np.array([0,0],dtype=float) b = 0 x_ast = np.array([6,0], dtype=float) #x_0 = np.array([-2,-2], dtype=float) x_0 = np.array([5,0], dtype=float) #x_0 = np.array([10,-1], dtype=float) #x_0 = np.array([6.5,0], dtype=float) p_ast=fo(x_ast) p_ast nu_0 = np.array([0,0], dtype=float) tol_outer_iter = 1e-3 tol=1e-8 tol_backtracking=1e-4 maxiter=30 mu=10 fo(x_0) x = path_following_method_infeasible_init_point_2nd_version(fo, A, b, const, x_0, nu_0, tol, tol_backtracking, x_ast, p_ast, maxiter, mu, tol_outer_iter = tol_outer_iter ) from utils import constraint_inequalities_funcs_eval constraint_inequalities_funcs_eval(x, const) ``` # Comparación con [cvxpy](https://github.com/cvxgrp/cvxpy) ``` x1 = cp.Variable() x2 = cp.Variable() # Create two constraints. constraints = [6-x1-x2 <= 0,-18+x1+2*x2<=0,x1>=0,x2>=0] # Form objective. obj = cp.Minimize(2*x1+5*x2) # Form and solve problem. prob = cp.Problem(obj, constraints) prob.solve() # Returns the optimal value. print("status:", prob.status) print("optimal value", prob.value) print("optimal var", x1.value, x2.value) ``` **Referencias:** * S. P. Boyd, L. Vandenberghe, Convex Optimization, Cambridge University Press, 2009.
github_jupyter
<div style="width:1000 px"> <div style="float:right; width:98 px; height:98px;"> <img src="https://raw.githubusercontent.com/Unidata/MetPy/master/metpy/plots/_static/unidata_150x150.png" alt="Unidata Logo" style="height: 98px;"> </div> <h1>Introduction to Pandas</h1> <h3>Unidata Python Workshop</h3> <div style="clear:both"></div> </div> <hr style="height:2px;"> ## Overview: * **Teaching:** 30 minutes * **Exercises:** 30 minutes ### Questions 1. What is Pandas? 1. What are the basic Pandas data structures? 1. How can I read data into Pandas? 1. What are some of the data operations available in Pandas? ### Objectives 1. <a href="#series">Data Series</a> 1. <a href="#frames">Data Frames</a> 1. <a href="#loading">Loading Data in Pandas</a> 1. <a href="#missing">Missing Data</a> 1. <a href="#manipulating">Manipulating Data</a> <a name="series"></a> ## Data Series Data series are one of the fundamental data structures in Pandas. You can think of them like a dictionary; they have a key (index) and value (data/values) like a dictionary, but also have some handy functionality attached to them. To start out, let's create a series from scratch. We'll imagine these are temperature observations. ``` from pandas import Series temperatures = Series([23, 20, 25, 18]) temperatures ``` The values on the left are the index (zero based integers by default) and on the right are the values. Notice that the data type is an integer. Any NumPy datatype is acceptable in a series. That's great, but it'd be more useful if the station were associated with those values. In fact you could say we want the values *indexed* by station name. ``` temperatures = Series([23, 20, 25, 18], index=['TOP', 'OUN', 'DAL', 'DEN']) temperatures ``` Now, very similar to a dictionary, we can use the index to access and modify elements. ``` temperatures['DAL'] temperatures[['DAL', 'OUN']] ``` We can also do basic filtering, math, etc. ``` temperatures[temperatures > 20] temperatures + 2 ``` Remember how I said that series are like dictionaries? We can create a series striaght from a dictionary. ``` dps = {'TOP': 14, 'OUN': 18, 'DEN': 9, 'PHX': 11, 'DAL': 23} dewpoints = Series(dps) dewpoints ``` It's also easy to check and see if an index exists in a given series: ``` 'PHX' in dewpoints 'PHX' in temperatures ``` Series have a name attribute and their index has a name attribute. ``` temperatures.name = 'temperature' temperatures.index.name = 'station' temperatures ``` <div class="alert alert-success"> <b>EXERCISE</b>: <ul> <li>Create a series of pressures for stations TOP, OUN, DEN, and DAL (assign any values you like).</li> <li>Set the series name and series index name.</li> <li>Print the pressures for all stations which have a dewpoint below 15.</li> </ul> </div> ``` # Your code goes here ``` <button data-toggle="collapse" data-target="#sol1" class='btn btn-primary'>View Solution</button> <div id="sol1" class="collapse"> <code><pre> pressures = Series([1012.1, 1010.6, 1008.8, 1011.2], index=['TOP', 'OUN', 'DEN', 'DAL']) pressures.name = 'pressure' pressures.index.name = 'station' print(pressures[dewpoints < 15]) </pre></code> </div> <a href="#top">Top</a> <hr style="height:2px;"> <a name="frames"></a> ## Data Frames Series are great, but what about a bunch of related series? Something like a table or a spreadsheet? Enter the data frame. A data frame can be thought of as a dictionary of data series. They have indexes for their rows and their columns. Each data series can be of a different type , but they will all share a common index. 
The easiest way to create a data frame by hand is to use a dictionary. ``` from pandas import DataFrame data = {'station': ['TOP', 'OUN', 'DEN', 'DAL'], 'temperature': [23, 20, 25, 18], 'dewpoint': [14, 18, 9, 23]} df = DataFrame(data) df ``` You can access columns (data series) using dictionary type notation or attribute type notation. ``` df['temperature'] df.dewpoint ``` Notice the index is shared and that the name of the column is attached as the series name. You can also create a new column and assign values. If I only pass a scalar it is duplicated. ``` df['wspeed'] = 0. df ``` Let's set the index to be the station. ``` df.index = df.station df ``` Well, that's close, but we now have a redundant column, so let's get rid of it. ``` df.drop('station', 1, inplace=True) df ``` Now let's get a row from the dataframe instead of a column. ``` df.loc['DEN'] ``` We can even transpose the data easily if we needed that do make things easier to merge/munge later. ``` df.T ``` Look at the `values` attribute to access the data as a 1D or 2D array for series and data frames recpectively. ``` df.values df.temperature.values ``` <div class="alert alert-success"> <b>EXERCISE</b>: <ul> <li>Add a series of rain observations to the existing data frame.</li> <li>Apply an instrument correction of -2 to the dewpoint observations.</li> </ul> </div> ``` # Your code goes here ``` <button data-toggle="collapse" data-target="#sol2" class='btn btn-primary'>View Solution</button> <div id="sol2" class="collapse"> <code><pre> df['rain'] = [0, 0.4, 0.2, 0] df.dewpoint = df.dewpoint - 2 df </pre></code> </div> <a href="#top">Top</a> <hr style="height:2px;"> <a name="loading"></a> ## Loading Data in Pandas The real power of pandas is in manupulating and summarizing large sets of tabular data. To do that, we'll need a large set of tabular data. We've included a file in this directory called `JAN17_CO_ASOS.txt` that has all of the ASOS observations for several stations in Colorado for January of 2017. It's a few hundred thousand rows of data in a tab delimited format. Let's load it into Pandas. ``` import pandas as pd df = pd.read_table('Jan17_CO_ASOS.txt') df.head() df = pd.read_table('Jan17_CO_ASOS.txt', parse_dates=['valid']) df.head() df = pd.read_table('Jan17_CO_ASOS.txt', parse_dates=['valid'], na_values='M') df.head() ``` Let's look in detail at those column names. Turns out we need to do some cleaning of this file. Welcome to real world data analysis. ``` df.columns df.columns = ['station', 'time', 'temperature', 'dewpoint', 'pressure'] df.head() ``` For other formats of data CSV, fixed width, etc. that are tools to read it as well. You can even read excel files straight into Pandas. <a href="#top">Top</a> <hr style="height:2px;"> <a name="missing"></a> ## Missing Data We've already dealt with some missing data by turning the 'M' string into actual NaN's while reading the file in. We can do one better though and delete any rows that have all values missing. There are similar operations that could be performed for columns. You can even drop if any values are missing, all are missing, or just those you specify are missing. ``` len(df) df.dropna(axis=0, how='all', subset=['temperature', 'dewpoint', 'pressure'], inplace=True) len(df) df.head() ``` <div class="alert alert-success"> <b>EXERCISE</b>: Create a new data frame called df2 that contains all data that only have temperature, dewpoint and pressure observations. 
</div> ``` # Your code goes here # df2 = ``` <button data-toggle="collapse" data-target="#sol3" class='btn btn-primary'>View Solution</button> <div id="sol3" class="collapse"> <code><pre> df2 = df.dropna(how='any') df2 </pre></code> </div> Lastly, we still have the original index values. Let's reindex to a new zero-based index for only the rows that have valid data in them. ``` df.reset_index(drop=True, inplace=True) df.head() ``` <a href="#top">Top</a> <hr style="height:2px;"> <a name="manipulating"></a> ## Manipulating Data We can now take our data and do some intersting things with it. Let's start with a simple min/max. ``` print('Min: {}\nMax: {}'.format(df.temperature.min(), df.temperature.max())) ``` You can also do some useful statistics on data with attached methods like corr for correlation coefficient. ``` df.temperature.corr(df.dewpoint) ``` We can also call a `groupby` on the data frame to start getting some summary information for each station. ``` df.groupby('station').mean() ``` <div class="alert alert-success"> <b>EXERCISE</b>: Calculate the min, max, and standard deviation of the temperature field grouped by each station. </div> ``` # Calculate min # Calculate max # Calculate standard deviation ``` <button data-toggle="collapse" data-target="#sol4" class='btn btn-primary'>View Solution</button> <div id="sol4" class="collapse"> <code><pre> print(df.groupby('station').min()) print(df.groupby('station').max()) print(df.groupby('station').std()) </pre></code> </div> Now, let me show you how to do all of that and more in a single call. ``` df.groupby('station').describe() ``` Now let's suppose we're going to make a meteogram or similar and want to get all of the data for a single station. ``` df.groupby('station').get_group('0CO').head().reset_index(drop=True) ``` <div class="alert alert-success"> <b>EXERCISE</b>: <ul> <li>Round the temperature column to whole degrees.</li> <li>Group the observations by temperaturee and use the count method to see how many instances of the rounded temperatures there are in the dataset.</li> </div> ``` # Your code goes here ``` <button data-toggle="collapse" data-target="#sol5" class='btn btn-primary'>View Solution</button> <div id="sol5" class="collapse"> <code><pre> df.temperature = df.temperature.round() df.groupby('temperature').count() </pre></code> </div> <a href="#top">Top</a> <hr style="height:2px;">
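As an optional follow-up to the grouped statistics above, pandas can also compute several aggregations in one pass with `agg`; a small sketch assuming the cleaned `df` from the cells above (not part of the original workshop material):

```
# Several summary statistics per station in a single call (a lighter-weight subset of describe()).
df.groupby('station').agg(['min', 'max', 'std'])

# Or restricted to a single column:
df.groupby('station')['temperature'].agg(['min', 'max', 'std'])
```

The result carries a column MultiIndex with one level per statistic, which is often easier to work with than three separate `groupby` calls.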
## Generate test.txt, train.txt, validation.txt and class_weights to be used in training ``` %load_ext autoreload %autoreload 2 import os os.getcwd() import sys, glob, shutil os.chdir(os.path.dirname(os.getcwd())) os.getcwd() ``` ## Input params ``` import fnmatch import numpy as np import pickle base = "data/English" datasets = ["GoodImg","BadImg"] config = {"GoodImg":{"train":0.8, "validation":0.10, "test":0.10}, "BadImg":{"train":0.8, "validation":0.10, "test":0.10} } phases = ["train", "test", "validation"] for dataset in datasets: if os.path.exists(os.path.join(base, dataset, "train.txt")): os.remove(os.path.join(base, dataset, "train.txt")) if os.path.exists(os.path.join(base, dataset, "validation.txt")): os.remove(os.path.join(base, dataset, "validation.txt")) if os.path.exists(os.path.join(base, dataset, "test.txt")): os.remove(os.path.join(base, dataset, "test.txt")) if os.path.exists(os.path.join(base, dataset, dataset+".pkl")): os.remove(os.path.join(base, dataset, dataset+".pkl")) files_per_klass = [] no_of_files_per_klass = [] class_weights = {} for klass in os.listdir(os.path.join(base, dataset)): if not klass.startswith("."): dire = os.path.join(base, dataset, klass) files = [] #glob recursive doesn't work below 3.5 for root, dirnames, filenames in os.walk(dire): for filename in fnmatch.filter(filenames, '*.png'): files.append(os.path.join(root, filename)) files_per_klass.append((klass, files)) no_of_files_per_klass.append(len(files)) maxo = max(no_of_files_per_klass) train_per = config[dataset]["train"] #no of files in training per class val_per = config[dataset]["validation"] #no of files in validation per class for klass, files in files_per_klass: images = {} train_size = int(train_per*len(files)) val_size = int(val_per*len(files)) images["train"] = files[:train_size] images["validation"] = files[train_size:(train_size+val_size)] images["test"] = files[(train_size+val_size):] label = int(klass[-2:]) - 1 class_weights[label] = maxo//len(files) for phase in phases: with open('{}/{}.txt'.format(os.path.join(base,dataset),phase), 'a') as f: for image in images[phase]: image = os.path.relpath(image, os.path.join(base,dataset)) f.write('{} {}\n'.format(image, label)) with open('{}/{}.pkl'.format(os.path.join(base,dataset),dataset), 'wb') as f: pickle.dump(class_weights, f, pickle.HIGHEST_PROTOCOL) for phase in phases: content = [] with open('{}/{}.txt'.format(os.path.join(base,dataset),phase), 'r') as f: content = f.readlines() np.random.shuffle(content) print(dataset, " ", phase, " ", len(content)) with open('{}/{}.txt'.format(os.path.join(base,dataset), phase), 'w') as f: for line in content: f.write(line) print("file shuffling completed for {} \n".format(dataset)) ``` ### Just for Checking the generated class_weights ``` f = open('data/English/GoodImg/GoodImg.pkl', 'rb') class_weight = pickle.load(f) class_weight[0] ```
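The pickled `class_weights` are described as inputs to training, but the training script itself is not part of this notebook. One plausible (purely hypothetical) way to consume them is as per-class weights for a PyTorch cross-entropy loss, sketched below; the file path matches the check above.

```
import pickle

import torch
import torch.nn as nn

# Hypothetical consumer of the generated weights; the actual training code is not shown in this notebook.
with open('data/English/GoodImg/GoodImg.pkl', 'rb') as f:
    class_weights = pickle.load(f)  # {label_index: weight}, rarer classes get larger weights

num_classes = max(class_weights) + 1
weight = torch.tensor([float(class_weights[i]) for i in range(num_classes)])
criterion = nn.CrossEntropyLoss(weight=weight)
```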
420-A52-SF - Algorithmes d'apprentissage supervisé - Hiver 2020 - Spécialisation technique en Intelligence Artificielle<br/> MIT License - Copyright (c) 2020 Mikaël Swawola <br/> ![Projet #2 - Solution](static/project2-banner.png) <br/> ``` %reload_ext autoreload %autoreload 2 %matplotlib inline ``` ## 0 - Import des bibliothèques ``` from datetime import datetime from tqdm import tqdm from collections import defaultdict import numpy as np import pandas as pd from sklearn.model_selection import cross_val_score from sklearn.metrics import roc_auc_score, log_loss, f1_score, roc_curve, confusion_matrix from sklearn.preprocessing import StandardScaler from sklearn.model_selection import train_test_split from sklearn.utils import resample from sklearn.utils.fixes import loguniform from scipy.stats import randint from sklearn.model_selection import RandomizedSearchCV from sklearn.model_selection import GridSearchCV from sklearn.dummy import DummyClassifier from sklearn.linear_model import LogisticRegression from sklearn.neighbors import KNeighborsClassifier from sklearn.tree import DecisionTreeClassifier from sklearn.ensemble import BaggingClassifier from sklearn.ensemble import RandomForestClassifier from sklearn.ensemble import GradientBoostingClassifier import xgboost as xgb from helpers import convertDir2Deg, plot_confusion_matrix import matplotlib.pyplot as plt import seaborn as sns # Configuration de la visualisation sns.set(style="darkgrid") sns.set_context("notebook", font_scale=1.5, rc={"lines.linewidth": 2.5, }) sns.set(rc={'figure.figsize':(11.7,8.27)}) ``` ## 1 - Chargement et exploration sommaire des données ``` AUS = pd.read_csv('AUS_train.csv', index_col=['Date'], parse_dates=True) AUS.head() AUS = AUS.drop(columns=['Unnamed: 0']) AUS.columns ``` ## Date ``` AUS['Year'] = AUS.index.year AUS['Month'] = AUS.index.month ``` ### Location ``` AUS['Location'].unique() AUS['Location'].value_counts().plot(kind='barh') AUS = pd.get_dummies(AUS, columns = ['Location'], prefix="loc", drop_first=True) AUS.columns ``` ## Wind direction ``` AUS['WindGustDir'] = AUS['WindGustDir'].apply(lambda d : convertDir2Deg(d)) AUS['WindDir9am'] = AUS['WindDir9am'].apply(lambda d : convertDir2Deg(d)) AUS['WindDir3pm'] = AUS['WindDir3pm'].apply(lambda d : convertDir2Deg(d)) AUS['Wind1Cos'] = np.cos(AUS['WindGustDir']*2*np.pi/360) AUS['Wind1Sin'] = np.sin(AUS['WindGustDir']*2*np.pi/360) AUS['Wind2Cos'] = np.cos(AUS['WindDir9am']*2*np.pi/360) AUS['Wind2Sin'] = np.sin(AUS['WindDir9am']*2*np.pi/360) AUS['Wind3Cos'] = np.cos(AUS['WindDir3pm']*2*np.pi/360) AUS['Wind3Sin'] = np.sin(AUS['WindDir3pm']*2*np.pi/360) AUS = AUS.drop(columns=['WindGustDir','WindDir9am','WindDir3pm']) ``` #### Vérification de la proportion des classes positives (Rain) et négatives (No rain) ``` AUS['RainTomorrow'].value_counts().plot(kind='bar') AUS['RainTomorrow'] = (AUS['RainTomorrow'] == 'Yes').astype(int) AUS['RainToday'] = (AUS['RainToday'] == 'Yes').astype(int) y = AUS[['RainTomorrow']].values.ravel() # Le ravel sert à éviter un warning tanant ... 
AUS = AUS.drop(columns=['RainTomorrow']) X = AUS.values m = len(y) ``` ## 3 - Sous-échantillonnage du jeu de données Puisque le jeu de données est volumineux, nous allons commencer cette étude d'apprentissage supervisé avec seulement 20 % des données ``` X_sub, y_sub = resample(X, y, n_samples=0.2*m, stratify=y, random_state=2020) ``` ## 4 - Modèle de référence ``` clf_dummy = DummyClassifier(strategy="most_frequent").fit(X_sub, y_sub) dummy_score = log_loss(y_sub, clf_dummy.predict_proba(X_sub)[:,1]) history = {} history['Baseline'] = dummy_score history ``` ## 5 - Régression logistique [class sklearn.linear_model.LogisticRegression(penalty='l2', dual=False, tol=0.0001, C=1.0, fit_intercept=True, intercept_scaling=1, class_weight=None, random_state=None, solver='lbfgs', max_iter=100, multi_class='auto', verbose=0, warm_start=False, n_jobs=None, l1_ratio=None)](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) ``` # Standardisation scaler = StandardScaler().fit(X_sub) X_sub_scale = scaler.transform(X_sub) # Grille de recherche parameters = {'C':[0.01, 0.1, 1], 'l1_ratio':[0, 0.1, 0.2, 0.3, 0.4], 'penalty': ['none', 'elasticnet']} # Modèle clf_logreg = LogisticRegression(max_iter=10000, solver='saga', random_state=2020) # Recherche sur grille avec validation croisée clf_logreg_grid = GridSearchCV(clf_logreg, parameters, cv=5, scoring="neg_log_loss", verbose=1, n_jobs=-1) clf_logreg_grid.fit(X_sub_scale, y_sub) print(f'Meilleurs paramètres: {clf_logreg_grid.best_params_}') print(f'Meilleur score (mean CV): {clf_logreg_grid.best_score_}') history['Logistic regression'] = -clf_logreg_grid.best_score_ history ``` ## 6 - K plus proches voisins [class sklearn.neighbors.KNeighborsClassifier(n_neighbors=5, weights='uniform', algorithm='auto', leaf_size=30, p=2, metric='minkowski', metric_params=None, n_jobs=None, **kwargs)](https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html) ``` # Grille de recherche parameters = {'n_neighbors':[75, 100, 125, 150], 'p':[1, 2], 'weights':['uniform', 'distance']} # Modèle clf_knn = KNeighborsClassifier() # Recherche sur grille avec validation croisée clf_knn_grid = GridSearchCV(clf_knn, parameters, cv=5, scoring="neg_log_loss", verbose=1, n_jobs=-1) clf_knn_grid.fit(X_sub, y_sub) print(f'Meilleurs paramètres: {clf_knn_grid.best_params_}') print(f'Meilleur score (mean CV): {clf_knn_grid.best_score_}') history['KNN'] = -clf_knn_grid.best_score_ history ``` ## 7 - Arbres de décision [class sklearn.tree.DecisionTreeClassifier(criterion='gini', splitter='best', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features=None, random_state=None, max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, class_weight=None, presort='deprecated', ccp_alpha=0.0)](https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html) ``` # Distributions des hyperparamètres distributions = dict( criterion=['gini', 'entropy'], ccp_alpha=loguniform(1e-4, 1e3), max_depth=randint(2, 128)) # Modèle clf_tree = DecisionTreeClassifier(random_state=2020) # Recherche aléatoire avec validation croisée clf_tree_rnd = RandomizedSearchCV(clf_tree, distributions, n_iter=1000, cv=5, scoring="neg_log_loss", verbose=1, n_jobs=-1, random_state=2020) clf_tree_rnd.fit(X_sub, y_sub) print(f'Meilleurs paramètres: {clf_tree_rnd.best_params_}') print(f'Meilleur score (mean CV): {clf_tree_rnd.best_score_}') history['Decision Tree'] = 
-clf_tree_rnd.best_score_ history ``` ## 8 - Bagging (arbres de décision) [class sklearn.ensemble.BaggingClassifier(base_estimator=None, n_estimators=10, max_samples=1.0, max_features=1.0, bootstrap=True, bootstrap_features=False, oob_score=False, warm_start=False, n_jobs=None, random_state=None, verbose=0)](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.BaggingClassifier.html) ``` clf_bag = BaggingClassifier(base_estimator=clf_tree_rnd.best_estimator_, n_estimators=1000, verbose=1, n_jobs=-1, random_state=2020) clf_bag.fit(X_sub, y_sub) # Score de validation croisée cv_score = cross_val_score(clf_bag, X_sub, y_sub, cv=5, scoring="neg_log_loss", verbose=1, n_jobs=-1).mean() print(f'Score (mean CV): {cv_score}') history['Bagging'] = -cv_score history ``` ## 9 - Forêts aléatoires [class sklearn.ensemble.RandomForestClassifier(n_estimators=100, criterion='gini', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, bootstrap=True, oob_score=False, n_jobs=None, random_state=None, verbose=0, warm_start=False, class_weight=None, ccp_alpha=0.0, max_samples=None)](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html) ``` # Grille de recherche parameters = {'ccp_alpha': [1e-3, 1e-2, 1e-1, 1], 'criterion':['gini','entropy'], 'max_features': [None, 'log2', 'sqrt']} # Modèle clf_rf = RandomForestClassifier(n_estimators=100, random_state=2020) # Recherche sur grille avec validation croisée clf_rf_grid = GridSearchCV(clf_rf, parameters, cv=5, scoring="neg_log_loss", verbose=1, n_jobs=-1) clf_rf_grid.fit(X_sub, y_sub) print(f'Meilleurs paramètres: {clf_rf_grid.best_params_}') print(f'Meilleur score (mean CV): {clf_rf_grid.best_score_}') history['Random Forests'] = -clf_rf_grid.best_score_ history ``` ## 10 - Gradient Boosting ``` # Grille de recherche parameters = { 'learning_rate': [0.01, 0.1, 1], 'max_features': ['sqrt', None], 'loss': ['deviance', 'exponential'], 'ccp_alpha': [1e-5, 1e-4, 1e-3]} # Modèle clf_gb = GradientBoostingClassifier(n_estimators=100, random_state=2020) # Recherche sur grille avec validation croisée clf_gb_grid = GridSearchCV(clf_gb, parameters, cv=5, scoring="neg_log_loss", verbose=1, n_jobs=-1) clf_gb_grid.fit(X_sub, y_sub) print(f'Meilleurs paramètres: {clf_gb_grid.best_params_}') print(f'Meilleur score (mean CV): {clf_gb_grid.best_score_}') history['Gradient Boosting'] = -clf_gb_grid.best_score_ history ``` ## 11 - XGBoost ``` # Grille de recherche parameters = { 'learning_rate': [0.001, 0.01, 0.1], 'reg_alpha': [1e-4, 1e-3, 1e-2], 'reg_lambda': [1e-4, 1e-3, 1e-2]} # Modèle clf_xgb = xgb.XGBClassifier(objective='binary:logistic', colsample_bytree=0.3, max_depth=30, n_estimators=100, random_state=2020) # Recherche sur grille avec validation croisée clf_xgb_grid = GridSearchCV(clf_xgb, parameters, cv=5, scoring="neg_log_loss", verbose=1, n_jobs=-1) clf_xgb_grid.fit(X_sub, y_sub) print(f'Meilleurs paramètres: {clf_xgb_grid.best_params_}') print(f'Meilleur score (mean CV): {clf_xgb_grid.best_score_}') history['XGBoost'] = -clf_xgb_grid.best_score_ history ``` ## 12 - Courbes d'apprentissage pour le meilleur modèle ### XGBoost ``` lcurves = defaultdict(list) for p in tqdm([0.2, 0.25, 0.3, 0.35, 0.4, 0.45, 0.5, 0.55, 0.6, 0.65, 0.7]): X_, y_ = resample(X, y, n_samples=p*m, stratify=y, random_state=2020) Xt, Xv, yt, yv = train_test_split(X_, y_, test_size=0.3, stratify=y_, random_state=2020) 
clf_xgb_grid.best_estimator_.fit(Xt, yt) lcurves['Train'].append(log_loss(yt, clf_xgb_grid.predict_proba(Xt)[:,1])) lcurves['Val'].append(log_loss(yv, clf_xgb_grid.predict_proba(Xv)[:,1])) ``` #### Affichage des courbes d'apprentissage ``` plt.plot(lcurves['Train'], label="Train") plt.plot(lcurves['Val'], label="Validation") plt.legend() ``` ### Gradient Boosting ``` lcurves = defaultdict(list) for p in tqdm([0.2, 0.25, 0.3, 0.35, 0.4, 0.45, 0.5, 0.55, 0.6, 0.65, 0.7]): X_, y_ = resample(X, y, n_samples=p*m, stratify=y, random_state=2020) Xt, Xv, yt, yv = train_test_split(X_, y_, test_size=0.3, stratify=y_, random_state=2020) clf_gb_grid.best_estimator_.fit(Xt, yt) lcurves['Train'].append(log_loss(yt, clf_gb_grid.predict_proba(Xt)[:,1])) lcurves['Val'].append(log_loss(yv, clf_gb_grid.predict_proba(Xv)[:,1])) plt.plot(lcurves['Train'], label="Train") plt.plot(lcurves['Val'], label="Validation") plt.legend() ``` ## 13 - Réentraînement du meilleur modèle en prenant en compte les meilleurs hyperparamètres ``` clf_best = GradientBoostingClassifier( n_estimators=100, ccp_alpha=0.0001, learning_rate=0.1, loss='exponential', max_features= None, random_state=2020) clf_best.fit(X, y) cv_score = cross_val_score(clf_best, X, y, cv=5, scoring="neg_log_loss", verbose=1, n_jobs=-1) cv_score.mean() ``` ## 14 - Métriques #### Prédictions ``` y_train_pred_proba_best = clf_best.predict_proba(X)[:,1] ``` #### Aire sous la courbe ``` print(f'AUC = {roc_auc_score(y, y_train_pred_proba_best)}') ``` #### Courbe ROC ``` fpr_rf, tpr_rf, thresholds = roc_curve(y, y_train_pred_proba_best) fig = plt.figure(4, figsize=(6, 6)) plt.plot([0, 1], [0, 1], 'k--') plt.plot(fpr_rf, tpr_rf, label='Meilleur modèle') plt.xlabel('False positive rate') plt.ylabel('True positive rate') plt.title('ROC curve') plt.show() ``` #### Recherche du meilleur seuil ``` selected_threshold = thresholds[np.argmax(-fpr_rf + tpr_rf)] selected_threshold ``` #### F1 score ``` f1_score(y, y_train_pred_proba_best > selected_threshold) ``` #### Matrice de confusion ``` fig = plt.figure(3, figsize=(6, 6)) cnf_matrix = confusion_matrix(y, y_train_pred_proba_best > selected_threshold) np.set_printoptions(precision=2) plot_confusion_matrix(cnf_matrix, classes=['0','1'], title='Matrice de confusion') # Accuracy (61331+17755)/(61331+17755+15846+4604) ``` ## 15 - Prédictions sur le jeu de test #### On applique les mêmes transformations que pour le jeu d'entraînement ``` AUS_test = pd.read_csv('AUS_test.csv', index_col=['Date'], parse_dates=True) AUS_test = AUS_test.drop(columns=['Unnamed: 0']) AUS_test['Year'] = AUS_test.index.year AUS_test['Month'] = AUS_test.index.month AUS_test = pd.get_dummies(AUS_test, columns = ['Location'], prefix="loc", drop_first=True) AUS_test['WindGustDir'] = AUS_test['WindGustDir'].apply(lambda d : convertDir2Deg(d)) AUS_test['WindDir9am'] = AUS_test['WindDir9am'].apply(lambda d : convertDir2Deg(d)) AUS_test['WindDir3pm'] = AUS_test['WindDir3pm'].apply(lambda d : convertDir2Deg(d)) AUS_test['Wind1Cos'] = np.cos(AUS_test['WindGustDir']*2*np.pi/360) AUS_test['Wind1Sin'] = np.sin(AUS_test['WindGustDir']*2*np.pi/360) AUS_test['Wind2Cos'] = np.cos(AUS_test['WindDir9am']*2*np.pi/360) AUS_test['Wind2Sin'] = np.sin(AUS_test['WindDir9am']*2*np.pi/360) AUS_test['Wind3Cos'] = np.cos(AUS_test['WindDir3pm']*2*np.pi/360) AUS_test['Wind3Sin'] = np.sin(AUS_test['WindDir3pm']*2*np.pi/360) AUS_test = AUS_test.drop(columns=['WindGustDir','WindDir9am','WindDir3pm']) AUS_test['RainToday'] = (AUS_test['RainToday'] == 'Yes').astype(int) X_test = 
AUS_test.values ``` #### Calcul des prédictions sur le jeu de test ``` y_test_pred_proba_best = clf_best.predict_proba(X_test)[:,1] ``` #### Lecture de la véritable réponse ``` AUS_response = pd.read_csv('AUS_test_Rain_tomorrow.csv', index_col=['Date'], parse_dates=True) y_true = (AUS_response['RainTomorrow'] == 'Yes').astype(int) ``` #### Calcul du log-loss ``` log_loss(y_true, y_test_pred_proba_best) ``` #### Aire sous la courbe ``` print(f'AUC = {roc_auc_score(y_true, y_test_pred_proba_best)}') ``` #### Courbe ROC ``` fpr_rf, tpr_rf, thresholds = roc_curve(y_true, y_test_pred_proba_best) fig = plt.figure(4, figsize=(6, 6)) plt.plot([0, 1], [0, 1], 'k--') plt.plot(fpr_rf, tpr_rf, label='Meilleur modèle') plt.xlabel('False positive rate') plt.ylabel('True positive rate') plt.title('ROC curve') plt.show() ``` #### Score F1 ``` f1_score(y_true, y_test_pred_proba_best > selected_threshold) # Attention, utiliser le seuil trouvé par validation croisée ! ``` #### Matrice de confusion ``` fig = plt.figure(3, figsize=(6, 6)) cnf_matrix = confusion_matrix(y_true, y_test_pred_proba_best > selected_threshold) np.set_printoptions(precision=2) plot_confusion_matrix(cnf_matrix, classes=['0','1'], title='Matrice de confusion') # Accuracy (26094+7500)/(26094+7500+7045+2018) ```
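The accuracy above is computed by hand from the confusion-matrix entries; the same number can be obtained directly with scikit-learn. A small sketch reusing `y_true`, `y_test_pred_proba_best` and `selected_threshold` defined above:

```
from sklearn.metrics import accuracy_score

# Same quantity as the manual (TP + TN) / total computation above, on the test set.
accuracy_score(y_true, y_test_pred_proba_best > selected_threshold)
```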
``` !git clone https://github.com/parhamzm/Beijing-Pollution-DataSet !ls Beijing-Pollution-DataSet import torch import torchvision import torch.nn as nn from torchvision import transforms import pandas as pd import matplotlib.pyplot as plt import numpy as np from torch.utils.data import random_split from math import sqrt from numpy import concatenate from matplotlib import pyplot from pandas import read_csv from pandas import DataFrame from pandas import concat from sklearn.preprocessing import MinMaxScaler from sklearn.preprocessing import LabelEncoder from sklearn.metrics import mean_squared_error from numpy import array from numpy import hstack ``` # **Data Pre Processing** ``` DATA_DIR = "Beijing-Pollution-DataSet/" from pandas import read_csv from datetime import datetime from random import randint def select_month(sequences, n_samples=250): X, y = list(), list() rand_hour = randint(0, 24) rand_day = randint(0, 7) for i in range(0, n_samples): start_ix = rand_hour + rand_day*24 + 672 * i # 168 : Week hours! idxs = [] for j in range(0, 4): if j <=2: idx = start_ix + (j * 168) # Add different weeks idxs.append(idx) if j == 3: # Target idy = start_ix + (j * 168) seq_x = sequences[idxs, :] seq_y = sequences[idy, 0] y.append(seq_y) X.append(seq_x) return X, y # split a multivariate sequence into samples def split_sequences(sequences, n_steps, n_samples=12000, start_from=0): X, y = list(), list() for i in range(start_from, (start_from + n_samples)): # find the end of this pattern end_ix = i + n_steps # check if we are beyond the dataset # if end_ix > len(sequences): # break # gather input and output parts of the pattern seq_x = sequences[i:end_ix, :] seq_y = sequences[end_ix, 0] y.append(seq_y) X.append(seq_x) return array(X), array(y) # load dataset DATA_DIR = "Beijing-Pollution-DataSet/" data = np.load(DATA_DIR + 'polution_dataSet.npy') scaled_data = data x, y = select_month(data, n_samples=65) print("X shape => ", np.array(x).shape) print("y shape => ", np.array(y).shape) x = np.array(x) y = np.array(y) dataset = data train_X, train_y = x[0:50], y[0:50] #split_sequences(dataset, n_timesteps, n_samples=15000, start_from=0) valid_X, valid_y = x[50:], y[50:] #split_sequences(dataset, n_timesteps, n_samples=3000, start_from=15000) test_loader_X = torch.utils.data.DataLoader(dataset=(train_X), batch_size=20, shuffle=False) # train_X = torch.tensor(train_X, dtype=torch.float32) # train_y = torch.tensor(train_y, dtype=torch.float32) print("Train X Shape :=> ", train_X.shape) print("Train Y Shape :=> ", train_y.shape) print("####################################") print("Test X Shape :=> ", valid_X.shape) print("Test Y Shape :=> ", valid_y.shape) class RNN(torch.nn.Module): def __init__(self, n_features=8, n_output=1, seq_length=6, n_hidden_layers=233, n_layers=1): super(RNN, self).__init__() self.n_features = n_features self.seq_len = seq_length self.n_output = n_output self.n_hidden = n_hidden_layers # number of hidden states self.n_layers = n_layers # number of LSTM layers (stacked) # define RNN with specified parameters # bath_first means that the first dim of the input and output will be the batch_size self.rnn = nn.RNN(input_size=self.n_features, hidden_size=self.n_hidden, num_layers=self.n_layers, batch_first=True) # last, fully connected layer self.l_linear = torch.nn.Linear(self.n_hidden*self.seq_len, self.n_output) def forward(self, x, hidden): # hidden_state = torch.zeros(self.n_layers, x.size(0), self.n_hidden).requires_grad_() # cell_state = torch.zeros(self.n_layers, x.size(0), 
self.n_hidden).requires_grad_() batch_size = x.size(0) rnn_out, hidden = self.rnn(x, hidden) # print(rnn_out.shape) rnn_out = rnn_out.contiguous().view(batch_size, -1) # lstm_out(with batch_first = True) is # (batch_size,seq_len,num_directions * hidden_size) # for following linear layer we want to keep batch_size dimension and merge rest # .contiguous() -> solves tensor compatibility error # x = lstm_out.contiguous().view(batch_size, -1) out = self.l_linear(rnn_out) return out, hidden torch.manual_seed(13) model = RNN(n_features=8, n_output=1, seq_length=3, n_hidden_layers=233, n_layers=1) criterion = nn.MSELoss() optimizer = torch.optim.Adagrad(model.parameters(), lr=0.001) model = model#.to(device) criterion = criterion#.to(device) for p in model.parameters(): print(p.numel()) import time start_time = time.time() hidden = None hidden_test = None epochs = 200 model.train() batch_size = 5 running_loss_history = [] val_running_loss_history = [] for epoch in range(epochs): running_loss = 0.0 val_running_loss = 0.0 model.train() for b in range(0, len(train_X), batch_size): inpt = train_X[b:b+batch_size, :, :] target = train_y[b:b+batch_size] # print("Input Shape :=> ", inpt.shape) x_batch = torch.tensor(inpt, dtype=torch.float32) y_batch = torch.tensor(target, dtype=torch.float32) output, hidden = model(x_batch, hidden) hidden = hidden.data loss = criterion(output.view(-1), y_batch) running_loss += loss.item() loss.backward() optimizer.step() optimizer.zero_grad() else: with torch.no_grad(): # it will temprerorerly set all the required grad flags to be false model.eval() for b in range(0, len(valid_X), batch_size): inpt = valid_X[b:b+batch_size, :, :] target = valid_y[b:b+batch_size] x_batch_test = torch.tensor(inpt, dtype=torch.float32) y_batch_test = torch.tensor(target, dtype=torch.float32) # model.init_hidden(x_batch_test.size(0)) output_test, hidden_test = model(x_batch_test, hidden_test) hidden_test = hidden_test.data loss_valid = criterion(output_test.view(-1), y_batch_test) val_running_loss += loss_valid.item() val_epoch_loss = val_running_loss / len(valid_X) val_running_loss_history.append(val_epoch_loss) epoch_loss = running_loss / len(train_X) running_loss_history.append(epoch_loss) print('step : ' , epoch , ' Train loss : ' , epoch_loss, ', Valid Loss : => ', val_epoch_loss) print("***->>>-----------------------------------------------<<<-***") total_time = time.time() - start_time print("===========================================================") print("*********************************************************") print("The total Training Time is Equal with ==> : {0} Sec.".format(total_time)) print("*********************************************************") print("===========================================================") f, ax = plt.subplots(1, 1, figsize=(10, 7)) plt.title("Train & Valid Loss - RNN", fontsize=18) plt.xlabel("Epoch") plt.ylabel("Loss") plt.plot(running_loss_history, label='Train') plt.plot(val_running_loss_history, label='Test') # pyplot.plot(history.history['val_loss'], label='test') plt.legend() plt.show() test_x, test_y = x[50:], y[50:] model.eval() test_x = torch.tensor(test_x, dtype=torch.float32) test_y = torch.tensor(test_y, dtype=torch.float32) res, hid = model(test_x, None) loss_test = criterion(res.view(-1), test_y) future = 100 window_size = 11 # preds = dataset[15000:15100, 0].tolist() # print(len(preds)) # print(preds) # for i in range (future): # # seq = torch.FloatTensor(preds[-window_size:]) # with torch.no_grad(): # # seq = 
torch.tensor(seq, dtype=torch.float32).view(1, 11, 8) # # model.hidden = (torch.zeros(1, 1, model.hidden_size), # # torch.zeros(1, 1, model.hidden_size)) # preds.append(model(seq)) # print(preds[11:]) fig = plt.figure(figsize=(20, 7)) plt.title("Beijing Polution Prediction - RNN", fontsize=18) plt.ylabel('Polution') plt.xlabel('Num data') plt.grid(True) plt.autoscale(axis='x', tight=True) fig.autofmt_xdate() plt.plot(test_y, label="Real") plt.plot(res.detach().numpy(), label="Prediction") plt.legend() plt.show() test_x, test_y = x[50:], y[50:] model.eval() test_running_loss = 0 with torch.no_grad(): # it will temprerorerly set all the required grad flags to be false model.eval() for b in range(0, len(test_x), batch_size): inpt = test_x[b:b+batch_size, :, :] target = test_y[b:b+batch_size] x_batch_test = torch.tensor(inpt, dtype=torch.float32) y_batch_test = torch.tensor(target, dtype=torch.float32) # model.init_hidden(x_batch_test.size(0)) output_test, hidden_test = model(x_batch_test, hidden_test) hidden_test = hidden_test.data loss_test = criterion(output_test.view(-1), y_batch_test) test_running_loss += loss_test.item() test_epoch_loss = test_running_loss / len(test_x) print("##########################################################") print(">>>>---------------------------------------------------<<<<") print(">>>>----------***************************--------------<<<<") print("**** Test Loss :==>>> ", test_epoch_loss) print(">>>>----------***************************--------------<<<<") print(">>>>---------------------------------------------------<<<<") print("##########################################################") ```
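The notebook stops after reporting the test loss. If the trained weights were to be reused later, a standard way to persist and restore them would be the sketch below (not part of the original notebook; the file name is arbitrary):

```
# Hypothetical follow-up: save and restore the trained RNN weights.
torch.save(model.state_dict(), 'rnn_beijing_pollution.pt')

model_restored = RNN(n_features=8, n_output=1, seq_length=3, n_hidden_layers=233, n_layers=1)
model_restored.load_state_dict(torch.load('rnn_beijing_pollution.pt'))
model_restored.eval()
```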
# Simulation iteration Let $A$ be an $m \times m$ real symmetric matrix with eigenvalue decomposition: $\newcommand{\ffrac}{\displaystyle \frac} \newcommand{\Tran}[1]{{#1}^{\mathrm{T}}}A = Q \Lambda \Tran{Q}$, where the eigenvalues of $A$ ar ordered as $\left| \lambda_1 \right| > \left| \lambda_2 \right| > \left| \lambda_3 \right| \geq \cdots \geq \left| \lambda_m \right|$ with corresponding orthogonal orthonormal eigenvectors $\vec{q}_1, \vec{q}_2, \dots, \vec{q}_m$, the columns of matrix $Q$. Now we try to obtain the two largest eigenvalues $\lambda_1, \lambda_2$ and their corresponding eigenvectors. We start from $\vec{e}_1, \vec{e}_2$ to do the power iteration. $$\vec{e}_1 = \sum_{i=1}^{m} \alpha_i \vec{q}_i, \vec{e}_2 = \sum_{i=1}^{m} \beta_i \vec{q}_i$$ So after we assume that $\alpha_1 \neq 0$, as $k \to \infty$, $\left\| \ffrac{ A^k \vec{e}_1} {\alpha_1 \lambda_1^k} \right\| \to \left\| \vec{q}_1 \right\|$,$\left\| \ffrac{ A^k \vec{e}_2} {\beta_1 \lambda_1^k} \right\| \to \left\| \vec{q}_1 \right\|$ with convergence speed the ratio of $\ffrac{\lambda_2} {\lambda_1}$; also seeing from $\beta_1 A^k \vec{e}_1 - \alpha_1 A^k \vec{e}_2$, if we also assume that $\ffrac{\alpha_2} {\alpha_1} \neq \ffrac{\beta_2} {\beta_1}$, $i.e.$, $\left| \begin{array}{cc} \alpha_1 & \alpha_2 \\ \beta_1 & \beta_2 \end{array} \right| \neq 0$, as $k \to \infty$, in term of spanned space: $\left \langle A^k\vec{e}_1, A^k\vec{e}_2 \right \rangle \to \left \langle \vec{q}_1, \vec{q}_2 \right \rangle$. Now from $QR$ factorization, we have $$A^k = \left[\begin{array}{cccc} A^k\vec{e}_1 & A^k \vec{e}_2 & \cdots A^k \vec{e}_m \end{array} \right] = Q^{(k)} R^{(k)} \Rightarrow \left[\begin{array}{cc} A^k\vec{e}_1 & A^k \vec{e}_2 \end{array} \right] = \underline{Q}^{(k)}\underline{R}^{(k)} = \left[\begin{array}{cc} \underline{\vec{q}}_1^{(k)} & \underline{\vec{q}}_2^{(k)} \end{array} \right]\underline{R}^{(k)}$$ and as $k \to \infty$ we also have $\underline{\vec{q}}_i^{(k)} \to \vec{q}_i$. And we can generalize this method to obtain all the eigenvalues and eigenvectors of $A$. We first assume that $\left| \lambda_1 \right| > \left| \lambda_2 \right| > \cdots > \left| \lambda_m \right|$. And express the unit vector as the linear combination of $\vec{q}_1, \vec{q}_2, \dots , \vec{q}_m$: $\vec{e}_i = \displaystyle\sum_{j=1}^{m} z_{ij} \vec{q}_j$, $i.e.$, $$I = \left[\begin{array}{cccc} \vec{q}_1 & \vec{q}_2 & \cdots \vec{q}_m \end{array} \right] \Tran{\left[ \begin{array}{cccc} z_{11} & z_{12} & \cdots & z_{1m} \\ z_{21} & z_{22} & \cdots & z_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ z_{m1} & z_{m2} & \cdots & z_{mm} \\ \end{array}\right]} = Q \Tran{Q}$$ which require that $\left| \begin{array}{cccc} z_{11} & z_{12} & \cdots & z_{1i} \\ z_{21} & z_{22} & \cdots & z_{2i} \\ \vdots & \vdots & \ddots & \vdots \\ z_{i1} & z_{i2} & \cdots & z_{ii} \\ \end{array}\right| \neq 0$ for $i = 1 , 2 , \dots , m$, $i.e.$, we assume that all the **leading principal minors of the matrix** $Q$ are nonsingular. And actually we have $\left[ \begin{array}{cccc} z_{11} & z_{12} & \cdots & z_{1m} \\ z_{21} & z_{22} & \cdots & z_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ z_{m1} & z_{m2} & \cdots & z_{mm} \\ \end{array}\right] = Q$. 
Then as $k \to \infty$, we have for $j = 1, 2, \dots, m$ $$\left \langle A^k \vec{e}_1, A^k \vec{e}_2, \cdots, A^k \vec{e}_j \right \rangle \longrightarrow \left \langle \vec{q}_1, \vec{q}_2, \cdots, \vec{q}_j \right \rangle$$ Thus, each $\vec{q}_j$ is the limit of the component in $A^k \vec{e}_j$ orthogonal to the space $\left \langle A^k \vec{e}_1, A^k \vec{e}_2, \cdots, A^k \vec{e}_{j-1} \right \rangle$. And such orthogonal components can be obtained from the $QR$ factorization of the matrix $A^k$. $$A^k = \left[\begin{array}{cccc} A^k\vec{e}_1 & A^k \vec{e}_2 & \cdots A^k \vec{e}_m \end{array} \right] = \underline{Q}^{(k)} \underline{R}^{(k)} = \left[ \begin{array}{cccc} \underline{\vec{q}}_1^{(k)} & \underline{\vec{q}}_2^{(k)} & \cdots & \underline{\vec{q}}_m^{(k)} \end{array} \right] \underline{R}^{(k)}$$ And then as $k \to \infty$, $\underline{\vec{q}}_j^{(k)} \to \vec{q}_j$ for all possible $j$ with speed of each a ratio of $\left| \ffrac{\lambda_{j+1}} {\lambda_{j}}\right|$. In conclusion, for any real symmetric matrix $A$, as long as all its eigenvalues are *distinct* and all *leading principal minors* of the matrix $Q$ are nonsingular, then the orthogonal matrix $\underline{Q}^{(k)}$ from the $QR$ decomposition of $A^k$ converges to $Q$, then we can use Rayleigh quotient to find the corresponding eigenvalues. $Algorithm$ Given a real symmetric matrix $A$, we apply the **Simultaneous iteration** ``` AK = A for k = 1:n AK = A * AK end Q*R = AK ``` However, this method is not very stable. Since all columns of $A^K$ will converges to $\vec{q}_1$, so the condition number will get larger and larger with increasing $k$. Here's a more stable one. ``` AK = A for k = 1:n Q * R = AK AK = A * Q end ``` In addition, it's also hard to determine the convergence of the algorithm. See another one below. # QR algorithm without shift $Algorithm$ $QR$ algorithm without shift ```MATLAB A^{(0)} = A for k = 1:until convergence Q^{(k)} * R^{(k)} = A^{(k − 1)} A^{(k)} = R^{(k)} * Q^{(k)} end ``` For this algorithm, $A^{(k)}$ will converges to $\Lambda$. And what we gonna prove is for $k = 1, 2, \dots $, two equations hold. $$ A^k = Q^{(1)} Q^{(2)} \cdots Q^{(k)} R^{(k)} \cdots R^{(2)} R^{(1)} := \underline{Q}^{(k)}\underline{R}^{(k)} \\ A^{(k)}:=\Tran{\left( Q^{(1)} Q^{(2)} \cdots Q^{(k)} \right)} A \left( Q^{(1)} Q^{(2)} \cdots Q^{(k)} \right) $$ $Proof$ $$ A^1 = A^{(0)} = Q^{(1)} R^{(1)} := \underline{Q}^{(1)}\underline{R}^{(1)} \\ A^{(1)}:= R^{(1)} Q^{(1)} = \Tran{\left( Q^{(1)} \right)}Q^{(1)}R^{(1)} \left( Q^{(1)} \right) = \Tran{\left( Q^{(1)} \right)} A^{(0)} \left( Q^{(1)} \right) = \Tran{\left( Q^{(1)} \right)}A \left( Q^{(1)} \right) \\ Q^{(2)}R^{(2)} = A^{(1)} \\ A^{(2)}:= R^{(2)} Q^{(2)} = \Tran{\left( Q^{(2)} \right)}Q^{(2)}R^{(2)} \left( Q^{(2)} \right) = \Tran{\left( Q^{(2)} \right)} A^{(1)} \left( Q^{(2)} \right) = \Tran{\left( Q^{(2)} \right)} \Tran{\left( Q^{(1)} \right)}A \left( Q^{(1)} \right) \left( Q^{(2)} \right)\\ A^2 = A\left( Q^{(1)}R^{(1)} \right) = Q^{(1)}\left( \Tran{Q^{(1)}}A^{(1)}Q^{(1)} \right) R^{(1)} = Q^{(1)}\left( A^{(2)} \right) R^{(1)} = Q^{(1)}Q^{(2)}R^{(2)}R^{(1)} := \underline{Q}^{(2)}\underline{R}^{(2)} $$ Times after times of iteration, we have $$ A^{(k)} = \Tran{{\underline{Q}}^{(k)}} A {\underline{Q}}^{(k)}\\ A^k = \underline{Q}^{(k)}\underline{R}^{(k)} $$ so as $k \to \infty$ we have $\Tran{Q} A Q = \Lambda$. 
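Before stating the convergence theorem, the similarity identity just derived and the claimed convergence of $A^{(k)}$ to $\Lambda$ can be checked numerically. A minimal NumPy sketch on a random symmetric matrix (assuming, as above, distinct eigenvalue magnitudes, which holds generically):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = (B + B.T) / 2                  # random real symmetric test matrix

Ak, Q_acc = A.copy(), np.eye(5)
for _ in range(500):               # QR algorithm without shift
    Q, R = np.linalg.qr(Ak)
    Ak = R @ Q                     # A^(k) = R^(k) Q^(k)
    Q_acc = Q_acc @ Q              # accumulates Q^(1) Q^(2) ... Q^(k)

print(np.allclose(Q_acc.T @ A @ Q_acc, Ak))   # A^(k) = (Q^(1)...Q^(k))^T A (Q^(1)...Q^(k))
print(np.round(np.sort(np.diag(Ak)), 4))      # the diagonal approaches the eigenvalues of A ...
print(np.round(np.linalg.eigvalsh(A), 4))     # ... slowly, if two eigenvalue magnitudes are close
```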
Here's the theorem $Theorem$ Let $A$ is a real symmetric matrix with eigenvalue decomposition $A = Q \lambda \Tran{Q}$, and $\left| \lambda_1 \right| > \left| \lambda_2 \right| > \cdots > \left| \lambda_m \right|$. Assume that all the leading principal minors of $Q$ are nonsingular. Then from the $QR$ algorithm, $A^{(k)}$ converges to $\Lambda$, as $k \to \infty$, with a linear convergence rate determined by $\max\limits_{1 \leq j < m} \ffrac{\left| \lambda_{j+1} \right|} {\left| \lambda_{j} \right|}$ # Deflation in the implementation of QR algorithm So actually the sequence of $A^{(k)}$ will converge to $\Lambda$ with entries other than on the diagonal close to $0$, at least less than a certain tolerance value, $\varepsilon$. $Algorithm$ ```MATLAB function [ Anew ] = qralg( A ) l = length( A ); while ( | A(l,l-1) | > varepsilon ) [ Q, R ] = qr( A ); A = R * Q; end Anew = A; end function [ eigens ] = qreigens( A ) for k = m:-1:2 Anew = qralg( A ); eigens( k ) = Anew( k,k ); A = Anew(1:k-1 , 1:k-1 ); end eigens(1) = A(1,1); end ``` So each time we obtain an eigenvalue in the downright corner of the matrix, we reduce the one less dimension to that matrix until all the eigenvalues are found. # QR algorithm with shift Compared with power iteration, the inverse iteration and the Rayleigh quotient iteration converge faster when *shift value* is sufficiently close to a true eigenvalue of $A$. And when doing inverse iteration we can introduce a shift into the $QR$ algorithm and make it converges faster. See the algorithm first $$A^{k} = \underline{Q}^{(k)} \underline{R}^{k} \Rightarrow A^{-k} = \left( \underline{R}^{(k)} \right)^{-1} \left( \underline{Q}^{(k)} \right)^{-1} = \left( \underline{R}^{(k)} \right)^{-1} \Tran{\left( \underline{Q}^{(k)} \right)} \\ $$ Then since $A$ is symmetric, we have $\left( \Tran{A} \right)^{-k} = A^{-k} = \left( \underline{Q}^{(k)} \right) \left( \Tran{ \underline{R}^{(k)}} \right)^{-1}$. Notice that here $\left( \Tran{ \underline{R}^{(k)}} \right) ^{-1}$ is an lower triangular matrix. Then we denote $P$ the $m$ by $m$ permutation matrix. $P = \left[ \begin{array}{cccc} & & & 1 \\ & & 1 & \\ & \ddots & & \\ 1 & & & \end{array}\right] = \left[ \begin{array}{cccc} \vec{e}_m & \vec{e}_{m-1} & \cdots & \vec{e}_1 \end{array}\right]$. Then we have $$A^{-k}P = \left( \underline{Q}^{(k)}P \right) P\left( \Tran{ \underline{R}^{(k)}} \right)^{-1}P$$ So then we denote $P\left( \Tran{ \underline{R}^{(k)}} \right)^{-1}P$ as $\tilde{\underline{R}}^{(k)}$, we have the following fomula $$\left( A^{-1} \right)^{k} \left[ \begin{array}{cccc} \vec{e}_m & \vec{e}_{m-1} & \cdots & \vec{e}_1 \end{array}\right] = \left[ \begin{array}{cccc} \underline{\vec{q}}_m^{(k)} & \underline{\vec{q}}_{m-1}^{(k)} & \cdots & \underline{\vec{q}}_1^{(k)} \end{array}\right]\tilde{\underline{R}}^{(k)} $$ And the $QR$ algorithm with a constant shift value $\mu$ is as following. $Algorithm$ ``` A^{(0)} = A; for k = 1:n Q^{(k)} * R^{(k)} = A^{(k-1)} - \mu * I; A^{(k)} = R^{(k)} Q^{(k)} + \mu * I; end ``` This is essentially the $QR$ algorithm on the matrix $A - \mu I$. 
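The shifted step above, like the unshifted one, is a similarity transformation of $A$, so the spectrum is untouched; a quick NumPy check with an arbitrary constant shift:

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2
mu = 0.7                                     # any constant shift

Q, R = np.linalg.qr(A - mu * np.eye(4))      # Q R = A - mu I
A_next = R @ Q + mu * np.eye(4)              # one shifted QR step

print(np.allclose(np.linalg.eigvalsh(A_next), np.linalg.eigvalsh(A)))  # same eigenvalues
print(np.allclose(A_next, Q.T @ A @ Q))                                # A_next = Q^T A Q
```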
So similarly we have $$ \left( A - \mu I \right)^k = Q^{(1)} Q^{(2)} \cdots Q^{(k)} R^{(k)} \cdots R^{(2)} R^{(1)} := \underline{Q}^{(k)}\underline{R}^{(k)} \\ A^{(k)}:=\Tran{\left( Q^{(1)} Q^{(2)} \cdots Q^{(k)} \right)} A \left( Q^{(1)} Q^{(2)} \cdots Q^{(k)} \right) $$ Then written in terms of the inverse iteration, we have $$\left( \left( A-\mu I \right)^{-1} \right)^{k} \left[ \begin{array}{cccc} \vec{e}_m & \vec{e}_{m-1} & \cdots & \vec{e}_1 \end{array}\right] = \left[ \begin{array}{cccc} \underline{\vec{q}}_m^{(k)} & \underline{\vec{q}}_{m-1}^{(k)} & \cdots & \underline{\vec{q}}_1^{(k)} \end{array}\right]\tilde{\underline{R}}^{(k)} $$ Still the $\underline{\vec{q}}_{m}^{(k)}$ will converge to the eigenvalues of $A$, but the speed is now determined by the ratio $\ffrac{\left| \lambda_m - \mu \right|} {\left| \lambda_{m - 1} - \mu \right|}$, and we can even update the shift value $\mu$ in each iteration as we did before in the Rayleigh quotient iteration $Algorithm$ ``` A^{(0)} = A; for k = 1:n Q^{(k)} R^{(k)} = A^{(k-1)} - \mu^{(k)} * I A^{(k)} = R^{(k)} Q^{(k)} + \mu^{(k)} * I end ``` *** But how to find that shift value, and how to update each time? We want the $\mu^{(k)}$ are sufficiently close to $\lambda_m$, so when the convergence to $\lambda_m$ is achieved, we not only deflate the matrix to a smaller one but also shift the value of $\mu^{(k)} = A^{(k-1)}(m,m)$. Here's the explanation: From Rayleigh quotient, $\Tran{ \underline{\vec{q}}_m^{(k-1)} }A\underline{\vec{q}}_m^{(k-1)}$ and since $A^{(k-1)} = \Tran{ \left( \underline{Q}^{(k-1)} \right) } A \underline{Q}^{(k-1)}$, $$\Tran{ \underline{\vec{q}}_m^{(k-1)} }A\underline{\vec{q}}_m^{(k-1)} = \left( \Tran{\vec{e}_{m}} \Tran{ \left( \underline{Q}^{(k-1)} \right) } \right) A \left( \underline{Q}^{(k-1)}\vec{e}_{m} \right) = \Tran{\vec{e}_{m}}A^{(k-1)}\vec{e}_{m} = A^{(k-1)}(m,m) := \mu^{(k)}$$ And there's a better shift: *Wilkinson shift*, defined as follows: the one eigenvalue of the lower-rightmost $2 \times 2$ submatrix of $A^{(k-1)}$ that is closer to $A^{(k-1)}_{m,m}$. $$\left[ \begin{array}{cc} A^{(k-1)}_{m-1,m-1} & A^{(k-1)}_{m-1,m} \\[1em] A^{(k-1)}_{m,m-1} & A^{(k-1)}_{m,m} \end{array} \right]$$ And we can write that as $$\mu^{(k)} = A^{(k-1)}_{m,m} - \DeclareMathOperator*{\sign}{sign} \frac{ \sign \left( \delta \right) {A^{(k-1)} _{m,m-1}}^2} { \left| \delta \right| + \sqrt{ \delta^2 + {A^{(k-1)} _{m,m-1}}^2} } \\[1em] \delta = \frac{\left( A^{(k-1)} _{m-1,m-1} - A^{(k-1)} _{m,m} \right)} {2}$$
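Putting the pieces together — Wilkinson shift plus deflation — here is an illustrative NumPy sketch of the procedure described in this notebook. This is a teaching sketch, not a production implementation (practical codes first reduce $A$ to tridiagonal form):

```python
import numpy as np

def wilkinson_shift(A):
    """Wilkinson shift from the trailing 2x2 block, as defined above."""
    delta = (A[-2, -2] - A[-1, -1]) / 2.0
    c = A[-1, -2]
    if c == 0.0:
        return A[-1, -1]
    s = np.sign(delta) if delta != 0.0 else 1.0
    return A[-1, -1] - s * c**2 / (abs(delta) + np.hypot(delta, c))

def qr_eigenvalues(A, tol=1e-12):
    """All eigenvalues of a real symmetric A via shifted QR with deflation."""
    A = np.array(A, dtype=float)
    m = A.shape[0]
    eigs = np.empty(m)
    for k in range(m - 1, 0, -1):                 # work on the trailing (k+1)x(k+1) block
        while abs(A[k, k - 1]) > tol:
            mu = wilkinson_shift(A)
            Q, R = np.linalg.qr(A - mu * np.eye(k + 1))
            A = R @ Q + mu * np.eye(k + 1)
        eigs[k] = A[k, k]
        A = A[:k, :k]                             # deflate: drop the converged row and column
    eigs[0] = A[0, 0]
    return np.sort(eigs)

B = np.random.default_rng(2).standard_normal((6, 6))
S = (B + B.T) / 2
print(np.allclose(qr_eigenvalues(S), np.linalg.eigvalsh(S)))   # True
```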
``` import numpy as np import pandas as pd import statsmodels.api as sm import matplotlib.pyplot as plt from sklearn.linear_model import LinearRegression import seaborn as sns sns.set() raw_data = pd.read_csv('1.04. Real-life example.csv') raw_data.head() ``` ## Preprocessing ``` raw_data.describe(include='all') data = raw_data.drop(['Model'],axis=1) data.describe(include='all') # dropped the Model category because it is irrelevant to increasing the accuracy of my model. data.isnull().sum() #this shows the missing values in my dataset data_no_mv = data.dropna(axis=0) #I drop the missing values here, which is acceptable because it is less than 5% of the observations. data_no_mv.describe(include='all') ``` ### PDFs #### Here I check the Probability Distribution Functions (PDF) of the Independent Variables Price, Year, Mileage, and Engine Volume to identify and weed out the Outliers. They can adversely affect the accuracy of my Regression model because a Regression attempts to draw a line closest to all the data; including the Outliers might inflate/deflate my model. ``` sns.distplot(data_no_mv['Price']) q = data_no_mv['Price'].quantile(0.99) data_1 = data_no_mv[data_no_mv['Price']<q] data_1.describe(include='all') # I decided to exclude the observations in the 99th percentile and above to get rid of the Outliers. sns.distplot(data_1['Price']) # Now the Price variable only includes observations up to the 98th percentile and has much fewer Outliers. sns.distplot(data_no_mv['Mileage']) q = data_1['Mileage'].quantile(0.99) data_2 = data_1[data_1['Mileage']<q] # Similar to the Price variable, I decided to exclude the observations in the 99th percentile and beyond to remove the Outliers. sns.distplot(data_2['Mileage']) sns.distplot(data_no_mv['EngineV']) # The PDF looks unusual compared to the previous two. data_3 = data_2[data_2['EngineV']<6.6] # After research, I found out that the normal interval of the Engine Volume falls between 06. to 6.5. # The observations beyond 6.5 are mostly 99.99 - a variable that was used in the past to label missing values. It is a bad idea to label missing values in this manner now. # I decided to remove such observations as they are Outliers. sns.distplot(data_3['EngineV']) sns.distplot(data_no_mv['Year']) # Most cars are newer but there are a few vintage cars in the variable. q = data_3['Year'].quantile(0.01) data_4 = data_3[data_3['Year']>q] # I decided to remove the 1st percentile and keep the rest sns.distplot(data_4['Year']) data_cleaned = data_4.reset_index(drop=True) #I reset the index to completely forget the old index. data_cleaned.describe(include='all') # This excludes ~250 problematic observations that could've hindered the accuracy of my model if left unchecked. ``` ## Checking the OLS assumptions ### Distribution ``` f, (ax1, ax2, ax3) = plt.subplots(1, 3, sharey=True, figsize =(15,3)) ax1.scatter(data_cleaned['Year'],data_cleaned['Price']) ax1.set_title('Price and Year') ax2.scatter(data_cleaned['EngineV'],data_cleaned['Price']) ax2.set_title('Price and EngineV') ax3.scatter(data_cleaned['Mileage'],data_cleaned['Price']) ax3.set_title('Price and Mileage') plt.show() # These are not linear regressions and shows that I should first transform one or more variables to run the Regression. sns.distplot(data_cleaned['Price']) #Here I check the distribution of the dependent variable Price. log_price = np.log(data_cleaned['Price']) # Here I used the log transformation to fix heteroscedasticity and remove outliers from the variable Price. 
data_cleaned['log_price'] = log_price data_cleaned f, (ax1, ax2, ax3) = plt.subplots(1, 3, sharey=True, figsize =(15,3)) ax1.scatter(data_cleaned['Year'],data_cleaned['log_price']) ax1.set_title('Log Price and Year') ax2.scatter(data_cleaned['EngineV'],data_cleaned['log_price']) ax2.set_title('Log Price and EngineV') ax3.scatter(data_cleaned['Mileage'],data_cleaned['log_price']) ax3.set_title('Log Price and Mileage') plt.show() # After performing the log transformation on Price, the PDFs now show a linear regression line. data_cleaned = data_cleaned.drop(['Price'],axis=1) # Here I dropped the variable Price and replaced it with log_Price because the former has no statistical significance to my model. ``` ### Multicollinearity ``` data_cleaned.columns.values from statsmodels.stats.outliers_influence import variance_inflation_factor variables = data_cleaned[['Mileage','Year','EngineV']] vif = pd.DataFrame() vif["VIF"] = [variance_inflation_factor(variables.values, i) for i in range(variables.shape[1])] vif["features"] = variables.columns # Through Statsmodels, I used the Variance Inflation Factor here to check for multicollinearity in my variables. # While I expect multicollinearity in my data, I wanted to check the variables the introduce unnacceptable correlation to my model; they have high VIFs. vif data_no_multicollinearity = data_cleaned.drop(['Year'],axis=1) # Dropped 'Year' because it has an unacceptably high VIF and is therefore a feature that introduces correlation in my data data_with_dummies = pd.get_dummies(data_no_multicollinearity, drop_first=True) # This identifies categorical variables and creates dummies automatically to avoid multicollinearity in my Model data_with_dummies.head() data_with_dummies.columns.values cols = ['log_price', 'Mileage', 'EngineV', 'Brand_BMW', 'Brand_Mercedes-Benz', 'Brand_Mitsubishi', 'Brand_Renault', 'Brand_Toyota', 'Brand_Volkswagen', 'Body_hatch', 'Body_other', 'Body_sedan', 'Body_vagon', 'Body_van', 'Engine Type_Gas', 'Engine Type_Other', 'Engine Type_Petrol', 'Registration_yes'] data_preprocessed = data_with_dummies[cols] data_preprocessed.head() # Here I arranged the data into a table. ``` ## Training my Regression Model ``` targets = data_preprocessed['log_price'] inputs = data_preprocessed.drop(['log_price'], axis=1) # I removed log_price in the inputs to exclude the transformed dependent variable from my inputs. from sklearn.preprocessing import StandardScaler scaler = StandardScaler () scaler.fit(inputs) inputs_scaled = scaler.transform(inputs) # This standardizes my inputs; in other words, it subtractrs the mean and divide by the standard deviation from each observation. from sklearn.model_selection import train_test_split x_train, x_test, y_train, y_test = train_test_split(inputs_scaled, targets, test_size=0.2, random_state=365) # I did this to avoid overfitting my model to my data. # The default setting of the train-test split is 75-25, but here I chose 80-20. # I used 'random_state' to ensure that I get the same random shuffle every time I split my data. 
reg = LinearRegression() reg.fit(x_train, y_train) y_hat = reg.predict(x_train) plt.scatter(y_train, y_hat) plt.xlabel('Targets(y_train)', size=20) plt.ylabel('Predictions(y_hat)', size=20) plt.xlim(6,13) plt.ylim(6,13) plt.show() ``` ### Using Residuals to check the Model ``` sns.distplot(y_train-y_hat) plt.title("Residuals PDF", size=20) #to check whether the Residuals is normally distributed and the variability of the outcome reg.score(x_train, y_train) reg.intercept_ # The intercept or bias calibrates the model: without it, each feature will be off the mark. reg.coef_ reg_summary=pd.DataFrame(inputs.columns.values, columns=['Features']) reg_summary['Weights']=reg.coef_ reg_summary # A feature with a coefficient of 0 means that it has no significance to the model. #to know the categorical variables of my features data_cleaned['Brand'].unique() data_cleaned['Body'].unique() data_cleaned['Engine Type'].unique() data_cleaned['Registration'].unique() ``` ## Testing my Model ``` y_hat_test = reg.predict(x_test) plt.scatter(y_test, y_hat_test, alpha=0.2) plt.xlabel('Targets(y_test)', size=20) plt.ylabel('Predictions(y_hat_test)', size=20) plt.xlim(6, 13) plt.ylim(6, 13) plt.show() df_pf = pd.DataFrame(np.exp(y_hat_test), columns=['Prediction']) #this returns the exponential of Y Hat Test and removes the log. df_pf.head() y_test = y_test.reset_index(drop=True) y_test.head() df_pf['Target'] = np.exp(y_test) df_pf df_pf['Residual'] = df_pf['Target'] - df_pf['Prediction'] df_pf['Difference%'] = np.absolute(df_pf['Residual']/df_pf['Target']*100) df_pf pd.options.display.max_rows = 999 pd.set_option('display.float_format', lambda x: '%2f' % x) df_pf.sort_values(by=['Difference%']) # This table shows the difference in percetage of the prediction and the target using the test data. # I included the Residuals because examining them as the same as examining the heart of the alogirthm. ```
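As a complement to the percentage-difference table above, aggregate error metrics on the original price scale can be computed with scikit-learn. A small sketch using the `df_pf` frame built above:

```
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error

# Overall error of the test-set predictions on the original (non-log) price scale.
mae = mean_absolute_error(df_pf['Target'], df_pf['Prediction'])
rmse = np.sqrt(mean_squared_error(df_pf['Target'], df_pf['Prediction']))
print(f'MAE: {mae:.2f}   RMSE: {rmse:.2f}')
```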
<a href="https://colab.research.google.com/github/AnzorGozalishvili/active_learning_playground/blob/main/notebooks/regular_sentiment_analysis_pipeline.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # Simple Sentiment Analysis Pipeline Here we train simple 2 layer neural network for sentiment analysis. - Model: 2 Fully Connected layer NN (PyTorch) - Dataset: Sentiment Analysis - Embedding: spacy en_core_web_lg (mean aggregated embeddings of the text) Install Requirements from [repository](https://github.com/AnzorGozalishvili/active_learning_playground) ``` !wget https://raw.githubusercontent.com/AnzorGozalishvili/active_learning_playground/main/requirements.txt !pip install -r requirements.txt !rm requirements.txt !pip install spacy-sentence-bert==0.1.2 ``` # Imports ``` # system import os import sys # data and models import numpy as np import pandas as pd import scipy # utilities import random import re import datetime # text embeddings import spacy import spacy_sentence_bert # scikit-learn stuff import sklearn from sklearn.model_selection import train_test_split from sklearn.metrics import f1_score, roc_auc_score, precision_score, recall_score # PyTorch stuff import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim # visualization import matplotlib.pyplot as plt from tqdm import tqdm # dataset retrieval from io import BytesIO from zipfile import ZipFile from urllib.request import urlopen ``` # Set Random Seeds For reproducibility we set several random seeds which are recommended by PyTorch. ([See here](https://pytorch.org/docs/stable/notes/randomness.html)) ``` random.seed(hash("setting random seeds") % 2**32 - 1) np.random.seed(hash("improves reproducibility") % 2**32 - 1) torch.manual_seed(hash("PyTorch") % 2**32 - 1) RANDOM_SEED = 42 ``` # Dataset Let's download dataset from given [url](https://archive.ics.uci.edu/ml/machine-learning-databases/00228/smsspamcollection.zip), then take a look at samples. 
## Retrieve dataset ``` def get_dataset(): resp = urlopen("https://archive.ics.uci.edu/ml/machine-learning-databases/00228/smsspamcollection.zip") zipfile = ZipFile(BytesIO(resp.read())) lines = list() for line in zipfile.open('SMSSpamCollection').readlines(): lines.append(line.decode('utf-8')) data = pd.DataFrame(data=lines) new = data[0].str.split("\t", n = 1, expand = True) data["text"]= new[1] data["label"]= new[0] data.drop(columns=[0], inplace = True) return data dataset = get_dataset() ``` ## Explore Samples ``` dataset.head() dataset.shape ``` ## Generate Train/Test splits and move forward We see the imbalance in target variable ``` dataset.label.value_counts() ``` We have duplicated records ``` dataset.duplicated().sum() ``` remove these duplicates ``` dataset.drop_duplicates(inplace=True) dataset.reset_index(drop=True, inplace=True) ``` split into train/test splits with 20/80 ratio ``` train, test = train_test_split(dataset, test_size=0.2, random_state=RANDOM_SEED) ``` Store these sets into dataset directory ``` DATASET_NAME = "SMSSpamCollection" if not os.path.exists('data'): os.mkdir('data') if not os.path.exists(f'data/{DATASET_NAME}'): os.mkdir(f'data/{DATASET_NAME}') train.to_csv(f'data/{DATASET_NAME}/train.csv') test.to_csv(f'data/{DATASET_NAME}/test.csv') ``` Load again and continue ``` train = pd.read_csv(f'data/{DATASET_NAME}/train.csv', index_col=0) test = pd.read_csv(f'data/{DATASET_NAME}/test.csv', index_col=0) train.shape, test.shape train.head(2) ``` # Generate Embeddings We use spacy embeddings to vectorize our samples ``` class Vectorizer: """Generates text embedding using deep learning model""" def __init__(self, *args, **kwargs): self.model = spacy_sentence_bert.load_model(kwargs.get('model', 'en_paraphrase_distilroberta_base_v1')) def __call__(self, text): if not text: text = "" return self.model(text).vector vectorizer = Vectorizer() EMBEDDING_DIM = vectorizer('sample text for embedding').shape[0]; EMBEDDING_DIM train['vector'] = train.text.apply(vectorizer).apply(lambda x: x.tolist()) test['vector'] = test.text.apply(vectorizer).apply(lambda x: x.tolist()) DATASET_NAME = "SMSSpamCollection" if not os.path.exists('data'): os.mkdir('data') if not os.path.exists(f'data/{DATASET_NAME}'): os.mkdir(f'data/{DATASET_NAME}') train.to_csv(f'data/{DATASET_NAME}/train_vectorized.csv') test.to_csv(f'data/{DATASET_NAME}/test_vectorized.csv') train = pd.read_csv(f'data/{DATASET_NAME}/train_vectorized.csv', index_col=0) test = pd.read_csv(f'data/{DATASET_NAME}/test_vectorized.csv', index_col=0) train['vector'] = train.vector.apply(eval) test['vector'] = test.vector.apply(eval) ``` # PyTorch ML Pipeline ## Model Example of model is taken from [here](https://github.com/rmunro/pytorch_active_learning/blob/master/active_learning_basics.py) ``` class MLP(nn.Module): """Simple 2 Layer Fully Connected NN (MLP)""" def __init__(self, num_labels, emb_dim): super(MLP, self).__init__() # Define model with one hidden layer with 128 neurons self.linear1 = nn.Linear(emb_dim, 128) self.linear2 = nn.Linear(128, num_labels) def forward(self, vector): hidden1 = self.linear1(vector).clamp(min=0) # ReLU output = self.linear2(hidden1) return F.log_softmax(output, dim=1) MLP(num_labels=2, emb_dim=EMBEDDING_DIM) train.sample() torch.Tensor(train.vector.iloc[:10].values.tolist()) class Trainer: """Trains PyTorch model on training data and also evaluated""" def __init__(self, *args, **kwargs): self.model = kwargs.get('model', MLP(num_labels=2, emb_dim=EMBEDDING_DIM)) self.loss_function = 
kwargs.get('loss_function', nn.NLLLoss()) self.optimizer = kwargs.get('optimizer', optim.SGD(self.model.parameters(), lr=0.01)) self.label_to_idx = kwargs.get('label_to_idx', {'ham': 0, 'spam': 1}) self.idx_to_label = {v:k for k,v in self.label_to_idx.items()} self.batch_size = kwargs.get('batch_size', 64) self.losses = [] def train(self, training_data, test_data, epochs): for epoch in range(epochs): print(f'Epoch: {str(epoch)}') shuffled_training_data = training_data.sample(frac=1.0, random_state=RANDOM_SEED + epoch) for batch_idx, start_idx in enumerate(range(0, len(shuffled_training_data), self.batch_size)): vecs = torch.Tensor( shuffled_training_data.vector.iloc[start_idx:start_idx+self.batch_size].tolist() ) targets = torch.LongTensor( shuffled_training_data.label.iloc[start_idx:start_idx+self.batch_size].apply(lambda x: self.label_to_idx[x]).tolist() ) self.model.zero_grad() log_probs = self.model(vecs) loss = self.loss_function(log_probs, targets) loss.backward() self.optimizer.step() self.losses.append(loss.item()) print(f"\tBatch: {batch_idx}\tLoss: {self.losses[-1]}") eval_results = self.evaluate(test_data) print(f"Evaluation Results: {repr(eval_results)}") # save model to path that is alphanumeric and includes number of items and accuracies in filename timestamp = re.sub('\.[0-9]*','_',str(datetime.datetime.now())).replace(" ", "_").replace("-", "").replace(":","") f1_score = str(eval_results['f1']) model_path = "models/"+timestamp+f1_score+".params" if not os.path.exists('models'): os.mkdir('models') torch.save(self.model.state_dict(), model_path) return model_path def evaluate(self, dataset): targets = [] preds = [] probs = [] with torch.no_grad(): for idx, row in dataset.iterrows(): vec = torch.Tensor(row.vector).view(1, -1) target = self.label_to_idx[row.label] logits = self.model(vec) prob = np.exp(logits.cpu().data.numpy()[0]).tolist() pred = np.argmax(prob) probs.append(prob[1]) preds.append(pred) targets.append(target) results = { "f1": round(f1_score(targets, preds, pos_label=1), 3), "precision": round(precision_score(targets, preds, pos_label=1), 3), "recall": round(recall_score(targets, preds, pos_label=1), 3), "roc_auc": round(roc_auc_score(targets, probs, labels=list(self.label_to_idx.keys())), 3), } return results def plot_loss(self): plt.figure(figsize=(10, 6)) plt.plot(self.losses) plt.show() LABEL_TO_IDX = {item:idx for idx, item in enumerate(sorted(train.label.unique().tolist()))}; LABEL_TO_IDX mlp = MLP(num_labels=2, emb_dim=EMBEDDING_DIM) trainer = Trainer( **{ "model": mlp, "loss_function": nn.NLLLoss(), "optimizer": optim.SGD(mlp.parameters(), lr=0.01), "label_to_idx": LABEL_TO_IDX, "batch_size": 256, } ) trainer.train(training_data=train, test_data=test, epochs=10) trainer.plot_loss() ```
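Finally, a short sketch (not part of the original notebook) of how the trained model could score a new message, reusing the `vectorizer`, `mlp` and label mapping defined above; the helper name below is made up for illustration:

```
def predict_text(text, model=mlp, vec=vectorizer, idx_to_label=trainer.idx_to_label):
    """Classify a single raw message with the trained MLP (hypothetical helper)."""
    model.eval()
    with torch.no_grad():
        x = torch.Tensor(vec(text)).view(1, -1)   # shape (1, EMBEDDING_DIM)
        probs = torch.exp(model(x))               # log-softmax -> probabilities
        idx = int(probs.argmax(dim=1))
    return idx_to_label[idx], probs[0, idx].item()

predict_text("Congratulations! You have won a free prize, call now!")
```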
# Bias Removal

Climate models can have biases relative to different verification datasets. Commonly, biases are removed by postprocessing before verification of forecasting skill. `climpred` provides convenience functions to do so.

```
import climpred
import xarray as xr
import matplotlib.pyplot as plt
from climpred import HindcastEnsemble

hind = climpred.tutorial.load_dataset('CESM-DP-SST')  # CESM-DPLE hindcast ensemble output.
obs = climpred.tutorial.load_dataset('ERSST')  # observations
hind["lead"].attrs["units"] = "years"
```

We begin by removing a mean climatology for the observations, since `CESM-DPLE` generates its anomalies over this same time period.

```
obs = obs - obs.sel(time=slice('1964', '2014')).mean('time')
hindcast = HindcastEnsemble(hind)
hindcast = hindcast.add_observations(obs)
hindcast.plot()
```

The warming of the `observations` is similar to `initialized`.

## Mean bias removal

Typically, bias depends on lead time and should therefore also be removed as a function of lead time.

```
bias = hindcast.verify(metric='bias', comparison='e2o', dim=[], alignment='same_verifs')
bias.SST.plot()
```

Against `observations`, there is a small cold bias for the 1980 and 1990 initialization years and a warm bias before and after.

```
# the lead-time dependent mean bias over all initializations is quite small but negative
mean_bias = bias.mean('init')
mean_bias.SST.plot()
```

### Cross Validation

To remove the mean bias quickly, the mean bias over all initializations is subtracted. For formally correct bias removal with cross validation, the given initialization is left out when subtracting the mean bias. `climpred` wraps these functions in `HindcastEnsemble.remove_bias(how='mean', cross_validate={bool})`.

```
hindcast.remove_bias(how='mean', cross_validate=True, alignment='same_verifs').plot()
plt.title('hindcast lead timeseries removed for unconditional mean bias')
plt.show()
```

## Skill

Distance-based accuracy metrics (`mse`, `rmse`, `nrmse`, ...) are sensitive to mean bias removal. Correlation metrics (`pearson_r`, `spearman_r`) are insensitive to bias correction.

```
metric = 'rmse'
hindcast.verify(metric=metric, comparison='e2o', dim='init', alignment='same_verifs')['SST'].plot(label='no bias correction')
hindcast.remove_bias(cross_validate=False, alignment='same_verifs') \
    .verify(metric=metric, comparison='e2o', dim='init', alignment='same_verifs').SST.plot(label='bias correction without cross validation')
hindcast.remove_bias(cross_validate=True, alignment='same_verifs') \
    .verify(metric=metric, comparison='e2o', dim='init', alignment='same_verifs').SST.plot(label='formally correct bias correction with cross validation')
plt.legend()
plt.title(f"{metric.upper()} SST evaluated against observations")
plt.show()
```
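The sensitivity claim above can be illustrated with a toy example that is independent of `climpred`: an additive bias inflates RMSE but leaves the Pearson correlation untouched. A minimal sketch with synthetic data:

```
import numpy as np

rng = np.random.default_rng(42)
obs = rng.standard_normal(100)
fcst = obs + 0.5 * rng.standard_normal(100) + 1.0       # skillful forecast with a +1.0 mean bias

rmse_raw = np.sqrt(np.mean((fcst - obs) ** 2))
rmse_debiased = np.sqrt(np.mean((fcst - (fcst.mean() - obs.mean()) - obs) ** 2))
corr_raw = np.corrcoef(fcst, obs)[0, 1]
corr_debiased = np.corrcoef(fcst - 1.0, obs)[0, 1]      # identical: correlation ignores additive shifts

print(rmse_raw, rmse_debiased)   # RMSE improves once the mean bias is removed
print(corr_raw, corr_debiased)   # unchanged
```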