As before, we can access the attributes of the instance of the class by using the dot notation:

SkinnyBlueRectangle.height
SkinnyBlueRectangle.width
SkinnyBlueRectangle.color

We can draw the object:

SkinnyBlueRectangle.drawRectangle()

Let's create the object "FatYellowRectangle" of type Rectangle:

FatYellowRectangle = Rectangle(20, 5, 'yellow')

We can access the attributes of the instance of the class by using the dot notation:

FatYellowRectangle.height
FatYellowRectangle.width
FatYellowRectangle.color

We can draw the object:

FatYellowRectangle.drawRectangle()
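For readers following along without the earlier cells: the `Rectangle` class exercised above is defined earlier in the notebook and not shown in this excerpt. A minimal sketch consistent with the calls here might look like the following; the `(width, height, color)` argument order and the matplotlib-based `drawRectangle` body are inferred from the usage above, not copied from the course material.

```python
import matplotlib.pyplot as plt
import matplotlib.patches as patches

class Rectangle:
    # Attribute names match the dot-notation accesses above.
    def __init__(self, width, height, color):
        self.width = width
        self.height = height
        self.color = color

    # Draw the rectangle at the origin with its stored size and color.
    def drawRectangle(self):
        fig, ax = plt.subplots()
        ax.add_patch(patches.Rectangle((0, 0), self.width, self.height,
                                       facecolor=self.color))
        ax.set_xlim(0, max(self.width, self.height))
        ax.set_ylim(0, max(self.width, self.height))
        plt.show()
```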
Pytorch Rals-C-SAGAN

* Ra - Relativistic Average
* Ls - Least Squares
* C - Conditional
* SA - Self-Attention
* DCGAN - Deep Convolutional Generative Adversarial Network

References:
* https://www.kaggle.com/speedwagon/ralsgan-dogs
* https://www.kaggle.com/cdeotte/dog-breed-cgan
* https://github.com/eriklindernoren/PyTorch-GAN/blob/master/implementations/cgan/cgan.py
* https://github.com/voletiv/self-attention-GAN-pytorch/blob/master/sagan_models.py

loss_calculation = 'hinge'
# loss_calculation = 'rals'
batch_size = 32
crop_dog = True  # whether to use the dog bounding-box annotations
noisy_label = True  # label-smoothing-style noisy labels
R_uni = (0.70, 0.95)  # range of the "real" labels when smoothing
F_uni = (0.05, 0.15)  # range of the "fake" labels when smoothing
Gcbn = False  # whether the generator uses ConditionalBatchNorm2d
Glrelu = True  # whether the generator uses LeakyReLU
flip_p = 0.5  # probability of RandomHorizontalFlip
n_epochs = 301
use_pixelnorm = True
# optimizer settings
G_opt = 'adaboundw'
# G_opt = 'adam'
G_lr = 0.0002
G_betas = (0.5, 0.99)  # Adam-family optimizers only
G_final_lr = 0.5  # AdaBound only
G_weight_decay = 5e-4  # AdaBound only
G_eta_min = 0.00001  # cosine-annealing parameter
D_opt = 'adaboundw'
# D_opt = 'adam'
# D_opt = 'SGD'
D_lr = 0.00005
D_betas = (0.1, 0.99)  # Adam-family optimizers only
D_final_lr = 0.1  # AdaBound only
D_weight_decay = 0  # AdaBound only
D_eta_min = 0.00005  # cosine-annealing parameter
import os
import gzip, pickle
import pathlib
import time
import urllib
import warnings
import zipfile
import xml.etree.ElementTree as ET

import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import imgaug as ia
import imgaug.augmenters as iaa
import tensorflow as tf
from scipy import linalg
import PIL
from PIL import Image

import torch
import torch.nn as nn
import torch.nn.parallel
import torch.nn.functional as F
import torch.optim as optim
import torch.utils.data
from torch.autograd import Variable
from torch.nn import Parameter
from torch.nn.init import xavier_uniform_
from torch.nn.utils import spectral_norm

import torchvision
import torchvision.datasets as dset
import torchvision.transforms as transforms
import torchvision.utils as vutils
from torchvision import datasets
from torchvision.utils import save_image
from torch.utils.data import Dataset, DataLoader

from tqdm import tqdm_notebook as tqdm  # notebook-friendly progress bar (the final binding in the original)

kernel_start_time = time.perf_counter()
Helper Blocks

import math
import torch
from torch.optim import Optimizer
class AdaBound(Optimizer):
"""Implements AdaBound algorithm.
It has been proposed in `Adaptive Gradient Methods with Dynamic Bound of Learning Rate`_.
Arguments:
params (iterable): iterable of parameters to optimize or dicts defining
parameter groups
lr (float, optional): Adam learning rate (default: 1e-3)
betas (Tuple[float, float], optional): coefficients used for computing
running averages of gradient and its square (default: (0.9, 0.999))
final_lr (float, optional): final (SGD) learning rate (default: 0.1)
gamma (float, optional): convergence speed of the bound functions (default: 1e-3)
eps (float, optional): term added to the denominator to improve
numerical stability (default: 1e-8)
weight_decay (float, optional): weight decay (L2 penalty) (default: 0)
amsbound (boolean, optional): whether to use the AMSBound variant of this algorithm
.. Adaptive Gradient Methods with Dynamic Bound of Learning Rate:
https://openreview.net/forum?id=Bkg3g2R9FX
"""
def __init__(self, params, lr=1e-3, betas=(0.9, 0.999), final_lr=0.1, gamma=1e-3,
eps=1e-8, weight_decay=0, amsbound=False):
if not 0.0 <= lr:
raise ValueError("Invalid learning rate: {}".format(lr))
if not 0.0 <= eps:
raise ValueError("Invalid epsilon value: {}".format(eps))
if not 0.0 <= betas[0] < 1.0:
raise ValueError("Invalid beta parameter at index 0: {}".format(betas[0]))
if not 0.0 <= betas[1] < 1.0:
raise ValueError("Invalid beta parameter at index 1: {}".format(betas[1]))
if not 0.0 <= final_lr:
raise ValueError("Invalid final learning rate: {}".format(final_lr))
if not 0.0 <= gamma < 1.0:
raise ValueError("Invalid gamma parameter: {}".format(gamma))
defaults = dict(lr=lr, betas=betas, final_lr=final_lr, gamma=gamma, eps=eps,
weight_decay=weight_decay, amsbound=amsbound)
super(AdaBound, self).__init__(params, defaults)
self.base_lrs = list(map(lambda group: group['lr'], self.param_groups))
def __setstate__(self, state):
super(AdaBound, self).__setstate__(state)
for group in self.param_groups:
group.setdefault('amsbound', False)
def step(self, closure=None):
"""Performs a single optimization step.
Arguments:
closure (callable, optional): A closure that reevaluates the model
and returns the loss.
"""
loss = None
if closure is not None:
loss = closure()
for group, base_lr in zip(self.param_groups, self.base_lrs):
for p in group['params']:
if p.grad is None:
continue
grad = p.grad.data
if grad.is_sparse:
raise RuntimeError(
'Adam does not support sparse gradients, please consider SparseAdam instead')
amsbound = group['amsbound']
state = self.state[p]
# State initialization
if len(state) == 0:
state['step'] = 0
# Exponential moving average of gradient values
state['exp_avg'] = torch.zeros_like(p.data)
# Exponential moving average of squared gradient values
state['exp_avg_sq'] = torch.zeros_like(p.data)
if amsbound:
# Maintains max of all exp. moving avg. of sq. grad. values
state['max_exp_avg_sq'] = torch.zeros_like(p.data)
exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq']
if amsbound:
max_exp_avg_sq = state['max_exp_avg_sq']
beta1, beta2 = group['betas']
state['step'] += 1
if group['weight_decay'] != 0:
grad = grad.add(group['weight_decay'], p.data)
# Decay the first and second moment running average coefficient
exp_avg.mul_(beta1).add_(1 - beta1, grad)
exp_avg_sq.mul_(beta2).addcmul_(1 - beta2, grad, grad)
if amsbound:
# Maintains the maximum of all 2nd moment running avg. till now
torch.max(max_exp_avg_sq, exp_avg_sq, out=max_exp_avg_sq)
# Use the max. for normalizing running avg. of gradient
denom = max_exp_avg_sq.sqrt().add_(group['eps'])
else:
denom = exp_avg_sq.sqrt().add_(group['eps'])
bias_correction1 = 1 - beta1 ** state['step']
bias_correction2 = 1 - beta2 ** state['step']
step_size = group['lr'] * math.sqrt(bias_correction2) / bias_correction1
# Applies bounds on actual learning rate
# lr_scheduler cannot affect final_lr, this is a workaround to apply lr decay
final_lr = group['final_lr'] * group['lr'] / base_lr
lower_bound = final_lr * (1 - 1 / (group['gamma'] * state['step'] + 1))
upper_bound = final_lr * (1 + 1 / (group['gamma'] * state['step']))
step_size = torch.full_like(denom, step_size)
step_size.div_(denom).clamp_(lower_bound, upper_bound).mul_(exp_avg)
p.data.add_(-step_size)
return loss
class AdaBoundW(Optimizer):
"""Implements AdaBound algorithm with Decoupled Weight Decay (arxiv.org/abs/1711.05101)
It has been proposed in `Adaptive Gradient Methods with Dynamic Bound of Learning Rate`_.
Arguments:
params (iterable): iterable of parameters to optimize or dicts defining
parameter groups
lr (float, optional): Adam learning rate (default: 1e-3)
betas (Tuple[float, float], optional): coefficients used for computing
running averages of gradient and its square (default: (0.9, 0.999))
final_lr (float, optional): final (SGD) learning rate (default: 0.1)
gamma (float, optional): convergence speed of the bound functions (default: 1e-3)
eps (float, optional): term added to the denominator to improve
numerical stability (default: 1e-8)
weight_decay (float, optional): weight decay (L2 penalty) (default: 0)
amsbound (boolean, optional): whether to use the AMSBound variant of this algorithm
.. Adaptive Gradient Methods with Dynamic Bound of Learning Rate:
https://openreview.net/forum?id=Bkg3g2R9FX
"""
def __init__(self, params, lr=1e-3, betas=(0.9, 0.999), final_lr=0.1, gamma=1e-3,
eps=1e-8, weight_decay=0, amsbound=False):
if not 0.0 <= lr:
raise ValueError("Invalid learning rate: {}".format(lr))
if not 0.0 <= eps:
raise ValueError("Invalid epsilon value: {}".format(eps))
if not 0.0 <= betas[0] < 1.0:
raise ValueError("Invalid beta parameter at index 0: {}".format(betas[0]))
if not 0.0 <= betas[1] < 1.0:
raise ValueError("Invalid beta parameter at index 1: {}".format(betas[1]))
if not 0.0 <= final_lr:
raise ValueError("Invalid final learning rate: {}".format(final_lr))
if not 0.0 <= gamma < 1.0:
raise ValueError("Invalid gamma parameter: {}".format(gamma))
defaults = dict(lr=lr, betas=betas, final_lr=final_lr, gamma=gamma, eps=eps,
weight_decay=weight_decay, amsbound=amsbound)
super(AdaBoundW, self).__init__(params, defaults)
self.base_lrs = list(map(lambda group: group['lr'], self.param_groups))
def __setstate__(self, state):
super(AdaBoundW, self).__setstate__(state)
for group in self.param_groups:
group.setdefault('amsbound', False)
def step(self, closure=None):
"""Performs a single optimization step.
Arguments:
closure (callable, optional): A closure that reevaluates the model
and returns the loss.
"""
loss = None
if closure is not None:
loss = closure()
for group, base_lr in zip(self.param_groups, self.base_lrs):
for p in group['params']:
if p.grad is None:
continue
grad = p.grad.data
if grad.is_sparse:
raise RuntimeError(
'Adam does not support sparse gradients, please consider SparseAdam instead')
amsbound = group['amsbound']
state = self.state[p]
# State initialization
if len(state) == 0:
state['step'] = 0
# Exponential moving average of gradient values
state['exp_avg'] = torch.zeros_like(p.data)
# Exponential moving average of squared gradient values
state['exp_avg_sq'] = torch.zeros_like(p.data)
if amsbound:
# Maintains max of all exp. moving avg. of sq. grad. values
state['max_exp_avg_sq'] = torch.zeros_like(p.data)
exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq']
if amsbound:
max_exp_avg_sq = state['max_exp_avg_sq']
beta1, beta2 = group['betas']
state['step'] += 1
# Decay the first and second moment running average coefficient
exp_avg.mul_(beta1).add_(1 - beta1, grad)
exp_avg_sq.mul_(beta2).addcmul_(1 - beta2, grad, grad)
if amsbound:
# Maintains the maximum of all 2nd moment running avg. till now
torch.max(max_exp_avg_sq, exp_avg_sq, out=max_exp_avg_sq)
# Use the max. for normalizing running avg. of gradient
denom = max_exp_avg_sq.sqrt().add_(group['eps'])
else:
denom = exp_avg_sq.sqrt().add_(group['eps'])
bias_correction1 = 1 - beta1 ** state['step']
bias_correction2 = 1 - beta2 ** state['step']
step_size = group['lr'] * math.sqrt(bias_correction2) / bias_correction1
# Applies bounds on actual learning rate
# lr_scheduler cannot affect final_lr, this is a workaround to apply lr decay
final_lr = group['final_lr'] * group['lr'] / base_lr
lower_bound = final_lr * (1 - 1 / (group['gamma'] * state['step'] + 1))
upper_bound = final_lr * (1 + 1 / (group['gamma'] * state['step']))
step_size = torch.full_like(denom, step_size)
step_size.div_(denom).clamp_(lower_bound, upper_bound).mul_(exp_avg)
if group['weight_decay'] != 0:
decayed_weights = torch.mul(p.data, group['weight_decay'])
p.data.add_(-step_size)
p.data.sub_(decayed_weights)
else:
p.data.add_(-step_size)
return loss
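# For reference, the dynamic bounds applied in the two step() methods above clip the
# Adam-style step size between two schedules that both converge to final_lr
# (rescaled by lr/base_lr so that lr schedulers still take effect):
#   lower(t) = final_lr * (1 - 1 / (gamma * t + 1))
#   upper(t) = final_lr * (1 + 1 / (gamma * t))
# so early steps behave like Adam and late steps approach SGD with lr = final_lr.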
# conv layer using spectral norm
def snconv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True):
    return spectral_norm(nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=kernel_size,
                                   stride=stride, padding=padding, dilation=dilation, groups=groups, bias=bias))

# fully connected layer using spectral norm
def snlinear(in_features, out_features):
    return spectral_norm(nn.Linear(in_features=in_features, out_features=out_features))

# embedding layer using spectral norm
def sn_embedding(num_embeddings, embedding_dim):
    return spectral_norm(nn.Embedding(num_embeddings=num_embeddings, embedding_dim=embedding_dim))

# self-attention class from the PyTorch book
class Self_Attention_book(nn.Module):
    """Self-Attention layer"""
    def __init__(self, in_dim):
        super(Self_Attention_book, self).__init__()
        # pointwise (1x1) convolutions for the query/key/value projections
        self.query_conv = nn.Conv2d(in_channels=in_dim, out_channels=in_dim//8, kernel_size=1)
        self.key_conv = nn.Conv2d(in_channels=in_dim, out_channels=in_dim//8, kernel_size=1)
        self.value_conv = nn.Conv2d(in_channels=in_dim, out_channels=in_dim, kernel_size=1)
        # softmax used to normalize the attention map
        self.softmax = nn.Softmax(dim=-2)
        # coefficient for adding the self-attention map o back to the input x:
        # output = x + gamma*o; gamma starts at 0 and is learned
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        # input
        X = x
        # convolve, then reshape: B,C',W,H -> B,C',N
        proj_query = self.query_conv(X).view(X.shape[0], -1, X.shape[2]*X.shape[3])  # shape: B,C',N
        proj_query = proj_query.permute(0, 2, 1)  # transpose
        proj_key = self.key_conv(X).view(X.shape[0], -1, X.shape[2]*X.shape[3])  # shape: B,C',N
        # matrix multiplication (bmm is batched matrix multiplication)
        S = torch.bmm(proj_query, proj_key)
        # normalization: softmax so that each row i sums to 1
        attention_map_T = self.softmax(S)
        attention_map = attention_map_T.permute(0, 2, 1)  # transpose
        # compute the self-attention map
        proj_value = self.value_conv(X).view(X.shape[0], -1, X.shape[2]*X.shape[3])  # shape: B,C,N
        o = torch.bmm(proj_value, attention_map.permute(0, 2, 1))  # multiply by the transposed attention map
        # reshape the self-attention map o to match X and return the output
        o = o.view(X.shape[0], X.shape[1], X.shape[2], X.shape[3])
        out = x + self.gamma*o
        return out
# self-attention class from the reference kernel
class Self_Attn(nn.Module):
""" Self attention Layer"""
def __init__(self, in_channels):
super(Self_Attn, self).__init__()
self.in_channels = in_channels
self.snconv1x1_theta = snconv2d(in_channels=in_channels, out_channels=in_channels//8, kernel_size=1, stride=1, padding=0)
self.snconv1x1_phi = snconv2d(in_channels=in_channels, out_channels=in_channels//8, kernel_size=1, stride=1, padding=0)
self.snconv1x1_g = snconv2d(in_channels=in_channels, out_channels=in_channels//2, kernel_size=1, stride=1, padding=0)
self.snconv1x1_attn = snconv2d(in_channels=in_channels//2, out_channels=in_channels, kernel_size=1, stride=1, padding=0)
self.maxpool = nn.MaxPool2d(2, stride=2, padding=0)
self.softmax = nn.Softmax(dim=-1)
self.sigma = nn.Parameter(torch.zeros(1))
def forward(self, x):
_, ch, h, w = x.size()
# Theta path
theta = self.snconv1x1_theta(x)
theta = theta.view(-1, ch//8, h*w)
# Phi path
phi = self.snconv1x1_phi(x)
phi = self.maxpool(phi)
phi = phi.view(-1, ch//8, h*w//4)
# Attn map
attn = torch.bmm(theta.permute(0, 2, 1), phi)
attn = self.softmax(attn)
# g path
g = self.snconv1x1_g(x)
g = self.maxpool(g)
g = g.view(-1, ch//2, h*w//4)
# Attn_g
attn_g = torch.bmm(g, attn.permute(0, 2, 1))
attn_g = attn_g.view(-1, ch//2, h, w)
attn_g = self.snconv1x1_attn(attn_g)
# Out
out = x + self.sigma * attn_g
return out
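# Shape walk-through of the attention above (N = h*w):
#   theta: (B, C/8, N); phi: (B, C/8, N/4) after the 2x2 max-pool;
#   attn = softmax(theta^T @ phi): (B, N, N/4); g: (B, C/2, N/4);
#   attn_g = g @ attn^T, reshaped to (B, C/2, h, w), then mapped back to C
#   channels by a 1x1 conv and residual-added with the learnable sigma.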
class ConditionalBatchNorm2d(nn.Module):
def __init__(self, num_features, num_classes):
super().__init__()
self.num_features = num_features
self.bn = nn.BatchNorm2d(num_features)
self.embed = nn.Embedding(num_classes, num_features * 2)
self.embed.weight.data[:, :num_features].fill_(1.) # Initialize scale to 1
self.embed.weight.data[:, num_features:].zero_() # Initialize bias at 0
def forward(self, inputs):
x, y = inputs
out = self.bn(x)
gamma, beta = self.embed(y).chunk(2, 1)
out = gamma.view(-1, self.num_features, 1, 1) * out + beta.view(-1, self.num_features, 1, 1)
        return out
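As a quick sanity check of the helper blocks above, here is a minimal usage sketch. The batch size and feature-map shape are illustrative assumptions; the call signatures follow the `forward` methods defined above.

```python
cbn = ConditionalBatchNorm2d(num_features=64, num_classes=120)
attn = Self_Attn(in_channels=64)

x = torch.randn(8, 64, 16, 16)   # a batch of feature maps (assumed shape)
y = torch.randint(0, 120, (8,))  # one class label per sample
out = cbn((x, y))                # class-conditional scale/shift applied after BatchNorm
out = attn(out)                  # self-attention; output shape stays (8, 64, 16, 16)
```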
Generator and Discriminator

class UpConvBlock(nn.Module):
"""
n_cl クラス数(120),
k_s=カーネルサイズ(4),
stride=stride(2),
padding=padding(0),
bias=バイアス入れるかどうか(False),
dropout_p=dropout_p(0.0),
use_cbn=Conditional Batch Normalization使うかどうか(True)
Lrelu=LeakyReLU使うかどうか(True)(FalseはReLU)
slope=Lreluのslope(0.05)
"""
def __init__(self, n_input, n_output, n_cl, k_s=4, stride=2, padding=0,
bias=False, dropout_p=0.0, use_cbn=True, Lrelu=True, slope=0.05):
super(UpConvBlock, self).__init__()
self.use_cbn = use_cbn
self.dropout_p=dropout_p
self.upconv = spectral_norm(nn.ConvTranspose2d(n_input, n_output, kernel_size=k_s, stride=stride, padding=padding, bias=bias))
if use_cbn:
self.cond_bn = ConditionalBatchNorm2d(n_output, n_cl)
else:
self.bn = nn.BatchNorm2d(n_output)
if Lrelu:
self.activ = nn.LeakyReLU(slope, inplace=True)
else:
self.activ = nn.ReLU(inplace=True)
self.dropout = nn.Dropout2d(p=dropout_p)
def forward(self, inputs):
x0, labels = inputs
x = self.upconv(x0)
if self.use_cbn:
x = self.activ(self.cond_bn((x, labels)))
else:
x = self.activ(self.bn(x))
if self.dropout_p > 0.0:
x = self.dropout(x)
return x
class Generator(nn.Module):
def __init__(self, nz=128, num_classes=120, channels=3, nfilt=64,use_cbn=True, Lrelu=True):
super(Generator, self).__init__()
self.nz = nz
self.num_classes = num_classes
self.channels = channels
self.label_emb = nn.Embedding(num_classes, nz)
self.upconv1 = UpConvBlock(2*nz, nfilt*16, num_classes, k_s=4, stride=1, padding=0, dropout_p=0.15,use_cbn=use_cbn,Lrelu=Lrelu)
self.upconv2 = UpConvBlock(nfilt*16, nfilt*8, num_classes, k_s=4, stride=2, padding=1, dropout_p=0.10,use_cbn=use_cbn,Lrelu=Lrelu)
self.upconv3 = UpConvBlock(nfilt*8, nfilt*4, num_classes, k_s=4, stride=2, padding=1, dropout_p=0.05,use_cbn=use_cbn,Lrelu=Lrelu)
self.upconv4 = UpConvBlock(nfilt*4, nfilt*2, num_classes, k_s=4, stride=2, padding=1, dropout_p=0.05,use_cbn=use_cbn,Lrelu=Lrelu)
self.upconv5 = UpConvBlock(nfilt*2, nfilt, num_classes, k_s=4, stride=2, padding=1, dropout_p=0.05,use_cbn=use_cbn,Lrelu=Lrelu)
self.self_attn = Self_Attention_book(nfilt)
self.upconv6 = UpConvBlock(nfilt, 3, num_classes, k_s=3, stride=1, padding=1)
self.out_conv = spectral_norm(nn.Conv2d(3, 3, 3, 1, 1, bias=False))
self.out_activ = nn.Tanh()
def forward(self, inputs):
z, labels = inputs
enc = self.label_emb(labels).view((-1, self.nz, 1, 1))
enc = F.normalize(enc, p=2, dim=1)
x = torch.cat((z, enc), 1)
x = self.upconv1((x, labels))
x = self.upconv2((x, labels))
x = self.upconv3((x, labels))
x = self.upconv4((x, labels))
x = self.upconv5((x, labels))
x = self.self_attn(x)
x = self.upconv6((x, labels))
x = self.out_conv(x)
img = self.out_activ(x)
return img
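# Spatial sizes implied by the kernel/stride/padding choices above, for the defaults
# nz=128 and nfilt=64 (ConvTranspose2d: out = (in - 1)*stride - 2*padding + kernel):
#   input   (B, 256, 1, 1)    concat of noise z and the normalized label embedding
#   upconv1 -> (B, 1024, 4, 4)
#   upconv2 -> (B, 512, 8, 8)
#   upconv3 -> (B, 256, 16, 16)
#   upconv4 -> (B, 128, 32, 32)
#   upconv5 -> (B, 64, 64, 64)   self-attention applied here, shape preserved
#   upconv6 -> (B, 3, 64, 64);   out_conv and tanh keep this final shape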
class Discriminator(nn.Module):
def __init__(self, num_classes=120, channels=3, nfilt=64):
super(Discriminator, self).__init__()
self.channels = channels
self.num_classes = num_classes
def down_convlayer(n_input, n_output, k_s=4, stride=2, padding=0, dropout_p=0.0):
block = [spectral_norm(nn.Conv2d(n_input, n_output, kernel_size=k_s, stride=stride, padding=padding, bias=False)),
nn.BatchNorm2d(n_output),
nn.LeakyReLU(0.2, inplace=True),
]
if dropout_p > 0.0: block.append(nn.Dropout(p=dropout_p))
return block
self.label_emb = nn.Embedding(num_classes, 64*64)
self.model = nn.Sequential(
*down_convlayer(self.channels + 1, nfilt, 4, 2, 1),
Self_Attn(nfilt),
*down_convlayer(nfilt, nfilt*2, 4, 2, 1, dropout_p=0.20),
*down_convlayer(nfilt*2, nfilt*4, 4, 2, 1, dropout_p=0.5),
*down_convlayer(nfilt*4, nfilt*8, 4, 2, 1, dropout_p=0.35),
spectral_norm(nn.Conv2d(nfilt*8, 1, 4, 1, 0, bias=False)),
)
def forward(self, inputs):
imgs, labels = inputs
enc = self.label_emb(labels).view((-1, 1, 64, 64))
enc = F.normalize(enc, p=2, dim=1)
x = torch.cat((imgs, enc), 1) # 4 input feature maps(3rgb + 1label)
out = self.model(x)
return out.view(-1)
def weights_init(m):
classname = m.__class__.__name__
if classname.find('Conv') != -1:
nn.init.normal_(m.weight.data, 0.0, 0.02)
elif classname.find('BatchNorm') != -1:
nn.init.normal_(m.weight.data, 1.0, 0.02)
        nn.init.constant_(m.bias.data, 0)
# ----------------------------------------------------------------------------
# Pixelwise feature vector normalization.
# reference: https://github.com/tkarras/progressive_growing_of_gans/blob/master/networks.py#L120
# ----------------------------------------------------------------------------
class PixelwiseNorm(nn.Module):
def __init__(self):
super(PixelwiseNorm, self).__init__()
def forward(self, x, alpha=1e-8):
"""
forward pass of the module
:param x: input activations volume
:param alpha: small number for numerical stability
:return: y => pixel normalized activations
"""
y = x.pow(2.).mean(dim=1, keepdim=True).add(alpha).sqrt() # [N1HW]
y = x / y # normalize the input x volume
return y
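# In plain math, PixelwiseNorm divides each spatial position by the RMS over channels:
#   y[n, c, h, w] = x[n, c, h, w] / sqrt(mean_c(x[n, :, h, w] ** 2) + alpha)
# which keeps per-pixel feature magnitudes bounded, as in Progressive GAN.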
class Generator_pix(nn.Module):
def __init__(self, nz=128, num_classes=120, channels=3, nfilt=64,use_cbn=True, Lrelu=True):
        super(Generator_pix, self).__init__()
self.nz = nz
self.num_classes = num_classes
self.channels = channels
self.label_emb = nn.Embedding(num_classes, nz)
self.upconv1 = UpConvBlock(2*nz, nfilt*16, num_classes, k_s=4, stride=1, padding=0, dropout_p=0.15,use_cbn=use_cbn,Lrelu=Lrelu)
self.upconv2 = UpConvBlock(nfilt*16, nfilt*8, num_classes, k_s=4, stride=2, padding=1, dropout_p=0.10,use_cbn=use_cbn,Lrelu=Lrelu)
self.upconv3 = UpConvBlock(nfilt*8, nfilt*4, num_classes, k_s=4, stride=2, padding=1, dropout_p=0.05,use_cbn=use_cbn,Lrelu=Lrelu)
self.upconv4 = UpConvBlock(nfilt*4, nfilt*2, num_classes, k_s=4, stride=2, padding=1, dropout_p=0.05,use_cbn=use_cbn,Lrelu=Lrelu)
self.upconv5 = UpConvBlock(nfilt*2, nfilt, num_classes, k_s=4, stride=2, padding=1, dropout_p=0.05,use_cbn=use_cbn,Lrelu=Lrelu)
self.self_attn = Self_Attention_book(nfilt)
self.upconv6 = UpConvBlock(nfilt, 3, num_classes, k_s=3, stride=1, padding=1)
self.out_conv = spectral_norm(nn.Conv2d(3, 3, 3, 1, 1, bias=False))
self.pixnorm = PixelwiseNorm()
self.out_activ = nn.Tanh()
def forward(self, inputs):
z, labels = inputs
enc = self.label_emb(labels).view((-1, self.nz, 1, 1))
enc = F.normalize(enc, p=2, dim=1)
x = torch.cat((z, enc), 1)
x = self.upconv1((x, labels))
x = self.upconv2((x, labels))
x = self.pixnorm(x)
x = self.upconv3((x, labels))
x = self.pixnorm(x)
x = self.upconv4((x, labels))
x = self.pixnorm(x)
x = self.upconv5((x, labels))
x = self.self_attn(x)
x = self.upconv6((x, labels))
x = self.out_conv(x)
img = self.out_activ(x)
return img
class Discriminator_pix(nn.Module):
def __init__(self, num_classes=120, channels=3, nfilt=64):
        super(Discriminator_pix, self).__init__()
self.channels = channels
self.num_classes = num_classes
def down_convlayer(n_input, n_output, k_s=4, stride=2, padding=0, dropout_p=0.0, use_pixnorm=True):
block = [spectral_norm(nn.Conv2d(n_input, n_output, kernel_size=k_s, stride=stride, padding=padding, bias=False)),
nn.BatchNorm2d(n_output),
nn.LeakyReLU(0.2, inplace=True),
]
if dropout_p > 0.0: block.append(nn.Dropout(p=dropout_p))
if use_pixnorm: block.append(PixelwiseNorm())
return block
self.label_emb = nn.Embedding(num_classes, 64*64)
self.model = nn.Sequential(
*down_convlayer(self.channels + 1, nfilt, 4, 2, 1,use_pixnorm=False),
Self_Attn(nfilt),
*down_convlayer(nfilt, nfilt*2, 4, 2, 1, dropout_p=0.20),
*down_convlayer(nfilt*2, nfilt*4, 4, 2, 1, dropout_p=0.5),
*down_convlayer(nfilt*4, nfilt*8, 4, 2, 1, dropout_p=0.35,use_pixnorm=False),
spectral_norm(nn.Conv2d(nfilt*8, 1, 4, 1, 0, bias=False)),
)
def forward(self, inputs):
imgs, labels = inputs
enc = self.label_emb(labels).view((-1, 1, 64, 64))
enc = F.normalize(enc, p=2, dim=1)
x = torch.cat((imgs, enc), 1) # 4 input feature maps(3rgb + 1label)
out = self.model(x)
        return out.view(-1)
Data loader

class DataGenerator(Dataset):
def __init__(self, directory, transform=None, n_samples=np.inf, crop_dogs=True):
self.directory = directory
self.transform = transform
self.n_samples = n_samples
self.samples, self.labels = self.load_dogs_data(directory, crop_dogs)
def load_dogs_data(self, directory, crop_dogs):
required_transforms = torchvision.transforms.Compose([
torchvision.transforms.Resize(64),
torchvision.transforms.CenterCrop(64),
])
imgs = []
labels = []
paths = []
for root, _, fnames in sorted(os.walk(directory)):
for fname in sorted(fnames)[:min(self.n_samples, 999999999999999)]:
path = os.path.join(root, fname)
paths.append(path)
for path in paths:
# Load image
try: img = dset.folder.default_loader(path)
            except Exception: continue  # skip files that fail to load
# Get bounding boxes
annotation_basename = os.path.splitext(os.path.basename(path))[0]
annotation_dirname = next(
dirname for dirname in os.listdir('../input/annotation/Annotation/') if
dirname.startswith(annotation_basename.split('_')[0]))
if crop_dogs:
tree = ET.parse(os.path.join('../input/annotation/Annotation/',
annotation_dirname, annotation_basename))
root = tree.getroot()
objects = root.findall('object')
for o in objects:
bndbox = o.find('bndbox')
xmin = int(bndbox.find('xmin').text)
ymin = int(bndbox.find('ymin').text)
xmax = int(bndbox.find('xmax').text)
ymax = int(bndbox.find('ymax').text)
object_img = required_transforms(img.crop((xmin, ymin, xmax, ymax)))
imgs.append(object_img)
labels.append(annotation_dirname.split('-')[1].lower())
else:
object_img = required_transforms(img)
imgs.append(object_img)
labels.append(annotation_dirname.split('-')[1].lower())
return imgs, labels
def __getitem__(self, index):
sample = self.samples[index]
label = self.labels[index]
if self.transform is not None:
sample = self.transform(sample)
return np.asarray(sample), label
def __len__(self):
        return len(self.samples)
Training Parameters

database = '../input/all-dogs/all-dogs/'
crop_dogs = crop_dog
n_samples = np.inf
BATCH_SIZE = batch_size
epochs = n_epochs
use_soft_noisy_labels = noisy_label  # whether to apply label smoothing to the targets
loss_calc = loss_calculation
nz = 128
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
transform = transforms.Compose([transforms.RandomHorizontalFlip(p=flip_p),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
train_data = DataGenerator(database, transform=transform, n_samples=n_samples, crop_dogs=crop_dogs)
decoded_dog_labels = {i:breed for i, breed in enumerate(sorted(set(train_data.labels)))}
encoded_dog_labels = {breed:i for i, breed in enumerate(sorted(set(train_data.labels)))}
train_data.labels = [encoded_dog_labels[l] for l in train_data.labels] # encode dog labels in the data generator
train_loader = torch.utils.data.DataLoader(train_data, shuffle=True,
batch_size=BATCH_SIZE, num_workers=4)
print("Dog breeds loaded: ", len(encoded_dog_labels))
print("Data samples loaded:", len(train_data))
if use_pixelnorm:
netG = Generator_pix(nz, num_classes=len(encoded_dog_labels), nfilt=64,use_cbn=Gcbn, Lrelu=Glrelu).to(device)
netD = Discriminator_pix(num_classes=len(encoded_dog_labels), nfilt=64).to(device)
else:
netG = Generator(nz, num_classes=len(encoded_dog_labels), nfilt=64,use_cbn=Gcbn, Lrelu=Glrelu).to(device)
netD = Discriminator(num_classes=len(encoded_dog_labels), nfilt=64).to(device)
weights_init(netG)
weights_init(netD)
print("Generator parameters: ", sum(p.numel() for p in netG.parameters() if p.requires_grad))
print("Discriminator parameters:", sum(p.numel() for p in netD.parameters() if p.requires_grad))
if G_opt == 'adaboundw':
optimizerG = AdaBoundW(netG.parameters(), lr=G_lr, betas=G_betas,final_lr=G_final_lr,weight_decay=G_weight_decay)
elif G_opt == 'adam':
optimizerG = optim.Adam(netG.parameters(), lr=G_lr, betas=G_betas)
if D_opt == 'adaboundw':
optimizerD = AdaBoundW(netD.parameters(), lr=D_lr, betas=D_betas,final_lr=D_final_lr,weight_decay=D_weight_decay)
elif D_opt == 'adam':
optimizerD = optim.Adam(netD.parameters(), lr=D_lr, betas=D_betas)
elif D_opt == 'SGD':
optimizerD = optim.SGD(netD.parameters(), lr=D_lr)
lr_schedulerG = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(optimizerG, T_0=epochs//20, eta_min=G_eta_min)
lr_schedulerD = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(optimizerD, T_0=epochs//20, eta_min=D_eta_min)
def mse(imageA, imageB):
err = np.sum((imageA.astype("float") - imageB.astype("float")) ** 2)
err /= float(imageA.shape[0] * imageA.shape[1])
return err
def show_generated_img(n_images=5, nz=128):
sample = []
for _ in range(n_images):
noise = torch.randn(1, nz, 1, 1, device=device)
dog_label = torch.randint(0, len(encoded_dog_labels), (1, ), device=device)
gen_image = netG((noise, dog_label)).to("cpu").clone().detach().squeeze(0)
gen_image = gen_image.numpy().transpose(1, 2, 0)
sample.append(gen_image)
figure, axes = plt.subplots(1, len(sample), figsize=(64, 64))
for index, axis in enumerate(axes):
axis.axis('off')
image_array = (sample[index] + 1.) / 2.
axis.imshow(image_array)
plt.show()
def analyse_generated_by_class(n_images=5):
good_breeds = []
for l in range(len(decoded_dog_labels)):
sample = []
for _ in range(n_images):
noise = torch.randn(1, nz, 1, 1, device=device)
dog_label = torch.full((1,) , l, device=device, dtype=torch.long)
gen_image = netG((noise, dog_label)).to("cpu").clone().detach().squeeze(0)
gen_image = gen_image.numpy().transpose(1, 2, 0)
sample.append(gen_image)
d = np.round(np.sum([mse(sample[k], sample[k+1]) for k in range(len(sample)-1)])/n_images, 1)
        if d < 1.0: continue  # mode collapse detected; discard this breed
print(f"Generated breed({d}): ", decoded_dog_labels[l])
figure, axes = plt.subplots(1, len(sample), figsize=(64, 64))
for index, axis in enumerate(axes):
axis.axis('off')
image_array = (sample[index] + 1.) / 2.
axis.imshow(image_array)
plt.show()
good_breeds.append(l)
return good_breeds
def create_submit(good_breeds):
print("Creating submit")
os.makedirs('../output_images', exist_ok=True)
im_batch_size = 32
n_images = 10000
all_dog_labels = np.random.choice(good_breeds, size=n_images, replace=True)
for i_batch in range(0, n_images, im_batch_size):
noise = torch.randn(im_batch_size, nz, 1, 1, device=device)
dog_labels = torch.from_numpy(all_dog_labels[i_batch: (i_batch+im_batch_size)]).to(device)
gen_images = netG((noise, dog_labels))
gen_images = (gen_images.to("cpu").clone().detach() + 1) / 2
for ii, img in enumerate(gen_images):
save_image(gen_images[ii, :, :, :], os.path.join('../output_images', f'image_{i_batch + ii:05d}.png'))
import shutil
    shutil.make_archive('images', 'zip', '../output_images')
Training loop

d_loss_log = []
g_loss_log = []
dout_real_log = []
dout_fake_log = []
dout_fake_log2 = []
iter_n = len(train_loader) - 1  # the last leftover batch is skipped below, hence the -1
for epoch in range(epochs):
    epoch_g_loss = 0.0  # running sum of the generator loss over the epoch
    epoch_d_loss = 0.0  # running sum of the discriminator loss over the epoch
epoch_dout_real = 0.0
epoch_dout_fake = 0.0
epoch_dout_fake2 = 0.0
epoch_time = time.perf_counter()
if time.perf_counter() - kernel_start_time > 31000:
print("Time limit reached! Stopping kernel!"); break
for ii, (real_images, dog_labels) in tqdm(enumerate(train_loader),total=len(train_loader)):
if real_images.shape[0]!= BATCH_SIZE: continue
        # Add noise to the labels (label smoothing), and occasionally swap fake and real.
        if use_soft_noisy_labels:
            real_labels = torch.squeeze(torch.empty((BATCH_SIZE, 1), device=device).uniform_(*R_uni))
            fake_labels = torch.squeeze(torch.empty((BATCH_SIZE, 1), device=device).uniform_(*F_uni))
            for p in np.random.choice(BATCH_SIZE, size=np.random.randint((BATCH_SIZE//8)), replace=False):
                real_labels[p], fake_labels[p] = fake_labels[p], real_labels[p]  # swap labels
else:
real_labels = torch.full((BATCH_SIZE, 1), 1.0, device=device)
fake_labels = torch.full((BATCH_SIZE, 1), 0.0, device=device)
############################
# (1) Update D network
###########################
netD.zero_grad()
dog_labels = torch.tensor(dog_labels, device=device)
real_images = real_images.to(device)
noise = torch.randn(BATCH_SIZE, nz, 1, 1, device=device)
outputR = netD((real_images, dog_labels))
fake_images = netG((noise, dog_labels))
outputF = netD((fake_images.detach(), dog_labels))
if loss_calc == 'rals':
errD = (torch.mean((outputR - torch.mean(outputF) - real_labels) ** 2) +
torch.mean((outputF - torch.mean(outputR) + real_labels) ** 2))/2
        elif loss_calc == 'hinge':
            # Real loss is zero once outputR >= 1: ReLU clips negative (1 - outputR) to 0.
            d_loss_real = torch.nn.ReLU()(1 - outputR).mean()
            # Fake loss is zero once outputF <= -1: ReLU clips negative (1 + outputF) to 0.
            d_loss_fake = torch.nn.ReLU()(1 + outputF).mean()
            # Weight the fake loss more heavily: D becomes sensitive to fakes and tends to
            # call everything fake, so more gradient flows to G, i.e. training is harder on G.
            errD = (d_loss_real / 3) + (d_loss_fake / 2)
errD.backward(retain_graph=True)
optimizerD.step()
############################
# (2) Update G network
###########################
netG.zero_grad()
outputF2 = netD((fake_images, dog_labels))
if loss_calc == 'rals':
errG = (torch.mean((outputR - torch.mean(outputF2) + real_labels) ** 2) +
torch.mean((outputF2 - torch.mean(outputR) - real_labels) ** 2))/2
elif loss_calc == 'hinge':
errG = - outputF2.mean()
errG.backward()
optimizerG.step()
lr_schedulerG.step(epoch)
lr_schedulerD.step(epoch)
        # --------------------
        # 3. Logging
        # --------------------
epoch_d_loss += errD.item()
epoch_g_loss += errG.item()
epoch_dout_real += outputR.mean().item()
epoch_dout_fake += outputF.mean().item()
epoch_dout_fake2 += outputF2.mean().item()
d_loss_log.append(epoch_d_loss/iter_n)
g_loss_log.append(epoch_g_loss/iter_n)
dout_real_log.append(epoch_dout_real/iter_n)
dout_fake_log.append(epoch_dout_fake/iter_n)
dout_fake_log2.append(epoch_dout_fake2/iter_n)
    print('loss=%s, per-epoch averages \n %.2fs [%d/%d] Loss_D: %.4f Loss_G: %.4f outputR: %.4f outputF: %.4f / %.4f' % (loss_calc,
        time.perf_counter()-epoch_time, epoch+1, epochs, d_loss_log[-1], g_loss_log[-1], dout_real_log[-1], dout_fake_log[-1], dout_fake_log2[-1]))
    print('last-batch loss etc. \n %.2fs [%d/%d] Loss_D: %.4f Loss_G: %.4f outputR: %.4f outputF: %.4f / %.4f' % (
        time.perf_counter()-epoch_time, epoch+1, epochs, errD.item(), errG.item(), outputR.mean().item(), outputF.mean().item(), outputF2.mean().item()))
if epoch % 10 == 0:
show_generated_img(6)
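# For reference, the two loss options implemented above, writing D(.) for the raw
# discriminator output on real (R) and fake (F) batches and y for the (smoothed) real labels:
#   hinge: L_D = mean(relu(1 - D(R)))/3 + mean(relu(1 + D(F)))/2,   L_G = -mean(D(F))
#   rals:  L_D = [mean((D(R) - mean(D(F)) - y)^2) + mean((D(F) - mean(D(R)) + y)^2)] / 2
#          L_G = [mean((D(R) - mean(D(F)) + y)^2) + mean((D(F) - mean(D(R)) - y)^2)] / 2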
Visualise generated results by label and submit

good_breeds = analyse_generated_by_class(6)
create_submit(good_breeds)
import matplotlib.pyplot as plt
plt.figure()
plt.title("Learning Curve")
plt.xlabel("epoch")
plt.ylabel("loss")
# Plot the discriminator and generator losses
plt.plot(d_loss_log, color="r", label="d_loss")
plt.plot(g_loss_log, color="g", label="g_loss")
plt.legend(loc="best")
plt.show()
import matplotlib.pyplot as plt
plt.figure()
plt.title("Learning Curve")
plt.xlabel("epoch")
plt.ylabel("loss")
# Plot the mean discriminator outputs on real and fake images
plt.plot(dout_real_log, color="r", label="dout_r")
plt.plot(dout_fake_log, color="g", label="dout_f")
plt.legend(loc="best")
plt.show()
Data Analysis for Data Analyst Job Landscape 2020

Goal

There were 2 main motivations for me to do this project:
* (1) Understand the current job market for data centric jobs
* (2) Find out what employers are looking for in data centric jobs

Background

The data analysis of this project occurs in the 3rd section, Exploratory Analysis, where I explore various themes that I wanted to understand in order to fulfil the 2 goals listed above. I used data visualization to explore the various themes, for ease of understanding and to give me an easy snapshot of the job landscape at this current moment. The data was pulled from Glassdoor on 22nd May 2020.

Methodology

* [1. Packages](#point_1)
* [2. Reading Datasets](#point_2)
* [3. Exploratory Analysis](#point_3)
  * [3.1 One Important Caveat](#point_3_1)
  * [3.2 Average Base Pay Comparison Across Job Titles](#point_3_2)
  * [3.3 Number of Jobs listed on Glassdoor](#point_3_3)
  * [3.4 Technical Skills](#point_3_4)
  * [3.5 Academic Skills](#point_3_5)
  * [3.6 Education Level](#point_3_6)
  * [3.7 Job demand by Ownership](#point_3_7)
  * [3.8 Job demand by Industry](#point_3_8)
  * [3.9 Rating Distribution](#point_3_9)
* [4. Word Cloud](#point_4)

Click button to show/hide code

from IPython.display import HTML
HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide();
} else {
$('div.input').show();
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
<form action="javascript:code_toggle()"><input type="submit" value="Click here to toggle on/off the raw code."></form>''')
1. Packages

There are various data analysis packages within Python that I'll be using for my analysis.

"""Data Science Packages"""
import pandas as pd
import numpy as np
from scipy.stats import norm
from pandas import DataFrame
"""Data Visualisation Packages"""
import matplotlib.pyplot as plt
import seaborn as sns
import plotly.express as px
import plotly.graph_objects as go
from matplotlib import pyplot
from matplotlib.pyplot import figure
import squarify
"""World Cloud"""
import nltk
from wordcloud import WordCloud, ImageColorGenerator, STOPWORDS
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize | _____no_output_____ | MIT | Part 3. Data Analysis.ipynb | jamesgsw/Exploring-Data-and-Analytics-Job-Market-Outlook-in-Singapore-2020 |
2. Reading Datasets

After performing the Data Cleaning in Part 2, I'm using my clean dataset in this data analysis portion.

print("---------------------- Job Dataset --------------------------")
df = pd.read_excel(
'/Users/james/Documents/GitHub/Exploring-the-Big-Data-and-Analytics-Job-Market-in-Singapore-2020/Job Titles CSV Files/Job_dataset.xlsx')
df.head()
# Listing all the columns that are in the csv file
df.columns
print("---------------------- Salary Dataset --------------------------")
salary_df = pd.read_excel(
'/Users/james/Documents/GitHub/Exploring-the-Big-Data-and-Analytics-Job-Market-in-Singapore-2020/Job Titles CSV Files/Salary by title.xlsx')
salary_df
3. Exploratory Analysis

3.1 One Important Caveat

An important caveat I would like to address before I begin my Exploratory Analysis is how representative the salary statistics I gathered are. As outputted in Section 2, the number of respondents for the Glassdoor Singapore data-related jobs is an extremely small sample size. Therefore, it's important to note that these results are an indication and an estimate to help us better understand current salaries in this industry, but they should not be taken as the **yardstick**. Looking at the other resources available, Glassdoor was still my website of choice for obtaining the data, despite its limitations, because of its unbiasedness, which I deemed an important element of this project.

3.2 Average Base Pay Comparison Across Job Titles

We want to compare the average base pay for the different job titles, and, within each job title, whether there are any deviations in salary across levels of seniority.

sns.set(style="whitegrid")
fig = sns.catplot(x="Job Title", y="Average Base Pay", hue="Position Level", data=salary_df,
kind="bar", palette="muted", aspect=8/3)
fig.despine(left=True)
fig.set_ylabels("Average Base Pay")
fig.set(title="Pay Comparison for Various Job Titles")
fig.savefig("Average Base Pay Comparison Across Job Titles.png")
**Findings:** The salary plot allows us to see that the best paid position is Quantitative Analyst, followed closely by Senior Data Scientist and Senior Technology Consultant. For fresh graduates, the expected pay for the Data Scientist/Engineer/Analyst role is **S\$ 50666 /year or S\$ 4222 /month**.

3.3 Number of Jobs listed on Glassdoor

We want to compare, across the different job titles, how many available positions are listed on Glassdoor.

df_jobs_available = df['Job Title'].value_counts().rename_axis(
'Job Title').reset_index(name='Number of Jobs')
sns.set(style="whitegrid")
sns_plot = sns.barplot(x="Job Title", y="Number of Jobs", data=df_jobs_available).set_title(
'Number of Jobs listed on Glassdoor')
sns.set(rc={'figure.figsize': (16, 8)})
sns_plot.figure.savefig("Number of Jobs listed on Glassdoor.png")
total_jobs = df_jobs_available["Number of Jobs"].sum()
print("The total number of job listings found on Glassdoor is " + str(total_jobs))
print("\n")
df_jobs_available["Relative Frequency, %"] = round((df_jobs_available["Number of Jobs"]/total_jobs)*100, 2)
df_jobs_available
**Findings:** We found that the Data Scientist job title has the most jobs available by a large margin, with 925 jobs, followed by Data Analyst and Data Engineer with 477 and 440 job postings respectively. Surprisingly, there were more managerial positions for the data-driven jobs than for machine learning engineers.

3.4 Technical Skills

The technology industry is heavily dependent on proficiency in technical skill sets, so I searched through the Job Descriptions and pulled out the technical skills requested by companies. Methodology: I searched through the job descriptions and pulled out the top 8 most mentioned technical skills. There are 2 representations: (1) Dataframe and (2) Histogram.

# Creating dictionary counting the number of times a particular technical skill is called
technical_skills = ['AWS', 'Excel', 'Python',
'R', 'Spark', 'Hadoop', 'Scala', 'SQL']
adict = {}
for every_skill in technical_skills:
that_sum = df[every_skill].sum()
adict[every_skill] = that_sum
print(adict)
# Representing these numbers with a dataframe
df_technical_skill = DataFrame(list(adict.items()), columns=[
'Technical Skills', 'Frequency'])
df_technical_skill["Relative Frequency, %"] = round((df_technical_skill["Frequency"]/total_jobs)*100, 2)
df_technical_skill
# Creating barplot representing the values in the dataframe
sns.set(style="whitegrid")
sns_plot = sns.barplot(x="Technical Skills", y="Frequency", data=df_technical_skill).set_title(
'Technical Skills requested for Job')
sns.set(rc={'figure.figsize': (16, 8)})
sns_plot.figure.savefig("Technical Skills requested for Job.png") | _____no_output_____ | MIT | Part 3. Data Analysis.ipynb | jamesgsw/Exploring-Data-and-Analytics-Job-Market-Outlook-in-Singapore-2020 |
**Findings:** As I expected, Python was the most requested skill that employers wanted prospective hires to have, followed closely by SQL. Big data platforms such as Apache Spark and Hadoop, alongside Scala, are in relatively high demand as well. I was very surprised to see that R was not highly requested in the technology industry, but I postulate that R is used more in academic circles.

3.5 Academic Skills

On top of the technical skills required for technology jobs, academic skills matter too, given the heavy use of mathematical concepts in this industry. I'll search through the job descriptions and find the academic skills that companies are looking for.

# Creating dictionary counting the number of times a particular academic skill is called
academic_skills = ['Calculus', 'Database Management',
'Machine Learning', 'Statistics', 'DevOps']
adict1 = {}
for every_skill in academic_skills:
that_sum = df[every_skill].sum()
adict1[every_skill] = that_sum
# Representing these numbers with a dataframe
df_academic_skill = DataFrame(list(adict1.items()), columns=[
'Academic Skills', 'Frequency'])
df_academic_skill["Relative Frequency, %"] = round((df_academic_skill["Frequency"]/total_jobs)*100, 2)
df_academic_skill
# Creating barplot representing the values in the dataframe
sns.set(style="whitegrid")
sns_plot = sns.barplot(x="Academic Skills", y="Frequency", data=df_academic_skill).set_title(
'Academic Skills requested for Job')
sns.set(rc={'figure.figsize': (16, 8)})
sns_plot.figure.savefig("Academic Skills requested for Job.png") | _____no_output_____ | MIT | Part 3. Data Analysis.ipynb | jamesgsw/Exploring-Data-and-Analytics-Job-Market-Outlook-in-Singapore-2020 |
**Findings:** Unsurprisingly, the top academic skill sought by employers is Machine Learning with predictive analysis. However, other academic skill sets such as DevOps, Statistics and Database Management are actually rarely mentioned, and Calculus was not mentioned at all. I postulate that many employers believe these skills should already have been instilled during academic training. Therefore, in the next sub-section, I'll investigate the education level that employers expect.

3.6 Education Level

Education is a big part of our lives, and I want to know what education levels employers are looking for.

df.columns
df.rename({"Bachelors Degreee" : "Bachelors Degree"}, axis=1)
# Creating dictionary counting the number of times a particular Education Level is called
education_level = ['Bachelors Degreee', 'Masters','PhD', 'No Education Specified']
adict2 = {}
for every_level in education_level:
that_sum = df[every_level].sum()
adict2[every_level] = that_sum
adict2
df_education_level = DataFrame(list(adict2.items()), columns=[
'Education Level', 'Frequency'])
df_education_level["Relative Frequency, %"] = round((df_education_level["Frequency"]/total_jobs)*100, 2)
df_education_level
# Creating barplot representing the values in the dataframe
sns.set(style="whitegrid")
sns_plot = sns.barplot(x="Education Level", y="Frequency", data=df_education_level).set_title(
'Education Level requested for Job')
sns.set(rc={'figure.figsize': (16, 8)})
sns_plot.figure.savefig("Minimum Education Level required.png") | _____no_output_____ | MIT | Part 3. Data Analysis.ipynb | jamesgsw/Exploring-Data-and-Analytics-Job-Market-Outlook-in-Singapore-2020 |
**Findings:** I found that most postings for data-driven jobs look for hires with a Bachelors Degree. However, a sizeable number of employers look for Masters and PhD levels of qualification. There's also a sizeable portion of employers who do not specify a university-level qualification, either because they do not require one or because they omitted the education level in the Job Description.

3.7 Job demand by Ownership

Data-related jobs are on the rise, but I want to investigate where this demand comes from. Methodology: To represent the job demand by ownership, I'll use a Treemap graph to visualize the results.

# We drop the rows with null values and count the number of jobs by type of ownership
df_ownership = df[df['Type of ownership'] != '-1']
df_ownership = df_ownership['Type of ownership'].value_counts(
).rename_axis('Ownership').reset_index(name='Number of Jobs')
# Specific number of jobs by the different ownership
df_ownership["Relative Frequency, %"] = round((df_ownership["Number of Jobs"]/total_jobs)*100, 2)
df_ownership
# Creating the Tree Map Visualisation
squarify.plot(sizes=df_ownership['Number of Jobs'],
label=df_ownership['Ownership'], alpha=.8)
plt.show()
plt.savefig('Job demand by Ownership.png')
**Findings:** We found that by ownership, the biggest hirer of data-driven jobs is the private sector, followed by public companies and government firms. It is not surprising that private and public companies are the biggest players, as they are profit driven and want to capitalise on new technology and skill sets that can help streamline their operations.

3.8 Job demand by Industry

Data-related jobs are on the rise, but I want to investigate which industries this demand comes from. Methodology: To represent the job demand by industry, I'll use a Treemap graph to visualize the results.

# We drop the rows with null values and count the number of jobs by industry
df_industry = df[df['Industry'] != '-1']
df_industry = df_industry['Industry'].value_counts().rename_axis(
'Industry').reset_index(name='Number of Jobs')
# Specific number of jobs by the different industry
df_industry["Relative Frequency, %"] = round((df_industry["Number of Jobs"]/total_jobs)*100, 2)
df_industry
# Creating the Tree Map Visualisation
squarify.plot(sizes=df_industry['Number of Jobs'],
label=df_industry['Industry'], alpha=.8)
plt.show()
plt.savefig('Job demand by Industry.png')
**Findings:** I was surprised to find that Government Agencies were the largest employer of the data-driven jobs. The second largest employer by industry is, unsurprisingly, the Internet (Technology) industry. Another surprising result was the Banking and Asset Management industry.

3.9 Rating Distribution

We want to investigate the distribution of company ratings in this technology industry.

# Removing null value for ratings
df_rating = df[df['Rating'] != -1]
sns.set(style="whitegrid")
n, bins, patches = plt.hist(x=df_rating['Rating'], bins='auto',
alpha=0.7, rwidth=0.85)
plt.xlabel('Company Ratings')
plt.ylabel('Frequency')
plt.title('Distribution of Company Ratings')
# Set a clean upper y-axis limit.
maxfreq = n.max()
plt.ylim(ymax=np.ceil(maxfreq / 10) * 10 if maxfreq % 10 else maxfreq + 10)
plt.savefig('Rating Distribution.png')
**Findings:** We find that the average company rating in the technology sector is around 3.75/5.

Word Cloud in Job Description

We want to visualise which words are most frequently repeated in the job descriptions; we use a word cloud algorithm to represent the results below.

nltk.download('stopwords')
nltk.download('punkt')
words = " ".join(df['Job Description'])
def punctuation_stop(text):
"""remove punctuation and stop words"""
filtered = []
stop_words = set(stopwords.words('english'))
word_tokens = word_tokenize(text)
for w in word_tokens:
if w not in stop_words and w.isalpha():
filtered.append(w.lower())
return filtered
words_filtered = punctuation_stop(words)
text = " ".join([ele for ele in words_filtered])
wc = WordCloud(background_color="white", random_state=1,
stopwords=STOPWORDS, max_words=2000, width=800, height=1500)
wc.generate(text)
plt.figure(figsize=[10, 10])
plt.imshow(wc, interpolation="bilinear")
plt.axis('off')
plt.show()
wc.to_file('Job Description Word Cloud.png')

[nltk_data] Downloading package stopwords to /Users/james/nltk_data...
[nltk_data] Package stopwords is already up-to-date!
[nltk_data] Downloading package punkt to /Users/james/nltk_data...
[nltk_data] Package punkt is already up-to-date!
Part 2

from itertools import permutations  # used below when enumerating hole candidates

def dist(A, B):
"""Taxi-driver distance"""
return abs(B[0] - A[0]) + abs(B[1] - A[1])
def move_hole(hole, G, node_grid):
Gx, Gy = G.coord
target = (Gx-1, Gy)
while hole.coord != target:
if hole.coord[0] >= Gx and hole.coord[1] > Gy:
hole = node_grid[hole.coord[0]-1][hole.coord[1]]
elif hole.coord[1] > Gy and hole.coord[0] < Gx:
hole = node_grid[hole.coord[0]][hole.coord[1]-1]
elif hole.coord[0]
G, *_ = [node for node in nodes if node.coord == (maxx, 0)]
T = nodes[0]
maxx, maxy = nodes[-1].coord
node_grid =[nodes[i*(maxy+1):i*(maxy+1)+ maxy+1] for i in range(maxx+1)]
steps = 0
while G.coord != T.coord:
holes = [B for A, B in permutations(nodes, r=2) if A == G and 0 < A.used <= B.avail]
    closest_hole = min(holes, key=lambda A: dist(A.coord, (G.coord[0]-1, G.coord[1])))
    move_hole(closest_hole, G, node_grid)
G, T
holes
node_grid
Reporting using Jupyter Book

[Jupyter Book](https://jupyterbook.org/intro.html) is an open source project for building beautiful, publication-quality books and documents from computational material. In our case it will help us to export our Jupyter Notebooks into nice-to-look-at HTML files.

Installation

- **If you're running this notebook from the environment of today's lesson, `jupyter-book` should already be installed.**
- To install it with conda, use the following command to install it from the `conda-forge` channel:

```bash
conda install -c conda-forge jupyter-book
```

Initialize a template

- Jupyter Book relies on certain files (mostly for configuration). In order to create them, use the `jupyter-book create <path>` command.
- To keep things tidy let's use the `reports` directory.

> Hint: We can execute terminal commands directly inside our jupyter notebook by starting the line with `!`

!jupyter-book create ../reports/jupyterbook
- Before we take a look at the generated files, let's copy over the jupyter notebooks we want to include in our HTML report.

Task:

- Copy `06-00_Processing_of_Tabular_Data.ipynb` and `06-00_Temperature_anomalies.ipynb` into `../reports/jupyterbook`

> Hint: you don't have to leave the Jupyter Universe to do that. Just open the explorer on the left (the folder icon) and use right-click -> copy.

# alternatively use this snippet to copy the desired files
!cp 06-00_Processing_of_Tabular_Data.ipynb 06-00_Temperature_anomalies.ipynb ../reports/jupyterbook/
- Now let us take a look at the files

File inspection

We need to change the content of two files for everything to run smoothly.

_config.yml

- Stores configuration parameters such as the Title, Author and execution behaviour
- You can change `title` and `author` to whatever you like
- The **only parameter we need to change is `execute_notebooks`, which we set to `"off"`**
  - This ensures `jupyter-book` doesn't rerun our notebooks but takes them as they are (with the current outputs)
- The remaining fields can be removed

Sample file:

```yml
# _config.yml
# Book settings
# Learn more at https://jupyterbook.org/customize/config.html
title: Your Title
author: Your Name
logo: logo.png

# Force re-execution of notebooks on each build.
# See https://jupyterbook.org/content/execute.html
execute:
  execute_notebooks: 'off'
```

_toc.yml

- Table of Content file
- Specifies which files to render into HTML
- Make sure you see the jupyter notebook files `06-00_Processing_of_Tabular_Data.ipynb` and `06-00_Temperature_anomalies.ipynb` in `reports/jupyterbook`
- Include both of these files in the Table of Contents

Sample file:

```yml
# Table of content
# Learn more at https://jupyterbook.org/customize/toc.html
- file: 06-00_Processing_of_Tabular_Data.ipynb
- file: 06-00_Temperature_anomalies.ipynb
```

Remaining files:

- `logo.png` will be displayed inside the HTML
- All remaining files are for demonstration purposes only and serve no use for us (you can safely remove them)

Generate HTML files

- In order to generate the html files use `jupyter-book build <path>`

!jupyter-book build ../reports/jupyterbook
Safety Net

- Make sure you specify the `_toc.yml` and `_config.yml` files as specified above
- If it should still fail for you, try running this:

!rm -rf ../reports/jupyterbook
!jupyter-book create ../reports/jupyterbook
!cp 06-00_Processing_of_Tabular_Data.ipynb 06-00_Temperature_anomalies.ipynb ../reports/jupyterbook/
!cp ../src/_solutions/_toc.yml ../src/_solutions/_config.yml ../reports/jupyterbook/
!jupyter-book build ../reports/jupyterbook
Update Log: v14 : Added some new features; local CV has now climbed to 0.87 with just the meta-features and can climb even more. This is the first notebook exploring a high-end score with just the meta-features. I have also removed the embedding layer because the model performs better without it. v15 : Inference added. About this competition: Hello everyone! In this competition, we are asked to classify whether a person has a benign or a malignant melanoma based on images of skin lesions taken from various parts of the body over different periods of time. We have also been given some metadata to improve results. Given this, let's see what we know so far. What we know so far? We know the data is highly imbalanced, with almost 98 percent of images being benign. The discussion forum is filled with threads and kernels about the mystery images (image clusters that are present in the test set but not in the train set). This notebook does not repeat the EDA and discussions of what has already been discovered and explained thoroughly, but for all those who are just getting started with this competition, I am adding all the necessary and important kernels and discussions to get up to speed quickly :-* [EDA kernel by Andrada](https://www.kaggle.com/andradaolteanu/siim-melanoma-competition-eda-augmentations)* [Exceptional kernel giving all the insights about this competition by Laura](https://www.kaggle.com/allunia/don-t-turn-into-a-smoothie-after-the-shake-up)* [Code used to merge external data and data splitting by Alex](https://www.kaggle.com/shonenkov/merge-external-data)* [Best public TensorFlow pipeline with explanation of the best CV strategy by Chris Deotte](https://www.kaggle.com/cdeotte/triple-stratified-kfold-with-tfrecords)* [Mystery Images Discussion Thread by Chris Deotte](https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/168028)Now that you know what the competition is about, the underlying difficulties, and the solutions people have adopted so far, let's understand what this notebook is all about. About this Notebook: We have been given two types of data: one is the images of skin lesions of patients, the other is the tabular metadata. There are three ways of combining these two sources of information :-* Build a CNN image model and find a way to input the tabular data into the CNN image model* Build a tabular data model and find a way to extract image embeddings or image features and input them into the tabular data model* Build 2 separate models for images and metadata and ensemble**We have tried all three, and the third option works the best and gives a significant boost. The next question is what models we can use for the tabular data, and this notebook tries to answer exactly that question.**What if I say you can use a neural network architecture based on attention and transformers especially designed for tabular data, and that too with your own custom loss, custom LR, and all the techniques that you might be applying with your image model? One such architecture which can give very good results IMO is Google's Tabnet. People have already applied Tabnet to this competition using the Pytorch-Tabnet implementation by Sebastien ([@optimo](https://www.kaggle.com/optimo)). The implementation can be found [here](https://github.com/dreamquark-ai/tabnet). 
However, this implementation comes with the following limitations :* We cannot use custom losses or custom LR schedulers* We cannot use custom samplers, which we found to have improved results considerably* We are provided with a scikit-learn-type interface, which makes it really easy to use Tabnet but at the same time takes away the benefits of it being a deep learning model**This notebook also tries to address and solve these limitations** How this Notebook solves the Limitations: Here I show how to use Tabnet as a custom model instead of through the scikit-learn-type interface provided by Pytorch-Tabnet, thanks to Sebastien and his active responses on the GitHub repo. I show how anyone can use Tabnet just like a torchvision or any torch-hub model for any downstream task. I have tried to write the code in such a way that anyone can apply it to their tasks by just changing the dataset and dataloader. Here are the components that any deep learning model needs :-* Dataset + DataLoader* Model* Criterion/Loss* Training Loop* Evaluation Loop* Engine for uniting all of them together Things used specific to this competition* For data, I am using the dataset and folds provided by [@alex](https://www.kaggle.com/shonenkov) [here](https://www.kaggle.com/shonenkov/melanoma-merged-external-data-512x512-jpeg), which were generated using the notebook [here](https://www.kaggle.com/shonenkov/merge-external-data)* Embeddings for categorical variables* Soft Margin Focal Loss, because it has seemed to work the best so far; you can play around with that* Balance sampler for balancing classes in each batch* A plotting function adapted from Chris Deotte's training kernel* ReduceOnPlateau LR scheduler **I hope you all like my efforts and find this kernel useful** Before diving into code, if you want to understand how Tabnet works, you can watch the following talk given by Sebastien | from IPython.display import IFrame, YouTubeVideo
YouTubeVideo('ysBaZO8YmX8',width=600, height=400) | _____no_output_____ | MIT | pl/tabnet.ipynb | ronaldokun/isic2019 |
If you want to do it the scikit-learn way, here is a [notebook](https://www.kaggle.com/tanulsingh077/achieving-sota-results-with-tabnet) where I explain how to do that | #Installing Pytorch-Tabnet
#!pip install pytorch-tabnet
import numpy as np
import pandas as pd
import random
import os
import seaborn as sns
from tqdm.autonotebook import tqdm
from fastprogress import master_bar, progress_bar
tqdm.pandas()
from scipy.stats import skew
import pickle
import glob
#Visuals
import matplotlib.pyplot as plt
#torch
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import Dataset,DataLoader
from catalyst.data.sampler import BalanceClassSampler
#CV2
import cv2
#Importing Tabnet
from pytorch_tabnet.tab_network import TabNet
#error
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import roc_auc_score | /usr/local/lib/python3.6/dist-packages/statsmodels/tools/_testing.py:19: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.
import pandas.util.testing as tm
/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:6: TqdmExperimentalWarning: Using `tqdm.autonotebook.tqdm` in notebook mode. Use `tqdm.tqdm` instead to force console mode (e.g. in jupyter console)
| MIT | pl/tabnet.ipynb | ronaldokun/isic2019 |
Utils: Since we are writing a custom model, we need early stopping, which Pytorch-Tabnet's built-in interface otherwise provides. The following EarlyStopping implementation can monitor both minimization and maximization of a metric. | class EarlyStopping:
def __init__(self, patience=7, mode="max", delta=0.001,verbose=True):
self.patience = patience
self.counter = 0
self.mode = mode
self.best_score = None
self.early_stop = False
self.delta = delta
self.verbose = verbose
if self.mode == "min":
self.val_score = np.Inf
else:
self.val_score = -np.Inf
def __call__(self, epoch_score, model, model_path):
if self.mode == "min":
score = -1.0 * epoch_score
else:
score = np.copy(epoch_score)
if self.best_score is None:
self.best_score = score
self.save_checkpoint(epoch_score, model, model_path)
elif score < self.best_score + self.delta:
self.counter += 1
if self.verbose:
print('EarlyStopping counter: {} out of {}'.format(self.counter, self.patience))
if self.counter >= self.patience:
self.early_stop = True
else:
self.best_score = score
self.save_checkpoint(epoch_score, model, model_path)
self.counter = 0
def save_checkpoint(self, epoch_score, model, model_path):
if np.isfinite(epoch_score):  # save only for finite scores; NaN and +/-inf are skipped
if self.verbose:
print('Validation score improved ({} --> {}). Saving model!'.format(self.val_score, epoch_score))
torch.save(model.state_dict(), model_path)
self.val_score = epoch_score | _____no_output_____ | MIT | pl/tabnet.ipynb | ronaldokun/isic2019 |
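A minimal usage sketch of the class above (toy model and made-up AUC values, purely illustrative): it is called once per epoch with the monitored score, and `early_stop` flips to True after `patience` epochs without improvement. | toy_model = nn.Linear(3, 2) # stand-in for a real model
es_demo = EarlyStopping(patience=3, mode="max", verbose=False)
for epoch, auc in enumerate([0.70, 0.72, 0.71, 0.71, 0.71]):
es_demo(auc, toy_model, "toy_best.pth") # checkpoints only on improvement
if es_demo.early_stop:
print('Maximum patience reached at epoch {}'.format(epoch))
break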
Configuration: We define all the configuration used throughout the notebook here | BATCH_SIZE = 1024
EPOCHS = 150
LR = 0.02
seed = 2020 # seed for reproducible results
patience = 50
device = torch.device('cuda')
FOLDS = 5 | _____no_output_____ | MIT | pl/tabnet.ipynb | ronaldokun/isic2019 |
Seed | def seed_everything(seed):
random.seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.deterministic = False # together with benchmark=True, trades exact reproducibility for speed
torch.backends.cudnn.benchmark = True
seed_everything(seed) | _____no_output_____ | MIT | pl/tabnet.ipynb | ronaldokun/isic2019 |
Data Preparation and Feature Engineering: Here we load the data and prepare it for input to the model | # Defining categorical variables: their indexes, embedding dimensions and the number of classes each has
df = pd.read_csv('/data/full/folds_13062020.csv')
df.head()
# df = pd.concat([df, pd.get_dummies(df.source)], axis=1)
# df = pd.concat([df, pd.get_dummies(df.anatom_site_general_challenge)], axis=1)
# df.head()
# df['age_approx'] = (df.age_approx - df.age_approx.min()) / df.age_approx.max()
# df.head()
# df = pd.concat([df, pd.get_dummies(df.sex)], axis=1)
# df.drop('unknown', axis=1, inplace=True)
# df.head()
# features = df.iloc[:, -11:].columns.tolist() + ['age_approx']
features = ['sex', 'age_approx', 'anatom_site_general_challenge']
cat = ['sex', 'anatom_site_general_challenge']
target = 'target'
categorical_columns = []
for col in cat:
print(col, df[col].nunique())
l_enc = LabelEncoder()
df[col] = l_enc.fit_transform(df[col].values)
#SAVING LABEL _ ENC
output = open(f'/out/{col}_encoder.pkl', 'wb')
pickle.dump(l_enc, output)
output.close()
categorical_columns.append(col)
class MelanomaDataset(Dataset):
def __init__(self,features,target):
self.features = features
self.target = target
def __len__(self):
return len(self.features)
def __getitem__(self,idx):
return{
'features': torch.tensor(self.features[idx],dtype=torch.float),
'target': self.one_hot(2, self.target[idx])
}
def get_targets(self):
return list(self.target)
@staticmethod
def one_hot(size, target):
tensor = torch.zeros(size, dtype=torch.float32)
tensor[target] = 1.
return tensor | _____no_output_____ | MIT | pl/tabnet.ipynb | ronaldokun/isic2019 |
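A quick sanity check of the dataset wrapper (a sketch; it reuses the `df`, `features` and `target` objects defined above and inspects a single item): | demo_ds = MelanomaDataset(df[features].values, df[target].values)
sample = demo_ds[0]
print(sample['features'].shape, sample['target']) # torch.Size([3]) and a length-2 one-hot target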
Model: Here we build our custom Tabnet model | class CustomTabnet(nn.Module):
def __init__(self, input_dim, output_dim,n_d=8, n_a=8,n_steps=3, gamma=1.3,
cat_idxs=[], cat_dims=[], cat_emb_dim=1,n_independent=2, n_shared=2,
momentum=0.02,mask_type="sparsemax"):
super(CustomTabnet, self).__init__()
self.tabnet = TabNet(input_dim=input_dim,output_dim=output_dim, n_d=n_d, n_a=n_a,n_steps=n_steps, gamma=gamma,
cat_idxs=cat_idxs, cat_dims=cat_dims, cat_emb_dim=cat_emb_dim,n_independent=n_independent,
n_shared=n_shared, momentum=momentum,mask_type="sparsemax")
def forward(self, x):
return self.tabnet(x) | _____no_output_____ | MIT | pl/tabnet.ipynb | ronaldokun/isic2019 |
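A quick smoke test of the wrapper (random inputs; the dimensions are illustrative, not the ones used for training): as the training code below relies on, the forward pass returns a tuple of logits and TabNet's internal sparsity loss term. | demo_net = CustomTabnet(input_dim=3, output_dim=2)
logits, m_loss = demo_net(torch.randn(5, 3)) # batch of 5 rows, 3 features each
print(logits.shape) # torch.Size([5, 2])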
Loss: Defining the Soft Margin Focal Loss, which is used as the criterion | class SoftMarginFocalLoss(nn.Module):
def __init__(self, margin=0.2, gamma=2):
super(SoftMarginFocalLoss, self).__init__()
self.gamma = gamma
self.margin = margin
self.weight_pos = 2
self.weight_neg = 1
def forward(self, inputs, targets):
em = np.exp(self.margin)
log_pos = -F.logsigmoid(inputs)
log_neg = -F.logsigmoid(-inputs)
log_prob = targets*log_pos + (1-targets)*log_neg
prob = torch.exp(-log_prob)
margin = torch.log(em + (1-em)*prob)
weight = targets*self.weight_pos + (1-targets)*self.weight_neg
loss = margin + weight * (1 - prob) ** self.gamma * log_prob # soft-margin term computed above plus the focal-weighted log-loss
loss = loss.mean()
return loss | _____no_output_____ | MIT | pl/tabnet.ipynb | ronaldokun/isic2019 |
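A quick numeric check on dummy logits (illustrative values only): confident correct predictions should yield a small loss, since the focal factor (1 - prob) ** gamma down-weights easy examples. | crit_demo = SoftMarginFocalLoss()
logits = torch.tensor([[4.0, -4.0], [-4.0, 4.0]]) # confidently correct
targets = torch.tensor([[1.0, 0.0], [0.0, 1.0]])
print(crit_demo(logits, targets)) # small loss
print(crit_demo(-logits, targets)) # confidently wrong, much larger loss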
Training: Our custom training loop | def train_fn(dataloader,model,criterion,optimizer,device,scheduler,epoch):
model.train()
train_targets=[]
train_outputs=[]
for bi,d in enumerate(dataloader):
features = d['features']
target = d['target']
features = features.to(device, dtype=torch.float)
target = target.to(device, dtype=torch.float)
optimizer.zero_grad()
output,_ = model(features)
loss = criterion(output,target)
loss.backward()
optimizer.step()
if scheduler is not None:
scheduler.step()
output = 1 - F.softmax(output,dim=-1).cpu().detach().numpy()[:,0] # probability of the positive (melanoma) class
train_targets.extend(target.cpu().detach().numpy().argmax(axis=1).astype(int).tolist())
train_outputs.extend(output)
return loss.item(),train_outputs,train_targets | _____no_output_____ | MIT | pl/tabnet.ipynb | ronaldokun/isic2019 |
Evaluation: Custom evaluation loop | def eval_fn(data_loader,model,criterion,device):
fin_targets=[]
fin_outputs=[]
model.eval()
with torch.no_grad():
for bi, d in enumerate(data_loader):
features = d["features"]
target = d["target"]
features = features.to(device, dtype=torch.float)
target = target.to(device, dtype=torch.float)
outputs,_ = model(features)
loss_eval = criterion(outputs,target)
outputs = 1 - F.softmax(outputs,dim=-1).cpu().detach().numpy()[:,0] # probability of the positive class
fin_targets.extend(target.cpu().detach().numpy().argmax(axis=1).astype(int).tolist())
fin_outputs.extend(outputs)
return loss_eval.item(),fin_outputs,fin_targets | _____no_output_____ | MIT | pl/tabnet.ipynb | ronaldokun/isic2019 |
Plotter: Function for plotting the losses and AUC scores for each fold | def print_history(fold,history,num_epochs=EPOCHS):
plt.figure(figsize=(15,5))
plt.plot(
np.arange(num_epochs),
history['train_history_auc'],
'-o',
label='Train AUC',
color='#ff7f0e'
)
plt.plot(
np.arange(num_epochs),
history['val_history_auc'],
'-o',
label='Val AUC',
color='#1f77b4'
)
x = np.argmax(history['val_history_auc'])
y = np.max(history['val_history_auc'])
xdist = plt.xlim()[1] - plt.xlim()[0]
ydist = plt.ylim()[1] - plt.ylim()[0]
plt.scatter(x, y, s=200, color='#1f77b4')
plt.text(
x-0.03*xdist,
y-0.13*ydist,
'max auc\n%.2f'%y,
size=14
)
plt.ylabel('AUC', size=14)
plt.xlabel('Epoch', size=14)
plt.legend(loc=2)
plt2 = plt.gca().twinx()
plt2.plot(
np.arange(num_epochs),
history['train_history_loss'],
'-o',
label='Train Loss',
color='#2ca02c'
)
plt2.plot(
np.arange(num_epochs),
history['val_history_loss'],
'-o',
label='Val Loss',
color='#d62728'
)
x = np.argmin(history['val_history_loss'])
y = np.min(history['val_history_loss'])
ydist = plt.ylim()[1] - plt.ylim()[0]
plt.scatter(x, y, s=200, color='#d62728')
plt.text(
x-0.03*xdist,
y+0.05*ydist,
'min loss',
size=14
)
plt.ylabel('Loss', size=14)
plt.title(f'FOLD {fold + 1}',size=18)
plt.legend(loc=3)
plt.show() | _____no_output_____ | MIT | pl/tabnet.ipynb | ronaldokun/isic2019 |
Engine: Where we unite everything | def run(fold):
df_train = df[df.fold != fold]
df_valid = df[df.fold == fold]
# Defining DataSet
train_dataset = MelanomaDataset(
df_train[features].values,
df_train[target].values
)
valid_dataset = MelanomaDataset(
df_valid[features].values,
df_valid[target].values
)
# Defining DataLoader with BalanceClass Sampler
train_loader = DataLoader(
train_dataset,
sampler=BalanceClassSampler(
labels=train_dataset.get_targets(),
mode="downsampling",
),
batch_size=BATCH_SIZE,
pin_memory=True,
drop_last=True,
num_workers=4
)
valid_loader = torch.utils.data.DataLoader(
valid_dataset,
batch_size=BATCH_SIZE,
num_workers=4,
shuffle=False,
pin_memory=True,
drop_last=False,
)
# Defining Device
device = torch.device("cuda")
# Defining Model for specific fold
model = CustomTabnet(input_dim = len(features),
output_dim = 2,
n_d=32,
n_a=32,
n_steps=4,
gamma=1.6,
cat_emb_dim=2,
n_independent=2,
n_shared=2,
momentum=0.02,
mask_type="sparsemax")
model.to(device)
# Defining criterion
criterion = SoftMarginFocalLoss()
criterion.to(device)
# Defining Optimizer with weight decay to params other than bias and layer norms
param_optimizer = list(model.named_parameters())
no_decay = ["bias", "LayerNorm.bias", "LayerNorm.weight"]
optimizer_parameters = [
{'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)], 'weight_decay': 0.001},
{'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)], 'weight_decay': 0.0},
]
optimizer = torch.optim.AdamW(optimizer_parameters, lr=LR)
# Defining LR scheduler
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='max',
factor=0.5, patience=10, verbose=True,
threshold=0.0001, threshold_mode='rel',
cooldown=0, min_lr=0, eps=1e-08)
# Defining early stopping object
es = EarlyStopping(patience=patience,verbose=False)
# History dictionary to store everything
history = {
'train_history_loss': [],
'train_history_auc': [],
'val_history_loss': [],
'val_history_auc': [],
}
# THE ENGINE LOOP
mb = progress_bar(range(EPOCHS), total=EPOCHS)
for epoch in mb:
train_loss,train_out,train_targets = train_fn(train_loader, model,criterion, optimizer, device,scheduler=None,epoch=epoch)
val_loss,outputs, targets = eval_fn(valid_loader, model, criterion,device)
train_auc = roc_auc_score(train_targets, train_out)
auc_score = roc_auc_score(targets, outputs)
scheduler.step(auc_score)
#mb.set_postfix(Train_Loss=train_loss,Train_AUC_SCORE = train_auc,Valid_Loss = val_loss,Valid_AUC_SCORE = auc_score)
mb.comment = f'train Loss: {train_loss:.4f}, valid_loss: {val_loss:.4f}, auc_score: {auc_score:.4f}'
history['train_history_loss'].append(train_loss)
history['train_history_auc'].append(train_auc)
history['val_history_loss'].append(val_loss)
history['val_history_auc'].append(auc_score)
es(auc_score,model,f'model_{fold}.pth') # monitor validation AUC, consistent with the scheduler and the EarlyStopping default mode="max"
if es.early_stop:
print('Maximum Patience {} Reached , Early Stopping'.format(patience))
break
print_history(fold,history,num_epochs=epoch+1)
run(fold=0)
run(fold=1)
run(fold=2)
run(fold=3)
run(fold=4) | _____no_output_____ | MIT | pl/tabnet.ipynb | ronaldokun/isic2019 |
Inference | df_test = pd.read_csv('/data/full/test.csv')
df_test['anatom_site_general_challenge'].fillna('unknown',inplace=True)
df_test['target'] = 0
# df_test.head()
# df_test['age_approx'] = (df_test.age_approx - df_test.age_approx.min()) / df_test.age_approx.max()
# df_test = pd.concat([df_test, pd.get_dummies(df_test.sex), pd.get_dummies(df_test.anatom_site_general_challenge)], axis=1)
# df_test['ISIC19'] = 0
# df_test['ISIC20'] = 1
# df_test['target'] = 0
# df_test['lateral torso'] = 0
features = ['sex', 'age_approx', 'anatom_site_general_challenge']
cat = ['sex', 'anatom_site_general_challenge']
target = 'target'
categorical_columns = []
for col in cat:
print(col, df_test[col].nunique())
pkl_file = open(f'/out/{col}_encoder.pkl', 'rb')
l_enc = pickle.load(pkl_file)
df_test[col] = l_enc.transform(df_test[col].values)
pkl_file.close()
def load_model():
models = []
paths = glob.glob('/out/model_*')
for path in tqdm(paths,total=len(paths)):
model = CustomTabnet(input_dim = len(features),
output_dim = 2,
n_d=32,
n_a=32,
n_steps=4,
gamma=1.6,
cat_emb_dim=2,
n_independent=2,
n_shared=2,
momentum=0.02,
mask_type="sparsemax")
model.to(device)
loader = torch.load(path)
model.load_state_dict(loader)
models.append(model)
return models
models = load_model()
def make_prediction(data_loader):
predictions = np.zeros((len(df_test),FOLDS))
for i,model in enumerate(models):
fin_outputs=[]
model.eval()
with torch.no_grad():
for bi, d in enumerate(data_loader):
features = d["features"]
target = d["target"]
features = features.to(device, dtype=torch.float)
outputs,_ = model(features)
outputs = 1 - F.softmax(outputs,dim=-1).cpu().detach().numpy()[:,0]
fin_outputs.extend(outputs)
predictions[:,i] = fin_outputs
return predictions
test_dataset = MelanomaDataset(
df_test[features].values,
df_test[target].values
)
test_loader = torch.utils.data.DataLoader(
test_dataset,
batch_size=BATCH_SIZE,
num_workers=4,
shuffle=False,
pin_memory=True,
drop_last=False,
)
pred = make_prediction(test_loader) | _____no_output_____ | MIT | pl/tabnet.ipynb | ronaldokun/isic2019 |
Writing Submission File | pred = pred.mean(axis=-1) # average the predictions across the 5 folds
pred
pred.min()
ss = pd.read_csv('/data/full/sample_submission.csv')
ss['target'] = pred
#ss.to_csv('/out/tabnet_submission.csv',index=False)
ss.head()
#!kaggle competitions submit -c siim-isic-melanoma-classification -f submission.csv -m "Tabnet One Hot" | _____no_output_____ | MIT | pl/tabnet.ipynb | ronaldokun/isic2019 |
Day 5: Poisson Distribution II https://www.hackerrank.com/challenges/s10-poisson-distribution-2 Objective In this challenge, we go further with Poisson distributions. We recommend reviewing the previous challenge's Tutorial before attempting this problem. Task The manager of an industrial plant is planning to buy a machine of either type $A$ or type $B$. For each day's operation:- The number of repairs, $X$, that machine $A$ needs is a Poisson random variable with mean $0.88$. The daily cost of operating $A$ is $C_A = 160 + 40X^2$.- The number of repairs, $Y$, that machine $B$ needs is a Poisson random variable with mean $1.55$. The daily cost of operating $B$ is $C_B = 128 + 40Y^2$.Assume that the repairs take a negligible amount of time and the machines are maintained nightly to ensure that they operate like new at the start of each day. Find and print the expected daily cost for each machine. Input FormatA single line comprised of $2$ space-separated values denoting the respective means for $X$ and $Y$:```0.88 1.55```If you do not wish to read this information from stdin, you can hard-code it into your program. Output FormatThere are two lines of output. Your answers must be rounded to a scale of $3$ decimal places (i.e., $12.345$ format):On the first line, print the expected daily cost of machine $A$.On the second line, print the expected daily cost of machine $B$. | a_m, b_m = [float(i) for i in input().split(" ")]
print(round(160 + 40 * (a_m + a_m ** 2), 3))
print(round(128 + 40 * (b_m + b_m ** 2), 3)) | 0.88 1.55
226.176
286.1
| MIT | python/10daysOfStatistics/Day_5_Poisson_Distribution_II .ipynb | muatik/interactive-coding-challenges |
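The closed forms in the code follow from the Poisson identity $E[X^2] = \text{Var}(X) + E[X]^2 = \lambda + \lambda^2$. A quick simulation sketch (numpy; the seed is arbitrary) confirming it for machine $A$: | import numpy as np
rng = np.random.default_rng(0)
lam = 0.88
x = rng.poisson(lam, size=1_000_000)
print((x ** 2).mean(), lam + lam ** 2) # both approximately 1.6544, so E[cost] = 160 + 40 * 1.6544 = 226.176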
Build a Conditional GAN Goals: In this notebook, you're going to make a conditional GAN in order to generate hand-written images of digits, conditioned on the digit to be generated (the class vector). This will let you choose what digit you want to generate. You'll then do some exploration of the generated images to visualize what the noise and class vectors mean. Learning Objectives: 1. Learn the technical difference between a conditional and unconditional GAN. 2. Understand the distinction between the class and noise vector in a conditional GAN. Getting Started: For this assignment, you will be using the MNIST dataset again, but there's nothing stopping you from applying this generator code to produce images of animals conditioned on the species or pictures of faces conditioned on facial characteristics. Note that this assignment requires no changes to the architectures of the generator or discriminator, only changes to the data passed to both. The generator will no longer take `z_dim` as an argument, but `input_dim` instead, since you need to pass in both the noise and class vectors. In addition to good variable naming, this also means that you can use the generator and discriminator code you have previously written with different parameters. You will begin by importing the necessary libraries and building the generator and discriminator. Packages and Visualization | import torch
from torch import nn
from tqdm.auto import tqdm
from torchvision import transforms
from torchvision.datasets import MNIST
from torchvision.utils import make_grid
from torch.utils.data import DataLoader
import matplotlib.pyplot as plt
torch.manual_seed(0) # Set for our testing purposes, please do not change!
def show_tensor_images(image_tensor, num_images=25, size=(1, 28, 28), nrow=5, show=True):
'''
Function for visualizing images: Given a tensor of images, number of images, and
size per image, plots and prints the images in an uniform grid.
'''
image_tensor = (image_tensor + 1) / 2
image_unflat = image_tensor.detach().cpu()
image_grid = make_grid(image_unflat[:num_images], nrow=nrow)
plt.imshow(image_grid.permute(1, 2, 0).squeeze())
if show:
plt.show() | _____no_output_____ | MIT | Course1 - Build Basic Generative Adversarial Networks (GANs)/Week4/C1W4A_Build_a_Conditional_GAN_Original.ipynb | RamzanShahidkhan/Generative-Adversarial-Networks-Specialization |
Generator and Noise | class Generator(nn.Module):
'''
Generator Class
Values:
input_dim: the dimension of the input vector, a scalar
im_chan: the number of channels in the images, fitted for the dataset used, a scalar
(MNIST is black-and-white, so 1 channel is your default)
hidden_dim: the inner dimension, a scalar
'''
def __init__(self, input_dim=10, im_chan=1, hidden_dim=64):
super(Generator, self).__init__()
self.input_dim = input_dim
# Build the neural network
self.gen = nn.Sequential(
self.make_gen_block(input_dim, hidden_dim * 4),
self.make_gen_block(hidden_dim * 4, hidden_dim * 2, kernel_size=4, stride=1),
self.make_gen_block(hidden_dim * 2, hidden_dim),
self.make_gen_block(hidden_dim, im_chan, kernel_size=4, final_layer=True),
)
def make_gen_block(self, input_channels, output_channels, kernel_size=3, stride=2, final_layer=False):
'''
Function to return a sequence of operations corresponding to a generator block of DCGAN;
a transposed convolution, a batchnorm (except in the final layer), and an activation.
Parameters:
input_channels: how many channels the input feature representation has
output_channels: how many channels the output feature representation should have
kernel_size: the size of each convolutional filter, equivalent to (kernel_size, kernel_size)
stride: the stride of the convolution
final_layer: a boolean, true if it is the final layer and false otherwise
(affects activation and batchnorm)
'''
if not final_layer:
return nn.Sequential(
nn.ConvTranspose2d(input_channels, output_channels, kernel_size, stride),
nn.BatchNorm2d(output_channels),
nn.ReLU(inplace=True),
)
else:
return nn.Sequential(
nn.ConvTranspose2d(input_channels, output_channels, kernel_size, stride),
nn.Tanh(),
)
def forward(self, noise):
'''
Function for completing a forward pass of the generator: Given a noise tensor,
returns generated images.
Parameters:
noise: a noise tensor with dimensions (n_samples, input_dim)
'''
x = noise.view(len(noise), self.input_dim, 1, 1)
return self.gen(x)
def get_noise(n_samples, input_dim, device='cpu'):
'''
Function for creating noise vectors: Given the dimensions (n_samples, input_dim)
creates a tensor of that shape filled with random numbers from the normal distribution.
Parameters:
n_samples: the number of samples to generate, a scalar
input_dim: the dimension of the input vector, a scalar
device: the device type
'''
return torch.randn(n_samples, input_dim, device=device) | _____no_output_____ | MIT | Course1 - Build Basic Generative Adversarial Networks (GANs)/Week4/C1W4A_Build_a_Conditional_GAN_Original.ipynb | RamzanShahidkhan/Generative-Adversarial-Networks-Specialization |
Discriminator | class Discriminator(nn.Module):
'''
Discriminator Class
Values:
im_chan: the number of channels in the images, fitted for the dataset used, a scalar
(MNIST is black-and-white, so 1 channel is your default)
hidden_dim: the inner dimension, a scalar
'''
def __init__(self, im_chan=1, hidden_dim=64):
super(Discriminator, self).__init__()
self.disc = nn.Sequential(
self.make_disc_block(im_chan, hidden_dim),
self.make_disc_block(hidden_dim, hidden_dim * 2),
self.make_disc_block(hidden_dim * 2, 1, final_layer=True),
)
def make_disc_block(self, input_channels, output_channels, kernel_size=4, stride=2, final_layer=False):
'''
Function to return a sequence of operations corresponding to a discriminator block of the DCGAN;
a convolution, a batchnorm (except in the final layer), and an activation (except in the final layer).
Parameters:
input_channels: how many channels the input feature representation has
output_channels: how many channels the output feature representation should have
kernel_size: the size of each convolutional filter, equivalent to (kernel_size, kernel_size)
stride: the stride of the convolution
final_layer: a boolean, true if it is the final layer and false otherwise
(affects activation and batchnorm)
'''
if not final_layer:
return nn.Sequential(
nn.Conv2d(input_channels, output_channels, kernel_size, stride),
nn.BatchNorm2d(output_channels),
nn.LeakyReLU(0.2, inplace=True),
)
else:
return nn.Sequential(
nn.Conv2d(input_channels, output_channels, kernel_size, stride),
)
def forward(self, image):
'''
Function for completing a forward pass of the discriminator: Given an image tensor,
returns a 1-dimension tensor representing fake/real.
Parameters:
image: a flattened image tensor with dimension (im_chan)
'''
disc_pred = self.disc(image)
return disc_pred.view(len(disc_pred), -1) | _____no_output_____ | MIT | Course1 - Build Basic Generative Adversarial Networks (GANs)/Week4/C1W4A_Build_a_Conditional_GAN_Original.ipynb | RamzanShahidkhan/Generative-Adversarial-Networks-Specialization |
Class Input: In conditional GANs, the input vector for the generator will also need to include the class information. The class is represented using a one-hot encoded vector where its length is the number of classes and each index represents a class. The vector is all 0's with a 1 at the chosen class. Given the labels of multiple images (e.g. from a batch) and the number of classes, please create one-hot vectors for each label. There is a function within the PyTorch functional library that can help you. Optional hints for get_one_hot_labels: 1. This code can be done in one line. 2. The documentation for [F.one_hot](https://pytorch.org/docs/stable/nn.functional.html#torch.nn.functional.one_hot) may be helpful. | # UNQ_C1 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION: get_one_hot_labels
import torch.nn.functional as F
def get_one_hot_labels(labels, n_classes):
'''
Function for creating one-hot vectors for the labels, returns a tensor of shape (?, num_classes).
Parameters:
labels: tensor of labels from the dataloader, size (?)
n_classes: the total number of classes in the dataset, an integer scalar
'''
#### START CODE HERE ####
return F.one_hot(labels, n_classes) # one-hot encode each label into a length-n_classes vector
#### END CODE HERE ####
assert (
get_one_hot_labels(
labels=torch.Tensor([[0, 2, 1]]).long(),
n_classes=3
).tolist() ==
[[
[1, 0, 0],
[0, 0, 1],
[0, 1, 0]
]]
)
print("Success!") | _____no_output_____ | MIT | Course1 - Build Basic Generative Adversarial Networks (GANs)/Week4/C1W4A_Build_a_Conditional_GAN_Original.ipynb | RamzanShahidkhan/Generative-Adversarial-Networks-Specialization |
Next, you need to be able to concatenate the one-hot class vector to the noise vector before giving it to the generator. You will also need to do this when adding the class channels to the discriminator. To do this, you will need to write a function that combines two vectors. Remember that you need to ensure that the vectors are the same type: floats. Again, you can look to the PyTorch library for help. Optional hints for combine_vectors: 1. This code can also be written in one line. 2. The documentation for [torch.cat](https://pytorch.org/docs/master/generated/torch.cat.html) may be helpful. 3. Specifically, you might want to look at what the `dim` argument of `torch.cat` does. | # UNQ_C2 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION: combine_vectors
def combine_vectors(x, y):
'''
Function for combining two vectors with shapes (n_samples, ?) and (n_samples, ?).
Parameters:
x: (n_samples, ?) the first vector.
In this assignment, this will be the noise vector of shape (n_samples, z_dim),
but you shouldn't need to know the second dimension's size.
y: (n_samples, ?) the second vector.
Once again, in this assignment this will be the one-hot class vector
with the shape (n_samples, n_classes), but you shouldn't assume this in your code.
'''
# Note: Make sure this function outputs a float no matter what inputs it receives
#### START CODE HERE ####
combined = torch.cat((x.float(), y.float()), dim=1) # concatenate along dim 1, casting both inputs to float
#### END CODE HERE ####
return combined
combined = combine_vectors(torch.tensor([[1, 2], [3, 4]]), torch.tensor([[5, 6], [7, 8]]));
# Check exact order of elements
assert torch.all(combined == torch.tensor([[1, 2, 5, 6], [3, 4, 7, 8]]))
# Tests that items are of float type
assert (type(combined[0][0].item()) == float)
# Check shapes
combined = combine_vectors(torch.randn(1, 4, 5), torch.randn(1, 8, 5));
assert tuple(combined.shape) == (1, 12, 5)
assert tuple(combine_vectors(torch.randn(1, 10, 12).long(), torch.randn(1, 20, 12).long()).shape) == (1, 30, 12)
print("Success!") | _____no_output_____ | MIT | Course1 - Build Basic Generative Adversarial Networks (GANs)/Week4/C1W4A_Build_a_Conditional_GAN_Original.ipynb | RamzanShahidkhan/Generative-Adversarial-Networks-Specialization |
Training: Now you can start to put it all together! First, you will define some new parameters:* mnist_shape: the number of pixels in each MNIST image, which has dimensions 28 x 28 and one channel (because it's black-and-white) so 1 x 28 x 28* n_classes: the number of classes in MNIST (10, since there are the digits from 0 to 9) | mnist_shape = (1, 28, 28)
n_classes = 10 | _____no_output_____ | MIT | Course1 - Build Basic Generative Adversarial Networks (GANs)/Week4/C1W4A_Build_a_Conditional_GAN_Original.ipynb | RamzanShahidkhan/Generative-Adversarial-Networks-Specialization |
And you also include the same parameters from previous assignments: * criterion: the loss function * n_epochs: the number of times you iterate through the entire dataset when training * z_dim: the dimension of the noise vector * display_step: how often to display/visualize the images * batch_size: the number of images per forward/backward pass * lr: the learning rate * device: the device type | criterion = nn.BCEWithLogitsLoss()
n_epochs = 200
z_dim = 64
display_step = 500
batch_size = 128
lr = 0.0002
device = 'cuda'
transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,)),
])
dataloader = DataLoader(
MNIST('.', download=False, transform=transform),
batch_size=batch_size,
shuffle=True) | _____no_output_____ | MIT | Course1 - Build Basic Generative Adversarial Networks (GANs)/Week4/C1W4A_Build_a_Conditional_GAN_Original.ipynb | RamzanShahidkhan/Generative-Adversarial-Networks-Specialization |
Then, you can initialize your generator, discriminator, and optimizers. To do this, you will need to update the input dimensions for both models. For the generator, you will need to calculate the size of the input vector; recall that for conditional GANs, the generator's input is the noise vector concatenated with the class vector. For the discriminator, you need to add a channel for every class. | # UNQ_C3 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION: get_input_dimensions
def get_input_dimensions(z_dim, mnist_shape, n_classes):
'''
Function for getting the size of the conditional input dimensions
from z_dim, the image shape, and number of classes.
Parameters:
z_dim: the dimension of the noise vector, a scalar
mnist_shape: the shape of each MNIST image as (C, W, H), which is (1, 28, 28)
n_classes: the total number of classes in the dataset, an integer scalar
(10 for MNIST)
Returns:
generator_input_dim: the input dimensionality of the conditional generator,
which takes the noise and class vectors
discriminator_im_chan: the number of input channels to the discriminator
(e.g. C x 28 x 28 for MNIST)
'''
#### START CODE HERE ####
generator_input_dim = z_dim + n_classes # noise vector concatenated with the one-hot class vector
discriminator_im_chan = mnist_shape[0] + n_classes # one extra image channel per class
#### END CODE HERE ####
return generator_input_dim, discriminator_im_chan
def test_input_dims():
gen_dim, disc_dim = get_input_dimensions(23, (12, 23, 52), 9)
assert gen_dim == 32
assert disc_dim == 21
test_input_dims()
print("Success!")
generator_input_dim, discriminator_im_chan = get_input_dimensions(z_dim, mnist_shape, n_classes)
gen = Generator(input_dim=generator_input_dim).to(device)
gen_opt = torch.optim.Adam(gen.parameters(), lr=lr)
disc = Discriminator(im_chan=discriminator_im_chan).to(device)
disc_opt = torch.optim.Adam(disc.parameters(), lr=lr)
def weights_init(m):
if isinstance(m, nn.Conv2d) or isinstance(m, nn.ConvTranspose2d):
torch.nn.init.normal_(m.weight, 0.0, 0.02)
if isinstance(m, nn.BatchNorm2d):
torch.nn.init.normal_(m.weight, 0.0, 0.02)
torch.nn.init.constant_(m.bias, 0)
gen = gen.apply(weights_init)
disc = disc.apply(weights_init) | _____no_output_____ | MIT | Course1 - Build Basic Generative Adversarial Networks (GANs)/Week4/C1W4A_Build_a_Conditional_GAN_Original.ipynb | RamzanShahidkhan/Generative-Adversarial-Networks-Specialization |
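You can sanity-check the initialization with a quick sketch: the empirical statistics of any convolution's weights should be roughly mean 0 and standard deviation 0.02 (`gen.gen[0][0]` indexes the first block's transposed convolution): | w = gen.gen[0][0].weight
print(w.mean().item(), w.std().item()) # approximately 0.0 and 0.02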
Now to train, you would like both your generator and your discriminator to know what class of image should be generated. There are a few locations where you will need to implement code. For example, if you're generating a picture of the number "1", you would need to: 1. Tell that to the generator, so that it knows it should be generating a "1" 2. Tell that to the discriminator, so that it knows it should be looking at a "1". If the discriminator is told it should be looking at a 1 but sees something that's clearly an 8, it can guess that it's probably fake. There are no explicit unit tests here -- if this block of code runs and you don't change any of the other variables, then you've done it correctly! | # UNQ_C4 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED CELL
cur_step = 0
generator_losses = []
discriminator_losses = []
#UNIT TEST NOTE: Initializations needed for grading
noise_and_labels = False
fake = False
fake_image_and_labels = False
real_image_and_labels = False
disc_fake_pred = False
disc_real_pred = False
for epoch in range(n_epochs):
# Dataloader returns the batches and the labels
for real, labels in tqdm(dataloader):
cur_batch_size = len(real)
# Flatten the batch of real images from the dataset
real = real.to(device)
one_hot_labels = get_one_hot_labels(labels.to(device), n_classes)
image_one_hot_labels = one_hot_labels[:, :, None, None]
image_one_hot_labels = image_one_hot_labels.repeat(1, 1, mnist_shape[1], mnist_shape[2])
### Update discriminator ###
# Zero out the discriminator gradients
disc_opt.zero_grad()
# Get noise corresponding to the current batch_size
fake_noise = get_noise(cur_batch_size, z_dim, device=device)
# Now you can get the images from the generator
# Steps: 1) Combine the noise vectors and the one-hot labels for the generator
# 2) Generate the conditioned fake images
#### START CODE HERE ####
noise_and_labels = combine_vectors(fake_noise, one_hot_labels) # 1) combine the noise vectors and the one-hot labels
fake = gen(noise_and_labels) # 2) generate the conditioned fake images
#### END CODE HERE ####
# Make sure that enough images were generated
assert len(fake) == len(real)
# Check that correct tensors were combined
assert tuple(noise_and_labels.shape) == (cur_batch_size, fake_noise.shape[1] + one_hot_labels.shape[1])
# It comes from the correct generator
assert tuple(fake.shape) == (len(real), 1, 28, 28)
# Now you can get the predictions from the discriminator
# Steps: 1) Create the input for the discriminator
# a) Combine the fake images with image_one_hot_labels,
# remember to detach the generator (.detach()) so you do not backpropagate through it
# b) Combine the real images with image_one_hot_labels
# 2) Get the discriminator's prediction on the fakes as disc_fake_pred
# 3) Get the discriminator's prediction on the reals as disc_real_pred
#### START CODE HERE ####
fake_image_and_labels = combine_vectors(fake.detach(), image_one_hot_labels) # detach so the generator is not backpropagated through
real_image_and_labels = combine_vectors(real, image_one_hot_labels)
disc_fake_pred = disc(fake_image_and_labels)
disc_real_pred = disc(real_image_and_labels)
#### END CODE HERE ####
# Make sure shapes are correct
assert tuple(fake_image_and_labels.shape) == (len(real), fake.detach().shape[1] + image_one_hot_labels.shape[1], 28 ,28)
assert tuple(real_image_and_labels.shape) == (len(real), real.shape[1] + image_one_hot_labels.shape[1], 28 ,28)
# Make sure that enough predictions were made
assert len(disc_real_pred) == len(real)
# Make sure that the inputs are different
assert torch.any(fake_image_and_labels != real_image_and_labels)
# Shapes must match
assert tuple(fake_image_and_labels.shape) == tuple(real_image_and_labels.shape)
assert tuple(disc_fake_pred.shape) == tuple(disc_real_pred.shape)
disc_fake_loss = criterion(disc_fake_pred, torch.zeros_like(disc_fake_pred))
disc_real_loss = criterion(disc_real_pred, torch.ones_like(disc_real_pred))
disc_loss = (disc_fake_loss + disc_real_loss) / 2
disc_loss.backward(retain_graph=True)
disc_opt.step()
# Keep track of the average discriminator loss
discriminator_losses += [disc_loss.item()]
### Update generator ###
# Zero out the generator gradients
gen_opt.zero_grad()
fake_image_and_labels = combine_vectors(fake, image_one_hot_labels)
# This will error if you didn't concatenate your labels to your image correctly
disc_fake_pred = disc(fake_image_and_labels)
gen_loss = criterion(disc_fake_pred, torch.ones_like(disc_fake_pred))
gen_loss.backward()
gen_opt.step()
# Keep track of the generator losses
generator_losses += [gen_loss.item()]
#
if cur_step % display_step == 0 and cur_step > 0:
gen_mean = sum(generator_losses[-display_step:]) / display_step
disc_mean = sum(discriminator_losses[-display_step:]) / display_step
print(f"Step {cur_step}: Generator loss: {gen_mean}, discriminator loss: {disc_mean}")
show_tensor_images(fake)
show_tensor_images(real)
step_bins = 20
x_axis = sorted([i * step_bins for i in range(len(generator_losses) // step_bins)] * step_bins)
num_examples = (len(generator_losses) // step_bins) * step_bins
plt.plot(
range(num_examples // step_bins),
torch.Tensor(generator_losses[:num_examples]).view(-1, step_bins).mean(1),
label="Generator Loss"
)
plt.plot(
range(num_examples // step_bins),
torch.Tensor(discriminator_losses[:num_examples]).view(-1, step_bins).mean(1),
label="Discriminator Loss"
)
plt.legend()
plt.show()
elif cur_step == 0:
print("Congratulations! If you've gotten here, it's working. Please let this train until you're happy with how the generated numbers look, and then go on to the exploration!")
cur_step += 1 | _____no_output_____ | MIT | Course1 - Build Basic Generative Adversarial Networks (GANs)/Week4/C1W4A_Build_a_Conditional_GAN_Original.ipynb | RamzanShahidkhan/Generative-Adversarial-Networks-Specialization |
Exploration: You can do a bit of exploration now! | # Before you explore, you should put the generator
# in eval mode, both in general and so that batch norm
# doesn't cause you issues and is using its eval statistics
gen = gen.eval() | _____no_output_____ | MIT | Course1 - Build Basic Generative Adversarial Networks (GANs)/Week4/C1W4A_Build_a_Conditional_GAN_Original.ipynb | RamzanShahidkhan/Generative-Adversarial-Networks-Specialization |
Changing the Class Vector: You can generate some numbers with your new model! You can add interpolation as well to make it more interesting. Starting from one image, you will produce intermediate images that look more and more like the ending image until you get to the final image; you're basically morphing one image into another. The interpolation is a simple convex combination of the two one-hot class vectors, so the intermediate conditioning vectors are soft (non-one-hot) classes. You can choose what these two images will be using your conditional GAN. | import math
### Change me! ###
n_interpolation = 9 # Choose the interpolation: how many intermediate images you want + 2 (for the start and end image)
interpolation_noise = get_noise(1, z_dim, device=device).repeat(n_interpolation, 1)
def interpolate_class(first_number, second_number):
first_label = get_one_hot_labels(torch.Tensor([first_number]).long(), n_classes)
second_label = get_one_hot_labels(torch.Tensor([second_number]).long(), n_classes)
# Calculate the interpolation vector between the two labels
percent_second_label = torch.linspace(0, 1, n_interpolation)[:, None]
interpolation_labels = first_label * (1 - percent_second_label) + second_label * percent_second_label
# Combine the noise and the labels
noise_and_labels = combine_vectors(interpolation_noise, interpolation_labels.to(device))
fake = gen(noise_and_labels)
show_tensor_images(fake, num_images=n_interpolation, nrow=int(math.sqrt(n_interpolation)), show=False)
### Change me! ###
start_plot_number = 1 # Choose the start digit
### Change me! ###
end_plot_number = 5 # Choose the end digit
plt.figure(figsize=(8, 8))
interpolate_class(start_plot_number, end_plot_number)
_ = plt.axis('off')
### Uncomment the following lines of code if you would like to visualize a set of pairwise class
### interpolations for a collection of different numbers, all in a single grid of interpolations.
### You'll also see another visualization like this in the next code block!
# plot_numbers = [2, 3, 4, 5, 7]
# n_numbers = len(plot_numbers)
# plt.figure(figsize=(8, 8))
# for i, first_plot_number in enumerate(plot_numbers):
# for j, second_plot_number in enumerate(plot_numbers):
# plt.subplot(n_numbers, n_numbers, i * n_numbers + j + 1)
# interpolate_class(first_plot_number, second_plot_number)
# plt.axis('off')
# plt.subplots_adjust(top=1, bottom=0, left=0, right=1, hspace=0.1, wspace=0)
# plt.show()
# plt.close() | _____no_output_____ | MIT | Course1 - Build Basic Generative Adversarial Networks (GANs)/Week4/C1W4A_Build_a_Conditional_GAN_Original.ipynb | RamzanShahidkhan/Generative-Adversarial-Networks-Specialization |
Changing the Noise Vector: Now, what happens if you hold the class constant, but instead you change the noise vector? You can also interpolate the noise vector and generate an image at each step. | n_interpolation = 9 # How many intermediate images you want + 2 (for the start and end image)
# This time you're interpolating between the noise instead of the labels
interpolation_label = get_one_hot_labels(torch.Tensor([5]).long(), n_classes).repeat(n_interpolation, 1).float()
def interpolate_noise(first_noise, second_noise):
# This time you're interpolating between the noise instead of the labels
percent_first_noise = torch.linspace(0, 1, n_interpolation)[:, None].to(device)
interpolation_noise = first_noise * percent_first_noise + second_noise * (1 - percent_first_noise)
# Combine the noise and the labels again
noise_and_labels = combine_vectors(interpolation_noise, interpolation_label.to(device))
fake = gen(noise_and_labels)
show_tensor_images(fake, num_images=n_interpolation, nrow=int(math.sqrt(n_interpolation)), show=False)
# Generate noise vectors to interpolate between
### Change me! ###
n_noise = 5 # Choose the number of noise examples in the grid
plot_noises = [get_noise(1, z_dim, device=device) for i in range(n_noise)]
plt.figure(figsize=(8, 8))
for i, first_plot_noise in enumerate(plot_noises):
for j, second_plot_noise in enumerate(plot_noises):
plt.subplot(n_noise, n_noise, i * n_noise + j + 1)
interpolate_noise(first_plot_noise, second_plot_noise)
plt.axis('off')
plt.subplots_adjust(top=1, bottom=0, left=0, right=1, hspace=0.1, wspace=0)
plt.show()
plt.close() | _____no_output_____ | MIT | Course1 - Build Basic Generative Adversarial Networks (GANs)/Week4/C1W4A_Build_a_Conditional_GAN_Original.ipynb | RamzanShahidkhan/Generative-Adversarial-Networks-Specialization |
tabula-py example notebook: tabula-py is a tool for converting PDF tables into pandas DataFrames. tabula-py is a wrapper of [tabula-java](https://github.com/tabulapdf/tabula-java), which requires Java on your machine. tabula-py also enables you to convert PDF tables into CSV/TSV files. tabula-py's PDF extraction accuracy is the same as tabula-java or the [tabula app](https://tabula.technology/), the GUI tool of tabula, so if you want to gauge tabula-py's performance, I highly recommend trying the tabula app. tabula-py is good for:- automation with Python scripts- advanced analytics after converting to a pandas DataFrame- casual analytics with Jupyter Notebook or Google Colaboratory Check Java environment and install tabula-py: tabula-py requires a Java environment, so let's check the Java environment on your machine. | !java -version
OpenJDK Runtime Environment (build 11.0.3+7-Ubuntu-1ubuntu218.04.1)
OpenJDK 64-Bit Server VM (build 11.0.3+7-Ubuntu-1ubuntu218.04.1, mixed mode, sharing)
openjdk version "11.0.3" 2019-04-16
OpenJDK Runtime Environment (build 11.0.3+7-Ubuntu-1ubuntu218.04.1)
OpenJDK 64-Bit Server VM (build 11.0.3+7-Ubuntu-1ubuntu218.04.1, mixed mode, sharing)
| MIT | examples/tabula_example.ipynb | shailp52/tabula-py |
After confirming the Java environment, install tabula-py using pip. | # To be more precise, it's better to use `{sys.executable} -m pip install tabula-py`
!pip install -q tabula-py | [K |████████████████████████████████| 10.4MB 4.2MB/s
[?25h | MIT | examples/tabula_example.ipynb | shailp52/tabula-py |
Before trying tabula-py, check your environment via tabula-py's `environment_info()` function, which shows the Python version, Java version, and your OS environment. | import tabula
tabula.environment_info() | Python version:
3.6.8 (default, Jan 14 2019, 11:02:34)
[GCC 8.0.1 20180414 (experimental) [trunk revision 259383]]
Java version:
openjdk version "11.0.3" 2019-04-16
OpenJDK Runtime Environment (build 11.0.3+7-Ubuntu-1ubuntu218.04.1)
OpenJDK 64-Bit Server VM (build 11.0.3+7-Ubuntu-1ubuntu218.04.1, mixed mode, sharing)
tabula-py version: 1.4.0
platform: Linux-4.14.79+-x86_64-with-Ubuntu-18.04-bionic
uname:
uname_result(system='Linux', node='b5c4edf3fd8a', release='4.14.79+', version='#1 SMP Wed Dec 19 21:19:13 PST 2018', machine='x86_64', processor='x86_64')
linux_distribution: ('Ubuntu', '18.04', 'bionic')
mac_ver: ('', ('', '', ''), '')
| MIT | examples/tabula_example.ipynb | shailp52/tabula-py |
Read a PDF with the `read_pdf()` function: Let's read a PDF from GitHub. tabula-py can load a PDF or file-like object from local storage or the internet using the `read_pdf()` function. | pdf_path = "https://github.com/chezou/tabula-py/raw/master/tests/resources/data.pdf"
tabula.read_pdf(pdf_path, stream=True) | _____no_output_____ | MIT | examples/tabula_example.ipynb | shailp52/tabula-py |
Options for `read_pdf()`: Note that the `read_pdf()` function reads only page 1 by default. For more details, use `?tabula.read_pdf` and `?tabula.wrapper.build_options`. | help(tabula.read_pdf)
help(tabula.wrapper.build_options) | Help on function build_options in module tabula.wrapper:
build_options(kwargs=None)
Build options for tabula-java
Args:
options (str, optional):
Raw option string for tabula-java.
pages (str, int, :obj:`list` of :obj:`int`, optional):
An optional values specifying pages to extract from. It allows
`str`,`int`, :obj:`list` of :obj:`int`.
Example: '1-2,3', 'all' or [1,2]
guess (bool, optional):
Guess the portion of the page to analyze per page. Default `True`
If you use "area" option, this option becomes `False`.
Note that as of tabula-java 1.0.3, guess option becomes independent from
lattice and stream option, you can use guess and lattice/stream option
at the same time.
area (:obj:`list` of :obj:`float` or
:obj:`list` of :obj:`list` of :obj:`float`, optional):
Portion of the page to analyze(top,left,bottom,right).
Example; [269.875,12.75,790.5,561] or
[[12.1,20.5,30.1,50.2], [1.0,3.2,10.5,40.2]].
Default is entire page.
relative_area (bool, optional):
If all area values are between 0-100 (inclusive) and preceded by '%',
input will be taken as % of actual height or width of the page.
Default False.
lattice (bool, optional):
Force PDF to be extracted using lattice-mode extraction
(if there are ruling lines separating each cell, as in a PDF of an
Excel spreadsheet)
stream (bool, optional):
Force PDF to be extracted using stream-mode extraction
(if there are no ruling lines separating each cell, as in a PDF of an
Excel spreadsheet)
password (str, optional):
Password to decrypt document. Default is empty
silent (bool, optional):
Suppress all stderr output.
columns (list, optional):
X coordinates of column boundaries.
Example: [10.1, 20.2, 30.3]
format (str, optional):
Format for output file or extracted object. (CSV, TSV, JSON)
batch (str, optional):
Convert all .pdfs in the provided directory. This argument should be
directory.
output_path (str, optional):
Output file path. File format of it is depends on `format`.
Same as `--outfile` option of tabula-java.
Returns:
`obj`:list: Built list of options
| MIT | examples/tabula_example.ipynb | shailp52/tabula-py |
Let's set the `pages` option. Here is the extraction result of page 3: | # set pages option
tabula.read_pdf(pdf_path, pages=3, stream=True)
# pass pages as string
tabula.read_pdf(pdf_path, pages="1-2,3", stream=True) | _____no_output_____ | MIT | examples/tabula_example.ipynb | shailp52/tabula-py |
You can set `pages="all"` for extration all pages. If you hit OOM error with Java, you should set appropriate `-Xmx` option for `java_options`. | # extract all pages
tabula.read_pdf(pdf_path, pages="all", stream=True) | _____no_output_____ | MIT | examples/tabula_example.ipynb | shailp52/tabula-py |
Read multiple tables with the `multiple_tables` option: tabula-py assumes a single table per output by default, because of a limitation of pandas. To avoid this issue, you can set the `multiple_tables` option; `read_pdf` then returns a list of DataFrames. | # extract multiple tables from all pages
multi_tables = tabula.read_pdf(pdf_path, pages="all", multiple_tables=True)
print(multi_tables[0].head())
print(multi_tables[1].head()) | 0 1 2 3 4 5 6 7 8 9
0 mpg cyl disp hp drat wt qsec vs am gear
1 21.0 6 160.0 110 3.90 2.620 16.46 0 1 4
2 21.0 6 160.0 110 3.90 2.875 17.02 0 1 4
3 22.8 4 108.0 93 3.85 2.320 18.61 1 1 4
4 21.4 6 258.0 110 3.08 3.215 19.44 1 0 3
0 1 2 3 4
0 Sepal.Length Sepal.Width Petal.Length Petal.Width Species
1 5.1 3.5 1.4 0.2 setosa
2 4.9 3.0 1.4 0.2 setosa
3 4.7 3.2 1.3 0.2 setosa
4 4.6 3.1 1.5 0.2 setosa
| MIT | examples/tabula_example.ipynb | shailp52/tabula-py |
Read a partial area of a PDF: If you want to extract only a certain part of a page, you can use the `area` option. | # set area option
tabula.read_pdf(pdf_path, area=(126,149,212,462), pages=2) | _____no_output_____ | MIT | examples/tabula_example.ipynb | shailp52/tabula-py |
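The help output above also documents `relative_area`, which lets you express the region as page percentages instead of absolute points (the values below are illustrative, not equivalent to the ones above): | tabula.read_pdf(pdf_path, area=(15, 10, 70, 90), relative_area=True, pages=2)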
Extract to JSON, TSV, or CSV: tabula-py can produce not only DataFrames but also JSON, TSV, or CSV. You can set the output format with the `output_format` option. | # read pdf as JSON
tabula.read_pdf(pdf_path, output_format="json") | _____no_output_____ | MIT | examples/tabula_example.ipynb | shailp52/tabula-py |
Convert PDF tables into CSV, TSV, or JSON files: You can convert files directly, rather than creating Python objects, with the `convert_into()` function. | # You can convert from pdf into JSON, CSV, TSV
tabula.convert_into(pdf_path, "test.json", output_format="json")
!cat test.json
tabula.convert_into(pdf_path, "test.tsv", output_format="tsv")
!cat test.tsv
tabula.convert_into(pdf_path, "test.csv", output_format="csv", stream=True)
!cat test.csv | "",mpg,cyl,disp,hp,drat,wt,qsec,vs,am,gear,carb
Mazda RX4,21.0,6,160.0,110,3.90,2.620,16.46,0,1,4,4
Mazda RX4 Wag,21.0,6,160.0,110,3.90,2.875,17.02,0,1,4,4
Datsun 710,22.8,4,108.0,93,3.85,2.320,18.61,1,1,4,1
Hornet 4 Drive,21.4,6,258.0,110,3.08,3.215,19.44,1,0,3,1
Hornet Sportabout,18.7,8,360.0,175,3.15,3.440,17.02,0,0,3,2
Valiant,18.1,6,225.0,105,2.76,3.460,20.22,1,0,3,1
Duster 360,14.3,8,360.0,245,3.21,3.570,15.84,0,0,3,4
Merc 240D,24.4,4,146.7,62,3.69,3.190,20.00,1,0,4,2
Merc 230,22.8,4,140.8,95,3.92,3.150,22.90,1,0,4,2
Merc 280,19.2,6,167.6,123,3.92,3.440,18.30,1,0,4,4
Merc 280C,17.8,6,167.6,123,3.92,3.440,18.90,1,0,4,4
Merc 450SE,16.4,8,275.8,180,3.07,4.070,17.40,0,0,3,3
Merc 450SL,17.3,8,275.8,180,3.07,3.730,17.60,0,0,3,3
Merc 450SLC,15.2,8,275.8,180,3.07,3.780,18.00,0,0,3,3
Cadillac Fleetwood,10.4,8,472.0,205,2.93,5.250,17.98,0,0,3,4
Lincoln Continental,10.4,8,460.0,215,3.00,5.424,17.82,0,0,3,4
Chrysler Imperial,14.7,8,440.0,230,3.23,5.345,17.42,0,0,3,4
Fiat 128,32.4,4,78.7,66,4.08,2.200,19.47,1,1,4,1
Honda Civic,30.4,4,75.7,52,4.93,1.615,18.52,1,1,4,2
Toyota Corolla,33.9,4,71.1,65,4.22,1.835,19.90,1,1,4,1
Toyota Corona,21.5,4,120.1,97,3.70,2.465,20.01,1,0,3,1
Dodge Challenger,15.5,8,318.0,150,2.76,3.520,16.87,0,0,3,2
AMC Javelin,15.2,8,304.0,150,3.15,3.435,17.30,0,0,3,2
Camaro Z28,13.3,8,350.0,245,3.73,3.840,15.41,0,0,3,4
Pontiac Firebird,19.2,8,400.0,175,3.08,3.845,17.05,0,0,3,2
Fiat X1-9,27.3,4,79.0,66,4.08,1.935,18.90,1,1,4,1
Porsche 914-2,26.0,4,120.3,91,4.43,2.140,16.70,0,1,5,2
Lotus Europa,30.4,4,95.1,113,3.77,1.513,16.90,1,1,5,2
Ford Pantera L,15.8,8,351.0,264,4.22,3.170,14.50,0,1,5,4
Ferrari Dino,19.7,6,145.0,175,3.62,2.770,15.50,0,1,5,6
Maserati Bora,15.0,8,301.0,335,3.54,3.570,14.60,0,1,5,8
Volvo 142E,21.4,4,121.0,109,4.11,2.780,18.60,1,1,4,2
| MIT | examples/tabula_example.ipynb | shailp52/tabula-py |
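To convert every PDF in a directory at once, tabula-py also provides `convert_into_by_batch`, which wraps the `batch` option documented in the help output above (a sketch; `./pdfs` is a hypothetical directory): | # tabula.convert_into_by_batch("./pdfs", output_format="csv")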
Use lattice mode for more accurate extraction of spreadsheet-style tables: If your tables have ruling lines separating cells, you can use the `lattice` option. By default, tabula-py sets `guess=True`, which matches the default behavior of the tabula app. If your tables don't have separation lines, you can try the `stream` option. As mentioned, try the tabula app before struggling with tabula-py options. Or, [PDFplumber](https://github.com/jsvine/pdfplumber) can be an alternative, since it has a different extraction strategy. | tabula.read_pdf(pdf_path, pages="1", lattice=True) | _____no_output_____ | MIT | examples/tabula_example.ipynb | shailp52/tabula-py
Use tabula app template
tabula-py can load a tabula app template, which stores the area options set in the GUI app so they can be reused. | !wget -q "https://github.com/chezou/tabula-py/raw/master/tests/resources/data.tabula-template.json"
tabula.read_pdf_with_template(pdf_path, "data.tabula-template.json") | _____no_output_____ | MIT | examples/tabula_example.ipynb | shailp52/tabula-py |
**Welcome to ClointFusion, Made in India with ❤️ (Version 0.1.37)**

ClointFusion's Jupyter Notebook
* ClointFusion offers you a Python-based RPA platform for your automation needs.
* You now have access to more than 130 easy-to-use functions, which can be used in GUI as well as non-GUI mode. If you prefer GUI mode, you can run your BOTs in Fully Automatic Mode or Interactive (Semi-Automatic) Mode.
* You may use this platform to explore and experiment with these functions, and contribute by giving a star on GitHub / writing a blog article on ClointFusion / giving feedback / reporting issues / helping with bug fixes / enhancing features / adding documentation / in many more ways, as you please. For more details on how to contribute, please visit the ClointFusion GitHub repository at: https://github.com/ClointFusion/ClointFusion

**Start here by importing** *ClointFusion*
On executing the cell below, if you are able to see **Welcome to ClointFusion, Made in India with ❤️** & 2 prompt messages, then you are all set. Kudos! (Depending on your internet speed, you might see a slight delay in getting this welcome message, as all required Python packages need to be downloaded & installed.) | import ClointFusion as cf | _____no_output_____ | BSD-4-Clause | ClointFusion_Labs.ipynb | JAY-007-TRIVEDI/ClointFusion |
NOTE: We recommend that you execute (press ▶️) each function, block-by-block, as-is, before adjusting parameters/inputs. Once you've verified that the function is working, you are welcome to play with it, learn from manipulating inputs/parameters, and even contribute by fixing bugs or enhancing it with new features. Please read the CONTRIBUTING.md file on GitHub. As a reminder, this ClointFusion Jupyter Notebook is not for sharing; please refer to the **Copyright** directly below and the **Code License Agreement** in the last cell of this notebook.

*Team ClointFusion*

***Copyright:*** *The contents of this Jupyter Notebook, unless otherwise indicated, are Copyright of 2020 ClointFusion, https://cloint.com, released under the BSD-4 License*

**Test drive a *ClointFusion* function now !**
Tip: While executing any of the cells below, there is a chance that the Google Colab session may crash. In that case, please run the above import cell once again, click on Reconnect, and you can continue with your exploration. | print("Hi {} ! Welcome to ClointFusion {}".format(cf.gui_get_any_input_from_user('your Name'),cf.show_emoji())) | _____no_output_____ | BSD-4-Clause | ClointFusion_Labs.ipynb | JAY-007-TRIVEDI/ClointFusion |
Note : Executing *import ClointFusion as cf* in any Python file would create a set of folders in C:\ClointFusion. The sub-folders created are Batch_File, Config_Files, Error_Screenshots, Images, Logs, Output and StatusLogExcel. All these 7 folders are placed in a parent folder whose name begins with My_Bot_Folder_Name; here, Folder_Name is the same as the name of the folder from which you are importing ClointFusion as cf!

Tip : Most of ClointFusion's functions work in dual mode, i.e. if you pass the required arguments, ClointFusion works silently; otherwise, ClointFusion pops up a GUI asking you for the required inputs. Here, if semi_automatic_mode is ENABLED, your previous inputs are used to run the function; if semi_automatic_mode is DISABLED, the GUI is shown pre-filled with the previously used inputs, which you can modify.

You have 2 functions to turn the semi-automatic mode ON/OFF (working internally, to be made public): cf.ON_semi_automatic_mode() and cf.OFF_semi_automatic_mode()

**GUI Based Functions**
We have 6 functions which take different inputs from users. | cf.OFF_semi_automatic_mode()
outlook_url = 'https://login.live.com/login.srf?wa=wsignin1.0&rpsnv=13&ct=1622187509&rver=7.0.6737.0&wp=MBI_SSL&wreply=https%3a%2f%2foutlook.live.com%2fowa%2f0%2f%3fstate%3d1%26redirectTo%3daHR0cHM6Ly9vdXRsb29rLmxpdmUuY29tL21haWwvMC9pbmJveC8%26nlp%3d1%26RpsCsrfState%3da1418c6a-2688-64b6-5738-67971d0c6e02&id=292841&aadredir=1&CBCXT=out&lw=1&fl=dob%2cflname%2cwld&cobrandid=90015'
cf.window_show_desktop()
response = cf.gui_get_consent_from_user(msgForUser="Want to start Mailing")
if response == 'Yes':
    # cf.gui_get_dropdownlist_values_from_user()
    mail_client = cf.gui_get_dropdownlist_values_from_user(msgForUser='Select the email Client',
                                                           dropdown_list=['Outlook', 'Yahoo'], multi_select=False)
    if mail_client == ['Outlook']:
        cf.browser_navigate_h(outlook_url)
        # cf.gui_get_any_input_from_user()
        username = cf.gui_get_any_input_from_user('Enter Username')
        cf.key_write_enter(username)
        password = cf.gui_get_any_input_from_user('Enter Password', password=True)
        cf.key_write_enter(password)
    close = cf.gui_get_consent_from_user(msgForUser="Want to Close Mail")
    if close == 'Yes':
        cf.browser_quit_h()
file = cf.gui_get_any_file_from_user()
print(file)
excel_file = cf.gui_get_excel_sheet_header_from_user()
print(excel_file)
folder = cf.gui_get_folder_path_from_user()
print(folder) | _____no_output_____ | BSD-4-Clause | ClointFusion_Labs.ipynb | JAY-007-TRIVEDI/ClointFusion |
**Excel Based Functions** | import os
WORKSPACE_DIR = r"C:\Users\Hp\Desktop\Excel_Operations"
EXCEL_FILES_DIR = os.path.join(WORKSPACE_DIR,'Excel_Files')
test_xlsx_path = os.path.join(EXCEL_FILES_DIR,'Test','Test.xlsx')
new_test_xlsx_path = os.path.join(EXCEL_FILES_DIR,'Test','New_Test.xlsx') | _____no_output_____ | BSD-4-Clause | ClointFusion_Labs.ipynb | JAY-007-TRIVEDI/ClointFusion |
Excel Operations | ClointFusion - 20+ Excel Operations

| Function Name | Accepted Parameters | Description |
| :--- | :--- | :--- |
| excel_create_excel_file_in_given_folder() | fullPathToTheFolder='', excelFileName='', sheet_name='' | Creates a new excel file in the given folder |
| excel_create_file() | fullPathToTheFile='', fileName='', sheet_name='' | Creates a new excel file in the provided folder |
| excel_get_all_sheet_names() | excelFilePath="" | Gives you the names of all the sheets in the given excel file |
| excel_get_all_header_columns() | excel_path="", sheet_name="Sheet1", header=0 | Returns all the column names of the excel file |
| excel_if_value_exists() | excel_path="", sheet_name='Sheet1', header=0, usecols="", value="" | Checks if a given value exists in the given excel. Returns True/False |
| excel_copy_paste_range_from_to_sheet() | excel_path="", sheet_name='Sheet1', startCol=0, startRow=0, endCol=0, endRow=0, copiedData="" | Pastes the copied data into the specific range of the given excel sheet |
| excel_get_row_column_count() | excel_path="", sheet_name="Sheet1", header=0 | Gets the row and column count of the provided excel sheet |
| excel_copy_range_from_sheet() | excel_path="", sheet_name='Sheet1', startCol=0, startRow=0, endCol=0, endRow=0 | Copies the specific range from the provided excel sheet and returns the copied data as a list |
| excel_split_by_column() | excel_path="", sheet_name='Sheet1', header=0, columnName="" | Splits the excel file by the given column name |
| excel_split_the_file_on_row_count() | excel_path="", sheet_name='Sheet1', rowSplitLimit="", outputFolderPath="", outputTemplateFileName="Split" | Splits the excel file as per the given row limit |
| excel_merge_all_files() | input_folder_path="", output_folder_path="" | Merges all the excel files in the given folder |
| excel_drop_columns() | excel_path="", sheet_name='Sheet1', header=0, columnsToBeDropped="" | Drops the desired column from the given excel file |
| excel_sort_columns() | excel_path="", sheet_name='Sheet1', header=0, firstColumnToBeSorted=None, secondColumnToBeSorted=None, thirdColumnToBeSorted=None, firstColumnSortType=True, secondColumnSortType=True, thirdColumnSortType=True | Takes the full path to an excel file and the column names on which the sort is to be performed |
| excel_clear_sheet() | excel_path="", sheet_name="Sheet1", header=0 | Clears the contents of the given excel file, keeping the header row intact |
| excel_set_single_cell() | excel_path='', sheet_name='', header=0, columnName='', cellNumber=0, setText='' | Writes the given text to the desired column/cell number of the given excel file |
| excel_get_single_cell() | excel_path="", sheet_name="Sheet1", header=0, columnName="", cellNumber=0 | Gets the text from the desired column/cell number of the given excel file |
| excel_remove_duplicates() | excel_path="", sheet_name="Sheet1", header=0, columnName="", saveResultsInSameExcel=True, which_one_to_keep="first" | Drops the duplicates from the desired column of the given excel file |
| excel_vlook_up() | filepath_1="", sheet_name_1='Sheet1', header_1=0, filepath_2="", sheet_name_2='Sheet1', header_2=0, Output_path="", OutputExcelFileName="", match_column_name="", how='left' | Performs a vlookup on the given excel files for the desired columns. Possible values for how are "inner", "left", "right", "outer" |
| excel_describe_data() | excel_path="", sheet_name='Sheet1', header=0 | Describes statistical data for the given excel |
| excel_change_corrupt_xls_to_xlsx() | xls_file='', xlsx_file='', xls_sheet_name='' | Repairs a corrupt excel file |

Create a new excel file | # Creates a new excel file New_Test.xlsx in Test folder
cf.excel_create_file(fullPathToTheFile=os.path.join(EXCEL_FILES_DIR,'Test'),fileName='New_Test',sheet_name='Sheet1')
# Creating a new excel file Test.xlsx
cf.excel_create_excel_file_in_given_folder(fullPathToTheFolder=os.path.dirname(test_xlsx_path),excelFileName='Test.xlsx',sheet_name='Sheet1') | _____no_output_____ | BSD-4-Clause | ClointFusion_Labs.ipynb | JAY-007-TRIVEDI/ClointFusion |
Add some data into it with Excel set single Cell | # Adding some data into the Test.xlsx file
'''
Output:
|Name | Age |
|-----|-----|
|A | 5 |
|B | 4 |
|C | 3 |
|D | 2 |
|E | 1 |
|F | 6 |
|F | 6 |
'''
cf.excel_set_single_cell(excel_path=test_xlsx_path,columnName='Name',cellNumber=0,setText='A')
cf.excel_set_single_cell(excel_path=test_xlsx_path,columnName='Name',cellNumber=1,setText='B')
cf.excel_set_single_cell(excel_path=test_xlsx_path,columnName='Name',cellNumber=2,setText='C')
cf.excel_set_single_cell(excel_path=test_xlsx_path,columnName='Name',cellNumber=3,setText='D')
cf.excel_set_single_cell(excel_path=test_xlsx_path,columnName='Name',cellNumber=4,setText='E')
cf.excel_set_single_cell(excel_path=test_xlsx_path,columnName='Name',cellNumber=5,setText='F')
cf.excel_set_single_cell(excel_path=test_xlsx_path,columnName='Name',cellNumber=6,setText='F')
cf.excel_set_single_cell(excel_path=test_xlsx_path,columnName='Age',cellNumber=0,setText='5')
cf.excel_set_single_cell(excel_path=test_xlsx_path,columnName='Age',cellNumber=1,setText='4')
cf.excel_set_single_cell(excel_path=test_xlsx_path,columnName='Age',cellNumber=2,setText='3')
cf.excel_set_single_cell(excel_path=test_xlsx_path,columnName='Age',cellNumber=3,setText='2')
cf.excel_set_single_cell(excel_path=test_xlsx_path,columnName='Age',cellNumber=4,setText='1')
cf.excel_set_single_cell(excel_path=test_xlsx_path,columnName='Age',cellNumber=5,setText='6')
cf.excel_set_single_cell(excel_path=test_xlsx_path,columnName='Age',cellNumber=6,setText='6') | _____no_output_____ | BSD-4-Clause | ClointFusion_Labs.ipynb | JAY-007-TRIVEDI/ClointFusion |
Get All sheet names | # Get all the sheet names of Test.xlsx
cf.excel_get_all_sheet_names(excelFilePath=test_xlsx_path) | _____no_output_____ | BSD-4-Clause | ClointFusion_Labs.ipynb | JAY-007-TRIVEDI/ClointFusion |
Get all Header Columns | cf.excel_get_all_header_columns(excel_path=test_xlsx_path,sheet_name='Sheet1') | _____no_output_____ | BSD-4-Clause | ClointFusion_Labs.ipynb | JAY-007-TRIVEDI/ClointFusion |
Get Row Column Count | cf.excel_get_row_column_count(excel_path=test_xlsx_path,sheet_name='Sheet1') | _____no_output_____ | BSD-4-Clause | ClointFusion_Labs.ipynb | JAY-007-TRIVEDI/ClointFusion |
Excel Split by ColumnName | # Splitting the file into separate excel files according to the given column
cf.excel_split_by_column(excel_path=test_xlsx_path, columnName='Age') | _____no_output_____ | BSD-4-Clause | ClointFusion_Labs.ipynb | JAY-007-TRIVEDI/ClointFusion |
Excel Split by row count | # Divide the excel file based on the row count
cf.excel_split_the_file_on_row_count(excel_path=test_xlsx_path, rowSplitLimit=3, outputFolderPath=EXCEL_FILES_DIR) | _____no_output_____ | BSD-4-Clause | ClointFusion_Labs.ipynb | JAY-007-TRIVEDI/ClointFusion |
Excel Merge Files | merge_folder = os.path.join(EXCEL_FILES_DIR,'Test')
cf.excel_merge_all_files(input_folder_path=EXCEL_FILES_DIR, output_folder_path=merge_folder) | _____no_output_____ | BSD-4-Clause | ClointFusion_Labs.ipynb | JAY-007-TRIVEDI/ClointFusion |
Excel sort Columns | cf.excel_sort_columns(excel_path=test_xlsx_path, firstColumnToBeSorted='Name', secondColumnToBeSorted='Age') | _____no_output_____ | BSD-4-Clause | ClointFusion_Labs.ipynb | JAY-007-TRIVEDI/ClointFusion |
Excel Drop Columns | split_excel_file1 = os.path.join(EXCEL_FILES_DIR, "Split-1.xlsx")
split_excel_file2 = os.path.join(EXCEL_FILES_DIR, "Split-2.xlsx")
cf.excel_drop_columns(excel_path=split_excel_file1, sheet_name='Sheet1', columnsToBeDropped='Age') | _____no_output_____ | BSD-4-Clause | ClointFusion_Labs.ipynb | JAY-007-TRIVEDI/ClointFusion |
Excel Clear Sheet | cf.excel_clear_sheet(excel_path=split_excel_file2, sheet_name='Sheet1') | _____no_output_____ | BSD-4-Clause | ClointFusion_Labs.ipynb | JAY-007-TRIVEDI/ClointFusion |
Excel Get Single Cell | cf.excel_get_single_cell(excel_path=test_xlsx_path, columnName='Name', cellNumber=0) | _____no_output_____ | BSD-4-Clause | ClointFusion_Labs.ipynb | JAY-007-TRIVEDI/ClointFusion |
Excel Remove Duplicates | cf.excel_remove_duplicates(excel_path=test_xlsx_path, columnName='Name') | _____no_output_____ | BSD-4-Clause | ClointFusion_Labs.ipynb | JAY-007-TRIVEDI/ClointFusion |
Excel vlook-up | cf.excel_vlook_up(filepath_1=split_excel_file1, filepath_2=test_xlsx_path, match_column_name='Name') | _____no_output_____ | BSD-4-Clause | ClointFusion_Labs.ipynb | JAY-007-TRIVEDI/ClointFusion |
Excel Describe Data | cf.excel_describe_data(excel_path=test_xlsx_path,sheet_name='Sheet1') | _____no_output_____ | BSD-4-Clause | ClointFusion_Labs.ipynb | JAY-007-TRIVEDI/ClointFusion |
Excel Check if data exists | cf.excel_if_value_exists(excel_path=test_xlsx_path, usecols=['Name'], value='A') | _____no_output_____ | BSD-4-Clause | ClointFusion_Labs.ipynb | JAY-007-TRIVEDI/ClointFusion |
Excel Copy Range from Sheet | copied_data = cf.excel_copy_range_from_sheet(excel_path=test_xlsx_path, startCol=1, startRow=1, endRow=5, endCol=2) | _____no_output_____ | BSD-4-Clause | ClointFusion_Labs.ipynb | JAY-007-TRIVEDI/ClointFusion |
Excel Copy Paste Range From To | cf.excel_copy_paste_range_from_to_sheet(excel_path=new_test_xlsx_path, startCol=1, startRow=1, endRow=5, endCol=2, copiedData=copied_data) | _____no_output_____ | BSD-4-Clause | ClointFusion_Labs.ipynb | JAY-007-TRIVEDI/ClointFusion |
Tip: In any case, if **ClointFusion-Labs** stops responding, just refresh this page, go to Connect and click **CONNECT** (no need to run jupyter commands again), and execute the **import ClointFusion as cf** block again. Now you can resume with your functions.

**Mouse Operations** | # Moves the cursor to the given X Y co-ordinates.
cf.mouse_move(1766,8)
# Clicks at the given X Y co-ordinates on the screen using single / double / triple click(s).
# Optionally copies selected data to clipboard (works for double / triple clicks).
cf.mouse_click(x=1100, y=777, left_or_right="left", no_of_clicks=1)
# Searches for the given image on the screen and returns the X Y co-ordinates of its center.
chrome = cf.mouse_search_snip_return_coordinates_x_y(r"C:\Users\Avinash Chodavarapu\Desktop\Demo\chrome.png", wait=200)
print(chrome) | _____no_output_____ | BSD-4-Clause | ClointFusion_Labs.ipynb | JAY-007-TRIVEDI/ClointFusion |
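Combining search and click: a minimal sketch, assuming the search helper returns an (x, y) tuple (and nothing truthy when the snip is not found): | # hedged example: double-click the snip once it has been located on screen
if chrome:
    x, y = chrome
    cf.mouse_click(x=x, y=y, left_or_right="left", no_of_clicks=2) | _____no_output_____ | BSD-4-Clause | ClointFusion_Labs.ipynb | JAY-007-TRIVEDI/ClointFusion |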
**Simple Bot** | path = r"C:\Users\Avinash Chodavarapu\Desktop\Demo\avinash.xlsx"
row, column = cf.excel_get_row_column_count(excel_path=path)
for i in range(row-1):
    unit_price = cf.excel_get_single_cell(excel_path=path,columnName="Unit price",cellNumber=i)
    quantity = cf.excel_get_single_cell(excel_path=path,columnName="Quantity",cellNumber=i)
    tax = cf.excel_get_single_cell(excel_path=path,columnName="Tax",cellNumber=i)  # tax for the current row
    final = (unit_price * quantity) + tax
    cf.excel_set_single_cell(excel_path=path,columnName="Total",cellNumber=i,setText=final)
cf.excel_sort_columns(excel_path=path,firstColumnToBeSorted="Total") | _____no_output_____ | BSD-4-Clause | ClointFusion_Labs.ipynb | JAY-007-TRIVEDI/ClointFusion |
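As a quick sanity check on the bot's output, you could summarize the updated sheet with the statistics helper shown earlier. A minimal sketch: | # hedged follow-up: summary statistics of the sheet the bot just filled in
cf.excel_describe_data(excel_path=path, sheet_name='Sheet1') | _____no_output_____ | BSD-4-Clause | ClointFusion_Labs.ipynb | JAY-007-TRIVEDI/ClointFusion |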
**Window Operations**
We have 5 functions to control a Window application. | # Use this function when you want to minimize all open windows and see the desktop.
# This function does not have a GUI mode and does not take any parameters.
cf.window_show_desktop()
# Let's see all window operations via a small application.
# This application opens a notepad (automatically maximized), then minimizes it, then activates & maximizes it again, and finally closes it
#NON GUI Mode
# for i in range(1,3):
cf.launch_any_exe_bat_application("notepad")
cf.window_minimize_windows("notepad")
cf.window_activate_and_maximize_windows("notepad")
cf.window_close_windows("notepad")
#GUI Mode
# cf.launch_any_exe_bat_application()
# cf.window_minimize_windows()
# cf.window_activate_and_maximize_windows()
# cf.window_close_windows() | _____no_output_____ | BSD-4-Clause | ClointFusion_Labs.ipynb | JAY-007-TRIVEDI/ClointFusion |
**Windows Objects - The High Level Automation** | # Open Calculator
app, main_dlg = cf.win_obj_open_app(title="Calculator", program_path_with_name="C:\Windows\System32\calc.exe")
# Print all objects
cf.win_obj_get_all_objects(main_dlg)
# Open Standard Calc
cf.win_obj_mouse_click(main_dlg, auto_id="TogglePaneButton", control_type="button")
cf.win_obj_mouse_click(main_dlg, title="Standard Calculator", auto_id="Standard", control_type="list item")
print("Finding Square of 5 ...")
# Click on 5 Num
cf.win_obj_mouse_click(main_dlg, auto_id="num5Button", control_type="button")
# Click on Square
cf.win_obj_mouse_click(main_dlg, auto_id="xpower2Button", control_type="button")
# Get Text
ans = cf.win_obj_get_text(main_dlg, auto_id="CalculatorResults", control_type="text")
print("Answer: 5^2 = "+ans.split(" ")[-1])
# 5+5-10+2/3*9
cf.win_obj_key_press(main_dlg=main_dlg, write='5{+}5{-}10{+}2{/}3{*}9{=}', auto_id='CalculatorResults', control_type="Text")
# Get the Answer
read = cf.win_obj_get_text(main_dlg=main_dlg, auto_id='CalculatorResults')
print(read) | _____no_output_____ | BSD-4-Clause | ClointFusion_Labs.ipynb | JAY-007-TRIVEDI/ClointFusion |
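To end the session, you could close the calculator with the window helper demonstrated in the previous section. A minimal sketch, assuming the window title matches "Calculator": | # hedged example: close the calculator window once the reads are done
cf.window_close_windows("Calculator") | _____no_output_____ | BSD-4-Clause | ClointFusion_Labs.ipynb | JAY-007-TRIVEDI/ClointFusion |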
**Folder Operations** | # Here you may pass any folder path separated by \. This folder structure will be created when you run this function.
#NON GUI Mode
cf.folder_create('C:\Test\Test12')
#GUI Mode
# cf.folder_create()
# Notice the 2 ways of using the function! This is true for almost all functions of ClointFusion.
# Use this function to create an empty text file in the chosen folder
# GUI Mode, without passing parameters
# cf.folder_create_text_file()
# NON GUI Mode
cf.folder_create_text_file(r"C:\Test","My Text File")
# Use this function to get all the files of the given folder as a list.
# Note the different ways of using it.
# With GUI mode, the default extension is 'all'
# all_files_list = cf.folder_get_all_filenames_as_list()
# print(all_files_list)
# NON GUI, default extension is 'all'
files_list = cf.folder_get_all_filenames_as_list("C:\Test")
print(files_list)
# NON GUI, extension set to 'txt'
txt_files_list = cf.folder_get_all_filenames_as_list("C:\Test",extension='txt')
print(txt_files_list)
# Use this function to delete all the files of the given folder.
# Be careful, as it won't prompt before performing the delete.
# This function can be used in different ways, as shown in the above example
cf.folder_delete_all_files("C:\Test",file_extension_without_dot='txt')
# Use this function to rename any file by just passing the file path
# If you pass ext as False, it will keep the extension of the original file; if True, it will take the extension from the new file name.
# GUI
# cf.file_rename()
# Non GUI
cf.file_rename(old_file_path=r"C:\Test\My Text File", new_file_name="My New Text File", ext=False) | _____no_output_____ | BSD-4-Clause | ClointFusion_Labs.ipynb | JAY-007-TRIVEDI/ClointFusion |
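These folder helpers compose naturally. A minimal sketch of a hypothetical batch rename, assuming folder_get_all_filenames_as_list() returns bare file names (os was already imported in the Excel section above): | # hedged example: rename every txt file in the folder (folder and new names are hypothetical)
txt_files = cf.folder_get_all_filenames_as_list(r"C:\Test", extension='txt')
for idx, name in enumerate(txt_files):
    cf.file_rename(old_file_path=os.path.join(r"C:\Test", name), new_file_name="Renamed_{}".format(idx), ext=False) | _____no_output_____ | BSD-4-Clause | ClointFusion_Labs.ipynb | JAY-007-TRIVEDI/ClointFusion |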
**Keyboard functions**
Let's understand keyboard functions by building a small application. Here, we shall launch Notepad, type something, then close & exit Notepad, all using keyboard functions. | # Demonstrating keyboard functions.
# Launch notepad
cf.launch_any_exe_bat_application("notepad")
# Enter specified text into newly opened notepad
cf.key_write_enter(write_to_window="notepad",text_to_write="ClointFusion is Awesome")
cf.key_hit_enter(write_to_window="notepad")
# Exit notepad
cf.key_press(write_to_window="notepad",key_1="alt", key_2="f4", key_3="n")
| _____no_output_____ | BSD-4-Clause | ClointFusion_Labs.ipynb | JAY-007-TRIVEDI/ClointFusion |
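You could extend the same pattern to type several lines in one go. A minimal sketch, reusing only the keyboard helpers shown above: | # hedged example: launch notepad, write a few lines, then close without saving
cf.launch_any_exe_bat_application("notepad")
for line in ["ClointFusion", "is", "Awesome"]:
    cf.key_write_enter(write_to_window="notepad", text_to_write=line)
cf.key_press(write_to_window="notepad", key_1="alt", key_2="f4", key_3="n") | _____no_output_____ | BSD-4-Clause | ClointFusion_Labs.ipynb | JAY-007-TRIVEDI/ClointFusion |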
**String operations (GUI mode)** |
# GUI mode
cf.OFF_semi_automatic_mode()
cf.string_extract_only_numbers()
cf.string_extract_only_alphabets()
cf.string_remove_special_characters() | _____no_output_____ | BSD-4-Clause | ClointFusion_Labs.ipynb | JAY-007-TRIVEDI/ClointFusion |
**String operations (Non-GUI mode)** | # Function to extract numbers from a given string
num = cf.string_extract_only_numbers(inputString="C1l2o3i4n5t6F7u8i9o0n")
print("Returned value:",num)
print(type(num))
# Function to extract letters from a given string
print(cf.string_extract_only_alphabets(inputString="C1l2o#%^int&*Fus12i5on"))
# Function to remove all special characters (Example - '!','@','%')
print(cf.string_remove_special_characters(inputStr="C!@loin#$tFu*(sion")) | _____no_output_____ | BSD-4-Clause | ClointFusion_Labs.ipynb | JAY-007-TRIVEDI/ClointFusion |
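These helpers can also be chained. A minimal sketch that first strips the special characters and then splits the cleaned string into its numeric and alphabetic parts: | # hedged example: chain the three string helpers
clean = cf.string_remove_special_characters(inputStr="C!@lo1in#2tFu*(sion3")
print(cf.string_extract_only_numbers(inputString=clean))
print(cf.string_extract_only_alphabets(inputString=clean)) | _____no_output_____ | BSD-4-Clause | ClointFusion_Labs.ipynb | JAY-007-TRIVEDI/ClointFusion |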
**Screenscraping functions** | # Clears previously found text (ctrl+f highlight)
cf.screen_clear_search(delay=0.5)
# Copy pastes all the available text on the launched website to notepad and saves it in 'notepad-contents.txt'
cf.browser_activate(url="https://en.wikipedia.org/wiki/Robotic_process_automation")
cf.scrape_save_contents_to_notepad(folderPathToSaveTheNotepad="E:/ClointFusion Demo/RPA")
# Gets the focus on the screen by searching the given text using ctrl+f and performs a copy/paste of all data.
# This is useful in Citrix applications
cf.scrape_get_contents_by_search_copy_paste("ClointFusion") | _____no_output_____ | BSD-4-Clause | ClointFusion_Labs.ipynb | JAY-007-TRIVEDI/ClointFusion |