markdown | code | output | license | path | repo_name
---|---|---|---|---|---
Accuracy, precision and recall. Classification accuracy for each class: | for i,j in enumerate(cm.diagonal()/cm.sum(axis=1)): print("%d: %.4f" % (i,j)) | _____no_output_____ | MIT | notebooks/sklearn-mnist-nn.ipynb | CSCfi/machine-learning-scripts |
Precision and recall for each class: | print(classification_report(y_test, pred_nn_fast, labels=labels)) | _____no_output_____ | MIT | notebooks/sklearn-mnist-nn.ipynb | CSCfi/machine-learning-scripts |
Failure analysis. We can also inspect the results in more detail. Let's use the `show_failures()` helper function (defined in `pml_utils.py`) to show the wrongly classified test digits. The helper function is defined as:```show_failures(predictions, y_test, X_test, trueclass=None, predictedclass=None, maxtoshow=10)```where:- `predictions` is a vector with the predicted classes for each test set image- `y_test` the _correct_ classes for the test set images- `X_test` the test set images- `trueclass` can be set to show only images for a given correct (true) class- `predictedclass` can be set to show only images which were predicted as a given class- `maxtoshow` specifies how many items to show | show_failures(pred_nn_fast, y_test, X_test) | _____no_output_____ | MIT | notebooks/sklearn-mnist-nn.ipynb | CSCfi/machine-learning-scripts |
We can use `show_failures()` to inspect failures in more detail. For example:* show failures in which the true class was "5": | show_failures(pred_nn_fast, y_test, X_test, trueclass='5') | _____no_output_____ | MIT | notebooks/sklearn-mnist-nn.ipynb | CSCfi/machine-learning-scripts |
* show failures in which the prediction was "0": | show_failures(pred_nn_fast, y_test, X_test, predictedclass='0') | _____no_output_____ | MIT | notebooks/sklearn-mnist-nn.ipynb | CSCfi/machine-learning-scripts |
* show failures in which the true class was "0" and the prediction was "2": | show_failures(pred_nn_fast, y_test, X_test, trueclass='0', predictedclass='2') | _____no_output_____ | MIT | notebooks/sklearn-mnist-nn.ipynb | CSCfi/machine-learning-scripts |
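For reference, a helper with the behaviour described above could look roughly like the sketch below. This is only an illustration; the actual `show_failures()` implementation lives in `pml_utils.py`, which is not shown here, so the body and variable names are assumptions.

```python
# Sketch of a show_failures-like helper (illustrative only, not the pml_utils.py version)
import numpy as np
import matplotlib.pyplot as plt

def show_failures_sketch(predictions, y_test, X_test, trueclass=None, predictedclass=None, maxtoshow=10):
    wrong = predictions != y_test                    # mask of misclassified digits
    if trueclass is not None:
        wrong &= (y_test == trueclass)               # keep only a given true class
    if predictedclass is not None:
        wrong &= (predictions == predictedclass)     # keep only a given predicted class
    idx = np.where(wrong)[0][:maxtoshow]
    for k, i in enumerate(idx):
        plt.subplot(1, maxtoshow, k + 1)
        plt.imshow(X_test[i].reshape(28, 28), cmap="gray_r")
        plt.title("{}->{}".format(y_test[i], predictions[i]), fontsize=8)
        plt.axis("off")
    plt.show()
```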
"Sequence classification using Recurrent Neural Networks"> "PyTorch implementation for sequence classification using RNNs"- toc: false- branch: master- badges: true- comments: true- categories: [PyTorch, classification, RNN]- image: images/- hide: false- search_exclude: true- metadata_key1: metadata_value1- metadata_key2: metadata_value2- use_math: true This notebook is copied/adapted from [here](https://github.com/Atcold/pytorch-Deep-Learning/blob/master/08-seq_classification.ipynb). For a detailed working of RNNs, please follow this [link](https://atcold.github.io/pytorch-Deep-Learning/en/week06/06-3/). This notebook also serves as a template for PyTorch implementation for any model architecture (simply replace the model section with your own model architecture) An example of many-to-one (sequence classification)Original experiment from [Hochreiter & Schmidhuber (1997)](www.bioinf.jku.at/publications/older/2604.pdf).The goal here is to classify sequences.Elements and targets are represented locally (input vectors with only one non-zero bit).The sequence starts with a `B`, ends with a `E` (the “trigger symbol”), and otherwise consists of randomly chosen symbols from the set `{a, b, c, d}` except for two elements at positions `t1` and `t2` that are either `X` or `Y`.For the `DifficultyLevel.HARD` case, the sequence length is randomly chosen between `100` and `110`, `t1` is randomly chosen between `10` and `20`, and `t2` is randomly chosen between `50` and `60`.There are `4` sequence classes `Q`, `R`, `S`, and `U`, which depend on the temporal order of `X` and `Y`.The rules are:```X, X -> Q,X, Y -> R,Y, X -> S,Y, Y -> U.``` 1. Dataset Exploration Let's explore our dataset. | from sequential_tasks import TemporalOrderExp6aSequence as QRSU
# Create a data generator. Predefined generator is implemented in file sequential_tasks.
example_generator = QRSU.get_predefined_generator(
difficulty_level=QRSU.DifficultyLevel.EASY,
batch_size=32,
)
example_batch = example_generator[1]
print(f'The return type is a {type(example_batch)} with length {len(example_batch)}.')
print(f'The first item in the tuple is the batch of sequences with shape {example_batch[0].shape}.')
print(f'The first element in the batch of sequences is:\n {example_batch[0][0, :, :]}')
print(f'The second item in the tuple is the corresponding batch of class labels with shape {example_batch[1].shape}.')
print(f'The first element in the batch of class labels is:\n {example_batch[1][0, :]}')
# Decoding the first sequence
sequence_decoded = example_generator.decode_x(example_batch[0][0, :, :])
print(f'The sequence is: {sequence_decoded}')
# Decoding the class label of the first sequence
class_label_decoded = example_generator.decode_y(example_batch[1][0])
print(f'The class label is: {class_label_decoded}') | The sequence is: BbXcXcbE
The class label is: Q
| Apache-2.0 | _notebooks/2021-01-07-seq-classification.ipynb | aizardar/blogs |
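As a side note, the temporal-order rule quoted above can be written down directly; the tiny sketch below is only an illustration of the rule, not code from the original post (the example string comes from the decoded output above).

```python
# Sketch: map the order of the two marker symbols (X/Y) to the sequence class.
RULES = {('X', 'X'): 'Q', ('X', 'Y'): 'R', ('Y', 'X'): 'S', ('Y', 'Y'): 'U'}

def class_of(sequence: str) -> str:
    markers = [c for c in sequence if c in ('X', 'Y')]
    return RULES[tuple(markers)]

print(class_of('BbXcXcbE'))  # -> Q, matching the decoded label above
```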
We can see that our sequence contains 8 elements, starting with B and ending with E. This sequence belongs to class Q as per the rule defined earlier. Each element is one-hot encoded. Thus, we can represent our first sequence (BbXcXcbE) as a sequence of rows of one-hot encoded vectors (as shown above). Similarly, class `Q` can be decoded as [1,0,0,0]. 2. Defining the Model. Let's now define our simple recurrent neural network. | import torch
import torch.nn as nn
# Set the random seed for reproducible results
torch.manual_seed(1)
class SimpleRNN(nn.Module):
def __init__(self, input_size, hidden_size, output_size):
# This just calls the base class constructor
super().__init__()
# Neural network layers assigned as attributes of a Module subclass
# have their parameters registered for training automatically.
self.rnn = torch.nn.RNN(input_size, hidden_size, nonlinearity='relu', batch_first=True)
self.linear = torch.nn.Linear(hidden_size, output_size)
def forward(self, x):
# The RNN also returns its hidden state but we don't use it.
# While the RNN can also take a hidden state as input, the RNN
# gets passed a hidden state initialized with zeros by default.
h = self.rnn(x)[0]
x = self.linear(h)
return x | _____no_output_____ | Apache-2.0 | _notebooks/2021-01-07-seq-classification.ipynb | aizardar/blogs |
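To make the tensor shapes explicit, a quick sanity check of the model above might look like this (a sketch; the batch size and sequence length are arbitrary values, only the 8 input symbols and 4 classes come from the data above):

```python
# Sketch: shape flow through SimpleRNN with batch_first=True
model = SimpleRNN(input_size=8, hidden_size=4, output_size=4)
x = torch.zeros(32, 10, 8)   # (batch, seq_len, input_size)
y = model(x)                 # the linear layer is applied at every time step
print(y.shape)               # torch.Size([32, 10, 4])
```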
3. Defining the Training Loop | def train(model, train_data_gen, criterion, optimizer, device):
# Set the model to training mode. This will turn on layers that would
# otherwise behave differently during evaluation, such as dropout.
model.train()
# Store the number of sequences that were classified correctly
num_correct = 0
# Iterate over every batch of sequences. Note that the length of a data generator
# is defined as the number of batches required to produce a total of roughly 1000
# sequences given a batch size.
for batch_idx in range(len(train_data_gen)):
# Request a batch of sequences and class labels, convert them into tensors
# of the correct type, and then send them to the appropriate device.
data, target = train_data_gen[batch_idx]
data, target = torch.from_numpy(data).float().to(device), torch.from_numpy(target).long().to(device)
# Perform the forward pass of the model
output = model(data) # Step ①
# Pick only the output corresponding to last sequence element (input is pre padded)
output = output[:, -1, :] # For many-to-one RNN architecture, we need output from last RNN cell only.
# Compute the value of the loss for this batch. For loss functions like CrossEntropyLoss,
# the second argument is actually expected to be a tensor of class indices rather than
# one-hot encoded class labels. One approach is to take advantage of the one-hot encoding
# of the target and call argmax along its second dimension to create a tensor of shape
# (batch_size) containing the index of the class label that was hot for each sequence.
target = target.argmax(dim=1) # For example, [0,1,0,0] will correspond to 1 (index start from 0)
loss = criterion(output, target) # Step ②
# Clear the gradient buffers of the optimized parameters.
# Otherwise, gradients from the previous batch would be accumulated.
optimizer.zero_grad() # Step ③
loss.backward() # Step ④
optimizer.step() # Step ⑤
y_pred = output.argmax(dim=1)
num_correct += (y_pred == target).sum().item()
return num_correct, loss.item() | _____no_output_____ | Apache-2.0 | _notebooks/2021-01-07-seq-classification.ipynb | aizardar/blogs |
4. Defining the Testing Loop | def test(model, test_data_gen, criterion, device):
# Set the model to evaluation mode. This will turn off layers that would
# otherwise behave differently during training, such as dropout.
model.eval()
# Store the number of sequences that were classified correctly
num_correct = 0
# A context manager is used to disable gradient calculations during inference
# to reduce memory usage, as we typically don't need the gradients at this point.
with torch.no_grad():
for batch_idx in range(len(test_data_gen)):
data, target = test_data_gen[batch_idx]
data, target = torch.from_numpy(data).float().to(device), torch.from_numpy(target).long().to(device)
output = model(data)
# Pick only the output corresponding to last sequence element (input is pre padded)
output = output[:, -1, :]
target = target.argmax(dim=1)
loss = criterion(output, target)
y_pred = output.argmax(dim=1)
num_correct += (y_pred == target).sum().item()
return num_correct, loss.item() | _____no_output_____ | Apache-2.0 | _notebooks/2021-01-07-seq-classification.ipynb | aizardar/blogs |
5. Putting it All Together | import matplotlib.pyplot as plt
from plot_lib import set_default, plot_state, print_colourbar
set_default()
def train_and_test(model, train_data_gen, test_data_gen, criterion, optimizer, max_epochs, verbose=True):
# Automatically determine the device that PyTorch should use for computation
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
# Move model to the device which will be used for train and test
model.to(device)
# Track the value of the loss function and model accuracy across epochs
history_train = {'loss': [], 'acc': []}
history_test = {'loss': [], 'acc': []}
for epoch in range(max_epochs):
# Run the training loop and calculate the accuracy.
# Remember that the length of a data generator is the number of batches,
# so we multiply it by the batch size to recover the total number of sequences.
num_correct, loss = train(model, train_data_gen, criterion, optimizer, device)
accuracy = float(num_correct) / (len(train_data_gen) * train_data_gen.batch_size) * 100
history_train['loss'].append(loss)
history_train['acc'].append(accuracy)
# Do the same for the testing loop
num_correct, loss = test(model, test_data_gen, criterion, device)
accuracy = float(num_correct) / (len(test_data_gen) * test_data_gen.batch_size) * 100
history_test['loss'].append(loss)
history_test['acc'].append(accuracy)
if verbose or epoch + 1 == max_epochs:
print(f'[Epoch {epoch + 1}/{max_epochs}]'
f" loss: {history_train['loss'][-1]:.4f}, acc: {history_train['acc'][-1]:2.2f}%"
f" - test_loss: {history_test['loss'][-1]:.4f}, test_acc: {history_test['acc'][-1]:2.2f}%")
# Generate diagnostic plots for the loss and accuracy
fig, axes = plt.subplots(ncols=2, figsize=(9, 4.5))
for ax, metric in zip(axes, ['loss', 'acc']):
ax.plot(history_train[metric])
ax.plot(history_test[metric])
ax.set_xlabel('epoch', fontsize=12)
ax.set_ylabel(metric, fontsize=12)
ax.legend(['Train', 'Test'], loc='best')
plt.show()
return model | _____no_output_____ | Apache-2.0 | _notebooks/2021-01-07-seq-classification.ipynb | aizardar/blogs |
5. Simple RNN: 10 EpochsLet's create a simple recurrent network and train for 10 epochs. | # Setup the training and test data generators
difficulty = QRSU.DifficultyLevel.EASY
batch_size = 32
train_data_gen = QRSU.get_predefined_generator(difficulty, batch_size)
test_data_gen = QRSU.get_predefined_generator(difficulty, batch_size)
# Setup the RNN and training settings
input_size = train_data_gen.n_symbols
hidden_size = 4
output_size = train_data_gen.n_classes
model = SimpleRNN(input_size, hidden_size, output_size)
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.RMSprop(model.parameters(), lr=0.001)
max_epochs = 10
# Train the model
model = train_and_test(model, train_data_gen, test_data_gen, criterion, optimizer, max_epochs)
for parameter_group in list(model.parameters()):
print(parameter_group.size()) | torch.Size([4, 8])
torch.Size([4, 4])
torch.Size([4])
torch.Size([4])
torch.Size([4, 4])
torch.Size([4])
| Apache-2.0 | _notebooks/2021-01-07-seq-classification.ipynb | aizardar/blogs |
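The six tensors printed above are, in order, the RNN's input-to-hidden and hidden-to-hidden weights and biases, followed by the linear layer's weight and bias; printing named parameters makes this explicit (a short sketch using the model trained above):

```python
for name, p in model.named_parameters():
    print(name, tuple(p.shape))
# rnn.weight_ih_l0 (4, 8), rnn.weight_hh_l0 (4, 4), rnn.bias_ih_l0 (4,),
# rnn.bias_hh_l0 (4,), linear.weight (4, 4), linear.bias (4,)
```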
6. RNN: Increasing Epoch to 100 | # Setup the training and test data generators
difficulty = QRSU.DifficultyLevel.EASY
batch_size = 32
train_data_gen = QRSU.get_predefined_generator(difficulty, batch_size)
test_data_gen = QRSU.get_predefined_generator(difficulty, batch_size)
# Setup the RNN and training settings
input_size = train_data_gen.n_symbols
hidden_size = 4
output_size = train_data_gen.n_classes
model = SimpleRNN(input_size, hidden_size, output_size)
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.RMSprop(model.parameters(), lr=0.001)
max_epochs = 100
# Train the model
model = train_and_test(model, train_data_gen, test_data_gen, criterion, optimizer, max_epochs, verbose=False) | [Epoch 100/100] loss: 0.0081, acc: 100.00% - test_loss: 0.0069, test_acc: 100.00%
| Apache-2.0 | _notebooks/2021-01-07-seq-classification.ipynb | aizardar/blogs |
Make RACs from initial structure | import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import pickle
from collections import defaultdict
from molSimplify.Informatics.autocorrelation import*
def make_rac(xyz_file, m_depth, l_depth, is_oct):
properties = ['electronegativity', 'size', 'polarizability', 'nuclear_charge']
this_mol = mol3D() # mol3D instance
this_mol.readfromxyz(xyz_file)
feature_names = []
mc_corrs = np.zeros(shape=(len(properties), (m_depth+1)))
metal_idx = this_mol.findMetal()[0]
mc_delta_metricz = np.zeros(shape=(len(properties), m_depth))
for idx, p in enumerate(properties):
delta_list = list(np.asarray(atom_only_deltametric(this_mol, p, m_depth, metal_idx, oct=is_oct)).flatten())
del delta_list[0]
mc_corrs[idx] = np.asarray(atom_only_autocorrelation(this_mol, p, m_depth, metal_idx, oct=is_oct)).flatten()
name_of_idx = ["MC-mult-{}-{}".format(p, x) for x in range(0, m_depth+1)]
mc_delta_metricz[idx] = delta_list
feature_names.extend(name_of_idx)
name_of_idx_diff = ["MC-diff-{}-{}".format(p, x) for x in range(1, m_depth+1)]
feature_names.extend(name_of_idx_diff)
if is_oct:
num_connectors = 6
else:
num_connectors = 5
distances = []
origin = this_mol.coordsvect()[metal_idx]
for xyz in this_mol.coordsvect():
distances.append(np.sqrt((xyz[0]-origin[0])**2+(xyz[1]-origin[1])**2+(xyz[2]-origin[2])**2))
nearest_neighbours = np.argpartition(distances, num_connectors)
nn = [x for x in nearest_neighbours[:num_connectors+1] if x != 0]
rest_of_autoz = np.zeros(shape=(len(properties), l_depth+1))
rest_of_deltas = np.zeros(shape=(len(properties), l_depth))
for idx, p in enumerate(properties):
rest_of_autoz[idx] = atom_only_autocorrelation(this_mol, p, l_depth, nn, oct=is_oct)
rest_of_deltas[idx] = atom_only_deltametric(this_mol, p, l_depth, nn)[1:]
name_of_idx = ["LC-mult-{}-{}".format(p, x) for x in range(0, l_depth+1)]
name_of_idx_diff = ["LC-diff-{}-{}".format(p, x) for x in range(1, l_depth+1)]
feature_names.extend(name_of_idx)
rac_res = np.concatenate((mc_corrs, mc_delta_metricz, rest_of_autoz, rest_of_deltas),
axis=None)
return rac_res, feature_names | _____no_output_____ | MIT | make_racs.ipynb | craigerboi/oer_active_learning |
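A single structure could then be featurised roughly as below; note that the `.xyz` path is a made-up example, not a file from this repository.

```python
# Sketch: featurise one structure with the make_rac function defined above
rac_vector, rac_names = make_rac("structures/example_complex.xyz",  # hypothetical path
                                 m_depth=3, l_depth=1, is_oct=True)
print(rac_vector.shape, rac_names[:4])
```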
Now we define different RACs with differing feature depths so we can perform the grid search in rac_depth_search.ipynb | mc_depths = [2, 3, 4]
lc_depths = [0, 1]
oer_desc_data = pickle.load(open("racs_and_desc/oer_desc_data.p", "rb"),)
name2oer_desc_and_rac = defaultdict()
for mc_d in mc_depths:
for lc_d in lc_depths:
racs = []
oer_desc_for_ml = []
cat_names_for_ml = []
for name in oer_desc_data:
oer_desc = oer_desc_data[name][0]
rac = np.asarray(make_rac(oer_desc_data[name][1], mc_d, lc_d, is_oct=True)[0])
name2oer_desc_and_rac[name] = (oer_desc, rac)
pickle.dump(name2oer_desc_and_rac, open("racs_and_desc/data_mc{}_lc{}.p".format(mc_d, lc_d), "wb"))
# overwrite for the next iteration
name2oer_desc_and_rac = defaultdict()
| _____no_output_____ | MIT | make_racs.ipynb | craigerboi/oer_active_learning |
Predicting Boston housing prices with linear regression. **Author:** [PaddlePaddle](https://github.com/PaddlePaddle) **Date:** 2021.05 **Abstract:** This tutorial demonstrates how to predict Boston housing prices with linear regression. 1. Brief introduction. The classic linear regression model is mainly used for datasets in which a linear relationship exists. A regression model can be understood as fitting a curve to the distribution of a set of points: if the fitted curve is a straight line it is called linear regression, and if it is a quadratic curve it is called quadratic regression. Linear regression is the simplest kind of regression model. This example briefly shows how to implement Boston housing price prediction with the PaddlePaddle open-source framework. The idea is to assume that the relationship between the house attributes in the uci-housing dataset and the house price can be described by a linear combination of the attributes. During training, the error between the predictions of this hypothesis and the true values is made smaller and smaller. During prediction, the trained model is loaded and used to predict the price of houses it has never seen. 2. Environment setup. This tutorial is written for Paddle 2.1; if your environment is a different version, please first follow the official [installation guide](https://www.paddlepaddle.org.cn/install/quick) for Paddle 2.1. | import paddle
import numpy as np
import os
import matplotlib
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
import warnings
warnings.filterwarnings("ignore")
print(paddle.__version__) | 2.1.0
| Apache-2.0 | docs/practices/linear_regression/linear_regression.ipynb | Liu-xiandong/docs |
3. Dataset. This example uses the uci-housing dataset, a classic dataset for linear regression. The dataset contains 7084 values in total, which can be reshaped into 506 rows of 14 columns each. The first 13 columns describe various attributes of a house, and the last column is the median price of that type of house. 3.1 Data processing | # Download the data
!wget https://archive.ics.uci.edu/ml/machine-learning-databases/housing/housing.data -O housing.data
# Load the data from the file
datafile = './housing.data'
housing_data = np.fromfile(datafile, sep=' ')
feature_names = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE','DIS', 'RAD', 'TAX', 'PTRATIO', 'B', 'LSTAT', 'MEDV']
feature_num = len(feature_names)
# Reshape the raw data into the shape [N, 14]
housing_data = housing_data.reshape([housing_data.shape[0] // feature_num, feature_num])
# Plot the pairwise relationships between the variables (linear or non-linear, clearly correlated or not)
features_np = np.array([x[:13] for x in housing_data], np.float32)
labels_np = np.array([x[-1] for x in housing_data], np.float32)
# data_np = np.c_[features_np, labels_np]
df = pd.DataFrame(housing_data, columns=feature_names)
matplotlib.use('TkAgg')
%matplotlib inline
sns.pairplot(df.dropna(), y_vars=feature_names[-1], x_vars=feature_names[::-1], diag_kind='kde')
plt.show()
# Correlation analysis
fig, ax = plt.subplots(figsize=(15, 1))
corr_data = df.corr().iloc[-1]
corr_data = np.asarray(corr_data).reshape(1, 14)
ax = sns.heatmap(corr_data, cbar=True, annot=True)
plt.show() | _____no_output_____ | Apache-2.0 | docs/practices/linear_regression/linear_regression.ipynb | Liu-xiandong/docs |
3.2 Data normalization. The figure below shows the range of values of each attribute: | sns.boxplot(data=df.iloc[:, 0:13]) | _____no_output_____ | Apache-2.0 | docs/practices/linear_regression/linear_regression.ipynb | Liu-xiandong/docs |
As the figure above shows, the value ranges of the attributes differ so much that they cannot even be displayed properly on a single canvas (their exact maxima, minima, outliers and so on). We therefore normalize the data below. There are at least two reasons for normalization (feature scaling): * Very large or very small value ranges can cause floating-point overflow or underflow during computation. * Different value ranges make different attributes matter differently to the model (at least in the early stage of training), and this implicit assumption is often unreasonable; it makes optimization harder and greatly lengthens training time. | features_max = housing_data.max(axis=0)
features_min = housing_data.min(axis=0)
features_avg = housing_data.sum(axis=0) / housing_data.shape[0]
BATCH_SIZE = 20
def feature_norm(input):
f_size = input.shape
output_features = np.zeros(f_size, np.float32)
for batch_id in range(f_size[0]):
for index in range(13):
output_features[batch_id][index] = (input[batch_id][index] - features_avg[index]) / (features_max[index] - features_min[index])
return output_features
# Normalize only the attributes
housing_features = feature_norm(housing_data[:, :13])
# print(feature_trian.shape)
housing_data = np.c_[housing_features, housing_data[:, -1]].astype(np.float32)
# print(training_data[0])
# Inspect the attributes of train_data after normalization
features_np = np.array([x[:13] for x in housing_data],np.float32)
labels_np = np.array([x[-1] for x in housing_data],np.float32)
data_np = np.c_[features_np, labels_np]
df = pd.DataFrame(data_np, columns=feature_names)
sns.boxplot(data=df.iloc[:, 0:13])
# Split the data into training and test sets with an 8:2 ratio
ratio = 0.8
offset = int(housing_data.shape[0] * ratio)
train_data = housing_data[:offset]
test_data = housing_data[offset:] | _____no_output_____ | Apache-2.0 | docs/practices/linear_regression/linear_regression.ipynb | Liu-xiandong/docs |
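Restating what `feature_norm` above computes, each attribute value is shifted by its column mean and scaled by its column range:

$$x'_{ij} = \frac{x_{ij} - \text{mean}_j}{\max_j - \min_j}$$

where $\text{mean}_j$, $\max_j$ and $\min_j$ are taken over all samples of attribute $j$ (computed above as `features_avg`, `features_max` and `features_min`).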
4. Model definition. Linear regression is simply a fully connected layer from input to output. For the Boston housing dataset, we assume the relationship between the attributes and the price can be described by a linear combination of the attributes. | class Regressor(paddle.nn.Layer):
def __init__(self):
super(Regressor, self).__init__()
self.fc = paddle.nn.Linear(13, 1,)
def forward(self, inputs):
pred = self.fc(inputs)
return pred | _____no_output_____ | Apache-2.0 | docs/practices/linear_regression/linear_regression.ipynb | Liu-xiandong/docs |
Define the helper `draw_train_process` that plots how the loss changes during training. | train_nums = []
train_costs = []
def draw_train_process(iters, train_costs):
plt.title("training cost", fontsize=24)
plt.xlabel("iter", fontsize=14)
plt.ylabel("cost", fontsize=14)
plt.plot(iters, train_costs, color='red', label='training cost')
plt.show() | _____no_output_____ | Apache-2.0 | docs/practices/linear_regression/linear_regression.ipynb | Liu-xiandong/docs |
5. Approach 1: model training & prediction with the basic API. 5.1 Model training. The training code is shown below. The loss function used here is the one most commonly used for linear regression, mean squared error (MSE), which measures the difference between the predicted and the true house prices. The loss is optimized with gradient descent. | import paddle.nn.functional as F
y_preds = []
labels_list = []
def train(model):
print('start training ... ')
# Put the model into training mode
model.train()
EPOCH_NUM = 500
train_num = 0
optimizer = paddle.optimizer.SGD(learning_rate=0.001, parameters=model.parameters())
for epoch_id in range(EPOCH_NUM):
# Shuffle the training data before each epoch starts
np.random.shuffle(train_data)
# Split the training data into batches of 20 samples each
mini_batches = [train_data[k: k+BATCH_SIZE] for k in range(0, len(train_data), BATCH_SIZE)]
for batch_id, data in enumerate(mini_batches):
features_np = np.array(data[:, :13], np.float32)
labels_np = np.array(data[:, -1:], np.float32)
features = paddle.to_tensor(features_np)
labels = paddle.to_tensor(labels_np)
# Forward pass
y_pred = model(features)
cost = F.mse_loss(y_pred, label=labels)
train_cost = cost.numpy()[0]
# Backpropagation
cost.backward()
# Minimize the loss and update the parameters
optimizer.step()
# Clear the gradients
optimizer.clear_grad()
if batch_id%30 == 0 and epoch_id%50 == 0:
print("Pass:%d,Cost:%0.5f"%(epoch_id, train_cost))
train_num = train_num + BATCH_SIZE
train_nums.append(train_num)
train_costs.append(train_cost)
model = Regressor()
train(model)
matplotlib.use('TkAgg')
%matplotlib inline
draw_train_process(train_nums, train_costs) | _____no_output_____ | Apache-2.0 | docs/practices/linear_regression/linear_regression.ipynb | Liu-xiandong/docs |
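The mean squared error used above (`F.mse_loss`) can be written as

$$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(\hat{y}_i - y_i\right)^2,$$

where $\hat{y}_i$ is the predicted price and $y_i$ the true price of sample $i$; SGD then updates the parameters in the direction that reduces this loss.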
As the plot above shows, the loss tends to decrease as the number of training epochs increases. However, because the parameters are updated and the loss is computed from only a small number of samples each time, the loss curve oscillates. 5.2 Model prediction | # Get the data for prediction
INFER_BATCH_SIZE = 100
infer_features_np = np.array([data[:13] for data in test_data]).astype("float32")
infer_labels_np = np.array([data[-1] for data in test_data]).astype("float32")
infer_features = paddle.to_tensor(infer_features_np)
infer_labels = paddle.to_tensor(infer_labels_np)
fetch_list = model(infer_features)
sum_cost = 0
for i in range(INFER_BATCH_SIZE):
infer_result = fetch_list[i][0]
ground_truth = infer_labels[i]
if i % 10 == 0:
print("No.%d: infer result is %.2f,ground truth is %.2f" % (i, infer_result, ground_truth))
cost = paddle.pow(infer_result - ground_truth, 2)
sum_cost += cost
mean_loss = sum_cost / INFER_BATCH_SIZE
print("Mean loss is:", mean_loss.numpy())
def plot_pred_ground(pred, ground):
plt.figure()
plt.title("Predication v.s. Ground truth", fontsize=24)
plt.xlabel("ground truth price(unit:$1000)", fontsize=14)
plt.ylabel("predict price", fontsize=14)
plt.scatter(ground, pred, alpha=0.5)  # scatter plot; alpha sets the transparency
plt.plot(ground, ground, c='red')
plt.show()
plot_pred_ground(fetch_list, infer_labels_np) | _____no_output_____ | Apache-2.0 | docs/practices/linear_regression/linear_regression.ipynb | Liu-xiandong/docs |
The figure above shows that the trained model's predictions are fairly close to the true values. 6. Approach 2: model training & prediction with the high-level API. Linear regression can also be trained with PaddlePaddle's high-level API, which is more concise and convenient than the low-level API. | import paddle
paddle.set_default_dtype("float64")
# Step 1: define the datasets with the high-level API; no manual data processing is needed, the high-level API handles it all
train_dataset = paddle.text.datasets.UCIHousing(mode='train')
eval_dataset = paddle.text.datasets.UCIHousing(mode='test')
# Step 2: define the model
class UCIHousing(paddle.nn.Layer):
def __init__(self):
super(UCIHousing, self).__init__()
self.fc = paddle.nn.Linear(13, 1, None)
def forward(self, input):
pred = self.fc(input)
return pred
# Step 3: train the model
model = paddle.Model(UCIHousing())
model.prepare(paddle.optimizer.Adam(parameters=model.parameters()),
paddle.nn.MSELoss())
model.fit(train_dataset, eval_dataset, epochs=5, batch_size=8, verbose=1) | The loss value printed in the log is the current step, and the metric is the average value of previous steps.
Epoch 1/5
step 51/51 [==============================] - loss: 624.0728 - 2ms/step
Eval begin...
step 13/13 [==============================] - loss: 397.2567 - 878us/step
Eval samples: 102
Epoch 2/5
step 51/51 [==============================] - loss: 422.2296 - 1ms/step
Eval begin...
step 13/13 [==============================] - loss: 394.6901 - 750us/step
Eval samples: 102
Epoch 3/5
step 51/51 [==============================] - loss: 417.4614 - 1ms/step
Eval begin...
step 13/13 [==============================] - loss: 392.1667 - 810us/step
Eval samples: 102
Epoch 4/5
step 51/51 [==============================] - loss: 423.6764 - 1ms/step
Eval begin...
step 13/13 [==============================] - loss: 389.6587 - 772us/step
Eval samples: 102
Epoch 5/5
step 51/51 [==============================] - loss: 461.0751 - 1ms/step
Eval begin...
step 13/13 [==============================] - loss: 387.1344 - 828us/step
Eval samples: 102
| Apache-2.0 | docs/practices/linear_regression/linear_regression.ipynb | Liu-xiandong/docs |
Exercises, Lecture 02 - Thainá Mariane Souza Silva 816118386. List exercises. 1. Write a program that receives a list of numbers and - returns the largest element - returns the sum of the elements - returns the number of occurrences of the first element of the list - returns the mean of the elements - returns the value closest to the mean of the elements - returns the sum of the elements with negative values - returns the number of equal neighbours | import math
lista = [input("Digite uma lista") for i in range(5)]
maior = max(lista)
soma = sum(lista)
ocorrencia = lista.count(lista[0])
negativo = sum(i for i in list if i < 0)
media = sum(lista)/ len(lista)
print("O maior elemento da lista é: {} " .formart(maior))
print("A soma dos elementos da lista é: {} " .format(soma))
print("O número de ocorrencias do primeiro elemento é: {} " .format(lista))
print("A media dos elementos é: {} " .format(media))
print("Valor mais próximo da média dos elementos {}" for x in lista: media - x )
print("A soma dos valores negativos é: {}" .format(negativo))
vizinho = 0
for i in range(len(list)):
if(i < (len(list)-1)):
if(list[i] == list[i+1]):
vizinho += 1
print("A quantidade de vizinhos iguais é: {}" .format(vizinho))
| _____no_output_____ | MIT | Exercicio02.ipynb | thainamariianr/LingProg |
2. Write a program that receives two lists and returns True if they are equal or False otherwise. Two lists are equal if they have the same values in the same order. |
# For each x read from input ..... split would break the string according to the given separator
lista = [input("Enter a value to include in list 1: ") for i in range(3)]
lista2 = [input("Enter a value to include in list 2: ") for i in range(3)]
if lista == lista2:
print("True")
else:
print("False")
| _____no_output_____ | MIT | Exercicio02.ipynb | thainamariianr/LingProg |
3. Write a program that receives two lists and returns True if they have the same elements or False otherwise. Two lists have the same elements when they are made up of the same values, though not necessarily in the same order. |
# For each x read from input ..... split would break the string according to the given separator
lista = [input("Enter a value to include in list 1: ") for i in range(3)]
lista2 = [input("Enter a value to include in list 2: ") for i in range(3)]
result = lista.copy()
if lista == lista2:
    print("True")
else:
    # remove from result every element that also appears in lista2
    for i in lista[:]:
        if i in lista2:
            result.remove(i)
    print("Result: ", result)
| _____no_output_____ | MIT | Exercicio02.ipynb | thainamariianr/LingProg |
4. Write a program that walks through a list with the following format: [['Brasil', 'Italia', [10, 9]], ['Brasil', 'Espanha', [5, 7]], ['Italia', 'Espanha', [7,8]]]. This list gives the number of fouls each team committed in each match. In the list above, in the match between Brasil and Italia, Brasil committed 10 fouls and Italia committed 9. The program must print: - the total number of fouls in the championship - the team that committed the most fouls - the team that committed the fewest fouls | import operator
lista = [['Brasil', 'Italia', [10, 9]], ['Brasil', 'Espanha', [5, 7]], ['Italia', 'Espanha', [7,8]]]
dicionario = {"Brasil": 0, "Italia": 0, "Espanha": 0}
total_faltas = 0
for item in lista:
total_faltas += sum(item[2])
dicionario[item[0]] += item[2][0]
dicionario[item[1]] += item[2][1]
print(dicionario)
time_mais_falta = max(dicionario.items(), key=operator.itemgetter(1))[0]
time_menos_falta = min(dicionario.items(), key=operator.itemgetter(1))[0]
print(f"Total de faltas {total_faltas}")
print(f"Total de faltas {time_mais_falta}")
print(f"Total de faltas {time_menos_falta}")
| _____no_output_____ | MIT | Exercicio02.ipynb | thainamariianr/LingProg |
Dictionary exercises. 5. Write a program that counts the number of vowels in a string and stores the counts in a dictionary whose key is the vowel. | import string
palavra = input("Digite uma palavra: ")
vogal = ['a', 'e', 'i', 'o', 'u']
dicionario = {'a': 0, 'e': 0, 'i':0, 'o': 0, 'u': 0}
for letra in palavra:
if letra in vogal:
dicionario[letra] = dicionario[letra] + 1
print(dicionario)
| _____no_output_____ | MIT | Exercicio02.ipynb | thainamariianr/LingProg |
6. Write a program that reads two grades for several students and stores the grades in a dictionary whose key is the student's name. Data entry should stop when an empty string is read as the name. Write a function that returns a student's average, given their name. | texto = input("Enter each student's name and two grades as Name,grade1,grade2, separating students with semicolons: ")
texto = texto.split(";")
notas = {}
count = 0
for n in texto:
nota = n.split(",")
notas[nota[0]] = {"nota1": nota[1], "nota2": nota[2]}
for n in notas:
media = (int(notas[n]['nota1']) + int(notas[n]['nota2']))/2
print("A media do aluno(a) {} é {} " .format(n, media))
| _____no_output_____ | MIT | Exercicio02.ipynb | thainamariianr/LingProg |
7. A kart track allows 10 laps for each of 6 drivers. Write a program that reads all the lap times in seconds and stores them in a dictionary whose key is the driver's name. At the end, report who set the best lap of the race and on which lap, plus the final ranking in order (1st is the champion). The champion is the driver with the lowest average time. | i = 0
dic = {}
while i < 6:
    voltas = input("Enter the driver's name followed by their lap times, separated by spaces: ")
    partes = voltas.split()
    dic[partes[0]] = [float(t) for t in partes[1:]]
    i += 1
print(dic)
 | _____no_output_____ | MIT | Exercicio02.ipynb | thainamariianr/LingProg |
Self Study 3 In this self study we perform character recognition using SVM classifiers. We use the MNIST dataset, which consists of 70000 handwritten digits 0..9 at a resolution of 28x28 pixels. Stuff we need: | import matplotlib.pyplot as plt
import numpy as np
import time
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix,accuracy_score
from sklearn.datasets import fetch_openml ##couldn't run with the previous code
| _____no_output_____ | MIT | ML_selfstudy3/MLSelfStudy3-F20.ipynb | Jbarata98/ML_AAU1920 |
Now we get the MNIST data. Using the fetch_openml function, this will be downloaded from the web, and stored in the directory you specify as data_home (replace my path in the following cell): | from sklearn.datasets import fetch_openml
mnist = fetch_openml(name='mnist_784', data_home='/home/starksultana/Documentos/Mestrado_4o ano/2o sem AAU/ML/ML_selfstudy3')
| _____no_output_____ | MIT | ML_selfstudy3/MLSelfStudy3-F20.ipynb | Jbarata98/ML_AAU1920 |
The data has .data and .target attributes. The following gives us some basic information on the data: | print("Number of datapoints: {}\n".format(mnist.data.shape[0]))
print("Number of features: {}\n".format(mnist.data.shape[1]))
print("features: ", mnist.data[0].reshape(196,4))
print("List of labels: {}\n".format(np.unique(mnist.target))) | Number of datapoints: 70000
Number of features: 784
features: [[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 3. 18. 18. 18.]
[126. 136. 175. 26.]
[166. 255. 247. 127.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 30. 36. 94. 154.]
[170. 253. 253. 253.]
[253. 253. 225. 172.]
[253. 242. 195. 64.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 49.]
[238. 253. 253. 253.]
[253. 253. 253. 253.]
[253. 251. 93. 82.]
[ 82. 56. 39. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 18.]
[219. 253. 253. 253.]
[253. 253. 198. 182.]
[247. 241. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 80. 156. 107. 253.]
[253. 205. 11. 0.]
[ 43. 154. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 14. 1. 154.]
[253. 90. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 139.]
[253. 190. 2. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 11.]
[190. 253. 70. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 35. 241. 225. 160.]
[108. 1. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 81. 240. 253.]
[253. 119. 25. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 45. 186.]
[253. 253. 150. 27.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 16.]
[ 93. 252. 253. 187.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 249. 253. 249.]
[ 64. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 46. 130.]
[183. 253. 253. 207.]
[ 2. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 39. 148. 229. 253.]
[253. 253. 250. 182.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 24. 114.]
[221. 253. 253. 253.]
[253. 201. 78. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 23. 66. 213. 253.]
[253. 253. 253. 198.]
[ 81. 2. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 18. 171.]
[219. 253. 253. 253.]
[253. 195. 80. 9.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 55. 172. 226. 253.]
[253. 253. 253. 244.]
[133. 11. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[136. 253. 253. 253.]
[212. 135. 132. 16.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]]
List of labels: ['0' '1' '2' '3' '4' '5' '6' '7' '8' '9']
| MIT | ML_selfstudy3/MLSelfStudy3-F20.ipynb | Jbarata98/ML_AAU1920 |
We can plot individual datapoints as follows: | index = 9
print("Value of datapoint no. {}:\n{}\n".format(index,mnist.data[index]))
print("As image:\n")
plt.imshow(mnist.data[index].reshape(28,28),cmap=plt.cm.gray_r)
#plt.show() | Value of datapoint no. 9:
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 189. 190. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 143. 247. 153. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 136. 247. 242. 86. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 192. 252. 187. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 62. 185.
18. 0. 0. 0. 0. 89. 236. 217. 47. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 216. 253.
60. 0. 0. 0. 0. 212. 255. 81. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 206. 252.
68. 0. 0. 0. 48. 242. 253. 89. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 131. 251. 212.
21. 0. 0. 11. 167. 252. 197. 5. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 29. 232. 247. 63.
0. 0. 0. 153. 252. 226. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 45. 219. 252. 143. 0.
0. 0. 116. 249. 252. 103. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 4. 96. 253. 255. 253. 200. 122.
7. 25. 201. 250. 158. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 92. 252. 252. 253. 217. 252. 252.
200. 227. 252. 231. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 87. 251. 247. 231. 65. 48. 189. 252.
252. 253. 252. 251. 227. 35. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 190. 221. 98. 0. 0. 0. 42. 196.
252. 253. 252. 252. 162. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 111. 29. 0. 0. 0. 0. 62. 239.
252. 86. 42. 42. 14. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 15. 148. 253.
218. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 121. 252. 231.
28. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 31. 221. 251. 129.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 218. 252. 160. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 122. 252. 82. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
As image:
| MIT | ML_selfstudy3/MLSelfStudy3-F20.ipynb | Jbarata98/ML_AAU1920 |
To make things a little bit simpler (and faster!), we can extract from the data binary subsets, that only contain the data for two selected digits: | digit0='4'
digit1='5'
mnist_bin_data=mnist.data[np.logical_or(mnist.target==digit0,mnist.target==digit1)]
mnist_bin_target=mnist.target[np.logical_or(mnist.target==digit0,mnist.target==digit1)]
print("The first datapoint now is: \n")
plt.imshow(mnist_bin_data[0].reshape(28,28),cmap=plt.cm.gray_r)
plt.show()
print(mnist_bin_target) | The first datapoint now is:
| MIT | ML_selfstudy3/MLSelfStudy3-F20.ipynb | Jbarata98/ML_AAU1920 |
**Exercise 1 [SVM]:** Split the mnist_bin data into training and test set. Learn different SVM models by varying the kernel functions (SVM). For each configuration, determine the time it takes to learn the model, and the accuracy on the test data. You can get the current time using:`import time` `now = time.time()`*Caution*: for some configurations, learning here can take a little while (several minutes).Using the numpy where() function, one can extract the indices of the test cases that were misclassified: `misclass = np.where(test != predictions)` Inspect some misclassified cases. Do they correspond to hard to recognize digits (also for the human reader)? How do results (time and accuracy) change, depending on whether you consider an 'easy' binary task (e.g., distinguishing '1' and '0'), or a more difficult one (e.g., '4' vs. '5'). Identify one or several good configurations that give a reasonable combination of accuracy and runtime. Use these configurations to perform a full classification of the 10 classes in the original dataset (after split into train/test). Using `sklearn.metrics.confusion_matrix` you can get an overview of all combinations of true and predicted labels. What does this tell you about which digits are easy, and which ones are difficult to recognize, and which ones are most easily confused? **Exercise 2 [SVM]:** Consider how the current data representation "presents" the digits to the classifiers, and try to improve this:**a)** Manually design feature functions for which you expect that based on your new features SVM classifiers can achieve a better accuracy than with the original features. Transform the data into your new feature space, and learn new classifiers. What accuracies do you get?**b)** Instead of designing an explicit feature mapping as in **a)**, define a suitable measure of similarity for the digits, and implement that measure as a kernel function. (Optional: verify that the function you have defined actually satisfies the positive-semidefiniteness property.) Use your kernel function as a custom kernel for the SVC classifier. See http://scikit-learn.org/stable/auto_examples/svm/plot_custom_kernel.htmlsphx-glr-auto-examples-svm-plot-custom-kernel-py for an example. | ###Exercise 1
''' Completely dies with 7 and 9, can't make it work :(
In the rest of the tasks it performed quite well with really high accuracies: for example 1 vs 0 ran with 99% accuracy at a test size of 30%
and 7 misclassifications, in 1.72 secs. 6 vs 3 only has 23 misclassifications but runs in 4 times the time (4 secs); 4 vs 5 runs in 6 secs but with 47 misclassifications.
With sigmoid it took 181 secs and had an accuracy of 53% on test data and 51% on training data when comparing 4 and 5.
rbf was taking so long I had to shut it down. UPDATE** it took 180 secs and 53% accuracy...
and poly took 12 secs, so basically 4x the time. So I'm sticking with the linear kernel.
'''
## choose the pair of digits to compare in the cell above (digit0/digit1)
import time
now = time.time()
print("alive")
#x: np.ndarray = mnist_bin_data
#print(mnist_bin_data.shape)
y: np.ndarray = mnist_bin_target
trnX, tstX, trnY, tstY = train_test_split(mnist.data, mnist.target, test_size=0.2,random_state=20)
print( "I'm doing stuff no worries")
classifier = SVC(kernel='poly')  # 'polynomial' is not a valid scikit-learn kernel name
classifier.fit(trnX,trnY)
pred_labels_train=classifier.predict(trnX)
print("Don't worry im training!")
pred_labels=classifier.predict(tstX)
misclassified = np.where(tstY != pred_labels)
##accuracy
print("Accuracy test: {}".format(accuracy_score(tstY,pred_labels)))
print("Accuracy train: {}".format(accuracy_score(trnY,pred_labels_train)))
print("Time required: {}" .format(time.time()-now))
print(confusion_matrix(tstY, pred_labels, labels=np.unique(mnist.target)))
print("misclassified nr:" , len(misclassified[0]))
#print("image misclassified",plt.imshow(mnist_bin_data[misclassified[0][0]].reshape(28,28),cmap=plt.cm.gray_r))
#### RESULTS ####
| _____no_output_____ | MIT | ML_selfstudy3/MLSelfStudy3-F20.ipynb | Jbarata98/ML_AAU1920 |
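One way to answer the "most easily confused" part of the exercise directly from the confusion matrix printed above is sketched below (an added illustration, not part of the original solution):

```python
# Sketch: the largest off-diagonal entry of the confusion matrix is the most confused pair
labels = np.unique(mnist.target)
cm = confusion_matrix(tstY, pred_labels, labels=labels)
off_diag = cm.copy()
np.fill_diagonal(off_diag, 0)
t, p = np.unravel_index(off_diag.argmax(), off_diag.shape)
print("true {} most often predicted as {} ({} times)".format(labels[t], labels[p], off_diag[t, p]))
```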
--- Exercise 2. 1st approach: try to reshape the data? Normalize the pixels? | ## exercise 2
from sklearn.preprocessing import StandardScaler
now = time.time()
#x: np.ndarray = mnist_bin_data
#y: np.ndarray = mnist_bin_target
print("don't worry i've just started")
scaler = StandardScaler()
trnX, tstX, trnY, tstY = train_test_split(mnist.data, mnist.target, test_size=0.3,random_state=20)
print("don't worry i'm alive")
trnX = scaler.fit_transform(trnX)
tstX = scaler.transform(tstX)  # scale the test set with the statistics fitted on the training set
model = SVC(kernel='linear')
model.fit(trnX, trnY)
pred_labels=model.predict(tstX)
print("Accuracy test: {}".format(accuracy_score(tstY,pred_labels)))
print("Accuracy train: {}".format(accuracy_score(trnY,pred_labels_train)))
print("Time required: {}" .format(time.time()-now))
#already getting really high accuracy so not really sure how to increase with the same classifier
#couldn't run the entire dataset it gets stuck, waited for a long time...
from sklearn import svm
now = time.time()
def good_kernel(X, Y):
    # custom kernel: returns the Gram matrix with shape (n_samples_X, n_samples_Y)
    return np.dot(X, Y.T) / 255
clf = svm.SVC(kernel=good_kernel)
clf.fit(trnX, trnY)
pred_labels = clf.predict(tstX)  # predict with the custom-kernel model, not the previous one
print("Accuracy test: {}".format(accuracy_score(tstY,pred_labels)))
print("Accuracy train: {}".format(accuracy_score(trnY,pred_labels_train)))
print("Time required: {}" .format(time.time()-now))
### Linear SVM is less prone to overfitting than non-linear.
# And you need to decide which kernel to choose based on your situation: if your number of features is really
# large compared to the training sample, just use a linear kernel; if your number of features
# is small but the training sample is large, you may also need a linear kernel but try to add more features;
# rbf should work better when the features are highly non-linear
| _____no_output_____ | MIT | ML_selfstudy3/MLSelfStudy3-F20.ipynb | Jbarata98/ML_AAU1920 |
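For the optional positive-semidefiniteness question in Exercise 2b, one quick numerical check is to look at the eigenvalues of the Gram matrix the kernel produces on a small sample (a sketch; the sample size of 100 is arbitrary):

```python
# Sketch: a Gram matrix K = X X^T / 255 is symmetric positive semidefinite,
# so its eigenvalues should be non-negative up to numerical noise.
sample = mnist_bin_data[:100].astype(float)
K = np.dot(sample, sample.T) / 255
print(np.linalg.eigvalsh(K).min() >= -1e-8)   # expected: True
```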
Object classification | from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split
import numpy as np
import os
import ast
from glob import glob
import random
import traceback
from tabulate import tabulate
import pickle
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from matplotlib import pyplot as plt | _____no_output_____ | MIT | Test/Object_classification/Object_classification_Random_Forest.ipynb | marcolamartina/LamIra |
Parameters | new_data=True
load_old_params=True
save_params=False
selected_space=True
from google.colab import drive
drive.mount('/content/drive') | _____no_output_____ | MIT | Test/Object_classification/Object_classification_Random_Forest.ipynb | marcolamartina/LamIra |
Utils functions | def translate(name):
translate_dict={"apple":"mela",
"ball":"palla",
"bell pepper":"peperone",
"binder":"raccoglitore",
"bowl":"ciotola",
"calculator":"calcolatrice",
"camera":"fotocamera",
"cell phone":"telefono",
"cereal box":"scatola",
"coffee mug":"tazza",
"comb":"spazzola",
"dry battery":"batteria",
"flashlight":"torcia",
"food box":"scatola",
"food can":"lattina",
"food cup":"barattolo",
"food jar":"barattolo",
"garlic":"aglio",
"lemon":"limone",
"lime":"lime",
"onion":"cipolla",
"orange":"arancia",
"peach":"pesca",
"pear":"pera",
"potato":"patata",
"tomato":"pomodoro",
"soda can":"lattina",
"marker":"pennarello",
"plate":"piatto",
"notebook":"quaderno",
"keyboard":"tastiera",
"glue stick":"colla",
"sponge":"spugna",
"toothpaste":"dentifricio",
"toothbrush":"spazzolino"
}
try:
return translate_dict[name]
except:
return name
def normalize_color(color):
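    # NOTE: the early return on the next line disables the normalisation, so the colour
    # features are passed through unchanged; the code further below is kept only for reference.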
return color
color_normalized=[]
for i,f in enumerate(color):
if i%3==0:
color_normalized.append(f/256)
else:
color_normalized.append((f+128)/256)
return color_normalized
def sort_and_cut_dict(dictionary,limit=3):
iterator=sorted(dictionary.items(), key=lambda item: item[1], reverse=True)[:limit]
coef=sum([i[1] for i in iterator])
return {k: v/coef for k, v in iterator}
| _____no_output_____ | MIT | Test/Object_classification/Object_classification_Random_Forest.ipynb | marcolamartina/LamIra |
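For example, `sort_and_cut_dict` keeps the largest entries and renormalises them so they sum to 1 (the scores below are made-up values, just to show the behaviour):

```python
# Sketch: top-3 entries, renormalised
scores = {"mela": 0.5, "pera": 0.3, "palla": 0.15, "tazza": 0.05}
print(sort_and_cut_dict(scores, limit=3))
# {'mela': 0.526..., 'pera': 0.316..., 'palla': 0.158...}
```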
Data | obj_dir = "/content/drive/My Drive/Tesi/Code/Object_classification"
#obj_dir = "/Users/marco/Google Drive/Tesi/Code/Object_classification"
data_dir = obj_dir+"/Data"
model_filename = obj_dir+"/model.pkl"
exclusion_list=["binder","camera","cell phone","dry battery"]
test_folder=["apple_3",
"bell_pepper_1",
"bowl_3",
"cereal_box_1",
"coffe_mug_5",
"comb_5",
"flashlight_4",
"food_box_6",
"food_can_2",
"garlic_1",
"glue_stick_3",
"keyboard_2",
"lemon_1",
"lime_1",
"onion_1",
"orange_1",
"pear_4",
"plate_5",
"potato_5",
"soda_can_2",
"sponge_8",
"tomato_1",
"toothbrush_2"
]
if new_data:
color_train=[]
shape_train=[]
texture_train=[]
color_test=[]
shape_test=[]
texture_test=[]
y_train=[]
y_test=[]
file_list=glob(data_dir+'/**', recursive=True)
number_of_files=len(file_list)
with open(obj_dir+"/dictionary.pickle","rb") as f:
dictionary=pickle.load(f)
for j,filename in enumerate(file_list):
if os.path.isfile(filename) and filename.endswith(".txt"):
print("{:.2f}%".format(j*100/number_of_files))
name=" ".join(filename.split("_")[:-3]).rsplit("/", 1)[1]
if name in exclusion_list:
continue
name=translate(name)
folder=filename.split("/")[-2]
if folder not in dictionary.keys():
continue
with open(filename, "r") as f:
features=[]
try:
lines=f.readlines()
for line in lines:
features.append(ast.literal_eval(line))
if len(features)==3:
color,shape,texture=features
color=normalize_color(color)
if folder in test_folder:
color_test.append(color)
shape_test.append(shape)
texture_test.append(texture)
if selected_space:
y_test.append(folder)
else:
y_test.append(name)
else:
color_train.append(color)
shape_train.append(shape)
texture_train.append(texture)
if selected_space:
y_train.append(folder)
else:
y_train.append(name)
except:
print("Error in {}".format(filename))
continue
y_train=np.array(y_train)
y_test=np.array(y_test)
X_train=np.array([np.concatenate((c, s, t), axis=None) for c,s,t in zip(color_train,shape_train,texture_train)])
X_test=np.array([np.concatenate((c, s, t), axis=None) for c,s,t in zip(color_test,shape_test,texture_test)])
color_train=np.array(color_train)
shape_train=np.array(shape_train)
texture_train=np.array(texture_train)
color_test=np.array(color_test)
shape_test=np.array(shape_test)
texture_test=np.array(texture_test)
X_train=color_train
X_test=color_test
else:
X_train=np.load(obj_dir+"/input_train.npy")
X_test=np.load(obj_dir+"/input_test.npy")
color_train=np.load(obj_dir+"/color_train.npy")
shape_train=np.load(obj_dir+"/shape_train.npy")
texture_train=np.load(obj_dir+"/texture_train.npy")
color_test=np.load(obj_dir+"/color_test.npy")
shape_test=np.load(obj_dir+"/shape_test.npy")
texture_test=np.load(obj_dir+"/texture_test.npy")
y_train=np.load(obj_dir+"/output_train.npy")
y_test=np.load(obj_dir+"/output_test.npy") | 0.07%
... (progress output truncated: the loading loop keeps printing percentages, increasing from 0.08% up to 12.96%)
12.97%
12.98%
12.98%
12.99%
13.00%
13.01%
13.01%
13.02%
13.03%
13.04%
13.04%
13.05%
13.06%
13.07%
13.07%
13.08%
13.09%
13.10%
13.10%
13.11%
13.12%
13.13%
13.13%
13.14%
13.15%
13.16%
13.16%
13.17%
13.18%
13.19%
13.19%
13.20%
13.21%
13.21%
13.22%
13.24%
13.24%
13.25%
13.26%
13.27%
13.27%
13.28%
13.29%
13.30%
13.30%
13.31%
13.32%
13.33%
13.33%
13.34%
13.35%
13.36%
13.36%
13.37%
13.38%
13.39%
13.39%
13.40%
13.41%
13.42%
13.42%
13.43%
13.44%
13.45%
13.45%
13.46%
13.47%
13.47%
13.48%
13.49%
13.50%
13.50%
13.51%
13.52%
13.53%
13.53%
13.54%
13.55%
13.56%
13.56%
13.57%
13.58%
13.59%
13.59%
13.60%
13.61%
13.62%
13.63%
13.64%
13.65%
13.65%
13.66%
13.67%
13.68%
13.68%
13.69%
13.70%
13.71%
13.71%
13.72%
13.73%
13.73%
13.74%
13.75%
13.76%
13.76%
13.77%
13.78%
13.79%
13.79%
13.80%
13.81%
13.82%
13.82%
13.83%
13.84%
13.85%
13.85%
13.86%
13.87%
13.88%
13.88%
13.89%
13.90%
13.91%
13.91%
13.92%
13.93%
13.94%
13.94%
13.95%
13.96%
13.97%
13.97%
13.98%
13.99%
13.99%
14.00%
14.01%
14.02%
14.02%
14.03%
14.04%
14.05%
14.05%
14.11%
14.11%
14.12%
14.13%
14.14%
14.14%
14.15%
14.16%
14.17%
14.17%
14.18%
14.19%
14.20%
14.20%
14.21%
14.22%
14.23%
14.23%
14.24%
14.25%
14.25%
14.26%
14.27%
14.28%
14.28%
14.29%
14.30%
14.31%
14.31%
14.32%
14.33%
14.34%
14.34%
14.35%
14.36%
14.37%
14.37%
14.38%
14.39%
14.40%
14.40%
14.41%
14.42%
14.43%
14.43%
14.44%
14.45%
14.46%
14.46%
14.47%
14.48%
14.49%
14.49%
14.50%
14.51%
14.51%
14.52%
14.53%
14.54%
14.54%
14.55%
14.56%
14.57%
14.58%
14.59%
14.60%
14.60%
14.61%
14.62%
14.63%
14.63%
14.64%
14.65%
14.66%
14.66%
14.67%
14.68%
14.69%
14.69%
14.70%
14.71%
14.72%
14.72%
14.73%
14.74%
14.75%
14.75%
14.76%
14.77%
14.77%
14.78%
14.79%
14.80%
14.80%
14.81%
14.82%
14.83%
14.83%
14.84%
14.85%
14.86%
14.86%
14.87%
14.88%
14.89%
14.89%
14.90%
14.91%
14.92%
14.92%
14.93%
14.94%
14.95%
14.95%
14.96%
14.97%
14.98%
14.98%
14.99%
15.00%
15.01%
15.01%
15.02%
15.03%
15.03%
15.04%
15.06%
15.06%
15.07%
15.08%
15.09%
15.09%
15.10%
15.11%
15.12%
15.12%
15.13%
15.14%
15.15%
15.15%
15.16%
15.17%
15.18%
15.18%
15.19%
15.20%
15.21%
15.21%
15.22%
15.23%
15.24%
15.24%
15.25%
15.26%
15.27%
15.27%
15.28%
15.29%
15.29%
15.30%
15.31%
15.32%
15.32%
15.33%
15.34%
15.35%
15.35%
15.36%
15.37%
15.38%
15.38%
15.39%
15.41%
15.42%
15.43%
15.44%
15.44%
15.45%
15.46%
15.47%
15.47%
15.48%
15.49%
15.50%
15.50%
15.51%
15.52%
15.53%
15.53%
15.54%
15.55%
15.55%
15.56%
15.57%
15.58%
15.58%
15.59%
15.60%
15.61%
15.61%
15.62%
15.63%
15.64%
15.64%
15.65%
15.66%
15.67%
15.67%
15.68%
15.69%
15.70%
15.70%
15.71%
15.72%
15.73%
15.73%
15.74%
15.75%
15.76%
15.76%
15.77%
15.78%
15.79%
15.79%
15.80%
15.81%
15.81%
15.82%
15.84%
15.84%
15.85%
15.86%
15.87%
15.87%
15.88%
15.89%
15.90%
15.90%
15.91%
15.92%
15.93%
15.93%
15.94%
15.95%
15.96%
15.96%
15.97%
15.98%
15.99%
15.99%
16.00%
16.01%
16.02%
16.02%
16.03%
16.04%
16.05%
16.05%
16.06%
16.07%
16.07%
16.08%
16.09%
16.10%
16.10%
16.11%
16.12%
16.13%
16.13%
16.14%
16.15%
16.16%
16.16%
16.17%
16.18%
16.19%
16.19%
16.20%
Error in /content/drive/My Drive/Tesi/Code/Object_classification/Data/cereal_box/cereal_box_4/cereal_box_4_1_181.txt
16.21%
16.22%
16.22%
16.23%
16.24%
16.25%
16.25%
16.26%
16.27%
16.28%
16.29%
16.30%
16.31%
Error in /content/drive/My Drive/Tesi/Code/Object_classification/Data/cereal_box/cereal_box_5/cereal_box_5_4_25.txt
16.31%
16.32%
16.33%
16.33%
16.34%
16.35%
16.36%
16.36%
16.37%
16.38%
16.39%
16.39%
16.40%
16.41%
16.42%
16.42%
16.43%
16.44%
16.45%
16.45%
16.46%
16.47%
16.48%
16.48%
16.49%
16.50%
16.51%
| MIT | Test/Object_classification/Object_classification_Random_Forest.ipynb | marcolamartina/LamIra |
Save input data | if selected_space:
new_y_train=[]
for i in y_train:
new_label=dictionary[i][1]
#new_label=new_label.split("-")[0]
new_y_train.append(new_label)
new_y_test=[]
for i in y_test:
new_label=dictionary[i][1]
#new_label=new_label.split("-")[0]
new_y_test.append(new_label)
y_train=np.array(new_y_train)
y_test=np.array(new_y_test)
if new_data and save_params:
np.save(obj_dir+"/input_train.npy",X_train)
np.save(obj_dir+"/input_test.npy",X_test)
np.save(obj_dir+"/color_train.npy",color_train)
np.save(obj_dir+"/shape_train.npy",shape_train)
np.save(obj_dir+"/texture_train.npy",texture_train)
np.save(obj_dir+"/color_test.npy",color_test)
np.save(obj_dir+"/shape_test.npy",shape_test)
np.save(obj_dir+"/texture_test.npy",texture_test)
np.save(obj_dir+"/output_test.npy",y_test)
np.save(obj_dir+"/output_train.npy",y_train) | _____no_output_____ | MIT | Test/Object_classification/Object_classification_Random_Forest.ipynb | marcolamartina/LamIra |
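When new_data is False, the previously saved splits can simply be reloaded instead of recomputed; a minimal sketch (the file names match the np.save calls above, everything else is an assumption):
import numpy as np

# reload the previously saved train/test splits and labels
X_train = np.load(obj_dir + "/input_train.npy")
X_test = np.load(obj_dir + "/input_test.npy")
y_train = np.load(obj_dir + "/output_train.npy")
y_test = np.load(obj_dir + "/output_test.npy")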
Classifier fitting | if load_old_params and False:  # the 'and False' keeps this branch disabled, so the classifier below is always retrained
with open(model_filename, 'rb') as file:
clf = pickle.load(file)
else:
clf = RandomForestClassifier(n_jobs=-1, n_estimators=30)
clf.fit(X_train,y_train)
print(clf.score(X_test,y_test)) | _____no_output_____ | MIT | Test/Object_classification/Object_classification_Random_Forest.ipynb | marcolamartina/LamIra |
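The values n_estimators=30 and the default depth above are fixed choices; one way to pick them instead is a small grid search. A sketch using scikit-learn's GridSearchCV (the grid values are arbitrary assumptions):
from sklearn.model_selection import GridSearchCV

param_grid = {'n_estimators': [10, 30, 100], 'max_depth': [None, 10, 20]}
search = GridSearchCV(RandomForestClassifier(n_jobs=-1), param_grid, cv=3)
search.fit(X_train, y_train)            # refits the forest for every grid point
print(search.best_params_, search.best_score_)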
Saving parameters | if save_params:
with open(model_filename, 'wb') as file:
pickle.dump(clf, file) | _____no_output_____ | MIT | Test/Object_classification/Object_classification_Random_Forest.ipynb | marcolamartina/LamIra |
Score | def classify_prediction(prediction):
sure=[]
unsure=[]
dubious=[]
cannot_answer=[]
for pred in prediction:
o,p=pred
values=list(p.values())
keys=list(p.keys())
# sure
if values[0]>0.8:
sure.append(pred)
# unsure
elif values[0]>0.6:
unsure.append(pred)
# dubious
elif values[0]>0.4:
dubious.append(pred)
# cannot_answer
else:
cannot_answer.append(pred)
return {"sure":sure, "unsure":unsure, "dubious":dubious, "cannot_answer":cannot_answer}
def calculate_accuracy(category,prediction):
    # Fraction of (label, confidence_dict) pairs in this category that count as correct
    counter=0
    if category=="dubious":
        for o,p in prediction:
            if o in list(p.keys())[0:2]:
                counter+=1
    elif category=="cannot_answer":
        for o,p in prediction:
            if o not in list(p.keys())[0:2]:
                counter+=1
    else:
        for o,p in prediction:
            if o.split("-")[0] in list(p.keys())[0]:
                counter+=1
    return counter/len(prediction)
label_prob=clf.predict_proba(X_test)
pred=[[y_test[j],sort_and_cut_dict({clf.classes_[i]:v for i,v in enumerate(row)})] for j,row in enumerate(label_prob)]
pred_classified=classify_prediction(pred)
print("TOTAL TEST: {}".format(len(pred)))
for l,pred in pred_classified.items():
print(l.upper())
print(40*"-")
selected=[]
for o,p in pred:
if l=="dubious" and o not in list(p.keys())[0:2]:
selected.append([o,", ".join([str(a)+":"+str(round(b,2)) for a,b in list(p.items())])])
elif l=="cannot_answer" and o in list(p.keys())[0:2]:
selected.append([o,", ".join([str(a)+":"+str(round(b,2)) for a,b in list(p.items())])])
elif l=="unsure" and o.split("-")[0] not in list(p.keys())[0]:
selected.append([o,", ".join([str(a)+":"+str(round(b,2)) for a,b in list(p.items())])])
elif (l=="sure") and o != list(p.keys())[0]:
selected.append([o,", ".join([str(a)+":"+str(round(b,2)) for a,b in list(p.items())])])
print(tabulate(selected, headers=['Original','Predicted']))
print("Not correct: {}/{} - {:.2f}%".format(len(selected),len(pred),len(selected)*100/len(pred)))
accuracy=calculate_accuracy(l,pred)
print("Accuracy: {:.2f}".format(accuracy)) | _____no_output_____ | MIT | Test/Object_classification/Object_classification_Random_Forest.ipynb | marcolamartina/LamIra |
Test | clf.score(X_test,y_test)
plt.plot(clf.feature_importances_)
clf.feature_importances_
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score, confusion_matrix
import pandas as pd
def classification_report(y_true, y_pred):
print(f"Accuracy: {accuracy_score(y_true, y_pred)}.")
print(f"Precision: {precision_score(y_true, y_pred, average='weighted', zero_division=True)}.")
print(f"Recall: {recall_score(y_true, y_pred, average='weighted')}.")
print(f"F1-Score: {f1_score(y_true, y_pred, average='weighted')}.")
print("\nSuddivisione per Classe")
matrix = confusion_matrix(y_true, y_pred)
# i falsi positivi si trovano sommando le colonne ed eliminando l'elemento diagonale (che rappresenta i veri positivi)
FP = matrix.sum(axis=0) - np.diag(matrix)
# i falsi negativi invece si individuano sommando le righe
FN = matrix.sum(axis=1) - np.diag(matrix)
TP = np.diag(matrix)
TN = matrix.sum() - (FP + FN + TP)
class_names = np.unique(y_true)
metrics_per_class = {}
class_accuracies = (TP+TN)/(TP+TN+FP+FN)
class_precisions = TP/(TP+FP)
class_recalls = TP/(TP+FN)
class_f1_scores = (2 * class_precisions * class_recalls) / (class_precisions + class_recalls)
i=0
for name in class_names:
metrics_per_class[name] = [class_accuracies.tolist().pop(i), class_precisions.tolist().pop(i), class_recalls.tolist().pop(i), class_f1_scores.tolist().pop(i), FP.tolist().pop(i), FN.tolist().pop(i)]
i += 1
result = pd.DataFrame(metrics_per_class, index=["Accuracy", "Precision", "Recall", "F1 Score", "FP", "FN"]).transpose()
print(result, end="\n\n")
return metrics_per_class
#from sklearn.metrics import classification_report
y_true=y_test
y_pred=clf.predict(X_test)
d=classification_report(y_true, y_pred)
exclusion_list=["batteria","ciotola","piatto","cipolla","pomodoro"]
for k in exclusion_list:
del d[k]
data=[]
labels = []
for k,v in d.items():
data.append([k]+v[:4])
labels.append(k)
data=np.array(data)
colors = ['red','yellow','blue','green']
df = pd.DataFrame(data.T, index=["Label","Accuracy", "Precision", "Recall", "F1 Score"]).transpose()
#df=df.set_index("Label")
df[["Accuracy", "Precision", "Recall", "F1 Score"]]=df[["Accuracy", "Precision", "Recall", "F1 Score"]].apply(pd.to_numeric)
ax = df.plot(x="Label", y=["Accuracy", "Precision", "Recall", "F1 Score"], kind="barh",figsize=(15,15))
plt.show()
| _____no_output_____ | MIT | Test/Object_classification/Object_classification_Random_Forest.ipynb | marcolamartina/LamIra |
You are given two strings as input. You want to find out if these **two strings** are **at most one edit away** from each other. An edit is defined as either: **inserting a character** (length increased by 1), **removing a character** (length decreased by 1), or **replacing a character** (length doesn't change). *This edit distance is also called the Levenshtein distance!* | # method 1: brute force
# O(N)
# N is the length of the **shorter** string
def oneEdit(s1, s2):
l1 = len(s1)
l2 = len(s2)
if (l1 == l2):
return checkReplace(s1, s2)
elif abs(l1-l2) == 1:
return checkInsRem(s1, s2)
else:
return False
def checkReplace(s1, s2):
foundDiff = 0
for i in range(len(s1)):
if s1[i] != s2[i]:
foundDiff += 1
if foundDiff > 1:
return False
else:
return True
# check whether inserting a single character into the shorter string can turn it into the longer string
def checkInsRem(s1, s2):
if len(s1) < len(s2):
short = s1
long = s2
else:
short = s2
long = s1
index_s = 0
index_l = 0
while (index_s<len(short)) and (index_l<len(long)):
if (short[index_s] != long[index_l]):
if index_s != index_l: # found the second different letter
return False
index_l += 1
else:
index_s += 1
index_l += 1
return True
s1 = 'pale'
s2 = 'phhle'
oneEdit(s1, s2)
s3 = 'pale'
s4 = 'ple'
oneEdit(s3, s4) | _____no_output_____ | MIT | notebooks/ch1_arrays_and_strings/1.5 One Away.ipynb | Julyzzzzzz/Practice-on-data-structures-and-algorithms |
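A more compact equivalent (a sketch, not part of the original notebook) walks both strings with two pointers and tolerates exactly one mismatch:
def one_edit_away(a, b):
    # make `a` the shorter string
    if len(a) > len(b):
        a, b = b, a
    if len(b) - len(a) > 1:
        return False
    i = j = 0
    used_edit = False
    while i < len(a) and j < len(b):
        if a[i] != b[j]:
            if used_edit:
                return False
            used_edit = True
            if len(a) == len(b):
                i += 1          # replacement: advance both pointers
        else:
            i += 1
        j += 1
    return True

print(one_edit_away('pale', 'ple'))    # True (one removal)
print(one_edit_away('pale', 'phhle'))  # False (two edits needed)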
Differential Methylated Genes - Pairwise | import pandas as pd
import anndata
import xarray as xr
from ALLCools.plot import *
from ALLCools.mcds import MCDS
from ALLCools.clustering import PairwiseDMG, cluster_enriched_features
import pathlib | _____no_output_____ | MIT | docs/allcools/cell_level/dmg/04-PairwiseDMG.ipynb | mukamel-lab/ALLCools |
Parameters | adata_path = '../step_by_step/100kb/adata.with_coords.h5ad'
cluster_col = 'L1'
# change this to the paths to your MCDS files
gene_fraction_dir = 'gene_frac/'
obs_dim = 'cell'
var_dim = 'gene'
# DMG
mc_type = 'CHN'
top_n = 1000
adj_p_cutoff = 1e-3
delta_rate_cutoff = 0.3
auroc_cutoff = 0.9
random_state = 0
n_jobs = 30 | _____no_output_____ | MIT | docs/allcools/cell_level/dmg/04-PairwiseDMG.ipynb | mukamel-lab/ALLCools |
Load | adata = anndata.read_h5ad(adata_path)
cell_meta = adata.obs.copy()
cell_meta.index.name = obs_dim
gene_meta = pd.read_csv(f'{gene_fraction_dir}/GeneMetadata.csv.gz', index_col=0)
gene_mcds = MCDS.open(f'{gene_fraction_dir}/*_da_frac.mcds', use_obs=cell_meta.index)
gene_mcds | _____no_output_____ | MIT | docs/allcools/cell_level/dmg/04-PairwiseDMG.ipynb | mukamel-lab/ALLCools |
Pairwise DMG | pwdmg = PairwiseDMG(max_cell_per_group=1000,
top_n=top_n,
adj_p_cutoff=adj_p_cutoff,
delta_rate_cutoff=delta_rate_cutoff,
auroc_cutoff=auroc_cutoff,
random_state=random_state,
n_jobs=n_jobs)
pwdmg.fit_predict(x=gene_mcds[f'{var_dim}_da_frac'].sel(mc_type=mc_type),
groups=cell_meta[cluster_col])
pwdmg.dmg_table.to_hdf(f'{cluster_col}.PairwiseDMG.{mc_type}.hdf', key='data')
pwdmg.dmg_table.head() | _____no_output_____ | MIT | docs/allcools/cell_level/dmg/04-PairwiseDMG.ipynb | mukamel-lab/ALLCools |
Aggregating Cluster DMG - Weighted total AUROC aggregated from the pairwise comparisons. Aggregate Pairwise Comparisons | cluster_dmgs = pwdmg.aggregate_pairwise_dmg(adata, groupby=cluster_col)
# save all the DMGs
with pd.HDFStore(f'{cluster_col}.ClusterRankedPWDMG.{mc_type}.hdf') as hdf:
for cluster, dmgs in cluster_dmgs.items():
hdf[cluster] = dmgs[dmgs > 0.0001] | _____no_output_____ | MIT | docs/allcools/cell_level/dmg/04-PairwiseDMG.ipynb | mukamel-lab/ALLCools |
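The per-cluster gene rankings can later be read back from that HDF store; a small sketch (picking the first available cluster is just an example):
# read one cluster's ranked DMGs back from the store
with pd.HDFStore(f'{cluster_col}.ClusterRankedPWDMG.{mc_type}.hdf', mode='r') as hdf:
    print(hdf.keys())                       # available clusters
    some_cluster = hdf.keys()[0]            # pick the first one as an example
    top_genes = hdf[some_cluster].sort_values(ascending=False).head(20)
print(top_genes)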
Import libraries | import os
import warnings
warnings.filterwarnings('ignore')
#Packages related to data importing, manipulation, exploratory data #analysis, data understanding
import numpy as np
import pandas as pd
from pandas import Series, DataFrame
from termcolor import colored as cl # text customization
#Packages related to data visualizaiton
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
#Setting plot sizes and type of plot
plt.rc("font", size=14)
plt.rcParams['axes.grid'] = True
plt.figure(figsize=(6,3))
plt.gray()
from matplotlib.backends.backend_pdf import PdfPages
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn import metrics
from sklearn.impute import MissingIndicator, SimpleImputer
from sklearn.preprocessing import PolynomialFeatures, KBinsDiscretizer, FunctionTransformer
from sklearn.preprocessing import StandardScaler, MinMaxScaler, MaxAbsScaler
from sklearn.preprocessing import LabelEncoder, OneHotEncoder, LabelBinarizer, OrdinalEncoder
import statsmodels.formula.api as smf
import statsmodels.tsa as tsa
from sklearn.linear_model import LogisticRegression, LinearRegression, ElasticNet, Lasso, Ridge
from sklearn.neighbors import KNeighborsClassifier, KNeighborsRegressor
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor, export_graphviz, export
from sklearn.ensemble import BaggingClassifier, BaggingRegressor,RandomForestClassifier,RandomForestRegressor
from sklearn.ensemble import GradientBoostingClassifier,GradientBoostingRegressor, AdaBoostClassifier, AdaBoostRegressor
from sklearn.svm import LinearSVC, LinearSVR, SVC, SVR
from xgboost import XGBClassifier
from sklearn.metrics import f1_score
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix | _____no_output_____ | MIT | Credit Card Fraud Detection.ipynb | mouhamadibrahim/Credit-Card-Fraud-Detection |
Importing data. This dataset contains real bank transactions made by European cardholders in 2013; the dataset can be downloaded here: https://www.kaggle.com/mlg-ulb/creditcardfraud | data=pd.read_csv("creditcard.csv") | _____no_output_____ | MIT | Credit Card Fraud Detection.ipynb | mouhamadibrahim/Credit-Card-Fraud-Detection
Checking transactions: we can see that only 0.17% of the transactions are fraudulent | Total_transactions = len(data)
normal = len(data[data.Class == 0])
fraudulent = len(data[data.Class == 1])
fraud_percentage = round(fraudulent/normal*100, 2)
print(cl('Total number of Transactions are {}'.format(Total_transactions), attrs = ['bold']))
print(cl('Number of Normal Transactions are {}'.format(normal), attrs = ['bold']))
print(cl('Number of fraudulent Transactions are {}'.format(fraudulent), attrs = ['bold']))
print(cl('Percentage of fraud Transactions is {}'.format(fraud_percentage), attrs = ['bold'])) | Total number of Transactions are 284807
Number of Normal Transactions are 284315
Number of fraudulent Transactions are 492
Percentage of fraud Transactions is 0.17
| MIT | Credit Card Fraud Detection.ipynb | mouhamadibrahim/Credit-Card-Fraud-Detection |
Feature Scaling | sc = StandardScaler()
amount = data['Amount'].values
data['Amount'] = sc.fit_transform(amount.reshape(-1, 1)) | _____no_output_____ | MIT | Credit Card Fraud Detection.ipynb | mouhamadibrahim/Credit-Card-Fraud-Detection |
Dropping columns and other features | data.drop(['Time'], axis=1, inplace=True)
data.drop_duplicates(inplace=True)
X = data.drop('Class', axis = 1).values
y = data['Class'].values | _____no_output_____ | MIT | Credit Card Fraud Detection.ipynb | mouhamadibrahim/Credit-Card-Fraud-Detection |
Training the model | X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 1) | _____no_output_____ | MIT | Credit Card Fraud Detection.ipynb | mouhamadibrahim/Credit-Card-Fraud-Detection |
Decision Trees | DT = DecisionTreeClassifier(max_depth = 4, criterion = 'entropy')
DT.fit(X_train, y_train)
dt_yhat = DT.predict(X_test)
print('Accuracy score of the Decision Tree model is {}'.format(accuracy_score(y_test, dt_yhat)))
print('F1 score of the Decision Tree model is {}'.format(f1_score(y_test, dt_yhat))) | F1 score of the Decision Tree model is 0.7521367521367521
| MIT | Credit Card Fraud Detection.ipynb | mouhamadibrahim/Credit-Card-Fraud-Detection |
K nearest neighbor | n = 7
KNN = KNeighborsClassifier(n_neighbors = n)
KNN.fit(X_train, y_train)
knn_yhat = KNN.predict(X_test)
print('Accuracy score of the K-Nearest Neighbors model is {}'.format(accuracy_score(y_test, knn_yhat)))
print('F1 score of the K-Nearest Neighbors model is {}'.format(f1_score(y_test, knn_yhat))) | Accuracy score of the K-Nearest Neighbors model is 0.999288989494457
F1 score of the K-Nearest Neighbors model is 0.7949790794979079
| MIT | Credit Card Fraud Detection.ipynb | mouhamadibrahim/Credit-Card-Fraud-Detection |
Logistic Regression | lr = LogisticRegression()
lr.fit(X_train, y_train)
lr_yhat = lr.predict(X_test)
print('Accuracy score of the Logistic Regression model is {}'.format(accuracy_score(y_test, lr_yhat)))
print('F1 score of the Logistic Regression model is {}'.format(f1_score(y_test, lr_yhat))) | F1 score of the Logistic Regression model is 0.6666666666666666
| MIT | Credit Card Fraud Detection.ipynb | mouhamadibrahim/Credit-Card-Fraud-Detection |
SVM classifier | svm = SVC()
svm.fit(X_train, y_train)
svm_yhat = svm.predict(X_test)
print('Accuracy score of the Support Vector Machines model is {}'.format(accuracy_score(y_test, svm_yhat)))
print('F1 score of the Support Vector Machines model is {}'.format(f1_score(y_test, svm_yhat))) | F1 score of the Support Vector Machines model is 0.7813953488372093
| MIT | Credit Card Fraud Detection.ipynb | mouhamadibrahim/Credit-Card-Fraud-Detection |
Random Forest | rf = RandomForestClassifier(max_depth = 4)
rf.fit(X_train, y_train)
rf_yhat = rf.predict(X_test)
print('Accuracy score of the Random Forest model is {}'.format(accuracy_score(y_test, rf_yhat)))
print('F1 score of the Random Forest model is {}'.format(f1_score(y_test, rf_yhat))) | F1 score of the Random Forest model is 0.7397260273972602
| MIT | Credit Card Fraud Detection.ipynb | mouhamadibrahim/Credit-Card-Fraud-Detection |
XGBClassifier | xgb = XGBClassifier(max_depth = 4)
xgb.fit(X_train, y_train)
xgb_yhat = xgb.predict(X_test)
print('Accuracy score of the XGBoost model is {}'.format(accuracy_score(y_test, xgb_yhat)))
print('F1 score of the XGBoost model is {}'.format(f1_score(y_test, xgb_yhat))) | F1 score of the XGBoost model is 0.8495575221238937
| MIT | Credit Card Fraud Detection.ipynb | mouhamadibrahim/Credit-Card-Fraud-Detection |
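confusion_matrix is imported earlier in this notebook but never used; on such an imbalanced dataset it is worth inspecting alongside accuracy and F1. A sketch for the XGBoost predictions above:
cm = confusion_matrix(y_test, xgb_yhat)
print(cm)                      # rows: actual (0 = normal, 1 = fraud), columns: predicted
tn, fp, fn, tp = cm.ravel()
print('Missed frauds (false negatives):', fn)
print('False alarms (false positives):', fp)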
Serving Spark NLP with API: Synapse ML - SynapseML Installation | import json
import os
from google.colab import files
license_keys = files.upload()
with open(list(license_keys.keys())[0]) as f:
license_keys = json.load(f)
# Defining license key-value pairs as local variables
locals().update(license_keys)
# Adding license key-value pairs to environment variables
os.environ.update(license_keys)
# Installing pyspark and spark-nlp
! pip install --upgrade -q pyspark==3.2.0 spark-nlp==$PUBLIC_VERSION
# Installing Spark NLP Healthcare
! pip install --upgrade -q spark-nlp-jsl==$JSL_VERSION --extra-index-url https://pypi.johnsnowlabs.com/$SECRET
! pip -q install requests | _____no_output_____ | Apache-2.0 | tutorials/RestAPI/Serving_SparkNLP_with_Synapse.ipynb | iamvarol/spark-nlp-workshop |
Imports and Spark Session | import pandas as pd
import pyspark
import sparknlp
import sparknlp_jsl
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline, PipelineModel
import pyspark.sql.functions as F
from pyspark.sql.types import *
from sparknlp.base import *
from sparknlp.annotator import *
from sparknlp_jsl.annotator import *
from sparknlp.training import *
from sparknlp.training import CoNLL
import time
import requests
import uuid
import json
import requests
from concurrent.futures import ThreadPoolExecutor
spark = SparkSession.builder \
.appName("Spark") \
.master("local[*]") \
.config("spark.driver.memory", "16G") \
.config("spark.serializer", "org.apache.spark.serializer.KryoSerializer") \
.config("spark.kryoserializer.buffer.max", "2000M") \
.config("spark.jars.packages", "com.microsoft.azure:synapseml_2.12:0.9.5,com.johnsnowlabs.nlp:spark-nlp-spark32_2.12:"+PUBLIC_VERSION)\
.config("spark.jars", "https://pypi.johnsnowlabs.com/"+SECRET+"/spark-nlp-jsl-"+JSL_VERSION+"-spark32.jar")\
.config("spark.jars.repositories", "https://mmlspark.azureedge.net/maven")\
.getOrCreate()
print(sparknlp.version())
print(sparknlp_jsl.version())
spark
import synapse.ml
from synapse.ml.io import * | _____no_output_____ | Apache-2.0 | tutorials/RestAPI/Serving_SparkNLP_with_Synapse.ipynb | iamvarol/spark-nlp-workshop |
Preparing a pipeline with Entity Resolution | # Annotator that transforms a text column from dataframe into an Annotation ready for NLP
document_assembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("document")
# Sentence Detector DL annotator, processes various sentences per line
sentenceDetectorDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl_healthcare", "en", 'clinical/models') \
.setInputCols(["document"]) \
.setOutputCol("sentence")
# Tokenizer splits words in a relevant format for NLP
tokenizer = Tokenizer()\
.setInputCols(["sentence"])\
.setOutputCol("token")
# WordEmbeddingsModel pretrained "embeddings_clinical" includes a model of 1.7Gb that needs to be downloaded
word_embeddings = WordEmbeddingsModel.pretrained("embeddings_clinical", "en", "clinical/models")\
.setInputCols(["sentence", "token"])\
.setOutputCol("word_embeddings")
# Named Entity Recognition for clinical concepts.
clinical_ner = MedicalNerModel.pretrained("ner_clinical", "en", "clinical/models") \
.setInputCols(["sentence", "token", "word_embeddings"]) \
.setOutputCol("ner")
ner_converter_icd = NerConverterInternal() \
.setInputCols(["sentence", "token", "ner"]) \
.setOutputCol("ner_chunk")\
.setWhiteList(['PROBLEM'])\
.setPreservePosition(False)
c2doc = Chunk2Doc()\
.setInputCols("ner_chunk")\
.setOutputCol("ner_chunk_doc")
sbert_embedder = BertSentenceEmbeddings.pretrained('sbiobert_base_cased_mli', 'en','clinical/models')\
.setInputCols(["ner_chunk_doc"])\
.setOutputCol("sentence_embeddings")\
.setCaseSensitive(False)
icd_resolver = SentenceEntityResolverModel.pretrained("sbiobertresolve_icd10cm_augmented_billable_hcc","en", "clinical/models") \
.setInputCols(["ner_chunk", "sentence_embeddings"]) \
.setOutputCol("icd10cm_code")\
.setDistanceFunction("EUCLIDEAN")
# Build up the pipeline
resolver_pipeline = Pipeline(
stages = [
document_assembler,
sentenceDetectorDL,
tokenizer,
word_embeddings,
clinical_ner,
ner_converter_icd,
c2doc,
sbert_embedder,
icd_resolver
])
empty_data = spark.createDataFrame([['']]).toDF("text")
resolver_p_model = resolver_pipeline.fit(empty_data) | sentence_detector_dl_healthcare download started this may take some time.
Approximate size to download 367.3 KB
[OK!]
embeddings_clinical download started this may take some time.
Approximate size to download 1.6 GB
[OK!]
ner_clinical download started this may take some time.
Approximate size to download 13.9 MB
[OK!]
sbiobert_base_cased_mli download started this may take some time.
Approximate size to download 384.3 MB
[OK!]
sbiobertresolve_icd10cm_augmented_billable_hcc download started this may take some time.
Approximate size to download 1.1 GB
[OK!]
| Apache-2.0 | tutorials/RestAPI/Serving_SparkNLP_with_Synapse.ipynb | iamvarol/spark-nlp-workshop |
Adding a clinical note as a text example | clinical_note = """A 28-year-old female with a history of gestational diabetes mellitus diagnosed eight years
prior to presentation and subsequent type two diabetes mellitus (T2DM), one prior
episode of HTG-induced pancreatitis three years prior to presentation, associated
with an acute hepatitis, and obesity with a body mass index (BMI) of 33.5 kg/m2,
presented with a one-week history of polyuria, polydipsia, poor appetite, and vomiting.
Two weeks prior to presentation, she was treated with a five-day course of amoxicillin
for a respiratory tract infection. She was on metformin, glipizide, and dapagliflozin
for T2DM and atorvastatin and gemfibrozil for HTG. She had been on dapagliflozin for six months
at the time of presentation. Physical examination on presentation was significant for dry oral mucosa;
significantly, her abdominal examination was benign with no tenderness, guarding, or rigidity. Pertinent
laboratory findings on admission were: serum glucose 111 mg/dl, bicarbonate 18 mmol/l, anion gap 20,
creatinine 0.4 mg/dL, triglycerides 508 mg/dL, total cholesterol 122 mg/dL, glycated hemoglobin (HbA1c)
10%, and venous pH 7.27. Serum lipase was normal at 43 U/L. Serum acetone levels could not be assessed
as blood samples kept hemolyzing due to significant lipemia. The patient was initially admitted for
starvation ketosis, as she reported poor oral intake for three days prior to admission. However,
serum chemistry obtained six hours after presentation revealed her glucose was 186 mg/dL, the anion gap
was still elevated at 21, serum bicarbonate was 16 mmol/L, triglyceride level peaked at 2050 mg/dL, and
lipase was 52 U/L. The β-hydroxybutyrate level was obtained and found to be elevated at 5.29 mmol/L -
the original sample was centrifuged and the chylomicron layer removed prior to analysis due to
interference from turbidity caused by lipemia again. The patient was treated with an insulin drip
for euDKA and HTG with a reduction in the anion gap to 13 and triglycerides to 1400 mg/dL, within
24 hours. Her euDKA was thought to be precipitated by her respiratory tract infection in the setting
of SGLT2 inhibitor use. The patient was seen by the endocrinology service and she was discharged on
40 units of insulin glargine at night, 12 units of insulin lispro with meals, and metformin 1000 mg
two times a day. It was determined that all SGLT2 inhibitors should be discontinued indefinitely. She
had close follow-up with endocrinology post discharge."""
data = spark.createDataFrame([[clinical_note]]).toDF("text") | _____no_output_____ | Apache-2.0 | tutorials/RestAPI/Serving_SparkNLP_with_Synapse.ipynb | iamvarol/spark-nlp-workshop |
Creating a JSON file with the clinical note. Since SynapseML runs a webservice that accepts HTTP calls with json format | data_json = {"text": clinical_note } | _____no_output_____ | Apache-2.0 | tutorials/RestAPI/Serving_SparkNLP_with_Synapse.ipynb | iamvarol/spark-nlp-workshop
Running a Synapse server | serving_input = spark.readStream.server() \
.address("localhost", 9999, "benchmark_api") \
.option("name", "benchmark_api") \
.load() \
.parseRequest("benchmark_api", data.schema)
serving_output = resolver_p_model.transform(serving_input) \
.makeReply("icd10cm_code")
server = serving_output.writeStream \
.server() \
.replyTo("benchmark_api") \
.queryName("benchmark_query") \
.option("checkpointLocation", "file:///tmp/checkpoints-{}".format(uuid.uuid1())) \
.start()
def post_url(args):
print(f"- Request {str(args[2])} launched!")
res = requests.post(args[0], data=args[1])
print(f"**Response {str(args[2])} received**")
return res
# If you want to send parallel calls, just add more tuples to list_of_urls array
# tuple: (URL from above, json, number_of_call)
list_of_urls = [("http://localhost:9999/benchmark_api",json.dumps(data_json), 0)]
with ThreadPoolExecutor() as pool:
response_list = list(pool.map(post_url,list_of_urls)) | _____no_output_____ | Apache-2.0 | tutorials/RestAPI/Serving_SparkNLP_with_Synapse.ipynb | iamvarol/spark-nlp-workshop |
Checking Results | for i in range (0, len(response_list[0].json())):
print(response_list[0].json()[i]['result']) | O2441
O2411
E11
K8520
B15
E669
Z6841
R35
R631
R630
R111
J988
E11
G600
K130
R52
M6283
R4689
O046
E785
E872
E639
H5330
R799
R829
E785
A832
G600
J988
| Apache-2.0 | tutorials/RestAPI/Serving_SparkNLP_with_Synapse.ipynb | iamvarol/spark-nlp-workshop |
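Once the benchmark is finished, the streaming query started above can be shut down; a minimal cleanup sketch:
server.stop()    # stop the Structured Streaming query that is serving the REST endpoint
# spark.stop()   # optionally shut the whole Spark session down as well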
5.1 - Introduction to convnets. This notebook contains the code sample found in Chapter 5, Section 1 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments. ---- First, let's take a practical look at a very simple convnet example. We will use our convnet to classify MNIST digits, a task that you've already been through in Chapter 2, using a densely-connected network (our test accuracy then was 97.8%). Even though our convnet will be very basic, its accuracy will still blow out of the water that of the densely-connected model from Chapter 2. The 6 lines of code below show you what a basic convnet looks like. It's a stack of `Conv2D` and `MaxPooling2D` layers. We'll see in a minute what they do concretely. Importantly, a convnet takes as input tensors of shape `(image_height, image_width, image_channels)` (not including the batch dimension). In our case, we will configure our convnet to process inputs of size `(28, 28, 1)`, which is the format of MNIST images. We do this via passing the argument `input_shape=(28, 28, 1)` to our first layer. | from keras import layers
from keras import models
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu')) | _____no_output_____ | MIT | .ipynb_checkpoints/5.1-introduction-to-convnets-checkpoint.ipynb | zhangdongwl/deep-learning-with-python-notebooks |
Let's display the architecture of our convnet so far: | model.summary() | Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_1 (Conv2D) (None, 26, 26, 32) 320
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 13, 13, 32) 0
_________________________________________________________________
conv2d_2 (Conv2D) (None, 11, 11, 64) 18496
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 5, 5, 64) 0
_________________________________________________________________
conv2d_3 (Conv2D) (None, 3, 3, 64) 36928
=================================================================
Total params: 55,744
Trainable params: 55,744
Non-trainable params: 0
_________________________________________________________________
| MIT | .ipynb_checkpoints/5.1-introduction-to-convnets-checkpoint.ipynb | zhangdongwl/deep-learning-with-python-notebooks |
You can see above that the output of every `Conv2D` and `MaxPooling2D` layer is a 3D tensor of shape `(height, width, channels)`. The width and height dimensions tend to shrink as we go deeper in the network. The number of channels is controlled by the first argument passed to the `Conv2D` layers (e.g. 32 or 64). The next step would be to feed our last output tensor (of shape `(3, 3, 64)`) into a densely-connected classifier network like those you are already familiar with: a stack of `Dense` layers. These classifiers process vectors, which are 1D, whereas our current output is a 3D tensor. So first, we will have to flatten our 3D outputs to 1D, and then add a few `Dense` layers on top: | model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10, activation='softmax')) | _____no_output_____ | MIT | .ipynb_checkpoints/5.1-introduction-to-convnets-checkpoint.ipynb | zhangdongwl/deep-learning-with-python-notebooks |
We are going to do 10-way classification, so we use a final layer with 10 outputs and a softmax activation. Now here's what our network looks like: | model.summary() | Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_1 (Conv2D) (None, 26, 26, 32) 320
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 13, 13, 32) 0
_________________________________________________________________
conv2d_2 (Conv2D) (None, 11, 11, 64) 18496
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 5, 5, 64) 0
_________________________________________________________________
conv2d_3 (Conv2D) (None, 3, 3, 64) 36928
_________________________________________________________________
flatten_1 (Flatten) (None, 576) 0
_________________________________________________________________
dense_1 (Dense) (None, 64) 36928
_________________________________________________________________
dense_2 (Dense) (None, 10) 650
=================================================================
Total params: 93,322
Trainable params: 93,322
Non-trainable params: 0
_________________________________________________________________
| MIT | .ipynb_checkpoints/5.1-introduction-to-convnets-checkpoint.ipynb | zhangdongwl/deep-learning-with-python-notebooks |
As you can see, our `(3, 3, 64)` outputs were flattened into vectors of shape `(576,)`, before going through two `Dense` layers. Now, let's train our convnet on the MNIST digits. We will reuse a lot of the code we have already covered in the MNIST example from Chapter 2. | from keras.datasets import mnist
from keras.utils import to_categorical
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = train_images.reshape((60000, 28, 28, 1))
train_images = train_images.astype('float32') / 255
test_images = test_images.reshape((10000, 28, 28, 1))
test_images = test_images.astype('float32') / 255
train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.fit(train_images, train_labels, epochs=5, batch_size=64) | Epoch 1/5
60000/60000 [==============================] - 10s 173us/step - loss: 0.1661 - accuracy: 0.9479
Epoch 2/5
60000/60000 [==============================] - 10s 165us/step - loss: 0.0454 - accuracy: 0.9857
Epoch 3/5
60000/60000 [==============================] - 10s 164us/step - loss: 0.0314 - accuracy: 0.9900
Epoch 4/5
60000/60000 [==============================] - 10s 161us/step - loss: 0.0237 - accuracy: 0.9927
Epoch 5/5
60000/60000 [==============================] - 10s 163us/step - loss: 0.0189 - accuracy: 0.9940
| MIT | .ipynb_checkpoints/5.1-introduction-to-convnets-checkpoint.ipynb | zhangdongwl/deep-learning-with-python-notebooks |
Let's evaluate the model on the test data: | test_loss, test_acc = model.evaluate(test_images, test_labels)
test_acc | _____no_output_____ | MIT | .ipynb_checkpoints/5.1-introduction-to-convnets-checkpoint.ipynb | zhangdongwl/deep-learning-with-python-notebooks |
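To sanity-check a few individual predictions (a small sketch, not part of the original chapter code):
import numpy as np

probs = model.predict(test_images[:5])        # class probabilities for 5 test digits
print(np.argmax(probs, axis=1))               # predicted labels
print(np.argmax(test_labels[:5], axis=1))     # true labels, decoded from one-hot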
A short video on how bagging works https://www.youtube.com/watch?v=2Mg8QD0F1dQ | def bootstrap(X,Y, n=None):
#Bootstrap function
if n == None:
n = len(X)
resample_i = np.floor(np.random.rand(n)*len(X)).astype(int)
X_resample = X[resample_i]
Y_resample = Y[resample_i]
return X_resample, Y_resample
def bagging(n_sample,n_bag):
    # Perform the bagging procedure: each model is fitted on its own bootstrap resample of the training set
    bagModels = {}
    for i in range(n_bag):
        X_resample, Y_resample = bootstrap(X_train,Y_train, n_sample)
        print("Model fitting on the {}th bootstrapped set".format(i+1))
        model = model_fit(X_resample,Y_resample)
        name = "model%s" % (i+1)
        bagModels[name] = model
    return bagModels
def model_fit(X_train,Y_train):
filters = 32 #filter = 1 x KERNEL
inpurt_shape = (X_train.shape[1:])
# create the model
model = Sequential()
model.add(Convolution2D(16, kernel_size=3, activation='elu', padding='same',
input_shape=inpurt_shape))
model.add(MaxPooling2D(pool_size=5))
model.add(Convolution2D(filters=filters, kernel_size=3, padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=5))
model.add(Flatten())
model.add(Dense(250, activation='relu'))
model.add(Dense(250, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(3, activation='linear')) #change from logistic
model.compile(loss='mse', optimizer='adam', metrics=['accuracy','mse'])
# Fit the model
model.fit(X_train,
Y_train,
epochs=20,
batch_size=128,
verbose=1)
return model
def predict(bagModels):
# Model prediction for each bagged model before averaging
prediction = {}
for i in bagModels:
prediction[i] = bagModels[i].predict(X_test)
return prediction
def conversion(prediction):
# Convert confidence values into prediction
pred_list=[]
for i in range(len(prediction)):
index = np.argmax(prediction[i])
if index == 0:
pred = 'Ambiguous'
elif index == 1:
pred = 'No'
else:
pred = 'Yes'
pred_list.append(pred)
return pred_list
def baggedAccuracy(prediction,Y_test):
#Bagged accuracy calculation based on average confidences
sum_pred = 0
for i in prediction:
sum_pred += prediction[i]
    bagged_prediction = sum_pred/len(prediction)  # average the confidences over the actual number of bagged models
bagged_list = conversion(bagged_prediction)
Ytest_list = conversion(Y_test)
correct_pred = sum(1 for i in range(len(bagged_list)) if bagged_list[i] == Ytest_list[i])
baggedAccuracy = correct_pred/len(bagged_list) * 100
return baggedAccuracy | _____no_output_____ | MIT | examples/CNN Bagging.ipynb | sarahalamdari/DIRECT_capstone |
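The next cell assumes X_train, Y_train, X_test and Y_test already exist; a minimal sketch of that split (the 75/25 ratio and the X, Y arrays are assumptions, the real split lives in the CNN notebook):
from sklearn.model_selection import train_test_split

# hold out a quarter of the labelled images for testing
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.25, random_state=42)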
First, we need to perform the splitting procedure as we did in the CNN notebook to get the train and test sets. Now let's perform bagging with an ensemble of 50 models, each fitted on 3800 bootstrapped samples from X_train. | bagModel = bagging(3800,50) | Model fitting on the 1th bootstrapped set
(28, 300, 1)
Epoch 1/20
3800/3800 [==============================] - 201s 53ms/step - loss: 0.1491 - acc: 0.4853 - mean_squared_error: 0.1491
Epoch 2/20
3800/3800 [==============================] - 7s 2ms/step - loss: 0.1416 - acc: 0.5047 - mean_squared_error: 0.1416
Epoch 3/20
3800/3800 [==============================] - 7s 2ms/step - loss: 0.1409 - acc: 0.5047 - mean_squared_error: 0.1409
Epoch 4/20
3800/3800 [==============================] - 7s 2ms/step - loss: 0.1396 - acc: 0.5053 - mean_squared_error: 0.1396
Epoch 5/20
3800/3800 [==============================] - 7s 2ms/step - loss: 0.1378 - acc: 0.5074 - mean_squared_error: 0.1378
Epoch 6/20
3800/3800 [==============================] - 7s 2ms/step - loss: 0.1356 - acc: 0.5129 - mean_squared_error: 0.1356
Epoch 7/20
3800/3800 [==============================] - 7s 2ms/step - loss: 0.1311 - acc: 0.5321 - mean_squared_error: 0.1311
Epoch 8/20
3800/3800 [==============================] - 7s 2ms/step - loss: 0.1246 - acc: 0.5576 - mean_squared_error: 0.1246
Epoch 9/20
3800/3800 [==============================] - 7s 2ms/step - loss: 0.1193 - acc: 0.5858 - mean_squared_error: 0.1193
Epoch 10/20
3800/3800 [==============================] - 7s 2ms/step - loss: 0.1119 - acc: 0.6232 - mean_squared_error: 0.1119
Epoch 11/20
3800/3800 [==============================] - 7s 2ms/step - loss: 0.1054 - acc: 0.6542 - mean_squared_error: 0.1054
Epoch 12/20
3800/3800 [==============================] - 7s 2ms/step - loss: 0.0986 - acc: 0.6847 - mean_squared_error: 0.0986
Epoch 13/20
3800/3800 [==============================] - 7s 2ms/step - loss: 0.0933 - acc: 0.7100 - mean_squared_error: 0.0933
Epoch 14/20
3800/3800 [==============================] - 7s 2ms/step - loss: 0.0880 - acc: 0.7334 - mean_squared_error: 0.0880
Epoch 15/20
3800/3800 [==============================] - 7s 2ms/step - loss: 0.0823 - acc: 0.7487 - mean_squared_error: 0.0823
Epoch 16/20
3800/3800 [==============================] - 7s 2ms/step - loss: 0.0751 - acc: 0.7805 - mean_squared_error: 0.0751
Epoch 17/20
3800/3800 [==============================] - 7s 2ms/step - loss: 0.0728 - acc: 0.7884 - mean_squared_error: 0.0728
Epoch 18/20
3800/3800 [==============================] - 7s 2ms/step - loss: 0.0651 - acc: 0.8226 - mean_squared_error: 0.0651
Epoch 19/20
3800/3800 [==============================] - 7s 2ms/step - loss: 0.0620 - acc: 0.8303 - mean_squared_error: 0.0620
Epoch 20/20
3800/3800 [==============================] - 7s 2ms/step - loss: 0.0601 - acc: 0.8324 - mean_squared_error: 0.0601
Model fitting on the 2th bootstrapped set
(28, 300, 1)
Epoch 1/20
3800/3800 [==============================] - 25s 7ms/step - loss: 0.1517 - acc: 0.4818 - mean_squared_error: 0.1517
Epoch 2/20
3800/3800 [==============================] - 7s 2ms/step - loss: 0.1423 - acc: 0.5061 - mean_squared_error: 0.1423
Epoch 3/20
3800/3800 [==============================] - 7s 2ms/step - loss: 0.1407 - acc: 0.5066 - mean_squared_error: 0.1407
Epoch 4/20
3800/3800 [==============================] - 7s 2ms/step - loss: 0.1392 - acc: 0.5055 - mean_squared_error: 0.1392
Epoch 5/20
3800/3800 [==============================] - 7s 2ms/step - loss: 0.1372 - acc: 0.5055 - mean_squared_error: 0.1372
Epoch 6/20
3800/3800 [==============================] - 7s 2ms/step - loss: 0.1339 - acc: 0.5268 - mean_squared_error: 0.1339
Epoch 7/20
3800/3800 [==============================] - 7s 2ms/step - loss: 0.1292 - acc: 0.5432 - mean_squared_error: 0.1292
Epoch 8/20
3800/3800 [==============================] - 7s 2ms/step - loss: 0.1193 - acc: 0.5868 - mean_squared_error: 0.1193
Epoch 9/20
3800/3800 [==============================] - 7s 2ms/step - loss: 0.1117 - acc: 0.6300 - mean_squared_error: 0.1117
Epoch 10/20
3800/3800 [==============================] - 7s 2ms/step - loss: 0.1064 - acc: 0.6466 - mean_squared_error: 0.1064
Epoch 11/20
3800/3800 [==============================] - 7s 2ms/step - loss: 0.0987 - acc: 0.6879 - mean_squared_error: 0.0987
Epoch 12/20
3800/3800 [==============================] - 7s 2ms/step - loss: 0.0918 - acc: 0.7079 - mean_squared_error: 0.0918
Epoch 13/20
3800/3800 [==============================] - 7s 2ms/step - loss: 0.0828 - acc: 0.7437 - mean_squared_error: 0.0828
Epoch 14/20
3800/3800 [==============================] - 7s 2ms/step - loss: 0.0808 - acc: 0.7574 - mean_squared_error: 0.0808
Epoch 15/20
3800/3800 [==============================] - 7s 2ms/step - loss: 0.0724 - acc: 0.7929 - mean_squared_error: 0.0724
Epoch 16/20
3800/3800 [==============================] - 7s 2ms/step - loss: 0.0687 - acc: 0.8105 - mean_squared_error: 0.0687
Epoch 17/20
3800/3800 [==============================] - 7s 2ms/step - loss: 0.0638 - acc: 0.8203 - mean_squared_error: 0.0638
Epoch 18/20
3800/3800 [==============================] - 7s 2ms/step - loss: 0.0607 - acc: 0.8408 - mean_squared_error: 0.0607
Epoch 19/20
3800/3800 [==============================] - 7s 2ms/step - loss: 0.0546 - acc: 0.8571 - mean_squared_error: 0.0546
Epoch 20/20
3800/3800 [==============================] - 7s 2ms/step - loss: 0.0513 - acc: 0.8700 - mean_squared_error: 0.0513
Model fitting on the 3th bootstrapped set
(28, 300, 1)
Epoch 1/20
3800/3800 [==============================] - 13s 3ms/step - loss: 0.1494 - acc: 0.4926 - mean_squared_error: 0.1494
Epoch 2/20
3800/3800 [==============================] - 7s 2ms/step - loss: 0.1415 - acc: 0.5016 - mean_squared_error: 0.1415
Epoch 3/20
3800/3800 [==============================] - 7s 2ms/step - loss: 0.1410 - acc: 0.5053 - mean_squared_error: 0.1410
Epoch 4/20
3800/3800 [==============================] - 7s 2ms/step - loss: 0.1398 - acc: 0.5053 - mean_squared_error: 0.1398
Epoch 5/20
3800/3800 [==============================] - 7s 2ms/step - loss: 0.1370 - acc: 0.5087 - mean_squared_error: 0.1370
Epoch 6/20
3800/3800 [==============================] - 7s 2ms/step - loss: 0.1329 - acc: 0.5200 - mean_squared_error: 0.1329
Epoch 7/20
3800/3800 [==============================] - 7s 2ms/step - loss: 0.1268 - acc: 0.5476 - mean_squared_error: 0.1268
Epoch 8/20
3800/3800 [==============================] - 7s 2ms/step - loss: 0.1181 - acc: 0.5987 - mean_squared_error: 0.1181
Epoch 9/20
3800/3800 [==============================] - 7s 2ms/step - loss: 0.1091 - acc: 0.6458 - mean_squared_error: 0.1091
Epoch 10/20
3800/3800 [==============================] - 7s 2ms/step - loss: 0.1000 - acc: 0.6874 - mean_squared_error: 0.1000
Epoch 11/20
3800/3800 [==============================] - 7s 2ms/step - loss: 0.0919 - acc: 0.7197 - mean_squared_error: 0.0919
Epoch 12/20
3800/3800 [==============================] - 7s 2ms/step - loss: 0.0849 - acc: 0.7408 - mean_squared_error: 0.0849
Epoch 13/20
3800/3800 [==============================] - 7s 2ms/step - loss: 0.0792 - acc: 0.7687 - mean_squared_error: 0.0792
Epoch 14/20
3800/3800 [==============================] - 7s 2ms/step - loss: 0.0742 - acc: 0.7842 - mean_squared_error: 0.0742
Epoch 15/20
3800/3800 [==============================] - 7s 2ms/step - loss: 0.0671 - acc: 0.8084 - mean_squared_error: 0.0671
Epoch 16/20
3800/3800 [==============================] - 7s 2ms/step - loss: 0.0623 - acc: 0.8200 - mean_squared_error: 0.0623
Epoch 17/20
3800/3800 [==============================] - 7s 2ms/step - loss: 0.0566 - acc: 0.8471 - mean_squared_error: 0.0566
Epoch 18/20
3800/3800 [==============================] - 7s 2ms/step - loss: 0.0554 - acc: 0.8553 - mean_squared_error: 0.0554
Epoch 19/20
3800/3800 [==============================] - 7s 2ms/step - loss: 0.0494 - acc: 0.8729 - mean_squared_error: 0.0494
Epoch 20/20
3800/3800 [==============================] - 7s 2ms/step - loss: 0.0464 - acc: 0.8808 - mean_squared_error: 0.0464
Model fitting on the 4th bootstrapped set
(28, 300, 1)
Epoch 1/20
3800/3800 [==============================] - 14s 4ms/step - loss: 0.1487 - acc: 0.4908 - mean_squared_error: 0.1487
Epoch 2/20
3800/3800 [==============================] - 7s 2ms/step - loss: 0.1418 - acc: 0.5050 - mean_squared_error: 0.1418
Epoch 3/20
3800/3800 [==============================] - 7s 2ms/step - loss: 0.1407 - acc: 0.5058 - mean_squared_error: 0.1407
| MIT | examples/CNN Bagging.ipynb | sarahalamdari/DIRECT_capstone |
Bagging different numbers of models in an ensemble to test how the accuracy changes | bagModel2 = bagging(3800,10)
bagged_predict.keys()
bagged_predict = predict(bagModel)
Accuracy= baggedAccuracy(bagged_predict,Y_test)
print("Bagged Accuracy(50 models): %.2f%% "%Accuracy) | Bagged Accuracy(50 models): 62.48%
| MIT | examples/CNN Bagging.ipynb | sarahalamdari/DIRECT_capstone |
Improved accuracy! Variance reduction helps! | bagModel2.keys()
bag_pred2 = predict(bagModel2)
Accuracy2= baggedAccuracy(bag_pred2,Y_test)
print("Bagged Accuracy(10 models): %.2f%% "%Accuracy2) | Bagged Accuracy(10 models): 62.29%
| MIT | examples/CNN Bagging.ipynb | sarahalamdari/DIRECT_capstone |
The results for the 10-model and 50-model ensembles are only slightly different | n_sample = int(len(X_train)*0.6)
n_sample
model_num_list = [10,20,30,40,50]
def accuracy_bag(n_sample,model_num_list):
model_bags = []
accuracy_bags = []
for i in model_num_list:
print('Bagging {} models'.format(i))
bagmodel = bagging(n_sample,i)
bag_pred = predict(bagmodel)
Accuracy = baggedAccuracy(bag_pred,Y_test)
accuracy_bags.append(Accuracy)
return accuracy_bags | _____no_output_____ | MIT | examples/CNN Bagging.ipynb | sarahalamdari/DIRECT_capstone |
I tried to get the accuracy of ensembles from 1 to 50 models, but my machine broke down overnight; I guess this is where GCP becomes handy. Next up: perform bagging for the LSTM model. You can see from the accuracy plot that 3 out of 5 of the bagging accuracies are better than the single-model accuracy without bagging (single-model accuracy 60.97% in this case). Another point: as you randomly select observations for your training set and change your bag size (the previous bagging accuracy was obtained with another randomly selected training set and a bag size of 3800), the resulting accuracy can differ by several percentage points. I think bagging helps our model accuracy, but not in a tremendous way; the results suggest that the variance of our model was not significant compared to the bias. Tune the hyperparameters! | accuracybags = accuracy_bag(n_sample,model_num_list)
accuracybags
accuracybags_array = np.asarray(accuracybags)
from matplotlib import pyplot as plt
plt.figure(figsize=(10,10),dpi=80)
plt.scatter(model_num_list,accuracybags)
plt.xlabel('Ensemble model quantity',fontsize=20)
plt.ylabel('Bagging accuracy',fontsize=16) | _____no_output_____ | MIT | examples/CNN Bagging.ipynb | sarahalamdari/DIRECT_capstone |
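To see how much of any gain comes from variance reduction, the spread of the individual models' accuracies can be compared with the bagged accuracy; a sketch using the 50-model ensemble fitted above:
individual_acc = []
Ytest_list = conversion(Y_test)
for name, m in bagModel.items():
    single_pred = conversion(m.predict(X_test))   # each model's own class predictions
    acc = sum(1 for a, b in zip(single_pred, Ytest_list) if a == b) / len(Ytest_list) * 100
    individual_acc.append(acc)
print("mean single-model accuracy: %.2f%%" % np.mean(individual_acc))
print("std of single-model accuracy: %.2f" % np.std(individual_acc))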
The accuracy doesn't show an ascending trend as the ensemble contains more models, which is weird. I then ran the bagging with 3800 samples out of the 4522 observations in the train set, instead of 2639 samples, with the same train/test split. | n_sample = 3800
accuracybag2 = accuracy_bag(n_sample,[20,30])
accuracybag3 = accuracy_bag(n_sample,[40,50])
accuracybag4 = accuracy_bag(n_sample,[10])
accuracybag2
accuracybag3 | _____no_output_____ | MIT | examples/CNN Bagging.ipynb | sarahalamdari/DIRECT_capstone |
I did the bagging separately because I was afraid of my machine breaking down.. | accuracy_3800sample = accuracybag4 +accuracybag2 + accuracybag3
accuracy_2640sample = accuracybags
accuracy_3800sample | _____no_output_____ | MIT | examples/CNN Bagging.ipynb | sarahalamdari/DIRECT_capstone |
Combine the accuracy results for the 2640-sample and 3800-sample runs with (10, 20, 30, 40, 50)-model bagging | plt.figure(figsize=(10,10),dpi=80)
scatter_3800sample = plt.scatter([10,20,30,40,50],accuracy_3800sample,color = 'Blue')
scatter_2640sample = plt.scatter([10,20,30,40,50],accuracy_2640sample,color = 'Green')
plt.xlabel('Bagging Ensemble Model Quantity',fontsize=20)
plt.ylabel('Bagging Accuracy(%)',fontsize=16)
CNN_accuracy = plt.axhline(61, color="red",lw =3)
plt.legend((scatter_3800sample,scatter_2640sample,CNN_accuracy), ('3800 bagging sample', '2640 bagging sample','CNN accuracy without bagging'),loc = 'upper left',fontsize = 12) | _____no_output_____ | MIT | examples/CNN Bagging.ipynb | sarahalamdari/DIRECT_capstone |
Please correct me if there is any problem in the code!!! The following cells check whether taking the most common prediction among the models (voting) in the ensemble gives better results than averaging the confidence values before making predictions. | from collections import Counter
bagModel.keys()
lists = []
for i in range(30):
model_number = "model%s" % (i+1)
pred_list = conversion(prediction[model_number])
lists.append(pred_list)
Ytest_list=conversion(Y_test)
pred_list = []
for i in range(1522):
    new_list = []  # the 30 per-model predictions (votes) for test sample i
    for j in range(30):
        new_list.append(lists[j][i])
    pred = Counter(new_list).most_common(1)[0][0]  # majority vote across the models
    pred_list.append(pred)
sum(1 for i in range(len(pred_list)) if pred_list[i] == Ytest_list[i]) | _____no_output_____ | MIT | examples/CNN Bagging.ipynb | sarahalamdari/DIRECT_capstone |
The result is worse! | from keras.layers import Dropout, Convolution2D, MaxPooling2D
top_words = 1000
max_words = 150
filters = 32 #filter = 1 x KERNEL
inpurt_shape = (X_train.shape[1:])
print(inpurt_shape)
# create the model
model = Sequential()
model.add(Convolution2D(16, kernel_size=3, activation='elu', padding='same',
input_shape=inpurt_shape))
model.add(MaxPooling2D(pool_size=5))
model.add(Convolution2D(filters=filters, kernel_size=3, padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=5))
model.add(Flatten())
model.add(Dense(250, activation='relu'))
model.add(Dense(250, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(3, activation='linear')) # linear output layer (changed from a logistic activation)
model.compile(loss='mse', optimizer='adam', metrics=['accuracy','mse'])
print(model.summary())
# Fit the model
model.fit(X_train,
Y_train,
validation_data=(X_test, Y_test),
epochs=20,
batch_size=128,
verbose=1)
# Final evaluation of the model
scores = model.evaluate(X_test, Y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100)) | (28, 300, 1)
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_305 (Conv2D) (None, 28, 300, 16) 160
_________________________________________________________________
max_pooling2d_305 (MaxPoolin (None, 5, 60, 16) 0
_________________________________________________________________
conv2d_306 (Conv2D) (None, 5, 60, 32) 4640
_________________________________________________________________
max_pooling2d_306 (MaxPoolin (None, 1, 12, 32) 0
_________________________________________________________________
flatten_153 (Flatten) (None, 384) 0
_________________________________________________________________
dense_457 (Dense) (None, 250) 96250
_________________________________________________________________
dense_458 (Dense) (None, 250) 62750
_________________________________________________________________
dropout_153 (Dropout) (None, 250) 0
_________________________________________________________________
dense_459 (Dense) (None, 3) 753
=================================================================
Total params: 164,553
Trainable params: 164,553
Non-trainable params: 0
_________________________________________________________________
None
Train on 4565 samples, validate on 1522 samples
Epoch 1/20
4565/4565 [==============================] - 92s 20ms/step - loss: 0.1480 - acc: 0.5001 - mean_squared_error: 0.1480 - val_loss: 0.1352 - val_acc: 0.5237 - val_mean_squared_error: 0.1352
Epoch 2/20
4565/4565 [==============================] - 10s 2ms/step - loss: 0.1409 - acc: 0.5062 - mean_squared_error: 0.1409 - val_loss: 0.1357 - val_acc: 0.5237 - val_mean_squared_error: 0.1357
Epoch 3/20
4565/4565 [==============================] - 10s 2ms/step - loss: 0.1394 - acc: 0.5071 - mean_squared_error: 0.1394 - val_loss: 0.1348 - val_acc: 0.5250 - val_mean_squared_error: 0.1348
Epoch 4/20
4565/4565 [==============================] - 9s 2ms/step - loss: 0.1395 - acc: 0.5071 - mean_squared_error: 0.1395 - val_loss: 0.1342 - val_acc: 0.5250 - val_mean_squared_error: 0.1342
Epoch 5/20
4565/4565 [==============================] - 9s 2ms/step - loss: 0.1381 - acc: 0.5082 - mean_squared_error: 0.1381 - val_loss: 0.1330 - val_acc: 0.5250 - val_mean_squared_error: 0.1330
Epoch 6/20
4565/4565 [==============================] - 9s 2ms/step - loss: 0.1350 - acc: 0.5146 - mean_squared_error: 0.1350 - val_loss: 0.1298 - val_acc: 0.5388 - val_mean_squared_error: 0.1298
Epoch 7/20
4565/4565 [==============================] - 10s 2ms/step - loss: 0.1326 - acc: 0.5087 - mean_squared_error: 0.1326 - val_loss: 0.1274 - val_acc: 0.5618 - val_mean_squared_error: 0.1274
Epoch 8/20
4565/4565 [==============================] - 9s 2ms/step - loss: 0.1260 - acc: 0.5566 - mean_squared_error: 0.1260 - val_loss: 0.1211 - val_acc: 0.5802 - val_mean_squared_error: 0.1211
Epoch 9/20
4565/4565 [==============================] - 10s 2ms/step - loss: 0.1177 - acc: 0.5917 - mean_squared_error: 0.1177 - val_loss: 0.1172 - val_acc: 0.5926 - val_mean_squared_error: 0.1172
Epoch 10/20
4565/4565 [==============================] - 10s 2ms/step - loss: 0.1138 - acc: 0.6136 - mean_squared_error: 0.1138 - val_loss: 0.1158 - val_acc: 0.5900 - val_mean_squared_error: 0.1158
Epoch 11/20
4565/4565 [==============================] - 10s 2ms/step - loss: 0.1109 - acc: 0.6248 - mean_squared_error: 0.1109 - val_loss: 0.1147 - val_acc: 0.5966 - val_mean_squared_error: 0.1147
Epoch 12/20
4565/4565 [==============================] - 10s 2ms/step - loss: 0.1053 - acc: 0.6442 - mean_squared_error: 0.1053 - val_loss: 0.1150 - val_acc: 0.5946 - val_mean_squared_error: 0.1150
Epoch 13/20
4565/4565 [==============================] - 10s 2ms/step - loss: 0.1013 - acc: 0.6589 - mean_squared_error: 0.1013 - val_loss: 0.1126 - val_acc: 0.5959 - val_mean_squared_error: 0.1126
Epoch 14/20
4565/4565 [==============================] - 9s 2ms/step - loss: 0.0986 - acc: 0.6627 - mean_squared_error: 0.0986 - val_loss: 0.1142 - val_acc: 0.5894 - val_mean_squared_error: 0.1142
Epoch 15/20
4565/4565 [==============================] - 10s 2ms/step - loss: 0.0930 - acc: 0.6911 - mean_squared_error: 0.0930 - val_loss: 0.1139 - val_acc: 0.6012 - val_mean_squared_error: 0.1139
Epoch 16/20
4565/4565 [==============================] - 9s 2ms/step - loss: 0.0923 - acc: 0.6977 - mean_squared_error: 0.0923 - val_loss: 0.1188 - val_acc: 0.5966 - val_mean_squared_error: 0.1188
Epoch 17/20
4565/4565 [==============================] - 9s 2ms/step - loss: 0.0878 - acc: 0.7076 - mean_squared_error: 0.0878 - val_loss: 0.1143 - val_acc: 0.5953 - val_mean_squared_error: 0.1143
Epoch 18/20
4565/4565 [==============================] - 9s 2ms/step - loss: 0.0848 - acc: 0.7249 - mean_squared_error: 0.0848 - val_loss: 0.1166 - val_acc: 0.5953 - val_mean_squared_error: 0.1166
Epoch 19/20
4565/4565 [==============================] - 9s 2ms/step - loss: 0.0817 - acc: 0.7352 - mean_squared_error: 0.0817 - val_loss: 0.1146 - val_acc: 0.5999 - val_mean_squared_error: 0.1146
Epoch 20/20
4565/4565 [==============================] - 9s 2ms/step - loss: 0.0794 - acc: 0.7518 - mean_squared_error: 0.0794 - val_loss: 0.1143 - val_acc: 0.6097 - val_mean_squared_error: 0.1143
Accuracy: 60.97%
| MIT | examples/CNN Bagging.ipynb | sarahalamdari/DIRECT_capstone |
A basic machine learning problem: image classification

```{admonition} Can a machine (function) tell the difference?
Mathematically, a gray-scale image can be taken as a matrix in $R^{n_0\times n_0}$. The next figure (not available here) contrasts human vision with the computer representation. An image is just a big grid of numbers between [0,255], e.g. $800 \times 600 \times 3$ (3 RGB channels). Furthermore, a color image can be taken as a 3D tensor (a matrix with 3 channels, RGB) in $R^{n_0\times n_0 \times 3}$. Then, let us think about the general supervised learning case. Each image = a big vector of pixel values: $d = 1280\times 720 \times 3$ (width $\times$ height $\times$ RGB channel).
```

```{admonition} 3 different sets of points in $\mathbb{R}^d$, are they separable?
(Figures not available.)
```

```{admonition} Convert into a mathematical problem
Find $f(\cdot; \theta): \mathbb{R}^d \to \mathbb{R}^3$ such that (figure not available):
- Function interpolation
- Data fitting
```

```{admonition} How to formulate “learning”?
- Data: $\{x_j, y_j\}_{j=1}^N$
- Find $f^*$ in some function class s.t. $f^*(x_j) \approx y_j$.
- Mathematically, solve the optimization problem by parameterizing the abstract function class

  $\min_{\theta} \mathcal L(\theta)$

  where

  $\mathcal L(\theta):= {\mathbb E}_{(x,y)\sim \mathcal D}[\ell(f(x; \theta), y)] \approx L(\theta) := \frac{1}{N} \sum_{j=1}^N \ell(y_j, f(x_j; \theta))$

- Here $\ell(y_j, f(x_j; \theta))$ is a general distance between the real label $y_j$ and the predicted label $f(x_j; \theta)$. Two commonly used distances are
  - $l^2$ distance: $\ell(y_j, f(x_j; \theta)) = \|y_j - f(x_j; \theta)\|^2.$
  - KL-divergence distance: $\ell(y_j, f(x_j; \theta)) = \sum_{i=1}^k [y_j]_i \log\frac{[y_j]_i}{[f(x_j; \theta)]_i}.$
```

```{admonition} Application: image classification
TBD (figures not available)
``` | from IPython.display import HTML
HTML('<iframe id="kaltura_player" src="https://cdnapisec.kaltura.com/p/2356971/sp/235697100/embedIframeJs/uiconf_id/41416911/partner_id/2356971?iframeembed=true&playerId=kaltura_player&entry_id=1_b5pq3bnx&flashvars[streamerType]=auto&flashvars[localizationCode]=en&flashvars[leadWithHTML5]=true&flashvars[sideBarContainer.plugin]=true&flashvars[sideBarContainer.position]=left&flashvars[sideBarContainer.clickToClose]=true&flashvars[chapters.plugin]=true&flashvars[chapters.layout]=vertical&flashvars[chapters.thumbnailRotator]=false&flashvars[streamSelector.plugin]=true&flashvars[EmbedPlayer.SpinnerTarget]=videoHolder&flashvars[dualScreen.plugin]=true&flashvars[hotspots.plugin]=1&flashvars[Kaltura.addCrossoriginToIframe]=true&&wid=1_qcnp6cit" width="560" height="590" allowfullscreen webkitallowfullscreen mozAllowFullScreen allow="autoplay *; fullscreen *; encrypted-media *" sandbox="allow-forms allow-same-origin allow-scripts allow-top-navigation allow-pointer-lock allow-popups allow-modals allow-orientation-lock allow-popups-to-escape-sandbox allow-presentation allow-top-navigation-by-user-activation" frameborder="0" title="Kaltura Player"></iframe>') | /anaconda3/lib/python3.7/site-packages/IPython/core/display.py:689: UserWarning: Consider using IPython.display.IFrame instead
warnings.warn("Consider using IPython.display.IFrame instead")
| MIT | ch01/Untitled.ipynb | liuzhengqi1996/math452 |
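A tiny numeric illustration of the two distances defined above (not part of the original notes; the vectors are made up for a 3-class problem):

```
import numpy as np

y = np.array([0.0, 1.0, 0.0])                    # true label y_j, one-hot
f = np.array([0.1, 0.7, 0.2])                    # predicted probabilities f(x_j; theta)

l2 = np.sum((y - f) ** 2)                        # squared l^2 distance
eps = 1e-12                                      # guard against log(0) for zero entries
kl = np.sum(y * np.log((y + eps) / (f + eps)))   # KL divergence between y and f

print("l2 =", l2, " KL =", kl)
```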
Calculating Pagerank on Wikidata | import numpy as np
import pandas as pd
import os
%env MY=/Users/pedroszekely/data/wikidata-20200504
%env WD=/Volumes/GoogleDrive/Shared drives/KGTK/datasets/wikidata-20200504 | env: MY=/Users/pedroszekely/data/wikidata-20200504
env: WD=/Volumes/GoogleDrive/Shared drives/KGTK/datasets/wikidata-20200504
| MIT | examples/Example4 - Wikidata Pagerank.ipynb | robuso/kgtk |
We need to filter the wikidata edge file to remove all edges where `node2` is a literal. We can do this by running `ifexists` to keep edges where `node2` also appears in `node1`. This takes 2-3 hours on a laptop. | !time gzcat "$WD/wikidata_edges_20200504.tsv.gz" \
| kgtk ifexists --filter-on "$WD/wikidata_edges_20200504.tsv.gz" --input-keys node2 --filter-keys node1 \
| gzip > "$MY/wikidata-item-edges.tsv.gz"
!gzcat $MY/wikidata-item-edges.tsv.gz | wc | 460763981 3225347876 32869769062
| MIT | examples/Example4 - Wikidata Pagerank.ipynb | robuso/kgtk |
We have 460 million edges that connect items to other items. Let's make sure this is what we want before spending a lot of time computing pagerank. | !gzcat $MY/wikidata-item-edges.tsv.gz | head
Q8-P31-1 Q8 P31 Q331769 normal Q331769 item
Q8-P31-2 Q8 P31 Q60539479 normal Q60539479 item
Q8-P31-3 Q8 P31 Q9415 normal Q9415 item
Q8-P1343-1 Q8 P1343 Q20743760 normal Q20743760 item
Q8-P1343-2 Q8 P1343 Q1970746 normal Q1970746 item
Q8-P1343-3 Q8 P1343 Q19180675 normal Q19180675 item
Q8-P461-1 Q8 P461 Q169251 normal Q169251 item
Q8-P279-1 Q8 P279 Q16748867 normal Q16748867 item
Q8-P460-1 Q8 P460 Q935526 normal Q935526 item
gzcat: error writing to output: Broken pipe
gzcat: /Users/pedroszekely/data/wikidata-20200504/wikidata-item-edges.tsv.gz: uncompress failed
| MIT | examples/Example4 - Wikidata Pagerank.ipynb | robuso/kgtk |
Let's do a sanity check to make sure that we have the edges that we want. We can do this by counting how many edges there are of each `entity-type`. Good news: we only have items and properties. | !time gzcat $MY/wikidata-item-edges.tsv.gz | kgtk unique $MY/wikidata-item-edges.tsv.gz --column 'node2;entity-type' | node1 label node2
item count 460737401
property count 26579
gzcat: error writing to output: Broken pipe
gzcat: /Users/pedroszekely/data/wikidata-20200504/wikidata-item-edges.tsv.gz: uncompress failed
real 21m44.450s
user 21m29.078s
sys 0m7.958s
| MIT | examples/Example4 - Wikidata Pagerank.ipynb | robuso/kgtk |
We only need `node1`, `label` and `node2`, so let's remove the other columns. | !time gzcat $MY/wikidata-item-edges.tsv.gz | kgtk remove-columns -c 'id,rank,node2;magnitude,node2;unit,node2;date,node2;item,node2;lower,node2;upper,node2;latitude,node2;longitude,node2;precision,node2;calendar,node2;entity-type' \
| gzip > $MY/wikidata-item-edges-only.tsv.gz
!gzcat $MY/wikidata-item-edges-only.tsv.gz | head
!gunzip $MY/wikidata-item-edges-only.tsv.gz | _____no_output_____ | MIT | examples/Example4 - Wikidata Pagerank.ipynb | robuso/kgtk |
The `kgtk graph-statistics` command will compute pagerank. It will run out of memory on a laptop with 16GB of memory. | !time kgtk graph_statistics --directed --degrees --pagerank --log $MY/log.txt -i $MY/wikidata-item-edges-only.tsv > $MY/wikidata-pagerank-degrees.tsv | /bin/sh: line 1: 89795 Killed: 9 kgtk graph-statistics --directed --degrees --pagerank --log $MY/log.txt -i $MY/wikidata-item-edges-only.tsv > $MY/wikidata-pagerank-degrees.tsv
real 32m57.832s
user 19m47.624s
sys 8m58.352s
| MIT | examples/Example4 - Wikidata Pagerank.ipynb | robuso/kgtk |
We ran it on a server with 256GB of memory. It used 50GB and produced the following files: | !exa -l "$WD"/*sorted*
!gzcat "$WD/wikidata-pagerank-only-sorted.tsv.gz" | head | node1 property node2 id
Q13442814 vertex_pagerank 0.02422254325848587 Q13442814-vertex_pagerank-881612
Q1860 vertex_pagerank 0.00842243515354162 Q1860-vertex_pagerank-140
Q5 vertex_pagerank 0.0073505352600377934 Q5-vertex_pagerank-188
Q5633421 vertex_pagerank 0.005898322426631837 Q5633421-vertex_pagerank-101732
Q21502402 vertex_pagerank 0.005796874633668408 Q21502402-vertex_pagerank-4838249
Q54812269 vertex_pagerank 0.005117345954282296 Q54812269-vertex_pagerank-4838258
Q1264450 vertex_pagerank 0.004881314896960181 Q1264450-vertex_pagerank-18326
Q602358 vertex_pagerank 0.004546331287981006 Q602358-vertex_pagerank-587
Q53869507 vertex_pagerank 0.0038679964665001417 Q53869507-vertex_pagerank-3160055
gzcat: error writing to output: Broken pipe
gzcat: /Volumes/GoogleDrive/Shared drives/KGTK/datasets/wikidata-20200504/wikidata-pagerank-only-sorted.tsv.gz: uncompress failed
| MIT | examples/Example4 - Wikidata Pagerank.ipynb | robuso/kgtk |
Oh, the `graph_statistics` command is not using standard column naming; it uses `property` instead of `label`. This will be fixed; for now, let's rename the columns. | !kgtk rename-col -i "$WD/wikidata-pagerank-only-sorted.tsv.gz" --mode NONE --output-columns node1 label node2 id | gzip > $MY/wikidata-pagerank-only-sorted.tsv.gz
!gzcat $MY/wikidata-pagerank-only-sorted.tsv.gz | head | node1 label node2 id
Q13442814 vertex_pagerank 0.02422254325848587 Q13442814-vertex_pagerank-881612
Q1860 vertex_pagerank 0.00842243515354162 Q1860-vertex_pagerank-140
Q5 vertex_pagerank 0.0073505352600377934 Q5-vertex_pagerank-188
Q5633421 vertex_pagerank 0.005898322426631837 Q5633421-vertex_pagerank-101732
Q21502402 vertex_pagerank 0.005796874633668408 Q21502402-vertex_pagerank-4838249
Q54812269 vertex_pagerank 0.005117345954282296 Q54812269-vertex_pagerank-4838258
Q1264450 vertex_pagerank 0.004881314896960181 Q1264450-vertex_pagerank-18326
Q602358 vertex_pagerank 0.004546331287981006 Q602358-vertex_pagerank-587
Q53869507 vertex_pagerank 0.0038679964665001417 Q53869507-vertex_pagerank-3160055
gzcat: error writing to output: Broken pipe
gzcat: /Users/pedroszekely/data/wikidata-20200504/wikidata-pagerank-only-sorted.tsv.gz: uncompress failed
| MIT | examples/Example4 - Wikidata Pagerank.ipynb | robuso/kgtk |
Let's add the entity labels as columns so that we can read what is what. To do that, we concatenate the pagerank file with the labels file, and then ask kgtk to lift the labels into new columns. | !time kgtk cat -i "$MY/wikidata_labels.tsv" $MY/pagerank.tsv | gzip > $MY/pagerank-and-labels.tsv.gz
!time kgtk lift -i $MY/pagerank-and-labels.tsv.gz | gzip > "$WD/wikidata-pagerank-en.tsv.gz" |
real 32m37.811s
user 11m5.594s
sys 10m30.283s
| MIT | examples/Example4 - Wikidata Pagerank.ipynb | robuso/kgtk |
Now we can look at the labels. Here are the top 20 pagerank items in Wikidata: | !gzcat "$WD/wikidata-pagerank-en.tsv.gz" | head -20 | node1 label node2 id node1;label label;label node2;label
Q13442814 vertex_pagerank 0.02422254325848587 Q13442814-vertex_pagerank-881612 'scholarly article'@en
Q1860 vertex_pagerank 0.00842243515354162 Q1860-vertex_pagerank-140 'English'@en
Q5 vertex_pagerank 0.0073505352600377934 Q5-vertex_pagerank-188 'human'@en
Q5633421 vertex_pagerank 0.005898322426631837 Q5633421-vertex_pagerank-101732 'scientific journal'@en
Q21502402 vertex_pagerank 0.005796874633668408 Q21502402-vertex_pagerank-4838249 'property constraint'@en
Q54812269 vertex_pagerank 0.005117345954282296 Q54812269-vertex_pagerank-4838258 'WikibaseQualityConstraints'@en
Q1264450 vertex_pagerank 0.004881314896960181 Q1264450-vertex_pagerank-18326 'J2000.0'@en
Q602358 vertex_pagerank 0.004546331287981006 Q602358-vertex_pagerank-587 'Brockhaus and Efron Encyclopedic Dictionary'@en
Q53869507 vertex_pagerank 0.0038679964665001417 Q53869507-vertex_pagerank-3160055 'property scope constraint'@en
Q30 vertex_pagerank 0.003722615192558219 Q30-vertex_pagerank-53 'United States of America'@en
Q2657718 vertex_pagerank 0.0036754039394037105 Q2657718-vertex_pagerank-2969 'Armenian Soviet Encyclopedia'@en
Q21503250 vertex_pagerank 0.0036258228083834655 Q21503250-vertex_pagerank-1652825 'type constraint'@en
Q19902884 vertex_pagerank 0.003403993346207395 Q19902884-vertex_pagerank-4843313 'Wikidata property definition'@en
Q6581097 vertex_pagerank 0.0030890199307556172 Q6581097-vertex_pagerank-128 'male'@en
Q21510865 vertex_pagerank 0.0029815432838705648 Q21510865-vertex_pagerank-1652828 'value type constraint'@en
P2302 vertex_pagerank 0.0028243647567065384 P2302-vertex_pagerank-20767739 'property constraint'@en
Q16521 vertex_pagerank 0.0028099172909745035 Q16521-vertex_pagerank-794 'taxon'@en
Q21502838 vertex_pagerank 0.0027485333861137183 Q21502838-vertex_pagerank-1652816 'conflicts-with constraint'@en
Q19652 vertex_pagerank 0.0026895742122130316 Q19652-vertex_pagerank-3428 'public domain'@en
gzcat: error writing to output: Broken pipe
gzcat: /Volumes/GoogleDrive/Shared drives/KGTK/datasets/wikidata-20200504/wikidata-pagerank-en.tsv.gz: uncompress failed
| MIT | examples/Example4 - Wikidata Pagerank.ipynb | robuso/kgtk |
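pandas was imported at the top of the notebook but not used yet. A possible follow-up, sketched here under the assumptions that the column layout is the one shown above, that the file is already sorted by pagerank, and that `$WD` is available via `os.environ` as set earlier, is to pull the top of the labeled pagerank file into a DataFrame for further analysis:

```
import os
import pandas as pd

# read only the first rows; the full file has tens of millions of lines
top = pd.read_csv(os.path.join(os.environ["WD"], "wikidata-pagerank-en.tsv.gz"),
                  sep="\t", nrows=1000)
top[["node1", "node2", "node1;label"]].head(20)
```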