Image Segmentation

* This code is **only** the `tensorflow API` version of [TensorFlow tutorials/Image Segmentation](https://github.com/tensorflow/models/blob/master/samples/outreach/blogs/segmentation_blogpost/image_segmentation.ipynb), which is made with `tf.keras`.
* You can see the detailed description at the [tutorial link](https://github.com/tensorflow/models/blob/master/samples/outreach/blogs/segmentation_blogpost/image_segmentation.ipynb).
* I use the dataset below instead of the [carvana-image-masking-challenge dataset](https://www.kaggle.com/c/carvana-image-masking-challenge/rules) used in the TensorFlow tutorial, which is a Kaggle competition dataset.
  * carvana-image-masking-challenge dataset: too large (14GB)
* [Gastrointestinal Image ANAlysis Challenges (GIANA)](https://giana.grand-challenge.org) Dataset (345MB)
  * Train data: 300 images with RGB channels (bmp format)
  * Train labels: 300 images with 1 channel (bmp format)
  * Image size: 574 x 500
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import os
import time
import functools

import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import matplotlib as mpl
mpl.rcParams['axes.grid'] = False
mpl.rcParams['figure.figsize'] = (12, 12)

from sklearn.model_selection import train_test_split
from PIL import Image
from IPython.display import clear_output

import tensorflow as tf
slim = tf.contrib.slim

tf.logging.set_verbosity(tf.logging.INFO)

sess_config = tf.ConfigProto(gpu_options=tf.GPUOptions(allow_growth=True))
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
/home/lab4all/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:34: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`. from ._conv import register_converters as _register_converters
Source: jinhwanhan/tensorflow.tutorials, tf.version.1/06.images/01.image_segmentation.ipynb (Apache-2.0)
Get all the files

This tutorial uses a dataset from the [GIANA Dataset](https://giana.grand-challenge.org/Dates/) site.
# Unfortunately you cannot download the GIANA dataset from the website directly,
# so I uploaded a zip file to my Dropbox.
# If you want to download it from my Dropbox, uncomment below:
#!wget https://goo.gl/mxikqa
#!mv mxikqa sd_train.zip
#!unzip sd_train.zip
#!mkdir ../../datasets
#!mv sd_train ../../datasets
#!rm sd_train.zip

dataset_dir = '../../datasets/sd_train'
img_dir = os.path.join(dataset_dir, "train")
label_dir = os.path.join(dataset_dir, "train_labels")

x_train_filenames = [os.path.join(img_dir, filename) for filename in os.listdir(img_dir)]
x_train_filenames.sort()
y_train_filenames = [os.path.join(label_dir, filename) for filename in os.listdir(label_dir)]
y_train_filenames.sort()

x_train_filenames, x_test_filenames, y_train_filenames, y_test_filenames = \
    train_test_split(x_train_filenames, y_train_filenames, test_size=0.2, random_state=219)

num_train_examples = len(x_train_filenames)
num_test_examples = len(x_test_filenames)

print("Number of training examples: {}".format(num_train_examples))
print("Number of test examples: {}".format(num_test_examples))
Number of training examples: 240 Number of test examples: 60
Here's what the paths look like
x_train_filenames[:10]
y_train_filenames[:10]
y_test_filenames[:10]
Visualize

Let's take a look at some examples of the different images in our dataset.
display_num = 5
r_choices = np.random.choice(num_train_examples, display_num)

plt.figure(figsize=(10, 15))
for i in range(0, display_num * 2, 2):
  img_num = r_choices[i // 2]
  x_pathname = x_train_filenames[img_num]
  y_pathname = y_train_filenames[img_num]

  plt.subplot(display_num, 2, i + 1)
  plt.imshow(Image.open(x_pathname))
  plt.title("Original Image")

  example_labels = Image.open(y_pathname)
  label_vals = np.unique(example_labels)

  plt.subplot(display_num, 2, i + 2)
  plt.imshow(example_labels)
  plt.title("Masked Image")

plt.suptitle("Examples of Images and their Masks")
plt.show()
Set up

Let's begin by setting up some parameters. We'll standardize and resize all the images to the same shape. We'll also set up some training parameters:
# Set hyperparameters
image_size = 128
img_shape = (image_size, image_size, 3)
batch_size = 8
max_epochs = 100
print_steps = 50
save_epochs = 50
train_dir = 'train/exp1'
Build our input pipeline with `tf.data`

Since we begin with filenames, we will need to build a robust and scalable data pipeline that will play nicely with our model. If you are unfamiliar with **tf.data** you should check out my other tutorial introducing the concept! Our input pipeline will consist of the following steps:

1. Read the bytes of the file in from the filename, for both the image and the label. Recall that our labels are actually images with each pixel annotated as foreground or background (1, 0).
2. Decode the bytes into an image format.
3. Apply image transformations (optional, according to input parameters):
   * `resize` - Resize our images to a standard size (as determined by EDA or computation/memory restrictions).
     * The reason this is optional is that U-Net is a fully convolutional network (i.e., it has no fully connected units) and is thus not dependent on the input size. However, if you choose not to resize the images, you must use a batch size of 1, since you cannot batch images of variable size together.
     * Alternatively, you could also bucket your images together and resize them per mini-batch to avoid resizing images as much, as resizing may affect your performance through interpolation, etc.
   * `hue_delta` - Adjusts the hue of an RGB image by a random factor. This is only applied to the actual image (not our label image). The `hue_delta` must be in the interval `[0, 0.5]`.
   * `horizontal_flip` - Flip the image horizontally along the central axis with a 0.5 probability. This transformation must be applied to both the label and the actual image.
   * `width_shift_range` and `height_shift_range` - Ranges (as a fraction of total width or height) within which to randomly translate the image either horizontally or vertically. This transformation must be applied to both the label and the actual image.
   * `rescale` - Rescale the image by a certain factor, e.g. 1/255.
4. Shuffle the data, repeat the data (so we can iterate over it multiple times across epochs), batch the data, then prefetch a batch (for efficiency).

It is important to note that these transformations that occur in your data pipeline must be symbolic transformations.

Why do we do these image transformations?

This is known as **data augmentation**. Data augmentation "increases" the amount of training data by augmenting it via a number of random transformations. During training time, our model never sees exactly the same picture twice. This helps prevent [overfitting](https://developers.google.com/machine-learning/glossary/overfitting) and helps the model generalize better to unseen data.

Processing each pathname
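Note that `_process_pathnames` below applies only the resize and rescale steps; the augmentations listed above are not implemented in this notebook. As a rough sketch of how the paired transformations could look with TF 1.x ops (`_augment`, `hue_delta`, and `flip_prob` are illustrative names, not part of the original code):

```python
def _augment(img, label_img, hue_delta=0.1, flip_prob=0.5):
  # Hue shift is applied to the image only, never to the mask
  img = tf.image.random_hue(img, max_delta=hue_delta)
  # A horizontal flip must be applied to the image and the mask together
  flip = tf.random_uniform([]) < flip_prob
  img = tf.cond(flip, lambda: tf.image.flip_left_right(img), lambda: img)
  label_img = tf.cond(flip, lambda: tf.image.flip_left_right(label_img), lambda: label_img)
  return img, label_img
```

Such a function could be mapped over the dataset right after `_process_pathnames`, for the training split only.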
def _process_pathnames(fname, label_path):
  # We map this function onto each pathname pair
  img_str = tf.read_file(fname)
  img = tf.image.decode_bmp(img_str, channels=3)

  label_img_str = tf.read_file(label_path)
  label_img = tf.image.decode_bmp(label_img_str, channels=1)

  resize = [image_size, image_size]
  img = tf.image.resize_images(img, resize)
  label_img = tf.image.resize_images(label_img, resize)

  scale = 1 / 255.
  img = tf.to_float(img) * scale
  label_img = tf.to_float(label_img) * scale

  return img, label_img


def get_baseline_dataset(filenames, labels,
                         threads=5,
                         batch_size=batch_size,
                         max_epochs=max_epochs,
                         shuffle=True):
  num_x = len(filenames)
  # Create a dataset from the filenames and labels
  dataset = tf.data.Dataset.from_tensor_slices((filenames, labels))
  # Map our preprocessing function to every element in our dataset, taking
  # advantage of multithreading
  dataset = dataset.map(_process_pathnames, num_parallel_calls=threads)
  if shuffle:
    dataset = dataset.shuffle(num_x * 10)
  # It's necessary to repeat our data for all epochs
  dataset = dataset.repeat(max_epochs).batch(batch_size)
  return dataset
Set up train and test datasets

Note that we apply image augmentation to our training dataset but not our validation dataset.
train_ds = get_baseline_dataset(x_train_filenames, y_train_filenames)
test_ds = get_baseline_dataset(x_test_filenames, y_test_filenames, shuffle=False)
train_ds
Plot some train data
temp_ds = get_baseline_dataset(x_train_filenames, y_train_filenames,
                               batch_size=1, max_epochs=1, shuffle=False)

# Let's examine some of these augmented images
temp_iter = temp_ds.make_one_shot_iterator()
next_element = temp_iter.get_next()

with tf.Session() as sess:
  batch_of_imgs, label = sess.run(next_element)

  # Running next element in our graph will produce a batch of images
  plt.figure(figsize=(10, 10))
  img = batch_of_imgs[0]

  plt.subplot(1, 2, 1)
  plt.imshow(img)

  plt.subplot(1, 2, 2)
  plt.imshow(label[0, :, :, 0])
  plt.show()
Build the model

We'll build the U-Net model. U-Net is especially good at segmentation tasks because it can localize well to provide high-resolution segmentation masks. In addition, it works well with small datasets and is relatively robust against overfitting, since the effective amount of training data is measured in the number of patches within an image, which is much larger than the number of training images itself. Unlike the original model, we will add batch normalization to each of our blocks.

The U-Net is built with an encoder portion and a decoder portion. The encoder portion is composed of a linear stack of [`Conv`](https://developers.google.com/machine-learning/glossary/convolution), `BatchNorm`, and [`ReLU`](https://developers.google.com/machine-learning/glossary/ReLU) operations followed by a [`MaxPool`](https://developers.google.com/machine-learning/glossary/pooling). Each `MaxPool` reduces the spatial resolution of our feature map by a factor of 2. We keep track of the outputs of each block, as we feed these high-resolution feature maps to the decoder portion. The decoder portion is composed of `UpSampling2D`, `Conv`, `BatchNorm`, and `ReLU` operations. Note that we concatenate the feature map of the same size on the decoder side. Finally, we add a final `Conv` operation with a kernel size of (1, 1) that performs a convolution along the channels of each individual pixel and outputs our final segmentation mask in grayscale.
def conv_block(inputs, num_outputs, is_training, scope):
  batch_norm_params = {'decay': 0.9,
                       'epsilon': 0.001,
                       'is_training': is_training,
                       'scope': 'batch_norm'}
  with tf.variable_scope(scope) as scope:
    with slim.arg_scope([slim.conv2d],
                        num_outputs=num_outputs,
                        kernel_size=[3, 3],
                        normalizer_fn=slim.batch_norm,
                        normalizer_params=batch_norm_params):
      encoder = slim.conv2d(inputs, scope='conv1')
      encoder = slim.conv2d(encoder, scope='conv2')
      return encoder


def encoder_block(inputs, num_outputs, is_training, scope):
  with tf.variable_scope(scope) as scope:
    encoder = conv_block(inputs, num_outputs, is_training, scope)
    encoder_pool = slim.max_pool2d(encoder, kernel_size=[2, 2], scope='pool')
    return encoder_pool, encoder


def decoder_block(inputs, concat_tensor, num_outputs, is_training, scope):
  batch_norm_params = {'decay': 0.9,
                       'epsilon': 0.001,
                       'is_training': is_training,
                       'scope': 'batch_norm'}
  with tf.variable_scope(scope) as scope:
    decoder = slim.conv2d_transpose(inputs, num_outputs,
                                    kernel_size=[2, 2], stride=[2, 2],
                                    activation_fn=None, scope='convT')
    decoder = tf.concat([concat_tensor, decoder], axis=-1)
    decoder = slim.batch_norm(decoder, **batch_norm_params)
    decoder = tf.nn.relu(decoder)
    with slim.arg_scope([slim.conv2d],
                        num_outputs=num_outputs,
                        kernel_size=[3, 3],
                        stride=[1, 1],
                        normalizer_fn=slim.batch_norm,
                        normalizer_params=batch_norm_params):
      decoder = slim.conv2d(decoder, scope='conv1')
      decoder = slim.conv2d(decoder, scope='conv2')
      return decoder


class UNet(object):
  def __init__(self, train_ds, test_ds):
    self.train_ds = train_ds
    self.test_ds = test_ds

  def build_images(self):
    # output_shapes of tf.data.Iterator.from_string_handle defaults to None,
    # but it is better to set it explicitly.
    self.handle = tf.placeholder(tf.string, shape=[])
    self.iterator = tf.data.Iterator.from_string_handle(self.handle,
                                                        self.train_ds.output_types,
                                                        self.train_ds.output_shapes)
    self.input_images, self.targets = self.iterator.get_next()

  def inference(self, inputs, is_training, reuse=False):
    with tf.variable_scope('', reuse=reuse) as scope:
      # inputs: [128, 128, 3]
      encoder0_pool, encoder0 = encoder_block(inputs, 32, is_training, 'encoder0')
      # encoder0_pool: [64, 64, 32], encoder0: [128, 128, 32]
      encoder1_pool, encoder1 = encoder_block(encoder0_pool, 64, is_training, 'encoder1')
      # encoder1_pool: [32, 32, 64], encoder1: [64, 64, 64]
      encoder2_pool, encoder2 = encoder_block(encoder1_pool, 128, is_training, 'encoder2')
      # encoder2_pool: [16, 16, 128], encoder2: [32, 32, 128]
      encoder3_pool, encoder3 = encoder_block(encoder2_pool, 256, is_training, 'encoder3')
      # encoder3_pool: [8, 8, 256], encoder3: [16, 16, 256]
      center = conv_block(encoder3_pool, 512, is_training, 'center')
      # center: [8, 8, 512]
      decoder3 = decoder_block(center, encoder3, 256, is_training, 'decoder3')
      # decoder3: [16, 16, 256]
      decoder2 = decoder_block(decoder3, encoder2, 128, is_training, 'decoder2')
      # decoder2: [32, 32, 128]
      decoder1 = decoder_block(decoder2, encoder1, 64, is_training, 'decoder1')
      # decoder1: [64, 64, 64]
      decoder0 = decoder_block(decoder1, encoder0, 32, is_training, 'decoder0')
      # decoder0: [128, 128, 32]
      logits = slim.conv2d(decoder0, 1, [1, 1], activation_fn=None, scope='outputs')
      # logits: [128, 128, 1]
      return logits

  def dice_coeff(self, y_true, y_logits):
    smooth = 1.
    # Flatten
    y_true_f = tf.reshape(y_true, [-1])
    y_pred_f = tf.reshape(tf.nn.sigmoid(y_logits), [-1])
    intersection = tf.reduce_sum(y_true_f * y_pred_f)
    score = (2. * intersection + smooth) / (tf.reduce_sum(y_true_f) + tf.reduce_sum(y_pred_f) + smooth)
    return score

  def dice_loss(self, y_true, y_logits):
    loss = 1 - self.dice_coeff(y_true, y_logits)
    return loss

  def bce_dice_loss(self, y_true, y_logits):
    loss = tf.losses.sigmoid_cross_entropy(y_true, y_logits) + self.dice_loss(y_true, y_logits)
    return loss

  def build(self):
    self.global_step = tf.train.get_or_create_global_step()
    self.build_images()
    self.logits = self.inference(self.input_images, is_training=True)
    self.logits_val = self.inference(self.input_images, is_training=False, reuse=True)
    self.predicted_images = tf.nn.sigmoid(self.logits_val)
    self.loss = self.bce_dice_loss(self.targets, self.logits)
    print("complete model build.")
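For reference, the loss above combines sigmoid cross-entropy with a smoothed soft Dice loss. With $s = 1$ the smoothing term that keeps the ratio defined on empty masks, the quantity computed in `dice_coeff` is

$$\mathrm{dice}(y, \hat{y}) = \frac{2 \sum_i y_i \hat{y}_i + s}{\sum_i y_i + \sum_i \hat{y}_i + s}, \qquad \mathcal{L} = \mathcal{L}_{\mathrm{BCE}} + \left(1 - \mathrm{dice}\right)$$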
Create a model (U-Net)
model = UNet(train_ds=train_ds, test_ds=test_ds)
model.build()

# show info for trainable variables
t_vars = tf.trainable_variables()
slim.model_analyzer.analyze_vars(t_vars, print_info=True)

opt = tf.train.AdamOptimizer(learning_rate=2e-4)
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
  opt_op = opt.minimize(model.loss, global_step=model.global_step)

saver = tf.train.Saver(tf.global_variables(), max_to_keep=1000)
Train a model
%%time
sess = tf.Session(config=sess_config)
sess.run(tf.global_variables_initializer())
tf.logging.info('Start Session.')

train_iterator = train_ds.make_one_shot_iterator()
train_handle = sess.run(train_iterator.string_handle())
test_iterator = test_ds.make_one_shot_iterator()
test_handle = sess.run(test_iterator.string_handle())

# save loss values for plot
loss_history = []
pre_epochs = 0
while True:
  try:
    start_time = time.time()
    _, global_step_, loss = sess.run([opt_op, model.global_step, model.loss],
                                     feed_dict={model.handle: train_handle})
    epochs = global_step_ * batch_size / float(num_train_examples)
    duration = time.time() - start_time

    if global_step_ % print_steps == 0:
      clear_output(wait=True)
      examples_per_sec = batch_size / float(duration)
      print("Epochs: {:.2f} global_step: {} loss: {:.3f} ({:.2f} examples/sec; {:.3f} sec/batch)".format(
          epochs, global_step_, loss, examples_per_sec, duration))
      loss_history.append([epochs, loss])

      # print sample image
      img, label, predicted_label = sess.run([model.input_images, model.targets, model.predicted_images],
                                             feed_dict={model.handle: test_handle})
      plt.figure(figsize=(10, 20))
      plt.subplot(1, 3, 1)
      plt.imshow(img[0, :, :, :])
      plt.title("Input image")
      plt.subplot(1, 3, 2)
      plt.imshow(label[0, :, :, 0])
      plt.title("Actual Mask")
      plt.subplot(1, 3, 3)
      plt.imshow(predicted_label[0, :, :, 0])
      plt.title("Predicted Mask")
      plt.show()

    # save model checkpoint periodically
    if int(epochs) % save_epochs == 0 and pre_epochs != int(epochs):
      tf.logging.info('Saving model with global step {} (= {} epochs) to disk.'.format(global_step_, int(epochs)))
      saver.save(sess, train_dir + 'model.ckpt', global_step=global_step_)
      pre_epochs = int(epochs)

  except tf.errors.OutOfRangeError:
    print("End of dataset")  # ==> "End of dataset"
    tf.logging.info('Saving model with global step {} (= {} epochs) to disk.'.format(global_step_, int(epochs)))
    saver.save(sess, train_dir + 'model.ckpt', global_step=global_step_)
    break

tf.logging.info('complete training...')
Epochs: 98.33 global_step: 2950 loss: 0.090 (248.52 examples/sec; 0.032 sec/batch)
Plot the loss
loss_history = np.asarray(loss_history)
plt.plot(loss_history[:, 0], loss_history[:, 1])
plt.show()
Evaluate the test dataset and plot
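For reference, the mean IoU computed by `tf.metrics.mean_iou` below is the intersection-over-union averaged over the two classes (foreground and background):

$$\mathrm{IoU} = \frac{|A \cap B|}{|A \cup B|} = \frac{TP}{TP + FP + FN}$$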
test_ds_eval = get_baseline_dataset(x_test_filenames, y_test_filenames,
                                    batch_size=num_test_examples,
                                    max_epochs=1, shuffle=False)
test_iterator_eval = test_ds_eval.make_one_shot_iterator()
test_handle_eval = sess.run(test_iterator_eval.string_handle())

mean_iou, mean_iou_op = tf.metrics.mean_iou(labels=tf.to_int32(tf.round(model.targets)),
                                            predictions=tf.to_int32(tf.round(model.predicted_images)),
                                            num_classes=2, name='mean_iou')
sess.run(tf.local_variables_initializer())
sess.run(mean_iou_op, feed_dict={model.handle: test_handle_eval})
print("mean iou:", sess.run(mean_iou))
mean iou: 0.87998605
Visualize the test set
test_ds_visual = get_baseline_dataset(x_test_filenames, y_test_filenames,
                                      batch_size=1, max_epochs=1, shuffle=False)
test_iterator_visual = test_ds_visual.make_one_shot_iterator()
test_handle_visual = sess.run(test_iterator_visual.string_handle())

# Let's visualize some of the outputs
# Running next element in our graph will produce a batch of images
plt.figure(figsize=(10, 20))
for i in range(5):
  #img, label, predicted_label = sess.run([model.input_images, model.targets, model.predicted_images],
  img, label, predicted_label = sess.run([model.input_images,
                                          tf.to_int32(tf.round(model.targets)),
                                          tf.to_int32(tf.round(model.predicted_images))],
                                         feed_dict={model.handle: test_handle_visual})
  plt.subplot(5, 3, 3 * i + 1)
  plt.imshow(img[0, :, :, :])
  plt.title("Input image")
  plt.subplot(5, 3, 3 * i + 2)
  plt.imshow(label[0, :, :, 0])
  plt.title("Actual Mask")
  plt.subplot(5, 3, 3 * i + 3)
  plt.imshow(predicted_label[0, :, :, 0])
  plt.title("Predicted Mask")

plt.suptitle("Examples of Input Image, Label, and Prediction")
plt.show()
Green's function
==============

Fundamental solution
-------------------------------
from sympy import *
init_printing()

x1, x2, xi1, xi2 = symbols('x_1 x_2 xi_1 xi_2')
E = -1/(2*pi) * log(sqrt((x1-xi1)**2 + (x2-xi2)**2))
E
Source: Nicolucas/C-Scripts, PythonCodes/Exercises/Class-SEAS/green/.ipynb_checkpoints/green-checkpoint.ipynb (MIT)
**Task**: Check that $\nabla^2_\xi E = 0$ for $x \neq \xi$.

*Hint*: https://docs.sympy.org/latest/tutorial/calculus.html#derivatives
# The defined symbols are x1, x2, xi1, xi2 (there is no bare `x`),
# so take the Laplacian with respect to xi explicitly:
simplify(diff(E, xi1, 2) + diff(E, xi2, 2))
Directional derivative
------------------------------
n1, n2 = symbols('n_1 n_2')
**Task**: Compute the directional derivative $\frac{\partial E}{\partial n}$.

**Task** (optional): Write a function which returns the directional derivative of an expression.
def ddn(expr):
    # Directional derivative n . grad(expr), differentiating with respect to
    # x = (x_1, x_2); use xi1/xi2 instead for the derivative at the source point.
    return n1 * diff(expr, x1) + n2 * diff(expr, x2)
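A quick sanity check of the helper on the fundamental solution; differentiating with respect to $x$ is an assumption here, and analytically one expects $\frac{\partial E}{\partial n} = -\frac{1}{2\pi}\frac{n \cdot (x - \xi)}{\lVert x - \xi \rVert^2}$:

```python
simplify(ddn(E))
```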
Multidimensional Pattern Identification with SAX: In-site view

This script performs pattern identification over the {time, attribute} cuboid, which covers the intra-building frame. It serves for within-site exploration of how a given building operates across time and building-specific attributes.

The data is first normalized, then transformed using SAX over normalized daily sequences. Motifs are identified across buildings, and a final clustering phase is executed over the reduced counts of sequences. Results are presented visually, allowing interpretable analytics.
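The SAX transform itself is delegated to `utils.SAX_mining`, which is not shown here. As a reference for the idea, a minimal, self-contained sketch of SAX on a single z-normalized daily sequence (PAA averaging followed by discretization at Gaussian breakpoints) might look as follows; the names are illustrative and this is not the implementation in `utils`:

```python
import numpy as np
from scipy.stats import norm

def sax_word(series, n_pieces=4, alphabet_size=3):
    """Discretize a z-normalized 1-D series into a SAX word."""
    # Piecewise Aggregate Approximation: mean of each segment
    segments = np.array_split(np.asarray(series, dtype=float), n_pieces)
    paa = np.array([seg.mean() for seg in segments])
    # Breakpoints cutting N(0, 1) into alphabet_size equiprobable regions
    breakpoints = norm.ppf(np.linspace(0, 1, alphabet_size + 1)[1:-1])
    letters = list("abcdefghij")[:alphabet_size]
    return "".join(letters[i] for i in np.searchsorted(breakpoints, paa))

sax_word(np.random.randn(24))  # e.g. 'bacb'
```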
# Import modules
import pandas as pd
import numpy as np
import time
from sklearn.cluster import KMeans
import sklearn.metrics as metrics
from collections import Counter
from sklearn.preprocessing import StandardScaler, MinMaxScaler

# Plotting modules
from plotly.offline import init_notebook_mode
init_notebook_mode(connected=True)
import matplotlib.pyplot as plt
plt.rcdefaults()

# Importing utility script
import utils as ut

# Version
version = "v1.0"

# Path definition
path_data = "..\\data\\cube\\"
path_fig_out = "..\\figures\\insite_view\\"
Source: JulienLeprince/multidimensional-building-data-cube-pattern-identification, code/04_Mining_CuboidB.ipynb (MIT)
Read
# Read Cuboid
blg_id = "Fox_education_Melinda"
df = pd.read_csv(path_data + "cuboid_B_" + blg_id + ".csv", index_col="timestamp")

# Format index to datetime object
df.index = pd.to_datetime(df.index, format='%Y-%m-%d %H:%M:%S')
df.head()
Pre-mining: Motif identification
# SAX Parameters
day_number_of_pieces = 4
alphabet_size = 3
scaler_function = StandardScaler()

# Normalize per attribute
df_normalized = ut.scale_df_columns_NanRobust(df, df.columns, scaler=scaler_function)

# Perform SAX transformation
sax_dict, counts, sax_data = ut.SAX_mining(df_normalized, W=day_number_of_pieces, A=alphabet_size)

# Plot the sequence counts per attribute
for meter in df.columns.values:
    fig = ut.counter_plot(counts[meter], title=meter)
    fig.savefig(path_fig_out + "SAXcounts_StandardScaler_blg_" + blg_id + "_meter_" + meter + "_" + version + ".jpg",
                dpi=300, bbox_inches='tight')

# Reformatting SAX results for plotting
sax_dict_data, index_map_dictionary = dict(), dict()
for meter in sax_data:
    sax_dict_data[meter], index_map_dictionary[meter] = ut.sax_df_reformat(sax_data, sax_dict, meter)

# Plotting all SAX sequences and saving figure
fig = ut.SAX_dailyhm_visualization(sax_dict_data, sax_dict, index_map_dictionary)
ut.png_output([len(sax_dict.keys()) * 250, 800])
fig.show()
fig.write_image(path_fig_out + "SAX_blg_" + blg_id + "_" + version + ".png")

# Filter discords from established threshold
threshold = 10  # motif number threshold
indexes = dict()
for meter in df.columns.values:
    df_count = pd.DataFrame.from_dict(Counter(sax_dict[meter]), orient='index').rename(columns={0: 'count'})
    df_count.fillna(0)
    motifs = df_count[df_count.values > threshold]
    indexes[meter] = [i for i, x in enumerate(sax_dict[meter]) if x in list(motifs.index)]  # returns all indexes
Mining: Attribute motif clustering

Attribute daily profile motifs are clustered together, resulting in a reduced number of typical patterns from the previous motif identification thanks to the SAX transformation.
# Identify optimal cluster number
wcss, sil = [], []
for meter in sax_data:
    wcss_l, sil_l = ut.elbow_method(sax_data[meter].iloc[indexes[meter]].interpolate(method='linear').transpose(),
                                    n_cluster_max=20)
    wcss.append(wcss_l)
    sil.append(sil_l)

# Get similarity index quantiles (cross attributes)
arr_sil, arr_wcss = np.array(sil), np.array(wcss)
wcss_med = np.quantile(arr_wcss, .5, axis=0)
sil_med = np.quantile(arr_sil, .5, axis=0)
err_wcss = [np.quantile(arr_wcss, .25, axis=0), np.quantile(arr_wcss, .75, axis=0)]
err_sil = [np.quantile(arr_sil, .25, axis=0), np.quantile(arr_sil, .75, axis=0)]

# Plots
plt.rcParams.update({'font.size': 12})
plt.rcParams['font.sans-serif'] = ['Times New Roman']
fig = ut.similarity_index_werror_plot(wcss_med, sil_med, err_wcss, err_sil)
fig.savefig(path_fig_out + "blg_" + blg_id + "_cluster_SimilarityIndex_" + version + ".jpg",
            dpi=300, bbox_inches='tight')

## Clustering identified motifs
# Cluster the identified motifs
nb_clusters_opt = 4
kmeans = KMeans(n_clusters=nb_clusters_opt, init='k-means++', max_iter=300, n_init=10, random_state=0)
kmeans_pred_y, clust_sax_data = dict(), dict()
for meter in df.columns.values:
    clust_sax_data[meter] = sax_data[meter].iloc[indexes[meter]]
    kmeans_pred_y[meter] = kmeans.fit_predict(clust_sax_data[meter].interpolate(method='linear', limit_direction='both'))

# Reformatting cluster results for plotting
clust_dict_data, index_map_dictionary = dict(), dict()
max_shape = 0
for meter in sax_data:
    clust_dict_data[meter], index_map_dictionary[meter] = ut.sax_df_reformat(clust_sax_data, kmeans_pred_y, meter)
    max_shape = max(max_shape, max(np.shape(clust_dict_data[meter])))

# Adjusting the reformatting for variable attribute motif lengths
for meter in sax_data:
    # Defining width of empty dataframe to add
    space_btw_saxseq = max_shape - max(np.shape(clust_dict_data[meter]))
    # Creating empty frame
    empty_sax_df = pd.DataFrame(columns=sax_data[meter].columns, index=[' '] * space_btw_saxseq)
    # Adding empty frame to the df
    clust_dict_data[meter] = clust_dict_data[meter].append(empty_sax_df)

# Plotting cluster results
fig = ut.SAX_dailyhm_visualization(clust_dict_data, sax_dict, index_map_dictionary)
ut.png_output([len(clust_dict_data.keys()) * 250, 800])
fig.show()
fig.write_image(path_fig_out + "clust_blg_" + blg_id + "_" + version + ".png")
Practical Session 2: Classification algorithms

*Notebook by Ekaterina Kochmar*

0.1 Your task

In practical 1, you worked with the housing prices and bike sharing datasets on tasks that required you to predict some value (e.g., the price of a house) or amount (e.g., the count of rented bikes, or the number of registered users) based on a number of attributes: the age of the house, number of rooms, and income level of the house owners for house price prediction, or weather conditions and time of the day for predicting the number of rented bikes. That is, you were predicting some continuous value.

This time, your task is to predict the particular category an instance belongs to based on its characteristics. This type of task is called *classification*.

Assignment: Handwritten digits dataset

The dataset that you will use in this assignment is the [*digits* dataset](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html), which contains $1797$ images of $10$ hand-written digits. The digits have been preprocessed so that $32 \times 32$ bitmaps are divided into non-overlapping blocks of $4 \times 4$, and the number of on pixels is counted in each block. This generates an input matrix of $8 \times 8$ where each element is an integer in the range $[0, ..., 16]$. This reduces dimensionality and gives invariance to small distortions.

For further information on the NIST preprocessing routines applied to this data, see M. D. Garris, J. L. Blue, G. T. Candela, D. L. Dimmick, J. Geist, P. J. Grother, S. A. Janet, and C. L. Wilson, *NIST Form-Based Handprint Recognition System*, NISTIR 5469, 1994.

As before, use `sklearn`'s data uploading routines to load the dataset and get the data fields. First, a concrete illustration of the block-counting preprocessing (see the sketch below).
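The following is a stand-in demonstration on a random bitmap, not the actual NIST routine; the variable names are illustrative:

```python
import numpy as np

bitmap = (np.random.rand(32, 32) > 0.5).astype(int)    # stand-in for a 32x32 scanned digit
# Count the on pixels in each non-overlapping 4x4 block -> an 8x8 matrix in [0, 16]
features = bitmap.reshape(8, 4, 8, 4).sum(axis=(1, 3))
print(features.shape, features.min(), features.max())  # (8, 8) and values in 0..16
```

With that picture in mind, the next cell loads the dataset.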
from sklearn import datasets

digits = datasets.load_digits()
list(digits.keys())
digits

X, y = digits["data"], digits["target"]
X.shape
y.shape
Source: frantu08/Data_Science_Unit_20-21, DSPNP_practical2/DSPNP_notebook2_digits.ipynb (Apache-2.0)
You can access the digits and visualise them using the following code (feel free to select another digit):
import matplotlib
from matplotlib import pyplot as plt

some_digit = X[3]
some_digit_image = some_digit.reshape(8, 8)

plt.imshow(some_digit_image, cmap=matplotlib.cm.binary, interpolation="nearest")
plt.axis("off")
plt.show()

y[3]
For the rest of the practical, apply the data preprocessing techniques, then implement and evaluate the classification models on the digits dataset using the steps that you applied above to the iris dataset.

Step 2: Splitting the data into training and test subsets
from sklearn.model_selection import StratifiedShuffleSplit

split = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
split.get_n_splits(X, y)
print(split)

for train_index, test_index in split.split(X, y):
    print("TRAIN:", len(train_index), "TEST:", len(test_index))
    X_train, X_test = X[train_index], X[test_index]
    y_train, y_test = y[train_index], y[test_index]

print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
StratifiedShuffleSplit(n_splits=1, random_state=42, test_size=0.2, train_size=None) TRAIN: 1437 TEST: 360 (1437, 64) (1437,) (360, 64) (360,)
Check proportions
import pandas as pd

# def original_proportions(data):
#     props = {}
#     for value in set(data["target"]):
#         data_value = [i for i in data["target"] if i==value]
#         props[value] = len(data_value) / len(data["target"])
#     return props

def subset_proportions(subset):
    props = {}
    for value in set(subset):
        data_value = [i for i in subset if i == value]
        props[value] = len(data_value) / len(subset)
    return props

compare_props = pd.DataFrame({
    "Overall": subset_proportions(digits["target"]),
    "Stratified tr": subset_proportions(y_train),
    "Stratified ts": subset_proportions(y_test),
})
compare_props["Strat. tr %error"] = 100 * compare_props["Stratified tr"] / compare_props["Overall"] - 100
compare_props["Strat. ts %error"] = 100 * compare_props["Stratified ts"] / compare_props["Overall"] - 100
compare_props.sort_index()
Case 1: Binary Classification
y_train_zero = (y_train == 0)  # will return True when the label is 0 (i.e., zero)
y_test_zero = (y_test == 0)
y_test_zero

zero_example = X_test[10]
Perceptron
from sklearn.linear_model import SGDClassifier

sgd = SGDClassifier(max_iter=5, tol=None, random_state=42,
                    loss="perceptron", eta0=1, learning_rate="constant", penalty=None)
sgd.fit(X_train, y_train_zero)
sgd.predict([zero_example])
Trying it for label 1
y_train_one = (y_train == 1)  # True when the label is 1 (i.e., the digit one)
y_test_one = (y_test == 1)
y_test_one

one_example = X_test[40]
print("Class", y_test[40], "(", digits.target_names[y_test[40]], ")")

sgd.fit(X_train, y_train_one)
print(sgd.predict([one_example]))
Class 1 ( 1 ) [ True]
Perceptron did well.

Logistic Regression
from sklearn.linear_model import LogisticRegression

log_reg = LogisticRegression()
log_reg.fit(X_train, y_train_zero)
print(log_reg.predict([zero_example]))

log_reg.fit(X_train, y_train_one)
log_reg.predict([one_example])
Looks like Logistic Regression didn't get the 1.

Naive Bayes
from sklearn.naive_bayes import GaussianNB, MultinomialNB

gnb = MultinomialNB()
# or: gnb = GaussianNB()
gnb.fit(X_train, y_train_zero)
gnb.predict([zero_example])

gnb.fit(X_train, y_train_one)
gnb.predict([one_example])
Naive Bayes did well.

Step 3: Evaluation

Performance measures

- Accuracy with cross-validation
from sklearn.model_selection import cross_val_score

print(cross_val_score(log_reg, X_train, y_train_zero, cv=5, scoring="accuracy"))
print(cross_val_score(gnb, X_train, y_train_zero, cv=5, scoring="accuracy"))
print(cross_val_score(sgd, X_train, y_train_zero, cv=5, scoring="accuracy"))

print(cross_val_score(log_reg, X_train, y_train_one, cv=5, scoring="accuracy"))
print(cross_val_score(gnb, X_train, y_train_one, cv=5, scoring="accuracy"))
print(cross_val_score(sgd, X_train, y_train_one, cv=5, scoring="accuracy"))
[0.97916667 0.97916667 0.96515679 0.97212544 0.95470383] [0.61805556 0.62847222 0.61324042 0.66550523 0.51916376] [0.97222222 0.95486111 0.95470383 0.95470383 0.95818815]
Brute force predicting only non-ones
from sklearn.base import BaseEstimator
import numpy as np

np.random.seed(42)

class NotXClassifier(BaseEstimator):
    def fit(self, X, y=None):
        pass
    def predict(self, X):
        return np.zeros((len(X), 1), dtype=bool)

notone_clf = NotXClassifier()
cross_val_score(notone_clf, X_train, y_train_one, cv=5, scoring="accuracy")
- Confusion Matrix
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix

y_train_pred = cross_val_predict(log_reg, X_train, y_train_zero, cv=5)
confusion_matrix(y_train_zero, y_train_pred)

y_train_pred = cross_val_predict(gnb, X_train, y_train_zero, cv=5)
confusion_matrix(y_train_zero, y_train_pred)

y_train_pred = cross_val_predict(log_reg, X_train, y_train_one, cv=5)
confusion_matrix(y_train_one, y_train_pred)

y_train_pred = cross_val_predict(gnb, X_train, y_train_one, cv=5)
confusion_matrix(y_train_one, y_train_pred)
- precision, recall, f1
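For reference, writing TP, FP, and FN for true positives, false positives, and false negatives in the confusion matrices above, the three metrics computed below are:

$$\mathrm{precision} = \frac{TP}{TP + FP}, \qquad \mathrm{recall} = \frac{TP}{TP + FN}, \qquad F_1 = 2 \cdot \frac{\mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}}$$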
from sklearn.metrics import precision_score, recall_score, f1_score

y_train_pred = cross_val_predict(gnb, X_train, y_train_one, cv=5)
precision = precision_score(y_train_one, y_train_pred)  # == 36 / (36 + 5)
recall = recall_score(y_train_one, y_train_pred)  # == 36 / (36 + 4)
f1 = f1_score(y_train_one, y_train_pred)
print(precision, recall, f1)

y_train_pred = cross_val_predict(log_reg, X_train, y_train_one, cv=5)
precision = precision_score(y_train_one, y_train_pred)  # == 15 / (15 + 9)
recall = recall_score(y_train_one, y_train_pred)  # == 15 / (15 + 25)
f1 = f1_score(y_train_one, y_train_pred)
print(precision, recall, f1)
0.20454545454545456 0.9863013698630136 0.3388235294117647 0.832258064516129 0.8835616438356164 0.8571428571428571
Oh no, poor GNB.

- Precision-recall trade-off

Confidence score
log_reg.fit(X_train, y_train_one)
y_scores = log_reg.decision_function([one_example])
y_scores

threshold = 0
y_one_pred = (y_scores > threshold)
y_one_pred

threshold = -2
y_one_pred = (y_scores > threshold)
y_one_pred
Confidence scores
y_scores = cross_val_predict(log_reg, X_train, y_train_one, cv=5, method="decision_function")
y_scores
Plot precision vs recall
from sklearn.metrics import precision_recall_curve

precisions, recalls, thresholds = precision_recall_curve(y_train_one, y_scores)

def plot_pr_vs_threshold(precisions, recalls, thresholds):
    plt.plot(thresholds, precisions[:-1], "b--", label="Precision")
    plt.plot(thresholds, recalls[:-1], "g--", label="Recall")
    plt.xlabel("Threshold")
    plt.legend(loc="upper right")
    plt.ylim([0, 1])

plot_pr_vs_threshold(precisions, recalls, thresholds)
plt.show()

def plot_precision_vs_recall(precisions, recalls):
    plt.plot(recalls, precisions, "b-", linewidth=2)
    plt.xlabel("Recall")
    plt.ylabel("Precision")
    plt.axis([0, 1, 0, 1])

plot_precision_vs_recall(precisions, recalls)
plt.show()
- The Receiver Operating Characteristic (ROC)
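The ROC curve plots, for every decision threshold, the true positive rate against the false positive rate:

$$\mathrm{tpr} = \frac{TP}{TP + FN} \; (= \mathrm{recall}), \qquad \mathrm{fpr} = \frac{FP}{FP + TN}$$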
from sklearn.metrics import roc_curve

fpr, tpr, thresholds = roc_curve(y_train_one, y_scores)

def plot_roc_curve(fpr, tpr, label=None):
    plt.plot(fpr, tpr, linewidth=2, label=label)
    plt.plot([0, 1], [0, 1], "k--")
    plt.axis([0, 1, 0, 1.01])
    plt.xlabel("False positive rate (fpr)")
    plt.ylabel("True positive rate (tpr)")

plot_roc_curve(fpr, tpr)
plt.show()

# Area
from sklearn.metrics import roc_auc_score
roc_auc_score(y_train_one, y_scores)

# Now with GNB
y_probas_gnb = cross_val_predict(gnb, X_train, y_train_one, cv=3, method="predict_proba")
y_scores_gnb = y_probas_gnb[:, 1]  # score = proba of the positive class
fpr_gnb, tpr_gnb, thresholds_gnb = roc_curve(y_train_one, y_scores_gnb)

plt.plot(fpr, tpr, "b:", label="Logistic Regression")
plot_roc_curve(fpr_gnb, tpr_gnb, "Gaussian Naive Bayes")
plt.legend(loc="lower right")
plt.show()

# Area
roc_auc_score(y_train_one, y_scores_gnb)
Looks like Logistic Regression outperformed Gaussian Naive Bayes.

Step 4: Data transformations

Kernel trick

- with gamma = 1 we get EXTREMELY bad results
- gamma = 0.001 solves that (see the note below for a possible explanation)
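A plausible reading of the gamma observation above (stated here as an interpretation, not something verified in this notebook): `RBFSampler` approximates the RBF kernel

$$k(x, x') = \exp\left(-\gamma \, \lVert x - x' \rVert^2\right)$$

Since the raw features are pixel counts in $[0, 16]$ over 64 dimensions, squared distances between images are large; with $\gamma = 1$ the kernel values collapse towards zero for nearly all pairs, so the random features carry almost no signal, while $\gamma = 0.001$ keeps the kernel values in an informative range.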
from sklearn.kernel_approximation import RBFSampler

rbf_features = RBFSampler(gamma=0.001, random_state=42)
X_train_features = rbf_features.fit_transform(X_train)
print(X_train.shape, "->", X_train_features.shape)

sgd_rbf = SGDClassifier(max_iter=100, random_state=42,
                        loss="perceptron", eta0=1, learning_rate="constant", penalty=None)
sgd_rbf.fit(X_train_features, y_train_one)
sgd_rbf.score(X_train_features, y_train_one)

print(X_train_features[0])
(1437, 64) -> (1437, 100) [-0.08948691 0.07840032 0.13012926 0.01843734 0.05243378 0.12363736 -0.13138175 0.12487656 -0.06774296 -0.13116408 0.14131867 0.13014406 -0.1412175 0.01781151 0.08980399 0.14045931 0.06150205 0.11652648 0.13544217 -0.11087697 0.13594337 -0.0800289 0.0574227 0.01216671 0.11133797 0.00604765 0.12907269 0.04008129 0.10124134 0.14130664 0.09733658 -0.14111269 0.11467299 -0.03910098 -0.05214749 -0.05723397 -0.02252198 -0.1064269 0.00072984 -0.08188124 0.01504524 -0.1212134 -0.0339027 0.10711778 0.01232271 -0.10386685 -0.08298496 0.13956306 -0.03454778 0.14113989 -0.09677051 0.03187626 -0.07078854 -0.12390397 0.13693932 0.09349667 -0.12903172 0.0018465 -0.02683269 -0.062455 0.14121793 -0.01998847 0.13880371 0.13414756 -0.14132905 0.13276154 -0.14141921 -0.05054704 0.12889829 0.13459871 -0.03282508 0.13367935 -0.06263253 -0.11907552 0.14105804 0.13411986 0.06823374 0.08644726 0.09729963 0.14135676 -0.04737141 0.0218788 0.09904029 -0.12565361 0.1260095 0.04542973 0.08625159 -0.06465836 0.09918457 0.13192078 0.10236442 0.13360416 -0.081419 -0.09102759 0.13254435 -0.05242659 0.04783216 -0.14066595 -0.02853276 -0.11711412]
- Precision, recall and F1: non-kernel vs. kernel perceptron (the `sgd` and `sgd_rbf` models)
y_train_pred = cross_val_predict(sgd, X_train, y_train_one, cv=5)
precision = precision_score(y_train_one, y_train_pred)
recall = recall_score(y_train_one, y_train_pred)
f1 = f1_score(y_train_one, y_train_pred)
print(precision, recall, f1)

y_train_pred = cross_val_predict(sgd_rbf, X_train_features, y_train_one, cv=5)
precision = precision_score(y_train_one, y_train_pred)
recall = recall_score(y_train_one, y_train_pred)
f1 = f1_score(y_train_one, y_train_pred)
print(precision, recall, f1)
0.8270676691729323 0.7534246575342466 0.7885304659498208 0.9154929577464789 0.8904109589041096 0.9027777777777778
Case 2: Multi-class classification
sgd.fit(X_train, y_train)  # i.e., all instances, not just one class
print(sgd.predict([zero_example]))
print(sgd.predict([one_example]))
[0] [1]
half good
sgd_rbf.fit(X_train_features, y_train)  # i.e., all instances, not just one class

# note that you need to transform the test data in the same way, too
X_test_features = rbf_features.transform(X_test)
zero_rbf_example = X_test_features[10]
one_rbf_example = X_test_features[3]

print(sgd_rbf.predict([zero_rbf_example]))
print(sgd_rbf.predict([one_rbf_example]))
[0] [6]
half good
zero_scores = sgd_rbf.decision_function([zero_rbf_example])
print(zero_scores)

# check which class gets the maximum score
prediction = np.argmax(zero_scores)
print(prediction)

# check which class this corresponds to in the classifier
print(sgd_rbf.classes_[prediction])
print(digits.target_names[sgd_rbf.classes_[prediction]])
[[ 1.32752637 -8.43047571 -0.56672488 -3.41607774 -2.43529497 -2.63031545 -3.12302686 -2.30546944 -3.48919124 -6.71624512]] 0 0 0
good
# with the kernel
one_scores = sgd_rbf.decision_function([one_rbf_example])
print(one_scores)
prediction = np.argmax(one_scores)
print(prediction)
print(digits.target_names[sgd_rbf.classes_[prediction]])
[[-1.43787351 -1.87790689 -2.6351228 -3.70534368 -2.11141745 -3.57570642 -0.83998359 -3.2773025 -2.55029636 -3.2336616 ]] 6 6
):
# without the kernel
one_scores = sgd.decision_function([one_example])
print(one_scores)
prediction = np.argmax(one_scores)
print(prediction)
print(digits.target_names[sgd.classes_[prediction]])
[[-10026. -333. -5977. -2605. -5370. -6327. -7540. -2234. -1181. -6917.]] 1 1
One VS One
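A one-vs-one scheme fits one binary classifier per pair of classes, so for the 10 digit classes it trains

$$\binom{10}{2} = \frac{10 \cdot 9}{2} = 45$$

classifiers, which is the value `len(ovo_clf.estimators_)` below should report.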
from sklearn.multiclass import OneVsOneClassifier

ovo_clf = OneVsOneClassifier(SGDClassifier(max_iter=100, random_state=42,
                                           loss="perceptron", eta0=1,
                                           learning_rate="constant", penalty=None))
ovo_clf.fit(X_train_features, y_train)
ovo_clf.predict([one_rbf_example])
len(ovo_clf.estimators_)
10-way Naive Bayes
gnb.fit(X_train, y_train)
gnb.predict([one_example])
Wow! It correctly classifies the *one* example, so let's check how confident it is about this prediction (use `predict_proba` with `NaiveBayes` and `decision_function` with the `SGDClassifier`):
gnb.predict_proba([one_example])
Cross-validation performance
print(cross_val_score(sgd_rbf, X_train_features, y_train, cv=5, scoring="accuracy"))
print(cross_val_score(ovo_clf, X_train_features, y_train, cv=5, scoring="accuracy"))
print(cross_val_score(gnb, X_train, y_train, cv=5, scoring="accuracy"))
[0.9375 0.90625 0.91637631 0.91986063 0.90592334] [0.93402778 0.92013889 0.92334495 0.91986063 0.91986063] [0.85416667 0.83333333 0.81881533 0.85365854 0.77700348]
Scaling

Let's apply scaling.
from sklearn.preprocessing import StandardScaler, MinMaxScaler

#scaler = StandardScaler()
scaler = MinMaxScaler()

X_train_scaled = scaler.fit_transform(X_train.astype(np.float64))
X_train_features_scaled = scaler.fit_transform(X_train_features.astype(np.float64))

print(cross_val_score(sgd_rbf, X_train_features_scaled, y_train, cv=5, scoring="accuracy"))
print(cross_val_score(ovo_clf, X_train_features_scaled, y_train, cv=5, scoring="accuracy"))
print(cross_val_score(gnb, X_train_scaled, y_train, cv=5, scoring="accuracy"))
[0.93402778 0.90625 0.8989547 0.87108014 0.87456446] [0.94444444 0.9375 0.90940767 0.92682927 0.89198606] [0.79166667 0.78472222 0.76655052 0.80836237 0.72473868]
- StandardScaler() only made things worse:

  [0.93402778 0.90625 0.8989547 0.87108014 0.87456446]
  [0.94444444 0.9375 0.90940767 0.92682927 0.89198606]
  [0.79166667 0.78472222 0.76655052 0.80836237 0.72473868]

- MinMaxScaler() gives exactly the same values

Step 5: Error Analysis
y_train_pred = cross_val_predict(sgd_rbf, X_train_features_scaled, y_train, cv=3)
conf_mx = confusion_matrix(y_train, y_train_pred)
conf_mx

plt.imshow(conf_mx, cmap="jet")
plt.show()

row_sums = conf_mx.sum(axis=1, keepdims=True)
norm_conf_mx = conf_mx / row_sums
np.fill_diagonal(norm_conf_mx, 0)
plt.imshow(norm_conf_mx, cmap="jet")
plt.show()

y_train_pred = cross_val_predict(sgd, X_train_features_scaled, y_train, cv=3)
conf_mx = confusion_matrix(y_train, y_train_pred)
conf_mx

plt.imshow(conf_mx, cmap="jet")
plt.show()

row_sums = conf_mx.sum(axis=1, keepdims=True)
norm_conf_mx = conf_mx / row_sums
np.fill_diagonal(norm_conf_mx, 0)
plt.imshow(norm_conf_mx, cmap="jet")
plt.show()

y_train_pred = cross_val_predict(ovo_clf, X_train_features_scaled, y_train, cv=3)
conf_mx = confusion_matrix(y_train, y_train_pred)
conf_mx

plt.imshow(conf_mx, cmap="jet")
plt.show()

row_sums = conf_mx.sum(axis=1, keepdims=True)
norm_conf_mx = conf_mx / row_sums
np.fill_diagonal(norm_conf_mx, 0)
plt.imshow(norm_conf_mx, cmap="jet")
plt.show()
Final step – evaluating on the test set
# non-kernel perceptron
from sklearn.metrics import accuracy_score

y_pred = sgd.predict(X_test)
accuracy_score(y_test, y_pred)

precision = precision_score(y_test, y_pred, average='weighted')
recall = recall_score(y_test, y_pred, average='weighted')
f1 = f1_score(y_test, y_pred, average='weighted')
print(precision, recall, f1)

# kernel perceptron
X_test_features_scaled = scaler.transform(X_test_features.astype(np.float64))
y_pred = sgd_rbf.predict(X_test_features_scaled)
accuracy_score(y_test, y_pred)

precision = precision_score(y_test, y_pred, average='weighted')
recall = recall_score(y_test, y_pred, average='weighted')
f1 = f1_score(y_test, y_pred, average='weighted')
print(precision, recall, f1)
0.9330274822584538 0.9305555555555556 0.9299190345492117
The OvO SGD classifier:
# One vs one
X_test_features_scaled = scaler.transform(X_test_features.astype(np.float64))
y_pred = ovo_clf.predict(X_test_features_scaled)
accuracy_score(y_test, y_pred)

precision = precision_score(y_test, y_pred, average='weighted')
recall = recall_score(y_test, y_pred, average='weighted')
f1 = f1_score(y_test, y_pred, average='weighted')
print(precision, recall, f1)
0.9038423919737274 0.8944444444444445 0.8906734076041858
Naive Bayes
# Naive Bayes
gnb.fit(X_train, y_train)
y_pred = gnb.predict(X_test)
accuracy_score(y_test, y_pred)

precision = precision_score(y_test, y_pred, average='weighted')
recall = recall_score(y_test, y_pred, average='weighted')
f1 = f1_score(y_test, y_pred, average='weighted')
print(precision, recall, f1)
0.8479871939298477 0.8111111111111111 0.8150828576150382
Collision Avoidance - Data Collection

If you ran through the basic motion notebook, hopefully you're enjoying how easy it can be to make your JetBot move around! That's very cool! But what's even cooler is making JetBot move around all by itself!

This is a super hard task that has many different approaches, but the whole problem is usually broken down into easier sub-problems. It could be argued that one of the most important sub-problems to solve is preventing the robot from entering dangerous situations! We're calling this *collision avoidance*.

In this set of notebooks, we're going to attempt to solve the problem using deep learning and a single, very versatile sensor: the camera. You'll see how with a neural network, a camera, and the NVIDIA Jetson Nano, we can teach the robot a very useful behavior!

The approach we take to avoiding collisions is to create a virtual "safety bubble" around the robot. Within this safety bubble, the robot is able to spin in a circle without hitting any objects (or running into other dangerous situations like falling off a ledge). Of course, the robot is limited by what's in its field of vision, and we can't prevent objects from being placed behind the robot, etc. But we can prevent the robot from entering these scenarios itself.

The way we'll do this is super simple:

First, we'll manually place the robot in scenarios where its "safety bubble" is violated, and label these scenarios ``blocked``. We save a snapshot of what the robot sees along with this label.

Second, we'll manually place the robot in scenarios where it's safe to move forward a bit, and label these scenarios ``free``. Likewise, we save a snapshot along with this label.

That's all that we'll do in this notebook: data collection. Once we have lots of images and labels, we'll upload this data to a GPU-enabled machine where we'll *train* a neural network to predict whether the robot's safety bubble is being violated based off of the image it sees. We'll use this to implement a simple collision avoidance behavior in the end :)

> IMPORTANT NOTE: When JetBot spins in place, it actually spins about the center between the two wheels, not the center of the robot chassis itself. This is an important detail to remember when you're trying to estimate whether the robot's safety bubble is violated or not. But don't worry, you don't have to be exact. If in doubt it's better to lean on the cautious side (a big safety bubble). We want to make sure JetBot doesn't enter a scenario that it couldn't get out of by turning in place.

Display live camera feed

So let's get started. First, let's initialize and display our camera like we did in the *teleoperation* notebook.

> Our neural network takes a 224x224 pixel image as input. We'll set our camera to that size to minimize the filesize of our dataset (we've tested that it works for this task). In some scenarios it may be better to collect data in a larger image size and downscale to the desired size later.
import traitlets
import ipywidgets.widgets as widgets
from IPython.display import display
from jetbot import Camera, bgr8_to_jpeg

camera = Camera.instance(width=224, height=224)
# this width and height doesn't necessarily have to match the camera
image = widgets.Image(format='jpeg', width=224, height=224)

camera_link = traitlets.dlink((camera, 'value'), (image, 'value'), transform=bgr8_to_jpeg)
Source: tomMEM/Jetbot-Project, .ipynb_checkpoints/data_collection-collisionavoidance_Jetbot_Joystick-checkpoint.ipynb (MIT)
Awesome, next let's create a few directories where we'll store all our data. We'll create a folder ``dataset`` that will contain two sub-folders ``free`` and ``blocked``, where we'll place the images for each scenario.
import os

blocked_dir = 'dataset/blocked'
free_dir = 'dataset/free'

# we have this "try/except" statement because these next functions can throw an error if the directories exist already
try:
    os.makedirs(free_dir)
    os.makedirs(blocked_dir)
except FileExistsError:
    print('Directories are not created because they already exist')
Directories are not created because they already exist
If you refresh the Jupyter file browser on the left, you should now see those directories appear. Next, let's create and display some buttons that we'll use to save snapshots for each class label. We'll also add some text boxes that will display how many images of each category we've collected so far. This is useful because we want to make sure we collect about as many ``free`` images as ``blocked`` images. It also helps to know how many images we've collected overall.
button_layout = widgets.Layout(width='128px', height='64px')
free_button = widgets.Button(description='add free', button_style='success', layout=button_layout)
blocked_button = widgets.Button(description='add blocked', button_style='danger', layout=button_layout)
free_count = widgets.IntText(layout=button_layout, value=len(os.listdir(free_dir)))
blocked_count = widgets.IntText(layout=button_layout, value=len(os.listdir(blocked_dir)))

x = 0

# from here TB
controller = widgets.Controller(index=0)  # replace with index of your controller

button_layout = widgets.Layout(width='200px', height='64px')  # TB
free_left = widgets.FloatText(layout=button_layout, value=x, description='forward')  # TB
free_right = widgets.FloatText(layout=button_layout, value=x, description='turning')  # TB
motorleft = widgets.FloatText(layout=button_layout, value=x, description='Motor Left')  # TB
motorright = widgets.FloatText(layout=button_layout, value=x, description='Motor Right')  # TB

speed_widget = widgets.FloatSlider(value=0.5, min=0.05, max=1.0, step=0.001, description='speed')
# TB higher speed requires smaller turn_gain values: 2.5 for speed 0.22, around 2 for speed 0.4
turn_gain_widget = widgets.FloatSlider(value=1, min=0.05, step=0.001, max=4.0, description='turn sensitivity')
# TB value differs for different forward speeds, but only by very small amounts
motoradjustment_widget = widgets.FloatSlider(value=0.02, min=0.00, max=0.2, step=0.0001, description='motoradjustment')

from jetbot import Robot
import traitlets
import math

robot = Robot()

# TB to show the controller values
left_link = traitlets.dlink((controller.axes[1], 'value'), (free_left, 'value'), transform=lambda x: -x)
right_link = traitlets.dlink((controller.axes[0], 'value'), (free_right, 'value'), transform=lambda x: -x)

def on_value_change(change):
    x = free_right.value
    y = free_left.value
    leftnew, rightnew = steering(x, y)
    motorright.value = round(float(leftnew), 2)
    motorleft.value = round(float(rightnew + motoradjustment_widget.value), 2)  # adjust the motor that lags behind
    # the motoradjustment value is important to keep the bot driving straight; use small offset values like 0.05
    robot.right_motor.value = motorright.value
    robot.left_motor.value = motorleft.value

def steering(x, y):
    # script from Stack Exchange user Pedro Werneck:
    # https://electronics.stackexchange.com/questions/19669/algorithm-for-mixing-2-axis-analog-input-to-control-a-differential-motor-drive
    # convert to polar
    r = math.hypot(x, y)
    t = math.atan2(y, x)

    # rotate by 45 degrees
    t += math.pi / -4.0

    # back to cartesian
    left = r * math.cos(t)
    right = r * math.sin(t)

    # rescale the new coords
    left = left * math.sqrt(2)
    right = right * math.sqrt(2)

    # clamp to -1/+1
    scalefactor = speed_widget.value
    left = max(scalefactor * -1.0, min(left, scalefactor))
    right = max(scalefactor * -1.0, min(right, scalefactor))

    # gamma correction for response sensitivity of joystick while turning: TB
    gamma = turn_gain_widget.value  # using slider for joystick 1-4, for object recognition 2-40
    if left < 0:
        left = -1 * (((abs(left) / scalefactor) ** (1 / gamma)) * scalefactor)
    else:
        left = ((abs(left) / scalefactor) ** (1 / gamma)) * scalefactor
    if right < 0:
        right = -1 * (((abs(right) / scalefactor) ** (1 / gamma)) * scalefactor)
    else:
        right = ((abs(right) / scalefactor) ** (1 / gamma)) * scalefactor

    return left, right

free_left.observe(on_value_change, names='value')
free_right.observe(on_value_change, names='value')

#left_link = traitlets.dlink((motorleft, 'value'), (robot.left_motor, 'value'))
#right_link = traitlets.dlink((motorright, 'value'), (robot.right_motor, 'value'))

from jetbot import Heartbeat

def handle_heartbeat_status(change):
    if change['new'] == Heartbeat.Status.dead:
        camera_link.unlink()
        left_link.unlink()
        right_link.unlink()
        robot.stop()

heartbeat = Heartbeat(period=0.5)

# attach the callback function to heartbeat status
heartbeat.observe(handle_heartbeat_status, names='status')
_____no_output_____
MIT
.ipynb_checkpoints/data_collection-collisionavoidance_Jetbot_Joystick-checkpoint.ipynb
tomMEM/Jetbot-Project
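The 45-degree polar rotation inside ``steering`` above is what turns a joystick (turn, forward) pair into left/right motor commands. Below is a minimal standalone sketch of just that mixing step — ignoring the speed clamp and gamma correction from the cell above — to show what it produces for a few canonical stick positions:

import math

def mix(x, y):
    # convert (x=turn, y=forward) to polar, rotate by -45 degrees, back to cartesian
    r = math.hypot(x, y)
    t = math.atan2(y, x) - math.pi / 4.0
    left, right = r * math.cos(t), r * math.sin(t)
    # rescale so a pure forward or pure turn input reaches the full -1..1 range
    return left * math.sqrt(2), right * math.sqrt(2)

print(mix(0.0, 1.0))  # full forward -> approx (1.0, 1.0): both motors driven equally
print(mix(1.0, 0.0))  # full right turn -> approx (1.0, -1.0): motors opposed, spin in place
print(mix(0.0, 0.0))  # stick centered -> (0.0, 0.0)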
Right now, these buttons won't do anything. We have to attach functions that save images for each category to the buttons' ``on_click`` event. We'll save the value of the ``Image`` widget (rather than the camera), because it's already in compressed JPEG format! To make sure we don't repeat any file names (even across different machines!) we'll use the ``uuid`` package in Python, which defines the ``uuid1`` method to generate a unique identifier. This unique identifier is generated from information like the current time and the machine address.
from uuid import uuid1 snapshot_image = widgets.Image(format='jpeg', width=224, height=224) def save_snapshot(directory): image_path = os.path.join(directory, str(uuid1()) + '.jpg') with open(image_path, 'wb') as f: f.write(image.value) # display snapshot that was saved snapshot_image.value = image.value def save_free(change): global free_dir, free_count if change['new']: save_snapshot(free_dir) free_count.value = len(os.listdir(free_dir)) def save_blocked(change): global blocked_dir, blocked_count if change['new']: save_snapshot(blocked_dir) blocked_count.value = len(os.listdir(blocked_dir)) def save_free_button(): global free_dir, free_count save_snapshot(free_dir) free_count.value = len(os.listdir(free_dir)) def save_blocked_button(): global blocked_dir, blocked_count save_snapshot(blocked_dir) blocked_count.value = len(os.listdir(blocked_dir)) # attach the callbacks, we use a 'lambda' function to ignore the # parameter that the on_click event would provide to our function # because we don't need it. controller.buttons[5].observe(save_free, names='value') #TB gamepad button number 5 controller.buttons[7].observe(save_blocked, names='value') #TB gamepad button numer 7 free_button.on_click(lambda x: save_free_button()) blocked_button.on_click(lambda x: save_blocked_button()) #display(image) display(widgets.HBox([image, snapshot_image])) display(controller) display(widgets.VBox([ speed_widget, turn_gain_widget, motoradjustment_widget, ])) display(widgets.HBox([free_left, free_right, motorleft, motorright])) display(widgets.HBox([free_count, free_button])) display(widgets.HBox([blocked_count, blocked_button])) import time camera.unobserve_all() time.sleep(1.0) robot.stop()
_____no_output_____
MIT
.ipynb_checkpoints/data_collection-collisionavoidance_Jetbot_Joystick-checkpoint.ipynb
tomMEM/Jetbot-Project
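As a quick illustration of why ``uuid1`` fits this job (a standalone sketch, not part of the collection pipeline): every call yields a fresh identifier derived from the clock and the machine's network address, so two robots saving snapshots at the same instant still produce distinct file names.

from uuid import uuid1

# each call produces a new time/MAC-based identifier; the exact value varies per call
name_a = str(uuid1()) + '.jpg'
name_b = str(uuid1()) + '.jpg'
print(name_a, name_b, name_a != name_b)  # True: names never repeat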
Great! Now the buttons above should save images to the ``free`` and ``blocked`` directories. You can use the Jupyter Lab file browser to view these files!

Now go ahead and collect some data:

1. Place the robot in a scenario where it's blocked and press ``add blocked``
2. Place the robot in a scenario where it's free and press ``add free``
3. Repeat 1, 2

> REMINDER: You can move the widgets to new windows by right clicking the cell and clicking ``Create New View for Output``. Or, you can just re-display them together as we will below.

Here are some tips for labeling data:

1. Try different orientations
2. Try different lighting
3. Try varied object / collision types; walls, ledges, objects
4. Try different textured floors / objects; patterned, smooth, glass, etc.

Ultimately, the more data we have of scenarios the robot will encounter in the real world, the better our collision avoidance behavior will be. It's important to get *varied* data (as described by the above tips) and not just a lot of data; you'll probably need at least 100 images of each class (that's not a science, just a helpful tip). But don't worry, it goes pretty fast once you get going :)

Next

Once you've collected enough data, we'll need to copy that data to our GPU desktop or cloud machine for training. First, we can call the following *terminal* command to compress our dataset folder into a single *zip* file.

> The ! prefix indicates that we want to run the cell as a *shell* (or *terminal*) command.

> The -r flag in the zip command below indicates *recursive* so that we include all nested files; the -q flag indicates *quiet* so that the zip command doesn't print any output.
!zip -r -q dataset.zip dataset
_____no_output_____
MIT
.ipynb_checkpoints/data_collection-collisionavoidance_Jetbot_Joystick-checkpoint.ipynb
tomMEM/Jetbot-Project
The Performance Of Models Trained On The MNIST Dataset On Custom-Drawn Images
import numpy as np import tensorflow as tf import sklearn, sklearn.linear_model, sklearn.multiclass, sklearn.naive_bayes import matplotlib.pyplot as plt import pandas as pd plt.rcParams["figure.figsize"] = (10, 10) plt.rcParams.update({'font.size': 12})
_____no_output_____
MIT
notebooks/digit-classification-test.ipynb
MovsisyanM/digit-recognizer
Defining the data
(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.mnist.load_data()
_____no_output_____
MIT
notebooks/digit-classification-test.ipynb
MovsisyanM/digit-recognizer
Making 1D versions of the MNIST images for the one-vs-rest classifier
train_images_flat = train_images.reshape((train_images.shape[0], train_images.shape[1] * train_images.shape[2])) / 255.0 test_images_flat = test_images.reshape((test_images.shape[0], test_images.shape[1] * test_images.shape[2])) / 255.0
_____no_output_____
MIT
notebooks/digit-classification-test.ipynb
MovsisyanM/digit-recognizer
Making a 4D dataset and categorical labels for the neural net
train_images = np.expand_dims(train_images, axis=-1) / 255.0
test_images = np.expand_dims(test_images, axis=-1) / 255.0
#train_images = train_images.reshape(60000, 28, 28, 1)
#test_images = test_images.reshape(10000, 28, 28, 1)
train_labels_cat = tf.keras.utils.to_categorical(train_labels)
test_labels_cat = tf.keras.utils.to_categorical(test_labels)

def plot_images(images, labels, rows=5, cols=5, label='Label'):
    # show a random grid of images with their labels
    fig, axes = plt.subplots(rows, cols, figsize=(15, 15))
    indices = np.random.choice(len(images), rows * cols)
    counter = 0
    for i in range(rows):
        for j in range(cols):
            axes[i, j].imshow(images[indices[counter]])
            axes[i, j].set_title(f"{label}: {labels[indices[counter]]}")
            axes[i, j].set_xticks([])
            axes[i, j].set_yticks([])
            counter += 1
    plt.tight_layout()
    plt.show()

plot_images(train_images, train_labels)
_____no_output_____
MIT
notebooks/digit-classification-test.ipynb
MovsisyanM/digit-recognizer
Training

Defining and training the one-vs-rest classifier
log_reg = sklearn.linear_model.SGDClassifier(loss='log', max_iter=1000, penalty='l2') classifier = sklearn.multiclass.OneVsRestClassifier(log_reg) classifier.fit(train_images_flat, train_labels)
_____no_output_____
MIT
notebooks/digit-classification-test.ipynb
MovsisyanM/digit-recognizer
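`OneVsRestClassifier` fits ten binary logistic-regression classifiers (digit *k* vs. all the rest) and predicts the class whose classifier reports the highest score. One quick way to see this — a sanity-check sketch reusing the `classifier` fitted above — is to inspect the per-class decision scores directly:

import numpy as np

scores = classifier.decision_function(test_images_flat[:5])  # shape (5, 10): one score per digit
print(np.argmax(scores, axis=1))  # matches classifier.predict(test_images_flat[:5])
print(test_labels[:5])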
Defining and training the neural net
from tensorflow.keras import Sequential from tensorflow.keras import layers def create_model(): model = Sequential([ layers.Conv2D(64, 5, activation='relu', input_shape=(28, 28, 1)), layers.MaxPool2D(2), layers.Conv2D(128, 5, activation='relu'), layers.MaxPool2D(2), layers.GlobalAveragePooling2D(), layers.Dense(10, activation='softmax') ]) model.compile(optimizer='Adam', loss='categorical_crossentropy', metrics=['accuracy']) return model model = create_model() model.summary() train_gen = tf.keras.preprocessing.image.ImageDataGenerator(zoom_range=0.3, height_shift_range=0.10, width_shift_range=0.10, rotation_range=10) train_datagen = train_gen.flow(train_images, train_labels_cat, batch_size=256) '''def scheduler(epoch): initial_lr = 0.001 lr = initial_lr * np.exp(-0.1 * epoch) return lr from tensorflow.keras.callbacks import LearningRateScheduler lr_scheduler = LearningRateScheduler(scheduler, verbose=1)''' history = model.fit(train_datagen, initial_epoch=0, epochs=30, batch_size=256, validation_data=(test_images, test_labels_cat)) model.save('cnn-64-128-5-aug') #model.load_weights('cnn-64-128-5-aug')
INFO:tensorflow:Assets written to: cnn-64-128-5-aug\assets
MIT
notebooks/digit-classification-test.ipynb
MovsisyanM/digit-recognizer
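Since `ImageDataGenerator` applies the zoom/shift/rotation transforms on the fly, it can be worth pulling a single batch to confirm the shapes and that the augmented digits still look legible. A small sanity-check sketch using the `train_datagen` iterator and the `plot_images` helper defined earlier:

batch_x, batch_y = next(train_datagen)
print(batch_x.shape, batch_y.shape)  # expected: (256, 28, 28, 1) (256, 10)
plot_images(batch_x[..., 0], np.argmax(batch_y, axis=1))  # drop channel axis for imshow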
Assessing model performance

Loading drawn images
def read_images(filepaths, reverse=False): images = [] images_flat = [] for filepath in filepaths: image = tf.io.read_file(filepath) image = tf.image.decode_image(image, channels=1) image = tf.image.resize(image, (28, 28)) if reverse: image = np.where(image == 255, 0, 255) else: image = image.numpy() image = image / 255.0 images.append(image) images_flat.append(image.reshape(28 * 28)) return np.array(images), np.array(images_flat) filepaths = tf.io.gfile.glob('images/*.png') list.sort(filepaths, key=lambda x: int(x[12:-4])) images, images_flat = read_images(filepaths, True) images.shape
_____no_output_____
MIT
notebooks/digit-classification-test.ipynb
MovsisyanM/digit-recognizer
Creating labels for the one-vs-rest classifier and the neural net
labels = 100 * [0] + 98 * [1] + 100 * [2] + 101 * [3] + 99 * [4] + 111 * [5] + 89 * [6] + 110 * [7] + 93 * [8] + 112 * [9] labels = np.array(labels) labels.shape labels_cat = tf.keras.utils.to_categorical(labels) labels_cat.shape labels_cat[0]
_____no_output_____
MIT
notebooks/digit-classification-test.ipynb
MovsisyanM/digit-recognizer
Plotting the drawn images and their corresponding labels
plot_images(images, labels)
_____no_output_____
MIT
notebooks/digit-classification-test.ipynb
MovsisyanM/digit-recognizer
Evaluating model performance

Neural net on MNIST test dataset
model.evaluate(test_images, test_labels_cat) from sklearn.metrics import classification_report, confusion_matrix predictions = np.argmax(model.predict(test_images), axis=-1) conf_mat = confusion_matrix(test_labels, predictions) conf_mat class_report = classification_report(test_labels, predictions, output_dict=True) class_report
_____no_output_____
MIT
notebooks/digit-classification-test.ipynb
MovsisyanM/digit-recognizer
Neural net on drawn images
model.evaluate(images, labels_cat)

predictions = np.argmax(model.predict(images), axis=-1)

rows, cols = 5, 5
fig, axes = plt.subplots(rows, cols, figsize=(15, 15))
indices = np.random.choice(len(images), rows * cols)
counter = 0
for i in range(rows):
    for j in range(cols):
        axes[i, j].imshow(images[indices[counter]])
        axes[i, j].set_title(f"Prediction: {predictions[indices[counter]]}\n"
                             f"True label: {labels[indices[counter]]}")
        axes[i, j].set_xticks([])
        axes[i, j].set_yticks([])
        counter += 1
plt.tight_layout()
plt.show()
_____no_output_____
MIT
notebooks/digit-classification-test.ipynb
MovsisyanM/digit-recognizer
Plotting wrong predictions
wrong_predictions = list(filter(lambda x: x[1][0] != x[1][1], list(enumerate(zip(predictions, labels)))))
len(wrong_predictions)

cols, rows = 5, 5
fig, axes = plt.subplots(rows, cols, figsize=(15, 15))
counter = 0
for i in range(rows):
    for j in range(cols):
        axes[i, j].imshow(images[wrong_predictions[counter][0]])
        axes[i, j].set_title(f"Prediction: {wrong_predictions[counter][1][0]}\n"
                             f"True label: {wrong_predictions[counter][1][1]}")
        axes[i, j].set_xticks([])
        axes[i, j].set_yticks([])
        counter += 1
plt.tight_layout()
plt.show()

from sklearn.metrics import classification_report, confusion_matrix
conf_mat = confusion_matrix(labels, predictions)
conf_mat

class_report = classification_report(labels, predictions, output_dict=True)
class_report
_____no_output_____
MIT
notebooks/digit-classification-test.ipynb
MovsisyanM/digit-recognizer
One-vs-rest classifier on MNIST test dataset
classifier.score(test_images_flat, test_labels) predictions = classifier.predict(test_images_flat) conf_mat = confusion_matrix(test_labels, predictions) conf_mat class_report = classification_report(test_labels, predictions, output_dict=True) class_report
_____no_output_____
MIT
notebooks/digit-classification-test.ipynb
MovsisyanM/digit-recognizer
One-vs-rest classifier on drawn images
classifier.score(images_flat, labels) predictions = classifier.predict(images_flat) conf_mat = confusion_matrix(labels, predictions) conf_mat class_report = classification_report(labels, predictions, output_dict=True) class_report
_____no_output_____
MIT
notebooks/digit-classification-test.ipynb
MovsisyanM/digit-recognizer
Plot Comparison Between Algorithms

Expects the input data to contain CSV files with rewards per timestep.
import os import numpy as np %reload_ext autoreload %autoreload 2
_____no_output_____
MIT
plot/plotFromRewards.ipynb
architsakhadeo/OHT
We need to read the CSV files (via a function in another file) to get the reward at each timestep for each run of each algorithm. Only the `dataPath` directories will be loaded.

`load_data` loads the CSV files containing rewards as a Python list of Pandas DataFrames.

`dataPath` contains the exact paths of the directories containing the CSV files, relative to the `data` directory. It assumes every element is the path for a different algorithm; it will overwrite if two paths are for different parameter settings of the same algorithm.

Expects there to be more than one input CSV file.
dataPath = ['esarsa/alpha-0.015625_driftProb--1,-1,-1,-1_driftScale-1000_enable-debug-0_epsilon-0.05_gamma-0.95_lambda-0.8_sensorLife-1,1,1,1_tiles-4_tilings-32', 'dqn/alpha-0.015625_driftProb--1,-1,-1,-1_driftScale-100_enable-debug-0_epsilon-0.05_gamma-0.95_lambda-0.8_sensorLife-1,1,1,1_tiles-4_tilings-32/'] basePath = '../data/' algorithms = [dataPath[i].split('/')[0] for i in range(len(dataPath))] Data = {} from loadFromRewards import load_data for i in range(len(dataPath)): if os.path.isdir(basePath + dataPath[i]) == True: Data[algorithms[i]] = load_data(basePath+dataPath[i]) print('Data will be stored for', ', '.join([k for k in Data.keys()])) print('Loaded the rewards data from the csv files')
Data will be stored for esarsa Loaded the rewards data from the csv files
MIT
plot/plotFromRewards.ipynb
architsakhadeo/OHT
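The `loadFromRewards` module itself isn't shown in this notebook. For reference, a minimal sketch of what its `load_data` plausibly does (the real implementation may differ) is simply collecting each run's CSV in a directory into a list of DataFrames:

import os
import pandas as pd

def load_data_sketch(dir_path):
    # one CSV of per-timestep rewards per run -> one DataFrame per run
    files = sorted(f for f in os.listdir(dir_path) if f.endswith('.csv'))
    return [pd.read_csv(os.path.join(dir_path, f)) for f in files]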
The rewards can be transformed into the following values of transformation:

1. 'Returns'
2. 'Failures'
3. 'Average-Rewards'
4. 'Rewards' (no change)

----------------------------------------------------------------------------------------------

There is an additional parameter, window, which can be any non-negative integer. It is used for the 'Average-Rewards' transformation to maintain a moving average over a sliding window. By default, window is 0.

- If window is 500 and there are 10000 timesteps, the first element is the average of the performances over timesteps 1 - 500, the second element is the average over timesteps 2 - 501, and the last element is the average over timesteps 9501 - 10000.

----------------------------------------------------------------------------------------------

`transform_data` transforms the rewards (a Python list of Pandas DataFrames) into the respective `transformation` (a numpy array of numpy arrays) for plotting.
plottingData = {} from loadFromRewards import transform_data transformation = 'Returns' window = 2500 for alg, data in Data.items(): plottingData[alg] = transform_data(alg, data, transformation, window) print('Data will be plotted for', ', '.join([k for k in plottingData.keys()])) print('The stored rewards are transformed to: ', transformation)
0 esarsa Data will be plotted for esarsa The stored rewards are transformed to: Returns
MIT
plot/plotFromRewards.ipynb
architsakhadeo/OHT
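For the 'Average-Rewards' case described above, the sliding-window average can be computed in one line with a convolution. This is only a sketch of what `transform_data` plausibly does for that transformation (the real implementation lives in `loadFromRewards`):

import numpy as np

def moving_average(rewards, window):
    # 'valid' mode yields len(rewards) - window + 1 points: the first point
    # averages timesteps 1..window, the last averages the final `window` steps
    return np.convolve(rewards, np.ones(window) / window, mode='valid')

rewards = np.random.randn(10000)
print(moving_average(rewards, 2500).shape)  # (7501,)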
Here, we can plot the following statistics:

1. Mean of all the runs
2. Median run
3. Run with the best performance (highest return, or equivalently fewest failures)
4. Run with the worst performance (lowest return, or equivalently most failures)
5. Mean along with the confidence interval (currently plots the mean with a 95% confidence interval, but should be changed to adapt to any confidence level)
6. Mean along with percentile regions (plots the mean and shades the region between the run at the lower percentile and the run at the upper percentile)

----------------------------------------------------------------------------------------------

Details:

plotBest, plotWorst, and plotMeanAndPercentileRegions sort the runs by their final performance.

Mean, Median, and MeanAndConfidenceInterval are all symmetric plots, so 'Failures' does not affect them. Best, Worst, and MeanAndPercentileRegions are asymmetric plots, so 'Failures' has to be treated in the following way:

1. plotBest for Returns plots the run with the highest return (fewest failures); plotBest for Failures plots the run with the fewest failures, not the most.
2. plotWorst for Returns plots the run with the lowest return (most failures); plotWorst for Failures plots the run with the most failures, not the fewest.
3. plotMeanAndPercentileRegions for Returns uses the lower variable to select the run at the 'lower' percentile and the upper variable for the run at the 'upper' percentile; for Failures, the lower variable (with some extra calculation) selects the 'upper' percentile run, and the upper variable the 'lower' percentile run.

----------------------------------------------------------------------------------------------

Caution:

- Jupyter notebooks (mostly) or matplotlib give an error when displaying very dense plots, for example when plotting the best and worst case of the 'Rewards' transformation for the 'example' algorithm, or when trying to zoom into dense plots. Most of the plots for 'Rewards' and 'example' fail.
from stats import getMean, getMedian, getBest, getWorst, getConfidenceIntervalOfMean, getRegion # Add color, linestyles as needed def plotMean(xAxis, data, color): mean = getMean(data) plt.plot(xAxis, mean, label=alg+'-mean', color=color) def plotMedian(xAxis, data, color): median = getMedian(data) plt.plot(xAxis, median, label=alg+'-median', color=color) def plotBest(xAxis, data, transformation, color): best = getBest(data, transformation) plt.plot(xAxis, best, label=alg+'-best', color=color) def plotWorst(xAxis, data, transformation, color): worst = getWorst(data, transformation) plt.plot(xAxis, worst, label=alg+'-worst', color=color) def plotMeanAndConfidenceInterval(xAxis, data, confidence, color): plotMean(xAxis, data, color=color) lowerBound, upperBound = getConfidenceIntervalOfMean(data, confidence) plt.fill_between(xAxis, lowerBound, upperBound, alpha=0.25, color=color) def plotMeanAndPercentileRegions(xAxis, data, lower, upper, transformation, color): plotMean(xAxis, data, color) lowerRun, upperRun = getRegion(data, lower, upper, transformation) plt.fill_between(xAxis, lowerRun, upperRun, alpha=0.25, color=color)
_____no_output_____
MIT
plot/plotFromRewards.ipynb
architsakhadeo/OHT
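The `stats` helpers imported above aren't shown in this notebook. As one common way to realize `getConfidenceIntervalOfMean` (a hedged sketch; the module's actual code may differ), the interval around the mean across runs uses the standard error at each timestep:

import numpy as np
from scipy.stats import norm

def mean_confidence_interval(data, confidence=0.95):
    # data: array of shape (runs, timesteps)
    mean = data.mean(axis=0)
    sem = data.std(axis=0, ddof=1) / np.sqrt(data.shape[0])
    z = norm.ppf(0.5 + confidence / 2.0)  # ~1.96 for 95%
    return mean - z * sem, mean + z * sem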
Details:

- The x axis for 'Average-Rewards' starts at 'window' timesteps and ends at the final timestep.
- Color (shades) and linestyle still need to be added as per requirements.
- Currently, plot one statistic at a time by commenting out the others; otherwise it displays different colors for all of them.
# For saving figures
#%matplotlib inline
# For plotting in the jupyter notebook
%matplotlib notebook
import matplotlib.pyplot as plt

colors = plt.rcParams['axes.prop_cycle'].by_key()['color']

for alg, data in plottingData.items():
    lenRun = len(data[0])
    xAxis = np.array([i for i in range(1,lenRun+1)])
    if transformation == 'Average-Rewards':
        xAxis += (window-1)
    color = colors[0]  # default, so algorithms without an explicit mapping (e.g. 'dqn') still get a color
    if alg == 'esarsa':
        color = colors[0]
    if alg == 'hand':
        color = colors[1]
    plotMean(xAxis, data, color=color)
    #plotMedian(xAxis, data, color=color)
    #plotBest(xAxis, data, transformation=transformation, color=color)
    #plotWorst(xAxis, data, transformation=transformation, color=color)
    #plotMeanAndConfidenceInterval(xAxis, data, confidence=0.95, color=color)
    #plotMeanAndPercentileRegions(xAxis, data, lower=0.025, upper=0.975, transformation=transformation, color=color)

#plt.title('Rewards averaged with sliding window of 1000 timesteps across 100 runs', pad=25, fontsize=10)
plt.xlabel('Timesteps', labelpad=35)
plt.ylabel(transformation, rotation=0, labelpad=45)
plt.rcParams['figure.figsize'] = [8, 5.33]
plt.legend(loc=0)
plt.yticks()
plt.xticks()
plt.tight_layout()
#plt.savefig('../img/'+transformation+'.png',dpi=500, bbox_inches='tight')
_____no_output_____
MIT
plot/plotFromRewards.ipynb
architsakhadeo/OHT
Classification with MNIST Dataset and ResNet network

This script sets up a ResNet-style network to classify digits from the MNIST dataset.
import keras import keras.backend as K from keras.datasets import mnist from keras.models import Model from keras.layers import Input, Conv2D, Dense, Flatten, MaxPooling2D, Add, Activation, Dropout from keras.optimizers import SGD from matplotlib import pyplot as plt import numpy as np
Using TensorFlow backend.
MIT
mnist/resnet_mnist.ipynb
jonathanventura/nncookbook
Use a Keras utility function to load the MNIST dataset. We keep all ten digit classes and one-hot encode the labels for multi-class classification.
(x_train, y_train), (x_test, y_test) = mnist.load_data() y_train = keras.utils.to_categorical(y_train,10) y_test = keras.utils.to_categorical(y_test,10)
_____no_output_____
MIT
mnist/resnet_mnist.ipynb
jonathanventura/nncookbook
Add a channel dimension to the images and convert their datatype and range to floats in [-1, 1].
x_train = np.expand_dims(x_train,axis=-1) x_test = np.expand_dims(x_test,axis=-1) x_train = x_train.astype('float32') x_test = x_test.astype('float32') x_train /= 255 x_test /= 255 x_train = x_train*2.-1. x_test = x_test*2.-1.
_____no_output_____
MIT
mnist/resnet_mnist.ipynb
jonathanventura/nncookbook
Build a multi-class classifier model.
def res_block(x,c,s=1):
    # project the shortcut with a 1x1 convolution when the channel count or
    # stride changes; otherwise use an identity shortcut
    if K.int_shape(x)[-1] != c or s != 1:
        x_save = Conv2D(c,1,strides=s,activation=None)(x)
    else:
        x_save = x
    x = Conv2D(c,3,strides=s,padding='same',activation='relu',kernel_initializer='he_normal')(x)
    x = Conv2D(c,3,padding='same',activation=None,kernel_initializer='he_normal')(x)
    x = Add()([x,x_save])
    x = Activation('relu')(x)
    return x

x_in = Input((28,28,1))
x = res_block(x_in,64,2)
x = res_block(x,64)
x = res_block(x,128,2)
x = res_block(x,128)
x = res_block(x,256,2)
x = res_block(x,256)
x = Flatten()(x)
x = Dense(200,kernel_initializer='he_normal')(x)
x = Dropout(0.5)(x)
x = Dense(10,activation='softmax',kernel_initializer='he_normal')(x)
model = Model(inputs=x_in,outputs=x)
model.summary()
____________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ==================================================================================================== input_1 (InputLayer) (None, 28, 28, 1) 0 ____________________________________________________________________________________________________ conv2d_2 (Conv2D) (None, 14, 14, 64) 640 input_1[0][0] ____________________________________________________________________________________________________ conv2d_3 (Conv2D) (None, 14, 14, 64) 36928 conv2d_2[0][0] ____________________________________________________________________________________________________ conv2d_1 (Conv2D) (None, 14, 14, 64) 128 input_1[0][0] ____________________________________________________________________________________________________ add_1 (Add) (None, 14, 14, 64) 0 conv2d_3[0][0] conv2d_1[0][0] ____________________________________________________________________________________________________ activation_1 (Activation) (None, 14, 14, 64) 0 add_1[0][0] ____________________________________________________________________________________________________ conv2d_5 (Conv2D) (None, 14, 14, 64) 36928 activation_1[0][0] ____________________________________________________________________________________________________ conv2d_6 (Conv2D) (None, 14, 14, 64) 36928 conv2d_5[0][0] ____________________________________________________________________________________________________ conv2d_4 (Conv2D) (None, 14, 14, 64) 4160 activation_1[0][0] ____________________________________________________________________________________________________ add_2 (Add) (None, 14, 14, 64) 0 conv2d_6[0][0] conv2d_4[0][0] ____________________________________________________________________________________________________ activation_2 (Activation) (None, 14, 14, 64) 0 add_2[0][0] ____________________________________________________________________________________________________ conv2d_8 (Conv2D) (None, 7, 7, 128) 73856 activation_2[0][0] ____________________________________________________________________________________________________ conv2d_9 (Conv2D) (None, 7, 7, 128) 147584 conv2d_8[0][0] ____________________________________________________________________________________________________ conv2d_7 (Conv2D) (None, 7, 7, 128) 8320 activation_2[0][0] ____________________________________________________________________________________________________ add_3 (Add) (None, 7, 7, 128) 0 conv2d_9[0][0] conv2d_7[0][0] ____________________________________________________________________________________________________ activation_3 (Activation) (None, 7, 7, 128) 0 add_3[0][0] ____________________________________________________________________________________________________ conv2d_11 (Conv2D) (None, 7, 7, 128) 147584 activation_3[0][0] ____________________________________________________________________________________________________ conv2d_12 (Conv2D) (None, 7, 7, 128) 147584 conv2d_11[0][0] ____________________________________________________________________________________________________ conv2d_10 (Conv2D) (None, 7, 7, 128) 16512 activation_3[0][0] ____________________________________________________________________________________________________ add_4 (Add) (None, 7, 7, 128) 0 conv2d_12[0][0] conv2d_10[0][0] ____________________________________________________________________________________________________ activation_4 (Activation) (None, 7, 7, 128) 0 add_4[0][0] 
____________________________________________________________________________________________________ conv2d_14 (Conv2D) (None, 4, 4, 256) 295168 activation_4[0][0] ____________________________________________________________________________________________________ conv2d_15 (Conv2D) (None, 4, 4, 256) 590080 conv2d_14[0][0] ____________________________________________________________________________________________________ conv2d_13 (Conv2D) (None, 4, 4, 256) 33024 activation_4[0][0] ____________________________________________________________________________________________________ add_5 (Add) (None, 4, 4, 256) 0 conv2d_15[0][0] conv2d_13[0][0] ____________________________________________________________________________________________________ activation_5 (Activation) (None, 4, 4, 256) 0 add_5[0][0] ____________________________________________________________________________________________________ conv2d_17 (Conv2D) (None, 4, 4, 256) 590080 activation_5[0][0] ____________________________________________________________________________________________________ conv2d_18 (Conv2D) (None, 4, 4, 256) 590080 conv2d_17[0][0] ____________________________________________________________________________________________________ conv2d_16 (Conv2D) (None, 4, 4, 256) 65792 activation_5[0][0] ____________________________________________________________________________________________________ add_6 (Add) (None, 4, 4, 256) 0 conv2d_18[0][0] conv2d_16[0][0] ____________________________________________________________________________________________________ activation_6 (Activation) (None, 4, 4, 256) 0 add_6[0][0] ____________________________________________________________________________________________________ flatten_1 (Flatten) (None, 4096) 0 activation_6[0][0] ____________________________________________________________________________________________________ dense_1 (Dense) (None, 200) 819400 flatten_1[0][0] ____________________________________________________________________________________________________ dropout_1 (Dropout) (None, 200) 0 dense_1[0][0] ____________________________________________________________________________________________________ dense_2 (Dense) (None, 10) 2010 dropout_1[0][0] ==================================================================================================== Total params: 3,642,786 Trainable params: 3,642,786 Non-trainable params: 0 ____________________________________________________________________________________________________
MIT
mnist/resnet_mnist.ipynb
jonathanventura/nncookbook
Set up the model to optimize the categorical crossentropy loss using stochastic gradient descent.
model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])
_____no_output_____
MIT
mnist/resnet_mnist.ipynb
jonathanventura/nncookbook
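The string 'sgd' above uses Keras' default SGD settings. If you want control over the learning rate or momentum, you can pass an explicit optimizer instance instead; the values below are illustrative, not tuned:

model.compile(loss='categorical_crossentropy',
              optimizer=SGD(lr=0.01, momentum=0.9),  # illustrative hyperparameters
              metrics=['accuracy'])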
Optimize the model over the training data.
history = model.fit(x_train, y_train, batch_size=100, epochs=20, verbose=1, validation_data=(x_test, y_test)) plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.legend(['Training Loss','Testing Loss']) plt.xlabel('Epoch') plt.ylabel('Loss') plt.show() plt.plot(history.history['acc']) plt.plot(history.history['val_acc']) plt.legend(['Training Accuracy','Testing Accuracy']) plt.xlabel('Epoch') plt.ylabel('Accuracy') plt.show()
_____no_output_____
MIT
mnist/resnet_mnist.ipynb
jonathanventura/nncookbook
Base class
#export class ProgressBar(): update_every,first_its = 0.2,5 def __init__(self, gen, total=None, display=True, leave=True, parent=None, master=None, comment=''): self.gen,self.parent,self.master,self.comment = gen,parent,master,comment self.total = len(gen) if total is None else total self.last_v = 0 if parent is None: self.leave,self.display = leave,display else: self.leave,self.display=False,False parent.add_child(self) self.last_v = None def on_iter_begin(self): if self.master is not None: self.master.on_iter_begin() def on_interrupt(self): if self.master is not None: self.master.on_interrupt() def on_iter_end(self): if self.master is not None: self.master.on_iter_end() def on_update(self, val, text): pass def __iter__(self): if self.total != 0: self.update(0) try: for i,o in enumerate(self.gen): if i >= self.total: break yield o self.update(i+1) except Exception as e: self.on_interrupt() raise e def update(self, val): if self.last_v is None: self.on_iter_begin() self.last_v = 0 if val == 0: self.start_t = self.last_t = time.time() self.pred_t,self.last_v,self.wait_for = 0,0,1 self.update_bar(0) elif val <= self.first_its or val >= self.last_v + self.wait_for or val >= self.total: cur_t = time.time() avg_t = (cur_t - self.start_t) / val self.wait_for = max(int(self.update_every / (avg_t+1e-8)),1) self.pred_t = avg_t * self.total self.last_v,self.last_t = val,cur_t self.update_bar(val) if val >= self.total: self.on_iter_end() self.last_v = None def update_bar(self, val): elapsed_t = self.last_t - self.start_t remaining_t = format_time(self.pred_t - elapsed_t) elapsed_t = format_time(elapsed_t) end = '' if len(self.comment) == 0 else f' {self.comment}' if self.total == 0: warn("Your generator is empty.") self.on_update(0, '100% [0/0]') else: self.on_update(val, f'{100 * val/self.total:.2f}% [{val}/{self.total} {elapsed_t}<{remaining_t}{end}]') class VerboseProgressBar(ProgressBar): def on_iter_begin(self): super().on_iter_begin(); print("on_iter_begin") def on_interrupt(self): print("on_interrupt") def on_iter_end(self): print("on_iter_end"); super().on_iter_end() def on_update(self, val, text): print(f"on_update {val}") from contextlib import redirect_stdout import io tst_pb = VerboseProgressBar(range(6)) s = io.StringIO() with redirect_stdout(s): for i in tst_pb: time.sleep(0.1) assert s.getvalue() == '\n'.join(['on_iter_begin'] + [f'on_update {i}' for i in range(7)] + ['on_iter_end']) + '\n' tst_pb = VerboseProgressBar(range(6)) s = io.StringIO() with redirect_stdout(s): for i in range(7): tst_pb.update(i) time.sleep(0.1) assert s.getvalue() == '\n'.join(['on_iter_begin'] + [f'on_update {i}' for i in range(7)] + ['on_iter_end']) + '\n' #export class MasterBar(ProgressBar): def __init__(self, gen, cls, total=None): self.main_bar = cls(gen, total=total, display=False, master=self) def on_iter_begin(self): pass def on_interrupt(self): pass def on_iter_end(self): pass def add_child(self, child): pass def write(self, line): pass def update_graph(self, graphs, x_bounds, y_bounds): pass def __iter__(self): for o in self.main_bar: yield o def update(self, val): self.main_bar.update(val) class VerboseMasterBar(MasterBar): def __init__(self, gen, total=None): super().__init__(gen, VerboseProgressBar, total=total) def on_iter_begin(self): print("master_on_iter_begin") def on_interrupt(self): print("master_on_interrupt") def on_iter_end(self): print("master_on_iter_end") #def on_update(self, val, text): print(f"master_on_update {val}") tst_mb = VerboseMasterBar(range(6)) for i in tst_mb: 
    time.sleep(0.1)

#hide
#Test an empty progress bar doesn't crash
for i in ProgressBar([]): pass
_____no_output_____
Apache-2.0
nbs/01_fastprogress.ipynb
hsm/fastprogress
Notebook progress bars
#export if IN_NOTEBOOK: try: from IPython.display import clear_output, display, HTML import matplotlib.pyplot as plt import ipywidgets as widgets except: warn("Couldn't import ipywidgets properly, progress bar will use console behavior") IN_NOTEBOOK = False #export class NBOutput(): def __init__(self, to_display): self.out = widgets.Output() display(self.out) with self.out: display(to_display) def update(self, to_update): with self.out: clear_output(wait=True) display(to_update) #export class NBProgressBar(ProgressBar): def on_iter_begin(self): super().on_iter_begin() self.progress = html_progress_bar(0, self.total, "") if self.display: display(HTML(html_progress_bar_styles)) self.out = NBOutput(HTML(self.progress)) self.is_active=True def on_interrupt(self): self.on_update(0, 'Interrupted', interrupted=True) super().on_interrupt() self.on_iter_end() def on_iter_end(self): if not self.leave and self.display: self.out.update(HTML('')) self.is_active=False super().on_iter_end() def on_update(self, val, text, interrupted=False): self.progress = html_progress_bar(val, self.total, text, interrupted) if self.display: self.out.update(HTML(self.progress)) elif self.parent is not None: self.parent.show() tst = NBProgressBar(range(100)) for i in tst: time.sleep(0.05) tst = NBProgressBar(range(100)) for i in range(50): time.sleep(0.05) tst.update(i) tst.on_interrupt() #hide for i in NBProgressBar([]): pass #export class NBMasterBar(MasterBar): names = ['train', 'valid'] def __init__(self, gen, total=None, hide_graph=False, order=None, clean_on_interrupt=False, total_time=False): super().__init__(gen, NBProgressBar, total) if order is None: order = ['pb1', 'text', 'pb2'] self.hide_graph,self.order = hide_graph,order self.report,self.clean_on_interrupt,self.total_time = [],clean_on_interrupt,total_time self.inner_dict = {'pb1':self.main_bar, 'text':""} self.text,self.lines = "",[] def on_iter_begin(self): self.html_code = '\n'.join([html_progress_bar(0, self.main_bar.total, ""), ""]) display(HTML(html_progress_bar_styles)) self.out = NBOutput(HTML(self.html_code)) def on_interrupt(self): if self.clean_on_interrupt: self.out.update(HTML('')) def on_iter_end(self): if hasattr(self, 'imgs_fig'): plt.close() self.imgs_out.update(self.imgs_fig) if hasattr(self, 'graph_fig'): plt.close() self.graph_out.update(self.graph_fig) if self.text.endswith('<p>'): self.text = self.text[:-3] if self.total_time: total_time = format_time(time.time() - self.main_bar.start_t) self.text = f'Total time: {total_time} <p>' + self.text if hasattr(self, 'out'): self.out.update(HTML(self.text)) def add_child(self, child): self.child = child self.inner_dict['pb2'] = self.child #self.show() def show(self): self.inner_dict['text'] = self.text to_show = [name for name in self.order if name in self.inner_dict.keys()] self.html_code = '\n'.join([getattr(self.inner_dict[n], 'progress', self.inner_dict[n]) for n in to_show]) self.out.update(HTML(self.html_code)) def write(self, line, table=False): if not table: self.text += line + "<p>" else: self.lines.append(line) self.text = text2html_table(self.lines) def show_imgs(self, imgs, titles=None, cols=4, imgsize=4, figsize=None): if self.hide_graph: return rows = len(imgs)//cols if len(imgs)%cols == 0 else len(imgs)//cols + 1 plt.close() if figsize is None: figsize = (imgsize*cols, imgsize*rows) self.imgs_fig, imgs_axs = plt.subplots(rows, cols, figsize=figsize) if titles is None: titles = [None] * len(imgs) for img, ax, title in zip(imgs, imgs_axs.flatten(), titles): img.show(ax=ax, 
title=title) for ax in imgs_axs.flatten()[len(imgs):]: ax.axis('off') if not hasattr(self, 'imgs_out'): self.imgs_out = NBOutput(self.imgs_fig) else: self.imgs_out.update(self.imgs_fig) def update_graph(self, graphs, x_bounds=None, y_bounds=None, figsize=(6,4)): if self.hide_graph: return if not hasattr(self, 'graph_fig'): self.graph_fig, self.graph_ax = plt.subplots(1, figsize=figsize) self.graph_out = NBOutput(self.graph_ax.figure) self.graph_ax.clear() if len(self.names) < len(graphs): self.names += [''] * (len(graphs) - len(self.names)) for g,n in zip(graphs,self.names): self.graph_ax.plot(*g, label=n) self.graph_ax.legend(loc='upper right') if x_bounds is not None: self.graph_ax.set_xlim(*x_bounds) if y_bounds is not None: self.graph_ax.set_ylim(*y_bounds) self.graph_out.update(self.graph_ax.figure) mb = NBMasterBar(range(5)) for i in mb: for j in NBProgressBar(range(10), parent=mb, comment=f'first bar stat'): time.sleep(0.01) #mb.child.comment = f'second bar stat' mb.write(f'Finished loop {i}.') mb = NBMasterBar(range(5)) mb.update(0) for i in range(5): for j in NBProgressBar(range(10), parent=mb): time.sleep(0.01) #mb.child.comment = f'second bar stat' mb.main_bar.comment = f'first bar stat' mb.write(f'Finished loop {i}.') mb.update(i+1)
_____no_output_____
Apache-2.0
nbs/01_fastprogress.ipynb
hsm/fastprogress
Console progress bars
#export
NO_BAR = False
WRITER_FN = print
FLUSH = True
SAVE_PATH = None
SAVE_APPEND = False
MAX_COLS = 160

#export
def printing():
    return False if NO_BAR else (stdout.isatty() or IN_NOTEBOOK)

#export
class ConsoleProgressBar(ProgressBar):
    fill:str='█'
    end:str='\r'

    def __init__(self, gen, total=None, display=True, leave=True, parent=None, master=None, txt_len=60):
        self.cols,_ = shutil.get_terminal_size((100, 40))
        if self.cols > MAX_COLS: self.cols=MAX_COLS
        self.length = self.cols-txt_len
        self.max_len,self.prefix = 0,''
        #In case the filling char returns an encoding error
        try: print(self.fill, end='\r', flush=FLUSH)
        except: self.fill = 'X'
        super().__init__(gen, total, display, leave, parent, master)

    def on_interrupt(self):
        super().on_interrupt()
        self.on_iter_end()

    def on_iter_end(self):
        if not self.leave and printing():
            print(f'\r{self.prefix}' + ' ' * (self.max_len - len(f'\r{self.prefix}')), end='\r', flush=FLUSH)
        super().on_iter_end()

    def on_update(self, val, text):
        if self.display:
            if self.length > self.cols-len(text)-len(self.prefix)-4:
                self.length = self.cols-len(text)-len(self.prefix)-4
            filled_len = int(self.length * val // self.total) if self.total else 0
            bar = self.fill * filled_len + '-' * (self.length - filled_len)
            to_write = f'\r{self.prefix} |{bar}| {text}'
            if val >= self.total: end = '\r'
            else: end = self.end
            if len(to_write) > self.max_len: self.max_len=len(to_write)
            if printing(): WRITER_FN(to_write, end=end, flush=FLUSH)

tst = ConsoleProgressBar(range(100))
for i in tst: time.sleep(0.05)

tst = ConsoleProgressBar(range(100))
for i in range(50):
    time.sleep(0.05)
    tst.update(i)
tst.on_interrupt()

#export
def print_and_maybe_save(line):
    WRITER_FN(line)
    if SAVE_PATH is not None:
        attr = "a" if os.path.exists(SAVE_PATH) else "w"
        with open(SAVE_PATH, attr) as f: f.write(line + '\n')

#export
class ConsoleMasterBar(MasterBar):
    def __init__(self, gen, total=None, hide_graph=False, order=None, clean_on_interrupt=False, total_time=False):
        super().__init__(gen, ConsoleProgressBar, total)
        self.total_time = total_time

    def add_child(self, child):
        self.child = child
        v = 0 if self.main_bar.last_v is None else self.main_bar.last_v
        self.child.prefix = f'Epoch {v+1}/{self.main_bar.total} :'
        self.child.display = True

    def on_iter_begin(self):
        super().on_iter_begin()
        if SAVE_PATH is not None and os.path.exists(SAVE_PATH) and not SAVE_APPEND:
            with open(SAVE_PATH, 'w') as f: f.write('')

    def write(self, line, table=False):
        if table:
            text = ''
            if not hasattr(self, 'names'):
                self.names = [name + ' ' * (8-len(name)) if len(name) < 8 else name for name in line]
                text = '  '.join(self.names)
            else:
                for (t,name) in zip(line,self.names): text += t + ' ' * (2 + len(name)-len(t))
            print_and_maybe_save(text)
        else: print_and_maybe_save(line)
        if self.total_time:
            # `time` is the module here, and start_t lives on the main bar
            total_time = format_time(time.time() - self.main_bar.start_t)
            print_and_maybe_save(f'Total time: {total_time}')

    def show_imgs(*args, **kwargs): pass
    def update_graph(*args, **kwargs): pass

mb = ConsoleMasterBar(range(5))
for i in mb:
    for j in ConsoleProgressBar(range(10), parent=mb):
        time.sleep(0.01)
        #mb.child.comment = f'second bar stat'
    mb.main_bar.comment = f'first bar stat'
    mb.write(f'Finished loop {i}.')

mb = ConsoleMasterBar(range(5))
mb.update(0)
for i in range(5):
    for j in ConsoleProgressBar(range(10), parent=mb):
        time.sleep(0.01)
        #mb.child.comment = f'second bar stat'
    mb.main_bar.comment = f'first bar stat'
    mb.write(f'Finished loop {i}.')
    mb.update(i+1)

# confirming a kwarg can be passed to ConsoleMasterBar instance
mb.update_graph([[1,2],[3,4]], figsize=(10,5,))
mb.show_imgs(figsize=(10,5,)) #export if IN_NOTEBOOK: master_bar, progress_bar = NBMasterBar, NBProgressBar else: master_bar, progress_bar = ConsoleMasterBar, ConsoleProgressBar #export _all_ = ['master_bar', 'progress_bar'] #export def force_console_behavior(): "Return the console progress bars" return ConsoleMasterBar, ConsoleProgressBar #export def workaround_empty_console_output(): "Change console output behaviour to correctly show progress in consoles not recognizing \r at the end of line" ConsoleProgressBar.end = ''
_____no_output_____
Apache-2.0
nbs/01_fastprogress.ipynb
hsm/fastprogress
Export
from nbdev.export import notebook2script notebook2script()
_____no_output_____
Apache-2.0
nbs/01_fastprogress.ipynb
hsm/fastprogress
Importing dependencies
import numpy as np import pandas as pd #import matplotlib.pyplot as plt #import seaborn as sns from sklearn.preprocessing import StandardScaler from sklearn.model_selection import train_test_split from sklearn import svm from sklearn.metrics import accuracy_score #from sklearn.cluster import KMeans #from sklearn.model_selection import train_test_split #from sklearn.ensemble import RandomForestRegressor #from sklearn import metrics
_____no_output_____
MIT
Prediction_diabet.ipynb
bsyllaisidk/Architecture-L
Loading the data from the CSV file into a pandas DataFrame
diabet_data = pd.read_csv('/content/diabete.csv')
_____no_output_____
MIT
Prediction_diabet.ipynb
bsyllaisidk/Architecture-L
Reading the table data
# pd.read_csv is the pandas function used above to load the CSV;
# evaluating it alone here only references the function
pd.read_csv
_____no_output_____
MIT
Prediction_diabet.ipynb
bsyllaisidk/Architecture-L
The first 5 rows of the table
diabet_data.head()

# number of rows and columns in our dataset
diabet_data.shape

# statistical summary of the data
diabet_data.describe()
_____no_output_____
MIT
Prediction_diabet.ipynb
bsyllaisidk/Architecture-L
Counts of diabetic and non-diabetic patients:

0 --> non-diabetic
1 --> diabetic
diabet_data['diabete'].value_counts()

diabet_data.groupby('diabete').mean()

# separating the features and the labels
X = diabet_data.drop(columns='diabete', axis=1)
Y = diabet_data['diabete']

# inspect X and Y
print(X)
print(Y)
0 1 1 0 2 1 3 0 4 1 .. 763 0 764 0 765 0 766 1 767 0 Name: diabete, Length: 768, dtype: int64
MIT
Prediction_diabet.ipynb
bsyllaisidk/Architecture-L
Normalizing the data.
scaler = StandardScaler()
scaler.fit(X)
standardized_date = scaler.transform(X)
print(standardized_date)

X = standardized_date
Y = diabet_data['diabete']
print(X)
print(Y)

# train/test split, stratified on the label
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, stratify=Y, random_state=2)
print(X.shape, X_train.shape, X_test.shape)
(768, 8) (614, 8) (154, 8)
MIT
Prediction_diabet.ipynb
bsyllaisidk/Architecture-L
Training the model
classifier = svm.SVC(kernel='linear')
_____no_output_____
MIT
Prediction_diabet.ipynb
bsyllaisidk/Architecture-L
Training the support vector machine classifier
classifier.fit(X_train, Y_train)
_____no_output_____
MIT
Prediction_diabet.ipynb
bsyllaisidk/Architecture-L
Model evaluation

Accuracy score
# accuracy score on the training data
X_train_prediction = classifier.predict(X_train)
training_data_accuracy = accuracy_score(X_train_prediction, Y_train)
print('Accuracy score on the training data: ', training_data_accuracy)

# accuracy score on the test data
X_test_prediction = classifier.predict(X_test)
test_data_accuracy = accuracy_score(X_test_prediction, Y_test)
print('Accuracy score on the test data: ', test_data_accuracy)
Accuracy score on the test data:  0.7727272727272727
MIT
Prediction_diabet.ipynb
bsyllaisidk/Architecture-L
Building a predictive system
input_data = (2,197,70,45,543,30.5,0.158,53)

# convert input_data to a numpy array
input_data_as_numpy_array = np.asarray(input_data)

# reshape the array since we are predicting a single instance
input_data_reshaped = input_data_as_numpy_array.reshape(1,-1)

# standardize the input data
std_date = scaler.transform(input_data_reshaped)
print(std_date)

prediction = classifier.predict(std_date)
print(prediction)

if (prediction[0] == 0):
    print('The person is not diabetic')
else:
    print('The person is diabetic')
[[-0.54791859  2.38188392  0.04624525  1.53455054  4.02192191 -0.18943689
  -0.94794368  1.68125866]]
[1]
The person is diabetic
MIT
Prediction_diabet.ipynb
bsyllaisidk/Architecture-L
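The steps above (array → reshape → standardize → predict) can be wrapped into a small helper so a new measurement is classified in one call. This is a convenience sketch reusing the `scaler` and `classifier` fitted earlier:

def predict_diabetes(measurements):
    # measurements: the 8 feature values in the same order as the training columns
    arr = np.asarray(measurements, dtype=float).reshape(1, -1)
    return int(classifier.predict(scaler.transform(arr))[0])  # 0 = not diabetic, 1 = diabetic

print(predict_diabetes((2, 197, 70, 45, 543, 30.5, 0.158, 53)))  # 1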
Breakdown of lethality
import pickle import pandas as pd import os import seaborn as sns import matplotlib.pyplot as plt import numpy as np from ast import literal_eval
_____no_output_____
MIT
notebooks/10b-anlyz_run02-synthetic_lethal_classes-feat1.ipynb
pritchardlabatpsu/cga