Dataset columns: markdown, code, output, license, path, repo_name
Here you can see that although the finite-difference formula is fast for computing gradients in the analytical case, it was far too noisy for the sampling-based methods. More careful techniques must be used to ensure a good gradient can be calculated. Next you will look at a much slower technique that wouldn't be as well suited to analytical expectation-gradient calculations, but performs much better in the real-world, sample-based case:
# A smarter differentiation scheme. gradient_safe_sampled_expectation = tfq.layers.SampledExpectation( differentiator=tfq.differentiators.ParameterShift()) with tf.GradientTape() as g: g.watch(values_tensor) imperfect_outputs = gradient_safe_sampled_expectation( my_circuit, operators=pauli_x, repetitions=500, symbol_names=['alpha'], symbol_values=values_tensor) sampled_param_shift_gradients = g.gradient(imperfect_outputs, values_tensor) plt.title('Gradient Values') plt.xlabel('$x$') plt.ylabel('$f^{\'}(x)$') plt.plot(input_points, analytic_finite_diff_gradients, label='Analytic') plt.plot(input_points, sampled_param_shift_gradients, label='Sampled') plt.legend()
_____no_output_____
Apache-2.0
docs/tutorials/gradients.ipynb
HectorIGH/quantum
From the above you can see that certain differentiators are best used for particular research scenarios. In general, the slower sample-based methods that are robust to device noise, etc., are great differentiators when testing or implementing algorithms in a more "real world" setting. Faster methods like finite difference are great when you want analytical calculations and higher throughput, but aren't yet concerned with the device viability of your algorithm.

3. Multiple observables

Let's introduce a second observable and see how TensorFlow Quantum supports multiple observables for a single circuit.
pauli_z = cirq.Z(qubit)
pauli_z
_____no_output_____
Apache-2.0
docs/tutorials/gradients.ipynb
HectorIGH/quantum
If this observable is used with the same circuit as before, then you have $f_{2}(\alpha) = ⟨Y(\alpha)| Z | Y(\alpha)⟩ = \cos(\pi \alpha)$ and $f_{2}^{'}(\alpha) = -\pi \sin(\pi \alpha)$. Perform a quick check:
test_value = 0.

print('Finite difference:', my_grad(pauli_z, test_value))
print('Sin formula:      ', -np.pi * np.sin(np.pi * test_value))
_____no_output_____
Apache-2.0
docs/tutorials/gradients.ipynb
HectorIGH/quantum
It's a match (close enough). Now if you define $g(\alpha) = f_{1}(\alpha) + f_{2}(\alpha)$ then $g'(\alpha) = f_{1}^{'}(\alpha) + f^{'}_{2}(\alpha)$. Defining more than one observable in TensorFlow Quantum to use along with a circuit is equivalent to adding more terms to $g$.

This means that the gradient of a particular symbol in a circuit is equal to the sum of the gradients with regard to each observable for that symbol applied to that circuit. This is compatible with TensorFlow gradient taking and backpropagation (where you give the sum of the gradients over all observables as the gradient for a particular symbol).
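Using $f_{1}(\alpha) = \sin(\pi \alpha)$ from the first part of this tutorial together with $f_{2}$ above, the quantity the next cells compute can be worked out by hand (my arithmetic, added here only for reference):

$$g^{'}(\alpha) = f_{1}^{'}(\alpha) + f_{2}^{'}(\alpha) = \pi \cos(\pi \alpha) - \pi \sin(\pi \alpha), \qquad g^{'}(0) = \pi \approx 3.1416$$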
sum_of_outputs = tfq.layers.Expectation( differentiator=tfq.differentiators.ForwardDifference(grid_spacing=0.01)) sum_of_outputs(my_circuit, operators=[pauli_x, pauli_z], symbol_names=['alpha'], symbol_values=[[test_value]])
_____no_output_____
Apache-2.0
docs/tutorials/gradients.ipynb
HectorIGH/quantum
Here you see the first entry is the expectation w.r.t Pauli X, and the second is the expectation w.r.t Pauli Z. Now when you take the gradient:
test_value_tensor = tf.convert_to_tensor([[test_value]]) with tf.GradientTape() as g: g.watch(test_value_tensor) outputs = sum_of_outputs(my_circuit, operators=[pauli_x, pauli_z], symbol_names=['alpha'], symbol_values=test_value_tensor) sum_of_gradients = g.gradient(outputs, test_value_tensor) print(my_grad(pauli_x, test_value) + my_grad(pauli_z, test_value)) print(sum_of_gradients.numpy())
_____no_output_____
Apache-2.0
docs/tutorials/gradients.ipynb
HectorIGH/quantum
Here you have verified that the sum of the gradients for each observable is indeed the gradient of $\alpha$. This behavior is supported by all TensorFlow Quantum differentiators and plays a crucial role in the compatibility with the rest of TensorFlow.

4. Advanced usage

Here you will learn how to define your own custom differentiation routines for quantum circuits. All differentiators that exist inside of TensorFlow Quantum subclass `tfq.differentiators.Differentiator`. A differentiator must implement `differentiate_analytic` and `differentiate_sampled`. The following uses TensorFlow Quantum constructs to implement the closed-form solution from the first part of this tutorial.
class MyDifferentiator(tfq.differentiators.Differentiator): """A Toy differentiator for <Y^alpha | X |Y^alpha>.""" def __init__(self): pass @tf.function def get_gradient_circuits(self, programs, symbol_names, symbol_values): """Return circuits to compute gradients for given forward pass circuits. When implementing a gradient, it is often useful to describe the intermediate computations in terms of transformed versions of the input circuits. The details are beyond the scope of this tutorial, but interested users should check out the differentiator implementations in the TFQ library for examples. """ raise NotImplementedError( "Gradient circuits are not implemented in this tutorial.") @tf.function def _compute_gradient(self, symbol_values): """Compute the gradient based on symbol_values.""" # f(x) = sin(pi * x) # f'(x) = pi * cos(pi * x) return tf.cast(tf.cos(symbol_values * np.pi) * np.pi, tf.float32) @tf.function def differentiate_analytic(self, programs, symbol_names, symbol_values, pauli_sums, forward_pass_vals, grad): """Specify how to differentiate a circuit with analytical expectation. This is called at graph runtime by TensorFlow. `differentiate_analytic` should calculate the gradient of a batch of circuits and return it formatted as indicated below. See `tfq.differentiators.ForwardDifference` for an example. Args: programs: `tf.Tensor` of strings with shape [batch_size] containing the string representations of the circuits to be executed. symbol_names: `tf.Tensor` of strings with shape [n_params], which is used to specify the order in which the values in `symbol_values` should be placed inside of the circuits in `programs`. symbol_values: `tf.Tensor` of real numbers with shape [batch_size, n_params] specifying parameter values to resolve into the circuits specified by programs, following the ordering dictated by `symbol_names`. pauli_sums: `tf.Tensor` of strings with shape [batch_size, n_ops] containing the string representation of the operators that will be used on all of the circuits in the expectation calculations. forward_pass_vals: `tf.Tensor` of real numbers with shape [batch_size, n_ops] containing the output of the forward pass through the op you are differentiating. grad: `tf.Tensor` of real numbers with shape [batch_size, n_ops] representing the gradient backpropagated to the output of the op you are differentiating through. Returns: A `tf.Tensor` with the same shape as `symbol_values` representing the gradient backpropagated to the `symbol_values` input of the op you are differentiating through. """ # Computing gradients just based off of symbol_values. return self._compute_gradient(symbol_values) * grad @tf.function def differentiate_sampled(self, programs, symbol_names, symbol_values, pauli_sums, num_samples, forward_pass_vals, grad): """Specify how to differentiate a circuit with sampled expectation. This is called at graph runtime by TensorFlow. `differentiate_sampled` should calculate the gradient of a batch of circuits and return it formatted as indicated below. See `tfq.differentiators.ForwardDifference` for an example. Args: programs: `tf.Tensor` of strings with shape [batch_size] containing the string representations of the circuits to be executed. symbol_names: `tf.Tensor` of strings with shape [n_params], which is used to specify the order in which the values in `symbol_values` should be placed inside of the circuits in `programs`. 
symbol_values: `tf.Tensor` of real numbers with shape [batch_size, n_params] specifying parameter values to resolve into the circuits specified by programs, following the ordering dictated by `symbol_names`. pauli_sums: `tf.Tensor` of strings with shape [batch_size, n_ops] containing the string representation of the operators that will be used on all of the circuits in the expectation calculations. num_samples: `tf.Tensor` of positive integers representing the number of samples per term in each term of pauli_sums used during the forward pass. forward_pass_vals: `tf.Tensor` of real numbers with shape [batch_size, n_ops] containing the output of the forward pass through the op you are differentiating. grad: `tf.Tensor` of real numbers with shape [batch_size, n_ops] representing the gradient backpropagated to the output of the op you are differentiating through. Returns: A `tf.Tensor` with the same shape as `symbol_values` representing the gradient backpropagated to the `symbol_values` input of the op you are differentiating through. """ return self._compute_gradient(symbol_values) * grad
_____no_output_____
Apache-2.0
docs/tutorials/gradients.ipynb
HectorIGH/quantum
This new differentiator can now be used with existing `tfq.layer` objects:
custom_dif = MyDifferentiator() custom_grad_expectation = tfq.layers.Expectation(differentiator=custom_dif) # Now let's get the gradients with finite diff. with tf.GradientTape() as g: g.watch(values_tensor) exact_outputs = expectation_calculation(my_circuit, operators=[pauli_x], symbol_names=['alpha'], symbol_values=values_tensor) analytic_finite_diff_gradients = g.gradient(exact_outputs, values_tensor) # Now let's get the gradients with custom diff. with tf.GradientTape() as g: g.watch(values_tensor) my_outputs = custom_grad_expectation(my_circuit, operators=[pauli_x], symbol_names=['alpha'], symbol_values=values_tensor) my_gradients = g.gradient(my_outputs, values_tensor) plt.subplot(1, 2, 1) plt.title('Exact Gradient') plt.plot(input_points, analytic_finite_diff_gradients.numpy()) plt.xlabel('x') plt.ylabel('f(x)') plt.subplot(1, 2, 2) plt.title('My Gradient') plt.plot(input_points, my_gradients.numpy()) plt.xlabel('x')
_____no_output_____
Apache-2.0
docs/tutorials/gradients.ipynb
HectorIGH/quantum
This new differentiator can now be used to generate differentiable ops.

Key Point: A differentiator that has been previously attached to an op must be refreshed before attaching to a new op, because a differentiator may only be attached to one op at a time.
# Create a noisy sample based expectation op. expectation_sampled = tfq.get_sampled_expectation_op( cirq.DensityMatrixSimulator(noise=cirq.depolarize(0.01))) # Make it differentiable with your differentiator: # Remember to refresh the differentiator before attaching the new op custom_dif.refresh() differentiable_op = custom_dif.generate_differentiable_op( sampled_op=expectation_sampled) # Prep op inputs. circuit_tensor = tfq.convert_to_tensor([my_circuit]) op_tensor = tfq.convert_to_tensor([[pauli_x]]) single_value = tf.convert_to_tensor([[my_alpha]]) num_samples_tensor = tf.convert_to_tensor([[1000]]) with tf.GradientTape() as g: g.watch(single_value) forward_output = differentiable_op(circuit_tensor, ['alpha'], single_value, op_tensor, num_samples_tensor) my_gradients = g.gradient(forward_output, single_value) print('---TFQ---') print('Foward: ', forward_output.numpy()) print('Gradient:', my_gradients.numpy()) print('---Original---') print('Forward: ', my_expectation(pauli_x, my_alpha)) print('Gradient:', my_grad(pauli_x, my_alpha))
_____no_output_____
Apache-2.0
docs/tutorials/gradients.ipynb
HectorIGH/quantum
Now You Code 2: Is That An Email Address?

Let's use Python's built-in string functions to write our own function to detect if a string is an email address. The function `isEmail(text)` should return `True` when `text` is an email address, `False` otherwise. For simplicity's sake we will define an email address to be any string with just ONE `@` symbol in it, where the `@` is not at the beginning or end of the string. So `a@b` is considered an email (even though it really isn't). The program should detect emails until you enter quit. Sample run:

```
Email address detector. Type quit to exit.
Email: mafudge@syr.edu
mafudge@syr.edu ==> email
Email: mafudge@
mafudge@ ==> NOT EMAIL
Email: mafudge
mafudge ==> NOT EMAIL
Email: @syr.edu
@syr.edu ==> NOT EMAIL
Email: @
@ ==> NOT EMAIL
Email: mafudge@@syr.edu
mafudge@@syr.edu ==> NOT EMAIL
Email: mafudge@syr@edu
mafudge@syr@edu ==> NOT EMAIL
```

Once again we will use the problem simplification technique to write this program. First we will write the `isEmail(text)` function, then we will write the main program.

Step 1: Problem Analysis for isEmail function only
Inputs (function arguments):
Outputs (what it returns):
Algorithm (Steps in Function):
## Step 2: TODO write the function definition for the isEmail function

## Step 3: Write some tests to ensure the function works, for example:
## Make sure to test all cases!
print("WHEN text=mike@syr.edu We EXPECT isEmail(text) to return True", "ACTUAL", isEmail("mike@syr.edu"))
print("WHEN text=mike@ We EXPECT isEmail(text) to return False", "ACTUAL", isEmail("mike@"))
_____no_output_____
MIT
content/lessons/07/Now-You-Code/NYC2-Email-Address.ipynb
IST256-classroom/fall2018-learn-python-mafudge
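The function body itself is left as an exercise above. Purely as a reference sketch (not the official solution), one way to satisfy the stated rule, exactly one `@` that is neither the first nor the last character, is:

```python
# Reference sketch only: one possible isEmail implementation for the rule above.
def isEmail(text):
    # exactly one '@', and it is neither the first nor the last character
    return text.count('@') == 1 and not text.startswith('@') and not text.endswith('@')

# Quick checks against the sample run:
print(isEmail("mafudge@syr.edu"))   # True
print(isEmail("mafudge@"))          # False: '@' at the end
print(isEmail("@syr.edu"))          # False: '@' at the start
print(isEmail("mafudge@@syr.edu"))  # False: two '@' symbols
```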
Step 4: Problem Analysis for full Program
Inputs:
Outputs:
Algorithm (Steps in Program):
## Step 5: todo write code for full problem, using the isEmail function to help you solve the problem
_____no_output_____
MIT
content/lessons/07/Now-You-Code/NYC2-Email-Address.ipynb
IST256-classroom/fall2018-learn-python-mafudge
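For the full program, a possible input loop (again only a reference sketch, assuming the `isEmail` helper from the sketch above) that reproduces the sample run's format is:

```python
# Reference sketch: keep reading addresses until the user types quit.
print("Email address detector. Type quit to exit.")
while True:
    text = input("Email: ")
    if text == 'quit':
        break
    if isEmail(text):
        print("%s ==> email" % text)
    else:
        print("%s ==> NOT EMAIL" % text)
```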
Example notebook for training a U-net deep learning network to predict tree cover

This notebook presents a toy example for training a deep learning architecture for semantic segmentation of satellite images using `eo-learn` and `keras`. The example showcases tree cover prediction over an area in France. The ground-truth data is retrieved from the [EU tree cover density (2015)](https://land.copernicus.eu/pan-european/high-resolution-layers/forests/view) through [Geopedia](http://www.geopedia.world/T235_L2081_x449046.043261205_y6052157.300792162_s15_b17).

The workflow is as follows:
* input the area-of-interest (AOI)
* split the AOI into small, manageable eopatches
* for each eopatch:
  * download RGB bands from Sentinel-2 L2A products for the year 2017 using Sentinel Hub
  * retrieve the corresponding ground-truth from Geopedia using a WMS request
  * compute the median values for the RGB bands over the time interval
  * save to disk
* select a 256x256 patch with corresponding ground-truth to be used for training/validating the model
* train and validate a U-net

This example can easily be expanded to:
* larger AOIs
* include more/different bands/indices, such as NDVI
* include Sentinel-1 images (after harmonisation with Sentinel-2)

The notebook requires `Keras` with the `tensorflow` back-end.
import os import datetime from os import path as op import itertools from eolearn.io import * from eolearn.core import EOTask, EOPatch, LinearWorkflow, FeatureType, SaveToDisk, OverwritePermission from sentinelhub import BBox, CRS, BBoxSplitter, MimeType, ServiceType from tqdm import tqdm_notebook as tqdm import matplotlib.pyplot as plt import numpy as np import geopandas from sklearn.metrics import confusion_matrix from keras.preprocessing.image import ImageDataGenerator from keras import backend as K from keras.models import * from keras.layers import * from keras.optimizers import * from keras.utils.np_utils import to_categorical K.clear_session()
Using TensorFlow backend.
MIT
examples/tree-cover-keras/tree-cover-keras.ipynb
Gnilliw/eo-learn
1. Set up workflow
# global image request parameters time_interval = ('2017-01-01', '2017-12-31') img_width = 256 img_height = 256 maxcc = 0.2 # get the AOI and split into bboxes crs = CRS.UTM_31N aoi = geopandas.read_file('../../example_data/eastern_france.geojson') aoi = aoi.to_crs(crs=crs.pyproj_crs()) aoi_shape = aoi.geometry.values[-1] bbox_splitter = BBoxSplitter([aoi_shape], crs, (19, 10)) # set raster_value conversions for our Geopedia task # see more about how to do this here: raster_value = { '0%': (0, [0, 0, 0, 0]), '10%': (1, [163, 235, 153, 255]), '30%': (2, [119, 195, 118, 255]), '50%': (3, [85, 160, 89, 255]), '70%': (4, [58, 130, 64, 255]), '90%': (5, [36, 103, 44, 255]) } import matplotlib as mpl tree_cmap = mpl.colors.ListedColormap(['#F0F0F0', '#A2EB9B', '#77C277', '#539F5B', '#388141', '#226528']) tree_cmap.set_over('white') tree_cmap.set_under('white') bounds = np.arange(-0.5, 6, 1).tolist() tree_norm = mpl.colors.BoundaryNorm(bounds, tree_cmap.N) # create a task for calculating a median pixel value class MedianPixel(EOTask): """ The task returns a pixelwise median value from a time-series and stores the results in a timeless data array. """ def __init__(self, feature, feature_out): self.feature_type, self.feature_name = next(self._parse_features(feature)()) self.feature_type_out, self.feature_name_out = next(self._parse_features(feature_out)()) def execute(self, eopatch): eopatch.add_feature(self.feature_type_out, self.feature_name_out, np.median(eopatch[self.feature_type][self.feature_name], axis=0)) return eopatch # initialize tasks # task to get S2 L2A images input_task = SentinelHubInputTask(data_source=DataSource.SENTINEL2_L2A, bands_feature=(FeatureType.DATA, 'BANDS'), resolution=10, maxcc=0.2, bands=['B04', 'B03', 'B02'], time_difference=datetime.timedelta(hours=2), additional_data=[(FeatureType.MASK, 'dataMask', 'IS_DATA')] ) geopedia_data = AddGeopediaFeature((FeatureType.MASK_TIMELESS, 'TREE_COVER'), layer='ttl2275', theme='QP', raster_value=raster_value) # task to compute median values get_median_pixel = MedianPixel((FeatureType.DATA, 'BANDS'), feature_out=(FeatureType.DATA_TIMELESS, 'MEDIAN_PIXEL')) # task to save to disk save = SaveTask(op.join('data', 'eopatch'), overwrite_permission=OverwritePermission.OVERWRITE_PATCH, compress_level=2) # initialize workflow workflow = LinearWorkflow(input_task, geopedia_data, get_median_pixel, save) # use a function to run this workflow on a single bbox def execute_workflow(index): bbox = bbox_splitter.bbox_list[index] info = bbox_splitter.info_list[index] patch_name = 'eopatch_{0}_row-{1}_col-{2}'.format(index, info['index_x'], info['index_y']) results = workflow.execute({input_task:{'bbox':bbox, 'time_interval':time_interval}, save:{'eopatch_folder':patch_name} }) return list(results.values())[-1] del results
_____no_output_____
MIT
examples/tree-cover-keras/tree-cover-keras.ipynb
Gnilliw/eo-learn
Test workflow on an example patch and display
idx = 168 example_patch = execute_workflow(idx) example_patch mp = example_patch.data_timeless['MEDIAN_PIXEL'] plt.figure(figsize=(15,15)) plt.imshow(2.5*mp) tc = example_patch.mask_timeless['TREE_COVER'] plt.imshow(tc[...,0], vmin=0, vmax=5, alpha=.5, cmap=tree_cmap) plt.colorbar()
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
MIT
examples/tree-cover-keras/tree-cover-keras.ipynb
Gnilliw/eo-learn
2. Run workflow on all patches
# run over multiple bboxes subset_idx = len(bbox_splitter.bbox_list) x_train_raw = np.empty((subset_idx, img_height, img_width, 3)) y_train_raw = np.empty((subset_idx, img_height, img_width, 1)) pbar = tqdm(total=subset_idx) for idx in range(0, subset_idx): patch = execute_workflow(idx) x_train_raw[idx] = patch.data_timeless['MEDIAN_PIXEL'][20:276,0:256,:] y_train_raw[idx] = patch.mask_timeless['TREE_COVER'][20:276,0:256,:] pbar.update(1)
_____no_output_____
MIT
examples/tree-cover-keras/tree-cover-keras.ipynb
Gnilliw/eo-learn
3. Create training and validation data arrays
# data normalization and augmentation
img_mean = np.mean(x_train_raw, axis=(0, 1, 2))
img_std = np.std(x_train_raw, axis=(0, 1, 2))
# standardize: subtract the per-channel mean, then divide by the per-channel std
x_train_mean = x_train_raw - img_mean
x_train = x_train_mean / img_std

train_gen = ImageDataGenerator(
    horizontal_flip=True,
    vertical_flip=True,
    rotation_range=180)

y_train = to_categorical(y_train_raw, len(raster_value))
_____no_output_____
MIT
examples/tree-cover-keras/tree-cover-keras.ipynb
Gnilliw/eo-learn
4. Set up U-net model using Keras (tensorflow back-end)
# Model setup #from https://www.kaggle.com/lyakaap/weighing-boundary-pixels-loss-script-by-keras2 # weight: weighted tensor(same shape with mask image) def weighted_bce_loss(y_true, y_pred, weight): # avoiding overflow epsilon = 1e-7 y_pred = K.clip(y_pred, epsilon, 1. - epsilon) logit_y_pred = K.log(y_pred / (1. - y_pred)) # https://www.tensorflow.org/api_docs/python/tf/nn/weighted_cross_entropy_with_logits loss = (1. - y_true) * logit_y_pred + (1. + (weight - 1.) * y_true) * \ (K.log(1. + K.exp(-K.abs(logit_y_pred))) + K.maximum(-logit_y_pred, 0.)) return K.sum(loss) / K.sum(weight) def weighted_dice_loss(y_true, y_pred, weight): smooth = 1. w, m1, m2 = weight * weight, y_true, y_pred intersection = (m1 * m2) score = (2. * K.sum(w * intersection) + smooth) / (K.sum(w * m1) + K.sum(w * m2) + smooth) loss = 1. - K.sum(score) return loss def weighted_bce_dice_loss(y_true, y_pred): y_true = K.cast(y_true, 'float32') y_pred = K.cast(y_pred, 'float32') # if we want to get same size of output, kernel size must be odd number averaged_mask = K.pool2d( y_true, pool_size=(11, 11), strides=(1, 1), padding='same', pool_mode='avg') border = K.cast(K.greater(averaged_mask, 0.005), 'float32') * K.cast(K.less(averaged_mask, 0.995), 'float32') weight = K.ones_like(averaged_mask) w0 = K.sum(weight) weight += border * 2 w1 = K.sum(weight) weight *= (w0 / w1) loss = weighted_bce_loss(y_true, y_pred, weight) + \ weighted_dice_loss(y_true, y_pred, weight) return loss def unet(input_size): inputs = Input(input_size) conv1 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(inputs) conv1 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv1) pool1 = MaxPooling2D(pool_size=(2, 2))(conv1) conv2 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool1) conv2 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv2) pool2 = MaxPooling2D(pool_size=(2, 2))(conv2) conv3 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool2) conv3 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv3) pool3 = MaxPooling2D(pool_size=(2, 2))(conv3) conv4 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool3) conv4 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv4) drop4 = Dropout(0.5)(conv4) pool4 = MaxPooling2D(pool_size=(2, 2))(drop4) conv5 = Conv2D(1024, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool4) conv5 = Conv2D(1024, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv5) drop5 = Dropout(0.5)(conv5) up6 = Conv2D(512, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(drop5)) merge6 = concatenate([drop4,up6]) conv6 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge6) conv6 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv6) up7 = Conv2D(256, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv6)) merge7 = concatenate([conv3,up7]) conv7 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge7) conv7 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv7) up8 = Conv2D(128, 2, 
activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv7)) merge8 = concatenate([conv2,up8]) conv8 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge8) conv8 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv8) up9 = Conv2D(64, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv8)) merge9 = concatenate([conv1,up9]) conv9 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge9) conv9 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv9) conv10 = Conv2D(len(raster_value), 1, activation = 'softmax')(conv9) model = Model(inputs = inputs, outputs = conv10) model.compile(optimizer = Adam(lr = 1e-4), loss = weighted_bce_dice_loss, metrics = ['accuracy']) return model model = unet(input_size=(256, 256, 3))
_____no_output_____
MIT
examples/tree-cover-keras/tree-cover-keras.ipynb
Gnilliw/eo-learn
5. Train the model
# Fit the model batch_size = 16 model.fit_generator( train_gen.flow(x_train, y_train, batch_size=batch_size), steps_per_epoch=len(x_train), epochs=20, verbose=1) model.save(op.join('model.h5'))
Epoch 1/20 190/190 [==============================] - 419s 2s/step - loss: 0.9654 - acc: 0.6208 Epoch 2/20 190/190 [==============================] - 394s 2s/step - loss: 0.9242 - acc: 0.6460 Epoch 3/20 190/190 [==============================] - 394s 2s/step - loss: 0.9126 - acc: 0.6502 Epoch 4/20 190/190 [==============================] - 393s 2s/step - loss: 0.9059 - acc: 0.6540 Epoch 5/20 190/190 [==============================] - 393s 2s/step - loss: 0.9051 - acc: 0.6575 Epoch 6/20 190/190 [==============================] - 393s 2s/step - loss: 0.8940 - acc: 0.6620 Epoch 7/20 190/190 [==============================] - 394s 2s/step - loss: 0.8896 - acc: 0.6654 Epoch 8/20 190/190 [==============================] - 393s 2s/step - loss: 0.8870 - acc: 0.6668 Epoch 9/20 190/190 [==============================] - 393s 2s/step - loss: 0.8844 - acc: 0.6671 Epoch 10/20 190/190 [==============================] - 393s 2s/step - loss: 0.8781 - acc: 0.6708 Epoch 11/20 190/190 [==============================] - 393s 2s/step - loss: 0.8712 - acc: 0.6766 Epoch 12/20 190/190 [==============================] - 393s 2s/step - loss: 0.8671 - acc: 0.6801 Epoch 13/20 190/190 [==============================] - 394s 2s/step - loss: 0.8606 - acc: 0.6827 Epoch 14/20 190/190 [==============================] - 393s 2s/step - loss: 0.8489 - acc: 0.6919 Epoch 15/20 190/190 [==============================] - 394s 2s/step - loss: 0.8393 - acc: 0.6982 Epoch 16/20 190/190 [==============================] - 394s 2s/step - loss: 0.8279 - acc: 0.7063 Epoch 17/20 190/190 [==============================] - 394s 2s/step - loss: 0.8160 - acc: 0.7129 Epoch 18/20 190/190 [==============================] - 394s 2s/step - loss: 0.8020 - acc: 0.7233 Epoch 19/20 190/190 [==============================] - 394s 2s/step - loss: 0.7849 - acc: 0.7342 Epoch 20/20 190/190 [==============================] - 394s 2s/step - loss: 0.7781 - acc: 0.7397
MIT
examples/tree-cover-keras/tree-cover-keras.ipynb
Gnilliw/eo-learn
6. Validate model and show some results
# plot one example (image, label, prediction) idx = 4 p = np.argmax(model.predict(np.array([x_train[idx]])), axis=3) fig = plt.figure(figsize=(12,4)) ax1 = fig.add_subplot(1,3,1) ax1.imshow(x_train_raw[idx]) ax2 = fig.add_subplot(1,3,2) ax2.imshow(y_train_raw[idx][:,:,0]) ax3 = fig.add_subplot(1,3,3) ax3.imshow(p[0]) def plot_confusion_matrix(cm, classes, normalize=False, title='Confusion matrix', cmap=plt.cm.Blues): """ This function prints and plots the confusion matrix. Normalization can be applied by setting `normalize=True`. """ if normalize: cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis] print("Normalized confusion matrix") else: print('Confusion matrix, without normalization') print(cm) plt.imshow(cm, interpolation='nearest', cmap=cmap) plt.title(title) plt.colorbar() tick_marks = np.arange(len(classes)) plt.xticks(tick_marks, classes, rotation=45) plt.yticks(tick_marks, classes) fmt = '.2f' if normalize else 'd' thresh = cm.max() / 2. for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])): plt.text(j, i, format(cm[i, j], fmt), horizontalalignment="center", color="white" if cm[i, j] > thresh else "black") plt.ylabel('True label') plt.xlabel('Predicted label') plt.tight_layout() # show image confusion matrix predictions = np.argmax(model.predict(x_train), axis=3) cnf_matrix = confusion_matrix(y_train_raw.reshape(len(y_train_raw) * 256 * 256, 1), predictions.reshape(len(predictions) * 256 * 256, 1)) plot_confusion_matrix(cnf_matrix, raster_value.keys(), normalize=True)
Normalized confusion matrix [[0.93412552 0. 0. 0. 0.01624412 0.04963036] [0.75458006 0. 0. 0. 0.08682321 0.15859672] [0.73890185 0. 0. 0. 0.1051384 0.15595975] [0.6504189 0. 0. 0. 0.1332155 0.2163656 ] [0.36706531 0. 0. 0. 0.18843914 0.44449555] [0.1816605 0. 0. 0. 0.05291638 0.76542312]]
MIT
examples/tree-cover-keras/tree-cover-keras.ipynb
Gnilliw/eo-learn
Measure autophagosome properties

area = []
for i in range(0, number_of_objects):
    area = np.append(area, props[i].area)
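The measurement cell below calls an `autophagosome_size(image)` helper whose definition is not included in this excerpt; only the area-accumulation fragment above survives. The following is a hedged sketch of what such a helper might look like, assuming the masks are segmentation label images and that `scikit-image` is available; the foreground rule is an assumption, not taken from the original notebook:

```python
import numpy as np
from skimage import measure

def autophagosome_size(image):
    # Sketch: return (number_of_objects, mean_area, area_all_objects) for a mask.
    foreground = image > image.min()          # assumed foreground rule
    labelled = measure.label(foreground)      # connected-component labelling
    props = measure.regionprops(labelled)
    number_of_objects = len(props)
    area = []
    for i in range(0, number_of_objects):
        area = np.append(area, props[i].area)
    mean_area = float(np.mean(area)) if number_of_objects > 0 else 0.0
    return number_of_objects, mean_area, area
```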
experiments = os.listdir(os. getcwd()) for item in experiments: if 'raph' not in item : experiments.remove(item) experiments.remove('.ipynb_checkpoints') #experiments.remove('Statistical_Analysis.ipynb') experiments.remove('Measure_and _Plot.ipynb') experiments.remove('train') experiments.remove('Results') experiments images_dir = 'train/label/' output_csv_dir = 'Results/' os.makedirs(output_csv_dir, exist_ok=True) results_all = pd.DataFrame(columns=['experiment','condition','image_name','number_of_objects','mean_area']) for experiment in experiments: conditions = os.listdir(experiment) if '.DS_Store' in conditions: conditions.remove('.DS_Store') if 'Screenshot_1.png' in conditions: conditions.remove('Screenshot_1.png') pooled_cell_sizes_expt=pd.DataFrame() mean_cell_sizes_expt=pd.DataFrame()### mean_cell_numbers_expt=pd.DataFrame() for condition in conditions: data_path = str(experiment+'/'+condition) images = os.listdir(data_path) if '.DS_Store' in images: images.remove('.DS_Store') if '0_tif_RGB' in images: images.remove('0_tif_RGB') if '200331_Figure_Atg8a_Chloroquine.jpg' in images: images.remove('200331_Figure_Atg8a_Chloroquine.jpg') pooled_cell_sizes =[] mean_cell_sizes=[]### mean_cell_numbers=[] for image_name in images: file_name = str(experiment)+'_'+str(condition)+'_'+str(image_name)#+'_Simple_Segmentation' if file_name.endswith('.tif'):file_name = file_name[:-4].__add__('_Simple Segmentation.tif') elif file_name.endswith('.tiff'):file_name = file_name[:-5].__add__('_Simple Segmentation.tif') image = io.imread(os.path.join(images_dir, file_name), plugin='pil') number_of_objects,mean_area,area_all_objects = autophagosome_size(image) mean_cell_sizes=np.append(mean_cell_sizes,mean_area)### mean_cell_numbers=np.append(mean_cell_numbers,number_of_objects) pooled_cell_sizes=np.append(pooled_cell_sizes,area_all_objects) results_all = results_all.append({'experiment': str(experiment), 'condition':str(condition), 'image_name': str(image_name), 'number_of_objects':number_of_objects,'mean_area': mean_area},ignore_index=True) pooled_cell_sizes_data = np.reshape(pooled_cell_sizes, (-1, 1)) pooled_cell_sizes_df = pd.DataFrame(data=pooled_cell_sizes_data, index=None, columns=[str(condition)]) pooled_cell_sizes_expt = pd.concat([pooled_cell_sizes_df,pooled_cell_sizes_expt], axis=1,join='outer') mean_cell_numbers_data = np.reshape(mean_cell_numbers, (-1, 1)) mean_cell_numbers_df = pd.DataFrame(data=mean_cell_numbers_data, index=None, columns=[str(condition)]) mean_cell_numbers_expt = pd.concat([mean_cell_numbers_df,mean_cell_numbers_expt], axis=1,join='outer')#### mean_cell_sizes_data = np.reshape(mean_cell_sizes, (-1, 1)) mean_cell_sizes_df = pd.DataFrame(data=mean_cell_sizes_data, index=None, columns=[str(condition)]) mean_cell_sizes_expt = pd.concat([mean_cell_sizes_df,mean_cell_sizes_expt], axis=1,join='outer')#### pooled_cell_sizes_expt.to_csv(output_csv_dir+experiment+'_pooled_cell_sizes.csv', sep=';', decimal=',') mean_cell_sizes_expt.to_csv(output_csv_dir+experiment+'_mean_cell_sizes.csv', sep=';', decimal=',')#### mean_cell_numbers_expt.to_csv(output_csv_dir+experiment+'_mean_cell_numbers.csv', sep=';', decimal=',') print(experiment+' DONE') results_all.to_csv(output_csv_dir+'results_all_'+str(datetime.now())+'.csv', sep=';', decimal=',') results_all.to_csv(output_csv_dir+'results_all.csv', sep=';', decimal=',') print('ALL DONE') results_all=pd.read_csv(output_csv_dir+'results_all_2020-08-30-2.csv', sep=';', decimal=',') results_all.head()
_____no_output_____
MIT
Measure_and _Plot.ipynb
sourabh-bhide/Analyze_Vesicles
PLOT NUMBER OF OBJECTS
import seaborn as sns for experiment in experiments: experiment_data = results_all[results_all['experiment']==experiment] fig,ax = plt.subplots() ax.spines['right'].set_visible(False) ax.spines['top'].set_visible(False) ax = sns.boxplot(x="condition", y="number_of_objects", data=experiment_data) ax = sns.swarmplot(x="condition", y="number_of_objects", data=experiment_data, color=".25") plt.xticks(rotation=90) plt.title(experiment) plt.savefig(output_csv_dir+experiment+'_number_of_objects.png',bbox_inches='tight') plt.show()
_____no_output_____
MIT
Measure_and _Plot.ipynb
sourabh-bhide/Analyze_Vesicles
PLOT SIZE OF OBJECTS/ MEAN_AREA
for experiment in experiments: experiment_data = results_all[results_all['experiment']==experiment] fig,ax = plt.subplots() ax.spines['right'].set_visible(False) ax.spines['top'].set_visible(False) ax = sns.boxplot(x="condition", y="mean_area", data=experiment_data) ax = sns.swarmplot(x="condition", y="mean_area", data=experiment_data, color=".25") plt.xticks(rotation=90) plt.ylabel('mean_area ($\mu$m$^{2}$)') plt.title(experiment) plt.savefig(output_csv_dir+experiment+'_mean_area.png',bbox_inches='tight') plt.show()
_____no_output_____
MIT
Measure_and _Plot.ipynb
sourabh-bhide/Analyze_Vesicles
PLOT POOLED SIZE OF OBJECTS/ MEAN_AREA
output_csv_dir = 'Results/' for experiment in experiments: df=pd.read_csv(output_csv_dir+experiment+'_pooled_cell_sizes.csv', sep=';', decimal=',') df = df.drop(columns=['Unnamed: 0']) df = df.sort_index(axis=1) if '60x' in str(experiment):df=df*0.1111 if '40x' in str(experiment):df=df*0.1626 fig,ax = plt.subplots() ax.spines['right'].set_visible(False) ax.spines['top'].set_visible(False) ax = sns.stripplot(data=df, jitter=0.3,size=2) plt.xticks(rotation=90) plt.ylabel('spot size ($\mu$m$^{2}$)') plt.ylim(-10,250) plt.title(experiment) plt.axhline(y=15,color='k') plt.savefig(output_csv_dir+experiment+'_pooled_cell_sizes.png',bbox_inches='tight') plt.show() df.head() def get_concat_h_multi_resize(im_list, resample=Image.BICUBIC): min_height = min(im.height for im in im_list) im_list_resize = [im.resize((int(im.width * min_height / im.height), min_height),resample=resample) for im in im_list] total_width = sum(im.width for im in im_list_resize) dst = Image.new('RGB', (total_width, min_height)) pos_x = 0 for im in im_list_resize: dst.paste(im, (pos_x, 0)) pos_x += im.width return dst ## CONCATANATE GRAPHS plot_dir = 'Results/' for experiment in experiments: print(experiment) im1 = Image.open(os.path.join(plot_dir, experiment+'_number_of_objects.png')) im2 = Image.open(os.path.join(plot_dir, experiment+'_mean_area.png')) im3 = Image.open(os.path.join(plot_dir, experiment+'_pooled_cell_sizes.png')) get_concat_h_multi_resize([im1, im2, im3]).save('Results/'+experiment+'_concat.jpg')
Graph10_BoiPy__60xWater Graph11_ER__60xWater Graph12_Golgi__60xWater Graph14_Atg8a_Epistase_time_Of_Woud_Healing__40x Graph15_Atg8a_Insulin_Foxo_time_Of_Woud_Healing__40x_and_60x Graph16_Atg8a_Foxo_TM_time_Of_Woud_Healing__40xOil Graph17_Atg8a_time_Of_Woud_Healing__40xOil Graph1_Geraf2__Atg8a__40xOil_rest_of_data Graph3_Atg8a_Epistase__40xOil Graph4_Graph5_Atg8a_Chloroquine_you.have.all.images Graph6_Graph7_LAMP1.GFP__60xWater Graph8_Graph9_GFP.LAMP1__60xWater
MIT
Measure_and _Plot.ipynb
sourabh-bhide/Analyze_Vesicles
Initialization
#@markdown - **Mount GoogleDrive**
from google.colab import drive
drive.mount('GoogleDrive')

# #@markdown - **Unmount**
# !fusermount -u GoogleDrive
_____no_output_____
MIT
notebooks(colab)/Neural_network_models/Supervised_learning_models/CNN_tf_RU.ipynb
jswanglp/MyML
Code area
#@title Сверточные нейронные сети { display-mode: "both" } # В программе используется API в TensorFlow для реализации двухслойных сверточных нейронных сетей # coding: utf-8 import tensorflow.examples.tutorials.mnist.input_data as input_data import tensorflow as tf import matplotlib.pyplot as plt import os #@markdown - **Определение функций инициализации** def glorot_init(shape, name): initial = tf.truncated_normal(shape=shape, stddev=1. / tf.sqrt(shape[0] / 2.)) return tf.Variable(initial, name=name) def bias_init(shape, name): initial = tf.constant(0.1, shape=shape) return tf.Variable(initial, name=name) def conv2d(x, W): return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME') def max_pool_2x2(x): return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME') #@markdown - **Настройка гиперпараметров** mnist = input_data.read_data_sets('sample_data/MNIST_data', one_hot=True) num_epochs = 12000 #@param {type: "integer"} batch_size = 196 #@param {type: "integer"} learning_rate = 8e-4 #@param {type: "number"} dir_path = 'GoogleDrive/My Drive/Colab Notebooks' event_path = os.path.join(dir_path, 'Tensorboard') checkpoint_path = os.path.join(dir_path, 'Checkpoints') #@markdown - **Создание graph** graph = tf.Graph() with graph.as_default(): with tf.name_scope('Input'): x = tf.placeholder(tf.float32, shape=[None, 784], name='input_images') y_ = tf.placeholder(tf.float32, shape=[None, 10], name='labels') x_image = tf.reshape(x, [-1, 28, 28, 1]) keep_prob = tf.placeholder(tf.float32) #@markdown - **Настройка сверточных слоев** # --------------conv1----------------------------------- with tf.name_scope('Conv1'): with tf.name_scope('weights_conv1'): W_conv1 = glorot_init([3, 3, 1, 64], 'w_conv1') with tf.name_scope('bias_covn1'): b_conv1 = bias_init([64], 'b_conv1') h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1) with tf.name_scope('features_conv1'): h_pool1 = max_pool_2x2(h_conv1) # --------------conv2----------------------------------- with tf.name_scope('Conv2'): W_conv2 = glorot_init([3, 3, 64, 128], 'w_conv2') b_conv2 = bias_init([128], 'b_conv2') h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2) h_pool2 = max_pool_2x2(h_conv2) #@markdown - **Настройка полносвязных слоев** # --------------fc-------------------------------------- h_pool2_flat = tf.layers.flatten(h_pool2) num_f = h_pool2_flat.get_shape().as_list()[-1] with tf.name_scope('FC1'): W_fc1 = glorot_init([num_f, 128], 'w_fc1') b_fc1 = bias_init([128], 'b_fc1') h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1) with tf.name_scope('Dropout'): h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob) with tf.name_scope('FC2'): W_fc2 = glorot_init([128, 10], 'w_fc2') b_fc2 = bias_init([10], 'b_fc2') y_fc2 = tf.matmul(h_fc1_drop, W_fc2) + b_fc2 with tf.name_scope('Loss'): y_out = tf.nn.softmax(y_fc2) # cross_entropy = -tf.reduce_mean(y_*tf.log(y_out + 1e-10)) # # or like cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(labels=y_, logits=y_fc2)) with tf.name_scope('Train'): train_step = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cross_entropy) # # or like # optimizer = tf.train.AdamOptimizer(learning_rate=FLAGS.learning_rate) # grad_list = optimizer.compute_gradients(cross_entropy) # train_step = optimizer.apply_gradients(grad_list) with tf.name_scope('Accuracy'): correct_prediction = tf.equal(tf.argmax(y_out, 1), tf.argmax(y_, 1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float")) #@markdown - **Обучение сетей и сохранение моделей** with 
tf.Session(graph=graph) as sess: sess.run(tf.global_variables_initializer()) saver = tf.train.Saver(max_to_keep=3) # сохранить 3 модели max_acc = 101. # модели с более высокой точностью будут сохранены for epoch in range(num_epochs): batch = mnist.train.next_batch(batch_size) _, acc, loss = sess.run([train_step, accuracy, cross_entropy], feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5}) step = epoch + 1 if step % 1000 == 0: acc *= 100 print_list = [step, loss, acc] print("Epoch: {0[0]}, cross_entropy: {0[1]:.4f}, accuracy on training data: {0[2]:.2f}%,".format(print_list)) test_acc, test_loss = sess.run([accuracy, cross_entropy], feed_dict={x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}) test_acc *= 100 print_list = [test_loss, test_acc] print(' '*12, 'cross_entropy: {0[0]:.4f}, accuracy on testing data: {0[1]:.2f}%.'.format(print_list)) print('\n') if (acc > max_acc) & (step > 3999): max_acc = acc saver.save(sess, os.path.join(checkpoint_path, 'f_map.ckpt'), global_step=step) test_image, test_label = mnist.test.images[100, :].reshape((1, -1)), mnist.test.labels[100, :].reshape((1, -1)) features1, features2 = sess.run([h_pool1, h_pool2], feed_dict={x: test_image, y_: test_label, keep_prob: 1.0}) #@markdown - **Восстановление сохраненной модели** # with tf.Session() as sess: # model_path = 'GoogleDrive/My Drive/Colab Notebooks/Tensorboard/f_map.ckpt-241' # saver.restore(sess, model_path) # acc, loss = sess.run([accuracy, cross_entropy], feed_dict={x: batch[0], y_: batch[1], keep_prob: 1.0}) # print('Accuracy is %.2f.' %(acc)) # sess.close() #@markdown - **Представление feature map первого сверточного слоя** features_map = features1.reshape((14, 14, 64)) num_map = range(features_map.shape[-1]) fig, AX = plt.subplots(nrows=4, ncols=8) fig.set_size_inches(w=14, h=7) fig.subplots_adjust(wspace=.2, hspace=.2) try: for index, ax in enumerate(AX.flatten()): ax.imshow(features_map[:, :, index], 'gray') ax.set_xticks([]), ax.set_yticks([]) except IndexError: pass plt.show() #@markdown - **Представление feature map второго сверточного слоя** features_map = features2.reshape((7, 7, 128)) num_map = range(features_map.shape[-1]) fig, AX = plt.subplots(nrows=4, ncols=8) fig.set_size_inches(w=14, h=7) fig.subplots_adjust(wspace=.2, hspace=.2) try: for index, ax in enumerate(AX.flatten()): ax.imshow(features_map[:, :, index], 'gray') ax.set_xticks([]), ax.set_yticks([]) except IndexError: pass plt.show()
_____no_output_____
MIT
notebooks(colab)/Neural_network_models/Supervised_learning_models/CNN_tf_RU.ipynb
jswanglp/MyML
Predicting Flight Delays with sklearn

In this notebook, we will be using features we've prepared in PySpark to predict flight delays via regression and classification.
import sys, os, re
sys.path.append("lib")
import utils

import numpy as np
import sklearn
import iso8601
import datetime

print("Imports loaded...")
Imports loaded...
MIT
ch07/Predicting flight delays with sklearn.ipynb
kdiogenes/Agile_Data_Code_2
Load and Inspect our JSON Training Data
# Load and check the size of our training data. May take a minute. print("Original JSON file size: {:,} Bytes".format(os.path.getsize("../data/simple_flight_delay_features.jsonl"))) training_data = utils.read_json_lines_file('../data/simple_flight_delay_features.jsonl') print("Training items: {:,}".format(len(training_data))) # 5,714,008 print("Data loaded...") # Inspect a record before we alter them print("Size of training data in RAM: {:,} Bytes".format(sys.getsizeof(training_data))) # 50MB print(training_data[0])
Size of training data in RAM: 406,496 Bytes {'ArrDelay': -14.0, 'CRSArrTime': '2015-01-01T10:25:00.000Z', 'CRSDepTime': '2015-01-01T08:55:00.000Z', 'Carrier': 'AA', 'DayOfMonth': 1, 'DayOfWeek': 4, 'DayOfYear': 1, 'DepDelay': -4.0, 'Dest': 'DFW', 'Distance': 731.0, 'FlightDate': '2015-01-01T00:00:00.000Z', 'FlightNum': '1455', 'Origin': 'ATL'}
MIT
ch07/Predicting flight delays with sklearn.ipynb
kdiogenes/Agile_Data_Code_2
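The cell above relies on a project helper, `utils.read_json_lines_file`, whose source is in the repository's `lib/` directory and is not shown here. A minimal sketch of what such a helper typically does, assuming the data file is in JSON Lines format (one JSON object per line):

```python
import codecs
import json

def read_json_lines_file(path):
    # Read a .jsonl file and return a list of dicts, one per non-empty line.
    records = []
    with codecs.open(path, "r", "utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                records.append(json.loads(line))
    return records
```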
Sample our Data
# We need to sample our data to fit into RAM training_data = np.random.choice(training_data, 1000000) # 'Sample down to 1MM examples' print("Sampled items: {:,} Bytes".format(len(training_data))) print("Data sampled...")
Sampled items: 1,000,000 Bytes Data sampled...
MIT
ch07/Predicting flight delays with sklearn.ipynb
kdiogenes/Agile_Data_Code_2
Vectorize the Results (y)
# Separate our results from the rest of the data, vectorize and size up results = [record['ArrDelay'] for record in training_data] results_vector = np.array(results) print("Results vectorized size: {:,}".format(sys.getsizeof(results_vector))) # 45,712,160 bytes print("Results vectorized...")
Results vectorized size: 8,000,096 Results vectorized...
MIT
ch07/Predicting flight delays with sklearn.ipynb
kdiogenes/Agile_Data_Code_2
Prepare Training Data
# Remove the two delay fields and the flight date from our training data for item in training_data: item.pop('ArrDelay', None) item.pop('FlightDate', None) print("ArrDelay and FlightDate removed from training data...") # Must convert datetime strings to unix times for item in training_data: if isinstance(item['CRSArrTime'], str): dt = iso8601.parse_date(item['CRSArrTime']) unix_time = int(dt.timestamp()) item['CRSArrTime'] = unix_time if isinstance(item['CRSDepTime'], str): dt = iso8601.parse_date(item['CRSDepTime']) unix_time = int(dt.timestamp()) item['CRSDepTime'] = unix_time print("CRSArr/DepTime converted to unix time...")
CRSArr/DepTime converted to unix time...
MIT
ch07/Predicting flight delays with sklearn.ipynb
kdiogenes/Agile_Data_Code_2
Vectorize Training Data with `DictVectorizer`
# Use DictVectorizer to convert feature dicts to vectors from sklearn.feature_extraction import DictVectorizer print("Sampled dimensions: [{:,}]".format(len(training_data))) vectorizer = DictVectorizer() training_vectors = vectorizer.fit_transform(training_data) print("Size of DictVectorized vectors: {:,} Bytes".format(training_vectors.data.nbytes)) print("Training data vectorized...")
_____no_output_____
MIT
ch07/Predicting flight delays with sklearn.ipynb
kdiogenes/Agile_Data_Code_2
Prepare Experiment by Splitting Data into Train/Test
from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split( training_vectors, results_vector, test_size=0.1, random_state=43 ) print(X_train.shape, X_test.shape) print(y_train.shape, y_test.shape) print("Test train split performed...")
_____no_output_____
MIT
ch07/Predicting flight delays with sklearn.ipynb
kdiogenes/Agile_Data_Code_2
Train our Model(s) on our Training Data
# Train a regressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import median_absolute_error, r2_score
print("Regressor library and metrics imported...")

regressor = LinearRegression()
print("Regressor instantiated...")

from sklearn.ensemble import GradientBoostingRegressor
regressor = GradientBoostingRegressor()  # instantiate the regressor, not just the class
print("Swapped in gradient boosting trees for linear regression!")

# Let's go back for now...
regressor = LinearRegression()
print("Swapped back to linear regression!")

regressor.fit(X_train, y_train)
print("Regressor fitted...")
_____no_output_____
MIT
ch07/Predicting flight delays with sklearn.ipynb
kdiogenes/Agile_Data_Code_2
Predict Using the Test Data
predicted = regressor.predict(X_test) print("Predictions made for X_test...")
_____no_output_____
MIT
ch07/Predicting flight delays with sklearn.ipynb
kdiogenes/Agile_Data_Code_2
Evaluate and Visualize Model Accuracy
from sklearn.metrics import median_absolute_error, r2_score # Median absolute error is the median of all absolute differences between the target and the prediction. # Less is better, more indicates a high error between target and prediction. medae = median_absolute_error(y_test, predicted) print("Median absolute error: {:.3g}".format(medae)) # R2 score is the coefficient of determination. Ranges from 1-0, 1.0 is best, 0.0 is worst. # Measures how well future samples are likely to be predicted. r2 = r2_score(y_test, predicted) print("r2 score: {:.3g}".format(r2)) # Plot outputs import matplotlib.pyplot as plt # Cleans up the appearance plt.rcdefaults() plt.scatter( y_test, predicted, color='blue', linewidth=1 ) plt.grid(True) plt.xticks() plt.yticks() plt.show()
_____no_output_____
MIT
ch07/Predicting flight delays with sklearn.ipynb
kdiogenes/Agile_Data_Code_2
Assignment-1
class BankAccount(object):
    def __init__(self, initial_balance=0):
        self.balance = initial_balance
    def deposit(self, amount):
        self.balance += amount
    def withdraw(self, amount):
        self.balance -= amount
    def overdrawn(self):
        return self.balance < 0

my_account = BankAccount(15)
my_account.withdraw(5)
print(my_account.balance)
10
Apache-2.0
Batch-7_Day-6_Assignments.ipynb
Deepika0309/LetsUpgrage-Python-Essentials-
Assignment-2
import math

pi = math.pi

def volume(r, h):
    return (1 / 3) * pi * r * r * h

def surfacearea(r, s):
    return pi * r * s + pi * r * r

radius = float(5)
height = float(12)
slant_height = float(13)

print("Volume Of Cone : ", volume(radius, height))
print("Surface Area Of Cone : ", surfacearea(radius, slant_height))
Volume Of Cone : 314.15926535897927 Surface Area Of Cone : 282.7433388230814
Apache-2.0
Batch-7_Day-6_Assignments.ipynb
Deepika0309/LetsUpgrage-Python-Essentials-
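For reference, the printed values follow directly from the standard cone formulas with $r = 5$, $h = 12$ and slant height $s = 13$:

$$V = \frac{1}{3}\pi r^{2} h = \frac{1}{3}\pi \cdot 25 \cdot 12 = 100\pi \approx 314.159, \qquad A = \pi r s + \pi r^{2} = 65\pi + 25\pi = 90\pi \approx 282.743$$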
Praktikum 12 | Image Processing: Sharpness

Sharpness is the process of obtaining a sharper image. This sharpening process makes use of a BSF (Band-Stop Filter), which is a combination of an LPF (Low Pass Filter) and an HPF (High Pass Filter).

Fadhil Yori Hibatullah | 2103161037 | 2 D3 Teknik Informatika B

--------------------- Import Dependency
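Concretely, the per-pixel arithmetic in the sharpening cells further below corresponds to applying a 3x3 mean mask (the low-pass term `xt1`) and two Sobel masks (the high-pass terms `xt2` and `xt3`) to the grayscale image and summing the responses; written out, the masks are:

$$\text{LPF} = \frac{1}{9}\begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{bmatrix}, \qquad \text{HPF}_x = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}, \qquad \text{HPF}_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix}$$

The 2:1 and 1:2 variants later in the notebook simply weight the low-pass and high-pass responses differently before summing.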
import imageio
import matplotlib.pyplot as plt
import numpy as np
_____no_output_____
MIT
Praktikum 12 - Sharpness.ipynb
fadhilyori/pengolahan-citra
Load Image
imgNormal = imageio.imread("gambar4.jpg")
_____no_output_____
MIT
Praktikum 12 - Sharpness.ipynb
fadhilyori/pengolahan-citra
Show Image
plt.imshow(imgNormal)
plt.title("Load Image")
plt.show()
_____no_output_____
MIT
Praktikum 12 - Sharpness.ipynb
fadhilyori/pengolahan-citra
--------------------- To Grayscale
imgGrayscale = np.zeros((imgNormal.shape[0], imgNormal.shape[1], 3), dtype=np.uint8) for y in range(0, imgNormal.shape[0]): for x in range(0, imgNormal.shape[1]): r = imgNormal[y][x][0] g = imgNormal[y][x][1] b = imgNormal[y][x][2] gr = ( int(r) + int(g) + int(b) ) / 3 imgGrayscale[y][x] = (gr, gr, gr) plt.imshow(imgGrayscale) plt.title("Grayscale") plt.show()
_____no_output_____
MIT
Praktikum 12 - Sharpness.ipynb
fadhilyori/pengolahan-citra
--------------------- Sharpness Gray
imgSharpnessGray = np.zeros((imgNormal.shape[0], imgNormal.shape[1], 3), dtype=np.uint8) for y in range(1, imgNormal.shape[0] - 1): for x in range(1, imgNormal.shape[1] - 1): x1 = int(imgGrayscale[y - 1][x - 1][0]) x2 = int(imgGrayscale[y][x - 1][0]) x3 = int(imgGrayscale[y + 1][x - 1][0]) x4 = int(imgGrayscale[y - 1][x][0]) x5 = int(imgGrayscale[y][x][0]) x6 = int(imgGrayscale[y + 1][x][0]) x7 = int(imgGrayscale[y - 1][x + 1][0]) x8 = int(imgGrayscale[y][x + 1][0]) x9 = int(imgGrayscale[y + 1][x + 1][0]) xt1 = int((x1 + x2 + x3 + x4 + x5 + x6 + x7 + x8 + x9) / 9) xt2 = int(-x1 - (2 * x2) - x3 + x7 + (2 * x8) + x9) xt3 = int(-x1 - (2 * x4) - x7 + x3 + (2 * x6) + x9) xb = int(xt1 + xt2 + xt3) if xb < 0: xb = -xb if xb > 255: xb = 255 imgSharpnessGray[y][x] = (xb, xb, xb) plt.imshow(imgSharpnessGray) plt.title("Sharpness Gray") plt.show()
_____no_output_____
MIT
Praktikum 12 - Sharpness.ipynb
fadhilyori/pengolahan-citra
------------------ Sharpness Gray (2:1 LPF:HPF)
imgSharpnessGrayL = np.zeros((imgNormal.shape[0], imgNormal.shape[1], 3), dtype=np.uint8) for y in range(1, imgNormal.shape[0] - 1): for x in range(1, imgNormal.shape[1] - 1): x1 = int(imgGrayscale[y - 1][x - 1][0]) x2 = int(imgGrayscale[y][x - 1][0]) x3 = int(imgGrayscale[y + 1][x - 1][0]) x4 = int(imgGrayscale[y - 1][x][0]) x5 = int(imgGrayscale[y][x][0]) x6 = int(imgGrayscale[y + 1][x][0]) x7 = int(imgGrayscale[y - 1][x + 1][0]) x8 = int(imgGrayscale[y][x + 1][0]) x9 = int(imgGrayscale[y + 1][x + 1][0]) xt1 = int((x1 + x2 + x3 + x4 + x5 + x6 + x7 + x8 + x9) / 9) xt2 = int(-x1 - (2 * x2) - x3 + x7 + (2 * x8) + x9) xt3 = int(-x1 - (2 * x4) - x7 + x3 + (2 * x6) + x9) xb = int((2 * xt1) + xt2 + xt3) if xb < 0: xb = -xb if xb > 255: xb = 255 imgSharpnessGrayL[y][x] = (xb, xb, xb) plt.imshow(imgSharpnessGrayL) plt.title("Sharpness Gray 2:1") plt.show()
_____no_output_____
MIT
Praktikum 12 - Sharpness.ipynb
fadhilyori/pengolahan-citra
------------------ Sharpness Gray (1:2 LPF:HPF)
imgSharpnessGrayH = np.zeros((imgNormal.shape[0], imgNormal.shape[1], 3), dtype=np.uint8) for y in range(1, imgNormal.shape[0] - 1): for x in range(1, imgNormal.shape[1] - 1): x1 = int(imgGrayscale[y - 1][x - 1][0]) x2 = int(imgGrayscale[y][x - 1][0]) x3 = int(imgGrayscale[y + 1][x - 1][0]) x4 = int(imgGrayscale[y - 1][x][0]) x5 = int(imgGrayscale[y][x][0]) x6 = int(imgGrayscale[y + 1][x][0]) x7 = int(imgGrayscale[y - 1][x + 1][0]) x8 = int(imgGrayscale[y][x + 1][0]) x9 = int(imgGrayscale[y + 1][x + 1][0]) xt1 = int((x1 + x2 + x3 + x4 + x5 + x6 + x7 + x8 + x9) / 9) xt2 = int(-x1 - (2 * x2) - x3 + x7 + (2 * x8) + x9) xt3 = int(-x1 - (2 * x4) - x7 + x3 + (2 * x6) + x9) xb = int(xt1 + (2 * xt2) + (2 * xt3)) if xb < 0: xb = -xb if xb > 255: xb = 255 imgSharpnessGrayH[y][x] = (xb, xb, xb) plt.imshow(imgSharpnessGrayH) plt.title("Sharpness Gray 1:2") plt.show()
_____no_output_____
MIT
Praktikum 12 - Sharpness.ipynb
fadhilyori/pengolahan-citra
Newton's Method for finding a root

[Newton's method](https://en.wikipedia.org/wiki/Newton's_method) uses a clever insight to iteratively home in on the root of a function $f$. The central idea is to approximate $f$ by its tangent at some initial position $x_0$:

$$y = f'(x_0) (x-x_0) + f(x_0)$$

The $x$-intercept of this line is then closer to the root than the starting position $x_0$. That is, we need to solve the linear relation

$$f'(x_0) (x_1-x_0) + f(x_0) = 0$$

for the updated position $x_1 = x_0 - f(x_0)/f'(x_0)$. Repeating this sequence

$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$$

will yield a fixed point, which is the root of $f$ *if one exists in the vicinity of $x_0$*.
def newtons_method(f, df, x0, tol=1E-6):
    x_n = x0
    while abs(f(x_n)) > tol:
        x_n = x_n - f(x_n)/df(x_n)
    return x_n
_____no_output_____
MIT
day4/Newton-Method.ipynb
devonwt/usrp-sciprog
Minimizing a function

As the maximum and minimum of a function are defined by $f'(x) = 0$, we can use Newton's method to find extremal points by applying it to the first derivative. Let's try this with a simple function with a known minimum:
# define a test function
def f(x):
    return (x-3)**2 - 9

def df(x):
    return 2*(x-3)

def df2(x):
    return 2.

root = newtons_method(f, df, x0=0.1)
print("root {0}, f(root) = {1}".format(root, f(root)))

minimum = newtons_method(df, df2, x0=0.1)
print("minimum {0}, f'(minimum) = {1}".format(minimum, df(minimum)))
minimum 3.0, f'(minimum) = 0.0
MIT
day4/Newton-Method.ipynb
devonwt/usrp-sciprog
There is an important qualifier in the statement about fixed points: **a root needs to exist in the vicinity of $x_0$!** Let's see what happens if that's not the case:
def f(x):
    return (x-3)**2 + 1

newtons_method(f, df, x0=0.1)
_____no_output_____
MIT
day4/Newton-Method.ipynb
devonwt/usrp-sciprog
With a little more defensive programming we can make sure that the function will terminate after a given number of iterations:
def newtons_method2(f, df, x0, tol=1E-6, maxiter=100000):
    x_n = x0
    for _ in range(maxiter):
        x_n = x_n - f(x_n)/df(x_n)
        if abs(f(x_n)) < tol:
            return x_n
    raise RuntimeError("Failed to find a minimum within {} iterations ".format(maxiter))

newtons_method2(f, df, x0=0.1)
_____no_output_____
MIT
day4/Newton-Method.ipynb
devonwt/usrp-sciprog
Random Forest with hyperparameter tuning
import numpy as np import pandas as pd train_data = pd.read_csv('Train_Data.csv') test_data = pd.read_csv('Test_Data.csv') train_data.head() train_data.info() train_data.drop('date', axis=1, inplace=True) train_data.drop('campaign', axis=1, inplace=True) test_data.drop('date', axis=1, inplace=True) test_data.drop('campaign', axis=1, inplace=True) train_data.drop('ad', axis=1, inplace=True) test_data.drop('ad', axis=1, inplace=True) test_data.head() train_data = pd.get_dummies(train_data) test_data = pd.get_dummies(test_data) train_data.head() test_data.columns X_train = train_data.drop(['revenue'], axis='columns') y_train = train_data['revenue'] X_test = test_data from sklearn.preprocessing import StandardScaler scaler = StandardScaler() X_train_scaled = scaler.fit_transform(X_train) X_train_scaled X_test_scaled = scaler.transform(X_test) X_test_scaled from sklearn.ensemble import RandomForestRegressor rf = RandomForestRegressor() print('Parameters currently in use:\n') print(rf.get_params()) from sklearn.model_selection import RandomizedSearchCV # Number of trees in random forest n_estimators = [int(x) for x in np.linspace(start = 200, stop = 2000, num = 10)] # Number of features to consider at every split max_features = ['auto', 'sqrt'] # Maximum number of levels in tree max_depth = [int(x) for x in np.linspace(10, 110, num = 11)] max_depth.append(None) # Minimum number of samples required to split a node min_samples_split = [2, 5, 10] # Minimum number of samples required at each leaf node min_samples_leaf = [1, 2, 4] # Method of selecting samples for training each tree bootstrap = [True, False] # Create the random grid random_grid = {'n_estimators': n_estimators, 'max_features': max_features, 'max_depth': max_depth, 'min_samples_split': min_samples_split, 'min_samples_leaf': min_samples_leaf, 'bootstrap': bootstrap} print(random_grid) rf = RandomForestRegressor() # Random search of parameters, using 3 fold cross validation, # search across 100 different combinations, and use all available cores rf_random = RandomizedSearchCV(estimator = rf, param_distributions = random_grid, n_iter = 100, cv = 3, verbose=2, random_state=42, n_jobs = -1) # Fit the random search model rf_random.fit(X_train_scaled, y_train) rf_random.best_params_ rf_tuned = RandomForestRegressor(n_estimators= 200, min_samples_split= 5, min_samples_leaf= 4, max_features= 'auto', max_depth= 10, bootstrap= True) rf_tuned.fit(X_train_scaled, y_train) preds = rf_tuned.predict(X_test_scaled) preds = preds.astype('int64') preds prediction = pd.DataFrame(preds, columns=['revenue']).to_csv('prediction3.csv', index=False)
_____no_output_____
Apache-2.0
WEEK 6 (Project)/baseline.ipynb
prachuryanath/SA-CAIITG
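The notebook above predicts on the unlabeled test set without reporting any validation metric. As a hedged sketch (reusing `rf_tuned`, `X_train_scaled`, and `y_train` from the cell above; the 5-fold choice is an assumption), the tuned forest could be scored with cross-validation before writing the submission file:

```python
from sklearn.model_selection import cross_val_score
import numpy as np

# 5-fold cross-validated RMSE of the tuned forest on the training data.
neg_mse = cross_val_score(rf_tuned, X_train_scaled, y_train,
                          scoring='neg_mean_squared_error', cv=5)
rmse = np.sqrt(-neg_mse)
print("CV RMSE per fold:", rmse)
print("Mean CV RMSE:", rmse.mean())
```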
Notebook for calculating Mask Consistency Score for GAN-transformed images
from PIL import Image
import cv2
from matplotlib import pyplot as plt
import tensorflow as tf
import glob, os
import numpy as np
import matplotlib.image as mpimg
#from keras.preprocessing.image import img_to_array, array_to_img
_____no_output_____
MIT
Notebook_Archive/FeatureConsistency Score.ipynb
molu1019/CycleGAN-Tensorflow-2
1. Resize GAN-transformed Dataset to 1024×1024
1.1 Specify Args: Directory, folder name and the new image size
folder = 'A2B_FID'
image_size = 1024
dir = '/mnt/robolab/data/Bilddaten/GAN_train_data_sydavis-ai/Powertrain14_Blattfeder/Results/training4_batch4_400trainA_250trainB/samples_testing'
_____no_output_____
MIT
Notebook_Archive/FeatureConsistency Score.ipynb
molu1019/CycleGAN-Tensorflow-2
1.2 Create new Folder "/A2B_FID_1024" in Directory
old_folder = (os.path.join(dir, folder))
new_folder = (os.path.join(dir, folder+'_'+str(image_size)))

if not os.path.exists(new_folder):
    try:
        os.mkdir(new_folder)
    except FileExistsError:
        print('Folder already exists')
        pass

print(os.path.join(old_folder))
print(os.path.join(dir, folder+'_'+str(image_size)))
/mnt/robolab/data/Bilddaten/GAN_train_data_sydavis-ai/Powertrain14_Blattfeder/Results/training4_batch4_400trainA_250trainB/samples_testing/A2B_FID /mnt/robolab/data/Bilddaten/GAN_train_data_sydavis-ai/Powertrain14_Blattfeder/Results/training4_batch4_400trainA_250trainB/samples_testing/A2B_FID_1024
MIT
Notebook_Archive/FeatureConsistency Score.ipynb
molu1019/CycleGAN-Tensorflow-2
1.3 Function for upsampling images of 256×256 or 512×512 to images with size 1024×1024
new_size = image_size
width = new_size
height = new_size
dim = (width, height)

#images = glob.glob(os.path.join(new_folder, '*.jpg')) + glob.glob(os.path.join(new_folder, '*.png'))

def resize_upsampling(old_folder, new_folder):
    for image in os.listdir(old_folder):
        img = cv2.imread(os.path.join(old_folder, image))
        # INTER_CUBIC or INTER_LANCZOS4
        img_resized = cv2.resize(img, dim, interpolation = cv2.INTER_CUBIC)
        print('Shape: '+str(img.shape)+' is now resized to: '+str(img_resized.shape))
        cv2.imwrite(os.path.join(new_folder, image), img_resized)

def resize_downsampling(old_folder, new_folder):
    for image in os.listdir(old_folder):
        img = cv2.imread(os.path.join(old_folder, image))
        img_resized = cv2.resize(img, dim, interpolation = cv2.INTER_AREA)
        print('Shape: '+str(img.shape)+' is now resized to: '+str(img_resized.shape))
        cv2.imwrite(os.path.join(new_folder, image), img_resized)
_____no_output_____
MIT
Notebook_Archive/FeatureConsistency Score.ipynb
molu1019/CycleGAN-Tensorflow-2
1.4 Run the aforementioned function
resize_upsampling(old_folder, new_folder)
Shape: (256, 256, 3) is now resized to: (1024, 1024, 3) (this line repeats once per image in the folder)
MIT
Notebook_Archive/FeatureConsistency Score.ipynb
molu1019/CycleGAN-Tensorflow-2
2. Use the annotation tool Labelme to create polygons in JSON format
We then use the JSON files with polygon data to create semantic segmentation masks - no instance segmentation is needed, because we do not need to differentiate between distinct features. We use the bash and Python scripts in this directory to do the mask translation.
!ls
!pwd
augmentation.py interpolation.py __pycache__ data.py labelme2coco.py pylib datasets labelme2voc.py README.md download_dataset.sh labels.txt resize_images_pascalvoc 'FeatureConsistency Score.ipynb' LICENSE test.py FeatureScore mask-score.ipynb tf2gan fid.py module.py tf2lib imlib output train.py /home/molu1019/workspace/CycleGAN-Tensorflow-2
MIT
Notebook_Archive/FeatureConsistency Score.ipynb
molu1019/CycleGAN-Tensorflow-2
Insert the folder path as **input_dir** where the GAN-transformed images and their corresponding JSON labels are located.
input_dir = '/mnt/robolab/data/Bilddaten/GAN_train_data_sydavis-ai/Evaluation/BatchSize/Blattfeder/Batch1'
output_dir = input_dir+'_mask'
print(output_dir)

!python3 labelme2voc.py $input_dir $output_dir --labels labels.txt

seg_dir = output_dir+'/SegmentationObjectPNG'
print(seg_dir)
GAN_mask_images = os.listdir(seg_dir)
print(GAN_mask_images)
['rgb_274321.png', 'rgb_274414.png', 'rgb_273810.png', 'rgb_274350.png', 'rgb_274227.png', 'rgb_274288.png', 'rgb_273684.png', 'rgb_273905.png', 'rgb_273715.png', 'rgb_274513.png', 'rgb_274544.png', 'rgb_274002.png', 'rgb_273747.png', 'rgb_273971.png', 'rgb_273462.png', 'rgb_273582.png', 'rgb_273366.png', 'rgb_274064.png', 'rgb_274032.png', 'rgb_273430.png']
MIT
Notebook_Archive/FeatureConsistency Score.ipynb
molu1019/CycleGAN-Tensorflow-2
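For orientation only, a minimal sketch of what the conversion step does conceptually: rasterizing one labelme-style polygon annotation into a binary mask with Pillow. This is not the `labelme2voc.py` script used above; the file name is hypothetical and the JSON keys assume the standard labelme format.

```python
import json
import numpy as np
from PIL import Image, ImageDraw

# Hypothetical annotation file; the real conversion is done by labelme2voc.py above.
with open('rgb_274321.json') as fh:
    ann = json.load(fh)

mask = Image.new('L', (ann['imageWidth'], ann['imageHeight']), 0)
draw = ImageDraw.Draw(mask)
for shape in ann['shapes']:
    # labelme stores polygons as [[x, y], [x, y], ...]
    polygon = [tuple(pt) for pt in shape['points']]
    draw.polygon(polygon, outline=255, fill=255)

mask = np.array(mask)   # 0 = background, 255 = object
```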
3. Mask Parameters for Synthetic Images
mask_Blattfeder = [149, 255, 0]
mask_Entluefter = []
mask_Wandlerhalter = []
mask_Getreibeflansch = []
mask_Abdeckung = []
_____no_output_____
MIT
Notebook_Archive/FeatureConsistency Score.ipynb
molu1019/CycleGAN-Tensorflow-2
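The color constants above are not used elsewhere in this notebook. One possible (hypothetical) use is selecting a single class from a synthetic RGB mask, assuming the part is drawn in exactly that color; the file name below is only an example.

```python
import numpy as np
from PIL import Image

# Hypothetical file name; in this notebook the synthetic masks are the
# Instance_*.png renderings referenced further down.
seg = np.array(Image.open('Instance_280443.png').convert('RGB'))

# Boolean H x W mask of all pixels that match the class color exactly
class_mask = np.all(seg == mask_Blattfeder, axis=-1)
print('pixels of class Blattfeder:', class_mask.sum())
```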
Resize synthetic masks from 1920×1080 to 1024×1024
def resize(image, size):
    # note: this re-reads the image from the global `path`
    # instead of using the `image` argument passed in
    dim = (size, size)
    img = cv2.imread(path)
    img_resized = cv2.resize(img, dim, interpolation = cv2.INTER_AREA)
    # to show as array use display()
    #display(img_resized)
    plt.imshow(img_resized)
    return img_resized
_____no_output_____
MIT
Notebook_Archive/FeatureConsistency Score.ipynb
molu1019/CycleGAN-Tensorflow-2
Check Mask and Color
# quick check of mask size and color; expects `path` to point at a mask image
# and `img`/`rgb_im` to already exist, e.g.:
#img = Image.open(path)
#rgb_im = img.convert('RGB')
r, g, b = rgb_im.getpixel((1020, 500))
width, height = img.size
print(r, g, b)
print(rgb_im.getextrema())
print(rgb_im)
print(width, height)

def readfile(path):
    # open with only one color channel (grayscale):
    #img = Image.open(path)
    img = (Image.open(path).convert('L'))
    img = np.array(img)
    plt.imshow(img)
    print(img.size)
    print(img.shape)
    with Image.open(path) as im:
        print(im.getbbox())
    return img
_____no_output_____
MIT
Notebook_Archive/FeatureConsistency Score.ipynb
molu1019/CycleGAN-Tensorflow-2
Read Dataset Folder of Image Masks
def read_imgs(path, size=(1920, 1080), rescale=None):
    """Read images as ndarray.

    Args:
        path: A string, path of the image folder.
        size: A tuple of 2 integers, (height, width).
        rescale: A float or None, factor by which pixel values are
            multiplied (e.g. 1/255 for normalization). If None, no scaling.
    """
    img_list = [f for f in os.listdir(path) if not f.startswith(".")]
    data = np.empty((len(img_list), *size, 3))
    size = size[1], size[0]            # PIL expects (width, height)
    for img_i, _path in enumerate(img_list):
        img = Image.open(path + os.sep + _path)
        img = img.resize(size)         # was resize_upsampling(img), which operates on folders
        img = img.convert("RGB")
        img = np.array(img)
        data[img_i] = img
    if rescale:
        data = data*rescale
    return data
_____no_output_____
MIT
Notebook_Archive/FeatureConsistency Score.ipynb
molu1019/CycleGAN-Tensorflow-2
Synthetic Image Data
path = r'/mnt/robolab/data/Bilddaten/GAN_train_data_sydavis-ai/Powertrain14_Blattfeder/Instance_280443.png'
img_or = readfile(path)
img_or_res = resize(img_or, 1024)
img_or_res = img_or_res[:,:,1]          # keep a single channel of the resized mask
img_or_res_bin = binarize(img_or_res)
2073600 (1080, 1920) (0, 0, 1920, 1080) <class 'numpy.ndarray'> 255 [[False False False ... False False False] [False False False ... False False False] [False False False ... False False False] ... [False False False ... False False False] [False False False ... False False False] [False False False ... False False False]] True (1024, 1024) [[0 0 0 ... 0 0 0] [0 0 0 ... 0 0 0] [0 0 0 ... 0 0 0] ... [0 0 0 ... 0 0 0] [0 0 0 ... 0 0 0] [0 0 0 ... 0 0 0]] (497, 200, 641, 860)
MIT
Notebook_Archive/FeatureConsistency Score.ipynb
molu1019/CycleGAN-Tensorflow-2
GAN Image Data
path_result = '/mnt/robolab/data/Bilddaten/GAN_train_data_sydavis-ai/Powertrain14_Blattfeder/Test_maskScore_results/rgb_280443.png'
img_gan = readfile(path_result)
img_gan_bin = binarize(img_gan)

def loadpolygon():
    return
_____no_output_____
MIT
Notebook_Archive/FeatureConsistency Score.ipynb
molu1019/CycleGAN-Tensorflow-2
Since True is treated as 1 and False as 0, multiplying a boolean mask by 255 (the maximum value of uint8) turns True into 255 (white) and False into 0 (black).
def binarize(image):
    #im_gray = np.array(Image.open(path).convert('L'))
    print(type(image))
    print(image[600,600])
    thresh = 28
    im_bool = image > thresh
    print(im_bool)
    print(im_bool[600,600])
    print(im_bool.shape)
    maxval = 255
    im_bin = (image > thresh) * maxval
    print(im_bin)
    im_save = Image.fromarray(np.uint8(im_bin))
    im_save_bool = Image.fromarray((im_bool))
    plt.imshow(im_save_bool)
    f, axarr = plt.subplots(1,2)
    axarr[0].imshow(im_save)
    axarr[1].imshow(im_save_bool)
    with im_save_bool as im:
        print(im.getbbox())
    return im_bin

binarize(img_or_res)

def convexhull():
    return

def calculatescore(ground_truth, prediction_gan):
    """
    Compute the feature consistency score of two segmentation masks.

    IoU(A,B)  = |A & B| / |A U B|
    Dice(A,B) = 2*|A & B| / (|A| + |B|)

    Args:
        ground_truth: binarized ground-truth mask (W*H array).
        prediction_gan: binarized mask of the GAN-transformed image (W*H array).

    Returns:
        IoU and Dice score of the ground truth and the GAN-transformed
        synthetic image, as floats.
    """
    # check image shape to be the same
    assert ground_truth.shape == prediction_gan.shape, 'Input masks should be same shape, instead are {}, {}'.format(ground_truth.shape, prediction_gan.shape)
    print('Ground truth shape: '+str(ground_truth.shape))
    print('Predicted GAN image shape: '+str(prediction_gan.shape))

    intersection = np.logical_and(ground_truth, prediction_gan)
    union = np.logical_or(ground_truth, prediction_gan)
    mask_sum = np.sum(np.abs(union)) + np.sum(np.abs(intersection))
    iou_score = np.sum(intersection) / np.sum(union)
    dice_score = 2*np.sum(intersection) / np.sum(mask_sum)
    print('IoU is: '+str(iou_score))
    print('Dice/F1 Score is: '+str(dice_score))
    return iou_score, dice_score

calculatescore(img_or_res_bin, img_gan_bin)
Ground truth shape: (1024, 1024) Predicted GAN image shape: (1024, 1024) IoU is: 0.7979989122059556 Dice/F1 Score is: 0.8876522747468127
MIT
Notebook_Archive/FeatureConsistency Score.ipynb
molu1019/CycleGAN-Tensorflow-2
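The `convexhull()` stub above is empty. A possible sketch (not the author's implementation) of filling the convex hull of a binarized mask with OpenCV, which could feed into the comparison described below:

```python
import cv2
import numpy as np

def fill_convex_hull(binary_mask):
    """Return a mask whose foreground is the filled convex hull of the input.

    binary_mask: 2D array with values 0/255, e.g. the output of binarize().
    """
    mask = np.uint8(binary_mask)
    # note: cv2.findContours returns (contours, hierarchy) in OpenCV 4.x
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    hull_mask = np.zeros_like(mask)
    if contours:
        # merge all contour points before taking the hull
        points = np.vstack(contours)
        hull = cv2.convexHull(points)
        cv2.fillPoly(hull_mask, [hull], 255)
    return hull_mask
```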
Image mask transformation
Translate the image mask to white RGB(255, 255, 255), fill the convex hull, and compare the masks to calculate the 'Feature Consistency Score'.
# placeholder loop: calculatescore() still needs the two binarized masks
# (ground truth and GAN result) as arguments for each file
for file in glob.glob("*.png"):
    calculatescore()
_____no_output_____
MIT
Notebook_Archive/FeatureConsistency Score.ipynb
molu1019/CycleGAN-Tensorflow-2
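The loop above is only a stub. A hedged sketch of how per-image scores could actually be collected: it reuses `binarize`, `calculatescore`, `seg_dir`, `output_dir`, and `GAN_mask_images` from earlier cells, and assumes that ground-truth and GAN masks share file names and sizes across two folders (adjust the paths to the real directory layout).

```python
import os
import numpy as np
from PIL import Image

scores = []
for name in GAN_mask_images:
    # Assumed layout: ground-truth masks in seg_dir, GAN masks in output_dir.
    gt = binarize(np.array(Image.open(os.path.join(seg_dir, name)).convert('L')))
    gan = binarize(np.array(Image.open(os.path.join(output_dir, name)).convert('L')))
    iou, dice = calculatescore(gt, gan)
    scores.append((name, iou, dice))

mean_iou = np.mean([s[1] for s in scores])
print('mean IoU over {} masks: {:.3f}'.format(len(scores), mean_iou))
```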
1.2 Bayesian FrameworkWe are interested in beliefs, which can be interpreted as probabilities by thinking Bayesian. We have a prior belief in event $A$, beliefs formed by previous information, e.g., our prior belief about bugs being in our code before performing tests.Secondly, we observe our evidence. To continue our buggy-code example: if our code passes $X$ tests, we want to update our belief to incorporate this. We call this new belief the posterior probability. Updating our belief is done via the following equation, known as Bayes' Theorem, after its discoverer Thomas Bayes:$$\begin{align} P( A | X ) = \frac{ P(X | A) P(A) } {P(X) } \\\\[5pt] \propto P(X | A) P(A)\;\; (\propto \text{is proportional to })\end{align}$$The above formula is not unique to Bayesian inference: it is a mathematical fact with uses outside Bayesian inference. Bayesian inference merely uses it to connect prior probabilities $P(A)$ with an updated posterior probabilities $P(A | X )$. 1.2.1 Example: Mandatory coin-flip exampleEvery statistics text must contain a coin-flipping example, I'll use it here to get it out of the way. Suppose, naively, that you are unsure about the probability of heads in a coin flip (spoiler alert: it's 50%). You believe there is some true underlying ratio, call it $p$, but have no prior opinion on what $p$ might be.We begin to flip a coin, and record the observations: either $H$ or $T$. This is our observed data. An interesting question to ask is how our inference changes as we observe more and more data? More specifically, what do our posterior probabilities look like when we have little data, versus when we have lots of data.Below we plot a sequence of updating posterior probabilities as we observe increasing amounts of data (coin flips).
%matplotlib inline from IPython.core.pylabtools import figsize import numpy as np from matplotlib import pyplot as plt figsize(11, 9) import scipy.stats as stats dist = stats.beta n_trials = [0, 1, 2, 3, 4, 5, 8, 15, 50, 500] data = stats.bernoulli.rvs(0.5, size=n_trials[-1]) x = np.linspace(0, 1, 100) # For the already prepared, I'm using Binomial's conj. prior. for k, N in enumerate(n_trials): sx = plt.subplot(len(n_trials) / 2, 2, k + 1) plt.xlabel("$p$, probability of heads") \ if k in [0, len(n_trials) - 1] else None plt.setp(sx.get_yticklabels(), visible=False) heads = data[:N].sum() y = dist.pdf(x, 1 + heads, 1 + N - heads) plt.plot(x, y, label="observe %d tosses,\n %d heads" % (N, heads)) plt.fill_between(x, 0, y, color="#348ABD", alpha=0.4) plt.vlines(0.5, 0, 4, color="k", linestyles="--", lw=1) leg = plt.legend() leg.get_frame().set_alpha(0.4) plt.autoscale(tight=True) plt.suptitle("Bayesian updating of posterior probabilities", y=1.02, fontsize=14) plt.tight_layout()
_____no_output_____
CC-BY-4.0
cracking-the-data-science-interview-master/cracking-the-data-science-interview-master/EBooks/Bayesian-Methods-for-Hackers/.ipynb_checkpoints/C1-Introduction-checkpoint.ipynb
anushka-DS/DS-Interview-Prep
The posterior probabilities are represented by the curves, and our uncertainty is proportional to the width of the curve. As the plot above shows, as we start to observe data our posterior probabilities start to shift and move around. Eventually, as we observe more and more data (coin-flips), our probabilities will tighten closer and closer around the true value of $p=0.5$ (marked by a dashed line).Notice that the plots are not always peaked at 0.5. There is no reason it should be: recall we assumed we did not have a prior opinion of what $p$ is. In fact, if we observe quite extreme data, say 8 flips and only 1 observed heads, our distribution would look very biased away from lumping around 0.5 (with no prior opinion, how confident would you feel betting on a fair coin after observing 8 tails and 1 head?). As more data accumulates, we would see more and more probability being assigned at $p=0.5$, though never all of it.The next example is a simple demonstration of the mathematics of Bayesian inference. 1.2.2 Example: Bug, or just sweet, unintended feature?Let $A$ denote the event that our code has no bugs in it. Let $X$ denote the event that the code passes all debugging tests. For now, we will leave the prior probability of no bugs as a variable, i.e. $P(A) = p$.We are interested in $P(A|X)$, i.e. the probability of no bugs, given our debugging tests $X$. To use the formula above, we need to compute some quantities.What is $P(X | A)$, i.e., the probability that the code passes $X$ tests given there are no bugs? Well, it is equal to 1, for a code with no bugs will pass all tests.$P(X)$ is a little bit trickier: The event $X$ can be divided into two possibilities, event $X$ occurring even though our code indeed has bugs (denoted $\sim A\;$, spoken not $A$), or event $X$ without bugs ($A$). $P(X)$ can be represented as:$$\begin{align}P(X ) = P(X \text{ and } A) + P(X \text{ and } \sim A) \\\\[5pt] = P(X|A)P(A) + P(X | \sim A)P(\sim A)\\\\[5pt] = P(X|A)p + P(X | \sim A)(1-p)\end{align}$$We have already computed $P(X|A)$ above. On the other hand, $P(X | \sim A)$ is subjective: our code can pass tests but still have a bug in it, though the probability there is a bug present is reduced. Note this is dependent on the number of tests performed, the degree of complication in the tests, etc. Let's be conservative and assign $P(X|\sim A) = 0.5$. Then$$\begin{align}P(A | X) = \frac{1\cdot p}{ 1\cdot p +0.5 (1-p) } \\\\ = \frac{ 2 p}{1+p}\end{align}$$This is the posterior probability. What does it look like as a function of our prior, $p \in [0,1]$?
figsize(12.5, 4) p = np.linspace(0, 1, 50) plt.plot(p, 2*p/(1+p), color="#348ABD", lw=3) #plt.fill_between(p, 2*p/(1+p), alpha=.5, facecolor=["#A60628"]) plt.scatter(0.2, 2*(0.2)/1.2, s=140, c="#348ABD") plt.xlim(0, 1) plt.ylim(0, 1) plt.xlabel("Prior, $P(A) = p$") plt.ylabel("Posterior, $P(A|X)$, with $P(A) = p$") plt.title("Are there bugs in my code?");
_____no_output_____
CC-BY-4.0
cracking-the-data-science-interview-master/cracking-the-data-science-interview-master/EBooks/Bayesian-Methods-for-Hackers/.ipynb_checkpoints/C1-Introduction-checkpoint.ipynb
anushka-DS/DS-Interview-Prep
We can see the biggest gains if we observe the $X$ tests passed when the prior probability, $p$, is low. Let's settle on a specific value for the prior. I'm a strong programmer (I think), so I'm going to give myself a realistic prior of 0.20, that is, there is a 20% chance that I write code bug-free. To be more realistic, this prior should be a function of how complicated and large the code is, but let's pin it at 0.20. Then my updated belief that my code is bug-free is 0.33.Recall that the prior is a probability: $p$ is the prior probability that there are no bugs, so $1-p$ is the prior probability that there are bugs.Similarly, our posterior is also a probability, with $P(A | X)$ the probability there is no bug given we saw all tests pass, hence $1-P(A|X)$ is the probability there is a bug given all tests passed. What does our posterior probability look like? Below is a chart of both the prior and the posterior probabilities.
figsize(12.5, 4) colours = ["#348ABD", "#A60628"] prior = [0.20, 0.80] posterior = [1. / 3, 2. / 3] plt.bar([0, .7], prior, alpha=0.70, width=0.25, color=colours[0], label="prior distribution", lw="3", edgecolor=colours[0]) plt.bar([0 + 0.25, .7 + 0.25], posterior, alpha=0.7, width=0.25, color=colours[1], label="posterior distribution", lw="3", edgecolor=colours[1]) plt.ylim(0,1) plt.xticks([0.20, .95], ["Bugs Absent", "Bugs Present"]) plt.title("Prior and Posterior probability of bugs present") plt.ylabel("Probability") plt.legend(loc="upper left");
_____no_output_____
CC-BY-4.0
cracking-the-data-science-interview-master/cracking-the-data-science-interview-master/EBooks/Bayesian-Methods-for-Hackers/.ipynb_checkpoints/C1-Introduction-checkpoint.ipynb
anushka-DS/DS-Interview-Prep
1.3 Probability DistributionsLet's quickly recall what a probability distribution is: Let $Z$ be some random variable. Then associated with $Z$ is a probability distribution function that assigns probabilities to the different outcomes $Z$ can take. Graphically, a probability distribution is a curve where the probability of an outcome is proportional to the height of the curve. You can see examples in the first figure of this chapter.We can divide random variables into three classifications:* $Z$ is discrete: Discrete random variables may only assume values on a specified list. Things like populations, movie ratings, and number of votes are all discrete random variables. Discrete random variables become more clear when we contrast them with...* $Z$ is continuous: Continuous random variable can take on arbitrarily exact values. For example, temperature, speed, time, color are all modeled as continuous variables because you can progressively make the values more and more precise.* $Z$ is mixed: Mixed random variables assign probabilities to both discrete and continuous random variables, i.e. it is a combination of the above two categories. 1.3.1 Discrete CaseIf $Z$ is discrete, then its distribution is called a probability mass function, which measures the probability $Z$ takes on the value $k$, denoted $P(Z=k)$. Note that the probability mass function completely describes the random variable $Z$, that is, if we know the mass function, we know how $Z$ should behave. There are popular probability mass functions that consistently appear: we will introduce them as needed, but let's introduce the first very useful probability mass function. We say $Z$ is Poisson-distributed if:$$P(Z = k) =\frac{ \lambda^k e^{-\lambda} }{k!}, \; \; k=0,1,2, \dots $$$\lambda$ is called a parameter of the distribution, and it controls the distribution's shape. For the Poisson distribution, $\lambda$ can be any positive number. By increasing $\lambda$, we add more probability to larger values, and conversely by decreasing $\lambda$ we add more probability to smaller values. One can describe $\lambda$ as the intensity of the Poisson distribution.Unlike $\lambda$, which can be any positive number, the value $k$ in the above formula must be a non-negative integer, i.e., $k$ must take on values 0,1,2, and so on. This is very important, because if you wanted to model a population you could not make sense of populations with 4.25 or 5.612 members.If a random variable $Z$ has a Poisson mass distribution, we denote this by writing$$Z \sim \text{Poi}(\lambda) $$One useful property of the Poisson distribution is that its expected value is equal to its parameter, i.e.:$$E\large[ \;Z\; | \; \lambda \;\large] = \lambda $$We will use this property often, so it's useful to remember. Below, we plot the probability mass distribution for different $\lambda$ values. The first thing to notice is that by increasing $\lambda$, we add more probability of larger values occurring. Second, notice that although the graph ends at 15, the distributions do not. They assign positive probability to every non-negative integer.
figsize(12.5, 4) import scipy.stats as stats a = np.arange(16) poi = stats.poisson lambda_ = [1.5, 4.25] colours = ["#348ABD", "#A60628"] plt.bar(a, poi.pmf(a, lambda_[0]), color=colours[0], label="$\lambda = %.1f$" % lambda_[0], alpha=0.60, edgecolor=colours[0], lw="3") plt.bar(a, poi.pmf(a, lambda_[1]), color=colours[1], label="$\lambda = %.1f$" % lambda_[1], alpha=0.60, edgecolor=colours[1], lw="3") plt.xticks(a + 0.4, a) plt.legend() plt.ylabel("probability of $k$") plt.xlabel("$k$") plt.title("Probability mass function of a Poisson random variable; differing \ $\lambda$ values")
_____no_output_____
CC-BY-4.0
cracking-the-data-science-interview-master/cracking-the-data-science-interview-master/EBooks/Bayesian-Methods-for-Hackers/.ipynb_checkpoints/C1-Introduction-checkpoint.ipynb
anushka-DS/DS-Interview-Prep
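A quick numerical sanity check (a sketch, not part of the original text) that the expected value of a Poisson random variable equals its parameter $\lambda$:

```python
import scipy.stats as stats

lambda_ = 4.25
samples = stats.poisson.rvs(lambda_, size=100000)
print("theoretical mean:", stats.poisson.mean(lambda_))   # equals lambda
print("sample mean:     ", samples.mean())                # close to lambda
```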
1.3.2 Continuous CaseInstead of a probability mass function, a continuous random variable has a probability density function. This might seem like unnecessary nomenclature, but the density function and the mass function are very different creatures. An example of continuous random variable is a random variable with exponential density. The density function for an exponential random variable looks like this:$$f_Z(z | \lambda) = \lambda e^{-\lambda z }, \;\; z\ge 0$$Like a Poisson random variable, an exponential random variable can take on only non-negative values. But unlike a Poisson variable, the exponential can take on any non-negative values, including non-integral values such as 4.25 or 5.612401. This property makes it a poor choice for count data, which must be an integer, but a great choice for time data, temperature data (measured in Kelvins, of course), or any other precise and positive variable. The graph below shows two probability density functions with different $\lambda$ values.When a random variable $Z$ has an exponential distribution with parameter $\lambda$, we say $Z$ is exponential and write$$Z \sim \text{Exp}(\lambda)$$Given a specific $\lambda$, the expected value of an exponential random variable is equal to the inverse of $\lambda$, that is:$$E[\; Z \;|\; \lambda \;] = \frac{1}{\lambda}$$
a = np.linspace(0, 4, 100) expo = stats.expon lambda_ = [0.5, 1] for l, c in zip(lambda_, colours): plt.plot(a, expo.pdf(a, scale=1. / l), lw=3, color=c, label="$\lambda = %.1f$" % l) plt.fill_between(a, expo.pdf(a, scale=1. / l), color=c, alpha=.33) plt.legend() plt.ylabel("PDF at $z$") plt.xlabel("$z$") plt.ylim(0, 1.2) plt.title("Probability density function of an Exponential random variable;\ differing $\lambda$");
_____no_output_____
CC-BY-4.0
cracking-the-data-science-interview-master/cracking-the-data-science-interview-master/EBooks/Bayesian-Methods-for-Hackers/.ipynb_checkpoints/C1-Introduction-checkpoint.ipynb
anushka-DS/DS-Interview-Prep
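The same kind of sanity check for the exponential distribution (a sketch only); note that SciPy parameterizes the exponential by `scale` $= 1/\lambda$:

```python
import scipy.stats as stats

lambda_ = 0.5
print("theoretical mean:", stats.expon.mean(scale=1./lambda_))                    # = 1/lambda = 2.0
print("sample mean:     ", stats.expon.rvs(scale=1./lambda_, size=100000).mean())
```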
But what is $\lambda \;$?This question is what motivates statistics. In the real world, $\lambda$ is hidden from us. We see only $Z$, and must go backwards to try and determine $\lambda$. The problem is difficult because there is no one-to-one mapping from $Z$ to $\lambda$. Many different methods have been created to solve the problem of estimating $\lambda$, but since $\lambda$ is never actually observed, no one can say for certain which method is best!Bayesian inference is concerned with beliefs about what $\lambda$ might be. Rather than try to guess $\lambda$ exactly, we can only talk about what $\lambda$ is likely to be by assigning a probability distribution to $\lambda$.This might seem odd at first. After all, $\lambda$ is fixed; it is not (necessarily) random! How can we assign probabilities to values of a non-random variable? Ah, we have fallen for our old, frequentist way of thinking. Recall that under Bayesian philosophy, we can assign probabilities if we interpret them as beliefs. And it is entirely acceptable to have beliefs about the parameter $\lambda$. 1.3.3 Example: Inferring behaviour from text-message dataLet's try to model a more interesting example, one that concerns the rate at which a user sends and receives text messages:You are given a series of daily text-message counts from a user of your system. The data, plotted over time, appears in the chart below. You are curious to know if the user's text-messaging habits have changed over time, either gradually or suddenly. How can you model this? (This is in fact my own text-message data. Judge my popularity as you wish.)
figsize(12.5, 3.5) count_data = np.loadtxt("data/txtdata.csv") n_count_data = len(count_data) plt.bar(np.arange(n_count_data), count_data, color="#348ABD") plt.xlabel("Time (days)") plt.ylabel("count of text-msgs received") plt.title("Did the user's texting habits change over time?") plt.xlim(0, n_count_data);
_____no_output_____
CC-BY-4.0
cracking-the-data-science-interview-master/cracking-the-data-science-interview-master/EBooks/Bayesian-Methods-for-Hackers/.ipynb_checkpoints/C1-Introduction-checkpoint.ipynb
anushka-DS/DS-Interview-Prep
Before we start modeling, see what you can figure out just by looking at the chart above. Would you say there was a change in behaviour during this time period?How can we start to model this? Well, as we have conveniently already seen, a Poisson random variable is a very appropriate model for this type of count data. Denoting day $i$'s text-message count by $C_i$,$$ C_i \sim \text{Poisson}(\lambda) $$We are not sure what the value of the $\lambda$ parameter really is, however. Looking at the chart above, it appears that the rate might become higher late in the observation period, which is equivalent to saying that $\lambda$ increases at some point during the observations. (Recall that a higher value of $\lambda$ assigns more probability to larger outcomes. That is, there is a higher probability of many text messages having been sent on a given day.)How can we represent this observation mathematically? Let's assume that on some day during the observation period (call it $\tau$), the parameter $\lambda$ suddenly jumps to a higher value. So we really have two $\lambda$ parameters: one for the period before $\tau$, and one for the rest of the observation period. In the literature, a sudden transition like this would be called a switchpoint:$$\lambda = \begin{cases}\lambda_1 &amp; \text{if } t \lt \tau \cr\lambda_2 &amp; \text{if } t \ge \tau\end{cases}$$If, in reality, no sudden change occurred and indeed $\lambda_1 = \lambda_2$, then the $\lambda$s posterior distributions should look about equal.We are interested in inferring the unknown $\lambda$s. To use Bayesian inference, we need to assign prior probabilities to the different possible values of $\lambda$. What would be good prior probability distributions for $\lambda_1$ and $\lambda_2$? Recall that $\lambda$ can be any positive number. As we saw earlier, the exponential distribution provides a continuous density function for positive numbers, so it might be a good choice for modeling $\lambda_i$. But recall that the exponential distribution takes a parameter of its own, so we'll need to include that parameter in our model. Let's call that parameter $\alpha$.$$\begin{align}\lambda_1 \sim \text{Exp}( \alpha ) \\\\lambda_2 \sim \text{Exp}( \alpha )\end{align}$$$\alpha$ is called a hyper-parameter or parent variable. In literal terms, it is a parameter that influences other parameters. Our initial guess at $\alpha$ does not influence the model too strongly, so we have some flexibility in our choice. A good rule of thumb is to set the exponential parameter equal to the inverse of the average of the count data. Since we're modeling $\lambda$ using an exponential distribution, we can use the expected value identity shown earlier to get:$$\frac{1}{N}\sum_{i=0}^N \;C_i \approx E[\; \lambda \; |\; \alpha ] = \frac{1}{\alpha}$$An alternative, and something I encourage the reader to try, would be to have two priors: one for each $\lambda_i$. Creating two exponential distributions with different $\alpha$ values reflects our prior belief that the rate changed at some point during the observations.What about $\tau$? Because of the noisiness of the data, it's difficult to pick out a priori when $\tau$ might have occurred. Instead, we can assign a uniform prior belief to every possible day. This is equivalent to saying$$\begin{align}\tau \sim \text{DiscreteUniform(1,70) }\\\\\Rightarrow P( \tau = k ) = \frac{1}{70}\end{align}$$So after all this, what does our overall prior distribution for the unknown variables look like? Frankly, it doesn't matter. 
What we should understand is that it's an ugly, complicated mess involving symbols only a mathematician could love. And things will only get uglier the more complicated our models become. Regardless, all we really care about is the posterior distribution.We next turn to PyMC3, a Python library for performing Bayesian analysis that is undaunted by the mathematical monster we have created. 1.4 Introducing our first hammer: PyMC3PyMC3 is a Python library for programming Bayesian analysis [3]. It is a fast, well-maintained library. The only unfortunate part is that its documentation is lacking in certain areas, especially those that bridge the gap between beginner and hacker. One of this book's main goals is to solve that problem, and also to demonstrate why PyMC3 is so cool.We will model the problem above using PyMC3. This type of programming is called probabilistic programming, an unfortunate misnomer that invokes ideas of randomly-generated code and has likely confused and frightened users away from this field. The code is not random; it is probabilistic in the sense that we create probability models using programming variables as the model's components. Model components are first-class primitives within the PyMC3 framework.Because of the confusion engendered by the term probabilistic programming, I'll refrain from using it. Instead, I'll simply say programming, since that's what it really is.PyMC3 code is easy to read. The only novel thing should be the syntax. Simply remember that we are representing the model's components ($\tau, \lambda_1, \lambda_2$ ) as variables.
import pymc3 as pm
import theano.tensor as tt

with pm.Model() as model:
    alpha = 1.0/count_data.mean()  # Recall count_data is the
                                   # variable that holds our txt counts
    lambda_1 = pm.Exponential("lambda_1", alpha)
    lambda_2 = pm.Exponential("lambda_2", alpha)
    tau = pm.DiscreteUniform("tau", lower=0, upper=n_count_data - 1)
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`. from ._conv import register_converters as _register_converters
CC-BY-4.0
cracking-the-data-science-interview-master/cracking-the-data-science-interview-master/EBooks/Bayesian-Methods-for-Hackers/.ipynb_checkpoints/C1-Introduction-checkpoint.ipynb
anushka-DS/DS-Interview-Prep
In the code above, we create the PyMC3 variables corresponding to $\lambda_1$ and $\lambda_2$. We assign them to PyMC3's stochastic variables, so-called because they are treated by the back end as random number generators.
with model:
    idx = np.arange(n_count_data)  # Index
    lambda_ = pm.math.switch(tau > idx, lambda_1, lambda_2)
_____no_output_____
CC-BY-4.0
cracking-the-data-science-interview-master/cracking-the-data-science-interview-master/EBooks/Bayesian-Methods-for-Hackers/.ipynb_checkpoints/C1-Introduction-checkpoint.ipynb
anushka-DS/DS-Interview-Prep
This code creates a new function lambda_, but really we can think of it as a random variable: the random variable $\lambda$ from above. The switch() function assigns lambda_1 or lambda_2 as the value of lambda_, depending on what side of tau we are on. The values of lambda_ up until tau are lambda_1 and the values afterwards are lambda_2.Note that because lambda_1, lambda_2 and tau are random, lambda_ will be random. We are not fixing any variables yet.
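As a plain-NumPy illustration of what `switch` does here (a sketch only; the model itself keeps using the PyMC3/Theano version, and the numbers below are arbitrary example values):

```python
import numpy as np

# np.where plays the role of pm.math.switch for ordinary arrays:
# entries before the switchpoint get lambda_1, the rest get lambda_2.
idx = np.arange(10)
tau_example, lam1_example, lam2_example = 4, 18.0, 23.0
lambda_demo = np.where(tau_example > idx, lam1_example, lam2_example)
print(lambda_demo)   # [18. 18. 18. 18. 23. 23. 23. 23. 23. 23.]
```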
with model:
    observation = pm.Poisson("obs", lambda_, observed=count_data)
_____no_output_____
CC-BY-4.0
cracking-the-data-science-interview-master/cracking-the-data-science-interview-master/EBooks/Bayesian-Methods-for-Hackers/.ipynb_checkpoints/C1-Introduction-checkpoint.ipynb
anushka-DS/DS-Interview-Prep
The variable observation combines our data, count_data, with our proposed data-generation scheme, given by the variable lambda_, through the observed keyword.The code below will be explained in Chapter 3, but I show it here so you can see where our results come from. One can think of it as a learning step. The machinery being employed is called Markov Chain Monte Carlo (MCMC), which I also delay explaining until Chapter 3. This technique returns thousands of random variables from the posterior distributions of $\lambda_1, \lambda_2$ and $\tau$. We can plot a histogram of the random variables to see what the posterior distributions look like. Below, we collect the samples (called traces in the MCMC literature) into histograms.
with model: step = pm.Metropolis() trace = pm.sample(10000, tune=5000,step=step) lambda_1_samples = trace['lambda_1'] lambda_2_samples = trace['lambda_2'] tau_samples = trace['tau'] figsize(12.5, 10) #histogram of the samples: ax = plt.subplot(311) ax.set_autoscaley_on(False) plt.hist(lambda_1_samples, histtype='stepfilled', bins=30, alpha=0.85, label="posterior of $\lambda_1$", color="#A60628", normed=True) plt.legend(loc="upper left") plt.title(r"""Posterior distributions of the variables $\lambda_1,\;\lambda_2,\;\tau$""") plt.xlim([15, 30]) plt.xlabel("$\lambda_1$ value") ax = plt.subplot(312) ax.set_autoscaley_on(False) plt.hist(lambda_2_samples, histtype='stepfilled', bins=30, alpha=0.85, label="posterior of $\lambda_2$", color="#7A68A6", normed=True) plt.legend(loc="upper left") plt.xlim([15, 30]) plt.xlabel("$\lambda_2$ value") plt.subplot(313) w = 1.0 / tau_samples.shape[0] * np.ones_like(tau_samples) plt.hist(tau_samples, bins=n_count_data, alpha=1, label=r"posterior of $\tau$", color="#467821", weights=w, rwidth=2.) plt.xticks(np.arange(n_count_data)) plt.legend(loc="upper left") plt.ylim([0, .75]) plt.xlim([35, len(count_data)-20]) plt.xlabel(r"$\tau$ (in days)") plt.ylabel("probability");
_____no_output_____
CC-BY-4.0
cracking-the-data-science-interview-master/cracking-the-data-science-interview-master/EBooks/Bayesian-Methods-for-Hackers/.ipynb_checkpoints/C1-Introduction-checkpoint.ipynb
anushka-DS/DS-Interview-Prep
InterpretationRecall that Bayesian methodology returns a distribution. Hence we now have distributions to describe the unknown $\lambda$s and $\tau$. What have we gained? Immediately, we can see the uncertainty in our estimates: the wider the distribution, the less certain our posterior belief should be. We can also see what the plausible values for the parameters are: $\lambda_1$ is around 18 and $\lambda_2$ is around 23. The posterior distributions of the two $\lambda$s are clearly distinct, indicating that it is indeed likely that there was a change in the user's text-message behaviour.What other observations can you make? If you look at the original data again, do these results seem reasonable?Notice also that the posterior distributions for the $\lambda$s do not look like exponential distributions, even though our priors for these variables were exponential. In fact, the posterior distributions are not really of any form that we recognize from the original model. But that's OK! This is one of the benefits of taking a computational point of view. If we had instead done this analysis using mathematical approaches, we would have been stuck with an analytically intractable (and messy) distribution. Our use of a computational approach makes us indifferent to mathematical tractability.Our analysis also returned a distribution for $\tau$. Its posterior distribution looks a little different from the other two because it is a discrete random variable, so it doesn't assign probabilities to intervals. We can see that near day 45, there was a 50% chance that the user's behaviour changed. Had no change occurred, or had the change been gradual over time, the posterior distribution of $\tau$ would have been more spread out, reflecting that many days were plausible candidates for $\tau$. By contrast, in the actual results we see that only three or four days make any sense as potential transition points. Why would I want samples from the posterior, anyways?We will deal with this question for the remainder of the book, and it is an understatement to say that it will lead us to some amazing results. For now, let's end this chapter with one more example.We'll use the posterior samples to answer the following question: what is the expected number of texts at day $t, \; 0 \le t \le 70$ ? Recall that the expected value of a Poisson variable is equal to its parameter $\lambda$. Therefore, the question is equivalent to what is the expected value of $\lambda$ at time $t$?In the code below, let $i$ index samples from the posterior distributions. Given a day $t$, we average over all possible $\lambda_i$ for that day $t$, using $\lambda_i = \lambda_{1,i}$ if $t \lt \tau_i$ (that is, if the behaviour change has not yet occurred), else we use $\lambda_i = \lambda_{2,i}$.
figsize(12.5, 5) # tau_samples, lambda_1_samples, lambda_2_samples contain # N samples from the corresponding posterior distribution N = tau_samples.shape[0] expected_texts_per_day = np.zeros(n_count_data) for day in range(0, n_count_data): # ix is a bool index of all tau samples corresponding to # the switchpoint occurring prior to value of 'day' ix = day < tau_samples # Each posterior sample corresponds to a value for tau. # for each day, that value of tau indicates whether we're "before" # (in the lambda1 "regime") or # "after" (in the lambda2 "regime") the switchpoint. # by taking the posterior sample of lambda1/2 accordingly, we can average # over all samples to get an expected value for lambda on that day. # As explained, the "message count" random variable is Poisson distributed, # and therefore lambda (the poisson parameter) is the expected value of # "message count". expected_texts_per_day[day] = (lambda_1_samples[ix].sum() + lambda_2_samples[~ix].sum()) / N plt.plot(range(n_count_data), expected_texts_per_day, lw=4, color="#E24A33", label="expected number of text-messages received") plt.xlim(0, n_count_data) plt.xlabel("Day") plt.ylabel("Expected # text-messages") plt.title("Expected number of text-messages received") plt.ylim(0, 60) plt.bar(np.arange(len(count_data)), count_data, color="#348ABD", alpha=0.65, label="observed texts per day") plt.legend(loc="upper left");
_____no_output_____
CC-BY-4.0
cracking-the-data-science-interview-master/cracking-the-data-science-interview-master/EBooks/Bayesian-Methods-for-Hackers/.ipynb_checkpoints/C1-Introduction-checkpoint.ipynb
anushka-DS/DS-Interview-Prep
Tight Layout guide

How to use tight-layout to fit plots within your figure cleanly.

*tight_layout* automatically adjusts subplot params so that the subplot(s) fits into the figure area. This is an experimental feature and may not work for some cases. It only checks the extents of ticklabels, axis labels, and titles.

An alternative to *tight_layout* is :doc:`constrained_layout`.

Simple Example
==============

In matplotlib, the location of axes (including subplots) is specified in normalized figure coordinates. It can happen that your axis labels or titles (or sometimes even ticklabels) go outside the figure area, and are thus clipped.
# sphinx_gallery_thumbnail_number = 7 import matplotlib.pyplot as plt import numpy as np plt.rcParams['savefig.facecolor'] = "0.8" def example_plot(ax, fontsize=12): ax.plot([1, 2]) ax.locator_params(nbins=3) ax.set_xlabel('x-label', fontsize=fontsize) ax.set_ylabel('y-label', fontsize=fontsize) ax.set_title('Title', fontsize=fontsize) plt.close('all') fig, ax = plt.subplots() example_plot(ax, fontsize=24)
_____no_output_____
MIT
testing/examples/tight_layout_guide.ipynb
pchaos/quanttesting
To prevent this, the location of axes needs to be adjusted. For subplots, this can be done by adjusting the subplot params (`howto-subplots-adjust`). Matplotlib v1.1 introduces a new command :func:`~matplotlib.pyplot.tight_layout` that does this automatically for you.
fig, ax = plt.subplots() example_plot(ax, fontsize=24) plt.tight_layout()
_____no_output_____
MIT
testing/examples/tight_layout_guide.ipynb
pchaos/quanttesting
Note that :func:`matplotlib.pyplot.tight_layout` will only adjust the subplot params when it is called. In order to perform this adjustment each time the figure is redrawn, you can call ``fig.set_tight_layout(True)``, or, equivalently, set the ``figure.autolayout`` rcParam to ``True``.

When you have multiple subplots, often you see labels of different axes overlapping each other.
plt.close('all') fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(nrows=2, ncols=2) example_plot(ax1) example_plot(ax2) example_plot(ax3) example_plot(ax4)
_____no_output_____
MIT
testing/examples/tight_layout_guide.ipynb
pchaos/quanttesting
:func:`~matplotlib.pyplot.tight_layout` will also adjust spacing between subplots to minimize the overlaps.
fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(nrows=2, ncols=2) example_plot(ax1) example_plot(ax2) example_plot(ax3) example_plot(ax4) plt.tight_layout()
_____no_output_____
MIT
testing/examples/tight_layout_guide.ipynb
pchaos/quanttesting
:func:`~matplotlib.pyplot.tight_layout` can take keyword arguments of *pad*, *w_pad* and *h_pad*. These control the extra padding around the figure border and between subplots. The pads are specified in fraction of fontsize.
fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(nrows=2, ncols=2) example_plot(ax1) example_plot(ax2) example_plot(ax3) example_plot(ax4) plt.tight_layout(pad=0.4, w_pad=0.5, h_pad=1.0)
_____no_output_____
MIT
testing/examples/tight_layout_guide.ipynb
pchaos/quanttesting
:func:`~matplotlib.pyplot.tight_layout` will work even if the sizes of subplots are different, as far as their grid specification is compatible. In the example below, *ax1* and *ax2* are subplots of a 2x2 grid, while *ax3* is of a 1x2 grid.
plt.close('all') fig = plt.figure() ax1 = plt.subplot(221) ax2 = plt.subplot(223) ax3 = plt.subplot(122) example_plot(ax1) example_plot(ax2) example_plot(ax3) plt.tight_layout()
_____no_output_____
MIT
testing/examples/tight_layout_guide.ipynb
pchaos/quanttesting
It works with subplots created with :func:`~matplotlib.pyplot.subplot2grid`. In general, subplots created from the gridspec (:doc:`/tutorials/intermediate/gridspec`) will work.
plt.close('all') fig = plt.figure() ax1 = plt.subplot2grid((3, 3), (0, 0)) ax2 = plt.subplot2grid((3, 3), (0, 1), colspan=2) ax3 = plt.subplot2grid((3, 3), (1, 0), colspan=2, rowspan=2) ax4 = plt.subplot2grid((3, 3), (1, 2), rowspan=2) example_plot(ax1) example_plot(ax2) example_plot(ax3) example_plot(ax4) plt.tight_layout()
_____no_output_____
MIT
testing/examples/tight_layout_guide.ipynb
pchaos/quanttesting
Although not thoroughly tested, it seems to work for subplots with aspect != "auto" (e.g., axes with images).
arr = np.arange(100).reshape((10, 10)) plt.close('all') fig = plt.figure(figsize=(5, 4)) ax = plt.subplot(111) im = ax.imshow(arr, interpolation="none") plt.tight_layout()
_____no_output_____
MIT
testing/examples/tight_layout_guide.ipynb
pchaos/quanttesting
Caveats
=======

* :func:`~matplotlib.pyplot.tight_layout` only considers ticklabels, axis labels, and titles. Thus, other artists may be clipped and also may overlap.

* It assumes that the extra space needed for ticklabels, axis labels, and titles is independent of the original location of axes. This is often true, but there are rare cases where it is not.

* pad=0 clips some of the texts by a few pixels. This may be a bug or a limitation of the current algorithm and it is not clear why it happens. Meanwhile, use of pad at least larger than 0.3 is recommended.

Use with GridSpec
=================

GridSpec has its own :func:`~matplotlib.gridspec.GridSpec.tight_layout` method (the pyplot api :func:`~matplotlib.pyplot.tight_layout` also works).
import matplotlib.gridspec as gridspec plt.close('all') fig = plt.figure() gs1 = gridspec.GridSpec(2, 1) ax1 = fig.add_subplot(gs1[0]) ax2 = fig.add_subplot(gs1[1]) example_plot(ax1) example_plot(ax2) gs1.tight_layout(fig)
_____no_output_____
MIT
testing/examples/tight_layout_guide.ipynb
pchaos/quanttesting
You may provide an optional *rect* parameter, which specifies the bounding box that the subplots will be fit inside. The coordinates must be in normalized figure coordinates and the default is (0, 0, 1, 1).
fig = plt.figure() gs1 = gridspec.GridSpec(2, 1) ax1 = fig.add_subplot(gs1[0]) ax2 = fig.add_subplot(gs1[1]) example_plot(ax1) example_plot(ax2) gs1.tight_layout(fig, rect=[0, 0, 0.5, 1])
_____no_output_____
MIT
testing/examples/tight_layout_guide.ipynb
pchaos/quanttesting
For example, this can be used for a figure with multiple gridspecs.
fig = plt.figure() gs1 = gridspec.GridSpec(2, 1) ax1 = fig.add_subplot(gs1[0]) ax2 = fig.add_subplot(gs1[1]) example_plot(ax1) example_plot(ax2) gs1.tight_layout(fig, rect=[0, 0, 0.5, 1]) gs2 = gridspec.GridSpec(3, 1) for ss in gs2: ax = fig.add_subplot(ss) example_plot(ax) ax.set_title("") ax.set_xlabel("") ax.set_xlabel("x-label", fontsize=12) gs2.tight_layout(fig, rect=[0.5, 0, 1, 1], h_pad=0.5) # We may try to match the top and bottom of two grids :: top = min(gs1.top, gs2.top) bottom = max(gs1.bottom, gs2.bottom) gs1.update(top=top, bottom=bottom) gs2.update(top=top, bottom=bottom) plt.show()
_____no_output_____
MIT
testing/examples/tight_layout_guide.ipynb
pchaos/quanttesting
While this should be mostly good enough, adjusting top and bottom may require adjustment of hspace also. To update hspace & vspace, we call :func:`~matplotlib.gridspec.GridSpec.tight_layout` again with an updated rect argument. Note that the rect argument specifies the area including the ticklabels, etc. Thus, we will increase the bottom (which is 0 for the normal case) by the difference between the *bottom* from above and the bottom of each gridspec. Same thing for the top.
fig = plt.gcf() gs1 = gridspec.GridSpec(2, 1) ax1 = fig.add_subplot(gs1[0]) ax2 = fig.add_subplot(gs1[1]) example_plot(ax1) example_plot(ax2) gs1.tight_layout(fig, rect=[0, 0, 0.5, 1]) gs2 = gridspec.GridSpec(3, 1) for ss in gs2: ax = fig.add_subplot(ss) example_plot(ax) ax.set_title("") ax.set_xlabel("") ax.set_xlabel("x-label", fontsize=12) gs2.tight_layout(fig, rect=[0.5, 0, 1, 1], h_pad=0.5) top = min(gs1.top, gs2.top) bottom = max(gs1.bottom, gs2.bottom) gs1.update(top=top, bottom=bottom) gs2.update(top=top, bottom=bottom) top = min(gs1.top, gs2.top) bottom = max(gs1.bottom, gs2.bottom) gs1.tight_layout(fig, rect=[None, 0 + (bottom-gs1.bottom), 0.5, 1 - (gs1.top-top)]) gs2.tight_layout(fig, rect=[0.5, 0 + (bottom-gs2.bottom), None, 1 - (gs2.top-top)], h_pad=0.5)
_____no_output_____
MIT
testing/examples/tight_layout_guide.ipynb
pchaos/quanttesting
Legends and Annotations ======================= Before Matplotlib 2.2, legends and annotations were excluded from the bounding box calculations that decide the layout. Subsequently these artists were added to the calculation, but sometimes it is undesirable to include them. For instance, in this case it might be good to have the axes shrink a bit to make room for the legend:
fig, ax = plt.subplots(figsize=(4, 3)) lines = ax.plot(range(10), label='A simple plot') ax.legend(bbox_to_anchor=(0.7, 0.5), loc='center left',) fig.tight_layout() plt.show()
_____no_output_____
MIT
testing/examples/tight_layout_guide.ipynb
pchaos/quanttesting
However, sometimes this is not desired (quite often when using ``fig.savefig('outname.png', bbox_inches='tight')``). In order to remove the legend from the bounding box calculation, we simply call ``leg.set_in_layout(False)`` and the legend will be ignored.
fig, ax = plt.subplots(figsize=(4, 3)) lines = ax.plot(range(10), label='B simple plot') leg = ax.legend(bbox_to_anchor=(0.7, 0.5), loc='center left',) leg.set_in_layout(False) fig.tight_layout() plt.show()
_____no_output_____
MIT
testing/examples/tight_layout_guide.ipynb
pchaos/quanttesting
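If you still want the legend to appear in the saved file, one option mentioned above is ``bbox_inches='tight'`` at save time. The snippet below is only a sketch of how the two can be combined (it assumes Matplotlib 3.0 or later, and 'outname.png' is just a placeholder filename): exclude the legend while computing the layout, then include it again before saving.

```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(4, 3))
lines = ax.plot(range(10), label='C simple plot')
leg = ax.legend(bbox_to_anchor=(0.7, 0.5), loc='center left')
leg.set_in_layout(False)   # ignore the legend when tight_layout computes spacing
fig.tight_layout()         # axes are sized as if the legend were absent
leg.set_in_layout(True)    # include the legend again so savefig keeps it
fig.savefig('outname.png', bbox_inches='tight')  # placeholder filename
```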
Use with AxesGrid1 ================== While limited, the axes_grid1 toolkit is also supported.
from mpl_toolkits.axes_grid1 import Grid plt.close('all') fig = plt.figure() grid = Grid(fig, rect=111, nrows_ncols=(2, 2), axes_pad=0.25, label_mode='L', ) for ax in grid: example_plot(ax) ax.title.set_visible(False) plt.tight_layout()
_____no_output_____
MIT
testing/examples/tight_layout_guide.ipynb
pchaos/quanttesting
Colorbar ======== If you create a colorbar with the :func:`~matplotlib.pyplot.colorbar` command, the created colorbar is an instance of Axes, *not* Subplot, so tight_layout does not work. With Matplotlib v1.1, you may create a colorbar as a subplot using the gridspec.
plt.close('all') arr = np.arange(100).reshape((10, 10)) fig = plt.figure(figsize=(4, 4)) im = plt.imshow(arr, interpolation="none") plt.colorbar(im, use_gridspec=True) plt.tight_layout()
_____no_output_____
MIT
testing/examples/tight_layout_guide.ipynb
pchaos/quanttesting
Another option is to use the AxesGrid1 toolkit to explicitly create an axes for the colorbar.
from mpl_toolkits.axes_grid1 import make_axes_locatable plt.close('all') arr = np.arange(100).reshape((10, 10)) fig = plt.figure(figsize=(4, 4)) im = plt.imshow(arr, interpolation="none") divider = make_axes_locatable(plt.gca()) cax = divider.append_axes("right", "5%", pad="3%") plt.colorbar(im, cax=cax) plt.tight_layout()
_____no_output_____
MIT
testing/examples/tight_layout_guide.ipynb
pchaos/quanttesting
Energy Minimization Assignment, PharmSci 175/275 By David Mobley (UCI), Jan. 2018 Adapted with permission from an assignment by M. Scott Shell (UCSB) Overview In this assignment, you will begin with a provided template and several functions, as well as a Fortran library, and add additional code to perform a conjugate-gradient minimization. That is, you will write a conjugate-gradient minimizer. You will then apply this minimizer to generate Lennard-Jones clusters with varying numbers of particles, and look at how the energy varies as a function of cluster size. The Jupyter notebook for this assignment is laid out with action items YOU need to take labeled as Steps 1-7. These are interspersed with background information on the problem, some examples, and a sandbox section to tinker with some of the functions. So read on, and watch for the sections which require your action. What are Lennard-Jones clusters? Clusters are small, stable packings of (often spherical) particles. These particles could be colloidal particles, nanoparticles, etc. There has been considerable work studying these clusters over the years, from atomic sizes up to colloidal particles in the nanometer to micrometer scale. Cluster analysis is important to understanding a range of phenomena, including structures of solids, aggregation and precipitation of particles, the structure of nanomaterials, self-assembly behavior of synthetic and biomolecular systems, and diffusion in dense liquids. A cluster can be characterized by the number and type of particles and the energetic interactions between them. Here, we will examine Lennard-Jones (LJ) clusters, which are clusters of simple attractive spherical particles with interactions modeled by the Lennard-Jones interaction. For LJ clusters, there are cluster sizes of unusual stability. These are called magic number clusters and correspond to cluster sizes where the packing of atoms is particularly efficient, leading to very favorable energies and hence exceptional stability. The most stable such clusters are built from an icosahedral arrangement of particles, and the first few such **magic numbers** for cluster sizes of icosahedral geometries are 13, 19, 38, 55, and 75. These clusters are still interesting from a basic physical chemistry point of view, but our interest here is mainly in (a) energy minimization algorithms, and (b) learning how to do non-trivial numerics in Python. Here, we will energy minimize Lennard-Jones clusters of different sizes Here, we will examine different numbers of particles and attempt to find the minimum energy cluster for each number of particles. Our energy model will use the LJ potential in its dimensionless form (meaning that we have changed units so that all of the constants are hidden in the units). We denote this by putting a star on the potential:\begin{equation}U^* = \sum \limits_{i<j} 4(r_{ij}^{-12} - r_{ij}^{-6})\end{equation}We will start with a random initial configuration of particles, and try to use an energy minimization algorithm to find the most stable configuration of particles. But when there are more than just a few particles, there will be more than one local minimum, so there is no guarantee the energy minimizer will find the global minimum. In such cases, we will need to minimize from random initial configurations many times in order to ensure we locate the global minimum. There is also the possibility of forming multiple separate clusters separated by a significant distance.
This is not unlikely, since the LJ interaction is only very weakly attractive at large distances. So, to ensure we form single clusters, we will use a weak biasing potential to pull all of the particles towards the origin, biasing the system towards forming a single cluster. Otherwise, the LJ potential will tend to be too weak to pull together very distant particles in these tests. We will use a harmonic biasing potential, so that the total potential energy (“force field”) is:\begin{equation}U^* = \sum\limits_i \alpha |\mathbf{r}_i|^2 + \sum \limits_{i<j} 4(r_{ij}^{-12} - r_{ij}^{-6})\end{equation}Here we will use $\alpha = 0.0001 N^{-2/3}$ where $N$ is the number of particles; this will be a very small number. This particular form is selected so that the energy due to this term for a cluster of $N$ particles is, on average, constant regardless of $N$. Additional details For this assignment, your job is to perform a conjugate-gradient minimization of Lennard-Jones particles which are initially randomly distributed in space. I am providing several items for you:* A Fortran library (emlib) which you can use within Python to calculate energies and forces* A Python library (pos_to_pdb.py) which you can use to output structures to your disk to visualize motion of your particles (for example with PyMol) if you so desire* A template for your assignment (below) in iPython notebook format; this also will be posted on the class website in plain Python format in case my experiment with iPython notebooks fails here. * This template contains some code which will get you started, including code for a line search minimization. * It also contains places for you to write the functions (outlined below) you need to write to complete the assignment. A quick (but important) note on Notebook usage: iPython notebooks such as this one often contain a variety of cells containing code. These are normally intended to be run, which you can do when you have an individual cell selected, by hitting the button at the top with a 'play' symbol, or by typing shift-enter. If you do NOT do so on each of the cells defining variables/functions which will be used here, then you will encounter an error about undefined variables when you run the later cells. Your step 1 for the assignment: Start by doing some file bookkeeping: * Find `emlib.f90` and optional utility `pos_to_pdb.py` in this directory. * In the command prompt navigate to that folder and type 'f2py -c -m emlib emlib.f90' which should compile the Fortran library for use within Python (For more on F2Py, refer to the [f2py documentation](https://docs.scipy.org/doc/numpy-dev/f2py/)). In OS X, this may require you to install the (free) XCode developer tools (available from the Mac App store) and the command-line developer tools first (the latter via `xcode-select --install`). In Linux it should just work. Windows would be a hurdle. * In your command prompt, start this Jupyter notebook (in OSX this would be something like 'Jupyter notebook 272EnergyMinimization'), which should open it in your web browser; you're running it already unless you are looking at the HTML version of this file. Template Python code for the assignment is provided below. I suggest making a new notebook which is a copy of this one (perhaps containing your name in the filename) and working from there. Next, we prep Python for the work: First we import the numpy numerical library we're going to need, as well as the compiled Fortran library emlib
import numpy as np import emlib from pos_to_pdb import * #This would allow you to export coordinates if you want, later
_____no_output_____
CC-BY-4.0
uci-pharmsci/assignments/energy_minimization/energy_minimization_assignment.ipynb
matthagy/drug-computing
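As a sanity check on what `emlib` computes, here is a minimal pure-NumPy sketch of the dimensionless force field defined above (the harmonic bias plus the pairwise LJ terms). This is only for illustration and for checking your understanding: the function name and test values are made up, and in the assignment itself you should call `emlib.calcenergy` / `emlib.calcforces`, which are much faster. Whether the numbers match `emlib.calcenergy` exactly depends on how the bias constant is handled inside the Fortran code, so treat this as a conceptual reference only.

```python
import numpy as np

def lj_cluster_energy(Pos, alpha):
    """Slow illustrative version of U*: harmonic bias plus pairwise LJ terms.
    Pos is an (N,3) position array; alpha is the bias strength 0.0001 * N**(-2/3)."""
    energy = alpha * np.sum(Pos * Pos)          # sum_i alpha * |r_i|^2
    N = len(Pos)
    for i in range(N - 1):
        # distances from particle i to all particles j > i
        rij = np.sqrt(np.sum((Pos[i + 1:] - Pos[i])**2, axis=1))
        energy += np.sum(4.0 * (rij**-12 - rij**-6))
    return energy

# quick check on an arbitrary random 5-particle configuration
test_pos = np.random.random((5, 3)) * 3.0
print(lj_cluster_energy(test_pos, alpha=0.0001 * 5**(-2.0 / 3.0)))
```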
Important technical note: Unit masses, etc. Note that all of the following code will assume unit atomic masses, such that forces and accelerations are equal -- that is, instead of $F=ma$ we write $F=a$, assuming that $m=1$. We also drop most constants. This is a relatively common trick in physics when you are interested only in seeing how the basic equations work, and not in quantitative comparison with experimental numbers. This is often described as working in "dimensionless units". Then we define the LineSearch function: Here is the `LineSearch` function which is provided for you. Read the prototype (definition) and documentation to understand what it needs and what it will do (note that you do NOT need to read all the code):
def LineSearch(Pos, Dir, dx, EFracTol, Accel = 1.5, MaxInc = 10., MaxIter = 10000): """Performs a line search along direction Dir. Input: Pos: starting positions, (N,3) array Dir: (N,3) array of gradient direction dx: initial step amount, a float EFracTol: fractional energy tolerance Accel: acceleration factor MaxInc: the maximum increase in energy for bracketing MaxIter: maximum number of iteration steps Output: PEnergy: value of potential energy at minimum along Dir PosMin: minimum energy (N,3) position array along Dir """ #start the iteration counter Iter = 0 #find the normalized direction NormDir = Dir / np.sqrt(np.sum(Dir * Dir)) #take the first two steps and compute energies Dists = [0., dx] PEs = [emlib.calcenergy(Pos + NormDir * x) for x in Dists] #if the second point is not downhill in energy, back #off and take a shorter step until we find one while PEs[1] > PEs[0]: Iter += 1 dx = dx * 0.5 Dists[1] = dx PEs[1] = emlib.calcenergy(Pos + NormDir * dx) #find a third point Dists.append( 2. * dx ) PEs.append( emlib.calcenergy(Pos + NormDir * 2. * dx) ) #keep stepping forward until the third point is higher #in energy; then we have bracketed a minimum while PEs[2] < PEs[1]: Iter += 1 #find a fourth point and evaluate energy Dists.append( Dists[-1] + dx ) PEs.append( emlib.calcenergy(Pos + NormDir * Dists[-1]) ) #check if we increased too much in energy; if so, back off if (PEs[3] - PEs[0]) > MaxInc * (PEs[0] - PEs[2]): PEs = PEs[:3] Dists = Dists[:3] dx = dx * 0.5 else: #shift all of the points over PEs = PEs[-3:] Dists = Dists[-3:] dx = dx * Accel #we've bracketed a minimum; now we want to find it to high #accuracy OldPE3 = 1.e300 while True: Iter += 1 if Iter > MaxIter: print("Warning: maximum number of iterations reached in line search.") break #store distances for ease of code-reading d0, d1, d2 = Dists PE0, PE1, PE2 = PEs #use a parobolic approximation to estimate the location #of the minimum d10 = d0 - d1 d12 = d2 - d1 Num = d12*d12*(PE0-PE1) - d10*d10*(PE2-PE1) Dem = d12*(PE0-PE1) - d10*(PE2-PE1) if Dem == 0: #parabolic extrapolation won't work; set new dist = 0 d3 = 0 else: #location of parabolic minimum d3 = d1 + 0.5 * Num / Dem #compute the new potential energy PE3 = emlib.calcenergy(Pos + NormDir * d3) #sometimes the parabolic approximation can fail; #check if d3 is out of range < d0 or > d2 or the new energy is higher if d3 < d0 or d3 > d2 or PE3 > PE0 or PE3 > PE1 or PE3 > PE2: #instead, just compute the new distance by bisecting two #of the existing points along the line search if abs(d2 - d1) > abs(d0 - d1): d3 = 0.5 * (d2 + d1) else: d3 = 0.5 * (d0 + d1) PE3 = emlib.calcenergy(Pos + NormDir * d3) #decide which three points to keep; we want to keep #the three that are closest to the minimum if d3 < d1: if PE3 < PE1: #get rid of point 2 Dists, PEs = [d0, d3, d1], [PE0, PE3, PE1] else: #get rid of point 0 Dists, PEs = [d3, d1, d2], [PE3, PE1, PE2] else: if PE3 < PE1: #get rid of point 0 Dists, PEs = [d1, d3, d2], [PE1, PE3, PE2] else: #get rid of point 2 Dists, PEs = [d0, d1, d3], [PE0, PE1, PE3] #check how much we've changed if abs(OldPE3 - PE3) < EFracTol * abs(PE3): #the fractional change is less than the tolerance, #so we are done and can exit the loop break OldPE3 = PE3 #return the position array at the minimum (point 1) PosMin = Pos + NormDir * Dists[1] PEMin = PEs[1] return PEMin, PosMin
_____no_output_____
CC-BY-4.0
uci-pharmsci/assignments/energy_minimization/energy_minimization_assignment.ipynb
matthagy/drug-computing
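Before moving on, it can help to see `LineSearch` in action once. The call below is just an illustration with arbitrary test values; it assumes you have compiled `emlib` and run the cells above, and it uses the force as the (steepest-descent) search direction, just as your conjugate-gradient code will on its first step.

```python
# one line search along the initial force direction (illustration only)
Pos = np.random.random((10, 3)) * 10.0   # 10 particles scattered in an arbitrary box
Dir = emlib.calcforces(Pos)              # downhill direction = force
PEMin, PosMin = LineSearch(Pos, Dir, dx=0.001, EFracTol=1.0e-8)
print("Energy before:", emlib.calcenergy(Pos), " after line search:", PEMin)
```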
Step 2: Write a function to assign random initial positions to your atoms We need a function that can randomly place N atoms in a box with sides of length L. Write a function based on a tool from the numpy 'random' module to do this. Some hints are in order:* NumPy contains a 'random' module which is good for obtaining random numbers and/or arrays. For example, if you have imported numpy as np, then np.random.random(shape) returns a random array with the specified shape (i.e. 'np.random.random(3)' would be a length-3 array) with elements randomly selected between 0 and 1. Try this out:
a = np.random.random(3) print("a=\n",a) b = np.random.random((2,3)) print("b=\n",b)
a= [ 0.27969529 0.37836589 0.96785443] b= [[ 0.37068791 0.64081204 0.21422213] [ 0.471194 0.28575791 0.54468387]]
CC-BY-4.0
uci-pharmsci/assignments/energy_minimization/energy_minimization_assignment.ipynb
matthagy/drug-computing
* Note that in your function, you want the numbers to run from 0 to L. You might try out what happens if you multiply 'a' and 'b' in the code above by some number. Now, write your function. I've written the doc string and some comments for you, but you have to fill in its inner workings:
def InitPositions(N, L): """Returns an array of initial positions of each atom, placed randomly within a box of dimensions L. Input: N: number of atoms L: box width Output: Pos: (N,3) array of positions """ #### WRITE YOUR CODE HERE #### ## In my code, I can accomplish this function in 1 line ## using a numpy function. ## Yours can be longer if you want. It's more important that it be right than that it be short. return Pos
_____no_output_____
CC-BY-4.0
uci-pharmsci/assignments/energy_minimization/energy_minimization_assignment.ipynb
matthagy/drug-computing
Step 3: Write the Conjugate Gradient function described belowFill in code for the ConjugateGradient function below based on the discussion in class and below, supplemented by your reading of Leach's book (and other online sources if needed). Some additional guidance and hints are warranted first. Hints for ConjugateGradient:* As discussed/demonstrated above, a LineSearch function is already provided for you here* You are going to want to keep doing line searches until the energy stops changing. Write a loop to do this, and store your evaluated energies as you go.* A fortran library, `emlib`, is provided for you to calculate energies and forces. You should be able to ask for 'help(emlib)' for info on usage. You can also look directly at the Fortran code if you would like, though this may be less helpful.* You can get the potential energy and forces using the provided library using functions from emlib. For example, if `Pos` is an array of positions: `PEnergy, Forces = emlib.calcenergyforces(Pos)` `Forces = emlib.calcforces( Pos )`* Conjugate gradient does not specify an initial direction. Your initial search should be in the direction of the force. * After the initial line search, subsequent line search directions $i$ should be found using this expression for $v_i$, the direction matrix: \begin{equation} \mathbf{v}_i^N = \mathbf{f}_i^N + \gamma_i \mathbf{v}_{i-1}^N \end{equation} where \begin{equation} \gamma_i = \frac{ (\mathbf{f}_i^N-\mathbf{f}_{i-1}^N) \mathbf{f}_i^N}{\mathbf{f}_{i-1}^N \mathbf{f}_{i-1}^N} \end{equation} Note that here, $\mathbf{f}_i^N$ denotes the force on the particles at step $i$ (and hence it has 3N dimensions - $x$, $y$, and $z$ for each particle) and $\mathbf{f}_{i-1}^N$ is the force at the last ($i-1$) step. Note that the forces are a collection of vectors, one vector for the force on each particle. $\gamma_i$ should be just a number (scalar). Hint: Note that if you have a force array consisting of a set of vectors, the product you want inside the equation for $\gamma_i$ should be an element-by-element multiplication, not a dot or inner product. **Be sure to see the helpful tips about how to calculate this which were given in the energy minimization lecture**! * You want to end up at the point, in your code, where you can obtain the new direction by calculating something like `Dir = newForces + gamma * Dir`* Continue successive line searches in your CG minimization until the fractional change in energy on subsequent searches is less than the tolerance. That is, you'll set it up to use an `EFracTolCG` variable and continue until this criteria is met (where $U_i$ is the potential energy at the present step): \begin{equation}\left|U_i-U_{i-1}\right| < EFracTolCG \times \left| U_i\right|\end{equation}* To debug your code, you will probably want to initially use `print` statements in the minimization routine to monitor the energy as a function of step to make sure it's doing the right thing! Now actually write ConjugateGradient:
def ConjugateGradient(Pos, dx, EFracTolLS, EFracTolCG): """Performs a conjugate gradient search. Input: Pos: starting positions, (N,3) array dx: initial step amount EFracTolLS: fractional energy tolerance for line search EFracTolCG: fractional energy tolerance for conjugate gradient Output: PEnergy: value of potential energy at minimum Pos: minimum energy (N,3) position array """ #### WRITE YOUR CODE HERE #### ## In my code, I can accomplish this function in 10 lines ### #A return statement you may/will use to finish things off return PEnergy, Pos
_____no_output_____
CC-BY-4.0
uci-pharmsci/assignments/energy_minimization/energy_minimization_assignment.ipynb
matthagy/drug-computing
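To make the hint about the element-by-element product concrete, here is a short runnable sketch (assuming `emlib` is compiled and the cells above have been run) of one steepest-descent line search followed by a single conjugate-gradient direction update. The variable names are hypothetical, and this is only a fragment of what your function needs; the loop, the convergence test on the energy, and the return values are up to you.

```python
# sketch: one steepest-descent step, then one conjugate-gradient direction update
Pos = np.random.random((10, 3)) * 10.0        # arbitrary starting configuration
Dir = emlib.calcforces(Pos)                   # initial search direction = force
OldForces = Dir                               # f_(i-1) for the first update
PE, Pos = LineSearch(Pos, Dir, dx=0.001, EFracTol=1.0e-8)

Forces = emlib.calcforces(Pos)                # f_i at the new positions
Gamma = np.sum((Forces - OldForces) * Forces) / np.sum(OldForces * OldForces)
Dir = Forces + Gamma * Dir                    # v_i = f_i + gamma_i * v_(i-1)
OldForces = Forces                            # becomes f_(i-1) on the next iteration
```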
Step 4: Energy minimize a variety of clusters, storing energies Write code to use the functions you wrote above, plus the emlib module, to energy minimize clusters of various sizes. Loop over clusters from size N=2 to (and including) N=25. For each particle number, do the following:* Perform K (to be specified below in the section on graphing) minimizations, each starting from a different random configuration of particles * Store the K energies to a list * Display the minimum, average, and standard deviation of the minimized energies for the trials. Note standard deviations can be calculated with the numpy.std function (`np.std()`)* After doing this, you'll be tasked with making some plots. Use the following settings:* `dx = 0.001`* `EFracTolCG = 1.0e-10`* `EFracTolLS = 1.0e-8`* And place the particles with L chosen such that the average number density of particles ($N/V$, where $V=L^3$) is $0.001$. That is, for every $N$, solve for $L$ such that $N/L^3 = 0.001$. These are relatively typical settings for this kind of a system. **I suggest you do this first for just one N and K to make sure it works**. Then set up a loop over N and perhaps (if you like) a loop over K as well. Reserve the large K values until you are absolutely certain it's working. Note that if the computational time becomes prohibitive (i.e. if it runs more than overnight, or your computer is having difficulties handling the load) we can migrate your simulations to GreenPlanet. You can easily add visualization of a trajectory by adding, within your ConjugateGradient function's central loop, a call to the PosToPDB function of the pos_to_pdb module. Assuming you've done 'from pos_to_pdb import *' you'd add something like: `PosToPDB( Pos, L, 'mytrajectory.pdb')` within the loop inside your ConjugateGradient minimizer. This will write out each step of the minimization as a separate frame in a pdb file, which you can download with scp and view in PyMol to see exactly what's going on. Note that visualization (really, the file I/O and coordinate conversions) will slow things considerably, so I suggest you only do this in one specific case to check out what's going on, or to troubleshoot if things don't appear to be working. It should also be possible to add interactive visualization via `nglview` here, though I've not done that for you.* Hint: **You MAY want to use Python's pickle module to save out your data at the end of your calculations, since the next step involves plotting your data and you may want to easily be able to read it back in**. At the very least - whether you save it to disk or not - you'll want to store it (specifically, the minimum and average energies at each N) to variables for later reuse. If you had the variable `energies` containing all of the energies obtained at K = 10000 you might dump this using:
import pickle file = open('energies.pickle', "wb") pickle.dump( energies, file) file.close() #To load again, use: #file = open("energies.pickle", "rb") #energies = pickle.load(file) #file.close()
_____no_output_____
CC-BY-4.0
uci-pharmsci/assignments/energy_minimization/energy_minimization_assignment.ipynb
matthagy/drug-computing
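Two small pieces of the setup above, shown as a sketch with hypothetical variable names: solving for the box width L from the target number density, and summarizing a list of minimized energies (filled here with random placeholder numbers only so the cell runs).

```python
import numpy as np

# box width L for a target number density N / L**3 = 0.001
N = 13                                # example cluster size
L = (N / 0.001)**(1.0 / 3.0)
print("N =", N, "-> L =", L)

# summarizing K minimized energies for this N (placeholder data for illustration)
energies = np.random.random(100)      # stand-in for your K minimization results
print("min:", np.min(energies), "mean:", np.mean(energies), "std:", np.std(energies))
```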
Write your code here:
#Your energy minimization code here #This will be the longest code you write in this assignment
_____no_output_____
CC-BY-4.0
uci-pharmsci/assignments/energy_minimization/energy_minimization_assignment.ipynb
matthagy/drug-computing
Step 5: Graph your findings Plot the minimum and average energies as a function of N for each of K=100, 1000, and 10000. The last case may be fairly time consuming (i.e. several hours) and should be done without output of pdb files for visualization (since this can slow it down). Use matplotlib/PyLab to make these plots. **Hint: If your minimizations are proceeding extremely slowly, it may mean you have an error in calculation of gamma**, such that even K=100 or K=10 could take a very long time. Ensure you have implemented the equation for gamma correctly. Even with a correct gamma value, this will take considerable time for the larger N values.
#Your code for this here
_____no_output_____
CC-BY-4.0
uci-pharmsci/assignments/energy_minimization/energy_minimization_assignment.ipynb
matthagy/drug-computing
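A minimal matplotlib sketch of the kind of figure Step 5 asks for. The container names (`Ns`, `MinEnergies`, `AvgEnergies`) are hypothetical and the zero arrays are placeholders; swap in the results you stored in Step 4 for a given K.

```python
import numpy as np
import matplotlib.pyplot as plt

Ns = np.arange(2, 26)                  # cluster sizes N = 2..25
MinEnergies = np.zeros(len(Ns))        # placeholder: your minimum energy at each N
AvgEnergies = np.zeros(len(Ns))        # placeholder: your average energy at each N

plt.figure()
plt.plot(Ns, MinEnergies, 'o-', label='Minimum energy')
plt.plot(Ns, AvgEnergies, 's--', label='Average energy')
plt.xlabel('Number of particles $N$')
plt.ylabel('Minimized potential energy $U^*$')
plt.legend()
plt.show()
```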
Step 6: Compare with what's expected Compare your results (your minimum energy at each N value) with the known global minimum energies, via a plot and by commenting on the results. These are from (Leary, J. Global Optimization 11:35 (1997)). Add this curve to your graph. Why might your results be higher?
#Write code here to add these to your graph
_____no_output_____
CC-BY-4.0
uci-pharmsci/assignments/energy_minimization/energy_minimization_assignment.ipynb
matthagy/drug-computing