| column | type | string lengths |
|---|---|---|
| markdown | string | 0 to 1.02M |
| code | string | 0 to 832k |
| output | string | 0 to 1.02M |
| license | string | 3 to 36 |
| path | string | 6 to 265 |
| repo_name | string | 6 to 127 |
What about complex numbers, for instance?
z = 1 + 4j
print(z)
objviz(z)
_____no_output_____
MIT
Testing_the_lolviz_Python_module.ipynb
doc22940/notebooks-2
OK, this fails.

Calls
def factorial(n):
    if n < 0:
        return 0
    elif n == 0:
        return 1
    else:
        return n * factorial(n - 1)

for n in range(12):
    print(f"{n}! = {factorial(n)}")
0! = 1 1! = 1 2! = 2 3! = 6 4! = 24 5! = 120 6! = 720 7! = 5040 8! = 40320 9! = 362880 10! = 3628800 11! = 39916800
MIT
Testing_the_lolviz_Python_module.ipynb
doc22940/notebooks-2
And now with some visualization:
from IPython.display import display

def factorial2(n):
    display(callsviz(varnames=["n"]))
    if n < 0:
        return 0
    elif n == 0:
        return 1
    else:
        return n * factorial2(n - 1)

n = 4
print(f"{n}! = {factorial2(n)}")
_____no_output_____
MIT
Testing_the_lolviz_Python_module.ipynb
doc22940/notebooks-2
We really see the "call stack" as the system keeps track of the nested calls. I like that! 👌

String
import string

string.hexdigits
strviz(string.hexdigits)
_____no_output_____
MIT
Testing_the_lolviz_Python_module.ipynb
doc22940/notebooks-2
Masks

With gdsfactory you can easily go from components, to sweeps, to masks. Let's start with a resistance sweep, where you change the resistance width to measure sheet resistance.

Pack
import gdsfactory as gf

gf.clear_cache()

sweep = [gf.components.resistance_sheet(width=width) for width in [1, 10, 100]]
m = gf.pack(sweep)
m[0]

spiral_te = gf.routing.add_fiber_single(
    gf.functions.rotate(gf.components.spiral_inner_io_fiber_single, 90)
)
spiral_te

# which is equivalent to
spiral_te = gf.compose(
    gf.routing.add_fiber_single,
    gf.functions.rotate90,
    gf.components.spiral_inner_io_fiber_single,
)
spiral_te(length=10e3)

import gdsfactory as gf

spiral_te = gf.compose(
    gf.routing.add_fiber_single,
    gf.functions.rotate90,
    gf.components.spiral_inner_io_fiber_single,
)
sweep = [spiral_te(length=length) for length in [10e3, 20e3, 30e3]]
m = gf.pack(sweep)
m[0]
_____no_output_____
MIT
docs/notebooks/06_mask.ipynb
gdsfactory/gdsfactory
You can also add a `prefix` to each text label. For example `S` for the spirals at the `north-center`.

`text_rectangular` is DRC clean and is anchored on `nc` (north-center).
text_metal3 = gf.partial(gf.components.text_rectangular_multi_layer, layers=(gf.LAYER.M3,))
m = gf.pack(sweep, text=text_metal3, text_anchors=('nc',), text_prefix='s')
m[0]

text_metal2 = gf.partial(gf.c.text, layer=gf.LAYER.M2)
m = gf.pack(sweep, text=text_metal2, text_anchors=('nc',), text_prefix='s')
m[0]
_____no_output_____
MIT
docs/notebooks/06_mask.ipynb
gdsfactory/gdsfactory
Grid
g = gf.grid(sweep)
g

gh = gf.grid(sweep, shape=(1, len(sweep)))
gh

ghymin = gf.grid(sweep, shape=(1, len(sweep)), align_y='ymin')
ghymin
_____no_output_____
MIT
docs/notebooks/06_mask.ipynb
gdsfactory/gdsfactory
You can also add text labels to each element of the sweep
ghymin = gf.grid_with_text(sweep, shape=(1, len(sweep)), align_y='ymin', text=text_metal3)
ghymin
_____no_output_____
MIT
docs/notebooks/06_mask.ipynb
gdsfactory/gdsfactory
Mask

You can easily define a mask using `grid` and `pack`.
import gdsfactory as gf

text_metal3 = gf.partial(gf.c.text_rectangular_multi_layer, layers=(gf.LAYER.M3,))
grid = gf.partial(gf.grid_with_text, text=text_metal3)
pack = gf.partial(gf.pack, text=text_metal3)

gratings_sweep = [gf.c.grating_coupler_elliptical(taper_angle=taper_angle) for taper_angle in [20, 30, 40]]
gratings = grid(gratings_sweep, text=None)
gratings

gratings_sweep = [gf.c.grating_coupler_elliptical(taper_angle=taper_angle) for taper_angle in [20, 30, 40]]
gratings_loss_sweep = [gf.c.grating_coupler_loss_fiber_single(grating_coupler=grating) for grating in gratings_sweep]
gratings = grid(gratings_loss_sweep, shape=(1, len(gratings_loss_sweep)), spacing=(40, 0))
gratings

sweep_resistance = [gf.components.resistance_sheet(width=width) for width in [1, 10, 100]]
resistance = gf.pack(sweep_resistance)[0]
resistance

spiral_te = gf.compose(
    gf.routing.add_fiber_single,
    gf.functions.rotate90,
    gf.components.spiral_inner_io_fiber_single,
)
sweep_spirals = [spiral_te(length=length) for length in [10e3, 20e3, 30e3]]
spirals = gf.pack(sweep_spirals)[0]
spirals

mask = gf.pack([spirals, resistance, gratings])[0]
mask
_____no_output_____
MIT
docs/notebooks/06_mask.ipynb
gdsfactory/gdsfactory
As you can see, you can define your mask in a single line. For more complex masks, you can also create a new cell to build up more complexity.
@gf.cell
def mask():
    c = gf.Component()
    c << gf.pack([spirals, resistance, gratings])[0]
    c << gf.c.seal_ring(c)
    return c

c = mask(cache=False)
c

c.write_gds_with_metadata(gdsdir='extra')
gf.mask.write_labels(gdspath='extra/mask_d41d8cd9.gds', label_layer=(201, 0))
_____no_output_____
MIT
docs/notebooks/06_mask.ipynb
gdsfactory/gdsfactory
```
CSV labels    ------|
                    |--> merge_test_metadata dict
YAML metadata ------|
```
test_metadata = gf.mask.merge_test_metadata(gdspath='extra/mask_d41d8cd9.gds')
test_metadata.spiral_inner_io_6dc6250a.full.length

spiral_names = [s for s in test_metadata.keys() if s.startswith('spiral')]
spiral_names

spiral_lengths = [test_metadata[spiral_name].length for spiral_name in spiral_names]
spiral_lengths

gc_names = [s for s in test_metadata.keys() if s.startswith('grating')]
gc_names

gc_taper_angles = [test_metadata[name].full.taper_angle for name in gc_names]
gc_taper_angles
_____no_output_____
MIT
docs/notebooks/06_mask.ipynb
gdsfactory/gdsfactory
Introduction to Convolutional Neural Nets
========

Version 0.1, by B Nord, 2018 Nov 09

This notebook was developed within the [Google Colaboratory](https://colab.research.google.com/notebooks/welcome.ipynb#recent=true) framework. The original notebook can be run in a web browser, and is available [via Colaboratory](https://colab.research.google.com/drive/1wKzhJ0cOsJbgM9L0uIVUCYW1f2Zdf3PK#scrollTo=qwubzWGWWD6E). It has been recreated below, though we recommend you run the web-based version.

Install packages on the back end
# Install software on the backend, which is located at
# Google's Super Secret Sky Server in an alternate universe.
# The backend is called a 'hosted runtime' if it is on their server.
# A local runtime would start a colab notebook on your machine locally.
# Think of Google Colab as a Google Docs version of Jupyter Notebooks.

# remove display of install details
%%capture --no-display

# pip install
!pip install numpy matplotlib scipy pandas scikit-learn astropy seaborn ipython jupyter  # standard install for DSFP
!pip install keras tensorflow  # required for deep learning
!pip install pycm

# standard-ish imports
import numpy as np
import matplotlib.pyplot as plt
import time
import itertools

# non-standard, but stable package for confusion matrices
from pycm import ConfusionMatrix

# neural network / machine learning packages
from sklearn import metrics
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten, Activation
from keras.layers import Conv2D, MaxPooling2D, BatchNormalization
from keras import backend as K
Using TensorFlow backend.
MIT
Session7/Day4/LetsHaveAConvo.ipynb
rmorgan10/LSSTC-DSFP-Sessions
Convolutional Neural Networks make the future now!

**Learning Objectives**

1. Gain familiarity with
   1. Two standard convolutional neural network (CNN) architectures:
      1. **Feed-forward CNN**
      2. **Convolutional Autoencoder (CAE)**
   2. One standard task performed with CNNs: **Binary Classification**
   3. One new diagnostic of CNNs: **Feature maps from the first layer**
2. Experience fundamental considerations, pitfalls, and strategies when training NNs
   1. Data set preparation (never underestimate the time required for this)
   2. CNN layer manipulation and architecture design
   3. Model fitting (the learning process)
   4. Effects of image quality
3. Apply diagnostics from previous exercises
4. Apply new diagnostics: look inside the networks with feature maps of the first layer
5. Continue connecting NN functionality to data set structure and the problem of interest

Some of this notebook is very similar to the first one, but we're using a new architecture that has more moving pieces.

*I'm still taking bets that we can start a paper with deep nets during the Saturday hack.*

Activity 1: Classify Handwritten Digits with Convolutional Neural Networks (CNNs)

Is it a "zero" [0] or a "one" [1]? (ooooh, the suspense; or maybe the suspense has dissipated by now.)

Prepare the Data

Download the data (ooh look it's all stored on Amazon's AWS!)

(pssst, we're in the cloooud)
# import MNIST data
(x_train_temp, y_train_temp), (x_test_temp, y_test_temp) = mnist.load_data()
_____no_output_____
MIT
Session7/Day4/LetsHaveAConvo.ipynb
rmorgan10/LSSTC-DSFP-Sessions
**Look** at the data (always do this so that you **know** what the structure is).
# Print the shapes
print("Train Data Shape:", x_train_temp.shape)
print("Test Data Shape:", x_test_temp.shape)
print("Train Label Shape:", y_train_temp.shape)
print("Test Label Shape:", y_test_temp.shape)
_____no_output_____
MIT
Session7/Day4/LetsHaveAConvo.ipynb
rmorgan10/LSSTC-DSFP-Sessions
**Do the shapes of 'data' and 'label' (for train and test, respectively) match? If they don't now, Keras/TF will kindly yell at you later.**
# Print an example
print("Example:")
print("y_train[0] is the label for the 0th image, and it is a", y_train_temp[0])
print("x_train[0] is the image data, and you kind of see the pattern in the array of numbers")
print(x_train_temp[0])
_____no_output_____
MIT
Session7/Day4/LetsHaveAConvo.ipynb
rmorgan10/LSSTC-DSFP-Sessions
**Can you see the pattern of the number in the array?**
# Plot the data!
f = plt.figure()
f.add_subplot(1, 2, 1)
plt.imshow(x_train_temp[0])
f.add_subplot(1, 2, 2)
plt.imshow(x_train_temp[1])
plt.show(block=True)
_____no_output_____
MIT
Session7/Day4/LetsHaveAConvo.ipynb
rmorgan10/LSSTC-DSFP-Sessions
Prepare the data

Data often need to be re-shaped and normalized for ingestion into the neural network.

Normalize the data

The images are recast as float and normalized to one for the network.
print("Before:", np.min(x_train_temp), np.max(x_train_temp)) x_train = x_train_temp.astype('float32') x_test = x_test_temp.astype('float32') x_train /= 255 x_test /= 255 y_train = y_train_temp y_test = y_test_temp print("After:", np.min(x_train), np.max(x_train))
_____no_output_____
MIT
Session7/Day4/LetsHaveAConvo.ipynb
rmorgan10/LSSTC-DSFP-Sessions
Reshape the data arrays: set the input shape to be ready for a convolution [NEW]

Unlike the earlier Dense architecture, the CNN consumes the data as 2D images, so we need to make the input shape appropriate.
# read the dimensions from one example in the training set
img_rows, img_cols = x_train[0].shape[0], x_train[0].shape[1]

# Different NN libraries (e.g., TF) use different ordering of dimensions
# Here we set the "input shape" so that later the NN knows what shape to expect
if K.image_data_format() == 'channels_first':
    x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
    x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
    input_shape = (1, img_rows, img_cols)
else:
    x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
    x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
    input_shape = (img_rows, img_cols, 1)
_____no_output_____
MIT
Session7/Day4/LetsHaveAConvo.ipynb
rmorgan10/LSSTC-DSFP-Sessions
Apply *one-hot encoding* to the data

1. The current encoding provides a literal label. For example, the label for "3" is *3*.
2. One-hot encoding places a "1" in an array at the appropriate location for that datum. For example, the label "3" becomes *[0, 0, 0, 1, 0, 0, 0, 0, 0, 0]*.

This increases the efficiency of the matrix algebra during network training and evaluation.
# One-hot encoding
num_classes = 10
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
_____no_output_____
MIT
Session7/Day4/LetsHaveAConvo.ipynb
rmorgan10/LSSTC-DSFP-Sessions
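As a quick illustration of what the encoding produces (a minimal sketch using the same Keras utility as above, not part of the original notebook):

```python
# The literal label 3 becomes a length-10 indicator vector
print(keras.utils.to_categorical(3, num_classes=10))
# expected: [0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]
```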
Design Neural Network Architecture!

Select model format
model = Sequential()
_____no_output_____
MIT
Session7/Day4/LetsHaveAConvo.ipynb
rmorgan10/LSSTC-DSFP-Sessions
Add layers to the model sequentially [NEW]
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input_shape))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(32, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))

model.summary()
_____no_output_____
MIT
Session7/Day4/LetsHaveAConvo.ipynb
rmorgan10/LSSTC-DSFP-Sessions
*Things to think about and notice:*

1. How does the "output shape" column change as you go through the network? How does this relate to pictures of CNNs you've seen (or might find on Google Images, for example)?
2. What happens when you re-run the [cell where you add layers sequentially](https://colab.research.google.com/drive/1wKzhJ0cOsJbgM9L0uIVUCYW1f2Zdf3PK#scrollTo=qXiW9aIx9_CM&line=3&uniqifier=1) without first re-running the model-definition cell? Why does that happen?

Compile the model

Select three key options:

1. **optimizer**: the method for optimizing the weights. "Stochastic Gradient Descent (SGD)" is the canonical method.
2. **loss** function: the form of the function that encodes the difference between the data's true label and the predicted label.
3. **metric**: the function by which the model is evaluated.
model.compile(optimizer="sgd", loss='categorical_crossentropy', metrics=['accuracy'])
_____no_output_____
MIT
Session7/Day4/LetsHaveAConvo.ipynb
rmorgan10/LSSTC-DSFP-Sessions
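The notebook uses SGD; as an aside (a hypothetical variation, not part of the original exercise), any other built-in Keras optimizer can be swapped in through the same call:

```python
# Hypothetical variation: same loss and metric, but the Adam optimizer instead of SGD
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
```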
Fit (read: Train) the model
# Training parameters
batch_size = 32         # number of images per batch
num_epochs = 5          # number of epochs
validation_split = 0.8  # fraction of the training set that is for validation only

# Train the model
history = model.fit(x_train, y_train,
                    batch_size=batch_size,
                    epochs=num_epochs,
                    validation_split=validation_split,
                    verbose=True)
_____no_output_____
MIT
Session7/Day4/LetsHaveAConvo.ipynb
rmorgan10/LSSTC-DSFP-Sessions
---

*Things to think about and notice:*

1. How fast is this training compared to the Dense/Fully Connected Networks? What could be causing a difference between these two networks?
2. Why is it taking a long time at the end of each epoch?

Diagnostics!

Evaluate overall model efficacy

Evaluate the model on training and test data and compare. This provides summary values that are equivalent to the final value in the accuracy/loss history plots.
loss_train, acc_train = model.evaluate(x_train, y_train, verbose=False)
loss_test, acc_test = model.evaluate(x_test, y_test, verbose=False)

print(f'Train acc/loss: {acc_train:.3}, {loss_train:.3}')
print(f'Test acc/loss: {acc_test:.3}, {loss_test:.3}')
_____no_output_____
MIT
Session7/Day4/LetsHaveAConvo.ipynb
rmorgan10/LSSTC-DSFP-Sessions
Predict train and test data
y_pred_train = model.predict(x_train, verbose=True)
y_pred_test = model.predict(x_test, verbose=True)
_____no_output_____
MIT
Session7/Day4/LetsHaveAConvo.ipynb
rmorgan10/LSSTC-DSFP-Sessions
Plot accuracy and loss as a function of epochs (equivalently training time)
# set up figure
f = plt.figure(figsize=(12, 5))
f.add_subplot(1, 2, 1)

# plot accuracy as a function of epoch
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['training', 'validation'], loc='best')

# plot loss as a function of epoch
f.add_subplot(1, 2, 2)
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['training', 'validation'], loc='best')

plt.show(block=True)
_____no_output_____
MIT
Session7/Day4/LetsHaveAConvo.ipynb
rmorgan10/LSSTC-DSFP-Sessions
---

*Things to think about and notice:*

1. How do these curve shapes compare to the initial dense network results?

Confusion Matrix
# Function: Convert from categorical back to numerical value def convert_to_index(array_categorical): array_index = [np.argmax(array_temp) for array_temp in array_categorical] return array_index def plot_confusion_matrix(cm, normalize=False, title='Confusion matrix', cmap=plt.cm.Blues): """ This function modified to plots the ConfusionMatrix object. Normalization can be applied by setting `normalize=True`. Code Reference : http://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html This script is derived from PyCM repository: https://github.com/sepandhaghighi/pycm """ plt_cm = [] for i in cm.classes : row=[] for j in cm.classes: row.append(cm.table[i][j]) plt_cm.append(row) plt_cm = np.array(plt_cm) if normalize: plt_cm = plt_cm.astype('float') / plt_cm.sum(axis=1)[:, np.newaxis] plt.imshow(plt_cm, interpolation='nearest', cmap=cmap) plt.title(title) plt.colorbar() tick_marks = np.arange(len(cm.classes)) plt.xticks(tick_marks, cm.classes, rotation=45) plt.yticks(tick_marks, cm.classes) fmt = '.2f' if normalize else 'd' thresh = plt_cm.max() / 2. for i, j in itertools.product(range(plt_cm.shape[0]), range(plt_cm.shape[1])): plt.text(j, i, format(plt_cm[i, j], fmt), horizontalalignment="center", color="white" if plt_cm[i, j] > thresh else "black") plt.tight_layout() plt.ylabel('Actual') plt.xlabel('Predict') # apply conversion function to data y_test_ind = convert_to_index(y_test) y_pred_test_ind = convert_to_index(y_pred_test) # compute confusion matrix cm_test = ConfusionMatrix(y_test_ind, y_pred_test_ind) np.set_printoptions(precision=2) # plot confusion matrix result plt.figure() plot_confusion_matrix(cm_test,title='cm')
_____no_output_____
MIT
Session7/Day4/LetsHaveAConvo.ipynb
rmorgan10/LSSTC-DSFP-Sessions
---

*Things to think about and notice:*

1. How does this confusion matrix compare to that from the Dense network?

Problems for the CNNs (I mean ones that Wolf Blitzer can't solve)

---

Problem 1: There are a lot of moving parts here. A lot of in's and out's

(bonus points if you know the 2000's movie, from which this is a near-quote.)

For the rest of the exercises, we'd like to have the flexibility to experiment with larger networks (MOAR PARAMETERS, MOAR), so let's reduce the data set size at the beginning of the notebook.

1. Go to the [cell where we download the data](https://colab.research.google.com/drive/1wKzhJ0cOsJbgM9L0uIVUCYW1f2Zdf3PK#scrollTo=qwXuui6_yYBv&line=2&uniqifier=1), and add a cell after it.
2. Use array indexing and slicing to create a smaller training set. How about 5000?
3. When we then train the model, we'll want to update the validation fraction so that we get about 3000 in our training set.

---

Problem 2: Keeeep Learning!

What happens if you run the cell that does the model-fitting again, right after doing it the first time? What do you notice about the loss and accuracy, as compared to when you did the fitting the first time? Why do you think this is happening?

---

Problem 3: What happens if you add a max-pooling layer?

Does this change the training speed? Why might this be? Check the model summary output to see what effect the pooling layer has.

---

Problem 4: How deep can you make the network?

1. Make a deep network and see how many parameters you can create. Is it trainable in a reasonable amount of time? Try adding Conv layers, but not pooling layers.
2. What if you want it to be efficient? Try adding a Max Pooling layer after every Conv layer. How many layers can you possibly add now? Compile the model until you have an output shape of (None, 1, 1, PARAMS) before the first dense layer.

---

Problem 5: Comparing performance and efficiency between CNNs and Dense Networks

Experiment with the neural network above, and reduce the number of parameters to near that of the Dense network in the first exercise. Is there a CNN architecture that has the same number of parameters as the Dense network, but can perform better? Remember to think deeply, to pool your resources. When you're nearing the end it may not be as dense as it looks, but nearly so.

---

Problem 6: What happens to the training result when you degrade the images?

In this part, we will degrade the images by adding noise, and then by blurring the images, and we'll look at how the network training responds.

---

Problem 7: Let's see if we can look inside the neural networks

Using the [FAQ from Keras](https://keras.io/getting-started/faq/#how-can-i-obtain-the-output-of-an-intermediate-layer) or any other online resource, like examples from GitHub, can we make a plot of the feature maps for any of the layers, so we can see what the neural net sees?

---

Problem 8: Let's progress to Regression.

Consider the labels as real values and modify the network to perform regression instead of classification on those values. You may want to consider the following:

* normalizing the labels.
* normalizing the image data.
* modifying the activations that are used.
* modifying the loss function so that it is appropriate for real-valued prediction (see [keras loss](https://keras.io/losses/)).

Activity 2: Compress Handwritten Digits with a Convolutional Autoencoder (CAE)

Add layers to the model sequentially [NEW]
from keras.layers import Reshape, UpSampling2D  # needed below; not imported in the setup cell

autoencoder = Sequential()

# Encoder Layers
autoencoder.add(Conv2D(16, (3, 3), activation='relu', padding='same', input_shape=x_train.shape[1:]))
autoencoder.add(MaxPooling2D((2, 2), padding='same'))
autoencoder.add(Conv2D(8, (3, 3), activation='relu', padding='same'))
autoencoder.add(MaxPooling2D((2, 2), padding='same'))
autoencoder.add(Conv2D(8, (3, 3), strides=(2, 2), activation='relu', padding='same'))
autoencoder.add(MaxPooling2D((2, 2), padding='same'))
autoencoder.add(Conv2D(8, (3, 3), strides=(2, 2), activation='relu', padding='same'))
autoencoder.add(MaxPooling2D((2, 2), padding='same'))

# Flatten encoding for visualization
autoencoder.add(Flatten())
autoencoder.add(Reshape((1, 1, 8)))

# Decoder Layers
autoencoder.add(UpSampling2D((2, 2)))
autoencoder.add(Conv2D(8, (3, 3), activation='relu', padding='same'))
autoencoder.add(UpSampling2D((2, 2)))
autoencoder.add(Conv2D(8, (3, 3), activation='relu', padding='same'))
autoencoder.add(UpSampling2D((2, 2)))
autoencoder.add(Conv2D(8, (3, 3), activation='relu', padding='same'))
autoencoder.add(UpSampling2D((2, 2)))
autoencoder.add(Conv2D(16, (3, 3), activation='relu'))
autoencoder.add(UpSampling2D((2, 2)))
autoencoder.add(Conv2D(1, (3, 3), activation='sigmoid', padding='same'))

autoencoder.summary()
_____no_output_____
MIT
Session7/Day4/LetsHaveAConvo.ipynb
rmorgan10/LSSTC-DSFP-Sessions
Create a separate model that is just the encoder

This will allow us to encode the images and look at what the encoding results in.
from keras.models import Model  # needed here; not imported in the setup cell

encoder = Model(inputs=autoencoder.input, outputs=autoencoder.get_layer('flatten_8').output)
encoder.summary()
_____no_output_____
MIT
Session7/Day4/LetsHaveAConvo.ipynb
rmorgan10/LSSTC-DSFP-Sessions
Compile the autoencoder
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
num_epochs = 10
_____no_output_____
MIT
Session7/Day4/LetsHaveAConvo.ipynb
rmorgan10/LSSTC-DSFP-Sessions
Plot the input, output, and encoded images
# set number of images to visualize
num_images = 10

# select a random subset to visualize
np.random.seed(42)
random_test_images = np.random.randint(x_test.shape[0], size=num_images)

# encode images
encoded_imgs = encoder.predict(x_test)

# encode AND decode images
decoded_imgs = autoencoder.predict(x_test)

# plot figure
plt.figure(figsize=(18, 4))
num_rows = 4
num_pixel_x = 2
num_pixel_y = 4
for i, image_idx in enumerate(random_test_images):
    # plot original image
    ax = plt.subplot(4, num_images, i + 1)
    plt.imshow(x_test[image_idx].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)

    # plot encoded image
    ax = plt.subplot(num_rows, num_images, num_images + i + 1)
    plt.imshow(encoded_imgs[image_idx].reshape(num_pixel_x, num_pixel_y),
               interpolation=None, resample=None)
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)

    # plot reconstructed image
    ax = plt.subplot(num_rows, num_images, 2 * num_images + i + 1)
    plt.imshow(decoded_imgs[image_idx].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)

plt.show()
_____no_output_____
MIT
Session7/Day4/LetsHaveAConvo.ipynb
rmorgan10/LSSTC-DSFP-Sessions
Main Hartree Code

Hartree-Fock Computational Chemistry Method implemented in Python as described in Modern Quantum Chemistry: Introduction to Advanced Electronic Structure Theory, by Attila Szabo and Neil S. Ostlund. Throughout the rest of the modules in this notebook, the entire text of Modern Quantum Chemistry will simply be referred to as "Szabo" for the sake of brevity.

The program is limited to molecules consisting of hydrogen and helium with only even numbers of electrons. Required user input is the atomic number, 3D space location, and electron number for each atom. A basis set specification is also required.

The program makes use of Hartree atomic units, so that distances and lengths are given in Bohr radii, energy is in Hartrees, and mass is in atomic units, such that the mass of a proton is equal to 1836 atomic units. More information on Hartree atomic units can be found on page 41 of Szabo.
#Python Implementation of the Hartree Fock Method #Procedures listed in the code follow as described in Modern Quantum Chemistry: #Introduction to Advanced Electronic Structure Theory, By Attila Szabo and Neil S. Ostlund import sys sys.path.append("..\\Comp_Chem_Package") import numpy as np from molecule import atom from vector import vector from molecule import gaussian from molecule import molecule from notebookImporter import importNotebook #import integrals notebook for the hartree method integrals = importNotebook("Hartree_Integrals") scf = importNotebook("Hartree_SCF") #define SCF convergence critera, and max number of iteration cycles SCF_CONVERGENCE = pow(10, -15) MAX_ITERATIONS = 500 #Step 1 #Specify Molecules, Nuclear Coordinates, and Charge of the nucli Number of Electrons, #generate an h2 atom with a distance of 1.4 AU to compare with Szabo pg. 160 #R is in units of Bohr Radius R = 1.4 system = molecule() system.addAtom(atom(vector(1,1,1), 1, 1)) system.addAtom(atom(vector(1,1,1 + R), 1, 1)) #add a basis set system.addBasis("STO-3G") system.display() #Step 2 #Calculate Integrals #Overlap, KE, Nuclear Attraaction, and Electron Repulsion S = integrals.overlap(system) print("Overlap Matrix: ") print(np.matrix(S)) print() T = integrals.kineticEnergy(system) print("Electron Kinetic Energy Matrix: ") print(np.matrix(T)) print() V = integrals.nuclearAttraction(system) for index, atom in enumerate(V): print("Nucli " + str(index) + "-Electron Attraction Matrix: ") print(np.matrix(atom)) print() electronRepulsion = integrals.electronElectronRepulsion(system) print("Electron Repulsion Tensor: ") print(np.array(electronRepulsion)) print() #Form the electronic hamiltonian H = np.matrix(T) #add in all of the nuclear attractions matricies to the hamiltonian for atom in V: H += np.matrix(atom) print("Electronic Hamiltonian :") print(H) print() #Prepare for the SCF procedure #get size of the basis set size = len(S) #compute the Transformation Matrix X = scf.X(S, size) #get guess Fock matrix, assume 2-electron term is equal to 0 F = H print(F) # SCF Procedure #init list to store the energy from each iteration #as well as a boolean to signify whether the loop has converged E = [] converged = False while( not converged ): #diagnolze the Fock matrix and convert it to MO basis F = X.transpose() * F * X print("F**",F) #diagnolize the Fock Matrix to obtain the MOs and the their respective energies MOEnergy, MO = np.linalg.eigh(F) #Transform the MO basis MOs to an AO basis C = X * MO print("C", C) #compute the electron density, the two electron term, and then use G to compute the new Fock matrix P = scf.densityMatrix(C, system.N, size) G = scf.G(electronRepulsion, P, size) F = H + G #compute the new expectation energy #Expectation Energy is in units of Hartrees E.append(scf.expectationEnergy(H, F, P)) #check if at least two SCF iterations have occured #if more than two have occured, then check if the difference betweeen this E, #and the previous E is less then the covergence value, if yes, end the SCF loop #if energy has not converged, check whether the max number of iterations have occured so far sizeE = len(E) if(len(E) > 2): if(abs(E[sizeE-2] - E[sizeE-1]) < SCF_CONVERGENCE): converged = True elif(sizeE > MAX_ITERATIONS): print("SCF Failed to Converge") break #compute total energy of the system including nuclear-nuclear repulsion totalE = E[sizeE-1] + scf.nuclearRepulsion(system) #display information about current SCF iteration to the user print("SCF Iteration #" + str(sizeE) + ", Electronic Energy: 
" + str(E[sizeE-1]) + " Hartrees, Total Energy: " + str(totalE) + " Hartrees") print("F", F) print() print("-"*50) print() print("Final SCF Energy: " + str(E[sizeE-1])) X.transpose() * H * X X.transpose() * H X[0,1] np.matmul(X, H) X * H
_____no_output_____
MIT
Learning/Hartree-Fock/Hartree_Fock.ipynb
GaryZ700/CatLab_CompChem
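As a quick orientation for readers less familiar with these units, a small sketch (the conversion constants are standard reference values, not taken from the notebook itself):

```python
# Hartree atomic units: rough conversions for the quantities mentioned above
BOHR_RADIUS_ANGSTROM = 0.529177   # 1 bohr in angstroms
HARTREE_EV = 27.2114              # 1 hartree in electron-volts
PROTON_MASS_AU = 1836.15          # proton mass in units of the electron mass

print(f"H2 bond length of 1.4 bohr ~ {1.4 * BOHR_RADIUS_ANGSTROM:.3f} angstrom")
print(f"1 hartree ~ {HARTREE_EV} eV")
```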
Tracker

[![Github](https://img.shields.io/github/stars/lab-ml/labml?style=social)](https://github.com/lab-ml/labml) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/lab-ml/labml/blob/master/guides/tracker.ipynb) [![Docs](https://img.shields.io/badge/labml-docs-blue)](https://docs.labml.ai/api/tracker.html)

Here you specify indicators; the logger stores them temporarily and writes them in batches. It can aggregate and write them as means or histograms.
%%capture
!pip install labml

import time
import numpy as np
from labml import tracker

# dummy train function
def train():
    return np.random.randint(100)

# Reset global step because we incremented in previous loop
tracker.set_global_step(0)
_____no_output_____
MIT
guides/tracker.ipynb
vishalbelsare/labml
This stores all the loss values and logs the mean on every tenth iteration. The console output line is replaced until [`labml.tracker.new_line`](https://docs.labml.ai/api/tracker.html#labml.tracker.new_line) is called.
for i in range(1, 401):
    tracker.add_global_step()
    loss = train()
    tracker.add(loss=loss)
    if i % 10 == 0:
        tracker.save()
    if i % 100 == 0:
        tracker.new_line()
    time.sleep(0.02)
_____no_output_____
MIT
guides/tracker.ipynb
vishalbelsare/labml
Indicator settings
# dummy train function
def train2(idx):
    return idx, 10, np.random.randint(100)

# Reset global step because we incremented in previous loop
tracker.set_global_step(0)
_____no_output_____
MIT
guides/tracker.ipynb
vishalbelsare/labml
Histogram indicators will log a histogram of data. Queue will store data in a `deque` of size `queue_size`, and log histograms. Both of these will log the means too, and if `is_print` is `True` they will print the mean.

Here the queue size is `10` and the values are printed to the console.
tracker.set_queue('reward', 10, True)
_____no_output_____
MIT
guides/tracker.ipynb
vishalbelsare/labml
By default values are not printed to console; i.e. `is_print` defaults to `False`.
tracker.set_scalar('policy')
_____no_output_____
MIT
guides/tracker.ipynb
vishalbelsare/labml
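For comparison, a small sketch (assuming the same `set_scalar(name, is_print)` signature used above) of turning printing on for this indicator:

```python
# Hypothetical variant: also print the 'policy' mean to the console
tracker.set_scalar('policy', True)
```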
Setting `is_print` to `True` will print the mean value of the histogram to the console.
tracker.set_histogram('value', True)

for i in range(1, 400):
    tracker.add_global_step()
    reward, policy, value = train2(i)
    tracker.add(reward=reward, policy=policy, value=value, loss=1.)
    if i % 10 == 0:
        tracker.save()
    if i % 100 == 0:
        tracker.new_line()
_____no_output_____
MIT
guides/tracker.ipynb
vishalbelsare/labml
**Diplomatura en Ciencia de Datos, Aprendizaje Automático y sus Aplicaciones**

**2021 Edition**

---

Random Variables and Probability

In this notebook we take a first look at the dataset:

* Random variables and their different types
* Probability
import io

import matplotlib
import matplotlib.pyplot as plt
import numpy
import pandas as pd
import seaborn

seaborn.set_context('talk')
_____no_output_____
MIT
01_Probabilidad.ipynb
carrazanap/DiploDatos-AnalisisyVisualizacion
Reading the dataset

The details of the following section are explained in notebook 00.
url = 'https://cs.famaf.unc.edu.ar/~mteruel/datasets/diplodatos/sysarmy_survey_2020_processed.csv'
df = pd.read_csv(url)
df[:3]
_____no_output_____
MIT
01_Probabilidad.ipynb
carrazanap/DiploDatos-AnalisisyVisualizacion
Salary analysis

The first question that comes to mind when looking at this survey is: **"So how much do programmers earn in Argentina?"**

This is a starting point for the analysis of the dataset. The full process will take several iterations: as conclusions are drawn, other relevant aspects of the data will be discovered, which will raise new questions.

To learn more about the salary distribution, we need to choose a column of the survey to analyze.
salary_col = 'salary_monthly_NETO'
_____no_output_____
MIT
01_Probabilidad.ipynb
carrazanap/DiploDatos-AnalisisyVisualizacion
A good way to start an exploration is through visualization. Seaborn provides a specific kind of plot for columns that contain numbers, called `displot`. (Not to be confused with `distplot`, which is deprecated.) The resulting plot is a frequency **histogram**: the x axis shows the values taken by the column, divided into intervals or bins, and the y axis shows the count of occurrences of values in each interval.
seaborn.displot(df[salary_col], aspect=2)
# avoid scientific notation in the axis labels
plt.ticklabel_format(style='plain', axis='x')
_____no_output_____
MIT
01_Probabilidad.ipynb
carrazanap/DiploDatos-AnalisisyVisualizacion
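As a small variation (not in the original notebook), the number of bins can also be set explicitly, which is handy when the automatically chosen bins hide detail:

```python
# Same histogram with an explicit number of bins (hypothetical variation)
seaborn.displot(df[salary_col], bins=50, aspect=2)
plt.ticklabel_format(style='plain', axis='x')
```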
What are we looking at?

Simple visualizations are handy for quickly getting a sense of the shape of the data, because they condense a lot of information. For example:

* The range of values taken by the column goes from 0 up to approximately 2M.
* Most of the values are concentrated below 250K, and few exceed 500K.
* The most frequent values appear to be close to 100K.
* There is a spike of occurrences at the value 0.

However, they provide little detail.

Exercise: What other kinds of questions can we ask at this point that are not answered by a histogram?

Analysis, well founded!

To continue the analysis, we need to apply the theoretical tools that Statistics and Probability give us.

Random variables and their types

Based on the definition of random variable that we discussed, we can say that each column of our dataset is a **random variable**, and that its value in each response is a **realization** of that variable. But what type do these variables have?

Numerical random variables

Salary, age, and years of experience are random variables whose range is a numerical set. We can classify them as **continuous** or **discrete**, although that distinction becomes blurry when we work with data computationally. Why?

* Data that in theory are continuous are measured discretely. For example, *years* of experience, or a person's height in *centimeters*.
* Data that in theory are continuous are discretized for practical purposes. For example, age, or salary in Argentine pesos.

Histograms are frequently used to analyze continuous data, as in the salary example above.

**Tip!** Before plotting, check the range (since seaborn will try to create thousands of bins if the range is very large) and remove null values.
# Get the observed range of the variable
df.profile_age.min(), df.profile_age.max()

seaborn.displot(df.profile_age[df.profile_age < 100].dropna(), stat='count', aspect=4)
_____no_output_____
MIT
01_Probabilidad.ipynb
carrazanap/DiploDatos-AnalisisyVisualizacion
However, histograms can hide information. Why? Because they group ranges of values into automatically inferred intervals. As a result, the visualization changes with different bin widths. Let's compare the following histograms.
# A more advanced example
fig, ax = plt.subplots(nrows=2, ncols=3, figsize=(15, 10), sharey='row')
seaborn.histplot(df.profile_age[df.profile_age < 100].dropna(), ax=ax[0, 0], stat='count')
seaborn.histplot(df.profile_age[df.profile_age < 100].dropna(), ax=ax[0, 1], bins=20, stat='count')
seaborn.histplot(df.profile_age[df.profile_age < 100].dropna(), ax=ax[0, 2], bins=5, stat='count')
seaborn.histplot(df.profile_age[df.profile_age < 100].dropna(), ax=ax[1, 0], stat='frequency')
seaborn.histplot(df.profile_age[df.profile_age < 100].dropna(), ax=ax[1, 1], bins=20, stat='frequency')
seaborn.histplot(df.profile_age[df.profile_age < 100].dropna(), ax=ax[1, 2], bins=5, stat='frequency')
fig.show()
_____no_output_____
MIT
01_Probabilidad.ipynb
carrazanap/DiploDatos-AnalisisyVisualizacion
For discrete variables a line plot can be used, which shows the count of each of the points in the observed range.

**Can a line plot be used for the `salary_monthly_NETO` variable? Does it make sense?**
fig = plt.figure(figsize=(16, 4))
age_counts = df[df.profile_age < 100].profile_age.value_counts()
seaborn.lineplot(age_counts.index, age_counts.values, color='steelblue')
plt.xticks(fontsize=14)  # Reduce the font size so the labels are easier to read
seaborn.despine()
/usr/local/lib/python3.7/dist-packages/seaborn/_decorators.py:43: FutureWarning: Pass the following variables as keyword args: x, y. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation. FutureWarning
MIT
01_Probabilidad.ipynb
carrazanap/DiploDatos-AnalisisyVisualizacion
Categorical random variables

Categorical variables take values from a pre-defined set, usually but not necessarily finite. To visualize them, a bar chart can be used, which represents each observed value with a column and the count of that value with the height of the column.

Are discrete numerical variables categorical?
df.profile_gender.unique()

fig = plt.figure(figsize=(8, 6))
seaborn.countplot(df.profile_gender, color='steelblue')
/usr/local/lib/python3.7/dist-packages/seaborn/_decorators.py:43: FutureWarning: Pass the following variable as a keyword arg: x. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation. FutureWarning
MIT
01_Probabilidad.ipynb
carrazanap/DiploDatos-AnalisisyVisualizacion
Categorical variables can be *ordinal* if there is a logical order among their values. This is independent of whether they are numerical. If such an order exists, it is appropriate to include it in the plot.
sorted_studies_levels = ['Primario', 'Secundario', 'Terciario', 'Universitario',
                         'Posgrado', 'Doctorado', 'Posdoctorado']

fig, axes = plt.subplots(ncols=2, figsize=(15, 6))
g = seaborn.countplot(df.profile_studies_level, color='steelblue', ax=axes[0])
g = seaborn.countplot(df.profile_studies_level, color='steelblue', ax=axes[1],
                      order=sorted_studies_levels)
for ax in axes:
    ax.tick_params(labelrotation=30)
/usr/local/lib/python3.7/dist-packages/seaborn/_decorators.py:43: FutureWarning: Pass the following variable as a keyword arg: x. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation. FutureWarning /usr/local/lib/python3.7/dist-packages/seaborn/_decorators.py:43: FutureWarning: Pass the following variable as a keyword arg: x. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation. FutureWarning
MIT
01_Probabilidad.ipynb
carrazanap/DiploDatos-AnalisisyVisualizacion
Types of variables vs. data types

We have to distinguish two concepts with the same name and similar meaning that are nonetheless not the same:

- the **type of the random variable** is the type of values with which we decide to *interpret* the realizations;
- the **data type** is a programming concept that indicates the format in which the information is represented.

When we assign a realization of the conceptual random variable `profile_age` to a variable `age` *in the Python program*, that variable `age` also has a *Python type*, for example `int` or `float`.
age = df.profile_age.iloc[0]
type(age)
_____no_output_____
MIT
01_Probabilidad.ipynb
carrazanap/DiploDatos-AnalisisyVisualizacion
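A minimal check (a sketch added here, not from the original notebook) of the data types involved:

```python
# Data type of the whole column vs. the type of a single realization
print(df.profile_age.dtype)
print(type(df.profile_age.iloc[0]))
```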
*Important!* We also have to keep in mind the limits of computational capacity when representing mathematical entities.

* Real numbers are always "rounded" to a rational representation.
* Basic types such as `Int` can only represent numbers within a range, for example `(-2^31, 2^31)`. Exceeding the range can have unexpected consequences, such as `integer overflow`.

Why is it important to know this? Because rounding errors can occur, or results may only be approximate.
print(type(3), type(3.44), type(1/3))  # 1/3 cannot be represented exactly

import numpy
print(numpy.iinfo('int64').min, numpy.iinfo('int64').max)
numpy.int64(numpy.iinfo('int64').max) + 1
# Try numpy.int64(numpy.iinfo('int64').max + 1)
<class 'int'> <class 'float'> <class 'float'> -9223372036854775808 9223372036854775807
MIT
01_Probabilidad.ipynb
carrazanap/DiploDatos-AnalisisyVisualizacion
We can access the data types of the DataFrame. The `object` type is used to represent any variable that is not numerical, such as `str`.
df.dtypes[:10]
_____no_output_____
MIT
01_Probabilidad.ipynb
carrazanap/DiploDatos-AnalisisyVisualizacion
Keep in mind that plotting libraries will let us create whatever visualizations we want, as long as the data types are appropriate. For example, we can make a histogram of the variable `profile_open_source_contributions` if we transform it to type `bool` (which is internally represented as an integer type). However, this makes no sense.
df.loc[:, 'salary_in_usd_bool'] = \
    df.salary_in_usd.replace({'Mi sueldo está dolarizado': True}).fillna(False)
print(df.salary_in_usd.unique(), df.salary_in_usd_bool.unique())
seaborn.histplot(df.salary_in_usd_bool, bins=5)
<string>:6: RuntimeWarning: Converting input from bool to <class 'numpy.uint8'> for compatibility. <string>:6: RuntimeWarning: Converting input from bool to <class 'numpy.uint8'> for compatibility.
MIT
01_Probabilidad.ipynb
carrazanap/DiploDatos-AnalisisyVisualizacion
We can also plot the frequency of a categorical variable using a line plot. **Why is this visualization not correct?**
count_by_province = df.work_province.value_counts()
fig = plt.figure(figsize=(16, 4))
seaborn.lineplot(x=count_by_province.index, y=count_by_province.values)
plt.xticks(rotation=45)
seaborn.despine()
_____no_output_____
MIT
01_Probabilidad.ipynb
carrazanap/DiploDatos-AnalisisyVisualizacion
Analyzing the impact of years of experience

Now that we roughly know the shape of our data, we can move on to another question (another iteration of the analysis process): **Do more years of experience mean a higher salary?**

To answer this question, we analyze the probability that a programmer has a monthly salary above the average, given that they have more than 5 years of experience.
avg_salary = df[salary_col].mean()
avg_salary
_____no_output_____
MIT
01_Probabilidad.ipynb
carrazanap/DiploDatos-AnalisisyVisualizacion
Probability measure

In the lectures we saw that if each of our outcomes is independent and identically distributed, i.e. $P(\{\omega_i\}) = 1/k$, then the probability of a set $A \subset \Omega$ is the proportion of $A$ in $\Omega$:

$$P(\{\omega_i\}) = 1/k \implies P(A) = |A|/|\Omega| = |A|/k$$

In this particular problem, $\Omega$ is the set of all responses in the dataset, each $\omega_i$ is a variable representing one response, and the set $A$ consists of the responses (rows) in which the `salary_col` column has a value greater than the average.
p_above_avg = len(df[df[salary_col] >= avg_salary]) / len(df)
p_above_avg
_____no_output_____
MIT
01_Probabilidad.ipynb
carrazanap/DiploDatos-AnalisisyVisualizacion
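An equivalent computation (a sketch, not from the original notebook) uses the mean of a boolean indicator, since the mean of 0/1 values is exactly the proportion $|A|/|\Omega|$:

```python
# P(A) as the mean of the indicator of A
p_above_avg_alt = (df[salary_col] >= avg_salary).mean()
p_above_avg_alt
```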
* Why can we use probability theory here?
* Why do we compute a probability with this formula?
* How can we interpret this probability?

Conditional probability

Now we can move on to the conditional probability between the two events. We define it as

$$P(A|B) = \frac{P(A \cap B)}{P(B)}$$

This is equivalent to:

$$P(A|B) = \frac{|A \cap B|}{|B|}$$

Exercise

Answer: **If someone has more than 5 years of experience, does the probability of earning more than the average increase? Are these events independent?**
is_above_avg = df[salary_col] > avg_salary
experience_greater_5 = df.profile_years_experience > 5
intersection_count = len(df[is_above_avg & experience_greater_5])

p_above_avg_given_experience = 0
p_above_avg_given_experience
_____no_output_____
MIT
01_Probabilidad.ipynb
carrazanap/DiploDatos-AnalisisyVisualizacion
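One possible way to complete the exercise (a sketch, not the official solution; it reuses the variables defined in the cell above):

```python
# P(A|B) = |A ∩ B| / |B|, with B = "more than 5 years of experience"
count_B = experience_greater_5.sum()
p_above_avg_given_experience = intersection_count / count_B

# If A and B were independent, this conditional probability would be close to
# the unconditional p_above_avg computed earlier.
print(p_above_avg_given_experience, p_above_avg)
```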
Simulating Clifford randomized benchmarking using implicit models

This tutorial shows how to simulate Clifford RB sequences using $n$-qubit "implicit" models which build $n$-qubit process matrices from smaller building blocks. This restricts the noise allowed in the $n$-qubit model; in this tutorial we take $n=3$ and use a `LocalNoiseModel`.
import pygsti
import numpy as np
_____no_output_____
Apache-2.0
jupyter_notebooks/Tutorials/algorithms/advanced/CliffordRB-Simulation-ImplicitModel.ipynb
lnmaurer/pyGSTi
Get some CRB circuits

First, we follow the [Clifford RB](../CliffordRB.ipynb) tutorial to generate a set of sequences. If you want to perform Direct RB instead, just replace this cell with the contents of the [Direct RB](../DirectRB.ipynb) tutorial up until the point where it creates `circuitlist`:
# Specify the device to be benchmarked - in this case 3 qubits
nQubits = 3
qubit_labels = list(range(nQubits))
gate_names = ['Gxpi2', 'Gypi2', 'Gcphase']
availability = {'Gcphase': [(i, i + 1) for i in range(nQubits - 1)]}
pspec = pygsti.obj.ProcessorSpec(nQubits, gate_names, availability=availability,
                                 qubit_labels=qubit_labels)

# Specify RB parameters (k = number of repetitions at each length)
lengths = [0, 1, 2, 4, 8, 16]
k = 10
subsetQs = qubit_labels
randomizeout = False  # ==> all circuits have the *same* ideal outcome (the all-zeros bitstring)

# Generate Clifford RB circuits
exp_design = pygsti.protocols.CliffordRBDesign(pspec, lengths, k, qubit_labels=subsetQs,
                                               randomizeout=randomizeout)

# Collect all the circuits into one list:
circuitlist = exp_design.all_circuits_needing_data
_____no_output_____
Apache-2.0
jupyter_notebooks/Tutorials/algorithms/advanced/CliffordRB-Simulation-ImplicitModel.ipynb
lnmaurer/pyGSTi
Create a model to simulate these circuits

Now we need to create a model that can simulate circuits like this. The RB circuits use pyGSTi's "multi-qubit" conventions, which mean:

1. RB circuits use our "multi-qubit" gate naming, so you have gates like `Gxpi2:0` and `Gcphase:0:1`.
2. RB circuits do gates in parallel (this only matters for >1 qubits), so you have layers like `[Gypi2:0Gypi2:1]`.

"Implicit" models in pyGSTi (see the [implicit model tutorial](../../objects/ImplicitModel.ipynb)) are designed to efficiently describe multi-qubit processors. There are numerous ways of constructing implicit models, all of which can simulate the type of circuits described above. Here we'll demonstrate the simplest type: a "local noise model" (class `LocalNoiseModel`) where the noise on a gate can only act on that gate's target qubits - so, for instance, 1-qubit gates are still given by 1-qubit operators, not $n$-qubit ones.

The construction of a local noise model follows the same pattern as building the `ProcessorSpec` above (in fact, `pspec.models['target']` *is* essentially the same model we build below, except that it was built with the default `parameterization="static"` argument).
myModel = pygsti.obj.LocalNoiseModel.build_from_parameterization(
    nQubits, gate_names, availability=availability,
    qubit_labels=qubit_labels, parameterization="full")
_____no_output_____
Apache-2.0
jupyter_notebooks/Tutorials/algorithms/advanced/CliffordRB-Simulation-ImplicitModel.ipynb
lnmaurer/pyGSTi
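To see the gate-naming and parallel-layer conventions described above in an actual circuit, one can simply print one of the generated circuits (a quick sketch; the exact rendering depends on the pyGSTi version):

```python
# Peek at the first Clifford RB circuit and its layer labels
print(circuitlist[0])
```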
Setting `parameterization="full"` is important, as it lets us assign arbitrary numpy arrays to gates as we'll show below. If you need to use other gates that aren't built into pyGSTi, you can use the `nonstd_gate_unitaries` argument of `build_from_parameterization` (see the docstring).

The `build_from_parameterization` function creates a model with ideal (perfect) gates. We'll now create a 1-qubit depolarization superoperator, and a corresponding 2-qubit one (just the tensor product of two 1-qubit ones) to add some simple noise.
depol1Q = np.array([[1, 0,    0,    0   ],
                    [0, 0.99, 0,    0   ],
                    [0, 0,    0.99, 0   ],
                    [0, 0,    0,    0.99]], 'd')  # 1-qubit depolarizing operator
depol2Q = np.kron(depol1Q, depol1Q)
_____no_output_____
Apache-2.0
jupyter_notebooks/Tutorials/algorithms/advanced/CliffordRB-Simulation-ImplicitModel.ipynb
lnmaurer/pyGSTi
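For reference, a small helper (an illustration added here, not part of the tutorial) that builds the same kind of 1-qubit depolarizing superoperator for an arbitrary depolarization rate `p`:

```python
def depolarizing_superop(p):
    # identity component untouched; X, Y, Z components shrunk by (1 - p)
    return np.diag([1.0, 1.0 - p, 1.0 - p, 1.0 - p])

depol1Q_alt = depolarizing_superop(0.01)         # reproduces the 0.99 matrix above
depol2Q_alt = np.kron(depol1Q_alt, depol1Q_alt)  # corresponding 2-qubit operator
```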
As detailed in the [implicit model tutorial](../../objects/ImplicitModel.ipynb), the gate operations of a `LocalNoiseModel` are held in its `.operation_blks['gates']` dictionary. We'll alter these by assigning new process matrices to each gate. In this case, it will be just a depolarized version of the original gate.
myModel.operation_blks['gates']["Gxpi2"] = np.dot(depol1Q, myModel.operation_blks['gates']["Gxpi2"])
myModel.operation_blks['gates']["Gypi2"] = np.dot(depol1Q, myModel.operation_blks['gates']["Gypi2"])
myModel.operation_blks['gates']["Gcphase"] = np.dot(depol2Q, myModel.operation_blks['gates']["Gcphase"])
_____no_output_____
Apache-2.0
jupyter_notebooks/Tutorials/algorithms/advanced/CliffordRB-Simulation-ImplicitModel.ipynb
lnmaurer/pyGSTi
Here's what the gates look like now:
print(myModel.operation_blks['gates']["Gxpi2"])
print(myModel.operation_blks['gates']["Gypi2"])
print(myModel.operation_blks['gates']["Gcphase"])
_____no_output_____
Apache-2.0
jupyter_notebooks/Tutorials/algorithms/advanced/CliffordRB-Simulation-ImplicitModel.ipynb
lnmaurer/pyGSTi
Now that our `Model` object is set to go, generating simulated data is easy:
ds = pygsti.construction.generate_fake_data(myModel, circuitlist, 100, seed=1234)
_____no_output_____
Apache-2.0
jupyter_notebooks/Tutorials/algorithms/advanced/CliffordRB-Simulation-ImplicitModel.ipynb
lnmaurer/pyGSTi
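To sanity-check the simulated data set (a quick sketch; the exact printout depends on the pyGSTi version), one can look at the outcome counts recorded for a single circuit:

```python
# Outcome counts simulated for the first RB circuit
print(ds[circuitlist[0]])
```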
Running RB on the simulated `DataSet`

To run an RB analysis, we just package up the experiment design and data set into a `ProtocolData` object and give this to a `RB` protocol's `run` method. This returns a `RandomizedBenchmarkingResults` object that can be used to plot the RB decay curve. (See the [RB analysis tutorial](../RBAnalysis.ipynb) for more details.)
data = pygsti.protocols.ProtocolData(exp_design, ds)
results = pygsti.protocols.RB().run(data)

%matplotlib inline
results.plot()
_____no_output_____
Apache-2.0
jupyter_notebooks/Tutorials/algorithms/advanced/CliffordRB-Simulation-ImplicitModel.ipynb
lnmaurer/pyGSTi
gpu_info = !nvidia-smi
gpu_info = '\n'.join(gpu_info)
if gpu_info.find('failed') >= 0:
    print('Not connected to a GPU')
else:
    print(gpu_info)
_____no_output_____
MIT
ratsql_colab.ipynb
nghoanglong/rat-sql
Set up and Install requirements
from google.colab import drive
drive.mount('/content/drive')

!git clone https://github.com/nghoanglong/rat-sql.git
%cd /content/rat-sql
!pip install -r requirements.txt

import nltk
nltk.download('stopwords')
nltk.download('punkt')

from transformers import BertModel
BertModel.from_pretrained('bert-large-uncased-whole-word-masking')

!mkdir -p third_party
!git clone https://github.com/salesforce/WikiSQL third_party/wikisql
%cd /content/rat-sql
/content/rat-sql
MIT
ratsql_colab.ipynb
nghoanglong/rat-sql
Run Spider

Spider - Glove
!python run.py preprocess /content/rat-sql/experiments/spider-glove-run.jsonnet
!python run.py train /content/rat-sql/experiments/spider-glove-run.jsonnet
!python run.py eval /content/rat-sql/experiments/spider-glove-run.jsonnet
_____no_output_____
MIT
ratsql_colab.ipynb
nghoanglong/rat-sql
Spider - Bert
!python run.py preprocess /content/rat-sql/experiments/spider-bert-run.jsonnet
!python run.py train /content/rat-sql/experiments/spider-bert-run.jsonnet
!python run.py eval /content/rat-sql/experiments/spider-bert-run.jsonnet
_____no_output_____
MIT
ratsql_colab.ipynb
nghoanglong/rat-sql
Run vitext2sql
!wget -P /content/rat-sql/third_party/phow2v_emb https://public.vinai.io/word2vec_vi_words_300dims.zip
cd /content/rat-sql/third_party/phow2v_emb
!unzip /content/rat-sql/third_party/phow2v_emb/word2vec_vi_words_300dims.zip
cd /content/rat-sql
/content/rat-sql
MIT
ratsql_colab.ipynb
nghoanglong/rat-sql
Run Vitext2sql - No PhoBert
!python run.py preprocess /content/rat-sql/experiments/vitext2sql-phow2v-run.jsonnet
!python run.py train /content/rat-sql/experiments/vitext2sql-phow2v-run.jsonnet
!python run.py eval /content/rat-sql/experiments/vitext2sql-phow2v-run.jsonnet
_____no_output_____
MIT
ratsql_colab.ipynb
nghoanglong/rat-sql
Run Vitext2SQL - PhoBert
!python run.py preprocess /content/rat-sql/experiments/vitext2sql-phobert-run.jsonnet
!python run.py train /content/rat-sql/experiments/vitext2sql-phobert-run.jsonnet
_____no_output_____
MIT
ratsql_colab.ipynb
nghoanglong/rat-sql
Load model parameters and set up
base_dir = os.path.dirname(os.getcwd()) model_dir = os.path.join(base_dir, 'data', 'gaussian-bmr') exp_dir = os.path.join(base_dir, 'experiments', 'gaussian-bmr') if not os.path.exists(exp_dir): os.makedirs(exp_dir) seed = 201702 rng = np.random.RandomState(seed) class PhiFunc(object): def __init__(self, Q, b, log_zeta): self.Q = Q self.b = b self.log_zeta = log_zeta def __call__(self, x): return ( 0.5 * (x**2).sum(-1) - tt.log(tt.cosh(x.dot(self.Q.T) + self.b)).sum(-1) + self.log_zeta ) class PsiFunc(object): def __init__(self, L, m): self.L = L self.m = m def __call__(self, x): z = x - self.m return 0.5 * ( (z.T * sla.solve_upper_triangular( self.L.T, sla.solve_lower_triangular( self.L, z.T))).sum(0) + self.m.shape[0] * tt.log(2 * np.pi) + 2 * tt.log(self.L.diagonal()).sum() ) def sigmoid(x): return 1. / (1. + np.exp(-x)) def sigmoidal_schedule(num_temp, scale): inv_temp_sched = sigmoid( scale * (2. * np.arange(num_temp + 1) / num_temp - 1.)) return ( (inv_temp_sched - inv_temp_sched[0]) / (inv_temp_sched[-1] - inv_temp_sched[0]) ) def rmse(x, y): return ((x - y)**2).mean()**0.5 dtype = 'float64' relaxation_list = [] true_log_norm_list = [] true_mean_list = [] true_covar_list = [] var_log_norm_list = [] var_mean_list = [] var_covar_chol_list = [] phi_funcs = [] psi_funcs = [] for i, file_path in enumerate( sorted(glob.glob(os.path.join(model_dir, 'params_and_moms_*.npz')))): loaded = np.load(file_path) relaxation_list.append(gmr.IsotropicCovarianceGMRelaxation( loaded['weights'], loaded['biases'], True) ) true_log_norm_list.append(loaded['log_norm_const_x']) true_mean_list.append(loaded['expc_x']) true_covar_list.append(loaded['covar_x']) var_mean, var_covar_chol, var_log_norm = ( var.mixture_of_variational_distributions_moments( relaxation_list[-1], rng )) var_log_norm_list.append(var_log_norm) var_mean_list.append(var_mean) var_covar_chol_list.append(var_covar_chol) np.savez(os.path.join(model_dir, 'var_moms_{0}.npz'.format(i)), var_mean=var_mean, var_covar_chol=var_covar_chol, var_log_norm=var_log_norm) Q = tt.constant(relaxation_list[-1].Q, 'Q' + str(i), 2, dtype) b = tt.constant(relaxation_list[-1].b, 'b' + str(i), 1, dtype) L = tt.constant(var_covar_chol, 'L' + str(i), 2, dtype) m = tt.constant(var_mean, 'm' + str(i), 1, dtype) log_zeta = tt.constant(var_log_norm, 'log_zeta' + str(i), 0, dtype) phi_funcs.append(PhiFunc(Q, b, log_zeta)) psi_funcs.append(PsiFunc(L, m)) print('Var. log norm RMSE: {0}'.format(rmse(var_log_norm, true_log_norm_list[-1])))
_____no_output_____
MIT
notebooks/gaussian-mixture-boltzmann-machine-relaxations.ipynb
matt-graham/continuously-tempered-hmc
Annealed Importance Sampling
num_temps = [1000, 5000, 10000, 20000] dt = 0.5 temp_scale = 4. num_reps = 10 num_step = 10 num_runs_per_rep = 100 mom_resample_coeff = 1. num_runs = num_reps * num_runs_per_rep pos = tt.matrix('pos') inv_temps = tt.vector('inv_temps') hmc_params = { 'dt': dt, 'n_step': num_step, 'mom_resample_coeff': mom_resample_coeff } ais_sampler = disc_temp.AnnealedImportanceSampler( tt.shared_randomstreams.RandomStreams(seed), False ) ais_run_funcs = [] for phi_func, psi_func in zip(phi_funcs, psi_funcs): pos_samples, log_weights, accepts, updates = ais_sampler.run( pos, None, inv_temps, phi_func, psi_func, hmc_params ) ais_run = th.function( [pos, inv_temps], [pos_samples, log_weights, accepts], updates=updates ) ais_run_funcs.append(ais_run) for i in range(10): ais_exp_dir = os.path.join(exp_dir, 'ais', 'params-' + str(i)) if not os.path.exists(ais_exp_dir): os.makedirs(ais_exp_dir) for num_temp in num_temps: settings = { 'dt': dt, 'num_temp': num_temp, 'temp_scale': temp_scale, 'n_step': num_step, 'mom_resample_coeff': mom_resample_coeff } print('Parameters {0} num temps {1}'.format(i, num_temp)) print('-' * 100) print(settings) settings_path = os.path.join(ais_exp_dir, 'settings-{0}.json'.format(num_temp)) results_path = os.path.join(ais_exp_dir, 'results-{0}.npz'.format(num_temp)) with open(settings_path, 'w') as f: json.dump(settings, f, indent=True) inv_temp_sched = sigmoidal_schedule(num_temp, temp_scale) num_dim = relaxation_list[i].n_dim_r pos_init = rng.normal(size=(num_runs, num_dim)).dot( var_covar_chol_list[i].T) + var_mean_list[i] start_time = time.time() pos_samples, log_weights, accepts = ais_run_funcs[i]( pos_init, inv_temp_sched ) sampling_time = time.time() - start_time print('Sampling time: {0:.2f}s'.format(sampling_time)) log_norm_rmses = [] mean_rmses = [] covar_rmses = [] for lw, ps in zip( log_weights.reshape((num_reps, -1)), pos_samples.reshape((num_reps, num_runs_per_rep, -1))): log_norm_rmses.append( rmse(np.log(np.exp(lw).mean(0)) + var_log_norm_list[i], true_log_norm_list[i]) ) probs = np.exp(lw) probs /= probs.sum() mean_est = (probs[:, None] * ps).sum(0) mean_rmses.append( rmse(true_mean_list[i], mean_est) ) ps_zm = ps - mean_est covar_est = (ps_zm * probs[:, None]).T.dot(ps_zm) covar_rmses.append( rmse(true_covar_list[i], covar_est) ) var_log_norm_rmse = rmse(true_log_norm_list[i], var_log_norm_list[i]) var_mean_rmse = rmse(true_mean_list[i], var_mean_list[i]) var_covar_rmse = rmse(true_covar_list[i], var_covar_chol_list[i].dot(var_covar_chol_list[i].T)) print('RMSE log_norm={0:.2f} mean={1:.2f} covar={2:.2f}' .format( np.mean(log_norm_rmses) / var_log_norm_rmse, np.mean(mean_rmses) / var_mean_rmse, np.mean(covar_rmses) / var_covar_rmse ) ) np.savez( results_path, sampling_time=sampling_time, pos_samples=pos_samples, log_weights=log_weights, accepts=accepts, log_norm_rmses=np.array(log_norm_rmses), mean_rmses=np.array(mean_rmses), covar_rmses=np.array(covar_rmses), var_log_norm_rmse=var_log_norm_rmse, var_mean_rmse=var_mean_rmse, var_covar_rmse=var_covar_rmse ) print('Saved to ' + results_path)
_____no_output_____
MIT
notebooks/gaussian-mixture-boltzmann-machine-relaxations.ipynb
matt-graham/continuously-tempered-hmc
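Written out, the post-processing in the loop above computes the standard AIS estimators: with $w^{(r)} = \exp(\ell^{(r)})$ the importance weight and $x^{(r)}$ the final position of run $r$ within a repetition of $R$ runs,
$$\log \hat{Z} = \log\!\Big(\tfrac{1}{R}\sum_{r} w^{(r)}\Big) + \log Z_{\psi}, \qquad \hat{\mu} = \frac{\sum_r w^{(r)} x^{(r)}}{\sum_r w^{(r)}}, \qquad \hat{\Sigma} = \frac{\sum_r w^{(r)} \big(x^{(r)} - \hat{\mu}\big)\big(x^{(r)} - \hat{\mu}\big)^{\mathsf T}}{\sum_r w^{(r)}},$$
where $\log Z_{\psi}$ is the variational log-normaliser (`var_log_norm`); the printed figures are the RMSEs of these estimates normalised by the RMSEs of the plain variational approximation.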
Hamiltonian Annealed Importance Sampling
num_temps = [1000, 5000, 10000, 20000] dt = 0.5 temp_scale = 4. num_reps = 10 num_step = 1 num_runs_per_rep = 500 num_runs = num_reps * num_runs_per_rep pos = tt.matrix('pos') inv_temps = tt.vector('inv_temps') hmc_params = { 'dt': dt, 'n_step': num_step, 'mom_resample_coeff': (1. - 0.5**dt)**0.5 } ais_sampler = disc_temp.AnnealedImportanceSampler( tt.shared_randomstreams.RandomStreams(seed), True ) ais_run_funcs = [] for phi_func, psi_func in zip(phi_funcs, psi_funcs): pos_samples, log_weights, accepts, updates = ais_sampler.run( pos, None, inv_temps, phi_func, psi_func, hmc_params ) ais_run = th.function( [pos, inv_temps], [pos_samples, log_weights, accepts], updates=updates ) ais_run_funcs.append(ais_run) for i in range(10): ais_exp_dir = os.path.join(exp_dir, 'h-ais', 'params-' + str(i)) if not os.path.exists(ais_exp_dir): os.makedirs(ais_exp_dir) for num_temp in num_temps: settings = { 'dt': dt, 'num_temp': num_temp, 'temp_scale': temp_scale, 'n_step': num_step, 'mom_resample_coeff': (1. - 0.5**dt)**0.5 } print('Parameters {0} num temp {1}'.format(i, num_temp)) print('-' * 100) print(settings) settings_path = os.path.join(ais_exp_dir, 'settings-{0}.json'.format(num_temp)) results_path = os.path.join(ais_exp_dir, 'results-{0}.npz'.format(num_temp)) with open(settings_path, 'w') as f: json.dump(settings, f, indent=True) inv_temp_sched = sigmoidal_schedule(num_temp, temp_scale) num_dim = relaxation_list[i].n_dim_r pos_init = rng.normal(size=(num_runs, num_dim)).dot( var_covar_chol_list[i].T) + var_mean_list[i] start_time = time.time() pos_samples, log_weights, accepts = ais_run_funcs[i]( pos_init, inv_temp_sched ) sampling_time = time.time() - start_time print('Sampling time: {0:.2f}s'.format(sampling_time)) log_norm_rmses = [] mean_rmses = [] covar_rmses = [] for lw, ps in zip( log_weights.reshape((num_reps, -1)), pos_samples.reshape((num_reps, num_runs_per_rep, -1))): log_norm_rmses.append( rmse(np.log(np.exp(lw).mean(0)) + var_log_norm_list[i], true_log_norm_list[i]) ) probs = np.exp(lw) probs /= probs.sum() mean_est = (probs[:, None] * ps).sum(0) mean_rmses.append( rmse(true_mean_list[i], mean_est) ) ps_zm = ps - mean_est covar_est = (ps_zm * probs[:, None]).T.dot(ps_zm) covar_rmses.append( rmse(true_covar_list[i], covar_est) ) var_log_norm_rmse = rmse(true_log_norm_list[i], var_log_norm_list[i]) var_mean_rmse = rmse(true_mean_list[i], var_mean_list[i]) var_covar_rmse = rmse(true_covar_list[i], var_covar_chol_list[i].dot(var_covar_chol_list[i].T)) print('RMSE log_norm={0:.2f} mean={1:.2f} covar={2:.2f}' .format( np.mean(log_norm_rmses) / var_log_norm_rmse, np.mean(mean_rmses) / var_mean_rmse, np.mean(covar_rmses) / var_covar_rmse ) ) np.savez( results_path, sampling_time=sampling_time, pos_samples=pos_samples, log_weights=log_weights, accepts=accepts, log_norm_rmses=np.array(log_norm_rmses), mean_rmses=np.array(mean_rmses), covar_rmses=np.array(covar_rmses), var_log_norm_rmse=var_log_norm_rmse, var_mean_rmse=var_mean_rmse, var_covar_rmse=var_covar_rmse ) print('Saved to ' + results_path) print('-' * 100)
_____no_output_____
MIT
notebooks/gaussian-mixture-boltzmann-machine-relaxations.ipynb
matt-graham/continuously-tempered-hmc
Incremental RMSE helper
def rmse(x, y): return ((x - y)**2).mean()**0.5 def calculate_incremental_rmses(x_samples, probs_1, probs_0, true_log_norm, true_mean, true_covar): n_sample, n_chain, n_dim = x_samples.shape sum_probs_1_x = 0 sum_probs_1_xx = 0 sum_probs_1 = 0 sum_probs_0 = 0 log_norm_rmses = np.empty(n_sample) * np.nan mean_rmses = np.empty(n_sample) * np.nan covar_rmses = np.empty(n_sample) * np.nan for s in range(n_sample): p1 = probs_1[s] p0 = probs_0[s] x = x_samples[s] sum_probs_1_x += p1[:, None] * x sum_probs_1_xx += p1[:, None, None] * (x[:, :, None] * x[:, None, :]) sum_probs_1 += p1 sum_probs_0 += p0 log_norm_est = np.log(sum_probs_1.sum(0)) - np.log(sum_probs_0.sum(0)) mean_est = sum_probs_1_x.sum(0) / sum_probs_1.sum(0) covar_est = sum_probs_1_xx.sum(0) / sum_probs_1.sum(0) - np.outer(mean_est, mean_est) log_norm_rmses[s] = rmse(log_norm_est, true_log_norm) mean_rmses[s] = rmse(mean_est, true_mean) covar_rmses[s] = rmse(covar_est, true_covar) return log_norm_rmses, mean_rmses, covar_rmses
_____no_output_____
MIT
notebooks/gaussian-mixture-boltzmann-machine-relaxations.ipynb
matt-graham/continuously-tempered-hmc
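Written out, the running estimates tracked by this helper after the first $s$ samples are, with $p^{(1)}_{t,c}$ and $p^{(0)}_{t,c}$ denoting the `probs_1` and `probs_0` entries for chain $c$ at sample $t$ and $x_{t,c}$ the corresponding position,
$$\hat{Z}_s = \frac{\sum_{t \le s}\sum_{c} p^{(1)}_{t,c}}{\sum_{t \le s}\sum_{c} p^{(0)}_{t,c}}, \qquad \hat{\mu}_s = \frac{\sum_{t \le s}\sum_{c} p^{(1)}_{t,c}\, x_{t,c}}{\sum_{t \le s}\sum_{c} p^{(1)}_{t,c}}, \qquad \hat{\Sigma}_s = \frac{\sum_{t \le s}\sum_{c} p^{(1)}_{t,c}\, x_{t,c}\, x_{t,c}^{\mathsf T}}{\sum_{t \le s}\sum_{c} p^{(1)}_{t,c}} - \hat{\mu}_s \hat{\mu}_s^{\mathsf T},$$
and the RMSEs of $\log \hat{Z}_s$, $\hat{\mu}_s$ and $\hat{\Sigma}_s$ against the supplied true values are recorded at every $s$.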
Simulated Tempering
num_temp = 1000 dt = 0.5 num_step = 20 temp_scale = 4. num_reps = 10 num_runs_per_rep = 10 num_runs = num_reps * num_runs_per_rep mom_resample_coeff = 1. pos = tt.matrix('pos') idx = tt.lvector('idx') inv_temps = tt.vector('inv_temps') num_sample = tt.lscalar('num_sample') hmc_params = { 'dt': dt, 'n_step': num_step, 'mom_resample_coeff': mom_resample_coeff } st_sampler = disc_temp.SimulatedTemperingSampler( tt.shared_randomstreams.RandomStreams(seed), False ) st_chain_funcs = [] for phi_func, psi_func in zip(phi_funcs, psi_funcs): pos_samples, idx_samples, probs_0, probs_1, accepts, updates = st_sampler.chain( pos, None, idx, inv_temps, 0, phi_func, psi_func, num_sample, hmc_params ) st_chain = th.function( [pos, idx, inv_temps, num_sample], [pos_samples, idx_samples, probs_0, probs_1, accepts], updates=updates ) st_chain_funcs.append(st_chain) num_sample = 40000 for i in range(10): st_exp_dir = os.path.join(exp_dir, 'st', 'params-' + str(i)) if not os.path.exists(st_exp_dir): os.makedirs(st_exp_dir) settings = { 'dt': dt, 'num_temp': num_temp, 'num_sample': num_sample, 'num_step': num_step, 'temp_scale': temp_scale, 'mom_resample_coeff': mom_resample_coeff } print('Parameters {0}'.format(i)) print('-' * 100) print(settings) settings_path = os.path.join(st_exp_dir, 'settings.json') results_path = os.path.join(st_exp_dir, 'results.npz') with open(settings_path, 'w') as f: json.dump(settings, f, indent=True) inv_temp_sched = sigmoidal_schedule(num_temp, temp_scale) num_dim = relaxation_list[i].n_dim_r pos_init = rng.normal(size=(num_runs, num_dim)).dot( var_covar_chol_list[i].T) + var_mean_list[i] idx_init = np.zeros(num_runs, 'int64') start_time = time.time() pos_samples, idx_samples, probs_0, probs_1, accepts = st_chain_funcs[i]( pos_init, idx_init, inv_temp_sched, num_sample ) sampling_time = time.time() - start_time print('Sampling time: {0:.2f}s'.format(sampling_time)) log_norm_rmses = np.empty((num_reps, num_sample)) mean_rmses = np.empty((num_reps, num_sample)) covar_rmses = np.empty((num_reps, num_sample)) for r in range(num_reps): log_norm_rmses[r], mean_rmses[r], covar_rmses[r] = calculate_incremental_rmses( pos_samples[:, r:(r+1)*num_runs_per_rep], probs_1[:, r:(r+1)*num_runs_per_rep], probs_0[:, r:(r+1)*num_runs_per_rep], true_log_norm_list[i] - var_log_norm_list[i], true_mean_list[i], true_covar_list[i] ) var_log_norm_rmse = rmse(true_log_norm_list[i], var_log_norm_list[i]) var_mean_rmse = rmse(true_mean_list[i], var_mean_list[i]) var_covar_rmse = rmse(true_covar_list[i], var_covar_chol_list[i].dot(var_covar_chol_list[i].T)) print('RMSE log_norm={0:.2f} mean={1:.2f} covar={2:.2f}' .format( np.mean(log_norm_rmses[:, -1]) / var_log_norm_rmse, np.mean(mean_rmses[:, -1]) / var_mean_rmse, np.mean(covar_rmses[:, -1]) / var_covar_rmse ) ) fig, axes = plt.subplots(1, 3, figsize=(9, 3)) axes[0].semilogy(log_norm_rmses.mean(0) / var_log_norm_rmse) axes[0].set_title('Log norm RMSE') axes[1].semilogy(mean_rmses.mean(0) / var_mean_rmse) axes[1].set_title('Mean RMSE') axes[2].semilogy(covar_rmses.mean(0) / var_covar_rmse) axes[2].set_title('Covariance RMSE') plt.show() np.savez( results_path, sampling_time=sampling_time, pos_samples=pos_samples, idx_samples=idx_samples, probs_1=probs_1, probs_0=probs_0, accepts=accepts, log_norm_rmses=log_norm_rmses, mean_rmses=mean_rmses, covar_rmses=covar_rmses, var_log_norm_rmse=var_log_norm_rmse, var_mean_rmse=var_mean_rmse, var_covar_rmse=var_covar_rmse ) print('Saved to ' + results_path) print('-' * 100)
_____no_output_____
MIT
notebooks/gaussian-mixture-boltzmann-machine-relaxations.ipynb
matt-graham/continuously-tempered-hmc
Continuous tempering Gibbs
dt = 0.5 num_step = 20 num_reps = 10 num_runs_per_rep = 10 num_runs = num_reps * num_runs_per_rep mom_resample_coeff = 1. pos = tt.matrix('pos') idx = tt.lvector('idx') inv_temp = tt.vector('inv_temp') num_sample = tt.lscalar('n_sample') hmc_params = { 'dt': dt, 'n_step': num_step, 'mom_resample_coeff': mom_resample_coeff } gct_sampler = cont_temp.GibbsContinuousTemperingSampler( tt.shared_randomstreams.RandomStreams(seed), False ) gct_chain_funcs = [] for phi_func, psi_func in zip(phi_funcs, psi_funcs): pos_samples, inv_temp_samples, probs_0, probs_1, accepts, updates = gct_sampler.chain( pos, None, inv_temp, phi_func, psi_func, num_sample, hmc_params ) gct_chain = th.function( [pos, inv_temp, num_sample], [pos_samples, inv_temp_samples, probs_0, probs_1, accepts], updates=updates ) gct_chain_funcs.append(gct_chain) num_sample = 60000 for i in range(10): gct_exp_dir = os.path.join(exp_dir, 'gibbs-ct', 'params-' + str(i)) if not os.path.exists(gct_exp_dir): os.makedirs(gct_exp_dir) settings = { 'dt': dt, 'num_sample': num_sample, 'num_step': num_step, 'mom_resample_coeff': mom_resample_coeff } print('Parameters {0}'.format(i)) print('-' * 100) print(settings) settings_path = os.path.join(gct_exp_dir, 'settings.json') results_path = os.path.join(gct_exp_dir, 'results.npz') with open(settings_path, 'w') as f: json.dump(settings, f, indent=True) num_dim = relaxation_list[i].n_dim_r pos_init = rng.normal(size=(num_runs, num_dim)).dot( var_covar_chol_list[i].T) + var_mean_list[i] inv_temp_init = np.zeros(num_runs) start_time = time.time() pos_samples, inv_temp_samples, probs_0, probs_1, accepts = gct_chain_funcs[i]( pos_init, inv_temp_init, num_sample ) sampling_time = time.time() - start_time print('Sampling time: {0:.2f}s'.format(sampling_time)) log_norm_rmses = np.empty((num_reps, num_sample)) mean_rmses = np.empty((num_reps, num_sample)) covar_rmses = np.empty((num_reps, num_sample)) for r in range(num_reps): log_norm_rmses[r], mean_rmses[r], covar_rmses[r] = calculate_incremental_rmses( pos_samples[:, r:(r+1)*num_runs_per_rep], probs_1[:, r:(r+1)*num_runs_per_rep], probs_0[:, r:(r+1)*num_runs_per_rep], true_log_norm_list[i] - var_log_norm_list[i], true_mean_list[i], true_covar_list[i] ) var_log_norm_rmse = rmse(true_log_norm_list[i], var_log_norm_list[i]) var_mean_rmse = rmse(true_mean_list[i], var_mean_list[i]) var_covar_rmse = rmse(true_covar_list[i], var_covar_chol_list[i].dot(var_covar_chol_list[i].T)) print('RMSE log_norm={0:.2f} mean={1:.2f} covar={2:.2f}' .format( np.mean(log_norm_rmses[:, -1]) / var_log_norm_rmse, np.mean(mean_rmses[:, -1]) / var_mean_rmse, np.mean(covar_rmses[:, -1]) / var_covar_rmse ) ) fig, axes = plt.subplots(1, 3, figsize=(9, 3)) axes[0].semilogy(log_norm_rmses.mean(0) / var_log_norm_rmse) axes[0].set_title('Log norm RMSE') axes[1].semilogy(mean_rmses.mean(0) / var_mean_rmse) axes[1].set_title('Mean RMSE') axes[2].semilogy(covar_rmses.mean(0) / var_covar_rmse) axes[2].set_title('Covariance RMSE') plt.show() np.savez( results_path, sampling_time=sampling_time, pos_samples=pos_samples, inv_temp_samples=inv_temp_samples, probs_1=probs_1, probs_0=probs_0, accepts=accepts, log_norm_rmses=log_norm_rmses, mean_rmses=mean_rmses, covar_rmses=covar_rmses, var_log_norm_rmse=var_log_norm_rmse, var_mean_rmse=var_mean_rmse, var_covar_rmse=var_covar_rmse ) print('Saved to ' + results_path) print('-' * 100)
_____no_output_____
MIT
notebooks/gaussian-mixture-boltzmann-machine-relaxations.ipynb
matt-graham/continuously-tempered-hmc
Joint
dt = 0.5 num_step = 20 temp_scale = 1. num_reps = 10 num_runs_per_rep = 10 num_runs = num_reps * num_runs_per_rep mom_resample_coeff = 1. pos = tt.matrix('pos') tmp_ctrl = tt.vector('tmp_ctrl') num_sample = tt.lscalar('n_sample') ctrl_func = ctrl.SigmoidalControlFunction(temp_scale) hmc_params = { 'dt': dt, 'n_step': num_step, 'mom_resample_coeff': mom_resample_coeff } jct_sampler = cont_temp.JointContinuousTemperingSampler( tt.shared_randomstreams.RandomStreams(seed), False ) jct_chain_funcs = [] for phi_func, psi_func in zip(phi_funcs, psi_funcs): (pos_samples, tmp_ctrl_sample, inv_temp_samples, probs_0, probs_1, accepts, updates) = jct_sampler.chain( pos, tmp_ctrl, None, phi_func, psi_func, ctrl_func, num_sample, hmc_params ) jct_chain = th.function( [pos, tmp_ctrl, num_sample], [pos_samples, inv_temp_samples, probs_0, probs_1, accepts], updates=updates ) jct_chain_funcs.append(jct_chain) num_sample = 50000 for i in range(10): jct_exp_dir = os.path.join(exp_dir, 'joint-ct', 'params-' + str(i)) if not os.path.exists(jct_exp_dir): os.makedirs(jct_exp_dir) settings = { 'dt': dt, 'num_sample': num_sample, 'num_step': num_step, 'temp_scale': temp_scale, 'mom_resample_coeff': mom_resample_coeff } print('Parameters {0}'.format(i)) print('-' * 100) print(settings) settings_path = os.path.join(jct_exp_dir, 'settings.json') results_path = os.path.join(jct_exp_dir, 'results.npz') with open(settings_path, 'w') as f: json.dump(settings, f, indent=True) num_dim = relaxation_list[i].n_dim_r pos_init = rng.normal(size=(num_runs, num_dim)).dot( var_covar_chol_list[i].T) + var_mean_list[i] tmp_ctrl_init = np.zeros(num_runs) - 10. start_time = time.time() pos_samples, inv_temp_samples, probs_0, probs_1, accepts = jct_chain_funcs[i]( pos_init, tmp_ctrl_init, num_sample ) sampling_time = time.time() - start_time print('Sampling time: {0:.2f}s'.format(sampling_time)) log_norm_rmses = np.empty((num_reps, num_sample)) mean_rmses = np.empty((num_reps, num_sample)) covar_rmses = np.empty((num_reps, num_sample)) for r in range(num_reps): log_norm_rmses[r], mean_rmses[r], covar_rmses[r] = calculate_incremental_rmses( pos_samples[:, r:(r+1)*num_runs_per_rep], probs_1[:, r:(r+1)*num_runs_per_rep], probs_0[:, r:(r+1)*num_runs_per_rep], true_log_norm_list[i] - var_log_norm_list[i], true_mean_list[i], true_covar_list[i] ) var_log_norm_rmse = rmse(true_log_norm_list[i], var_log_norm_list[i]) var_mean_rmse = rmse(true_mean_list[i], var_mean_list[i]) var_covar_rmse = rmse(true_covar_list[i], var_covar_chol_list[i].dot(var_covar_chol_list[i].T)) print('RMSE log_norm={0:.2f} mean={1:.2f} covar={2:.2f}' .format( np.mean(log_norm_rmses[:, -1]) / var_log_norm_rmse, np.mean(mean_rmses[:, -1]) / var_mean_rmse, np.mean(covar_rmses[:, -1]) / var_covar_rmse ) ) fig, axes = plt.subplots(1, 3, figsize=(9, 3)) axes[0].semilogy(log_norm_rmses.mean(0) / var_log_norm_rmse) axes[0].set_title('Log norm RMSE') axes[1].semilogy(mean_rmses.mean(0) / var_mean_rmse) axes[1].set_title('Mean RMSE') axes[2].semilogy(covar_rmses.mean(0) / var_covar_rmse) axes[2].set_title('Covariance RMSE') plt.show() np.savez( results_path, sampling_time=sampling_time, pos_samples=pos_samples, inv_temp_samples=inv_temp_samples, probs_1=probs_1, probs_0=probs_0, accepts=accepts, log_norm_rmses=log_norm_rmses, mean_rmses=mean_rmses, covar_rmses=covar_rmses, var_log_norm_rmse=var_log_norm_rmse, var_mean_rmse=var_mean_rmse, var_covar_rmse=var_covar_rmse ) print('Saved to ' + results_path) print('-' * 100)
_____no_output_____
MIT
notebooks/gaussian-mixture-boltzmann-machine-relaxations.ipynb
matt-graham/continuously-tempered-hmc
Topological Data Analysis with Python and the Gudhi Library Introduction to simplex trees **Authors** : F. Chazal and B. Michel TDA typically aims at extracting topological signatures from a point cloud in $\mathbb R^d$ or in a general metric space. By the topology of a point cloud, we actually mean the topology of unions of balls centered at its points (its offsets). However, non-discrete sets such as offsets, and also continuous mathematical shapes like curves, surfaces and more generally manifolds, cannot easily be encoded as finite discrete structures. [Simplicial complexes](https://en.wikipedia.org/wiki/Simplicial_complex) are therefore used in computational geometry to approximate such shapes. A simplicial complex is a set of [simplices](https://en.wikipedia.org/wiki/Simplex). It can be seen as a higher-dimensional generalization of a graph. It is a mathematical object that is both topological and combinatorial, which makes it particularly useful for TDA. Here is an example of a simplicial complex:![title](Images/Pers14.PNG) A filtration is an increasing sequence of sub-complexes of a simplicial complex $\mathcal K$. It can be seen as an ordering of the simplices included in the complex. Indeed, simplicial complexes often come with a specific order, as for [Vietoris-Rips complexes](https://en.wikipedia.org/wiki/Vietoris%E2%80%93Rips_complex), [Cech complexes](https://en.wikipedia.org/wiki/%C4%8Cech_complex) and [alpha complexes](https://en.wikipedia.org/wiki/Alpha_shape#Alpha_complex).
from IPython.display import Image from os import chdir import numpy as np import gudhi as gd import matplotlib.pyplot as plt
_____no_output_____
MIT
Tuto-GUDHI-simplex-Trees.ipynb
vishalbelsare/TDA-tutorial
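Before building simplex trees by hand, here is a minimal sketch (reusing the `np` and `gd` imports above) of how a filtration can be generated automatically from a point cloud via the standard `gudhi.RipsComplex` API; it is purely illustrative and is not used in the rest of this notebook.
points = np.random.rand(20, 2)                       # 20 random points in the unit square
rips = gd.RipsComplex(points=points, max_edge_length=0.5)
rips_st = rips.create_simplex_tree(max_dimension=2)  # a SimplexTree, the structure described below
# in a Rips filtration each simplex appears at the length of its longest edge
print(rips_st.num_simplices(), rips_st.dimension())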
In Gudhi, filtered simplicial complexes are encoded through a data structure called a simplex tree. ![](https://gudhi.inria.fr/python/latest/_images/Simplex_tree_representation.png) This notebook illustrates the use of simplex trees to represent simplicial complexes from data points. See the [Python Gudhi documentation](https://gudhi.inria.fr/python/latest/simplex_tree_ref.html) for more details on simplex trees. My first simplex tree Let's create our first simplicial complex, represented by a simplex tree:
st = gd.SimplexTree()
_____no_output_____
MIT
Tuto-GUDHI-simplex-Trees.ipynb
vishalbelsare/TDA-tutorial
The `st` object has class `SimplexTree`. For now, `st` is an empty simplex tree. The `SimplexTree` class has several useful methods for the practice of TDA. For instance, there are methods to define new types of simplicial complexes from existing ones. The `insert()` method can be used to insert simplices into the simplex tree. In the simplex tree:- vertices (0-dimensional simplices) are represented with integers, - edges (1-dimensional simplices) are represented with a length-2 list of integers (corresponding to the two vertices involved in the edge),- triangles (2-dimensional simplices) are represented with a length-3 list of integers (corresponding to the three vertices involved in the triangle),- etc. For example, the following piece of code inserts three edges into the simplex tree:
st.insert([0, 1]) st.insert([1, 2]) st.insert([3, 1])
_____no_output_____
MIT
Tuto-GUDHI-simplex-Trees.ipynb
vishalbelsare/TDA-tutorial
When the simplex is successfully inserted into the simplex tree, the `insert()` method outputs `True` as you can see from the execution of the above code. On the contrary, if the simplex is already in the filtration, the `insert()` method outputs `False`:
st.insert([3, 1])
_____no_output_____
MIT
Tuto-GUDHI-simplex-Trees.ipynb
vishalbelsare/TDA-tutorial
We obtain the list of all the simplices in the simplex tree with the `get_filtration()` method:
st_gen = st.get_filtration()
_____no_output_____
MIT
Tuto-GUDHI-simplex-Trees.ipynb
vishalbelsare/TDA-tutorial
The output `st_gen` is a generator, and we can thus iterate over its elements. Each element is a tuple that contains a simplex and its **filtration value**.
for splx in st_gen : print(splx)
([0], 0.0) ([1], 0.0) ([0, 1], 0.0) ([2], 0.0) ([1, 2], 0.0) ([3], 0.0) ([1, 3], 0.0)
MIT
Tuto-GUDHI-simplex-Trees.ipynb
vishalbelsare/TDA-tutorial
Intuitively, the filtration value of a simplex in a filtered complex acts as a *time stamp* corresponding to "when" the simplex appears in the filtration. By default, the `insert()` method assigns a filtration value equal to 0. Notice that inserting an edge automatically inserts its vertices (if they were not already in the complex) in order to satisfy the **inclusion property** of a filtered complex: any simplex with filtration value $t$ must have all its faces in the filtered complex, with filtration values smaller than or equal to $t$. Simplex tree description The dimension of a simplicial complex is the largest dimension of the simplices in it. It can be retrieved with the simplex tree `dimension()` method:
st.dimension()
_____no_output_____
MIT
Tuto-GUDHI-simplex-Trees.ipynb
vishalbelsare/TDA-tutorial
It is possible to compute the number of vertices in a simplex tree via the `num_vertices()` method:
st.num_vertices()
_____no_output_____
MIT
Tuto-GUDHI-simplex-Trees.ipynb
vishalbelsare/TDA-tutorial
The number of simplices in the simplex tree is given by the `num_simplices()` method:
st.num_simplices()
_____no_output_____
MIT
Tuto-GUDHI-simplex-Trees.ipynb
vishalbelsare/TDA-tutorial
The [$d$-skeleton](https://en.wikipedia.org/wiki/N-skeleton) -- which is the union of all simplices of dimension smaller than or equal to $d$ -- can also be computed with the `get_skeleton()` method. This method takes as argument the dimension of the desired skeleton. To retrieve the graph structure (the 1-skeleton) of a simplex tree, we can therefore call:
print(st.get_skeleton(1))
[([0, 1], 0.0), ([0], 0.0), ([1, 2], 0.0), ([1, 3], 0.0), ([1], 0.0), ([2], 0.0), ([3], 0.0)]
MIT
Tuto-GUDHI-simplex-Trees.ipynb
vishalbelsare/TDA-tutorial
One can also check whether a simplex is already in the filtration. This is achieved with the `find()` method:
st.find([2, 4])
_____no_output_____
MIT
Tuto-GUDHI-simplex-Trees.ipynb
vishalbelsare/TDA-tutorial
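For comparison, looking up one of the edges inserted earlier on the same tree should succeed:
st.find([1, 2])   # this edge is in the tree, so True is expected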
Filtration valuesWe can insert simplices at a given filtration value. For example, the following piece of code will insert three triangles in the simplex tree at three different filtration values:
st.insert([0, 1, 2], filtration = 0.1) st.insert([1, 2, 3], filtration = 0.2) st.insert([0, 1, 3], filtration = 0.4) st_gen = st.get_filtration() for splx in st_gen : print(splx)
([0], 0.0) ([1], 0.0) ([0, 1], 0.0) ([2], 0.0) ([1, 2], 0.0) ([3], 0.0) ([1, 3], 0.0) ([0, 2], 0.1) ([0, 1, 2], 0.1) ([2, 3], 0.2) ([1, 2, 3], 0.2) ([0, 3], 0.4) ([0, 1, 3], 0.4)
MIT
Tuto-GUDHI-simplex-Trees.ipynb
vishalbelsare/TDA-tutorial
As you can see, when we add a new simplex with a given filtration value, all of its faces that were not already in the complex are added with the same filtration value: here the edge `[0, 3]` was not part of the tree before we inserted the triangle `[0, 1, 3]`, and it is thus inserted with the filtration value of that triangle. On the other hand, the filtration value of faces that were already part of the tree is left unchanged. One can modify the filtration value of any simplex included in the tree with the `assign_filtration()` method:
st.assign_filtration([3], filtration = 0.8) st_gen = st.get_filtration() for splx in st_gen: print(splx)
([0], 0.0) ([1], 0.0) ([0, 1], 0.0) ([2], 0.0) ([1, 2], 0.0) ([1, 3], 0.0) ([0, 2], 0.1) ([0, 1, 2], 0.1) ([2, 3], 0.2) ([1, 2, 3], 0.2) ([0, 3], 0.4) ([0, 1, 3], 0.4) ([3], 0.8)
MIT
Tuto-GUDHI-simplex-Trees.ipynb
vishalbelsare/TDA-tutorial
Notice that the vertex `[3]` has been moved to the end of the filtration because it now has the highest filtration value. However, this simplex tree is no longer a valid filtered simplicial complex, because the filtration value of the vertex `[3]` is higher than the filtration value of the edge `[2, 3]` that contains it. We can use the `make_filtration_non_decreasing()` method to fix the problem:
st.make_filtration_non_decreasing() st_gen = st.get_filtration() for splx in st_gen: print(splx)
([0], 0.0) ([1], 0.0) ([0, 1], 0.0) ([2], 0.0) ([1, 2], 0.0) ([0, 2], 0.1) ([0, 1, 2], 0.1) ([3], 0.8) ([0, 3], 0.8) ([1, 3], 0.8) ([0, 1, 3], 0.8) ([2, 3], 0.8) ([1, 2, 3], 0.8)
MIT
Tuto-GUDHI-simplex-Trees.ipynb
vishalbelsare/TDA-tutorial
Finally, it is worth mentioning the `filtration()` method, which returns the filtration value of a given simplex in the filtration:
st.filtration([2, 3])
_____no_output_____
MIT
Tuto-GUDHI-simplex-Trees.ipynb
vishalbelsare/TDA-tutorial
Non-Linear Classifiers
# Global variables for testing changes to this notebook quickly RANDOM_SEED = 0 NUM_FOLDS = 5 import numpy as np import pandas as pd import time import math import os import pyarrow import gc # scikit-learn optimization from sklearnex import patch_sklearn patch_sklearn() # Model evaluation from sklearn.base import clone from sklearn.model_selection import StratifiedKFold, train_test_split from sklearn.metrics import accuracy_score # Plotting import matplotlib import seaborn as sns from matplotlib import pyplot as plt from IPython.display import Image # Hide warnings import warnings warnings.filterwarnings('ignore')
Intel(R) Extension for Scikit-learn* enabled (https://github.com/intel/scikit-learn-intelex)
MIT
tps-2022-02/notebooks/Notebook 3 - Nonlinear Classifiers.ipynb
rsizem2/tabular-playground-series
Scoring Function
# Scoring/Training Baseline Function def score_model(sklearn_model, preprocessing = None): # Store the holdout predictions oof_preds = np.zeros((train.shape[0],)) scores = np.zeros(NUM_FOLDS) times = np.zeros(NUM_FOLDS) print('') # Stratified k-fold cross-validation skf = StratifiedKFold(n_splits = NUM_FOLDS, shuffle = True, random_state = RANDOM_SEED) for fold, (train_idx, valid_idx) in enumerate(skf.split(train, target_bins)): # Training and Validation Sets X_train, y_train = train[features].iloc[train_idx], train['target'].iloc[train_idx] X_valid, y_valid = train[features].iloc[valid_idx], train['target'].iloc[valid_idx] train_weight, valid_weight = train['sample_weight'].iloc[train_idx], train['sample_weight'].iloc[valid_idx] # Preprocessing start = time.time() if preprocessing: X_train = preprocessing.fit_transform(X_train) X_valid = preprocessing.transform(X_valid) # Create model model = clone(sklearn_model) try: model.fit(X_train, y_train, sample_weight = train_weight) except: model.fit(X_train, y_train) # validation valid_preds = model.predict(X_valid) scores[fold] = accuracy_score(y_valid, valid_preds, sample_weight = valid_weight) oof_preds[valid_idx] = valid_preds end = time.time() print(f'Fold {fold}: {round(scores[fold], 5)} accuracy in {round(end-start,2)}s.') times[fold] = end-start mask1, mask10 = train.gcd == 1, train.gcd == 10 mask1000, mask10000 = train.gcd == 1000, train.gcd == 10000 print("\nAccuracy (1M Reads):", round(accuracy_score(oof_preds[mask1], train['target'].loc[mask1], sample_weight = train['sample_weight'].loc[mask1]), 5)) print("Accuracy (100k Reads):", round(accuracy_score(oof_preds[mask10], train['target'].loc[mask10], sample_weight = train['sample_weight'].loc[mask10]), 5)) print("Accuracy (1k Reads):", round(accuracy_score(oof_preds[mask1000], train['target'].loc[mask1000], sample_weight = train['sample_weight'].loc[mask1000]), 5)) print("Accuracy (100 Reads):", round(accuracy_score(oof_preds[mask10000], train['target'].loc[mask10000], sample_weight = train['sample_weight'].loc[mask10000]), 5)) print("Out-of-Fold Accuracy:", round(accuracy_score(oof_preds, train['target'], sample_weight = train['sample_weight']), 5)) print(f'Training Time: {round(times.sum(), 2)}s') return oof_preds from sklearn.metrics import ConfusionMatrixDisplay, confusion_matrix, accuracy_score import matplotlib.pyplot as plt # Plot confusion matrix def plot_confusion_matrix(true_values, pred_values, gcds, plot_title = "Confusion Matrix"): gcd = [[1,10],[1000,10000]] # Confusion matrix fig, ax = plt.subplots(2, 2, figsize = (12,9)) for row in range(2): for col in range(2): idx = 2*row + col cm = confusion_matrix(true_values[gcds == gcd[row][col]], pred_values[gcds == gcd[row][col]]) np.fill_diagonal(cm, 0) disp = ConfusionMatrixDisplay(confusion_matrix = cm) disp.plot(ax = ax[row,col]) plt.show()
_____no_output_____
MIT
tps-2022-02/notebooks/Notebook 3 - Nonlinear Classifiers.ipynb
rsizem2/tabular-playground-series
Load Data
%%time from sklearn.preprocessing import LabelEncoder train = pd.read_feather('../data/train.feather') features = [x for x in train.columns if x not in ['row_id','target','sample_weight','gcd']] encoder = LabelEncoder() train['target'] = encoder.fit_transform(train['target']) target_bins = train['target'].astype(str) + train['gcd'].astype(str) print(f'Training Samples: {len(train)}')
Training Samples: 123993 CPU times: total: 1.12 s Wall time: 195 ms
MIT
tps-2022-02/notebooks/Notebook 3 - Nonlinear Classifiers.ipynb
rsizem2/tabular-playground-series
Naive Bayes
from sklearn.naive_bayes import MultinomialNB from sklearn.preprocessing import PowerTransformer, MinMaxScaler from math import factorial def fix_bias(input_df, add = True): df = input_df.copy() bias = lambda w, x, y, z: factorial(10) / (factorial(w) * factorial(x) * factorial(y) * factorial(z) * 4**10) for col in features: w = int(col[1:col.index('T')]) x = int(col[col.index('T')+1:col.index('G')]) y = int(col[col.index('G')+1:col.index('C')]) z = int(col[col.index('C')+1:]) if add: df[col] = df[col] + bias(w, x, y, z) else: df[col] = df[col] - bias(w, x, y, z) return df # Naive Bayes oof_preds = score_model( MultinomialNB(), MinMaxScaler() ) plot_confusion_matrix(train['target'], oof_preds, train['gcd'])
Fold 0: 0.55738 accuracy in 0.43s. Fold 1: 0.56818 accuracy in 0.29s. Fold 2: 0.54745 accuracy in 0.31s. Fold 3: 0.56104 accuracy in 0.31s. Fold 4: 0.55401 accuracy in 0.31s. Accuracy (1M Reads): 0.61076 Accuracy (100k Reads): 0.61098 Accuracy (1k Reads): 0.56488 Accuracy (100 Reads): 0.4438 Out-of-Fold Accuracy: 0.55762 Training Time: 1.65s
MIT
tps-2022-02/notebooks/Notebook 3 - Nonlinear Classifiers.ipynb
rsizem2/tabular-playground-series
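For reference, the bias term evaluated by the lambda in `fix_bias` above is exactly the multinomial probability written out below; interpreting $w, x, y, z$ as the A, T, G and C counts parsed from the feature names (an assumption about the feature encoding, not stated in the code), it is the probability of that base composition among 10 positions when each base is drawn uniformly at random.
$$\mathrm{bias}(w, x, y, z) = \frac{10!}{w!\,x!\,y!\,z!}\left(\frac{1}{4}\right)^{10}, \qquad w + x + y + z = 10.$$
Note that the scoring call above only applies `MinMaxScaler`, so `fix_bias` is defined here but not actually used in this run.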
KNN Classifier
from sklearn.neighbors import KNeighborsClassifier from sklearn.preprocessing import StandardScaler # KNN oof_preds = score_model( KNeighborsClassifier(n_neighbors = 1), StandardScaler() ) plot_confusion_matrix(train['target'], oof_preds, train['gcd'])
Fold 0: 0.91848 accuracy in 3.23s. Fold 1: 0.91989 accuracy in 3.25s. Fold 2: 0.91832 accuracy in 3.45s. Fold 3: 0.91746 accuracy in 3.47s. Fold 4: 0.91839 accuracy in 3.42s. Accuracy (1M Reads): 1.0 Accuracy (100k Reads): 0.99998 Accuracy (1k Reads): 0.84836 Accuracy (100 Reads): 0.82578 Out-of-Fold Accuracy: 0.91851 Training Time: 16.82s
MIT
tps-2022-02/notebooks/Notebook 3 - Nonlinear Classifiers.ipynb
rsizem2/tabular-playground-series
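As a possible extension (a sketch only, not run in this notebook), the same scoring helper could be reused to check whether a larger neighbourhood changes the picture:
for k in [1, 3, 5, 7]:
    print(f'n_neighbors = {k}')
    _ = score_model(KNeighborsClassifier(n_neighbors = k), StandardScaler())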
Radius Neighbors
from sklearn.neighbors import RadiusNeighborsClassifier # Radius Neighbors oof_preds = score_model( RadiusNeighborsClassifier( n_jobs = -1, outlier_label = 'most_frequent', ), StandardScaler() ) plot_confusion_matrix(train['target'], oof_preds, train['gcd'])
Fold 0: 0.36938 accuracy in 55.13s. Fold 1: 0.36505 accuracy in 54.6s. Fold 2: 0.36727 accuracy in 54.28s. Fold 3: 0.37093 accuracy in 54.42s. Fold 4: 0.37039 accuracy in 54.55s. Accuracy (1M Reads): 0.88689 Accuracy (100k Reads): 0.38838 Accuracy (1k Reads): 0.09996 Accuracy (100 Reads): 0.09964 Out-of-Fold Accuracy: 0.3686 Training Time: 272.97s
MIT
tps-2022-02/notebooks/Notebook 3 - Nonlinear Classifiers.ipynb
rsizem2/tabular-playground-series
Nearest Centroid
from sklearn.neighbors import NearestCentroid # Nearest Centroid oof_preds = score_model( NearestCentroid(), StandardScaler() ) plot_confusion_matrix(train['target'], oof_preds, train['gcd'])
Fold 0: 0.51745 accuracy in 0.52s. Fold 1: 0.52542 accuracy in 0.51s. Fold 2: 0.5295 accuracy in 0.55s. Fold 3: 0.53634 accuracy in 0.56s. Fold 4: 0.51784 accuracy in 0.53s. Accuracy (1M Reads): 0.56773 Accuracy (100k Reads): 0.57384 Accuracy (1k Reads): 0.55603 Accuracy (100 Reads): 0.40347 Out-of-Fold Accuracy: 0.52529 Training Time: 2.66s
MIT
tps-2022-02/notebooks/Notebook 3 - Nonlinear Classifiers.ipynb
rsizem2/tabular-playground-series
Support Vector Machines
from sklearn.svm import SVC # Polynomial SVM oof_preds = score_model( SVC(kernel = "poly", degree = 2, coef0 = 1), StandardScaler() ) plot_confusion_matrix(train['target'], oof_preds, train['gcd']) # RBF SVM oof_preds = score_model( SVC(kernel = "rbf"), StandardScaler() ) plot_confusion_matrix(train['target'], oof_preds, train['gcd'])
Fold 0: 0.93306 accuracy in 28.33s. Fold 1: 0.92856 accuracy in 27.51s. Fold 2: 0.92868 accuracy in 29.49s. Fold 3: 0.93257 accuracy in 29.49s. Fold 4: 0.92989 accuracy in 28.31s. Accuracy (1M Reads): 0.96342 Accuracy (100k Reads): 0.98104 Accuracy (1k Reads): 0.9288 Accuracy (100 Reads): 0.84891 Out-of-Fold Accuracy: 0.93055 Training Time: 143.12s
MIT
tps-2022-02/notebooks/Notebook 3 - Nonlinear Classifiers.ipynb
rsizem2/tabular-playground-series
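For reference, with the settings above scikit-learn's two kernels take the standard forms below, where $\gamma$ denotes the `gamma` parameter (left at its default here); the $+1$ offset and the exponent $2$ in the polynomial kernel come from `coef0 = 1` and `degree = 2` in the first call.
$$K_{\mathrm{poly}}(x, x') = \left(\gamma\, x^{\mathsf T} x' + 1\right)^{2}, \qquad K_{\mathrm{rbf}}(x, x') = \exp\!\left(-\gamma \lVert x - x' \rVert^{2}\right).$$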