Step 4: Replace the missing data
from sklearn.impute import SimpleImputer

imputer = SimpleImputer(missing_values=np.nan, strategy='mean')  # fill NaNs with column means
imputer.fit(a[:, :])
a[:, :] = imputer.transform(a[:, :])
a
b
_____no_output_____
MIT
Project 2/PROJECT 2.ipynb
ParadoxPD/Intro-to-machine-learning
Step 5: Encoding (not required)

Step 6: Splitting the dataset into training and testing sets
from sklearn.model_selection import train_test_split

atrain, atest, btrain, btest = train_test_split(a, b, test_size=0.2, random_state=1)
atrain
_____no_output_____
MIT
Project 2/PROJECT 2.ipynb
ParadoxPD/Intro-to-machine-learning
Step 7: Feature scaling
from sklearn.preprocessing import StandardScaler

sc = StandardScaler()
atrain = sc.fit_transform(atrain)  # fit the scaler on the training set only
atest = sc.transform(atest)        # reuse the training mean/std on the test set
atrain
_____no_output_____
MIT
Project 2/PROJECT 2.ipynb
ParadoxPD/Intro-to-machine-learning
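A quick aside on why the test set is transformed (not re-fitted) above; a minimal sketch with made-up numbers:

```python
# Sketch with made-up data: the scaler's mean/std come from the training
# split only, and the test split is transformed with those same statistics.
import numpy as np
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
xtrain = rng.normal(10, 2, size=(5, 1))
xtest = rng.normal(10, 2, size=(3, 1))

sc = StandardScaler().fit(xtrain)   # statistics learned from training data
print(sc.mean_, sc.scale_)          # reused, unchanged, on the test data
print(sc.transform(xtest))
```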
Part B: Build my first linear model

Step 1: Training the classification model
from sklearn.linear_model import LogisticRegression

LoR = LogisticRegression(random_state=0)
LoR.fit(atrain, btrain)
_____no_output_____
MIT
Project 2/PROJECT 2.ipynb
ParadoxPD/Intro-to-machine-learning
Step 2: Testing the linear model
bestimated = LoR.predict(atest)
# show predictions next to the actual labels, column by column
print(np.concatenate((bestimated.reshape(len(bestimated), 1),
                      btest.reshape(len(btest), 1)), 1))
[[0 0] [0 0] [0 1] [1 1] [0 0] [0 0] [0 0] [1 1] [0 0] [1 0] [0 0] [0 0] [0 0] [1 1] [1 1] [1 1] [1 1] [0 0] [0 0] [1 1] [0 0] [1 1] [1 1] [0 0] [0 1] [0 0] [1 1] [1 0] [1 1] [1 0] [0 0] [0 0] [0 0] [1 1] [0 0] [0 0] [0 0] [0 0] [0 1] [0 0] [1 1] [1 1] [0 0] [0 0] [1 1] [0 1] [0 1] [1 1] [0 0] [1 1] [0 0] [0 0] [0 1] [0 1] [0 1] [0 0] [1 1] [0 0] [1 1] [1 1] [0 0] [0 0] [0 0] [0 0] [0 1] [1 1] [0 0] [0 0] [1 0] [0 0] [1 0] [0 0] [0 1] [0 0] [0 0] [1 1] [0 0] [0 0] [0 0] [0 0]]
MIT
Project 2/PROJECT 2.ipynb
ParadoxPD/Intro-to-machine-learning
Step 3: Performance metrics
from sklearn.metrics import confusion_matrix, accuracy_score, precision_score

cm = confusion_matrix(btest, bestimated)
print(cm)
print(accuracy_score(btest, bestimated))
print(precision_score(btest, bestimated))
_____no_output_____
MIT
Project 2/PROJECT 2.ipynb
ParadoxPD/Intro-to-machine-learning
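To make the numbers above concrete, here is a minimal sketch (with hypothetical counts) of how accuracy and precision fall out of the confusion matrix:

```python
# Hypothetical confusion matrix in scikit-learn's layout:
# rows = actual class, columns = predicted class.
import numpy as np

cm = np.array([[41, 4],    # actual 0: [TN, FP]
               [9, 26]])   # actual 1: [FN, TP]
tn, fp, fn, tp = cm.ravel()
print((tp + tn) / cm.sum())   # accuracy: share of all predictions that are correct
print(tp / (tp + fp))         # precision: share of predicted 1s that are correct
```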
b) Using the KNN algorithm

Part A: Data preprocessing

Step 1: Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
_____no_output_____
MIT
Project 2/PROJECT 2.ipynb
ParadoxPD/Intro-to-machine-learning
Step 2: Import the dataset
dataset = pd.read_csv('Logistic Data.csv')
dataset
_____no_output_____
MIT
Project 2/PROJECT 2.ipynb
ParadoxPD/Intro-to-machine-learning
Step 3: Create the feature matrix and the dependent-variable vector
a = dataset.iloc[:, :-1].values   # feature matrix
b = dataset.iloc[:, -1].values    # dependent-variable vector
a
b
_____no_output_____
MIT
Project 2/PROJECT 2.ipynb
ParadoxPD/Intro-to-machine-learning
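As a side note, here is what the `iloc` split above does on a tiny hypothetical frame:

```python
# Hypothetical two-feature frame: iloc[:, :-1] keeps every column except the
# last as features; iloc[:, -1] keeps the last column as the target.
import pandas as pd

demo = pd.DataFrame({'age': [25, 32], 'salary': [40000, 52000], 'bought': [0, 1]})
print(demo.iloc[:, :-1].values)   # [[25 40000], [32 52000]]
print(demo.iloc[:, -1].values)    # [0 1]
```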
Step 4: Replace the missing data
from sklearn.impute import SimpleImputer

imputer = SimpleImputer(missing_values=np.nan, strategy='mean')
imputer.fit(a[:, :])
a[:, :] = imputer.transform(a[:, :])
a
_____no_output_____
MIT
Project 2/PROJECT 2.ipynb
ParadoxPD/Intro-to-machine-learning
Step 5: Encoding (not required)

Step 6: Splitting the dataset into training and testing sets
from sklearn.model_selection import train_test_split

atrain, atest, btrain, btest = train_test_split(a, b, test_size=0.2, random_state=1)
atrain
_____no_output_____
MIT
Project 2/PROJECT 2.ipynb
ParadoxPD/Intro-to-machine-learning
Step 7: Feature scaling
from sklearn.preprocessing import StandardScaler

sc = StandardScaler()
atrain = sc.fit_transform(atrain)  # fit the scaler on the training set only
atest = sc.transform(atest)        # reuse the training mean/std on the test set
atrain
_____no_output_____
MIT
Project 2/PROJECT 2.ipynb
ParadoxPD/Intro-to-machine-learning
Part B: Build my KNN classification model

Step 1: Training the classification model
from sklearn.neighbors import KNeighborsClassifier

KC = KNeighborsClassifier(n_neighbors=7, weights='uniform', p=2)  # p=2 -> Euclidean distance
KC.fit(atrain, btrain)
_____no_output_____
MIT
Project 2/PROJECT 2.ipynb
ParadoxPD/Intro-to-machine-learning
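For reference, `p=2` above selects the Euclidean (Minkowski with p=2) distance that KNN uses to rank neighbours; a tiny sketch:

```python
# Euclidean distance between two (already scaled) feature vectors.
import numpy as np

u, v = np.array([0.5, -1.2]), np.array([1.0, 0.3])
print(np.sqrt(((u - v) ** 2).sum()))   # same as Minkowski distance with p=2
```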
Step 2: Testing the KNN model
bestimated=KC.predict(atest)
_____no_output_____
MIT
Project 2/PROJECT 2.ipynb
ParadoxPD/Intro-to-machine-learning
Step 3: Performance metrics
from sklearn.metrics import confusion_matrix, accuracy_score, precision_score

cm = confusion_matrix(btest, bestimated)
print(cm)
print(accuracy_score(btest, bestimated))
print(precision_score(btest, bestimated))

# Elbow method: the error rate is the mean mismatch between predictions and truth
error_rate = []
for i in range(1, 30):
    KC = KNeighborsClassifier(n_neighbors=i)
    KC.fit(atrain, btrain)
    bpred_i = KC.predict(atest)
    error_rate.append(np.mean(bpred_i != btest))

plt.plot(range(1, 30), error_rate, marker='o', markerfacecolor='red', markersize=5)
plt.xlabel('K value')
plt.ylabel('Error rate')
_____no_output_____
MIT
Project 2/PROJECT 2.ipynb
ParadoxPD/Intro-to-machine-learning
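A small follow-up to the elbow plot above: the lowest point of `error_rate` suggests a k to try (continuing from the variables defined in that cell):

```python
# Pick the k with the lowest test-set error from the elbow curve above.
best_k = int(np.argmin(error_rate)) + 1   # +1 because the range started at 1
print(f'Lowest error rate {min(error_rate):.3f} at k={best_k}')
```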
CNN Basic

> In this post, we dig into the basic operations of a Convolutional Neural Network and explain what each layer looks like. We then implement a basic CNN architecture with TensorFlow.

- toc: true
- badges: true
- comments: true
- author: Chanseok Kang
- categories: [Python, Deep_Learning, Tensorflow-Keras]
- image: images/cnn_stacked.png
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd

plt.rcParams['figure.figsize'] = (16, 10)
plt.rc('font', size=15)
_____no_output_____
Apache-2.0
_notebooks/2020-10-07-01-CNN-Basic.ipynb
AntonovMikhail/chans_jupyter
Convolutional Neural Network

[Convolutional Neural Network](https://en.wikipedia.org/wiki/Convolutional_neural_network) (CNN for short) is the most widely used architecture for image classification. Previously, we handled an image classification problem (Fashion-MNIST) with a Multi-Layer Perceptron, and we found that it works. There, the information contained in each image was very simple and could be classified with a modest number of parameters. But if the dataset contains high-resolution images with several channels, an MLP may require a huge number of parameters for image classification. How can we classify images effectively without a huge number of parameters? That is why the CNN exists.

![CNN example](image/cnn_example.png)

*Fig 1. Example of a Convolutional Neural Network*

Generally, a CNN consists of **Convolution layers**, **Pooling layers**, and **Fully-connected layers**. Convolution and pooling layers are usually used for feature extraction: they extract the features of the image that matter for classification, so that not every pixel is required. Through these layers, the information in the data is resampled and refined on its way to the output node. The fully-connected layer (also known as FC) is used for classification.

2D Convolution Layer

To understand the convolution layer, we need to look at the shape of the image and the elements used to extract features, called filters (also called kernels).

![convolution layer](image/conv_layer.png)

In the Fashion-MNIST example, the images in the dataset are grayscale, so each pixel value ranges from 0 to 255. But most real-world images, and everything we see, are colorized. Technically, color can be expressed with 3 base colors: Red, Green, and Blue. This color system is called **RGB**, and each channel's pixel values also range from 0 to 255, the same as grayscale pixels.

> Note: There are other color systems like RGBA (Red-Green-Blue-Alpha), HSV (Hue-Saturation-Value), HSL (Hue-Saturation-Lightness), etc. In this post, we just assume that an image has 3 channels (RGB).

So in the figure, this image has a shape of 32x32 and 3 channels (RGB). Next is the filter, whose shape is 5x5x3. The number to focus on is **3**: a filter always extends through the full channel depth of the input volume, so the number of channels in the input and in the filter must match. As you can see, the filter has the same number of channels as the input image. Let's denote the filter by $w$. The next thing to do is slide the filter over the image spatially and compute dot products, the same as the MLP operation (this operation is called **convolution**, and it is the reason this layer is called a convolution layer). Simply speaking, we focus on the region covered by the filter and take a dot product, like this:

$$ w^T x + b $$

Of course, we move the filter over every position where the convolution can be computed.

![animation](image/convolutions.gif)

*Fig 3. Animation of the convolution operation*

After that (assuming we apply no padding and a stride of 1), we get an output volume of shape 28x28x1. There is a formula for the shape of the output volume once the stride and filter are defined:

$$ \frac{(\text{Height of input} - \text{Height of filter})}{\text{Stride}} + 1 $$

Substituting the numbers into this formula, we conclude that the height of the output volume is 28. The output of the convolution process is called a **Feature Map** (or Activation Map). One feature map comes from one filter, and we can apply several filters to one input image. If we use six 5x5 filters, we get 6 separate feature maps.
We can stack them into one object, and we get a "new image" of size 28x28x6.

![cnn stacked](image/cnn_stacked.png)

*Fig 4. Stacked output of convolution*

Here, the number to focus on is **6**: the number of filters we apply in this convolution layer. You may be curious what a filter looks like. Filters were widely used in classic computer vision: for example, to extract the edges of objects in an image, we can apply the *Canny edge filter*, and there is an edge-detection filter named the *Sobel filter*, and so on. We bring that concept of a filter into the CNN. In short, the filter extracts features such as where the edges are, and the convolution layer refines those features.

![operation](image/convolution_op2.png)

*Fig 5. Applying a filter to each pixel*

Here, we considered one channel of the image and applied 6 filters to it. We can extend this to multiple channels and many filters.

![manychannel](image/convolution_manychannel.png)

*Fig 6. Convolution operation with many channels and many filters*

Options of Convolution

So which parameters affect the operation of a convolution layer? There are three main ones: stride, zero-padding, and the activation function.

**Stride** is the step size that determines how far the filter moves right or down for the next convolution. It matters because it defines the output's size. Recall the formula for the feature map size:

$$ \frac{(\text{Height of input} - \text{Height of filter})}{\text{Stride}} + 1 $$

We can choose any stride, as long as the step stays within the original input size. As we increase the stride, the feature map gets smaller, which means the features also get coarser. Think of a picture summarized by just a small dot.

**Zero-padding** is another factor that affects the convolution layer, and its meaning is contained in its name: a border of zeros surrounds the original image before the convolution operation. As the previous process showed, as layers get deeper, the information shrinks relative to the original because of the convolution operation. To preserve the size of the original image through the process, zero-padding is required. If the filter size is FxF and the stride is 1, the zero-padding size that preserves the input size is

$$ \frac{F - 1}{2} $$

You have seen **Activation Functions** in MLPs; they also apply in the convolution layer. For example, applying the Rectified Linear Unit (ReLU) to a feature map removes the values lower than 0.

Convolution Layer in Tensorflow

Let's look at how to use a convolution layer in TensorFlow. In TensorFlow v2.x, the convolution layer is implemented in Keras as a high-level class, so all we need to do is define the parameters correctly. Here is the `__init__` of `tf.keras.layers.Conv2D`:

```python
tf.keras.layers.Conv2D(
    filters, kernel_size, strides=(1, 1), padding='valid', data_format=None,
    dilation_rate=(1, 1), groups=1, activation=None, use_bias=True,
    kernel_initializer='glorot_uniform', bias_initializer='zeros',
    kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None,
    kernel_constraint=None, bias_constraint=None, **kwargs)
```

We already covered filters, kernel_size (same as filter size), strides, and so on. Here is a brief description of the arguments:

| arguments | |
| --- | --- |
| filters | Integer, the dimensionality of the output space (i.e. the number of output filters in the convolution). |
| kernel_size | An integer or tuple/list of 2 integers, specifying the height and width of the 2D convolution window. Can be a single integer to specify the same value for all spatial dimensions. |
| strides | An integer or tuple/list of 2 integers, specifying the strides of the convolution along the height and width. Can be a single integer to specify the same value for all spatial dimensions. Specifying any stride value != 1 is incompatible with specifying any dilation_rate value != 1. |
| padding | one of "valid" or "same" (case-insensitive). |
| data_format | A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch_size, height, width, channels) while channels_first corresponds to inputs with shape (batch_size, channels, height, width). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be channels_last. |
| activation | Activation function to use. If you don't specify anything, no activation is applied (see [keras.activations](https://www.tensorflow.org/api_docs/python/tf/keras/activations)). |
| use_bias | Boolean, whether the layer uses a bias vector. |
| kernel_initializer | Initializer for the kernel weights matrix |
| bias_initializer | Initializer for the bias vector |

We need to focus on `data_format`. As described, its default value is 'channels_last', which means all data in this layer must follow the format `(batch, height, width, channels)`.

The `padding` argument accepts `valid` and `same`. `valid` means no zero-padding: the last convolution window is dropped if the dimensions do not line up (a fractional output size). If you want zero-padding, set `padding` to `same`: the input is padded so that the feature map has the same size as the original (that's why it is called `same`). It is also often called 'half' padding.

Example with Toy Image

Let's use the convolution layer. For simplicity, we will use a simple toy image and look at the output.
image = tf.constant([[[[1], [2], [3]],
                      [[4], [5], [6]],
                      [[7], [8], [9]]]], dtype=np.float32)

fig, ax = plt.subplots()
ax.imshow(image.numpy().reshape(3, 3), cmap='gray')
for (j, i), label in np.ndenumerate(image.numpy().reshape(3, 3)):
    if label < image.numpy().mean():
        ax.text(i, j, label, ha='center', va='center', color='white')
    else:
        ax.text(i, j, label, ha='center', va='center', color='k')
plt.show()
print(image.shape)
(1, 3, 3, 1)
Apache-2.0
_notebooks/2020-10-07-01-CNN-Basic.ipynb
AntonovMikhail/chans_jupyter
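Before running the layer, here is the output-size formula from above as a small helper (the `padding` term is the general form of the formula; the examples in this post use padding 0 or `SAME`):

```python
def conv_output_size(n_in, filter_size, stride=1, padding=0):
    """Feature-map size: (N - F + 2P) // S + 1."""
    return (n_in - filter_size + 2 * padding) // stride + 1

print(conv_output_size(32, 5))   # 28: the 32x32 image with a 5x5 filter
print(conv_output_size(3, 2))    # 2: the 3x3 toy image with a 2x2 filter
```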
We made a simple image of size 3x3. Remember that the order of the data should be `(batch, height, width, channel)`. In this case, the batch size is 1, and we are generating a grayscale image, so the channel is 1. Then we need to define the filter, the kernel size, and the padding method. We will use one filter of shape 2x2 with all-one weights. Note that the image and the filter must be set up with matching formats. If not, an error like the following occurs:

```python
ValueError: setting an array element with a sequence.
```
# Weight Initialization
weight = np.array([[[[1.]], [[1.]]],
                   [[[1.]], [[1.]]]])
weight_init = tf.constant_initializer(weight)
print("weight.shape: {}".format(weight.shape))

# Convolution layer
layer = tf.keras.layers.Conv2D(filters=1, kernel_size=(2, 2), padding='VALID',
                               kernel_initializer=weight_init)
output = layer(image)

# Check the result
fig, ax = plt.subplots()
ax.imshow(output.numpy().reshape(2, 2), cmap='gray')
for (j, i), label in np.ndenumerate(output.numpy().reshape(2, 2)):
    if label < output.numpy().mean():
        ax.text(i, j, label, ha='center', va='center', color='white')
    else:
        ax.text(i, j, label, ha='center', va='center', color='k')
plt.show()
_____no_output_____
Apache-2.0
_notebooks/2020-10-07-01-CNN-Basic.ipynb
AntonovMikhail/chans_jupyter
This is the output of the convolution layer on the toy image. This time, change the padding argument from `VALID` to `SAME` and see the result. In this case, zero-padding ('half' padding) is added, so the output shape changes as well.
# Convolution layer with half padding
layer = tf.keras.layers.Conv2D(filters=1, kernel_size=(2, 2), padding='SAME',
                               kernel_initializer=weight_init)
output2 = layer(image)

# Check the result
fig, ax = plt.subplots()
ax.imshow(output2.numpy().reshape(3, 3), cmap='gray')
for (j, i), label in np.ndenumerate(output2.numpy().reshape(3, 3)):
    if label < output2.numpy().mean():
        ax.text(i, j, label, ha='center', va='center', color='white')
    else:
        ax.text(i, j, label, ha='center', va='center', color='k')
plt.show()
_____no_output_____
Apache-2.0
_notebooks/2020-10-07-01-CNN-Basic.ipynb
AntonovMikhail/chans_jupyter
And what if we apply 3 filters here?
# Weight initialization: 3 filters per spatial position
weight = np.array([[[[1., 10., -1.]], [[1., 10., -1.]]],
                   [[[1., 10., -1.]], [[1., 10., -1.]]]])
weight_init = tf.constant_initializer(weight)
print("Weight shape: {}".format(weight.shape))

# Convolution layer
layer = tf.keras.layers.Conv2D(filters=3, kernel_size=(2, 2), padding='SAME',
                               kernel_initializer=weight_init)
output = layer(image)

# Check output
feature_maps = np.swapaxes(output, 0, 3)
fig, ax = plt.subplots(1, 3)
for x, feature_map in enumerate(feature_maps):
    ax[x].imshow(feature_map.reshape(3, 3), cmap='gray')
    for (j, i), label in np.ndenumerate(feature_map.reshape(3, 3)):
        if label < feature_map.mean():
            ax[x].text(i, j, label, ha='center', va='center', color='white')
        else:
            ax[x].text(i, j, label, ha='center', va='center', color='k')
Weight shape: (2, 2, 1, 3)
Apache-2.0
_notebooks/2020-10-07-01-CNN-Basic.ipynb
AntonovMikhail/chans_jupyter
Pooling Layer

After passing through the activation function, the output can be summarized further. We can summarize the output with some rule, for example, taking the maximum pixel value within a specific window and letting it represent that field.

![max_pool](image/maxpool.png)

*Fig 7. Max-Pooling*

In the figure, we use a 2x2 window. As the window slides with stride 2, it finds the maximum pixel value and emits it as the output. Alternatively, we can take the average pixel value in the window to represent that field. The former is called **Max-Pooling**, and the latter **Average-Pooling**. This kind of process is usually called **Sub-sampling**, since it extracts the important pixel (max or average) from the image, and the output size is reduced (here, by half).

Max Pooling Layer in Tensorflow

Like the convolution layer, the max-pooling layer is defined in TensorFlow-Keras as a high-level class. Here is the `__init__` of `tf.keras.layers.MaxPool2D` (you can also check [AveragePooling2D](https://www.tensorflow.org/api_docs/python/tf/keras/layers/AveragePooling2D) in the documentation):

```python
tf.keras.layers.MaxPool2D(
    pool_size=(2, 2), strides=None, padding='valid', data_format=None, **kwargs)
```

It is almost the same as the convolution layer. The `pool_size` argument plays the role of `kernel_size` in the convolution layer, defining the window from which the maximum value is extracted. Here is a brief description of the arguments:

| arguments | |
| --- | --- |
| pool_size | integer or tuple of 2 integers, factors by which to downscale (vertical, horizontal). (2, 2) will halve the input in both spatial dimensions. If only one integer is specified, the same window length will be used for both dimensions. |
| strides | Integer, tuple of 2 integers, or None. Strides values. If None, it will default to pool_size. |
| padding | One of "valid" or "same" (case-insensitive). |
| data_format | A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch_size, height, width, channels) while channels_first corresponds to inputs with shape (batch_size, channels, height, width). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be channels_last. |

Example with Toy Image

Here, we will check the max-pooling operation with a toy image.
# Sample image
image = tf.constant([[[[4.], [3.]],
                      [[2.], [1.]]]], dtype=np.float32)

# Max Pooling layer
layer = tf.keras.layers.MaxPool2D(pool_size=(2, 2), strides=1, padding='VALID')
output = layer(image)

# Check the output
print(output.numpy())
[[[[4.]]]]
Apache-2.0
_notebooks/2020-10-07-01-CNN-Basic.ipynb
AntonovMikhail/chans_jupyter
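As a cross-check, here is the same 2x2 max-pool computed by hand with NumPy:

```python
# The pooling window covers the whole 2x2 image, so the output is its maximum.
patch = image.numpy().reshape(2, 2)
print(patch.max())   # 4.0, matching the layer output above
```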
After that, we find that the output for this image is just 4, the maximum value. How about the case with `SAME` padding?
layer = tf.keras.layers.MaxPool2D(pool_size=(2, 2), strides=1, padding='SAME')
output = layer(image)
print(output.numpy())
[[[[4.] [3.]] [[2.] [1.]]]]
Apache-2.0
_notebooks/2020-10-07-01-CNN-Basic.ipynb
AntonovMikhail/chans_jupyter
You can see that the output differs from the previous one. That's because, with `SAME` padding, the zero-padding is also treated as pixels during the max-pooling operation, so four max-pooling operations occur.

![SAME padding](image/maxpooling_same_padding.png)

Convolution/MaxPooling in MNIST

In this section, we apply the convolution/max-pooling operations to a more realistic dataset, MNIST. First, load the data and normalize it, then look at a sample.
(X_train, y_train), (X_test, y_test) = tf.keras.datasets.mnist.load_data()

# Normalization
X_train = X_train.astype(np.float32) / 255.
X_test = X_test.astype(np.float32) / 255.

image = X_train[0]
plt.imshow(image, cmap='gray')
plt.show()
_____no_output_____
Apache-2.0
_notebooks/2020-10-07-01-CNN-Basic.ipynb
AntonovMikhail/chans_jupyter
To handle this image in TensorFlow, we need to convert it from a 2-D NumPy array to a 4-D tensor. There are several ways to do this; one approach in TensorFlow is to add axes with `tf.newaxis`, like this:
print("Dimension: {}".format(image.shape)) image = image[tf.newaxis, ..., tf.newaxis] print("Dimension: {}".format(image.shape)) # Convert it to tensor image = tf.convert_to_tensor(image)
Dimension: (28, 28) Dimension: (1, 28, 28, 1)
Apache-2.0
_notebooks/2020-10-07-01-CNN-Basic.ipynb
AntonovMikhail/chans_jupyter
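An equivalent way to get the same 4-D shape is a plain NumPy reshape (a small sketch using the same MNIST image):

```python
# Same (batch, height, width, channel) layout via reshape instead of tf.newaxis.
image_alt = X_train[0].reshape(1, 28, 28, 1)
print(image_alt.shape)   # (1, 28, 28, 1)
```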
Same as before, we initialize the filter weights and apply the convolution layer. In this case, we use 5 filters with a (3, 3) kernel size, a (2, 2) stride, and `SAME` padding.
weight_init = tf.keras.initializers.RandomNormal(stddev=0.01)
layer_conv = tf.keras.layers.Conv2D(filters=5, kernel_size=(3, 3), strides=(2, 2),
                                    padding='SAME', kernel_initializer=weight_init)
output = layer_conv(image)
print(output.shape)

feature_maps = np.swapaxes(output, 0, 3)
fig, ax = plt.subplots(1, 5)
for i, feature_map in enumerate(feature_maps):
    ax[i].imshow(feature_map.reshape(14, 14), cmap='gray')
plt.tight_layout()
plt.show()
_____no_output_____
Apache-2.0
_notebooks/2020-10-07-01-CNN-Basic.ipynb
AntonovMikhail/chans_jupyter
After that, we feed this output into a max-pooling layer as its input.
layer_pool = tf.keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2), padding='SAME')
output2 = layer_pool(output)
print(output2.shape)

feature_maps = np.swapaxes(output2, 0, 3)
fig, ax = plt.subplots(1, 5)
for i, feature_map in enumerate(feature_maps):
    ax[i].imshow(feature_map.reshape(7, 7), cmap='gray')
plt.tight_layout()
plt.show()
_____no_output_____
Apache-2.0
_notebooks/2020-10-07-01-CNN-Basic.ipynb
AntonovMikhail/chans_jupyter
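To tie the two layers together, here is the shape bookkeeping for the pipeline above (with `SAME` padding and stride 2, spatial sizes shrink by ceiling division):

```python
import math

h = 28
h = math.ceil(h / 2)   # Conv2D, stride (2, 2), SAME padding -> 14
print(h)
h = math.ceil(h / 2)   # MaxPool2D, stride (2, 2), SAME padding -> 7
print(h)
```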
0.0. IMPORTS
import math

import numpy as np
import pandas as pd
import inflection
import seaborn as sns
from matplotlib import pyplot as plt
from IPython.core.display import HTML
_____no_output_____
MIT
store_sales_prediction_1.ipynb
mariosotper/Predict-Time--Series-Test
0.1. Helper Functions
def jupyter_settings():
    %matplotlib inline
    %pylab inline
    plt.style.use('bmh')
    plt.rcParams['figure.figsize'] = [25, 12]
    plt.rcParams['font.size'] = 24
    display(HTML('<style>.container { width:100% !important; }</style>'))
    pd.options.display.max_columns = None
    pd.options.display.max_rows = None
    pd.set_option('display.expand_frame_repr', False)
    sns.set()

jupyter_settings()
Populating the interactive namespace from numpy and matplotlib
MIT
store_sales_prediction_1.ipynb
mariosotper/Predict-Time--Series-Test
0.2. Loading data
df_sales_raw = pd.read_csv('data/train.csv', low_memory=False)
df_store_raw = pd.read_csv('data/store.csv', low_memory=False)

# merge
df_raw = pd.merge(df_sales_raw, df_store_raw, how='left', on='Store')
_____no_output_____
MIT
store_sales_prediction_1.ipynb
mariosotper/Predict-Time--Series-Test
1.0. DATA DESCRIPTION
df1 = df_raw.copy()
_____no_output_____
MIT
store_sales_prediction_1.ipynb
mariosotper/Predict-Time--Series-Test
1.1. Rename Columns
cols_old = ['Store', 'DayOfWeek', 'Date', 'Sales', 'Customers', 'Open', 'Promo',
            'StateHoliday', 'SchoolHoliday', 'StoreType', 'Assortment',
            'CompetitionDistance', 'CompetitionOpenSinceMonth',
            'CompetitionOpenSinceYear', 'Promo2', 'Promo2SinceWeek',
            'Promo2SinceYear', 'PromoInterval']

snakecase = lambda x: inflection.underscore(x)
cols_new = list(map(snakecase, cols_old))

# rename
df1.columns = cols_new
_____no_output_____
MIT
store_sales_prediction_1.ipynb
mariosotper/Predict-Time--Series-Test
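For reference, this is what the `inflection.underscore` conversion above does to a single name:

```python
# CamelCase -> snake_case, as applied to every column name above.
import inflection

print(inflection.underscore('CompetitionOpenSinceMonth'))
# -> competition_open_since_month
```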
1.2. Data Dimensions
print('Number of Rows: {}'.format(df1.shape[0]))
print('Number of Cols: {}'.format(df1.shape[1]))
Number of Rows: 1017209 Number of Cols: 18
MIT
store_sales_prediction_1.ipynb
mariosotper/Predict-Time--Series-Test
1.3. Data Types
df1['date'] = pd.to_datetime(df1['date'])
df1.dtypes
_____no_output_____
MIT
store_sales_prediction_1.ipynb
mariosotper/Predict-Time--Series-Test
1.4. Check NA
df1.isna().sum()
_____no_output_____
MIT
store_sales_prediction_1.ipynb
mariosotper/Predict-Time--Series-Test
1.5. Fillout NA
df1.sample()

# competition_distance: treat missing as "very far away"
df1['competition_distance'] = df1['competition_distance'].apply(
    lambda x: 200000.0 if math.isnan(x) else x)

# competition_open_since_month
df1['competition_open_since_month'] = df1.apply(
    lambda x: x['date'].month if math.isnan(x['competition_open_since_month'])
    else x['competition_open_since_month'], axis=1)

# competition_open_since_year
df1['competition_open_since_year'] = df1.apply(
    lambda x: x['date'].year if math.isnan(x['competition_open_since_year'])
    else x['competition_open_since_year'], axis=1)

# promo2_since_week
df1['promo2_since_week'] = df1.apply(
    lambda x: x['date'].week if math.isnan(x['promo2_since_week'])
    else x['promo2_since_week'], axis=1)

# promo2_since_year
df1['promo2_since_year'] = df1.apply(
    lambda x: x['date'].year if math.isnan(x['promo2_since_year'])
    else x['promo2_since_year'], axis=1)

# promo_interval: month abbreviations as they appear in the PromoInterval column
# (fixed 'Fev' -> 'Feb' and 'Sep' -> 'Sept' so lookups match the raw strings)
month_map = {1: 'Jan', 2: 'Feb', 3: 'Mar', 4: 'Apr', 5: 'May', 6: 'Jun',
             7: 'Jul', 8: 'Aug', 9: 'Sept', 10: 'Oct', 11: 'Nov', 12: 'Dec'}

df1['promo_interval'].fillna(0, inplace=True)
df1['month_map'] = df1['date'].dt.month.map(month_map)
df1['is_promo'] = df1[['promo_interval', 'month_map']].apply(
    lambda x: 0 if x['promo_interval'] == 0
    else 1 if x['month_map'] in x['promo_interval'].split(',') else 0, axis=1)

df1.isna().sum()
_____no_output_____
MIT
store_sales_prediction_1.ipynb
mariosotper/Predict-Time--Series-Test
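A quick sanity check of the `is_promo` rule above, on two hypothetical rows:

```python
# Hypothetical rows: is_promo should be 1 only when the sale month's
# abbreviation appears in the store's promo_interval string.
import pandas as pd

toy = pd.DataFrame({'promo_interval': ['Jan,Apr,Jul,Oct', 0],
                    'month_map': ['Apr', 'Apr']})
print(toy.apply(lambda x: 0 if x['promo_interval'] == 0
                else 1 if x['month_map'] in x['promo_interval'].split(',')
                else 0, axis=1))   # -> 1, 0
```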
1.6. Change Data Types
# competition
df1['competition_open_since_month'] = df1['competition_open_since_month'].astype(int)
df1['competition_open_since_year'] = df1['competition_open_since_year'].astype(int)

# promo2
df1['promo2_since_week'] = df1['promo2_since_week'].astype(int)
df1['promo2_since_year'] = df1['promo2_since_year'].astype(int)
_____no_output_____
MIT
store_sales_prediction_1.ipynb
mariosotper/Predict-Time--Series-Test
1.7. Descriptive Statistics
num_attributes = df1.select_dtypes(include=['int64', 'float64'])
cat_attributes = df1.select_dtypes(exclude=['int64', 'float64', 'datetime64[ns]'])
_____no_output_____
MIT
store_sales_prediction_1.ipynb
mariosotper/Predict-Time--Series-Test
1.7.1. Numerical Attributes
# Central tendency - mean, median
ct1 = pd.DataFrame(num_attributes.apply(np.mean)).T
ct2 = pd.DataFrame(num_attributes.apply(np.median)).T

# Dispersion - std, min, max, range, skew, kurtosis
d1 = pd.DataFrame(num_attributes.apply(np.std)).T
d2 = pd.DataFrame(num_attributes.apply(min)).T
d3 = pd.DataFrame(num_attributes.apply(max)).T
d4 = pd.DataFrame(num_attributes.apply(lambda x: x.max() - x.min())).T
d5 = pd.DataFrame(num_attributes.apply(lambda x: x.skew())).T
d6 = pd.DataFrame(num_attributes.apply(lambda x: x.kurtosis())).T

# Concatenate
m = pd.concat([d2, d3, d4, ct1, ct2, d1, d5, d6]).T.reset_index()
m.columns = ['attributes', 'min', 'max', 'range', 'mean', 'median', 'std', 'skew', 'kurtosis']
m

sns.distplot(df1['competition_distance'], kde=False)
_____no_output_____
MIT
store_sales_prediction_1.ipynb
mariosotper/Predict-Time--Series-Test
1.7.2. Categorical Attributes
cat_attributes.apply(lambda x: x.unique().shape[0])

aux = df1[(df1['state_holiday'] != '0') & (df1['sales'] > 0)]

plt.subplot(1, 3, 1)
sns.boxplot(x='state_holiday', y='sales', data=aux)

plt.subplot(1, 3, 2)
sns.boxplot(x='store_type', y='sales', data=aux)

plt.subplot(1, 3, 3)
sns.boxplot(x='assortment', y='sales', data=aux)
_____no_output_____
MIT
store_sales_prediction_1.ipynb
mariosotper/Predict-Time--Series-Test
Chapter 8: Modeling Continuous Variables
import swat

conn = swat.CAS('server-name.mycompany.com', 5570, 'username', 'password')

cars = conn.upload_file('https://raw.githubusercontent.com/sassoftware/sas-viya-programming/master/data/cars.csv',
                        casout=dict(name='cars', replace=True))
cars.tableinfo()
cars.columninfo()
_____no_output_____
Apache-2.0
Chapter 8 - Modeling Continuous Variables.ipynb
Suraj-617/sas-viya-python
Linear Regressions
conn.loadactionset('regression')
conn.help(actionset='regression')
NOTE: Added action set 'regression'. NOTE: Information for action set 'regression': NOTE: regression NOTE: glm - Fits linear regression models using the method of least squares NOTE: genmod - Fits generalized linear regression models NOTE: logistic - Fits logistic regression models
Apache-2.0
Chapter 8 - Modeling Continuous Variables.ipynb
Suraj-617/sas-viya-python
Simple linear regression
cars.glm(
    target='MSRP',
    inputs=['MPG_City']
)
_____no_output_____
Apache-2.0
Chapter 8 - Modeling Continuous Variables.ipynb
Suraj-617/sas-viya-python
Another way to define a model
linear1 = cars.Glm()
linear1.target = 'MSRP'
linear1.inputs = ['MPG_City']
linear1()

linear1.display.names = ['ParameterEstimates']
linear1()
_____no_output_____
Apache-2.0
Chapter 8 - Modeling Continuous Variables.ipynb
Suraj-617/sas-viya-python
Scoring
del linear1.display.names

result1 = conn.CASTable('MSRPPrediction')
result1.replace = True

linear1.output.casout = result1
linear1.output.copyVars = 'ALL'
linear1()

result1[['pred']].summary()
_____no_output_____
Apache-2.0
Chapter 8 - Modeling Continuous Variables.ipynb
Suraj-617/sas-viya-python
Output more information in the score table
result2 = conn.CASTable('MSRPPrediction3')
result2.replace = True

linear1.output.casout = result2
linear1.output.pred = 'Predicted_MSRP'
linear1.output.resid = 'Residual_MSRP'   # fixed typo: was 'Presidual_MSRP'
linear1.output.lcl = 'LCL_MSRP'
linear1.output.ucl = 'UCL_MSRP'
linear1()
_____no_output_____
Apache-2.0
Chapter 8 - Modeling Continuous Variables.ipynb
Suraj-617/sas-viya-python
Use a scatter plot of predicted values and residuals to check the model fit
from bokeh.charts import Scatter, output_file, output_notebook, show

out1 = result2.to_frame()
p = Scatter(out1, x='Predicted_MSRP', y='Residual_MSRP', color='Origin', marker='Origin')

output_notebook()
# output_file('scatter.html')
show(p)
_____no_output_____
Apache-2.0
Chapter 8 - Modeling Continuous Variables.ipynb
Suraj-617/sas-viya-python
Investigate which observations have negative predicted MSRP values
result2[['Predicted_MSRP', 'MSRP', 'MPG_City', 'Make', 'Model']].query('Predicted_MSRP < 0').to_frame()

p = Scatter(out1, x='MPG_City', y='MSRP', color='Origin', marker='Origin')
output_notebook()
# output_file('scatter.html')
show(p)
_____no_output_____
Apache-2.0
Chapter 8 - Modeling Continuous Variables.ipynb
Suraj-617/sas-viya-python
Remove outliers
cars.where = 'MSRP < 100000 and MPG_City < 40'

result2 = conn.CASTable('cas.MSRPPrediction2')
result2.replace = True

linear2 = cars.query('MSRP < 100000 and MPG_City < 40').glm
linear2.target = 'MSRP'
linear2.inputs = ['MPG_City']
linear2.output.casout = result2
linear2.output.copyVars = 'ALL'
linear2.output.pred = 'Predicted_MSRP'
linear2.output.resid = 'Residual_MSRP'
linear2.output.lcl = 'LCL_MSRP'
linear2.output.ucl = 'UCL_MSRP'
linear2()
_____no_output_____
Apache-2.0
Chapter 8 - Modeling Continuous Variables.ipynb
Suraj-617/sas-viya-python
Check the model fitting again
out2 = result2.to_frame()
p = Scatter(out2, x='Predicted_MSRP', y='Residual_MSRP', color='Origin', marker='Origin')
output_notebook()
# output_file('scatter.html')
show(p)
_____no_output_____
Apache-2.0
Chapter 8 - Modeling Continuous Variables.ipynb
Suraj-617/sas-viya-python
Adding more predictors
nomList = ['Origin', 'Type', 'DriveTrain']
contList = ['MPG_City', 'Weight', 'Length']

linear3 = conn.CASTable('cars').Glm()
linear3.target = 'MSRP'
linear3.inputs = nomList + contList
linear3.nominals = nomList
linear3.display.names = ['FitStatistics', 'ParameterEstimates']
linear3()
_____no_output_____
Apache-2.0
Chapter 8 - Modeling Continuous Variables.ipynb
Suraj-617/sas-viya-python
Groupby regression
cars = conn.CASTable('cars')
out = cars.groupby('Origin')[['MSRP']].summary().concat_bygroups()
out['Summary'][['Column', 'Mean', 'Var', 'Std']]

cars = conn.CASTable('cars')
cars.groupby = ['Origin']
cars.where = 'MSRP < 100000 and MPG_City < 40'

nomList = ['Type', 'DriveTrain']
contList = ['MPG_City', 'Weight', 'Length']

groupBYResult = conn.CASTable('MSRPPredictionGroupBy')

linear4 = cars.glm
linear4.target = 'MSRP'
linear4.inputs = nomList + contList
linear4.nominals = nomList
linear4.display.names = ['FitStatistics', 'ParameterEstimates']
linear4.output.casout = groupBYResult
linear4.output.copyVars = 'ALL'
linear4.output.pred = 'Predicted_MSRP'
linear4.output.resid = 'Residual_MSRP'
linear4.output.lcl = 'LCL_MSRP'
linear4.output.ucl = 'UCL_MSRP'
linear4()

out = groupBYResult.to_frame()
p = Scatter(out, x='Predicted_MSRP', y='Residual_MSRP', color='Origin', marker='Origin')
output_notebook()
# output_file('scatter.html')
show(p)
_____no_output_____
Apache-2.0
Chapter 8 - Modeling Continuous Variables.ipynb
Suraj-617/sas-viya-python
Extensions of Ordinary Linear Regression

Generalized Linear Models

Gamma Regression
cars = conn.CASTable('cars')

genmodModel1 = cars.Genmod()
genmodModel1.model.depvars = 'MSRP'
genmodModel1.model.effects = ['MPG_City']
genmodModel1.model.dist = 'gamma'
genmodModel1.model.link = 'log'
genmodModel1()
NOTE: Convergence criterion (GCONV=1E-8) satisfied.
Apache-2.0
Chapter 8 - Modeling Continuous Variables.ipynb
Suraj-617/sas-viya-python
Multinomial Regression
genmodModel1.model.depvars = 'Cylinders'
genmodModel1.model.dist = 'multinomial'
genmodModel1.model.link = 'logit'
genmodModel1.model.effects = ['MPG_City']
genmodModel1.display.names = ['ModelInfo', 'ParameterEstimates']
genmodModel1()
NOTE: Convergence criterion (GCONV=1E-8) satisfied.
Apache-2.0
Chapter 8 - Modeling Continuous Variables.ipynb
Suraj-617/sas-viya-python
Score the input table
genmodResult = conn.CASTable('CylinderPredicted', replace=True)
genmodModel1.output.casout = genmodResult
genmodModel1.output.copyVars = 'ALL'
genmodModel1.output.pred = 'Prob_Cylinders'
genmodModel1()

genmodResult[['Prob_Cylinders', '_level_', 'Cylinders', 'MPG_City']].head(24)
NOTE: Convergence criterion (GCONV=1E-8) satisfied.
Apache-2.0
Chapter 8 - Modeling Continuous Variables.ipynb
Suraj-617/sas-viya-python
Regression Trees
conn.loadactionset('decisiontree')
conn.help(actionset='decisiontree')

cars = conn.CASTable('cars')

output1 = conn.CASTable('treeModel1')
output1.replace = True

tree1 = cars.dtreetrain
tree1.target = 'MSRP'
tree1.inputs = ['MPG_City']
tree1.casout = output1
tree1.maxlevel = 2
tree1()

output1[['_NodeID_', '_Parent_', '_Mean_', '_NodeName_', '_PBLower0_', '_PBUpper0_']].fetch()

conn.close()
_____no_output_____
Apache-2.0
Chapter 8 - Modeling Continuous Variables.ipynb
Suraj-617/sas-viya-python
Huggingface Sagemaker-sdk - Distributed Training Demo for `TensorFlow`

Distributed Data Parallelism with `transformers` and `tensorflow`

1. [Introduction](Introduction)
2. [Development Environment and Permissions](Development-Environment-and-Permissions)
    1. [Installation](Installation)
    2. [Development environment](Development-environment)
    3. [Permissions](Permissions)
3. [Processing](Preprocessing)
    1. [Tokenization](Tokenization)
    2. [Uploading data to sagemaker_session_bucket](Uploading-data-to-sagemaker_session_bucket)
4. [Fine-tuning & starting Sagemaker Training Job](Fine-tuning-\&-starting-Sagemaker-Training-Job)
    1. [Creating an Estimator and start a training job](Creating-an-Estimator-and-start-a-training-job)
    2. [Estimator Parameters](Estimator-Parameters)
    3. [Download fine-tuned model from s3](Download-fine-tuned-model-from-s3)
    4. [Attach to old training job to an estimator](Attach-to-old-training-job-to-an-estimator)
5. [_Coming soon_: Push model to the Hugging Face hub](Push-model-to-the-Hugging-Face-hub)

Introduction

Welcome to our distributed end-to-end binary text-classification example. In this demo, we use the Hugging Face `transformers` and `datasets` libraries together with a custom Amazon sagemaker-sdk extension to fine-tune a pre-trained transformer for binary text classification. In particular, the pre-trained model is fine-tuned on the `imdb` dataset. To speed up training, we use the SageMaker distributed data parallel library to run the training across multiple GPUs. To get started, we need to set up the environment with a few prerequisite steps for permissions, configurations, and so on.

![image.png](attachment:image.png)

_**NOTE: You can run this demo in Sagemaker Studio, your local machine or Sagemaker Notebook Instances**_

Development Environment and Permissions

Installation

_*Note:* we only install the required libraries from Hugging Face and AWS. You also need PyTorch or TensorFlow if you don't have it installed._
!pip install "sagemaker>=2.48.0" --upgrade
_____no_output_____
Apache-2.0
sagemaker/07_tensorflow_distributed_training_data_parallelism/sagemaker-notebook.ipynb
Shamik-07/notebooks
Development environment

**Upgrade ipywidgets for the `datasets` library and restart the kernel; only needed when preprocessing is done in the notebook.**
%%capture
import IPython
!conda install -c conda-forge ipywidgets -y
IPython.Application.instance().kernel.do_shutdown(True)  # has to restart kernel so changes are used

import sagemaker.huggingface
_____no_output_____
Apache-2.0
sagemaker/07_tensorflow_distributed_training_data_parallelism/sagemaker-notebook.ipynb
Shamik-07/notebooks
Permissions

_If you are going to use SageMaker in a local environment, you need access to an IAM Role with the required permissions for SageMaker. You can find more about it [here](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html)._
import sagemaker

sess = sagemaker.Session()
# sagemaker session bucket -> used for uploading data, models and logs
# sagemaker will automatically create this bucket if it doesn't exist
sagemaker_session_bucket = None
if sagemaker_session_bucket is None and sess is not None:
    # set to default bucket if a bucket name is not given
    sagemaker_session_bucket = sess.default_bucket()

role = sagemaker.get_execution_role()
sess = sagemaker.Session(default_bucket=sagemaker_session_bucket)

print(f"sagemaker role arn: {role}")
print(f"sagemaker bucket: {sess.default_bucket()}")
print(f"sagemaker session region: {sess.boto_region_name}")
_____no_output_____
Apache-2.0
sagemaker/07_tensorflow_distributed_training_data_parallelism/sagemaker-notebook.ipynb
Shamik-07/notebooks
Preprocessing

In this example the preprocessing is done in `train.py` when the script executes. You could also move the preprocessing outside of the script, upload the data to S3, and pass it in.

Fine-tuning & starting Sagemaker Training Job

In order to create a SageMaker training job we need a `HuggingFace` Estimator. The Estimator handles end-to-end Amazon SageMaker training and deployment tasks. In the Estimator we define which fine-tuning script should be used as `entry_point`, which `instance_type` should be used, which `hyperparameters` are passed in, and so on.

```python
huggingface_estimator = HuggingFace(
    entry_point='train.py',
    source_dir='./scripts',
    base_job_name='huggingface-sdk-extension',
    instance_type='ml.p3.2xlarge',
    instance_count=1,
    transformers_version='4.4',
    pytorch_version='1.6',
    py_version='py37',
    role=role,
    hyperparameters={'epochs': 1,
                     'train_batch_size': 32,
                     'model_name': 'distilbert-base-uncased'})
```

When we create a SageMaker training job, SageMaker takes care of starting and managing all the required EC2 instances for us with the `huggingface` container, uploads the provided fine-tuning script `train.py`, and downloads the data from our `sagemaker_session_bucket` into the container at `/opt/ml/input/data`. Then it starts the training job by running:

```python
/opt/conda/bin/python train.py --epochs 1 --model_name distilbert-base-uncased --train_batch_size 32
```

The `hyperparameters` you define in the `HuggingFace` estimator are passed in as named arguments. SageMaker provides useful properties about the training environment through various environment variables, including the following:

* `SM_MODEL_DIR`: A string that represents the path where the training job writes the model artifacts to. After training, artifacts in this directory are uploaded to S3 for model hosting.
* `SM_NUM_GPUS`: An integer representing the number of GPUs available to the host.
* `SM_CHANNEL_XXXX`: A string that represents the path to the directory that contains the input data for the specified channel. For example, if you specify two input channels in the HuggingFace estimator's fit call, named `train` and `test`, the environment variables `SM_CHANNEL_TRAIN` and `SM_CHANNEL_TEST` are set.

To run your training job locally you can define `instance_type='local'` or `instance_type='local_gpu'` for GPU usage. _Note: this does not work within SageMaker Studio._
!pygmentize ./scripts/train.py
import argparse import logging import os import sys import tensorflow as tf from datasets import load_dataset from tqdm import tqdm from transformers import AutoTokenizer, TFAutoModelForSequenceClassification from transformers.file_utils import is_sagemaker_distributed_available if os.environ.get("SDP_ENABLED") or is_sagemaker_distributed_available(): SDP_ENABLED = True os.environ["SAGEMAKER_INSTANCE_TYPE"] = "p3dn.24xlarge" import smdistributed.dataparallel.tensorflow as sdp else: SDP_ENABLED = False def fit(model, loss, opt, train_dataset, epochs, train_batch_size, max_steps=None): pbar = tqdm(train_dataset) for i, batch in enumerate(pbar): with tf.GradientTape() as tape: inputs, targets = batch outputs = model(batch) loss_value = loss(targets, outputs.logits) if SDP_ENABLED: tape = sdp.DistributedGradientTape(tape, sparse_as_dense=True) grads = tape.gradient(loss_value, model.trainable_variables) opt.apply_gradients(zip(grads, model.trainable_variables)) pbar.set_description(f"Loss: {loss_value:.4f}") if SDP_ENABLED: if i == 0: sdp.broadcast_variables(model.variables, root_rank=0) sdp.broadcast_variables(opt.variables(), root_rank=0) first_batch = False if max_steps and i >= max_steps: break train_results = {"loss": loss_value.numpy()} return train_results def get_datasets(): # Load dataset train_dataset, test_dataset = load_dataset("imdb", split=["train", "test"]) # Preprocess train dataset train_dataset = train_dataset.map( lambda e: tokenizer(e["text"], truncation=True, padding="max_length"), batched=True ) train_dataset.set_format(type="tensorflow", columns=["input_ids", "attention_mask", "label"]) train_features = { x: train_dataset[x].to_tensor(default_value=0, shape=[None, tokenizer.model_max_length]) for x in ["input_ids", "attention_mask"] } tf_train_dataset = tf.data.Dataset.from_tensor_slices((train_features, train_dataset["label"])) # Preprocess test dataset test_dataset = test_dataset.map( lambda e: tokenizer(e["text"], truncation=True, padding="max_length"), batched=True ) test_dataset.set_format(type="tensorflow", columns=["input_ids", "attention_mask", "label"]) test_features = { x: test_dataset[x].to_tensor(default_value=0, shape=[None, tokenizer.model_max_length]) for x in ["input_ids", "attention_mask"] } tf_test_dataset = tf.data.Dataset.from_tensor_slices((test_features, test_dataset["label"])) if SDP_ENABLED: tf_train_dataset = tf_train_dataset.shard(sdp.size(), sdp.rank()) tf_test_dataset = tf_test_dataset.shard(sdp.size(), sdp.rank()) tf_train_dataset = tf_train_dataset.batch(args.train_batch_size, drop_remainder=True) tf_test_dataset = tf_test_dataset.batch(args.eval_batch_size, drop_remainder=True) return tf_train_dataset, tf_test_dataset if __name__ == "__main__": parser = argparse.ArgumentParser() # Hyperparameters sent by the client are passed as command-line arguments to the script. 
parser.add_argument("--epochs", type=int, default=3) parser.add_argument("--train-batch-size", type=int, default=16) parser.add_argument("--eval-batch-size", type=int, default=8) parser.add_argument("--model_name", type=str) parser.add_argument("--learning_rate", type=str, default=5e-5) parser.add_argument("--do_train", type=bool, default=True) parser.add_argument("--do_eval", type=bool, default=True) # Data, model, and output directories parser.add_argument("--output_data_dir", type=str, default=os.environ["SM_OUTPUT_DATA_DIR"]) parser.add_argument("--model_dir", type=str, default=os.environ["SM_MODEL_DIR"]) parser.add_argument("--n_gpus", type=str, default=os.environ["SM_NUM_GPUS"]) args, _ = parser.parse_known_args() # Set up logging logger = logging.getLogger(__name__) logging.basicConfig( level=logging.getLevelName("INFO"), handlers=[logging.StreamHandler(sys.stdout)], format="%(asctime)s - %(name)s - %(levelname)s - %(message)s", ) if SDP_ENABLED: sdp.init() gpus = tf.config.experimental.list_physical_devices("GPU") for gpu in gpus: tf.config.experimental.set_memory_growth(gpu, True) if gpus: tf.config.experimental.set_visible_devices(gpus[sdp.local_rank()], "GPU") # Load model and tokenizer model = TFAutoModelForSequenceClassification.from_pretrained(args.model_name) tokenizer = AutoTokenizer.from_pretrained(args.model_name) # get datasets tf_train_dataset, tf_test_dataset = get_datasets() # fine optimizer and loss optimizer = tf.keras.optimizers.Adam(learning_rate=args.learning_rate) loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) metrics = [tf.keras.metrics.SparseCategoricalAccuracy()] model.compile(optimizer=optimizer, loss=loss, metrics=metrics) # Training if args.do_train: # train_results = model.fit(tf_train_dataset, epochs=args.epochs, batch_size=args.train_batch_size) train_results = fit( model, loss, optimizer, tf_train_dataset, args.epochs, args.train_batch_size, max_steps=None ) logger.info("*** Train ***") output_eval_file = os.path.join(args.output_data_dir, "train_results.txt") if not SDP_ENABLED or sdp.rank() == 0: with open(output_eval_file, "w") as writer: logger.info("***** Train results *****") logger.info(train_results) for key, value in train_results.items(): logger.info(" %s = %s", key, value) writer.write("%s = %s\n" % (key, value)) # Evaluation if args.do_eval and (not SDP_ENABLED or sdp.rank() == 0): result = model.evaluate(tf_test_dataset, batch_size=args.eval_batch_size, return_dict=True) logger.info("*** Evaluate ***") output_eval_file = os.path.join(args.output_data_dir, "eval_results.txt") with open(output_eval_file, "w") as writer: logger.info("***** Eval results *****") logger.info(result) for key, value in result.items(): logger.info(" %s = %s", key, value) writer.write("%s = %s\n" % (key, value)) # Save result if SDP_ENABLED: if sdp.rank() == 0: model.save_pretrained(args.model_dir) tokenizer.save_pretrained(args.model_dir) else: model.save_pretrained(args.model_dir) tokenizer.save_pretrained(args.model_dir)
Apache-2.0
sagemaker/07_tensorflow_distributed_training_data_parallelism/sagemaker-notebook.ipynb
Shamik-07/notebooks
Creating an Estimator and start a training job
from sagemaker.huggingface import HuggingFace

# hyperparameters, which are passed into the training job
hyperparameters = {
    'epochs': 1,
    'train_batch_size': 16,
    'model_name': 'distilbert-base-uncased',
}

# configuration for running training on smdistributed Data Parallel
distribution = {'smdistributed': {'dataparallel': {'enabled': True}}}

# instance configurations
instance_type = 'ml.p3dn.24xlarge'
instance_count = 2
volume_size = 200

huggingface_estimator = HuggingFace(
    entry_point='train.py',
    source_dir='./scripts',
    instance_type=instance_type,
    instance_count=instance_count,
    role=role,
    transformers_version='4.6',
    tensorflow_version='2.4',
    py_version='py37',
    distribution=distribution,
    hyperparameters=hyperparameters,
    debugger_hook_config=False,  # currently needed
)

huggingface_estimator.fit()
_____no_output_____
Apache-2.0
sagemaker/07_tensorflow_distributed_training_data_parallelism/sagemaker-notebook.ipynb
Shamik-07/notebooks
Deploying the endpoint

To deploy our endpoint, we call `deploy()` on our HuggingFace estimator object, passing in our desired number of instances and instance type.
predictor = huggingface_estimator.deploy(1,"ml.g4dn.xlarge")
_____no_output_____
Apache-2.0
sagemaker/07_tensorflow_distributed_training_data_parallelism/sagemaker-notebook.ipynb
Shamik-07/notebooks
Then, we use the returned predictor object to call the endpoint.
sentiment_input = {"inputs": "I love using the new Inference DLC."}

predictor.predict(sentiment_input)
_____no_output_____
Apache-2.0
sagemaker/07_tensorflow_distributed_training_data_parallelism/sagemaker-notebook.ipynb
Shamik-07/notebooks
Finally, we delete the endpoint again.
predictor.delete_endpoint()
_____no_output_____
Apache-2.0
sagemaker/07_tensorflow_distributed_training_data_parallelism/sagemaker-notebook.ipynb
Shamik-07/notebooks
Extras

Estimator Parameters
# container image used for training job
print(f"container image used for training job: \n{huggingface_estimator.image_uri}\n")

# s3 uri where the trained model is located
print(f"s3 uri where the trained model is located: \n{huggingface_estimator.model_data}\n")

# latest training job name for this estimator
print(f"latest training job name for this estimator: \n{huggingface_estimator.latest_training_job.name}\n")

# access the logs of the training job
huggingface_estimator.sagemaker_session.logs_for_job(huggingface_estimator.latest_training_job.name)
_____no_output_____
Apache-2.0
sagemaker/07_tensorflow_distributed_training_data_parallelism/sagemaker-notebook.ipynb
Shamik-07/notebooks
Attach an old training job to an estimator

In SageMaker you can attach an old training job to an estimator to continue training, retrieve results, etc.
from sagemaker.estimator import Estimator

# job which is going to be attached to the estimator
old_training_job_name = ''

# attach old training job
huggingface_estimator_loaded = Estimator.attach(old_training_job_name)

# get model output s3 from training job
huggingface_estimator_loaded.model_data
_____no_output_____
Apache-2.0
sagemaker/07_tensorflow_distributed_training_data_parallelism/sagemaker-notebook.ipynb
Shamik-07/notebooks
https://github.com/facebook/fb.resnet.torch/issues/180
https://github.com/bamos/densenet.pytorch/blob/master/compute-cifar10-mean.py
import torch  # assumed imported earlier in the notebook, along with the datasets

print(f'Number of training examples: {len(train_data)}')
print(f'Number of validation examples: {len(valid_data)}')
print(f'Number of testing examples: {len(test_data)}')

BATCH_SIZE = 64

train_iterator = torch.utils.data.DataLoader(train_data, shuffle=True, batch_size=BATCH_SIZE)
valid_iterator = torch.utils.data.DataLoader(valid_data, batch_size=BATCH_SIZE)
test_iterator = torch.utils.data.DataLoader(test_data, batch_size=BATCH_SIZE)
_____no_output_____
MIT
misc/6 - ResNet - Dogs vs Cats.ipynb
oney/pytorch-image-classification
https://discuss.pytorch.org/t/why-does-the-resnet-model-given-by-pytorch-omit-biases-from-the-convolutional-layer/10990/4
https://github.com/kuangliu/pytorch-cifar/blob/master/models/resnet.py
import os

import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.models as models

device = torch.device('cuda')

model = models.resnet18(pretrained=True).to(device)
print(model)

# freeze the pretrained backbone
for param in model.parameters():
    param.requires_grad = False

print(model.fc)

# replace the final layer with a new 2-class head (only its parameters train)
model.fc = nn.Linear(in_features=512, out_features=2).to(device)

optimizer = optim.Adam(model.parameters())
criterion = nn.CrossEntropyLoss()

def calculate_accuracy(fx, y):
    preds = fx.max(1, keepdim=True)[1]
    correct = preds.eq(y.view_as(preds)).sum()
    acc = correct.float() / preds.shape[0]
    return acc

def train(model, device, iterator, optimizer, criterion):
    epoch_loss = 0
    epoch_acc = 0
    model.train()
    for (x, y) in iterator:
        x = x.to(device)
        y = y.to(device)
        optimizer.zero_grad()
        fx = model(x)
        loss = criterion(fx, y)
        acc = calculate_accuracy(fx, y)
        loss.backward()
        optimizer.step()
        epoch_loss += loss.item()
        epoch_acc += acc.item()
    return epoch_loss / len(iterator), epoch_acc / len(iterator)

def evaluate(model, device, iterator, criterion):
    epoch_loss = 0
    epoch_acc = 0
    model.eval()
    with torch.no_grad():
        for (x, y) in iterator:
            x = x.to(device)
            y = y.to(device)
            fx = model(x)
            loss = criterion(fx, y)
            acc = calculate_accuracy(fx, y)
            epoch_loss += loss.item()
            epoch_acc += acc.item()
    return epoch_loss / len(iterator), epoch_acc / len(iterator)

EPOCHS = 10
SAVE_DIR = 'models'
MODEL_SAVE_PATH = os.path.join(SAVE_DIR, 'resnet18-dogs-vs-cats.pt')

best_valid_loss = float('inf')

if not os.path.isdir(f'{SAVE_DIR}'):
    os.makedirs(f'{SAVE_DIR}')

for epoch in range(EPOCHS):
    train_loss, train_acc = train(model, device, train_iterator, optimizer, criterion)
    valid_loss, valid_acc = evaluate(model, device, valid_iterator, criterion)
    if valid_loss < best_valid_loss:
        best_valid_loss = valid_loss
        torch.save(model.state_dict(), MODEL_SAVE_PATH)
    print(f'| Epoch: {epoch+1:02} | Train Loss: {train_loss:.3f} | Train Acc: {train_acc*100:05.2f}% | Val. Loss: {valid_loss:.3f} | Val. Acc: {valid_acc*100:05.2f}% |')

model.load_state_dict(torch.load(MODEL_SAVE_PATH))
test_loss, test_acc = evaluate(model, device, valid_iterator, criterion)
print(f'| Test Loss: {test_loss:.3f} | Test Acc: {test_acc*100:05.2f}% |')
| Test Loss: 0.052 | Test Acc: 97.93% |
MIT
misc/6 - ResNet - Dogs vs Cats.ipynb
oney/pytorch-image-classification
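As a quick check on the freezing above, only the replaced `fc` layer should remain trainable:

```python
# Count trainable parameters; with the backbone frozen this is just the new
# fc head: 512 * 2 weights + 2 biases = 1026.
n_trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f'trainable parameters: {n_trainable}')
```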
Inference code for running on the Kaggle server
!pip install ../input/pretrainedmodels/pretrainedmodels-0.7.4/pretrainedmodels-0.7.4/ > /dev/null  # no output

import gc
import os
import random
import sys
import six
import math
from pathlib import Path
from tqdm import tqdm_notebook as tqdm
from IPython.core.display import display, HTML
from typing import List

import plotly.offline as py
import plotly.graph_objs as go
import plotly.express as px
import plotly.figure_factory as ff
from plotly import tools, subplots
py.init_notebook_mode(connected=True)

import numpy
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

import albumentations as A
import cv2
from sklearn import preprocessing
from sklearn.model_selection import KFold
from skimage.transform import AffineTransform, warp

import torch
import torch.nn.functional as F
from torch import nn
from torch.nn import init, Sequential
from torch.nn.parameter import Parameter
from torch.utils.data.dataset import Dataset
from torch.utils.data.dataloader import DataLoader

import pretrainedmodels

# --- setup ---
pd.set_option('max_columns', 50)

debug = False
submission = True
batch_size = 32
device = 'cuda:0'
out = '.'
# load_model_path = '/kaggle/input/pytorch-mixup1/model_097889.pt'
load_model_path = '.'
image_size = 128
threshold = 40.  # 20
model_name = 'se_resnext50_32x4d'

datadir = Path('/kaggle/input/bengaliai-cv19')
featherdir = Path('/kaggle/input/bengaliaicv19feather')
outdir = Path('.')

# Read in the data CSV files
# train = pd.read_csv(datadir/'train.csv')
# test = pd.read_csv(datadir/'test.csv')
# sample_submission = pd.read_csv(datadir/'sample_submission.csv')
# class_map = pd.read_csv(datadir/'class_map.csv')
_____no_output_____
MIT
Bengali.Ai classification challenge/pytorch-predict.ipynb
yoviny/Kaggle-Competitions
Dataset
""" Referenced `chainer.dataset.DatasetMixin` to work with pytorch Dataset. """ class DatasetMixin(Dataset): def __init__(self, transform=None): self.transform = transform def __getitem__(self, index): """Returns an example or a sequence of examples.""" if torch.is_tensor(index): index = index.tolist() if isinstance(index, slice): current, stop, step = index.indices(len(self)) return [self.get_example_wrapper(i) for i in six.moves.range(current, stop, step)] elif isinstance(index, list) or isinstance(index, numpy.ndarray): return [self.get_example_wrapper(i) for i in index] else: return self.get_example_wrapper(index) def __len__(self): """Returns the number of data points.""" raise NotImplementedError def get_example_wrapper(self, i): """Wrapper of `get_example`, to apply `transform` if necessary""" example = self.get_example(i) if self.transform: example = self.transform(example) return example def get_example(self, i): """Returns the i-th example. Implementations should override it. It should raise :class:`IndexError` if the index is invalid. Args: i (int): The index of the example. Returns: The i-th example. """ raise NotImplementedError class BengaliAIDataset(DatasetMixin): def __init__(self, images, labels=None, transform=None, indices=None): super(BengaliAIDataset, self).__init__(transform=transform) self.images = images self.labels = labels if indices is None: indices = np.arange(len(images)) self.indices = indices self.train = labels is not None def __len__(self): """return length of this dataset""" return len(self.indices) def get_example(self, i): """Return i-th data""" i = self.indices[i] x = self.images[i] x = (255 - x).astype(np.float32) / 255. if self.train: y = self.labels[i] return x, y else: return x
_____no_output_____
MIT
Bengali.Ai classification challenge/pytorch-predict.ipynb
yoviny/Kaggle-Competitions
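The `DatasetMixin` above layers NumPy-style fancy indexing on top of a plain PyTorch `Dataset`, so a `BengaliAIDataset` can be indexed with an int, a slice, or a list of indices. A minimal sketch of that behavior, using random arrays as hypothetical stand-ins for the real parquet image data:

```python
import numpy as np

# Hypothetical stand-in data: 10 grayscale 137x236 images with 3 labels each
images = np.random.randint(0, 255, (10, 137, 236)).astype(np.uint8)
labels = np.random.randint(0, 7, (10, 3))

ds = BengaliAIDataset(images, labels)

x, y = ds[0]            # int index -> a single (image, label) example
batch = ds[0:3]         # slice -> a list of three examples
subset = ds[[1, 4, 7]]  # list of indices -> a list of three examples
print(len(ds), x.shape, len(batch), len(subset))
```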
Data augmentation/processing
""" From https://www.kaggle.com/corochann/deep-learning-cnn-with-chainer-lb-0-99700 """ def affine_image(img): """ Args: img: (h, w) or (1, h, w) Returns: img: (h, w) """ # ch, h, w = img.shape # img = img / 255. if img.ndim == 3: img = img[0] # --- scale --- min_scale = 0.8 max_scale = 1.2 sx = np.random.uniform(min_scale, max_scale) sy = np.random.uniform(min_scale, max_scale) # --- rotation --- max_rot_angle = 7 rot_angle = np.random.uniform(-max_rot_angle, max_rot_angle) * np.pi / 180. # --- shear --- max_shear_angle = 10 shear_angle = np.random.uniform(-max_shear_angle, max_shear_angle) * np.pi / 180. # --- translation --- max_translation = 4 tx = np.random.randint(-max_translation, max_translation) ty = np.random.randint(-max_translation, max_translation) tform = AffineTransform(scale=(sx, sy), rotation=rot_angle, shear=shear_angle, translation=(tx, ty)) transformed_image = warp(img, tform) assert transformed_image.ndim == 2 return transformed_image def crop_char_image(image, threshold=40./255.): assert image.ndim == 2 is_black = image > threshold is_black_vertical = np.sum(is_black, axis=0) > 0 is_black_horizontal = np.sum(is_black, axis=1) > 0 left = np.argmax(is_black_horizontal) right = np.argmax(is_black_horizontal[::-1]) top = np.argmax(is_black_vertical) bottom = np.argmax(is_black_vertical[::-1]) height, width = image.shape cropped_image = image[left:height - right, top:width - bottom] return cropped_image def resize(image, size=(224, 224)): return cv2.resize(image, size, interpolation=cv2.INTER_AREA) def add_gaussian_noise(x, sigma): x += np.random.randn(*x.shape) * sigma x = np.clip(x, 0., 1.) return x def _evaluate_ratio(ratio): if ratio <= 0.: return False return np.random.uniform() < ratio def apply_aug(aug, image): return aug(image=image)['image'] class Transform: def __init__(self, affine=True, crop=False, size=(224, 224), normalize=True, train=True, threshold=40., sigma=-1., ssr_ratio=0.): self.affine = affine self.crop = crop self.size = size self.normalize = normalize self.train = train self.threshold = threshold / 255. self.sigma = sigma / 255. self.ssr_ratio = ssr_ratio def __call__(self, example): if self.train: x, y = example else: x = example # --- Augmentation --- if self.affine: x = affine_image(x) # --- Train/Test common preprocessing --- if self.crop: x = crop_char_image(x, threshold=self.threshold) if self.size is not None: x = resize(x, size=self.size) if self.sigma > 0.: x = add_gaussian_noise(x, sigma=self.sigma) if _evaluate_ratio(self.ssr_ratio): x = apply_aug(A.ShiftScaleRotate( shift_limit=0.0625, scale_limit=0.1, rotate_limit=15, p=1.0), x) if self.normalize: x = (x.astype(np.float32) - 0.0692) / 0.2051 if x.ndim == 2: x = x[None, :, :] x = x.astype(np.float32) if self.train: y = y.astype(np.int64) return x, y else: return x def residual_add(lhs, rhs): lhs_ch, rhs_ch = lhs.shape[1], rhs.shape[1] if lhs_ch < rhs_ch: out = lhs + rhs[:, :lhs_ch] elif lhs_ch > rhs_ch: out = torch.cat([lhs[:, :rhs_ch] + rhs, lhs[:, rhs_ch:]], dim=1) else: out = lhs + rhs return out class LazyLoadModule(nn.Module): """Lazy buffer/parameter loading using load_state_dict_pre_hook Define all buffer/parameter in `_lazy_buffer_keys`/`_lazy_parameter_keys` and save buffer with `register_buffer`/`register_parameter` method, which can be outside of __init__ method. Then this module can load any shape of Tensor during de-serializing. Note that default value of lazy buffer is torch.Tensor([]), while lazy parameter is None. 
""" _lazy_buffer_keys: List[str] = [] # It needs to be override to register lazy buffer _lazy_parameter_keys: List[str] = [] # It needs to be override to register lazy parameter def __init__(self): super(LazyLoadModule, self).__init__() for k in self._lazy_buffer_keys: self.register_buffer(k, torch.tensor([])) for k in self._lazy_parameter_keys: self.register_parameter(k, None) self._register_load_state_dict_pre_hook(self._hook) def _hook(self, state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs): for key in self._lazy_buffer_keys: self.register_buffer(key, state_dict[prefix + key]) for key in self._lazy_parameter_keys: self.register_parameter(key, Parameter(state_dict[prefix + key])) class LazyLinear(LazyLoadModule): """Linear module with lazy input inference `in_features` can be `None`, and it is determined at the first time of forward step dynamically. """ __constants__ = ['bias', 'in_features', 'out_features'] _lazy_parameter_keys = ['weight'] def __init__(self, in_features, out_features, bias=True): super(LazyLinear, self).__init__() self.in_features = in_features self.out_features = out_features if bias: self.bias = Parameter(torch.Tensor(out_features)) else: self.register_parameter('bias', None) if in_features is not None: self.weight = Parameter(torch.Tensor(out_features, in_features)) self.reset_parameters() def reset_parameters(self): init.kaiming_uniform_(self.weight, a=math.sqrt(5)) if self.bias is not None: fan_in, _ = init._calculate_fan_in_and_fan_out(self.weight) bound = 1 / math.sqrt(fan_in) init.uniform_(self.bias, -bound, bound) def forward(self, input): if self.weight is None: self.in_features = input.shape[-1] self.weight = Parameter(torch.Tensor(self.out_features, self.in_features)) self.reset_parameters() # Need to send lazy defined parameter to device... 
self.to(input.device) return F.linear(input, self.weight, self.bias) def extra_repr(self): return 'in_features={}, out_features={}, bias={}'.format( self.in_features, self.out_features, self.bias is not None ) class LinearBlock(nn.Module): def __init__(self, in_features, out_features, bias=True, use_bn=True, activation=F.relu, dropout_ratio=-1, residual=False,): super(LinearBlock, self).__init__() if in_features is None: self.linear = LazyLinear(in_features, out_features, bias=bias) else: self.linear = nn.Linear(in_features, out_features, bias=bias) if use_bn: self.bn = nn.BatchNorm1d(out_features) if dropout_ratio > 0.: self.dropout = nn.Dropout(p=dropout_ratio) else: self.dropout = None self.activation = activation self.use_bn = use_bn self.dropout_ratio = dropout_ratio self.residual = residual def __call__(self, x): h = self.linear(x) if self.use_bn: h = self.bn(h) if self.activation is not None: h = self.activation(h) if self.residual: h = residual_add(h, x) if self.dropout_ratio > 0: h = self.dropout(h) return h class PretrainedCNN(nn.Module): def __init__(self, model_name='se_resnext50_32x4d', in_channels=1, out_dim=10, use_bn=True, pretrained='imagenet'): super(PretrainedCNN, self).__init__() self.conv0 = nn.Conv2d( in_channels, 3, kernel_size=3, stride=1, padding=1, bias=True) self.base_model = pretrainedmodels.__dict__[model_name](pretrained=pretrained) activation = F.leaky_relu self.do_pooling = True if self.do_pooling: inch = self.base_model.last_linear.in_features else: inch = None hdim = 512 lin1 = LinearBlock(inch, hdim, use_bn=use_bn, activation=activation, residual=False) lin2 = LinearBlock(hdim, out_dim, use_bn=use_bn, activation=None, residual=False) self.lin_layers = Sequential(lin1, lin2) def forward(self, x): #h = self.conv0(x) h = x.repeat(1,3,1,1) h = self.base_model.features(h) if self.do_pooling: h = torch.sum(h, dim=(-1, -2)) else: bs, ch, height, width = h.shape h = h.view(bs, ch*height*width) for layer in self.lin_layers: h = layer(h) return h
_____no_output_____
MIT
Bengali.Ai classification challenge/pytorch-predict.ipynb
yoviny/Kaggle-Competitions
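Before moving on to the classifier, here is a small demonstration of what the `LazyLinear` module defined above buys: with `in_features=None`, the weight matrix is only created on the first forward pass, so the input width never has to be known up front. This is a toy sketch, not part of the original kernel:

```python
import torch

lazy = LazyLinear(in_features=None, out_features=8)
print(lazy.weight)               # None: no weight until the first forward pass

out = lazy(torch.randn(4, 16))   # input width (16) is inferred here
print(lazy.weight.shape)         # torch.Size([8, 16])
print(out.shape)                 # torch.Size([4, 8])
```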
Classifier
def accuracy(y, t): pred_label = torch.argmax(y, dim=1) count = pred_label.shape[0] correct = (pred_label == t).sum().type(torch.float32) acc = correct / count return acc class BengaliClassifier(nn.Module): def __init__(self, predictor, n_grapheme=168, n_vowel=11, n_consonant=7): super(BengaliClassifier, self).__init__() self.n_grapheme = n_grapheme self.n_vowel = n_vowel self.n_consonant = n_consonant self.n_total_class = self.n_grapheme + self.n_vowel + self.n_consonant self.predictor = predictor self.metrics_keys = [ 'loss', 'loss_grapheme', 'loss_vowel', 'loss_consonant', 'acc_grapheme', 'acc_vowel', 'acc_consonant'] def forward(self, x, y=None): pred = self.predictor(x) if isinstance(pred, tuple): assert len(pred) == 3 preds = pred else: assert pred.shape[1] == self.n_total_class preds = torch.split(pred, [self.n_grapheme, self.n_vowel, self.n_consonant], dim=1) loss_grapheme = F.cross_entropy(preds[0], y[:, 0]) loss_vowel = F.cross_entropy(preds[1], y[:, 1]) loss_consonant = F.cross_entropy(preds[2], y[:, 2]) loss = loss_grapheme + loss_vowel + loss_consonant metrics = { 'loss': loss.item(), 'loss_grapheme': loss_grapheme.item(), 'loss_vowel': loss_vowel.item(), 'loss_consonant': loss_consonant.item(), 'acc_grapheme': accuracy(preds[0], y[:, 0]), 'acc_vowel': accuracy(preds[1], y[:, 1]), 'acc_consonant': accuracy(preds[2], y[:, 2]), } return loss, metrics, pred def calc(self, data_loader): device: torch.device = next(self.parameters()).device self.eval() output_list = [] with torch.no_grad(): for batch in tqdm(data_loader): batch = batch.to(device) pred = self.predictor(batch) output_list.append(pred) output = torch.cat(output_list, dim=0) preds = torch.split(output, [self.n_grapheme, self.n_vowel, self.n_consonant], dim=1) return preds def predict_proba(self, data_loader): preds = self.calc(data_loader) return [F.softmax(p, dim=1) for p in preds] def predict(self, data_loader): preds = self.calc(data_loader) pred_labels = [torch.argmax(p, dim=1) for p in preds] return pred_labels def prepare_image(datadir, featherdir, data_type='train', submission=False, indices=[0, 1, 2, 3]): assert data_type in ['train', 'test'] if submission: image_df_list = [pd.read_parquet(datadir / f'{data_type}_image_data_{i}.parquet') for i in indices] else: image_df_list = [pd.read_feather(featherdir / f'{data_type}_image_data_{i}.feather') for i in indices] print('image_df_list', len(image_df_list)) HEIGHT = 137 WIDTH = 236 images = [df.iloc[:, 1:].values.reshape(-1, HEIGHT, WIDTH) for df in image_df_list] images = np.concatenate(images, axis=0) return images # --- Model --- device = torch.device(device) n_grapheme = 168 n_vowel = 11 n_consonant = 7 n_total = n_grapheme + n_vowel + n_consonant print('n_total', n_total) #predictor = PretrainedCNN(in_channels=1, out_dim=n_total, model_name=model_name, pretrained=None) #print('predictor', type(predictor)) #classifier = BengaliClassifier(predictor) class WrappedModel(nn.Module): def __init__(self, module): super(WrappedModel, self).__init__() self.module = module def forward(self, x): return self.module(x) def build_predictor(): predictor = PretrainedCNN(in_channels=3, out_dim=n_total, model_name=model_name, pretrained=None) return predictor def build_classifier(arch, load_model_path, n_total, model_name='', device='cuda:0'): if isinstance(device, str): device = torch.device(device) predictor = build_predictor() predictor = WrappedModel(predictor) print('predictor', type(predictor)) classifier = BengaliClassifier(predictor) if load_model_path: 
predictor.load_state_dict(torch.load(load_model_path)) else: print("[WARNING] Unexpected value load_model_path={}" .format(load_model_path)) classifier.to(device) return classifier def predict_core(test_images, image_size, threshold, arch, n_total, model_name, load_model_path, batch_size=512, device='cuda:0', **kwargs): classifier = build_classifier(arch, load_model_path, n_total, model_name, device=device) test_dataset = BengaliAIDataset( test_images, None, transform=Transform(affine=False, crop=False, size=(224, 224), threshold=threshold, train=False, ssr_ratio=0.0)) print('test_dataset', len(test_dataset)) test_loader = DataLoader(test_dataset, batch_size=batch_size, shuffle=False) test_pred_proba = classifier.predict_proba(test_loader) return test_pred_proba ''' from torch.utils.data.dataloader import DataLoader # --- Prediction --- data_type = 'test' test_preds_list = [] for i in range(4): # --- prepare data --- indices = [i] test_images = prepare_image( datadir, featherdir, data_type=data_type, submission=submission, indices=indices) n_dataset = len(test_images) print(f'i={i}, n_dataset={n_dataset}') # test_data_size = 200 if debug else int(n_dataset * 0.9) test_dataset = BengaliAIDataset( test_images, None, transform=Transform(affine=False, crop=True, size=(image_size, image_size), threshold=threshold, train=False)) print('test_dataset', len(test_dataset)) test_loader = DataLoader(test_dataset, batch_size=batch_size, shuffle=False) test_preds = classifier.predict(test_loader) test_preds_list.append(test_preds) del test_images gc.collect() if debug: break ''' model_dir = '/kaggle/input/pytorch-cutmix-46/' filenames = [] for filename in os.listdir(model_dir): if filename.endswith(".pt"): print(os.path.join(model_dir, filename)) filenames.append(filename) train_args_dict={ 'load_model_path': filename, 'device': device, 'batch_size': batch_size, 'debug': debug, 'image_size': (224, 224), 'model_name': model_name, 'threshold': 40., 'arch': None,} # --- Prediction --- data_type = 'test' test_preds_list = [] for i in range(4): # --- prepare data --- indices = [i] test_images = prepare_image( datadir, featherdir, data_type=data_type, submission=submission, indices=indices) n_dataset = len(test_images) print(f'i={i}, n_dataset={n_dataset}') # test_data_size = 200 if debug else int(n_dataset * 0.9) model_preds_list = [] for j in range(6): train_args_dict.update({ 'load_model_path': os.path.join(model_dir, filenames[j]), 'device': device, 'batch_size': batch_size, 'debug': debug, }) print(f'j {j} updated train_args_dict {train_args_dict}') test_preds = predict_core( test_images=test_images, n_total=n_total, **train_args_dict) model_preds_list.append(test_preds) # --- ensemble --- proba0 = torch.mean(torch.stack([test_preds[0] for test_preds in model_preds_list], dim=0), dim=0) proba1 = torch.mean(torch.stack([test_preds[1] for test_preds in model_preds_list], dim=0), dim=0) proba2 = torch.mean(torch.stack([test_preds[2] for test_preds in model_preds_list], dim=0), dim=0) p0 = torch.argmax(proba0, dim=1).cpu().numpy() p1 = torch.argmax(proba1, dim=1).cpu().numpy() p2 = torch.argmax(proba2, dim=1).cpu().numpy() print('p0', p0.shape, 'p1', p1.shape, 'p2', p2.shape) test_preds_list.append([p0, p1, p2]) if debug: break del test_images gc.collect() ''' test_preds_list.append(test_preds) del test_images gc.collect() if debug: break ''' p0 = np.concatenate([test_preds[0] for test_preds in test_preds_list], axis=0) p1 = np.concatenate([test_preds[1] for test_preds in test_preds_list], axis=0) p2 = 
np.concatenate([test_preds[2] for test_preds in test_preds_list], axis=0) print('concat:', 'p0', p0.shape, 'p1', p1.shape, 'p2', p2.shape) row_id = [] target = [] for i in tqdm(range(len(p0))): row_id += [f'Test_{i}_grapheme_root', f'Test_{i}_vowel_diacritic', f'Test_{i}_consonant_diacritic'] target += [p0[i], p1[i], p2[i]] submission_df = pd.DataFrame({'row_id': row_id, 'target': target}) submission_df.to_csv('submission.csv', index=False) ''' p0 = np.concatenate([test_preds[0].cpu().numpy() for test_preds in test_preds_list], axis=0) p1 = np.concatenate([test_preds[1].cpu().numpy() for test_preds in test_preds_list], axis=0) p2 = np.concatenate([test_preds[2].cpu().numpy() for test_preds in test_preds_list], axis=0) print('p0', p0.shape, 'p1', p1.shape, 'p2', p2.shape) row_id = [] target = [] for i in tqdm(range(len(p0))): row_id += [f'Test_{i}_grapheme_root', f'Test_{i}_vowel_diacritic', f'Test_{i}_consonant_diacritic'] target += [p0[i], p1[i], p2[i]] submission_df = pd.DataFrame({'row_id': row_id, 'target': target}) submission_df.to_csv('submission.csv', index=False) ''' submission_df
_____no_output_____
MIT
Bengali.Ai classification challenge/pytorch-predict.ipynb
yoviny/Kaggle-Competitions
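The ensembling step inside the loop above is simply an average of per-model class probabilities followed by an argmax. Stripped of the Kaggle plumbing, the core idea looks like this sketch (the shapes are for illustration only):

```python
import torch

n_models, n_samples, n_classes = 6, 5, 168  # illustrative shapes

# Hypothetical per-model probability tensors, each (n_samples, n_classes)
model_probas = [torch.softmax(torch.randn(n_samples, n_classes), dim=1)
                for _ in range(n_models)]

# Average over the model axis, then take the most likely class per sample
mean_proba = torch.mean(torch.stack(model_probas, dim=0), dim=0)
pred = torch.argmax(mean_proba, dim=1)
print(pred.shape)  # torch.Size([5])
```

Averaging probabilities rather than hard labels lets confident models outvote uncertain ones.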
Check prediction
train = pd.read_csv(datadir/'train.csv') pred_df = pd.DataFrame({ 'grapheme_root': p0, 'vowel_diacritic': p1, 'consonant_diacritic': p2 }) fig, axes = plt.subplots(2, 3, figsize=(22, 6)) plt.title('Label Count') sns.countplot(x="grapheme_root",data=train, ax=axes[0, 0]) sns.countplot(x="vowel_diacritic",data=train, ax=axes[0, 1]) sns.countplot(x="consonant_diacritic",data=train, ax=axes[0, 2]) sns.countplot(x="grapheme_root",data=pred_df, ax=axes[1, 0]) sns.countplot(x="vowel_diacritic",data=pred_df, ax=axes[1, 1]) sns.countplot(x="consonant_diacritic",data=pred_df, ax=axes[1, 2]) plt.tight_layout() plt.show() train_labels = train[['grapheme_root', 'vowel_diacritic', 'consonant_diacritic']].values fig, axes = plt.subplots(1, 3, figsize=(22, 6)) sns.distplot(train_labels[:, 0], ax=axes[0], color='green', kde=False, label='train grapheme') sns.distplot(train_labels[:, 1], ax=axes[1], color='green', kde=False, label='train vowel') sns.distplot(train_labels[:, 2], ax=axes[2], color='green', kde=False, label='train consonant') plt.tight_layout() fig, axes = plt.subplots(1, 3, figsize=(22, 6)) sns.distplot(p0, ax=axes[0], color='orange', kde=False, label='test grapheme') sns.distplot(p1, ax=axes[1], color='orange', kde=False, label='test vowel') sns.distplot(p2, ax=axes[2], color='orange', kde=False, label='test consonant') plt.legend() plt.tight_layout()
_____no_output_____
MIT
Bengali.Ai classification challenge/pytorch-predict.ipynb
yoviny/Kaggle-Competitions
Tutorial 1: Neural Rate Models **Week 2, Day 4: Dynamic Networks** **By Neuromatch Academy** __Content creators:__ Qinglong Gu, Songtin Li, Arvind Kumar, John Murray, Julijana Gjorgjieva __Content reviewers:__ Maryam Vaziri-Pashkam, Ella Batty, Lorenzo Fontolan, Richard Gao, Spiros Chavlis, Michael Waskom **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** --- Tutorial Objectives. The brain is a complex system, not because it is composed of a large number of diverse types of neurons, but mainly because of how neurons are connected to each other. The brain is indeed a network of highly specialized neuronal networks. The activity of a neural network constantly evolves in time. For this reason, neurons can be modeled as dynamical systems. The dynamical system approach is only one of the many modeling approaches that computational neuroscientists have developed (other points of view include information processing, statistical models, etc.). How the dynamics of neuronal networks affect the representation and processing of information in the brain is an open question. However, signatures of altered brain dynamics present in many brain diseases (e.g., in epilepsy or Parkinson's disease) tell us that it is crucial to study network activity dynamics if we want to understand the brain. In this tutorial, we will simulate and study one of the simplest models of biological neuronal networks. Instead of modeling and simulating individual excitatory neurons (e.g., the LIF models that you implemented yesterday), we will treat them as a single homogeneous population and approximate their dynamics using a single one-dimensional equation describing the evolution of their average spiking rate in time. In particular, we will learn how to build a firing rate model of a single population of excitatory neurons. **Steps:** - Write the equation for the firing rate dynamics of a 1D excitatory population. - Visualize the response of the population as a function of parameters such as threshold level and gain, using the frequency-current (F-I) curve. - Numerically simulate the dynamics of the excitatory population and find the fixed points of the system. - Investigate the stability of the fixed points by linearizing the dynamics around them. --- Setup
# Imports import matplotlib.pyplot as plt import numpy as np import scipy.optimize as opt # root-finding algorithm # @title Figure Settings import ipywidgets as widgets # interactive display %config InlineBackend.figure_format = 'retina' plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle") # @title Helper functions def plot_fI(x, f): plt.figure(figsize=(6, 4)) # plot the figure plt.plot(x, f, 'k') plt.xlabel('x (a.u.)', fontsize=14) plt.ylabel('F(x)', fontsize=14) plt.show() def plot_dr_r(r, drdt, x_fps=None): plt.figure() plt.plot(r, drdt, 'k') plt.plot(r, 0. * r, 'k--') if x_fps is not None: plt.plot(x_fps, np.zeros_like(x_fps), "ko", ms=12) plt.xlabel(r'$r$') plt.ylabel(r'$\frac{dr}{dt}$', fontsize=20) plt.ylim(-0.1, 0.1) def plot_dFdt(x, dFdt): plt.figure() plt.plot(x, dFdt, 'r') plt.xlabel('x (a.u.)', fontsize=14) plt.ylabel('dF(x)', fontsize=14) plt.show()
_____no_output_____
CC-BY-4.0
tutorials/W2D4_DynamicNetworks/student/W2D4_Tutorial1.ipynb
carsen-stringer/course-content
--- Section 1: Neuronal network dynamics
# @title Video 1: Dynamic networks from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id="", width=854, height=480, fs=1) print('Video available at https://www.bilibili.com/video/{0}'.format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id="p848349hPyw", width=854, height=480, fs=1, rel=0) print('Video available at https://youtube.com/watch?v=' + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out)
_____no_output_____
CC-BY-4.0
tutorials/W2D4_DynamicNetworks/student/W2D4_Tutorial1.ipynb
carsen-stringer/course-content
Section 1.1: Dynamics of a single excitatory population. Individual neurons respond by spiking. When we average the spikes of neurons in a population, we can define the average firing activity of the population. In this model, we are interested in how the population-averaged firing varies as a function of time and network parameters. Mathematically, we can describe the firing rate dynamics as: \begin{align}\tau \frac{dr}{dt} &= -r + F(w\cdot r + I_{\text{ext}}) \quad\qquad (1)\end{align} $r(t)$ represents the average firing rate of the excitatory population at time $t$, $\tau$ controls the timescale of the evolution of the average firing rate, $w$ denotes the strength (synaptic weight) of the recurrent input to the population, $I_{\text{ext}}$ represents the external input, and the transfer function $F(\cdot)$ (which can be related to the f-I curve of individual neurons, described in the next sections) represents the population activation function in response to all received inputs. To start building the model, please execute the cell below to initialize the simulation parameters.
# @markdown *Execute this cell to set default parameters for a single excitatory population model* def default_pars_single(**kwargs): pars = {} # Excitatory parameters pars['tau'] = 1. # Timescale of the E population [ms] pars['a'] = 1.2 # Gain of the E population pars['theta'] = 2.8 # Threshold of the E population # Connection strength pars['w'] = 0. # E to E, we first set it to 0 # External input pars['I_ext'] = 0. # simulation parameters pars['T'] = 20. # Total duration of simulation [ms] pars['dt'] = .1 # Simulation time step [ms] pars['r_init'] = 0.2 # Initial value of E # External parameters if any pars.update(kwargs) # Vector of discretized time points [ms] pars['range_t'] = np.arange(0, pars['T'], pars['dt']) return pars
_____no_output_____
CC-BY-4.0
tutorials/W2D4_DynamicNetworks/student/W2D4_Tutorial1.ipynb
carsen-stringer/course-content
You can now use: - `pars = default_pars_single()` to get all the parameters, and then you can execute `print(pars)` to check these parameters. - `pars = default_pars_single(T=T_sim, dt=time_step)` to set a new simulation time and time step. - To update an existing parameter dictionary, use `pars['New_para'] = value`. Because `pars` is a dictionary, it can be passed to a function that requires individual parameters as arguments using the `my_func(**pars)` syntax. Section 1.2: F-I curves. In electrophysiology, a neuron is often characterized by its spike rate output in response to input currents. This is often called the **F-I** curve, denoting the output spike frequency (**F**) in response to different injected currents (**I**). We estimated this for an LIF neuron in yesterday's tutorial. The transfer function $F(\cdot)$ in Equation $1$ represents the gain of the population as a function of the total input. The gain is often modeled as a sigmoidal function, i.e., more input drive leads to a nonlinear increase in the population firing rate. The output firing rate will eventually saturate for high input values. A sigmoidal $F(\cdot)$ is parameterized by its gain $a$ and threshold $\theta$. $$ F(x;a,\theta) = \frac{1}{1+\text{e}^{-a(x-\theta)}} - \frac{1}{1+\text{e}^{a\theta}} \quad(2)$$ The argument $x$ represents the input to the population. Note that the second term is chosen so that $F(0;a,\theta)=0$. Many other transfer functions (generally monotonic) can also be used. Examples are the rectified linear function $ReLU(x)$ and the hyperbolic tangent $\tanh(x)$. Exercise 1: Implement F-I curve. Let's first investigate the activation functions before simulating the dynamics of the entire population. In this exercise, you will implement a sigmoidal **F-I** curve or transfer function $F(x)$, with gain $a$ and threshold level $\theta$ as parameters.
def F(x, a, theta): """ Population activation function. Args: x (float): the population input a (float): the gain of the function theta (float): the threshold of the function Returns: float: the population activation response F(x) for input x """ ################################################# ## TODO for students: compute f = F(x) ## # Fill out function and remove raise NotImplementedError("Student exercise: implement the f-I function") ################################################# # Define the sigmoidal transfer function f = F(x) f = ... return f pars = default_pars_single() # get default parameters x = np.arange(0, 10, .1) # set the range of input # Uncomment below to test your function # f = F(x, pars['a'], pars['theta']) # plot_fI(x, f)
_____no_output_____
CC-BY-4.0
tutorials/W2D4_DynamicNetworks/student/W2D4_Tutorial1.ipynb
carsen-stringer/course-content
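For reference, one way the `f = ...` placeholder above can be completed, transcribing Equation (2) directly (a sketch; the notebook's linked solution remains the authoritative answer):

```python
import numpy as np

def F(x, a, theta):
    """Sigmoidal population activation function, Equation (2)."""
    # The second term shifts the curve so that F(0; a, theta) = 0
    return 1 / (1 + np.exp(-a * (x - theta))) - 1 / (1 + np.exp(a * theta))
```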
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial1_Solution_45ddc05f.py)*Example output:* Interactive Demo: Parameter exploration of F-I curve. Here's an interactive demo that shows how the F-I curve changes for different values of the gain and threshold parameters. How do the gain and threshold parameters affect the F-I curve?
# @title # @markdown Make sure you execute this cell to enable the widget! def interactive_plot_FI(a, theta): """ Plot the F-I curve for the given parameters. Expects: a : the gain of the function theta : the threshold of the function Returns: plot of the F-I curve with the given parameters """ # set the range of input x = np.arange(0, 10, .1) plt.figure() plt.plot(x, F(x, a, theta), 'k') plt.xlabel('x (a.u.)', fontsize=14) plt.ylabel('F(x)', fontsize=14) plt.show() _ = widgets.interact(interactive_plot_FI, a=(0.3, 3, 0.3), theta=(2, 4, 0.2))
_____no_output_____
CC-BY-4.0
tutorials/W2D4_DynamicNetworks/student/W2D4_Tutorial1.ipynb
carsen-stringer/course-content
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial1_Solution_1c0165d7.py) Section 1.3: Simulation scheme of E dynamicsBecause $F(\cdot)$ is a nonlinear function, the exact solution of Equation $1$ can not be determined via analytical methods. Therefore, numerical methods must be used to find the solution. In practice, the derivative on the left-hand side of Equation $1$ can be approximated using the Euler method on a time-grid of stepsize $\Delta t$:\begin{align}&\frac{dr}{dt} \approx \frac{r[k+1]-r[k]}{\Delta t} \end{align}where $r[k] = r(k\Delta t)$. Thus,$$\Delta r[k] = \frac{\Delta t}{\tau}[-r[k] + F(w\cdot r[k] + I_{\text{ext}}[k];a,\theta)]$$Hence, Equation (1) is updated at each time step by:$$r[k+1] = r[k] + \Delta r[k]$$
# @markdown *Execute this cell to enable the single population rate model simulator: `simulate_single`* def simulate_single(pars): """ Simulate an excitatory population of neurons Args: pars : Parameter dictionary Returns: rE : Activity of excitatory population (array) Example: pars = default_pars_single() r = simulate_single(pars) """ # Set parameters tau, a, theta = pars['tau'], pars['a'], pars['theta'] w = pars['w'] I_ext = pars['I_ext'] r_init = pars['r_init'] dt, range_t = pars['dt'], pars['range_t'] Lt = range_t.size # Initialize activity r = np.zeros(Lt) r[0] = r_init I_ext = I_ext * np.ones(Lt) # Update the E activity for k in range(Lt - 1): dr = dt / tau * (-r[k] + F(w * r[k] + I_ext[k], a, theta)) r[k+1] = r[k] + dr return r help(simulate_single)
_____no_output_____
CC-BY-4.0
tutorials/W2D4_DynamicNetworks/student/W2D4_Tutorial1.ipynb
carsen-stringer/course-content
Interactive Demo: Parameter Exploration of single population dynamics. Note that $w=0$, as in the default setting, means no recurrent input to the neuron population in Equation (1). Hence, the dynamics are entirely determined by the external input $I_{\text{ext}}$. Explore these dynamics in this interactive demo. How does $r_{\text{sim}}(t)$ change with different $I_{\text{ext}}$ values? How does it change with different $\tau$ values? Investigate the relationship between $F(I_{\text{ext}}; a, \theta)$ and the steady value of $r(t)$. Note that $r_{\rm ana}(t)$ denotes the analytical solution - you will learn how this is computed in the next section.
# @title # @markdown Make sure you execute this cell to enable the widget! # get default parameters pars = default_pars_single(T=20.) def Myplot_E_diffI_difftau(I_ext, tau): # set external input and time constant pars['I_ext'] = I_ext pars['tau'] = tau # simulation r = simulate_single(pars) # Analytical Solution r_ana = (pars['r_init'] + (F(I_ext, pars['a'], pars['theta']) - pars['r_init']) * (1. - np.exp(-pars['range_t'] / pars['tau']))) # plot plt.figure() plt.plot(pars['range_t'], r, 'b', label=r'$r_{\mathrm{sim}}$(t)', alpha=0.5, zorder=1) plt.plot(pars['range_t'], r_ana, 'b--', lw=5, dashes=(2, 2), label=r'$r_{\mathrm{ana}}$(t)', zorder=2) plt.plot(pars['range_t'], F(I_ext, pars['a'], pars['theta']) * np.ones(pars['range_t'].size), 'k--', label=r'$F(I_{\mathrm{ext}})$') plt.xlabel('t (ms)', fontsize=16.) plt.ylabel('Activity r(t)', fontsize=16.) plt.legend(loc='best', fontsize=14.) plt.show() _ = widgets.interact(Myplot_E_diffI_difftau, I_ext=(0.0, 10., 1.), tau=(1., 5., 0.2))
_____no_output_____
CC-BY-4.0
tutorials/W2D4_DynamicNetworks/student/W2D4_Tutorial1.ipynb
carsen-stringer/course-content
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial1_Solution_65dee3e7.py) Think! Above, we have numerically solved a system driven by a positive input. Yet, $r_E(t)$ either decays to zero or reaches a fixed non-zero value. - Why doesn't the solution of the system "explode" in a finite time? In other words, what guarantees that $r_E(t)$ stays finite? - Which parameter would you change in order to increase the maximum value of the response? [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial1_Solution_5a95a98e.py) --- Section 2: Fixed points of the single population system
# @title Video 2: Fixed point from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id="", width=854, height=480, fs=1) print('Video available at https://www.bilibili.com/video/{0}'.format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id="Ox3ELd1UFyo", width=854, height=480, fs=1, rel=0) print('Video available at https://youtube.com/watch?v=' + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out)
_____no_output_____
CC-BY-4.0
tutorials/W2D4_DynamicNetworks/student/W2D4_Tutorial1.ipynb
carsen-stringer/course-content
As you varied the two parameters in the last Interactive Demo, you noticed that, while at first the system output changes quickly, with time it reaches its maximum/minimum value and does not change anymore. The value eventually reached by the system is called the **steady state** of the system, or the **fixed point**. Essentially, in the steady state the derivative of the activity ($r$) with respect to time is zero, i.e. $\displaystyle \frac{dr}{dt}=0$. We can find the steady state of Equation (1) by setting $\displaystyle{\frac{dr}{dt}=0}$ and solving for $r$: $$-r_{\text{steady}} + F(w\cdot r_{\text{steady}} + I_{\text{ext}};a,\theta) = 0, \qquad (3)$$ When it exists, the solution of Equation (3) defines a **fixed point** of the dynamical system in Equation (1). Note that if $F(x)$ is nonlinear, it is not always possible to find an analytical solution, but the solution can be found via numerical simulations, as we will do later. From the Interactive Demo, one could also notice that the value of $\tau$ influences how quickly the activity converges to the steady state from its initial value. In the specific case of $w=0$, we can also analytically compute the solution of Equation (1) (i.e., the thick blue dashed line) and deduce the role of $\tau$ in determining the convergence to the fixed point: $$\displaystyle{r(t) = \big{[}F(I_{\text{ext}};a,\theta) -r(t=0)\big{]} (1-\text{e}^{-\frac{t}{\tau}})} + r(t=0)$$ We can now numerically calculate the fixed point with a root-finding algorithm. Exercise 2: Visualization of the fixed points. When it is not possible to find the solution of Equation (3) analytically, a graphical approach can be taken. To that end, it is useful to plot $\displaystyle{\frac{dr}{dt}}$ as a function of $r$. The values of $r$ for which the plotted function crosses zero on the y axis correspond to fixed points. Here, let us, for example, set $w=5.0$ and $I^{\text{ext}}=0.5$. From Equation (1), you can obtain $$\frac{dr}{dt} = [-r + F(w\cdot r + I^{\text{ext}})]\,/\,\tau $$ Then, plot $dr/dt$ as a function of $r$, and check for the presence of fixed points.
def compute_drdt(r, I_ext, w, a, theta, tau, **other_pars): """Given parameters, compute dr/dt as a function of r. Args: r (1D array) : Average firing rate of the excitatory population I_ext, w, a, theta, tau (numbers): Simulation parameters to use other_pars : Other simulation parameters are unused by this function Returns drdt function for each value of r """ ######################################################################### # TODO compute drdt and disable the error raise NotImplementedError("Finish the compute_drdt function") ######################################################################### # Calculate drdt drdt = ... return drdt # Define a vector of r values and the simulation parameters r = np.linspace(0, 1, 1000) pars = default_pars_single(I_ext=0.5, w=5) # Uncomment to test your function # drdt = compute_drdt(r, **pars) # plot_dr_r(r, drdt)
_____no_output_____
CC-BY-4.0
tutorials/W2D4_DynamicNetworks/student/W2D4_Tutorial1.ipynb
carsen-stringer/course-content
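A possible completion of the `drdt = ...` placeholder above, read straight off Equation (1):

```python
def compute_drdt(r, I_ext, w, a, theta, tau, **other_pars):
    """dr/dt for each value of r, following Equation (1)."""
    return (-r + F(w * r + I_ext, a, theta)) / tau
```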
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial1_Solution_c5280901.py)*Example output:* Exercise 3: Fixed point calculation. We will now find the fixed points numerically. To do so, we need to specify initial values ($r_{\text{guess}}$) for the root-finding algorithm to start from. From the curve of $\displaystyle{\frac{dr}{dt}}$ plotted above in Exercise 2, initial values can be chosen as a set of values close to where the curve crosses zero on the y axis (the real fixed points). The next cell defines three helper functions that we will use: - `my_fp_single(r_guess, **pars)` uses a root-finding algorithm to locate a fixed point near a given initial value. - `check_fp_single(x_fp, **pars)` verifies that the values of $r_{\rm fp}$ for which $\displaystyle{\frac{dr}{dt}} = 0$ are the true fixed points. - `my_fp_finder(r_guess_vector, **pars)` accepts an array of initial values and finds the same number of fixed points, using the above two functions.
# @markdown *Execute this cell to enable the fixed point functions* def my_fp_single(r_guess, a, theta, w, I_ext, **other_pars): """ Calculate the fixed point through drE/dt=0 Args: r_guess : Initial value used for scipy.optimize function a, theta, w, I_ext : simulation parameters Returns: x_fp : value of fixed point """ # define the right hand of E dynamics def my_WCr(x): r = x drdt = (-r + F(w * r + I_ext, a, theta)) y = np.array(drdt) return y x0 = np.array(r_guess) x_fp = opt.root(my_WCr, x0).x.item() return x_fp def check_fp_single(x_fp, a, theta, w, I_ext, mytol=1e-4, **other_pars): """ Verify |dr/dt| < mytol Args: fp : value of fixed point a, theta, w, I_ext: simulation parameters mytol : tolerance, default as 10^{-4} Returns : Whether it is a correct fixed point: True/False """ # calculate Equation(3) y = x_fp - F(w * x_fp + I_ext, a, theta) # Here we set tolerance as 10^{-4} return np.abs(y) < mytol def my_fp_finder(pars, r_guess_vector, mytol=1e-4): """ Calculate the fixed point(s) through drE/dt=0 Args: pars : Parameter dictionary r_guess_vector : Initial values used for scipy.optimize function mytol : tolerance for checking fixed point, default as 10^{-4} Returns: x_fps : values of fixed points """ x_fps = [] correct_fps = [] for r_guess in r_guess_vector: x_fp = my_fp_single(r_guess, **pars) if check_fp_single(x_fp, **pars, mytol=mytol): x_fps.append(x_fp) return x_fps help(my_fp_finder) r = np.linspace(0, 1, 1000) pars = default_pars_single(I_ext=0.5, w=5) drdt = compute_drdt(r, **pars) ############################################################################# # TODO for students: # Define initial values close to the intersections of drdt and y=0 # (How many initial values? Hint: How many times do the two lines intersect?) # Calculate the fixed point with these initial values and plot them ############################################################################# r_guess_vector = [...] # Uncomment to test your values # x_fps = my_fp_finder(pars, r_guess_vector) # plot_dr_r(r, drdt, x_fps)
_____no_output_____
CC-BY-4.0
tutorials/W2D4_DynamicNetworks/student/W2D4_Tutorial1.ipynb
carsen-stringer/course-content
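For the `r_guess_vector = [...]` placeholder above: the $dr/dt$ curve from Exercise 2 crosses zero three times, so three initial guesses are needed, one near each crossing. The values the tutorial itself uses in later cells are a natural choice:

```python
r_guess_vector = [0.0, 0.4, 0.9]  # one guess near each zero crossing
x_fps = my_fp_finder(pars, r_guess_vector)
plot_dr_r(r, drdt, x_fps)
```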
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial1_Solution_0637b6bf.py)*Example output:* Interactive Demo: fixed points as a function of recurrent and external inputs. You can now explore how the previous plot changes when the recurrent coupling $w$ and the external input $I_{\text{ext}}$ take different values. How does the number of fixed points change?
# @title # @markdown Make sure you execute this cell to enable the widget! def plot_intersection_single(w, I_ext): # set your parameters pars = default_pars_single(w=w, I_ext=I_ext) # find fixed points r_init_vector = [0, .4, .9] x_fps = my_fp_finder(pars, r_init_vector) # plot r = np.linspace(0, 1., 1000) drdt = (-r + F(w * r + I_ext, pars['a'], pars['theta'])) / pars['tau'] plot_dr_r(r, drdt, x_fps) _ = widgets.interact(plot_intersection_single, w=(1, 7, 0.2), I_ext=(0, 3, 0.1))
_____no_output_____
CC-BY-4.0
tutorials/W2D4_DynamicNetworks/student/W2D4_Tutorial1.ipynb
carsen-stringer/course-content
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial1_Solution_20486792.py) --- Summary. In this tutorial, we have investigated the dynamics of a rate-based single population of neurons. We learned about: - The effect of the input parameters and the time constant of the network on the dynamics of the population. - How to find the fixed point(s) of the system. Next, we have two bonus sections covering important concepts in dynamical system analysis and simulation. If you have time left, watch the next video and proceed to solve the exercises. You will learn: - How to determine the stability of a fixed point by linearizing the system. - How to add realistic inputs to our model. --- Bonus 1: Stability of a fixed point
# @title Video 3: Stability of fixed points from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id="", width=854, height=480, fs=1) print('Video available at https://www.bilibili.com/video/{0}'.format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id="KKMlWWU83Jg", width=854, height=480, fs=1, rel=0) print('Video available at https://youtube.com/watch?v=' + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out)
_____no_output_____
CC-BY-4.0
tutorials/W2D4_DynamicNetworks/student/W2D4_Tutorial1.ipynb
carsen-stringer/course-content
Initial values and trajectories. Here, let us first set $w=5.0$ and $I_{\text{ext}}=0.5$, and investigate the dynamics of $r(t)$ starting with different initial values $r(0) \equiv r_{\text{init}}$. We will plot the trajectories of $r(t)$ with $r_{\text{init}} = 0.0, 0.1, 0.2,..., 0.9$.
# @markdown Execute this cell to see the trajectories! pars = default_pars_single() pars['w'] = 5.0 pars['I_ext'] = 0.5 plt.figure(figsize=(8, 5)) for ie in range(10): pars['r_init'] = 0.1 * ie # set the initial value r = simulate_single(pars) # run the simulation # plot the activity with given initial plt.plot(pars['range_t'], r, 'b', alpha=0.1 + 0.1 * ie, label=r'r$_{\mathrm{init}}$=%.1f' % (0.1 * ie)) plt.xlabel('t (ms)') plt.title('Two steady states?') plt.ylabel(r'$r$(t)') plt.legend(loc=[1.01, -0.06], fontsize=14) plt.show()
_____no_output_____
CC-BY-4.0
tutorials/W2D4_DynamicNetworks/student/W2D4_Tutorial1.ipynb
carsen-stringer/course-content
Interactive Demo: dynamics as a function of the initial value. Let's now set $r_{\rm init}$ to a value of your choice in this demo. How does the solution change? What do you observe?
# @title # @markdown Make sure you execute this cell to enable the widget! pars = default_pars_single(w=5.0, I_ext=0.5) def plot_single_diffEinit(r_init): pars['r_init'] = r_init r = simulate_single(pars) plt.figure() plt.plot(pars['range_t'], r, 'b', zorder=1) plt.plot(0, r[0], 'bo', alpha=0.7, zorder=2) plt.xlabel('t (ms)', fontsize=16) plt.ylabel(r'$r(t)$', fontsize=16) plt.ylim(0, 1.0) plt.show() _ = widgets.interact(plot_single_diffEinit, r_init=(0, 1, 0.02))
_____no_output_____
CC-BY-4.0
tutorials/W2D4_DynamicNetworks/student/W2D4_Tutorial1.ipynb
carsen-stringer/course-content
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial1_Solution_4d2de6a0.py) Stability analysis via linearization of the dynamics. Just like Equation $1$ in the case ($w=0$) discussed above, a generic linear system $$\frac{dx}{dt} = \lambda (x - b),$$ has a fixed point for $x=b$. The analytical solution of such a system can be found to be: $$x(t) = b + \big{(} x(0) - b \big{)} \text{e}^{\lambda t}.$$ Now consider a small perturbation of the activity around the fixed point: $x(0) = b+ \epsilon$, where $|\epsilon| \ll 1$. Will the perturbation $\epsilon(t)$ grow with time or will it decay to the fixed point? The evolution of the perturbation with time can be written, using the analytical solution for $x(t)$, as: $$\epsilon (t) = x(t) - b = \epsilon \text{e}^{\lambda t}$$ - if $\lambda < 0$, $\epsilon(t)$ decays to zero, $x(t)$ will still converge to $b$, and the fixed point is "**stable**". - if $\lambda > 0$, $\epsilon(t)$ grows with time, $x(t)$ will leave the fixed point $b$ exponentially, and the fixed point is, therefore, "**unstable**". Compute the stability of Equation $1$. Similar to what we did in the linear system above, in order to determine the stability of a fixed point $r^{*}$ of the excitatory population dynamics, we perturb Equation (1) around $r^{*}$ by $\epsilon$, i.e. $r = r^{*} + \epsilon$. We can plug this into Equation (1) and obtain the equation determining the time evolution of the perturbation $\epsilon(t)$: \begin{align}\tau \frac{d\epsilon}{dt} \approx -\epsilon + w F'(w\cdot r^{*} + I_{\text{ext}};a,\theta) \epsilon \end{align} where $F'(\cdot)$ is the derivative of the transfer function $F(\cdot)$. We can rewrite the above equation as: \begin{align}\frac{d\epsilon}{dt} \approx \frac{\epsilon}{\tau }[-1 + w F'(w\cdot r^* + I_{\text{ext}};a,\theta)] \end{align} That is, as in the linear system above, the value of $$\lambda = [-1+ wF'(w\cdot r^* + I_{\text{ext}};a,\theta)]/\tau \qquad (4)$$ determines whether the perturbation will grow or decay to zero, i.e., $\lambda$ defines the stability of the fixed point. This value is called the **eigenvalue** of the dynamical system. Exercise 4: Compute $dF$. The derivative of the sigmoid transfer function is: \begin{align} \frac{dF}{dx} & = \frac{d}{dx} (1+\exp\{-a(x-\theta)\})^{-1} \\& = a\exp\{-a(x-\theta)\} (1+\exp\{-a(x-\theta)\})^{-2}. \qquad (5)\end{align} Let's now find the expression for the derivative $\displaystyle{\frac{dF}{dx}}$ in the following cell and plot it.
def dF(x, a, theta): """ Derivative of the population activation function. Args: x : the population input a : the gain of the function theta : the threshold of the function Returns: dFdx : the derivative of the population activation function at input x """ ########################################################################### # TODO for students: compute dFdx ## raise NotImplementedError("Student exercise: compute the derivative of F") ########################################################################### # Calculate the derivative of the population activation function dFdx = ... return dFdx pars = default_pars_single() # get default parameters x = np.arange(0, 10, .1) # set the range of input # Uncomment below to test your function # df = dF(x, pars['a'], pars['theta']) # plot_dFdt(x, df)
_____no_output_____
CC-BY-4.0
tutorials/W2D4_DynamicNetworks/student/W2D4_Tutorial1.ipynb
carsen-stringer/course-content
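One way to fill in the `dFdx = ...` placeholder, transcribing Equation (5) directly (a sketch, not necessarily the official solution):

```python
import numpy as np

def dF(x, a, theta):
    """Derivative of the sigmoid transfer function, Equation (5)."""
    e = np.exp(-a * (x - theta))
    return a * e / (1 + e) ** 2
```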
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial1_Solution_ce2e3bc5.py)*Example output:* Exercise 5: Compute eigenvalues. As discussed above, for the case with $w=5.0$ and $I_{\text{ext}}=0.5$, the system displays **three** fixed points. However, when we simulated the dynamics and varied the initial conditions $r_{\rm init}$, we could only obtain **two** steady states. In this exercise, we will now check the stability of each of the three fixed points by calculating the corresponding eigenvalues with the function `eig_single`. Check the sign of each eigenvalue (i.e., the stability of each fixed point). How many of the fixed points are stable? Note that the expression for the eigenvalue at fixed point $r^*$ is $$\lambda = [-1+ wF'(w\cdot r^* + I_{\text{ext}};a,\theta)]/\tau$$
def eig_single(fp, tau, a, theta, w, I_ext, **other_pars): """ Args: fp : fixed point r_fp tau, a, theta, w, I_ext : Simulation parameters Returns: eig : eigenvalue of the linearized system """ ##################################################################### ## TODO for students: compute eigenvalue and disable the error raise NotImplementedError("Student exercise: compute the eigenvalue") ###################################################################### # Compute the eigenvalue eig = ... return eig # Find the eigenvalues for all fixed points of Exercise 2 pars = default_pars_single(w=5, I_ext=.5) r_guess_vector = [0, .4, .9] x_fp = my_fp_finder(pars, r_guess_vector) # Uncomment below lines after completing the eig_single function. # for i, fp in enumerate(x_fp): # eig_fp = eig_single(fp, **pars) # print(f'Fixed point{i+1} at {fp:.3f} with Eigenvalue={eig_fp:.3f}')
_____no_output_____
CC-BY-4.0
tutorials/W2D4_DynamicNetworks/student/W2D4_Tutorial1.ipynb
carsen-stringer/course-content
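A possible completion of the `eig = ...` placeholder, transcribing Equation (4); with the parameters above it reproduces the sample output shown next (eigenvalues of roughly -0.58, 0.50, and -0.63):

```python
def eig_single(fp, tau, a, theta, w, I_ext, **other_pars):
    """Eigenvalue of the linearized rate equation at fixed point fp, Equation (4)."""
    return (-1. + w * dF(w * fp + I_ext, a, theta)) / tau
```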
**SAMPLE OUTPUT**``` Fixed point1 at 0.042 with Eigenvalue=-0.583 Fixed point2 at 0.447 with Eigenvalue=0.498 Fixed point3 at 0.900 with Eigenvalue=-0.626 ``` [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial1_Solution_e285f60d.py) Think! Throughout the tutorial, we have assumed $w > 0$, i.e., we considered a single population of **excitatory** neurons. What do you think will be the behavior of a population of inhibitory neurons, i.e., where $w > 0$ is replaced by $w < 0$? [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D4_DynamicNetworks/solutions/W2D4_Tutorial1_Solution_579bc9c9.py) --- Bonus 2: Noisy input drives the transition between two stable states Ornstein-Uhlenbeck (OU) process. As discussed in several previous tutorials, the OU process is usually used to generate a noisy input into the neuron. The OU input $\eta(t)$ follows: $$\tau_\eta \frac{d}{dt}\eta(t) = -\eta (t) + \sigma_\eta\sqrt{2\tau_\eta}\xi(t)$$ Execute the following function `my_OU(pars, sig, myseed=False)` to generate an OU process.
# @title OU process `my_OU(pars, sig, myseed=False)` # @markdown Make sure you execute this cell to visualize the noise! def my_OU(pars, sig, myseed=False): """ A function that generates an Ornstein-Uhlenbeck process Args: pars : parameter dictionary sig : noise amplitude myseed : random seed. int or boolean Returns: I_ou : Ornstein-Uhlenbeck input current """ # Retrieve simulation parameters dt, range_t = pars['dt'], pars['range_t'] Lt = range_t.size tau_ou = pars['tau_ou'] # [ms] # set random seed if myseed: np.random.seed(seed=myseed) else: np.random.seed() # Initialize noise = np.random.randn(Lt) I_ou = np.zeros(Lt) I_ou[0] = noise[0] * sig # generate OU for it in range(Lt - 1): I_ou[it + 1] = (I_ou[it] + dt / tau_ou * (0. - I_ou[it]) + np.sqrt(2 * dt / tau_ou) * sig * noise[it + 1]) return I_ou pars = default_pars_single(T=100) pars['tau_ou'] = 1. # [ms] sig_ou = 0.1 I_ou = my_OU(pars, sig=sig_ou, myseed=2020) plt.figure(figsize=(10, 4)) plt.plot(pars['range_t'], I_ou, 'r') plt.xlabel('t (ms)') plt.ylabel(r'$I_{\mathrm{OU}}$') plt.show()
_____no_output_____
CC-BY-4.0
tutorials/W2D4_DynamicNetworks/student/W2D4_Tutorial1.ipynb
carsen-stringer/course-content
Example: Up-Down transition. In the presence of two or more fixed points, noisy inputs can drive a transition between the fixed points! Here, we stimulate an E population for 1,000 ms by applying OU inputs.
# @title Simulation of an E population with OU inputs # @markdown Make sure you execute this cell to spot the Up-Down states! pars = default_pars_single(T=1000) pars['w'] = 5.0 sig_ou = 0.7 pars['tau_ou'] = 1. # [ms] pars['I_ext'] = 0.56 + my_OU(pars, sig=sig_ou, myseed=2020) r = simulate_single(pars) plt.figure(figsize=(10, 4)) plt.plot(pars['range_t'], r, 'b', alpha=0.8) plt.xlabel('t (ms)') plt.ylabel(r'$r(t)$') plt.show()
_____no_output_____
CC-BY-4.0
tutorials/W2D4_DynamicNetworks/student/W2D4_Tutorial1.ipynb
carsen-stringer/course-content
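One quick way to quantify the Up-Down switching visible in the trace above is to threshold $r(t)$ halfway between the two stable states and count sign changes. A small sketch; the 0.5 threshold is an assumption based on where the two stable fixed points sit (near 0 and near 0.9):

```python
import numpy as np

# r is the simulated rate trace from the cell above
is_up = r > 0.5  # assumed threshold separating the Down and Up states
n_transitions = int(np.sum(np.abs(np.diff(is_up.astype(int)))))
print(f'Approximate number of Up/Down transitions: {n_transitions}')
```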
Refs: https://github.com/deep-learning-with-pytorch/dlwpt-code
import numpy as np import torch
_____no_output_____
MIT
Tutorial/.ipynb_checkpoints/c5_optimizers-checkpoint.ipynb
danhtaihoang/pytorch-deeplearning
Optimizers
x = [35.7, 55.9, 58.2, 81.9, 56.3, 48.9, 33.9, 21.8, 48.4, 60.4, 68.4] y = [0.5, 14.0, 15.0, 28.0, 11.0, 8.0, 3.0, -4.0, 6.0, 13.0, 21.0] x = torch.tensor(x) y = torch.tensor(y) #x = 0.1*x # normalize x_norm = 0.1*x def model(x, w, b): return w * x + b def loss_fn(y_p, y): squared_diffs = (y_p - y)**2 return squared_diffs.mean() import torch.optim as optim dir(optim) def training_loop(n_epochs, optimizer, params, x, y): for epoch in range(1, n_epochs + 1): y_p = model(x, *params) loss = loss_fn(y_p, y) ## reset gradients to zero optimizer.zero_grad() ## calculate gradients loss.backward() ## update params: params -= learning_rate * params.grad optimizer.step() if epoch % 500 == 0: print('Epoch %d, Loss %f' % (epoch, float(loss))) return params params = torch.tensor([1.0, 0.0], requires_grad=True) learning_rate = 1e-2 optimizer = optim.SGD([params], lr=learning_rate) training_loop(n_epochs = 5000, params = params, optimizer = optimizer, x = x_norm, y = y)
Epoch 500, Loss 7.860115 Epoch 1000, Loss 3.828538 Epoch 1500, Loss 3.092191 Epoch 2000, Loss 2.957698 Epoch 2500, Loss 2.933134 Epoch 3000, Loss 2.928648 Epoch 3500, Loss 2.927830 Epoch 4000, Loss 2.927679 Epoch 4500, Loss 2.927652 Epoch 5000, Loss 2.927647
MIT
Tutorial/.ipynb_checkpoints/c5_optimizers-checkpoint.ipynb
danhtaihoang/pytorch-deeplearning
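Because `training_loop` only touches the optimizer through `zero_grad()` and `step()`, any optimizer from `torch.optim` can be dropped in without changing the loop. A sketch rerunning the same fit with Adam; the learning rate of 1e-1 is an assumption that typically works for this toy problem, not a value from the original notebook:

```python
params = torch.tensor([1.0, 0.0], requires_grad=True)  # fresh parameters
optimizer = optim.Adam([params], lr=1e-1)              # Adam instead of SGD

training_loop(n_epochs=2000, params=params,
              optimizer=optimizer, x=x_norm, y=y)
```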
The basic nbpy_top_tweeters app. First, let's get connected with the Twitter API:
import os import tweepy auth = tweepy.AppAuthHandler( os.environ['TWITTER_API_TOKEN'], os.environ['TWITTER_API_SECRET'] ) api = tweepy.API(auth) api # import requests_cache # requests_cache.install_cache()
_____no_output_____
MIT
1-output.ipynb
fndari/nbpy-top-tweeters
At this point, we use the `search()` method to get a list of tweets matching the search term:
nbpy_tweets = api.search('#nbpy', count=100) len(nbpy_tweets)
_____no_output_____
MIT
1-output.ipynb
fndari/nbpy-top-tweeters
From the iterable of tweets we get the number of tweets per user by using a `collections.Counter` object:
from collections import Counter tweet_count_by_username = Counter(tweet.user.screen_name for tweet in nbpy_tweets) tweet_count_by_username
_____no_output_____
MIT
1-output.ipynb
fndari/nbpy-top-tweeters
At this point, we can calculate the top $n$ tweeters:
top_tweeters = tweet_count_by_username.most_common(20) top_tweeters
_____no_output_____
MIT
1-output.ipynb
fndari/nbpy-top-tweeters
And show a scoreboard with the winners:
for username, tweet_count in top_tweeters: print(f'@{username:20}{tweet_count:2d}')
_____no_output_____
MIT
1-output.ipynb
fndari/nbpy-top-tweeters
- We can see that, already with the "vanilla" notebook, we have some degree of interactivity simply by editing and running the code cell-by-cell rather than in one go --- From `repr()` output to rich output with `IPython.display`
import random tweet = random.choice(nbpy_tweets) tweet
_____no_output_____
MIT
1-output.ipynb
fndari/nbpy-top-tweeters
- The reprs of these objects are rich in information, but not very easy to explore
tweet.user
_____no_output_____
MIT
1-output.ipynb
fndari/nbpy-top-tweeters
The `IPython.display` module contains several classes that render rich output from objects in a cell's output
from IPython.display import *
_____no_output_____
MIT
1-output.ipynb
fndari/nbpy-top-tweeters
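For example, wrapping pieces of a tweet in those classes renders them as rich output instead of a bare repr. A small sketch using the `Markdown` and `HTML` classes on the `tweet` picked earlier (the `screen_name` and `text` attributes follow the tweepy `Status`/`User` objects used above):

```python
display(Markdown(f'**@{tweet.user.screen_name}** wrote:'))
display(HTML(f'<blockquote>{tweet.text}</blockquote>'))
```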