Upload
uploader = PostImages()
extras = []
try:
    with open(URLS_PATH, 'a') as f_urls:
        for entry in os.scandir(INPUT_DIR):
            if entry.is_file():
                image_url, extra = uploader.upload_file(entry.path)
                extras.append(extra)
                print(entry.name, image_url, '', sep='\n', file=f_urls)
                print(entry.name, image_url, extra)
    print('DONE!')
except KeyboardInterrupt:
    print('\nStopped!')
_____no_output_____
MIT
notebooks/uploader/postimages.ipynb
TheYoke/PngBin
------

Delete Uploaded Images [EXTRA]

> Change the cell below to code mode to run it.
for extra in extras:
    result = uploader.delete(extra['removal_link'])
    print(result)
_____no_output_____
MIT
notebooks/uploader/postimages.ipynb
TheYoke/PngBin
Three attributes of a tensor in PyTorch:

dtype - the data type stored in the tensor (float32, etc.)
device - where the tensor is processed (tensors that interact with each other must live on the same CPU/GPU)
layout - how the data is laid out in memory (don't touch this, the default setting is fine)

Creating tensors (just like in NumPy):
# identity tensor
print(torch.eye(3))
# zeros
torch.zeros(2,2)
# ones
torch.ones(2,2)
_____no_output_____
MIT
Books and Courses/PyTorch/1 - tensor creation, reshaping, squeezing, flattening.ipynb
FairlyTales/Machine_Learning_Courses
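The three attributes listed above can be inspected directly on any tensor; a minimal sketch (the tensor `t` here is just an illustrative example):

import torch

t = torch.eye(3)
print(t.dtype)   # torch.float32 - data type of the elements
print(t.device)  # cpu - where the tensor lives and is processed
print(t.layout)  # torch.strided - the default memory layout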
Creating tensors from data: factory functions have more tweaking parameters than the class constructor, so we will use them more often. They also infer the dtype (if they get data as int32, they save it as int32). Beware that torch.Tensor and torch.tensor CREATE a copy of the input data in memory, while torch.as_tensor and torch.from_numpy just mirror the existing data (so the latter are more memory-efficient, but if the original data changes, the tensor changes too; for torch.from_numpy the data MUST be a NumPy array). For casual use - torch.tensor(). For a speed boost - torch.as_tensor().
data = np.array([1,2,3])

t1 = torch.Tensor(data)      # class constructor
t2 = torch.tensor(data)      # factory function
t3 = torch.as_tensor(data)   # factory function
t4 = torch.from_numpy(data)  # factory function
_____no_output_____
MIT
Books and Courses/PyTorch/1 - tensor creation, reshaping, squeezing, flattening.ipynb
FairlyTales/Machine_Learning_Courses
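A quick way to see the copy-versus-mirror behaviour described above; a minimal sketch:

import numpy as np
import torch

data = np.array([1, 2, 3])
copied = torch.tensor(data)      # owns its own copy of the data
shared = torch.from_numpy(data)  # shares memory with the numpy array

data[0] = 99
print(copied)  # tensor([1, 2, 3]) - unaffected by the change
print(shared)  # tensor([99,  2,  3]) - mirrors the change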
Reshaping tensors, squeezing.

.reshape() or .view() (the same thing, just different names) - directly changes the dimensions of a tensor.
.squeeze() - removes all axes of length 1.
.unsqueeze() - adds an axis of length 1, which lets you change the rank of a tensor.
t = torch.tensor([
    [1,1,1,1],
    [2,2,2,2],
    [3,3,3,3]
], dtype=torch.float32)
t.shape

# number of elements check
print(torch.tensor(t.shape).prod())
print(t.numel())

print(t.reshape(2, 1, 2, 3))  # reshape to rank-4 tensor (from rank-2)
print(t.reshape(2, 6))
print(t.reshape(1,12))            # rank-2 tensor
print(t.reshape(1,12).squeeze())  # rank-1 tensor

# a little trick - if we don't know how many components the tensor we want to
# "stretch out" will have while keeping rank 2, we pass "1, -1" to .reshape
print(t.reshape(1,-1))
# if we want to turn it into a rank-1 tensor, we pass "-1"
print(t.reshape(-1))
tensor([[1., 1., 1., 1., 2., 2., 2., 2., 3., 3., 3., 3.]])
tensor([1., 1., 1., 1., 2., 2., 2., 2., 3., 3., 3., 3.])
tensor([[1., 1., 1., 1., 2., 2., 2., 2., 3., 3., 3., 3.]])
tensor([1., 1., 1., 1., 2., 2., 2., 2., 3., 3., 3., 3.])
MIT
Books and Courses/PyTorch/1 - tensor creation, reshaping, squeezing, flattening.ipynb
FairlyTales/Machine_Learning_Courses
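.unsqueeze() from the list above deserves its own tiny demo; a minimal sketch:

import torch

v = torch.tensor([1., 2., 3.])         # rank-1, shape [3]
print(v.unsqueeze(0).shape)            # torch.Size([1, 3]) - new axis in front
print(v.unsqueeze(1).shape)            # torch.Size([3, 1]) - new axis after
print(v.unsqueeze(0).squeeze().shape)  # torch.Size([3]) - squeeze undoes it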
Flattening, tensor size for a neural network.

.flatten() - lets you reshape a tensor of any rank into a rank-1 tensor. However, a neural network does not want completely "flat" data, because it has to tell the batch apart (which data belongs to which input sample). A network works with rank-4 tensors; here is what each axis represents (for the tensor below):

Batch - contains 3 images
Image - contains 1 color channel (because grayscale)
Color channel - contains 4 arrays (rows/height)
Each of the 4 arrays (rows/height) - contains 4 pixel values (columns/width)
# create 3 "images" 4x4 pixels, grayscale
t1 = torch.tensor([
    [1,1,1,1],
    [1,1,1,1],
    [1,1,1,1],
    [1,1,1,1]
])
t2 = torch.tensor([
    [2,2,2,2],
    [2,2,2,2],
    [2,2,2,2],
    [2,2,2,2]
])
t3 = torch.tensor([
    [3,3,3,3],
    [3,3,3,3],
    [3,3,3,3],
    [3,3,3,3]
])

# stack them to make a rank-3 tensor (the new axis represents the batch); now we have
# 3 "grayscale" pictures in the resulting tensor
t = torch.stack((t1, t2, t3))
t.shape

# add a new axis to this tensor to create a "batch" axis for discerning the "pictures"
t = t.reshape(3, 1, 4, 4)
t
_____no_output_____
MIT
Books and Courses/PyTorch/1 - tensor creation, reshaping, squeezing, flattening.ipynb
FairlyTales/Machine_Learning_Courses
How to use flatten. We can't flatten the whole tensor, because it would become a plain vector and we would no longer know where each "picture" starts and ends. So we flatten the tensor in a way that preserves the "batch" axis (flattening the color channel together with the height and width axes), which tells us which "picture" is which.
# bad idea - we get a vector with 48 pixels and no way to know which picture each pixel came from
a = t.flatten()
print('bad - ', a.shape)

# good idea - we get 3 images with 16 pixels each
t = t.flatten(start_dim=1)  # start flattening from axis=1 (it's an index)
print('good - ', t.shape)
bad -  torch.Size([48])
good -  torch.Size([3, 16])
MIT
Books and Courses/PyTorch/1 - tensor creation, reshaping, squeezing, flattening.ipynb
FairlyTales/Machine_Learning_Courses
Broadcasting for element-wise operations. Broadcasting transforms a lower-rank tensor to match the shape of the tensor with which we want to perform an element-wise operation. All element-wise operations work between tensors of different shapes thanks to broadcasting.
print(t.eq(1))  # equal 1
print(t.abs())  # absolute value
print(t + 3)
print(t.neg())
tensor([[ True,  True,  True,  True,  True,  True,  True,  True,  True,  True,  True,  True,  True,  True,  True,  True],
        [False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False],
        [False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False]])
tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
        [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2],
        [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3]])
tensor([[4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4],
        [5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5],
        [6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6]])
tensor([[-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1],
        [-2, -2, -2, -2, -2, -2, -2, -2, -2, -2, -2, -2, -2, -2, -2, -2],
        [-3, -3, -3, -3, -3, -3, -3, -3, -3, -3, -3, -3, -3, -3, -3, -3]])
MIT
Books and Courses/PyTorch/1 - tensor creation, reshaping, squeezing, flattening.ipynb
FairlyTales/Machine_Learning_Courses
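To make the broadcasting itself visible (rather than just its effect), here is a minimal sketch comparing shapes; torch.broadcast_to is available in recent PyTorch versions:

import torch

t = torch.ones(3, 4)                   # rank-2, shape [3, 4]
row = torch.tensor([1., 2., 3., 4.])   # rank-1, shape [4]

# 'row' is broadcast to shape [3, 4] before the element-wise addition
print((t + row).shape)                   # torch.Size([3, 4])
print(torch.broadcast_to(row, (3, 4)))   # what the broadcast operand looks like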
Reduction operations - .argmax(). A reduction operation on a tensor is an operation that reduces the number of elements contained within the tensor. We can use .mean(), .sum() and similar reductions; .argmax() returns the index of the max value inside the tensor.
t4 = torch.tensor([
    [0,1,0],
    [2,0,2],
    [0,3,0]
], dtype=torch.float32)
print(t4.shape)

print(t4.sum())             # sum all elements = reduce this tensor to one value
print(t4.sum(dim=0))
print(t4.sum(dim=0).shape)  # sum along axis 0 (column totals) = reduce this tensor to one axis

print('value = ', t4.max())        # MAX value
print('flatten = ', t4.flatten())  # flattened tensor (indices count from 0)
print('index = ', t4.argmax())     # index of the MAX value (yep, '3' has index=7)

print(t4.max(dim=0), '\n')  # max values on axis=0
print(t4.max(dim=1))        # first tensor - max values, second tensor - their indices

# if we want the results not as tensors, but as ints/floats/lists:
print(t4.mean())               # tensor
print(t4.mean().item(), '\n')  # float
print(t4.mean(dim=0))          # tensor
print(t4.mean(dim=0).tolist(), '\n')  # Python list of floats
tensor(0.8889)
0.8888888955116272

tensor([0.6667, 1.3333, 0.6667])
[0.6666666865348816, 1.3333333730697632, 0.6666666865348816]
MIT
Books and Courses/PyTorch/1 - tensor creation, reshaping, squeezing, flattening.ipynb
FairlyTales/Machine_Learning_Courses
Aim and motivation

The primary reason I have chosen to create this kernel is to practice and use RNNs for various tasks and applications, the first of which is time series data. RNNs have truly changed the way sequential data is forecasted. My goal here is to create the ultimate reference for RNNs here on kaggle.

Things to remember

* Please upvote (like button) and share this kernel if you like it. This would increase its visibility and more people will be able to learn about the awesomeness of RNNs.
* I will use keras for this kernel. If you are not familiar with keras or neural networks, refer to this kernel/tutorial of mine: https://www.kaggle.com/thebrownviking20/intro-to-keras-with-breast-cancer-data-ann
* Your doubts and curiosity about time series can be taken care of here: https://www.kaggle.com/thebrownviking20/everything-you-can-do-with-a-time-series
* Don't let the explanations intimidate you. It's simpler than you think.
* Eventually, I will add more applications of LSTMs. So stay tuned for more!
* The code is inspired by Kirill Eremenko's Deep Learning Course: https://www.udemy.com/deeplearning/

Recurrent Neural Networks

In a recurrent neural network we store the output activations from one or more of the layers of the network. Often these are hidden layer activations. Then, the next time we feed an input example to the network, we include the previously-stored outputs as additional inputs. You can think of the additional inputs as being concatenated to the end of the "normal" inputs to the previous layer. For example, if a hidden layer had 10 regular input nodes and 128 hidden nodes in the layer, then it would actually have 138 total inputs (assuming you are feeding the layer's outputs into itself à la Elman rather than into another layer). Of course, the very first time you try to compute the output of the network you'll need to fill in those extra 128 inputs with 0s or something.

Source: [Quora](https://www.quora.com/What-is-a-simple-explanation-of-a-recurrent-neural-network)
Source: [Medium](https://medium.com/ai-journal/lstm-gru-recurrent-neural-networks-81fe2bcdf1f9)

Let me give you the best explanation of Recurrent Neural Networks that I found on the internet: https://www.youtube.com/watch?v=UNmqTiOnRfg&t=3s

Now, even though RNNs are quite powerful, they suffer from the **vanishing gradient problem**, which hinders them from using long-term information: they are good at storing memory for 3-4 instances of past iterations, but a larger number of instances doesn't give good results. So we don't just use regular RNNs; instead, we use a better variation of RNNs: **Long Short Term Memory networks (LSTM)**.

What is the Vanishing Gradient problem?

The vanishing gradient problem is a difficulty found in training artificial neural networks with gradient-based learning methods and backpropagation. In such methods, each of the neural network's weights receives an update proportional to the partial derivative of the error function with respect to the current weight in each iteration of training. The problem is that in some cases, the gradient will be vanishingly small, effectively preventing the weight from changing its value. In the worst case, this may completely stop the neural network from further training. As one example of the problem's cause, traditional activation functions such as the hyperbolic tangent function have gradients in the range (0, 1), and backpropagation computes gradients by the chain rule. This has the effect of multiplying n of these small numbers to compute gradients of the "front" layers in an n-layer network, meaning that the gradient (error signal) decreases exponentially with n while the front layers train very slowly.

Source: [Wikipedia](https://en.wikipedia.org/wiki/Vanishing_gradient_problem)
Source: [Medium](https://medium.com/@anishsingh20/the-vanishing-gradient-problem-48ae7f501257)

Long Short Term Memory (LSTM)

Long short-term memory (LSTM) units (or blocks) are a building unit for layers of a recurrent neural network (RNN). An RNN composed of LSTM units is often called an LSTM network. A common LSTM unit is composed of a cell, an input gate, an output gate and a forget gate. The cell is responsible for "remembering" values over arbitrary time intervals; hence the word "memory" in LSTM. Each of the three gates can be thought of as a "conventional" artificial neuron, as in a multi-layer (or feedforward) neural network: that is, they compute an activation (using an activation function) of a weighted sum. Intuitively, they can be thought of as regulators of the flow of values that goes through the connections of the LSTM; hence the denotation "gate". There are connections between these gates and the cell.

The expression long short-term refers to the fact that LSTM is a model for the short-term memory which can last for a long period of time. An LSTM is well-suited to classify, process and predict time series given time lags of unknown size and duration between important events. LSTMs were developed to deal with the exploding and vanishing gradient problem when training traditional RNNs.

Source: [Wikipedia](https://en.wikipedia.org/wiki/Long_short-term_memory)
Source: [Medium](https://codeburst.io/generating-text-using-an-lstm-network-no-libraries-2dff88a3968)

The best LSTM explanation on the internet: https://medium.com/deep-math-machine-learning-ai/chapter-10-1-deepnlp-lstm-long-short-term-memory-networks-with-math-21477f8e4235 Refer to the above link for deeper insights.

Components of LSTMs

The LSTM cell contains the following components:
* Forget Gate "f" (a neural network with sigmoid)
* Candidate layer "C̃" (a NN with tanh)
* Input Gate "I" (a NN with sigmoid)
* Output Gate "O" (a NN with sigmoid)
* Hidden state "H" (a vector)
* Memory state "C" (a vector)
* Inputs to the LSTM cell at any step are Xt (current input), Ht-1 (previous hidden state) and Ct-1 (previous memory state).
* Outputs from the LSTM cell are Ht (current hidden state) and Ct (current memory state)

Working of gates in LSTMs

First, the LSTM cell takes the previous memory state Ct-1 and does element-wise multiplication with the forget gate (f) to decide the present memory state Ct. If the forget gate value is 0, the previous memory state is completely forgotten; if the forget gate value is 1, the previous memory state is completely passed to the cell (remember, the f gate gives values between 0 and 1).

**Ct = Ct-1 * ft**

Calculating the new memory state:

**Ct = Ct + (It * C̃t)**

Now we calculate the output, with the output gate modulating the squashed memory state:

**Ht = Ot * tanh(Ct)**

And now we get to the code...

I will use LSTMs for predicting the price of stocks of IBM for the year 2017
# Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from keras.models import Sequential
from keras.layers import Dense, LSTM, Dropout, GRU, Bidirectional
from keras.optimizers import SGD
import math
from sklearn.metrics import mean_squared_error

# Some functions to help out with
def plot_predictions(test, predicted):
    plt.plot(test, color='red', label='Real IBM Stock Price')
    plt.plot(predicted, color='blue', label='Predicted IBM Stock Price')
    plt.title('IBM Stock Price Prediction')
    plt.xlabel('Time')
    plt.ylabel('IBM Stock Price')
    plt.legend()
    plt.show()

def return_rmse(test, predicted):
    rmse = math.sqrt(mean_squared_error(test, predicted))
    print("The root mean squared error is {}.".format(rmse))

# First, we get the data
dataset = pd.read_csv('./data/stocks/IBM_2006-01-01_to_2018-01-01.csv', index_col='Date', parse_dates=['Date'])
dataset.head()

# Checking for missing values
training_set = dataset[:'2016'].iloc[:,1:2].values
test_set = dataset['2017':].iloc[:,1:2].values

# We have chosen the 'High' attribute for prices. Let's see what it looks like
dataset["High"][:'2016'].plot(figsize=(16,4), legend=True)
dataset["High"]['2017':].plot(figsize=(16,4), legend=True)
plt.legend(['Training set (Before 2017)', 'Test set (2017 and beyond)'])
plt.title('IBM stock price')
plt.show()

# Scaling the training set
sc = MinMaxScaler(feature_range=(0,1))
training_set_scaled = sc.fit_transform(training_set)

# Since LSTMs store long term memory state, we create a data structure with 60 timesteps and 1 output
# So for each element of the training set, we have the 60 previous training set elements
X_train = []
y_train = []
for i in range(60, 2769):
    X_train.append(training_set_scaled[i-60:i, 0])
    y_train.append(training_set_scaled[i, 0])
X_train, y_train = np.array(X_train), np.array(y_train)

# Reshaping X_train for efficient modelling
X_train = np.reshape(X_train, (X_train.shape[0], X_train.shape[1], 1))

# The LSTM architecture
regressor = Sequential()
# First LSTM layer with Dropout regularisation
regressor.add(LSTM(units=50, return_sequences=True, input_shape=(X_train.shape[1], 1)))
regressor.add(Dropout(0.2))
# Second LSTM layer
regressor.add(LSTM(units=50, return_sequences=True))
regressor.add(Dropout(0.2))
# Third LSTM layer
regressor.add(LSTM(units=50, return_sequences=True))
regressor.add(Dropout(0.2))
# Fourth LSTM layer
regressor.add(LSTM(units=50))
regressor.add(Dropout(0.2))
# The output layer
regressor.add(Dense(units=1))

# Compiling the RNN
regressor.compile(optimizer='rmsprop', loss='mean_squared_error')
# Fitting to the training set
regressor.fit(X_train, y_train, epochs=50, batch_size=32)

# Now to get the test set ready in a similar way as the training set.
# The following has been done so the first 60 entries of the test set have 60 previous values,
# which is impossible to get unless we take the whole 'High' attribute data for processing
dataset_total = pd.concat((dataset["High"][:'2016'], dataset["High"]['2017':]), axis=0)
inputs = dataset_total[len(dataset_total)-len(test_set) - 60:].values
inputs = inputs.reshape(-1,1)
inputs = sc.transform(inputs)

# Preparing X_test and predicting the prices
X_test = []
for i in range(60, 311):
    X_test.append(inputs[i-60:i, 0])
X_test = np.array(X_test)
X_test = np.reshape(X_test, (X_test.shape[0], X_test.shape[1], 1))
predicted_stock_price = regressor.predict(X_test)
predicted_stock_price = sc.inverse_transform(predicted_stock_price)

# Visualizing the results for LSTM
plot_predictions(test_set, predicted_stock_price)

# Evaluating our model
return_rmse(test_set, predicted_stock_price)
_____no_output_____
MIT
05-machine-learning-nao-tabular/00-tabular-auto-correlacionado/rnns.ipynb
abefukasawa/datascience_course
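To connect the gate formulas above with actual computation, here is a minimal single-step LSTM cell sketch in plain NumPy. This is not the Keras implementation; all weight names and sizes are illustrative assumptions:

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    # One LSTM time step; W, U, b each hold the four gate parameter sets
    f = sigmoid(W['f'] @ x_t + U['f'] @ h_prev + b['f'])        # forget gate
    i = sigmoid(W['i'] @ x_t + U['i'] @ h_prev + b['i'])        # input gate
    o = sigmoid(W['o'] @ x_t + U['o'] @ h_prev + b['o'])        # output gate
    c_tilde = np.tanh(W['c'] @ x_t + U['c'] @ h_prev + b['c'])  # candidate layer
    c_t = f * c_prev + i * c_tilde  # new memory state: Ct = Ct-1*ft + It*C̃t
    h_t = o * np.tanh(c_t)          # new hidden state: Ht = Ot * tanh(Ct)
    return h_t, c_t

# Tiny smoke test with random parameters (hidden size 4, input size 3)
rng = np.random.default_rng(0)
W = {k: rng.normal(size=(4, 3)) for k in 'fioc'}
U = {k: rng.normal(size=(4, 4)) for k in 'fioc'}
b = {k: np.zeros(4) for k in 'fioc'}
h, c = lstm_step(rng.normal(size=3), np.zeros(4), np.zeros(4), W, U, b)
print(h.shape, c.shape)  # (4,) (4,)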
Truth be told, that's one awesome score. LSTM is not the only kind of unit that has taken the world of Deep Learning by storm; we also have **Gated Recurrent Units (GRU)**. It's not known which is better, GRU or LSTM, because they have comparable performance. GRUs are easier to train than LSTMs.

Gated Recurrent Units

In simple words, the GRU unit does not have to use a memory unit to control the flow of information like the LSTM unit does. It can directly make use of all hidden states without any control. GRUs have fewer parameters and thus may train a bit faster or need less data to generalize. But, with large data, LSTMs with higher expressiveness may lead to better results.

They are almost similar to LSTMs except that they have two gates: a reset gate and an update gate. The reset gate determines how to combine new input with previous memory, and the update gate determines how much of the previous state to keep. The update gate in a GRU is what the input gate and forget gate were in an LSTM. GRUs don't have the second non-linearity before calculating the output, nor do they have an output gate.

Source: [Quora](https://www.quora.com/Whats-the-difference-between-LSTM-and-GRU-Why-are-GRU-efficient-to-train)
# The GRU architecture
regressorGRU = Sequential()
# First GRU layer with Dropout regularisation
regressorGRU.add(GRU(units=50, return_sequences=True, input_shape=(X_train.shape[1],1), activation='tanh'))
regressorGRU.add(Dropout(0.2))
# Second GRU layer
regressorGRU.add(GRU(units=50, return_sequences=True, input_shape=(X_train.shape[1],1), activation='tanh'))
regressorGRU.add(Dropout(0.2))
# Third GRU layer
regressorGRU.add(GRU(units=50, return_sequences=True, input_shape=(X_train.shape[1],1), activation='tanh'))
regressorGRU.add(Dropout(0.2))
# Fourth GRU layer
regressorGRU.add(GRU(units=50, activation='tanh'))
regressorGRU.add(Dropout(0.2))
# The output layer
regressorGRU.add(Dense(units=1))
# Compiling the RNN
regressorGRU.compile(optimizer=SGD(lr=0.01, decay=1e-7, momentum=0.9, nesterov=False), loss='mean_squared_error')
# Fitting to the training set
regressorGRU.fit(X_train, y_train, epochs=50, batch_size=150)
_____no_output_____
MIT
05-machine-learning-nao-tabular/00-tabular-auto-correlacionado/rnns.ipynb
abefukasawa/datascience_course
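As with the LSTM above, the reset/update gate description can be made concrete with a one-step NumPy sketch; the parameter layout mirrors the LSTM sketch and is illustrative, not the Keras internals:

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def gru_step(x_t, h_prev, W, U, b):
    z = sigmoid(W['z'] @ x_t + U['z'] @ h_prev + b['z'])  # update gate
    r = sigmoid(W['r'] @ x_t + U['r'] @ h_prev + b['r'])  # reset gate
    # reset gate decides how much of the previous memory enters the candidate
    h_tilde = np.tanh(W['h'] @ x_t + U['h'] @ (r * h_prev) + b['h'])
    # update gate interpolates between keeping the old state and the candidate
    return (1 - z) * h_prev + z * h_tilde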
The current version uses a denser GRU network with 100 units, as opposed to the GRU network with 50 units in the previous version.
# Preparing X_test and predicting the prices
X_test = []
for i in range(60, 311):
    X_test.append(inputs[i-60:i, 0])
X_test = np.array(X_test)
X_test = np.reshape(X_test, (X_test.shape[0], X_test.shape[1], 1))
GRU_predicted_stock_price = regressorGRU.predict(X_test)
GRU_predicted_stock_price = sc.inverse_transform(GRU_predicted_stock_price)

# Visualizing the results for GRU
plot_predictions(test_set, GRU_predicted_stock_price)

# Evaluating GRU
return_rmse(test_set, GRU_predicted_stock_price)
_____no_output_____
MIT
05-machine-learning-nao-tabular/00-tabular-auto-correlacionado/rnns.ipynb
abefukasawa/datascience_course
Sequence Generation

Here, I will generate a sequence using just the initial 60 values, instead of using the last 60 true values for every new prediction. **Due to doubts in various comments about predictions making use of test set values, I have decided to include sequence generation.** The above models make use of the test set, so each new value is predicted from the last 60 true values (I will call it a benchmark). This is why the error is so low. Strong models can bring similar results for sequences too, but they require more than just data which has previous values. In the case of stocks, we need to know the sentiments of the market, the movement of other stocks and a lot more. So, don't expect a remotely accurate plot. The error will be great and the best I can do is generate a trend similar to the test set.

I will use the GRU model for predictions. You can try this using LSTMs also. I have modified the GRU model above to get the best sequence possible. I have run the model four times; twice I got an error of around 8 to 9, and the worst case had an error of around 11. Let's see what this iteration brings. The GRU model in the previous versions is fine too; just a little tweaking was required to get good sequences.

**The main goal of this kernel is to show how to build RNN models. How you predict data and what kind of data you predict is up to you. I can't give you some 100 lines of code where you put the destination of training and test set and get world-class results. That's something you have to do yourself.**
# Preparing sequence data
initial_sequence = X_train[2708,:]
sequence = []
for i in range(251):
    new_prediction = regressorGRU.predict(initial_sequence.reshape(initial_sequence.shape[1], initial_sequence.shape[0], 1))
    initial_sequence = initial_sequence[1:]
    initial_sequence = np.append(initial_sequence, new_prediction, axis=0)
    sequence.append(new_prediction)
sequence = sc.inverse_transform(np.array(sequence).reshape(251,1))

# Visualizing the sequence
plot_predictions(test_set, sequence)

# Evaluating the sequence
return_rmse(test_set, sequence)
_____no_output_____
MIT
05-machine-learning-nao-tabular/00-tabular-auto-correlacionado/rnns.ipynb
abefukasawa/datascience_course
<h1 style="padding-top: 25px;padding-bottom: 25px;text-align: left; padding-left: 10px; background-color: DDDDDD; color: black;">AC215: Advanced Practical Data Science, MLOps</h1>

**Exercise 1 - Dask**

**Harvard University**
**Fall 2021**
**Instructor:** Pavlos Protopapas
**Students:** Jiahui Tang, Max Li

**Setup Notebook**

**Copy & setup Colab**

1) Select "File" menu and pick "Save a copy in Drive"

**Installs**
!pip install dask dask[dataframe] dask-image
Requirement already satisfied: dask in /usr/local/lib/python3.7/dist-packages (2.12.0)
Requirement already satisfied: dask-image in /usr/local/lib/python3.7/dist-packages (0.6.0)
Requirement already satisfied: scipy>=0.19.1 in /usr/local/lib/python3.7/dist-packages (from dask-image) (1.4.1)
Requirement already satisfied: pims>=0.4.1 in /usr/local/lib/python3.7/dist-packages (from dask-image) (0.5)
Requirement already satisfied: numpy>=1.11.3 in /usr/local/lib/python3.7/dist-packages (from dask-image) (1.19.5)
Requirement already satisfied: toolz>=0.7.3 in /usr/local/lib/python3.7/dist-packages (from dask) (0.11.1)
Requirement already satisfied: slicerator>=0.9.8 in /usr/local/lib/python3.7/dist-packages (from pims>=0.4.1->dask-image) (1.0.0)
Requirement already satisfied: six>=1.8 in /usr/local/lib/python3.7/dist-packages (from pims>=0.4.1->dask-image) (1.15.0)
Requirement already satisfied: fsspec>=0.6.0 in /usr/local/lib/python3.7/dist-packages (from dask) (2021.8.1)
Requirement already satisfied: pandas>=0.23.0 in /usr/local/lib/python3.7/dist-packages (from dask) (1.1.5)
Requirement already satisfied: partd>=0.3.10 in /usr/local/lib/python3.7/dist-packages (from dask) (1.2.0)
Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.7/dist-packages (from pandas>=0.23.0->dask) (2.8.2)
Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.7/dist-packages (from pandas>=0.23.0->dask) (2018.9)
Requirement already satisfied: locket in /usr/local/lib/python3.7/dist-packages (from partd>=0.3.10->dask) (0.2.1)
MIT
Exercise/exercise_1.ipynb
TangJiahui/AC215-Advanced_Practical_Data_Science
**Imports**
import os
import requests
import zipfile
import tarfile
import shutil
import math
import json
import time
import sys
import numpy as np
import pandas as pd

# Dask
import dask
import dask.dataframe as dd
import dask.array as da
from dask.diagnostics import ProgressBar
_____no_output_____
MIT
Exercise/exercise_1.ipynb
TangJiahui/AC215-Advanced_Practical_Data_Science
**Utils**

Here are some util functions that we will be using for this exercise.
def download_file(packet_url, base_path="", extract=False, headers=None):
    if base_path != "":
        if not os.path.exists(base_path):
            os.mkdir(base_path)
    packet_file = os.path.basename(packet_url)
    with requests.get(packet_url, stream=True, headers=headers) as r:
        r.raise_for_status()
        with open(os.path.join(base_path, packet_file), 'wb') as f:
            for chunk in r.iter_content(chunk_size=8192):
                f.write(chunk)

    if extract:
        if packet_file.endswith(".zip"):
            with zipfile.ZipFile(os.path.join(base_path, packet_file)) as zfile:
                zfile.extractall(base_path)
        else:
            packet_name = packet_file.split('.')[0]
            with tarfile.open(os.path.join(base_path, packet_file)) as tfile:
                tfile.extractall(base_path)
_____no_output_____
MIT
Exercise/exercise_1.ipynb
TangJiahui/AC215-Advanced_Practical_Data_Science
**Dataset**

**Load Data**
start_time = time.time()
download_file(
    "https://github.com/dlops-io/datasets/releases/download/v1.0/Parking_Violations_Issued_-_Fiscal_Year_2017.csv.zip",
    base_path="datasets",
    extract=True
)
execution_time = (time.time() - start_time)/60.0
print("Download execution time (mins)", execution_time)

parking_violation_csv = os.path.join("datasets", "Parking_Violations_Issued_-_Fiscal_Year_2017.csv")
_____no_output_____
MIT
Exercise/exercise_1.ipynb
TangJiahui/AC215-Advanced_Practical_Data_Science
Q1: Compute Pi with a Slowly Converging Series

Leibniz published one of the oldest known series in 1676. While this is easy to understand and derive, it converges very slowly. https://en.wikipedia.org/wiki/Leibniz_formula_for_%CF%80

$$\frac{\pi}{4} = 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots$$

While this is a genuinely cruel way to compute the value of $\pi$, it's a fun opportunity to use brute force on a problem instead of thinking. Compute it using at least four billion terms of this sequence. Compare the time taken with numpy and with dask. On my mac, with numpy this took 44 seconds and with dask it took 5.7 seconds.

*Hint:* Use a dask array.

**Checking 1e9 * 4 terms with numpy**

If 1e9 * 4 fails, try 1e9 * 2 or increase memory.
# Your code here
start_time = time.time()

k = int(1e9*2)
positive_sum = np.sum(1/np.arange(1, k, 4))
negative_sum = np.sum(-1/np.arange(3, k, 4))
pi_computed = (positive_sum + negative_sum) * 4

execution_time = time.time() - start_time

# Error
error = np.abs(pi_computed - np.pi)

# Report Results
print(f'Pi real value = {np.pi:14.12f}')
print(f'Pi computed value = {pi_computed:14.12f}')
print(f'Error = {error:6.3e}')
print("Numpy execution time (sec)", execution_time)
Pi real value =  3.141592653590
Pi computed value =  3.141592652590
Error = 9.998e-10
Numpy execution time (sec) 8.094965696334839
MIT
Exercise/exercise_1.ipynb
TangJiahui/AC215-Advanced_Practical_Data_Science
**Checking 1e9 * 4 terms with Dask**
# Your code here
start_time = time.time()

k = int(1e9*2)
positive_sum_da = da.sum(1/da.arange(1, k, 4)).compute()
negative_sum_da = da.sum(-1/da.arange(3, k, 4)).compute()
step3_pi = (positive_sum_da + negative_sum_da) * 4

execution_time = time.time() - start_time
error = np.abs(step3_pi - np.pi)

# Report Results
print(f'Pi real value = {np.pi:14.12f}')
print(f'Pi computed value = {step3_pi:14.12f}')
print(f'Error = {error:6.3e}')
print("Dask Array execution time (sec)", execution_time)
Pi real value =  3.141592653590
Pi computed value =  3.141592652590
Error = 1.000e-09
Dask Array execution time (sec) 4.978763103485107
MIT
Exercise/exercise_1.ipynb
TangJiahui/AC215-Advanced_Practical_Data_Science
Filter Parking Tickets Dataset

According to the parking tickets data set documentation, the column called 'Plate Type' consists mainly of two different types, 'PAS' and 'COM'; presumably for passenger and commercial vehicles, respectively. Maybe the rest are the famous parking tickets from the UN diplomats, who take advantage of diplomatic immunity not to pay their fines.

Create a filtered Dask DataFrame with only the commercial plates. Persist it, so it is available in memory for future computations. Count the number of summonses in 2017 (i.e., Issue Year in 2016, 2017) issued to commercial plate types. Compute them as a percentage of the total data set.

*Hint*: This is easy; it is only about 5-7 lines of code.
dict_1 = {
    'Summons Number': 'int64', 'Plate ID': 'object', 'Registration State': 'object',
    'Plate Type': 'object', 'Issue Date': 'object', 'Violation Code': 'int64',
    'Vehicle Body Type': 'object', 'Vehicle Make': 'object', 'Issuing Agency': 'object',
    'Street Code1': 'int64', 'Street Code2': 'int64', 'Street Code3': 'int64',
    'Vehicle Expiration Date': 'int64', 'Violation Location': 'float64',
    'Violation Precinct': 'int64', 'Issuer Precinct': 'int64', 'Issuer Code': 'int64',
    'Issuer Command': 'object', 'Issuer Squad': 'object', 'Violation Time': 'object',
    'Time First Observed': 'object', 'Violation County': 'object',
    'Violation In Front Of Or Opposite': 'object', 'House Number': 'object',
    'Street Name': 'object', 'Intersecting Street': 'object', 'Date First Observed': 'int64',
    'Law Section': 'int64', 'Sub Division': 'object', 'Violation Legal Code': 'object',
    'Days Parking In Effect ': 'object', 'From Hours In Effect': 'object',
    'To Hours In Effect': 'object', 'Vehicle Color': 'object',
    'Unregistered Vehicle?': 'float64', 'Vehicle Year': 'int64', 'Meter Number': 'object',
    'Feet From Curb': 'int64', 'Violation Post Code': 'object',
    'Violation Description': 'object', 'No Standing or Stopping Violation': 'float64',
    'Hydrant Violation': 'float64', 'Double Parking Violation': 'float64'
}

# Explicit dtypes above avoid the DtypeWarning on read
df = dd.read_csv(parking_violation_csv, dtype=dict_1)
df.head()

# info about the Dask Dataframe
print('<index where df has been split>', df.divisions)
print('<# partitions>', df.npartitions)

# filter entries in the dask dataframe with COM plates and persist the result
action_filtered = df[df['Plate Type'] == 'COM'].persist()

# df with the number of summonses in 2017 (i.e., Issue Year in 2016, 2017)
summonses_value_df = action_filtered[action_filtered['Issue Date'].str.contains('2016|2017')]

# after repartitioning the dataframe into one partition, count the summonses
action_filtered_reduced = summonses_value_df.repartition(npartitions=1)
commercial_2017_count = action_filtered_reduced.map_partitions(len).compute()

# Compute them as a percentage of the total data set
df_size = df.index.size
commercial_2017_percent = ((commercial_2017_count/df_size)*100)
num_commercial_2017 = int(commercial_2017_count)
pct_commercial = int(commercial_2017_percent)

# Percentage relative to all the parking tickets in 2017
print(f'Number of NYC summonses with commercial plates in 2017 was {num_commercial_2017}')
print(f'Percentage {pct_commercial:5.2f}%')
Number of NYC summonses with commercial plates in 2017 was 1838970
Percentage 17.00%
MIT
Exercise/exercise_1.ipynb
TangJiahui/AC215-Advanced_Practical_Data_Science
kNN
k = 4
KNN_model = KNeighborsClassifier(n_neighbors=k)
KNN_model.fit(X_train, y_train)
KNN_prediction = KNN_model.predict(X_test)

cm, acc, f1, macro_acc, classwise_acc = eval_metrics(y_test, KNN_prediction)
print(f"Overall Accuracy Score: {acc}")
print(f"Macro Accuracy: {macro_acc}")
print(f"Class-wise accuracy: \n{classwise_acc}")

l = labels.values()
ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=l).plot()
Overall Accuracy Score: 0.4154929577464789
Macro Accuracy: 0.42229983219064593
Class-wise accuracy: 
[[0.8079096  0.02824859 0.05084746 0.05084746 0.01694915 0.04519774]
 [0.16071429 0.46428571 0.03571429 0.07142857 0.14285714 0.125     ]
 [0.35483871 0.29032258 0.08064516 0.10080645 0.13306452 0.04032258]
 [0.07462687 0.28358209 0.1641791  0.14925373 0.32835821 0.        ]
 [0.10897436 0.35897436 0.08333333 0.12820513 0.28846154 0.03205128]
 [0.06756757 0.12837838 0.00675676 0.02702703 0.02702703 0.74324324]]
MIT
benchmark-results/6class_results/FeatureSet2.ipynb
VedantKalbag/metal-vocal-vataset
SVM
import matplotlib.pyplot as plt

SVM_model = SVC(gamma='scale', C=1.0533, kernel='poly', degree=2, coef0=2.1, random_state=42)
SVM_model.fit(X_train, y_train)
SVM_prediction = SVM_model.predict(X_test)

cm, acc, f1, macro_acc, classwise_acc = eval_metrics(y_test, SVM_prediction)
print(f"Overall Accuracy Score: {acc}")
print(f"Macro Accuracy: {macro_acc}")
print(f"Class-wise accuracy: \n{classwise_acc}")
print(f"F1 score: {f1}")

l = labels.values()
# ConfusionMatrixDisplay(confusion_matrix=cm,display_labels=l).plot()
l = ['Sing', 'High Fry Scream', 'Layered Screams', 'Low Fry Screams', 'Mid Fry Screams', 'No Vocal']
cm_classwise = np.array([
    [0.84180791, 0.02824859, 0.03954802, 0.01694915, 0.02824859, 0.04519774],
    [0.08928571, 0.46428571, 0.08928571, 0.03571429, 0.17857143, 0.14285714],
    [0.31451613, 0.29032258, 0.08870968, 0.09677419, 0.15322581, 0.05645161],
    [0.02985075, 0.2238806 , 0.08955224, 0.1641791 , 0.49253731, 0.        ],
    [0.12820513, 0.30128205, 0.08333333, 0.07051282, 0.37179487, 0.04487179],
    [0.05405405, 0.07432432, 0.00675676, 0.02027027, 0.02027027, 0.82432432]
])

# fig, ax = plt.subplots()
disp = ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=l)
disp.plot(xticks_rotation=45)
plt.rcParams.update({'font.size': 18})
fig = plt.gcf()
fig.set_size_inches(10, 8)
plt.tight_layout()
disp.ax_.set_xticklabels(l, ha='right')
# plt.savefig('/Users/vedant/Desktop/Programming/ScreamDetection/charts/svm-vggish-6class.pdf')
cm/cm.sum(axis=1)
_____no_output_____
MIT
benchmark-results/6class_results/FeatureSet2.ipynb
VedantKalbag/metal-vocal-vataset
RF
RF_model = RandomForestClassifier(n_estimators=90, criterion='gini', max_depth=None,
                                  min_samples_split=2, min_samples_leaf=1, max_features='auto',
                                  max_leaf_nodes=None, class_weight='balanced', random_state=42)
RF_model.fit(X_train, y_train)
RF_prediction = RF_model.predict(X_test)

cm, acc, f1, macro_acc, classwise_acc = eval_metrics(y_test, RF_prediction)
print(f"Overall Accuracy Score: {acc}")
print(f"Macro Accuracy: {macro_acc}")
print(f"F1 score: {f1}")
print(f"Class-wise accuracy: \n{classwise_acc}")

l = labels.values()
ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=l).plot()
cm/cm.sum(axis=1)
_____no_output_____
MIT
benchmark-results/6class_results/FeatureSet2.ipynb
VedantKalbag/metal-vocal-vataset
Introduction to Xarray

* **Acknowledgement**: This notebook was originally created by [Digital Earth Australia (DEA)](https://www.ga.gov.au/about/projects/geographic/digital-earth-australia) and has been modified for use in the EY Data Science Program
* **Prerequisites**: Users of this notebook should have a basic understanding of:
  * How to run a [Jupyter notebook](01_Jupyter_notebooks.ipynb)
  * How to work with [Numpy](07_Intro_to_numpy.ipynb)

Background

`Xarray` is a python library which simplifies working with labelled multi-dimensional arrays. `Xarray` introduces labels in the form of dimensions, coordinates and attributes on top of raw `numpy` arrays, allowing for more intuitive and concise development. More information about `xarray` data structures and functions can be found [here](http://xarray.pydata.org/en/stable/).

Once you've completed this notebook, you may be interested in advancing your `xarray` skills further; this [external notebook](https://rabernat.github.io/research_computing/xarray.html) introduces more uses of `xarray` and may help you advance your skills further.

Description

This notebook is designed to introduce users to `xarray` using Python code in Jupyter Notebooks via JupyterLab.

Topics covered include:

* How to use `xarray` functions in a Jupyter Notebook cell
* How to access `xarray` dimensions and metadata
* Using indexing to explore multi-dimensional `xarray` data
* Application of built-in `xarray` functions such as sum, std and mean

***

Getting started

To run this notebook, run all the cells in the notebook starting with the "Load packages" cell. For help with running notebook cells, refer back to the [Jupyter Notebooks notebook](01_Jupyter_notebooks.ipynb).

Load packages
%matplotlib inline

import matplotlib.pyplot as plt
import numpy as np
import xarray as xr
_____no_output_____
MIT
notebooks/01_Beginners_guide/08_Intro_to_xarray.ipynb
miguelalejo/2021-Better-Working-World-Data-Challenge
Introduction to xarray

DEA uses `xarray` as its core data model. To better understand what it is, let's first do a simple experiment using a combination of plain `numpy` arrays and Python dictionaries.

Suppose we have a satellite image with three bands: `Red`, `NIR` and `SWIR`. These bands are represented as 2-dimensional `numpy` arrays and the latitude and longitude coordinates for each dimension are represented using 1-dimensional arrays. Finally, we also have some metadata that comes with this image. The code below creates fake satellite data and structures the data as a `dictionary`.
# Create fake satellite data
red = np.random.rand(250, 250)
nir = np.random.rand(250, 250)
swir = np.random.rand(250, 250)

# Create some lats and lons
lats = np.linspace(-23.5, -26.0, num=red.shape[0], endpoint=False)
lons = np.linspace(110.0, 112.5, num=red.shape[1], endpoint=False)

# Create metadata
title = "Image of the desert"
date = "2019-11-10"

# Stack into a dictionary
image = {
    "red": red,
    "nir": nir,
    "swir": swir,
    "latitude": lats,
    "longitude": lons,
    "title": title,
    "date": date,
}
_____no_output_____
MIT
notebooks/01_Beginners_guide/08_Intro_to_xarray.ipynb
miguelalejo/2021-Better-Working-World-Data-Challenge
All our data is conveniently packed in a dictionary. Now we can use this dictionary to work with the data it contains:
# Date of satellite image
print(image["date"])

# Mean of red values
image["red"].mean()
2019-11-10
MIT
notebooks/01_Beginners_guide/08_Intro_to_xarray.ipynb
miguelalejo/2021-Better-Working-World-Data-Challenge
Still, to select data we have to use `numpy` indexes. Wouldn't it be convenient to be able to select data from the images using the coordinates of the pixels instead of their relative positions? This is exactly what `xarray` solves! Let's see how it works:

To explore `xarray` we have a file containing some surface reflectance data extracted from the DEA platform. The object that we get, `ds`, is an `xarray` `Dataset`, which in some ways is very similar to the dictionary that we created before, but with lots of convenient functionality available.
ds = xr.open_dataset("../Supplementary_data/08_Intro_to_xarray/example_netcdf.nc")
ds
_____no_output_____
MIT
notebooks/01_Beginners_guide/08_Intro_to_xarray.ipynb
miguelalejo/2021-Better-Working-World-Data-Challenge
Xarray dataset structure

A `Dataset` can be seen as a dictionary structure packing up the data, dimensions and attributes. Variables in a `Dataset` object are called `DataArrays` and they share dimensions with the higher level `Dataset`. The figure below provides an illustrative example.

To access a variable we can access it as if it were a Python dictionary, or use the `.` notation, which is more convenient.
ds["green"] # Or alternatively: ds.green
_____no_output_____
MIT
notebooks/01_Beginners_guide/08_Intro_to_xarray.ipynb
miguelalejo/2021-Better-Working-World-Data-Challenge
Dimensions are also stored as numeric arrays that we can easily access:
ds["time"] # Or alternatively: ds.time
_____no_output_____
MIT
notebooks/01_Beginners_guide/08_Intro_to_xarray.ipynb
miguelalejo/2021-Better-Working-World-Data-Challenge
Metadata is referred to as attributes and is internally stored under `.attrs`, but the same convenient `.` notation applies to them.
ds.attrs["crs"] # Or alternatively: ds.crs
_____no_output_____
MIT
notebooks/01_Beginners_guide/08_Intro_to_xarray.ipynb
miguelalejo/2021-Better-Working-World-Data-Challenge
`DataArrays` store their data internally as multidimensional `numpy` arrays. But these arrays contain dimensions or labels that make it easier to handle the data. To access the underlying numpy array of a `DataArray` we can use the `.values` notation.
arr = ds.green.values
type(arr), arr.shape
_____no_output_____
MIT
notebooks/01_Beginners_guide/08_Intro_to_xarray.ipynb
miguelalejo/2021-Better-Working-World-Data-Challenge
Indexing

`Xarray` offers two different ways of selecting data. This includes the `isel()` approach, where data can be selected based on its index (like `numpy`).
print(ds.time.values)

ss = ds.green.isel(time=0)
ss
['2018-01-03T08:31:05.000000000' '2018-01-08T08:34:01.000000000'
 '2018-01-13T08:30:41.000000000' '2018-01-18T08:30:42.000000000'
 '2018-01-23T08:33:58.000000000' '2018-01-28T08:30:20.000000000'
 '2018-02-07T08:30:53.000000000' '2018-02-12T08:31:43.000000000'
 '2018-02-17T08:23:09.000000000' '2018-02-17T08:35:40.000000000'
 '2018-02-22T08:34:52.000000000' '2018-02-27T08:31:36.000000000']
MIT
notebooks/01_Beginners_guide/08_Intro_to_xarray.ipynb
miguelalejo/2021-Better-Working-World-Data-Challenge
Or the `sel()` approach, used for selecting data based on its dimension label value.
ss = ds.green.sel(time="2018-01-08")
ss
_____no_output_____
MIT
notebooks/01_Beginners_guide/08_Intro_to_xarray.ipynb
miguelalejo/2021-Better-Working-World-Data-Challenge
Slicing data is also used to select a subset of data.
ss.x.values[100]

ss = ds.green.sel(time="2018-01-08", x=slice(2378390, 2380390))
ss
_____no_output_____
MIT
notebooks/01_Beginners_guide/08_Intro_to_xarray.ipynb
miguelalejo/2021-Better-Working-World-Data-Challenge
Xarray exposes lots of functions to easily transform and analyse `Datasets` and `DataArrays`. For example, to calculate the spatial mean, standard deviation or sum of the green band:
print("Mean of green band:", ds.green.mean().values) print("Standard deviation of green band:", ds.green.std().values) print("Sum of green band:", ds.green.sum().values)
Mean of green band: 4141.488778766468
Standard deviation of green band: 3775.5536474649584
Sum of green band: 14426445446
MIT
notebooks/01_Beginners_guide/08_Intro_to_xarray.ipynb
miguelalejo/2021-Better-Working-World-Data-Challenge
Plotting data with MatplotlibPlotting is also conveniently integrated in the library.
ds["green"].isel(time=0).plot()
_____no_output_____
MIT
notebooks/01_Beginners_guide/08_Intro_to_xarray.ipynb
miguelalejo/2021-Better-Working-World-Data-Challenge
...but we still can do things manually using `numpy` and `matplotlib` if you choose:
rgb = np.dstack((ds.red.isel(time=0).values,
                 ds.green.isel(time=0).values,
                 ds.blue.isel(time=0).values))
rgb = np.clip(rgb, 0, 2000) / 2000
plt.imshow(rgb);
_____no_output_____
MIT
notebooks/01_Beginners_guide/08_Intro_to_xarray.ipynb
miguelalejo/2021-Better-Working-World-Data-Challenge
But compare the above to elegantly chaining operations within `xarray`:
ds[["red", "green", "blue"]].isel(time=0).to_array().plot.imshow(robust=True, figsize=(6, 6));
_____no_output_____
MIT
notebooks/01_Beginners_guide/08_Intro_to_xarray.ipynb
miguelalejo/2021-Better-Working-World-Data-Challenge
Pre-Tutorial ExercisesIf you've arrived early for the tutorial, please feel free to attempt the following exercises to warm-up.
# 1. Basic Python data structures
# I have a list of dictionaries as such:
names = [{'name': 'Eric', 'surname': 'Ma'},
         {'name': 'Jeffrey', 'surname': 'Elmer'},
         {'name': 'Mike', 'surname': 'Lee'},
         {'name': 'Jennifer', 'surname': 'Elmer'}]

# Write a function that takes in a list of dictionaries and a query surname,
# and searches it for all individuals with a given surname.
def find_persons_with_surname(persons, query_surname):
    # Assert that the persons parameter is a list.
    # This is a good defensive programming practice.
    assert isinstance(persons, list)

    results = []
    for ______ in ______:
        if ___________ == __________:
            results.append(________)

    return results

# Test your result below.
results = find_persons_with_surname(names, 'Lee')
assert len(results) == 1

results = find_persons_with_surname(names, 'Elmer')
assert len(results) == 2
_____no_output_____
MIT
archive/0-pre-tutorial-exercises.ipynb
ChrisKeefe/Network-Analysis-Made-Simple
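For reference, here is one possible way to fill in the exercise blanks above; this is just a sketch of a solution, not the tutorial's official answer:

def find_persons_with_surname(persons, query_surname):
    # Defensive check that we received a list
    assert isinstance(persons, list)

    results = []
    for person in persons:
        if person['surname'] == query_surname:
            results.append(person)

    return results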
The notebook for computing

Python is an interpreted language, so jupyter can have it execute mathematical calculations between numbers step by step: the operations are entered in code cells and the result is displayed directly below. Cell after cell, our jupyter notebook thus becomes a document reporting successive calculations that can be modified and redone at any time, and that can be enriched with comments in natural language.

***

> This document is a jupyter notebook; to familiarize yourself with this environment, have a look at this quick [Introduction](Introduction-Le_BN_pour_explorer.ipynb).

***

Arithmetic operators

Additions, subtractions and multiplications are straightforward and use the +, -, * operators.
4+5-3*2
_____no_output_____
MIT
Arithmetique-Le_BN_pour_calculer.ipynb
ECaMorlaix-2SI-1718/CR
Multiplication takes precedence.
4+(5-3)*2
_____no_output_____
MIT
Arithmetique-Le_BN_pour_calculer.ipynb
ECaMorlaix-2SI-1718/CR
As for divisions, there are three operators:
- the division operator "/", which in Python 3 always gives a [floating-point](https://fr.wikipedia.org/wiki/Virgule_flottante) result;
- the integer division operator "//";
- the modulo operator "%", which gives the remainder of the Euclidean division.
8/2
9//2
9%2
_____no_output_____
MIT
Arithmetique-Le_BN_pour_calculer.ipynb
ECaMorlaix-2SI-1718/CR
To raise to a power we use the "**" operator.
2**3
_____no_output_____
MIT
Arithmetique-Le_BN_pour_calculer.ipynb
ECaMorlaix-2SI-1718/CR
Note that operations mixing integers and floats give float results.
13.0//3
13.0%3
_____no_output_____
MIT
Arithmetique-Le_BN_pour_calculer.ipynb
ECaMorlaix-2SI-1718/CR
Scientific notation can be used to enter floating-point numbers:
2e-3
_____no_output_____
MIT
Arithmetique-Le_BN_pour_calculer.ipynb
ECaMorlaix-2SI-1718/CR
To convert a float to an integer and vice versa, we use the int() and float() functions respectively.
int(3.9)
float(3)
_____no_output_____
MIT
Arithmetique-Le_BN_pour_calculer.ipynb
ECaMorlaix-2SI-1718/CR
To get the absolute value of a number:
abs(-3.3)
_____no_output_____
MIT
Arithmetique-Le_BN_pour_calculer.ipynb
ECaMorlaix-2SI-1718/CR
To round a floating-point number, for example to two digits after the decimal point:
round(3.1415926535897932384626433832795,2)
_____no_output_____
MIT
Arithmetique-Le_BN_pour_calculer.ipynb
ECaMorlaix-2SI-1718/CR
Other mathematical functions

To call more advanced mathematical functions, a library has to be imported, such as:
from numpy import *
_____no_output_____
MIT
Arithmetique-Le_BN_pour_calculer.ipynb
ECaMorlaix-2SI-1718/CR
The **`*`** means that we can now use all the functions of this library, such as:
sqrt(4)
sin(pi)
_____no_output_____
MIT
Arithmetique-Le_BN_pour_calculer.ipynb
ECaMorlaix-2SI-1718/CR
> Perhaps the result of this last cell surprises you? Just like the one produced by the following cells:
0.1+0.7
4e0+2e-1+1e-3
_____no_output_____
MIT
Arithmetique-Le_BN_pour_calculer.ipynb
ECaMorlaix-2SI-1718/CR
> This discrepancy is due to the representation of [floating-point](https://fr.wikipedia.org/wiki/Virgule_flottante) numbers in the computer's memory; they are not exact values but approximations. Keep this in mind when interpreting a result computed with floats; it all depends on the level of precision expected...
round(0.1+0.7,3)
round(sin(pi),3)
_____no_output_____
MIT
Arithmetique-Le_BN_pour_calculer.ipynb
ECaMorlaix-2SI-1718/CR
To generate a random number:
from numpy.random import *

rand()
_____no_output_____
MIT
Arithmetique-Le_BN_pour_calculer.ipynb
ECaMorlaix-2SI-1718/CR
For example, to simulate a 6-sided die:
# rint(rand()*5+1) would make faces 1 and 6 only half as likely as the others,
# so we draw uniformly from {1, ..., 6} with randint instead
randint(1, 7)
_____no_output_____
MIT
Arithmetique-Le_BN_pour_calculer.ipynb
ECaMorlaix-2SI-1718/CR
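As a quick sanity check on the die above, one can tally a few thousand rolls and eyeball the counts; a minimal sketch, assuming the numpy.random star-import from the earlier cell:

from collections import Counter

# roll the die 10000 times and count how often each face appears;
# each count should be close to 10000/6 ≈ 1667
rolls = randint(1, 7, size=10000)
print(Counter(rolls.tolist()))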
Plotting a mathematical function

To draw curves, run the magic function %pylab inline: the Numpy and Matplotlib libraries are then imported and graphs can be drawn inline in the notebook.
%pylab inline
_____no_output_____
MIT
Arithmetique-Le_BN_pour_calculer.ipynb
ECaMorlaix-2SI-1718/CR
The following code example will then be executable.
# Uses numpy (linspace and pi)
x = linspace(0, 3*pi, 500)

# Uses matplotlib (plot and title)
plot(x, sin(x))
title('Graphique sin(x)')
_____no_output_____
MIT
Arithmetique-Le_BN_pour_calculer.ipynb
ECaMorlaix-2SI-1718/CR
`Learn the Basics `_ || `Quickstart `_ || `Tensors `_ || `Dataset and DataLoader `_ || `Transforms `_ || **Build the Neural Network** || `Autograd `_ || `Optimization `_ || `Save and Load the Model `_

Build the Neural Network
==========================================================================

A neural network is made up of layers/modules that perform operations on data. The `torch.nn `_ namespace provides all the building blocks you need to build a neural network. Every module in PyTorch is a subclass of `nn.Module `_. A neural network is itself a module that consists of other modules (layers). This nested structure makes it easy to build and manage complex architectures.

In the following sections, we will build a neural network to classify images in the FashionMNIST dataset.
import os
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
_____no_output_____
BSD-3-Clause
docs/_downloads/68e97c325bcdbd63f73a37dc6b8c656d/buildmodel_tutorial.ipynb
YonghyunRyu/PyTorch-tutorials-kr-exercise
Get a Device for Training
------------------------------------------------------------------------------------------
We want to be able to train our model on a hardware accelerator like a GPU, if one is available. Let's check whether `torch.cuda `_ is available; otherwise we continue to use the CPU.
device = 'cuda' if torch.cuda.is_available() else 'cpu'
print('Using {} device'.format(device))
_____no_output_____
BSD-3-Clause
docs/_downloads/68e97c325bcdbd63f73a37dc6b8c656d/buildmodel_tutorial.ipynb
YonghyunRyu/PyTorch-tutorials-kr-exercise
Define the Class
------------------------------------------------------------------------------------------
We define our neural network as a subclass of ``nn.Module``, initialize the neural network layers in ``__init__``, and implement the operations on input data in the ``forward`` method.
class NeuralNetwork(nn.Module):
    def __init__(self):
        super(NeuralNetwork, self).__init__()
        self.flatten = nn.Flatten()
        self.linear_relu_stack = nn.Sequential(
            nn.Linear(28*28, 512),
            nn.ReLU(),
            nn.Linear(512, 512),
            nn.ReLU(),
            nn.Linear(512, 10),
            nn.ReLU()
        )

    def forward(self, x):
        x = self.flatten(x)
        logits = self.linear_relu_stack(x)
        return logits
_____no_output_____
BSD-3-Clause
docs/_downloads/68e97c325bcdbd63f73a37dc6b8c656d/buildmodel_tutorial.ipynb
YonghyunRyu/PyTorch-tutorials-kr-exercise
We create an instance of ``NeuralNetwork``, move it to the ``device``, and print its structure.
model = NeuralNetwork().to(device)
print(model)
_____no_output_____
BSD-3-Clause
docs/_downloads/68e97c325bcdbd63f73a37dc6b8c656d/buildmodel_tutorial.ipynb
YonghyunRyu/PyTorch-tutorials-kr-exercise
To use the model, we pass it the input data. This executes the model's ``forward``, along with some `background operations `_. Do not call ``model.forward()`` directly!

Calling the model on the input returns a 10-dimensional tensor with raw predicted values for each class. We get the prediction probabilities by passing the raw values through an instance of the ``nn.Softmax`` module.
X = torch.rand(1, 28, 28, device=device)
logits = model(X)
pred_probab = nn.Softmax(dim=1)(logits)
y_pred = pred_probab.argmax(1)
print(f"Predicted class: {y_pred}")
_____no_output_____
BSD-3-Clause
docs/_downloads/68e97c325bcdbd63f73a37dc6b8c656d/buildmodel_tutorial.ipynb
YonghyunRyu/PyTorch-tutorials-kr-exercise
------------------------------------------------------------------------------------------

Model Layers
------------------------------------------------------------------------------------------
Let's break down the layers in the FashionMNIST model. To illustrate this, we will take a sample minibatch of 3 images of size 28x28 and see what happens to it as it passes through the network.
input_image = torch.rand(3,28,28)
print(input_image.size())
_____no_output_____
BSD-3-Clause
docs/_downloads/68e97c325bcdbd63f73a37dc6b8c656d/buildmodel_tutorial.ipynb
YonghyunRyu/PyTorch-tutorials-kr-exercise
nn.Flatten
^^^^^^^^^^^^^^^^^^^^^^
We initialize the `nn.Flatten `_ layer to convert each 2D 28x28 image into a contiguous array of 784 pixel values (the minibatch dimension at dim=0 is maintained).
flatten = nn.Flatten()
flat_image = flatten(input_image)
print(flat_image.size())
_____no_output_____
BSD-3-Clause
docs/_downloads/68e97c325bcdbd63f73a37dc6b8c656d/buildmodel_tutorial.ipynb
YonghyunRyu/PyTorch-tutorials-kr-exercise
nn.Linear
^^^^^^^^^^^^^^^^^^^^^^
The `linear layer `_ is a module that applies a linear transformation to the input using its stored weights and biases.
layer1 = nn.Linear(in_features=28*28, out_features=20)
hidden1 = layer1(flat_image)
print(hidden1.size())
_____no_output_____
BSD-3-Clause
docs/_downloads/68e97c325bcdbd63f73a37dc6b8c656d/buildmodel_tutorial.ipynb
YonghyunRyu/PyTorch-tutorials-kr-exercise
nn.ReLU
^^^^^^^^^^^^^^^^^^^^^^
Non-linear activations create the complex mappings between the model's inputs and outputs. They are applied after linear transformations to introduce *nonlinearity*, helping neural networks learn a wide variety of phenomena.

In this model we use `nn.ReLU `_ between our linear layers, but there are other activations you can use to introduce non-linearity when building your own model.
print(f"Before ReLU: {hidden1}\n\n") hidden1 = nn.ReLU()(hidden1) print(f"After ReLU: {hidden1}")
_____no_output_____
BSD-3-Clause
docs/_downloads/68e97c325bcdbd63f73a37dc6b8c656d/buildmodel_tutorial.ipynb
YonghyunRyu/PyTorch-tutorials-kr-exercise
nn.Sequential
^^^^^^^^^^^^^^^^^^^^^^
`nn.Sequential `_ is an ordered container of modules. The data is passed through all the modules in the same order as defined. You can use sequential containers to put together a quick network like ``seq_modules`` below.
seq_modules = nn.Sequential(
    flatten,
    layer1,
    nn.ReLU(),
    nn.Linear(20, 10)
)
input_image = torch.rand(3,28,28)
logits = seq_modules(input_image)
_____no_output_____
BSD-3-Clause
docs/_downloads/68e97c325bcdbd63f73a37dc6b8c656d/buildmodel_tutorial.ipynb
YonghyunRyu/PyTorch-tutorials-kr-exercise
nn.Softmax
^^^^^^^^^^^^^^^^^^^^^^
The last linear layer of the neural network returns `logits` (raw values in $[-\infty, \infty]$), which are passed to the `nn.Softmax `_ module. The logits are scaled to values in $[0, 1]$ representing the model's predicted probabilities for each class. The ``dim`` parameter indicates the dimension along which the values must sum to 1.
softmax = nn.Softmax(dim=1)
pred_probab = softmax(logits)
_____no_output_____
BSD-3-Clause
docs/_downloads/68e97c325bcdbd63f73a37dc6b8c656d/buildmodel_tutorial.ipynb
YonghyunRyu/PyTorch-tutorials-kr-exercise
Model Parameters
------------------------------------------------------------------------------------------
Many layers inside a neural network are *parameterized*, i.e. they have associated weights and biases that are optimized during training. Subclassing ``nn.Module`` automatically tracks all fields defined inside your model object, and all parameters become accessible via the model's ``parameters()`` and ``named_parameters()`` methods.

In this example, we iterate over each parameter and print its size and a preview of its values.
print("Model structure: ", model, "\n\n") for name, param in model.named_parameters(): print(f"Layer: {name} | Size: {param.size()} | Values : {param[:2]} \n")
_____no_output_____
BSD-3-Clause
docs/_downloads/68e97c325bcdbd63f73a37dc6b8c656d/buildmodel_tutorial.ipynb
YonghyunRyu/PyTorch-tutorials-kr-exercise
Multi-label classification
%reload_ext autoreload
%autoreload 2
%matplotlib inline

from fastai.conv_learner import *
PATH = 'data/planet/'

# Data preparation steps if you are using Crestle:
os.makedirs('data/planet/models', exist_ok=True)
os.makedirs('/cache/planet/tmp', exist_ok=True)

!ln -s /datasets/kaggle/planet-understanding-the-amazon-from-space/train-jpg {PATH}
!ln -s /datasets/kaggle/planet-understanding-the-amazon-from-space/test-jpg {PATH}
!ln -s /datasets/kaggle/planet-understanding-the-amazon-from-space/train_v2.csv {PATH}
!ln -s /cache/planet/tmp {PATH}

ls {PATH}
models/ test-jpg/ tmp/ train-jpg/ train_v2.csv*
Apache-2.0
DEEP LEARNING/image classification/fastai/fastai satellite multilabel classif.ipynb
Diyago/ML-DL-scripts
Multi-label versus single-label classification
from fastai.plots import *

def get_1st(path): return glob(f'{path}/*.*')[0]

dc_path = "data/dogscats/valid/"
list_paths = [get_1st(f"{dc_path}cats"), get_1st(f"{dc_path}dogs")]
plots_from_files(list_paths, titles=["cat", "dog"], maintitle="Single-label classification")
_____no_output_____
Apache-2.0
DEEP LEARNING/image classification/fastai/fastai satellite multilabel classif.ipynb
Diyago/ML-DL-scripts
In single-label classification each sample belongs to one class. In the previous example, each image is either a *dog* or a *cat*.
list_paths = [f"{PATH}train-jpg/train_0.jpg", f"{PATH}train-jpg/train_1.jpg"] titles=["haze primary", "agriculture clear primary water"] plots_from_files(list_paths, titles=titles, maintitle="Multi-label classification")
_____no_output_____
Apache-2.0
DEEP LEARNING/image classification/fastai/fastai satellite multilabel classif.ipynb
Diyago/ML-DL-scripts
In multi-label classification each sample can belong to one or more classes. In the previous example, the first image belongs to two classes: *haze* and *primary*. The second image belongs to four classes: *agriculture*, *clear*, *primary* and *water*.

Multi-label models for Planet dataset
from planet import f2

metrics = [f2]
f_model = resnet34

label_csv = f'{PATH}train_v2.csv'
n = len(list(open(label_csv))) - 1
val_idxs = get_cv_idxs(n)
_____no_output_____
Apache-2.0
DEEP LEARNING/image classification/fastai/fastai satellite multilabel classif.ipynb
Diyago/ML-DL-scripts
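The ``f2`` metric imported from the planet module is an F-beta score with beta=2, which weights recall more heavily than precision. A rough sketch of the idea only - the 0.2 threshold and the sklearn implementation here are assumptions, not necessarily what ``planet.f2`` does:

# F-beta with beta=2: recall counts roughly four times as much as precision.
import numpy as np
from sklearn.metrics import fbeta_score

def f2_sketch(preds, targs, threshold=0.2):
    # binarize per-label probabilities before scoring each sample
    return fbeta_score(targs, (preds > threshold).astype(int),
                       beta=2, average='samples')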
We use a different set of data augmentations for this dataset - we also allow vertical flips, since we don't expect vertical orientation of satellite images to change our classifications.
def get_data(sz):
    tfms = tfms_from_model(f_model, sz, aug_tfms=transforms_top_down, max_zoom=1.05)
    return ImageClassifierData.from_csv(PATH, 'train-jpg', label_csv, tfms=tfms,
                                        suffix='.jpg', val_idxs=val_idxs, test_name='test-jpg')

data = get_data(256)
x, y = next(iter(data.val_dl))
y
list(zip(data.classes, y[0]))
plt.imshow(data.val_ds.denorm(to_np(x))[0]*1.4);

sz = 64
data = get_data(sz)
data = data.resize(int(sz*1.3), 'tmp')

learn = ConvLearner.pretrained(f_model, data, metrics=metrics)
lrf = learn.lr_find()
learn.sched.plot()

lr = 0.2
learn.fit(lr, 3, cycle_len=1, cycle_mult=2)
lrs = np.array([lr/9, lr/3, lr])
learn.unfreeze()
learn.fit(lrs, 3, cycle_len=1, cycle_mult=2)
learn.save(f'{sz}')
learn.sched.plot_loss()

sz = 128
learn.set_data(get_data(sz))
learn.freeze()
learn.fit(lr, 3, cycle_len=1, cycle_mult=2)
learn.unfreeze()
learn.fit(lrs, 3, cycle_len=1, cycle_mult=2)
learn.save(f'{sz}')

sz = 256
learn.set_data(get_data(sz))
learn.freeze()
learn.fit(lr, 3, cycle_len=1, cycle_mult=2)
learn.unfreeze()
learn.fit(lrs, 3, cycle_len=1, cycle_mult=2)
learn.save(f'{sz}')

multi_preds, y = learn.TTA()
preds = np.mean(multi_preds, 0)
f2(preds, y)
_____no_output_____
Apache-2.0
DEEP LEARNING/image classification/fastai/fastai satellite multilabel classif.ipynb
Diyago/ML-DL-scripts
Results

Classification
import os
import sys
sys.path.append('../')

import torch
import pandas as pd
import numpy as np

DATA_DIR = "../data"
data_train = pd.read_csv(os.path.join(DATA_DIR, "train_cleaned.csv"), na_filter=False)
data_val = pd.read_csv(os.path.join(DATA_DIR, "val_cleaned.csv"), na_filter=False)

from transformers import AutoTokenizer, AutoModelForSequenceClassification
from transformers import TrainingArguments
from transformers import Trainer
from datasets import load_metric

from utils.preprocessing import make_labels, tokenize
from utils.classes import SentimentDataset

MODEL = "xlm-roberta-base"
MODEL_PRETRAINED = "../models/xlm_roberta_classif/checkpoint-1758"

model = AutoModelForSequenceClassification.from_pretrained(MODEL_PRETRAINED, num_labels=3)
tokenizer = AutoTokenizer.from_pretrained(MODEL)

X_train = tokenize(tokenizer, data_train.content)
X_val = tokenize(tokenizer, data_val.content)
y_train = data_train.sentiment
y_val = data_val.sentiment

y_train_labels = make_labels(y_train)
y_val_labels = make_labels(y_val)

data_train = data_train.assign(label=pd.Series(y_train_labels).values)
data_val = data_val.assign(label=pd.Series(y_val_labels).values)

train_dataset_torch = SentimentDataset(X_train, y_train_labels)
val_dataset_torch = SentimentDataset(X_val, y_val_labels)

metric = load_metric("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return metric.compute(predictions=predictions, references=labels)

training_args = TrainingArguments(
    "bert_base_uncased_classif",
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=32,
    fp16=True,
    fp16_opt_level='O1',
    evaluation_strategy='epoch',
    save_strategy="epoch",
    num_train_epochs=4
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset_torch,
    eval_dataset=val_dataset_torch,
    compute_metrics=compute_metrics
)

val_predictions = trainer.predict(val_dataset_torch)
val_predictions_labels = np.argmax(val_predictions.predictions, axis=1)
metric.compute(predictions=val_predictions_labels, references=val_predictions.label_ids)

from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay

conf_matrix = confusion_matrix(val_predictions.label_ids, val_predictions_labels)
disp = ConfusionMatrixDisplay(confusion_matrix=conf_matrix,
                              display_labels=["negative", "neutral", "positive"])
disp.plot(cmap="cividis")

data_val = data_val.assign(predictions=pd.Series(val_predictions_labels).values)
data_val

results_by_language = []
for language in set(data_val.language):
    data = data_val[data_val.language == language]
    results_by_language.append((language,
                                metric.compute(predictions=data.predictions,
                                               references=data.label)["accuracy"],
                                len(data)))

results_by_language = pd.DataFrame(results_by_language,
                                   columns=["language", "accuracy", "size"]).sort_values("size")[::-1]
results_by_language
_____no_output_____
MIT
notebooks/3_1_classif_results.ipynb
Avditvs/sentiment-analysis-test
Results on the main languages
main_languages = ["en", "id", "ru", "ar", "fr", "es", "pt", "ko", "zh-cn", "ja", "de", "it", "th", "tr"]

metric.compute(predictions=data_val[data_val.language.isin(main_languages)].predictions,
               references=data_val[data_val.language.isin(main_languages)].label)

conf_matrix = confusion_matrix(data_val[data_val.language.isin(main_languages)].label,
                               data_val[data_val.language.isin(main_languages)].predictions)
disp = ConfusionMatrixDisplay(confusion_matrix=conf_matrix,
                              display_labels=["negative", "neutral", "positive"])
disp.plot(cmap="cividis")
_____no_output_____
MIT
notebooks/3_1_classif_results.ipynb
Avditvs/sentiment-analysis-test
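Since per-language accuracies and sample counts are already collected in ``results_by_language``, one way to summarize them is a size-weighted overall accuracy. A small sketch using the columns defined above:

# Weight each language's accuracy by its number of validation samples.
weighted_acc = np.average(results_by_language["accuracy"],
                          weights=results_by_language["size"])
print(f"Size-weighted accuracy: {weighted_acc:.4f}")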
Analyse results

Let's see which elements have been misclassified.

Positive classified as negative
for sentence in data_val[data_val.language=="en"][data_val.label==2][data_val.predictions==0].content:
    print(sentence)
Did you notice that zero has a karambit knife and also that they changed the number 1 in the scoreboard and timer Smackgobbed? New word for me...gonna start using all the time now. Anime Saturday about to start. New episode of bleach. Yea So tired. Finally getting some sleep. Nighty If it was up to me I would give more berry's it is tottly not cool having to buy your eggs with berries I mean for real not cool update it a lift will you. just came from the most romantic wedding ever. the groom almost cried, which made everyone else almost cry 핸드폰 전체 세팅에서 말고 앱 내에서 개인이나 단톡방 알림끄기하면 소리는 안나는데 배너는 계속떠요. 아예 메세지가 안온거처럼 안뜨고 들어갈경우에만 받은 메세지가 보였으면 좋겠네요 Once i turn off the notification of a person/ group, as long as I don't check messages, i wish i do not see any notification(banner) and also number of accumulated messages on the list. 8 cm dilated and her water just broke. Getting closer Still thinking of Moscato, sigh... Tomorrow Demi in Madrid I didn't win the concert to meet her LOL! is going to look at guitars tomorrow... woo for having money and working 2 public holidays in the next week typical metro, can't even get that right. chrome falling off or missing on a steel stand and poor stainless steel quality made utensils , returned for refund I am in bed. u will liv the app I'm not going to spoil it but your going to have to get it to s33 will you vome to spain? For No Reason Pou Can Die. in France tmrw [br] Ah! Yes. The competition is ending soon. You have just as much chance as everyone else. There's no tomfoolery in our comps I love Birkenstock Arizona sandals and have been wearing them for nearly 20 years. I recently bought a pair of them in HABANA LEATHER and the sandals delivered do not match the color of the picture. The shade of brown is quite different. Yes very kind of him to do that for such a pain in the arse like me! I'm still here, sorry I haven't signed on in a while, just been really busy. No. Wally-World is the root of all evil. Costco is not to be feared. It was in the 30's last night but I dom't think it hurt anything. The elephant ears are still standing Not me. I'm as pure as the snow. But I drifted. I clocked out early! lol i cant w8 till yoiur film out on friday is it any good? oh what happened?? DH has just run away from his pc saying that twitter is too addictive lmao - then the sound for new message went and he came running back aww dont make me blush! is going to the SKINS party on the 2nd of May wicked!!! downloaded it on a whim before going out the door, the only thing I don't like about it is the constant ads for games I've already downloaded and twitter requests before or after every game. Ahhhhh.... Back to work. *BreBre.net* No invit'. You have to follow the smell of hickory and pork fat on hot coals. Next time maybe? eeeee my first follower can't concentrate on the essay. Pre-pandemic top vaccine companies, and where they are in COVID-19 race: $AZN (out in UK) $GSK (partner w/Sanofi, delayed) $JNJ (coming soon) $MRK (dropped out) $PFE (succeeded via small biotech partner), $SNY (partner w/GSK; delayed) (partner w/Pfizer to manufacture BNT162b2) Try to sport once and awhile instead of sitting behind your computer no not the bbc link? but have been looking at the changes, thanx hope ur well IMO trad segmenting in communities is not very useful stupid phone! It's about zune covers and designs New ones are out for spring hahas sucks to be you must have been one heck of a cloud because it's falling here
MIT
notebooks/3_1_classif_results.ipynb
Avditvs/sentiment-analysis-test
From what we can see, these wrongly classified sentences are not obviously positive. Example: "hahas sucks to be you"

Negative classified as positive
for sentence in data_val[data_val.language=="en"][data_val.label==0][data_val.predictions==2].content:
    print(sentence)
off to college bleurgh I know, I know, it's exactly like mine craft. But, IT KEEPS FREEZING!!!!!!!!!! You might think it is just nothing, but trust me, it freezes all the time. I hardly get the time to play it. I'd give it 0 stars if I could. Rest in peace, Ping. Best hamster ever. 2007-2009 It's a fucking holiday. I promise this will be my last day with this bank Mine are grown -- does reading to your dogs count? :-D off for a shower. i hope i drown... im so depressed right now thanks a lot sharks _ I think I�ll end up going alone But I will see it at some point... At Gatwick. Watch on BST, body 8 hours behind on PDT Thanks. 7:30 here in CA but not on Versus. Can't believe she's up this early on a Sunday omg you poooooor thing!!!! don't worry you can use my itunes or something..... yeah got it, but was on my way with hubby to garage etc. it's beautiful out, and i have a 30 page outline to do Couldn't be happier with this application. Lots of good tools easy to use once you get used to the interface. Not something that your young kids would like, because the interface is somewhat complicated. Snow mounted up to a couple of inches overnight, very pretty but I hope I don't lose my peaches, tree was in bloom loves sleep... wishing I was back there I took in 52 hardbound books by James Patterson and Lee Child that were in very good to excellent condition and was paid 27 cents per book. Yes, 27 cents per book. . had quite a productive day doing art work. not much left to do now! have to go to a r.s revision session tomorrow school in holidays!! Just had a lovely avacado, bacon and chicken baguette - now time for more work I'm afraid the Trekkies have all the glory now. It's like we're back in the early '90s. Play Forces of War. Its better and it cane out first. World War= (TURD) Forces of War= OYUS FORCES OF WAR NIGG@@@@@ Its beautiful Great idea with the iTunes promo codes - they don't work in the UK iTunes store though Topic : Illusions of Greatness Went from we thought we were Elite, to average, to mediocrity and in 2020 just plain pathetic disgrace! RT _Coverage: Episode Alert March 17th, Wednesday, its Penn State Nittany Lions Talk! Tune in to hear about 20', 21', & 22' #WeAreWednesday #PennStateFootball #PSU #WeAre #Big10 #CFB #NittanyLions #HappyValley _PSU On all listening streams! I have installed this game 3 times now and still have not been able to play. it just keeps going to force close on kindle fire. 1 star for teasing me with looks it would be a good game. So sleepy this morning! Wish they'd stop laying people off. My 9-5 is becoming a 6-10. I tried it in several foods. It just wasn't for me. I gave it to my daughter who is a health nut. I suggest someone to make it known that it is a strong taste that if you don't like it you will not use it. Congrats!! i totally forgot to submit photos $2.82 for a bagel with plain cream cheese & rude waitress - Thank you New York Bagel! You just gave me the perfect excuse to save my wallet and my waistline!!!............. or maybe I'll go to Basha's for $1.20 for just as good a bagel & down home smile & friendly service Just good.. My following, followers and updates were all multiples on 10 just now. Now I'm uneasy as 2 are uneven is ready for bed, tons of work later I can't smell anything coming from this at all. I was very disappointed. The spray one is great; very refreshing and calming for bedtime. Weekend like this I should be at Killington Fridge shopping 2day.....YUCK! _harvey RIP Budgie &quot;Sleep now little one, in a feathery heap. 
No longer going cheep.&quot; plss add scrobling Seeing 15 huge workitems come by in my rss reader. Thought they were commit messages. Turns out they're just new trac tickets Not a bad game except for the fact that the steering is all jacked up. its hard to control the car with the arrows and the tilt type is backwards. I try to turn left amenity goes right. Ring Relief indeed. Perhaps the ring of the cash register is what they meant? I've used the product faithfully for over 60 days (they recommend giving it 45 days) and other than improving my aim of getting drops into my ear canal I've received no other benefit from this product.Don't waste your time and money on it. All i'll say about Wolverine is that I had severe brain pain on walking out of that movie.. it started pretty awesome : How is chap stiques knee? bout to go to bed, without her baebe ... but she had a good time with her #1 tonight!!! Always had good fried chicken from Frank's in the past but not tonight. The chicken breasts were the size of the buffalo wings. Make sure you check your order before you leave. Wow. ...s'posed to go to the Dudie's Burger Festival thing today...but it looks like thunderstorm weather. Maybe it will clear up, eh? word? Me too. Condolences UR SUGAR 7.5ml Acrylic Poly Extension Quick Building Gel Polish Clear Pink UV Gel Colis reçu au bout d’un mois. Total freak in 3 a.m Aww Ate Lois ? Bawiin mo na lang sa 18th birthday. :&gt; :&gt; I bet it would be really fun. :&gt; The old ones don't give me any problems, it's the new fast-growing hybrid ones that are out of control. I should send Apple an angry email along the lines of &quot;Hey d-bags, 'we fixed your computer' generally means you actually did something.&quot;. i thinkk you guys should take a picture. and @ reply it to me in a twitpic because i love you both of u to death. miss u _ i spent a whole summer with them two years ago before they got big. I see them every now and then. Havent heard them play in awhile
MIT
notebooks/3_1_classif_results.ipynb
Avditvs/sentiment-analysis-test
**Guide**

* Create a draw_circle function for the callback function
* Use two events cv2.EVENT_LBUTTONDOWN and cv2.EVENT_LBUTTONUP
* Use a boolean variable to keep track if the mouse has been clicked up and down based on the events above
* Use a tuple to keep track of the x and y where the mouse was clicked.
* You should be able to then draw a circle on the frame based on the x,y coordinates from the Event

Check out the skeleton guide below:
import cv2

# Create a function based on a CV2 Event (Left button click)
# mouse callback function
def draw_circle(event, x, y, flags, param):
    global center, clicked

    # get mouse click on down and track center
    if event == cv2.EVENT_LBUTTONDOWN:
        center = (x, y)
        clicked = False

    # Use boolean variable to track if the mouse has been released
    if event == cv2.EVENT_LBUTTONUP:
        clicked = True

# Haven't drawn anything yet!
center = (0, 0)
clicked = False

# Capture Video
cap = cv2.VideoCapture(0)

# Create a named window for connections
cv2.namedWindow('Test')

# Bind draw_circle function to mouse clicks
cv2.setMouseCallback('Test', draw_circle)

while True:
    # Capture frame-by-frame
    ret, frame = cap.read()

    # Use if statement to see if clicked is true
    if clicked == True:
        # Draw circle on frame
        cv2.circle(frame, center=center, radius=50, color=(255, 0, 0), thickness=5)

    # Display the resulting frame
    cv2.imshow('Test', frame)

    # This command lets us quit with the "q" button on a keyboard.
    # Simply pressing X on the window won't work!
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
cap.release()
cv2.destroyAllWindows()
_____no_output_____
MIT
Neelesh_Video-Basic_opencv.ipynb
Shreyansh-Gupta/Open-contributions
**Process S1 SLC data using parallel processing**

First import all necessary libraries
import ost
import ost.helpers as h
from ost.helpers import onda, asf_wget, vector
from ost import Sentinel1_SLCBatch
import os
from os.path import join
from pathlib import Path
from pprint import pprint
_____no_output_____
MIT
6 Sentinel-1 SLC Parallel Processing.ipynb
jamesemwheeler/OSTParallel
Ingest shapefile data and set start and end dates
# create a processing directory
project_dir = '/home/ost/Data/jwheeler/Sydney_Fires'

# apply function with buffer in meters
from ost.helpers import vector
input_shp = "/home/ost/Data/jwheeler/Shapefiles/Sydney_fires.shp"
aoi = vector.shp_to_wkt(input_shp)

#----------------------------
# Time of interest
#----------------------------
# we set only the start date to today - 30 days
start = '2019-11-30'
end = '2019-12-12'
_____no_output_____
MIT
6 Sentinel-1 SLC Parallel Processing.ipynb
jamesemwheeler/OSTParallel
Initiate class with above attributes
# create s1Project class instance
s1_batch = Sentinel1_SLCBatch(
    project_dir=project_dir,
    aoi=aoi,
    start=start,
    end=end,
    product_type='SLC',
    ard_type='OST Plus')
_____no_output_____
MIT
6 Sentinel-1 SLC Parallel Processing.ipynb
jamesemwheeler/OSTParallel
Search for images on scihub and plot footprints
#---------------------------------------------------
# for plotting purposes we use this iPython magic
%matplotlib inline
%pylab inline
pylab.rcParams['figure.figsize'] = (19, 19)
#---------------------------------------------------

# search command
s1_batch.search()

# we plot the full Inventory on a map
s1_batch.plot_inventory(transparency=.1)
_____no_output_____
MIT
6 Sentinel-1 SLC Parallel Processing.ipynb
jamesemwheeler/OSTParallel
Refine image search
s1_batch.refine()
_____no_output_____
MIT
6 Sentinel-1 SLC Parallel Processing.ipynb
jamesemwheeler/OSTParallel
Select appropriate key and plot filtered images
pylab.rcParams['figure.figsize'] = (13, 13)

key = 'DESCENDING_VVVH'
s1_batch.refined_inventory_dict[key]
s1_batch.plot_inventory(s1_batch.refined_inventory_dict[key], 0.3)
_____no_output_____
MIT
6 Sentinel-1 SLC Parallel Processing.ipynb
jamesemwheeler/OSTParallel
Download using a selected S-1 mirror - ideally ASF (2, using requests, or 5, using wget) or onda (4) - provided the accounts are set up correctly for fast, parallel downloading
s1_batch.download(s1_batch.refined_inventory_dict[key],concurrent=8)
_____no_output_____
MIT
6 Sentinel-1 SLC Parallel Processing.ipynb
jamesemwheeler/OSTParallel
Create inventory of bursts in downloaded images, plot them and print information
s1_batch.create_burst_inventory(key=key, refine=True)

pylab.rcParams['figure.figsize'] = (13, 13)
s1_batch.plot_inventory(s1_batch.burst_inventory, transparency=0.1)

print('Our burst inventory holds {} bursts to process.'.format(len(s1_batch.burst_inventory)))
print('------------------------------------------')
print(s1_batch.burst_inventory.head())
_____no_output_____
MIT
6 Sentinel-1 SLC Parallel Processing.ipynb
jamesemwheeler/OSTParallel
Uncomment the command below to view the current ARD parameters
#pprint(s1_batch.ard_parameters)
_____no_output_____
MIT
6 Sentinel-1 SLC Parallel Processing.ipynb
jamesemwheeler/OSTParallel
Run the s1SLCbatch class function bursts_to_ard to generate parameter text files for each step from burst to ARD, ARD to timeseries, timeseries to timescan, and mosaic.

**NB: Use a base name for the exec file without an extension AND make sure to choose the number of cores that each process will use for parallel processing. ncores in this function x multiproc in the multiprocess function should not exceed the number of cores on your machine.**
s1_batch.bursts_to_ard(timeseries=True,
                       timescan=True,
                       mosaic=True,
                       overwrite=False,
                       exec_file='/home/ost/Data/jwheeler/Sydney_Fires/test',
                       ncores=2)
#print(s1_batch.temp_dir)
_____no_output_____
MIT
6 Sentinel-1 SLC Parallel Processing.ipynb
jamesemwheeler/OSTParallel
Run the s1SLCbatch class function multiprocess to run, sequentially, the parameters in the previously generated text files for each step from burst to ARD, ARD to timeseries, timeseries to timescan, and mosaic.

**NB: Use the same base name for the exec file without an extension as before AND make sure to choose the number of cores that each process will use for parallel processing, as well as the number of concurrent processes. ncores in the previous function x multiproc in this function should not exceed the number of cores on your machine. You should also include ncores again, as the text file generation is reiterated during this process.**
s1_batch.multiprocess(timeseries=True,
                      timescan=True,
                      mosaic=True,
                      overwrite=False,
                      exec_file='/home/ost/Data/jwheeler/Sydney_Fires/test',
                      ncores=2,
                      multiproc=4)
#burst_to_ard_batch(s1_batch.burst_inventory, s1_batch.download_dir, s1_batch.processing_dir, s1_batch.temp_dir, s1_batch.proc_file, exec_file='/home/ost/Data/jwheeler/Sydney_Fires/test.txt')
_____no_output_____
MIT
6 Sentinel-1 SLC Parallel Processing.ipynb
jamesemwheeler/OSTParallel
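To respect the ncores x multiproc constraint described above, the two numbers can be derived from the machine's CPU count rather than hard-coded. A minimal sketch (the variable names are illustrative):

# Keep the total number of worker cores within the machine's budget.
import multiprocessing

total_cores = multiprocessing.cpu_count()
ncores = 2                                  # cores given to each process
multiproc = max(1, total_cores // ncores)   # number of concurrent processes
assert ncores * multiproc <= total_cores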
Perceptron

This is a simple example illustrating the classic perceptron algorithm. A linear decision function parametrized by the weight vector "w" and bias parameter "b" is learned by making small adjustments to these parameters every time the predicted label "f_i" mismatches the true label "y_i" of an input data point "x_i".

The predicted label corresponds to the following function:

    f_i = sign(<w, x_i> + b),

where "<., .>" denotes the dot product operation.

The update rule is given as follows:

if "y_i != f_i" (predicted label f_i different from true label y_i) then
    w <-- w + y_i*x_i; and b <-- b + y_i;
else continue with the next data sample.

The above process is repeated over the set of samples several times. If the data points are linearly separable, the above rule is guaranteed to converge in a finite number of iterations. Proof see: http://www.cs.columbia.edu/~mcollins/courses/6998-2012/notes/perc.converge.pdf

2016 Luis G Sanchez Giraldo and Odelia Schwartz. Transcribed and modified to Python by Xu Pan, 2022.
# Construct a simple data set based on MNIST images
# This is a data set of handwritten digits 0 to 9
# Download MNIST dataset from keras
from keras.datasets import mnist
import numpy as np

(train_X, train_y), (test_X, test_y) = mnist.load_data()
print(test_X.shape)
print(test_y.shape)
# y are the digit labels
print(train_y[0:10])

# Sort dataset by label.
sorted_test_X = [[], [], [], [], [], [], [], [], [], []]
for i, y in enumerate(test_y):
    sorted_test_X[y].append(test_X[i,:,:])

# Create a simple two-class problem using images of digits 0 and 5 from
# the MNIST test data set
pos_class = 0  # 3
neg_class = 5

# get samples from positive and negative classes
pos_data = np.array(sorted_test_X[pos_class])
neg_data = np.array(sorted_test_X[neg_class])
print(pos_data.shape)
print(neg_data.shape)

# Look at some digits from the classes
# Look at different samples from each class (here plotted just the first; try others)
import matplotlib.pyplot as plt
plt.imshow(pos_data[0,:,:], cmap='binary')
plt.show()
plt.imshow(neg_data[0,:,:], cmap='binary')
plt.show()

# Gather the samples from the two classes into one matrix X
# Note that the samples from each class are appended
# to make up 1872 samples altogether
X = np.concatenate((pos_data, neg_data), axis=0)/255
X = np.reshape(X, (X.shape[0], -1))
print(X.shape)

# Label the two classes with 1 and -1 respectively
Y = np.concatenate((np.ones(pos_data.shape[0]), -np.ones(neg_data.shape[0])), axis=0)
print(Y.shape)
print(Y[0:10])
print(Y[981:991])

# Choose random samples from data. To do so:
# permute data samples to run the learning algorithm
# and take just n_samples from the permuted data (here 60 samples)
n_samples = 60
p_idx = np.random.permutation(X.shape[0])
X = X[p_idx[0:n_samples], :]
Y = Y[p_idx[0:n_samples]]

# Project the data onto the means of the two classes
# First look at the mean of the two classes
plt.imshow(np.reshape(np.mean(X[Y == 1, :], axis=0), (28,28)), cmap='binary')
plt.show()
# mean of second class
plt.imshow(np.reshape(np.mean(X[Y == -1, :], axis=0), (28,28)), cmap='binary')
plt.show()

# Now project the data
V = np.stack((np.mean(X[Y == 1, :], axis=0), np.mean(X[Y == -1, :], axis=0)), axis=1)
V[:, 0] = V[:, 0]/np.linalg.norm(V[:, 0])
V[:, 1] = V[:, 1]/np.linalg.norm(V[:, 1])
Z = np.matmul(X, V)
print(Z.shape)
plt.figure(figsize=(5,5))
plt.scatter(Z[Y == 1,0], Z[Y == 1,1], c='g')
plt.scatter(Z[Y == -1,0], Z[Y == -1,1], c='r')

# This is a helper function that plots the partition.
def display2DPartition(Z, Y, w, b):
    plt.figure(figsize=(5,5))
    ax = plt.axes()
    plt.scatter(Z[Y == 1,0], Z[Y == 1,1], c='g', zorder=1)
    plt.scatter(Z[Y == -1,0], Z[Y == -1,1], c='r', zorder=2)
    ax.set_facecolor('pink')
    x = ax.get_xlim()
    y = ax.get_ylim()
    y_line = -(np.array(x)*w[0] + b)/w[1]
    plt.fill_between(x, y_line, color='palegreen', zorder=0)
    for iSmp in range(Z.shape[0]):
        z_i = Z[iSmp, :]
        # compute prediction with current w and b
        f_i = np.sign(np.dot(w, z_i) + b)
        if f_i != Y[iSmp]:
            plt.scatter(z_i[0], z_i[1], marker='o', s=100, facecolors='none',
                        linewidths=2, edgecolors='black', zorder=3)
    ax.set_xlim(x)
    ax.set_ylim(y)

# Simple Learning algorithm for Perceptron
# here we denote two classes: the positive class by label "1" and the negative
# class by label "-1."
# Any point in the plane colored as green will be classified as positive
# class and any point falling within the red region as negative class.
# Training samples are denoted by the green crosses (positive) and red dots
# (negative). A misclassified training point, that is "f_i != y_i", is
# marked with a circle
from IPython.display import clear_output
import time

# first initialize the parameters
lr = 1  # Learning rate parameter (1 in the classic perceptron algorithm)
w = np.random.randn(Z.shape[1])  # Initial guess for the hyperplane parameters
b = 0  # bias is initially zero
print(w)
print(b)

max_epoch = 100  # Number of epochs (complete loops through all data)
epoch = 1  # epoch counter

# display the starting decision hyperplane (partition of the space)
# (compare this to the decision hyperplane that is learned!)
display2DPartition(Z, Y, w, b)
plt.show()

# now run the perceptron training
while epoch <= max_epoch:
    # loop through all data points one time (an epoch)
    for iSmp in range(n_samples):
        z_i = Z[iSmp, :]
        # compute prediction with current w and b
        f_i = np.sign(np.dot(w, z_i) + b)
        # update w and b if misclassified
        if f_i != Y[iSmp]:
            w = w + lr*Y[iSmp]*z_i
            b = b + lr*Y[iSmp]
            # display current decision hyperplane (partition of the space)
            clear_output(wait=True)
            display2DPartition(Z, Y, w, b)
            plt.show()
            time.sleep(0.1)
    epoch = epoch + 1

# display the learned decision hyperplane (partition of the space)
print(w)
print(b)
display2DPartition(Z, Y, w, b)
plt.show()
_____no_output_____
MIT
Lab 6 Perceptron/Perceptron.ipynb
xup5/Computational-Neuroscience-Class
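Once training stops, the learned "w" and "b" can be scored on the projected training set. A short sketch reusing ``Z`` and ``Y`` from above:

# Fraction of training points the learned hyperplane classifies correctly.
preds = np.sign(np.matmul(Z, w) + b)
print('Training accuracy: {:.2%}'.format(np.mean(preds == Y)))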
Exercise: After trying the code for the given classes, try running the code again, but this time changing the digits of the positive or negative class. You can do this by changing the following two lines above:

pos_class = 0
neg_class = 5

What classes are easier to learn?
_____no_output_____
MIT
Lab 6 Perceptron/Perceptron.ipynb
xup5/Computational-Neuroscience-Class
biopython

The [Biopython](http://biopython.org/) Project is an international association of developers of freely available [Python](http://www.python.org) tools for computational molecular biology.

[documentation](http://biopython.org/wiki/Documentation) [source](https://github.com/biopython/biopython) [installation](http://biopython.org/wiki/Download) [tutorial](http://biopython.org/DIST/docs/tutorial/Tutorial.html)
from jyquickhelper import add_notebook_menu
add_notebook_menu()
_____no_output_____
MIT
_doc/notebooks/2016/pydata/im_biopython.ipynb
sdpython/jupytalk
example
from pyquickhelper.filehelper import download
download("https://raw.githubusercontent.com/biopython/biopython/master/Tests/GenBank/NC_005816.gb",
         outfile="NC_005816.gb")

from reportlab.lib import colors
from reportlab.lib.units import cm
from Bio.Graphics import GenomeDiagram
from Bio import SeqIO

record = SeqIO.read("NC_005816.gb", "genbank")

gd_diagram = GenomeDiagram.Diagram("Yersinia pestis biovar Microtus plasmid pPCP1")
gd_track_for_features = gd_diagram.new_track(1, name="Annotated Features")
gd_feature_set = gd_track_for_features.new_set()

for feature in record.features:
    if feature.type != "gene":
        # Exclude this feature
        continue
    if len(gd_feature_set) % 2 == 0:
        color = colors.blue
    else:
        color = colors.lightblue
    gd_feature_set.add_feature(feature, color=color, label=True)

gd_diagram.draw(format="linear", orientation="landscape", pagesize='A4',
                fragments=4, start=0, end=len(record))
gd_diagram.write("plasmid_linear.svg", "svg")

from IPython.display import SVG
SVG("plasmid_linear.svg")

gd_diagram.draw(format="circular", circular=True, pagesize=(20*cm,20*cm),
                start=0, end=len(record), circle_core=0.7)
gd_diagram.write("plasmid_circular.svg", "svg")
SVG("plasmid_circular.svg")
_____no_output_____
MIT
_doc/notebooks/2016/pydata/im_biopython.ipynb
sdpython/jupytalk
Fetching WHO's situation reports on COVID-19 as DataFrames

Get the data
pdf_save_location = '../data/pdf'
csv_save_location = '../data/csv'

from who_covid_scraper import WHOCovidScraper

scraper = WHOCovidScraper('https://www.who.int/emergencies/diseases/novel-coronavirus-2019/situation-reports')
scraper.df
_____no_output_____
Apache-2.0
covid19_who_situation_reports_importer/notebook.ipynb
aarohijohal/covid19-who-situation-reports-importer
Download report for a given date
download = scraper.download_for_date(datearg='23rd of Feb', folder=pdf_save_location)
report for the date 2020/02/23 already exists at ../data/pdf/20200223-sitrep-34-covid-19.pdf. didn't re-download
Apache-2.0
covid19_who_situation_reports_importer/notebook.ipynb
aarohijohal/covid19-who-situation-reports-importer
Send report for extraction
job = scraper.send_document_to_parsr(download['file'])
job
> Polling server for the job f214dea6618020da1a446307879c1f... >> Job done!
Apache-2.0
covid19_who_situation_reports_importer/notebook.ipynb
aarohijohal/covid19-who-situation-reports-importer
Assemble the stats from the report
scraper.assemble_data(job['server_response'])
_____no_output_____
Apache-2.0
covid19_who_situation_reports_importer/notebook.ipynb
aarohijohal/covid19-who-situation-reports-importer
Pattern Generator and Trace Analyzer

This notebook will show how to use the Pattern Generator to generate patterns on I/O pins. The pattern that will be generated is a 3-bit up count performed 4 times.

Step 1: Download the `logictools` overlay
from pynq.overlays.logictools import LogicToolsOverlay

logictools_olay = LogicToolsOverlay('logictools.bit')
_____no_output_____
BSD-3-Clause
boards/Pynq-Z2/logictools/notebooks/pattern_generator_and_trace_analyzer.ipynb
jackrosenthal/PYNQ
Step 2: Create WaveJSON waveform

The pattern to be generated is specified in the WaveJSON format. The pattern is applied to the Arduino interface: pins **D0**, **D1** and **D2** are set to generate a 3-bit count. To check the generated pattern, we loop them back to pins **D19**, **D18** and **D17** respectively and use the trace analyzer to view the loopback signals.

The Waveform class is used to display the specified waveform.
from pynq.lib.logictools import Waveform

up_counter = {'signal': [
    ['stimulus',
     {'name': 'bit0', 'pin': 'D0', 'wave': 'lh' * 8},
     {'name': 'bit1', 'pin': 'D1', 'wave': 'l.h.' * 4},
     {'name': 'bit2', 'pin': 'D2', 'wave': 'l...h...' * 2}],
    ['analysis',
     {'name': 'bit2_loopback', 'pin': 'D17'},
     {'name': 'bit1_loopback', 'pin': 'D18'},
     {'name': 'bit0_loopback', 'pin': 'D19'}]],
    'foot': {'tock': 1},
    'head': {'text': 'up_counter'}}

waveform = Waveform(up_counter)
waveform.display()
_____no_output_____
BSD-3-Clause
boards/Pynq-Z2/logictools/notebooks/pattern_generator_and_trace_analyzer.ipynb
jackrosenthal/PYNQ
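The stimulus strings are plain Python string arithmetic: in WaveDrom notation 'l' and 'h' mark low and high samples and '.' extends the previous level, so the three waves together spell out a binary up-count. A quick sketch:

# bit0 toggles every cycle, bit1 every two cycles, bit2 every four:
# together they count from 000 up to 111 repeatedly across the 16 samples.
print('lh' * 8)        # lhlhlhlhlhlhlhlh
print('l.h.' * 4)      # l.h.l.h.l.h.l.h.
print('l...h...' * 2)  # l...h...l...h...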
**Note:** Since there are no captured samples at this moment, the analysis group will be empty.

Step 3: Instantiate the pattern generator and trace analyzer objects

Users can choose whether to use the trace analyzer by calling the `trace()` method. The analyzer can be set to trace a specific number of samples using the `num_analyzer_samples` argument.
pattern_generator = logictools_olay.pattern_generator
pattern_generator.trace(num_analyzer_samples=16)
_____no_output_____
BSD-3-Clause
boards/Pynq-Z2/logictools/notebooks/pattern_generator_and_trace_analyzer.ipynb
jackrosenthal/PYNQ