Dataset fields (string lengths): markdown (0 – 1.02M), code (0 – 832k), output (0 – 1.02M), license (3 – 36), path (6 – 265), repo_name (6 – 127)
This dataset contains measurements taken on penguins. We will formulate the following problem: using the flipper length of a penguin, we would like to infer its mass.
import seaborn as sns

feature_names = "Flipper Length (mm)"
target_name = "Body Mass (g)"
data, target = penguins[[feature_names]], penguins[target_name]

ax = sns.scatterplot(data=penguins, x=feature_names, y=target_name)
ax.set_title("Body mass as a function of flipper length")
_____no_output_____
CC-BY-4.0
notebooks/linear_regression_without_sklearn.ipynb
brospars/scikit-learn-mooc
Tip: The function scatterplot from seaborn takes the full dataframe as input, and the parameters x and y allow you to specify the names of the columns to be plotted. Note that this function returns a matplotlib axis (named ax in the example above) that can be further used to add elements to the same matplotlib axis (such as a title).

Caution: Here and later, we use the names data and target to be explicit. In the scikit-learn documentation, data is commonly named X and target is commonly called y.

In this problem, penguin mass is our target. It is a continuous variable that roughly varies between 2700 g and 6300 g. Thus, this is a regression problem (in contrast to classification). We also see that there is almost a linear relationship between the body mass of the penguin and its flipper length: the longer the flipper, the heavier the penguin. Thus, we could come up with a simple formula where, given a flipper length, we could compute the body mass of a penguin using a linear relationship of the form `y = a * x + b`, where `a` and `b` are the 2 parameters of our model.
def linear_model_flipper_mass(flipper_length, weight_flipper_length, intercept_body_mass):
    """Linear model of the form y = a * x + b"""
    body_mass = weight_flipper_length * flipper_length + intercept_body_mass
    return body_mass
_____no_output_____
CC-BY-4.0
notebooks/linear_regression_without_sklearn.ipynb
brospars/scikit-learn-mooc
Using the model we defined above, we can check the body mass values predicted for a range of flipper lengths. We will set `weight_flipper_length` to be 45 and `intercept_body_mass` to be -5000.
import numpy as np

weight_flipper_length = 45
intercept_body_mass = -5000

flipper_length_range = np.linspace(data.min(), data.max(), num=300)
predicted_body_mass = linear_model_flipper_mass(
    flipper_length_range, weight_flipper_length, intercept_body_mass)
_____no_output_____
CC-BY-4.0
notebooks/linear_regression_without_sklearn.ipynb
brospars/scikit-learn-mooc
We can now plot all samples and the linear model prediction.
label = "{0:.2f} (g / mm) * flipper length + {1:.2f} (g)" ax = sns.scatterplot(data=penguins, x=feature_names, y=target_name) ax.plot(flipper_length_range, predicted_body_mass, color="tab:orange") _ = ax.set_title(label.format(weight_flipper_length, intercept_body_mass))
_____no_output_____
CC-BY-4.0
notebooks/linear_regression_without_sklearn.ipynb
brospars/scikit-learn-mooc
The variable `weight_flipper_length` is a weight applied to the feature `flipper_length` in order to make the inference. When this coefficient is positive, it means that penguins with longer flipper lengths will have larger body masses. If the coefficient is negative, it means that penguins with shorter flipper lengths have larger body masses. Graphically, this coefficient is represented by the slope of the curve in the plot. Below we show what the curve would look like when the `weight_flipper_length` coefficient is negative.
weight_flipper_length = -40
intercept_body_mass = 13000

predicted_body_mass = linear_model_flipper_mass(
    flipper_length_range, weight_flipper_length, intercept_body_mass)
_____no_output_____
CC-BY-4.0
notebooks/linear_regression_without_sklearn.ipynb
brospars/scikit-learn-mooc
We can now plot all samples and the linear model prediction.
ax = sns.scatterplot(data=penguins, x=feature_names, y=target_name)
ax.plot(flipper_length_range, predicted_body_mass, color="tab:orange")
_ = ax.set_title(label.format(weight_flipper_length, intercept_body_mass))
_____no_output_____
CC-BY-4.0
notebooks/linear_regression_without_sklearn.ipynb
brospars/scikit-learn-mooc
In our case, this coefficient has a meaningful unit: g/mm. For instance, a coefficient of 40 g/mm means that for each additional millimeter in flipper length, the predicted body mass will increase by 40 g.
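As a quick worked check of this interpretation (a sketch using an intercept of 0 g for simplicity, as in the cell below):

$$ 40\;\text{g/mm} \cdot 181\;\text{mm} - 40\;\text{g/mm} \cdot 180\;\text{mm} = 7240\;\text{g} - 7200\;\text{g} = 40\;\text{g} $$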
body_mass_180 = linear_model_flipper_mass(
    flipper_length=180, weight_flipper_length=40, intercept_body_mass=0)
body_mass_181 = linear_model_flipper_mass(
    flipper_length=181, weight_flipper_length=40, intercept_body_mass=0)

print(f"The body mass for a flipper length of 180 mm "
      f"is {body_mass_180} g and {body_mass_181} g "
      f"for a flipper length of 181 mm")
_____no_output_____
CC-BY-4.0
notebooks/linear_regression_without_sklearn.ipynb
brospars/scikit-learn-mooc
We can also see that we have a parameter `intercept_body_mass` in our model. This parameter corresponds to the value on the y-axis if `flipper_length=0` (which in our case is only a mathematical consideration, as in our data the value of `flipper_length` only goes from 170 mm to 230 mm). This y-value when x=0 is called the y-intercept. If `intercept_body_mass` is 0, the curve will pass through the origin:
weight_flipper_length = 25
intercept_body_mass = 0

# redefined the flipper length to start at 0 to plot the intercept value
flipper_length_range = np.linspace(0, data.max(), num=300)
predicted_body_mass = linear_model_flipper_mass(
    flipper_length_range, weight_flipper_length, intercept_body_mass)

ax = sns.scatterplot(data=penguins, x=feature_names, y=target_name)
ax.plot(flipper_length_range, predicted_body_mass, color="tab:orange")
_ = ax.set_title(label.format(weight_flipper_length, intercept_body_mass))
_____no_output_____
CC-BY-4.0
notebooks/linear_regression_without_sklearn.ipynb
brospars/scikit-learn-mooc
Otherwise, it will pass through the `intercept_body_mass` value:
weight_flipper_length = 45
intercept_body_mass = -5000

predicted_body_mass = linear_model_flipper_mass(
    flipper_length_range, weight_flipper_length, intercept_body_mass)

ax = sns.scatterplot(data=penguins, x=feature_names, y=target_name)
ax.plot(flipper_length_range, predicted_body_mass, color="tab:orange")
_ = ax.set_title(label.format(weight_flipper_length, intercept_body_mass))
_____no_output_____
CC-BY-4.0
notebooks/linear_regression_without_sklearn.ipynb
brospars/scikit-learn-mooc
Introduction

This notebook describes how you can use VietOcr to train an OCR model.
# pip install --quiet vietocr==0.3.2
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing wheel metadata ... done
Building wheel for gdown (PEP 517) ... done
Building wheel for lmdb (setup.py) ... done
ERROR: albumentations 0.1.12 has requirement imgaug<0.2.7,>=0.2.5, but you'll have imgaug 0.4.0 which is incompatible.
Apache-2.0
vietocr_gettingstart.ipynb
lexuanthinh/vietocr
Inference
import matplotlib.pyplot as plt
from PIL import Image

from vietocr.tool.predictor import Predictor
from vietocr.tool.config import Cfg

config = Cfg.load_config_from_name('vgg_transformer')
_____no_output_____
Apache-2.0
vietocr_gettingstart.ipynb
lexuanthinh/vietocr
Change the weights to your own weights, or use the default weights from our pretrained model. The path can be a URL or a local file.
# config['weights'] = './weights/transformerocr.pth'
config['weights'] = 'https://drive.google.com/uc?id=13327Y1tz1ohsm5YZMyXVMPIOjoOA0OaA'
config['cnn']['pretrained']=False
config['device'] = 'cpu'
config['predictor']['beamsearch']=False

detector = Predictor(config)

! gdown --id 1uMVd6EBjY4Q0G2IkU5iMOQ34X0bysm0b
! unzip -qq -o sample.zip
! ls sample | shuf | head -n 5

img = './image/3.png'
img = Image.open(img)
plt.imshow(img)

s = detector.predict(img)
s
_____no_output_____
Apache-2.0
vietocr_gettingstart.ipynb
lexuanthinh/vietocr
Download sample dataset
! gdown https://drive.google.com/uc?id=19QU4VnKtgm3gf0Uw_N2QKSquW1SQ5JiE
! unzip -qq -o ./data_line.zip
_____no_output_____
Apache-2.0
vietocr_gettingstart.ipynb
lexuanthinh/vietocr
Train the model

1. Load your config
2. Train the model using your dataset above

Load the default config; we adopt VGG for image feature extraction.
from vietocr.tool.config import Cfg
from vietocr.model.trainer import Trainer
_____no_output_____
Apache-2.0
vietocr_gettingstart.ipynb
lexuanthinh/vietocr
Change the config:

* *data_root*: the folder where you save all your images
* *train_annotation*: path to the train annotation
* *valid_annotation*: path to the valid annotation
* *print_every*: show train loss at every n steps
* *valid_every*: show validation loss at every n steps
* *iters*: number of iterations to train your model
* *export*: export weights to a folder that you can use for inference
* *metrics*: number of samples from the validation annotation used for computing full_sequence_accuracy; for a large dataset this will take too long, so you can reduce this number
config = Cfg.load_config_from_name('vgg_transformer')
#config['vocab'] = 'aAàÀảẢãÃáÁạẠăĂằẰẳẲẵẴắẮặẶâÂầẦẩẨẫẪấẤậẬbBcCdDđĐeEèÈẻẺẽẼéÉẹẸêÊềỀểỂễỄếẾệỆfFgGhHiIìÌỉỈĩĨíÍịỊjJkKlLmMnNoOòÒỏỎõÕóÓọỌôÔồỒổỔỗỖốỐộỘơƠờỜởỞỡỠớỚợỢpPqQrRsStTuUùÙủỦũŨúÚụỤưƯừỪửỬữỮứỨựỰvVwWxXyYỳỲỷỶỹỸýÝỵỴzZ0123456789!"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~ '

dataset_params = {
    'name':'hw',
    'data_root':'./data_line/',
    'train_annotation':'train_line_annotation.txt',
    'valid_annotation':'test_line_annotation.txt'
}

params = {
    'print_every':200,
    'valid_every':15*200,
    'iters':20000,
    'checkpoint':'./checkpoint/transformerocr_checkpoint.pth',
    'export':'./weights/transformerocr.pth',
    'metrics': 10000
}

config['trainer'].update(params)
config['dataset'].update(dataset_params)
config['device'] = 'cuda:0'
_____no_output_____
Apache-2.0
vietocr_gettingstart.ipynb
lexuanthinh/vietocr
You can change any of these params; the full list is shown below.
config
_____no_output_____
Apache-2.0
vietocr_gettingstart.ipynb
lexuanthinh/vietocr
You should train the model starting from our pretrained weights.
trainer = Trainer(config, pretrained=True)
Downloading: "https://download.pytorch.org/models/vgg19_bn-c79401a0.pth" to /root/.cache/torch/hub/checkpoints/vgg19_bn-c79401a0.pth
Apache-2.0
vietocr_gettingstart.ipynb
lexuanthinh/vietocr
Save the model configuration for inference; it can be reloaded later with load_config_from_file.
trainer.config.save('config.yml')
_____no_output_____
Apache-2.0
vietocr_gettingstart.ipynb
lexuanthinh/vietocr
Visualize your dataset to check that the data augmentation is appropriate.
trainer.visualize_dataset()
_____no_output_____
Apache-2.0
vietocr_gettingstart.ipynb
lexuanthinh/vietocr
Train now
trainer.train()
iter: 000200 - train loss: 1.657 - lr: 1.91e-05 - load time: 1.08 - gpu time: 158.33 iter: 000400 - train loss: 1.429 - lr: 3.95e-05 - load time: 0.76 - gpu time: 158.76 iter: 000600 - train loss: 1.331 - lr: 7.14e-05 - load time: 0.73 - gpu time: 158.38 iter: 000800 - train loss: 1.252 - lr: 1.12e-04 - load time: 1.29 - gpu time: 158.43 iter: 001000 - train loss: 1.218 - lr: 1.56e-04 - load time: 0.84 - gpu time: 158.86 iter: 001200 - train loss: 1.192 - lr: 2.01e-04 - load time: 0.78 - gpu time: 160.20 iter: 001400 - train loss: 1.140 - lr: 2.41e-04 - load time: 1.54 - gpu time: 158.48 iter: 001600 - train loss: 1.129 - lr: 2.73e-04 - load time: 0.70 - gpu time: 159.42 iter: 001800 - train loss: 1.095 - lr: 2.93e-04 - load time: 0.74 - gpu time: 158.03 iter: 002000 - train loss: 1.098 - lr: 3.00e-04 - load time: 0.66 - gpu time: 159.21 iter: 002200 - train loss: 1.060 - lr: 3.00e-04 - load time: 1.52 - gpu time: 157.63 iter: 002400 - train loss: 1.055 - lr: 3.00e-04 - load time: 0.80 - gpu time: 159.34 iter: 002600 - train loss: 1.032 - lr: 2.99e-04 - load time: 0.74 - gpu time: 159.13 iter: 002800 - train loss: 1.019 - lr: 2.99e-04 - load time: 1.42 - gpu time: 158.27
Apache-2.0
vietocr_gettingstart.ipynb
lexuanthinh/vietocr
Visualize predictions from our trained model.
trainer.visualize_prediction()
_____no_output_____
Apache-2.0
vietocr_gettingstart.ipynb
lexuanthinh/vietocr
Compute full-sequence accuracy for the full validation dataset.
trainer.precision()
_____no_output_____
Apache-2.0
vietocr_gettingstart.ipynb
lexuanthinh/vietocr
RNN

In this section, we will introduce how to use recurrent neural networks for text classification. The dataset we use is the IMDB Movie Reviews. We use the reviews written by the users as the input and try to predict whether they are positive or negative.

Preparing the data

You can use the following code to load the IMDB dataset.
import tensorflow as tf
tf.random.set_seed(42)

from tensorflow.keras.datasets import imdb
from tensorflow.keras.preprocessing import sequence

max_words = 10000
embedding_dim = 32

(train_data, train_labels), (test_data, test_labels) = imdb.load_data(
    num_words=max_words)

print(train_data.shape)
print(train_labels.shape)
print(train_data[:2])
print(train_labels[:2])
(25000,) (25000,) [list([1, 14, 22, 16, 43, 530, 973, 1622, 1385, 65, 458, 4468, 66, 3941, 4, 173, 36, 256, 5, 25, 100, 43, 838, 112, 50, 670, 2, 9, 35, 480, 284, 5, 150, 4, 172, 112, 167, 2, 336, 385, 39, 4, 172, 4536, 1111, 17, 546, 38, 13, 447, 4, 192, 50, 16, 6, 147, 2025, 19, 14, 22, 4, 1920, 4613, 469, 4, 22, 71, 87, 12, 16, 43, 530, 38, 76, 15, 13, 1247, 4, 22, 17, 515, 17, 12, 16, 626, 18, 2, 5, 62, 386, 12, 8, 316, 8, 106, 5, 4, 2223, 5244, 16, 480, 66, 3785, 33, 4, 130, 12, 16, 38, 619, 5, 25, 124, 51, 36, 135, 48, 25, 1415, 33, 6, 22, 12, 215, 28, 77, 52, 5, 14, 407, 16, 82, 2, 8, 4, 107, 117, 5952, 15, 256, 4, 2, 7, 3766, 5, 723, 36, 71, 43, 530, 476, 26, 400, 317, 46, 7, 4, 2, 1029, 13, 104, 88, 4, 381, 15, 297, 98, 32, 2071, 56, 26, 141, 6, 194, 7486, 18, 4, 226, 22, 21, 134, 476, 26, 480, 5, 144, 30, 5535, 18, 51, 36, 28, 224, 92, 25, 104, 4, 226, 65, 16, 38, 1334, 88, 12, 16, 283, 5, 16, 4472, 113, 103, 32, 15, 16, 5345, 19, 178, 32]) list([1, 194, 1153, 194, 8255, 78, 228, 5, 6, 1463, 4369, 5012, 134, 26, 4, 715, 8, 118, 1634, 14, 394, 20, 13, 119, 954, 189, 102, 5, 207, 110, 3103, 21, 14, 69, 188, 8, 30, 23, 7, 4, 249, 126, 93, 4, 114, 9, 2300, 1523, 5, 647, 4, 116, 9, 35, 8163, 4, 229, 9, 340, 1322, 4, 118, 9, 4, 130, 4901, 19, 4, 1002, 5, 89, 29, 952, 46, 37, 4, 455, 9, 45, 43, 38, 1543, 1905, 398, 4, 1649, 26, 6853, 5, 163, 11, 3215, 2, 4, 1153, 9, 194, 775, 7, 8255, 2, 349, 2637, 148, 605, 2, 8003, 15, 123, 125, 68, 2, 6853, 15, 349, 165, 4362, 98, 5, 4, 228, 9, 43, 2, 1157, 15, 299, 120, 5, 120, 174, 11, 220, 175, 136, 50, 9, 4373, 228, 8255, 5, 2, 656, 245, 2350, 5, 4, 9837, 131, 152, 491, 18, 2, 32, 7464, 1212, 14, 9, 6, 371, 78, 22, 625, 64, 1382, 9, 8, 168, 145, 23, 4, 1690, 15, 16, 4, 1355, 5, 28, 6, 52, 154, 462, 33, 89, 78, 285, 16, 145, 95])] [1 0]
MIT
3.3-RNN.ipynb
datamllab/automl-in-action-notebooks
The code above loads the reviews into train_data and test_data, and loads the labels (positive or negative) into train_labels and test_labels. As you can see, the reviews in train_data are lists of integers instead of texts. This is because the raw texts cannot be used as an input to a neural network: neural networks only accept numerical data as inputs.

The integers we see above are the raw text data after a preprocessing step named tokenization. It first splits each review into a list of words and assigns an integer to each of the words. For example, a sentence "How are you? How are you doing?" will first be transformed into a list of words as ["how", "are", "you", "how", "are", "you", "doing"], and then transformed to [5, 8, 9, 5, 8, 9, 7]. The integers don't have special meanings; they are just a representation of the words. The same integer represents the same word, and different integers represent different words.

The labels are also integers, where 1 represents positive and 0 represents negative.

Then, we pad the data to the same length.
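Before moving on to padding, here is a minimal sketch that makes the tokenization step concrete. It is not part of the original notebook, and the integer IDs it produces are arbitrary (they differ from Keras' actual IMDB word index):

```python
# Toy tokenization: split a sentence into words and map each word to an integer.
sentence = "How are you? How are you doing?"
words = [w.strip("?.,!").lower() for w in sentence.split()]
print(words)  # ['how', 'are', 'you', 'how', 'are', 'you', 'doing']

# Assign an arbitrary integer ID to each distinct word (same word -> same ID).
vocab = {}
ids = [vocab.setdefault(w, len(vocab) + 1) for w in words]
print(ids)  # [1, 2, 3, 1, 2, 3, 4]
```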
# Pad the sequence to length max_len.
maxlen = 100

print(len(train_data[0]))
print(len(train_data[1]))

train_data = sequence.pad_sequences(train_data, maxlen=maxlen)
test_data = sequence.pad_sequences(test_data, maxlen=maxlen)

print(train_data.shape)
print(train_labels.shape)
218 189 (25000, 100) (25000,)
MIT
3.3-RNN.ipynb
datamllab/automl-in-action-notebooks
Building your network

The next step is to build your neural network model and train it. We will introduce the neural network in three steps. The first step is the embedding, which transforms each integer list into a list of vectors. The second step is to feed the vectors to the recurrent neural network. The third step is to use the output of the recurrent neural network for classification.

Embedding

Embedding means finding a corresponding numerical vector for each word, which is now an integer in the list. The numerical vector can be seen as the coordinate of a point in space. We embed the words into specific points in space. That is why we call the process embedding. To implement it, we use a Keras Embedding layer. First, we need to create a Keras Sequential model. Then, we add the layers one by one to the model. The order of the layers is from the input to the output.
from tensorflow.keras.layers import Embedding
from tensorflow.keras import Sequential

max_words = 10000
embedding_dim = 32

model = Sequential()
model.add(Embedding(max_words, embedding_dim))
model.summary()
Model: "sequential" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= embedding (Embedding) (None, None, 32) 320000 ================================================================= Total params: 320,000 Trainable params: 320,000 Non-trainable params: 0 _________________________________________________________________
MIT
3.3-RNN.ipynb
datamllab/automl-in-action-notebooks
In the code above, we initialized an Embedding layer. The max_words is the vocabulary size, an integer meaning how many different words there are in the input data. The embedding_dim of 32 means the length of the vector representation for each word is 32. The output tensor of the Embedding layer is (batch_size, max_len, embedding_dim).

Recurrent Neural Networks

After the embedding layer, we need to use a recurrent neural network for the classification. Recurrent neural networks can handle sequential inputs. For example, we input a movie review to it as a sequence of word embedding vectors, which are the output of the embedding layer. Each vector has a length of 32, and each review contains 100 vectors. If we see the RNN as a whole, it takes 100 vectors of length 32 altogether. However, in the real case, it takes one vector at a time. Each time the RNN takes in a word embedding vector, it not only takes the word embedding vector, but another state vector as well. You can think of the state vector as the memory of the RNN. It memorizes the previous words it has taken as input. In the first step, the RNN has no previous words to remember. It takes an initial state, which is usually empty, and the first word embedding vector as input. The output of the first step is actually the state to be input to the second step. For the rest of the steps, the RNN will just take the previous output and the current input as input, and output the state for the next step. For the last step, the output state is the final output we will use for the classification. We can use the following Python code to illustrate the process.

```python
state = [0] * 32
for i in range(100):
    state = rnn(embedding[i], state)
return state
```

The returned state is the final output of the RNN. Sometimes, we may also need to collect the output of each step as shown in the following code.

```python
state = [0] * 32
output = []
for i in range(100):
    state = rnn(embedding[i], state)
    output.append(state)
return output
```

As the code above shows, the output of an RNN can also be a sequence of vectors, which is the same format as the input to the RNN. Therefore, we can make the RNN deeper by stacking multiple RNN layers together. To implement the RNN described, we need the SimpleRNN layer in Keras.
from tensorflow.keras.layers import SimpleRNN

model.add(SimpleRNN(embedding_dim, return_sequences=True))
model.add(SimpleRNN(embedding_dim, return_sequences=True))
model.add(SimpleRNN(embedding_dim, return_sequences=True))
model.add(SimpleRNN(embedding_dim))
model.summary()
Model: "sequential" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= embedding (Embedding) (None, None, 32) 320000 _________________________________________________________________ simple_rnn (SimpleRNN) (None, None, 32) 2080 _________________________________________________________________ simple_rnn_1 (SimpleRNN) (None, None, 32) 2080 _________________________________________________________________ simple_rnn_2 (SimpleRNN) (None, None, 32) 2080 _________________________________________________________________ simple_rnn_3 (SimpleRNN) (None, 32) 2080 ================================================================= Total params: 328,320 Trainable params: 328,320 Non-trainable params: 0 _________________________________________________________________
MIT
3.3-RNN.ipynb
datamllab/automl-in-action-notebooks
The return_sequences parameter controls whether to collect all the output vectors of an RNN or only the last output. It is set to False by default.

Classification Head

Then we will use the output of the last SimpleRNN layer, which is a vector of length 32, as the input to the classification head. In the classification head, we use a fully-connected layer for the classification. Then we compile and train the model.
from tensorflow.keras.layers import Dense

model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam',
              metrics=['acc'],
              loss='binary_crossentropy')
model.fit(train_data, train_labels, epochs=2, batch_size=128)
Epoch 1/2 196/196 [==============================] - 30s 119ms/step - loss: 0.6258 - acc: 0.6111 Epoch 2/2 196/196 [==============================] - 23s 117ms/step - loss: 0.3361 - acc: 0.8573
MIT
3.3-RNN.ipynb
datamllab/automl-in-action-notebooks
Then we can validate our model on the testing data.
model.evaluate(test_data, test_labels)
782/782 [==============================] - 28s 35ms/step - loss: 0.3684 - acc: 0.8402
MIT
3.3-RNN.ipynb
datamllab/automl-in-action-notebooks
Business Results

Property purchase recommendation: To decide which properties should be bought, we will compare the properties by zipcode and select the houses priced below the area's median. As explained earlier, we will be selecting the houses whose 'condition' is greater than or equal to 3.
import pandas as pd

data = pd.read_csv('datasets/kc_house_clean.csv')
pd.set_option('display.float_format', lambda x: '%.2f' % x)
pd.options.display.max_columns = None
pd.options.display.max_rows = None

# compare against the 'zipcode' median: if the price is below the median and condition >= 3, then buy
df = data[['price','id','zipcode','condition']].copy()
recomendations = df[['zipcode','price']].groupby('zipcode').median().reset_index()
recomendations.columns = ['zipcode','median_price']
df = pd.merge(df, recomendations, on='zipcode', how='inner')

df['recomendations'] = 'na'
for i in range(len(df)):
    if (df.loc[i,'price'] < df.loc[i,'median_price'])&(df.loc[i,'condition'] >= 3):
        df.loc[i,'recomendations'] = 'comprar'
    else:
        df.loc[i,'recomendations'] = 'não comprar'

df.to_csv('datasets/recomendacoes_compras.csv')
df.head()

# Property sale recommendation:
# To decide the best moment to sell the properties, we compare them by 'zipcode' and by season.
# If the price is lower than the median, we add 30% to the value.
# If the price is higher than the median, we add 10% to the value.
data = pd.read_csv('datasets/kc_house_clean.csv')
df1 = data[['price','date','zipcode','id']].copy()
df1['date'] = pd.to_datetime(df1['date']).dt.month
df1['season'] = df1['date'].apply(lambda x: 'spring' if ( x >= 3 )&( x <= 5 )
                                  else 'summer' if ( x >= 6 )&( x <= 8 )
                                  else 'fall' if ( x >= 9 )&( x <= 11 )
                                  else 'winter')

estacoes = df1[['zipcode','season','price']].groupby(['zipcode','season']).median().reset_index()
estacoes.columns = ['zipcode','season','median_price']
estacoes['zip_season'] = estacoes['zipcode'].astype(str) + "_" + estacoes['season'].astype(str)
estacoes = estacoes.drop(['zipcode','season'], axis = 1)

df1['zip_season'] = df1['zipcode'].astype(str) + "_" + df1['season'].astype(str)
df1 = pd.merge( df1, estacoes, on='zip_season', how='inner')

df1['venda'] = 'na'
for i in range(len(data)):
    if (df1.loc[i,'price'] <= df1.loc[i,'median_price']):
        df1.loc[i,'venda'] = df1.loc[i,'price'] * 1.30
    else:
        df1.loc[i,'venda'] = df1.loc[i,'price'] * 1.10

df1.to_csv('datasets/recomendacoes_venda.csv')
df1.head()
_____no_output_____
MIT
business_results.ipynb
gustweing/house_rocket
[mlcourse.ai](https://mlcourse.ai) – Open Machine Learning Course. Author: [Yury Kashnitskiy](https://yorko.github.io). Translated by [Sergey Oreshkov](https://www.linkedin.com/in/sergeoreshkov/). This material is subject to the terms and conditions of the [Creative Commons CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license. Free use is permitted for any non-commercial purpose.

Assignment 8 (demo). Solution. Implementation of an online regressor

**Same assignment as a [Kaggle Kernel](https://www.kaggle.com/kashnitsky/a8-demo-implementing-online-regressor) + [solution](https://www.kaggle.com/kashnitsky/a8-demo-implementing-online-regressor-solution).**

Here we'll implement a regressor trained with stochastic gradient descent (SGD). Fill in the missing code. If you do everything right, you'll pass a simple embedded test.

Linear regression and Stochastic Gradient Descent
import numpy as np
import pandas as pd
from sklearn.base import BaseEstimator
from sklearn.metrics import log_loss, mean_squared_error, roc_auc_score
from sklearn.model_selection import train_test_split
from tqdm import tqdm

%matplotlib inline
import seaborn as sns
from matplotlib import pyplot as plt
from sklearn.preprocessing import StandardScaler
_____no_output_____
Unlicense
jupyter_english/assignments_demo/assignment08_implement_sgd_regressor_solution.ipynb
salman394/AI-ml--course
Implement class `SGDRegressor`. Specification:

- the class is inherited from `sklearn.base.BaseEstimator`
- the constructor takes parameters `eta` – gradient step ($10^{-3}$ by default) and `n_epochs` – dataset pass count (3 by default)
- the constructor also creates `mse_` and `weights_` lists in order to track mean squared error and the weight vector during gradient descent iterations
- the class has `fit` and `predict` methods
- the `fit` method takes matrix `X` and vector `y` (`numpy.array` objects) as parameters, appends a column of ones to `X` on the left side, initializes the weight vector `w` with **zeros** and then makes `n_epochs` iterations of weight updates (you may refer to this [article](https://medium.com/open-machine-learning-course/open-machine-learning-course-topic-8-vowpal-wabbit-fast-learning-with-gigabytes-of-data-60f750086237) for details), and for every iteration logs the mean squared error and the weight vector `w` in the corresponding lists created in the constructor
- additionally, the `fit` method creates a `w_` variable to store the weights which produce the minimal mean squared error
- the `fit` method returns the current instance of the `SGDRegressor` class, i.e. `self`
- the `predict` method takes matrix `X`, adds a column of ones to the left side and returns a prediction vector, using the weight vector `w_` created by the `fit` method
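For reference, the per-sample update implemented in the solution below can be written as follows (a sketch of the SGD step for squared error, with learning rate $\eta$ and $x_i$ already prepended with the constant 1):

$$ w_0 \leftarrow w_0 + \eta\,\big(y_i - \langle w, x_i\rangle\big), \qquad w_j \leftarrow w_j + \eta\,\big(y_i - \langle w, x_i\rangle\big)\, x_{ij}, \quad j \ge 1 $$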
class SGDRegressor(BaseEstimator):
    def __init__(self, eta=1e-3, n_epochs=3):
        self.eta = eta
        self.n_epochs = n_epochs
        self.mse_ = []
        self.weights_ = []

    def fit(self, X, y):
        X = np.hstack([np.ones([X.shape[0], 1]), X])
        w = np.zeros(X.shape[1])
        for it in tqdm(range(self.n_epochs)):
            for i in range(X.shape[0]):
                new_w = w.copy()
                new_w[0] += self.eta * (y[i] - w.dot(X[i, :]))
                for j in range(1, X.shape[1]):
                    new_w[j] += self.eta * (y[i] - w.dot(X[i, :])) * X[i, j]
                w = new_w.copy()
                self.weights_.append(w)
                self.mse_.append(mean_squared_error(y, X.dot(w)))
        self.w_ = self.weights_[np.argmin(self.mse_)]
        return self

    def predict(self, X):
        X = np.hstack([np.ones([X.shape[0], 1]), X])
        return X.dot(self.w_)
_____no_output_____
Unlicense
jupyter_english/assignments_demo/assignment08_implement_sgd_regressor_solution.ipynb
salman394/AI-ml--course
Let's test out the algorithm on height/weight data. We will predict heights (in inches) based on weights (in lbs).
data_demo = pd.read_csv("../../data/weights_heights.csv")

plt.scatter(data_demo["Weight"], data_demo["Height"])
plt.xlabel("Weight (lbs)")
plt.ylabel("Height (Inch)")
plt.grid();

X, y = data_demo["Weight"].values, data_demo["Height"].values
_____no_output_____
Unlicense
jupyter_english/assignments_demo/assignment08_implement_sgd_regressor_solution.ipynb
salman394/AI-ml--course
Perform train/test split and scale data.
X_train, X_valid, y_train, y_valid = train_test_split(
    X, y, test_size=0.3, random_state=17
)

scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train.reshape([-1, 1]))
X_valid_scaled = scaler.transform(X_valid.reshape([-1, 1]))
_____no_output_____
Unlicense
jupyter_english/assignments_demo/assignment08_implement_sgd_regressor_solution.ipynb
salman394/AI-ml--course
Train created `SGDRegressor` with `(X_train_scaled, y_train)` data. Leave default parameter values for now.
# you code here
sgd_reg = SGDRegressor()
sgd_reg.fit(X_train_scaled, y_train)
_____no_output_____
Unlicense
jupyter_english/assignments_demo/assignment08_implement_sgd_regressor_solution.ipynb
salman394/AI-ml--course
Draw a chart of the training process – the dependency of mean squared error on the SGD iteration number.
# you code here
plt.plot(range(len(sgd_reg.mse_)), sgd_reg.mse_)
plt.xlabel("#updates")
plt.ylabel("MSE");
_____no_output_____
Unlicense
jupyter_english/assignments_demo/assignment08_implement_sgd_regressor_solution.ipynb
salman394/AI-ml--course
Print the minimal value of mean squared error and the best weights vector.
# you code here
np.min(sgd_reg.mse_), sgd_reg.w_
_____no_output_____
Unlicense
jupyter_english/assignments_demo/assignment08_implement_sgd_regressor_solution.ipynb
salman394/AI-ml--course
Draw chart of model weights ($w_0$ and $w_1$) behavior during training.
# you code here
plt.subplot(121)
plt.plot(range(len(sgd_reg.weights_)), [w[0] for w in sgd_reg.weights_])
plt.subplot(122)
plt.plot(range(len(sgd_reg.weights_)), [w[1] for w in sgd_reg.weights_]);
_____no_output_____
Unlicense
jupyter_english/assignments_demo/assignment08_implement_sgd_regressor_solution.ipynb
salman394/AI-ml--course
Make a prediction for hold-out set `(X_valid_scaled, y_valid)` and check MSE value.
# you code here
sgd_holdout_mse = mean_squared_error(y_valid, sgd_reg.predict(X_valid_scaled))
sgd_holdout_mse
_____no_output_____
Unlicense
jupyter_english/assignments_demo/assignment08_implement_sgd_regressor_solution.ipynb
salman394/AI-ml--course
Do the same thing for `LinearRegression` class from `sklearn.linear_model`. Evaluate MSE for hold-out set.
# you code here
from sklearn.linear_model import LinearRegression

lm = LinearRegression().fit(X_train_scaled, y_train)
print(lm.coef_, lm.intercept_)
linreg_holdout_mse = mean_squared_error(y_valid, lm.predict(X_valid_scaled))
linreg_holdout_mse

try:
    assert (sgd_holdout_mse - linreg_holdout_mse) < 1e-4
    print("Correct!")
except AssertionError:
    print(
        "Something's not good.\n Linreg's holdout MSE: {}"
        "\n SGD's holdout MSE: {}".format(linreg_holdout_mse, sgd_holdout_mse)
    )
_____no_output_____
Unlicense
jupyter_english/assignments_demo/assignment08_implement_sgd_regressor_solution.ipynb
salman394/AI-ml--course
Dynamic Costs Reporting

Calculate DV360 cost at the dynamic creative combination level.

License

Copyright 2020 Google LLC.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0. Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Disclaimer

This is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team.

This code was generated (see starthinker/scripts for possible source):
- **Command**: "python starthinker_ui/manage.py colab"
- **Command**: "python starthinker/tools/colab.py [JSON RECIPE]"

1. Install Dependencies

First install the libraries needed to execute recipes; this only needs to be done once, then click play.
!pip install git+https://github.com/google/starthinker
_____no_output_____
Apache-2.0
colabs/dynamic_costs.ipynb
Ressmann/starthinker
2. Set Configuration

This code is required to initialize the project. Fill in required fields and press play.

1. If the recipe uses a Google Cloud Project:
   - Set the configuration **project** value to the project identifier from [these instructions](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md).
1. If the recipe has **auth** set to **user**:
   - If you have user credentials:
     - Set the configuration **user** value to your user credentials JSON.
   - If you DO NOT have user credentials:
     - Set the configuration **client** value to [downloaded client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md).
1. If the recipe has **auth** set to **service**:
   - Set the configuration **service** value to [downloaded service credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_service.md).
from starthinker.util.configuration import Configuration

CONFIG = Configuration(
    project="",
    client={},
    service={},
    user="/content/user.json",
    verbose=True
)
_____no_output_____
Apache-2.0
colabs/dynamic_costs.ipynb
Ressmann/starthinker
3. Enter Dynamic Costs Reporting Recipe Parameters

1. Add a sheet URL. This is where you will enter advertiser and campaign level details.
1. Specify the CM network ID.
1. Click run now once, and a tab called Dynamic Costs will be added to the sheet with instructions.
1. Follow the instructions on the sheet; this will be your configuration.
1. StarThinker will create two or three (depending on the case) reports in CM named Dynamic Costs - ....
1. Wait for BigQuery->->->Dynamic_Costs_Analysis to be created or click Run Now.
1. Copy Dynamic Costs Sample Data ( Copy From This ).
1. Click Edit Connection, and Change to BigQuery->->->Dynamic_Costs_Analysis.
1. Copy Dynamic Costs Sample Report ( Copy From This ).
1. When prompted, choose the new data source you just created.
1. Edit the table to include or exclude columns as desired.
1. Or, give the dashboard connection instructions to the client.

Modify the values below for your use case; this can be done multiple times, then click play.
FIELDS = {
    'dcm_account': '',
    'auth_read': 'user',  # Credentials used for reading data.
    'configuration_sheet_url': '',
    'auth_write': 'service',  # Credentials used for writing data.
    'bigquery_dataset': 'dynamic_costs',
}

print("Parameters Set To: %s" % FIELDS)
_____no_output_____
Apache-2.0
colabs/dynamic_costs.ipynb
Ressmann/starthinker
4. Execute Dynamic Costs Reporting

This does NOT need to be modified unless you are changing the recipe; just click play.
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields

TASKS = [
    {
        'dynamic_costs': {
            'auth': 'user',
            'account': {'field': {'name': 'dcm_account', 'kind': 'string', 'order': 0, 'default': ''}},
            'sheet': {
                'template': {
                    'url': 'https://docs.google.com/spreadsheets/d/19J-Hjln2wd1E0aeG3JDgKQN9TVGRLWxIEUQSmmQetJc/edit?usp=sharing',
                    'tab': 'Dynamic Costs',
                    'range': 'A1'
                },
                'url': {'field': {'name': 'configuration_sheet_url', 'kind': 'string', 'order': 1, 'default': ''}},
                'tab': 'Dynamic Costs',
                'range': 'A2:B'
            },
            'out': {
                'auth': 'user',
                'dataset': {'field': {'name': 'bigquery_dataset', 'kind': 'string', 'order': 2, 'default': 'dynamic_costs'}}
            }
        }
    }
]

json_set_fields(TASKS, FIELDS)

execute(CONFIG, TASKS, force=True)
_____no_output_____
Apache-2.0
colabs/dynamic_costs.ipynb
Ressmann/starthinker
Chapter 2: Decision Making and Neutrosophy. Euclidean distance between SVN (single-valued neutrosophic) numbers
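As a reference for the implementation below, the Euclidean distance between two vectors $A$ and $B$ of single-valued neutrosophic numbers $A_i = (T_{A_i}, I_{A_i}, F_{A_i})$ is

$$ d(A, B) = \sqrt{\frac{1}{3} \sum_{i=1}^{n} \Big[ (T_{A_i} - T_{B_i})^2 + (I_{A_i} - I_{B_i})^2 + (F_{A_i} - F_{B_i})^2 \Big]} $$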
def euclideanNeu(a1,a2):
    a=0
    c=len(a1)
    for i in range(c):
        a=a+pow(a1[i][0]-a2[i][0],2)+pow(a1[i][1]-a2[i][1],2)+pow(a1[i][2]-a2[i][2],2)
    a=pow(1.0/3.0*a,0.5)
    return(a)
_____no_output_____
MIT
Cap2..ipynb
mleyvaz/Neutrosofia
Example of using the Euclidean distance
EB=(1,0,0)
MMB=(0.9, 0.1, 0.1)
MB=(0.8,0.15,0.20)
B=(0.70,0.25,0.30)
MDB=(0.60,0.35,0.40)
M=(0.50,0.50,0.50)
MDM=(0.40,0.65,0.60)
MA=(0.30,0.75,0.70)
MM=(0.20,0.85,0.80)
MMM=(0.10,0.90,0.90)
EM=(0,1,1)

r1=[MDB,B,B]
i=[MMB, MMB, MB]
euclideanNeu(r1,i)
_____no_output_____
MIT
Cap2..ipynb
mleyvaz/Neutrosofia
SVNWA operator
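The SVNWA (single-valued neutrosophic weighted averaging) operator computed by the function below, for SVN numbers $A_j = (T_j, I_j, F_j)$ and weights $w_j$, is

$$ \mathrm{SVNWA}(A_1, \dots, A_n) = \left( 1 - \prod_{j=1}^{n} (1 - T_j)^{w_j},\; \prod_{j=1}^{n} I_j^{w_j},\; \prod_{j=1}^{n} F_j^{w_j} \right) $$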
def SVNWA(list,W):
    t=1
    i=1
    f=1
    c=0
    for j in list:
        t=t*(1-j[0])**W[c]
        i=i*j[1]**W[c]
        f=f*j[2]**W[c]
        c=c+1
    return (1-t,i,f)
_____no_output_____
MIT
Cap2..ipynb
mleyvaz/Neutrosofia
Example of using SVNWA
A=[MDB,B,MDB]
W = [0.55, 0.26, 0.19]  # W: weight vector
SVNWA(A,W)
_____no_output_____
MIT
Cap2..ipynb
mleyvaz/Neutrosofia
SVNGA operator
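The SVNGA operator computed below is a weighted geometric aggregation of the components:

$$ \mathrm{SVNGA}(A_1, \dots, A_n) = \left( \prod_{j=1}^{n} T_j^{w_j},\; \prod_{j=1}^{n} I_j^{w_j},\; \prod_{j=1}^{n} F_j^{w_j} \right) $$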
def SVNGA(list,W):
    t=1
    i=1
    f=1
    c=0
    for j in list:
        t=t*j[0]**W[c]
        i=i*j[1]**W[c]
        f=f*j[2]**W[c]
        c=c+1
    return (t,i,f)

A=[MDB,B,MDB]
W = [0.55, 0.26, 0.19]  # W: weight vector
SVNGA(A,W)
_____no_output_____
MIT
Cap2..ipynb
mleyvaz/Neutrosofia
---
layout: post
title: Project Euler - Problem 5
post-order: 005
---

2520 is the smallest number that can be divided by each of the numbers from 1 to 10 without any remainder. What is the smallest positive number that is **evenly divisible** by all of the numbers from 1 to 20?

Solution 1

Let's brute-force it and see what happens. One thing we know: the number we are trying to find **cannot be** smaller than the product of all primes up to $20$. The reason is that the number we are trying to find has to be divisible by every number from $1$ to $20$, and the only way that can be accomplished is if each prime factor (for example $13$) is in that product.
import math
import time

# let's settle the "floor" first: the product of all primes from 1 to 20
def primes_up_to(n):
    product = 2
    for candidate_for_prime in range(3, n + 1):
        for i in range(2, candidate_for_prime):
            if (candidate_for_prime % i == 0):
                break
            elif ((candidate_for_prime % i != 0) & (i + 1 == candidate_for_prime)):
                product *= candidate_for_prime
    return product

start = time.time()

divisible_up_to = 20
floor = primes_up_to(divisible_up_to)

# the highest we can go is to n! (n factorial)
ceiling = math.factorial(divisible_up_to)

found = False
for ii in range(floor, ceiling):
    for i in range(2, divisible_up_to + 1):
        if ii % i != 0:
            break
        elif i == divisible_up_to:
            print(ii)
            elapsed = time.time() - start
            print(elapsed)
            found = True
            break
    if found:
        break
232792560 162.78695583343506
MIT
_posts/Problem-005.ipynb
bru1987/euler
As you may notice, the running time is extremely high (and it will get higher if we choose a greater `divisible_up_to`). We need to find a more efficient way to tackle this problem.

Solution 2 - Greatest power of primes

The solution for this problem actually requires **no computation**. The reason is that if we find the prime factorization of each number up to 20 and multiply the greatest power of each prime, we will find the correct solution.

$$\begin{align}
2 &= 2^1\\
3 &= 3^1\\
4 &= 2^2\\
5 &= 5^1\\
&\vdots\\
18 &= 2^1 \cdot 3^2\\
19 &= 19^1\\
20 &= 2^2 \cdot 5^1
\end{align}$$

We can take advantage of this and build a list with the prime factors and a list with each prime factor's greatest power that shows up from 1 to `divisible_up_to`. After that, we evaluate the first item of the list of primes raised to the first item of the list of powers, multiplied by the second item of the list of primes raised to the second item of the list of powers, and so on.

| LISTS          |   |   |   |   |   |   |   |   |   |   |
|----------------|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| list of primes | [ | 2 | 3 | 5 | 7 | 11| 13| 17| 19| ] |
| list of powers | [ | 4 | 2 | 1 | 1 | 1 | 1 | 1 | 1 | ] |

Answer = $2^4 \cdot 3^2 \cdot 5^1 \cdot 7^1 \cdot 11^1 \cdot 13^1 \cdot 17^1 \cdot 19^1 = 232792560$
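Before building this up step by step, here is a minimal self-contained sketch (not the post's original code, which follows below) that computes the same answer directly as the product of the greatest prime powers not exceeding 20:

```python
# Compute the answer as the product of the greatest power of each prime <= limit.
limit = 20

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, n))

answer = 1
for p in (n for n in range(2, limit + 1) if is_prime(n)):
    power = p
    while power * p <= limit:   # greatest power of p that is still <= limit
        power *= p
    answer *= power

print(answer)   # 232792560
```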
def primes_list(n):
    list_of_primes = [1,2]
    for i in range (3,n+1):
        for ii in range (2,i):
            if (i % ii == 0):
                break
            elif ((i % ii != 0) & (ii + 1 == i)):
                list_of_primes.append(i)
    return list_of_primes

list_of_primes = primes_list(20)
print (list_of_primes)
[1, 2, 3, 5, 7, 11, 13, 17, 19]
MIT
_posts/Problem-005.ipynb
bru1987/euler
Let's build a new list with the same number of elements, but now filled with ones. It will receive the greatest power for each prime:
list_of_powers = [1] * len(list_of_primes)
print(list_of_powers)
[1, 1, 1, 1, 1, 1, 1, 1, 1]
MIT
_posts/Problem-005.ipynb
bru1987/euler
Now we need to check what is the greatest power that shows up, for each of the primes, from 1 to 20.
for i in list_of_primes:
    print (i)
    # if it's prime, there's no need to check: the power will be 1
    # do the prime factorization of all numbers up to 20
1 2 3 5 7 11 13 17 19
MIT
_posts/Problem-005.ipynb
bru1987/euler
tips on drawing histograms: bin alignment vs labels
import pandas as pd, numpy as np, seaborn as sns
import os

# Remove the most annoying pandas warning
# A value is trying to be set on a copy of a slice from a DataFrame.
pd.options.mode.chained_assignment = None

data_dir = '../../data'
src_file = 'sample01.csv'
f = os.path.join(data_dir, src_file)

import sys
sys.path.insert(0, '../modules')
import handy as hd

df = pd.read_csv(f, sep = ';')
df.shape

# remember the original data under this variable
df0 = df.copy()
_____no_output_____
MIT
notebooks/histograms.ipynb
altanova/stuff
example: 2d histograms showing bin alignments, incorrect and correct
data = df sns.set() import matplotlib.pyplot as plt; import matplotlib.colors as mcolors fig, ax = plt.subplots(1, 3, figsize=(20,4)) ax = ax.flatten() fig.suptitle("some histograms", fontsize=20, y=1.1) axis = ax[0] h = axis.hist(x = data.weekday) axis.set_xlabel('days of week (0=Mon)') axis.set_ylabel('frequency') axis.set_title("sloppy bin allignment", fontsize = 16) axis = ax[1] axis.hist(x = data.weekday, bins = 7) axis.set_xlabel('days of week (0=Mon)') axis.set_ylabel('frequency') axis.set_title("still sloppy: x labels not aligned)", fontsize = 16) axis = ax[2] axis.hist(x = data.weekday, bins = (np.arange(7 + 1)) - 0.5) axis.set_xlabel('days of week (0=Mon)') axis.set_ylabel('frequency') axis.set_title("correct bin allignment", fontsize = 16) plt.tight_layout() data = df import matplotlib.pyplot as plt; import matplotlib.colors as mcolors fig, ax = plt.subplots(1, 2, figsize=(15,6)) fig.suptitle("two 2d histograms", fontsize=20, y=1.1) ax1 = ax[0] # gammas = [0.8, 0.5, 0.3] gamma = 0.4 h = ax1.hist2d(x = data.weekday, y = data.hour, bins = [7, 24], norm=mcolors.PowerNorm(gamma), cmap='Blues') cb = fig.colorbar(h[3], ax=ax1) cb.set_label('incidents per bin') ax1.set_xlabel('days of week (0=Mon)') ax1.set_ylabel('hours of day') ax1.set_title("sloppy bin allignment", fontsize = 16) plt.tight_layout() ax1 =ax[1] xbins = np.arange(0, 7 + 1) - 0.5 ybins = np.arange(0, 24 + 1) - 0.5 h = ax1.hist2d(x = data.weekday, y = data.hour, bins = [xbins, ybins], norm=mcolors.PowerNorm(gamma), cmap='Blues') #vmax = 100 ) cb = fig.colorbar(h[3], ax=ax1) cb.set_label('incidents per bin') ax1.set_xlabel('days of week (0=Mon)') ax1.set_ylabel('hours of day') ax1.set_title("bins aligned correctly", fontsize = 16) plt.tight_layout()
_____no_output_____
MIT
notebooks/histograms.ipynb
altanova/stuff
same problem with Seaborn (v 0.11) displot or jointplot, type hist
# bad
sns.jointplot(x="weekday", y="hour", data=df.sample(1000), kind='hist', ax = ax[0])

# still bad
sns.jointplot(x="weekday", y="hour", data=df.sample(1000), kind='hist', bins = [7, 24])

# good
bins = (np.arange(7 + 1)-0.5, np.arange(24 + 1) - 0.5)
sns.jointplot(x="weekday", y="hour", data=df.sample(1000), kind='hist', bins = bins)

# but the problem is gone, if displot type is other than kde
sns.jointplot(x="weekday", y="hour", data=df.sample(1000), kind='kde', xlim=(0,6), ylim=(0,24))
sns.jointplot(x="weekday", y="hour", cmap = 'coolwarm', data=df.sample(10000), kind='kde', fill=True)

# same, with contour only
sns.jointplot(x="weekday", y="hour", cmap = 'coolwarm', data=df.sample(10000), kind='kde')
_____no_output_____
MIT
notebooks/histograms.ipynb
altanova/stuff
more problems with bin alignment
# I will now demonstrate how bad bins lead to bad conclusions # store previous df under separate variable df0 = df data_dir = '../../data' src_file = 'sample02.csv' f = os.path.join(data_dir, src_file) df = pd.read_csv(f, sep = ';') df['created'] = pd.to_datetime(df['created'], format = hd.format_dash, errors = 'coerce') df['resolved'] = pd.to_datetime(df['resolved'], format = hd.format_dash, errors = 'coerce') df = hd.augment_columns(df) # remember this augmented data set df1 = df minweek, maxweek = df.week_nr.min(), df.week_nr.max() minweek, maxweek data1 = df[(df.week_nr == maxweek -1) & (df.category == 'Alarm')].weekhour data2 = df[(df.week_nr == maxweek -1)].weekhour #INCORRECT slots = 24 * 7 bins_sloppy = slots fig, ax = plt.subplots(2,1, figsize = (25,4)) axis = ax[0] w1 = axis.hist(x= data1, bins = bins_sloppy) axis.set_title('category 1, sloppy bins', loc = 'left', fontsize = 16) axis = ax[1] w2 = axis.hist(x= data2, bins = bins_sloppy) axis.set_title('category 2, sloppy bins', loc = 'left', fontsize = 16) plt.tight_layout() # the visualization above is wrong. Bars are mislined, due to sloppy bins definition #CORRECT def my_title_markers(axis, title, marker1, marker2): axis.set_title(title, loc = 'left', fontsize = 16) axis.axvline(x=marker1, color='r', linestyle='dashed', linewidth=2, label = str(marker1)) axis.axvline(x=marker2, color='b', linestyle='dashed', linewidth=2, label = str(marker2)) axis.legend(loc = 'upper left', fontsize = '12') slots = 24 * 7 sloppy_bins = slots bins = np.arange(slots + 1) - 0.5 fig, ax = plt.subplots(4,1, figsize = (25,10)) marker1, marker2 = 8, 40 axis = ax[0] axis.hist(x= data1, bins = sloppy_bins) my_title_markers(axis, 'category 1, sloppy bins', marker1, marker2) axis = ax[1] axis.hist(x= data2, bins = sloppy_bins) my_title_markers(axis, 'category 2, sloppy bins', marker1, marker2) axis = ax[2] w1 = axis.hist(x= data1, bins = bins) my_title_markers(axis, 'category 1, correct bins', marker1, marker2) axis = ax[3] w2 = axis.hist(x= data2, bins = bins) my_title_markers(axis, 'category 2, correct bins', marker1, marker2) plt.tight_layout()
_____no_output_____
MIT
notebooks/histograms.ipynb
altanova/stuff
Problems with binning and rounding days and weeks (pd.Timestamp)

The below is the long version. The compact version has been summarized in the handy-showcase workbook.
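The core trick in the long cell below is snapping a timestamp to the Monday midnight before or after it, so that weekly bins always start on a Monday. A minimal runnable sketch of that idea, with the two helpers copied from the cell below and a hypothetical example timestamp:

```python
import pandas as pd

def monday_before(now):
    # Subtract the weekday offset (Mon=0 ... Sun=6), then drop the time of day.
    monday = now - pd.Timedelta(now.weekday(), 'days')
    return pd.Timestamp(monday.date())

def monday_after(now):
    # Trick: the Monday before "one week from now" is the Monday after "now".
    return monday_before(now + pd.Timedelta(7, 'days'))

ts = pd.Timestamp('2021-03-10 14:30')   # a Wednesday
print(monday_before(ts))                # 2021-03-08 00:00:00
print(monday_after(ts))                 # 2021-03-15 00:00:00
```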
data_dir = '../../data' src_file = 'sample01.csv' f = os.path.join(data_dir, src_file) df = pd.read_csv(f, sep = ';') df['created'] = pd.to_datetime(df['created'], format = hd.format_dash, errors = 'coerce') df['resolved'] = pd.to_datetime(df['resolved'], format = hd.format_dash, errors = 'coerce') df = hd.augment_columns(df) import seaborn as sns, matplotlib.pyplot as plt sns.set() start, end = df.created.min(), df.created.max() days = (end - start).days weeks = days / 7 fig, ax = plt.subplots(1,1, figsize = (25,3)) data = df[(df.created > start) & (df.created < end)].created axis = ax w = axis.hist(x= data, bins = int(weeks)) axis.set_title('Naive (incorrect) weekly histogram (1 bar = 1 week)', fontsize = 20) plt.show() print('Basic statistics:\n') print('Total records:\t{}'.format(len(df))) print('start:\t{}\t{}\nend:\t{}\t{}'.format(start, start.day_name(), end, end.day_name())) print('weeks: {:.1f}\trecords per week:{:.1f},\t weekly min:{},\t weekly max:{}'.format( weeks, len(df) / weeks, int(min(w[0])), int(max(w[0])))) print('days: {}\trecords per day:{:.1f}'.format(days, len(df) / days)) # return Monday 00:00:00 before given moment def monday_before(now): monday_before = now - pd.Timedelta(now.weekday(), 'days') # Monday 00:00:00 return pd.Timestamp(monday_before.date()) # return Monday 00:00:00 after given moment def monday_after(now): # trick: compute Monday before 1 week from now... it's the same. return monday_before(now + pd.Timedelta(7, 'days')) # use this to have full week span, spanning tne entire period # returns: Monday before, Monday after, number of weeks between def outer_week_boundaries(series): start, end = monday_before(series.min()), monday_after(series.max()) return start, end, (end - start).days // 7 def inner_week_boundaries(series): start, end = monday_after(series.min()), monday_before(series.max()) return start, end, (end - start).days // 7 # exact number of days, including fraction of day (float) def fractional_days(data_start, data_end): delta = data_end - data_start return delta.days + delta.seconds / (60 * 60 * 24) # number of full 24-hour periods def inner_days(data_start, data_end): return (data_end - data_start).days # number of days between midnight-before-first-record and midnight-after-last-record def outer_days(data_start, data_end): return (data_end.date() - data_start.date()).days + 1 def weekly_bin_edges(start, howmany): # add 1 for we count bin edges rather than bins WEEK = pd.Timedelta(7, 'days') return [outer_start + i * WEEK for i in np.arange(howmany + 1)] def daily_bin_edges(start, howmany): # add 1 for we count bin edges rather than bins DAY = pd.Timedelta(1, 'days') return [data_start.date() + i * DAY for i in np.arange(howmany + 1)] # weekly bins #s = weekly_statistics() outer_start, outer_end, outer_weeks = outer_week_boundaries(df.created) inner_start, inner_end, inner_weeks = inner_week_boundaries(df.created) data_start, data_end = df.created.min(), df.created.max() weekly_bins = weekly_bin_edges(outer_start, outer_weeks) days = fractional_days(data_start, data_end) outer_days = outer_days(data_start, data_end) daily_bins = daily_bin_edges(data_start, outer_days) weeks = days / 7 def draw(axis, outer_start, outer_end, inner_start, inner_end): axis.axvline(x=inner_start, color='r', linestyle='dashed', linewidth=2, label = 'inner (full) weeks range') axis.axvline(x=outer_start, color='b', linestyle='dashed', linewidth=2, label = 'outer (incomplete) weeks range') axis.axvline(x=inner_end, color='r', linestyle='dashed', linewidth=2) 
axis.axvline(x=outer_end, color='b', linestyle='dashed', linewidth=2) axis.legend() fig, ax = plt.subplots(2,1, figsize = (25,6)) data = df.created axis = ax[0] w = axis.hist(x= data, bins = weekly_bins) draw(axis, outer_start, outer_end, inner_start, inner_end) axis.set_title('Correct weekly histogram (1 bar = 1 week)', fontsize = 20) week_values = w[0] fullweek_values = week_values[1:-1] axis = ax[1] draw(axis, outer_start, outer_end, inner_start, inner_end) d = axis.hist(x= data, bins = daily_bins, edgecolor = 'black') axis.set_title('Corresponding daily histogram (1 bar = 1 day)', fontsize = 20) day_values = d[0] fullday_values = day_values[1:-1] plt.tight_layout() plt.show() print('Basic statistics:\n') print('Total records:\t{}'.format(len(df))) print('Histogram range (outer weeks):{:.0f}'.format(outer_weeks)) start, end = outer_start, outer_end print('start:\t{}\t{}\nend:\t{}\t{}'.format(start, start.day_name(), end, end.day_name())) print('Data range:') start, end = data_start, data_end print('start:\t{}\t{}\nend:\t{}\t{}'.format(start, start.day_name(), end, end.day_name())) print('Full weeks (inner weeks):{:.0f}'.format(inner_weeks)) start, end = inner_start, inner_end print('start:\t{}\t{}\nend:\t{}\t{}'.format(start, start.day_name(), end, end.day_name())) print('Data stats:') print('weeks: {:.1f}\trecords per week:{:.1f},\t weekly min:{},\t weekly max:{}'.\ format( weeks, len(df) / weeks, int(min(fullweek_values)), int(max(week_values)))) print('days: {:.1f}\trecords per day:{:.1f},\t daily min:{},\t daily max:{}'.\ format(days, len(df) / days, int(min(fullday_values)), int(max(day_values)))) print('Note: The minima do not take into account the marginal (uncomplete) weeks or days')
_____no_output_____
MIT
notebooks/histograms.ipynb
altanova/stuff
set feature model weights and distribution to good start parameters
n_dims = np.prod(train_inputs[0].shape[1:]) i_class_dims = [int(n_dims*0.25), int(n_dims * 0.75)] from reversible2.constantmemory import clear_ctx_dicts from reversible2.distribution import TwoClassDist feature_model.data_init(th.cat((train_inputs[0], train_inputs[1]), dim=0)) # Check that forward + inverse is really identical t_out = feature_model(train_inputs[0][:2]) inverted = invert(feature_model, t_out) clear_ctx_dicts(feature_model) assert th.allclose(train_inputs[0][:2], inverted, rtol=1e-3,atol=1e-4) device = list(feature_model.parameters())[0].device from reversible2.ot_exact import ot_euclidean_loss_for_samples class_dist = TwoClassDist(2, np.prod(train_inputs[0].size()[1:]) - 2, i_class_inds=i_class_dims) class_dist.cuda() for i_class in range(2): with th.no_grad(): this_outs = feature_model(train_inputs[i_class]) mean = th.mean(this_outs, dim=0) std = th.std(this_outs, dim=0) class_dist.set_mean_std(i_class, mean, std) # Just check setted_mean, setted_std = class_dist.get_mean_std(i_class) assert th.allclose(mean, setted_mean) assert th.allclose(std, setted_std) clear_ctx_dicts(feature_model) optim_model = th.optim.Adam(feature_model.parameters(), lr=1e-3, betas=(0.9,0.999)) optim_dist = th.optim.Adam(class_dist.parameters(), lr=1e-2, betas=(0.9,0.999)) %%writefile plot.py import torch as th import matplotlib.pyplot as plt import numpy as np from reversible2.util import var_to_np from reversible2.plot import display_close from matplotlib.patches import Ellipse import seaborn def plot_outs(feature_model, train_inputs, test_inputs, class_dist): with th.no_grad(): # Compute dist for mean/std of encodings data_cls_dists = [] for i_class in range(len(train_inputs)): this_class_outs = feature_model(train_inputs[i_class])[:,class_dist.i_class_inds] data_cls_dists.append( th.distributions.MultivariateNormal(th.mean(this_class_outs, dim=0), covariance_matrix=th.diag(th.std(this_class_outs, dim=0) ** 2))) for setname, set_inputs in (("Train", train_inputs), ("Test", test_inputs)): outs = [feature_model(ins) for ins in set_inputs] c_outs = [o[:,class_dist.i_class_inds] for o in outs] c_outs_all = th.cat(c_outs) cls_dists = [] for i_class in range(len(c_outs)): mean, std = class_dist.get_mean_std(i_class) cls_dists.append( th.distributions.MultivariateNormal(mean[class_dist.i_class_inds], covariance_matrix=th.diag(std[class_dist.i_class_inds] ** 2))) preds_per_class = [th.stack([cls_dists[i_cls].log_prob(c_out) for i_cls in range(len(cls_dists))], dim=-1) for c_out in c_outs] pred_labels_per_class = [np.argmax(var_to_np(preds), axis=1) for preds in preds_per_class] labels = np.concatenate([np.ones(len(set_inputs[i_cls])) * i_cls for i_cls in range(len(train_inputs))]) acc = np.mean(labels == np.concatenate(pred_labels_per_class)) data_preds_per_class = [th.stack([data_cls_dists[i_cls].log_prob(c_out) for i_cls in range(len(cls_dists))], dim=-1) for c_out in c_outs] data_pred_labels_per_class = [np.argmax(var_to_np(data_preds), axis=1) for data_preds in data_preds_per_class] data_acc = np.mean(labels == np.concatenate(data_pred_labels_per_class)) print("{:s} Accuracy: {:.1f}%".format(setname, acc * 100)) fig = plt.figure(figsize=(5,5)) ax = plt.gca() for i_class in range(len(c_outs)): #if i_class == 0: # continue o = var_to_np(c_outs[i_class]).squeeze() incorrect_pred_mask = pred_labels_per_class[i_class] != i_class plt.scatter(o[:,0], o[:,1], s=20, alpha=0.75, label=["Right", "Rest"][i_class]) assert len(incorrect_pred_mask) == len(o) plt.scatter(o[incorrect_pred_mask,0], 
o[incorrect_pred_mask,1], marker='x', color='black', alpha=1, s=5) means, stds = class_dist.get_mean_std(i_class) means = var_to_np(means)[class_dist.i_class_inds] stds = var_to_np(stds)[class_dist.i_class_inds] for sigma in [0.5,1,2,3]: ellipse = Ellipse(means, stds[0]*sigma, stds[1]*sigma) ax.add_artist(ellipse) ellipse.set_edgecolor(seaborn.color_palette()[i_class]) ellipse.set_facecolor("None") for i_class in range(len(c_outs)): o = var_to_np(c_outs[i_class]).squeeze() plt.scatter(np.mean(o[:,0]), np.mean(o[:,1]), color=seaborn.color_palette()[i_class+2], s=80, marker="^", label=["Right Mean", "Rest Mean"][i_class]) plt.title("{:6s} Accuracy: {:.1f}%\n" "From data mean/std: {:.1f}%".format(setname, acc * 100, data_acc * 100)) plt.legend(bbox_to_anchor=(1,1,0,0)) display_close(fig) return import pandas as pd df = pd.DataFrame() from reversible2.training import OTTrainer trainer = OTTrainer(feature_model, class_dist, optim_model, optim_dist) from reversible2.constantmemory import clear_ctx_dicts from reversible2.timer import Timer from plot import plot_outs from reversible2.gradient_penalty import gradient_penalty i_start_epoch_out = 4001 n_epochs = 10001 for i_epoch in range(n_epochs): epoch_row = {} with Timer(name='EpochLoop', verbose=False) as loop_time: loss_on_outs = i_epoch >= i_start_epoch_out result = trainer.train(train_inputs, loss_on_outs=loss_on_outs) epoch_row.update(result) epoch_row['runtime'] = loop_time.elapsed_secs * 1000 if i_epoch % (n_epochs // 20) != 0: df = df.append(epoch_row, ignore_index=True) # otherwise add ot loss in else: for i_class in range(len(train_inputs)): with th.no_grad(): class_ins = train_inputs[i_class] samples = class_dist.get_samples(i_class, len(train_inputs[i_class]) * 4) inverted = feature_model.invert(samples) clear_ctx_dicts(feature_model) ot_loss_in = ot_euclidean_loss_for_samples(class_ins.view(class_ins.shape[0], -1), inverted.view(inverted.shape[0], -1)[:(len(class_ins))]) epoch_row['ot_loss_in_{:d}'.format(i_class)] = ot_loss_in.item() df = df.append(epoch_row, ignore_index=True) print("Epoch {:d} of {:d}".format(i_epoch, n_epochs)) print("Loop Time: {:.0f} ms".format(loop_time.elapsed_secs * 1000)) display(df.iloc[-3:]) plot_outs(feature_model, train_inputs, test_inputs, class_dist) fig = plt.figure(figsize=(8,2)) plt.plot(var_to_np(th.cat((th.exp(class_dist.class_log_stds), th.exp(class_dist.non_class_log_stds)))), marker='o') display_close(fig) df for i_class in range(len(train_inputs)): with th.no_grad(): class_ins = train_inputs[i_class] samples = class_dist.get_samples(i_class, len(train_inputs[i_class]) * 4) inverted = feature_model.invert(samples) clear_ctx_dicts(feature_model) ot_loss_in = ot_euclidean_loss_for_samples(class_ins.view(class_ins.shape[0], -1), inverted.view(inverted.shape[0], -1)[:(len(class_ins))]) epoch_row['ot_loss_in_{:d}'.format(i_class)] = ot_loss_in.item() print("Epoch {:d} of {:d}".format(i_epoch, n_epochs)) print("Loop Time: {:.0f} ms".format(loop_time.elapsed_secs * 1000)) display(df.iloc[-3:]) plot_outs(feature_model, train_inputs, test_inputs, class_dist) fig = plt.figure(figsize=(8,2)) plt.plot(var_to_np(th.cat((th.exp(class_dist.class_log_stds), th.exp(class_dist.non_class_log_stds)))), marker='o') display_close(fig) def get_non_class_outs(feature_model, inputs, class_dist): with th.no_grad(): outs_per_class = [feature_model(t) for t in inputs] clear_ctx_dicts(feature_model) non_class_inds = np.setdiff1d(list(range(outs_per_class[0].shape[1])), class_dist.i_class_inds) non_class_outs = 
[o[:,non_class_inds].detach() for o in outs_per_class] return non_class_outs train_non_class_outs = get_non_class_outs(feature_model, train_inputs, class_dist) test_non_class_outs = get_non_class_outs(feature_model, test_inputs, class_dist) class DistClassifier(nn.Module): def __init__(self, n_dims): super(DistClassifier, self).__init__() self.mean0 = th.nn.Parameter(th.zeros(n_dims)) self.mean1 = th.nn.Parameter(th.zeros(n_dims)) self.logstd0 = th.nn.Parameter(th.zeros(n_dims)) self.logstd1 = th.nn.Parameter(th.zeros(n_dims)) def predict(self, outs): dist0 = th.distributions.MultivariateNormal(self.mean0, th.diag(th.exp(self.logstd0) ** 2)) dist1 = th.distributions.MultivariateNormal(self.mean1, th.diag(th.exp(self.logstd1) ** 2)) return th.stack((dist0.log_prob(outs), dist1.log_prob(outs)), dim=-1) def predict_log_softmax(self, outs): probs = self.predict(outs) return F.log_softmax(probs, dim=1) clf = DistClassifier(train_non_class_outs[0].shape[1]) clf.cuda() optim_clf = th.optim.Adam(clf.parameters(), lr=1e-2) n_epochs = 200 for i_epoch in range(n_epochs): accs = [] for i_class in range(2): preds = clf.predict_log_softmax(train_non_class_outs[i_class]) labels = np_to_var((np.ones(len(preds)) * i_class).astype(np.int64), device='cuda') loss = F.nll_loss(preds, labels) optim_clf.zero_grad() loss.backward() optim_clf.step() with th.no_grad(): for set_non_class_outs in [train_non_class_outs, test_non_class_outs]: accs = [] for i_class in range(2): preds = clf.predict_log_softmax(set_non_class_outs[i_class]) labels = np_to_var((np.ones(len(preds)) * i_class).astype(np.int64), device='cuda') acc = np.mean(np.argmax(var_to_np(preds), axis=1) == var_to_np(labels)) accs.append(acc) print("Acc: {:.1f}%".format(100*np.mean(accs))) print("") with th.no_grad(): for set_non_class_outs in [train_non_class_outs, test_non_class_outs]: accs = [] for i_class in range(2): preds = clf.predict_log_softmax(set_non_class_outs[i_class]) labels = np_to_var((np.ones(len(preds)) * i_class).astype(np.int64), device='cuda') acc = np.mean(np.argmax(var_to_np(preds), axis=1) == var_to_np(labels)) accs.append(acc) print(len(preds)) print("Acc: {:.1f}%".format(100*np.mean(accs))) print("") %%javascript var kernel = IPython.notebook.kernel; var thename = window.document.getElementById("notebook_name").innerHTML; var command = "nbname = " + "'"+thename+"'"; kernel.execute(command); nbname folder_path = '/data/schirrmr/schirrmr/reversible/models/notebooks/{:s}/'.format(nbname) os.makedirs(folder_path,exist_ok=True) name_and_variable = [('feature_model', feature_model), ('class_dist', class_dist), ('non_class_log_stds', class_dist.non_class_log_stds), ('class_log_stds', class_dist.class_log_stds,), ('class_means', class_dist.class_means), ('non_class_means', class_dist.non_class_means), ('feature_model_params', feature_model.state_dict()), ('optim_model', optim_model.state_dict()), ('optim_dist', optim_dist.state_dict())] for name, variable in name_and_variable: th.save(variable, os.path.join(folder_path, name + '.pkl')) print("\n".join(["{:30s}\t{:.1f}".format(f, os.path.getsize(os.path.join(folder_path, f)) / (1024.0 *1024.0)) for f in os.listdir(folder_path)]))
_____no_output_____
MIT
notebooks/bhno-with-adversary/21ChansOTNoFFTOtherClassDims.ipynb
robintibor/reversible2
Downloading GloVe
dir = HTTP.download("http://ann-benchmarks.com/glove-100-angular.hdf5", update_period=60) data = h5open(dir, "r") do file read(file); end train = data["train"] queries = data["test"] groundtruth = data["neighbors"].+1; #zero-indexed train = train ./ mapslices(norm, train, dims=1); train_backup = deepcopy(train);
_____no_output_____
MIT
docs/AHPQ_for_GloVe.ipynb
AxelvL/AHPQ.jl
100 Bits Graph
n_codebooks = 25 n_centers = 16 n_neighbors = 100 stopcond=1e-1;
_____no_output_____
MIT
docs/AHPQ_for_GloVe.ipynb
AxelvL/AHPQ.jl
L2 loss
ahpq = builder(train; T=0, n_codebooks=n_codebooks, n_centers=n_centers, verbose=true, stopcond=stopcond, a=0, inverted_index=true, multithreading=false, training_points=25_000, increment_steps=3); yhat_L2_100bits = MIPS(ahpq, queries, n_neighbors) L2_scores_1 = get1atNscores(yhat_L2_100bits, groundtruth, n_neighbors)
_____no_output_____
MIT
docs/AHPQ_for_GloVe.ipynb
AxelvL/AHPQ.jl
Anisotropic Loss
train = deepcopy(train_backup) ahpq = builder(train; T=0.2, n_codebooks=n_codebooks, n_centers=n_centers, verbose=true, stopcond=stopcond, a=0, inverted_index=false, multithreading=true, training_points=250_000, increment_steps=3); yhat_anisotropic_100bits = MIPS(ahpq, queries, n_neighbors) anisotropic_scores_1 = get1atNscores(yhat_anisotropic_100bits, groundtruth, n_neighbors);
_____no_output_____
MIT
docs/AHPQ_for_GloVe.ipynb
AxelvL/AHPQ.jl
Comparison
plot(1:100, anisotropic_scores_1, label="Anisotropic Loss") plot!(1:100, L2_scores_1, label="Reconstruction Loss") plot!(title="Recall of Glove-1.2M - 100 bits", xlabel="N", ylabel="Recall 1@N", legend=:bottomright, xticks=0:20:100, yticks=0.1:0.1:0.9)
_____no_output_____
MIT
docs/AHPQ_for_GloVe.ipynb
AxelvL/AHPQ.jl
200 Bits Graph
n_codebooks = 50 n_centers = 16 n_neighbors = 100 stopcond=1e-1;
_____no_output_____
MIT
docs/AHPQ_for_GloVe.ipynb
AxelvL/AHPQ.jl
L2 loss
train = deepcopy(train_backup) ahpq = builder(train; T=0, n_codebooks=n_codebooks, n_centers=n_centers, verbose=true, stopcond=stopcond, a=0, inverted_index=true, multithreading=false, training_points=250_000, increment_steps=3); yhat_L2_200bits = MIPS(ahpq, queries, n_neighbors) L2_scores_2 = get1atNscores(yhat_L2_200bits, groundtruth, n_neighbors);
_____no_output_____
MIT
docs/AHPQ_for_GloVe.ipynb
AxelvL/AHPQ.jl
Anisotropic Loss
train = deepcopy(train_backup) ahpq = builder(train; T=0.2, n_codebooks=n_codebooks, n_centers=n_centers, verbose=true, stopcond=stopcond, a=0, inverted_index=true, multithreading=false, training_points=250_000, increment_steps=3); yhat_anisotropic_200bits = MIPS(ahpq, queries, n_neighbors) anisotropic_scores_2 = get1atNscores(yhat_anisotropic_200bits, groundtruth, n_neighbors)
_____no_output_____
MIT
docs/AHPQ_for_GloVe.ipynb
AxelvL/AHPQ.jl
Comparison
plot(1:100, anisotropic_scores_2, label="Anisotropic Loss") plot!(1:100, L2_scores_2, label="Reconstruction Loss") plot!(title="Recall of Glove-1.2M - 200 bits", xlabel="N", ylabel="Recall 1@N", legend=:bottomright, xticks=0:20:100, yticks=0.1:0.1:0.9)
_____no_output_____
MIT
docs/AHPQ_for_GloVe.ipynb
AxelvL/AHPQ.jl
Exercise 11.2 Task Try to extend the model to obtain a reasonable fit of the following polynomial of order 3: $$f(x)=4-3x-2x^2+3x^3$$ for $x \in [-1,1]$. To gain practice with NNs, explore a range of reasonable choices for: - the number of layers - the number of neurons in each layer - the activation function - the optimizer - the loss function. Make graphs comparing the fits of the different NNs. Check your NN models by seeing how well they predict newly generated test data (including data outside the range you fit). How well do your NNs do on points in the range of $x$ where you trained the model? How about points outside the original training data set? Summarize what you have learned about the relationship between model complexity (number of parameters), goodness of fit on the training data, and the ability to predict well.
import numpy as np import math from tensorflow import keras from matplotlib import pyplot as plt #function = 3x^3 - 2x^2 - 3x + 4 def polynomial(x_array,a,b,c,d): x_array = np.asfarray(x_array) return a*x_array**3 + b*x_array**2 + c*x_array + d # np.random.seed(0) x_train = np.random.uniform(-1, 1, 1000) # dataset for training x_valid = np.random.uniform(-1, 1, 100) #dataset for testing/validation x_valid.sort() a=3 b=-2 c=-3 d=4 y_target = polynomial(x_valid,a,b,c,d) sigma = 0.0 # noise standard deviation y_train = np.random.normal(polynomial(x_train,a,b,c,d), sigma) # actual measures from which we want to guess regression parameters y_valid = np.random.normal(polynomial(x_valid,a,b,c,d), sigma) plt.plot(x_valid, y_target) plt.scatter(x_valid, y_valid, color='r') plt.grid(True) plt.show()
_____no_output_____
MIT
es11/11.2.ipynb
lorycontixd/PNS
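As a quick added sanity check on the target function (not part of the original exercise), the polynomial can be evaluated by hand at a few points: $f(0)=4$, $f(1)=4-3-2+3=2$ and $f(-1)=4+3-2-3=2$. A minimal NumPy check:

```python
import numpy as np

def f(x):
    # Target polynomial of the exercise: f(x) = 4 - 3x - 2x^2 + 3x^3
    return 4 - 3 * x - 2 * x**2 + 3 * x**3

print(f(np.array([0.0, 1.0, -1.0])))  # expected: [4. 2. 2.]
```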
Using model from Ex11.1 This section reuses the linear model from Exercise 11.1, simply to show that it cannot fit the polynomial and that a different model must be defined for this task.
# Using model from ex11.1 # Load previous model for extension oldmodel = keras.models.load_model('models/model_ex1') oldmodel.summary() print() history = oldmodel.fit(x=x_train, y=y_train, batch_size=32, epochs=100, shuffle=True, validation_data=(x_valid, y_valid), # needed so that history.history['val_loss'] exists below verbose=0 ) score = oldmodel.evaluate(x_valid, y_valid, batch_size=32, verbose=0) # print performance print() print('Test loss:', score[0]) print('Test accuracy:', score[1]) print() plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.title('Model loss') plt.ylabel('Loss') plt.xlabel('Epoch') plt.legend(['Train', 'Test'], loc='best') plt.show() x_predicted = np.random.uniform(-1, 1, 100) y_predicted = oldmodel.predict(x_predicted) plt.scatter(x_predicted, y_predicted,color='r') plt.plot(x_valid, y_target) plt.grid(True) plt.show()
Model: "sequential_184" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= dense_184 (Dense) (None, 1) 2 ================================================================= Total params: 2 Trainable params: 2 Non-trainable params: 0 _________________________________________________________________ Test loss: 0.8131828904151917 Test accuracy: 0.8131828904151917
MIT
es11/11.2.ipynb
lorycontixd/PNS
New model
from tensorflow.keras import models from tensorflow.keras import layers from tensorflow.keras import optimizers from tensorflow.keras import backend as K from tensorflow.keras import callbacks from tensorflow.keras import losses from tensorflow.keras import activations from tensorflow.keras.utils import get_custom_objects, plot_model def run(layers:list,optimizer='sgd',loss='mse',batch_size=32,epochs=60,show_summary=True,outputs=False,testing=True,graph=True,logger=True): global x_train, y_train, x_valid, y_valid, y_target model = models.Sequential(layers) model.compile(optimizer=optimizer, loss=loss, metrics=['mse']) optname = optimizer if isinstance(optimizer,str) else optimizer.__class__.__name__ lossname = loss if isinstance(loss,str) else loss.__class__.__name__ if logger: print(f"************** {len(layers)} Layers\t{layers[0].output_shape[1]} input neurons\t optimizer={optname}\tloss={lossname}") if show_summary: model.summary() history = model.fit(x=x_train, y=y_train, batch_size=batch_size, epochs=epochs, shuffle=True, # a good idea is to shuffle input before at each epoch validation_data=(x_valid, y_valid), verbose=0 ) score = model.evaluate(x_valid, y_valid, batch_size=batch_size, verbose=0) if outputs: print() print("Validation performance") print('Test loss:', score[0]) print('Test accuracy:', score[1]) score = model.evaluate(x_valid, y_target, batch_size=batch_size, verbose=0) if outputs: print() print("Testing performance") print('Test loss:', score[0]) print('Test accuracy:', score[1]) if testing: plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.title('Model loss') plt.ylabel('Loss') plt.xlabel('Epoch') plt.legend(['Train', 'Test'], loc='best') plt.show() if graph: fig=plt.figure(figsize=(10, 5)) x_predicted = np.random.uniform(-1.5, 1.5, 100) x_predicted.sort() y_predicted = model.predict(x_predicted) plt.scatter(x_predicted, y_predicted,color='r') plt.plot(x_predicted, polynomial(x_predicted,a,b,c,d)) plt.title(f"{len(layers)} layers with {layers[0].output_shape[1]} input neurons - opt.= {optname}, loss = {lossname} ") plt.grid(True) plt.tight_layout() plt.show() if logger: print("\n\n")
_____no_output_____
MIT
es11/11.2.ipynb
lorycontixd/PNS
Dependence on layers & neurons Using the code above, various NNs are trained with different numbers of layers and of neurons per layer, to study the accuracy in approximating the polynomial on $[-\frac{3}{2},\frac{3}{2}]$. Different Neural Networks
opt = optimizers.SGD(learning_rate=0.1) run([layers.Dense(500,input_shape=(1,)),layers.Dense(1,activation="relu")],optimizer=opt,testing=False) run([layers.Dense(1,input_shape=(1,)),layers.Dense(30,activation="relu"),layers.Dense(1,activation="relu")],optimizer=opt,testing=False) run([layers.Dense(500,input_shape=(1,)),layers.Dense(100,activation="relu"),layers.Dense(1,activation="relu")],optimizer=opt,testing=False,outputs=True) run([layers.Dense(500,input_shape=(1,)),layers.Dense(250,activation="relu"),layers.Dense(100,activation="relu"),layers.Dense(10,activation="relu"),layers.Dense(1,activation="relu")],optimizer=opt,testing=False) run([layers.Dense(1000,input_shape=(1,)),layers.Dense(500,activation="relu"),layers.Dense(250,activation="relu"),layers.Dense(175,activation="relu"),layers.Dense(100,activation="relu"),layers.Dense(50,activation="relu"),layers.Dense(1,activation="relu")],testing=False)
************** 2 Layers 500 input neurons optimizer=SGD loss=mse Model: "sequential_1" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= dense_3 (Dense) (None, 500) 1000 _________________________________________________________________ dense_4 (Dense) (None, 1) 501 ================================================================= Total params: 1,501 Trainable params: 1,501 Non-trainable params: 0 _________________________________________________________________
MIT
es11/11.2.ipynb
lorycontixd/PNS
Results Discuss results on layers/neurons. Dependence on optimizers The following section defines different optimizers for a fixed Neural Network with 3 layers. Which optimizers to choose, and what to expect? Adam Optimizer Adam, short for adaptive moment estimation, is an optimization algorithm that can be used instead of the classical stochastic gradient descent procedure to update network weights iteratively based on the training data. Among the numerous advantages brought by this algorithm, the most important for this case are: - it is computationally efficient - it is straightforward to implement - it is appropriate for problems with very noisy or sparse gradients. Different optimizers
opt_layers = [ layers.Dense(1,input_shape=(1,)), layers.Dense(50,activation="relu"), layers.Dense(1,activation="selu") ] opts = [ optimizers.SGD(learning_rate=1e-1), optimizers.Adam(learning_rate=1e-1), optimizers.RMSprop(learning_rate=1e-1), optimizers.Adagrad(learning_rate=1e-1) ] for oo in opts: run(opt_layers,optimizer=oo,testing=False,batch_size=64,epochs=150,outputs=True) print()
************** 3 Layers 1 input neurons optimizer=SGD loss=mse Model: "sequential_6" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= dense_23 (Dense) (None, 1) 2 _________________________________________________________________ dense_24 (Dense) (None, 50) 100 _________________________________________________________________ dense_25 (Dense) (None, 1) 51 ================================================================= Total params: 153 Trainable params: 153 Non-trainable params: 0 _________________________________________________________________ Validation performance Test loss: 0.004281164612621069 Test accuracy: 0.004281164612621069 Testing performance Test loss: 0.004281164612621069 Test accuracy: 0.004281164612621069
MIT
es11/11.2.ipynb
lorycontixd/PNS
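To make the description of Adam above concrete, here is a minimal NumPy sketch of a single Adam update step. This is an added illustration, not code from the original notebook, and Keras' internal implementation differs in details such as how epsilon is applied:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-7):
    # Exponential moving averages of the gradient and the squared gradient
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    # Bias correction, since m and v start at zero
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    # Parameter update scaled by the adaptive step size
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

theta, m, v = np.array([1.0]), np.zeros(1), np.zeros(1)
theta, m, v = adam_step(theta, grad=np.array([0.5]), m=m, v=v, t=1)
print(theta)
```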
Learning rate As with all optimization algorithms, Adam is configured by several parameters, including the learning rate, which determines the step size taken at each iteration while moving toward a minimum of the loss function. In the following section, I study the impact of the learning rate on the learning efficiency of the model. First, I explore three different values of the learning rate: lr = 0.001 (the default value for the Adam optimizer), lr = 0.1, and lr = 0.00001, chosen so that one is likely reasonable, one too high, and one too low. Then I use a learning rate schedule called "Step decay", which systematically drops the learning rate at specific points during training, formally defined by $$LR = LR_0 \cdot \text{droprate}^{\lfloor \text{epoch} / \text{epochs\_drop} \rfloor}$$
#--- LR = 0.001 lr = 0.001 adam1 = optimizers.Adam( learning_rate=lr, beta_1=0.9, beta_2=0.999, epsilon=1e-07, amsgrad=False, name="Adam2" ) print("---> Learning rate: ",lr) run( [ layers.Dense(500,input_shape=(1,)), layers.Dense(100, activation="relu"), layers.Dense(1) ], optimizer=adam1 ) #--- LR = 0.1 lr = 0.1 adam1 = optimizers.Adam( learning_rate=lr, beta_1=0.9, beta_2=0.999, epsilon=1e-07, amsgrad=False, name="Adam2" ) print("---> Learning rate: ",lr) run( [ layers.Dense(500,input_shape=(1,)), layers.Dense(100, activation="relu"), layers.Dense(1) ], optimizer=adam1 ) lr = 0.00001 adam1 = optimizers.Adam( learning_rate=lr, beta_1=0.9, beta_2=0.999, epsilon=1e-07, amsgrad=False, name="Adam2" ) print("---> Learning rate: ",lr) run( [ layers.Dense(500,input_shape=(1,)), layers.Dense(100, activation="relu"), layers.Dense(1) ], optimizer=adam1 ) ### Step decay import math initial_learning_rate = 0.01 def lr_step_decay(epoch, lr): drop_rate = 0.55 epochs_drop = 10.5 return initial_learning_rate * math.pow(drop_rate, math.floor(epoch/epochs_drop)) modellayers = [ layers.Dense(500, input_shape=(1,)), layers.Dense(100, activation="relu"), layers.Dense(1) ] stepmodel = models.Sequential(modellayers) stepmodel.compile(optimizer="adam", loss="mse", metrics=['mse']) stepmodel.summary() # Fit the model to the training data history_step_decay = stepmodel.fit( x_train, y_train, epochs=100, validation_split=0.3, batch_size=64, callbacks=[callbacks.LearningRateScheduler(lr_step_decay, verbose=0)], verbose=0 ) score = stepmodel.evaluate(x_valid, y_valid, batch_size=64, verbose=0) print() print("Validation performance") print('Test loss:', score[0]) print('Test accuracy:', score[1]) score = stepmodel.evaluate(x_valid, y_target, batch_size=64, verbose=0) print() print("Testing performance") print('Test loss:', score[0]) print('Test accuracy:', score[1]) plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.title('Model loss') plt.ylabel('Loss') plt.xlabel('Epoch') plt.legend(['Train', 'Test'], loc='best') plt.show() fig=plt.figure(figsize=(10, 5)) x_predicted = np.random.uniform(-1.5, 1.5, 100) x_predicted.sort() y_predicted = stepmodel.predict(x_predicted) plt.scatter(x_predicted, y_predicted,color='r') plt.plot(x_predicted, polynomial(x_predicted,a,b,c,d)) plt.title(f"{len(modellayers)} layers with 500 input neurons, activ. fun.= 'relu', opt.= adam, loss = mse ") plt.grid(True) plt.tight_layout() plt.show() print("\n Final learning rate: ",K.eval(stepmodel.optimizer.lr))
Model: "sequential_13" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= dense_35 (Dense) (None, 500) 1000 _________________________________________________________________ dense_36 (Dense) (None, 100) 50100 _________________________________________________________________ dense_37 (Dense) (None, 1) 101 ================================================================= Total params: 51,201 Trainable params: 51,201 Non-trainable params: 0 _________________________________________________________________ Validation performance Test loss: 0.012978918850421906 Test accuracy: 0.012978918850421906 Testing performance Test loss: 0.012978918850421906 Test accuracy: 0.012978918850421906
MIT
es11/11.2.ipynb
lorycontixd/PNS
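To see what the step-decay schedule produces in practice, the small added snippet below (reusing the same constants as the code above) evaluates the formula at a few epochs; the learning rate stays constant for roughly `epochs_drop` epochs and is then multiplied by `drop_rate`:

```python
import math

initial_learning_rate = 0.01
drop_rate, epochs_drop = 0.55, 10.5

for epoch in [0, 10, 11, 21, 32, 63, 99]:
    lr = initial_learning_rate * math.pow(drop_rate, math.floor(epoch / epochs_drop))
    print("epoch {:3d} -> lr {:.6f}".format(epoch, lr))
```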
The choice of the learning rate affects two things: how fast the algorithm learns, and whether the cost function is minimized at all. For a well-chosen learning rate, the cost function is minimized in a few iterations. If the learning rate is too low, training progresses very slowly because the adjustments to the network weights are small, and the number of iterations/epochs required to minimize the cost function becomes high. If the learning rate is too high, the loss can plateau at a value well above the minimum or even diverge. Dependence on loss function In this section, I explore the dependence of the model on the type of loss function. For this purpose, I keep all the other parameters fixed in order to highlight the dependence of interest. The parameters are: - a neural network with 3 layers of 500, 250 and 1 neurons respectively, with ReLU activation on the hidden layer - 100 epochs - a batch size of 64 - the SGD optimizer. The chosen loss functions are: - Mean squared error (regression loss): the average squared difference between the predicted and the actual values, $\text{MSE}=\frac{1}{N}\sum_{i=1}^N(Y_i - \hat{Y}_i)^2$ - Mean absolute error (regression loss): the average absolute difference between paired observations, i.e. observed versus predicted, $\text{MAE}=\frac{1}{N}\sum_{i=1}^N |Y_i - \hat{Y}_i|$ - Cross-entropy (probabilistic loss): measures the performance of a classification model whose output is a probability value between 0 and 1 - Poisson
loss_functions = [ losses.MeanSquaredError(), losses.MeanAbsoluteError(reduction="auto", name="mean_absolute_error"), losses.CategoricalCrossentropy(reduction="auto",name="categorical_crossentropy"), losses.Poisson(reduction="auto", name="poisson") ] opt = optimizers.SGD(learning_rate=0.01) ll = [ layers.Dense(500,input_shape=(1,)), layers.Dense(250, activation="relu"), layers.Dense(1) ] for func in loss_functions: run(ll,optimizer=opt,loss=func,testing=False,outputs=True,batch_size=64,epochs=100)
************** 3 Layers 500 input neurons optimizer=SGD loss=MeanSquaredError Model: "sequential_14" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= dense_38 (Dense) (None, 500) 1000 _________________________________________________________________ dense_39 (Dense) (None, 250) 125250 _________________________________________________________________ dense_40 (Dense) (None, 1) 251 ================================================================= Total params: 126,501 Trainable params: 126,501 Non-trainable params: 0 _________________________________________________________________ Validation performance Test loss: 0.040039416402578354 Test accuracy: 0.040039416402578354 Testing performance Test loss: 0.040039416402578354 Test accuracy: 0.040039416402578354
MIT
es11/11.2.ipynb
lorycontixd/PNS
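As a small worked illustration of the MSE and MAE formulas above (an addition, not from the original notebook), both can be computed by hand on a tiny array:

```python
import numpy as np

y_true = np.array([4.0, 2.0, 2.0])
y_pred = np.array([3.5, 2.5, 1.0])

mse = np.mean((y_true - y_pred) ** 2)   # (0.25 + 0.25 + 1.0) / 3 = 0.5
mae = np.mean(np.abs(y_true - y_pred))  # (0.5 + 0.5 + 1.0) / 3 ~= 0.667
print(mse, mae)
```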
Results ... Dependence on activation function Lastly, I study the dependence of the model on the activation function of the neurons. The Neural Network has 4 layers, with 500, 250, 100 and 1 neurons respectively. The chosen activation functions are: ...
lossfunc = losses.MeanSquaredError() opt = optimizers.SGD(learning_rate=0.01) all_layers = [[ layers.Dense(500, input_shape=(1,)), layers.Dense(250, activation="relu"), layers.Dense(100, activation="relu"), layers.Dense(1, activation="relu") ],[ layers.Dense(500, input_shape=(1,)), layers.Dense(250, activation="relu"), layers.Dense(100, activation="relu"), layers.Dense(1, activation="sigmoid") ],[ layers.Dense(500, input_shape=(1,)), layers.Dense(250, activation="selu"), layers.Dense(100, activation="selu"), layers.Dense(1, activation="softmax") ] ] for l in all_layers: run(l,optimizer=opt,loss=lossfunc,testing=False,outputs=True,batch_size=64,epochs=100) from IPython.display import clear_output class PlotCurrentEstimate(callbacks.Callback): def __init__(self, x_valid, y_valid): """Keras Callback which plot current model estimate against reference target""" # convert numpy arrays into lists for plotting purposes self.x_valid = list(x_valid[:]) self.y_valid = list(y_valid[:]) self.iter=0 def on_epoch_end(self, epoch, logs={}): temp = self.model.predict(self.x_valid, batch_size=None, verbose=False, steps=None) self.y_curr = list(temp[:]) # convert numpy array into list self.iter+=1 if self.iter%10 == 0: clear_output(wait=True) self.eplot = plt.subplot(1,1,1) self.eplot.clear() self.eplot.scatter(self.x_valid, self.y_curr, color="blue", s=4, marker="o", label="estimate") self.eplot.scatter(self.x_valid, self.y_valid, color="red", s=4, marker="x", label="valid") self.eplot.legend() plt.show() np.random.seed(0) finalx_train = np.random.uniform(-1, 1, 10000) # dataset for training finalx_valid = np.random.uniform(-1, 1, 1000) #dataset for testing/validation finalx_valid.sort() finaly_target = polynomial(finalx_valid,a,b,c,d) sigma = 0.0 # noise standard deviation finaly_train = np.random.normal(polynomial(finalx_train,a,b,c,d), sigma) # actual measures from which we want to guess regression parameters finaly_valid = np.random.normal(polynomial(finalx_valid,a,b,c,d), sigma) finalmodel = models.Sequential() finalmodel.add(layers.Dense(units=1, input_dim=1)) finalmodel.add(layers.Activation('relu')) finalmodel.add(layers.Dense(units=40)) finalmodel.add(layers.Activation('relu')) finalmodel.add(layers.Dense(units=1)) finalmodel.compile(loss='mean_squared_error',optimizer='adam', metrics=['mse']) finalmodel.summary() history = finalmodel.fit(x=finalx_train, y=finaly_train, batch_size=64, epochs=150, shuffle=True, # a good idea is to shuffle input before at each epoch validation_data=(finalx_valid, finaly_valid), verbose=0 ) score = finalmodel.evaluate(finalx_valid, finaly_valid, batch_size=64, verbose=0) print() print("Validation performance") print('Test loss:', score[0]) print('Test accuracy:', score[1]) score = finalmodel.evaluate(finalx_valid, finaly_target, batch_size=64, verbose=0) print() print("Testing performance") print('Test loss:', score[0]) print('Test accuracy:', score[1]) plt.plot(history.history['loss'][20:]) plt.plot(history.history['val_loss'][20:]) plt.title('Model loss') plt.ylabel('Loss') plt.xlabel('Epoch') plt.legend(['Train', 'Test'], loc='best') plt.show() fig=plt.figure(figsize=(10, 5)) finalx_predicted = np.random.uniform(-1.5, 1.5, 1000) finalx_predicted.sort() finaly_predicted = finalmodel.predict(finalx_predicted) plt.scatter(finalx_predicted, finaly_predicted,color='r') plt.plot(finalx_predicted, polynomial(finalx_predicted,a,b,c,d)) plt.title(f"{len(finalmodel.layers)} layers with {finalmodel.layers[0].output_shape[1]} input neurons - opt.= {opt.__class__.__name__}, 
loss = {lossfunc.__class__.__name__} ") plt.grid(True) plt.tight_layout() plt.show() finalmodel.save("models/model_ex2") plot_estimate = PlotCurrentEstimate(finalx_valid, finaly_valid) earlystop = callbacks.EarlyStopping(monitor='val_loss', min_delta=0, patience=100, mode='auto') finalmodel.fit(finalx_valid, finaly_valid, batch_size=32, epochs=150, validation_data=(finalx_valid, finaly_valid), callbacks=[ plot_estimate, earlystop] ) finalmodel.get_weights()
_____no_output_____
MIT
es11/11.2.ipynb
lorycontixd/PNS
Boolean Operator
x=1 y=2 print(x>y) print(10>11) print(10==10) print(10!=11) #using bool()function print(bool("Hello")) print(bool(15)) print(bool(1)) print(bool(True)) print(bool(False)) print(bool(None)) print(bool(0)) print(bool([]))
True True True True False False False False
Apache-2.0
Expressions_and_Operations.ipynb
khloemaritonigloria08/CPEN-21A-ECE-2-1
Functions can return Boolean
def myFunction():return False print(myFunction()) def yourFunction():return False if yourFunction(): print("Yes!") else: print("No")
No
Apache-2.0
Expressions_and_Operations.ipynb
khloemaritonigloria08/CPEN-21A-ECE-2-1
You Try!
a=6 b=7 print(a==b) print(a!=a)
False False
Apache-2.0
Expressions_and_Operations.ipynb
khloemaritonigloria08/CPEN-21A-ECE-2-1
Arithmetic Operators
print(10+5) print(10-5) print(10*5) print(10/5) print(10%5) #modulo division, remainder print(10//5) #floor division print(10//3) #floor division print(10%3) #10 = 3*3 + 1, so the remainder is 1
15 5 50 2.0 0 2 3 1
Apache-2.0
Expressions_and_Operations.ipynb
khloemaritonigloria08/CPEN-21A-ECE-2-1
Bitwise Operators
a=60 #0011 1100 b=13 #0000 1101 print(a&b) print(a|b) print(a^b) print(~a) print(a<<1) #0111 1000 print(a<<2) #1111 0000 print(b>>1) #0000 0110, bit shifted out = 1 print(b>>2) #0000 0011, bits shifted out = 01
12 61 49 -61 120 240 6 3
Apache-2.0
Expressions_and_Operations.ipynb
khloemaritonigloria08/CPEN-21A-ECE-2-1
Python Assignment Operators
a+=3 #Same As a=a+3 #Same As a=60+3, a=63 print(a)
63
Apache-2.0
Expressions_and_Operations.ipynb
khloemaritonigloria08/CPEN-21A-ECE-2-1
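Besides `+=`, Python provides an augmented assignment form for every arithmetic and bitwise operator. A few more examples, added here for illustration:

```python
b = 10
b -= 4   # same as b = b - 4   -> 6
b *= 3   # same as b = b * 3   -> 18
b //= 5  # same as b = b // 5  -> 3
b **= 2  # same as b = b ** 2  -> 9
b %= 4   # same as b = b % 4   -> 1
print(b)
```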
Logical Operators
#logical operators: and, or, not (plus the identity operators is / is not) a=True b=False print(a and b) print(not(a and b)) print(a or b) print(not(a or b)) print(a is b) print(a is not b)
False True
Apache-2.0
Expressions_and_Operations.ipynb
khloemaritonigloria08/CPEN-21A-ECE-2-1
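Note that `is` and `is not` in the cell above test object identity rather than logical truth. A short added example of the difference between `==` (equal values) and `is` (same object):

```python
x = [1, 2, 3]
y = [1, 2, 3]
print(x == y)  # True: the two lists have the same value
print(x is y)  # False: they are two distinct objects
z = x
print(x is z)  # True: both names refer to the same object
```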
Searching For Simple PatternsBeing able to match letters and metacharacters is the simplest task that regular expressions can do. In this section we will see how we can use regular expressions to perform more complex pattern matching. We can form any pattern we want by using the metacharacters mentioned in the previous lesson.The first metacharacter we are going to look at is the backslash (`\`). We already saw that the backslash can be used to escape all the metacharacters, so that you can search for them directly. However, the backslash can also be followed by various characters to signal various special sequences. Here is a list of the special sequences we are going to look at in this notebook:* `\d` - Matches any decimal digit; this is equivalent to the set [0-9]* `\D` - Matches any non-digit character; this is equivalent to the set [^0-9]* `\s` - Matches any whitespace character, this is equivalent to the set [ \t\n\r\f\v]* `\S` - Matches any non-whitespace character; this is equivalent to the set [^ \t\n\r\f\v]* `\w` - Matches any alphanumeric character and the underscore; this is equivalent to the set [a-zA-Z0-9_]* `\W` - Matches any non-alphanumeric character; this is equivalent to the set [^a-zA-Z0-9_]We can see that there is a difference between lowercase and uppercase sequences. For example, while `\d` matches any digit, `\D` matches everything that is **not** a digit. Similarly, while `\s` matches any whitespace character, `\S` matches everything that is **not** a whitespace character; and while `\w` matches any alphanumeric character, `\W` matches everything that is **not** an alphanumeric character.Let's start by learning how to use `\d` to search for decimal digits. Matching Numbers Using `\d`In the code below, we will use `'\d'` as our regular expression to find all the decimal digits in our `sample_text` string:
# Import re module import re # Sample text sample_text = 'Alice lives in 1230 First St., Ocean City, MD 156789.' # Create a regular expression object with the regular expression '\d' regex = re.compile(r'\d') # Search the sample_text for the regular expression matches = regex.finditer(sample_text) # Print all the matches for match in matches: print(match)
<_sre.SRE_Match object; span=(15, 16), match='1'> <_sre.SRE_Match object; span=(16, 17), match='2'> <_sre.SRE_Match object; span=(17, 18), match='3'> <_sre.SRE_Match object; span=(18, 19), match='0'> <_sre.SRE_Match object; span=(46, 47), match='1'> <_sre.SRE_Match object; span=(47, 48), match='5'> <_sre.SRE_Match object; span=(48, 49), match='6'> <_sre.SRE_Match object; span=(49, 50), match='7'> <_sre.SRE_Match object; span=(50, 51), match='8'> <_sre.SRE_Match object; span=(51, 52), match='9'>
Apache-2.0
TradingAI/AI Algorithms in Trading/Lesson 05 - Financial Statements/simple_patterns.ipynb
Quananhle/Python
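As a side note not in the original text: if you only need the matched substrings and not the span information, the `re` module also provides `findall()`, which returns a plain list:

```python
import re

sample_text = 'Alice lives in 1230 First St., Ocean City, MD 156789.'

# findall() returns the matched substrings directly, without Match objects
print(re.findall(r'\d', sample_text))
```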
As we can see, all the matches found above correspond only to decimal digits between 0 and 9. Conversely, if we wanted to find all the characters that are **not** decimal digits, we would use `\D` as our regular expression, as shown below:
# Import re module import re # Sample text sample_text = 'Alice lives in 1230 First St., Ocean City, MD 156789.' # Create a regular expression object with the regular expression '\D' regex = re.compile(r'\D') # Search the sample_text for the regular expression matches = regex.finditer(sample_text) # Print all the matches for match in matches: print(match)
<_sre.SRE_Match object; span=(0, 1), match='A'> <_sre.SRE_Match object; span=(1, 2), match='l'> <_sre.SRE_Match object; span=(2, 3), match='i'> <_sre.SRE_Match object; span=(3, 4), match='c'> <_sre.SRE_Match object; span=(4, 5), match='e'> <_sre.SRE_Match object; span=(5, 6), match=' '> <_sre.SRE_Match object; span=(6, 7), match='l'> <_sre.SRE_Match object; span=(7, 8), match='i'> <_sre.SRE_Match object; span=(8, 9), match='v'> <_sre.SRE_Match object; span=(9, 10), match='e'> <_sre.SRE_Match object; span=(10, 11), match='s'> <_sre.SRE_Match object; span=(11, 12), match=' '> <_sre.SRE_Match object; span=(12, 13), match='i'> <_sre.SRE_Match object; span=(13, 14), match='n'> <_sre.SRE_Match object; span=(14, 15), match=' '> <_sre.SRE_Match object; span=(19, 20), match=' '> <_sre.SRE_Match object; span=(20, 21), match='F'> <_sre.SRE_Match object; span=(21, 22), match='i'> <_sre.SRE_Match object; span=(22, 23), match='r'> <_sre.SRE_Match object; span=(23, 24), match='s'> <_sre.SRE_Match object; span=(24, 25), match='t'> <_sre.SRE_Match object; span=(25, 26), match=' '> <_sre.SRE_Match object; span=(26, 27), match='S'> <_sre.SRE_Match object; span=(27, 28), match='t'> <_sre.SRE_Match object; span=(28, 29), match='.'> <_sre.SRE_Match object; span=(29, 30), match=','> <_sre.SRE_Match object; span=(30, 31), match=' '> <_sre.SRE_Match object; span=(31, 32), match='O'> <_sre.SRE_Match object; span=(32, 33), match='c'> <_sre.SRE_Match object; span=(33, 34), match='e'> <_sre.SRE_Match object; span=(34, 35), match='a'> <_sre.SRE_Match object; span=(35, 36), match='n'> <_sre.SRE_Match object; span=(36, 37), match=' '> <_sre.SRE_Match object; span=(37, 38), match='C'> <_sre.SRE_Match object; span=(38, 39), match='i'> <_sre.SRE_Match object; span=(39, 40), match='t'> <_sre.SRE_Match object; span=(40, 41), match='y'> <_sre.SRE_Match object; span=(41, 42), match=','> <_sre.SRE_Match object; span=(42, 43), match=' '> <_sre.SRE_Match object; span=(43, 44), match='M'> <_sre.SRE_Match object; span=(44, 45), match='D'> <_sre.SRE_Match object; span=(45, 46), match=' '> <_sre.SRE_Match object; span=(52, 53), match='.'>
Apache-2.0
TradingAI/AI Algorithms in Trading/Lesson 05 - Financial Statements/simple_patterns.ipynb
Quananhle/Python
We can see that none of the matches are decimal digits. We also see, that by using `\D` we were able to match all characters, including periods (`.`) and white spaces. TODO: Find IP AddressesIn the cell below, our `sample_text` string contains three IP addresses. Write a single regular expression that can match any IP address and save the regular expression object in a variable called `regex`. Then use the `.finditer()` method to search the `sample_text` string for the given regular expression. Finally, write a loop to print all the `matches` found by the `.finditer()` method.**HINT :** Use the special sequence `\d` and take advantage that all IP addresses have the same pattern.
# Import re module import re # Sample text sample_text = 'Here are three IP address: 123.456.789.123, 999.888.777.666, 111.222.333.444' # Create a regular expression object; the dots are escaped (\.) so they match a literal period rather than any character regex = re.compile(r'\d\d\d\.\d\d\d\.\d\d\d\.\d\d\d') # Search the sample_text for the regular expression matches = regex.finditer(sample_text) # Print all the matches for match in matches: print(match)
<_sre.SRE_Match object; span=(27, 42), match='123.456.789.123'> <_sre.SRE_Match object; span=(44, 59), match='999.888.777.666'> <_sre.SRE_Match object; span=(61, 76), match='111.222.333.444'>
Apache-2.0
TradingAI/AI Algorithms in Trading/Lesson 05 - Financial Statements/simple_patterns.ipynb
Quananhle/Python
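A more compact way to write the same pattern, added here as a sketch, uses the `{n}` repetition quantifier (not covered in this section): `\d{3}` repeats `\d` exactly three times.

```python
import re

sample_text = 'Here are three IP address: 123.456.789.123, 999.888.777.666, 111.222.333.444'

# \d{3} matches exactly three digits; \. matches a literal dot
regex = re.compile(r'\d{3}\.\d{3}\.\d{3}\.\d{3}')

for match in regex.finditer(sample_text):
    print(match)
```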
If you wrote your regex correctly you should see three matches above corresponding to the three IP addresses in our `sample_text` string. Matching Whitespace Characters Using `\s`In the code below, we will use `\s` as our regular expression to find all the whitespace characters in our `sample_text` string. For this example, we will use a string literal that spans multiple lines. To create this multi-line string, we will use triple-quotes (`'''`) both at the beginning and at the end of the multi-line string.
# Import re module import re # Sample text sample_text = ''' \tAlice lives in:\f 1230 First St.\r Ocean City, MD 156789.\v ''' # Create a regular expression object with the regular expression '\s' regex = re.compile(r'\s') # Search the sample_text for the regular expression matches = regex.finditer(sample_text) # Print all the matches for match in matches: print(match)
<_sre.SRE_Match object; span=(0, 1), match='\n'> <_sre.SRE_Match object; span=(1, 2), match='\t'> <_sre.SRE_Match object; span=(7, 8), match=' '> <_sre.SRE_Match object; span=(13, 14), match=' '> <_sre.SRE_Match object; span=(17, 18), match='\x0c'> <_sre.SRE_Match object; span=(18, 19), match='\n'> <_sre.SRE_Match object; span=(23, 24), match=' '> <_sre.SRE_Match object; span=(29, 30), match=' '> <_sre.SRE_Match object; span=(33, 34), match='\r'> <_sre.SRE_Match object; span=(34, 35), match='\n'> <_sre.SRE_Match object; span=(40, 41), match=' '> <_sre.SRE_Match object; span=(46, 47), match=' '> <_sre.SRE_Match object; span=(49, 50), match=' '> <_sre.SRE_Match object; span=(57, 58), match='\x0b'> <_sre.SRE_Match object; span=(58, 59), match='\n'>
Apache-2.0
TradingAI/AI Algorithms in Trading/Lesson 05 - Financial Statements/simple_patterns.ipynb
Quananhle/Python
As we can see, all the matches found correspond to white spaces, tabs (`\t`), newlines (`\n`), carriage returns (`\r`), form feeds (`\f`), and vertical tabs (`\v`). Notice that form feeds appear as `\x0c` and vertical tabs as `\x0b`. Conversely, if we wanted to find all the characters that are **not** whitespace characters, we would use `\S` as our regular expression, as shown below:
# Import re module import re # Sample text sample_text = ''' \tAlice lives in:\f 1230 First St.\r Ocean City, MD 156789.\v ''' # Create a regular expression object with the regular expression '\S' regex = re.compile(r'\S') # Search the sample_text for the regular expression matches = regex.finditer(sample_text) # Print all the matches for match in matches: print(match)
<_sre.SRE_Match object; span=(2, 3), match='A'> <_sre.SRE_Match object; span=(3, 4), match='l'> <_sre.SRE_Match object; span=(4, 5), match='i'> <_sre.SRE_Match object; span=(5, 6), match='c'> <_sre.SRE_Match object; span=(6, 7), match='e'> <_sre.SRE_Match object; span=(8, 9), match='l'> <_sre.SRE_Match object; span=(9, 10), match='i'> <_sre.SRE_Match object; span=(10, 11), match='v'> <_sre.SRE_Match object; span=(11, 12), match='e'> <_sre.SRE_Match object; span=(12, 13), match='s'> <_sre.SRE_Match object; span=(14, 15), match='i'> <_sre.SRE_Match object; span=(15, 16), match='n'> <_sre.SRE_Match object; span=(16, 17), match=':'> <_sre.SRE_Match object; span=(19, 20), match='1'> <_sre.SRE_Match object; span=(20, 21), match='2'> <_sre.SRE_Match object; span=(21, 22), match='3'> <_sre.SRE_Match object; span=(22, 23), match='0'> <_sre.SRE_Match object; span=(24, 25), match='F'> <_sre.SRE_Match object; span=(25, 26), match='i'> <_sre.SRE_Match object; span=(26, 27), match='r'> <_sre.SRE_Match object; span=(27, 28), match='s'> <_sre.SRE_Match object; span=(28, 29), match='t'> <_sre.SRE_Match object; span=(30, 31), match='S'> <_sre.SRE_Match object; span=(31, 32), match='t'> <_sre.SRE_Match object; span=(32, 33), match='.'> <_sre.SRE_Match object; span=(35, 36), match='O'> <_sre.SRE_Match object; span=(36, 37), match='c'> <_sre.SRE_Match object; span=(37, 38), match='e'> <_sre.SRE_Match object; span=(38, 39), match='a'> <_sre.SRE_Match object; span=(39, 40), match='n'> <_sre.SRE_Match object; span=(41, 42), match='C'> <_sre.SRE_Match object; span=(42, 43), match='i'> <_sre.SRE_Match object; span=(43, 44), match='t'> <_sre.SRE_Match object; span=(44, 45), match='y'> <_sre.SRE_Match object; span=(45, 46), match=','> <_sre.SRE_Match object; span=(47, 48), match='M'> <_sre.SRE_Match object; span=(48, 49), match='D'> <_sre.SRE_Match object; span=(50, 51), match='1'> <_sre.SRE_Match object; span=(51, 52), match='5'> <_sre.SRE_Match object; span=(52, 53), match='6'> <_sre.SRE_Match object; span=(53, 54), match='7'> <_sre.SRE_Match object; span=(54, 55), match='8'> <_sre.SRE_Match object; span=(55, 56), match='9'> <_sre.SRE_Match object; span=(56, 57), match='.'>
Apache-2.0
TradingAI/AI Algorithms in Trading/Lesson 05 - Financial Statements/simple_patterns.ipynb
Quananhle/Python
We can see that none of the matches above are whitespace characters. We also see that, by using `\S`, we were able to match all characters, including periods (`.`), letters, and numbers. TODO: Print The Numbers Between Whitespace Characters In the cell below, our `sample_text` consists of a multi-line string with numbers in between whitespace characters:```python 123 45 7895 1 222 33 ```Notice that not all the numbers have the same number of digits. For example, the first number (`123`) has three digits, but the second number (`45`) only has two digits. Write a single regular expression that finds the tabs (`\t`) and the newlines (`\n`) in this multi-line string and save the regular expression object in a variable called `regex`. Then use the `.finditer()` method to search the `sample_text` string for the given regular expression. Then, write a loop that uses the span information from each `match` to only print the numbers found in the original multi-line string. Your code should work in the general case where the numbers can have any number of digits. For example, if the numbers in the string were to change, your code should still be able to find them and print them. Finally, in this exercise you cannot use `\d` in your regular expression. **HINT :** Notice that there are no white spaces in the multi-line string. Use the `\s` sequence to find the tabs and newlines. Then notice that you can use the span's `end` and `start` index from consecutive matches to figure out the number of digits of each number. Use these indices to print the numbers found in the original multi-line string. You can use the `match.span()` method we saw before to find the `start` and `end` indices of each `match`. Alternatively, you can also use the `.start()` and `.end()` methods to extract the `start` and `end` indices of each match. The `match.start()` is equivalent to `match.span()[0]` and `match.end()` is equivalent to `match.span()[1]`.
# Import re module import re # Sample text sample_text = ''' 123\t45\t7895 1\t222\t33 ''' # Print sample_text print('Sample Text:\n', sample_text) # Create a regular expression object with the regular expression regex = re.compile(r'\s') # Search the sample_text for the regular expression matches = regex.finditer(sample_text) # Write a loop to print all the numbers found in the original string counter = 0 for match in matches: if counter != 0: start_idx = match.start() print(sample_text[end_idx:start_idx]) end_idx = match.end() counter += 1
Sample Text: 123 45 7895 1 222 33 123 45 7895 1 222 33
Apache-2.0
TradingAI/AI Algorithms in Trading/Lesson 05 - Financial Statements/simple_patterns.ipynb
Quananhle/Python
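An alternative to the span bookkeeping above, added here as a sketch and still not using `\d`, is to split the string on whitespace with `re.split()` and print the non-empty pieces:

```python
import re

sample_text = '''
123\t45\t7895
1\t222\t33
'''

# Split on runs of whitespace and drop the empty strings at the edges
for token in re.split(r'\s+', sample_text):
    if token:
        print(token)
```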
Matching Alphanumeric Characters Using `\w`In the code below, we will use `\w` as our regular expression to find all the alphanumeric characters in our `sample_text` string. This includes the underscore ( `_` ), all the numbers from 0 through 9, and all the uppercase and lowercase letters:
# Import re module import re # Sample text sample_text = ''' You can contact FAKE Company at: [email protected]. ''' # Create a regular expression object with the regular expression '\w' regex = re.compile(r'\w') # Search the sample_text for the regular expression matches = regex.finditer(sample_text) # Print all the matches for match in matches: print(match)
<_sre.SRE_Match object; span=(1, 2), match='Y'> <_sre.SRE_Match object; span=(2, 3), match='o'> <_sre.SRE_Match object; span=(3, 4), match='u'> <_sre.SRE_Match object; span=(5, 6), match='c'> <_sre.SRE_Match object; span=(6, 7), match='a'> <_sre.SRE_Match object; span=(7, 8), match='n'> <_sre.SRE_Match object; span=(9, 10), match='c'> <_sre.SRE_Match object; span=(10, 11), match='o'> <_sre.SRE_Match object; span=(11, 12), match='n'> <_sre.SRE_Match object; span=(12, 13), match='t'> <_sre.SRE_Match object; span=(13, 14), match='a'> <_sre.SRE_Match object; span=(14, 15), match='c'> <_sre.SRE_Match object; span=(15, 16), match='t'> <_sre.SRE_Match object; span=(17, 18), match='F'> <_sre.SRE_Match object; span=(18, 19), match='A'> <_sre.SRE_Match object; span=(19, 20), match='K'> <_sre.SRE_Match object; span=(20, 21), match='E'> <_sre.SRE_Match object; span=(22, 23), match='C'> <_sre.SRE_Match object; span=(23, 24), match='o'> <_sre.SRE_Match object; span=(24, 25), match='m'> <_sre.SRE_Match object; span=(25, 26), match='p'> <_sre.SRE_Match object; span=(26, 27), match='a'> <_sre.SRE_Match object; span=(27, 28), match='n'> <_sre.SRE_Match object; span=(28, 29), match='y'> <_sre.SRE_Match object; span=(30, 31), match='a'> <_sre.SRE_Match object; span=(31, 32), match='t'> <_sre.SRE_Match object; span=(34, 35), match='f'> <_sre.SRE_Match object; span=(35, 36), match='a'> <_sre.SRE_Match object; span=(36, 37), match='k'> <_sre.SRE_Match object; span=(37, 38), match='e'> <_sre.SRE_Match object; span=(38, 39), match='_'> <_sre.SRE_Match object; span=(39, 40), match='c'> <_sre.SRE_Match object; span=(40, 41), match='o'> <_sre.SRE_Match object; span=(41, 42), match='m'> <_sre.SRE_Match object; span=(42, 43), match='p'> <_sre.SRE_Match object; span=(43, 44), match='a'> <_sre.SRE_Match object; span=(44, 45), match='n'> <_sre.SRE_Match object; span=(45, 46), match='y'> <_sre.SRE_Match object; span=(46, 47), match='1'> <_sre.SRE_Match object; span=(47, 48), match='2'> <_sre.SRE_Match object; span=(49, 50), match='e'> <_sre.SRE_Match object; span=(50, 51), match='m'> <_sre.SRE_Match object; span=(51, 52), match='a'> <_sre.SRE_Match object; span=(52, 53), match='i'> <_sre.SRE_Match object; span=(53, 54), match='l'> <_sre.SRE_Match object; span=(55, 56), match='c'> <_sre.SRE_Match object; span=(56, 57), match='o'> <_sre.SRE_Match object; span=(57, 58), match='m'>
Apache-2.0
TradingAI/AI Algorithms in Trading/Lesson 05 - Financial Statements/simple_patterns.ipynb
Quananhle/Python
As we can see, all the matches found correspond to alphanumeric characters only, including the underscore in the email address. Conversely, if we wanted to find all the characters that are **not** alphanumeric characters, we would use `\W` as our regular expression, as shown below:
# Import re module import re # Sample text sample_text = ''' You can contact FAKE Company at: [email protected]. ''' # Create a regular expression object with the regular expression '\W' regex = re.compile(r'\W') # Search the sample_text for the regular expression matches = regex.finditer(sample_text) # Print all the matches for match in matches: print(match)
<_sre.SRE_Match object; span=(0, 1), match='\n'> <_sre.SRE_Match object; span=(4, 5), match=' '> <_sre.SRE_Match object; span=(8, 9), match=' '> <_sre.SRE_Match object; span=(16, 17), match=' '> <_sre.SRE_Match object; span=(21, 22), match=' '> <_sre.SRE_Match object; span=(29, 30), match=' '> <_sre.SRE_Match object; span=(32, 33), match=':'> <_sre.SRE_Match object; span=(33, 34), match='\n'> <_sre.SRE_Match object; span=(48, 49), match='@'> <_sre.SRE_Match object; span=(54, 55), match='.'> <_sre.SRE_Match object; span=(58, 59), match='.'> <_sre.SRE_Match object; span=(59, 60), match='\n'>
Apache-2.0
TradingAI/AI Algorithms in Trading/Lesson 05 - Financial Statements/simple_patterns.ipynb
Quananhle/Python
We can see that none of the matches are alphanumeric characters. We also see that, by using `\W`, we were able to match all whitespace characters, and the `@` symbol in the email address. TODO: Find emails In the cell below, our `sample_text` consists of a multi-line string that contains three email addresses:```[email protected] [email protected] [email protected]```Notice that all three email addresses have the same pattern, namely the first name initial, followed by a dot (`.`), followed by the last name initial, and ending in `@email.com`. Take advantage of the fact that all three email addresses have the same pattern to write a single regular expression that can find all three email addresses in our `sample_text` string. As usual, save the regular expression object in a variable called `regex`. Then use the `.finditer()` method to search the `sample_text` string for the given regular expression. Finally, write a loop to print all the `matches` found by the `.finditer()` method.
# Import re module import re # Sample text sample_text = ''' John Sanders: [email protected] Alice Walters: [email protected] Mary Jones: [email protected] ''' # Print sample_text print('Sample Text:\n', sample_text) # Create a regular expression object; the dots are escaped (\.) so they match a literal period regex = re.compile(r'[0-9a-zA-Z]\.[0-9a-zA-Z]@email\.com') # Search the sample_text for the regular expression matches = regex.finditer(sample_text) # Print all the matches for match in matches: print(match)
Sample Text: John Sanders: [email protected] Alice Walters: [email protected] Mary Jones: [email protected] <_sre.SRE_Match object; span=(15, 28), match='[email protected]'> <_sre.SRE_Match object; span=(44, 57), match='[email protected]'> <_sre.SRE_Match object; span=(70, 83), match='[email protected]'>
Apache-2.0
TradingAI/AI Algorithms in Trading/Lesson 05 - Financial Statements/simple_patterns.ipynb
Quananhle/Python
Creating variables In this notebook we will look into the concept of variables. Python, like R, is a dynamically-typed language, meaning you can change the class/type of a variable on the go. This is convenient in many places, but dangerous in many other ways. You cannot blindly rely on the type of a variable, and you should always retrace your steps throughout the code to see what the variable is currently representing. This is sometimes hard, especially in places like this notebook, where we can execute different bits of code in any order.**You are encouraged to run this notebook on Colab**[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Magica-Chen/WebSNA-notes/blob/main/Week0/Week0-notes-python-fundamentals.ipynb) Intro and strings Let's create a variable:
name = "Edinburgh" name
_____no_output_____
MIT
Week0/Week0-notes-python-fundamentals.ipynb
Magica-Chen/WebSNA-notes
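A tiny added example of what dynamic typing means in practice: the same name can refer to objects of different types at different moments, so its type can only be checked at runtime.

```python
x = 439            # x refers to an int here
print(type(x))
x = "Edinburgh"    # the same name now refers to a string
print(type(x))
```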
This creates a string variable. It can be displayed easily, although it is safer to use the print function:
print(name)
Edinburgh
MIT
Week0/Week0-notes-python-fundamentals.ipynb
Magica-Chen/WebSNA-notes
It is also wise to check the type of the variable, in case you are lost:
type(name)
_____no_output_____
MIT
Week0/Week0-notes-python-fundamentals.ipynb
Magica-Chen/WebSNA-notes
This confirms that we are dealing with a string. There are a few things we can do with strings (which can be written using either single or double quotes):
name = 'university of edinburgh' print(name.lower()) print(name.upper()) print(name.title())
university of edinburgh UNIVERSITY OF EDINBURGH University Of Edinburgh
MIT
Week0/Week0-notes-python-fundamentals.ipynb
Magica-Chen/WebSNA-notes
We can concatenate strings easily using +, or using a comma in a print statement:
print('University', 'of Edinburgh') print('University' + ' ' + 'of Edinburgh')
University of Edinburgh University of Edinburgh
MIT
Week0/Week0-notes-python-fundamentals.ipynb
Magica-Chen/WebSNA-notes
Writing print('The University of Edinburgh is '+ 439) will not work, as the + operator cannot concatenate a string with a number; however, we can convert any object into a string first:
print('The University of Edinburgh is '+ str(439))
The University of Edinburgh is 439
MIT
Week0/Week0-notes-python-fundamentals.ipynb
Magica-Chen/WebSNA-notes
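As a side note not in the original notebook, an f-string performs the conversion for you, which avoids the explicit str() call:

```python
print(f"The University of Edinburgh is {439}")
```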
A few other useful tricks:
name = " edinburgh " print("|"+name.lstrip()+"|") print("|"+name.rstrip()+"|") print("|"+name.strip()+"|")
|edinburgh | | edinburgh| |edinburgh|
MIT
Week0/Week0-notes-python-fundamentals.ipynb
Magica-Chen/WebSNA-notes
You can use control characters as well:
print('Edinburgh\thas a university\nrunning web & social network analytics course')
Edinburgh has a university running web & social network analytics course
MIT
Week0/Week0-notes-python-fundamentals.ipynb
Magica-Chen/WebSNA-notes
Numbers
a = 10 b = -10.1023 #Some operations illustrated (\t stands for a tab) print("a: \t\t\t" + str(a)) print("b: \t\t\t" + str(b)) print("absolute of b: \t\t" + str(abs(b))) print("rounded b: \t\t" + str(round(b,3))) print("square of a: \t\t" + str(pow(a,2))) print("cube of a: \t\t" + str(a**3)) print("integer part of b: \t" + str(int(b)))
a: 10 b: -10.1023 absolute of b: 10.1023 rounded b: -10.102 square of a: 100 cube of a: 1000 integer part of b: -10
MIT
Week0/Week0-notes-python-fundamentals.ipynb
Magica-Chen/WebSNA-notes