# Sentence Classification In this notebook, we will be classifying text. (The dataset used here contains tweets, but the process shown here can be adapted for other text classification tasks too.) The content is arranged as follows: * Cleaning and basic pre-processing of text * Building a vocabulary and creating iterators using TorchText * Building a sequence model - an LSTM in PyTorch - to predict labels **_Notebook is still under construction...._** ``` # Check the files in Data import os for dirname, _, filenames in os.walk('./input'): for filename in filenames: print(os.path.join(dirname, filename)) import pandas as pd import numpy as np import matplotlib.pyplot as plt from sklearn.model_selection import train_test_split import time import torch from torchtext import data import torch.nn as nn import spacy # Import Data test = pd.read_csv("./input/test.csv") train = pd.read_csv("./input/train.csv") ``` ## **Data Pre-Processing** Cleaning the text data ``` # Shape of dataset train.shape ``` #### Let us get a glimpse of the data table ``` train.head() ``` #### The `target` column marks the label of the text: * **label==1** : the tweet is about a disaster. * **label==0** : the tweet is not about a disaster. We are only interested in the `text` and `target` columns, so we drop the rest. ``` # drop 'id', 'keyword' and 'location' columns. train.drop(columns=['id','keyword','location'], inplace=True) ``` ### Next we clean and modify the texts so that the classification algorithm does not get confused by irrelevant information. ``` # to clean data def normalise_text (text): text = text.str.lower() # lowercase text = text.str.replace(r"\#","") # remove the hash sign from hashtags text = text.str.replace(r"http\S+","URL") # replace URL addresses with 'URL' text = text.str.replace(r"@","") text = text.str.replace(r"[^A-Za-z0-9()!?\'\`\"]", " ") text = text.str.replace("\s{2,}", " ") return text train["text"]=normalise_text(train["text"]) ``` Let us look at the cleaned text once ``` train['text'].head() ``` Split the data into training and validation sets ``` # split data into train and validation train_df, valid_df = train_test_split(train) train_df.head() valid_df.head() ``` The following will help make the results reproducible later. ``` SEED = 42 torch.manual_seed(SEED) torch.backends.cudnn.deterministic = True torch.backends.cudnn.benchmark = False ``` We need to create `Field` objects to process the text data. These field objects will contain the information needed to convert the texts to tensors. We will set two parameters: * `tokenize='spacy'` and * `include_lengths=True`, which means that spaCy will be used to tokenize the texts and that the field objects should include the length of each text - which will be needed to pad the texts. We will later use methods of these objects to create a vocabulary, which will give us a numerical representation for every token. The `LabelField` is a shallow wrapper around `Field`, useful for data labels. ``` import spacy TEXT = data.Field(tokenize = 'spacy', include_lengths = True) LABEL = data.LabelField(dtype = torch.float) ``` Next we create a `DataFrameDataset` class which will allow us to load the data and the target labels as a `Dataset`, using a DataFrame as the source of data. We will create a vocabulary using the training dataset and then pass the training and validation datasets to the iterator later. 
``` # source : https://gist.github.com/lextoumbourou/8f90313cbc3598ffbabeeaa1741a11c8 # to use DataFrame as a Data source class DataFrameDataset(data.Dataset): def __init__(self, df, fields, is_test=False, **kwargs): examples = [] for i, row in df.iterrows(): label = row.target if not is_test else None text = row.text examples.append(data.Example.fromlist([text, label], fields)) super().__init__(examples, fields, **kwargs) @staticmethod def sort_key(ex): return len(ex.text) @classmethod def splits(cls, fields, train_df, val_df=None, test_df=None, **kwargs): train_data, val_data, test_data = (None, None, None) data_field = fields if train_df is not None: train_data = cls(train_df.copy(), data_field, **kwargs) if val_df is not None: val_data = cls(val_df.copy(), data_field, **kwargs) if test_df is not None: test_data = cls(test_df.copy(), data_field, True, **kwargs) return tuple(d for d in (train_data, val_data, test_data) if d is not None) ``` * We will first create a list called `fields`, where each element is a tuple of a string (name) and a `Field` object. The `Field` object for the text is paired with the name 'text' and the object for the label with the name 'label'. Then we will use the `splits` method of `DataFrameDataset`, which will return the training and validation datasets, composed of `Example`s of the tokenized texts and labels. The texts and labels will carry the names that we provided in `fields`. ``` fields = [('text',TEXT), ('label',LABEL)] train_ds, val_ds = DataFrameDataset.splits(fields, train_df=train_df, val_df=valid_df) # Let's look at a random example print(vars(train_ds[15])) # Check the type print(type(train_ds[15])) ``` We will now build the vocabulary using only the training dataset. This can be accessed through `TEXT.vocab` and will be shared by the validation dataset. We will use pretrained 200-dimensional GloVe vectors to represent the tokens. Any unknown token will get a zero vector. These vectors will later be loaded into the embedding layer. ``` MAX_VOCAB_SIZE = 25000 TEXT.build_vocab(train_ds, max_size = MAX_VOCAB_SIZE, vectors = 'glove.6B.200d', unk_init = torch.Tensor.zero_) LABEL.build_vocab(train_ds) ``` Next we build the iterators. 
``` BATCH_SIZE = 128 device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') train_iterator, valid_iterator = data.BucketIterator.splits( (train_ds, val_ds), batch_size = BATCH_SIZE, sort_within_batch = True, device = device) ``` ## LSTM architecture ### Declare Hyperparameters ``` # Hyperparameters num_epochs = 25 learning_rate = 0.001 INPUT_DIM = len(TEXT.vocab) EMBEDDING_DIM = 200 HIDDEN_DIM = 256 OUTPUT_DIM = 1 N_LAYERS = 2 BIDIRECTIONAL = True DROPOUT = 0.2 PAD_IDX = TEXT.vocab.stoi[TEXT.pad_token] # padding ``` ### Setting up the LSTM model ``` class LSTM_net(nn.Module): def __init__(self, vocab_size, embedding_dim, hidden_dim, output_dim, n_layers, bidirectional, dropout, pad_idx): super().__init__() self.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx = pad_idx) self.rnn = nn.LSTM(embedding_dim, hidden_dim, num_layers=n_layers, bidirectional=bidirectional, dropout=dropout) self.fc1 = nn.Linear(hidden_dim * 2, hidden_dim) self.fc2 = nn.Linear(hidden_dim, 1) self.dropout = nn.Dropout(dropout) def forward(self, text, text_lengths): # text = [sent len, batch size] embedded = self.embedding(text) # embedded = [sent len, batch size, emb dim] #pack sequence packed_embedded = nn.utils.rnn.pack_padded_sequence(embedded, text_lengths) packed_output, (hidden, cell) = self.rnn(packed_embedded) #unpack sequence # output, output_lengths = nn.utils.rnn.pad_packed_sequence(packed_output) # output = [sent len, batch size, hid dim * num directions] # output over padding tokens are zero tensors # hidden = [num layers * num directions, batch size, hid dim] # cell = [num layers * num directions, batch size, hid dim] # concat the final forward (hidden[-2,:,:]) and backward (hidden[-1,:,:]) hidden layers # and apply dropout hidden = self.dropout(torch.cat((hidden[-2,:,:], hidden[-1,:,:]), dim = 1)) # hidden = [batch size, hid dim * num directions] output = self.fc1(hidden) output = self.dropout(self.fc2(output)) return output # creating an instance of our LSTM_net class model = LSTM_net(INPUT_DIM, EMBEDDING_DIM, HIDDEN_DIM, OUTPUT_DIM, N_LAYERS, BIDIRECTIONAL, DROPOUT, PAD_IDX) ``` Loading the pretrained vectors into the embedding matrix. ``` pretrained_embeddings = TEXT.vocab.vectors print(pretrained_embeddings.shape) model.embedding.weight.data.copy_(pretrained_embeddings) # initialise the padding token's embedding to zeros model.embedding.weight.data[PAD_IDX] = torch.zeros(EMBEDDING_DIM) print(model.embedding.weight.data) model.to(device) # move the model to the GPU (if available) # Loss and optimizer criterion = nn.BCEWithLogitsLoss() optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate) def binary_accuracy(preds, y): """ Returns accuracy per batch, i.e. 
if you get 8/10 right, this returns 0.8, NOT 8 """ #round predictions to the closest integer rounded_preds = torch.round(torch.sigmoid(preds)) correct = (rounded_preds == y).float() #convert into float for division acc = correct.sum() / len(correct) return acc ``` ### Training the model ``` # training function def train(model, iterator): epoch_loss = 0 epoch_acc = 0 model.train() for batch in iterator: text, text_lengths = batch.text optimizer.zero_grad() predictions = model(text, text_lengths).squeeze(1) loss = criterion(predictions, batch.label) acc = binary_accuracy(predictions, batch.label) loss.backward() optimizer.step() epoch_loss += loss.item() epoch_acc += acc.item() return epoch_loss / len(iterator), epoch_acc / len(iterator) def evaluate(model, iterator): epoch_acc = 0 model.eval() with torch.no_grad(): for batch in iterator: text, text_lengths = batch.text predictions = model(text, text_lengths).squeeze(1) acc = binary_accuracy(predictions, batch.label) epoch_acc += acc.item() return epoch_acc / len(iterator) t = time.time() loss=[] acc=[] val_acc=[] for epoch in range(num_epochs): train_loss, train_acc = train(model, train_iterator) valid_acc = evaluate(model, valid_iterator) print(f'\tTrain Loss: {train_loss:.3f} | Train Acc: {train_acc*100:.2f}%') print(f'\t Val. Acc: {valid_acc*100:.2f}%') loss.append(train_loss) acc.append(train_acc) val_acc.append(valid_acc) print(f'time:{time.time()-t:.3f}') ``` ### Plot a graph to trace model performance ``` plt.xlabel("runs") plt.ylabel("normalised measure of loss/accuracy") x_len=list(range(len(acc))) plt.axis([0, max(x_len), 0, 1]) plt.title('result of LSTM') # scale the loss by its maximum so it fits on the same 0-1 axis as the accuracies loss=np.asarray(loss)/max(loss) plt.plot(x_len, loss, 'r.',label="loss") plt.plot(x_len, acc, 'b.', label="accuracy") plt.plot(x_len, val_acc, 'g.', label="val_accuracy") plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.2) plt.show() ```
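To try the trained classifier on a single new tweet, here is a minimal hedged sketch (not part of the original notebook). It assumes the `model`, `TEXT` vocabulary, `device` and spaCy English tokenizer defined above, and an arbitrary 0.5 decision threshold; depending on your torch/torchtext versions the lengths tensor may need to stay on the CPU, and the spaCy model name may differ.

```
import spacy
nlp = spacy.load('en')  # assumed to match the spaCy model used by the 'spacy' tokenizer above

def predict_tweet(sentence, threshold=0.5):
    model.eval()
    tokens = [tok.text for tok in nlp.tokenizer(sentence.lower())]
    indexed = [TEXT.vocab.stoi[t] for t in tokens]              # numericalise with the training vocab
    length = torch.LongTensor([len(indexed)])                   # lengths kept on the CPU
    tensor = torch.LongTensor(indexed).unsqueeze(1).to(device)  # shape: [sent len, batch size = 1]
    with torch.no_grad():
        prob = torch.sigmoid(model(tensor, length)).item()
    return int(prob > threshold), prob

# Example with an arbitrary sentence (not taken from the dataset)
print(predict_tweet("there is a wildfire spreading near the highway"))
```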
``` import numpy as np import pandas as pd movies_df = pd.read_csv('../data/movies_clean.csv') reviews_df = pd.read_csv('../data/train_data.csv') del movies_df['Unnamed: 0'] del reviews_df['Unnamed: 0'] movies_df.head() reviews_df.head() ``` ### Creating a ranked DataFrame ``` rating_mean = reviews_df.groupby('movie_id')['rating'].mean() rating_count = reviews_df.groupby('movie_id')['rating'].count() rating_latest = reviews_df.groupby('movie_id')['timestamp'].max() rating_latest[20629] movies_df def create_ranked_df(movies_df: pd.DataFrame = movies_df, reviews_df: pd.DataFrame = reviews_df): rating_mean = reviews_df.groupby('movie_id')['rating'].mean() rating_count = reviews_df.groupby('movie_id')['rating'].count() rating_latest = reviews_df.groupby('movie_id')['timestamp'].max() rating_df = pd.DataFrame({"mean": rating_mean, "count": rating_count, "latest_ts": rating_latest}) # join movie_id against the rating_df index (pandas does not allow 'on' together with right_index) ranked_movie = movies_df.merge(rating_df, how='left', left_on='movie_id', right_index=True) ranked_movie.sort_values(["mean","count","latest_ts"], ascending=False, inplace=True) ranked_movie = ranked_movie[ranked_movie['count'] > 4][["movie", "mean","count","latest_ts"]] return ranked_movie ranked_df = create_ranked_df() ranked_df ``` ## Find Similar Movies ``` movie_mat = movies_df[movies_df['movie_id'] == 4100].iloc[:,4:] np.array(movie_mat) np.array(movies_df.iloc[:,4:]).transpose() def find_similiar_movie(movie_id: int, movies_df: pd.DataFrame = movies_df): #get row of features for the given movie_id movie_mat = np.array(movies_df[movies_df['movie_id'] == movie_id].iloc[:,4:])[0] print(movie_mat) #get feature matrix of all movies movies_mat = np.array(movies_df.iloc[:,4:]) #calculate similarity between the given movie and all movies dot_prod = movie_mat.dot(movies_mat.transpose()) #get the most similar movie (break ties at random) movie_rows = np.where(dot_prod == np.max(dot_prod))[0] movie_row = np.random.choice(movie_rows) movie = movies_df.iloc[movie_row]['movie'] return movie find_similiar_movie(2649128, movies_df) def funk_svd_fit(reviews_df: pd.DataFrame, latent_features=20, learning_rate=0.005, iters=10): user_review_df = reviews_df.groupby(['user_id', 'movie_id'])['rating'].max().unstack() user_review_mat = np.array(user_review_df) n_users = user_review_df.shape[0] n_movies = user_review_df.shape[1] n_ratings = np.count_nonzero(~np.isnan(user_review_mat)) # number of observed ratings (missing entries are NaN) u_mat = np.random.rand(n_users, latent_features) v_mat = np.random.rand(latent_features, n_movies) print("Iterations | MSE") for iteration in range(iters): sse_cum = 0 for i in range(n_users): for j in range(n_movies): if user_review_mat[i,j] > 0: diff = user_review_mat[i,j] - u_mat[i,:].dot(v_mat[:,j]) sse_cum += diff**2 for k in range(latent_features): u_mat[i,k] += learning_rate * 2 * diff * v_mat[k,j] v_mat[k,j] += learning_rate * 2 * diff * u_mat[i,k] print("%d \t\t %f" % (iteration+1, sse_cum / n_ratings)) return u_mat, v_mat u_mat, v_mat = funk_svd_fit(reviews_df, iters=20) def predict(u_mat, v_mat, user, movie): return u_mat[user,:].dot(v_mat[:,movie]) predict(u_mat, v_mat, 1270, 50) a = np.array([np.nan, np.nan]) a[0] ```
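The MSE printed during training is measured on the same ratings the factors were fit to. As a hedged sketch of checking generalisation (the `val_df` name and its CSV path are assumptions; it is expected to carry the same `user_id` / `movie_id` / `rating` columns as `reviews_df`), the learned `u_mat` / `v_mat` can be scored on held-out ratings like this:

```
def validation_mse(u_mat, v_mat, reviews_df, val_df):
    """Mean squared error on held-out ratings whose user and movie were seen in training."""
    # Rebuild the same user/movie index layout that funk_svd_fit used internally
    user_review_df = reviews_df.groupby(['user_id', 'movie_id'])['rating'].max().unstack()
    user_index = {u: i for i, u in enumerate(user_review_df.index)}
    movie_index = {m: j for j, m in enumerate(user_review_df.columns)}
    sse, n = 0.0, 0
    for _, row in val_df.iterrows():
        u, m = row['user_id'], row['movie_id']
        if u in user_index and m in movie_index:
            pred = u_mat[user_index[u], :].dot(v_mat[:, movie_index[m]])
            sse += (row['rating'] - pred) ** 2
            n += 1
    return sse / n if n else np.nan

# val_df = pd.read_csv('../data/val_data.csv')   # hypothetical hold-out file
# print(validation_mse(u_mat, v_mat, reviews_df, val_df))
```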
``` # TensorBoard Helper Functions and Constants # Directory to export TensorBoard summary statistics, graph data, etc. TB_DIR = '/tmp/tensorboard/tf_cnn' def clean_tb_dir(): !rm -rf /tmp/tensorboard/tf_cnn def _start_tb(d): """ Private function that calls `tensorboard` shell command args: d: The desired directory to launch in TensorBoard """ !tensorboard --port=6006 --logdir=$d def start_tensorboard(d=TB_DIR): """ Starts TensorBoard from the notebook in a separate thread. Prevents Jupyter Notebook from halting while TensorBoard runs. """ import threading threading.Thread(target=_start_tb, args=(TB_DIR,)).start() del threading def stop_tensorboard(): """ Kills all TensorBoard processes """ !ps -aef | grep "tensorboard" | tr -s ' ' | cut -d ' ' -f2 | xargs kill -KILL def reset_tensorboard(): stop_tensorboard() start_tensorboard() # Import core TensorFlow modules import tensorflow as tf import numpy as np # Modules required for file download and extraction import os import sys import tarfile from six.moves.urllib.request import urlretrieve from scipy import ndimage # Directory to download dataset DATASET_DIR = '/tmp/pipeline/datasets/notmnist/' # Create the directory !mkdir -p {DATASET_DIR} def maybe_download(filename, url, force=False): """Download a file if not present.""" if force or not os.path.exists(DATASET_DIR + filename): filename, _ = urlretrieve(url + filename, DATASET_DIR + filename) print('\nDownload complete for {}'.format(filename)) return filename else: print('File {} already present.'.format(filename)) return DATASET_DIR + filename def maybe_extract(filename, force=False): root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz if os.path.isdir(root) and not force: # You may override by setting force=True. print('{} already present - don\'t need to extract {}.'.format(root, filename)) else: print('Extracting data for {}. This may take a while. 
Please wait.'.format(root)) tar = tarfile.open(filename) sys.stdout.flush() tar.extractall(root[0:root.rfind('/') + 1]) tar.close() data_folders = [ os.path.join(root, d) for d in sorted(os.listdir(root)) if os.path.isdir(os.path.join(root, d))] print(data_folders) return data_folders # Locations to download data: url = 'http://yaroslavvb.com/upload/notMNIST/' # Download two datasets train_zip_path = maybe_download('notMNIST_small.tar.gz', url) # Extract datasets train_folders = maybe_extract(train_zip_path) len(train_folders) image_height = 28 # Pixel height of images image_width = 28 # Pixel width of images pixel_depth = 255.0 # Number of levels per pixel expected_img_shape = (image_height, image_width) # Black and white image, no 3rd dimension num_labels = len(train_folders) def load_image_folder(folder): """Load the data for a single image label.""" # Create a list of image paths inside the folder image_files = os.listdir(folder) # Create empty numpy array to hold data dataset = np.ndarray(shape=(len(image_files), image_height, image_width), dtype=np.float32) num_images = 0 # Counter for number of successful images loaded for image in image_files: image_file = os.path.join(folder, image) try: # Read in image pixel data as floating point values image_data = ndimage.imread(image_file).astype(float) # Scale values: [0.0, 255.0] => [-1.0, 1.0] image_data = (image_data - pixel_depth / 2) / (pixel_depth / 2) if image_data.shape != expected_img_shape: print('File {} has unexpected dimensions: '.format(str(image_data.shape))) continue # Add image to the numpy array dataset dataset[num_images, :, :] = image_data num_images = num_images + 1 except IOError as e: print('Could not read:', image_file, ':', e, '- skipping this file and moving on.') # Trim dataset to remove unused space dataset = dataset[0:num_images, :, :] return dataset def make_data_label_arrays(num_rows, image_height, image_width): """ Creates and returns empty numpy arrays for input data and labels """ if num_rows: dataset = np.ndarray((num_rows, image_height, image_width), dtype=np.float32) labels = np.ndarray(num_rows, dtype=np.int32) else: dataset, labels = None, None return dataset, labels def collect_datasets(data_folders): datasets = [] total_images = 0 for label, data_folder in enumerate(data_folders): # Bring all test folder images in as numpy arrays dataset = load_image_folder(data_folder) num_images = len(dataset) total_images += num_images datasets.append((dataset, label, num_images)) return datasets, total_images def merge_train_test_datasets(datasets, total_images, percent_test): num_train = total_images * (1.0 - percent_test) num_test = total_images * percent_test train_dataset, train_labels = make_data_label_arrays(num_train, image_height, image_width) test_dataset, test_labels = make_data_label_arrays(num_test, image_height, image_width) train_counter = 0 test_counter = 0 dataset_counter = 1 for dataset, label, num_images in datasets: np.random.shuffle(dataset) if dataset_counter != len(datasets): n_v = num_images // (1.0 / percent_test) n_t = num_images - n_v else: # Last label, make sure dataset sizes match up to what we created n_v = len(test_dataset) - test_counter n_t = len(train_dataset) - train_counter train_dataset[train_counter: train_counter + n_t] = dataset[:n_t] train_labels[train_counter: train_counter + n_t] = label test_dataset[test_counter: test_counter + n_v] = dataset[n_t: n_t + n_v] test_labels[test_counter: test_counter + n_v] = label train_counter += n_t test_counter += n_v dataset_counter += 
1 return train_dataset, train_labels, test_dataset, test_labels train_test_datasets, train_test_total_images = collect_datasets(train_folders) train_dataset, train_labels, test_dataset, test_labels = \ merge_train_test_datasets(train_test_datasets, train_test_total_images, 0.1) len(train_dataset) # Convert data examples into 3-D tensors num_channels = 1 # grayscale def reformat(dataset, labels): dataset = dataset.reshape( (-1, image_height, image_width, num_channels)).astype(np.float32) labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32) return dataset, labels train_dataset, train_labels = reformat(train_dataset, train_labels) test_dataset, test_labels = reformat(test_dataset, test_labels) print('Training set', train_dataset.shape, train_labels.shape) print('Test set', test_dataset.shape, test_labels.shape) def shuffle_data_with_labels(dataset, labels): indices = range(len(dataset)) np.random.shuffle(indices) new_data = np.ndarray(dataset.shape, dataset.dtype) new_labels = np.ndarray(labels.shape, dataset.dtype) n = 0 for i in indices: new_data[n] = dataset[i] new_labels[n] = labels[i] n += 1 return new_data, new_labels train_dataset, train_labels = shuffle_data_with_labels(train_dataset, train_labels) batch_size = 64 patch_size = 5 depth = 16 num_hidden = 64 graph = tf.Graph() with graph.as_default(): def variable_summaries(var, name): with tf.name_scope("summaries"): mean = tf.reduce_mean(var) tf.scalar_summary('mean/' + name, mean) with tf.name_scope('stddev'): stddev = tf.sqrt(tf.reduce_sum(tf.square(var - mean))) tf.scalar_summary('sttdev/' + name, stddev) tf.scalar_summary('max/' + name, tf.reduce_max(var)) tf.scalar_summary('min/' + name, tf.reduce_min(var)) tf.histogram_summary(name, var) # Input data. input_data = tf.placeholder( tf.float32, shape=(None, image_height, image_width, num_channels), name="input_data") input_labels = tf.placeholder(tf.float32, shape=(None, num_labels), name="input_labels") keep_rate = tf.placeholder(tf.float32, shape=(), name="keep_rate") # Variables. layer1_weights = tf.Variable(tf.truncated_normal( [patch_size, patch_size, num_channels, depth], stddev=0.1), name="L1Weights") layer1_biases = tf.Variable(tf.zeros([depth]), name="L1Bias") layer2_weights = tf.Variable(tf.truncated_normal( [patch_size, patch_size, depth, depth], stddev=0.1), name="L2Weights") layer2_biases = tf.Variable(tf.constant(1.0, shape=[depth]), name="L2Bias") layer3_weights = tf.Variable(tf.truncated_normal( [image_height // 4 * image_width // 4 * depth, num_hidden], stddev=0.1), name="L3Weights") layer3_biases = tf.Variable(tf.constant(1.0, shape=[num_hidden]), name="L3Bias") layer4_weights = tf.Variable(tf.truncated_normal( [num_hidden, num_labels], stddev=0.1), name="L4Weights") layer4_biases = tf.Variable(tf.constant(1.0, shape=[num_labels]), name="L4Bias") # Add variable summaries for v in [layer1_weights, layer2_weights, layer3_weights, layer4_weights, layer1_biases, layer2_biases, layer3_biases, layer4_biases]: variable_summaries(v, v.name) # Model. 
def model(data): with tf.name_scope("Layer1"): conv = tf.nn.conv2d(data, layer1_weights, [1, 2, 2, 1], padding='SAME', name="2DConvolution") hidden = tf.nn.relu(conv + layer1_biases, name="ReLu") dropped = tf.nn.dropout(hidden, keep_rate, name="Dropout") with tf.name_scope("Layer2"): conv = tf.nn.conv2d(dropped, layer2_weights, [1, 2, 2, 1], padding='SAME', name="2DConvolution") hidden = tf.nn.relu(conv + layer2_biases, name="ReLu") dropped = tf.nn.dropout(hidden, keep_rate, name="Dropout") with tf.name_scope("Layer3"): shape = dropped.get_shape().as_list() reshape = tf.reshape(dropped, [-1, shape[1] * shape[2] * shape[3]]) hidden = tf.nn.relu(tf.matmul(reshape, layer3_weights) + layer3_biases, name="ReLu") return tf.matmul(hidden, layer4_weights) + layer4_biases # Training computation. logits = model(input_data) loss = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits(logits, input_labels)) # Optimizer. optimizer = tf.train.GradientDescentOptimizer(0.05).minimize(loss) # Predictions for the training and test data. model_prediction = tf.nn.softmax(logits, name="prediction") label_prediction = tf.argmax(model_prediction, 1, name="predicted_label") with tf.name_scope('summaries'): tf.scalar_summary('loss', loss) with tf.name_scope('accuracy'): correct_prediction = tf.equal(label_prediction, tf.argmax(input_labels, 1)) model_accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) tf.scalar_summary('accuracy', model_accuracy) merged_summaries = tf.merge_all_summaries() init = tf.initialize_all_variables() num_steps = 1001 clean_tb_dir() session = tf.Session(graph=graph) writer = tf.train.SummaryWriter(TB_DIR, graph=session.graph) session.run(init) print('Initialized') for step in range(num_steps): offset = (step * batch_size) % (train_labels.shape[0] - batch_size) batch_data = train_dataset[offset:(offset + batch_size), :, :, :] batch_labels = train_labels[offset:(offset + batch_size), :] feed_dict = {input_data : batch_data, input_labels : batch_labels, keep_rate: 0.5} _, l, predictions, accuracy, summaries = session.run( [optimizer, loss, model_prediction, model_accuracy, merged_summaries], feed_dict=feed_dict) if (step % 50 == 0): writer.add_summary(summaries, step) print('Minibatch loss at step %d: %f' % (step, l)) print('Minibatch accuracy: {}'.format(accuracy)) test_dict = {input_data : test_dataset, input_labels : test_labels, keep_rate: 1.0} test_accuracy = session.run(model_accuracy, feed_dict=test_dict) print('Test accuracy: {}'.format(test_accuracy)) writer.flush() writer.close() start_tensorboard() stop_tensorboard() # Visualize data: import matplotlib.pyplot as plt %matplotlib inline i = np.random.randint(len(test_dataset)) data = test_dataset[i,:,:,:] pixels = data[:, :, 0] plt.imshow(pixels) feed_me = np.ndarray((1, image_height, image_width, 1), np.float32) feed_me[0] = data feed_dict = {input_data: feed_me, keep_rate: 1.0} prediction = session.run(label_prediction, feed_dict=feed_dict) print("Predicted character: " + chr(prediction + ord('A'))) print("Actual label: " + chr(np.argmax(test_labels[i]) + ord('A'))) ```
<a href="https://colab.research.google.com/github/PadmarajBhat/Real-Time-Analytics-on-Hadoop/blob/master/SparkAgain.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> what is Big Data? - It is a problem statement and the platform like hadoop provide an architecture to develop massively parallel and scalable application. Hadoop introduced HDFS which involves maintaininng the distributed file system. In the later version of distributed computing, spark, was introduced. It provides scalability, data paralellism and fault tolerance. What are the alternatives to Hadoop? Hydra, DataTorrent RTS, Google BigQuery and Mesos Spark fails if master node gets data (through take or collect statements) out of its size. Idea is to compile data at worker node and get only the summary of it in the master node. Spark does lazy evaluations : stores the mapping or transformations on data and uses it only when there is requirement for transformed data. This stratergy helps in fault tolerance too. It can run anywhere like yarn, mesos, stand alone or cluster, also on kubernetes. File Systems: parquet(has serialization and deserialization logic and hence slow writing), avro (supports shema evaluation), ORC(index data stores the offset to row data and hence faster) ,csv, text, json etc Streaming: Process the data as and when it arrives. More suited for ETL or model predictions. It is micro batching in spark. * Output modes are complete, append(selection, filtering and baisc transformation) and update(with aggregation). * Window: set of data is determined by 2 ends of time intervals. If these time interval is increased then there is a probability of increase in set of data and if reduced gap between 2 intervals might reduce size of data. Pandas UDF (user defined functions): uses apache arrow for smoother transformation to pandas and thus avail pandas functionalities on spark dataframe. Note that ther is UDFs too which can work on spark dataframe which can be applied on the row data in spark dataframe. Machine learning: There are enough list of regression and classification and clustering methods under Spark MLlib which works on data frames. However, for deep learnin tensorflowonSpark or keras can be used for distributed training and modelling. TensorflowOnSpark: https://yahoohadoop.tumblr.com/post/157196317141/open-sourcing-tensorflowonspark-distributed-deep * the worker nodes uses either gRPC (https://grpc.io/docs/guides/) or RDMA(Remote direct memory) to communicate the parameter update between the worker node. * Data into the tensors can be through * TensorFlow Queuerunners where tensorflow directly access the HDFS files * or throug Spark Feed where spark rdd is fed to worker and worker passes the data into tensor through feed_dict. * API: * TFCluster : TFCluster.run indicates the core/main function which does tensorflow activities and also can option send the spark submit command line arguments * TFNode: TFNode.start_cluster_server is called at the beginning of the core/main function which would take ctx input from TFCluster and kick start the tf executions. * examples indicate that there can be no parameter server. Raised a question on the same. https://stackoverflow.com/questions/56469814/can-tensorflowonspark-run-without-parameter-server * Does uses only SparkContext and not the SparkSession. hence it works on RDD and not on DataFrames. 
elephas: Keras on Spark * Keras, as usual, takes away the overhead of the TensorFlow internals * it is flexible enough to work with DataFrames as well. Both TensorFlowOnSpark and elephas support asynchronous and synchronous training (a minimal usage sketch follows at the end of this section). * As explained in the link https://stackoverflow.com/a/34361377/8693106 * Synchronous learning updates the weights with the gradients averaged across the worker nodes. * In my opinion, synchronous learning has to be adopted whenever the worker nodes do not hold equally balanced batches of the dataset. * Asynchronous learning lets workers override one another's parameter updates. * Again in my opinion, this mode of learning works when all workers have the same configuration and a balanced dataset, because all worker nodes would then produce more or less the same parameter updates. ### SCALA cheat sheet: https://docs.scala-lang.org/cheatsheets/index.html 
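A hedged usage sketch of elephas, based on its typical README-style API (it assumes an existing SparkContext `sc`, a compiled Keras `model`, and in-memory `x_train` / `y_train` numpy arrays; check the elephas docs for the exact API of your version):

```
from elephas.spark_model import SparkModel
from elephas.utils.rdd_utils import to_simple_rdd

# Ship (features, label) pairs out to the workers as an RDD
rdd = to_simple_rdd(sc, x_train, y_train)

# mode='asynchronous' lets workers overwrite each other's updates;
# mode='synchronous' averages the updates across workers each round
spark_model = SparkModel(model, frequency='epoch', mode='asynchronous')
spark_model.fit(rdd, epochs=10, batch_size=32, verbose=0, validation_split=0.1)
```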
### Python Data Structures and Boolean This notebook contains some examples in Python code: 1. Boolean 2. Boolean and Logical Operators 3. Lists 4. Comparison operators 5. Dictionaries 6. Tuples 7. Sets ## 1. Boolean Boolean values are the two constant objects False and True. They are used to represent truth values (other values can also be considered false or true). In numeric contexts (for example, when used as the argument to an arithmetic operator), they behave like the integers 0 and 1, respectively. The built-in function bool() can be used to cast any value to a Boolean, if the value can be interpreted as a truth value. They are written as False and True, respectively. ``` False print(True,False) type(True) my_str="Manzurul Islam" # There are tons of functions for strings. Some of them are shown below. my_str.isnumeric() # Some string functions that return a boolean value. print(my_str.isalnum()) #check if all chars are alphanumeric print(my_str.isalpha()) #check if all chars in the string are alphabetic print(my_str.isdigit()) #test if string contains digits print(my_str.istitle()) #test if string contains title words print(my_str.isupper()) #test if string contains upper case print(my_str.islower()) #test if string contains lower case print(my_str.isspace()) #test if string contains spaces print(my_str.endswith('m')) #test if string ends with 'm' print(my_str.startswith('m')) #test if string starts with 'm' ``` ## 2. Boolean and Logical Operators ``` True and True True and False True or False True or True False or False str_example='Hello World' my_str='Manzur' my_str.isalpha() or str_example.isnumeric() ``` ## 3. List A list is a data structure in Python that is a mutable, or changeable, ordered sequence of elements. Each element or value that is inside of a list is called an item. Just as strings are defined as characters between quotes, lists are defined by having values between square brackets [ ]. ``` lst_ex = [] type(lst_ex) # List can be created by list() as well. lst = list() type(lst) lst = ['Math', 'Physics', 100, 200, 300] type(lst) # Append to a list lst.append('Manzur') lst # Append and create a nested list lst.append(['nahida', 'parvin']) # Indexing lst[1:3] lst # Insert something at an index of your choice. Items shift accordingly lst.insert(1,'Manzur') lst # Extend - it just extends the existing list lst = [1,2,3,4,5,6] lst lst.extend([8,9]) lst # Inbuilt functions sum(lst) lst.pop() lst lst.pop(2) lst lst = [1,1,1,1,2,3,4,4,5,6] print(lst.count(1), lst.count(4)) print(lst) lst.index(4) ``` ### Sets A set is an unordered collection data type that is iterable, mutable, and has NO duplicate elements. Python's set class represents the mathematical notion of a set. This is based on a data structure known as a hash table. ``` ## Defining an empty set using the inbuilt function set_var= set() print(set_var) print(type(set_var)) ## Create a set using {}. It will contain unique values only set_var = {1,2,3,4,3} print(set_var) set_var={"Avengers","IronMan",'Hitman', 'Hitman', 'ironman'} print(set_var) type(set_var) # Set indexing # Sets do not support indexing, e.g., set_var[1] will not work, and set_var['Avengers'] will not work either set_var['Avengers'] ## Inbuilt functions in set. There are many. Check them in the docs set_var.add("Hulk") print(set_var) set1={"Avengers","IronMan",'Hitman'} set2={"Avengers","IronMan",'Hitman','Hulk2'} set2.intersection_update(set1) print(set2) ``` ### Dictionaries A dictionary is a collection which is unordered, changeable and indexed. 
In Python, dictionaries are written with curly brackets, and they have keys and values. ``` # There are some differences between creating sets and dictionaries. Have a look at the following code dic = {} type(dic) # note: {} creates an empty dict, but {1,2,3,4,5} creates a set dic = {1,2,3,4,5} type(dic) ## Let's create a dictionary my_dict={"Car1": "Audi", "Car2":"BMW","Car3":"Mercedes Benz"} type(my_dict) ## Access the item values based on keys my_dict['Car1'] # We can loop through the dictionary's keys for x in my_dict: print(x) # We can also loop through the dictionary's values for x in my_dict.values(): print(x) # We can also check both keys and values for x in my_dict.items(): print(x) ## Adding items in Dictionaries my_dict['car4']='Audi 2.0' my_dict['Car1']='VW' my_dict ## Nested Dict car1_model={'Mercedes':1960} car2_model={'Audi':1970} car3_model={'Ambassador':1980} car_type={'car1':car1_model,'car2':car2_model,'car3':car3_model} print(car_type) ```
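The outline at the top also lists comparison operators and tuples, which the notebook never reaches; as a small illustrative addition (not one of the original cells), they could be covered like this:

```
## Comparison operators return booleans
print(3 > 2, 3 == 2, 3 != 2, 'a' < 'b')

## Tuples - ordered and indexable like lists, but immutable
my_tuple = ("Manzur", 100, 3.14)
print(type(my_tuple))
print(my_tuple[0])                                # indexing works like lists
print(my_tuple.count(100), my_tuple.index(3.14))
# my_tuple[0] = "new"                             # would raise TypeError: tuples cannot be modified
```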
# AWS Elastic Kubernetes Service (EKS) Deep MNIST

In this example we will deploy a tensorflow MNIST model in Amazon Web Services' Elastic Kubernetes Service (EKS).

This tutorial will break down in the following sections:

1) Train a tensorflow model to predict mnist locally
2) Containerise the tensorflow model with our docker utility
3) Send some data to the docker model to test it
4) Install and configure AWS tools to interact with AWS
5) Use the AWS tools to create and setup EKS cluster with Seldon
6) Push and run docker image through the AWS Container Registry
7) Test our Elastic Kubernetes deployment by sending some data

#### Let's get started! 🚀🔥

## Dependencies:

* Helm v2.13.1+
* A Kubernetes cluster running v1.13 or above (minikube / docker-for-windows work well if given enough RAM)
* kubectl v1.14+
* EKS CLI v0.1.32
* AWS CLI v1.16.163
* Python 3.6+
* Python DEV requirements

## 1) Train a tensorflow model to predict mnist locally

We will load the mnist images, together with their labels, and then train a tensorflow model to predict the right labels

```
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot = True)
import tensorflow as tf

if __name__ == '__main__':
    x = tf.placeholder(tf.float32, [None,784], name="x")
    W = tf.Variable(tf.zeros([784,10]))
    b = tf.Variable(tf.zeros([10]))
    y = tf.nn.softmax(tf.matmul(x,W) + b, name="y")
    y_ = tf.placeholder(tf.float32, [None, 10])
    cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
    train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
    init = tf.initialize_all_variables()
    sess = tf.Session()
    sess.run(init)
    for i in range(1000):
        batch_xs, batch_ys = mnist.train.next_batch(100)
        sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
    correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    print(sess.run(accuracy, feed_dict = {x: mnist.test.images, y_:mnist.test.labels}))
    saver = tf.train.Saver()
    saver.save(sess, "model/deep_mnist_model")
```
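Before wrapping the model, it can be useful to check that the checkpoint written by `saver.save` above actually reloads. The cell below is a minimal sketch (an addition, not part of the original tutorial) that restores the graph and looks the tensors up by the names given at training time:

```
import tensorflow as tf

tf.reset_default_graph()  # start from a clean graph so the import does not clash with the training graph
with tf.Session() as sess:
    saver = tf.train.import_meta_graph("model/deep_mnist_model.meta")
    saver.restore(sess, "model/deep_mnist_model")
    graph = tf.get_default_graph()
    x = graph.get_tensor_by_name("x:0")
    y = graph.get_tensor_by_name("y:0")
    # run a single test image through the restored softmax output
    print(sess.run(y, feed_dict={x: mnist.test.images[:1]}))
```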
One hot encoding: ", y) from seldon_core.seldon_client import SeldonClient import math import numpy as np # We now test the REST endpoint expecting the same result endpoint = "0.0.0.0:5000" batch = x payload_type = "ndarray" sc = SeldonClient(microservice_endpoint=endpoint) # We use the microservice, instead of the "predict" function client_prediction = sc.microservice( data=batch, method="predict", payload_type=payload_type, names=["tfidf"]) for proba, label in zip(client_prediction.response.data.ndarray.values[0].list_value.ListFields()[0][1], range(0,10)): print(f"LABEL {label}:\t {proba.number_value*100:6.4f} %") !docker rm mnist_predictor --force ``` ## 4) Install and configure AWS tools to interact with AWS First we install the awscli ``` !pip install awscli --upgrade --user ``` #### Configure aws so it can talk to your server (if you are getting issues, make sure you have the permmissions to create clusters) ``` %%bash # You must make sure that the access key and secret are changed aws configure << END_OF_INPUTS YOUR_ACCESS_KEY YOUR_ACCESS_SECRET us-west-2 json END_OF_INPUTS ``` #### Install EKCTL *IMPORTANT*: These instructions are for linux Please follow the official installation of ekctl at: https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html ``` !curl --silent --location "https://github.com/weaveworks/eksctl/releases/download/latest_release/eksctl_$(uname -s)_amd64.tar.gz" | tar xz !chmod 755 ./eksctl !./eksctl version ``` ## 5) Use the AWS tools to create and setup EKS cluster with Seldon In this example we will create a cluster with 2 nodes, with a minimum of 1 and a max of 3. You can tweak this accordingly. If you want to check the status of the deployment you can go to AWS CloudFormation or to the EKS dashboard. It will take 10-15 minutes (so feel free to go grab a ☕). ### IMPORTANT: If you get errors in this step... It is most probably IAM role access requirements, which requires you to discuss with your administrator. ``` %%bash ./eksctl create cluster \ --name demo-eks-cluster \ --region us-west-2 \ --nodes 2 ``` ### Configure local kubectl We want to now configure our local Kubectl so we can actually reach the cluster we've just created ``` !aws eks --region us-west-2 update-kubeconfig --name demo-eks-cluster ``` And we can check if the context has been added to kubectl config (contexts are basically the different k8s cluster connections) You should be able to see the context as "...aws:eks:eu-west-1:27...". If it's not activated you can activate that context with kubectlt config set-context <CONTEXT_NAME> ``` !kubectl config get-contexts ``` ## Install Seldon Core ### Before we install seldon core, we need to install HELM For that, we need to create a ClusterRoleBinding for us, a ServiceAccount, and then a RoleBinding ``` !kubectl create clusterrolebinding kube-system-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default !kubectl create serviceaccount tiller --namespace kube-system !kubectl apply -f tiller-role-binding.yaml ``` ### Once that is set-up we can install Tiller ``` !helm init --service-account tiller # Wait until Tiller finishes !kubectl rollout status deploy/tiller-deploy -n kube-system ``` ### Now we can install SELDON. 
## 5) Use the AWS tools to create and setup EKS cluster with Seldon

In this example we will create a cluster with 2 nodes, with a minimum of 1 and a max of 3. You can tweak this accordingly.

If you want to check the status of the deployment you can go to AWS CloudFormation or to the EKS dashboard.

It will take 10-15 minutes (so feel free to go grab a ☕).

### IMPORTANT: If you get errors in this step...

It is most probably due to IAM role access requirements, which you need to discuss with your administrator.

```
%%bash
./eksctl create cluster \
--name demo-eks-cluster \
--region us-west-2 \
--nodes 2
```

### Configure local kubectl

We want to now configure our local kubectl so we can actually reach the cluster we've just created

```
!aws eks --region us-west-2 update-kubeconfig --name demo-eks-cluster
```

And we can check if the context has been added to kubectl config (contexts are basically the different k8s cluster connections).

You should be able to see the context as "...aws:eks:us-west-2:27...". If it's not activated you can activate that context with kubectl config set-context <CONTEXT_NAME>

```
!kubectl config get-contexts
```

## Install Seldon Core

### Before we install seldon core, we need to install HELM

For that, we need to create a ClusterRoleBinding for us, a ServiceAccount, and then a RoleBinding

```
!kubectl create clusterrolebinding kube-system-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
!kubectl create serviceaccount tiller --namespace kube-system
!kubectl apply -f tiller-role-binding.yaml
```

### Once that is set-up we can install Tiller

```
!helm init --service-account tiller

# Wait until Tiller finishes
!kubectl rollout status deploy/tiller-deploy -n kube-system
```

### Now we can install SELDON

We first start with the custom resource definitions (CRDs)

```
!helm install seldon-core-operator --name seldon-core-operator --repo https://storage.googleapis.com/seldon-charts --set usageMetrics.enabled=true --namespace seldon-system
```

And confirm they are running by getting the pods:

```
!kubectl rollout status statefulset.apps/seldon-operator-controller-manager -n seldon-system
```

### Now we set-up the ingress

This will allow you to reach the Seldon models from outside the kubernetes cluster.

In EKS it automatically creates an Elastic Load Balancer, which you can configure from the EC2 Console

```
!helm install stable/ambassador --name ambassador --set crds.keep=false
```

And let's wait until it's fully deployed

```
!kubectl rollout status deployment.apps/ambassador
```

## Push docker image

In order for the EKS seldon deployment to access the image we just built, we need to push it to the Elastic Container Registry (ECR).

If you have any issues please follow the official AWS documentation: https://docs.aws.amazon.com/AmazonECR/latest/userguide/docker-basics.html

### First we create a registry

You can run the following command, and then see the result at https://us-west-2.console.aws.amazon.com/ecr/repositories?#

```
!aws ecr create-repository --repository-name seldon-repository --region us-west-2
```

### Now prepare docker image

We need to first tag the docker image before we can push it

```
%%bash
export AWS_ACCOUNT_ID=""
export AWS_REGION="us-west-2"
if [ -z "$AWS_ACCOUNT_ID" ]; then
    echo "ERROR: Please provide a value for the AWS variables"
    exit 1
fi

docker tag deep-mnist:0.1 "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/seldon-repository"
```

### We now login to aws through docker so we can access the repository

```
!`aws ecr get-login --no-include-email --region us-west-2`
```

### And push the image

Make sure you add your AWS Account ID

```
%%bash
export AWS_ACCOUNT_ID=""
export AWS_REGION="us-west-2"
if [ -z "$AWS_ACCOUNT_ID" ]; then
    echo "ERROR: Please provide a value for the AWS variables"
    exit 1
fi

docker push "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/seldon-repository"
```
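To confirm the push went through before deploying, you can list the images now stored in the repository (again an addition for convenience, not part of the original steps):

```
!aws ecr describe-images --repository-name seldon-repository --region us-west-2
```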
One hot encoding: ", y) ``` We can now add the URL above to send our request: ``` from seldon_core.seldon_client import SeldonClient import math import numpy as np host = "a68bbac487ca611e988060247f81f4c1-707754258.us-west-2.elb.amazonaws.com" port = "80" # Make sure you use the port above batch = x payload_type = "ndarray" sc = SeldonClient( gateway="ambassador", ambassador_endpoint=host + ":" + port, namespace="default", oauth_key="oauth-key", oauth_secret="oauth-secret") client_prediction = sc.predict( data=batch, deployment_name="deep-mnist", names=["text"], payload_type=payload_type) print(client_prediction) ``` ### Let's visualise the probability for each label It seems that it correctly predicted the number 7 ``` for proba, label in zip(client_prediction.response.data.ndarray.values[0].list_value.ListFields()[0][1], range(0,10)): print(f"LABEL {label}:\t {proba.number_value*100:6.4f} %") ```
# Writing Function

**CS1302 Introduction to Computer Programming**
___

```
%reload_ext mytutor
```

## Function Definition

**How to write a function?**

A function is defined using the [`def` keyword](https://docs.python.org/3/reference/compound_stmts.html#def):

The following is a simple function that prints "Hello, World!".

```
# Function definition
def say_hello():
    print('Hello, World!')

# Function invocation
say_hello()
```

To make a function more powerful and solve different problems, we can
- use a [return statement](https://docs.python.org/3/reference/simple_stmts.html#the-return-statement) to return a value that
- depends on some input arguments.

```
def increment(x):
    return x + 1

increment(3)
```

We can also have multiple input arguments.

```
def length_of_hypotenuse(a, b):
    if a >= 0 and b >= 0:
        return (a**2 + b**2)**0.5
    else:
        print('Input arguments must be non-negative.')

length_of_hypotenuse(3, 4)
length_of_hypotenuse(-3, 4)
```

## Documentation

**How to document a function?**

```
# Author: John Doe
# Last modified: 2020-09-14
def increment(x):
    '''The function takes in a value x and returns the increment x + 1.

    It is a simple example that demonstrates the idea of
    - parameter passing,
    - return statement, and
    - function documentation.'''
    return x + 1  # + operation is used and may fail for 'str'
```

The `help` command shows the docstring we write
- at the beginning of the function body
- delimited using triple single/double quotes.

```
help(increment)
```

The docstring should contain the *usage guide*, i.e., information for new users to call the function properly.

There is a Python style guide (PEP 257) for
- [one-line docstrings](https://www.python.org/dev/peps/pep-0257/#one-line-docstrings) and
- [multi-line docstrings](https://www.python.org/dev/peps/pep-0257/#multi-line-docstrings).

**Why doesn't `help` show the comments that start with `#`?**

```Python
# Author: John Doe
# Last modified: 2020-09-14
def increment(x):
    ...
    return x + 1  # + operation is used and may fail for 'str'
```

Those comments are not a usage guide. They are intended for programmers who need to maintain/extend the function definition.

- Information about the author and modification date facilitates communication among programmers.
- Comments within the code help explain important and not-so-obvious implementation details.

**How to let users know the data types of input arguments and return value?**

We can [annotate](https://docs.python.org/3/library/typing.html) the function with *hints* of the types of the arguments and return value.

```
# Author: John Doe
# Last modified: 2020-09-14
def increment(x: float) -> float:
    '''The function takes in a value x and returns the increment x + 1.

    It is a simple example that demonstrates the idea of
    - parameter passing,
    - return statement, and
    - function documentation.'''
    return x + 1  # + operation is used and may fail for 'str'

help(increment)
```

The above annotations are not enforced by the Python interpreter. Nevertheless, such annotations make the code easier to understand and can be used by editors with type-checking tools.

```
def increment_user_input():
    return increment(input())  # does not raise an error even though input returns a str

increment_user_input()  # still leads to a runtime error
```
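Not part of the original notes, but one simple fix is to convert the string returned by `input` before passing it on, so the annotated `increment` receives a number:

```
def increment_user_input():
    return increment(float(input()))  # convert the str from input() to a number first

increment_user_input()  # typing 3 now returns 4.0 instead of raising a TypeError
```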
## Parameter Passing

**Can we increment a variable instead of returning its increment?**

```
def increment(x):
    x += 1

x = 3
increment(x)
print(x)  # 4?
```

Does the above code increment `x`?

```
%%mytutor -h 350
def increment(x):
    x += 1

x = 3
increment(x)
print(x)
```

- Step 3: The function `increment` is invoked with the argument evaluated to the value of `x`.
- Step 3-4: A local frame is created for variables local to `increment` during its execution.
    - The *formal parameter* `x` in `def increment(x):` becomes a local variable and
    - it is assigned the value `3` of the *actual parameter* given by the global variable `x`.
- Step 5-6: The local (but not the global) variable `x` is incremented.
- Step 6-7: The function call completes and the local frame is removed.
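So the call does not change the global `x`; the argument is passed by assignment to a local name. The usual pattern, shown in this small sketch reusing the notebook's own function, is to return the new value and rebind the name at the call site:

```
def increment(x):
    return x + 1

x = 3
x = increment(x)  # rebind the global name to the value returned by the function
print(x)          # now prints 4
```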
# Retrieving GOOGLE stock data using the Unibit API

```
# Source code from https://github.com/unibit-api/unibit-examples/blob/master/Stock_data_prediction.ipynb
import requests
import numpy as np
import pandas as pd
import json
from matplotlib import pyplot as plt
import statistics

API_KEY = "d_v5YPSjHhQcqdhAh6qPBtem04VlUZ-w"

def getIntraDayByTicker(Ticker):
    """This function takes as an input a ticker symbol and returns the intraday stock price as an object"""
    import requests
    import json
    response = requests.get('https://api.unibit.ai/realtimestock/'+Ticker+'?AccessKey='+API_KEY)
    data_str = response.text
    parsed_data = json.loads(data_str)
    return parsed_data

def getStockNewsByTicker(Ticker):
    """This function takes as an input a ticker symbol and returns the latest stock news data array"""
    import requests
    import json
    response = requests.get('https://api.unibit.ai/news/latest/'+Ticker+'?AccessKey='+API_KEY)
    data_str = response.text
    parsed_data = json.loads(data_str)
    return parsed_data

def getHistoricalPrice(Ticker, rng, interval):
    """This function takes as an input a ticker symbol, a range (rng) and an interval,
    and returns the historical stock price as an object.
    Possible ranges: 1m - 3m - 1y - 3y - 5y - 10y - 20y
    Interval is a positive number (n). If passed, chart data will return every nth element as defined by interval."""
    import requests
    import json
    response = requests.get('https://api.unibit.ai/historicalstockprice/'+Ticker+'?range='+rng+'&interval='+str(interval)+'&AccessKey='+API_KEY)
    data_str = response.text
    parsed_data = json.loads(data_str)
    return parsed_data
```

# Retrieving real-time stock data of GOOGLE

## Way 1

```
GOOGL_intra = pd.DataFrame(data = getIntraDayByTicker("GOOGL"))
GOOGL_intra.head(20)

GOOGL_intra["time_index"] = pd.to_datetime((GOOGL_intra['date'] + GOOGL_intra["minute"]).values, format='%Y%m%d%H:%M')
GOOGL_intra.set_index("time_index", inplace=True)

# AAPL_intra["time_index"] = pd.to_datetime((AAPL_intra['date'] + AAPL_intra["minute"]).values, format='%Y%m%d%H:%M')
# AAPL_intra.set_index("time_index",inplace=True)
```

# Visualizing the real-time stock price of GOOGLE

```
GOOGL_intra.tail(40).price.plot(figsize=[10,8], title = 'Visualize intra-day stock price for GOOGLE')
```

# Retrieving historical stock data of GOOGLE

```
GOOGL_hist = pd.DataFrame(data = getHistoricalPrice(Ticker='GOOGL', rng="1y", interval=2)["Stock price"])
GOOGL_hist.date = pd.to_datetime(GOOGL_hist.date)
GOOGL_hist.set_index("date", inplace=True)
GOOGL_hist.close.plot(figsize=[10,10], color='red', title = 'Visualize historical stock price for GOOGLE')

GOOGL_hist.describe()
```

# Visualizing historical stock data of GOOGLE

```
plt.figure(figsize=[15,15])

ax1 = plt.subplot(211)
plt.title('Historical close for Google')
plt.plot(GOOGL_hist.close, color = 'green')
plt.setp(ax1.get_xticklabels(), fontsize=10)

ax2 = plt.subplot(212)
plt.title('Historical volume for Google')
plt.plot(GOOGL_hist.volume, color = 'purple')
plt.setp(ax2.get_xticklabels(), fontsize=10)

from unibit.stockprice import StockPrice
sp = StockPrice(key="d_v5YPSjHhQcqdhAh6qPBtem04VlUZ-w")
aapl_price_csv = sp.getPricesRealTime("GOOGL", size=20, datatype="csv")

import csv
result = {}
for item in aapl_price_csv:
    print(item)

from unibit.companyinfo import CompanyInfo
ci = CompanyInfo(key="d_v5YPSjHhQcqdhAh6qPBtem04VlUZ-w")
aapl_profile = ci.getCompanyProfile("AAPL")
aapl_profile
```
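The helper `getStockNewsByTicker` defined at the top is never actually called in the notebook. A minimal usage sketch follows; the exact structure of the returned JSON depends on the Unibit news endpoint, so inspect the payload before deciding how to tabulate it:

```
news = getStockNewsByTicker("GOOGL")
print(type(news))  # parsed JSON (dict or list, depending on the endpoint)
print(news)        # inspect the raw payload before turning it into a DataFrame
```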
```
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv('20100008.csv')
df = df[df["Adjustments"] == "Unadjusted"]
canada_df = df[df["GEO"] == "Canada"]
canada_df["Time Period"] = pd.to_datetime(canada_df["REF_DATE"])
canada_df

canada_df_few_cols = canada_df[["North American Industry Classification System (NAICS)", "Time Period", "VALUE"]]
canada_df_pivot = canada_df_few_cols.pivot(index="Time Period", columns="North American Industry Classification System (NAICS)", values="VALUE")

# All stores that do not have data at the start
canada_df_pivot_fewer_cols = canada_df_pivot.drop(columns=["Automobile dealers [4411]",
                                                           "Automotive parts, accessories and tire stores [4413]",
                                                           "Cannabis stores [453993]",
                                                           "Clothing stores [4481]",
                                                           "Clothing stores [4481]",
                                                           "Convenience stores [44512]",
                                                           "Grocery stores [4451]",
                                                           "Jewellery, luggage and leather goods stores [4483]",
                                                           "Other motor vehicle dealers [4412]",
                                                           "Shoe stores [4482]",
                                                           "Specialty food stores [4452]",
                                                           "Used car dealers [44112]"])

# All classifications that do not have data at the end
canada_df_pivot_fewer_cols = canada_df_pivot_fewer_cols.drop(columns=["Department stores [4521]", "Other general merchandise stores [4529]"])

canada_df_pivot_nona = canada_df_pivot_fewer_cols.dropna()

# calling it normalized because I don't know what else to call it
canada_df_pivot_nona_normalized = pd.DataFrame()
for (columnName, columnData) in canada_df_pivot_nona.iteritems():
    canada_df_pivot_nona_normalized[columnName] = canada_df_pivot_nona[columnName] / canada_df_pivot_nona["Retail trade [44-45]"]

canada_df_old_index = canada_df_pivot_nona_normalized.reset_index()
canada_df_old_index

jan_data = canada_df_old_index[canada_df_old_index["Time Period"].dt.month == 1]
feb_data = canada_df_old_index[canada_df_old_index["Time Period"].dt.month == 2]
march_data = canada_df_old_index[canada_df_old_index["Time Period"].dt.month == 3]
apr_data = canada_df_old_index[canada_df_old_index["Time Period"].dt.month == 4]
may_data = canada_df_old_index[canada_df_old_index["Time Period"].dt.month == 5]
june_data = canada_df_old_index[canada_df_old_index["Time Period"].dt.month == 6]
july_data = canada_df_old_index[canada_df_old_index["Time Period"].dt.month == 7]
aug_data = canada_df_old_index[canada_df_old_index["Time Period"].dt.month == 8]
sept_data = canada_df_old_index[canada_df_old_index["Time Period"].dt.month == 9]
oct_data = canada_df_old_index[canada_df_old_index["Time Period"].dt.month == 10]
nov_data = canada_df_old_index[canada_df_old_index["Time Period"].dt.month == 11]
dec_data = canada_df_old_index[canada_df_old_index["Time Period"].dt.month == 12]

month_dfs = [jan_data, feb_data, march_data, apr_data, may_data, june_data, july_data, aug_data, sept_data]

# Make Q1 data ready
new_index_jan_data = jan_data.reset_index()
new_index_feb_data = feb_data.reset_index()
new_index_march_data = march_data.reset_index()

# Make Q2 data ready
new_index_apr_data = apr_data.reset_index()
new_index_may_data = may_data.reset_index()
new_index_june_data = june_data.reset_index()

# Make Q3 data ready
new_index_july_data = july_data.reset_index()
new_index_aug_data = aug_data.reset_index()
new_index_sept_data = sept_data.reset_index()

def get_first_quarter_average(category):
    Q1_data = (new_index_jan_data[category] + new_index_feb_data[category] + new_index_march_data[category]) / 3
    return Q1_data.mean()

def get_second_quarter_average(category):
    Q2_data = (new_index_apr_data[category] + new_index_may_data[category] + new_index_june_data[category]) / 3
    return Q2_data.mean()

def get_third_quarter_average(category):
    Q3_data = (new_index_july_data[category] + new_index_aug_data[category] + new_index_sept_data[category]) / 3
    return Q3_data.mean()

def get_average_of_first_three_quarters(category):
    return [get_first_quarter_average(category), get_second_quarter_average(category), get_third_quarter_average(category)]

liqour_quarterly_data = get_average_of_first_three_quarters('Beer, wine and liquor stores [4453]')
building_quarterly_data = get_average_of_first_three_quarters('Building material and garden equipment and supplies dealers [444]')
electronics_quarterly_data = get_average_of_first_three_quarters('Electronics and appliance stores [443]')
```
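As an aside (not part of the original analysis), the same per-quarter averaging can be done more compactly by letting pandas group on the calendar quarter of the datetime index. A sketch assuming the `canada_df_pivot_nona_normalized` frame built above:

```
# Average retail share per calendar quarter, across all years at once.
quarterly_share = (canada_df_pivot_nona_normalized
                   .groupby(canada_df_pivot_nona_normalized.index.quarter)
                   .mean())
print(quarterly_share['Beer, wine and liquor stores [4453]'].head())
```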
```
def get_one_month(data, category, year):
    current_year_data = data[data["Time Period"].dt.year == year]
    return current_year_data[category].item()

def get_2020_quarterly_data(category):
    Q1 = (get_one_month(new_index_jan_data, category, 2020) + get_one_month(new_index_feb_data, category, 2020) + get_one_month(new_index_march_data, category, 2020)) / 3
    Q2 = (get_one_month(new_index_apr_data, category, 2020) + get_one_month(new_index_may_data, category, 2020) + get_one_month(new_index_june_data, category, 2020)) / 3
    Q3 = (get_one_month(new_index_july_data, category, 2020) + get_one_month(new_index_aug_data, category, 2020) + get_one_month(new_index_sept_data, category, 2020)) / 3
    return [Q1, Q2, Q3]

liqour_quarterly_data_2020 = get_2020_quarterly_data('Beer, wine and liquor stores [4453]')
building_quarterly_data_2020 = get_2020_quarterly_data('Building material and garden equipment and supplies dealers [444]')
electronics_quarterly_data_2020 = get_2020_quarterly_data('Electronics and appliance stores [443]')

# https://matplotlib.org/3.1.1/gallery/lines_bars_and_markers/barchart.html#sphx-glr-gallery-lines-bars-and-markers-barchart-py
import matplotlib
import matplotlib.pyplot as plt
import numpy as np

labels = ['Q1', 'Q2', 'Q3']
x = np.arange(len(labels))  # the label locations
width = 0.35  # the width of the bars

fig, ax = plt.subplots()
rects1 = ax.bar(x - width/2, liqour_quarterly_data_2020, width, label='2020 Retail Share')
rects2 = ax.bar(x + width/2, liqour_quarterly_data, width, label='Average Retail Share')

# Add some text for labels, title and custom x-axis tick labels, etc.
ax.set_ylabel('Share of Retail Trade')
ax.set_title("Liquor Store's Retail Share During Pandemic")
ax.set_xticks(x)
ax.set_xticklabels(labels)
ax.legend()

fig.tight_layout()
plt.show()

labels = ['Q1', 'Q2', 'Q3']
x = np.arange(len(labels))  # the label locations
width = 0.35  # the width of the bars

fig, ax = plt.subplots()
rects1 = ax.bar(x - width/2, building_quarterly_data_2020, width, label='2020 Retail Share')
rects2 = ax.bar(x + width/2, building_quarterly_data, width, label='Average Retail Share')

# Add some text for labels, title and custom x-axis tick labels, etc.
ax.set_ylabel('Share of Retail Trade')
ax.set_title("Home Improvement Store's Retail Share During Pandemic")
ax.set_xticks(x)
ax.set_xticklabels(labels)
ax.legend()

fig.tight_layout()
plt.show()

labels = ['Q1', 'Q2', 'Q3']
x = np.arange(len(labels))  # the label locations
width = 0.35  # the width of the bars

fig, ax = plt.subplots()
rects1 = ax.bar(x - width/2, electronics_quarterly_data_2020, width, label='2020 Retail Share')
rects2 = ax.bar(x + width/2, electronics_quarterly_data, width, label='Average Retail Share')

# Add some text for labels, title and custom x-axis tick labels, etc.
ax.set_ylabel('Share of Retail Trade')
ax.set_title("Electronics and Appliance Store's Retail Share During Pandemic")
ax.set_xticks(x)
ax.set_xticklabels(labels)
ax.legend()

fig.tight_layout()
plt.show()
```
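The three bar-chart cells above differ only in their data and title; a small helper (an illustrative refactor, not part of the original notebook) removes the repetition. It assumes the `plt`/`np` imports and quarterly lists defined above:

```
def plot_quarterly_share(data_2020, data_avg, title):
    labels = ['Q1', 'Q2', 'Q3']
    x = np.arange(len(labels))
    width = 0.35
    fig, ax = plt.subplots()
    ax.bar(x - width/2, data_2020, width, label='2020 Retail Share')
    ax.bar(x + width/2, data_avg, width, label='Average Retail Share')
    ax.set_ylabel('Share of Retail Trade')
    ax.set_title(title)
    ax.set_xticks(x)
    ax.set_xticklabels(labels)
    ax.legend()
    fig.tight_layout()
    plt.show()

plot_quarterly_share(liqour_quarterly_data_2020, liqour_quarterly_data,
                     "Liquor Store's Retail Share During Pandemic")
```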
```
def get_all_of_one_year(month_dfs, category, year):
    monthly_data = []
    for month in month_dfs:
        monthly_data.append(get_one_month(month, category, year))
    return monthly_data

canada_df_fullvals_old_index = canada_df_pivot_nona.reset_index()
canada_df_fullvals_old_index

jan_data_fullvals = canada_df_fullvals_old_index[canada_df_fullvals_old_index["Time Period"].dt.month == 1]
feb_data_fullvals = canada_df_fullvals_old_index[canada_df_fullvals_old_index["Time Period"].dt.month == 2]
march_data_fullvals = canada_df_fullvals_old_index[canada_df_fullvals_old_index["Time Period"].dt.month == 3]
apr_data_fullvals = canada_df_fullvals_old_index[canada_df_fullvals_old_index["Time Period"].dt.month == 4]
may_data_fullvals = canada_df_fullvals_old_index[canada_df_fullvals_old_index["Time Period"].dt.month == 5]
june_data_fullvals = canada_df_fullvals_old_index[canada_df_fullvals_old_index["Time Period"].dt.month == 6]
july_data_fullvals = canada_df_fullvals_old_index[canada_df_fullvals_old_index["Time Period"].dt.month == 7]
aug_data_fullvals = canada_df_fullvals_old_index[canada_df_fullvals_old_index["Time Period"].dt.month == 8]
sept_data_fullvals = canada_df_fullvals_old_index[canada_df_fullvals_old_index["Time Period"].dt.month == 9]
oct_data_fullvals = canada_df_fullvals_old_index[canada_df_fullvals_old_index["Time Period"].dt.month == 10]
nov_data_fullvals = canada_df_fullvals_old_index[canada_df_fullvals_old_index["Time Period"].dt.month == 11]
dec_data_fullvals = canada_df_fullvals_old_index[canada_df_fullvals_old_index["Time Period"].dt.month == 12]

month_dfs_fullvals = [jan_data_fullvals, feb_data_fullvals, march_data_fullvals, apr_data_fullvals, may_data_fullvals, june_data_fullvals, july_data_fullvals, aug_data_fullvals, sept_data_fullvals]

liquor_data_2020_full_vals = get_all_of_one_year(month_dfs_fullvals, 'Beer, wine and liquor stores [4453]', 2020)
liquor_data_2019_full_vals = get_all_of_one_year(month_dfs_fullvals, 'Beer, wine and liquor stores [4453]', 2019)
liquor_data_2018_full_vals = get_all_of_one_year(month_dfs_fullvals, 'Beer, wine and liquor stores [4453]', 2018)

labels = ['January', 'February', 'March', 'April', 'May', 'June', 'July', 'August', 'September']
x = np.arange(len(labels))  # the label locations
width = 0.2  # the width of the bars

fig, ax = plt.subplots(figsize=(7.5, 3.75))
rects1 = ax.plot(labels, liquor_data_2020_full_vals, label='2020 Retail Sales')
rects2 = ax.plot(labels, liquor_data_2019_full_vals, label='2019 Retail Sales')
rects3 = ax.plot(labels, liquor_data_2018_full_vals, label='2018 Retail Sales')

# Add some text for labels, title and custom x-axis tick labels, etc.
ax.set_ylabel('Sales')
ax.set_title("Liquor Store Sales Last Three Years")
ax.set_xticks(x)
ax.set_xticklabels(labels)
ax.legend()

fig.tight_layout()
plt.show()
```
```
building_data_2020_full_vals = get_all_of_one_year(month_dfs_fullvals, 'Building material and garden equipment and supplies dealers [444]', 2020)
building_data_2019_full_vals = get_all_of_one_year(month_dfs_fullvals, 'Building material and garden equipment and supplies dealers [444]', 2019)
building_data_2018_full_vals = get_all_of_one_year(month_dfs_fullvals, 'Building material and garden equipment and supplies dealers [444]', 2018)

labels = ['January', 'February', 'March', 'April', 'May', 'June', 'July', 'August', 'September']

fig, ax = plt.subplots(figsize=(7.5, 3.75))
rects1 = ax.plot(labels, building_data_2020_full_vals, label='2020 Retail Sales')
rects2 = ax.plot(labels, building_data_2019_full_vals, label='2019 Retail Sales')
rects3 = ax.plot(labels, building_data_2018_full_vals, label='2018 Retail Sales')

# Add some text for labels, title and custom x-axis tick labels, etc.
ax.set_ylabel('Sales')
ax.set_title("Home Improvement Store Sales Last Three Years")
ax.set_xticks(x)
ax.set_xticklabels(labels)
ax.legend()

fig.tight_layout()
plt.show()

electronics_data_2020_full_vals = get_all_of_one_year(month_dfs_fullvals, 'Electronics and appliance stores [443]', 2020)
electronics_data_2019_full_vals = get_all_of_one_year(month_dfs_fullvals, 'Electronics and appliance stores [443]', 2019)
electronics_data_2018_full_vals = get_all_of_one_year(month_dfs_fullvals, 'Electronics and appliance stores [443]', 2018)

labels = ['January', 'February', 'March', 'April', 'May', 'June', 'July', 'August', 'September']

fig, ax = plt.subplots(figsize=(7.5, 3.75))
rects1 = ax.plot(labels, electronics_data_2020_full_vals, label='2020 Retail Sales')
rects2 = ax.plot(labels, electronics_data_2019_full_vals, label='2019 Retail Sales')
rects3 = ax.plot(labels, electronics_data_2018_full_vals, label='2018 Retail Sales')

# Add some text for labels, title and custom x-axis tick labels, etc.
ax.set_ylabel('Sales')
ax.set_title("Electronic and Appliance Store Sales Last Three Years")
ax.set_xticks(x)
ax.set_xticklabels(labels)
ax.legend()

fig.tight_layout()
plt.show()

totalretail_data_2020_full_vals = get_all_of_one_year(month_dfs_fullvals, 'Retail trade [44-45]', 2020)
totalretail_data_2019_full_vals = get_all_of_one_year(month_dfs_fullvals, 'Retail trade [44-45]', 2019)
totalretail_data_2018_full_vals = get_all_of_one_year(month_dfs_fullvals, 'Retail trade [44-45]', 2018)

labels = ['January', 'February', 'March', 'April', 'May', 'June', 'July', 'August', 'September']

fig, ax = plt.subplots(figsize=(10, 5))
rects1 = ax.plot(labels, totalretail_data_2020_full_vals, label='2020 Retail Sales')
rects2 = ax.plot(labels, totalretail_data_2019_full_vals, label='2019 Retail Sales')
rects3 = ax.plot(labels, totalretail_data_2018_full_vals, label='2018 Retail Sales')

# Add some text for labels, title and custom x-axis tick labels, etc.
ax.set_ylabel('Sales')
ax.set_title("Total Retail Store Sales Last Three Years")
ax.set_xticks(x)
ax.set_xticklabels(labels)
ax.legend()

fig.tight_layout()
plt.show()
```
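As an illustrative extension of the comparison the line charts already make (not part of the original notebook), the small helper below turns the monthly lists computed above into a month-by-month percentage difference of 2020 sales against the 2018-2019 average:

```
def percent_change_vs_prior(vals_2020, vals_2019, vals_2018):
    # percent difference of each 2020 month against the mean of the same month in 2018 and 2019
    return [100 * (v20 - (v19 + v18) / 2) / ((v19 + v18) / 2)
            for v20, v19, v18 in zip(vals_2020, vals_2019, vals_2018)]

print(percent_change_vs_prior(totalretail_data_2020_full_vals,
                              totalretail_data_2019_full_vals,
                              totalretail_data_2018_full_vals))
```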
# Intro to plotting with matplotlib

![](https://matplotlib.org/_static/logo2.svg)

Matplotlib is a popular open source 2D plotting library for Python, modeled after Matlab. There are two approaches to using it - the functional approach and the object-oriented approach. The latter is preferred.

**ToC**
- [importing](#importing)
- [Functional plotting](#Functional-plotting)
  - [subplots](#subplots)
- [Object oriented plotting](#Object-oriented-plotting)
  - [multiplots](#multiplots)
  - [Creating side by side plots](#Creating-side-by-side-plots)
  - [Easier subplots](#Easier-subplots)
  - [Figsize](#Figsize)
  - [Saving your plots](#Saving-your-plots)
  - [Labels and legends](#Labels-and-legends)
- [Decorating with colors markers transparency](#Decorating-with-colors-markers-transparency)
- [Set limits on axes](#Set-limits-on-axes)
- [pie charts](#pie-charts)

## importing

```
import matplotlib.pyplot as plt
%matplotlib inline
```

Let us create some data for X and Y for the plots

```
x = list(range(0,100))
y = list(map(lambda x: x**2, x))
```

## Functional plotting

Call the plotting commands as functions.

```
plt.plot(x,y)
plt.xlabel('x')
plt.ylabel('y')
plt.title('x vs y')
```

### subplots

Use the `subplot` method and specify `nrows`, `ncols`, `plot number` as arguments. Thus specify (1,2,1) for two plots side by side, (2,2,1) for four plots like quadrants.

```
plt.subplot(1,2,1)  #one row, 2 cols, 1st plot:
plt.plot(x,y)

plt.subplot(1,2,2)  #the second plot:
plt.plot(y,x)
```

## Object oriented plotting

Here we create a `figure` object, set `axes`, then add plots to it. Specify the axes as a list of a rectangle's `[left, bottom, width, height]`. The values always range from `0-1`.

```
fig = plt.figure()
ax = fig.add_axes([0.1,0.1,0.8,0.8])  #rectangle's [left, bottom, width, height]
ax.plot(x,y)
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_title('x vs y')
```

### multiplots

You can create multiple plots, subplots, and inset plots easily in the OO approach once the axes are defined.

```
fig2 = plt.figure()
ax1 = fig2.add_axes([0.1,0.1,0.8,0.8])
ax_ins = fig2.add_axes([0.2,0.5,0.3,0.3])  #insert in upper left side of plot
```

Consider the rectangle to have max length and width = 1. You can create the first plot as big as you want, filling this canvas. Then create the second, more or less independent of the first, using the same canvas coordinates.

```
fig3 = plt.figure()
ax1 = fig3.add_axes([0,0,1,1])  #absolute - full size fig
ax_ins = fig3.add_axes([0.5,0.1,0.4,0.4])  #insert in lower right side of plot
```

#### Creating side-by-side plots

```
fig3 = plt.figure()
ax1 = fig3.add_axes([0,0,0.4,1])  # about half in width, full height
ax2 = fig3.add_axes([0.5,0,0.4,1])  #same, but to the right
```

### Easier subplots

If you are going to be doing side-by-side subplots, then use the easier API as shown below. Matplotlib will auto arrange the axes and plots for you.

```
fig, axes = plt.subplots(nrows=1, ncols=2)
axes[0].plot(x,y)
axes[0].set_title('x vs y')

axes[1].plot(x,x)
axes[1].set_title('x vs x')
```

### Figsize

Specify the figure size of the plots. You specify this in `inches`.

```
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(5,5))  #specifying as **kwargs
axes[0].plot(x,y)
axes[1].plot(x,x)

#use tight layout to resize plots within the canvas so there are no overlaps
fig.tight_layout()
```

### Saving your plots

Call the `savefig()` method of the `figure` object

```
fig.savefig('my_plots.png', dpi=300)
```
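One small addition worth knowing (not covered above): `savefig` also accepts `bbox_inches='tight'`, which trims the surrounding whitespace so tick labels and titles are not cut off in the saved file:

```
fig.savefig('my_plots_tight.png', dpi=300, bbox_inches='tight')
```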
You first populate the `label` property of each plot for any text to show up in the legend, as shown below. When inserting a legend, you can specify `loc=0` for auto positioning. Value `loc=1` is upper right, `loc=2` is upper left and so on.

```
fig, axes = plt.subplots(nrows=1,ncols=1, figsize=(5,3))  #just 1 large plot
axes.plot(x,y, label='x vs x^2')
axes.plot(x,x, label='a straight line')
axes.legend(loc=0)  #loc=0 corresponds to best position available.

#use tight layout to resize plots within the canvas so there are no overlaps
fig.tight_layout()
```

**Note**: `plt.subplots()` does not always return an array of `axis` objects. As shown above, if you have just 1 plot, it returns only 1 axis object.

## Decorating with colors markers transparency
You can go to town here and do the full customization; however, it is advised to use a higher level plotting API like seaborn if you find yourself writing a lot of styling code. The `plot()` method accepts many such arguments:

```
#linewidth or lw - ranges from 1 (default) upwards
#color - takes names and HTML notations
#alpha - is for transparency and ranges from [0-1]
#marker - the symbol drawn at each data point, specified as a character
#markersize
#markerfacecolor
#markeredgecolor
#markeredgewidth

fig = plt.figure()
ax = fig.add_axes([0,0,1,1])
st_line = list(range(0,10000, 100))

ax.plot(x,y, color='orange', linewidth=3, alpha=0.3, marker='*', markersize=4, markerfacecolor='green', label='x vs y')
ax.plot(x,st_line, color='green', marker='o', markersize=10, markerfacecolor='green', label='straight line')
ax.legend()
```

## Set limits on axes
In the chart above, if you want to zoom and only show the chart for values from 0-20 on X, you can do so by limiting the axes. You can also set the limits so that the axes extend beyond the range of your data.

```
fig = plt.figure()
ax = fig.add_axes([0,0,1,1])
st_line = list(range(0,10000, 100))

ax.plot(x,y, color='orange', linewidth=3, alpha=0.3, marker='*', markersize=4, markerfacecolor='green', label='x vs y')
ax.plot(x,st_line, color='green', marker='o', markersize=10, markerfacecolor='green', label='straight line')
ax.legend()

ax.set_xlim(0,20)
ax.set_ylim(0,3000)
```

### pie charts
You need to send the values for pie charts as numbers. You cannot pass a text column and expect matplotlib to count the values and make a pie out of it.

```
values = [400, 280, 10]
labels = ['apple', 'android', 'windows']

# you can just call plt.pie. However it prints a bunch of objs on the notebook. I do this just to suppress that.
fig, ax1 = plt.subplots()

# you get the returns from ax1.pie because the font is tiny and to make it bigger
# the autopct is to get the percentage values.
_, texts, autotexts = ax1.pie(values, labels=labels, shadow=True, autopct='%1.1f%%')

# make the font bigger by calling the set_fontsize method on each obj in texts, autotexts
list(map(lambda x:x.set_fontsize(15), texts))
list(map(lambda x:x.set_fontsize(15), autotexts))

# force an equal aspect ratio so the pie is drawn as a circle rather than an ellipse
ax1.axis('equal')
ax1.set_title('Cell phone OS by popularity', fontsize=15)
```
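Following on from the note above about needing numeric values: if your data starts out as a text column, you have to count the categories yourself before calling `pie`. A small sketch - the `os_column` list here is made up purely for illustration:

```
from collections import Counter

# hypothetical raw text column - matplotlib will not aggregate this for you
os_column = ['apple', 'android', 'apple', 'windows', 'android', 'apple']
counts = Counter(os_column)

fig, ax = plt.subplots()
ax.pie(list(counts.values()), labels=list(counts.keys()), autopct='%1.1f%%')
ax.axis('equal')
ax.set_title('Counted from a text column')
```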
``` import matplotlib.pyplot as plt import os from google.colab import drive import numpy as np import tensorflow as tf import copy import sklearn.linear_model as lm import sklearn.preprocessing import warnings from keras.models import load_model from tensorflow.keras.preprocessing.image import ImageDataGenerator warnings.filterwarnings("ignore", category=sklearn.exceptions.ConvergenceWarning) #download data mnist = tf.keras.datasets.mnist (x_train, y_train), (x_test, y_test) = mnist.load_data() nx_train, nx_test = x_train/255, x_test/255 sx_train, sx_test = sklearn.preprocessing.scale(x_train.reshape(60000, 28*28)), sklearn.preprocessing.scale(x_test.reshape(10000, 28*28)) print(y_train) #Augment data IDG = ImageDataGenerator(rotation_range=60, width_shift_range=0.25, height_shift_range=0.25, brightness_range=(0.2, 1), shear_range=30, fill_mode='constant', cval=0.0, horizontal_flip=True, vertical_flip = True, ) augmented_train = IDG.flow(x = sx_train.reshape(-1, 28, 28, 1), y = y_train, batch_size=60000, seed=0, shuffle=False ) augmented_test = IDG.flow(x = sx_test.reshape(-1, 28,28,1), y=y_test, batch_size=10000, seed=0, shuffle=False) # print(augmented_test.next()[0].shape) augmented_train_x, augmented_train_y = augmented_train.next() augmented_test_x, augmented_test_y = augmented_test.next() plt.subplot(3,2,1) plt.imshow(sx_test[999].reshape(28,28), cmap='gray') plt.subplot(3,2,2) plt.imshow(augmented_test_x[99].reshape(28,28), cmap='gray') plt.subplot(3,2,3) plt.imshow(sx_test[0].reshape(28,28), cmap='gray') plt.subplot(3,2,4) plt.imshow(augmented_test_x[0].reshape(28,28), cmap='gray') plt.subplot(3,2,5) plt.imshow(sx_train[11739].reshape(28,28), cmap='gray') plt.subplot(3,2,6) plt.imshow(augmented_train_x[11739].reshape(28,28), cmap='gray') #create hybrid dataset from sklearn.utils import shuffle print(augmented_train_x.shape) print(sx_train.shape) hybrid_x_train = np.concatenate((augmented_train_x[0:30000], sx_train[30000:].reshape(-1,28,28,1))) hybrid_x_train, hybrid_y_train = shuffle(hybrid_x_train, y_train, random_state=0) hybrid_x_test= np.concatenate((augmented_test_x[0:5000], sx_test[5000:].reshape(-1,28,28,1))) hybrid_x_test, hybrid_y_test= shuffle(hybrid_x_test, y_test, random_state=0) plt.subplot(1,2,1) plt.title(hybrid_y_train[900]) plt.imshow(hybrid_x_train[900].reshape(28,28), cmap='gray') plt.subplot(1,2,2) plt.title(hybrid_y_test[990]) plt.imshow(hybrid_x_test[990].reshape(28,28), cmap='gray') #load already trained models and evaluate accuracy for each dataset (upload to /content directory) model = load_model('plain_mnist.h5') augmented_model = load_model('augmented.h5') hybrid_model = load_model('hybrid.h5') model.evaluate(hybrid_x_test, hybrid_y_test) augmented_model.evaluate(sx_test.reshape(-1, 28,28,1), y_test, ) augmented_model.evaluate(augmented_test_x, augmented_test_y) augmented_model.evaluate(hybrid_x_test, hybrid_y_test) hybrid_model.evaluate(sx_test.reshape(-1, 28,28,1), y_test, ) hybrid_model.evaluate(augmented_test_x, augmented_test_y) hybrid_model.evaluate(hybrid_x_test, hybrid_y_test) predictions_plain = tf.argmax(model.predict(sx_test.reshape(-1,28,28,1)), axis=1) predictions_augmented = tf.argmax(augmented_model.predict(augmented_test_x), axis=1) predictions_hybrid = tf.argmax(hybrid_model.predict(hybrid_x_test), axis=1) confusion_plain = tf.math.confusion_matrix(y_test, predictions_plain) confusion_augmented = tf.math.confusion_matrix(augmented_test_y, predictions_augmented) confusion_hybrid = tf.math.confusion_matrix(hybrid_y_test, 
                                               predictions_hybrid)

plt.figure(figsize=(16,12))

plt.subplot(1, 3, 1)
plt.xlabel('predictions')
plt.xticks(np.arange(0, 10, 1))
plt.yticks(np.arange(0, 10, 1))
plt.ylabel('labels')
plt.imshow(confusion_plain, cmap='seismic')
plt.title('plain')

plt.subplot(1, 3, 2)
plt.xlabel('predictions')
plt.xticks(np.arange(0, 10, 1))
plt.yticks(np.arange(0, 10, 1))
plt.imshow(confusion_augmented, cmap='seismic')
plt.colorbar(orientation='horizontal')
plt.title('transformed')

plt.subplot(1, 3, 3)
plt.xticks(np.arange(0, 10, 1))
plt.yticks(np.arange(0, 10, 1))
plt.xlabel('predictions')
plt.title('hybrid')
plt.imshow(confusion_hybrid, cmap='seismic')

print(predictions_plain.shape)
# confusion matrix for the plain model, using the predictions computed above
plt.imshow(tf.math.confusion_matrix(y_test, predictions_plain))

W, b = np.array(model.layers[1].get_weights())

def visualize_filters(V):
    k, m, n = V.shape
    ncol = 8
    nrow = min(4, (k + ncol - 1) // ncol)
    V = V[:nrow*ncol]
    figsize = (2*ncol, max(1, 2*nrow*(m/n)))
    fig, axes = plt.subplots(nrow, ncol, sharex=True, sharey=True, figsize=figsize)
    vmin, vmax = np.percentile(V, [0.1, 99.9])
    for v, ax in zip(V, axes.flat):
        img = ax.matshow(v, vmin=vmin, vmax=vmax, cmap=plt.get_cmap('gray'))
        ax.set_xticks([])
        ax.set_yticks([])
    fig.colorbar(img, cax=fig.add_axes([0.92, 0.25, 0.01, .5]))

print(W.shape)
W = W.reshape(28,28,-1)
for i in range(10):
    plt.subplot(3,4,i+1)
    plt.imshow(W[:,:,i], cmap='gray')
# visualize_filters()

print(sx_test.shape)
hybrid_model.evaluate(sx_test.reshape(-1,28,28,1), y_test)
# hybrid_model.save('hybrid.h5')

# np.save('augmented_train_x', augmented_train_x)
# np.save('augmented_train_y', augmented_train_y)
# np.save('augmented_test_x', augmented_test_x)
# np.save('augmented_test_y', augmented_test_y)
# np.save('hybrid_x_train', hybrid_x_train)
# np.save('hybrid_y_train', hybrid_y_train)
# np.save('hybrid_x_test', hybrid_x_test)
# np.save('hybrid_y_test', hybrid_y_test)

#plain model architecture
model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(28*2, 3, padding='same', activation='relu', input_shape=(28,28, 1)),
    tf.keras.layers.Conv2D(28*28, 5, padding='same', activation='relu', ),
    tf.keras.layers.MaxPool2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10,activation='sigmoid')
])

x = sx_train.reshape(-1, 28,28,1)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
model = copy.copy(model)
model.compile(optimizer='adam', loss=loss, metrics=['accuracy'])
model.fit(x, y_train, epochs=5, batch_size=1000)
model.evaluate(nx_test.reshape(-1, 28,28,1), y_test, )
model.save('plain_mnist.h5')

#augmented model architecture and training
augmented_model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(10, 5, padding='same', activation='relu', input_shape=(28,28, 1)),
    tf.keras.layers.Conv2D(10*5, 3, padding='same', activation='relu', ),
    tf.keras.layers.MaxPool2D(),
    tf.keras.layers.Dense(28,activation='softmax'),
    tf.keras.layers.Conv2D(28, 5, padding='same', activation='relu', input_shape=(28,28, 1)),
    tf.keras.layers.Conv2D(28*28, 3, padding='same', activation='relu', ),
    tf.keras.layers.MaxPool2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10,activation='softmax')
])

loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
augmented_model.compile('adam', loss=loss, metrics=['accuracy'])
print(augmented_train_x.shape)
augmented_model.fit(augmented_train_x, augmented_train_y, epochs=7, batch_size=100)
augmented_model.evaluate(augmented_test_x, augmented_test_y)
augmented_model.evaluate(sx_test.reshape(-1,28,28,1), y_test)
augmented_model.save('augmented.h5')

#hybrid model architecture
hybrid_model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(28, 3, padding='same', activation='relu', input_shape=(28,28, 1)),
    tf.keras.layers.Conv2D(28*2, 5, padding='same', activation='relu', ),
    tf.keras.layers.MaxPool2D(),
    tf.keras.layers.Dense(10*10,activation='softmax'),
    tf.keras.layers.Conv2D(28*2, 5, padding='same', activation='relu', ),
    tf.keras.layers.Conv2D(28*28, 3, padding='same', activation='relu', ),
    tf.keras.layers.MaxPool2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10,activation='softmax')
])

loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
hybrid_model.compile('adam', loss=loss, metrics=['accuracy'])
hybrid_model.fit(hybrid_x_train, hybrid_y_train, epochs=7, batch_size=100)
```
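The confusion matrices above can also be summarised as per-class accuracy. A short sketch (not part of the original notebook) that assumes eager execution, so the TensorFlow tensor can be converted with `.numpy()`:

```
# per-class accuracy = diagonal of the confusion matrix / row totals
cm = confusion_plain.numpy()
per_class_acc = np.diag(cm) / cm.sum(axis=1)
for digit, acc in enumerate(per_class_acc):
    print(f"digit {digit}: accuracy {acc:.3f}")
```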
# Droughts - Pre-Processing In this notebook, I will be going over the preprocessing steps needed before starting the experiments. I will include the following steps: 1. Load Data 2. Select California 3. Fill NANs 4. Smoothing of the VOD signal (savgol filter) 5. Removing the climatology 6. Select drought years and non-drought years 7. Extract density cubes ## Code ``` import sys, os cwd = os.getcwd() sys.path.insert(0, f'{cwd}/../../') sys.path.insert(0, '/home/emmanuel/code/py_esdc') import xarray as xr import pandas as pd import numpy as np # drought tools from src.data.drought.loader import DataLoader from src.features.drought.build_features import ( get_cali_geometry, mask_datacube, smooth_vod_signal, remove_climatology, get_cali_emdata, get_drought_years, get_density_cubes, get_common_elements, normalize ) from src.visualization.drought.analysis import plot_mean_time # esdc tools from esdc.subset import select_pixel from esdc.shape import ShapeFileExtract, rasterize from esdc.transform import DensityCubes import matplotlib.pyplot as plt import cartopy import cartopy.crs as ccrs plt.style.use(['fivethirtyeight', 'seaborn-poster']) %matplotlib inline %load_ext autoreload %autoreload 2 ``` ## 1. Load Data ``` region = 'conus' sampling = '14D' drought_cube = DataLoader().load_data(region, sampling) pixel = (-121, 37) drought_cube ``` Verify with a simple plot. ``` plot_mean_time( drought_cube.LST.sel(time=slice('June-2010', 'June-2010')) ) ``` ## 2. Subset California ``` # get california polygon cali_geoms = get_cali_geometry() # get california cube subset cali_cube = mask_datacube(drought_cube, cali_geoms) plot_mean_time( cali_cube.LST.sel(time=slice('June-2011', 'June-2011')) ) ``` ## 3. Interpolate NANs - Time Dimension ``` # interpolation arguments interp_dim = 'time' method = 'linear' # do interpolation cali_cube_interp = cali_cube.interpolate_na( dim=interp_dim, method=method ) ``` ## 4. Smoothing the Signal (VOD) In this section, we will try to smooth the signal with two methods: 1. Simple - Rolling mean 2. Using a savgol filter. Some initial parameters: * Window Size = 5 * Polynomial Order = 3 We will apply this filter in the time domain only. 
```
vod_data = cali_cube_interp.VOD
vod_data
```

### 4.1 - Savgol Filter

```
from scipy.signal import savgol_filter

# select example
vod_data_ex = select_pixel(vod_data, pixel)

# savgol filter params
window_length = 5
polyorder = 3

# apply savgol filter
vod_smooth_filter = savgol_filter(
    vod_data_ex,
    window_length=window_length,
    polyorder=polyorder
)

fig, ax = plt.subplots(nrows=2, figsize=(10, 10))

ax[0].plot(vod_data_ex)
ax[0].set_title('Original Data')

ax[1].plot(vod_smooth_filter)
ax[1].set_title('After Savgol Filter')

plt.show()
```

### 4.2 - Rolling Window

```
# select example
vod_data_ex = select_pixel(vod_data, pixel)

# rolling window params
window_length = 2

# apply rolling mean
vod_smooth_roll = vod_data_ex.rolling(
    time=window_length,
    center=True
).mean()

fig, ax = plt.subplots(nrows=2, figsize=(10, 10))

ax[0].plot(vod_data_ex)
ax[0].set_title('Original Data')

ax[1].plot(vod_smooth_roll)
ax[1].set_title('After Rolling Mean')

plt.show()
```

### 4.3 - Difference

```
vod_smooth_diff = vod_smooth_filter - vod_smooth_roll

fig, ax = plt.subplots(nrows=4, figsize=(10,10))

ax[0].plot(vod_data_ex)
ax[0].set_title('Original')

ax[1].plot(vod_smooth_filter)
ax[1].set_title('Savgol Filter')

ax[2].plot(vod_smooth_roll)
ax[2].set_title('Rolling Mean')

ax[3].plot(vod_smooth_diff)
ax[3].set_title('Difference')

# Scale the Difference Y-Limits
ymax = np.max([vod_smooth_filter.max(), vod_smooth_roll.max()])
ymin = np.min([vod_smooth_filter.min(), vod_smooth_roll.min()])
center = (ymax - ymin)
ymax = ymax - center
ymin = center - ymin
ax[3].set_ylim([0 - ymin, 0 + ymax])

plt.tight_layout()
plt.show()
```

### 4.4 - Apply Rolling Mean to the whole dataset

```
cali_cube_interp = smooth_vod_signal(cali_cube_interp, window_length=2, center=True)
```

## 5. Remove Climatology

By 'climatology' I mean the typical seasonal cycle for a given time of year. Removing it from the observations leaves the anomalies - the differences between the observations and the typical seasonal values - so the anomalies are no longer hidden inside the seasonal cycle. I'll just do a very simple removal: calculate the monthly mean over time and then subtract it from the corresponding months of the original datacube.

**Steps**

1. Climatology - monthly mean over the 6 years
2. Remove climatology - subtract the climatology from each month

```
# remove climatology
cali_anomalies, cali_mean = remove_climatology(cali_cube_interp)
```

Simple check where we look at the original and the new.

```
variables = ['LST', 'VOD', 'NDVI', 'SM']

for ivariable in variables:
    fig, ax = plt.subplots(nrows=3, figsize=(10, 10))

    # Before Climatology
    select_pixel(cali_cube_interp[ivariable], pixel).plot(ax=ax[0])
    ax[0].set_title('Original Time Series')

    # Climatology
    select_pixel(cali_mean[ivariable], pixel).plot(ax=ax[1])
    ax[1].set_title('Climatology')

    # After Climatology
    select_pixel(cali_anomalies[ivariable], pixel).plot(ax=ax[2])
    ax[2].set_title('After Climatology Removed')

    plt.tight_layout()
    plt.show()
```

## 6. EMData

I extract the dates for the drought events for California. This will allow me to separate the drought years and non-drought years.

```
cali_droughts = get_cali_emdata()
cali_droughts
```

So the drought years are:

**Drought Years**

* 2012
* 2014
* 2015

**Non-Drought Years**

* 2010
* 2011
* 2013

**Note**: Even though the EM-Data says that the drought in 2012 lasted only half a year, we're going to treat it as a full drought year.

```
# Drought Years
cali_anomalies_drought = get_drought_years(
    cali_anomalies,
    ['2012', '2014', '2015']
)

# Non-Drought Years
cali_anomalies_nondrought = get_drought_years(
    cali_anomalies,
    ['2010', '2011', '2013']
)
```

## 7. Extract Density Cubes

In this step, we will construct 'density cubes'. These are cubes where we add features from a combination of the spatial and/or temporal dimensions. Instead of a single sample, we have a sample that takes into account spatial and/or temporal information. In this experiment, we will only look at temporal information.

Our temporal resolution is 14 days and we want to look at a maximum of 6 months. So:

$$\Bigg\lfloor \frac{6\ \text{months}}{\frac{14\ \text{days}}{30\ \text{days}} \times 1\ \text{month}} \Bigg\rfloor = 12\ \text{time stamps}$$

```
# confirm
sub_ = cali_anomalies_drought.isel(time=slice(0,12))
sub_.time[0].data, sub_.time[-1].data
```

We see that the start date for the year 2012 is 01-10 and the end date is 06-12. That's good enough, so we get roughly 6 months of temporal information in our density cubes.

#### 7.1 - Example Density Cube

```
# window sizes
spatial = 1
time = 12

vod_df, lst_df, ndvi_df, sm_df = get_density_cubes(cali_anomalies_drought, spatial, time)

vod_df.shape, lst_df.shape, ndvi_df.shape, sm_df.shape
```

## 8. Find Common Elements

Notice how the number of elements differs between the datasets; I believe there are fewer elements for the VOD and the SM datasets. To make a fair comparison, I'll be using only the common elements between the two density cubes.

**Note**: It is also a bit difficult for RBIG to calculate the mutual information for datasets that are potentially so different in their domains.

```
vod_df, lst_df = get_common_elements(vod_df, lst_df)

vod_df.shape, lst_df.shape
```
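For reference, a minimal sketch of what `get_common_elements` might reduce to - this is an assumption, since the real implementation lives in `src.features.drought.build_features` - is to keep only the rows whose index appears in both DataFrames:

```
def common_elements_sketch(df_a, df_b):
    """Return both DataFrames restricted to the index values they share."""
    shared = df_a.index.intersection(df_b.index)
    return df_a.loc[shared], df_b.loc[shared]
```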
```
import csv
import statistics as stats

bank_data = ("/Users/sethjacobson/DENVDEN201905DATA4/Homework/3 Python 6-18/PyBank/Resources/budget_data.csv")

# Algorithm to open csv, iterate through rows, count the total number of months in the set and create a set.
with open(bank_data, "r") as f:
    reader = csv.reader(f)
    next(reader)
    month_counter = set()
    for row in reader:
        month = row[0]
        month_counter.add(month)

print(len(month_counter))
print(month_counter)

# Algorithm to open csv, iterate through rows, and count net profit/loss (sum)
with open(bank_data, "r") as f:
    reader = csv.reader(f)
    next(reader)
    net_pnl = 0.00
    for row in reader:
        value = int(row[1])
        net_pnl += value

print(f"The net profit and loss for this period is ${net_pnl}")

# Function to find the change between two months and print it.
# All three branches of the original reduce to the same expression, so it is written once here.
def month_change_finder(val1, val2):
    month_change = val2 - val1
    print(month_change)

# Algorithm to open csv, iterate through rows, create list of values.
with open(bank_data, "r") as f:
    reader = csv.reader(f)
    next(reader)
    value_string = []
    month_string = []
    month_change_total = 0
    for row in reader:
        value = float(row[1])
        month = row[0]
        # print(value)
        value_list = []
        month_list = []
        value_list.append(value)
        month_list.append(month)
        value_string += value_list
        month_string += month_list

# Zip the list of values with itself offset by one to get consecutive month pairs.
month_match = (list(zip(value_string, value_string[1:])))
# print(month_match)
month_change = [y - x for x, y in month_match]
# print(month_change)
print(dict(zip(month_string[1:], month_change)))

overall_change = stats.mean(month_change)
print(f"The overall change for this period is ${overall_change}")

# Reopen file, set variables for minimum and maximum, find greatest increase in profits and greatest decrease in profits.
bank_data = ("/Users/sethjacobson/DENVDEN201905DATA4/Homework/3 Python 6-18/PyBank/Resources/budget_data.csv")

with open(bank_data, "r") as f:
    reader = csv.reader(f)
    next(reader)
    greatest_profit_increase = []
    greatest_profit_decrease = []
    # Reset the lists so the values appended in the previous cell are not double counted.
    value_string = []
    month_string = []
    for row in reader:
        value = float(row[1])
        month = row[0]
        # print(value)
        value_list = []
        month_list = []
        value_list.append(value)
        month_list.append(month)
        value_string += value_list
        month_string += month_list

# Zip the list of values with itself offset by one to get consecutive month pairs.
month_match = (list(zip(value_string, value_string[1:])))
# print(month_match)
month_change = [y - x for x, y in month_match]
# print(month_change)

greatest_profit_decrease = (min(month_change))
print(greatest_profit_decrease)
greatest_profit_increase = (max(month_change))
print(greatest_profit_increase)
# print(dict(zip(month_string[1:], month_change)))
```
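For comparison, the same summary statistics can be computed in a single pass over the file. This is just an alternative sketch that reuses the `bank_data` path and two-column layout assumed above; it is not part of the original homework solution:

```
months, values = [], []
with open(bank_data, "r") as f:
    reader = csv.reader(f)
    next(reader)  # skip the header row
    for row in reader:
        months.append(row[0])
        values.append(float(row[1]))

# month-over-month changes; each change is attributed to the month it ends on
changes = [b - a for a, b in zip(values, values[1:])]

print(f"Total Months: {len(months)}")
print(f"Net Total: ${sum(values):.2f}")
print(f"Average Change: ${stats.mean(changes):.2f}")
print(f"Greatest Increase: {months[changes.index(max(changes)) + 1]} (${max(changes):.2f})")
print(f"Greatest Decrease: {months[changes.index(min(changes)) + 1]} (${min(changes):.2f})")
```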
# Function arguments

So far we have used function arguments in a basic way; this is the way that is familiar from mathematics:

```
# Load the Numpy package, and rename to "np"
import numpy as np

np.cos(0)
```

Here is another Numpy function, from the `random` sub-package of the Numpy library. We get to the sub-packages with the dot `.` - so to get to the `random` sub-package, we use `np.random`. Then, to get to the functions in this sub-package, we use the dot again, like this:

```
np.random.randint(0, 2)
```

Remember, this is a random integer from 0 up to, but *not including* 2, so it is a random integer that can either be 0 or 1.

Now let us look at the help for the `np.random.randint` function. As usual, we do this by appending `?` to the function name, and pressing Enter in the notebook.

```
# To see the help for np.random.randint, remove the # at the beginning
# of the next line, and execute this cell.
np.random.randint?
```

We find that the function can accept up to four arguments. We have passed two. The first sets the argument called `low` to be 0, and the second sets the argument called `high` to be 2.

To take another example, in this case we are asking for a random number starting at 1 up to, but not including 11. This gives us a random integer from 1 through 10. `low` is 1 and `high` is 11.

```
# Random integer from 1 through 10.
np.random.randint(1, 11)
```

If we pass three arguments, we also set the `size` argument. This tells the function how many random numbers to return. The following asks for an array of four random integers from 1 through 20:

```
# Four random integers from 1 through 20.
np.random.randint(1, 21, 4)
```

Notice that this is an *array*.

Now look again at the help. Notice that the help gives each argument a *name* --- `low`, `high`, `size`. We can also use these names when we set these arguments. For example, the cell below does exactly the same thing as the cell above.

```
# Four random integers from 1 through 20, using keyword arguments.
np.random.randint(low=1, high=21, size=4)
```

When we call the function using the arguments with their names like this, the named arguments are called *keyword* arguments.

Passing the arguments like this, using keywords, can be very useful, to make it clearer what each argument means. For example, it's a common pattern to call a function with one or a few keyword arguments, like this:

```
# Four random integers from 1 through 20.
np.random.randint(1, 21, size=4)
```

Writing the call like the cell above gives exactly the same result as the cell below, but the cell above can be easier to follow, because the person reading the code does not have to guess what the 4 means --- they can see that it means the size of the output array.

```
# Four random integers from 1 through 20 - but no keyword argument.
np.random.randint(1, 21, 4)
```

To take another example, we have already seen the function `round`. Inspect the help for `round` with `round?` and Enter in a notebook cell. `round` takes up to two arguments.
If we pass one argument, it is just the value that `round` will round to the nearest integer: ``` round(3.1415) ``` If we pass two arguments, the first argument is the value we will round, and the second is the number of digits to round to, like this: ``` round(3.1415, 2) ``` As you saw in the help, the second argument has the name `ndigits`, so we can also write: ``` round(3.1415, ndigits=2) ``` As before, this makes the code a little bit easier to read and understand, because it is immediately clear from the name `ndigits` that the 2 means the number of digits to round to.
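One more point about keyword arguments that follows from the examples above: because the names say which value is which, keyword arguments can be written in any order, although positional arguments must always come before them.

```
# Keyword arguments can be given in any order - this is the same call as before.
np.random.randint(size=4, high=21, low=1)

# A positional argument may come before keyword arguments, but not after them:
# np.random.randint(low=1, 21) would be a syntax error.
np.random.randint(1, high=21, size=4)
```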
# 4.1 Model Construction

```
import torch
from torch import nn

print(torch.__version__)
```

## 4.1.1 Building a model by subclassing the `Module` class

```
class MLP(nn.Module):
    # Declare the layers that hold model parameters; here, two fully connected layers
    def __init__(self, **kwargs):
        # Call the constructor of the parent class to perform the necessary initialization.
        # This also allows other arguments to be passed when constructing an instance,
        # such as the model parameters `params` introduced in the section on
        # accessing, initializing and sharing model parameters.
        super(MLP, self).__init__(**kwargs)
        self.hidden = nn.Linear(784, 256)  # hidden layer
        self.act = nn.ReLU()
        self.output = nn.Linear(256, 10)   # output layer

    # Define the forward computation, i.e. how to compute the required model output from the input x
    def forward(self, x):
        a = self.act(self.hidden(x))
        return self.output(a)

X = torch.rand(2, 784)
net = MLP()
print(net)
net(X)
```

## 4.1.2 Subclasses of `Module`

### 4.1.2.1 The `Sequential` class

```
from collections import OrderedDict  # imported at module level so it is visible inside __init__

class MySequential(nn.Module):
    def __init__(self, *args):
        super(MySequential, self).__init__()
        if len(args) == 1 and isinstance(args[0], OrderedDict):  # a single OrderedDict was passed in
            for key, module in args[0].items():
                self.add_module(key, module)  # add_module adds the module to self._modules (an OrderedDict)
        else:  # individual Modules were passed in
            for idx, module in enumerate(args):
                self.add_module(str(idx), module)

    def forward(self, input):
        # self._modules returns an OrderedDict, which guarantees the members are
        # traversed in the order in which they were added
        for module in self._modules.values():
            input = module(input)
        return input

net = MySequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)
print(net)
net(X)
```

### 4.1.2.2 The `ModuleList` class

```
net = nn.ModuleList([nn.Linear(784, 256), nn.ReLU()])
net.append(nn.Linear(256, 10))  # append works like a Python list
print(net[-1])  # indexing works like a Python list
print(net)
```

### 4.1.2.3 The `ModuleDict` class

```
net = nn.ModuleDict({
    'linear': nn.Linear(784, 256),
    'act': nn.ReLU(),
})
net['output'] = nn.Linear(256, 10)  # add an entry
print(net['linear'])  # access an entry
print(net.output)
print(net)
```

## 4.1.3 Constructing more complex models

```
class FancyMLP(nn.Module):
    def __init__(self, **kwargs):
        super(FancyMLP, self).__init__(**kwargs)

        self.rand_weight = torch.rand((20, 20), requires_grad=False)  # non-trainable (constant) parameter
        self.linear = nn.Linear(20, 20)

    def forward(self, x):
        x = self.linear(x)
        # use the constant parameter created above, plus the relu and mm functions from nn.functional
        x = nn.functional.relu(torch.mm(x, self.rand_weight.data) + 1)

        # reuse the fully connected layer; equivalent to two fully connected layers sharing parameters
        x = self.linear(x)
        # control flow; here we call item() to get a Python scalar for the comparison
        while x.norm().item() > 1:
            x /= 2
        if x.norm().item() < 0.8:
            x *= 10
        return x.sum()

X = torch.rand(2, 20)
net = FancyMLP()
print(net)
net(X)

class NestMLP(nn.Module):
    def __init__(self, **kwargs):
        super(NestMLP, self).__init__(**kwargs)
        self.net = nn.Sequential(nn.Linear(40, 30), nn.ReLU())

    def forward(self, x):
        return self.net(x)

net = nn.Sequential(NestMLP(), nn.Linear(30, 20), FancyMLP())

X = torch.rand(2, 40)
print(net)
net(X)
```
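As a quick sanity check on the nested model just built, you can count its trainable parameters. Note that `FancyMLP`'s `rand_weight` is a plain tensor attribute rather than an `nn.Parameter`, so it does not appear in `net.parameters()`:

```
# total number of trainable parameters in the nested model
n_params = sum(p.numel() for p in net.parameters() if p.requires_grad)
print(n_params)
```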
# Use PyTorch to predict handwritten digits <table style="border: none" align="left"> <tr style="border: none"> <td style="border: none"><img src="https://github.com/IBM/pytorch-on-watson-studio/raw/master/doc/source/images/pytorch-pattern-header.jpg" width="600" alt="Icon"></td> </tr> </table> This notebook contains steps and code to demonstrate Deep Learning model training in the <a href="https://www.ibm.com/cloud/machine-learning">Watson Machine Learning</a> service. <a href="https://pytorch.org/" target="_blank" rel="noopener no referrer">PyTorch</a> is a relatively new deep learning framework. Yet, it has begun to gain adoption especially among researchers and data scientists. The strength of PyTorch is its support of dynamic computational graph while most deep learning frameworks are based on static computational graph. In addition, its strong NumPy like GPU accelerated tensor computation has allowed Python developers to easily learn and build deep learning networks for GPUs and CPUs alike. Some familiarity with Python is helpful. This notebook uses Python 3 and <a href="https://dataplatform.cloud.ibm.com/docs/content/analyze-data/environments-parent.html" target="_blank" rel="noopener no referrer">Watson Studio</a> to configure and initiate training of a PyTorch base workload using Watson Machine Learning service. ## Learning goals In this notebook, you will learn how to: - Work with Watson Machine Learning to train Deep Learning models - Use PyTorch features, tools and libraries - Save trained models in the Watson Machine Learning repository ## Contents 1. [Set up](#setup) 2. [Create the training definitions](#model) 3. [Train the model](#train) 4. [Work with the trained models](#work) 5. [Summary and next steps](#summary) <a id="setup"></a> ## 1. Set up Before you use the sample code in this notebook, you must perform the following setup tasks: - Create a <a href="https://console.bluemix.net/catalog/services/machine-learning" target="_blank" rel="noopener no referrer">Watson Machine Learning (WML) Service</a> instance (a free plan is offered and information about how to create the instance is <a href="https://dataplatform.ibm.com/docs/content/analyze-data/wml-setup.html" target="_blank" rel="noopener no referrer">here</a>). - Create a <a href="https://console.bluemix.net/catalog/services/cloud-object-storage" target="_blank" rel="noopener no referrer">Cloud Object Storage (COS)</a> instance (a lite plan is offered and information about how to order storage is <a href="https://console.bluemix.net/docs/services/cloud-object-storage/basics/order-storage.html#order-storage" target="_blank" rel="noopener no referrer">here</a>). <br/>**Note: When using Watson Studio, you already have a COS instance associated with the project you are running the notebook in.** - Create new credentials with HMAC: - Go to your COS dashboard (see Tip). - In the **Service credentials** tab, click **New Credential+**. - In the **Add Inline Configuration Parameters(Optional):** box, add {"HMAC":true} - Click **Add**. (For more information, see <a href="https://console.bluemix.net/docs/services/cloud-object-storage/hmac/credentials.html#using-hmac-credentials" target="_blank" rel="noopener no referrer">HMAC</a>.) This configuration parameter adds the following section to the instance credentials, (for use later in this notebook): ``` "cos_hmac_keys": { "access_key_id": "-------", "secret_access_key": "-------" } ``` **Tip:** follow the steps below to access your COS instance dashboard. 
From the Watson Studio dashboard: - Click the **Services** tab on the top of the page - Click the **Data Services** tab - Select and click your target object storage (COS) ### 1.1 Work with Cloud Object Storage (COS) Install the boto library. This library allows Python developers to manage Cloud Object Storage (COS). **Tip:** If `ibm_boto3` is not preinstalled in your environment, run the following command to install it: ``` # Run the command if ibm_boto3 is not installed. # !pip install ibm-cos-sdk # Install the boto library. import ibm_boto3 from ibm_botocore.client import Config ``` **Replace** the information in the following cell with your COS credentials. You can find these credentials in your COS instance dashboard under the **Service credentials** tab. **Note:** the HMAC key, described in [set up the environment](#setup) is included in these credentials. ` cos_credentials = { "apikey": "-------", "cos_hmac_keys": { "access_key_id": "------", "secret_access_key": "------" }, "endpoints": "https://cos-service.bluemix.net/endpoints", "iam_apikey_description": "------", "iam_apikey_name": "------", "iam_role_crn": "------", "iam_serviceid_crn": "------", "resource_instance_id": "-------" } ` ``` # @hidden_cell cos_credentials = { } ``` Define the endpoint. To do this, go to the **Endpoint** tab in the COS instance's dashboard to get the endpoint information, then enter it in the cell below: ``` # Define endpoint information. service_endpoint = 'https://s3-api.us-geo.objectstorage.softlayer.net' ``` You also need the IBM Cloud authorization endpoint to be able to create COS resource object. ``` # Define the authorization endpoint. auth_endpoint = 'https://iam.bluemix.net/oidc/token' ``` Create a Boto resource to be able to write data to COS. ``` # Create a COS resource. cos = ibm_boto3.resource('s3', ibm_api_key_id=cos_credentials['apikey'], ibm_service_instance_id=cos_credentials['resource_instance_id'], ibm_auth_endpoint=auth_endpoint, config=Config(signature_version='oauth'), endpoint_url=service_endpoint) ``` Create two buckets, which you will use to store training data and training results. **Note:** The bucket names must be unique. ``` from uuid import uuid4 bucket_uid = str(uuid4()) buckets = ['training-mnist-data-' + bucket_uid, 'training-mnist-results-' + bucket_uid] for bucket in buckets: if not cos.Bucket(bucket) in cos.buckets.all(): print('Creating bucket "{}"...'.format(bucket)) try: cos.create_bucket(Bucket=bucket) except ibm_boto3.exceptions.ibm_botocore.client.ClientError as e: print('Error: {}.'.format(e.response['Error']['Message'])) ``` Now you should have 2 buckets. ``` # Display a list of created buckets. print(list(cos.buckets.all())) ``` ### 1.2 Download the training data and upload it to the COS buckets **PyTorch Tools & Libraries** An active community of researchers and developers have built a rich ecosystem of tools and libraries for extending PyTorch and supporting development in areas from computer vision to reinforcement learning. PyTorch's <a href="https://github.com/pytorch/vision" target="_blank" rel="noopener no referrer">torchvision</a> is one of those packages. `torchvision` consists of popular datasets, model architectures, and common image transformations for computer vision. This tutorial will use `torchvision's MNIST dataset` package to download and process the training data. The processed data files will be uploaded to the `training-data-mnist` bucket. 
**Tip:** If PyTorch or `torchvision` is not preinstalled in your environment, run the following command to install it: ``` #Install PyTorch !pip install http://download.pytorch.org/whl/cpu/torch-0.4.1-cp35-cp35m-linux_x86_64.whl #Install torchvision !pip install torchvision ``` The following code will download and process the MNIST training and test data. ``` import torch from torchvision import datasets, transforms data_dir = './data' datasets.MNIST(data_dir, train=True, download=True, transform=transforms.Compose([ transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,)) ])) ``` The code in the next cell uploads the processed files to your COS. ``` import glob import os files_search = os.path.join(data_dir, "processed", "*") files = glob.glob(files_search) bucket_obj = cos.Bucket(buckets[0]) for file in files: filename = file.split('/')[-1] filename = os.path.join("processed", filename) print('Uploading data {}...'.format(filename)) bucket_obj.upload_file(file, filename ) print('{} is uploaded.'.format(filename)) print("Done") ``` Have a look at the list of the created buckets and their contents. ``` for bucket_name in buckets: print(bucket_name) bucket_obj = cos.Bucket(bucket_name) for obj in bucket_obj.objects.all(): print(" File: {}, {:4.2f}kB".format(obj.key, obj.size/1024)) ``` You are done with COS, and you are ready to train your model! ### 1.3. Work with the WML service instance Import the libraries you need to work with your WML instance. **Hint:** You may also need to install `wget` using the following command `!pip install wget` ``` !pip install wget import urllib3, requests, json, base64, time, os, wget ``` Authenticate to the Watson Machine Learning (WML) service on IBM Cloud. **Tip**: Authentication information (your credentials) can be found in the <a href="https://console.bluemix.net/docs/services/service_credentials.html#service_credentials" target="_blank" rel="noopener noreferrer">Service credentials</a> tab of the service instance that you created on IBM Cloud. If there are no credentials listed for your instance in **Service credentials**, click **New credential (+)** and enter the information required to generate new authentication information. **Action**: Enter your WML service instance credentials here. ` wml_credentials = { "apikey": "------", "iam_apikey_description": "------:", "iam_apikey_name": "------", "iam_role_crn": "-------", "iam_serviceid_crn": "-------", "instance_id": "-------", "password": "------", "url": "------", "username": "-------" } ` ``` # @hidden_cell wml_credentials = { } ``` #### Import the `watson-machine-learning-client` and authenticate to the service instance. **Tip:** If `watson-machine-learning-client` is not preinstalled in your environment, run the following command to install it: ``` # !pip install watson-machine-learning-client from watson_machine_learning_client import WatsonMachineLearningAPIClient ``` **Note:** A deprecation warning is returned from scikit-learn package that does not impact watson machine learning client functionalities. ``` client = WatsonMachineLearningAPIClient(wml_credentials) # Display the client version number. print(client.version) ``` **Note:** `watson-machine-learning-client` documentation can be found <a href="http://wml-api-pyclient.mybluemix.net/" target="_blank" rel="noopener noreferrer">here</a>. <a id="model"></a> ## 2. 
Create the training definitions In this section you: - [2.1 Prepare the training definition metadata](#prep) - [2.2 Get the sample model definition content file from GitHub](#get) - [2.3 Store the training definition in the WML repository](#store) ### 2.1 Prepare the training definition metadata<a id="prep"></a> Prepare the training definition metadata. The main program will be called with environment variables `$DATA_DIR` and `$RESULT_DIR` as the inputs for the `--data-dir` and `--result-dir` options. **Tip:** You may want to increase the `--epochs` value in the execution command below to train for more epochs. ``` model_definition_metadata = { client.repository.DefinitionMetaNames.NAME: "My definition name", client.repository.DefinitionMetaNames.DESCRIPTION: "My description", client.repository.DefinitionMetaNames.AUTHOR_NAME: "John Smith", client.repository.DefinitionMetaNames.FRAMEWORK_NAME: "pytorch", client.repository.DefinitionMetaNames.FRAMEWORK_VERSION: "0.4", client.repository.DefinitionMetaNames.RUNTIME_NAME: "python", client.repository.DefinitionMetaNames.RUNTIME_VERSION: "3.5", client.repository.DefinitionMetaNames.EXECUTION_COMMAND: "python3 main.py --epochs 1 --data-dir $DATA_DIR --result-dir $RESULT_DIR" } ``` ### 2.2 Get the sample model definition content file from GitHub <a id="get"></a> The sample model used here is the <a href="https://github.com/pytorch/examples/tree/master/mnist">MNIST model</a> from the official PyTorch examples repository. ``` filename='pytorch-mnist.zip' if not os.path.isfile(filename): filename = wget.download('https://github.com/IBM/pytorch-on-watson-studio/raw/master/data/code/pytorch-mnist.zip') print(filename, "was downloaded") else: print(filename, "was downloaded previously.") ``` You can verify the size of the model definition file by running the following command. ``` ls -o ``` ### 2.3 Store the training definition in the WML repository<a id="store"></a> ``` definition_details = client.repository.store_definition(filename, model_definition_metadata) definition_uid = client.repository.get_definition_uid(definition_details) # Display the training definition uid. print(definition_uid) ``` ## 3. Train the model<a id="train"></a> In this section, learn how to: - [3.1 Enter training configuration metadata](#meta) - [3.2 Train the model in the background](#backg) - [3.3 Monitor the training log](#log) - [3.4 Cancel the training run](#cancel) ### 3.1 Enter training configuration metadata<a id="meta"></a> - `TRAINING_DATA_REFERENCE` - references the uploaded training data. - `TRAINING_RESULTS_REFERENCE` - location where the trained model will be saved. **Note:** Your COS credentials are referenced in this code. ``` # Configure the training metadata for the TRAINING_DATA_REFERENCE and TRAINING_RESULTS_REFERENCE. 
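# TRAINING_DATA_REFERENCE points at the COS bucket holding the uploaded MNIST files (buckets[0]),
# and TRAINING_RESULTS_REFERENCE points at the bucket where the trained model will be written (buckets[1]).
# Both references reuse the service endpoint and the HMAC keys from the COS credentials defined earlier.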
training_configuration_metadata = { client.training.ConfigurationMetaNames.NAME: "Hand-written Digit Recognition", client.training.ConfigurationMetaNames.AUTHOR_NAME: "John Smith", client.training.ConfigurationMetaNames.DESCRIPTION: "Hand-written Digit Recognition training", client.training.ConfigurationMetaNames.COMPUTE_CONFIGURATION: {"name": "k80"}, client.training.ConfigurationMetaNames.TRAINING_DATA_REFERENCE: { "connection": { "endpoint_url": service_endpoint, "access_key_id": cos_credentials['cos_hmac_keys']['access_key_id'], "secret_access_key": cos_credentials['cos_hmac_keys']['secret_access_key'] }, "source": { "bucket": buckets[0], }, "type": "s3" }, client.training.ConfigurationMetaNames.TRAINING_RESULTS_REFERENCE: { "connection": { "endpoint_url": service_endpoint, "access_key_id": cos_credentials['cos_hmac_keys']['access_key_id'], "secret_access_key": cos_credentials['cos_hmac_keys']['secret_access_key'] }, "target": { "bucket": buckets[1], }, "type": "s3" }, } ``` ### 3.2 Train the model in the background<a id="backg"></a> To run the training in the **background**, set the optional parameter `asynchronous=True` (or remove it). In this case the parameter has been removed. **Note:** To run the training in **active** mode, set `asynchronous=False`. ``` training_run_details = client.training.run(definition_uid, training_configuration_metadata) # print(json.dumps(training_run_details, indent=2)) training_run_guid_async = client.training.get_run_uid(training_run_details) print("training_run_guid_async=",training_run_guid_async) ``` Check the status of the training run by calling the method in the next cell: ``` # Get training run status. status = client.training.get_status(training_run_guid_async) print(json.dumps(status, indent=2)) ``` ### 3.3 Monitor the training log<a id="log"></a> Run the cell below to monitor the training log. ``` client.training.monitor_logs(training_run_guid_async) ``` After the training is complete, get the training GUID. ``` training_details = client.training.get_details(training_run_guid_async) training_guid = training_details["entity"]["training_results_reference"]["location"]["model_location"] print("Training GUID is:", training_guid) ``` ### 3.4 Cancel the training run<a id="cancel"></a> You can cancel the training run by calling the method below. ``` client.training.cancel(training_run_guid_async) ``` <a id="work"></a> ## 4. Work with the trained models In this sample workload, the trained model is saved as a file named `saved_models.pth` in the result bucket. The following code will fetch the model file from the bucket. **Tip:** Make sure that the training run is completed by checking its status as shown earlier. ``` # buckets[1] is the bucket to save the result data as defined above bucket_obj = cos.Bucket(buckets[1]) # model file name as defined in the code saved_model_filename = "saved_models.pth" source_file = os.path.join(training_guid, saved_model_filename) bucket_obj.download_file(source_file, saved_model_filename) ``` Copy the definition of the neural network as it is defined in the sample workload. 
``` import torch import torch.nn as nn import torch.nn.functional as F class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.conv1 = nn.Conv2d(1, 10, kernel_size=5) self.conv2 = nn.Conv2d(10, 20, kernel_size=5) self.conv2_drop = nn.Dropout2d() self.fc1 = nn.Linear(320, 50) self.fc2 = nn.Linear(50, 10) def forward(self, x): x = F.relu(F.max_pool2d(self.conv1(x), 2)) x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2)) x = x.view(-1, 320) x = F.relu(self.fc1(x)) x = F.dropout(x, training=self.training) x = self.fc2(x) return F.log_softmax(x, dim=1) ``` Instantiate and load previously trained model parameters. ``` mnist_model = Net() mnist_model.load_state_dict(torch.load(saved_model_filename, map_location='cpu')) ``` Download sample image files. ``` import os import wget images = [] for i in range(1,10): filename = "img_"+str(i)+".jpg" images.append(filename) if not os.path.isfile(filename): path = "https://github.com/IBM/pytorch-on-watson-studio/raw/master/data/images/"+filename wget.download(path) print(images) ``` Use the trained model to predict the digits in the sample image files. ``` import numpy as np from IPython.display import display from PIL import Image ``` **Python-First** PyTorch is not a Python binding into a monolithic C++ framework. It’s built to be deeply integrated into Python so it can be used with popular Python libraries. The code below shows how to convert a NumPy array to a PyTorch tensor using `torch.from_numpy`. ``` digits = [i for i in range(10)] mnist_model.eval() for i, filename in enumerate(images): img = Image.open(filename).resize((28, 28)).convert('L') display(img) data = torch.from_numpy(np.asarray(img, dtype=np.float32)[np.newaxis, np.newaxis, :, :]) output = mnist_model(data) # get the index of the max log-probability prediction = output.max(1, keepdim=True)[1] print("Prediction for image number", i+1, "is:", digits[prediction[0,0]]) ``` **Native ONNX Support** PyTorch includes native <a href="http://onnx.ai/">Open Neural Network Exchange (ONNX)</a> support. The following code will export models in the standard ONNX format so that the models can be consumed by ONNX-compatible platforms, runtimes, visualizers, and more. PyTorch exports the model by running it through a forward pass once and then saving the traced model to a file in the ONNX format. **Tip:** You can test the exported ONNX format model by importing and running it in an ONNX-compatible framework. See <a href="https://github.com/onnx/tutorials/tree/master/tutorials/PytorchTensorflowMnist.ipynb">ONNX tutorials</a> for more information. ``` # Export the trained model to ONNX # one black and white 28 x 28 picture will be used as the input to the model dummy_input = torch.randn(1, 1, 28, 28) onnx_model_filename = "mnist.onnx" torch.onnx.export(mnist_model, dummy_input, onnx_model_filename) ``` Save the ONNX model file to the result bucket. ``` # buckets[1] is bucket to save the result data as defined above bucket_obj = cos.Bucket(buckets[1]) # model file name as defined in the code bucket_obj.upload_file(onnx_model_filename, onnx_model_filename) ``` You are done and can delete the training run in WML by calling the method below. ``` client.training.delete(training_run_guid_async) ``` <a id="summary"></a> ## 5. Summary and next steps You successfully completed this notebook! You learned how to use `watson-machine-learning-client` to train PyTorch models. 
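As an optional follow-up to the ONNX export above, the following sketch shows how the exported `mnist.onnx` file could be loaded and run with the `onnxruntime` package. Note that `onnxruntime` is not installed or used anywhere else in this notebook, so treat this as an illustrative assumption rather than part of the original workflow.

```
import numpy as np
import onnxruntime as ort  # assumption: onnxruntime is available in the environment

# Load the exported ONNX model and run one random 28 x 28 input through it.
session = ort.InferenceSession("mnist.onnx")
input_name = session.get_inputs()[0].name
dummy = np.random.randn(1, 1, 28, 28).astype(np.float32)
log_probs = session.run(None, {input_name: dummy})[0]
print("Predicted digit for the random input:", log_probs.argmax())
```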
Check out our <a href="https://dataplatform.ibm.com/docs/content/analyze-data/wml-setup.html" target="_blank" rel="noopener noreferrer">Online Documentation</a> for a <a href="https://dataplatform.ibm.com/docs/content/analyze-data/ml-python-mnist-tutorial.html" target="_blank" rel="noopener noreferrer">tutorial</a> and more samples, documentation, how-tos, and blog posts. ### Citations Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. "Gradient-based learning applied to document recognition." Proceedings of the IEEE, 86(11):2278-2324, November 1998. ### References 1. <a href="https://pytorch.org/">PyTorch</a>. 2. <a href="https://github.com/pytorch/examples/tree/master/mnist">MNIST model</a> from the official PyTorch examples repository. 3. <a href="https://dataplatform.cloud.ibm.com/analytics/notebooks/v2/3bd3efb8-833d-460f-b07b-fee51dd0f1af/view?access_token=6bd0ff8d807861d09e0dab0cad28ce9685711078f612fcd92bb8cf8535d089c1">Use TensorFlow to predict handwritten digits</a> ### Authors **Lucasz Cmielowski**, PhD, is an Automation Architect and Data Scientist at IBM with a track record of developing enterprise-level applications that substantially increase the clients' ability to turn data into actionable knowledge. **Catherine Diep** is a Solutions Architect and Performance Engineer of the Cognitive OpenTech group at IBM Silicon Valley Lab. Her current projects include deep learning related workloads that use open source frameworks and APIs such as PyTorch, TensorFlow, Keras, etc. **Simeon Monov** is a Senior Software Developer and Performance Engineer for the Cognitive OpenTech group at IBM. He is currently working on data science and machine learning related projects. Copyright © 2017, 2018 IBM. This notebook and its source code are released under the terms of the MIT License.
# Implementing the Gradient Descent Algorithm In this lab, we'll implement the basic functions of the Gradient Descent algorithm to find the boundary in a small dataset. First, we'll start with some functions that will help us plot and visualize the data. ``` import matplotlib.pyplot as plt import numpy as np import pandas as pd #Some helper functions for plotting and drawing lines def plot_points(X, y): admitted = X[np.argwhere(y==1)] rejected = X[np.argwhere(y==0)] plt.scatter([s[0][0] for s in rejected], [s[0][1] for s in rejected], s = 25, color = 'blue', edgecolor = 'k') plt.scatter([s[0][0] for s in admitted], [s[0][1] for s in admitted], s = 25, color = 'red', edgecolor = 'k') def display(m, b, color='g--'): plt.xlim(-0.05,1.05) plt.ylim(-0.05,1.05) x = np.arange(-10, 10, 0.1) plt.plot(x, m*x+b, color) ``` ## Reading and plotting the data ``` data = pd.read_csv('data.csv', header=None) X = np.array(data[[0,1]]) y = np.array(data[2]) plot_points(X,y) plt.show() ``` ## TODO: Implementing the basic functions Here is your turn to shine. Implement the following formulas, as explained in the text. - Sigmoid activation function $$\sigma(x) = \frac{1}{1+e^{-x}}$$ - Output (prediction) formula $$\hat{y} = \sigma(w_1 x_1 + w_2 x_2 + b)$$ - Error function $$Error(y, \hat{y}) = - y \log(\hat{y}) - (1-y) \log(1-\hat{y})$$ - The function that updates the weights $$ w_i \longrightarrow w_i + \alpha (y - \hat{y}) x_i$$ $$ b \longrightarrow b + \alpha (y - \hat{y})$$ ``` import numpy as np # Implement the following functions # Activation (sigmoid) function def sigmoid(x): sigma = 1 / (1 + np.exp(-x)) return sigma # Output (prediction) formula def output_formula(features, weights, bias): output = sigmoid(np.dot(features, weights) + bias) return output # Error (log-loss) formula def error_formula(y, output): error = - y * np.log(output) - (1 - y) * np.log(1 - output) return error # Gradient descent step def update_weights(x, y, weights, bias, learn_rate): output = output_formula(x, weights, bias) d_error = y - output weights += learn_rate * d_error * x bias += learn_rate * d_error return weights, bias ``` ## Training function This function will help us iterate the gradient descent algorithm through all the data, for a number of epochs. It will also plot the data, and some of the boundary lines obtained as we run the algorithm. 
``` np.random.seed(44) epochs = 100 learnrate = 0.01 def train(features, targets, epochs, learnrate, graph_lines=False): errors = [] n_records, n_features = features.shape last_loss = None weights = np.random.normal(scale=1 / n_features**.5, size=n_features) bias = 0 for e in range(epochs): del_w = np.zeros(weights.shape) for x, y in zip(features, targets): output = output_formula(x, weights, bias) error = error_formula(y, output) weights, bias = update_weights(x, y, weights, bias, learnrate) # Printing out the log-loss error on the training set out = output_formula(features, weights, bias) loss = np.mean(error_formula(targets, out)) errors.append(loss) if e % (epochs / 10) == 0: print("\n========== Epoch", e,"==========") if last_loss and last_loss < loss: print("Train loss: ", loss, " WARNING - Loss Increasing") else: print("Train loss: ", loss) last_loss = loss predictions = out > 0.5 accuracy = np.mean(predictions == targets) print("Accuracy: ", accuracy) if graph_lines and e % (epochs / 100) == 0: display(-weights[0]/weights[1], -bias/weights[1]) # Plotting the solution boundary plt.title("Solution boundary") display(-weights[0]/weights[1], -bias/weights[1], 'black') # Plotting the data plot_points(features, targets) plt.show() # Plotting the error plt.title("Error Plot") plt.xlabel('Number of epochs') plt.ylabel('Error') plt.plot(errors) plt.show() ``` ## Time to train the algorithm! When we run the function, we'll obtain the following: - 10 updates with the current training loss and accuracy - A plot of the data and some of the boundary lines obtained. The final one is in black. Notice how the lines get closer and closer to the best fit, as we go through more epochs. - A plot of the error function. Notice how it decreases as we go through more epochs. ``` train(X, y, epochs, learnrate, True) ```
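As a quick sanity check of the update rule implemented above, the sketch below compares one hand-computed gradient descent step against `update_weights`. It reuses the `sigmoid` and `update_weights` functions defined earlier in this notebook; the data point and starting weights are arbitrary illustrative values.

```
import numpy as np

# Arbitrary single data point and starting parameters (illustrative values only).
x0 = np.array([0.3, 0.7])
y0 = 1
w0 = np.array([0.1, -0.2])
b0 = 0.0
alpha = 0.01

# Hand-computed step: w <- w + alpha * (y - y_hat) * x, b <- b + alpha * (y - y_hat)
y_hat = sigmoid(np.dot(x0, w0) + b0)
expected_w = w0 + alpha * (y0 - y_hat) * x0
expected_b = b0 + alpha * (y0 - y_hat)

# update_weights modifies the weight array in place, so pass a copy.
new_w, new_b = update_weights(x0, y0, w0.copy(), b0, alpha)
print(np.allclose(new_w, expected_w), np.isclose(new_b, expected_b))
```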
# Keras version of AlexNet We'll train AlexNet with the MNIST dataset. [AlexNet](https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf) was first introduced in 2012 and ushered in the recent resurgence of deep neural networks. It won the 2012 ImageNet competition by a score that was significantly better than any previous model. Two years later, every state-of-the-art computer imaging model was using neural networks. The 3 tricks of AlexNet: 1. "Deep" : Up until then it was difficult to train neural networks with many hidden layers due to the vanishing gradient and slow computer processors. AlexNet made use of GPU processors to train the network faster. 2. ReLU : AlexNet introduced the rectified linear unit (ReLU) activation function. This virtually eliminated the vanishing gradient problem. 3. Dropout : AlexNet introduced the concept of dropout. Neurons were randomly removed from the network during a batch. This helped to prevent overfitting on the training dataset. [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/tonyreina/keras_tutorials/blob/master/lesson_3_alexnet.ipynb) ``` from tensorflow import keras from tensorflow.keras.datasets import mnist from tensorflow.keras.layers import Dense, Dropout, Flatten from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D from tensorflow.keras.models import Model ``` # Common terms + SGD : Stochastic gradient descent. The usual way to train a neural network. The "weights" or "parameters" of the network are updated bit by bit in order to minimize some global function ("cost" or "loss"). + "Cost" or "Loss" - A function we wish the network to minimize. This is typically some distance measure of how far the network's prediction is from the actual value (i.e. the error). + Epoch = A single pass through the entire training set. SGD involves multiple passes through the training dataset. + Batch = How many samples of the training dataset are used to create an update to the weights of the network during SGD. If the batch is 1, then the weights are updated after every forward pass (truly stochastic descent). If the batch is the size of the dataset then the weights are updated based on the sum of the gradients for the entire training set (non-stochastic or just gradient descent). We usually use batch or mini-batch gradient descent. ``` batch_size = 128 num_classes = 10 epochs = 4 ``` # MNIST This is the standard dataset for handwritten digit classification. The images are 28 pixels by 28 pixels. There is only 1 color channel (grayscale). For color images there are typically 3 color channels (red, blue, green). Tensor size = NHWC = Batch size x 28 x 28 x 1 ``` # input image dimensions img_rows, img_cols, n_channels = 28, 28, 1 input_shape = (img_rows, img_cols, n_channels) (x_train, y_train), (x_test, y_test) = mnist.load_data() x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, n_channels) x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, n_channels) x_train = x_train.astype("float32") x_test = x_test.astype("float32") x_train /= x_train.max() # Scale everything between 0 and 1 x_test /= x_test.max() # Scale everything between 0 and 1 print("x_train shape:", x_train.shape) print(x_train.shape[0], "train samples") print(x_test.shape[0], "test samples") ``` # One Hot Encoding For multi-class problems we always one-hot encode the output variable. There are 10 classes (numbers 0-9). 
The one-hot label places a 1 in the position of the digit (counting from 0) and 0s everywhere else. The label for 7 would be 0000000100. The label for 0 would be 1000000000. The label for 3 would be 0001000000. This allows us to use the cost function of [multi-class cross-entropy](https://en.wikipedia.org/wiki/Cross_entropy) which will maximize the margin between classes. ``` # convert class vectors to binary class matrices y_train = keras.utils.to_categorical(y_train, num_classes) y_test = keras.utils.to_categorical(y_test, num_classes) ``` # AlexNet model ![AlexNet diagram](https://www.researchgate.net/profile/Huafeng_Wang4/publication/300412100/figure/fig1/AS:388811231121412@1469711229450/Figure-2-AlexNet-Architecture-To-be-noted-Figure-2-is-copied-2.ppm) Above is AlexNet. The MNIST images are only 28 x 28 so if we implemented this on a 28 x 28 image, then the max pooling and cropping would quickly reduce our images to a single pixel. Instead, we'll create an AlexNet-like CNN. ``` inputs = Input(input_shape, name="Images") conv1 = Conv2D( filters=96, kernel_size=(5, 5), strides=(2, 2), activation="relu", padding="valid", kernel_initializer="glorot_uniform", )(inputs) conv2 = Conv2D(filters=256, kernel_size=(3, 3), activation="relu", padding="same")( conv1 ) max2 = MaxPooling2D(pool_size=(2, 2))(conv2) conv3 = Conv2D(filters=384, kernel_size=(3, 3), activation="relu")(max2) conv4 = Conv2D( name="ernie", filters=384, kernel_size=(3, 3), activation="relu", padding="same" )(conv3) conv5 = Conv2D(name="cookie", filters=256, kernel_size=(3, 3), activation="relu")(conv4) layer6 = Flatten()(conv5) layer7 = Dense(4096, activation="relu")(layer6) layer8 = Dropout(0.5)(layer7) layer9 = Dense(4096, activation="relu")(layer8) layer10 = Dense(num_classes, activation="softmax", name="bert")(layer9) model = Model(inputs=[inputs], outputs=[layer10]) ``` # TensorBoard TensorBoard is an essential tool to monitor our model and the training. Keras/TF will write a log of the model and the current training metrics after every epoch. All you need to do is type at the command line: tensorboard --logdir='./logs' And then open the browser to http://localhost:6006 ``` tb_log = keras.callbacks.TensorBoard( log_dir="./logs", # This is where the log files will go histogram_freq=10, write_graph=True, write_images=True, ) model.compile( loss=keras.losses.categorical_crossentropy, optimizer=keras.optimizers.Adam( learning_rate=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.01 ), metrics=[ "accuracy", keras.metrics.AUC(), keras.metrics.Precision(), keras.metrics.Recall(), ], ) model.summary() %load_ext tensorboard %tensorboard --logdir logs history = model.fit( x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, # 1=Show a progress bar validation_data=(x_test, y_test), callbacks=[tb_log], ) score = model.evaluate(x_test, y_test, verbose=1) print("Test loss:", score[0]) print("Test accuracy:", score[1]) print("Test AUC:", score[2]) print("Test Precision:", score[3]) print("Test Recall:", score[4]) import matplotlib.pyplot as plt %matplotlib inline ``` # Loss curves It's always a good idea to look at the loss curves. They can tell you if your model is indeed "learning" and can point out when it over-fits the training set. TensorBoard is the better way to monitor this, but it can also be done manually with matplotlib. 
``` plt.plot( range(1, epochs + 1), history.history["loss"], ".-", range(1, epochs + 1), history.history["val_loss"], ".-", ) plt.legend(["training loss", "testing loss"]) plt.title("Loss curve") plt.xlabel("Epoch") plt.ylabel("Loss") ``` # Predictions Now let's use the model to predict the test set images. ``` all_predictions = model.predict(x_test).argmax(axis=1) print(all_predictions) import numpy as np samples = [ 4, 83, 298, 1045, 3751, 5555, 7112, 8953, ] # Just print out some random examples from the test set plt.subplots(len(samples) // 2, 2, figsize=(10, 16)) for i, n in enumerate(samples): img = np.expand_dims( x_test[n, :, :, :], 0 ) # Numpy collapses the singleton dimension plt.subplot(len(samples) // 2, 2, i + 1) plt.imshow(img.squeeze(), cmap="gray") plt.axis("off") label = y_test[n].argmax() predicted_label = model.predict(img).argmax() # Predict for just one image plt.title("Actual = {}, Predicted = {}".format(label, predicted_label), fontsize=14) model.save("my_alexnet_model") !ls my_alexnet_model ```
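Since the model was written to disk above with `model.save("my_alexnet_model")`, it can be restored later for inference. A minimal sketch, reusing the `x_test` array already in memory (the variable name `restored_model` is just for illustration):

```
from tensorflow import keras

# Reload the SavedModel directory written by model.save() above.
restored_model = keras.models.load_model("my_alexnet_model")

# The restored model should reproduce the original model's predictions.
print(restored_model.predict(x_test[:16]).argmax(axis=1))
```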
# Harmonizome ETL: Cancer Cell Line Encyclopedia (CCLE) Created by: Charles Dai <br> Credit to: Moshe Silverstein Data Source: https://portals.broadinstitute.org/ccle/data ``` # appyter init from appyter import magic magic.init(lambda _=globals: _()) import sys import os from datetime import date import numpy as np import pandas as pd import matplotlib.pyplot as plt %matplotlib inline import harmonizome.utility_functions as uf import harmonizome.lookup as lookup %load_ext autoreload %autoreload 2 ``` ### Notebook Information ``` print('This notebook was run on:', date.today(), '\nPython version:', sys.version) ``` # Initialization ``` %%appyter hide_code {% do SectionField( name='data', title='Upload Data', img='load_icon.png' ) %} %%appyter code_eval {% do DescriptionField( name='description', text='The examples below were sourced from <a href="https://portals.broadinstitute.org/ccle/data" target="_blank">portals.broadinstitute.org</a>. The downloads require a login so clicking on the examples may not work, in which case they should be downloaded directly from the source website.', section='data' ) %} {% set matrix_file = FileField( constraint='.*\.gz$', name='expression_matrix', label='RNA-Seq Expression Matrix (gct.gz)', default='Input/CCLE/CCLE_RNAseq_genes_counts_20180929.gct.gz', examples={ 'CCLE_RNAseq_genes_counts_20180929.gct.gz': 'https://data.broadinstitute.org/ccle/CCLE_RNAseq_genes_counts_20180929.gct.gz' }, section='data') %} {% set sample_file= FileField( constraint='.*\.txt$', name='cell_annot', label='Cell Line Annotations (txt)', default='Input/CCLE/Cell_lines_annotations_20181226.txt', examples={ 'Cell_lines_annotations_20181226.txt': 'https://data.broadinstitute.org/ccle/Cell_lines_annotations_20181226.txt' }, section='data') %} ``` ### Load Mapping Dictionaries ``` symbol_lookup, geneid_lookup = lookup.get_lookups() ``` ### Output Path ``` output_name = 'ccle' path = 'Output/CCLE' if not os.path.exists(path): os.makedirs(path) ``` # Load Data ``` %%appyter code_exec matrix = pd.read_csv( {{matrix_file}}, sep='\t', index_col=0, skiprows=2, usecols=lambda c: c != 'Name') matrix.head() matrix.shape ``` ## Load Sample Metadata ``` %%appyter code_exec sample_meta = pd.read_csv( {{sample_file}}, sep='\t', usecols=['CCLE_ID', 'Name', 'Gender', 'Site_Primary', 'Histology'], index_col=1) sample_meta.head() sample_meta.shape ``` # Pre-process Data ## Map CCLE ID to Cell Line Name ``` matrix = matrix.rename(columns=dict(zip( sample_meta['CCLE_ID'], sample_meta.index))) matrix.index.name = 'Gene Symbol' matrix.columns.name = 'Cell Line' matrix.head() matrix.shape ``` ## Drop Missing Data from Sample Metadata ``` sample_meta = sample_meta.reset_index().dropna(subset=['Name']).set_index('Name') sample_meta.head() ``` ## Save Unfiltered Matrix to file ``` uf.save_data(matrix, path, output_name + '_matrix_unfiltered', compression='gzip', dtype=np.float32) ``` # Filter Data ## Map Gene Symbols to Up-to-date Approved Gene Symbols ``` matrix = uf.map_symbols(matrix, symbol_lookup) matrix.shape ``` ## Merge Duplicate Genes By Rows and Duplicate Columns ``` matrix = uf.merge(matrix, 'row') matrix = uf.merge(matrix, 'column') matrix.shape ``` ## Remove Data that is More Than 95% Missing and Impute Missing Data ``` matrix = uf.remove_impute(matrix) matrix.head() matrix.shape ``` ## Log2 Transform ``` matrix = uf.log2(matrix) matrix.head() ``` ## Normalize Matrix (Quantile Normalize the Matrix by Column) ``` matrix = uf.quantile_normalize(matrix) matrix.head() ``` ## Normalize Matrix (Z-Score 
the Rows) ``` matrix = uf.zscore(matrix) matrix.head() ``` ## Histogram of First Sample ``` matrix.iloc[:, 0].hist(bins=100) ``` ## Histogram of First Gene ``` matrix.iloc[0, :].hist(bins=100) ``` ## Save Filtered Matrix ``` uf.save_data(matrix, path, output_name + '_matrix_filtered', ext='tsv', compression='gzip') ``` # Analyze Data ## Create Gene List ``` gene_list = uf.gene_list(matrix, geneid_lookup) gene_list.head() gene_list.shape uf.save_data(gene_list, path, output_name + '_gene_list', ext='tsv', compression='gzip', index=False) ``` ## Create Attribute List ``` attribute_list = uf.attribute_list(matrix, sample_meta) attribute_list.head() attribute_list.shape uf.save_data(attribute_list, path, output_name + '_attribute_list', ext='tsv', compression='gzip') ``` ## Create matrix of Standardized values (values between -1, and 1) ``` standard_matrix = uf.standardized_matrix(matrix) standard_matrix.head() uf.save_data(standard_matrix, path, output_name + '_standard_matrix', ext='tsv', compression='gzip') ``` ## Plot of A Single Celltype, Normalized Value vs. Standardized Value ``` plt.plot(matrix[matrix.columns[0]], standard_matrix[standard_matrix.columns[0]], 'bo') plt.xlabel('Normalized Values') plt.ylabel('Standardized Values') plt.title(standard_matrix.columns[0]) plt.grid(True) ``` ## Create Ternary Matrix ``` ternary_matrix = uf.ternary_matrix(standard_matrix) ternary_matrix.head() uf.save_data(ternary_matrix, path, output_name + '_ternary_matrix', ext='tsv', compression='gzip') ``` ## Create Gene and Attribute Set Libraries ``` uf.save_setlib(ternary_matrix, 'gene', 'up', path, output_name + '_gene_up_set') uf.save_setlib(ternary_matrix, 'gene', 'down', path, output_name + '_gene_down_set') uf.save_setlib(ternary_matrix, 'attribute', 'up', path, output_name + '_attribute_up_set') uf.save_setlib(ternary_matrix, 'attribute', 'down', path, output_name + '_attribute_down_set') ``` ## Create Attribute Similarity Matrix ``` attribute_similarity_matrix = uf.similarity_matrix(standard_matrix.T, 'cosine') attribute_similarity_matrix.head() uf.save_data(attribute_similarity_matrix, path, output_name + '_attribute_similarity_matrix', compression='npz', symmetric=True, dtype=np.float32) ``` ## Create Gene Similarity Matrix ``` gene_similarity_matrix = uf.similarity_matrix(standard_matrix, 'cosine') gene_similarity_matrix.head() uf.save_data(gene_similarity_matrix, path, output_name + '_gene_similarity_matrix', compression='npz', symmetric=True, dtype=np.float32) ``` ## Create Gene-Attribute Edge List ``` edge_list = uf.edge_list(standard_matrix) uf.save_data(edge_list, path, output_name + '_edge_list', ext='tsv', compression='gzip') ``` # Create Downloadable Save File ``` uf.archive(path) ``` ### Link to download output files: [click here](./output_archive.zip)
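For intuition about the quantile-normalization step used earlier (`uf.quantile_normalize`), here is a small, self-contained pandas sketch of column-wise quantile normalization on a toy frame. It illustrates the general technique only; the exact implementation inside the harmonizome utilities (including how ties are handled) may differ.

```
import numpy as np
import pandas as pd

toy = pd.DataFrame({'s1': [5.0, 2.0, 3.0, 4.0],
                    's2': [4.0, 1.0, 4.0, 2.0],
                    's3': [3.0, 4.0, 6.0, 8.0]})

# The mean of the k-th smallest values across columns becomes the value assigned to rank k.
sorted_means = pd.Series(np.sort(toy.values, axis=0).mean(axis=1),
                         index=np.arange(1, len(toy) + 1))

# Replace each entry by the mean associated with its within-column rank.
ranks = toy.rank(method='min').astype(int)
quantile_normalized = ranks.apply(lambda col: col.map(sorted_means))
print(quantile_normalized)
```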
### Applying Model Agnostic Interpretation to Ensemble Models ``` #!conda install -c conda-forge Skater --yes %matplotlib inline import warnings warnings.filterwarnings('ignore') import matplotlib.pyplot as plt import pandas as pd # Reference for customizing matplotlib: https://matplotlib.org/users/style_sheets.html plt.style.use('ggplot') from sklearn.datasets import load_breast_cancer from sklearn.model_selection import train_test_split from sklearn.linear_model import LogisticRegression from sklearn.naive_bayes import GaussianNB from sklearn.ensemble import RandomForestClassifier, VotingClassifier from skater.core.explanations import Interpretation from skater.model import InMemoryModel data = load_breast_cancer() # Description of the data print(data.DESCR) pd.DataFrame(data.target_names) ``` ### Lets build an Ensemble of heterogeneous Models ``` X = data.data y = data.target X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = .2) clf1 = LogisticRegression(random_state=1) clf2 = RandomForestClassifier(random_state=1) clf3 = GaussianNB() eclf = VotingClassifier(estimators=[('lr', clf1), ('rf', clf2), ('gnb', clf3)], voting='soft') eclf = eclf.fit(X_train, y_train) clf1 = clf1.fit(X_train, y_train) clf2 = clf2.fit(X_train, y_train) clf3 = clf3.fit(X_train, y_train) models = {'lr':clf1, 'rf':clf2, 'gnb':clf3, 'ensemble':eclf} ``` ### How can we interpret or explain an Ensemble Model ? #### Feature Importance: ``` # Ensemble Classifier does not have feature importance enabled by default f, axes = plt.subplots(2, 2, figsize = (26, 18)) ax_dict = { 'lr':axes[0][0], 'rf':axes[1][0], 'gnb':axes[0][1], 'ensemble':axes[1][1] } interpreter = Interpretation(X_test, feature_names=data.feature_names) for model_key in models: pyint_model = InMemoryModel(models[model_key].predict_proba, examples=X_test) ax = ax_dict[model_key] interpreter.feature_importance.plot_feature_importance(pyint_model, ascending=True, ax=ax) ax.set_title(model_key) # Before interpreting, lets check on the accuracy of all the models from sklearn.metrics import f1_score for model_key in models: print("Model Type: {0} -> F1 Score: {1}". 
format(model_key, f1_score(y_test, models[model_key].predict(X_test)))) ``` #### Decision Boundaries ``` %matplotlib inline from skater.core.visualizer import decision_boundary as db X_train = pd.DataFrame(X_train) X_train.head() # feature_list = interpreter.feature_names # _, _ = db.plot_decision_boundary(eclf, X0=X_train.iloc[:, 0], X1=X_train.iloc[:, 1], # feature_names=[feature_list[0], feature_list[1]], # Y=y_train, mode='interactive', height=6, width=10, file_name='iplot') ``` #### Partial Dependence Plots with Interactive slider for controlling grid resolution ``` def understanding_interaction(): pyint_model = InMemoryModel(eclf.predict_proba, examples=X_test, target_names=data.target_names) # ['worst area', 'mean perimeter'] --> list(feature_selection.value) interpreter.partial_dependence.plot_partial_dependence(list(feature_selection.value), pyint_model, grid_resolution=grid_resolution.value, with_variance=True) # Lets understand interaction using 2-way interaction using the same covariates # feature_selection.value --> ('worst area', 'mean perimeter') axes_list = interpreter.partial_dependence.plot_partial_dependence([feature_selection.value], pyint_model, grid_resolution=grid_resolution.value, with_variance=True) ``` #### Understanding interaction using interactive widgets ``` #!conda install ipywidgets --yes #!jupyter nbextension enable --py --sys-prefix widgetsnbextension # One could further improve this by setting up an event callback using # asynchronous widgets import ipywidgets as widgets from ipywidgets import Layout from IPython.display import display from IPython.display import clear_output grid_resolution = widgets.IntSlider(description="GR", value=10, min=10, max=100) display(grid_resolution) # dropdown to select relevant features from the dataset feature_selection = widgets.SelectMultiple( options=tuple(data.feature_names), value=['worst area', 'mean perimeter'], description='Features', layout=widgets.Layout(display="flex", flex_flow='column', align_items = 'stretch'), disabled=False, multiple=True ) display(feature_selection) # Reference: http://ipywidgets.readthedocs.io/en/latest/examples/Widget%20Events.html button = widgets.Button(description="Generate Interactions") display(button) def on_button_clicked(button_func_ref): clear_output() understanding_interaction() button.on_click(on_button_clicked) ``` ### To Evaluate a point locally, lets apply Local Interpretation using an interactive slider ``` from skater.core.local_interpretation.lime.lime_tabular import LimeTabularExplainer from IPython.display import display, HTML, clear_output int_range = widgets.IntSlider(description="Index Selector", value=9, min=0, max=100) display(int_range) def on_value_change(change): index = change['new'] exp = LimeTabularExplainer(X_test, feature_names=data.feature_names, discretize_continuous=False, class_names=['p(Cancer)-malignant', 'p(No Cancer)-benign']) print("Model behavior at row: {}".format(index)) # Lets evaluate the prediction from the model and actual target label print("prediction from the model:{}".format(eclf.predict(X_test[index].reshape(1, -1)))) print("Target Label on the row: {}".format(y_test.reshape(1,-1)[0][index])) clear_output() display(HTML(exp.explain_instance(X_test[index], models['ensemble'].predict_proba).as_html())) int_range.observe(on_value_change, names='value') ``` ## Conclusion: Using global and local interpretation one is able to understand interactions between independent(input features) and dependent variable(P(Cancer)/P(No Cancer) by querying 
the model's behavior. Feature Importance helped us understand the relative weight the predictive model gives to each variable, while partial dependence plots and LIME helped us understand the interactions between variables that drive the prediction.
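As a brief appendix to the ensemble above: with `voting='soft'` and no weights, the `VotingClassifier` simply averages the class probabilities of its base estimators. The sketch below recomputes that average by hand for the classifiers trained earlier; because they were fit on the same split with the same random seeds, the result should generally agree with `eclf.predict`.

```
import numpy as np

# Average the predicted class probabilities of the three base models.
avg_proba = np.mean([clf.predict_proba(X_test) for clf in (clf1, clf2, clf3)], axis=0)
manual_prediction = avg_proba.argmax(axis=1)

print("Agreement with the soft-voting ensemble:",
      np.mean(manual_prediction == eclf.predict(X_test)))
```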
0.675015
0.841631
``` import pandas as pd pd.options.display.max_rows = 20 surveys_df = pd.read_csv("data/surveys.csv") surveys_df[(surveys_df['year'] > 1990) & (surveys_df['year'] < 2000)] import plotnine as p9 surveys_complete = pd.read_csv('data/surveys.csv') surveys_complete = surveys_complete.dropna() (p9.ggplot(data=surveys_complete, mapping=p9.aes(x='weight',y='hindfoot_length')) + p9.geom_point() ) surveys_plot = p9.ggplot(data=surveys_complete, mapping = p9.aes(x='weight',y='hindfoot_length')) surveys_plot + p9.geom_point() surveys_plot + p9.geom_point(alpha=0.1, color='red') surveys_plot = p9.ggplot(data=surveys_complete, mapping = p9.aes(x='weight', y='hindfoot_length', color='species_id')) (surveys_plot + p9.geom_point(alpha=0.1) + p9.xlab("Weight(g)") + p9.scale_x_log10() + p9.theme_bw() + p9.theme(text=p9.element_text(size=16))) plot = p9.ggplot(data=surveys_complete, mapping=p9.aes(x='species_id', y='weight')) plot + p9.geom_boxplot(alpha=0) + p9.geom_jitter(alpha=0.2) yearly_counts = surveys_complete.groupby(['year','species_id'])['species_id'].count() yearly_counts yearly_counts = yearly_counts.reset_index(name='counts') yearly_counts plot = p9.ggplot(data=yearly_counts, mapping=p9.aes(x='year',y='counts',color='species_id')) plot + p9.geom_line() (p9.ggplot(data=surveys_complete, mapping=p9.aes(x='weight', y='hindfoot_length', color='species_id')) + p9.geom_point(alpha=0.1) + p9.facet_wrap("sex")) # only select the years of interest survey_2000 = surveys_complete[surveys_complete["year"].isin([2000, 2001])] (p9.ggplot(data=survey_2000, mapping=p9.aes(x='weight', y='hindfoot_length', color='species_id')) + p9.geom_point(alpha=0.1) + p9.facet_grid("year ~ sex") ) my_custom_theme = p9.theme(axis_text_x = p9.element_text(color="grey", size=10, angle=90, hjust=.5), axis_text_y = p9.element_text(color="grey", size=10)) my_plot = (p9.ggplot(data=surveys_complete, mapping=p9.aes(x='factor(year)')) + p9.geom_bar() + my_custom_theme ) my_plot.save("plot.png",width=10, height=10,dpi=300) (p9.ggplot(data=surveys_complete, mapping=p9.aes(x='factor(year)')) + p9.geom_bar() ) ```
``` import os import random import json from itertools import cycle import pandas as pd import requests ``` # Download Necessary Files ``` required_files = ["title.ratings.tsv.gz", "title.basics.tsv.gz"] for file in required_files: if os.path.isfile(file): continue file_downloaded = requests.get(f"https://datasets.imdbws.com/{file}", allow_redirects=True) with open(file, 'wb') as new_file: new_file.write(file_downloaded.content) ``` # Read Movie Ratings ``` movies_ = pd.read_csv("title.ratings.tsv.gz", delimiter="\t", low_memory=False) extra_data_ = pd.read_csv("title.basics.tsv.gz", delimiter="\t", low_memory=False) movies = movies_.copy() extra_data = extra_data_.copy().set_index('tconst') ``` # Auxiliary Functions ``` def filter_movies(movies, votes, average_rate): filtered_movies = movies.copy() filtered_movies = filtered_movies[filtered_movies["numVotes"] >= votes] filtered_movies = filtered_movies[filtered_movies["averageRating"] >= average_rate] filtered_movies = filtered_movies.drop(["numVotes", "averageRating"], axis=1) filtered_movies = filtered_movies.set_index('tconst') return filtered_movies def remove_unpopular(movies_): movies = movies_.copy() movies = movies[movies["isAdult"] == "0"] movies = movies[movies["titleType"] == 'movie'] movies = movies[["startYear", "runtimeMinutes", "primaryTitle"]] return movies def clean_movies(movies_): movies = movies_.copy() movies = movies[(movies != '\\N').all(axis=1)] movies = movies[movies["runtimeMinutes"].astype(int) > 70] movies = movies.drop("runtimeMinutes", axis=1) movies = movies[movies["startYear"].astype(int) > 1995] movies["movieID"] = movies.index return movies def create_level(movies_, extra_data_, votes, average_rate): movies = movies_.copy() extra_data = extra_data_.copy() level = filter_movies(movies, votes=votes, average_rate=average_rate) level = extra_data.loc[level.index] level = remove_unpopular(level) level = clean_movies(level) return level ``` # Create Level Datasets ``` level0_movies = create_level(movies, extra_data, votes=250000, average_rate=8.0) level1_movies = create_level(movies, extra_data, votes=250000, average_rate=7.5) level2_movies = create_level(movies, extra_data, votes=100000, average_rate=7.5) level3_movies = create_level(movies, extra_data, votes=75000, average_rate=7.0) level4_movies = create_level(movies, extra_data, votes=25000, average_rate=7.0) len(level0_movies), len(level1_movies), len(level2_movies), len(level3_movies), len(level4_movies) ``` # Check Longest Movie Title ``` longest_title = sorted(level4_movies["primaryTitle"].to_numpy(), key=len, reverse=True)[0] longest_title, len(longest_title) ``` # Add Poster URL with OMDB API ``` def add_posters(movies_): movies = movies_.copy() apikeys = ["fcbcfdd4", "354ba942"] with open("top_movies_level4.json", "r") as dataset_file: movie_database = json.load(dataset_file)["data"] movie_posters = {movie[2]:movie[3] for movie in movie_database} index_cycle = cycle(list(range(len(apikeys)))) api_index = next(index_cycle) poster_urls = [] for movie_id in movies.index: if movie_id in movie_posters and movie_posters[movie_id] != "MISSING": poster_urls.append(movie_posters[movie_id]) continue try: apikey = apikeys[api_index] response = requests.get(f'http://omdbapi.com/?apikey={apikey}&i={movie_id}') poster_url = response.json()['Poster'] poster_url = poster_url.replace("300.jpg", "500.jpg") poster_urls.append(poster_url) api_index = next(index_cycle) except: poster_urls.append("MISSING") api_index = next(index_cycle) movies["poster_url"] = poster_urls 
return movies level0_movies = add_posters(level0_movies) level1_movies = add_posters(level1_movies) level2_movies = add_posters(level2_movies) level3_movies = add_posters(level3_movies) level4_movies = add_posters(level4_movies) assert len(level0_movies[level0_movies["poster_url"] == "MISSING"]) == 0 assert len(level1_movies[level1_movies["poster_url"] == "MISSING"]) == 0 assert len(level2_movies[level2_movies["poster_url"] == "MISSING"]) == 0 assert len(level3_movies[level3_movies["poster_url"] == "MISSING"]) == 0 assert len(level4_movies[level4_movies["poster_url"] == "MISSING"]) == 0 len(level0_movies), len(level1_movies), len(level2_movies), len(level3_movies), len(level4_movies) ``` # Export Data as JSON ``` level0_movies.to_json("top_movies_level0.json", orient="split", index=False) level1_movies.to_json("top_movies_level1.json", orient="split", index=False) level2_movies.to_json("top_movies_level2.json", orient="split", index=False) level3_movies.to_json("top_movies_level3.json", orient="split", index=False) level4_movies.to_json("top_movies_level4.json", orient="split", index=False) ``` # Libraries Used ``` !pip install watermark; %load_ext watermark %watermark -n -u -v -iv -w ```
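# Verify the Exported JSON Layout (optional)

The `add_posters` helper above indexes each exported row positionally (`movie[2]` for the movie ID, `movie[3]` for the poster URL), so it is worth confirming that the `orient="split"` files keep that column order. The cell below is a small sketch, not part of the original notebook; it assumes `top_movies_level0.json` was written by the export step. With `index=False`, the file should contain only `"columns"` and `"data"` keys, and the columns should come out as `startYear`, `primaryTitle`, `movieID`, `poster_url`.

```
# Sketch only: read one exported level file back and check the row layout.
import json
import pandas as pd

with open("top_movies_level0.json", "r") as f:
    raw = json.load(f)

print(list(raw.keys()))                              # expected: ['columns', 'data']
round_trip = pd.DataFrame(raw["data"], columns=raw["columns"])
print(round_trip.head())

first_row = raw["data"][0]
print(first_row[2], "->", first_row[3])              # movieID -> poster URL
```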
# Character-Level LSTM in PyTorch In this notebook, I'll construct a character-level LSTM with PyTorch. The network will train character by character on some text, then generate new text character by character. As an example, I will train on Anna Karenina. **This model will be able to generate new text based on the text from the book!** This network is based off of Andrej Karpathy's [post on RNNs](http://karpathy.github.io/2015/05/21/rnn-effectiveness/) and [implementation in Torch](https://github.com/karpathy/char-rnn). Below is the general architecture of the character-wise RNN. <img src="assets/charseq.jpeg" width="500"> First let's load in our required resources for data loading and model creation. ``` import numpy as np import torch from torch import nn import torch.nn.functional as F ``` ## Load in Data Then, we'll load the Anna Karenina text file and convert it into integers for our network to use. ``` # open text file and read in data as `text` with open('data/anna.txt', 'r') as f: text = f.read() ``` Let's check out the first 100 characters, make sure everything is peachy. According to the [American Book Review](http://americanbookreview.org/100bestlines.asp), this is the 6th best first line of a book ever. ``` text[:100] ``` ### Tokenization In the cells, below, I'm creating a couple **dictionaries** to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network. ``` # encode the text and map each character to an integer and vice versa # we create two dictionaries: # 1. int2char, which maps integers to characters # 2. char2int, which maps characters to unique integers chars = tuple(set(text)) int2char = dict(enumerate(chars)) char2int = {ch: ii for ii, ch in int2char.items()} # encode the text encoded = np.array([char2int[ch] for ch in text]) ``` And we can see those same characters from above, encoded as integers. ``` encoded[:100] ``` ## Pre-processing the data As you can see in our char-RNN image above, our LSTM expects an input that is **one-hot encoded** meaning that each character is converted into an integer (via our created dictionary) and *then* converted into a column vector where only it's corresponding integer index will have the value of 1 and the rest of the vector will be filled with 0's. Since we're one-hot encoding the data, let's make a function to do that! ``` def one_hot_encode(arr, n_labels): # Initialize the the encoded array one_hot = np.zeros((arr.size, n_labels), dtype=np.float32) # Fill the appropriate elements with ones one_hot[np.arange(one_hot.shape[0]), arr.flatten()] = 1. # Finally reshape it to get back to the original array one_hot = one_hot.reshape((*arr.shape, n_labels)) return one_hot # check that the function works as expected test_seq = np.array([[3, 5, 1]]) one_hot = one_hot_encode(test_seq, 8) print(one_hot) ``` ## Making training mini-batches To train on this data, we also want to create mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this: <img src="assets/[email protected]" width=500px> <br> In this example, we'll take the encoded characters (passed in as the `arr` parameter) and split them into multiple sequences, given by `batch_size`. Each of our sequences will be `seq_length` long. ### Creating Batches **1. The first thing we need to do is discard some of the text so we only have completely full mini-batches. 
** Each batch contains $N \times M$ characters, where $N$ is the batch size (the number of sequences in a batch) and $M$ is the seq_length or number of time steps in a sequence. Then, to get the total number of batches, $K$, that we can make from the array `arr`, you divide the length of `arr` by the number of characters per batch. Once you know the number of batches, you can get the total number of characters to keep from `arr`, $N * M * K$. **2. After that, we need to split `arr` into $N$ batches. ** You can do this using `arr.reshape(size)` where `size` is a tuple containing the dimensions sizes of the reshaped array. We know we want $N$ sequences in a batch, so let's make that the size of the first dimension. For the second dimension, you can use `-1` as a placeholder in the size, it'll fill up the array with the appropriate data for you. After this, you should have an array that is $N \times (M * K)$. **3. Now that we have this array, we can iterate through it to get our mini-batches. ** The idea is each batch is a $N \times M$ window on the $N \times (M * K)$ array. For each subsequent batch, the window moves over by `seq_length`. We also want to create both the input and target arrays. Remember that the targets are just the inputs shifted over by one character. The way I like to do this window is use `range` to take steps of size `n_steps` from $0$ to `arr.shape[1]`, the total number of tokens in each sequence. That way, the integers you get from `range` always point to the start of a batch, and each window is `seq_length` wide. > **TODO:** Write the code for creating batches in the function below. The exercises in this notebook _will not be easy_. I've provided a notebook with solutions alongside this notebook. If you get stuck, checkout the solutions. The most important thing is that you don't copy and paste the code into here, **type out the solution code yourself.** ``` def get_batches(arr, batch_size, seq_length): '''Create a generator that returns batches of size batch_size x seq_length from arr. Arguments --------- arr: Array you want to make batches from batch_size: Batch size, the number of sequences per batch seq_length: Number of encoded chars in a sequence ''' ## TODO: Get the number of batches we can make total_b_s = batch_size * seq_length n_batches = len(arr) // total_b_s ## TODO: Keep only enough characters to make full batches arr = arr[: n_batches * total_b_s] ## TODO: Reshape into batch_size rows arr = arr.reshape((batch_size, -1)) ## TODO: Iterate over the batches using a window of size seq_length for n in range(0, arr.shape[1], seq_length): # The features x = arr[:, n:n+seq_length] # The targets, shifted by one y = np.zeros_like(x) try: y[:, :-1], y[:, -1] = x[:, 1:], arr[:, n+seq_length] except IndexError: y[:, :-1], y[:, -1] = x[:, 1:], arr[:, 0] yield x, y ``` ### Test Your Implementation Now I'll make some data sets and we can check out what's going on as we batch data. Here, as an example, I'm going to use a batch size of 8 and 50 sequence steps. 
``` batches = get_batches(encoded, 8, 50) x, y = next(batches) # printing out the first 10 items in a sequence print('x\n', x[:10, :10]) print('\ny\n', y[:10, :10]) ``` If you implemented `get_batches` correctly, the above output should look something like ``` x [[25 8 60 11 45 27 28 73 1 2] [17 7 20 73 45 8 60 45 73 60] [27 20 80 73 7 28 73 60 73 65] [17 73 45 8 27 73 66 8 46 27] [73 17 60 12 73 8 27 28 73 45] [66 64 17 17 46 7 20 73 60 20] [73 76 20 20 60 73 8 60 80 73] [47 35 43 7 20 17 24 50 37 73]] y [[ 8 60 11 45 27 28 73 1 2 2] [ 7 20 73 45 8 60 45 73 60 45] [20 80 73 7 28 73 60 73 65 7] [73 45 8 27 73 66 8 46 27 65] [17 60 12 73 8 27 28 73 45 27] [64 17 17 46 7 20 73 60 20 80] [76 20 20 60 73 8 60 80 73 17] [35 43 7 20 17 24 50 37 73 36]] ``` although the exact numbers may be different. Check to make sure the data is shifted over one step for `y`. --- ## Defining the network with PyTorch Below is where you'll define the network. <img src="assets/charRNN.png" width=500px> Next, you'll use PyTorch to define the architecture of the network. We start by defining the layers and operations we want. Then, define a method for the forward pass. You've also been given a method for predicting characters. ### Model Structure In `__init__` the suggested structure is as follows: * Create and store the necessary dictionaries (this has been done for you) * Define an LSTM layer that takes as params: an input size (the number of characters), a hidden layer size `n_hidden`, a number of layers `n_layers`, a dropout probability `drop_prob`, and a batch_first boolean (True, since we are batching) * Define a dropout layer with `drop_prob` * Define a fully-connected layer with params: input size `n_hidden` and output size (the number of characters) * Finally, initialize the weights (again, this has been given) Note that some parameters have been named and given in the `__init__` function, and we use them and store them by doing something like `self.drop_prob = drop_prob`. --- ### LSTM Inputs/Outputs You can create a basic [LSTM layer](https://pytorch.org/docs/stable/nn.html#lstm) as follows ```python self.lstm = nn.LSTM(input_size, n_hidden, n_layers, dropout=drop_prob, batch_first=True) ``` where `input_size` is the number of characters this cell expects to see as sequential input, and `n_hidden` is the number of units in the hidden layers in the cell. And we can add dropout by adding a dropout parameter with a specified probability; this will automatically add dropout to the inputs or outputs. Finally, in the `forward` function, we can stack up the LSTM cells into layers using `.view`. With this, you pass in a list of cells and it will send the output of one cell into the next cell. We also need to create an initial hidden state of all zeros. 
This is done like so ```python self.init_hidden() ``` ``` # check if GPU is available train_on_gpu = torch.cuda.is_available() if(train_on_gpu): print('Training on GPU!') else: print('No GPU available, training on CPU; consider making n_epochs very small.') class CharRNN(nn.Module): def __init__(self, tokens, n_hidden=256, n_layers=2, drop_prob=0.5, lr=0.001): super().__init__() self.drop_prob = drop_prob self.n_layers = n_layers self.n_hidden = n_hidden self.lr = lr # creating character dictionaries self.chars = tokens self.int2char = dict(enumerate(self.chars)) self.char2int = {ch: ii for ii, ch in self.int2char.items()} ## TODO: define the layers of the model self.lstm = nn.LSTM(len(self.chars), n_hidden, n_layers, dropout=drop_prob, batch_first=True) self.dropout = nn.Dropout(drop_prob) self.fc = nn.Linear(n_hidden, len(self.chars)) def forward(self, x, hidden): ''' Forward pass through the network. These inputs are x, and the hidden/cell state `hidden`. ''' ## TODO: Get the outputs and the new hidden state from the lstm r_output, hidden = self.lstm(x, hidden) out = self.dropout(r_output) out = out.contiguous().view(-1, self.n_hidden) out = self.fc(out) # return the final output and the hidden state return out, hidden def init_hidden(self, batch_size): ''' Initializes hidden state ''' # Create two new tensors with sizes n_layers x batch_size x n_hidden, # initialized to zero, for hidden state and cell state of LSTM weight = next(self.parameters()).data if (train_on_gpu): hidden = (weight.new(self.n_layers, batch_size, self.n_hidden).zero_().cuda(), weight.new(self.n_layers, batch_size, self.n_hidden).zero_().cuda()) else: hidden = (weight.new(self.n_layers, batch_size, self.n_hidden).zero_(), weight.new(self.n_layers, batch_size, self.n_hidden).zero_()) return hidden ``` ## Time to train The train function gives us the ability to set the number of epochs, the learning rate, and other parameters. Below we're using an Adam optimizer and cross entropy loss since we are looking at character class scores as output. We calculate the loss and perform backpropagation, as usual! A couple of details about training: >* Within the batch loop, we detach the hidden state from its history; this time setting it equal to a new *tuple* variable because an LSTM has a hidden state that is a tuple of the hidden and cell states. * We use [`clip_grad_norm_`](https://pytorch.org/docs/stable/_modules/torch/nn/utils/clip_grad.html) to help prevent exploding gradients. 
``` def train(net, data, epochs=10, batch_size=10, seq_length=50, lr=0.001, clip=5, val_frac=0.1, print_every=10): ''' Training a network Arguments --------- net: CharRNN network data: text data to train the network epochs: Number of epochs to train batch_size: Number of mini-sequences per mini-batch, aka batch size seq_length: Number of character steps per mini-batch lr: learning rate clip: gradient clipping val_frac: Fraction of data to hold out for validation print_every: Number of steps for printing training and validation loss ''' net.train() opt = torch.optim.Adam(net.parameters(), lr=lr) criterion = nn.CrossEntropyLoss() # create training and validation data val_idx = int(len(data)*(1-val_frac)) data, val_data = data[:val_idx], data[val_idx:] if(train_on_gpu): net.cuda() counter = 0 n_chars = len(net.chars) for e in range(epochs): # initialize hidden state h = net.init_hidden(batch_size) for x, y in get_batches(data, batch_size, seq_length): counter += 1 # One-hot encode our data and make them Torch tensors x = one_hot_encode(x, n_chars) inputs, targets = torch.from_numpy(x), torch.from_numpy(y) if(train_on_gpu): inputs, targets = inputs.cuda(), targets.cuda() # Creating new variables for the hidden state, otherwise # we'd backprop through the entire training history h = tuple([each.data for each in h]) # zero accumulated gradients net.zero_grad() # get the output from the model output, h = net(inputs, h) # calculate the loss and perform backprop loss = criterion(output, targets.view(batch_size*seq_length).long()) loss.backward() # `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs. nn.utils.clip_grad_norm_(net.parameters(), clip) opt.step() # loss stats if counter % print_every == 0: # Get validation loss val_h = net.init_hidden(batch_size) val_losses = [] net.eval() for x, y in get_batches(val_data, batch_size, seq_length): # One-hot encode our data and make them Torch tensors x = one_hot_encode(x, n_chars) x, y = torch.from_numpy(x), torch.from_numpy(y) # Creating new variables for the hidden state, otherwise # we'd backprop through the entire training history val_h = tuple([each.data for each in val_h]) inputs, targets = x, y if(train_on_gpu): inputs, targets = inputs.cuda(), targets.cuda() output, val_h = net(inputs, val_h) val_loss = criterion(output, targets.view(batch_size*seq_length).long()) val_losses.append(val_loss.item()) net.train() # reset to train mode after iterationg through validation data print("Epoch: {}/{}...".format(e+1, epochs), "Step: {}...".format(counter), "Loss: {:.4f}...".format(loss.item()), "Val Loss: {:.4f}".format(np.mean(val_losses))) ``` ## Instantiating the model Now we can actually train the network. First we'll create the network itself, with some given hyperparameters. Then, define the mini-batches sizes, and start training! ``` ## TODO: set you model hyperparameters # define and print the net n_hidden= 512 n_layers= 2 net = CharRNN(chars, n_hidden, n_layers) print(net) ``` ### Set your training hyperparameters! ``` batch_size = 128 seq_length = 100 n_epochs = 2 # start small if you are just testing initial behavior # train the model train(net, encoded, epochs=n_epochs, batch_size=batch_size, seq_length=seq_length, lr=0.001, print_every=10) ``` ## Getting the best model To set your hyperparameters to get the best performance, you'll want to watch the training and validation losses. If your training loss is much lower than the validation loss, you're overfitting. 
Increase regularization (more dropout) or use a smaller network. If the training and validation losses are close, you're underfitting so you can increase the size of the network. ## Hyperparameters Here are the hyperparameters for the network. In defining the model: * `n_hidden` - The number of units in the hidden layers. * `n_layers` - Number of hidden LSTM layers to use. We assume that dropout probability and learning rate will be kept at the default, in this example. And in training: * `batch_size` - Number of sequences running through the network in one pass. * `seq_length` - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here. * `lr` - Learning rate for training Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to [where it originally came from](https://github.com/karpathy/char-rnn#tips-and-tricks). > ## Tips and Tricks >### Monitoring Validation Loss vs. Training Loss >If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular: > - If your training loss is much lower than validation loss then this means the network might be **overfitting**. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on. > - If your training/validation loss are about equal then your model is **underfitting**. Increase the size of your model (either number of layers or the raw number of neurons per layer) > ### Approximate number of parameters > The two most important parameters that control the model are `n_hidden` and `n_layers`. I would advise that you always use `n_layers` of either 2/3. The `n_hidden` can be adjusted based on how much data you have. The two important quantities to keep track of here are: > - The number of parameters in your model. This is printed when you start training. > - The size of your dataset. 1MB file is approximately 1 million characters. >These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples: > - I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make `n_hidden` larger. > - I have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss. > ### Best models strategy >The winning strategy to obtaining very good models (if you have the compute time) is to always err on making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0,1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end. 
>It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance. >By the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative. ## Checkpoint After training, we'll save the model so we can load it again later if we need too. Here I'm saving the parameters needed to create the same architecture, the hidden layer hyperparameters and the text characters. ``` # change the name, for saving multiple files model_name = 'rnn_x_epoch.net' checkpoint = {'n_hidden': net.n_hidden, 'n_layers': net.n_layers, 'state_dict': net.state_dict(), 'tokens': net.chars} with open(model_name, 'wb') as f: torch.save(checkpoint, f) ``` --- ## Making Predictions Now that the model is trained, we'll want to sample from it and make predictions about next characters! To sample, we pass in a character and have the network predict the next character. Then we take that character, pass it back in, and get another predicted character. Just keep doing this and you'll generate a bunch of text! ### A note on the `predict` function The output of our RNN is from a fully-connected layer and it outputs a **distribution of next-character scores**. > To actually get the next character, we apply a softmax function, which gives us a *probability* distribution that we can then sample to predict the next character. ### Top K sampling Our predictions come from a categorical probability distribution over all the possible characters. We can make the sample text and make it more reasonable to handle (with less variables) by only considering some $K$ most probable characters. This will prevent the network from giving us completely absurd characters while allowing it to introduce some noise and randomness into the sampled text. Read more about [topk, here](https://pytorch.org/docs/stable/torch.html#torch.topk). ``` def predict(net, char, h=None, top_k=None): ''' Given a character, predict the next character. Returns the predicted character and the hidden state. ''' # tensor inputs x = np.array([[net.char2int[char]]]) x = one_hot_encode(x, len(net.chars)) inputs = torch.from_numpy(x) if(train_on_gpu): inputs = inputs.cuda() # detach hidden state from history h = tuple([each.data for each in h]) # get the output of the model out, h = net(inputs, h) # get the character probabilities p = F.softmax(out, dim=1).data if(train_on_gpu): p = p.cpu() # move to cpu # get top characters if top_k is None: top_ch = np.arange(len(net.chars)) else: p, top_ch = p.topk(top_k) top_ch = top_ch.numpy().squeeze() # select the likely next character with some element of randomness p = p.numpy().squeeze() char = np.random.choice(top_ch, p=p/p.sum()) # return the encoded value of the predicted char and the hidden state return net.int2char[char], h ``` ### Priming and generating text Typically you'll want to prime the network so you can build up a hidden state. Otherwise the network will start out generating characters at random. In general the first bunch of characters will be a little rough since it hasn't built up a long history of characters to predict from. 
``` def sample(net, size, prime='The', top_k=None): if(train_on_gpu): net.cuda() else: net.cpu() net.eval() # eval mode # First off, run through the prime characters chars = [ch for ch in prime] h = net.init_hidden(1) for ch in prime: char, h = predict(net, ch, h, top_k=top_k) chars.append(char) # Now pass in the previous character and get a new one for ii in range(size): char, h = predict(net, chars[-1], h, top_k=top_k) chars.append(char) return ''.join(chars) print(sample(net, 1000, prime='Anna', top_k=5)) ``` ## Loading a checkpoint ``` # Here we have loaded in a model that trained over 20 epochs `rnn_20_epoch.net` with open('rnn_x_epoch.net', 'rb') as f: checkpoint = torch.load(f) loaded = CharRNN(checkpoint['tokens'], n_hidden=checkpoint['n_hidden'], n_layers=checkpoint['n_layers']) loaded.load_state_dict(checkpoint['state_dict']) # Sample using a loaded model print(sample(loaded, 2000, top_k=5, prime="And Levin said")) ```
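## Experimenting with `top_k`

Since `sample` and the `loaded` network are already in scope, a quick way to build intuition for top-K sampling is to generate text at a few different settings: a small `top_k` restricts the choice to the most probable characters (more conservative, repetitive output), while `top_k=None` samples from the full distribution. This is a small optional experiment, not part of the original notebook.

```
# Sketch only: compare generated text for several top_k settings.
for k in (2, 5, None):
    print("--- top_k = {} ---".format(k))
    print(sample(loaded, 300, prime="And Levin said", top_k=k))
    print()
```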
``` # Imports import numpy as np import pandas as pd import warnings warnings.filterwarnings("ignore") %matplotlib inline import matplotlib.pyplot as plt import statsmodels as sm import statsmodels.api from statsmodels.tsa.stattools import acf, pacf from statsmodels.graphics.tsaplots import plot_acf, plot_pacf from statsmodels.tsa.stattools import kpss from statsmodels.tsa.stattools import adfuller import pmdarima as pm from pmdarima import model_selection from sklearn.metrics import mean_squared_error import matplotlib.pyplot as plt import numpy as np import sys import pandas as pd import statsmodels as sm import warnings from scipy.stats import norm from statsmodels.tsa.stattools import acf from statsmodels.tsa.seasonal import seasonal_decompose from statsmodels.tsa.holtwinters import SimpleExpSmoothing from sklearn.linear_model import LinearRegression from statsmodels.tsa.holtwinters import ExponentialSmoothing from math import sqrt from multiprocessing import cpu_count from joblib import Parallel from joblib import delayed from warnings import catch_warnings from warnings import filterwarnings # Reading the dataset df = pd.read_csv("VIBEBTC-1h-data.csv") # Looking inside the dataset df.head(10) # Looking inside the dataset df.tail(10) # Setting time series df['timestamp'] = pd.to_datetime(df['timestamp'], format='%Y-%m-%d') df = df.set_index("timestamp") # Preparing time-series dataset df.drop(["close_time","quote_av","trades","tb_base_av","tb_quote_av","ignore"],axis=1,inplace=True) # Time-series plot df["close"].plot(figsize=(12, 4)) # Making dataset lenght multiple of the window size indx = df.shape[0] - int(df.shape[0]/110)*110 df=df[indx:] df.shape # Searching for nan or infinity values np.all(np.isfinite(df)) # # In case that is not finite # df.fillna(method='ffill', inplace=True) # np.all(np.isfinite(df)) # Set the target column target_column = "close" def rolling_diagnostics(series, window=48): rolling = series.rolling(window) # create and customize the figures: top and bottom fig = plt.figure(figsize=(12, 6)) ax_top = fig.add_subplot(211, title="Rolling mean", xlabel="Date", ylabel="value") ax_bottom = fig.add_subplot(212, title="Rolling std", sharex=ax_top, xlabel="Date", ylabel="std") # draw plots: # series and rolling mean rolling.mean().plot(ax=ax_top) series.plot(ax=ax_top, color="black", lw=2, alpha=.25, zorder=-10) ax_top.grid(which="major", axis="both") # rolling std rolling.std().plot(ax=ax_bottom) ax_bottom.grid(which="major", axis="both") plt.savefig('Rolling_Diagnostics.png') fig.tight_layout() return fig def yearly_seasonality_diagnostics(series, fraction=0.66, period="day"): # use nonparametric local linear regression for preliminary trend estimation trend = sm.api.nonparametric.lowess(series, np.r_[:len(series)], frac=fraction, it=5) # group by year and calculate the mean and std by = getattr(series.index, period, "day") season_groupby = (series - trend[:, 1]).groupby(by) seas_mean, seas_std = season_groupby.mean(), season_groupby.std() # create and customize the figures: top and bottom fig = plt.figure(figsize=(12, 6)) ax_top = fig.add_subplot(211, title="Trend", xlabel="Date") ax_bottom = fig.add_subplot(212, title="Seasonality", xlabel=period) # draw plots: # series and trend pd.Series(trend[:, 1], index=series.index).plot(ax=ax_top) series.plot(ax=ax_top, color="black", lw=2, alpha=.25, zorder=-10) ax_top.grid(which="major", axis="both") # seasonality and 90% normal confidence interval ax_bottom.plot(1 + np.r_[:len(seas_mean)], seas_mean, lw=2) 
ax_bottom.fill_between(1 + np.r_[:len(seas_mean)], seas_mean - 1.96 * seas_std, seas_mean + 1.96 * seas_std, zorder=-10, color="C1", alpha=0.15) ax_bottom.grid(which="major", axis="both") plt.savefig('Seasonality_Diagnostics.png') fig.tight_layout() return fig def correlation_diagnostics(series, lags=48): # create and customize the figures: left and right fig = plt.figure(figsize=(12, 3)) ax_left, ax_right = fig.subplots( nrows=1, ncols=2, sharey=True, sharex=True, subplot_kw={"xlabel": "Lag", "ylim": (-1.1, 1.1)}) # draw plots using function from statsmodels plot_acf(series, ax_left, lags=lags, zero=False, alpha=0.05, title="Sample Autocorrelation", marker=None) plot_pacf(series, ax_right, lags=lags, zero=False, alpha=0.05, title="Sample Partial Autocorrelation", marker=None) plt.savefig('Correlation_Diagnostics.png') fig.tight_layout() return fig def stat_test_diagnostics(series): return { "ADF": adfuller(series, regression="ct")[:2], "KPSS": kpss(series, regression="c")[:2], } # Rolling characteristics, Correlation analysis, Testing hypotheses for processes stationarity def diagnostics(series, window=250, fraction=0.25, lags=250): # rolling statistics rolling_diagnostics(series, window=window) plt.show() plt.close() # rough seasonality yearly_seasonality_diagnostics(series, fraction=fraction) plt.show() plt.close() # autocorrelations correlation_diagnostics(series, lags=lags) plt.show() plt.close() return stat_test_diagnostics(series) # Time Series Visual Diagnostic diagnostics(df[target_column], window=36) ``` # ARIMA Baseline Model ``` data = df[target_column] def mean_absolute_percent_error(y_true, y_pred): pct_error = abs(y_true - y_pred) / abs(y_true) return pct_error.mean(axis=0) * 100 # 1 ARIMA Baseline Model def ARIMA_Model(holdout,dataset): # Fit a simple auto_arima model modl = pm.auto_arima(dataset, start_p=0, start_q=0, start_P=0, start_Q=0, max_p=5, max_q=5, max_P=5, max_Q=5, seasonal=True, stepwise=True, suppress_warnings=True, D=10, max_D=10, error_action='ignore') # Create predictions for the future, evaluate on test preds, conf_int = modl.predict(holdout, return_conf_int=True) return preds, conf_int # Validating the model (Sliding Window) loop_value = int(len(data)/100) train_window_size = 100 test_window_size = 10 step_size = train_window_size + test_window_size arima_prediction = [] for i in range(0,loop_value): arima_pred, arima_config = ARIMA_Model(test_window_size,data.iloc[i*train_window_size:(i+1)*train_window_size]) arima_prediction.append(arima_pred) # Compute Real Values every 100 hours r_value=[] for i in range(1,loop_value+1): v= data.iloc[i*100:i*train_window_size + test_window_size] r_value.append(v) # Computing metrics (MAPE) arima_mape_list=[] for i in range(0,len(r_value)): mape=mean_absolute_percent_error(r_value[i],arima_prediction[i]) arima_mape_list.append(mape) # Mean Value of MAPE arima_MAPE = sum(arima_mape_list)/len(arima_mape_list) # Print MAPE print("The Mean Absolute Percentage Error in ARIMA Model is equal to",round(arima_MAPE,2)) # Train-test Split train = data[10:] test = data.tail(10) # Forecasting t+10 timesteps arima_forecast, arima_config = ARIMA_Model(10,train) # Plot Forecasting Values fig, ax = plt.subplots(figsize=(16, 10)) ax.plot(train[2100:].index, train.values[2100:]); ax.plot(test.index, test.values, label='truth'); ax.plot(test.index, arima_forecast, linestyle='--', color='#ff7823'); ax.set_title("ARIMA t+10 Forecasting"); plt.savefig('ARIMA t+10 Forecasting.png') ``` # Theta Baseline Model ``` # 2 Theta Baseline Model # Step 
# Theta Baseline Model

```
# 2 Theta Baseline Model
# Step 1: Check for seasonality
# Step 2: Decompose seasonality if the series is deemed seasonal
# Step 3: Apply the Theta method
# Step 4: Reseasonalize the resulting forecast
def sesThetaF(y, s_period, h=10, level=np.array([90, 95, 99])):
    """
    @param y        : array-like time series data
    @param s_period : the number of observations before the seasonal pattern repeats
    @param h        : number of periods to forecast
    @param level    : confidence levels for prediction intervals
    """
    if not s_period:
        print('ERROR: s_period must be a positive integer.')
        sys.exit()

    fcast = {}  # store result

    # Check seasonality
    x = y.copy()
    n = y.index.size
    m = s_period
    if m > 1 and n > 2 * m:
        r = (acf(x, nlags=m))[1:]
        temp = np.delete(r, m - 1)
        stat = np.sqrt((1 + 2 * np.sum(np.square(temp))) / n)
        seasonal = (abs(r[m - 1]) / stat) > norm.cdf(0.95)
    else:
        seasonal = False

    # Seasonal decomposition
    origx = x.copy()
    if seasonal:
        decomp = seasonal_decompose(x, model='multiplicative')
        if np.all(np.abs(decomp.seasonal) < 1e-10):
            warnings.warn('Seasonal indexes equal to zero. Using non-seasonal Theta method')
        else:
            x = decomp.observed / decomp.seasonal

    # Find theta lines
    model = SimpleExpSmoothing(x).fit()
    fcast['mean'] = model.forecast(h)
    num = np.array(range(0, n))
    temp = LinearRegression().fit(num.reshape(-1, 1), x).coef_
    temp = temp / 2
    alpha = np.maximum(1e-10, model.params['smoothing_level'])
    fcast['mean'] = fcast['mean'] + temp * (np.array(range(0, h)) + (1 - (1 - alpha)**n) / alpha)

    # Reseasonalize
    if seasonal:
        fcast['mean'] = fcast['mean'] * np.repeat(decomp.seasonal[-m:], (1 + h//m))[:h]
        fcast['fitted'] = model.predict(x.index[0], x.index[n - 1]) * decomp.seasonal
    else:
        fcast['fitted'] = model.predict(x.index[0], x.index[n - 1])

    fcast['residuals'] = origx - fcast['fitted']
    return fcast

# Prediction Intervals
data = pd.Series(df['close']).asfreq("H")
data.fillna(method='ffill', inplace=True)
np.all(np.isfinite(data))

# Validating the model (sliding window)
theta_pred_list = []
for i in range(0, loop_value):
    theta_pred = sesThetaF(data[i*100:(i+1)*100], s_period=1, h=10)
    theta_pred_list.append(theta_pred['mean'])

r_value = []
for i in range(1, loop_value + 1):
    v = data.iloc[i*100:i*train_window_size + test_window_size]
    r_value.append(v)

# Computing metrics (MAPE)
theta_mape_list = []
for i in range(0, len(r_value)):
    mape = mean_absolute_percent_error(r_value[i], theta_pred_list[i])
    theta_mape_list.append(mape)

# Mean value of MAPE
theta_MAPE = sum(theta_mape_list) / len(theta_mape_list)

# Print MAPE
print("The Mean Absolute Percentage Error in Theta Model is equal to", round(theta_MAPE, 2))

# Forecasting t+10 timesteps
theta_conf = sesThetaF(data, s_period=1, h=10)

# Plot forecast values
mean = theta_conf['mean']
fitted = theta_conf['fitted']
residuals = theta_conf['residuals']

plt.figure(figsize=(16, 10))
plt.plot(fitted, marker='.', color='red', label='In-sample Fitted')
plt.plot(mean, marker='*', color='blue', label='Forecast')
plt.plot(residuals, marker='', color='green', label='Residuals')
plt.title('Standard Theta Model')
plt.legend()
plt.savefig('Standard Theta Model t+10 Forecasting.png')
plt.show()
```
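The validation above calls `sesThetaF` with `s_period=1`, so the seasonal branch of the function is never exercised. The sketch below is only illustrative: the 24-hour period is an assumption for hourly data, and whether deseasonalization actually happens depends on the autocorrelation test inside the function.

```
# Hedged sketch: call the Theta helper with an assumed daily (24-hour) seasonal period.
# `data` is the hourly close series prepared above; h=10 matches the other baselines.
fc = sesThetaF(data, s_period=24, h=10)
print(fc['mean'])                   # 10-step-ahead forecast
print(fc['residuals'].describe())   # in-sample residual summary
```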
# HW Exponential Smoothing Baseline Model

```
# Dataset pre-processing
data = df[target_column]
data = pd.Series(df['close']).asfreq("H")
np.all(np.isfinite(data))
data.fillna(method='ffill', inplace=True)
np.all(np.isfinite(data))

# 3 HWES Baseline Model
exp_smooth_pred_list = []
for i in range(0, loop_value):
    model = ExponentialSmoothing(data[i*100:(i+1)*100], freq="H")
    model_fit = model.fit()
    # make prediction
    yhat = model_fit.predict(100, 109)
    exp_smooth_pred_list.append(yhat)

exp_smooth_mape_list = []
for i in range(0, len(r_value)):
    mape = mean_absolute_percent_error(r_value[i], exp_smooth_pred_list[i])
    exp_smooth_mape_list.append(mape)

exp_smooth_MAPE = sum(exp_smooth_mape_list) / len(exp_smooth_mape_list)

# Print MAPE
print("The Mean Absolute Percentage Error in Exponential Smoothing Method is equal to", round(exp_smooth_MAPE, 2))

# Train-test split: hold out the last 10 hours as the test set
train = data[:-10]
test = data.tail(10)

# Forecasting t+10 timesteps
model = ExponentialSmoothing(train, freq="H")
model_fit = model.fit()
# make prediction
yhat = model_fit.predict(len(train), len(train) + 9)

# Plot forecast values
fig, ax = plt.subplots(figsize=(16, 10))
ax.plot(train[2100:].index, train.values[2100:]);
ax.plot(test.index, test.values, label='truth');
# ax.plot(test.index, yhat, linestyle='--', color='#ff7823');
ax.set_title("Holt-Winter's Seasonal Smoothing");
plt.savefig("Holt-Winter's Seasonal Smoothing t+10 Forecasting.png")
```
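The baseline above fits `ExponentialSmoothing` with its default settings, i.e. without explicit trend or seasonal components. A minimal sketch of an explicitly configured additive Holt-Winters variant is shown below; the 24-hour seasonal period is an assumption for hourly data, not something tuned in this notebook.

```
# Hedged sketch: additive trend + assumed daily seasonality (24 hours).
hw_model = ExponentialSmoothing(train, trend="add", seasonal="add", seasonal_periods=24)
hw_fit = hw_model.fit()
hw_forecast = hw_fit.forecast(10)   # t+10 forecast, comparable to the other baselines
print(hw_forecast)
```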
# Bayesian Demand Models for Dynamic Price Optimization

In this notebook, we demonstrate how simple demand models can be fitted using a probabilistic programming framework, specifically PyMC3. This type of model can be useful in dynamic pricing applications; for example, it can be combined with the Thompson sampling algorithm.

```
import pymc3 as pm
from pymc3 import *
import theano
import theano.tensor as tt
print('Running on PyMC3 v{}'.format(pm.__version__))

import numpy as np
from scipy import stats

from matplotlib import pylab as plt
import seaborn as sns
sns.set_style("whitegrid")

import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
```

# Example 1: Poisson-Gamma Demand Model

We consider the following scenario:
* The seller offers some product to the market at price $p_t$ for time step $t$
* Prices are limited to some discrete set $p_1, \ldots, p_k$
* For a given price, we have observed Poisson distributed demand samples $d_1, \ldots, d_n$
* We assume that the prior demand distribution is gamma

The code snippet below shows how to compute the posterior demand distribution for a given price under the above assumptions.

```
d0 = [20, 28, 24, 20, 23]   # observed demand samples for a certain price (n = 5)

prior_a = 15
prior_b = 1

with pm.Model() as m:
    d = pm.Gamma('theta', prior_a, prior_b)   # prior distribution
    pm.Poisson('d0', d, observed = d0)        # likelihood
    samples = pm.sample(1000)                 # draw samples from the posterior

x = np.linspace(10, 30, 50)
fig = plt.figure(figsize=(10, 5))
sns.lineplot(x, stats.gamma.pdf(x, prior_a), label='Prior')
sns.distplot(samples.get_values('theta'), fit=stats.gamma, kde=False, label='Posterior')
plt.ylabel('p(Demand)')
plt.xlabel('Demand (Units)')
plt.legend()
plt.show()
```

# Example 2: Constant-Elasticity Demand Model

* The second scenario assumes the constant-elasticity model.
* We observe price-demand pairs and fit the model $d = b\cdot p^{-c}$ where $c$ is the elasticity coefficient.
* We use the logarithmic form of the model for stability and convenience: $\log d = \log b - c \cdot \log p$

```
# (offered price, $) : (observed demand, units) pairs
price_demand = {
    15: 20,
    14: 18,
    13: 35,
    12: 50,
    11: 65
}

p0, d0 = list(price_demand.keys()), list(price_demand.values())

with pm.Model() as m:
    log_b = pm.Normal('log_b', sd = 5)               # priors
    c = pm.HalfNormal('c', sd = 5)                   # assume the elasticity to be non-negative

    log_d = log_b - c * np.log(p0)                   # demand model

    pm.Poisson('d0', np.exp(log_d), observed = d0)   # likelihood

    s = pm.sample(1000)                              # inference

p = np.linspace(10, 16)                              # price range
d_means = np.exp(s.log_b - s.c * np.log(p).reshape(-1, 1))[:, :500]

fig = plt.figure(figsize=(10, 5))
plt.plot(p, d_means, c = 'k', alpha = 0.01)
plt.plot(p0, d0, 'ko', markeredgewidth=1.5, markerfacecolor='w', markersize=10)
plt.xlabel('Price ($)')
plt.ylabel('Demand (Units)')
plt.show()
```
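Beyond plotting posterior demand curves, it is often useful to summarize the posterior of the parameters directly. A minimal sketch, reusing the trace `s` from the cell above (the percentile choices are arbitrary):

```
# Hedged sketch: summarize the posterior elasticity and scale from Example 2.
c_samples = s.get_values('c')
b_samples = np.exp(s.get_values('log_b'))
print('elasticity c, 5th/50th/95th percentiles:', np.percentile(c_samples, [5, 50, 95]))
print('scale b,      5th/50th/95th percentiles:', np.percentile(b_samples, [5, 50, 95]))
```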
# Example 3: Two Related Products

* The third example shows how the model can be extended to incorporate cross-product dependencies.
* We assume two products and impose correlations between their elasticity coefficients.

```
price_demand = [
    { 15: 20, 14: 18, 13: 35, 12: 50, 11: 65},   # product 1
    { 15: 10, 14: 12, 13: 13, 12: 17, 11: 20}    # product 2
]

p01, d01 = list(price_demand[0].keys()), list(price_demand[0].values())
p02, d02 = list(price_demand[1].keys()), list(price_demand[1].values())

fig = plt.figure(figsize=(10, 5))
plt.plot(p01, d01, 'ko', markeredgewidth=1.5, markerfacecolor='w', markersize=10)
plt.plot(p02, d02, 'ks', markeredgewidth=1.5, markerfacecolor='w', markersize=10)
plt.xlabel('Price ($)')
plt.ylabel('Demand (Units)')
plt.show()

p, d = np.vstack([p01, p02]), np.vstack([d01, d02])

with pm.Model() as m:
    # priors
    log_b_mu, log_b_cov = np.zeros(2), 10*np.eye(2)
    log_b = pm.MvNormal('log_b', mu=log_b_mu, cov=log_b_cov, shape=(2,))

    c_mu = np.zeros(2)
    c_cov = 10 * np.array([[ 1.0, 0.9],
                           [ 0.9, 1.0]])
    c = pm.MvNormal('c', mu=c_mu, cov=c_cov, shape=(2,))

    log_d1 = log_b - c * np.log(p.T)                    # demand model

    pm.Poisson('d0', np.exp(log_d1), observed = d.T)    # likelihood

    s = pm.sample(1000)                                 # inference

p = np.linspace(10, 16)                                 # price range
d_means = [ np.exp(s.log_b[:, i] - s.c[:, i] * np.log(p).reshape(-1, 1))[:, :500] for i in [0, 1] ]

fig, ax = plt.subplots(2, 1, figsize=(8, 8))
ax[0].plot(p, d_means[0], c = 'k', alpha = 0.01)
ax[0].plot(p01, d01, 'ko', markeredgewidth=1.5, markerfacecolor='w', markersize=10)
ax[0].set_xlabel('Price ($)')
ax[0].set_ylabel('Demand (Units)')
ax[1].plot(p, d_means[1], c = 'k', alpha = 0.01)
ax[1].plot(p02, d02, 'ko', markeredgewidth=1.5, markerfacecolor='w', markersize=10)
ax[1].set_xlabel('Price ($)')
ax[1].set_ylabel('Demand (Units)')
plt.tight_layout()
plt.show()
```
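Because the prior couples the two elasticity coefficients, a quick check of how correlated they remain a posteriori can be informative. A minimal sketch using the trace `s` from the cell above:

```
# Hedged sketch: posterior correlation between the elasticities of the two products.
c_post = s.get_values('c')                       # shape: (n_samples, 2)
corr = np.corrcoef(c_post[:, 0], c_post[:, 1])[0, 1]
print('posterior correlation between c1 and c2: {:.2f}'.format(corr))
```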
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"></ul></div>

```
from ipywidgets import interact, interactive, fixed, interact_manual, Layout
from traitlets import Unicode, Bool, validate, TraitError
import ipywidgets as widgets
from ipywidgets import DOMWidget, register
from IPython.display import display, HTML, JSON, Code
from vega import Vega
from vega3 import Vega as Vega3
import requests
import json
import simplejson
import re

%reload_ext version_information
%version_information vega, vega3, IPython, ipywidgets, re

AQvegaTheme = """
{
  "background": "#fff",
  "render": {"retina": true},
  "axis": {
    "layer": "back", "ticks": 5, "axisColor": "transparent", "axisWidth": 1,
    "gridColor": "#eeeeee", "gridOpacity": 1, "tickColor": "#eeeeee",
    "tickLabelColor": "#758290", "tickWidth": 0, "tickSize": 10,
    "tickLabelFontSize": 12, "tickLabelFont": "\"Roboto\"", "tickOffset": 0,
    "titleFont": "\"Roboto\"", "titleFontSize": 12, "titleFontWeight": "regular",
    "titleColor": "#758290", "titleOffset": "auto", "titleOffsetAutoMin": 0,
    "titleOffsetAutoMax": 0, "titleOffsetAutoMargin": 0
  },
  "legend": {
    "orient": "left", "padding": 0, "margin": 0, "labelColor": "#758290",
    "labelFontSize": 12, "labelFont": "\"Roboto\"", "labelAlign": "left",
    "labelBaseline": "Bottom", "labelOffset": 0, "symbolShape": "line",
    "symbolSize": 2, "symbolStrokeWidth": 0, "titleFont": "\"Roboto\"",
    "titleFontSize": 12, "titleFontWeight": "medium", "titleColor": "#758290"
  },
  "color": {
    "rgb": [128, 128, 128], "lab": [50, 0, 0], "hcl": [0, 0, 50], "hsl": [0, 0, 0.5]
  },
  "range": {
    "cropColor": ["#3d6ac4", "#749be9", "#51d4f0", "#b1f3b7", "#24962e"],
    "riskColor": ["#808080", "#FFFF99", "#FFE600", "#FF9900", "#FF1900", "#990000"],
    "shapes": ["circle", "cross", "diamond", "square", "triangle-down", "triangle-up", "line", "textMark"],
    "cropCatColor": ["#C71585", "#0000FF", "#A52A2A", "#FFA500", "#6B8E23",
                     "#FFDAB9", "#E6E6FA", "#8B4513", "#008000", "#2F4F4F"]
  }
}
"""

@register
class CodeArea(DOMWidget):
    _view_name = Unicode('CodeView').tag(sync=True)
    _view_module = Unicode('CodeArea_widget').tag(sync=True)
    _view_module_version = Unicode('0.1.0').tag(sync=True)

    # Attributes
    value = Unicode('codeAreaHere', help="Code area here").tag(sync=True)
    disabled = Bool(False, help="Enable or disable user changes.").tag(sync=True)

    # methods
    def clear(self):
        self.value = ''

%%html
<style>
.CodeMirror {
  /* Set height, width, borders, and global font properties here */
  font-family: monospace;
  height: 300px;
  color: black;
  direction: ltr;
}

/* PADDING */
.CodeMirror-lines { padding: 4px 0; /* Vertical padding around content */ }
.CodeMirror pre { padding: 0 4px; /* Horizontal padding of content */ }
.CodeMirror-scrollbar-filler, .CodeMirror-gutter-filler { background-color: white; /* The little square between H and V scrollbars */ }

/* GUTTER */
.CodeMirror-gutters { border-right: 1px solid #ddd; background-color: #f7f7f7; white-space: nowrap; }
.CodeMirror-linenumbers {}
.CodeMirror-linenumber { padding: 0 3px 0 5px; min-width: 20px; text-align: right; color: #999; white-space: nowrap; }
.CodeMirror-guttermarker { color: black; }
.CodeMirror-guttermarker-subtle { color: #999; }

/* CURSOR */
.CodeMirror-cursor { border-left: 1px solid black; border-right: none; width: 0; }
/* Shown when moving in bi-directional text */
.CodeMirror div.CodeMirror-secondarycursor { border-left: 1px solid silver; }
.cm-fat-cursor .CodeMirror-cursor { width: auto; border: 0 !important; background: #7e7; }
.cm-fat-cursor div.CodeMirror-cursors {
z-index: 1; } .cm-fat-cursor-mark { background-color: rgba(20, 255, 20, 0.5); -webkit-animation: blink 1.06s steps(1) infinite; -moz-animation: blink 1.06s steps(1) infinite; animation: blink 1.06s steps(1) infinite; } .cm-animate-fat-cursor { width: auto; border: 0; -webkit-animation: blink 1.06s steps(1) infinite; -moz-animation: blink 1.06s steps(1) infinite; animation: blink 1.06s steps(1) infinite; background-color: #7e7; } @-moz-keyframes blink { 0% {} 50% { background-color: transparent; } 100% {} } @-webkit-keyframes blink { 0% {} 50% { background-color: transparent; } 100% {} } @keyframes blink { 0% {} 50% { background-color: transparent; } 100% {} } /* Can style cursor different in overwrite (non-insert) mode */ .CodeMirror-overwrite .CodeMirror-cursor {} .cm-tab { display: inline-block; text-decoration: inherit; } .CodeMirror-rulers { position: absolute; left: 0; right: 0; top: -50px; bottom: 0; overflow: hidden; } .CodeMirror-ruler { border-left: 1px solid #ccc; top: 0; bottom: 0; position: absolute; } /* DEFAULT THEME */ .cm-s-default .cm-header {color: blue;} .cm-s-default .cm-quote {color: #090;} .cm-negative {color: #d44;} .cm-positive {color: #292;} .cm-header, .cm-strong {font-weight: bold;} .cm-em {font-style: italic;} .cm-link {text-decoration: underline;} .cm-strikethrough {text-decoration: line-through;} .cm-s-default .cm-keyword {color: #708;} .cm-s-default .cm-atom {color: #219;} .cm-s-default .cm-number {color: #164;} .cm-s-default .cm-def {color: #00f;} .cm-s-default .cm-variable, .cm-s-default .cm-punctuation, .cm-s-default .cm-property, .cm-s-default .cm-operator {} .cm-s-default .cm-variable-2 {color: #05a;} .cm-s-default .cm-variable-3, .cm-s-default .cm-type {color: #085;} .cm-s-default .cm-comment {color: #a50;} .cm-s-default .cm-string {color: #a11;} .cm-s-default .cm-string-2 {color: #f50;} .cm-s-default .cm-meta {color: #555;} .cm-s-default .cm-qualifier {color: #555;} .cm-s-default .cm-builtin {color: #30a;} .cm-s-default .cm-bracket {color: #997;} .cm-s-default .cm-tag {color: #170;} .cm-s-default .cm-attribute {color: #00c;} .cm-s-default .cm-hr {color: #999;} .cm-s-default .cm-link {color: #00c;} .cm-s-default .cm-error {color: #f00;} .cm-invalidchar {color: #f00;} .CodeMirror-composing { border-bottom: 2px solid; } /* Default styles for common addons */ div.CodeMirror span.CodeMirror-matchingbracket {color: #0b0;} div.CodeMirror span.CodeMirror-nonmatchingbracket {color: #a22;} .CodeMirror-matchingtag { background: rgba(255, 150, 0, .3); } .CodeMirror-activeline-background {background: #e8f2ff;} /* STOP */ /* The rest of this file contains styles related to the mechanics of the editor. You probably shouldn't touch them. */ .CodeMirror { position: relative; overflow: hidden; background: white; } .CodeMirror-scroll { overflow: scroll !important; /* Things will break if this is overridden */ /* 30px is the magic margin used to hide the element's real scrollbars */ /* See overflow: hidden in .CodeMirror */ margin-bottom: -30px; margin-right: -30px; padding-bottom: 30px; height: 100%; outline: none; /* Prevent dragging from highlighting the element */ position: relative; } .CodeMirror-sizer { position: relative; border-right: 30px solid transparent; } /* The fake, visible scrollbars. Used to force redraw during scrolling before actual scrolling happens, thus preventing shaking and flickering artifacts. 
*/ .CodeMirror-vscrollbar, .CodeMirror-hscrollbar, .CodeMirror-scrollbar-filler, .CodeMirror-gutter-filler { position: absolute; z-index: 6; display: none; } .CodeMirror-vscrollbar { right: 0; top: 0; overflow-x: hidden; overflow-y: scroll; } .CodeMirror-hscrollbar { bottom: 0; left: 0; overflow-y: hidden; overflow-x: scroll; } .CodeMirror-scrollbar-filler { right: 0; bottom: 0; } .CodeMirror-gutter-filler { left: 0; bottom: 0; } .CodeMirror-gutters { position: absolute; left: 0; top: 0; min-height: 100%; z-index: 3; } .CodeMirror-gutter { white-space: normal; height: 100%; display: inline-block; vertical-align: top; margin-bottom: -30px; } .CodeMirror-gutter-wrapper { position: absolute; z-index: 4; background: none !important; border: none !important; } .CodeMirror-gutter-background { position: absolute; top: 0; bottom: 0; z-index: 4; } .CodeMirror-gutter-elt { position: absolute; cursor: default; z-index: 4; } .CodeMirror-gutter-wrapper ::selection { background-color: transparent } .CodeMirror-gutter-wrapper ::-moz-selection { background-color: transparent } .CodeMirror-lines { cursor: text; min-height: 1px; /* prevents collapsing before first draw */ } .CodeMirror pre { /* Reset some styles that the rest of the page might have set */ -moz-border-radius: 0; -webkit-border-radius: 0; border-radius: 0; border-width: 0; background: transparent; font-family: inherit; font-size: inherit; margin: 0; white-space: pre; word-wrap: normal; line-height: inherit; color: inherit; z-index: 2; position: relative; overflow: visible; -webkit-tap-highlight-color: transparent; -webkit-font-variant-ligatures: contextual; font-variant-ligatures: contextual; } .CodeMirror-wrap pre { word-wrap: break-word; white-space: pre-wrap; word-break: normal; } .CodeMirror-linebackground { position: absolute; left: 0; right: 0; top: 0; bottom: 0; z-index: 0; } .CodeMirror-linewidget { position: relative; z-index: 2; padding: 0.1px; /* Force widget margins to stay inside of the container */ } .CodeMirror-widget {} .CodeMirror-rtl pre { direction: rtl; } .CodeMirror-code { outline: none; } /* Force content-box sizing for the elements where we expect it */ .CodeMirror-scroll, .CodeMirror-sizer, .CodeMirror-gutter, .CodeMirror-gutters, .CodeMirror-linenumber { -moz-box-sizing: content-box; box-sizing: content-box; } .CodeMirror-measure { position: absolute; width: 100%; height: 0; overflow: hidden; visibility: hidden; } .CodeMirror-cursor { position: absolute; pointer-events: none; } .CodeMirror-measure pre { position: static; } div.CodeMirror-cursors { visibility: hidden; position: relative; z-index: 3; } div.CodeMirror-dragcursors { visibility: visible; } .CodeMirror-focused div.CodeMirror-cursors { visibility: visible; } .CodeMirror-selected { background: #d9d9d9; } .CodeMirror-focused .CodeMirror-selected { background: #d7d4f0; } .CodeMirror-crosshair { cursor: crosshair; } .CodeMirror-line::selection, .CodeMirror-line > span::selection, .CodeMirror-line > span > span::selection { background: #d7d4f0; } .CodeMirror-line::-moz-selection, .CodeMirror-line > span::-moz-selection, .CodeMirror-line > span > span::-moz-selection { background: #d7d4f0; } .cm-searching { background-color: #ffa; background-color: rgba(255, 255, 0, .4); } /* Used to force a border model for a node */ .cm-force-border { padding-right: .1px; } @media print { /* Hide the cursor when printing */ .CodeMirror div.CodeMirror-cursors { visibility: hidden; } } /* See issue #2901 */ .cm-tab-wrap-hack:after { content: ''; } /* Help users use markselection 
   to safely style text background */
span.CodeMirror-selectedtext { background: none; }
</style>

%%javascript
require.undef('CodeArea_widget');

var CodeMirror = require('codemirror/lib/codemirror');
require('codemirror/mode/javascript/javascript');
//require('codemirror/theme/monokai.css');

define('CodeArea_widget', ["@jupyter-widgets/base"], function(widgets) {

    var CodeModel = widgets.DOMWidgetModel.extend({
        defaults: _.extend(widgets.DOMWidgetModel.prototype.defaults(), {
            _model_name: 'DrawingModel',
            _view_name: 'DrawingView',
            _model_module: 'jupyter-drawing-pad',
            _view_module: 'jupyter-drawing-pad',
            _model_module_version: '0.1.6',
            _view_module_version: '0.1.6',
            value: 'Hello World',
        })
    });

    var CodeView = widgets.DOMWidgetView.extend({

        // Render the view.
        render: function() {
            this.code_input = document.createElement('textarea');
            this.code_input.id = 'code';
            this.code_input.value = this.model.get('value');
            this.code_input.disabled = this.model.get('disabled');
            //this.code_input.setAttribute('name', 'post');
            //this.code_input.setAttribute('maxlength', 5000);
            //this.code_input.setAttribute('cols', 80);
            //this.code_input.setAttribute('rows', 40);

            // Python -> JavaScript update
            this.model.on('change:value', this.value_changed, this);
            this.model.on('change:disabled', this.disabled_changed, this);

            // JavaScript -> Python update
            this.code_input.onchange = this.input_changed.bind(this);
            this.code_input.onchange = function() { console.log(this) }

            this.el.appendChild(this.code_input);

            // use CodeMirror to build up the custom textarea
            CodeMirror.fromTextArea(this.code_input, {
                mode: 'javascript',
                lineWrapping: true,
                extraKeys: { 'Ctrl-Space': 'autocomplete' },
                lineNumbers: true,
                theme: 'monokai',
                cursorScrollMargin: 5
            });
        },

        value_changed: function() {
            this.code_input.value = this.model.get('value');
        },

        disabled_changed: function() {
            this.code_input.disabled = this.model.get('disabled');
        },

        input_changed: function() {
            this.model.set('value', this.code_input.value);
            console.log(this.code_input.value)
            this.model.save_changes();
        },
    });

    return {
        CodeModel: CodeModel,
        CodeView: CodeView
    };
});

codear = CodeArea(value = 'blalblablab')
codear
codear

def patchWidget(widgetId, datasetId, body):
    return

def formatObj(inputKey: dict, predefineKeys: dict) -> list:
    """
    Converts the input dictionary into a list of regex substitution definitions
    used for parsing widget parameters.
    Output:
    object2format_Global = [
        {'pattern': r"({{([^}}]*)(year)}})", 'sub': '2010'},
        {'pattern': r"({{([^}}]*)(water_column)}})", 'sub': 'ws2028tl'},
        {'pattern': r"({{([^}}]*)(crop)}})", 'sub': 'banana'},
        {'pattern': r"({{([^}}]*)(commodity)}})", 'sub': 'banana'},
        {'pattern': r"({{([^}}]*)(iso)}})", 'sub': 'ESP'},
        {'pattern': r"({{([^}}]*)(countryName)}})", 'sub': 'Spain'}
    ]
    """
    object2format = []
    if 'sql_config' in inputKey:
        for i in inputKey['sql_config']:
            sentence = ["{0} = {1}".format(word['key'], predefineKeys[word['key']])
                        if word['key'] in ('year',)
                        else "{0} = '{1}'".format(word['key'], predefineKeys[word['key']])
                        for word in i['key_params']]
            object2format.append({'pattern': r"({{([^}}]*)(" + i['key'] + ")}})",
                                  'sub': '{1} {0} '.format(' and '.join(sentence), i['key'])})
    if 'params_config' in inputKey:
        for i in inputKey['params_config']:
            object2format.append({'pattern': r"({{([^}}]*)(" + i['key'] + ")}})",
                                  'sub': str(predefineKeys[i['key']])})
    return object2format
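# A hedged usage sketch for formatObj. The dictionaries below are illustrative only:
# they mimic the shapes implied by the docstring above, not a real widget
# configuration pulled from the API.
example_input = {
    'sql_config': [{'key': 'water_risk', 'key_params': [{'key': 'year'}, {'key': 'iso'}]}],
    'params_config': [{'key': 'crop'}]
}
example_keys = {'year': 2020, 'iso': 'BRA', 'crop': 'banana'}
formatObj(example_input, example_keys)
# -> [{'pattern': "({{([^}}]*)(water_risk)}})", 'sub': "water_risk year = 2020 and iso = 'BRA' "},
#     {'pattern': "({{([^}}]*)(crop)}})", 'sub': 'banana'}]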
def formatWidget2read(widgetConfig):
    object2format = [
        {'pattern': r"({{([^}}]*)(year)}})", 'sub': '2020'},
        {'pattern': r"({{([^}}]*)(water_column)}})", 'sub': 'ws2028tl'},
        {'pattern': r"({{([^}}]*)(crop)}})", 'sub': 'banana'},
        {'pattern': r"({{([^}}]*)(commodity)}})", 'sub': 'banana'},
        {'pattern': r"({{([^}}]*)(iso)}})", 'sub': 'BRA'},
        {'pattern': r"({{([^}}]*)(countryName)}})", 'sub': 'Brasil'}
    ]
    n = json.dumps(widgetConfig)
    for paterns in object2format:
        n = re.compile(paterns['pattern']).sub(paterns['sub'], n)
    return json.loads(n)

def getWidgets(env, app, lsize):
    widgetsUrl = f'{env}/v1/widget?app={app}&page[size]={lsize}'
    return requests.get(widgetsUrl)

def getAWidget(env, widgetId):
    widgetUrl = f'{env}/v1/widget/{widgetId}'
    return requests.get(widgetUrl)

def app():
    # define dropdown widgets
    wApp = widgets.Dropdown(
        options={'Resource Watch': 'rw', 'Aqueduct': 'aqueduct', 'Prep': 'prep'},
        value='aqueduct',
        description='App: ',
    )
    wEnv = widgets.Dropdown(
        options={'Production': 'https://api.resourcewatch.org',
                 'Staging': 'https://staging-api.globalforestwatch.org'},
        value='https://api.resourcewatch.org',
        description='Env: ',
    )
    wsize = widgets.IntSlider(min=1, max=1e3, step=1, continuous_update=False, value=1000)
    wWidgets = widgets.Dropdown(
        description='widget: ',
    )
    box_layout = Layout(display='flex', flex_flow='column', height='300px', width='300px')
    wCodeBox = CodeArea(
        value='',
        placeholder='Type something',
        description='Widget:',
        disabled=False,
        layout=box_layout
    )
    wCodeBox2 = widgets.Textarea(
        value='',
        placeholder='Type something',
        description='Widget Id:',
        disabled=False
    )
    wCodeBox3 = widgets.Textarea(
        value='',
        placeholder='Type something',
        description='Dataset Id:',
        disabled=False
    )
    codeArea = widgets.Output()

    def f(env, app, lsize, codeAreas, swidget_id):
        r = getWidgets(env, app, lsize)
        if r.status_code == 200:
            opt = {widget['attributes']['name']: widget['id'] for widget in r.json()['data']}
            wWidgets.options = opt
        s = getAWidget(env, swidget_id)
        if s.status_code == 200:
            mywidget = s.json()['data']['attributes']['widgetConfig']
            mywidget['width'] = 300
            mywidget['height'] = 300
            if app == 'aqueduct':
                mywidget['padding'] = "auto"
                #mywidget['viewport'] = [300, 300]
                display(Vega(formatWidget2read(mywidget)))
                wCodeBox.value = json.dumps(mywidget, sort_keys=True, indent=2)
                wCodeBox2.value = swidget_id
                wCodeBox3.value = s.json()['data']['attributes']['dataset']
            else:
                #wCodeBox.value = json.dumps(mywidget)
                wCodeBox2.value = swidget_id
                wCodeBox3.value = s.json()['data']['attributes']['dataset']
                display(Vega3(mywidget))

    out = widgets.interactive_output(f, {'env': wEnv, 'app': wApp, 'lsize': wsize,
                                         'codeAreas': wCodeBox, 'swidget_id': wWidgets})
    ui = widgets.VBox([widgets.HBox([wApp, wsize]), wWidgets,
                       widgets.HBox([wCodeBox, widgets.VBox([wCodeBox3, wCodeBox2, codeArea]), out])])
    #ui = widgets.VBox([widgets.HBox([wApp, wsize]), wWidgets,
    #                   widgets.HBox([codeArea, widgets.VBox([wCodeBox3, wCodeBox2]), out])])
    header = widgets.HBox([widgets.VBox([wEnv, wWidgets]), widgets.VBox([wApp, wsize])])
    app = widgets.AppLayout(header=header,
                            left_sidebar=wCodeBox,
                            center=out,
                            footer=None)
    return app

#out.observe(handle_slider_change, names='All')
app()
```
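The helpers above can also be used outside the interactive app. A minimal sketch of fetching a single widget and rendering it with Vega is shown below; the widget id is a placeholder, not a real id from the API.

```
# Hedged sketch: fetch one widget by id (placeholder) and render its parsed config.
env = 'https://api.resourcewatch.org'
widget_id = '<widget-id>'   # placeholder, replace with a real widget id
resp = getAWidget(env, widget_id)
if resp.status_code == 200:
    cfg = resp.json()['data']['attributes']['widgetConfig']
    display(Vega(formatWidget2read(cfg)))
```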
github_jupyter
from ipywidgets import interact, interactive, fixed, interact_manual, Layout from traitlets import Unicode, Bool, validate, TraitError import ipywidgets as widgets from ipywidgets import DOMWidget, register from IPython.display import display, HTML, JSON, Code from vega import Vega from vega3 import Vega as Vega3 import requests import json import simplejson import re %reload_ext version_information %version_information vega, vega3, IPython, ipywidgets, re AQvegaTheme=""" { "background":"#fff", "render": {"retina": true}, "axis": { "layer": "back", "ticks": 5, "axisColor": "transparent", "axisWidth": 1, "gridColor": "#eeeeee", "gridOpacity": 1, "tickColor": "#eeeeee", "tickLabelColor": "#758290", "tickWidth": 0, "tickSize": 10, "tickLabelFontSize": 12, "tickLabelFont": "\"Roboto\"", "tickOffset": 0, "titleFont": "\"Roboto\"", "titleFontSize": 12, "titleFontWeight": "regular", "titleColor": "#758290", "titleOffset": "auto", "titleOffsetAutoMin": 0, "titleOffsetAutoMax": 0, "titleOffsetAutoMargin": 0 }, "legend": { "orient": "left", "padding": 0, "margin": 0, "labelColor": "#758290", "labelFontSize": 12, "labelFont": "\"Roboto\"", "labelAlign": "left", "labelBaseline": "Bottom", "labelOffset": 0, "symbolShape": "line", "symbolSize": 2, "symbolStrokeWidth": 0, "titleFont": "\"Roboto\"", "titleFontSize": 12, "titleFontWeight": "medium", "titleColor": "#758290" }, "color": { "rgb": [128,128,128], "lab": [50,0,0], "hcl": [0,0,50], "hsl": [0,0,0.5] }, "range": { "cropColor":[ "#3d6ac4", "#749be9", "#51d4f0", "#b1f3b7", "#24962e" ], "riskColor":[ "#808080", "#FFFF99", "#FFE600", "#FF9900", "#FF1900", "#990000" ], "shapes": [ "circle", "cross", "diamond", "square", "triangle-down", "triangle-up", "line", "textMark" ], "cropCatColor": [ "#C71585", "#0000FF", "#A52A2A", "#FFA500", "#6B8E23", "#FFDAB9", "#E6E6FA", "#8B4513", "#008000", "#2F4F4F" ] } } """ @register class CodeArea(DOMWidget): _view_name = Unicode('CodeView').tag(sync=True) _view_module = Unicode('CodeArea_widget').tag(sync=True) _view_module_version = Unicode('0.1.0').tag(sync=True) # Attributes value = Unicode('codeAreaHere', help="Code area here").tag(sync=True) disabled = Bool(False, help="Enable or disable user changes.").tag(sync=True) #methods def clear(self): self.value = '' %%html <style> .CodeMirror { /* Set height, width, borders, and global font properties here */ font-family: monospace; height: 300px; color: black; direction: ltr; } /* PADDING */ .CodeMirror-lines { padding: 4px 0; /* Vertical padding around content */ } .CodeMirror pre { padding: 0 4px; /* Horizontal padding of content */ } .CodeMirror-scrollbar-filler, .CodeMirror-gutter-filler { background-color: white; /* The little square between H and V scrollbars */ } /* GUTTER */ .CodeMirror-gutters { border-right: 1px solid #ddd; background-color: #f7f7f7; white-space: nowrap; } .CodeMirror-linenumbers {} .CodeMirror-linenumber { padding: 0 3px 0 5px; min-width: 20px; text-align: right; color: #999; white-space: nowrap; } .CodeMirror-guttermarker { color: black; } .CodeMirror-guttermarker-subtle { color: #999; } /* CURSOR */ .CodeMirror-cursor { border-left: 1px solid black; border-right: none; width: 0; } /* Shown when moving in bi-directional text */ .CodeMirror div.CodeMirror-secondarycursor { border-left: 1px solid silver; } .cm-fat-cursor .CodeMirror-cursor { width: auto; border: 0 !important; background: #7e7; } .cm-fat-cursor div.CodeMirror-cursors { z-index: 1; } .cm-fat-cursor-mark { background-color: rgba(20, 255, 20, 0.5); -webkit-animation: blink 1.06s 
steps(1) infinite; -moz-animation: blink 1.06s steps(1) infinite; animation: blink 1.06s steps(1) infinite; } .cm-animate-fat-cursor { width: auto; border: 0; -webkit-animation: blink 1.06s steps(1) infinite; -moz-animation: blink 1.06s steps(1) infinite; animation: blink 1.06s steps(1) infinite; background-color: #7e7; } @-moz-keyframes blink { 0% {} 50% { background-color: transparent; } 100% {} } @-webkit-keyframes blink { 0% {} 50% { background-color: transparent; } 100% {} } @keyframes blink { 0% {} 50% { background-color: transparent; } 100% {} } /* Can style cursor different in overwrite (non-insert) mode */ .CodeMirror-overwrite .CodeMirror-cursor {} .cm-tab { display: inline-block; text-decoration: inherit; } .CodeMirror-rulers { position: absolute; left: 0; right: 0; top: -50px; bottom: 0; overflow: hidden; } .CodeMirror-ruler { border-left: 1px solid #ccc; top: 0; bottom: 0; position: absolute; } /* DEFAULT THEME */ .cm-s-default .cm-header {color: blue;} .cm-s-default .cm-quote {color: #090;} .cm-negative {color: #d44;} .cm-positive {color: #292;} .cm-header, .cm-strong {font-weight: bold;} .cm-em {font-style: italic;} .cm-link {text-decoration: underline;} .cm-strikethrough {text-decoration: line-through;} .cm-s-default .cm-keyword {color: #708;} .cm-s-default .cm-atom {color: #219;} .cm-s-default .cm-number {color: #164;} .cm-s-default .cm-def {color: #00f;} .cm-s-default .cm-variable, .cm-s-default .cm-punctuation, .cm-s-default .cm-property, .cm-s-default .cm-operator {} .cm-s-default .cm-variable-2 {color: #05a;} .cm-s-default .cm-variable-3, .cm-s-default .cm-type {color: #085;} .cm-s-default .cm-comment {color: #a50;} .cm-s-default .cm-string {color: #a11;} .cm-s-default .cm-string-2 {color: #f50;} .cm-s-default .cm-meta {color: #555;} .cm-s-default .cm-qualifier {color: #555;} .cm-s-default .cm-builtin {color: #30a;} .cm-s-default .cm-bracket {color: #997;} .cm-s-default .cm-tag {color: #170;} .cm-s-default .cm-attribute {color: #00c;} .cm-s-default .cm-hr {color: #999;} .cm-s-default .cm-link {color: #00c;} .cm-s-default .cm-error {color: #f00;} .cm-invalidchar {color: #f00;} .CodeMirror-composing { border-bottom: 2px solid; } /* Default styles for common addons */ div.CodeMirror span.CodeMirror-matchingbracket {color: #0b0;} div.CodeMirror span.CodeMirror-nonmatchingbracket {color: #a22;} .CodeMirror-matchingtag { background: rgba(255, 150, 0, .3); } .CodeMirror-activeline-background {background: #e8f2ff;} /* STOP */ /* The rest of this file contains styles related to the mechanics of the editor. You probably shouldn't touch them. */ .CodeMirror { position: relative; overflow: hidden; background: white; } .CodeMirror-scroll { overflow: scroll !important; /* Things will break if this is overridden */ /* 30px is the magic margin used to hide the element's real scrollbars */ /* See overflow: hidden in .CodeMirror */ margin-bottom: -30px; margin-right: -30px; padding-bottom: 30px; height: 100%; outline: none; /* Prevent dragging from highlighting the element */ position: relative; } .CodeMirror-sizer { position: relative; border-right: 30px solid transparent; } /* The fake, visible scrollbars. Used to force redraw during scrolling before actual scrolling happens, thus preventing shaking and flickering artifacts. 
*/ .CodeMirror-vscrollbar, .CodeMirror-hscrollbar, .CodeMirror-scrollbar-filler, .CodeMirror-gutter-filler { position: absolute; z-index: 6; display: none; } .CodeMirror-vscrollbar { right: 0; top: 0; overflow-x: hidden; overflow-y: scroll; } .CodeMirror-hscrollbar { bottom: 0; left: 0; overflow-y: hidden; overflow-x: scroll; } .CodeMirror-scrollbar-filler { right: 0; bottom: 0; } .CodeMirror-gutter-filler { left: 0; bottom: 0; } .CodeMirror-gutters { position: absolute; left: 0; top: 0; min-height: 100%; z-index: 3; } .CodeMirror-gutter { white-space: normal; height: 100%; display: inline-block; vertical-align: top; margin-bottom: -30px; } .CodeMirror-gutter-wrapper { position: absolute; z-index: 4; background: none !important; border: none !important; } .CodeMirror-gutter-background { position: absolute; top: 0; bottom: 0; z-index: 4; } .CodeMirror-gutter-elt { position: absolute; cursor: default; z-index: 4; } .CodeMirror-gutter-wrapper ::selection { background-color: transparent } .CodeMirror-gutter-wrapper ::-moz-selection { background-color: transparent } .CodeMirror-lines { cursor: text; min-height: 1px; /* prevents collapsing before first draw */ } .CodeMirror pre { /* Reset some styles that the rest of the page might have set */ -moz-border-radius: 0; -webkit-border-radius: 0; border-radius: 0; border-width: 0; background: transparent; font-family: inherit; font-size: inherit; margin: 0; white-space: pre; word-wrap: normal; line-height: inherit; color: inherit; z-index: 2; position: relative; overflow: visible; -webkit-tap-highlight-color: transparent; -webkit-font-variant-ligatures: contextual; font-variant-ligatures: contextual; } .CodeMirror-wrap pre { word-wrap: break-word; white-space: pre-wrap; word-break: normal; } .CodeMirror-linebackground { position: absolute; left: 0; right: 0; top: 0; bottom: 0; z-index: 0; } .CodeMirror-linewidget { position: relative; z-index: 2; padding: 0.1px; /* Force widget margins to stay inside of the container */ } .CodeMirror-widget {} .CodeMirror-rtl pre { direction: rtl; } .CodeMirror-code { outline: none; } /* Force content-box sizing for the elements where we expect it */ .CodeMirror-scroll, .CodeMirror-sizer, .CodeMirror-gutter, .CodeMirror-gutters, .CodeMirror-linenumber { -moz-box-sizing: content-box; box-sizing: content-box; } .CodeMirror-measure { position: absolute; width: 100%; height: 0; overflow: hidden; visibility: hidden; } .CodeMirror-cursor { position: absolute; pointer-events: none; } .CodeMirror-measure pre { position: static; } div.CodeMirror-cursors { visibility: hidden; position: relative; z-index: 3; } div.CodeMirror-dragcursors { visibility: visible; } .CodeMirror-focused div.CodeMirror-cursors { visibility: visible; } .CodeMirror-selected { background: #d9d9d9; } .CodeMirror-focused .CodeMirror-selected { background: #d7d4f0; } .CodeMirror-crosshair { cursor: crosshair; } .CodeMirror-line::selection, .CodeMirror-line > span::selection, .CodeMirror-line > span > span::selection { background: #d7d4f0; } .CodeMirror-line::-moz-selection, .CodeMirror-line > span::-moz-selection, .CodeMirror-line > span > span::-moz-selection { background: #d7d4f0; } .cm-searching { background-color: #ffa; background-color: rgba(255, 255, 0, .4); } /* Used to force a border model for a node */ .cm-force-border { padding-right: .1px; } @media print { /* Hide the cursor when printing */ .CodeMirror div.CodeMirror-cursors { visibility: hidden; } } /* See issue #2901 */ .cm-tab-wrap-hack:after { content: ''; } /* Help users use markselection 
to safely style text background */ span.CodeMirror-selectedtext { background: none; } </style> %%javascript require.undef('CodeArea_widget'); var CodeMirror = require('codemirror/lib/codemirror'); require('codemirror/mode/javascript/javascript'); //require('codemirror/theme/monokai.css'); define('CodeArea_widget', ["@jupyter-widgets/base"], function(widgets) { var CodeModel = widgets.DOMWidgetModel.extend({ defaults: _.extend(widgets.DOMWidgetModel.prototype.defaults(), { _model_name: 'DrawingModel', _view_name: 'DrawingView', _model_module: 'jupyter-drawing-pad', _view_module: 'jupyter-drawing-pad', _model_module_version: '0.1.6', _view_module_version: '0.1.6', value: 'Hello World', }) }); var CodeView = widgets.DOMWidgetView.extend({ // Render the view. render: function() { this.code_input = document.createElement('textarea'); this.code_input.id = 'code'; this.code_input.value = this.model.get('value'); this.code_input.disabled = this.model.get('disabled'); //this.code_input.setAttribute('name', 'post'); //this.code_input.setAttribute('maxlength', 5000); //this.code_input.setAttribute('cols',80); //this.code_input.setAttribute('rows', 40); // Python -> JavaScript update this.model.on('change:value', this.value_changed, this); this.model.on('change:disabled', this.disabled_changed, this); // JavaScript -> Python update this.code_input.onchange = this.input_changed.bind(this); this.code_input.onchange = function() { console.log(this) } this.el.appendChild(this.code_input); //use the codemirror to build up custom textarea CodeMirror.fromTextArea(this.code_input, { mode: 'javascript', lineWrapping: true, extraKeys: { 'Ctrl-Space': 'autocomplete' }, lineNumbers: true, theme: 'monokai', cursorScrollMargin: 5 }); }, value_changed: function() { this.code_input.value = this.model.get('value'); }, disabled_changed: function() { this.code_input.disabled = this.model.get('disabled'); }, input_changed: function() { this.model.set('value', this.code_input.value); console.log(this.code_input.value) this.model.save_changes(); }, }); return { CodeModel: CodeModel, CodeView: CodeView }; }); codear=CodeArea(value = 'blalblablab') codear codear def patchWidget(widgetId, datasetId, body): return def formatObj(inputKey: dict, predefineKeys: dict)-> list: """ converts input dictionary on regex definition for parsing parameters Output: object2format_Global= [ {'pattern':r"({{([^}}]*)(year)}})", 'sub':'2010'}, {'pattern':r"({{([^}}]*)(water_column)}})", 'sub':'ws2028tl'}, {'pattern':r"({{([^}}]*)(crop)}})", 'sub':'banana'}, {'pattern':r"({{([^}}]*)(commodity)}})", 'sub':'banana'}, {'pattern':r"({{([^}}]*)(iso)}})", 'sub':'ESP'}, {'pattern':r"({{([^}}]*)(countryName)}})", 'sub':'Spain'} ] """ object2format=[] if 'sql_config' in inputKey: for i in inputKey['sql_config']: sentence = ["{0} = {1}".format(word['key'],predefineKeys[word['key']]) if word['key'] in ('year') else "{0} = '{1}'".format(word['key'],predefineKeys[word['key']]) for word in i['key_params']] object2format.append({'pattern':r"({{([^}}]*)("+i['key']+")}})", 'sub': '{1} {0} '.format(' and '.join(sentence), i['key'])}) if 'params_config' in inputKey: for i in inputKey['params_config']: object2format.append({'pattern':r"({{([^}}]*)("+i['key']+")}})", 'sub': str(predefineKeys[i['key']])}) return object2format def formatWidget2read(widgetConfig): object2format= [ {'pattern':r"({{([^}}]*)(year)}})", 'sub':'2020'}, {'pattern':r"({{([^}}]*)(water_column)}})", 'sub':'ws2028tl'}, {'pattern':r"({{([^}}]*)(crop)}})", 'sub':'banana'}, 
{'pattern':r"({{([^}}]*)(commodity)}})", 'sub':'banana'}, {'pattern':r"({{([^}}]*)(iso)}})", 'sub':'BRA'}, {'pattern':r"({{([^}}]*)(countryName)}})", 'sub':'Brasil'} ] n=json.dumps(widgetConfig) for paterns in object2format: n = re.compile(paterns['pattern']).sub(paterns['sub'], n) return json.loads(n) def getWidgets(env, app, lsize): widgetsUrl=f'{env}/v1/widget?app={app}&page[size]={lsize}' return requests.get(widgetsUrl) def getAWidget(env, widgetId): widgetUrl=f'{env}/v1/widget/{widgetId}' return requests.get(widgetUrl) def app(): #define dropdown widgets wApp=widgets.Dropdown( options={'Resource Watch': 'rw', 'Aqueduct': 'aqueduct', 'Prep': 'prep'}, value='aqueduct', description='App: ', ) wEnv=widgets.Dropdown( options={'Production': 'https://api.resourcewatch.org', 'Staging': 'https://staging-api.globalforestwatch.org'}, value='https://api.resourcewatch.org', description='Env: ', ) wsize=widgets.IntSlider(min=1, max=1e3, step=1,continuous_update=False, value=1000) wWidgets=widgets.Dropdown( description='widget: ', ) box_layout = Layout(display='flex', flex_flow='column', height='300px', width='300px') wCodeBox= CodeArea( value='', placeholder='Type something', description='Widget:', disabled=False, layout=box_layout ) wCodeBox2= widgets.Textarea( value='', placeholder='Type something', description='Widget Id:', disabled=False ) wCodeBox3= widgets.Textarea( value='', placeholder='Type something', description='Dataset Id:', disabled=False ) codeArea = widgets.Output() def f(env, app, lsize, codeAreas, swidget_id): r = getWidgets(env, app, lsize) if r.status_code == 200: opt = {widget['attributes']['name']: widget['id'] for widget in r.json()['data']} wWidgets.options = opt s = getAWidget(env, swidget_id) if s.status_code == 200: mywidget = s.json()['data']['attributes']['widgetConfig'] mywidget['width']=300 mywidget['height']=300 if app=='aqueduct': mywidget['padding']="auto" #mywidget['viewport']=[300,300] display(Vega(formatWidget2read(mywidget))) wCodeBox.value = json.dumps(mywidget, sort_keys=True, indent=2) wCodeBox2.value = swidget_id wCodeBox3.value = s.json()['data']['attributes']['dataset'] else: #wCodeBox.value = json.dumps(mywidget) wCodeBox2.value = swidget_id wCodeBox3.value = s.json()['data']['attributes']['dataset'] display(Vega3(mywidget)) out = widgets.interactive_output(f, {'env':wEnv, 'app': wApp, 'lsize':wsize, 'codeAreas':wCodeBox,'swidget_id':wWidgets}) ui = widgets.VBox([widgets.HBox([wApp,wsize]),wWidgets,widgets.HBox([wCodeBox,widgets.VBox([wCodeBox3,wCodeBox2,codeArea]),out])]) #ui = widgets.VBox([widgets.HBox([wApp,wsize]),wWidgets,widgets.HBox([codeArea,widgets.VBox([wCodeBox3,wCodeBox2]),out])]) header = widgets.HBox([widgets.VBox([wEnv,wWidgets]), widgets.VBox([wApp,wsize])]) app = widgets.AppLayout(header=header, left_sidebar=wCodeBox, center=out, footer=None) return app #out.observe(handle_slider_change, names='All') app()
``` import pandas as pd import numpy as np # The following three also have confidence columns in the datasets # Study 1c: Study on 50 U.S. states. state - name of the state, city - name of the city asked # Every state is asked with it's true capital study_1c = pd.read_csv('~/DATA_1030/Final_Project/crowd_wisdom_data/study1c.csv') # Study 2: Trivia. qname - the topic of the trivia question (39 participants, 80 unique qnames) study_2 = pd.read_csv('~/DATA_1030/Final_Project/crowd_wisdom_data/study2.csv') # Study 3: Dermatologists diagnosing lesions as malignant or benign study_3 = pd.read_csv('~/DATA_1030/Final_Project/crowd_wisdom_data/study3.csv') def meta_conf_stats(sub, q): descm = sub["meta"].describe() descc = sub["confidence"].describe() skewValue_col = sub.skew(axis=0) skewValue_row = sub.skew(axis=1) mean_meta = descm["mean"] std_meta = descm["std"] min_meta = descm["min"] p25_meta = descm["25%"] p50_meta = descm["50%"] p75_meta = descm["75%"] max_meta = descm["max"] mean_conf = descc["mean"] std_conf = descc["std"] min_conf = descc["min"] p25_conf = descc["25%"] p50_conf = descc["50%"] p75_conf = descc["75%"] max_conf = descc["max"] skew_own = skewValue_col["own"] skew_meta = skewValue_col["meta"] skew_conf = skewValue_col["confidence"] skew_row_average = skewValue_row.mean() skew_row_skew = skewValue_row.skew(axis=0) df_together.loc[df_together["question"] == q, "mean_meta"] = mean_meta df_together.loc[df_together["question"] == q, "std_meta"] = std_meta df_together.loc[df_together["question"] == q, "min_meta"] = min_meta df_together.loc[df_together["question"] == q, "p25_meta"] = p25_meta df_together.loc[df_together["question"] == q, "p50_meta"] = p50_meta df_together.loc[df_together["question"] == q, "p75_meta"] = p75_meta df_together.loc[df_together["question"] == q, "max_meta"] = max_meta df_together.loc[df_together["question"] == q, "mean_conf"] = mean_conf df_together.loc[df_together["question"] == q, "std_conf"] = std_conf df_together.loc[df_together["question"] == q, "min_conf"] = min_conf df_together.loc[df_together["question"] == q, "p25_conf"] = p25_conf df_together.loc[df_together["question"] == q, "p50_conf"] = p50_conf df_together.loc[df_together["question"] == q, "p75_conf"] = p75_conf df_together.loc[df_together["question"] == q, "max_conf"] = max_conf df_together.loc[df_together["question"] == q, "skew_own"] = skew_own df_together.loc[df_together["question"] == q, "skew_meta"] = skew_meta df_together.loc[df_together["question"] == q, "skew_conf"] = skew_conf df_together.loc[df_together["question"] == q, "skew_row_average"] = skew_row_average df_together.loc[df_together["question"] == q, "skew_row_skew"] = skew_row_skew study_1c = study_1c.drop(["expt city"], axis=1) study_2 = study_2.drop(['qname'], axis=1) study_1c.columns = ['question', 'q_id', 'own', 'meta', 'confidence', "actual"] study_2.columns = ['q_id','own', 'meta', 'question', "actual", 'confidence'] study_3.columns = ['q_id','own', 'actual', 'question', 'meta', 'confidence'] frames = [study_1c, study_2, study_3] result = pd.concat(frames) # result = result.drop(["actual"],axis=1) result["sc"] = np.where(result['own'] == 0, 1-result['meta'], result['meta']) result["sc*conf"] = result["sc"]*result["confidence"] result["meta*conf"] = result["meta"]*result["confidence"] result["meta*conf*sc"] = result["meta"]*result["confidence"]*result["sc"] df_together = result questions = df_together["question"].unique() for q in questions: sub = df_together[df_together["question"] == q] meta_conf_stats(sub, q) ids = 
df_together["q_id"].unique() for ind in ids: individual_dta = df_together.loc[df_together["q_id"] == ind] # import pdb; pdb.set_trace() ind_desc = individual_dta.describe() skewValue_col = individual_dta.skew(axis=0) meta_column = ind_desc["meta"] id_mean_meta = meta_column.loc["mean"] id_std_meta = meta_column.loc["std"] id_min_meta = meta_column.loc["min"] id_p25_meta = meta_column.loc["25%"] id_p50_meta = meta_column.loc["50%"] id_p75_meta = meta_column.loc["75%"] id_max_meta = meta_column.loc["max"] conf_column = ind_desc["confidence"] id_mean_conf = conf_column.loc["mean"] id_std_conf = conf_column.loc["std"] id_min_conf = conf_column.loc["min"] id_p25_conf = conf_column.loc["25%"] id_p50_conf = conf_column.loc["50%"] id_p75_conf = conf_column.loc["75%"] id_max_conf = conf_column.loc["max"] skew_own = skewValue_col["own"] skew_meta = skewValue_col["meta"] skew_conf = skewValue_col["confidence"] skew_sc = skewValue_col["sc"] skew_scconf = skewValue_col["sc*conf"] skew_confmeta = skewValue_col["meta*conf"] skew_metaconfsc = skewValue_col["meta*conf*sc"] df_together.loc[df_together["q_id"] == ind, "id_mean_meta"] = id_mean_meta df_together.loc[df_together["q_id"] == ind, "id_std_meta"] = id_std_meta df_together.loc[df_together["q_id"] == ind, "id_min_meta"] = id_min_meta df_together.loc[df_together["q_id"] == ind, "id_p25_meta"] = id_p25_meta df_together.loc[df_together["q_id"] == ind, "id_p50_meta"] = id_p50_meta df_together.loc[df_together["q_id"] == ind, "id_p75_meta"] = id_p75_meta df_together.loc[df_together["q_id"] == ind, "id_max_meta"] = id_max_meta df_together.loc[df_together["q_id"] == ind, "id_mean_conf"] = id_mean_conf df_together.loc[df_together["q_id"] == ind, "id_std_conf"] = id_std_conf df_together.loc[df_together["q_id"] == ind, "id_min_conf"] = id_min_conf df_together.loc[df_together["q_id"] == ind, "id_p25_conf"] = id_p25_conf df_together.loc[df_together["q_id"] == ind, "id_p50_conf"] = id_p50_conf df_together.loc[df_together["q_id"] == ind, "id_p75_conf"] = id_p75_conf df_together.loc[df_together["q_id"] == ind, "id_max_conf"] = id_max_conf df_together.loc[df_together["q_id"] == ind, "id_skew_own"] = skew_own df_together.loc[df_together["q_id"] == ind, "id_skew_meta"] = skew_meta df_together.loc[df_together["q_id"] == ind, "id_skew_conf"] = skew_conf df_together.loc[df_together["q_id"] == ind, "id_skew_sc"] = skew_sc df_together.loc[df_together["q_id"] == ind, "id_skew_sc*conf"] = skew_scconf df_together.loc[df_together["q_id"] == ind, "id_skew_meta*conf"] = skew_confmeta df_together.loc[df_together["q_id"] == ind, "id_skew_meta*conf*sc"] = skew_metaconfsc questions = df_together["question"].unique() for q in questions: sub = df_together.loc[df_together["question"] == q] questions = df_together["question"].unique() df_together["0_percentage"] = 0 df_together["1_percentage"] = 0 for q in questions: sub = df_together[df_together["question"] == q] df1 = sub.groupby(['own']).size().reset_index(name='Count') a = df1["Count"]/len(sub) try: sub["0_percentage"] = a.loc[0] except KeyError: sub["0_percentage"] = 0 try: sub["1_percentage"] = a.loc[1] except KeyError: sub["1_percentage"] = 0 df_together.update(sub) df_together["correct"] = np.where(df_together['own'] == df_together['actual'], 1, 0) df_together["majority_1"] = np.where(df_together['1_percentage']>0.5, 1, 0) df_together["in_major"] = np.where(df_together['majority_1'] == df_together['own'], 1, 0) df_together["in_minor"] = np.where(df_together['in_major'] == 0, 1, 0) df_together["expert"] = 
np.where((df_together['in_minor'] == 1)&(df_together['correct']==1), 1, 0) df_together = df_together.drop(["correct","majority_1","actual","in_major"],axis=1) df_together.to_csv("unprocessed_expert_Classifier.csv") ```
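A small check that is not part of the original notebook: after writing the file above, the generated features and the rarity of the `expert` label can be inspected directly. This assumes the CSV was written to the working directory as in the call above.

```
# Not in the original notebook: quick sanity check of the exported file
check = pd.read_csv("unprocessed_expert_Classifier.csv")
print(check.shape)
print(check["expert"].value_counts(normalize=True))  # fraction of "expert" responses
```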
# Day and Night Image Classifier --- The day/night image dataset consists of 200 RGB color images in two categories: day and night. There are equal numbers of each example: 100 day images and 100 night images. We'd like to build a classifier that can accurately label these images as day or night, and that relies on finding distinguishing features between the two types of images! *Note: All images come from the [AMOS dataset](http://cs.uky.edu/~jacobs/datasets/amos/) (Archive of Many Outdoor Scenes).* ### Import resources Before you get started on the project code, import the libraries and resources that you'll need. ``` import cv2 # computer vision library import helpers import numpy as np import matplotlib.pyplot as plt import matplotlib.image as mpimg %matplotlib inline ``` ## Training and Testing Data The 200 day/night images are separated into training and testing datasets. * 60% of these images are training images, for you to use as you create a classifier. * 40% are test images, which will be used to test the accuracy of your classifier. First, we set some variables to keep track of some where our images are stored: image_dir_training: the directory where our training image data is stored image_dir_test: the directory where our test image data is stored ``` # Image data directories image_dir_training = "day_night_images/training/" image_dir_test = "day_night_images/test/" ``` ## Load the datasets These first few lines of code will load the training day/night images and store all of them in a variable, `IMAGE_LIST`. This list contains the images and their associated label ("day" or "night"). For example, the first image-label pair in `IMAGE_LIST` can be accessed by index: ``` IMAGE_LIST[0][:]```. ``` # Using the load_dataset function in helpers.py # Load training data IMAGE_LIST = helpers.load_dataset(image_dir_training) ``` ## Construct a `STANDARDIZED_LIST` of input images and output labels. This function takes in a list of image-label pairs and outputs a **standardized** list of resized images and numerical labels. ``` # Standardize all training images STANDARDIZED_LIST = helpers.standardize(IMAGE_LIST) ``` ## Visualize the standardized data Display a standardized image from STANDARDIZED_LIST. ``` # Display a standardized image and its label # Select an image by index image_num = 0 selected_image = STANDARDIZED_LIST[image_num][0] selected_label = STANDARDIZED_LIST[image_num][1] # Display image and data about it plt.imshow(selected_image) print("Shape: "+str(selected_image.shape)) print("Label [1 = day, 0 = night]: " + str(selected_label)) ``` # Feature Extraction Create a feature that represents the brightness in an image. We'll be extracting the **average brightness** using HSV colorspace. Specifically, we'll use the V channel (a measure of brightness), add up the pixel values in the V channel, then divide that sum by the area of the image to get the average Value of the image. ## RGB to HSV conversion Below, a test image is converted from RGB to HSV colorspace and each component is displayed in an image. 
``` # Convert and image to HSV colorspace # Visualize the individual color channels image_num = 0 test_im = STANDARDIZED_LIST[image_num][0] test_label = STANDARDIZED_LIST[image_num][1] # Convert to HSV hsv = cv2.cvtColor(test_im, cv2.COLOR_RGB2HSV) # Print image label print('Label: ' + str(test_label)) # HSV channels h = hsv[:,:,0] s = hsv[:,:,1] v = hsv[:,:,2] # Plot the original image and the three channels f, (ax1, ax2, ax3, ax4) = plt.subplots(1, 4, figsize=(20,10)) ax1.set_title('Standardized image') ax1.imshow(test_im) ax2.set_title('H channel') ax2.imshow(h, cmap='gray') ax3.set_title('S channel') ax3.imshow(s, cmap='gray') ax4.set_title('V channel') ax4.imshow(v, cmap='gray') ``` --- ### Find the average brightness using the V channel This function takes in a **standardized** RGB image and returns a feature (a single value) that represent the average level of brightness in the image. We'll use this value to classify the image as day or night. ``` # Find the average Value or brightness of an image def avg_brightness(rgb_image): # Convert image to HSV hsv = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2HSV) # Add up all the pixel values in the V channel sum_brightness = np.sum(hsv[:,:,2]) ## TODO: Calculate the average brightness using the area of the image # and the sum calculated above avg = sum_brightness / (600.0 * 1100.0) return avg import random # Testing average brightness levels # Look at a number of different day and night images and think about # what average brightness value separates the two types of images # As an example, a "night" image is loaded in and its avg brightness is displayed image_num = random.randint(0, len(STANDARDIZED_LIST)) test_im = STANDARDIZED_LIST[image_num][0] avg = avg_brightness(test_im) print('Avg brightness: {:.1f}'.format(avg)) plt.imshow(test_im) ```
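The notebook stops at inspecting average brightness values. One natural next step, sketched below, is to threshold that single feature to produce a day/night label. This is not part of the original project code: the threshold of 100 is only a starting guess that would need tuning against the training images, and the loop assumes the `(image, label)` pairs stored in `STANDARDIZED_LIST` as used above.

```
# Hypothetical next step: classify an image by thresholding its average brightness.
# The threshold value is an assumption and should be tuned on the training set.
def estimate_label(rgb_image, threshold=100.0):
    # Predict 1 (day) if the average V-channel brightness exceeds the threshold, else 0 (night)
    return 1 if avg_brightness(rgb_image) > threshold else 0

# Rough accuracy on the standardized training data
correct = 0
for item in STANDARDIZED_LIST:
    image, label = item[0], item[1]
    if estimate_label(image) == label:
        correct += 1
print("Training accuracy: {:.2f}".format(correct / len(STANDARDIZED_LIST)))
```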
# Basic Programming
- Please use an English input method for all code input.

## Writing a simple program
- Area of a circle: area = radius \* radius \* 3.1415

```
radius = float(input('please input radius'))  # cast the input string to a float
print(radius * radius * 3.1415)
```

### In Python you do not need to declare the type of a value

## Reading input from the console
- input reads the value as a string
- eval

```
age = eval(input('age'))
print(age)
```

- In Jupyter, pressing Shift + Tab pops up the documentation for the object under the cursor.

## Naming rules for variables
- Made up of letters, digits and underscores
- Cannot start with a digit \*
- An identifier cannot be a keyword (this can in fact be forced, but it is very poor practice)
- Can be of any length
- Camel-case naming

## Variables, assignment statements and assignment expressions
- Variable: informally, a quantity whose value can change
- x = 2 \* x + 1 is an equation in mathematics, but in a programming language it is an assignment expression
- test = test + 1 \* a variable must already have a value before it appears on the right-hand side of an assignment

## Simultaneous assignment
var1, var2, var3... = exp1, exp2, exp3...

## Defining constants
- Constant: an identifier for a fixed value, useful when the same value is used many times, for example PI
- Note: in other lower-level languages a defined constant cannot be changed; in Python everything is an object, so a "constant" can still be reassigned

## Numeric data types and operators
- Python has two numeric types (int and float) that support addition, subtraction, multiplication, division, modulo and exponentiation
<img src = "../Photo/01.jpg"></img>

## Operators /, //, **

## Operator %

## EP:
- What is 25/4? How would you rewrite it so the result is an integer?
- Read a number and determine whether it is odd or even
- Advanced: read a number of seconds and convert it to minutes and seconds, e.g. 500 seconds equals 8 minutes 20 seconds
- Advanced: if today is Saturday, what day of the week is it 10 days from now? Hint: day 0 of each week is Sunday

```
number = eval(input('number'))
if number % 2 == 0:
    print('even')

import random
res = random.randint(1000, 10000)
print(res)
res2 = input('enter the verification code')
if res == int(res2):
    print('ok')
else:
    print('error')
```

## Scientific notation
- 1.234e+2
- 1.234e-2

```
10.0101e+2
```

## Evaluating expressions and operator precedence
<img src = "../Photo/02.png"></img>
<img src = "../Photo/03.png"></img>

(3+4*x)/5-(10*(y-5)(a+b+c)+9*((4/x)+(9+x)/y)

## Augmented assignment operators
<img src = "../Photo/04.png"></img>

## Type conversion
- float -> int
- rounding: round

```
round(100.0222234345, 2)
```

## EP:
- If the annual business tax rate is 0.06%, how much tax is due on an annual income of 197.55e+2? (round the result to 2 decimal places)
- Scientific notation must be used

```
rate = 0.06/100
account = 197.55e+2
res = round(rate * account, 2)
print(res)
```

# Project
- Write a loan calculator program in Python: the input is the monthly payment (monthlyPayment) and the output is the total payment (totalpayment). A sketch is given after the homework section below.
![](../Photo/05.png)

```
monthlyPayment = (loanAmount * monthlyRate) / (1 - (1 / (1 + monthlyRate) ** (numberOfYears * 12)))
```

# Homework
- 1
<img src="../Photo/06.png"></img>

```
res = eval(input('celsius'))
fahrenheit = (9 / 5) * res + 32
print(fahrenheit)
```

- 2
<img src="../Photo/07.png"></img>

```
radius = eval(input("enter the radius"))
length = eval(input("enter the height"))
area = radius * radius * 3.1415
print("base area", round(area, 2))
volume = area * length
print("volume", round(volume, 2))
```

- 3
<img src="../Photo/08.png"></img>

```
res = eval(input('enter feet: '))
meter = res * 0.305
print("meters", meter)
```

- 4
<img src="../Photo/10.png"></img>

```
m = eval(input('amount of water: '))
inital = eval(input('initial temperature'))
final = eval(input('final temperature'))
Q = m * (final - inital) * 4184
print(Q)
```

- 5
<img src="../Photo/11.png"></img>

```
chae = eval(input('difference: '))
nianlilv = eval(input('annual interest rate: '))
rate = chae * (nianlilv / 1200)
print(round(rate, 5))
```

- 6
<img src="../Photo/12.png"></img>

```
v1 = eval(input('final velocity: '))
v0 = eval(input('initial velocity: '))
t = eval(input('time: '))
a = (v1 - v0) / t
print("acceleration:", round(a, 4))
```

- 7 Advanced
<img src="../Photo/13.png"></img>

```
a = eval(input('monthly deposit: '))
month = 6
b = 0
q = 1
while q <= month:
    b = (a + b) * (1 + 0.00417)
    q += 1
print(round(b, 2))
```

- 8 Advanced
<img src="../Photo/14.png"></img>

```
a = eval(input('enter an integer between 0 and 1000: '))
b = a % 10
c = a // 10
d = c % 10
e = a // 100
sum = b + d + e
print("sum of the digits", sum)
```
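The loan-calculator sketch promised in the Project section above. This is only one possible reading of the exercise, assuming the user supplies the loan amount, annual interest rate and number of years; the variable names and prompts are my own.

```
# Sketch of the loan calculator (assumed inputs: loan amount, annual rate, years)
loan_amount = eval(input('loan amount: '))
annual_rate = eval(input('annual interest rate, e.g. 0.045: '))
years = eval(input('number of years: '))

monthly_rate = annual_rate / 12
monthly_payment = (loan_amount * monthly_rate) / (1 - 1 / (1 + monthly_rate) ** (years * 12))
total_payment = monthly_payment * years * 12

print('monthly payment:', round(monthly_payment, 2))
print('total payment:', round(total_payment, 2))
```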
<a href="https://colab.research.google.com/github/INFINITY-RUBER/Machine_Learning_A-Z_Hands-On-Python-R-In-Data-Science/blob/master/Part%203%20-%20Classification/Section%2014%20-%20Logistic%20Regression/Python/logistic_regression.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

# Logistic Regression

## Importing the libraries

```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
```

## Importing the dataset

```
dataset = pd.read_csv('Social_Network_Ads.csv')
X = dataset.iloc[:, :-1].values  # every column except the last one
y = dataset.iloc[:, -1].values   # the last column only
print(X[:11])
print(y[:11])
```

## Splitting the dataset into the Training set and Test set

```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0)
print(X_train[:11])
print(y_train[:11])
print(X_test[:11])
print(y_test)
```

## Feature Scaling

```
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()  # create the scaling object
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
print(X_train[:11])
print(X_test[:11])
```

## Training the Logistic Regression model on the Training set

```
# sklearn.linear_model.LogisticRegression >> API
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression(random_state = 0)  # create the classifier with a random seed of zero
classifier.fit(X_train, y_train)
```

## Predicting a new result

```
# predict a single value
print(classifier.predict(sc.transform([[30, 15000]])))  # prediction for one new observation
```

## Predicting the Test set results

```
# compare the predictions with the true test labels
y_pred = classifier.predict(X_test)
print(np.concatenate((y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1)),1))
```

## Making the Confusion Matrix

```
from sklearn.metrics import confusion_matrix, accuracy_score  # import the two metrics
cm = confusion_matrix(y_test, y_pred)
print(cm)
accuracy_score(y_test, y_pred)  # accuracy of the classifier
```

## Visualising the Training set results

```
from matplotlib.colors import ListedColormap
X_set, y_set = sc.inverse_transform(X_train), y_train
X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 10, stop = X_set[:, 0].max() + 10, step = 0.25),
                     np.arange(start = X_set[:, 1].min() - 1000, stop = X_set[:, 1].max() + 1000, step = 0.25))
plt.contourf(X1, X2, classifier.predict(sc.transform(np.array([X1.ravel(), X2.ravel()]).T)).reshape(X1.shape),
             alpha = 0.75, cmap = ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1], c = ListedColormap(('red', 'green'))(i), label = j)
plt.title('Logistic Regression (Training set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()
```

## Visualising the Test set results

```
from matplotlib.colors import ListedColormap
X_set, y_set = sc.inverse_transform(X_test), y_test
X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 10, stop = X_set[:, 0].max() + 10, step = 0.25),
                     np.arange(start = X_set[:, 1].min() - 1000, stop = X_set[:, 1].max() + 1000, step = 0.25))
plt.contourf(X1, X2, classifier.predict(sc.transform(np.array([X1.ravel(), X2.ravel()]).T)).reshape(X1.shape),
             alpha = 0.75, cmap = ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1], c = ListedColormap(('red', 'green'))(i), label = j)
plt.title('Logistic Regression (Test set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()
```
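A small addition that is not in the original notebook: beyond the confusion matrix and accuracy, per-class precision and recall can be printed with scikit-learn's `classification_report`, and the predicted class probabilities for the single observation used earlier can be inspected with `predict_proba`.

```
from sklearn.metrics import classification_report

# Per-class precision, recall and F1 for the test-set predictions above
print(classification_report(y_test, y_pred))

# Class probabilities for the single observation predicted earlier (age 30, salary 15000)
print(classifier.predict_proba(sc.transform([[30, 15000]])))
```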
# 3. Linear Regression – Conceptual Excercises from **Chapter 3** of [An Introduction to Statistical Learning](http://www-bcf.usc.edu/~gareth/ISL/) by Gareth James, Daniela Witten, Trevor Hastie and Robert Tibshirani. I've elected to use Python instead of R. ![table3.4](./images/3_table3.4.png) **Q1.** Describe the null hypotheses to which the p-values given in Table 3.4 correspond. Explain what conclusions you can draw based on these p-values. Your explanation should be phrased in terms of sales, TV, radio, and newspaper, rather than in terms of the coefficients of the linear model. **A.** The null hypotheses are 1. There is no relationship between amount spent on TV advertising and Sales 2. There is no relationship between amount spent on radio ads and Sales 3. There is no relationship between amount spent on newspaper ads and Sales The p-values given in table 3.4 suggest that we can reject the null hypotheses 1 & 2, it seems likely that there is a relationship between TV ads and Sales, and radio ads and sales. The p-value associated with the t-statistic for newspaper ads is high which suggests that we *cannot* reject null hypothesis 3. This suggests that there is no significant relationship between newspaper ads and sales. **Q2.** Carefully explain the differences between the KNN classifier and KNN regression methods. **A.** The KNN classifier and the KNN regression methods are largely similar. The KNN classifier determines a decision boundary which can be used to segment data into 2 or more clusters or groups. KNN regression is non-parmetric method for estimating a regression function that can be used to predict some quantitivie variable. **Q3.** Suppose we have a data set with five predictors, X1 = GPA, X2 = IQ, X3 = Gender (1 for Female and 0 for Male), X4 = Interaction between GPA and IQ, and X5 = Interaction between GPA and Gender. The response is starting salary after graduation (in thousands of dollars). Suppose we use least squares to fit the model, and get β_0 = 50, β_1 = 20 , β_2 = 0.07 , β_3 = 35 , β_4 = 0.01 , β_5 = −10 . **(a)** Which answer is correct, and why? - i. For a fixed value of IQ and GPA, males earn more on average than females. - ii. For a fixed value of IQ and GPA, females earn more on average than males. - iii. For a fixed value of IQ and GPA, males earn more on average than females provided that the GPA is high enough. - iv. For a fixed value of IQ and GPA, females earn more on average than males provided that the GPA is high enough. **A.** iii. For a fixed value of IQ and GPA, males earn more on average than females provided that the GPA is high enough. Because X3 is our dummy variable for gender with 1 for female and 0 male, and coefficient 35, which means – all else being equal – the model will estimate a starting salary for females $35k higher than for males, but there is an additional interaction variable concerning GPA and gender which means if GPA > 3.5 then males earn more than females. **(b)** Predict the salary of a female with IQ of 110 and a GPA of 4.0 ``` def f(gpa, iq, gender): return 50 + 20*gpa + 0.07*iq + 35*gender + 0.01*gpa*iq + (-10*gpa*gender) gpa = 4 iq = 110 gender = 1 print('$' + str(f(gpa, iq, gender) * 1000)) ``` (c) True or false: Since the coefficient for the GPA/IQ interaction term is very small, there is very little evidence of an interaction effect. Justify your answer. False: the interaction effect might be small but we would need to inspect the standard error to understand if this interaction effect is significant. 
If the standard error is also very small then it might still be considered a significant effect. **Q4.** I collect a set of data (n = 100 observations) containing a single predictor and a quantitative response. I then fit a linear regression model to the data, as well as a separate cubic regression, i.e. Y = β0 + β1X + β2X^2 + β3X^3 + ε **(a)** Suppose that the true relationship between X and Y is linear, i.e. Y = β0 + β1X + ε. Consider the training residual sum of squares (RSS) for the linear regression, and also the training RSS for the cubic regression. Would we expect one to be lower than the other, would we expect them to be the same, or is there not enough information to tell? Justify your answer. **A:** We would expect the training RSS for the cubic model because it is more flexible which allows it to fit more closely variance in the training data – which will reduce RSS despite this note being representative of a closer approaximation to the true linear relationship that is f(x). **(b)** Answer (a) using test rather than training RSS. **A:** We would expect the test RSS for the linear regression to be lower because the assumption of high bias is correct and so the lack of flexibility in that model is of no cost in estimating the true f(x). The cubic model is more flexible, and so is likely to overfit the training data meaning that the fit of the model will be affected by variance in the training data that is not representive of the true f(x). **(c)** Suppose that the true relationship between X and Y is not linear, but we don’t know how far it is from linear. Consider the training RSS for the linear regression, and also the training RSS for the cubic regression. Would we expect one to be lower than the other, would we expect them to be the same, or is there not enough information to tell? Justify your answer. **A:** We expect training RSS to decrease as the the variance/flexibility of our model increases. This holds true regardles of the true value of f(x). So we expect the cubic model to result in a lower training RSS **(d)** Answer (c) using test rather than training RSS. There is not enough information to answer this fully. If the true relationship is highly non-linear and there is low noise (or irreducible error) in our training data then we might expect the more flexible cubic model to deliver a better test RSS. However, if the relationship is only slightly non-linear or the noise in our training data is high then a linear model might deliver better results. **5.** Consider the fitted values that result from performing linear regression without an intercept. In this setting, the i-th fitted value takes the form: ![Screen%20Shot%202018-09-06%20at%2018.57.02.png](attachment:Screen%20Shot%202018-09-06%20at%2018.57.02.png) What is a_i′ ? *Note: We interpret this result by saying that the fitted values from linear regression are linear combinations of the response values.* ![IMG_1922.jpg](./images/3_5.jpg) **6.** Using (3.4), argue that in the case of simple linear regression, the least squares line always passes through the point (x ̄, y ̄). ![IMG_1923.jpg](./images/3_6.jpg) **7.** It is claimed in the text that in the case of simple linear regression of Y onto X, the R2 statistic (3.17) is equal to the square of the correlation between X and Y (3.18). Prove that this is the case. For simplicity, you may assume that x ̄ = y ̄ = 0. ![IMG_1924.jpg](./images/3_7.jpg)
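A quick numerical check of the claim in question 7, not part of the original exercises: for simple least-squares regression with an intercept, the R² statistic equals the squared sample correlation between X and Y. The data below are simulated, so the particular numbers are arbitrary.

```
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 3.0 * x + rng.normal(size=100)

# Least-squares fit of y = b0 + b1 * x
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
y_hat = b0 + b1 * x

rss = np.sum((y - y_hat) ** 2)
tss = np.sum((y - y.mean()) ** 2)
r_squared = 1 - rss / tss
corr = np.corrcoef(x, y)[0, 1]

print(r_squared, corr ** 2)  # the two values agree
```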
<table> <tr align=left><td><img align=left src="https://i.creativecommons.org/l/by/4.0/88x31.png"> <td>Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license. (c) Kyle T. Mandli</td> </table> ``` %matplotlib inline from __future__ import print_function import numpy import matplotlib.pyplot as plt ``` # Hyperbolic Equations - Part II ## Characteristic Tracing The common way to solve hyperbolic PDEs analytically is by using the method of characteristics but up until now we really have not tried to use this theory to construct a numerical method. Thinking about the value of the solution at a point $(x_j, t + \Delta t)$ we know for $a > 0$ that we look backwards to the point $(x_j - a \Delta t, t)$ where the solution there informs the solution at the point of interest. This works great until we start thinking about a discretized grid where we only know the solution at time $t$ at a discrete set of points. ![Characteristic Tracing nu != 1](./images/characteristic_tracing_1.png) If the characteristics that intersects with $(x_j, t + \Delta t)$ also intersects with a point at time $t$ then we are ok. Usually this will not be the case unless we specifically choose $\Delta x$ and $\Delta t$ such that this is true. It turns out of course that this happens if $$ \frac{a \Delta t}{\Delta x} = 1, $$ exactly the upper bound of our stability results. ![Characteristic Tracing nu == 1](./images/characteristic_tracing_2.png) Similarly if $\nu < 1$ then we know that the characteristic will not hit the grid points exactly. Note also that due to the constraint that $|\nu| \leq 1$ that we know that the characteristic cannot pass $x_{j-1}$. We could also instead interpolate between the two values that the characteristic splits and find the value in question. Show that doing this using linear interpolation leads to the upwind method. For the linear interpolation we know that the intersection is at $x_p = x_j - a \Delta t$. The linear interpolant is $$\begin{aligned} P_1(x) &= \frac{x - x_{j-1}}{x_{j} - x_{j-1}} U^n_{j} + \frac{x - x_j}{x_{j-1} - x_j} U^n_{j-1} \\ & = \frac{x - x_{j-1}}{\Delta x} U^n_{j} - \frac{x - x_j}{\Delta x} U^n_{j-1} \end{aligned}$$ so that the value is $$\begin{aligned} U^{n+1}_j = P_1(x_j - a \Delta t) &= \frac{x_j - a \Delta t - x_{j-1}}{\Delta x} U^n_{j} - \frac{x_j - a \Delta t - x_j}{\Delta x} U^n_{j-1} \\ &= \frac{\Delta x - a \Delta t}{\Delta x} U^n_{j} + \frac{a \Delta t}{\Delta x} U^n_{j-1} \\ &= U^n_{j} - \frac{a \Delta t}{\Delta x} (U^n_{j} - U^n_{j-1}). 
\end{aligned}$$ Using a similar technique we can also find the Beam-Warming method with quadratic interpolation: $$\begin{aligned} P_2(x) &= \frac{(x - x_{j-1})(x - x_{j-2})}{(x_{j} - x_{j-1}) (x_{j} - x_{j-2})} U^n_{j} + \frac{(x - x_{j})(x - x_{j-2})}{(x_{j-1} - x_{j}) (x_{j-1} - x_{j-2})} U^n_{j-1} + \frac{(x - x_{j})(x - x_{j-1})}{(x_{j-2} - x_{j}) (x_{j-2} - x_{j-1})} U^n_{j-2} \\ &=\frac{(x - x_{j-1})(x - x_{j-2})}{2 \Delta x^2} U^n_{j} - \frac{(x - x_{j})(x - x_{j-2})}{\Delta x^2} U^n_{j-1} + \frac{(x - x_{j})(x - x_{j-1})}{2 \Delta x^2} U^n_{j-2} \\ &=\frac{1}{\Delta x^2} \left[\frac{1}{2} U^n_{j} (x - x_{j-1})(x - x_{j-2}) - U^n_{j-1} (x - x_{j})(x - x_{j-2}) + \frac{1}{2} U^n_{j-2} (x - x_{j})(x - x_{j-1}) \right ] \end{aligned}$$ and finally $$\begin{aligned} U^{n+1}_j = P_2(x_j - a \Delta t) &= \frac{1}{\Delta x^2} \left[\frac{1}{2} U^n_{j} (\Delta x - a \Delta t)(2 \Delta x - a \Delta t) - U^n_{j-1} (- a \Delta t)(2 \Delta x - a \Delta t) + \frac{1}{2} U^n_{j-2} (- a \Delta t)(\Delta x - a \Delta t) \right ] \\ &= \frac{1}{\Delta x^2} \left[\frac{1}{2} U^n_{j} (2 \Delta x^2 - 3 a \Delta t \Delta x + a \Delta t^2) - U^n_{j-1} (-2a \Delta t \Delta x + a^2 \Delta t^2) + \frac{1}{2} U^n_{j-2} (-a \Delta t \Delta x + a^2 \Delta t^2) \right ] \\ &= U^n_j - \frac{a \Delta t}{2 \Delta x} (3 U^n_j - 4 U^n_{j-1} + U^n_{j-2}) + \frac{a \Delta t^2}{2 \Delta x^2} (U^n_j - 2 U^n_{j-1} + U^n_{j-2}) \end{aligned}$$ ## The Courant-Friedrichs-Lewy (CFL) condition One interesting result of our characteristic analysis was that the stability criteria caused the characteristic intersection with $t_n$ to be within the interval $[x_{j-1}, x_j]$ when $a > 0$. This is indicative of a more general principle for stability for numerics for PDEs, due to Courant, Friedrichs, and Lewy and often called the CFL condition. The stability condition that we have been observing time and time again $$ \nu = \left | \frac{a \Delta t}{\Delta x} \right | \leq 1 $$ turns out to be a necessary condition for methods developed to solve the advection equation. The value $\nu$ is often called the *Courant number* due to this. ### Domain of Dependence To make the more general statement about the CFL condition and the Courant number we need to talk about what the *domain of dependence* is for a given PDE. We already know what this is for the advection equation. We know the solution at $(X, T)$ is $u(X, T) = u_0(X - a T)$. The domain of dependence then is $$ \mathcal{D}(X,T) = \{X - a T\}. $$ Another way to think about this is to consider what points could we change that would effect the solution at $(X,T)$. In the case of the advection equation it is one point. More generally for other PDEs we might expect the domain of dependence to be larger than a single point but rather a set of points (as is the case for systems of advection equations) or an entire interval. The heat equation is one such equation and has domain of dependence $\mathcal{D}(X, T) = (-\infty, \infty)$. In other words all points in the domain effect all other points at any future time. This type of equation is also said to have infinite *propagation speed* and is the case for any parabolic PDE and constitutes another way to classify more complex PDEs. One could possibly reject this idea for the heat equation after all the Green's function for a particular point decays exponentially fast away from a point but unfortunately is still not fast enough. 
This is also the source of the conclusion and physical break down of the diffusion model, material (or heat) will travel infinitely fast. ### Numerical Domain of Dependence A numerical method also has a domain of dependence determined by the stencil used. For instance the Lax-Friedrichs method has the solution $U^n_j$ dependent on the points $U^{n-1}_{j+1}$, $U^{n-1}_{j}$, and $U^{n-1}_{j-1}$. This is generally true for the three-point methods we developed earlier including the Lax-Wendroff method. We can also trace backwards further in time to see which points $U^{n-1}_{j+1}$, $U^{n-1}_{j}$, and $U^{n-1}_{j-1}$ depend on to see a growing cone of dependence. ![Domain of Dependence](./images/characteristic_tracing_3.png) As the grid is refined in both time and space respecting the stability criteria (the CFL condition) we then might expect that the numerical domain of dependence might converge to the true one. This is actually not true for the three-point stencils but in fact a weaker condition does hold, that the numerical domain of dependence should contain the PDE's domain of dependence. If we say continue to refine our grid with the ratio between $\Delta t / \Delta x = r$ then the domain of dependence for the point $(X,T)$ will fill in the interval $[X - T/ r, X+ T/r]$. Since we want the computed solution $U(X,T)$ to converge to the true solution $u_0(X - a T)$ we need to require $$ X - T/r \leq X - a T \leq X + T /r. $$ This basically implies that $u_0(X - aT)$ lies in the numerical cone of dependence. This also implies that $|a| \leq 1 /r$ and therefore $|a \Delta t / \Delta x| \leq 1$ again giving us the familiar stability criteria. This then leads us to the general statement of the CFL condition. The CFL condition can then be summed up in the following necessary condition: > A numerical method is convergent only if its numerical domain of dependence contains the domain of dependence determined from the original PDE as $\Delta t \rightarrow 0$ and $\Delta x \rightarrow 0$. ### Example - Upwind Methods Numerical domain depends on the sign of $a$ but has a 2 point stencil. Note that if we pick the wrong direction for the upwinding that as $\Delta t$ and $\Delta x$ go to 0 the point $X - a T$ will never lie in the cone of dependence. ### Example - Heat Equation We have mentioned already that the true domain of dependence for the heat equation is the entire domain. How does that work for the heat equation then, especially with an implicit method? This would imply that any 3-point stencil (which was what we had been using) in fact violates the CFL condition. This is indeed true if we fix the ratio of $\Delta t / \Delta x$ but in fact we had a stricter requirement for the relationship, that $\Delta t / \Delta x^2 \leq 1 / 2$. This expands the domain of dependence as $\Delta t \rightarrow 0$ fast enough that it will cover the entire domain. For implicit methods, such as Crank-Nicholson, the CFL condition is satisfied for any time step $\Delta t$ due to the coupling of every point to every other point. ## Modified Equations Another powerful tool for analyzing numerical methods is the use of modified equations. This approach is similar to what we used for deriving local truncation error and reveals more about how we might expect a given numerical method to perform and what the error might appear as. The basic idea is to find a new PDE that may be solved **exactly** by the numerical method. 
In other words if we had a PDE $v_t = f(v, v_x, v_{xx}, \cdots)$ then our approximate solution given some $\Delta t$ and $\Delta x$ would satisfy $U^n_j = v(x_j, t_n)$. The question can also be posed "is there a PDE that $U^n_j$ better captures?". We can answer this question via Taylor series expansions. ### Example - Upwind Method The upwind method is $$ U^{n+1}_j = U^n_j - \frac{a \Delta t}{\Delta x} (U^n_j - U^n_{j-1}). $$ assuming $a > 0$. Assume that we have a function $v(x,t)$ and an associated PDE (which we do not know yet) that the upwind method solves exactly. First replace the discrete solution $U$ with the continuous function $v(x,t)$ so that we have $$ v(x, t + \Delta t) = v(x,t) - \frac{a \Delta t}{\Delta x} (v(x,t) - v(x-\Delta x,t)). $$ Using Taylor series we know $$\begin{aligned} \left(v + v_t \Delta t + \frac{\Delta t^2}{2} v_{tt} + \frac{\Delta t^3}{6} v_{ttt} + \cdots \right ) - v + \frac{a \Delta t}{\Delta x} \left( v - v + \Delta x v_x + \frac{\Delta x^2}{2} v_{xx} - \frac{\Delta x^3}{6} v_{xxx} + \cdots \right ) = 0\\ v_t + \frac{\Delta t}{2} v_{tt} + \frac{\Delta t^2}{6} v_{ttt} + \cdots + a \left( v_x + \frac{\Delta x}{2} v_{xx} - \frac{\Delta x^2}{6} v_{xxx} + \cdots \right ) = 0. \end{aligned}$$ Reorganizing the terms in the equation we have $$ v_t + a v_x = \frac{1}{2}(a \Delta x v_{xx} - \Delta t v_{tt}) + \frac{1}{6} (a \Delta x^2 v_{xxx} - \Delta t^2 v_{ttt}) + \cdots $$ This is the PDE that $v$ satisfies. We can see here that if $\Delta t$ and $\Delta x$ go to zero we can expect that we will recover the original equation the method was meant to solve. The dominant terms on the right hand side though give us a glimpse of the behavior of the solution $v$ if $\Delta t$ and $\Delta x$ are non-zero. For instance if we consider the $\mathcal{O}(\Delta x, \Delta t)$ terms we have the equation $$ v_t + a v_x = \frac{1}{2}(a \Delta x v_{xx} - \Delta t v_{tt}), $$ an equation that also includes something that looks like the second order wave equation. We can rewrite this even more explicitly by differentiating both sides with respect to $t$ $$ v_{tt} = -a v_{xt} + \frac{1}{2} (a \Delta x v_{xxt} - \Delta t v_{ttt}) $$ and with respect to $x$ $$ v_{tx} = -a v_{xx} + \frac{1}{2} (a \Delta x v_{xxx} - \Delta t v_{ttx}) $$ which combined leads to $$ v_{tt} = a^2 v_{xx} + \mathcal{O}(\Delta t). $$ Inserting this back into the original expression on the right hand side we can get rid of the second order derivative in time to find $$ v_t + a v_x = \frac{1}{2} a \Delta x \left(1 - \frac{a \Delta t}{\Delta x} \right) v_{xx} + \mathcal{O}(\Delta x^2, \Delta t^2) $$ which is an advection-diffusion equation similar to what we saw before except now explicitly formulated in the continuous case. We can also say then that the upwind discretization gives a solution to the above advection-diffusion equation to second-order accuracy. So what can we take away from this? - This leading order behavior leads us to believe that the error will be diffusive in nature. - If $a \Delta t = \Delta x$, i.e. the Courant number $\nu = 1$, then the exact solution will be recovered. - The coefficient in front of the diffusion operator is $\frac{1}{2} (a \Delta x - a^2 \Delta t)$ which is positive if $0 < a \Delta t / \Delta x < 1$, another way to see the stability criteria. 
### Example - Lax-Wendroff Following the same procedure we can derive the leading order terms (up to $\mathcal{O}(\Delta t^2, \Delta x^2)$) for Lax-Wendroff to find $$ v_t + a v_x = -\frac{1}{6} a \Delta x^2 \left( 1 - \left(\frac{a \Delta t}{\Delta x}\right)^2 \right) v_{xxx}. $$ We can observe a few things from this modified equation - The Lax-Wendroff approximates an advection-dispersion equation to third order. - The dominant error will be dispersive (the third derivative does this) although this error will be smaller than the diffusive error from the up-wind method above. #### An Aside - Dispersion Consider the PDE $$ u_t = u_{xxx} $$ as a Cauchy problem. If we Fourier transform the equation we arrive at the ODE $$ \hat{u~}_t(\xi,t) = - i \xi^3 \hat{u~}(\xi, t) $$ which has the solution $$ \hat{u~}(\xi, t) = \hat{u~}_0(\xi) e^{-i \xi^3 t}. $$ Note that this looks like the general solution to an advection problem in that waves will maintain their amplitude, however each Fourier mode now propagates at its own speed dependent on its wave number. We can see this by taking the inverse Fourier transform to find $$ u(x,t) = \frac{1}{\sqrt{2 \pi}} \int^\infty_{-\infty} \hat{u~}_0(\xi) e^{i\xi(x - \xi^2 t)} d\xi. $$ Examining the integrand we can see that the $\xi$ wave number travels at the speed $\xi^2$. In contrast the similar path with the advection equation leads to $$ u(x,t) = \frac{1}{\sqrt{2 \pi}} \int^\infty_{-\infty} \hat{u~}_0(\xi) e^{i\xi(x - a t)} d\xi $$ where we clearly see all wave numbers $\xi$ traveling at the speed $a$. This is the essential difference between advection and dispersion, the components of the solution spread out due to their different effective speeds. We can extend this to more general equations of the form $$ u_t + a u_x + b u_{xxx} = 0 $$ where we can write the solution $$ u(x,t) = \frac{1}{\sqrt{2 \pi}} \int^\infty_{-\infty} \hat{u~}_0(\xi) e^{i \xi (x - (a - b\xi^2) t)} d\xi. $$ Here we see the speed of the components travel at $a - b \xi^2$ so the relative values of $a$ and $b$ will determine which effect will be more dominant. Back to the Lax-Wendroff method's modified equations we can write down the group velocity as $$ c_g = a - \frac{1}{2} a \Delta x^2 \left(1 - \left( \frac{a \Delta t}{\Delta x} \right )^2 \right) \xi^2. $$ For this particular method $c_g < a$ for all $\xi$ and hence the dispersion error trails the waves as seen in the numerical example. We can also retain more terms in the modified equation, if we did this to fourth order we would find $$ v_t + a v_x + \frac{1}{6} a \Delta x^2 \left(1 - \left( \frac{a \Delta t}{\Delta x} \right )^2 \right) v_{xxx} + \epsilon v_{xxxx} = 0 $$ where $\epsilon = \mathcal{O}(\Delta x^3 + \Delta t^3)$. We now see that past the dispersive error we will find hyper-diffusion as the leading error. Dispersion and talking about wave numbers $\xi$ also brings up another important consideration. If we were interested in highly oscillatory waves relative to the grid, i.e. when $\xi \Delta x \gg 0$, we may run into problems representing them on a given grid. For $\xi \Delta x$ sufficiently small this is not a problem and the modified equation gives a reasonable estimate as to the dispersion and therefore the group velocity. If our expected solution contains waves with $\xi \Delta x \gg 0$ then higher order terms may be needed to correctly represent the solution. Usually we therefore rely on plugging in the ansatz $$ u(x,t) = e^{i(\xi x_j - \omega(\xi) t_n)}. 
$$ This clearly has a relation to von Neumann analysis where we have replaced $g(\xi)$ with $e^{-i \omega(\xi) \Delta t}$. ### Example: Beam-Warming As a contrast to the Lax-Wendroff error behavior consider the modified equation for the Beam-Warming method which is $$ v_t + a v_x = \frac{1}{6} a \Delta x^2 \left ( 2- \frac{3 a \Delta t}{\Delta x} + \left(\frac{a \Delta t}{\Delta x} \right)^2 \right ) v_{xxx}. $$ We saw with the numerical example that the dispersion error proceeded the wave and we now can see why as in this case $c_g > a$. ### Example: Leapfrog The modified equation for leapfrog leads to some interesting conclusions as we have some fortunate cancellations. Writing the leapfrog method as $$ \frac{v(x, t + \Delta t) - v(x, t - \Delta t)}{2 \Delta t} + a \frac{v(x + \Delta x, t) - v(x - \Delta x, t)}{2 \Delta x} = 0 $$ we can observe that the modified equations take the form $$ v_t + a v_x + \frac{1}{6} a \Delta x^2 \left(1 - \left( \frac{a \Delta t}{\Delta x} \right )^2 \right) v_{xxx} = \epsilon_1 v_{xxxxx} + \epsilon_2 v_{xxxxxxx} + \cdots. $$ It turns out that all even order derivative terms drop out leaving us only with dispersive error. In fact up to fourth order the leapfrog discretization solves an advection-dispersion equation. We can also see now again why leapfrog should be called non-dissipative as there are no error terms that have even derivatives, i.e. diffusion is not present. As a further exercise we can also compute the exact dispersion relation of the numerical method (the dispersion relation relates the wave number $\xi$ to the phase speed, usually denoted $\omega(\xi)$). Plugging in the familiar ansatz similar to von Neumann analysis $e^{i(\xi x_j - \omega t_n)}$ we have $$ e^{-i\omega \Delta t} = e^{i \omega \Delta t} - \frac{a \Delta t}{\Delta x} \left( e^{i \xi \Delta x} - e^{-i\xi \Delta x}\right) $$ leading to $$ \sin(\omega \Delta t) = \frac{a \Delta t}{\Delta x} \sin(\xi \Delta x). $$ We can also compute the group velocity $c_g$ from this since $$ c_g = \frac{\text{d} \omega}{\text{d} \xi} = \frac{a \cos(\xi \Delta x)}{\cos(\omega \Delta t)} = \pm \frac{a \cos(\xi \Delta x)}{\sqrt{1 - \nu^2 \sin^2(\xi \Delta x)}}. $$ Note again what happens if $\nu = 1$. ## Systems of Hyperbolic Equations We can extend what we have done so far to systems of (linear) hyperbolic PDEs of the form $$ u_t + A u_x = 0 $$ with an appropriate initial condition $u(x,0) = u_0(x)$. Here $A \in \mathbb R^{s \times s}$ where $s$ is the number of equations. In this case there is a well-defined way to extend our previous idea of hyperbolic PDEs as we require $A$ to be diagonalizable with real eigenvalues for the system of PDEs to be called hyperbolic. The consequence of this is that we can write $A$ as $$ A = R \Lambda R^{-1} $$ were $R$ are the eigenvectors with $\Lambda$ containing the eigenvalues on its diagonal. These eigenvalues fill in for the value we saw before as the advective speed $a$ so these being real and finite matches well with our previous idea of what a hyperbolic equation should have, information propagates at a finite speed. Although less trivial we can still solve linear hyperbolic systems due to the decomposition of $A$. Plugging in the decomposition and multiplying by $R^{-1}$ on the right leads to $$ u_t + R \Lambda R^{-1} u_x = 0 \Rightarrow \\ R^{-1} u_t + \Lambda R^{-1} u_x = 0. $$ Defining the *characteristic variables* as $w = R^{-1} u$ we can rewrite the system as a set of decoupled equations with $$ w_t + \Lambda w_x = 0. 
$$ We know how to solve these as $w_p(x,t) = w_p(x - \lambda_p t, 0)$. The initial condition in the characteristic variables is $$ w(x, 0) = R^{-1} u_0(x). $$ Transforming back to the original variables we in principle need only to evaluate $$ u(x,t) = R w(x,t) $$ however this is not so easy due to the form of the solution in $w$. Instead we can write the solution as $$ u(x,t) = \sum^s_{p=1} w_p(x,t) r_p = \sum^s_{p=1}w_p(x - \lambda_p t, 0) r_p. $$ We now have *characteristics of the $p$th family* which refer to the $p$th group of characteristics determined by the $p$th eigenvalue. ### Numerical Methods We can extend most of the methods we have discussed thus far to systems by simply replacing the advective speed $a$ with the matrix $A$. For example the Lax-Wendroff method can be generalized to $$ U^{n+1}_j = U^n_j - \frac{\Delta t}{2 \Delta x} A (U^n_{j+1} - U^n_{j-1}) + \frac{ \Delta t^2}{2 \Delta x^2} A^2 (U^n_{j+1} - 2 U^n_{j} + U^n_{j-1}) $$ provided that the Courant number $\nu < 1$. Note that we now need to be careful about the Courant number, as in general the CFL condition requires that $$ \nu = \max_{1 \leq p \leq s} \left| \frac{\lambda_p \Delta t}{\Delta x} \right | < 1. $$ All of the centered approximations are generally applicable with this stability criterion. The one-sided methods we have considered, however, require a bit more care unless all the eigenvalues of the matrix $A$ are either positive or negative. Instead we must decompose the system into its characteristic variables, apply the method per equation, and re-transform back. Generally these types of methods are classified as *Godunov methods*. ## Boundaries We have mostly ignored boundaries beyond those that are periodic, so let's now turn to the question of non-periodic boundary conditions and consider how to incorporate boundaries so that we can solve initial boundary value problems. Consider now the hyperbolic PDE defined by $$ u_t + a u_x = 0 ~~~~ \Omega = [0, 1] \\ u(x, 0) = u_0(x) $$ before defining boundary conditions. Due to our domain of dependence discussion we know a bit about when the boundaries will impact our solution. ![Characteristic boundaries](./images/characteristics_regions_1.png) For the scalar equation with $a > 0$ we know $$ u(x,t) = \left \{ \begin{aligned} u_0(x - a t) & & 0 \leq x - at \leq 1 \\ g_0(t - x / a) & & \text{otherwise}. \end{aligned} \right . $$ If we have a system of equations with opposite signs for the speeds we might have a situation that looks like the following instead ![System of hyperbolic PDEs with boundaries](./images/characteristics_regions_2.png) ### Upwind for IBVP Say we use the appropriate upwind method for $a > 0$ with the grid $\Delta x = 1 / (m + 1)$. Upwind describes all the internal equations, with the condition on the left boundary providing the $U_0$ value, hence the method completely specifies the problem. The method is stable with the same condition as before. Note that we can no longer directly use von Neumann analysis due to the new boundary. It can still be useful as a stability tool; however, the method of lines analysis can be more useful here. Consider again the system of ODEs $$ U'(t) = A U(t) + g(t) $$ where $$ A = - \frac{a}{\Delta x} \begin{bmatrix} 1 \\ -1 & 1 \\ & -1 & 1 \\ & & \ddots & \ddots \\ & & & -1 & 1 \end{bmatrix} \quad g(t) = \begin{bmatrix} g_0(t) \frac{a}{\Delta x} \\ 0 \\ \vdots \\ 0 \end{bmatrix}.
$$ Unfortunately this new matrix, although similar to the one considered before, has very different properties. This new matrix has eigenvalues uniformly distributed around the circle with radius $a / \Delta x$ and centered at $z = - a / \Delta x$. Why are these changes significant? If we follow our previous analysis we would conclude that the method is stable if $$ 0 \leq \nu \leq 2 $$ which is a bit suspicious. It turns out that this is a necessary condition (although clearly not sufficient). The problem in our analysis stems from the fact that $A$ is highly non-normal and we require further constraints on the $\epsilon$-pseudospectra which again leads to our more familiar stability constraint. ### Outflow Boundaries As was mentioned before, often times a numerical method we would like to use would require the use of boundary conditions where none should exist. We saw this with the Lax-Wendroff method where the outflow boundary points are needed by the stencil. We can specify a *numerical boundary condition* or *artificial boundary condition* instead of using a one-sided approximation. The prescription of numerical boundary conditions is long and the analysis tricky so here we will relegate ourselves to a couple of illustrative examples #### Example Consider the leapfrog method on a finite domain with $a > 0$ and a given inflow boundary condition $g_0(t)$. Say we use the upwind method on the outflow boundary instead of prescribing a condition. It turns out doing so will introduce waves with $\xi \Delta x \approx \pi$ that will move to the left with speed $-a$. ``` # Implement Leapfrog for the PDE u_t + u_x = 0 on a finite domain [0, 10] # domain u_true = lambda x, t: numpy.exp(-5.0 * ((x - t - 7.0)**2)) m = 100 x = numpy.linspace(0, 10.0, m) delta_x = 10.0 / (m - 1) cfl = 0.8 delta_t = cfl * delta_x U = u_true(x, 0) t = 0.0 # Jump start with true-solution U_new = u_true(x, t + delta_t) U_old = U_new.copy() fig = plt.figure() fig.set_figwidth(fig.get_figwidth() * 2) fig.set_figheight(fig.get_figheight() * 3) axes = fig.add_subplot(3, 2, 1) axes.plot(x, U, 'ro') axes.plot(x, u_true(x, t),'k') axes.set_ylim((-0.1, 1.1)) axes.set_title("t = 0.0") t += delta_t for (n, t_final) in enumerate((10*delta_t, 50 * delta_t, 100 * delta_t, 200 * delta_t, 300 * delta_t)): while t < t_final: U_new[0] = U_old[0] - delta_t / delta_x * (U[1] - u_true(0.0, t)) U_new[1:-1] = U_old[1:-1] - delta_t / delta_x * (U[2:] - U[:-2]) # Use upwind for outflow boundary U_new[-1] = U[-1] - delta_t / delta_x * (U[-1] - U[-2]) U_old = U.copy() U = U_new.copy() t += delta_t # Plot solution at t = 17.0 and t = 0.0 axes = fig.add_subplot(3, 2, n + 2) axes.plot(x, U, 'ro') axes.plot(x, u_true(x, t),'k') axes.set_ylim((-0.1, 1.1)) axes.set_title("t = %s" % t_final) plt.show() ``` In general dealing with outflow boundary conditions is very difficult. Often we will instead of prescribing a one-sided method want to specify the numerical boundaries which have special properties, such as being non-reflective or absorbing (we see a slight reflected wave in the above example). ## Alternatives Finally we end this discussion with a few alternatives not mentioned above. ### Higher Order Discretizations We can of course use arbitrarily high order discretizations beyond what we talked about above by employing the method of lines and discretizing space and time $$ U_j'(t) = -a W_j(t) $$ assuming the solution remains sufficiently smooth. 
One example could be $$ W_j(t) = \frac{4}{3} \left(\frac{U_{j+1} - U_{j-1}}{2 \Delta x} \right )- \frac{1}{3} \left(\frac{U_{j+2} - U_{j-2}}{4 \Delta x} \right ). $$ The finite difference discretizations discussed so far can all be used, but the higher-order accuracy we reach comes at the cost of a wider stencil, which leads to difficulties for the usual reasons. Another approach to avoid this is to use compact differencing methods, which solve linear systems. A simple example of this idea is $$ \frac{1}{4} W_{j-1} + W_j + \frac{1}{4} W_{j+1} = \frac{3}{2} \left( \frac{U_{j+1} - U_{j-1}}{2 \Delta x} \right ) $$ which leads to an $\mathcal{O}(\Delta x^4)$ approximation. ### Spectral Methods We can also use spectral methods to transform the spatial derivatives into a linear system. In essence we can derive a dense differentiation matrix $D$ so that $W = D U$. These can easily be generalized to more complex systems of equations but require smooth solutions to work and can be very difficult to analyze. ### Other Time Discretizations We can of course also use different time discretizations. Above we used what looked like forward Euler and leapfrog, but you can use higher-order explicit methods such as Runge-Kutta methods, or an implicit method. Implicit methods can be useful if you are not as concerned about accuracy but want to evolve the solution to large times. Also, although the advection equation itself is not stiff, some hyperbolic PDEs can be, or the spatial discretization can be as well (as is the case with the spectral approach above). ### Conservation Laws and Finite Volume Methods A large and important class of hyperbolic PDEs are conservation laws of the form $$ u_t + f(u)_x = 0. $$ These naturally arise in many areas of physics and describe the evolution of quantities such as mass, momentum, or energy. One such system is the Euler equations describing compressible gas dynamics: $$\begin{aligned} &\rho_t + (\rho u)_x = 0 \\ &(\rho u)_t + (\rho u^2 + p)_x = 0 \\ &(E)_t + [(E + p) u]_x = 0 \end{aligned}$$ describing the density $\rho$, momentum $\rho u$, and energy $E$, coupled with an appropriate equation of state relating pressure, density, and internal energy. A more natural way to formulate conservation laws is with integral forms of the same equations. In general we can write these as $$ \frac{\text{d}}{\text{d}t} \int^{x_2}_{x_1} u(x, t) dx = f(u(x_1,t)) - f(u(x_2, t)). $$ Methods for solving these often evolve cell averages of $u$ rather than point values. In this case our approximation $U^n_i$ is viewed as this average over a grid cell $[x_{i-1/2}, x_{i+1/2}]$ with length $\Delta x$ and centered at $x_i$. The cell average would then be $$ U^n_i \approx \frac{1}{\Delta x} \int^{x_{i+1/2}}_{x_{i-1/2}} u(x, t_n) dx. $$ Finite volume methods generally take this approach, with the specification of a way to evaluate the flux functions being the primary numerical goal.
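To make the higher-order spatial discretizations quoted above concrete, here is a small check (not part of the original notes) of their observed accuracy on the smooth periodic function $u(x) = \sin(2\pi x)$: the standard second-order centered difference, the explicit fourth-order formula, and the compact scheme, the latter closed periodically for simplicity (the periodic treatment is an assumption made only for this illustration). The maximum error should drop by roughly a factor of 4 and 16, respectively, each time the grid is refined by a factor of two.

```
# Sketch only: verify the order of accuracy of the spatial approximations quoted above
# on a smooth periodic function.
import numpy

def centered_2(U, delta_x):
    # Standard second-order centered difference on a periodic grid
    return (numpy.roll(U, -1) - numpy.roll(U, 1)) / (2.0 * delta_x)

def centered_4(U, delta_x):
    # Fourth-order formula W_j = 4/3 * D_{2h} - 1/3 * D_{4h} from the text
    term_1 = (numpy.roll(U, -1) - numpy.roll(U, 1)) / (2.0 * delta_x)
    term_2 = (numpy.roll(U, -2) - numpy.roll(U, 2)) / (4.0 * delta_x)
    return 4.0 / 3.0 * term_1 - 1.0 / 3.0 * term_2

def compact_4(U, delta_x):
    # Compact (Pade) scheme: 1/4 W_{j-1} + W_j + 1/4 W_{j+1} = 3/2 * centered difference,
    # closed periodically here (an assumption for this illustration only)
    m = U.shape[0]
    A = numpy.eye(m) + 0.25 * (numpy.eye(m, k=1) + numpy.eye(m, k=-1))
    A[0, -1] = 0.25
    A[-1, 0] = 0.25
    rhs = 1.5 * (numpy.roll(U, -1) - numpy.roll(U, 1)) / (2.0 * delta_x)
    return numpy.linalg.solve(A, rhs)

for m in (20, 40, 80, 160):
    x = numpy.linspace(0.0, 1.0, m, endpoint=False)
    delta_x = 1.0 / m
    U = numpy.sin(2.0 * numpy.pi * x)
    exact = 2.0 * numpy.pi * numpy.cos(2.0 * numpy.pi * x)
    errors = [numpy.max(numpy.abs(W - exact))
              for W in (centered_2(U, delta_x), centered_4(U, delta_x), compact_4(U, delta_x))]
    print(m, ["%.2e" % e for e in errors])
```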
# WaveNet Sample Generation Fast generation of samples from a pretrained WaveNet model ``` from wavenet_model import WaveNetModel from wavenet_training import AudioFileLoader, WaveNetOptimizer import torch import numpy as np import time from IPython.display import Audio from matplotlib import pyplot as plt from matplotlib import pylab as pl from IPython import display %matplotlib notebook ``` ## Load Model ``` train_samples = ["train_samples/clarinet_g.wav"] sampling_rate = 11025 parameters ="model_parameters/clarinet_g_7-3-256-32-32-64-2" layers = 7 blocks = 3 classes = 256 dilation_channels = 32 residual_channels = 32 skip_channels = 64 kernel_size = 2 dtype = torch.FloatTensor ltype = torch.LongTensor use_cuda = torch.cuda.is_available() if use_cuda: dtype = torch.cuda.FloatTensor ltype = torch.cuda.LongTensor model = WaveNetModel(layers=layers, blocks=blocks, dilation_channels=dilation_channels, residual_channels=residual_channels, skip_channels=skip_channels, classes=classes, kernel_size=kernel_size, dtype=dtype) if use_cuda: model.cuda() print("use cuda") #print("model: ", model) print("receptive_field: ", model.receptive_field) if use_cuda: model.load_state_dict(torch.load(parameters)) else: # move to cpu model.load_state_dict(torch.load(parameters, map_location=lambda storage, loc: storage)) data_loader = AudioFileLoader(train_samples, classes=classes, receptive_field=model.receptive_field, target_length=model.output_length, dtype=dtype, ltype=ltype, sampling_rate=sampling_rate) data_loader.start_new_epoch() data_loader.load_new_chunk() data_loader.use_new_chunk() start_data = data_loader.get_minibatch(1)[0] start_data = start_data.squeeze() #start_tensor = torch.zeros((model.scope)) + 0.0 plt.plot(start_data.cpu().numpy()[:]) ``` ## Generate Samples ``` num_samples = 10000 # number of samples that will be generated out_file = "generated_samples/violin_7-2-128-32-32-32-2.wav" from ipywidgets import FloatProgress from IPython.display import display progress = FloatProgress(min=0, max=100) display(progress) def p_callback(i, total): progress.value += 1 tic = time.time() generated_sample = model.generate_fast(num_samples, first_samples=start_data, #first_samples=torch.zeros((1)), progress_callback=p_callback, sampled_generation=False, temperature=1.0) toc = time.time() print('Generating took {} seconds.'.format(toc-tic)) fig = plt.figure() plt.plot(generated_sample) from IPython.display import Audio Audio(np.array(generated_sample), rate=sampling_rate) print(np.array(generated_sample)) from scipy.io import wavfile wavfile.write(out_file, sampling_rate, np.array(generated_sample)) ```
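As an optional post-processing step (not part of the original notebook), the generated samples can be normalised and converted to 16-bit PCM before writing, which tends to be more widely playable than a raw float WAV. This sketch assumes `generated_sample` is a 1-D array of floats roughly in [-1, 1] and reuses `out_file` and `sampling_rate` from above; the `_int16` filename suffix is an arbitrary choice for illustration.

```
# Optional post-processing sketch: normalise and write the generated samples as 16-bit PCM.
import numpy as np
from scipy.io import wavfile

samples = np.asarray(generated_sample, dtype=np.float32)
peak = np.max(np.abs(samples))
if peak > 0:
    samples = samples / peak                   # scale into [-1, 1]
pcm = (samples * 32767.0).astype(np.int16)     # convert to 16-bit integer samples
wavfile.write(out_file.replace(".wav", "_int16.wav"), sampling_rate, pcm)
```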
# Microtones # This notebook discusses musical scales based on subdividing the octave into something other than the traditional Western 12 equal parts. To run it, you will need my [ipy_magics](https://github.com/ldo/ipy_magics), specifically `csound_magic.py` and `setvar_magic.py`. This uses [Csound](https://csound.com/) to generate the audio samples. ``` # Edit this as appropriate for the correct paths to the Python modules %run ../ipy_magics/setvar_magic.py %run ../ipy_magics/csound_magic.py ``` Define some common preamble to simplify writing the Csound scores: ``` %%setvar common_setup <CsInstruments> sr = 44100 ksmps = 32 nchnls = 1 0dbfs = 2 gibasekey = 31 ; G ginterval = 12 ; default opcode midifreq, k, kk ; lets me express pitches in a more natural way, with separate ; octave number and note within octave, offset by a global base ; key. koct, knote xin kfreq = cpsmidinn(koct * 12 + knote * 12 / ginterval + gibasekey) xout kfreq endop instr 1 kamp linseg 0, 0.02, 1, p3 - 0.04, 1, 0.02, 0 kfreq midifreq p4, p5 araw oscil kamp, kfreq out araw endin instr 10 ; doesn’t produce any sound, but changes the number of ; divisions in the octave. ginterval = p4 prints "%d divisions per octave\n", ginterval endin </CsInstruments> ``` Based on the above instrument definitions, the following Python function generates an ascending and descending scale using the specified number of divisions in the octave: ``` def scale(divisions) : # outputs notes for an ascending and descending scale # over a single octave, with the specified number of # divisions of the octave. base_octave = 2 result = \ [ "i10 0 0 %d" % divisions, "i1 0 1 %d 0" % base_octave, ] for i in range(1, divisions) : result.append("i1 + . %d %d" % (base_octave, i)) #end for result.append("i1 + . %d 0" % (base_octave + 1)) for i in range(divisions, 0, -1) : result.append("i1 + . %d %d" % (base_octave, i - 1)) #end for return \ result #end scale ``` To start with, the following generates the conventional 12-semitone scale: ``` %%csound <CsoundSynthesizer> %insval common_setup <CsScore> t0 180 %insval scale(12) e </CsScore> </CsoundSynthesizer> ``` Now let’s try quarter-tones: ``` %%csound <CsoundSynthesizer> %insval common_setup <CsScore> t0 180 %insval scale(24) e </CsScore> </CsoundSynthesizer> ``` What do other divisions sound like? Feel free to try different values.
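As a quick cross-check in plain Python (not part of the Csound score), the pitch mapping used by the `midifreq` opcode above can be reproduced directly: the MIDI note number is `octave * 12 + note * 12 / divisions + base_key`, and the standard MIDI-to-frequency conversion uses 440 Hz at note 69. This is only a sketch of the same arithmetic, with the default `gibasekey = 31` assumed.

```
# Sketch: reproduce the midifreq pitch mapping for an n-division equal temperament.
def division_frequencies(divisions, octave=2, base_key=31):
    midi_to_hz = lambda m: 440.0 * 2.0 ** ((m - 69) / 12.0)
    return [midi_to_hz(octave * 12 + note * 12.0 / divisions + base_key)
            for note in range(divisions + 1)]

for divisions in (12, 24):
    freqs = division_frequencies(divisions)
    # The last note lies one octave above the first, so the ratio should be exactly 2
    print(divisions, "divisions:", ["%.2f" % f for f in freqs[:4]], "... octave ratio", freqs[-1] / freqs[0])
```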
### **`HyperComplex Cayley Graphs`** For the smaller algebras (up to Pathion), we can construct the [Cayley Graph](http://en.wikipedia.org/wiki/Cayley_graph) using `group()` to display the various rotations of the various imaginary indices, as shown below for quaternions. When displaying edges, the color of the edge will be the same as the vertex points they relate to, e.g. `i` will always be `red`, `j` will be `green` and so on. Negative rotations will be displayed in a darker variant of the color to stand out. Default Options: - `named=None` : name of the hyper complex object to use. - `order=None` : order of the hyper complex object to use. - `filename="G{order}.{filetype}"` : image filename, e.g. G3.png. - `filetype="png"` : the file extension used above. - `figsize=6.0` : figure size in inches. - `figdpi=100.0` : figure dpi (pixels per inch). - `fontsize=14` : font size used for labels. - `element="e"` : used when displaying as string, but not translating to index. - `indices="1ijkLIJKmpqrMPQRnstuNSTUovwxOVWX"` : used to translate indices. - `layers="..."` : select which rotations to display, can be positive or negative. - `translate=False` : translates the indices for easy reading. - `positives=False` : show all positive rotations. - `negatives=False` : show all negative rotations. - `undirected=False` : don't show arrows indicating direction of rotation. - `showall=False` : show all rotations. - `show=False` : show figure to screen. - `save=False` : save figure to disk. ### **`Requirements`** The following packages are required: - itertools - argparse - graph-tool - networkx - numpy ### **`Import Group Library`** ``` from group import * ``` ### **`Complex Numbers`** A [complex number](http://en.wikipedia.org/wiki/Complex_number) is a number that can be expressed in the form `a + bi`, where `a` and `b` are real numbers and `i` is the imaginary unit, satisfying `i = sqrt(-1)`. They are a normed division algebra over the real numbers. There is no natural linear ordering on the set of complex numbers. ``` group(named="Complex", translate=True, show=True, figsize=8) ``` ### **`Quaternion Numbers`** [Quaternions](http://en.wikipedia.org/wiki/Quaternion) are a normed division algebra over the real numbers that can be expressed in the form `a + bi + cj + dk`, where `a`, `b`, `c` and `d` are real numbers and `i`, `j`, `k` are the imaginary units. They are noncommutative. The unit quaternions can be thought of as a choice of a group structure on the 3-sphere S3 that gives the group Spin(3), which is isomorphic to SU(2) and also to the universal cover of SO(3). ``` group(named="Quaternion", translate=True, show=True, figsize=8, layers="i,j,k") ``` ### **`Octonion Numbers`** [Octonions](http://en.wikipedia.org/wiki/Octonion) are a normed division algebra over the real numbers. They are noncommutative and nonassociative, but satisfy a weaker form of associativity, namely they are alternative. The Cayley graph is hard to project into two dimensions, as there are overlapping edges along the diagonals. They can be expressed in the form `a + bi + cj + dk + eL + fI + gJ + hK`, where `a .. h` are real numbers and `i, j, k, L, I, J, K` are the imaginary units.
``` group(named="Octonion", translate=True, show=True, figsize=8, layers="i") ``` ### **`Sedenion Numbers`** [Sedenions](http://en.wikipedia.org/wiki/Sedenion) form a 16-dimensional noncommutative and nonassociative algebra over the reals, obtained by applying the Cayley–Dickson construction to the octonions. They can be expressed in the form `a + i + j + k + L + I + J + K...`, where `a...` are real numbers and `i, j, k, L, I, J, K, m, p, q, r, M, P, Q, R` are the imaginary units. Now things are getting messy, so we will only show the positive layers for each of the four main rotational groups, `L,i,j,k` and `L,I,J,K` as for the octonions, and their duals `m,p,q,r` and `M,P,Q,R`. Even so, it is still hard to visualise, but displaying fewer layers per image will rectify that; you need to display a minimum of one layer, so you could just display single rotational groups for maximum readability. ``` group(named="Sedenion", translate=True, show=True, figsize=8, layers="i") ``` ### **`Pathion Numbers`** Pathions form a 32-dimensional algebra over the reals obtained by applying the Cayley–Dickson construction to the sedenions. ``` group(named="Pathion", translate=True, show=True, figsize=8, layers="i") ```
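When the combined picture is too dense, one option is to render each rotational group to its own file. The following loop is only a sketch using the options documented at the top of this notebook (`layers`, `save`, `filename`); the output filenames themselves are an arbitrary choice for illustration.

```
# Sketch only: write one Cayley-graph image per rotation layer of the Sedenion algebra,
# using the documented `layers`, `save` and `filename` options.
for layer in ("i", "j", "k", "L"):
    group(named="Sedenion", translate=True, save=True, figsize=8,
          layers=layer, filename="sedenion_{0}.png".format(layer))
```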
``` !nvidia-smi pip install -q tf-models-official import pickle import numpy as np from tensorflow.keras.utils import Sequence from tensorflow.keras.optimizers import SGD, Adam from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping from official.nlp import optimization from tensorflow.keras.preprocessing.sequence import pad_sequences from sklearn.utils import shuffle from sklearn.metrics import classification_report import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers ``` # II. Transformer ``` def get_angles(pos, i, d_model): angle_rates = 1 / np.power(10000, (2 * (i//2)) / np.float32(d_model)) return pos * angle_rates def positional_encoding(position, d_model): angle_rads = get_angles(np.arange(position)[:, np.newaxis], np.arange(d_model)[np.newaxis, :], d_model) # apply sin to even indices in the array; 2i angle_rads[:, 0::2] = np.sin(angle_rads[:, 0::2]) # apply cos to odd indices in the array; 2i+1 angle_rads[:, 1::2] = np.cos(angle_rads[:, 1::2]) pos_encoding = angle_rads[np.newaxis, ...] return tf.cast(pos_encoding, dtype=tf.float32) class TransformerBlock(layers.Layer): def __init__(self, embed_dim, num_heads, ff_dim, rate=0.1): super(TransformerBlock, self).__init__() self.att = layers.MultiHeadAttention(num_heads=num_heads, key_dim=embed_dim) self.ffn = keras.Sequential( [layers.Dense(ff_dim, activation="relu"), layers.Dense(embed_dim),] ) self.layernorm1 = layers.LayerNormalization(epsilon=1e-6) self.layernorm2 = layers.LayerNormalization(epsilon=1e-6) self.dropout1 = layers.Dropout(rate) self.dropout2 = layers.Dropout(rate) def call(self, inputs, training): attn_output = self.att(inputs, inputs) attn_output = self.dropout1(attn_output, training=training) out1 = self.layernorm1(inputs + attn_output) ffn_output = self.ffn(out1) ffn_output = self.dropout2(ffn_output, training=training) return self.layernorm2(out1 + ffn_output) class PositionEmbedding(layers.Layer): def __init__(self, max_len, vocab_size, embed_dim): super(PositionEmbedding, self).__init__() self.token_emb = layers.Embedding(input_dim=vocab_size, output_dim=embed_dim) # self.pos_emb = layers.Embedding(input_dim=max_len, output_dim=embed_dim) self.pos_encoding = positional_encoding(max_len, embed_dim) def call(self, x): # x = self.token_emb(x) seq_len = tf.shape(x)[1] # print(maxlen) x += self.pos_encoding[:, :seq_len, :] # positions = tf.range(start=0, limit=maxlen, delta=1) # positions = self.pos_emb(positions) # print(x.shape, positions.shape) # x = self.token_emb(x) return x embed_dim = 768 # Embedding size for each token num_heads = 12 # Number of attention heads ff_dim = 2048 # Hidden layer size in feed forward network inside transformer max_len = 75 num_layers = 1 def transformer_classifer(input_size, loss_object, optimizer, dropout=0.1): inputs = layers.Input(shape=(max_len, embed_dim)) transformer_block = TransformerBlock(embed_dim, num_heads, ff_dim) embedding_layer = PositionEmbedding(100, 2000, embed_dim) # print(inputs.shape) x = embedding_layer(inputs) # print(x.shape) x = transformer_block(x) x = layers.GlobalAveragePooling1D()(x) x = layers.Dropout(dropout)(x) x = layers.Dense(32, activation="relu")(x) x = layers.Dropout(dropout)(x) outputs = layers.Dense(2, activation="softmax")(x) model = keras.Model(inputs=inputs, outputs=outputs) model.compile(loss=loss_object, metrics=['accuracy'], optimizer=optimizer) return model ``` # Training/Testing ``` class BatchGenerator(Sequence): def __init__(self, X, Y, batch_size): self.X, self.Y = X, Y 
self.batch_size = batch_size def __len__(self): return int(np.ceil(len(self.X) / float(self.batch_size))) def __getitem__(self, idx): # print(self.batch_size) dummy = np.zeros(shape=(embed_dim,)) x = self.X[idx * self.batch_size:min((idx + 1) * self.batch_size, len(self.X))] X = np.zeros((len(x), max_len, embed_dim)) Y = np.zeros((len(x), 2)) item_count = 0 for i in range(idx * self.batch_size, min((idx + 1) * self.batch_size, len(self.X))): x = self.X[i] if len(x) > max_len: x = x[-max_len:] x = np.pad(np.array(x), pad_width=((max_len - len(x), 0), (0, 0)), mode='constant', constant_values=0) X[item_count] = np.reshape(x, [max_len, embed_dim]) Y[item_count] = self.Y[i] item_count += 1 return X[:], Y[:, 0] class CustomSchedule(tf.keras.optimizers.schedules.LearningRateSchedule): def __init__(self, d_model, warmup_steps=4000): super(CustomSchedule, self).__init__() self.d_model = d_model self.d_model = tf.cast(self.d_model, tf.float32) self.warmup_steps = warmup_steps def __call__(self, step): arg1 = tf.math.rsqrt(step) arg2 = step * (self.warmup_steps ** -1.5) return tf.math.rsqrt(self.d_model) * tf.math.minimum(arg1, arg2) def train_generator(training_generator, validate_generator, num_train_samples, num_val_samples, batch_size, epoch_num, model_name=None): # learning_rate = CustomSchedule(768) # optim = tf.keras.optimizers.Adam(learning_rate) optim = Adam() epochs = epoch_num steps_per_epoch = num_train_samples num_train_steps = steps_per_epoch * epochs num_warmup_steps = int(0.1*num_train_steps) init_lr = 3e-4 optimizer = optimization.create_optimizer(init_lr=init_lr, num_train_steps=num_train_steps, num_warmup_steps=num_warmup_steps, optimizer_type='adamw') loss_object = tf.keras.losses.SparseCategoricalCrossentropy() model = transformer_classifer(768, loss_object, optimizer) # model.load_weights("hdfs_transformer.hdf5") print(model.summary()) # checkpoint filepath = model_name checkpoint = ModelCheckpoint(filepath, monitor='val_accuracy', verbose=1, save_best_only=True, mode='max', save_weights_only=True) early_stop = EarlyStopping( monitor='val_loss', min_delta=0, patience=5, verbose=0, mode='auto', baseline=None, restore_best_weights=True ) callbacks_list = [checkpoint, early_stop] # class_weight = {0: 245., 1: 1.} model.fit_generator(generator=training_generator, steps_per_epoch=int(num_train_samples / batch_size), epochs=epoch_num, verbose=1, validation_data=validate_generator, validation_steps=int(num_val_samples / batch_size), workers=16, max_queue_size=32, callbacks=callbacks_list, shuffle=True # class_weight=class_weight ) return model def train(X, Y, epoch_num, batch_size, tx, ty, model_file=None): X, Y = shuffle(X, Y) n_samples = len(X) train_x, train_y = X[:int(n_samples * 90 / 100)], Y[:int(n_samples * 90 / 100)] val_x, val_y = X[int(n_samples * 90 / 100):], Y[int(n_samples * 90 / 100):] training_generator, num_train_samples = BatchGenerator(train_x, train_y, batch_size), len(train_x) validate_generator, num_val_samples = BatchGenerator(val_x, val_y, batch_size), len(val_x) print("Number of training samples: {0} - Number of validating samples: {1}".format(num_train_samples, num_val_samples)) model = train_generator(training_generator, validate_generator, num_train_samples, num_val_samples, batch_size, epoch_num, model_name=model_file) test_model(model, tx, ty, batch_size) def test_model(model, x, y, batch_size): x, y = shuffle(x, y) x, y = x[: len(x) // batch_size * batch_size], y[: len(y) // batch_size * batch_size] test_loader = BatchGenerator(x, y, batch_size) 
prediction = model.predict_generator(test_loader, steps=(len(x) // batch_size), workers=16, max_queue_size=32, verbose=1) prediction = np.argmax(prediction, axis=1) y = y[:len(prediction)] report = classification_report(np.array(y), prediction) print(report) from collections import Counter with open("neural-train.pkl", mode="rb") as f: (x_tr, y_tr) = pickle.load(f) x_tr, y_tr = shuffle(x_tr, y_tr) print(Counter(y_tr)) with open("neural-test.pkl", mode="rb") as f: (x_te, y_te) = pickle.load(f) print(Counter(y_te)) print("Data loaded") train(x_tr, y_tr, 20, 64, x_te, y_te, "hdfs_transformer.hdf5") ```
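Before launching a full training run, a quick shape sanity check of the pieces defined above can be helpful. The sketch below (not part of the original pipeline) builds the sinusoidal positional encoding and the classifier on a small random batch; the expected shapes follow from `max_len = 75`, `embed_dim = 768`, and the two-class softmax output.

```
# Quick sanity check of the building blocks defined above (not part of the training pipeline).
import numpy as np
import tensorflow as tf

pe = positional_encoding(max_len, embed_dim)
print("positional encoding shape:", pe.shape)                  # expected: (1, 75, 768)

dummy_batch = np.random.randn(4, max_len, embed_dim).astype("float32")
loss_object = tf.keras.losses.SparseCategoricalCrossentropy()
model = transformer_classifer(embed_dim, loss_object, tf.keras.optimizers.Adam())
print("classifier output shape:", model(dummy_batch).shape)    # expected: (4, 2)
```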
``` import os import sys import warnings from functools import reduce, partial import pandas as pd import numpy as np from sklearn.exceptions import DataConversionWarning from sklearn.metrics import mean_absolute_error import featuretools as ft import featuretools.variable_types as vtypes PROJECT_PATH = os.path.join(os.getcwd(), '../') if PROJECT_PATH not in sys.path: sys.path.append(PROJECT_PATH) from server.ml_models.all_model import AllModelData from server.ml_models.match_model import MatchModelData from server.ml_models.player_model import PlayerModelData from server.ml_models.betting_model import BettingModelData from server.ml_models import EnsembleModel from src.model.metrics import yearly_performance_scores from src.model.charts import graph_yearly_model_performance from src.data.feature_engineering import (match_id, ladder_position, add_elo_rating, city_lat_long, playing_for_team_match_id, player_team_match_id, home_away_df) from server.ml_models.data_config import TEAM_CITIES, VENUE_CITIES SEED = 42 np.random.seed(SEED) warnings.simplefilter("ignore", DataConversionWarning) ``` ## Prepare raw data for featuretools featuretools handles a lot of the data transformation that I was doing myself, and things got messing when I was trying to use ft after doing all my own aggregations/transformations, so I'm taking a step back and passing raw data to ft and letting them take it from there. ``` data_kwargs = {'data_transformers': [], 'index_cols': ['home_team', 'year', 'round_number']} betting_data = BettingModelData player_data = PlayerModelData match_data = MatchModelData bd = betting_data(**data_kwargs) pld = player_data(**data_kwargs) md = match_data(**data_kwargs) SHARED_COLS = ['away_score', 'away_team', 'home_score', 'home_team', 'round_number', 'year'] raw_df = (md.data .merge(bd.data, how='left', on=SHARED_COLS) .sort_values(['year', 'round_number', 'home_team']) .reset_index(drop=True)) raw_df = raw_df[(raw_df['date'] > '2010-01-01') & (raw_df['date'] < '2015-12-31')] raw_df raw_df.info() round_start = (raw_df.groupby(['year', 'round_number'])['date'] .min() .rename('round_start_date') .reset_index()) end_of_round = ( (raw_df.groupby(['year', 'round_number'])['date'].max() + pd.Timedelta(hours=23, minutes=59, seconds=59)) .rename('end_of_round') .reset_index() ) end_of_season = end_of_round.groupby('year')['end_of_round'].max().rename('end_of_season').reset_index() prev_df = raw_df.groupby('team') clean_df = (raw_df .fillna(0) .assign( match_id=match_id, # By default dates w/o time have 00:00:00 as their timestamp end_of_day=lambda df: df['date'] + pd.Timedelta(hours=23, minutes=59, seconds=59), ) .merge(round_start, on=['year', 'round_number'], how='left') .merge(end_of_round, on=['year', 'round_number'], how='left') .merge(end_of_season, on=['year'], how='left') # Sort by date and drop duplicates to get rid of finals replays due to draws .sort_values('date') .drop_duplicates(subset='match_id', keep="last")) clean_df MATCH_COLS = ['team_behinds', 'team_goals', 'match_points', 'match_result', 'score', 'elo_rating', 'ladder_position'] team_df = (pd .concat([home_away_df(True, clean_df), home_away_df(False, clean_df)], sort=True) .sort_index() .rename(columns={'goals': 'team_goals', 'behinds': 'team_behinds'}) .assign(home_city=lambda df: df['team'].map(TEAM_CITIES), ladder_position=ladder_position, elo_rating=add_elo_rating, end_of_day=lambda df: df['date'] + pd.Timedelta(hours=23, minutes=59, seconds=59)) .assign(home_lat_long=lambda df: df['home_city'].map(city_lat_long)) 
.merge(end_of_round, on=['year', 'round_number'], how='left') # Dropping shared columns with match data frame (except match_id) .drop(['date', 'year', 'round_number', 'oppo_score'], axis=1) .set_index('team_match_id', drop=False) .rename_axis(None)) prev_df = (team_df .groupby('team') .shift() .loc[:, MATCH_COLS + ['margin']] .rename(columns=lambda col: 'prev_' + col)) team_match_df = pd.concat([team_df.drop(MATCH_COLS, axis=1), prev_df], axis=1).fillna(0) team_match_df team_match_df.info() team_cols = clean_df.filter(regex='^(home_|away_)').columns match_df = (clean_df .drop(team_cols, axis=1) .assign(venue_city=lambda df: df['venue'].map(VENUE_CITIES)) .assign(venue_lat_long=lambda df: df['venue_city'].map(city_lat_long))) match_df PLAYER_MATCH_COLS = [ 'kicks', 'marks', 'handballs', 'goals', 'behinds', 'hit_outs', 'tackles', 'rebounds', 'inside_50s', 'clearances', 'clangers', 'frees_for', 'frees_against', 'contested_possessions', 'uncontested_possessions', 'contested_marks', 'marks_inside_50', 'one_percenters', 'bounces', 'goal_assists', 'time_on_ground' ] player_dates = (team_match_df[['end_of_day', 'round_start_date', 'team_match_id']]) player_df = (pld.data .assign(team_match_id=playing_for_team_match_id, player_team_match_id=player_team_match_id) .merge(player_dates, on='team_match_id', how='left') .merge(end_of_season, on='year', how='left') .drop(SHARED_COLS + ['player_name'], axis=1) # Normally, there wouldn't be NaNs, but since we filter team_match_df by date, # player_df has a lot more rows .dropna() .set_index('player_team_match_id', drop=False) .rename_axis(None)) prev_player_df = (player_df .groupby('player_id') .shift() .loc[:, PLAYER_MATCH_COLS] .rename(columns=lambda col: 'prev_' + col) .fillna(0)) player_match_df = pd.concat([player_df, prev_player_df], axis=1).drop(PLAYER_MATCH_COLS, axis=1) player_match_df = player_match_df[ (player_match_df['end_of_day'] > '2010-01-01') & (player_match_df['end_of_day'] < '2015-12-31') ] player_match_df # Make match entity as base es = ft.EntitySet('Matches') # Match entity es = es.entity_from_dataframe( entity_id='matches', dataframe=match_df, index='match_id', # Most of the fixture data is known at the beginning of the season, but not all, # so setting it to the start of the round simplifies things time_index='round_start_date', variable_types={ 'venue_city': vtypes.Categorical, 'venue_lat_long': vtypes.LatLong, 'date': vtypes.Datetime, 'venue': vtypes.Categorical, 'year': vtypes.Ordinal, 'round_type': vtypes.Categorical, 'round_number': vtypes.Ordinal, }, ) # TeamMatch entity es = es.entity_from_dataframe( entity_id='team_matches', dataframe=team_match_df, index='team_match_id', # Most of the fixture data is known at the beginning of the season, but not all, # so setting it to the start of the round simplifies things time_index='round_start_date', variable_types={ 'at_home': vtypes.Boolean, 'team': vtypes.Categorical, 'home_city': vtypes.Categorical, 'prev_ladder_position': vtypes.Ordinal, 'home_lat_long': vtypes.LatLong, }, secondary_time_index={ 'end_of_day': ['prev_' + col for col in MATCH_COLS], 'end_of_round': ['prev_ladder_position'] }, ) # Relationship between matches and team matches es = es.add_relationship( ft.Relationship(es['matches']['match_id'], es['team_matches']['match_id']) ) # Team entity es.normalize_entity('team_matches', 'teams', 'team', make_time_index=False, make_secondary_time_index=False, additional_variables=['home_city', 'home_lat_long']) # Venue entity es.normalize_entity('matches', 'venues', 'venue', 
make_time_index=False, make_secondary_time_index=False, additional_variables=['venue_city', 'venue_lat_long']) # Add year entity es.normalize_entity('matches', 'years', 'year', make_time_index=False, make_secondary_time_index=False) # Add round_number entity es.normalize_entity('matches', 'round_numbers', 'round_number', additional_variables=['round_type'], make_time_index=False, make_secondary_time_index=False) es = es.entity_from_dataframe( entity_id='player_matches', dataframe=player_df, index='player_team_match_id', # Most of the fixture data is known at the beginning of the season, but not all, # so setting it to the start of the round simplifies things time_index='round_start_date', variable_types={ 'playing_for': vtypes.Categorical, }, secondary_time_index={ 'end_of_day': ['prev_' + col for col in PLAYER_MATCH_COLS], 'end_of_season': ['brownlow_votes'] }, ) es = es.add_relationship(ft.Relationship(es['team_matches']['team_match_id'], es['player_matches']['team_match_id'])) # Add player entity es.normalize_entity('player_matches', 'players', 'player_id', make_time_index=False) es cutoff_times = (es['team_matches'] .df[['team_match_id', 'round_start_date', 'margin']] .rename(columns={'round_start_date': 'cutoff_time'})) cutoff_times # Generate features using the constructed entityset features = ft.dfs( entityset=es, target_entity='team_matches', agg_primitives=[ 'sum', 'trend', 'count', 'max', 'min', 'last', 'skew', ], trans_primitives=[ 'subtract_numeric', 'divide_numeric', 'haversine', 'add_numeric', 'greater_than', 'less_than', 'month', ], max_depth=2, cutoff_time=cutoff_times, cutoff_time_in_index=True, n_jobs=-1, chunk_size=0.1, training_window=ft.Timedelta(2, 'observations', entity='years'), features_only=True, ignore_entities=['player_matches'], ignore_variables={'team_matches': ['match_id', 'margin']}, verbose=True, ) features fm.info() fm.filter(regex='matches.') prim = ft.primitives.list_primitives() prim[prim['type'] == 'transform'].sort_values('name') ```
github_jupyter
import os import sys import warnings from functools import reduce, partial import pandas as pd import numpy as np from sklearn.exceptions import DataConversionWarning from sklearn.metrics import mean_absolute_error import featuretools as ft import featuretools.variable_types as vtypes PROJECT_PATH = os.path.join(os.getcwd(), '../') if PROJECT_PATH not in sys.path: sys.path.append(PROJECT_PATH) from server.ml_models.all_model import AllModelData from server.ml_models.match_model import MatchModelData from server.ml_models.player_model import PlayerModelData from server.ml_models.betting_model import BettingModelData from server.ml_models import EnsembleModel from src.model.metrics import yearly_performance_scores from src.model.charts import graph_yearly_model_performance from src.data.feature_engineering import (match_id, ladder_position, add_elo_rating, city_lat_long, playing_for_team_match_id, player_team_match_id, home_away_df) from server.ml_models.data_config import TEAM_CITIES, VENUE_CITIES SEED = 42 np.random.seed(SEED) warnings.simplefilter("ignore", DataConversionWarning) data_kwargs = {'data_transformers': [], 'index_cols': ['home_team', 'year', 'round_number']} betting_data = BettingModelData player_data = PlayerModelData match_data = MatchModelData bd = betting_data(**data_kwargs) pld = player_data(**data_kwargs) md = match_data(**data_kwargs) SHARED_COLS = ['away_score', 'away_team', 'home_score', 'home_team', 'round_number', 'year'] raw_df = (md.data .merge(bd.data, how='left', on=SHARED_COLS) .sort_values(['year', 'round_number', 'home_team']) .reset_index(drop=True)) raw_df = raw_df[(raw_df['date'] > '2010-01-01') & (raw_df['date'] < '2015-12-31')] raw_df raw_df.info() round_start = (raw_df.groupby(['year', 'round_number'])['date'] .min() .rename('round_start_date') .reset_index()) end_of_round = ( (raw_df.groupby(['year', 'round_number'])['date'].max() + pd.Timedelta(hours=23, minutes=59, seconds=59)) .rename('end_of_round') .reset_index() ) end_of_season = end_of_round.groupby('year')['end_of_round'].max().rename('end_of_season').reset_index() prev_df = raw_df.groupby('team') clean_df = (raw_df .fillna(0) .assign( match_id=match_id, # By default dates w/o time have 00:00:00 as their timestamp end_of_day=lambda df: df['date'] + pd.Timedelta(hours=23, minutes=59, seconds=59), ) .merge(round_start, on=['year', 'round_number'], how='left') .merge(end_of_round, on=['year', 'round_number'], how='left') .merge(end_of_season, on=['year'], how='left') # Sort by date and drop duplicates to get rid of finals replays due to draws .sort_values('date') .drop_duplicates(subset='match_id', keep="last")) clean_df MATCH_COLS = ['team_behinds', 'team_goals', 'match_points', 'match_result', 'score', 'elo_rating', 'ladder_position'] team_df = (pd .concat([home_away_df(True, clean_df), home_away_df(False, clean_df)], sort=True) .sort_index() .rename(columns={'goals': 'team_goals', 'behinds': 'team_behinds'}) .assign(home_city=lambda df: df['team'].map(TEAM_CITIES), ladder_position=ladder_position, elo_rating=add_elo_rating, end_of_day=lambda df: df['date'] + pd.Timedelta(hours=23, minutes=59, seconds=59)) .assign(home_lat_long=lambda df: df['home_city'].map(city_lat_long)) .merge(end_of_round, on=['year', 'round_number'], how='left') # Dropping shared columns with match data frame (except match_id) .drop(['date', 'year', 'round_number', 'oppo_score'], axis=1) .set_index('team_match_id', drop=False) .rename_axis(None)) prev_df = (team_df .groupby('team') .shift() .loc[:, MATCH_COLS + 
['margin']] .rename(columns=lambda col: 'prev_' + col)) team_match_df = pd.concat([team_df.drop(MATCH_COLS, axis=1), prev_df], axis=1).fillna(0) team_match_df team_match_df.info() team_cols = clean_df.filter(regex='^(home_|away_)').columns match_df = (clean_df .drop(team_cols, axis=1) .assign(venue_city=lambda df: df['venue'].map(VENUE_CITIES)) .assign(venue_lat_long=lambda df: df['venue_city'].map(city_lat_long))) match_df PLAYER_MATCH_COLS = [ 'kicks', 'marks', 'handballs', 'goals', 'behinds', 'hit_outs', 'tackles', 'rebounds', 'inside_50s', 'clearances', 'clangers', 'frees_for', 'frees_against', 'contested_possessions', 'uncontested_possessions', 'contested_marks', 'marks_inside_50', 'one_percenters', 'bounces', 'goal_assists', 'time_on_ground' ] player_dates = (team_match_df[['end_of_day', 'round_start_date', 'team_match_id']]) player_df = (pld.data .assign(team_match_id=playing_for_team_match_id, player_team_match_id=player_team_match_id) .merge(player_dates, on='team_match_id', how='left') .merge(end_of_season, on='year', how='left') .drop(SHARED_COLS + ['player_name'], axis=1) # Normally, there wouldn't be NaNs, but since we filter team_match_df by date, # player_df has a lot more rows .dropna() .set_index('player_team_match_id', drop=False) .rename_axis(None)) prev_player_df = (player_df .groupby('player_id') .shift() .loc[:, PLAYER_MATCH_COLS] .rename(columns=lambda col: 'prev_' + col) .fillna(0)) player_match_df = pd.concat([player_df, prev_player_df], axis=1).drop(PLAYER_MATCH_COLS, axis=1) player_match_df = player_match_df[ (player_match_df['end_of_day'] > '2010-01-01') & (player_match_df['end_of_day'] < '2015-12-31') ] player_match_df # Make match entity as base es = ft.EntitySet('Matches') # Match entity es = es.entity_from_dataframe( entity_id='matches', dataframe=match_df, index='match_id', # Most of the fixture data is known at the beginning of the season, but not all, # so setting it to the start of the round simplifies things time_index='round_start_date', variable_types={ 'venue_city': vtypes.Categorical, 'venue_lat_long': vtypes.LatLong, 'date': vtypes.Datetime, 'venue': vtypes.Categorical, 'year': vtypes.Ordinal, 'round_type': vtypes.Categorical, 'round_number': vtypes.Ordinal, }, ) # TeamMatch entity es = es.entity_from_dataframe( entity_id='team_matches', dataframe=team_match_df, index='team_match_id', # Most of the fixture data is known at the beginning of the season, but not all, # so setting it to the start of the round simplifies things time_index='round_start_date', variable_types={ 'at_home': vtypes.Boolean, 'team': vtypes.Categorical, 'home_city': vtypes.Categorical, 'prev_ladder_position': vtypes.Ordinal, 'home_lat_long': vtypes.LatLong, }, secondary_time_index={ 'end_of_day': ['prev_' + col for col in MATCH_COLS], 'end_of_round': ['prev_ladder_position'] }, ) # Relationship between matches and team matches es = es.add_relationship( ft.Relationship(es['matches']['match_id'], es['team_matches']['match_id']) ) # Team entity es.normalize_entity('team_matches', 'teams', 'team', make_time_index=False, make_secondary_time_index=False, additional_variables=['home_city', 'home_lat_long']) # Venue entity es.normalize_entity('matches', 'venues', 'venue', make_time_index=False, make_secondary_time_index=False, additional_variables=['venue_city', 'venue_lat_long']) # Add year entity es.normalize_entity('matches', 'years', 'year', make_time_index=False, make_secondary_time_index=False) # Add round_number entity es.normalize_entity('matches', 'round_numbers', 
'round_number', additional_variables=['round_type'], make_time_index=False, make_secondary_time_index=False) es = es.entity_from_dataframe( entity_id='player_matches', dataframe=player_df, index='player_team_match_id', # Most of the fixture data is known at the beginning of the season, but not all, # so setting it to the start of the round simplifies things time_index='round_start_date', variable_types={ 'playing_for': vtypes.Categorical, }, secondary_time_index={ 'end_of_day': ['prev_' + col for col in PLAYER_MATCH_COLS], 'end_of_season': ['brownlow_votes'] }, ) es = es.add_relationship(ft.Relationship(es['team_matches']['team_match_id'], es['player_matches']['team_match_id'])) # Add player entity es.normalize_entity('player_matches', 'players', 'player_id', make_time_index=False) es cutoff_times = (es['team_matches'] .df[['team_match_id', 'round_start_date', 'margin']] .rename(columns={'round_start_date': 'cutoff_time'})) cutoff_times # Generate features using the constructed entityset features = ft.dfs( entityset=es, target_entity='team_matches', agg_primitives=[ 'sum', 'trend', 'count', 'max', 'min', 'last', 'skew', ], trans_primitives=[ 'subtract_numeric', 'divide_numeric', 'haversine', 'add_numeric', 'greater_than', 'less_than', 'month', ], max_depth=2, cutoff_time=cutoff_times, cutoff_time_in_index=True, n_jobs=-1, chunk_size=0.1, training_window=ft.Timedelta(2, 'observations', entity='years'), features_only=True, ignore_entities=['player_matches'], ignore_variables={'team_matches': ['match_id', 'margin']}, verbose=True, ) features fm.info() fm.filter(regex='matches.') prim = ft.primitives.list_primitives() prim[prim['type'] == 'transform'].sort_values('name')
This notebook makes sure that I can train models in the same way using either the old or the new code. The reference is mostly <https://github.com/leelabcnbc/tang_jcompneuro/blob/master/results_ipynb/debug/cnn_debug/cnn_fitting_demo.ipynb>

```
import numpy as np
from copy import deepcopy
from collections import OrderedDict

import torch
from torch.autograd import Variable
from torch import FloatTensor

from tang_jcompneuro_legacy import cnn as cnn_legacy
from tang_jcompneuro.cnn import CNN
from tang_jcompneuro.configs.cnn_arch import arch_dict
from tang_jcompneuro.configs.cnn_init import init_dict
from tang_jcompneuro.configs.cnn_opt import opt_dict
from tang_jcompneuro import training_aux

from torch.utils.data import TensorDataset
from torch.backends import cudnn

# disable cudnn for complete determinism.
cudnn.enabled = False

arch_config = arch_dict['legacy_1L']['12']
init_config = init_dict['legacy']
opt_config_list = opt_dict['legacy']
# just to get an idea. so 5 epochs.
total_epoch = 5

opt_config_list

def generate_legacy_opt_config_list():
    opt_param_list = OrderedDict()
    opt_param_list['baseline'] = {'num_epoch': total_epoch,}
    opt_param_list['middle_decay'] = {'weight_decay': 0.001, 'num_epoch': total_epoch,}
    opt_param_list['adam_longer'] = {'momentum': None, 'opt_type': 'Adam', 'lr': 0.001, 'num_epoch': total_epoch}
    return opt_param_list

opt_config_list_old = generate_legacy_opt_config_list()

# prepare some dummy datasets
def provide_training_dataset():
    num_im = 500
    rng_state = np.random.RandomState(seed=0)
    X_ = rng_state.randn(num_im, 1, 20, 20)*0.1
    y_ = rng_state.rand(num_im, 1)*0.01
    # prepare dataset
    # by shuffle, I will be able to test whether random seed behavior is preserved as well.
    return X_, y_

X, y = provide_training_dataset()

def train_one_old_model(X_tensor, y_tensor, opt_param, seed):
    opt_param = deepcopy(opt_param)
    opt_param.update({'seed': seed})
    net_this = cnn_legacy.one_train_loop('baseline',
                                         TensorDataset(FloatTensor(X_tensor), FloatTensor(y_tensor)),
                                         submodel_param=None, opt_param=opt_param,
                                         loss_every=None, verbose=True)[0]
    return net_this

def train_one_new_model(X_tensor, y_tensor, opt_param, seed):
    # generate model.
    model_new = CNN(arch_config, init_config, seed=seed)
    model_new.cuda()
    # generate loss and optimizer.
    training_aux.train_one_case(model_new, (X, y, None, None, None, None), opt_param,
                                legacy=True, legacy_epoch=total_epoch, shuffle_train=False)
    return model_new

def check():
    assert opt_config_list.keys() == opt_config_list_old.keys()
    for k, v in opt_config_list.items():
        print(f'check {k}')
        old_opt_param = opt_config_list_old[k]
        new_opt_param = v
        for seed in range(5):
            model_old = train_one_old_model(X, y, old_opt_param, seed)
            model_new = train_one_new_model(X, y, new_opt_param, seed)
            params_old = print_and_save_parameters(model_old)
            params_new = print_and_save_parameters(model_new)
            check_parameters(params_new, params_old)

parameter_mapping = {
    'conv.conv0.weight': 'features.0.weight',
    'conv.conv0.bias': 'features.0.bias',
    'fc.fc.weight': 'classifier.0.weight',
    'fc.fc.bias': 'classifier.0.bias',
}

def print_and_save_parameters(model):
    parameter_dict = {}
    for x, y in model.named_parameters():
        parameter_dict[x] = y.data.cpu().numpy().copy()
    return parameter_dict

def check_parameters(params_new, params_old):
    assert len(params_new) == len(params_old) == len(parameter_mapping)
    for x, y in params_new.items():
        y_old = params_old[parameter_mapping[x]]
        assert y_old.shape == y.shape
        print(f'check {x}', y.shape, abs(y_old-y).max())
        assert abs(y_old-y).max() < 1e-6

check()
```
```
from IPython.display import Image
Image('../../Python_probability_statistics_machine_learning_2E.png',width=200)
```

This chapter takes a geometric view of probability theory and relates it to familiar concepts in linear algebra and geometry. This approach connects your natural geometric intuition to the key abstractions in probability that can help guide your reasoning. This is particularly important in probability because it is easy to be misled. We need a bit of rigor and some intuition to guide us.

In grade school, you were introduced to the natural numbers (i.e., `1,2,3,..`) and you learned how to manipulate them by operations like addition, subtraction, and multiplication. Later, you were introduced to positive and negative numbers and were again taught how to manipulate them. Ultimately, you were introduced to the calculus of the real line, and learned how to differentiate, take limits, and so on. This progression provided more abstractions, but also widened the field of problems you could successfully tackle. The same is true of probability. One way to think about probability is as a new number concept that allows you to tackle problems that have a special kind of *uncertainty* built into them. Thus, the key idea is that there is some number, say $x$, with a traveling companion, say, $f(x)$, and this companion represents the uncertainties about the value of $x$ as if looking at the number $x$ through a frosted window. The degree of opacity of the window is represented by $f(x)$. If we want to manipulate $x$, then we have to figure out what to do with $f(x)$. For example, if we want $y= 2 x$, then we have to understand how $f(x)$ generates $f(y)$.

Where is the *random* part? To conceptualize this, we need still another analogy: think about a beehive with the swarm around it representing $f(x)$, and the hive itself, which you can barely see through the swarm, as $x$. The random piece is that you don't know *which* bee in particular is going to sting you! Once this happens the uncertainty evaporates. Up until that happens, all we have is a concept of a swarm (i.e., density of bees) which represents a *potentiality* of which bee will ultimately sting. In summary, one way to think about probability is as a way of carrying through mathematical reasoning (e.g., adding, subtracting, taking limits) with a notion of potentiality that is so-transformed by these operations.

## Understanding Probability Density

In order to understand the heart of modern probability, which is built on the Lebesgue theory of integration, we need to extend the concept of integration from basic calculus. To begin, let us consider the following piecewise function

$$ f(x) = \begin{cases} 1 & \mbox{if } 0 < x \leq 1 \\\ 2 & \mbox{if } 1 < x \leq 2 \\\ 0 & \mbox{otherwise } \end{cases} $$

as shown in [Figure](#fig:intro_001). In calculus, you learned Riemann integration, which you can apply here as

<div id="fig:intro_001"></div>
<img src="./fig-probability/intro_001.png" width=500>
<p>Simple piecewise-constant function.</p>

$$ \int_0^2 f(x) dx = 1 + 2 = 3 $$

which has the usual interpretation as the area of the two rectangles that make up $f(x)$. So far, so good.

With Lebesgue integration, the idea is very similar except that we focus on the y-axis instead of moving along the x-axis.
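As a quick numerical sketch of this level-set view (a small addition for illustration, not part of the original text), the following code approximates the same integral two ways on a grid: once by sweeping along the x-axis, and once by grouping grid points by the value of $f$ and weighting each value by the length of its preimage.

```
import numpy as np

# grid over [0, 2]
x = np.linspace(0, 2, 200001)
dx = x[1] - x[0]
f = np.where((x > 0) & (x <= 1), 1, np.where((x > 1) & (x <= 2), 2, 0))

# Riemann-style: sweep along the x-axis and accumulate f(x)*dx
riemann = np.sum(f * dx)

# Lebesgue-style: for each function value, measure the set where f equals it
lebesgue = sum(v * np.sum(f == v) * dx for v in (1, 2))

print(riemann, lebesgue)  # both are approximately 3
```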
The question is: given $f(x) = 1$, what is the set of $x$ values for which this is true? For our example, this is true whenever $x\in (0,1]$. So now we have a correspondence between the values of the function (namely, `1` and `2`) and the sets of $x$ values for which this is true, namely, $\lbrace (0,1] \rbrace$ and $\lbrace (1,2] \rbrace$, respectively. To compute the integral, we simply take the function values (i.e., `1,2`) and some way of measuring the size of the corresponding interval (i.e., $\mu$) as in the following:

$$ \int_0^2 f d\mu = 1 \mu(\lbrace (0,1] \rbrace) + 2 \mu(\lbrace (1,2] \rbrace) $$

We have suppressed some of the notation above to emphasize generality. Note that we obtain the same value of the integral as in the Riemann case when $\mu((0,1]) = \mu((1,2]) = 1$. By introducing the $\mu$ function as a way of measuring the intervals above, we have introduced another degree of freedom in our integration. This accommodates many weird functions that are not tractable using the usual Riemann theory, but we refer you to a proper introduction to Lebesgue integration for further study [[jones2001lebesgue]](#jones2001lebesgue). Nonetheless, the key step in the above discussion is the introduction of the $\mu$ function, which we will encounter again as the so-called probability density function.

## Random Variables

Most introductions to probability jump straight into *random variables* and then explain how to compute complicated integrals. The problem with this approach is that it skips over some of the important subtleties that we will now consider. Unfortunately, the term *random variable* is not very descriptive. A better term is *measurable function*. To understand why this is a better term, we have to dive into the formal constructions of probability by way of a simple example.

Consider tossing a fair six-sided die. There are only six outcomes possible,

$$ \Omega=\lbrace 1,2,3,4,5,6 \rbrace $$

As we know, if the die is fair, then the probability of each outcome is $1/6$. To say this formally, the measure of each set (i.e., $\lbrace 1 \rbrace,\lbrace 2 \rbrace,\ldots,\lbrace 6 \rbrace$) is $\mu(\lbrace 1 \rbrace ) =\mu(\lbrace 2 \rbrace ) \ldots = \mu(\lbrace 6 \rbrace ) = 1/6$. In this case, the $\mu$ function we discussed earlier is the usual *probability* mass function, denoted by $\mathbb{P}$. The measurable function maps a set into a number on the real line. For example, $ \lbrace 1 \rbrace \mapsto 1 $ is one such function.

Now, here's where things get interesting. Suppose you were asked to construct a fair coin from the fair die. In other words, we want to throw the die and then record the outcomes as if we had just tossed a fair coin. How could we do this? One way would be to define a measurable function that says if the die comes up `3` or less, then we declare *heads* and otherwise declare *tails*. This has some strong intuition behind it, but let's articulate it in terms of formal theory. This strategy creates two different non-overlapping sets $\lbrace 1,2,3 \rbrace$ and $\lbrace 4,5,6 \rbrace$. Each set has the same probability *measure*,

$$ \begin{eqnarray*} \mathbb{P}(\lbrace 1,2,3 \rbrace) & = & 1/2 \\\ \mathbb{P}(\lbrace 4,5,6 \rbrace) & = & 1/2 \end{eqnarray*} $$

And the problem is solved. Every time the die comes up $\lbrace 1,2,3 \rbrace$, we record heads and record tails otherwise.

Is this the only way to construct a fair coin experiment from a fair die?
Alternatively, we can define the sets as $\lbrace 1 \rbrace$, $\lbrace 2 \rbrace$, $\lbrace 3,4,5,6 \rbrace$. If we define the corresponding measure for each set as the following

$$ \begin{eqnarray*} \mathbb{P}(\lbrace 1 \rbrace) & = & 1/2 \\\ \mathbb{P}(\lbrace 2 \rbrace) & = & 1/2 \\\ \mathbb{P}(\lbrace 3,4,5,6 \rbrace) & = & 0 \end{eqnarray*} $$

then, we have another solution to the fair coin problem. To implement this, all we do is ignore every time the die shows `3,4,5,6` and throw again. This is wasteful, but it solves the problem. Nonetheless, we hope you can see how the interlocking pieces of the theory provide a framework for carrying the notion of uncertainty/potentiality from one problem to the next (e.g., from the fair die to the fair coin).

Let's consider a slightly more interesting problem where we toss two dice. We assume that each throw is *independent*, meaning that the outcome of one does not influence the other. What are the sets in this case? They are all pairs of possible outcomes from two throws as shown below,

$$ \Omega = \lbrace (1,1),(1,2),\ldots,(5,6),(6,6) \rbrace $$

What are the measures of each of these sets? By virtue of the independence claim, the measure of each is the product of the respective measures of each element. For instance,

$$ \mathbb{P}((1,2)) = \mathbb{P}(\lbrace 1 \rbrace) \mathbb{P}(\lbrace 2 \rbrace) = \frac{1}{6^2} $$

With all that established, we can ask the following question: what is the probability that the sum of the dice equals seven? As before, the first thing to do is characterize the measurable function for this as $X:(a,b) \mapsto (a+b)$. Next, we associate all of the $(a,b)$ pairs with their sum. We can create a Python dictionary for this as shown,

```
d={(i,j):i+j for i in range(1,7) for j in range(1,7)}
```

The next step is to collect all of the $(a,b)$ pairs that sum to each of the possible values from two to twelve.

```
from collections import defaultdict
dinv = defaultdict(list)
for i,j in d.items():
    dinv[j].append(i)
```

**Programming Tip.** The `defaultdict` object from the built-in collections module creates dictionaries with default values when it encounters a new key. Otherwise, we would have had to create default values manually for a regular dictionary.

For example, `dinv[7]` contains the following list of pairs that sum to seven,

```
dinv[7]
```

The next step is to compute the probability measure for each of these items. Using the independence assumption, this means we have to compute the sum of the products of the individual item probabilities in `dinv`. Because we know that each outcome is equally likely, the probability of every term in the sum equals $1/36$. Thus, all we have to do is count the number of items in the corresponding list for each key in `dinv` and divide by `36`. For example, `dinv[11]` contains `[(5, 6), (6, 5)]`. The probability of `5+6=6+5=11` is the probability of this set, which is the sum of the probabilities of the individual elements `{(5,6),(6,5)}`. In this case, we have $\mathbb{P}(11) = \mathbb{P}(\lbrace (5,6) \rbrace)+ \mathbb{P}(\lbrace (6,5) \rbrace) = 1/36 + 1/36 = 2/36$. Repeating this procedure for all the elements, we derive the probability mass function as shown below,

```
X={i:len(j)/36. for i,j in dinv.items()}
print(X)
```

**Programming Tip.** In the preceding code, note that `36.` is written with the trailing decimal mark. This is a good habit to get into because the default division operation changed between Python 2.x and Python 3.x.
In Python 2.x, division is integer division by default, and it is floating-point division in Python 3.x.

The above example exposes the elements of probability theory that are in play for this simple problem while deliberately suppressing some of the gory technical details. With this framework, we can ask other questions, like: what is the probability that half the product of three dice will exceed their sum? We can solve this using the same method as in the following. First, let's create the first mapping,

```
d={(i,j,k):((i*j*k)/2>i+j+k) for i in range(1,7) for j in range(1,7) for k in range(1,7)}
```

The keys of this dictionary are the triples and the values are the logical values of whether or not half the product of three dice exceeds their sum. Now, we do the inverse mapping to collect the corresponding lists,

```
dinv = defaultdict(list)
for i,j in d.items():
    dinv[j].append(i)
```

Note that `dinv` contains only two keys, `True` and `False`. Again, because the dice are independent, the probability of any triple is $1/6^3$. Finally, we collect this for each outcome as in the following,

```
X={i:len(j)/6.0**3 for i,j in dinv.items()}
print(X)
```

Thus, the probability of half the product of three dice exceeding their sum is `136/(6.0**3) = 0.63`. The set that is induced by the random variable has only two elements in it, `True` and `False`, with $\mathbb{P}(\mbox{True})=136/216$ and $\mathbb{P}(\mbox{False})=1-136/216$.

As a final example to exercise another layer of generality, let us consider the first problem with the two dice where we want the probability of a seven, but this time one of the dice is no longer fair. The distribution for the unfair die is the following:

$$ \begin{eqnarray*} \mathbb{P}(\lbrace 1\rbrace)=\mathbb{P}(\lbrace 2 \rbrace)=\mathbb{P}(\lbrace 3 \rbrace) = \frac{1}{9} \\\ \mathbb{P}(\lbrace 4\rbrace)=\mathbb{P}(\lbrace 5 \rbrace)=\mathbb{P}(\lbrace 6 \rbrace) = \frac{2}{9} \end{eqnarray*} $$

From our earlier work, we know the elements corresponding to the sum of seven are the following:

$$ \lbrace (1,6),(2,5),(3,4),(4,3),(5,2),(6,1) \rbrace $$

Because we still have the independence assumption, all we need to change is the probability computation of each of the elements. For example, given that the first die is the unfair one, we have

$$ \mathbb{P}((1,6)) = \mathbb{P}(1)\mathbb{P}(6) = \frac{1}{9} \times \frac{1}{6} $$

and likewise for $(2,5)$ we have the following:

$$ \mathbb{P}((2,5)) = \mathbb{P}(2)\mathbb{P}(5) = \frac{1}{9} \times \frac{1}{6} $$

and so forth. Summing all of these gives the following:

$$ \mathbb{P}_X(7) = \frac{1}{9} \times \frac{1}{6} +\frac{1}{9} \times \frac{1}{6} +\frac{1}{9} \times \frac{1}{6} +\frac{2}{9} \times \frac{1}{6} +\frac{2}{9} \times \frac{1}{6} +\frac{2}{9} \times \frac{1}{6} = \frac{1}{6} $$

Let's try computing this using Pandas instead of Python dictionaries. First, we construct a `DataFrame` object with an index of tuples consisting of all pairs of possible dice outcomes.
```
from pandas import DataFrame
d=DataFrame(index=[(i,j) for i in range(1,7) for j in range(1,7)],
            columns=['sm','d1','d2','pd1','pd2','p'])
```

Now, we can populate the columns that we set up above where the outcome of the first die is the `d1` column and the outcome of the second die is `d2`,

```
d.d1=[i[0] for i in d.index]
d.d2=[i[1] for i in d.index]
```

Next, we compute the sum of the dice in the `sm` column,

```
d.sm=list(map(sum,d.index))
```

With that established, the DataFrame now looks like the following:

```
d.head(5) # show first five lines
```

Next, we fill out the probabilities for each face of the unfair die (`d1`) and the fair die (`d2`),

```
d.loc[d.d1<=3,'pd1']=1/9.
d.loc[d.d1 > 3,'pd1']=2/9.
d.pd2=1/6.
d.head(10)
```

Finally, we can compute the joint probabilities for the sum of the shown faces as the following:

```
d.p = d.pd1 * d.pd2
d.head(5)
```

With all that established, we can compute the density of all the dice outcomes by using `groupby` as in the following,

```
d.groupby('sm')['p'].sum()
```

These examples have shown how the theory of probability breaks down sets and measurements of those sets and how these can be combined to develop the probability mass functions for new random variables.

## Continuous Random Variables

The same ideas work with continuous variables, but managing the sets becomes trickier because the real line, unlike discrete sets, has many limiting properties already built into it that have to be handled carefully. Nonetheless, let's start with an example that should illustrate the analogous ideas. Suppose a random variable $X$ is uniformly distributed on the unit interval. What is the probability that the variable takes on values less than 1/2?

In order to build on our intuition from the discrete case, let's go back to our dice-throwing experiment with the fair dice. The sum of the values of the dice is a measurable function,

$$ Y \colon \lbrace 1,2,\dots,6 \rbrace^2 \mapsto \lbrace 2,3,\ldots, 12 \rbrace $$

That is, $Y$ is a mapping of the Cartesian product of sets to a discrete set of outcomes. In order to compute probabilities of the set of outcomes, we need to derive the probability measure for $Y$, $\mathbb{P}_Y$, from the corresponding probability measures for each die. Our previous discussion went through the mechanics of that. This means that

$$ \mathbb{P}_Y \colon \lbrace 2,3,\ldots,12 \rbrace \mapsto [0,1] $$

Note there is a separation between the function definition and where the target items of the function are measured in probability. More bluntly,

$$ Y \colon A \mapsto B $$

with,

$$ \mathbb{P}_Y \colon B \mapsto [0,1] $$

Thus, to compute $\mathbb{P}_Y$, which is derived from other random variables, we have to express the equivalence classes in $B$ in terms of their progenitor $A$ sets.

The situation for continuous variables follows the same pattern, but with many more deep technicalities that we are going to skip. For the continuous case, the random variable is now,

$$ X \colon \mathbb{R} \mapsto \mathbb{R} $$

with corresponding probability measure,

$$ \mathbb{P}_X \colon \mathbb{R} \mapsto [0,1] $$

But where are the corresponding sets here? Technically, these are the *Borel* sets, but we can just think of them as intervals. Returning to our question, what is the probability that a uniformly distributed random variable on the unit interval takes values less than $1/2$?
Rephrasing this question according to the framework, we have the following: $$ X \colon [0,1] \mapsto [0,1] $$ with corresponding, $$ \mathbb{P}_X \colon [0,1] \mapsto [0,1] $$ To answer the question, by the definition of the uniform random variable on the unit interval, we compute the following integral, $$ \mathbb{P}_X([0,1/2]) = \mathbb{P}_X(0 < X < 1/2) = \int_0^{1/2} dx = 1/2 $$ where the above integral's $dx$ sweeps through intervals of the $B$-type. The measure of any $dx$ interval (i.e., $A$-type set) is equal to $dx$, by definition of the uniform random variable. To get all the moving parts into one notationally rich integral, we can also write this as, $$ \mathbb{P}_X(0 < X < 1/2) = \int_0^{ 1/2 } d\mathbb{P}_X(dx) = 1/2 $$ Now, let's consider a slightly more complicated and interesting example. As before, suppose we have a uniform random variable, $X$ and let us introduce another random variable defined, $$ Y = 2 X $$ Now, what is the probability that $0 < Y < \frac{1}{2}$? To express this in our framework, we write, $$ Y \colon [0,1] \mapsto [0,2] $$ with corresponding, $$ \mathbb{P}_Y \colon [0,2] \mapsto [0,1] $$ To answer the question, we need to measure the set $[0,1/2]$, with the probability measure for $Y$, $\mathbb{P}_Y([0,1/2])$. How can we do this? Because $Y$ is derived from the $X$ random variable, as with the fair-dice throwing experiment, we have to create a set of equivalences in the target space (i.e., $B$-type sets) that reflect back on the input space (i.e., $A$-type sets). That is, what is the interval $[0,1/2]$ equivalent to in terms of the $X$ random variable? Because, functionally, $Y=2 X$, then the $B$-type interval $[0,1/2]$ corresponds to the $A$-type interval $[0,1/4]$. From the probability measure of $X$, we compute this with the integral, $$ \mathbb{P}_Y([0,1/2]) =\mathbb{P}_X([0,1/4])= \int_0^{1/4} dx = 1/4 $$ Now, let's up the ante and consider the following random variable, $$ Y = X^2 $$ where now $X$ is still uniformly distributed, but now over the interval $[-1/2,1/2]$. We can express this in our framework as, $$ Y \colon [-1/2,1/2] \mapsto [0,1/4] $$ with corresponding, $$ \mathbb{P}_Y \colon [0,1/4] \mapsto [0,1] $$ What is the $\mathbb{P}_Y(Y < 1/8)$? In other words, what is the measure of the set $B_Y= [0,1/8]$? As before, because $X$ is derived from our uniformly distributed random variable, we have to reflect the $B_Y$ set onto sets of the $A$-type. The thing to recognize is that because $X^2$ is symmetric about zero, all $B_Y$ sets reflect back into two sets. This means that for any set $B_Y$, we have the correspondence $B_Y = A_X^+ \cup A_X^{-}$. So, we have, $$ B_Y=\Big\lbrace 0<Y<\frac{1}{8}\Big\rbrace=\Big\lbrace 0<X<\frac{1}{\sqrt{8}} \Big\rbrace \bigcup \Big\lbrace -\frac{1}{\sqrt {8}}<X<0 \Big\rbrace $$ From this perspective, we have the following solution, $$ \mathbb{P}_Y(B_Y)=\mathbb{P}(A_X^+) + \mathbb{P}(A_X^{-}) $$ Also, $$ \begin{align*} A_X^+ &= \Big\lbrace 0< X<\frac{1}{\sqrt{8}} \Big\rbrace \\\ A_X^{-} &= \Big\lbrace -\frac{1}{\sqrt {8}} < X<0 \Big\rbrace \end{align*} $$ Therefore, $$ \mathbb{P}_Y(B_Y) = \frac{1}{\sqrt 8} + \frac{1}{\sqrt 8} $$ because $\mathbb{P}(A_X^+) =\mathbb{P}(A_X^-) = 1/\sqrt 8$. Let's see if this comes out using the usual transformation of variables method from calculus. Using this method, the density $f_Y(y) = \frac{1}{ \sqrt y} $. Then, we obtain, $$ \int_0^{\frac{1}{8}} \frac{1}{\sqrt y} dy = \frac{1}{\sqrt 2} $$ which is what we got using the sets method. 
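As a quick numerical sanity check (a small addition for illustration, not part of the original text), we can estimate $\mathbb{P}_Y(Y<1/8)$ for $Y=X^2$ with $X$ uniform on $[-1/2,1/2]$ by simulation and compare against $1/\sqrt{2}\approx 0.7071$,

```
import numpy as np
x = np.random.rand(1000000) - 0.5      # X uniform on [-1/2, 1/2]
y = x**2                               # Y = X**2
print(np.mean(y < 0.125), 1/np.sqrt(2))  # both approximately 0.7071
```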
Note that you would favor the calculus method in practice, but it is important to understand the deeper mechanics, because sometimes the usual calculus method fails, as the next problem shows.

## Transformation of Variables Beyond Calculus

Suppose $X$ and $Y$ are uniformly distributed in the unit interval and we define $Z$ as

$$ Z = \frac{X}{Y-X} $$

What is the $f_Z(z)$? If you try this using the usual calculus method, you will fail (try it!). The problem is that one of the technical prerequisites for the calculus method is not in force. The key observation is that $Z \notin (-1,0]$. If this were possible, then $X$ and $Y$ would have different signs, which cannot happen, given that $X$ and $Y$ are uniformly distributed over $(0,1]$. Now, let's consider when $Z>0$. In this case, $Y>X$ because $Z$ cannot be positive otherwise. For the density function, we are interested in the set $\lbrace 0 < Z < z \rbrace $. We want to compute

$$ \mathbb{P}(Z<z) = \int \int B_1 dX dY $$

with,

$$ B_1 = \lbrace 0 < Z < z \rbrace $$

Now, we have to translate that interval into an interval relevant to $X$ and $Y$. For $0 < Z$, we have $ Y > X$. For $Z < z $, we have $Y > X(1/z+1)$. Putting this together gives

$$ A_1 = \lbrace \max (X,X(1/z+1)) < Y < 1 \rbrace $$

Integrating this over $Y$ as follows,

$$ \int_0^1\lbrace\max(X,X(1/z+1))<Y<1 \rbrace dY=\frac{z-X-Xz}{z}\mbox{ where } z > \frac{X}{1-X} $$

and integrating this one more time over $X$ gives

$$ \int_0^{\frac{z}{1+z}} \frac{-X+z-Xz}{z} dX = \frac{z}{2(z+1)} \mbox{ where } z > 0 $$

Note that this is the computation for the *probability* itself, not the probability density function. To get that, all we have to do is differentiate the last expression to obtain

$$ f_Z(z) = \frac{1}{2(z+1)^2} \mbox{ where } z > 0 $$

Now we need to compute this density using the same process for when $z < -1$. We want the interval $ Z < z $ for when $z < -1$. For a fixed $z$, this is equivalent to $ X(1+1/z) < Y$. Because $z$ is negative, this also means that $Y < X$. Under these terms, we have the following integral,

$$ \int_0^1 \lbrace X(1/z+1) <Y< X\rbrace dY = -\frac{X}{z} \mbox{ where } z < -1 $$

and integrating this one more time over $X$ gives the following

$$ -\frac{1}{2 z} \mbox{ where } z < -1 $$

To get the density for $z<-1$, we differentiate this with respect to $z$ to obtain the following,

$$ f_Z(z) = \frac{1}{2 z^2} \mbox{ where } z < -1 $$

Putting this all together, we obtain,

$$ f_Z(z) = \begin{cases} \frac{1}{2(z+1)^2} & \mbox{if } z > 0 \\\ \frac{1}{2 z^2} & \mbox{if } z < -1 \\\ 0 & \mbox{otherwise } \end{cases} $$

We will leave it as an exercise to show that this integrates out to one.

## Independent Random Variables

Independence is a standard assumption. Mathematically, the necessary and sufficient condition for independence between two random variables $X$ and $Y$ is the following:

$$ \mathbb{P}(X,Y) = \mathbb{P}(X)\mathbb{P}(Y) $$

Two random variables $X$ and $Y$ are *uncorrelated* if,

$$ \mathbb{E}\left( (X-\overline{X})(Y-\overline{Y}) \right)=0 $$

where $\overline{X}=\mathbb{E}(X)$. Note that uncorrelated random variables are sometimes called *orthogonal* random variables. Uncorrelatedness is a weaker property than independence, however.
For example, consider the discrete random variables $X$ and $Y$ uniformly distributed over the set $\lbrace 1,2,3 \rbrace$ where

$$ X = \begin{cases} 1 & \mbox{if } \omega =1 \\\ 0 & \mbox{if } \omega =2 \\\ -1 & \mbox{if } \omega =3 \end{cases} $$

and also,

$$ Y = \begin{cases} 0 & \mbox{if } \omega =1 \\\ 1 & \mbox{if } \omega =2 \\\ 0 & \mbox{if } \omega =3 \end{cases} $$

Thus, $\mathbb{E}(X)=0$ and $\mathbb{E}(X Y)=0$, so $X$ and $Y$ are uncorrelated. However, we have

$$ \mathbb{P}(X=1,Y=1)=0\neq \mathbb{P}(X=1)\mathbb{P}(Y=1)=\frac{1}{9} $$

So, these two random variables are *not* independent. Thus, uncorrelatedness does not imply independence, generally, but there is the important case of Gaussian random variables for which it does. To see this, consider the probability density function for two zero-mean, unit-variance Gaussian random variables $X$ and $Y$,

$$ f_{X,Y}(x,y) = \frac{e^{\frac{x^2-2 \rho x y+y^2}{2 \left(\rho^2-1\right)}}}{2 \pi \sqrt{1-\rho^2}} $$

where $\rho:=\mathbb{E}(X Y)$ is the correlation coefficient. In the uncorrelated case where $\rho=0$, the probability density function factors into the following,

$$ f_{X,Y}(x,y)=\frac{e^{-\frac{1}{2}\left(x^2+y^2\right)}}{2\pi}=\frac{e^{-\frac{x^2}{2}}}{\sqrt{2\pi}}\frac{e^{-\frac{y^2}{2}}}{\sqrt{2\pi}} =f_X(x)f_Y(y) $$

which means that $X$ and $Y$ are independent.

Independence and conditional independence are closely related, as in the following:

$$ \mathbb{P}(X,Y\vert Z) =\mathbb{P}(X\vert Z) \mathbb{P}(Y\vert Z) $$

which says that $X$ and $Y$ are independent conditioned on $Z$. Conditioning independent random variables can break their independence. For example, consider two independent Bernoulli-distributed random variables, $X_1, X_2\in\lbrace 0,1 \rbrace$. We define $Z=X_1+X_2$. Note that $Z\in \lbrace 0,1,2 \rbrace$. In the case where $Z=1$, we have,

$$ \begin{align*} \mathbb{P}(X_1=1\vert Z=1) &>0 \\\ \mathbb{P}(X_2=1\vert Z=1) &>0 \end{align*} $$

Even though $X_1,X_2$ are independent, after conditioning on $Z$, we have the following,

$$ \mathbb{P}(X_1=1,X_2=1\vert Z=1)=0\neq \mathbb{P}(X_1=1\vert Z=1)\mathbb{P}(X_2=1\vert Z=1) $$

Thus, conditioning on $Z$ breaks the independence of $X_1,X_2$. This also works in the opposite direction --- conditioning can make dependent random variables independent. Define $Z_n=\sum_{i=1}^n X_i$ with $X_i$ independent, integer-valued random variables. The $Z_n$ variables are dependent because they stack the same telescoping set of $X_i$ variables. Consider the following,

<!-- Equation labels as ordinary links -->
<div id="_auto1"></div>

$$ \begin{equation} \mathbb{P}(Z_1=i,Z_3=j\vert Z_2=k) = \frac{\mathbb{P}(Z_1=i,Z_2=k,Z_3=j)}{\mathbb{P}(Z_2 =k)} \label{_auto1} \tag{1} \end{equation} $$

<!-- Equation labels as ordinary links -->
<div id="eq:condIndep"></div>

$$ \begin{equation} \ =\frac{\mathbb{P}(X_1 =i)\mathbb{P}(X_2 =k-i)\mathbb{P}(X_3 =j-k) }{\mathbb{P}(Z_2 =k)} \end{equation} \label{eq:condIndep} \tag{2} $$

where the factorization comes from the independence of the $X_i$ variables. Using the definition of conditional probability,

$$ \mathbb{P}(Z_1=i\vert Z_2)=\frac{\mathbb{P}(Z_1=i,Z_2=k)}{\mathbb{P}(Z_2=k)} $$

We can continue to expand Equation [2](#eq:condIndep),

$$ \begin{align*} \mathbb{P}(Z_1=i,Z_3=j\vert Z_2=k) &=\mathbb{P}(Z_1 =i\vert Z_2) \frac{\mathbb{P}( X_3 =j-k)\mathbb{P}( Z_2 =k)}{\mathbb{P}( Z_2 =k)}\\\ &=\mathbb{P}(Z_1 =i\vert Z_2)\mathbb{P}(Z_3 =j\vert Z_2) \end{align*} $$

where $\mathbb{P}(X_3=j-k)\mathbb{P}(Z_2=k)= \mathbb{P}(Z_3=j,Z_2=k)$.
Thus, we see that dependence between random variables can be broken by conditioning to create conditionally independent random variables. As we have just witnessed, understanding how conditioning influences independence is important and is the main topic of study in Probabilistic Graphical Models, a field with many algorithms and concepts to extract these notions of conditional independence from graph-based representations of random variables.

## Classic Broken Rod Example

Let's do one last example to exercise fluency in our methods by considering the following classic problem: given a rod of unit length, broken independently and randomly at two places, what is the probability that you can assemble the three remaining pieces into a triangle? The first task is to find a representation of a triangle as an easy-to-apply constraint. What we want is something like the following:

$$ \mathbb{P}(\mbox{ triangle exists }) = \int_0^1 \int_0^1 \lbrace \mbox{ triangle exists } \rbrace dX dY $$

where $X$ and $Y$ are independent and uniformly distributed in the unit interval. Heron's formula for the area of the triangle,

$$ \mbox{ area } = \sqrt{(s-a)(s-b)(s-c)s} $$

where $s = (a+b+c)/2$ is what we need. The idea is that this yields a valid area only when each of the terms under the square root is greater than or equal to zero. Thus, suppose that we have

$$ \begin{eqnarray*} a & = & X \\\ b & = & Y-X \\\ c & = & 1-Y \end{eqnarray*} $$

assuming that $Y>X$. Thus, the criterion for a valid triangle boils down to

$$ \lbrace (s > a) \wedge (s > b) \wedge (s > c) \wedge (X<Y) \rbrace $$

After a bit of manipulation, this consolidates into:

$$ \Big\lbrace \frac{1}{2} < Y < 1 \bigwedge \frac{1}{2}(2 Y-1) < X < \frac{1}{2} \Big\rbrace $$

which we integrate out by $dX$ first to obtain

$$ \mathbb{P}(\mbox{ triangle exists }) = \int_{0}^1 \int_{0}^1 \Big\lbrace \frac{1}{2} < Y < 1 \bigwedge \frac{1}{2}(2 Y-1) < X < \frac{1}{2} \Big\rbrace dX dY $$

$$ \mathbb{P}(\mbox{ triangle exists }) = \int_{\frac{1}{2}}^1 (1-Y) dY $$

and then by $dY$ to obtain finally,

$$ \mathbb{P}(\mbox{ triangle exists }) = \frac{1}{8} $$

when $Y>X$. By symmetry, we get the same result for $X>Y$. Thus, the final result is the following:

$$ \mathbb{P}(\mbox{ triangle exists }) = \frac{1}{8}+\frac{1}{8} = \frac{1}{4} $$

We can quickly check this result using Python for the case $Y>X$ using the following code:

```
import numpy as np
x,y = np.random.rand(2,1000) # uniform rv
a,b,c = x,(y-x),1-y # 3 sides
s = (a+b+c)/2
np.mean((s>a) & (s>b) & (s>c) & (y>x)) # approx 1/8=0.125
```

**Programming Tip.** The chained logical `&` symbols above tell Numpy that the logical operation should be considered element-wise.
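For completeness, a minimal simulation (a small addition to the text) that sorts the two break points covers both orderings and therefore estimates the full probability of $1/4$,

```
import numpy as np
x,y = np.random.rand(2,100000)          # two independent break points
lo,hi = np.minimum(x,y), np.maximum(x,y)
a,b,c = lo, hi-lo, 1-hi                 # the three pieces
s = (a+b+c)/2
print(np.mean((s>a) & (s>b) & (s>c)))   # approx 1/4 = 0.25
```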
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import math
import pandas_datareader as pdr

# NORMALITY TEST ALGORITHM
def prueba_normalidad(valores):

    # MEAN FUNCTION
    def media(valores):
        resultado = sum(valores)/len(valores)
        return resultado

    # STANDARD DEVIATION FUNCTION
    def des_estandar(valores):
        import math
        calculo = []
        for i in valores:
            calculo.append((i-media(valores))**2)
        suma = sum(calculo)
        resultado = math.sqrt(suma/len(valores))
        return resultado

    # SKEWNESS COEFFICIENT FUNCTION
    def cof_corre(valores):
        calculo = []
        for i in valores:
            calculo.append((i-media(valores))**3)
        suma = sum(calculo)
        resultado = suma/((len(valores)*(des_estandar(valores)**3)))
        return resultado

    # KURTOSIS FUNCTION
    def curtosis(valores):
        calculo = []
        for i in valores:
            calculo.append((i-media(valores))**4)
        suma = sum(calculo)
        resultado = (suma/((len(valores)*(des_estandar(valores)**4))))
        return resultado

    rmedia = str(media(valores))
    rdesviación = str(des_estandar(valores))
    rcoefi = str(cof_corre(valores))
    vacurtosis = curtosis(valores)
    curtosistexto = ""
    vacoefi = cof_corre(valores)
    coefitexto = ""
    rcurtosis = str(curtosis(valores))
    valorMax = str(max(valores))
    valorMin = str(min(valores))

    if (vacoefi) == 0:
        coefitexto = "The distribution is perfectly symmetric about the mean"
    elif (vacoefi) > 0:
        coefitexto = "The distribution is skewed to the right"
    else:
        coefitexto = "The distribution is skewed to the left"

    if (vacurtosis) == 3:
        curtosistexto = "The distribution is perfectly normal"
    elif (vacurtosis) > 3:
        curtosistexto = "The values of the distribution are concentrated around the mean"
    else:
        curtosistexto = "Many values of the distribution are far from the mean; we have the fat-tails phenomenon"

    print("The number of data points is: ", len(valores))
    print("The maximum value of the data series is: ", valorMax[:7])
    print("The minimum value of the data series is: ", valorMin[:7])
    print("The mean of the values is: ", rmedia[:5])
    print("The standard deviation of the values is: ", rdesviación[:7])
    print("The skewness coefficient of the values is: ", rcoefi[:7])
    print(coefitexto)
    print("The kurtosis of the values is: ", rcurtosis[:8])
    print(curtosistexto)

# Function with the most basic parameters to plot directly
def graficar(valores, eje_x="", eje_y="", titulo=""):
    plt.plot(valores)
    plt.xlabel(eje_x)
    plt.ylabel(eje_y)
    plt.title(titulo)
    plt.show()

def coef_Hurst(valores):
    import math

    # Split the data series into three equal parts
    # (use the function argument rather than the global `datos`)
    primero, segundo, tercero = np.array_split(valores, 3)

    # MEAN FUNCTION
    def media(valores):
        resultado = sum(valores)/len(valores)
        return resultado

    # STANDARD DEVIATION FUNCTION
    def des_estandar(valores):
        calculo = []
        for i in valores:
            calculo.append((i-media(valores))**2)
        suma = sum(calculo)
        resultado = math.sqrt(suma/len(valores))
        return resultado

    # Means of the three intervals
    primeroMe = media(primero)
    segundoMe = media(segundo)
    terceroMe = media(tercero)

    # FUNCTION TO COMPUTE THE MEAN-ADJUSTED SERIES
    def media_ajustada(valores, media):
        calculo = []
        for i in valores:
            calculo.append(i - media)
        return calculo

    primeroAjus = media_ajustada(primero, primeroMe)
    segundoAjus = media_ajustada(segundo, segundoMe)
    terceroAjus = media_ajustada(tercero, terceroMe)

    # CUMULATIVE DEVIATION FUNCTION
    def desvia_acu(valores):
        suma = 0
        nuevocalculo = []
        for i in valores:
            suma = suma + i
            nuevocalculo.append(suma)
        return nuevocalculo

    primeroAcu = desvia_acu(primeroAjus)
    segundoAcu = desvia_acu(segundoAjus)
    terceroAcu = desvia_acu(terceroAjus)

    # RANGE CALCULATION FUNCTION
    def rango(valores):
        rango = max(valores)-min(valores)
        return rango

    primeroRango = rango(primeroAcu)
    segundoRango = rango(segundoAcu)
    terceroRango = rango(terceroAcu)

    primeroDesvi = des_estandar(primero)
    segundoDesvi = des_estandar(segundo)
    terceroDesvi = des_estandar(tercero)

    # RESCALED RANGE FUNCTION
    def R_S(rango, desviacion):
        resultado = rango/desviacion
        return resultado

    primeroRS = R_S(primeroRango, primeroDesvi)
    segundoRS = R_S(segundoRango, segundoDesvi)
    terceroRS = R_S(terceroRango, terceroDesvi)

    # FUNCTION TO FINALLY COMPUTE THE HURST EXPONENT
    def coeficiente(RS, valores):
        resultado = math.log(RS)/math.log(len(valores))
        return resultado

    primeroResul = coeficiente(primeroRS, primero)
    segundoResul = coeficiente(segundoRS, segundo)
    terceroResul = coeficiente(terceroRS, tercero)

    resultadoFinal = (primeroResul + segundoResul + terceroResul)/3
    resultadoCa = str(resultadoFinal)
    porcentaje = str(((primeroResul + segundoResul + terceroResul)/3)*100)

    print("The Hurst exponent for this data series is: ", resultadoCa[:5])
    if (resultadoFinal > 0.5 and resultadoFinal <= 1):
        print("There is some kind of long-term memory in the data. This is a cyclical process")
        print("There is a {}% probability that the results will repeat again soon".format(porcentaje[:5]))
    elif (resultadoFinal >= 0 and resultadoFinal < 0.5):
        print("There is anti-persistence in the series. This is a turbulent process")
    else:
        print("This is an independent process; there is no relationship between the data")

from pandas_datareader import data
appl = data.DataReader("AAPL", start="2008-12-24", end="2008-12-31", data_source="yahoo")["Adj Close"]
datos = appl
datos = datos.to_numpy()

from pandas_datareader import data
appl = data.DataReader("GOOG", start="2011-03-24", end="2011-04-15", data_source="yahoo")["Adj Close"]
datos = appl
datos = datos.to_numpy()

coef_Hurst(datos)
```
[@LorenaABarba](https://twitter.com/LorenaABarba)

12 steps to Navier–Stokes
=====
***

This lesson complements the first interactive module of the online [CFD Python](https://github.com/barbagroup/CFDPython) class, by Prof. Lorena A. Barba, called **12 Steps to Navier–Stokes.** It was written with BU graduate student Gilbert Forsyth.

Array Operations with NumPy
----------------

For more computationally intensive programs, the use of built-in Numpy functions can provide an increase in execution speed many times over. As a simple example, consider the following equation:

$$u^{n+1}_i = u^n_i-u^n_{i-1}$$

Now, given a vector $u^n = [0, 1, 2, 3, 4, 5]\ \ $ we can calculate the values of $u^{n+1}$ by iterating over the values of $u^n$ with a for loop.

```
import numpy

u = numpy.array((0, 1, 2, 3, 4, 5))

for i in range(1, len(u)):
    print(u[i] - u[i-1])
```

This is the expected result and the execution time was nearly instantaneous. If we perform the same operation as an array operation, then rather than calculate $u^n_i-u^n_{i-1}\ $ 5 separate times, we can slice the $u$ array and calculate each operation with one command:

```
u[1:] - u[0:-1]
```

What this command says is: subtract the 0th, 1st, 2nd, 3rd and 4th elements of $u$ from the 1st, 2nd, 3rd, 4th and 5th elements of $u$.

### Speed Increases

For a 6 element array, the benefits of array operations are pretty slim. There will be no appreciable difference in execution time because there are so few operations taking place. But if we revisit 2D linear convection, we can see some substantial speed increases.

```
nx = 81
ny = 81
nt = 100
c = 1
dx = 2 / (nx - 1)
dy = 2 / (ny - 1)
sigma = .2
dt = sigma * dx

x = numpy.linspace(0, 2, nx)
y = numpy.linspace(0, 2, ny)

u = numpy.ones((ny, nx)) ##create a 1xn vector of 1's
un = numpy.ones((ny, nx))

###Assign initial conditions
u[int(.5 / dy): int(1 / dy + 1), int(.5 / dx):int(1 / dx + 1)] = 2
```

With our initial conditions all set up, let's first try running our original nested loop code, making use of the iPython "magic" function `%%timeit`, which will help us evaluate the performance of our code.

**Note**: The `%%timeit` magic function will run the code several times and then give an average execution time as a result. If you have any figures being plotted within a cell where you run `%%timeit`, it will plot those figures repeatedly which can be a bit messy.

The execution times below will vary from machine to machine. Don't expect your times to match these times, but you _should_ expect to see the same general trend in decreasing execution time as we switch to array operations.

```
%%timeit
u = numpy.ones((ny, nx))
u[int(.5 / dy): int(1 / dy + 1), int(.5 / dx):int(1 / dx + 1)] = 2

for n in range(nt + 1): ##loop across number of time steps
    un = u.copy()
    row, col = u.shape
    for j in range(1, row):
        for i in range(1, col):
            u[j, i] = (un[j, i] - (c * dt / dx * (un[j, i] - un[j, i - 1])) -
                                  (c * dt / dy * (un[j, i] - un[j - 1, i])))
    u[0, :] = 1
    u[-1, :] = 1
    u[:, 0] = 1
    u[:, -1] = 1
```

With the "raw" Python code above, the mean execution time achieved was 3.07 seconds (on a MacBook Pro Mid 2012). Keep in mind that with these three nested loops, the statements inside the **j** loop are being evaluated more than 650,000 times.
Let's compare that with the performance of the same code implemented with array operations:

```
%%timeit
u = numpy.ones((ny, nx))
u[int(.5 / dy): int(1 / dy + 1), int(.5 / dx):int(1 / dx + 1)] = 2

for n in range(nt + 1): ##loop across number of time steps
    un = u.copy()
    u[1:, 1:] = (un[1:, 1:] - (c * dt / dx * (un[1:, 1:] - un[1:, 0:-1])) -
                              (c * dt / dy * (un[1:, 1:] - un[0:-1, 1:])))
    u[0, :] = 1
    u[-1, :] = 1
    u[:, 0] = 1
    u[:, -1] = 1
```

As you can see, the speed increase is substantial. The same calculation goes from 3.07 seconds to 7.38 milliseconds. 3 seconds isn't a huge amount of time to wait, but these speed gains will increase exponentially with the size and complexity of the problem being evaluated.

```
from IPython.core.display import HTML
def css_styling():
    styles = open("../styles/custom.css", "r").read()
    return HTML(styles)
css_styling()
```
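As a final check (an addition to the original lesson, reusing the variables defined above), we can confirm that the nested-loop version and the array-operation version really do compute the same field:

```
def run_loop():
    u = numpy.ones((ny, nx))
    u[int(.5 / dy): int(1 / dy + 1), int(.5 / dx):int(1 / dx + 1)] = 2
    for n in range(nt + 1):
        un = u.copy()
        row, col = u.shape
        for j in range(1, row):
            for i in range(1, col):
                u[j, i] = (un[j, i] - (c * dt / dx * (un[j, i] - un[j, i - 1])) -
                                      (c * dt / dy * (un[j, i] - un[j - 1, i])))
        u[0, :] = 1
        u[-1, :] = 1
        u[:, 0] = 1
        u[:, -1] = 1
    return u

def run_array():
    u = numpy.ones((ny, nx))
    u[int(.5 / dy): int(1 / dy + 1), int(.5 / dx):int(1 / dx + 1)] = 2
    for n in range(nt + 1):
        un = u.copy()
        u[1:, 1:] = (un[1:, 1:] - (c * dt / dx * (un[1:, 1:] - un[1:, 0:-1])) -
                                  (c * dt / dy * (un[1:, 1:] - un[0:-1, 1:])))
        u[0, :] = 1
        u[-1, :] = 1
        u[:, 0] = 1
        u[:, -1] = 1
    return u

# the two implementations agree to machine precision
print(numpy.allclose(run_loop(), run_array()))
```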
``` import os os.environ["CUDA_VISIBLE_DEVICES"] = "2" %pylab inline from tqdm import tqdm import jax_cosmo as jc mesh_shape= [64, 64, 64] box_size = [25., 25., 25.] cosmo = jc.Planck15(Omega_c= 0.10 - 0.049, Omega_b=0.049, n_s=0.9624, h=0.6711, sigma8=0.8) import readgadget init_cond = '/data/CAMELS/Sims/IllustrisTNG_DM/1P_1_n5/ICs/ics' header = readgadget.header(init_cond) BoxSize = header.boxsize/1e3 #Mpc/h Nall = header.nall #Total number of particles Masses = header.massarr*1e10 #Masses of the particles in Msun/h Omega_m = header.omega_m #value of Omega_m Omega_l = header.omega_l #value of Omega_l h = header.hubble #value of h redshift = header.redshift #redshift of the snapshot Hubble = 100.0*np.sqrt(Omega_m*(1.0+redshift)**3+Omega_l)#Value of H(z) in km/s/(Mpc/h) ptype = [1] #dark matter is particle type 1 ids_i = np.argsort(readgadget.read_block(init_cond, "ID ", ptype)-1) #IDs starting from 0 pos_i = readgadget.read_block(init_cond, "POS ", ptype)[ids_i]/1e3 #positions in Mpc/h vel_i = readgadget.read_block(init_cond, "VEL ", ptype)[ids_i] #peculiar velocities in km/s # Reordering data for simple reshaping pos_i = pos_i.reshape(4,4,4,64,64,64,3).transpose(0,3,1,4,2,5,6).reshape(-1,3) vel_i = vel_i.reshape(4,4,4,64,64,64,3).transpose(0,3,1,4,2,5,6).reshape(-1,3) pos_i = (pos_i/BoxSize*64).reshape([256,256,256,3])[::4,::4,::4,:].reshape([-1,3]) vel_i = (vel_i / 100 * (1./(1+redshift)) / BoxSize*64).reshape([256,256,256,3])[::4,::4,::4,:].reshape([-1,3]) a_i = 1./(1+redshift) scales = [] poss = [] vels = [] # Loading all the intermediate snapshots for i in tqdm(range(34)): snapshot='/data/CAMELS/Sims/IllustrisTNG_DM/1P_1_n5/snap_%03d.hdf5'%i header = readgadget.header(snapshot) redshift = header.redshift #redshift of the snapshot h = header.hubble #value of h ptype = [1] #dark matter is particle type 1 ids = np.argsort(readgadget.read_block(snapshot, "ID ", ptype)-1) #IDs starting from 0 pos = readgadget.read_block(snapshot, "POS ", ptype)[ids] / 1e3 #positions in Mpc/h vel = readgadget.read_block(snapshot, "VEL ", ptype)[ids] #peculiar velocities in km/s # Reordering data for simple reshaping pos = pos.reshape(4,4,4,64,64,64,3).transpose(0,3,1,4,2,5,6).reshape(-1,3) vel = vel.reshape(4,4,4,64,64,64,3).transpose(0,3,1,4,2,5,6).reshape(-1,3) pos = (pos / BoxSize * 64).reshape([256,256,256,3])[::4,::4,::4,:].reshape([-1,3]) vel = (vel / 100 * (1./(1+redshift)) / BoxSize*64).reshape([256,256,256,3])[::4,::4,::4,:].reshape([-1,3]) scales.append((1./(1+redshift))) poss.append(pos) vels.append(vel) import jax import jax.numpy as jnp import jax_cosmo as jc import haiku as hk from jax.experimental.ode import odeint from jaxpm.painting import cic_paint, cic_read, compensate_cic from jaxpm.pm import linear_field, lpt, make_ode_fn, pm_forces from jaxpm.kernels import fftk, gradient_kernel, laplace_kernel, longrange_kernel from jaxpm.nn import NeuralSplineFourierFilter from jaxpm.utils import power_spectrum import numpyro rng_seq = hk.PRNGSequence(1) # Run the reference simulation without correction at the same steps resi = odeint(make_ode_fn(mesh_shape), [poss[0], vels[0]], jnp.array(scales), cosmo, rtol=1e-5, atol=1e-5) # High res simulation figure(figsize=[10,10]) for i in range(16): subplot(4,4,i+1) imshow(cic_paint(jnp.zeros(mesh_shape), poss[::2][i]).sum(axis=0), cmap='gist_stern', vmin=0) k, pk_ref = power_spectrum( compensate_cic(cic_paint(jnp.zeros(mesh_shape), poss[-1])), boxsize=np.array([25.] * 3), kmin=np.pi / 25., dk=2 * np.pi / 25.) 
k, pk_i = power_spectrum( compensate_cic(cic_paint(jnp.zeros(mesh_shape), resi[0][-1])), boxsize=np.array([25.] * 3), kmin=np.pi / 25., dk=2 * np.pi / 25.) loglog(k,pk_ref, label='N-body') loglog(k,pk_i, label='JaxPM without correction') legend() plt.xlabel(r"$k$ [$h \ \mathrm{Mpc}^{-1}$]") plt.ylabel(r"$P(k)$") model = hk.without_apply_rng(hk.transform(lambda x,a : NeuralSplineFourierFilter(n_knots=16, latent_size=32)(x,a))) import pickle params = pickle.load( open( "correction_params/camels_25_64_CV_0_lambda1_01.params", "rb" ) ) def neural_nbody_ode(state, a, cosmo, params): """ state is a tuple (position, velocities) """ pos, vel = state kvec = fftk(mesh_shape) delta = cic_paint(jnp.zeros(mesh_shape), pos) delta_k = jnp.fft.rfftn(delta) # Computes gravitational potential pot_k = delta_k * laplace_kernel(kvec) * longrange_kernel(kvec, r_split=0) # Apply a correction filter kk = jnp.sqrt(sum((ki/pi)**2 for ki in kvec)) pot_k = pot_k *(1. + model.apply(params, kk, jnp.atleast_1d(a))) # Computes gravitational forces forces = jnp.stack([cic_read(jnp.fft.irfftn(gradient_kernel(kvec, i)*pot_k), pos) for i in range(3)],axis=-1) forces = forces * 1.5 * cosmo.Omega_m # Computes the update of position (drift) dpos = 1. / (a**3 * jnp.sqrt(jc.background.Esqr(cosmo, a))) * vel # Computes the update of velocity (kick) dvel = 1. / (a**2 * jnp.sqrt(jc.background.Esqr(cosmo, a))) * forces return dpos, dvel res = odeint(neural_nbody_ode, [poss[0], vels[0]], jnp.array(scales), cosmo, params, rtol=1e-5, atol=1e-5) k, pk_ref = power_spectrum( (cic_paint(jnp.zeros(mesh_shape), poss[-1])), boxsize=np.array([25.] * 3), kmin=np.pi / 25., dk=2 * np.pi / 25.) k, pk_i = power_spectrum( (cic_paint(jnp.zeros(mesh_shape), resi[0][-1])), boxsize=np.array([25.] * 3), kmin=np.pi / 25., dk=2 * np.pi / 25.) k, pk_c = power_spectrum( (cic_paint(jnp.zeros(mesh_shape), res[0][-1])), boxsize=np.array([25.] * 3), kmin=np.pi / 25., dk=2 * np.pi / 25.) params_pgd = pickle.load( open( "correction_params/camels_25_64_pkloss_PGD_CV_0.params", "rb" ) ) def PGD_kernel(kvec, kl, ks): kk = sum(ki**2 for ki in kvec) kl2 = kl**2 ks4 = ks**4 mask = (kk == 0).nonzero() kk[mask] = 1 v = jnp.exp(-kl2 / kk) * jnp.exp(-kk**2 / ks4) imask = (~(kk == 0)).astype(int) v *= imask return v def pgd_correction(pos, cosmo, params): """ state is a tuple (position, velocities) """ kvec = fftk(mesh_shape) delta = cic_paint(jnp.zeros(mesh_shape), pos) alpha, kl, ks = params delta_k = jnp.fft.rfftn(delta) PGD_range=PGD_kernel(kvec, kl, ks) pot_k_pgd=(delta_k * laplace_kernel(kvec))*PGD_range forces_pgd= jnp.stack([cic_read(jnp.fft.irfftn(gradient_kernel(kvec, i)*pot_k_pgd), pos) for i in range(3)],axis=-1) dpos_pgd = forces_pgd*alpha return dpos_pgd k, pk_pgd = power_spectrum( (cic_paint(jnp.zeros(mesh_shape), resi[0][-1]+pgd_correction(resi[0][-1], cosmo,params_pgd))), boxsize=np.array([25.] * 3), kmin=np.pi / 25., dk=2 * np.pi / 25.) 
import cmasher as cmr import matplotlib.colors as colors cmap = cmr.eclipse col = cmr.eclipse([0.,0,0.55,0.85]) sm = plt.cm.ScalarMappable(cmap=cmap, norm=plt.Normalize(vmin=0., vmax=1)) ``` ### Loss function with position and power spectrum ``` from matplotlib import gridspec col = cmr.eclipse([0.,0.13,0.55,0.85]) fig = plt.figure(figsize=(8, 6)) gs = gridspec.GridSpec(2, 1, height_ratios=[3, 1],hspace=0) ax0 = plt.subplot(gs[0]) ax0.loglog(k, pk_ref,'--', label='CAMELS',color=col[0]) ax0.loglog(k, pk_i,label='PM without correction',color=col[1]) ax0.loglog(k, pk_c, label='PM with NN-correction',color=col[2]) ax0.loglog(k, pk_pgd, label='PM with PGD-correction',color=col[3]) ax0.label_outer() plt.legend(fontsize='large') ax0.set_xlabel(r"$k$ [$h \ \mathrm{Mpc}^{-1}$]",fontsize=14) ax0.set_ylabel(r"$P(k)$", fontsize=14) ax1 = plt.subplot(gs[1]) ax1.semilogx(k, (pk_i/pk_ref)-1,label='PM without correction',color=col[1]) ax1.semilogx(k, (pk_c/pk_ref)-1,label='PM with NN-correction',color=col[2]) ax1.semilogx(k, (pk_pgd/pk_ref)-1,label='PM with PGD-correction',color=col[3]) ax1.set_ylabel(r"$ (P(k) \ / \ P^{Camels}(k))-1$",fontsize=14) ax1.set_xlabel(r"$k$ [$h \ \mathrm{Mpc}^{-1}$]",fontsize=14) ax0.set_title('Different $\Omega_m$',fontsize=15) ax1.set_ylim(-1.5,1.5) plt.tight_layout() plt.grid(True) plt.savefig('../figures/camels_comparison_residual_diffomega_1P_1_n5.pdf') ```
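As a complement to the ratio panel plotted above, the agreement between each scheme and the reference run can be reduced to a single number. This is a minimal sketch, assuming the `k`, `pk_ref`, `pk_i`, `pk_c` and `pk_pgd` arrays computed in the cells above:

```
import numpy as np

# Worst-case fractional deviation from the CAMELS reference spectrum over the measured k range
for name, pk in [("PM without correction", pk_i),
                 ("PM with NN-correction", pk_c),
                 ("PM with PGD-correction", pk_pgd)]:
    print(name, float(np.max(np.abs(pk / pk_ref - 1.0))))
```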
# KMeans Clustering Example A data set that identifies different types of iris's is used to demonstrate KMeans in SAP HANA. ## Iris Data Set The data set used is from University of California, Irvine (https://archive.ics.uci.edu/ml/datasets/iris, for tutorials use only). This data set contains attributes of a plant iris. There are three species of Iris plants. <table> <tr><td>Iris Setosa</td><td><img src="images/Iris_setosa.jpg" title="Iris Sertosa" style="float:left;" width="300" height="50" /></td> <td>Iris Versicolor</td><td><img src="images/Iris_versicolor.jpg" title="Iris Versicolor" style="float:left;" width="300" height="50" /></td> <td>Iris Virginica</td><td><img src="images/Iris_virginica.jpg" title="Iris Virginica" style="float:left;" width="300" height="50" /></td></tr> </table> The data contains the following attributes for various flowers: <table align="left"><tr><td> <li align="top">sepal length in cm</li> <li align="left">sepal width in cm</li> <li align="left">petal length in cm</li> <li align="left">petal width in cm</li> </td><td><img src="images/sepal_petal.jpg" style="float:left;" width="200" height="40" /></td></tr></table> Although the flower is identified in the data set, we will cluster the data set into 3 clusters since we know there are three different flowers. The hope is that the cluster will correspond to each of the flowers. A different notebook will use a classification algorithm to predict the type of flower based on the sepal and petal dimensions. ``` from hana_ml import dataframe from hana_ml.algorithms.pal import clustering import numpy as np import pandas as pd import logging import itertools import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import axes3d, Axes3D ``` ## Load data The data is loaded into 4 tables - full set, test set, training set, and the validation set: <li>IRIS_DATA_FULL_TBL</li> <li>IRIS_DATA_TRAIN_TBL</li> <li>IRIS_DATA_TEST_TBL</li> <li>IRIS_DATA_VALIDATION_TBL</li> To do that, a connection is created and passed to the loader. There is a config file, <b>config/e2edata.ini</b> that controls the connection parameters and whether or not to reload the data from scratch. In case the data is already loaded, there would be no need to load the data. A sample section is below. If the config parameter, reload_data is true then the tables for test, training, and validation are (re-)created and data inserted into them. Although this ini file has other sections, please do not modify them. Only the [hana] section should be modified. #########################<br> [hana]<br> url=host.sjc.sap.corp<br> user=username<br> passwd=userpassword<br> port=3xx15<br> <br> #########################<br> ``` from hana_ml.algorithms.pal.utility import DataSets, Settings url, port, user, pwd = Settings.load_config("../../config/e2edata.ini") connection_context = dataframe.ConnectionContext(url, port, user, pwd) full_set, training_set, validation_set, test_set = DataSets.load_iris_data(connection_context) ``` ## Simple Exploration Let us look at the number of rows in the data set ``` print('Number of rows in full set: {}'.format(full_set.count())) ``` ### Let's look at the columns ``` print(full_set.columns) ``` ### Let us look at some rows ``` full_set.head(5).collect() ``` ### Let's look at the data types ``` full_set.dtypes() ``` ### Let's check how many SPECIES are in the data set. ``` full_set.distinct("SPECIES").collect() ``` # Create Model The lines below show the ease with which clustering can be done. 
Set up the features and labels for the model and create the model ``` features = ['SEPALLENGTHCM','SEPALWIDTHCM','PETALLENGTHCM','PETALWIDTHCM'] label = ['SPECIES'] kmeans = clustering.KMeans(thread_ratio=0.2, n_clusters=3, distance_level='euclidean', max_iter=100, tol=1.0E-6, category_weights=0.5, normalization='min_max') predictions = kmeans.fit_predict(full_set, 'ID', features).collect() print(predictions) ``` # Plot the data ``` def plot_kmeans_results(data_set, features, predictions): # use this to estimate what each cluster_id represents in terms of flowers # ideal would be 50-50-50 for each flower, so we can see there are some mis clusterings class_colors = {0: 'r', 1: 'b', 2: 'k'} predictions_colors = [class_colors[p] for p in predictions['CLUSTER_ID'].values] red = plt.Line2D(range(1), range(1), c='w', marker='o', markerfacecolor='r', label='Iris-virginica', markersize=10, alpha=0.9) blue = plt.Line2D(range(1), range(1), c='w', marker='o', markerfacecolor='b', label='Iris-versicolor', markersize=10, alpha=0.9) black = plt.Line2D(range(1), range(1), c='w', marker='o', markerfacecolor='k', label='Iris-setosa', markersize=10, alpha=0.9) for x, y in itertools.combinations(features, 2): plt.figure(figsize=(10,5)) plt.scatter(full_set[[x]].collect(), data_set[[y]].collect(), c=predictions_colors, alpha=0.6, s=70) plt.grid() plt.xlabel(x, fontsize=15) plt.ylabel(y, fontsize=15) plt.tick_params(labelsize=15) plt.legend(handles=[red, blue, black]) plt.show() %matplotlib notebook #above allows interactive 3d plot sizes=10 for x, y, z in itertools.combinations(features, 3): fig = plt.figure(figsize=(8,5)) ax = fig.add_subplot(111, projection='3d') ax.scatter3D(data_set[[x]].collect(), data_set[[y]].collect(), data_set[[z]].collect(), c=predictions_colors, s=70) plt.grid() ax.set_xlabel(x, labelpad=sizes, fontsize=sizes) ax.set_ylabel(y, labelpad=sizes, fontsize=sizes) ax.set_zlabel(z, labelpad=sizes, fontsize=sizes) ax.tick_params(labelsize=sizes) plt.legend(handles=[red, blue, black]) plt.show() print(pd.concat([predictions, full_set[['SPECIES']].collect()], axis=1).groupby(['SPECIES','CLUSTER_ID']).size()) %matplotlib inline plot_kmeans_results(full_set, features, predictions) ```
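Since the species labels are known for every row, the crosstab printed above can be complemented with an external clustering metric. The sketch below is one way to do that with scikit-learn's adjusted Rand index (1.0 means the clusters reproduce the species exactly, values near 0 mean no better than chance); it assumes the `predictions` and `full_set` objects from above and that scikit-learn is available in the environment.

```
import pandas as pd
from sklearn.metrics import adjusted_rand_score

# Align cluster assignments with the known species, reusing the same concat as the crosstab above
labeled = pd.concat([predictions, full_set[['SPECIES']].collect()], axis=1)

print(adjusted_rand_score(labeled['SPECIES'], labeled['CLUSTER_ID']))
```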
# Imports and Setups ``` import tensorflow as tf import tensorflow_datasets as tfds from tensorflow.keras import layers from tensorflow.keras import models import matplotlib.pyplot as plt import numpy as np import random import time import os tf.random.set_seed(666) np.random.seed(666) tfds.disable_progress_bar() ``` ### W&B - Experiment Tracking ``` %%capture !pip install wandb import wandb from wandb.keras import WandbCallback wandb.login() ``` ## Dataset gathering and preparation We are using **85%** labeled training examples. ``` # Gather Flowers dataset train_ds, validation_ds = tfds.load( "tf_flowers", split=["train[:85%]", "train[85%:]"], as_supervised=True ) AUTO = tf.data.experimental.AUTOTUNE BATCH_SIZE = 64 @tf.function def scale_resize_image(image, label): image = tf.image.convert_image_dtype(image, tf.float32) image = tf.image.resize(image, (224, 224)) # Resizing to highest resolution used while training swav return (image, label) training_ds = ( train_ds .map(scale_resize_image, num_parallel_calls=AUTO) .batch(BATCH_SIZE) .prefetch(AUTO) ) testing_ds = ( validation_ds .map(scale_resize_image, num_parallel_calls=AUTO) .batch(BATCH_SIZE) .prefetch(AUTO) ) ``` ## ResNet50 base and a custom classification head ``` def get_training_model(trainable=False): inputs = layers.Input(shape=(224, 224, 3)) EXTRACTOR = tf.keras.applications.ResNet50(weights="imagenet", include_top=False, input_shape=(224, 224, 3)) EXTRACTOR.trainable = trainable x = EXTRACTOR(inputs, training=False) x = layers.GlobalAveragePooling2D()(x) x = layers.Dense(5, activation="softmax")(x) classifier = models.Model(inputs=inputs, outputs=x) return classifier model = get_training_model() model.summary() ``` ### Callback ``` # Early Stopping to prevent overfitting early_stopper = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=2, verbose=2, restore_best_weights=True) ``` # Without Augmentation ### Warm Up ``` # get model and compile model = get_training_model() model.compile(loss="sparse_categorical_crossentropy", metrics=["acc"], optimizer='adam') # initialize wandb run wandb.init(entity='authors', project='swav-tf') # train history = model.fit(training_ds, validation_data=testing_ds, epochs=35, callbacks=[WandbCallback(), early_stopper]) model.save('warmup.h5') ``` ### Fine tune CNN ``` # prepare model and compiele model.layers[1].trainable = True model.compile(loss="sparse_categorical_crossentropy", metrics=["acc"], optimizer=tf.keras.optimizers.Adam(1e-5)) # initialize wandb run wandb.init(entity='authors', project='swav-tf') # train history = model.fit(training_ds, validation_data=testing_ds, epochs=35, callbacks=[WandbCallback(), early_stopper]) ``` ### Evaluation ``` loss, acc = model.evaluate(testing_ds) wandb.log({'Test Accuracy': round(acc*100, 2)}) ``` # Training with Augmentation ### Augmentation ``` # Configs CROP_SIZE = 224 MIN_SCALE = 0.5 MAX_SCALE = 1. 
# Experimental options options = tf.data.Options() options.experimental_optimization.noop_elimination = True options.experimental_optimization.map_vectorization.enabled = True options.experimental_optimization.apply_default_optimizations = True options.experimental_deterministic = False options.experimental_threading.max_intra_op_parallelism = 1 @tf.function def scale_image(image, label): image = tf.image.convert_image_dtype(image, tf.float32) return (image, label) @tf.function def random_apply(func, x, p): return tf.cond( tf.less(tf.random.uniform([], minval=0, maxval=1, dtype=tf.float32), tf.cast(p, tf.float32)), lambda: func(x), lambda: x) @tf.function def random_resize_crop(image, label): # Conditional resizing image = tf.image.resize(image, (260, 260)) # Get the crop size for given min and max scale size = tf.random.uniform(shape=(1,), minval=MIN_SCALE*260, maxval=MAX_SCALE*260, dtype=tf.float32) size = tf.cast(size, tf.int32)[0] # Get the crop from the image crop = tf.image.random_crop(image, (size,size,3)) crop_resize = tf.image.resize(crop, (CROP_SIZE, CROP_SIZE)) return crop_resize, label @tf.function def tie_together(image, label): # Scale the pixel values image, label = scale_image(image , label) # random horizontal flip image = random_apply(tf.image.random_flip_left_right, image, p=0.5) # Random resized crops image, label = random_resize_crop(image, label) return image, label trainloader = ( train_ds .shuffle(1024) .map(tie_together, num_parallel_calls=AUTO) .batch(BATCH_SIZE) .prefetch(AUTO) ) trainloader = trainloader.with_options(options) ``` ### Warmup ``` # get model and compile model = get_training_model() model.compile(loss="sparse_categorical_crossentropy", metrics=["acc"], optimizer='adam') # initialize wandb run wandb.init(entity='authors', project='swav-tf') # train history = model.fit(trainloader, validation_data=testing_ds, epochs=35, callbacks=[WandbCallback(), early_stopper]) model.save('warmup_augmentation.h5') ``` ### Fine tune CNN ``` # prepare model and compiele model.layers[1].trainable = True model.compile(loss="sparse_categorical_crossentropy", metrics=["acc"], optimizer=tf.keras.optimizers.Adam(1e-5)) # initialize wandb run wandb.init(entity='authors', project='swav-tf') # train history = model.fit(trainloader, validation_data=testing_ds, epochs=35, callbacks=[WandbCallback(), early_stopper]) ``` ### Evaluation ``` loss, acc = model.evaluate(testing_ds) wandb.log({'Test Accuracy': round(acc*100, 2)}) ```
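As a side note, it can be useful to look at what the augmentation pipeline actually feeds the model. This is a minimal sketch, assuming eager execution and the `trainloader` built above, that pulls a single batch and plots a few of the randomly cropped and flipped images:

```
import matplotlib.pyplot as plt

# Take one augmented batch and display the first nine examples
images, labels = next(iter(trainloader))
plt.figure(figsize=(9, 9))
for i in range(9):
    plt.subplot(3, 3, i + 1)
    plt.imshow(images[i].numpy())  # pixel values were already scaled to [0, 1]
    plt.title(int(labels[i].numpy()))
    plt.axis("off")
plt.show()
```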
```
from covid_analytics import factors
import pandas as pd
```

## INSEE data

Sources:
- INSEE départements
- INSEE régions
- population estimates: https://www.insee.fr/fr/statistiques/1893198

```
date_debut = '2021-04-06'
date_fin = '2021-05-06'
date_debut, date_fin

departement = pd.read_csv('data/departement2020.csv')[['dep','reg','libelle']]
departement

region = pd.read_csv('data/region2020.csv')[['reg','libelle']]
region

# Aggregate the INSEE 5-year age bands into the 10-year bands (cl_age90) used below
population_reg = pd.read_csv('data/population-region-2021-insee.csv', delimiter=',', header='infer')
population_reg[0] = population_reg['Total']
population_reg[9] = population_reg['0 à 4 ans'] + population_reg['5 à 9 ans']
population_reg[19] = population_reg['10 à 14 ans'] + population_reg['15 à 19 ans']
population_reg[29] = population_reg['20 à 24 ans'] + population_reg['25 à 29 ans']
population_reg[39] = population_reg['30 à 34 ans'] + population_reg['35 à 39 ans']
population_reg[49] = population_reg['40 à 44 ans'] + population_reg['45 à 49 ans']
population_reg[59] = population_reg['50 à 54 ans'] + population_reg['55 à 59 ans']
population_reg[69] = population_reg['60 à 64 ans'] + population_reg['65 à 69 ans']
population_reg[79] = population_reg['70 à 74 ans'] + population_reg['75 à 79 ans']
population_reg[89] = population_reg['80 à 84 ans'] + population_reg['85 à 89 ans']
population_reg[90] = population_reg['90 à 94 ans'] + population_reg['95 ans et plus']
population_reg = population_reg[['libelle', 0, 9, 19, 29, 39, 49, 59, 69, 79, 89, 90]]
population_reg = pd.melt(population_reg, id_vars=['libelle'],
                         value_vars=[0, 9, 19, 29, 39, 49, 59, 69, 79, 89, 90],
                         var_name='cl_age90', value_name='population').sort_values(by=['libelle', 'cl_age90'])
population_reg
```

## Incidence rate

The incidence rate is the number of positive tests per 100,000 inhabitants. It is computed as: (100000 * number of positive cases) / population.

- incidence data: https://www.data.gouv.fr/fr/datasets/taux-dincidence-de-lepidemie-de-covid-19/

```
# Positive cases by region, sex and age group
incidence = pd.read_csv('data/sp-pe-tb-heb-reg-2021-05-04-19h05.csv', delimiter=';', header='infer')
incidence = pd.merge(incidence, region, on="reg")
incidence

cumuls_incidence = incidence.groupby(['libelle', 'cl_age90']).mean()
cumuls_incidence['inc_f'] = 100000 * cumuls_incidence['P_f'] / cumuls_incidence['pop_f']
cumuls_incidence['inc_h'] = 100000 * cumuls_incidence['P_h'] / cumuls_incidence['pop_h']
cumuls_incidence['inc'] = 100000 * cumuls_incidence['P'] / cumuls_incidence['pop']
incidence_reg_cl_age = cumuls_incidence.reset_index()[['libelle','cl_age90','inc_f','inc_h','inc']].round(2)
incidence_reg_cl_age
```

## Hospitalisations

Source: https://www.data.gouv.fr/en/datasets/donnees-hospitalieres-relatives-a-lepidemie-de-covid-19/

Hospital data for the COVID-19 epidemic by department and patient sex: number of hospitalised patients, number of people currently in intensive care, number of people currently in follow-up and rehabilitation care (SSR) or long-term care units (USLD), number of people currently in conventional hospitalisation, number of people currently hospitalised in another type of unit, cumulative number of people who returned home, and cumulative number of deaths.
Hospital data for the COVID-19 epidemic by region and patient age group: number of hospitalised patients, number of people currently in intensive care, number of people currently in follow-up and rehabilitation care (SSR) or long-term care units (USLD), number of people currently in conventional hospitalisation, number of people currently hospitalised in another type of unit, cumulative number of people who returned home, and cumulative number of deaths.

```
hospitalisation = pd.read_csv('data/donnees-hospitalieres-covid19-2021-05-06-19h05.csv', delimiter=';', header='infer')
hospitalisation['jour'] = pd.to_datetime(hospitalisation['jour'], infer_datetime_format=True)
hospitalisation = hospitalisation[hospitalisation['jour'] >= date_debut]
hospitalisation.head()

hospitalisation2 = pd.read_csv('data/donnees-hospitalieres-classe-age-covid19-2021-05-06-19h05.csv', delimiter=';', header='infer')
hospitalisation2['jour'] = pd.to_datetime(hospitalisation2['jour'], infer_datetime_format=True)
hospitalisation2 = hospitalisation2[hospitalisation2['jour'] >= date_debut]
hospitalisation2

hospitalisation3 = pd.read_csv('data/covid-hospit-incid-reg-2021-05-06-19h05.csv', delimiter=';', header='infer')
hospitalisation3['jour'] = pd.to_datetime(hospitalisation3['jour'], infer_datetime_format=True)
hospitalisation3 = hospitalisation3[hospitalisation3['jour'] >= date_debut]
hospitalisation3[hospitalisation3.reg==84]

# Average number of patients currently in intensive care, by region and age group
hospitalisation_cl_age90 = hospitalisation2.groupby(['reg','cl_age90']).sum().reset_index()[['reg','cl_age90','rea']].round(1)
hospitalisation_cl_age90[hospitalisation_cl_age90.reg==84]

# Average number of patients currently in intensive care, by region and sex
hospitalisation_sexe = pd.merge(hospitalisation, departement, on="dep").groupby(['reg','sexe']).sum().reset_index()[['reg','sexe','rea']].round(1)
hospitalisation_sexe[hospitalisation_sexe.reg==24]

# Cumulative new intensive-care admissions over one month, by region
cumul_rea = hospitalisation3.groupby('reg').sum().reset_index()
cumul_rea[cumul_rea.reg==24]

100000 * 1598 / population_reg[(population_reg.libelle == 'Auvergne-Rhône-Alpes') & (population_reg.cl_age90 == 0)]['population']

# Average split of intensive-care patients by region, age group and sex
hospitalisation_sexe_t = hospitalisation_sexe[hospitalisation_sexe['sexe'] == 0][['reg','rea']]
hospitalisation_sexe_h = hospitalisation_sexe[hospitalisation_sexe['sexe'] == 1][['reg','rea']]
hospitalisation_sexe_f = hospitalisation_sexe[hospitalisation_sexe['sexe'] == 2][['reg','rea']]
hospitalisation_sexe_t.columns = ['reg','rea_t']
hospitalisation_sexe_f.columns = ['reg','rea_f']
hospitalisation_sexe_h.columns = ['reg','rea_h']
hospitalisation_sexe2 = pd.merge(
    hospitalisation_sexe_t,
    pd.merge(hospitalisation_sexe_h, hospitalisation_sexe_f, on='reg'), on='reg')
hospitalisation_sexe2[hospitalisation_sexe2.reg==24]

hospitalisation_sexe2['pct_h'] = hospitalisation_sexe2['rea_h'] / hospitalisation_sexe2['rea_t']
hospitalisation_sexe2['pct_f'] = 1 - hospitalisation_sexe2['pct_h']
hospitalisation_sexe2 = pd.merge(hospitalisation_cl_age90, hospitalisation_sexe2, on='reg')
hospitalisation_sexe2[hospitalisation_sexe2.reg==24]

hospitalisation_sexe2['pct_h'] = hospitalisation_sexe2['rea'] / hospitalisation_sexe2['rea_t'] * hospitalisation_sexe2['pct_h']
hospitalisation_sexe2['pct_f'] = hospitalisation_sexe2['rea'] / hospitalisation_sexe2['rea_t'] * hospitalisation_sexe2['pct_f']

hospitalisation_reg_cl_age = pd.merge(hospitalisation_sexe2, region, on="reg")[['reg','libelle','cl_age90','rea_f','rea_h','rea','pct_h','pct_f']].round(3)
hospitalisation_reg_cl_age[hospitalisation_sexe2.reg==24]

incidence_rea = pd.merge(hospitalisation_reg_cl_age, cumul_rea, on=['reg'])
incidence_rea[incidence_rea.reg==24]
```

## Intensive-care incidence

Cumulative new intensive-care admissions per day over the last 30 days, for each region, broken down by age group and sex, per 100,000 inhabitants.

```
incidence_rea_reg_cl_age = pd.merge(incidence_rea, population_reg, on=['libelle','cl_age90'])
incidence_rea_reg_cl_age['inc_h'] = incidence_rea_reg_cl_age['incid_rea'] * incidence_rea_reg_cl_age['pct_h'] * 100000 / incidence_rea_reg_cl_age['population'] / 2
incidence_rea_reg_cl_age['inc_f'] = incidence_rea_reg_cl_age['incid_rea'] * incidence_rea_reg_cl_age['pct_f'] * 100000 / incidence_rea_reg_cl_age['population'] / 2
incidence_rea_reg_cl_age = incidence_rea_reg_cl_age.round(2)
incidence_rea_reg_cl_age[incidence_rea_reg_cl_age.reg==24]
```

## Benefit/risk balance

```
incidence_rea_reg_cl_age[(incidence_rea_reg_cl_age['libelle'] == 'Corse')]

# NOTE: `boost` is not defined in this notebook and must be set beforehand
incidence_rea_reg_cl_age['benef_h'] = 4 * incidence_rea_reg_cl_age['inc_h'] * factors.incidence_boost[boost]
incidence_rea_reg_cl_age['benef_f'] = 4 * incidence_rea_reg_cl_age['inc_f'] * factors.incidence_boost[boost]
incidence_rea_reg_cl_age[incidence_rea_reg_cl_age.reg==84]

factors.astrazemeca_risk

risks = pd.DataFrame(data={'cl_age90': [29, 39, 49, 59, 69, 79, 89],
                           'astrazeneca_risk': [5.8, 4.6, 5.8, 3.2, 3, 2.2, 1.2]})
risks

incidence_rea_reg_cl_age.to_csv('data/incidence_rea_reg_cl_age-2021-05-14.csv')

balance = incidence_rea_reg_cl_age[(incidence_rea_reg_cl_age.cl_age90 != 0) &
                                   (incidence_rea_reg_cl_age.cl_age90 != 9) &
                                   (incidence_rea_reg_cl_age.cl_age90 != 19) &
                                   (incidence_rea_reg_cl_age.cl_age90 != 90)][['reg','cl_age90', 'benef_h', 'benef_f']]
balance = pd.merge(balance, risks, on=['cl_age90'])
balance['balance_astr_h'] = balance['benef_h'] - balance['astrazeneca_risk']
balance['balance_astr_f'] = balance['benef_f'] - balance['astrazeneca_risk']
balance[balance.reg==84]

balance.to_csv('data/balance_astrazeneca-2021-05-14.csv', index=False)

balance['benef_h'].max() * 4.55
```
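As a standalone illustration of the incidence formula described earlier (100,000 × positive cases / population), here is a minimal sketch; the numbers are purely illustrative and not taken from the dataset:

```
def incidence_per_100k(positive_cases, population):
    """Number of positive tests per 100,000 inhabitants."""
    return 100000 * positive_cases / population

# Purely illustrative figures: 2,000 positive cases in a population of 1,000,000
print(incidence_per_100k(2000, 1_000_000))  # 200.0
```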
# Decision Trees for Iris Species Classification [Cédric Campguilhem](https://github.com/ccampguilhem), March 2018 ## Table of contents - Introduction - Iris species dataset - High-dimensional data visualization - Dimensionality reduction - What is machine learning ? - Modelling in reduced space - Conclusion ## Introduction This notebook shows the capability of Decision Trees from [Scikit-Learn](http://scikit-learn.org/stable/index.html) package as well as some visualization techniques provided by [pandas](https://pandas.pydata.org/) and [seaborn](https://seaborn.pydata.org/). I have decided to use the [Iris dataset](https://www.kaggle.com/uciml/iris) to illustrate what we can do with it. The Iris Species dataset is a famous dataset for any machine learning amateur, as it's a simple dataset which can be used for classification problems. The Iris Species dataset was originally used in R.A. Fisher's classic 1936 [paper](http://rcs.chemometrics.ru/Tutorials/classification/Fisher.pdf), The Use of Multiple Measurements in Taxonomic Problems. I am following the Udadicty free Machine Learning [course](https://eu.udacity.com/course/machine-learning--ud262) which main problem is not to have assignments ! So this is an attempt to illustrate what I have learnt there. ## Iris species dataset I use the Kaggle beta [API](https://github.com/Kaggle/kaggle-api) to collect the dataset on my local computer. First we download the dataset locally. If you want to replicate this step you need to install the Kaggle beta API and create an account on Kaggle website. Finally you will need to create an API token key for your account. The GitHub [repository](https://github.com/Kaggle/kaggle-api) has all information required. ``` from kaggle.api import KaggleApi connection = KaggleApi() connection.authenticate() #connection.datasetsList(search="iris") #connection.datasetListFiles("uciml/iris") connection.datasetDownloadFile("uciml/iris", "Iris.csv", ".") ``` We can load the dataset with pandas: ``` import pandas as pd df = pd.read_csv("uciml/iris/Iris.csv") df = df.set_index("Id") df.info() df.head() ``` And get few statistics about the dataset: ``` df.describe() print df["Species"].value_counts(dropna=False) ``` The dataset is pretty clean. No null values, so we will keep the 150 samples as-is. For each sample we have 4 measures: - sepal length - sepal width - petal length - petal width For each sample we have a "class" which is the the type of iris. We have 50 samples for each type: - setosa - versicolor - virginica ``` %matplotlib inline import seaborn as sns import matplotlib.pyplot as plt import numpy as np sns.set_style("whitegrid") plt.rcParams["figure.figsize"] = (14, 7) ``` Convert to long format to ease distribution plots: ``` df_melted = df.melt(id_vars=["Species"]) df_melted.head() ``` We can start be having a look a distribution for each feature using a [boxplot](https://seaborn.pydata.org/generated/seaborn.boxplot.html): ``` ax = sns.boxplot(x="variable", y="value", hue="Species", data=df_melted) ax.set_title("Distribution of iris features", fontsize=20) ax.set_xlabel(""); ``` We can see that distribution for petal length and width are quite different for the 3 different species, more specifically for Iris-setosa which is smaller. Sepal features bring "less" information on how to make distinction between the species. 
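One quick numeric check of that observation, assuming the `df` loaded above, is to compare the per-species means and spreads directly; the petal columns separate the species much more cleanly than the sepal columns:

```
feature_cols = ["SepalLengthCm", "SepalWidthCm", "PetalLengthCm", "PetalWidthCm"]

# Per-species mean and standard deviation of each measurement
print(df.groupby("Species")[feature_cols].agg(["mean", "std"]).round(2))
```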
Another way to visualize distributions is to use [histograms](https://seaborn.pydata.org/generated/seaborn.distplot.html#seaborn.distplot) layout in a [facet grid](https://seaborn.pydata.org/generated/seaborn.FacetGrid.html#seaborn.FacetGrid): ``` grid = sns.FacetGrid(df_melted, col="variable", hue="Species", col_wrap=2, sharey=False, sharex=False, aspect=2) grid = grid.map(sns.distplot, "value", bins=5, kde=False) grid.add_legend() grid.fig.subplots_adjust(top=0.9) grid.fig.suptitle("Distribution of iris features", fontsize=20); ``` From the above plot, we confirm the difference of Iris-setosa for petal features. We can figure out if features are correlated using a [pair plot](https://seaborn.pydata.org/generated/seaborn.pairplot.html#seaborn.pairplot): ``` ax = sns.pairplot(df, hue="Species", size=1.5, aspect=1.4) ax.fig.suptitle("Relations between features", fontsize=20) ax.fig.subplots_adjust(top=0.9); ``` Sepal length is correlated (linearly) with sepal width. It is also correlated with petal length and width except for Iris-setosa. Similarly petal length and width are correlated, except for Iris-setosa. ## High-dimensional data visualization Visualization of high-dimensional data is a challenge. Both pandas and scikit-learn bring options to the table. The [Andrews curves](https://en.wikipedia.org/wiki/Andrews_plot) is a technique using values of features to feed a Fourier series implemented in [pandas](https://pandas.pydata.org/pandas-docs/stable/visualization.html#andrews-curves). Each sample in the dataset is then represented by a curve: ``` from pandas.plotting import andrews_curves ax = andrews_curves(df, "Species", color=["#3274A1", "#E1812C", "#3A923A"]) ax.set_title("Iris dataset with Andrews curves", fontsize=20) ax.set_frame_on(False) ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) ``` Once again, we can see that Iris-setosa clearly stands out again other species of Iris. Another way to visualize the dataset is to used samples projection on 2D space techniques. [Scikit-learn](http://scikit-learn.org/stable/modules/manifold.html#manifold-learning) comes with manifold techniques, for example t-distributed Stochastic Neighbor Embedding (t-SNE). This method may be computationally expensive in very high-dimension spaces. ``` from sklearn.manifold import TSNE df_features = df[["SepalLengthCm", "SepalWidthCm", "PetalLengthCm", "PetalWidthCm"]] df_classes = df[["Species"]] tsne = TSNE(n_components=2, random_state=1234) df_proj = pd.DataFrame(tsne.fit_transform(df_features)) df_proj["Species"] = df_classes.Species.values df_proj = df_proj.rename(columns={0: "var1", 1: "var2"}) df_proj.head() ax = sns.lmplot(x="var1", y="var2", hue="Species", data=df_proj, size=6, aspect=1.5, fit_reg=False, scatter_kws={"s": 100}) ax.fig.suptitle("Iris species with t-SNE manifold", fontsize=20) ax.fig.subplots_adjust(top=0.9); ax.fig.axes[0].get_xaxis().set_visible(False) ax.fig.axes[0].get_yaxis().set_visible(False) ax.fig.axes[0].set_frame_on(False) ``` For more information on t-SNE check this [video](https://www.youtube.com/watch?v=NEaUSP4YerM). ## Dimensionality reduction For such a simple problem (4 features), it is not necessary to reduce dimensionality for training an algorithm. However, it will come in handy to visualize the boundaries created by our decision tree classifier. We are going to keep only two dimensions for the sake of visualization simplicity. 
We will use a principal component analysis decomposition (PCA) from [sklearn](http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html#sklearn.decomposition.PCA) package. ``` from sklearn.decomposition import PCA pca = PCA(n_components=2) pca.fit(df_features) df_reduced = pd.DataFrame(pca.transform(df_features)) df_reduced["Species"] = df_classes.Species.values df_reduced = df_reduced.rename(columns={0: "var1", 1: "var2"}) df_reduced.head() ``` After PCA algorithm has been applied, we can see on this reduced dimension space that Iris-setosa is pretty easy to identify. The boundary between Iris-versicolor and Iris-virginica is less obvious: ``` ax = sns.lmplot(x="var1", y="var2", hue="Species", data=df_reduced, size=6, aspect=1.5, fit_reg=False, scatter_kws={"s": 100}) ax.fig.suptitle("Iris species in reduced dimension space", fontsize=20) ax.fig.subplots_adjust(top=0.9); ``` ## What is machine learning ? ![Machine Learning](./img/machinelearning.jpg) Source: https://www.saagie.com/fr/blog/machine-learning-pour-les-grand-meres ### Supervised Learning: ![Machine Learning](./img/classification_vs_regression.png) **Classification**: what is the type ? **Regression**: what is the value ? Source: https://medium.com/@heyozramos/regression-vs-classification-86d73c281c5e ### Our problem statement (supervised classification): Given measures of Iris, what species is that ? ### Methodology - Use a subset of dataset to **train** an algorithm - Use the remaining subset to **test** algorithm ## Modelling in reduced space In this section, we are going to train a [decision tree classifier](http://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html). First wee need to split the dataset into training and testing datasets. We will train the algorithm in the reduced space that we have created above with PCA. ### Split training and testing dataset We use 30% percent of dataset for test. I am using [StratifiedShuffleSplit](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.StratifiedShuffleSplit.html#sklearn.model_selection.StratifiedShuffleSplit) to keep the same proportion for each species in the training and testing datasets. 
``` from sklearn.model_selection import StratifiedShuffleSplit splitter = StratifiedShuffleSplit(n_splits=1, test_size=0.3, random_state=1234) indices = splitter.split(df_features, df_classes).next() features_train = df_reduced.iloc[indices[0], [0, 1]] targets_train = df_reduced.iloc[indices[0], [2]] features_test = df_reduced.iloc[indices[1], [0, 1]] targets_test = df_reduced.iloc[indices[1], [2]] def plot_dataset(features_train, targets_train, features_test, targets_test, clf=None, title=None): """ Plot train and test datasets on given Axes - features_train: training features - targets_train: training classes - features_test: testing features - targets_test: testing classes - ax: matplotlib Axes object - clf: trained classifier """ #Create figure fig = plt.figure() ax = fig.add_subplot(111) #Classifier boundary if clf is not None: #Create a grid x, y = np.meshgrid(np.linspace(-4, 4, 30), np.linspace(-1.5, 1.5, 30)) X = x.flatten() Y = y.flatten() inp = np.vstack((X, Y)).transpose() #Predict classes pred = clf.predict(inp) #Convert to numerical values dct = {"Iris-setosa": 0, "Iris-versicolor": 1, "Iris-virginica": 2} dctinv = {0: "Iris-setosa", 1: "Iris-versicolor", 2: "Iris-virginica"} f = lambda x: an[x] conv = np.vectorize(lambda x: dct[x]) Z = conv(pred) #Plot contour z = Z.reshape(x.shape) levels = [-0.5, 0.5, 1.5, 2.5] cf = ax.contourf(x, y, z, levels=levels, colors=["#3274A1", "#E1812C", "#3A923A"], alpha=0.5) cbar = fig.colorbar(cf, ax=ax, orientation="horizontal", aspect=20, fraction=0.07, shrink=1) cbar.set_ticks([0,1,2]) cbar.set_ticklabels([dctinv[t] for t in [0,1,2]]) #Display dataset for (species, color) in [("Iris-setosa", "#3274A1"), ("Iris-versicolor", "#E1812C"), ("Iris-virginica", "#3A923A")]: train = features_train[targets_train.Species == species] test = features_test[targets_test.Species == species] label_train = "{} ({})".format(species, "Train") label_test = "{} ({})".format(species, "Test") ax.scatter(x=train["var1"], y=train["var2"], color=color, marker="o", s=200, label=label_train, edgecolors="#ffffff") ax.scatter(x=test["var1"], y=test["var2"], color=color, marker="^", s=200, label=label_test, edgecolors="#ffffff") #Plot configuration if title is None: ax.set_title("Train and test dataset", fontsize=20) else: ax.set_title(title, fontsize=20) ax.set_xlabel("var1") ax.set_ylabel("var2") ax.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0., frameon=True); return ax ``` Let's visualize the split that has been operated: ``` plot_dataset(features_train, targets_train, features_test, targets_test); ``` We can see that for each species, test samples have been randomized to cover the space occupied by that species. ### Train a decision tree We are going to make the following assumption: the dataset is linearly separable. We then restrict the algorithm to only draws a very simple boundary. We can achieve this by limiting the maximum depth of decision tree to 1. ``` from sklearn.tree import DecisionTreeClassifier clf = DecisionTreeClassifier(random_state=1234, max_depth=1) clf.fit(features_train, targets_train) ``` A linear boundary (background color) is calculated by the algorithm, which enables to identify Iris-setosa but not the other species: ``` plot_dataset(features_train, targets_train, features_test, targets_test, clf, title="Highly biassed prediction"); ``` Now reduce the bias by increasing the maximum depth of decision tree to 2. 
Now reduce the bias by increasing the maximum depth of the decision tree to 2. The algorithm does a better job but may still be improved:

```
clf = DecisionTreeClassifier(random_state=1234, max_depth=2)
clf.fit(features_train, targets_train)
plot_dataset(features_train, targets_train, features_test, targets_test, clf, title="Imperfect prediction");
```

Increasing the maximum depth to 4 shows very interesting results, even if a few samples are incorrectly predicted:

```
clf = DecisionTreeClassifier(random_state=1234, max_depth=4)
clf.fit(features_train, targets_train)
plot_dataset(features_train, targets_train, features_test, targets_test, clf, title="Best prediction");

#Export the decision tree to Graphviz format
from sklearn.tree import export_graphviz
export_graphviz(clf, "tree.dot", rounded=True, impurity=False,
                feature_names=["var1", "var2"],
                class_names=["Iris-setosa", "Iris-versicolor", "Iris-virginica"])
!dot -Tpng tree.dot -o img/tree.png
```

Increasing the maximum depth to 6 shows that the decision tree tries to capture the previously incorrectly predicted samples. This increases the variance of the model: it becomes very sensitive to "exceptional" values. This is a situation of over-fitting that should be avoided:

```
clf = DecisionTreeClassifier(random_state=1234, max_depth=6)
clf.fit(features_train, targets_train)
plot_dataset(features_train, targets_train, features_test, targets_test, clf, title="Over-fitted prediction");
```

A decision tree with a maximum depth of 4 is probably the best trade-off between:

- **bias**: errors from the initial assumptions of the algorithm
- **variance**: errors due to sensitivity to small fluctuations in the training dataset

A good predictive model has low bias and low variance. Here is our trained decision tree:

<img src="./img/tree.png" alt="Tree" style="width: 600px;"/>

Another way to measure the accuracy of a predictive model is to use metrics:

```
from sklearn.metrics import precision_recall_fscore_support

clf = DecisionTreeClassifier(random_state=1234, max_depth=4)
clf.fit(features_train, targets_train)
targets_pred = clf.predict(features_test)

precision, recall, fscore, support = precision_recall_fscore_support(
    targets_test, targets_pred)

df_metrics = pd.DataFrame({"precision": precision, "recall": recall, "fscore": fscore,
                           "Species": ["Iris-setosa", "Iris-versicolor", "Iris-virginica"]})
df_metrics.set_index("Species", inplace=True)
```

- **precision**: how often a prediction of a given class matches the actual class of the test sample
- **recall**: the capability to correctly predict all items of a given class
- **fscore**: an averaged score combining precision and recall

```
df_metrics.head()
```

Here, we have one Iris-versicolor which has been predicted as an Iris-virginica. This decreases the **precision** of Iris-virginica and the **recall** of Iris-versicolor.

## Conclusion

- pandas and seaborn provide useful and easy-to-use capabilities to **see** the data, even in high dimensions.
- Tempted by machine learning? Start with **decision trees** and avoid complex alternatives (neural networks...).
- **Always** keep samples to test the accuracy of your predictive model.
- Consider **dimension reduction** to better understand the behavior of your predictive model.
- Tune parameters to reach a **low-bias** and **low-variance** model.
- Someone claims they have an incredible predictive model? Ask them for the **precision** and **recall** scores on the **test** samples, just in case...
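As a quick reference for that last point, the per-class metrics reported by `precision_recall_fscore_support` are (with $TP$, $FP$, $FN$ the true positives, false positives, and false negatives for a given class):

$$
\text{precision} = \frac{TP}{TP + FP}, \qquad
\text{recall} = \frac{TP}{TP + FN}, \qquad
F_1 = 2\,\frac{\text{precision}\cdot\text{recall}}{\text{precision} + \text{recall}}
$$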
## Appendix

t-SNE algorithm explained by [StatQuest](https://www.youtube.com/watch?v=NEaUSP4YerM)

<hr>

```
#Convert notebook to html
!jupyter nbconvert --to html --template html_minimal.tpl --no-prompt iris_dataset_classification.ipynb

#Convert notebook to slide show
!jupyter nbconvert --to slides --template html_slides_minimal --no-prompt iris_dataset_classification.ipynb
```
```
import pandas as pd
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score, explained_variance_score
import statsmodels.formula.api as smf
from statsmodels.stats.outliers_influence import variance_inflation_factor
from patsy import dmatrices
import seaborn as sns

%matplotlib inline
sns.set(style="darkgrid", rc={"figure.figsize":(12,8), "axes.labelsize":14,
                              "xtick.labelsize":12, "ytick.labelsize":12})
```

## A. Potential Problems with Linear Regression

1. Correlation of error terms
1. Non-linear relationship between $Y$ and $X$
1. Heteroscedasticity: non-constant variance of error terms
1. High-leverage points
1. Outliers
1. Collinearity

### Boston Housing Data

```
df = pd.read_csv("../../data/csv/Boston.csv")
df.head()
df.columns.tolist()
```

- CRIM: per capita crime rate by town
- ZN: proportion of residential land zoned for lots over 25,000 sq.ft.
- INDUS: proportion of non-retail business acres per town
- CHAS: Charles River dummy variable (1 if tract bounds river; 0 otherwise)
- NOX: nitric oxides concentration (parts per 10 million)
- RM: average number of rooms per dwelling
- AGE: proportion of owner-occupied units built prior to 1940
- DIS: weighted distances to five Boston employment centres
- RAD: index of accessibility to radial highways
- TAX: full-value property-tax rate per \$10,000
- PTRATIO: pupil-teacher ratio by town
- BLACK: $1000(Bk - 0.63)^2$, where Bk is the proportion of blacks by town
- LSTAT: percentage lower status of the population
- MEDV: median value of owner-occupied homes in \$1000's

```
df.describe()
```

#### 2. Non-linear Relationship Between $Y$ and $X$

Identification: The plot of residuals vs. fitted (predicted) values $\hat{y_i}$ has a pattern

Solution: Transform $X$

#### 3. Heteroscedasticity

Identification: The plot of residuals vs. fitted values has a pattern

Solution: Transform $Y$

```
# Predictors
x = df.iloc[:,:-1]
x.head()

# Fit the linear model
lm = smf.ols(formula = "medv ~ x", data = df).fit()

fig = sns.regplot(x=lm.predict(), y=lm.resid, order=2)
fig.set(xlabel='Fitted Values', ylabel='Residuals', title='Response: Y');
```

The plot of residuals vs. fitted values has a parabolic shape. Let's try fitting the model with $\log(Y)$ and $\sqrt{Y}$:

```
lmLog = smf.ols(formula = "np.log(medv) ~ x", data = df).fit()
fig = sns.regplot(x=lmLog.predict(), y=lmLog.resid, fit_reg=False)
fig.set(xlabel='Fitted Values', ylabel='Residuals', title=r'Response: $log(Y)$');

lmSqrt = smf.ols(formula = "np.sqrt(medv) ~ x", data = df).fit()
fig = sns.regplot(x=lmSqrt.predict(), y=lmSqrt.resid)
fig.set(xlabel='Fitted Values', ylabel='Residuals', title=r'Response: $\sqrt{Y}$');
```

The plot with $\log(Y)$ has a fan-in, funnel shape. The plot with $\sqrt{Y}$ has a better random pattern about 0.

#### 4. High-leverage Points

Definition: A predictor value $x_i$ that doesn't follow the pattern of the remaining predictor values, thereby affecting the estimated regression line.

Identification: $x_i$ is said to have high leverage if its leverage statistic is $> \frac{p+1}{n}$, where $p=$ # of predictors and $n=$ # of observations

Solution: Consider removing this $x_i$ from the overall dataset, particularly if it is also an outlier

```
df.shape
```

Any observation whose leverage statistic is greater than $\frac{p+1}{n}=\frac{13+1}{506}=0.0277$ counts as a high-leverage point.
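For reference, the leverage statistic computed below (exposed by statsmodels as `hat_matrix_diag`) is the $i$-th diagonal element of the hat matrix:

$$
H = X\left(X^{\top}X\right)^{-1}X^{\top}, \qquad h_{ii} = [H]_{ii}
$$

For a model with an intercept, the $h_{ii}$ sum to $p+1$, so their average is $\frac{p+1}{n}$, which is where the rule-of-thumb cut-off above comes from.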
``` influence = lm.get_influence() # Calculate Leverage Statistic leverage = influence.hat_matrix_diag dfRes = pd.concat([df, pd.Series(leverage, name="leverage")], axis=1) print dfRes.shape dfRes.head() # Top 5 high leverage data points dfRes[dfRes["leverage"] > 0.0277].sort_values(by = "leverage", ascending = False).head() ``` #### 5. Outliers Definition: $x_i$ is an outlier if the corresponding $y_i$ is far from the value predicted by the model Identification: $x_i$ is an outlier if its studentized residual $>\left|3\right|$. A studentized residual $=\frac{e_i}{SE(e_i)}$ Solution: Consider removing this $x_i$ from the overall dataset ``` # Calculate Studentized Residuals studentRes = influence.resid_studentized_external dfRes = pd.concat([dfRes, pd.Series(studentRes, name="studentRes")], axis=1) dfRes.head() # Data points with high studentized residuals dfRes[np.absolute(dfRes["studentRes"]) > 3] ``` The above 8 data points have both high studentized residuals > $|3|$ and high leverage of > 0.0277. #### 6. Collinearity Definition: Collinearity = two or more predictors are related to one another Multicollinearity = three or more predictors are related to one another Identification: Large absolute values in the correlation matrix detects collinearity Large VIF (variance inflation factor) detects multicollinearity, where minimum VIF value is 1 Convention: VIF > 10 is considered large and VIF > 5 is moderate Solution: Either drop one of the correlated variables, or combine the collinear variables to form a new variable ``` # Correlation Matrix corr = df.corr() corr # Correlation Heatmap # generate a mask for the upper triangle mask = np.zeros_like(corr, dtype=np.bool) mask[np.triu_indices_from(mask)] = True # generate a custom diverging colormap cmap = sns.diverging_palette(220, 10, as_cmap=True) # draw the heatmap with the mask and correct aspect ratio; detecting absolute correlations >= 0.7 sns.heatmap(np.absolute(corr), mask=mask, cmap=cmap, vmax=.7, center=0, square=True, linewidths=.5, cbar_kws={"shrink": .5}); ``` Lots of collinear variables (absolute correlation value of >=0.7): Indus, Nox, Age, Dis, Rad, Tax are all correlated with each other, and Medv, Lstat, Rm are correlated. We see strong patterns (not necessarily linear) in the graphs of each combination of these variables. ``` sns.pairplot(df, vars=['indus', 'nox', 'age', 'dis', 'rad', 'tax']); sns.pairplot(df, vars=['medv', 'lstat', 'rm']); # calculate VIF y, X = dmatrices("medv ~ x", data = df, return_type = "dataframe") vif = [variance_inflation_factor(X.values, i) for i in range(X.shape[1])] zip(np.append(["intercept"], x.columns), np.round(vif, 3)) ``` VIF for Tax and Rad are quite high. The VIF for Tax, for example, means that the variance of the estimated coefficient of Tax is inflated by a factor of 9 because Tax is highly correlated with at least one of the other predictors in the model. Going back to the correlation matrix, Tax is highly correlated (> 0.7) with both Indus and Rad. We may consider removing one of these redundant predictors from the model. This choice may be governed by either scientific or practical reasons. The scientific strategies for *variable selection* is explored in Chapter 6. ## B. 
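For reference, the VIF reported for each predictor $X_j$ above is computed from the $R_j^2$ of regressing $X_j$ on all the other predictors:

$$
VIF_j = \frac{1}{1 - R_j^2}
$$

A VIF of about 9 for Tax therefore corresponds to $R_j^2 \approx 0.89$, i.e. Tax is almost entirely explained by the remaining predictors.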
## B. Multiple Linear Regression

Regress medv (response) onto all other variables

```
model = LinearRegression()
```

### Model Fit

```
model.fit(X=x, y=df['medv'])
print(model.intercept_)
print(model.coef_)
```

### Model Accuracy

```
ypred = model.predict(X=x)
ypred[:5]

r2_score(df['medv'], ypred)
```

The coefficient of determination is $R^2=0.74$, indicating a strong, positive linear relationship between medv and all predictors.

```
print("MSE is {}".format(np.round(mean_squared_error(df['medv'], ypred), 2)))

# best value 1; lower values are worse
explained_variance_score(df['medv'], ypred)
```
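For reference, the two accuracy measures used above are

$$
R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2}, \qquad
MSE = \frac{1}{n}\sum_i (y_i - \hat{y}_i)^2
$$

Note that both are computed here on the same observations the model was fit on, so they describe in-sample fit rather than out-of-sample predictive accuracy.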
<a href="https://colab.research.google.com/github/imbalzy/RecipeQA-FInal-Project-2470/blob/main/preprocess_recipeQA.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ``` from google.colab import drive drive.mount('/content/gdrive') import os os.environ['KAGGLE_CONFIG_DIR'] = "/content" !cp "/content/gdrive/My Drive/Kaggle/kaggle.json" . !kaggle datasets download -d jeromeblanchet/recipeqa-nlp-dataset !unzip recipeqa-nlp-dataset.zip -d data/ > /dev/null !wget http://nlp.stanford.edu/data/glove.6B.zip !unzip -q glove.6B.zip import json import numpy as np import tensorflow as tf def read_file(file_name): with open(file_name) as f: data = json.load(f)['data'] textual_cloze = [item for item in data if item["task"]=="textual_cloze"] visual_cloze = [item for item in data if item["task"]=="visual_cloze"] visual_coherence = [item for item in data if item["task"]=="visual_coherence"] visual_ordering = [item for item in data if item["task"]=="visual_ordering"] return textual_cloze, visual_cloze, visual_coherence, visual_ordering def delete_keys(dataset): if (dataset[0]["task"]=="visual_coherence"): [data.pop("question") for data in dataset] if (dataset[0]["task"]=="visual_ordering"): [data.pop("question") for data in dataset] for data in dataset: order = data["choice_list"][0] data["image_list"] = {0:order[0], 1:order[1], 2:order[2], 3:order[3]} img2ind = {v:k for k,v in data["image_list"].items()} data["choice_list"]=[[img2ind[img] for img in choice] for choice in data["choice_list"]] [[data.pop(key,None) for key in ["context_modality", "split", "qid", "question_modality", "task", "question_text"]] for data in dataset] [[[step.pop(key) for key in ["id","videos"]] for step in data['context']] for data in dataset] # print(json.dumps(dataset[0], indent=4)) def load_image(file_path): # Load image image = tf.io.decode_jpeg(tf.io.read_file(file_path),channels=3) # Convert image to normalized float [0, 1] image = tf.image.convert_image_dtype(image,tf.float32) # resize image image = tf.image.resize(image, [256,256]) # Rescale data to range (-1, 1) image = (image - 0.5) * 2 return image from copy import deepcopy def data_iter(batch_size, dataset, task, split): num_input = len(dataset) np.random.shuffle(dataset) for i in range(num_input // batch_size): Xs = deepcopy(dataset[i*batch_size:(i+1)*batch_size]) Ys = [item.pop("answer") for item in Xs] if task=="textual_cloze": for X in Xs: for step in X["context"]: step["images"] = [load_image("data/images/images-qa/"+split+"/images-qa/"+item) for item in step["images"]] if task=="visual_cloze": for X in Xs: X["choice_list"] = [load_image("data/images/images-qa/"+split+"/images-qa/"+item) for item in X["choice_list"]] X["question"] = [load_image("data/images/images-qa/"+split+"/images-qa/"+item) if not item=="@placeholder" else "@placeholder" for item in X["question"]] if task=="visual_coherence": for X in Xs: X["choice_list"] = [load_image("data/images/images-qa/"+split+"/images-qa/"+item) for item in X["choice_list"]] if task=="visual_ordering": for X in Xs: for k,v in X["image_list"].items(): X["image_list"][k] = load_image("data/images/images-qa/"+split+"/images-qa/"+X["image_list"][k]) yield Xs, Ys def load_embeddings(path_to_glove_file): embedding_index = {} with open(path_to_glove_file) as f: for line in f: word, coefs = line.split(maxsplit=1) coefs = np.fromstring(coefs, "f", sep=" ") embedding_index[word] = coefs return embedding_index def get_embedding_layer(voc, embeddings_index 
embedding_dim = 100): num_tokens = len(voc) + 2 embedding_matrix = np.zeros((num_tokens, embedding_dim)) for word, i in word_index.items(): embedding_vector = embeddings_index.get(word) if embedding_vector is not None: embedding_matrix[i] = embedding_vector embedding_layer = tf.keras.layers.Embedding( num_tokens, embedding_dim, embeddings_initializer=tf.keras.initializers.Constant(embedding_matrix), trainable=False, ) return embedding_layer def preprocess(batch_size, split): textual_cloze, visual_cloze, visual_coherence, visual_ordering=read_file("data/"+split+" recipeqa.json") delete_keys(textual_cloze) delete_keys(visual_cloze) delete_keys(visual_coherence) delete_keys(visual_ordering) textual_cloze_iter = data_iter(batch_size, textual_cloze, "textual_cloze", split) visual_cloze_iter = data_iter(batch_size, visual_cloze, "visual_cloze", split) visual_coherence_iter = data_iter(batch_size, visual_coherence, "visual_coherence", split) visual_ordering_iter = data_iter(batch_size, visual_ordering, "visual_ordering", split) return textual_cloze_iter, visual_cloze_iter, visual_coherence_iter, visual_ordering_iter def main(): batch_size = 50 train_it1, train_it2, train_it3, train_it4 = preprocess(batch_size, "train") test_it1, test_it2, test_it3, test_it4 = preprocess(batch_size, "test") val_it1, val_it2, val_it3, val_it4 = preprocess(batch_size, "val") # Usage: for Xs, Ys in val_it2: print(json.dumps(Xs[0], indent=4, default=lambda x:"tf_tensor")) print(Ys[0]) break if __name__=='__main__': main() ```
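`load_embeddings` and `get_embedding_layer` are defined above but never called in `main`. Below is a minimal usage sketch; the vocabulary source is an assumption on my part (the original pipeline never builds one), so the placeholder texts and the `Tokenizer` step are purely illustrative:

```
# Hypothetical usage sketch -- sample_steps and the tokenizer are illustrative,
# not part of the original pipeline.
sample_steps = ["preheat the oven to 350 degrees", "mix the flour and the sugar"]
tokenizer = tf.keras.preprocessing.text.Tokenizer()
tokenizer.fit_on_texts(sample_steps)  # builds a word -> index dict (1-based)

embeddings_index = load_embeddings("glove.6B.100d.txt")  # unzipped by the download cell above
embedding_layer = get_embedding_layer(tokenizer.word_index, embeddings_index, embedding_dim=100)
print(embedding_layer.input_dim, embedding_layer.output_dim)
```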
# Kyle, Joe, Mark: Milestone 4, Pre-Trained Network

Our pre-trained network is a variation on VGG16, a 16-layer network used to good effect in the ILSVRC-2014 competition. Details can be found in: *Very Deep Convolutional Networks for Large-Scale Image Recognition*, K. Simonyan, A. Zisserman, arXiv:1409.1556.

Weights are pre-trained on ImageNet, our input is specified to the same 300x185x3 size we used in our own model, and the same loss function is used to compare the two. Fully connected layers are added at the end to produce our 7 desired output labels.

Maximum binary accuracy approached 80%, but only because the model returned all zeros for its predictions. This model did not perform as well as our own deep network, but it will continue to be refined as a comparison deep learning method for our final paper.

```
from keras.applications.vgg16 import VGG16
from keras.preprocessing import image
from keras.applications.vgg16 import preprocess_input
from keras.layers import Input, Flatten, Dense
from keras.models import Model
from keras.preprocessing.image import ImageDataGenerator
import numpy as np
import pandas as pd
from scipy import ndimage
import matplotlib.pyplot as plt
%matplotlib inline

# Load data
%cd ~/data/
labs = pd.read_csv('multilabels.csv')
ids = pd.read_csv('features_V1.csv', usecols=[0])

# Take care of some weirdness that led to duplicate entries
labs = pd.concat([ids,labs], axis=1, ignore_index=True)
labs = labs.drop_duplicates(subset=[0])
ids = labs.pop(0).as_matrix()
labs = labs.as_matrix()

# Split train/test - 15k is about the limit of what we can hold in memory (12GB on Tesla K80)
n_train = 1000
n_test = 500
rnd_ids = np.random.choice(np.squeeze(ids), size=n_train+n_test, replace=False)
train_ids = rnd_ids[:n_train]
test_ids = rnd_ids[n_train:]

# Pull in multilabels
y_train = labs[np.nonzero(np.in1d(np.squeeze(ids),train_ids))[0]]
y_test = labs[np.nonzero(np.in1d(np.squeeze(ids),test_ids))[0]]

# Read in images - need to do some goofy stuff here to handle the highly irregular image sizes and formats
X_train = np.zeros([n_train, 600, 185, 3])
ct = 0
for i in train_ids:
    IM = ndimage.imread('posters/{}.jpg'.format(i))
    try:
        X_train[ct,:IM.shape[0],:,:] = IM[:,:,:3]
    except:
        X_train[ct,:IM.shape[0],:,0] = IM
    ct += 1
    if ct % 100 == 0:
        print('training data {i}/{n} loaded'.format(i=ct, n=n_train))
X_train = X_train[:,:300,:,:]  # trim excess off edges
print('training data loaded')

X_test = np.zeros([n_test, 600, 185, 3])
ct = 0
for i in test_ids:
    IM = ndimage.imread('posters/{}.jpg'.format(i))
    try:
        X_test[ct,:IM.shape[0],:,:] = IM[:,:,:3]
    except:
        X_test[ct,:IM.shape[0],:,0] = IM
    ct += 1
    if ct % 100 == 0:
        print('test data {i}/{n} loaded'.format(i=ct, n=n_test))
X_test = X_test[:,:300,:,:]  # trim excess off edges
print('test data loaded')

# Create dataGenerator to feed image batches -
# this is nice because it also standardizes training data
datagen = ImageDataGenerator(
    samplewise_center=True,
    samplewise_std_normalization=True)

# compute quantities required for featurewise normalization
# (std, mean, and principal components if ZCA whitening is applied)
datagen.fit(X_train)

# Generate a model based on the VGG16 model
# code adapted from https://github.com/fchollet/keras/issues/4465
model_vgg16_conv = VGG16(weights='imagenet', include_top=False)
model_vgg16_conv.summary()

#Create your own input format
input = Input(shape=(300,185,3), name='image_input')

#Use the generated model
output_vgg16_conv = model_vgg16_conv(input)

#Add fully-connected layers
x = Flatten(name='flatten')(output_vgg16_conv)
x = Dense(4096, activation='relu', name='fc1')(x)
x = Dense(4096, activation='relu', name='fc2')(x)
x = Dense(7, activation='sigmoid', name='predictions')(x)

#Create your own model
my_model = Model(input=input, output=x)
my_model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['binary_accuracy'])

#In the summary, weights and layers from the VGG part will be hidden, but they will be fit during the training
my_model.summary()

# Fit the model with a first round of training data
my_model.fit_generator(datagen.flow(X_train, y_train, batch_size=25),
                       steps_per_epoch=len(X_train) / 25, epochs=5)

score = my_model.evaluate(X_test, y_test, batch_size=50)
score

plt.pcolor(my_model.predict(X_test))
plt.pcolor(y_test)

# Draw a new train/test sample, a little bigger this time
n_train = 5000
n_test = 1000
rnd_ids = np.random.choice(np.squeeze(ids), size=n_train+n_test, replace=False)
train_ids = rnd_ids[:n_train]
test_ids = rnd_ids[n_train:]

# Pull in multilabels
y_train = labs[np.nonzero(np.in1d(np.squeeze(ids),train_ids))[0]]
y_test = labs[np.nonzero(np.in1d(np.squeeze(ids),test_ids))[0]]

# Read in images - need to do some goofy stuff here to handle the highly irregular image sizes and formats
X_train = np.zeros([n_train, 600, 185, 3])
ct = 0
for i in train_ids:
    IM = ndimage.imread('posters/{}.jpg'.format(i))
    try:
        X_train[ct,:IM.shape[0],:,:] = IM[:,:,:3]
    except:
        X_train[ct,:IM.shape[0],:,0] = IM
    ct += 1
    if ct % 100 == 0:
        print('training data {i}/{n} loaded'.format(i=ct, n=n_train))
X_train = X_train[:,:300,:,:]  # trim excess off edges
print('training data loaded')

X_test = np.zeros([n_test, 600, 185, 3])
ct = 0
for i in test_ids:
    IM = ndimage.imread('posters/{}.jpg'.format(i))
    try:
        X_test[ct,:IM.shape[0],:,:] = IM[:,:,:3]
    except:
        X_test[ct,:IM.shape[0],:,0] = IM
    ct += 1
    if ct % 100 == 0:
        print('test data {i}/{n} loaded'.format(i=ct, n=n_test))
X_test = X_test[:,:300,:,:]  # trim excess off edges
print('test data loaded')

# Create dataGenerator to feed image batches -
# this is nice because it also standardizes training data
datagen = ImageDataGenerator(
    samplewise_center=True,
    samplewise_std_normalization=True)

# compute quantities required for featurewise normalization
# (std, mean, and principal components if ZCA whitening is applied)
datagen.fit(X_train)

# more training
my_model.fit_generator(datagen.flow(X_train, y_train, batch_size=25),
                       steps_per_epoch=len(X_train) / 25, epochs=5)
```
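Given the note above that roughly 80% binary accuracy can be reached by predicting all zeros, a quick way to make that baseline explicit (a small check, assuming the `y_test` array of 0/1 multilabels defined above) is:

```
# Baseline check (not in the original notebook): the binary accuracy of a model
# that predicts 0 for every label equals the fraction of 0 entries in the targets.
all_zeros_accuracy = 1.0 - y_test.mean()
print('All-zeros baseline binary accuracy: {:.3f}'.format(all_zeros_accuracy))
```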
```
# Process the labels of the raw IMDB data
import os

imdb_dir = '/mnt/workspace/jupyter_notebook/deep-learning-with-python-notes/aclImdb'
train_dir = os.path.join(imdb_dir, 'train')

labels = []
texts = []

for label_type in ['neg', 'pos']:
    dir_name = os.path.join(train_dir, label_type)
    for fname in os.listdir(dir_name):
        if fname[-4:] == '.txt':
            f = open(os.path.join(dir_name, fname))
            texts.append(f.read())
            f.close()
            if label_type == 'neg':
                labels.append(0)
            else:
                labels.append(1)

# Tokenize the text of the raw IMDB data
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
import numpy as np

maxlen = 100                 # cut reviews off after 100 words
training_samples = 200       # train on 200 samples
validation_samples = 10000   # validate on 10,000 samples
max_words = 10000            # only consider the 10,000 most common words in the dataset

tokenizer = Tokenizer(num_words=max_words)
tokenizer.fit_on_texts(texts)
sequences = tokenizer.texts_to_sequences(texts)

word_index = tokenizer.word_index
print('Found %s unique tokens.' % len(word_index))

data = pad_sequences(sequences, maxlen=maxlen)

labels = np.asarray(labels)
print('Shape of data tensor:', data.shape)
print('Shape of label tensor:', labels.shape)

indices = np.arange(data.shape[0])
np.random.shuffle(indices)
data = data[indices]
labels = labels[indices]

x_train = data[:training_samples]
y_train = labels[:training_samples]
x_val = data[training_samples: training_samples + validation_samples]
y_val = labels[training_samples: training_samples + validation_samples]

# Parse the GloVe word-embedding file
glove_dir = '/home/fc/Downloads/glove.6B'

embeddings_index = {}
f = open(os.path.join(glove_dir, 'glove.6B.100d.txt'))
for line in f:
    values = line.split()
    word = values[0]
    coefs = np.asarray(values[1:], dtype='float32')
    embeddings_index[word] = coefs
f.close()

print('Found %s word vectors.' % len(embeddings_index))

# Prepare the GloVe word-embedding matrix
embedding_dim = 100

embedding_matrix = np.zeros((max_words, embedding_dim))
for word, i in word_index.items():
    if i < max_words:
        embedding_vector = embeddings_index.get(word)
        if embedding_vector is not None:
            embedding_matrix[i] = embedding_vector

# Model definition
from keras.models import Sequential
from keras.layers import Embedding, Flatten, Dense

model = Sequential()
model.add(Embedding(max_words, embedding_dim, input_length=maxlen))
model.add(Flatten())
model.add(Dense(32, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.summary()

# Load the pretrained word embeddings into the Embedding layer
model.layers[0].set_weights([embedding_matrix])
model.layers[0].trainable = False

# Training and evaluation
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['acc'])
history = model.fit(x_train, y_train, epochs=10, batch_size=32, validation_data=(x_val, y_val))
model.save_weights('pre_trained_glove_model.h5')

# Plot the results
import matplotlib.pyplot as plt
%matplotlib inline

acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']

epochs = range(1, len(acc) + 1)

plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()

plt.figure()

plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()

plt.show()

# Train the same model without the pretrained word embeddings
from keras.models import Sequential
from keras.layers import Embedding, Flatten, Dense

model = Sequential()
model.add(Embedding(max_words, embedding_dim, input_length=maxlen))
model.add(Flatten())
model.add(Dense(32, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.summary()

model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['acc'])
history = model.fit(x_train, y_train, epochs=10, batch_size=32, validation_data=(x_val, y_val))

# Plot the results
import matplotlib.pyplot as plt
%matplotlib inline

acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']

epochs = range(1, len(acc) + 1)

plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()

plt.figure()

plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()

plt.show()

# Tokenize the test-set data
test_dir = os.path.join(imdb_dir, 'test')

labels = []
texts = []

for label_type in ['neg', 'pos']:
    dir_name = os.path.join(test_dir, label_type)
    for fname in sorted(os.listdir(dir_name)):
        if fname[-4:] == '.txt':
            f = open(os.path.join(dir_name, fname))
            texts.append(f.read())
            f.close()
            if label_type == 'neg':
                labels.append(0)
            else:
                labels.append(1)

sequences = tokenizer.texts_to_sequences(texts)
x_test = pad_sequences(sequences, maxlen=maxlen)
y_test = np.asarray(labels)

# Evaluate the model on the test set
model.load_weights('pre_trained_glove_model.h5')
model.evaluate(x_test, y_test)
```
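As a small sanity check on the embedding matrix construction (a sketch, assuming `word_index`, `embeddings_index`, `embedding_matrix`, and `max_words` from the cells above), any word that appears both in the IMDB vocabulary and in GloVe should have its GloVe vector copied into its row of the matrix:

```
# Illustrative check: 'movie' is chosen arbitrarily as a word that is almost
# certainly in both the IMDB vocabulary and GloVe.
word = 'movie'
i = word_index[word]
if i < max_words and word in embeddings_index:
    print(np.allclose(embedding_matrix[i], embeddings_index[word]))  # expect True
else:
    print('word not covered by the embedding matrix')
```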
## Observations and Insights ## Dependencies and starter code ``` # Dependencies and Setup import matplotlib.pyplot as plt import pandas as pd import scipy.stats as st import numpy as np # Study data files mouse_metadata = "data/Mouse_metadata.csv" study_results = "data/Study_results.csv" # Read the mouse data and the study results mouse_metadata = pd.read_csv(mouse_metadata) study_results = pd.read_csv(study_results) # Combine the data into a single dataset combinedData = mouse_metadata.merge(study_results,on='Mouse ID') ``` ## Summary statistics ``` # Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen summaryStats = combinedData.groupby(['Drug Regimen']).agg(['mean','median','var','std','sem'])['Tumor Volume (mm3)'] summaryStats ``` ## Bar plots ``` # Generate a bar plot showing number of data points for each treatment regimen using pandas barDF = pd.DataFrame({'Drug Regimen': combinedData.groupby('Drug Regimen')['Tumor Volume (mm3)'].count().keys(), 'Count': combinedData.groupby('Drug Regimen')['Tumor Volume (mm3)'].count().values}); barDF.plot.bar(x='Drug Regimen',y='Count',figsize=(8,5),legend=False,alpha=0.5) plt.xlabel('Drug Regimen') plt.ylabel('Count') plt.xticks(rotation=-60,ha='left') plt.title('Number of Samples per Regimen') plt.show() # Generate a bar plot showing number of data points for each treatment regimen using pyplot plt.figure(figsize=(8,5)) plt.bar(np.arange(len(barDF['Drug Regimen'])),barDF['Count'],alpha=0.5) plt.xticks(np.arange(len(barDF['Drug Regimen'])),barDF['Drug Regimen'].values,rotation=-60,ha='left') plt.xlabel('Drug Regimen') plt.ylabel('Count') plt.title('Number of Samples per Regimen') plt.show() ``` ## Pie plots ``` # Generate a pie plot showing the distribution of female versus male mice using pandas # Create a dataframe holding the count of unique Mouse IDs by Gender genderCount = pd.DataFrame({'Count':mouse_metadata.groupby('Sex')['Mouse ID'].count().values}, index=mouse_metadata.groupby('Sex')['Mouse ID'].count().keys()) # Use that dataframe to produce the pie plot genderCount.plot.pie(y='Count',figsize=(8, 5),autopct="%1.1f%%",legend=False,startangle=30,shadow=True) plt.axis("equal") plt.title("Distribution of Mice by Sex") plt.ylabel("") plt.show() # Generate a pie plot showing the distribution of female versus male mice using pyplot labels = mouse_metadata.groupby('Sex')['Mouse ID'].count().keys() plt.figure(figsize=(8,5)) plt.pie(genderCount['Count'],autopct="%1.1f%%",startangle=30,shadow=True,labels=labels) plt.axis("equal") plt.title("Distribution of Mice by Sex") plt.show() ``` ## Quartiles, outliers and boxplots Based on our summary statistics, the means of two drugs (Capomulin and Ramicane) appear significantly lower than those of the other drugs. So we will include these two, at least. To decide which other two drugs to include, I made a plot of the means and error bars (using the standard error of the mean). I also limited the y-axis to just focus on these drugs and make the error bars a little clearer. ``` # Calculate the final tumor volume of each mouse across four of the most promising treatment regimens. Calculate the IQR and quantitatively determine if there are any potential outliers. 
x_axis = np.arange(0,len(summaryStats.index),1) + 1
means = summaryStats['mean'].values
se = summaryStats['sem'].values

fig,ax = plt.subplots()
ax.errorbar(x_axis, means, se, fmt="o")
ax.set_ylim(51,56)
plt.xticks(np.arange(len(barDF['Drug Regimen'])) + 1,barDF['Drug Regimen'].values,rotation=-60,ha='left')
plt.title('Means and SEMs for Each Drug Regimen')
plt.ylabel('Tumor Volume (mm3)')
plt.xlabel('Drug Regimen')
plt.show()
```

Based on the above plot, I would say the next two most promising drugs are Propriva and Ceftamin, although arguments could also be made for Infubinol and even Zoniferol. With that said, the readme.md file states that we should use Capomulin, Ramicane, Infubinol, and Ceftamin.

Editorial: Based on the samples provided, it looks slightly more likely that Propriva regimens result in a smaller tumor volume, on average, than Ceftamin and Infubinol do. *shrug emoji*

Four drugs for further investigation:

- Capomulin
- Ramicane
- Infubinol
- Ceftamin

```
# Grab only our Top 4 candidates at the last timestep (45)
mostPromising = combinedData[(combinedData['Drug Regimen'] == 'Capomulin') |
                             (combinedData['Drug Regimen'] == 'Ceftamin') |
                             (combinedData['Drug Regimen'] == 'Infubinol') |
                             (combinedData['Drug Regimen'] == 'Ramicane')].groupby('Mouse ID').tail(1)

regimen_dict = {}

# Loop through each of the regimens and print our quantitative statistics
for regimen in ['Capomulin','Ceftamin','Infubinol','Ramicane']:
    df = mostPromising[mostPromising['Drug Regimen'] == regimen]
    regimen_dict[regimen] = df

    quartiles = df['Tumor Volume (mm3)'].quantile([.25,.5,.75])
    lowerq = quartiles[0.25]
    upperq = quartiles[0.75]
    iqr = upperq-lowerq

    print(f"For the drug regimen {regimen}...")
    print(f"The lower quartile of final tumor volume is: {lowerq}")
    print(f"The upper quartile of final tumor volume is: {upperq}")
    print(f"The interquartile range of final tumor volume is: {iqr}")
    print(f"The median of final tumor volume is: {quartiles[0.5]}")

    lower_bound = lowerq - (1.5*iqr)
    upper_bound = upperq + (1.5*iqr)
    print('')
    print('****** Outlier Analysis ******')
    print(f"Values below {lower_bound} could be outliers.")
    print(f"Values above {upper_bound} could be outliers.")

    outlier_volume = df.loc[(df['Tumor Volume (mm3)'] < lower_bound) | (df['Tumor Volume (mm3)'] > upper_bound)]
    if outlier_volume.index.empty:
        print("There are no likely outliers.")
    else:
        print("Potential outliers detected: ")
        print(outlier_volume)

    print('')
    print('-------------------------------')
    print('')

# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
data = [regimen_dict['Capomulin']['Tumor Volume (mm3)'],regimen_dict['Infubinol']['Tumor Volume (mm3)'],
        regimen_dict['Ceftamin']['Tumor Volume (mm3)'],regimen_dict['Ramicane']['Tumor Volume (mm3)']]

green_diamond = dict(markerfacecolor='g', marker='D')
fig, ax = plt.subplots(figsize=(10,6))
ax.set_title('Final Tumor Volume Distribution')
ax.boxplot(data,flierprops=green_diamond)
ax.set_xticks(np.arange(len(data)) + 1)
ax.set_xticklabels(['Capomulin','Infubinol','Ceftamin','Ramicane'])
ax.set_xlabel('Drug Regimen')
ax.set_ylabel('Tumor Volume (mm3)')
plt.show()

combinedData[combinedData['Drug Regimen'] == 'Capomulin'].sample(1).iloc[0,0]
```

## Line and scatter plots

```
# Get a random mouse treated with Capomulin
mouseID = combinedData[combinedData['Drug Regimen'] == 'Capomulin'].sample(1).iloc[0,0]

# Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin
x = combinedData[combinedData['Mouse ID'] == mouseID]['Timepoint']
y = combinedData[combinedData['Mouse ID'] == mouseID]['Tumor Volume (mm3)']

fig, ax = plt.subplots(figsize=(8,5))
ax.plot(x,y)
ax.set_title(f'Tumor Volume (mm3) Trend for Mouse {mouseID}')
ax.set_xlabel('Timepoint')
ax.set_ylabel('Tumor Volume (mm3)')
plt.grid(True)

# Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen
x = combinedData[combinedData['Drug Regimen'] == 'Capomulin'].groupby('Mouse ID')['Weight (g)'].mean()
y = combinedData[combinedData['Drug Regimen'] == 'Capomulin'].groupby('Mouse ID')['Tumor Volume (mm3)'].mean()

fig, ax = plt.subplots(figsize=(8,5))
ax.scatter(x,y)
ax.set_title(f'Mouse Weight vs Mean Tumor Volume (mm3) for Capomulin Regimen')
ax.set_xlabel('Weight (g)')
ax.set_ylabel('Tumor Volume (mm3)')
plt.grid(True)

# Calculate the correlation coefficient and linear regression model for mouse weight and average tumor volume for the Capomulin regimen
print(f'The calculated correlation coefficient is {st.pearsonr(x,y)[0]}')

(slope, intercept, rvalue, pvalue, stderr) = st.linregress(x, y)
regress_values = x * slope + intercept

fig, ax = plt.subplots(figsize=(8,5))
ax.scatter(x,y)
ax.set_title(f'Mouse Weight vs Mean Tumor Volume (mm3) for Capomulin Regimen')
ax.set_xlabel('Weight (g)')
ax.set_ylabel('Tumor Volume (mm3)')
plt.grid(True)
ax.plot(x,regress_values,"k-")
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.annotate(line_eq,(20,36),fontsize=15,color="black")
plt.show()
```
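As a quick cross-check on the regression above (a hypothetical extra cell, assuming `x`, `y` and the `st.linregress` results from the previous cell are still in scope), the `rvalue` returned by `linregress` should agree with the Pearson coefficient from `st.pearsonr`, and its square gives the fraction of the variance in mean tumor volume that is explained by mouse weight:

```
# Compare the two correlation estimates and report the coefficient of determination
pearson_r = st.pearsonr(x, y)[0]
print(f"pearsonr: {pearson_r:.4f}  linregress rvalue: {rvalue:.4f}")
print(f"R-squared (variance in mean tumor volume explained by weight): {rvalue**2:.4f}")
```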
# Part 2 - Parametric plasma source plotting

As shown in Part 1, OpenMC can be used to create point sources with different energy distributions. However, there are other ways to create neutron sources for use in neutronics simulations.

This python notebook allows users to plot the energy, position and initial directions of a parametric plasma source.

The plasma source used is from the parametric_plasma_source package.

```
from random import random

import plotly.graph_objects as go

from parametric_plasma_source import PlasmaSource
```

This first code block creates a neutron source using the PlasmaSource class from the parametric_plasma_source package. The properties of the source are controlled by the input parameters.

```
my_plasma = PlasmaSource(
    elongation=1.557,
    ion_density_origin=1.09e20,
    ion_density_peaking_factor=1,
    ion_density_pedestal=1.09e20,
    ion_density_separatrix=3e19,
    ion_temperature_origin=45.9,
    ion_temperature_peaking_factor=8.06,
    ion_temperature_pedestal=6.09,
    ion_temperature_separatrix=0.1,
    major_radius=906.0,
    minor_radius=292.258,
    pedestal_radius=0.8 * 292.258,
    plasma_id=1,
    shafranov_shift=0.44789,
    triangularity=0.270,
    ion_temperature_beta=6
)
```

To plot the parametric plasma source we store the x, y, z birth locations, energies and directions of neutrons in the source in separate lists.

```
# creates empty lists ready to be populated
x_locations, y_locations, z_locations, x_directions, y_directions, z_directions, energies = ([] for i in range(7))

number_of_samples = 500

for x in range(number_of_samples):
    # randomises the neutron sampler
    sample = my_plasma.sample([random(), random(), random(), random(), random(), random(), random(), random()])

    x_locations.append(sample[0])
    y_locations.append(sample[1])
    z_locations.append(sample[2])
    x_directions.append(sample[3])
    y_directions.append(sample[4])
    z_directions.append(sample[5])
    energies.append(sample[6])

text = ['Energy = ' + str(i) + ' eV' for i in energies]
```

This code block then plots the birth location of each neutron, coloured by neutron birth energy.

```
fig_coords = go.Figure()

fig_coords.add_trace(go.Scatter3d(
    x=x_locations,
    y=y_locations,
    z=z_locations,
    hovertext=text,
    text=text,
    mode='markers',
    marker={
        'size': 1.5,
        'color': energies
    }
    )
)

fig_coords.update_layout(title='Neutron birth coordinates, coloured by energy')
```

We can also plot the birth direction of each neutron.

```
fig_directions = go.Figure()

fig_directions.add_trace({
    "type": "cone",
    "x": x_locations,
    "y": y_locations,
    "z": z_locations,
    "u": x_directions,
    "v": y_directions,
    "w": z_directions,
    "anchor": "tail",
    "hoverinfo": "u+v+w+norm",
    "sizeref": 3,
    "showscale": False,
})

fig_directions.update_layout(title='Neutron birth coordinates with initial directions')
```

**Learning Outcomes for Part 2:**

- Plasma sources can be defined using the parametric_plasma_source package.
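As an optional extension (a hypothetical extra cell, not part of the original notebook), the sampled birth energies stored in the `energies` list above can also be histogrammed with the already-imported `plotly.graph_objects`, which gives a quick view of the source energy spectrum:

```
# Histogram of sampled neutron birth energies (eV)
fig_energy = go.Figure()
fig_energy.add_trace(go.Histogram(x=energies))
fig_energy.update_layout(title='Neutron birth energy distribution',
                         xaxis_title='Energy (eV)',
                         yaxis_title='Count')
```

If the source models a D-T plasma, this distribution would be expected to cluster around 14 MeV.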
# 7. Logical Agents **7.1** Suppose the agent has progressed to the point shown in Figure [wumpus-seq35-figure](#/)(a), page [wumpus-seq35-figure](#/), having perceived nothing in \[1,1\], a breeze in \[2,1\], and a stench in \[1,2\], and is now concerned with the contents of \[1,3\], \[2,2\], and \[3,1\]. Each of these can contain a pit, and at most one can contain a wumpus. Following the example of Figure [wumpus-entailment-figure](#/), construct the set of possible worlds. (You should find 32 of them.) Mark the worlds in which the KB is true and those in which each of the following sentences is true: $\alpha_2$ = “There is no pit in [2,2].” $\alpha_3$ = “There is a wumpus in [1,3].” Hence show that ${KB} {\models}\alpha_2$ and ${KB} {\models}\alpha_3$. **7.2** (Adapted from @Barwise+Etchemendy:1993 .) Given the following, can you prove that the unicorn is mythical? How about magical? Horned? > If the unicorn is mythical, then it is immortal, but if it is not > mythical, then it is a mortal mammal. If the unicorn is either > immortal or a mammal, then it is horned. The unicorn is magical if it > is horned. **7.3** \[truth-value-exercise\] Consider the problem of deciding whether a propositional logic sentence is true in a given model. 1. Write a recursive algorithm PL-True?$ (s, m )$ that returns ${true}$ if and only if the sentence $s$ is true in the model $m$ (where $m$ assigns a truth value for every symbol in $s$). The algorithm should run in time linear in the size of the sentence. (Alternatively, use a version of this function from the online code repository.) 2. Give three examples of sentences that can be determined to be true or false in a *partial* model that does not specify a truth value for some of the symbols. 3. Show that the truth value (if any) of a sentence in a partial model cannot be determined efficiently in general. 4. Modify your algorithm so that it can sometimes judge truth from partial models, while retaining its recursive structure and linear run time. Give three examples of sentences whose truth in a partial model is *not* detected by your algorithm. 5. Investigate whether the modified algorithm makes $TT-Entails?$ more efficient. **7.4** Which of the following are correct? 1. ${False} \models {True}$. 2. ${True} \models {False}$. 3. $(A\land B) \models (A{\;\;{\Leftrightarrow}\;\;}B)$. 4. $A{\;\;{\Leftrightarrow}\;\;}B \models A \lor B$. 5. $A{\;\;{\Leftrightarrow}\;\;}B \models \lnot A \lor B$. 6. $(A\land B){\:\;{\Rightarrow}\:\;}C \models (A{\:\;{\Rightarrow}\:\;}C)\lor(B{\:\;{\Rightarrow}\:\;}C)$. 7. $(C\lor (\lnot A \land \lnot B)) \equiv ((A{\:\;{\Rightarrow}\:\;}C) \land (B {\:\;{\Rightarrow}\:\;}C))$. 8. $(A\lor B) \land (\lnot C\lor\lnot D\lor E) \models (A\lor B)$. 9. $(A\lor B) \land (\lnot C\lor\lnot D\lor E) \models (A\lor B) \land (\lnot D\lor E)$. 10. $(A\lor B) \land \lnot(A {\:\;{\Rightarrow}\:\;}B)$ is satisfiable. 11. $(A{\;\;{\Leftrightarrow}\;\;}B) \land (\lnot A \lor B)$ is satisfiable. 12. $(A{\;\;{\Leftrightarrow}\;\;}B) {\;\;{\Leftrightarrow}\;\;}C$ has the same number of models as $(A{\;\;{\Leftrightarrow}\;\;}B)$ for any fixed set of proposition symbols that includes $A$, $B$, $C$. **7.5** Which of the following are correct? 1. ${False} \models {True}$. 2. ${True} \models {False}$. 3. $(A\land B) \models (A{\;\;{\Leftrightarrow}\;\;}B)$. 4. $A{\;\;{\Leftrightarrow}\;\;}B \models A \lor B$. 5. $A{\;\;{\Leftrightarrow}\;\;}B \models \lnot A \lor B$. 6. 
$(A\lor B) \land (\lnot C\lor\lnot D\lor E) \models (A\lor B\lor C) \land (B\land C\land D{\:\;{\Rightarrow}\:\;}E)$. 7. $(A\lor B) \land (\lnot C\lor\lnot D\lor E) \models (A\lor B) \land (\lnot D\lor E)$. 8. $(A\lor B) \land \lnot(A {\:\;{\Rightarrow}\:\;}B)$ is satisfiable. 9. $(A\land B){\:\;{\Rightarrow}\:\;}C \models (A{\:\;{\Rightarrow}\:\;}C)\lor(B{\:\;{\Rightarrow}\:\;}C)$. 10. $(C\lor (\lnot A \land \lnot B)) \equiv ((A{\:\;{\Rightarrow}\:\;}C) \land (B {\:\;{\Rightarrow}\:\;}C))$. 11. $(A{\;\;{\Leftrightarrow}\;\;}B) \land (\lnot A \lor B)$ is satisfiable. 12. $(A{\;\;{\Leftrightarrow}\;\;}B) {\;\;{\Leftrightarrow}\;\;}C$ has the same number of models as $(A{\;\;{\Leftrightarrow}\;\;}B)$ for any fixed set of proposition symbols that includes $A$, $B$, $C$. **7.6** \[deduction-theorem-exercise\] Prove each of the following assertions: 1. $\alpha$ is valid if and only if ${True}{\models}\alpha$. 2. For any $\alpha$, ${False}{\models}\alpha$. 3. $\alpha{\models}\beta$ if and only if the sentence $(\alpha {\:\;{\Rightarrow}\:\;}\beta)$ is valid. 4. $\alpha \equiv \beta$ if and only if the sentence $(\alpha{\;\;{\Leftrightarrow}\;\;}\beta)$ is valid. 5. $\alpha{\models}\beta$ if and only if the sentence $(\alpha \land \lnot \beta)$ is unsatisfiable. **7.7** Prove, or find a counterexample to, each of the following assertions: 1. If $\alpha\models\gamma$ or $\beta\models\gamma$ (or both) then $(\alpha\land \beta)\models\gamma$ 2. If $(\alpha\land \beta)\models\gamma$ then $\alpha\models\gamma$ or $\beta\models\gamma$ (or both). 3. If $\alpha\models (\beta \lor \gamma)$ then $\alpha \models \beta$ or $\alpha \models \gamma$ (or both). **7.8** Prove, or find a counterexample to, each of the following assertions: 1. If $\alpha\models\gamma$ or $\beta\models\gamma$ (or both) then $(\alpha\land \beta)\models\gamma$ 2. If $\alpha\models (\beta \land \gamma)$ then $\alpha \models \beta$ and $\alpha \models \gamma$. 3. If $\alpha\models (\beta \lor \gamma)$ then $\alpha \models \beta$ or $\alpha \models \gamma$ (or both). **7.9** Consider a vocabulary with only four propositions, $A$, $B$, $C$, and $D$. How many models are there for the following sentences? 1. $B\lor C$. 2. $\lnot A\lor \lnot B \lor \lnot C \lor \lnot D$. 3. $(A{\:\;{\Rightarrow}\:\;}B) \land A \land \lnot B \land C \land D$. **7.10** We have defined four binary logical connectives. 1. Are there any others that might be useful? 2. How many binary connectives can there be? 3. Why are some of them not very useful? **7.11** \[logical-equivalence-exercise\]Using a method of your choice, verify each of the equivalences in Table \[logical-equivalence-table\] (page [logical-equivalence-table](#/)). **7.12** \[propositional-validity-exercise\]Decide whether each of the following sentences is valid, unsatisfiable, or neither. Verify your decisions using truth tables or the equivalence rules of Table \[logical-equivalence-table\] (page [logical-equivalence-table](#/)). 1. ${Smoke} {\:\;{\Rightarrow}\:\;}{Smoke}$ 2. ${Smoke} {\:\;{\Rightarrow}\:\;}{Fire}$ 3. $({Smoke} {\:\;{\Rightarrow}\:\;}{Fire}) {\:\;{\Rightarrow}\:\;}(\lnot {Smoke} {\:\;{\Rightarrow}\:\;}\lnot {Fire})$ 4. ${Smoke} \lor {Fire} \lor \lnot {Fire}$ 5. $(({Smoke} \land {Heat}) {\:\;{\Rightarrow}\:\;}{Fire}) {\;\;{\Leftrightarrow}\;\;}(({Smoke} {\:\;{\Rightarrow}\:\;}{Fire}) \lor ({Heat} {\:\;{\Rightarrow}\:\;}{Fire}))$ 6. $({Smoke} {\:\;{\Rightarrow}\:\;}{Fire}) {\:\;{\Rightarrow}\:\;}(({Smoke} \land {Heat}) {\:\;{\Rightarrow}\:\;}{Fire}) $ 7. 
${Big} \lor {Dumb} \lor ({Big} {\:\;{\Rightarrow}\:\;}{Dumb})$ **7.13** \[propositional-validity-exercise\]Decide whether each of the following sentences is valid, unsatisfiable, or neither. Verify your decisions using truth tables or the equivalence rules of Table \[logical-equivalence-table\] (page [logical-equivalence-table](#/)). 1. ${Smoke} {\:\;{\Rightarrow}\:\;}{Smoke}$ 2. ${Smoke} {\:\;{\Rightarrow}\:\;}{Fire}$ 3. $({Smoke} {\:\;{\Rightarrow}\:\;}{Fire}) {\:\;{\Rightarrow}\:\;}(\lnot {Smoke} {\:\;{\Rightarrow}\:\;}\lnot {Fire})$ 4. ${Smoke} \lor {Fire} \lor \lnot {Fire}$ 5. $(({Smoke} \land {Heat}) {\:\;{\Rightarrow}\:\;}{Fire}) {\;\;{\Leftrightarrow}\;\;}(({Smoke} {\:\;{\Rightarrow}\:\;}{Fire}) \lor ({Heat} {\:\;{\Rightarrow}\:\;}{Fire}))$ 6. ${Big} \lor {Dumb} \lor ({Big} {\:\;{\Rightarrow}\:\;}{Dumb})$ 7. $({Big} \land {Dumb}) \lor \lnot {Dumb}$ **7.14** \[cnf-proof-exercise\] Any propositional logic sentence is logically equivalent to the assertion that each possible world in which it would be false is not the case. From this observation, prove that any sentence can be written in CNF. **7.15** Use resolution to prove the sentence $\lnot A \land \lnot B$ from the clauses in Exercise [convert-clausal-exercise](#/). **7.16** \[inf-exercise\] This exercise looks into the relationship between clauses and implication sentences. 1. Show that the clause $(\lnot P_1 \lor \cdots \lor \lnot P_m \lor Q)$ is logically equivalent to the implication sentence $(P_1 \land \cdots \land P_m) {\;{\Rightarrow}\;}Q$. 2. Show that every clause (regardless of the number of positive literals) can be written in the form $(P_1 \land \cdots \land P_m) {\;{\Rightarrow}\;}(Q_1 \lor \cdots \lor Q_n)$, where the $P$s and $Q$s are proposition symbols. A knowledge base consisting of such sentences is in implicative normal form or **Kowalski form** @Kowalski:1979. 3. Write down the full resolution rule for sentences in implicative normal form. **7.17** According to some political pundits, a person who is radical ($R$) is electable ($E$) if he/she is conservative ($C$), but otherwise is not electable. 1. Which of the following are correct representations of this assertion? 1. $(R\land E)\iff C$ 2. $R{\:\;{\Rightarrow}\:\;}(E\iff C)$ 3. $R{\:\;{\Rightarrow}\:\;}((C{\:\;{\Rightarrow}\:\;}E) \lor \lnot E)$ 2. Which of the sentences in (a) can be expressed in Horn form? **7.18** This question considers representing satisfiability (SAT) problems as CSPs. 1. Draw the constraint graph corresponding to the SAT problem $$(\lnot X_1 \lor X_2) \land (\lnot X_2 \lor X_3) \land \ldots \land (\lnot X_{n-1} \lor X_n)$$ for the particular case $n{{\,{=}\,}}5$. 2. How many solutions are there for this general SAT problem as a function of $n$? 3. Suppose we apply {Backtracking-Search} (page [backtracking-search-algorithm](#/)) to find *all* solutions to a SAT CSP of the type given in (a). (To find *all* solutions to a CSP, we simply modify the basic algorithm so it continues searching after each solution is found.) Assume that variables are ordered $X_1,\ldots,X_n$ and ${false}$ is ordered before ${true}$. How much time will the algorithm take to terminate? (Write an $O(\cdot)$ expression as a function of $n$.) 4. We know that SAT problems in Horn form can be solved in linear time by forward chaining (unit propagation). We also know that every tree-structured binary CSP with discrete, finite domains can be solved in time linear in the number of variables (Section [csp-structure-section](#/)). Are these two facts connected? 
Discuss. **7.19** This question considers representing satisfiability (SAT) problems as CSPs. 1. Draw the constraint graph corresponding to the SAT problem $$(\lnot X_1 \lor X_2) \land (\lnot X_2 \lor X_3) \land \ldots \land (\lnot X_{n-1} \lor X_n)$$ for the particular case $n{{\,{=}\,}}4$. 2. How many solutions are there for this general SAT problem as a function of $n$? 3. Suppose we apply {Backtracking-Search} (page [backtracking-search-algorithm](#/)) to find *all* solutions to a SAT CSP of the type given in (a). (To find *all* solutions to a CSP, we simply modify the basic algorithm so it continues searching after each solution is found.) Assume that variables are ordered $X_1,\ldots,X_n$ and ${false}$ is ordered before ${true}$. How much time will the algorithm take to terminate? (Write an $O(\cdot)$ expression as a function of $n$.) 4. We know that SAT problems in Horn form can be solved in linear time by forward chaining (unit propagation). We also know that every tree-structured binary CSP with discrete, finite domains can be solved in time linear in the number of variables (Section [csp-structure-section](#/)). Are these two facts connected? Discuss. **7.20** Explain why every nonempty propositional clause, by itself, is satisfiable. Prove rigorously that every set of five 3-SAT clauses is satisfiable, provided that each clause mentions exactly three distinct variables. What is the smallest set of such clauses that is unsatisfiable? Construct such a set. **7.21** A propositional *2-CNF* expression is a conjunction of clauses, each containing *exactly 2* literals, e.g., $$(A\lor B) \land (\lnot A \lor C) \land (\lnot B \lor D) \land (\lnot C \lor G) \land (\lnot D \lor G)\ .$$ 1. Prove using resolution that the above sentence entails $G$. 2. Two clauses are *semantically distinct* if they are not logically equivalent. How many semantically distinct 2-CNF clauses can be constructed from $n$ proposition symbols? 3. Using your answer to (b), prove that propositional resolution always terminates in time polynomial in $n$ given a 2-CNF sentence containing no more than $n$ distinct symbols. 4. Explain why your argument in (c) does not apply to 3-CNF. **7.22** Prove each of the following assertions: 1. Every pair of propositional clauses either has no resolvents, or all their resolvents are logically equivalent. 2. There is no clause that, when resolved with itself, yields (after factoring) the clause $(\lnot P \lor \lnot Q)$. 3. If a propositional clause $C$ can be resolved with a copy of itself, it must be logically equivalent to $ True $. **7.23** Consider the following sentence: $$[ ({Food} {\:\;{\Rightarrow}\:\;}{Party}) \lor ({Drinks} {\:\;{\Rightarrow}\:\;}{Party}) ] {\:\;{\Rightarrow}\:\;}[ ( {Food} \land {Drinks} ) {\:\;{\Rightarrow}\:\;}{Party}]\ .$$ 1. Determine, using enumeration, whether this sentence is valid, satisfiable (but not valid), or unsatisfiable. 2. Convert the left-hand and right-hand sides of the main implication into CNF, showing each step, and explain how the results confirm your answer to (a). 3. Prove your answer to (a) using resolution. **7.24** \[dnf-exercise\] A sentence is in disjunctive normal form(DNF) if it is the disjunction of conjunctions of literals. For example, the sentence $(A \land B \land \lnot C) \lor (\lnot A \land C) \lor (B \land \lnot C)$ is in DNF. 1. Any propositional logic sentence is logically equivalent to the assertion that some possible world in which it would be true is in fact the case. 
From this observation, prove that any sentence can be written in DNF. 2. Construct an algorithm that converts any sentence in propositional logic into DNF. (*Hint*: The algorithm is similar to the algorithm for conversion to CNF iven in Sectio [pl-resolution-section](#/).) 3. Construct a simple algorithm that takes as input a sentence in DNF and returns a satisfying assignment if one exists, or reports that no satisfying assignment exists. 4. Apply the algorithms in (b) and (c) to the following set of sentences: > $A {\Rightarrow} B$ > $B {\Rightarrow} C$ > $C {\Rightarrow} A$ 5. Since the algorithm in (b) is very similar to the algorithm for conversion to CNF, and since the algorithm in (c) is much simpler than any algorithm for solving a set of sentences in CNF, why is this technique not used in automated reasoning? **7.25** \[convert-clausal-exercise\] Convert the following set of sentences to clausal form. > S1: $A {\;\;{\Leftrightarrow}\;\;}(B \lor E)$. > S2: $E {\:\;{\Rightarrow}\:\;}D$. > S3: $C \land F {\:\;{\Rightarrow}\:\;}\lnot B$. > S4: $E {\:\;{\Rightarrow}\:\;}B$. > S5: $B {\:\;{\Rightarrow}\:\;}F$. > S6: $B {\:\;{\Rightarrow}\:\;}C$ Give a trace of the execution of DPLL on the conjunction of these clauses. **7.26** \[convert-clausal-exercise\] Convert the following set of sentences to clausal form. > S1: $A {\;\;{\Leftrightarrow}\;\;}(C \lor E)$. > S2: $E {\:\;{\Rightarrow}\:\;}D$. > S3: $B \land F {\:\;{\Rightarrow}\:\;}\lnot C$. > S4: $E {\:\;{\Rightarrow}\:\;}C$. > S5: $C {\:\;{\Rightarrow}\:\;}F$. > S6: $C {\:\;{\Rightarrow}\:\;}B$ Give a trace of the execution of DPLL on the conjunction of these clauses. **7.27** Is a randomly generated 4-CNF sentence with $n$ symbols and $m$ clauses more or less likely to be solvable than a randomly generated 3-CNF sentence with $n$ symbols and $m$ clauses? Explain. **7.28** \[minesweeper-exercise\] Minesweeper, the well-known computer game, is closely related to the wumpus world. A minesweeper world is a rectangular grid of $N$ squares with $M$ invisible mines scattered among them. Any square may be probed by the agent; instant death follows if a mine is probed. Minesweeper indicates the presence of mines by revealing, in each probed square, the *number* of mines that are directly or diagonally adjacent. The goal is to probe every unmined square. 1. Let $X_{i,j}$ be true iff square $[i,j]$ contains a mine. Write down the assertion that exactly two mines are adjacent to \[1,1\] as a sentence involving some logical combination of $X_{i,j}$ propositions. 2. Generalize your assertion from (a) by explaining how to construct a CNF sentence asserting that $k$ of $n$ neighbors contain mines. 3. Explain precisely how an agent can use {DPLL} to prove that a given square does (or does not) contain a mine, ignoring the global constraint that there are exactly $M$ mines in all. 4. Suppose that the global constraint is constructed from your method from part (b). How does the number of clauses depend on $M$ and $N$? Suggest a way to modify {DPLL} so that the global constraint does not need to be represented explicitly. 5. Are any conclusions derived by the method in part (c) invalidated when the global constraint is taken into account? 6. Give examples of configurations of probe values that induce *long-range dependencies* such that the contents of a given unprobed square would give information about the contents of a far-distant square. (*Hint*: consider an $N\times 1$ board.) 
**7.29** \[known-literal-exercise\] How long does it take to prove ${KB}{\models}\alpha$ using {DPLL} when $\alpha$ is a literal *already contained in* ${KB}$? Explain. **7.30** \[dpll-fc-exercise\] Trace the behavior of {DPLL} on the knowledge base in Figure [pl-horn-example-figure](#/) when trying to prove $Q$, and compare this behavior with that of the forward-chaining algorithm. **7.31** Write a successor-state axiom for the ${Locked}$ predicate, which applies to doors, assuming the only actions available are ${Lock}$ and ${Unlock}$. **7.32** Discuss what is meant by *optimal* behavior in the wumpus world. Show that the {Hybrid-Wumpus-Agent} is not optimal, and suggest ways to improve it. **7.33** Suppose an agent inhabits a world with two states, $S$ and $\lnot S$, and can do exactly one of two actions, $a$ and $b$. Action $a$ does nothing and action $b$ flips from one state to the other. Let $S^t$ be the proposition that the agent is in state $S$ at time $t$, and let $a^t$ be the proposition that the agent does action $a$ at time $t$ (similarly for $b^t$). 1. Write a successor-state axiom for $S^{t+1}$. 2. Convert the sentence in (a) into CNF. 3. Show a resolution refutation proof that if the agent is in $\lnot S$ at time $t$ and does $a$, it will still be in $\lnot S$ at time $t+1$. **7.34** \[ss-axiom-exercise\] Section [successor-state-section](#/) provides some of the successor-state axioms required for the wumpus world. Write down axioms for all remaining fluent symbols. **7.35** \[hybrid-wumpus-exercise\]Modify the {Hybrid-Wumpus-Agent} to use the 1-CNF logical state estimation method described on page [1cnf-belief-state-page](#/). We noted on that page that such an agent will not be able to acquire, maintain, and use more complex beliefs such as the disjunction $P_{3,1}\lor P_{2,2}$. Suggest a method for overcoming this problem by defining additional proposition symbols, and try it out in the wumpus world. Does it improve the performance of the agent?
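Several of the entailment and model-counting questions above (for example 7.1, 7.4, 7.5 and 7.9) involve only a handful of proposition symbols, so candidate answers can be checked mechanically by enumerating every model, in the spirit of the $TT-Entails?$ procedure mentioned in Exercise 7.3. The sketch below is a minimal, hypothetical Python helper (not taken from the book's code repository); `kb` and `alpha` are assumed to be any functions that map a model, represented as a dict from symbol names to booleans, to True or False:

```
from itertools import product

def tt_entails(symbols, kb, alpha):
    """Return True if alpha holds in every model in which kb holds."""
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not alpha(model):
            return False  # found a model of the KB in which alpha is false
    return True

# Usage sketch: check whether (A and B) entails (A <=> B)
symbols = ['A', 'B']
kb = lambda m: m['A'] and m['B']
alpha = lambda m: m['A'] == m['B']
print(tt_entails(symbols, kb, alpha))  # True
```

This brute-force check enumerates $2^n$ models, which is fine for the small vocabularies used in these exercises but not in general.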
```
%%writefile ../NGSEP_pipeline_full.py
#! /usr/bin/env python

### Script for aligning reads, postprocessing the alignments and calling variants with NGSEP

import argparse
import glob
from os.path import basename
from subprocess import call

parser=argparse.ArgumentParser(prog='NGSEP_pipeline_full', description='Script for align, postprocess and call reads using entirely NGSEP')
parser.add_argument('-i', '--input', dest='fastqreads', help='The folder with your fastq reads')
parser.add_argument('-r', '--referencegenome', dest='ref', help='The reference genome file')
#parser.add_argument('-o', '--output', dest='outfolder', help='The output folder')
arg=parser.parse_args()

## Check if there is some parameter missing
if 'None' in str(arg):
    parser.error('Input parameter missing!! Please check your command line parameters with -h or --help')

## sample params
fastq=arg.fastqreads.split('/')[0]  # just to avoid duplicating a trailing '/'
ref=arg.ref
reads=glob.glob('%s/*' % fastq)

# output folders for the BAM files and the log files
call('mkdir -p aln_res logs', shell=True)

# java -jar NGSEPcore_<VERSION>.jar ReadsAligner -r <REF.fa> -i <SMPL>.fastq -s <SMPL> -o <SMPL>.bam > <SMPL>_aln.log
# java -jar picard.jar SortSam SO=coordinate CREATE_INDEX=true I=<SMPL>.bam O=<SMPL>_sorted.bam >& <SMPL>_sort.log
for i in reads:
    smpl = basename(i).split('.')[0]
    print('Start aligning sample %s' % smpl)
    bam_out= smpl +'_aln.bam'
    aln_cmd='''java -jar src/NGSEPcore_4.1.0.jar ReadsAligner -r %s -i %s -s %s -o aln_res/%s > logs/%s_aln.log''' % (ref, i, smpl, bam_out, smpl)
    print(aln_cmd)
    call(aln_cmd, shell=True)
    picard_cmd = '''java -jar src/picard.jar SortSam SO=coordinate CREATE_INDEX=true I=aln_res/%s O=aln_res/%s_sorted.bam >& logs/%s_sort.log''' % (bam_out, smpl, smpl)
    print(picard_cmd)
    call(picard_cmd, shell=True)

# Multisample SNP calling
# java -jar src/NGSEPcore_4.1.0.jar MultisampleVariantsDetector -maxBaseQS 30 -maxAlnsPerStartPos 100 -r <REF.fa> -o population.vcf <BAM_FILES>* >& population.log
snp_allign_cmd = '''java -jar src/NGSEPcore_4.1.0.jar MultisampleVariantsDetector -maxBaseQS 30 -maxAlnsPerStartPos 100 -r %s -o FINAL_SNP_file.vcf aln_res/*_sorted.bam >& logs/population.log''' % ref
print(snp_allign_cmd)
call(snp_allign_cmd, shell=True)
```

Now make a script that generates one Slurm job script per sample, so the alignment and sorting steps can run as separate cluster jobs.

```
%%writefile ../NGSEP_aln_slurm.py
import os
import argparse
import glob
from os.path import basename

parser=argparse.ArgumentParser(prog='NGSEP_aln_slurm', description='Script for creating slurm jobs for aligning and postprocessing reads using entirely NGSEP')
parser.add_argument('-i', '--input', dest='fastqreads', help='The folder with your fastq reads')
parser.add_argument('-r', '--referencegenome', dest='ref', help='The reference genome file')
#parser.add_argument('-o', '--output', dest='outfolder', help='The output folder')
arg=parser.parse_args()

## Check if there is some parameter missing
if 'None' in str(arg):
    parser.error('Input parameter missing!! Please check your command line parameters with -h or --help')

## sample params
fastq=arg.fastqreads.split('/')[0]  # just to avoid duplicating a trailing '/'
ref=arg.ref
reads=glob.glob('%s/*' % fastq)

slurm_header = '''#!/bin/bash
#SBATCH --time=24:00:00
#SBATCH --mem=64gb
#SBATCH --job-name=%s
#SBATCH --error=/work/agro932/sybarreral/agrobinf/sybarreral/log/%s.err
#SBATCH --output=/work/agro932/sybarreral/agrobinf/sybarreral/log/%s.out

module load java/12
module load python/3.8

'''

for i in reads:
    sample = basename(i).split('.')[0]
    slurm = slurm_header%(sample, sample, sample)
    bam_out= sample +'_aln.bam'
    aln_cmd='''java -Xmx64g -jar src/NGSEPcore_4.1.0.jar ReadsAligner -r %s -i %s -s %s -o aln_res/%s > logs/%s_aln.log\n''' % (ref, i, sample, bam_out, sample)
    slurm += aln_cmd
    picard_cmd = '''java -Xmx64g -jar src/picard.jar SortSam SO=coordinate CREATE_INDEX=true I=aln_res/%s O=aln_res/%s_sorted.bam >& logs/%s_sort.log\n''' % (bam_out, sample, sample)
    slurm += picard_cmd
    with open('NGSEP_align_%s.slurm'%sample, 'w') as f:
        f.write(slurm)
```
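Once `NGSEP_aln_slurm.py` has been run on the cluster, one job script per sample (`NGSEP_align_<sample>.slurm`) will exist but still has to be submitted. A hypothetical follow-up cell (assuming the `sbatch` command of the Slurm scheduler referenced in the headers above is available) could submit them all in one go:

```
import glob
from subprocess import call

# submit every generated per-sample job script to the Slurm scheduler
for job_script in sorted(glob.glob('NGSEP_align_*.slurm')):
    print('Submitting %s' % job_script)
    call('sbatch %s' % job_script, shell=True)
```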
# Set weather data datetime

This notebook formats a date and a time column for weather data measurements with a unix timestamp. Each measurement is then inserted into a pumilio database.

#### Required packages

<a href="https://github.com/pydata/pandas">pandas</a> <br />
<a href="https://github.com/rasbt/pyprind">pyprind</a> <br />
<a href="https://github.com/jacobdein/pymilio">pymilio</a>

#### Variable declarations

weather_filepath – path to an Excel file containing weather measurements, each with a unix timestamp

```
weather_filepath = ""
```

#### Import statements

```
import pandas
import pyprind
from datetime import datetime
from Pymilio import database
```

#### Create and format a 'WeatherDate' and 'WeatherTime' column

```
weather_data = pandas.read_excel(weather_filepath)

weather_data['WeatherDate'] = weather_data['WeatherDate'].astype('str')
weather_data['WeatherTime'] = weather_data['WeatherTime'].astype('str')

for index, row in weather_data.iterrows():
    timestamp = row['timestamp']
    dt = datetime.fromtimestamp(timestamp)
    date = datetime.strftime(dt, "%Y-%m-%d")
    time = datetime.strftime(dt, "%H:%M:%S")
    weather_data.set_value(index, 'WeatherDate', date)
    weather_data.set_value(index, 'WeatherTime', time)

weather_data = weather_data.drop('timestamp', axis=1)
weather_data = weather_data.drop('LightIntensity', axis=1)
```

#### Connect to database

```
# named 'pumilio_db' so the insert loop below can call pumilio_db._connect()
pumilio_db = database.Pymilio_db_connection(user='pumilio',
                                            database='pumilio',
                                            read_default_file='~/.my.cnf.pumilio')
```

#### Insert weather measurements into a pumilio database

```
table_name = 'WeatherData'

column_list = [ n for n in weather_data.columns ]
column_names = ", ".join(column_list)

progress_bar = pyprind.ProgBar(len(weather_data), bar_char='█', title='Progress', monitor=True, stream=1, width=50)

for index, row in weather_data.iterrows():
    progress_bar.update(item_id=str(index))

    value_list = [ str(v) for v in row.as_matrix() ]
    value_strings = "'"
    value_strings = value_strings + "', '".join(value_list)
    value_strings = value_strings + "'"
    #value_strings = value_strings.replace('nan', 'NULL')

    statement = """INSERT INTO {0} ({1}) VALUES ({2})""".format(table_name, column_names, value_strings)

    db = pumilio_db._connect()
    c = db.cursor()
    c.execute(statement)
    c.close()
    db.close()
```

#### Optionally export dataframe to a csv file

```
#weather_data.to_csv("~/Desktop/weather_db.csv", index=False, header=False)
```
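Building each INSERT statement by quoting and concatenating strings, as in the loop above, breaks if a value ever contains a quote character. A safer variant is sketched below; it assumes that `pumilio_db._connect()` returns a standard MySQL DB-API connection (cursor/execute/commit), which is my assumption rather than documented Pymilio behaviour.

```
# Sketch: parameterized INSERT, letting the driver handle quoting and escaping.
placeholders = ", ".join(["%s"] * len(column_list))
statement = "INSERT INTO {0} ({1}) VALUES ({2})".format(table_name, column_names, placeholders)

conn = pumilio_db._connect()
cursor = conn.cursor()
for _, row in weather_data.iterrows():
    cursor.execute(statement, tuple(str(v) for v in row.values))
conn.commit()  # assumes autocommit is off; harmless otherwise
cursor.close()
conn.close()
```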
<h2><center> Quick, Draw! Doodle Recognition Challenge & CNNs</center> </h2>

### Let's see how CNNs, a model class already proven for image classification, perform in this challenge. This is just a demonstration, which is why I'm not using all the categories from the train set.

### Dependencies

```
import os
import ast
import cv2
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

from sklearn.utils import shuffle
from keras import optimizers
from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPool2D, Activation, BatchNormalization
from keras.models import Sequential
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

%matplotlib inline
```

### Auxiliary functions

```
def drawing_to_np(drawing, shape=(28, 28)):
    # evaluates the drawing array
    drawing = eval(drawing)
    fig, ax = plt.subplots()
    for x,y in drawing:
        ax.plot(x, y, marker='.')
    ax.axis('off')
    fig.canvas.draw()
    # Close figure so it won't get displayed while transforming the set
    plt.close(fig)
    # Convert images to numpy array
    np_drawing = np.array(fig.canvas.renderer._renderer)
    # Take only one channel
    np_drawing =np_drawing[:, :, 1]
    # Normalize data
    np_drawing = np_drawing / 255.
    return cv2.resize(np_drawing, shape) # Resize array


def plot_metrics_primary(acc, val_acc, loss, val_loss):
    fig, (ax1, ax2) = plt.subplots(1, 2, sharex='col', figsize=(20,7))
    ax1.plot(acc, label='Train Accuracy')
    ax1.plot(val_acc, label='Validation accuracy')
    ax1.legend(loc='best')
    ax1.set_title('Accuracy')
    ax2.plot(loss, label='Train loss')
    ax2.plot(val_loss, label='Validation loss')
    ax2.legend(loc='best')
    ax2.set_title('Loss')
    plt.xlabel('Epochs')


def plot_confusion_matrix(cnf_matrix, labels):
    cnf_matrix_norm = cnf_matrix.astype('float') / cnf_matrix.sum(axis=1)[:, np.newaxis]
    df_cm = pd.DataFrame(cnf_matrix_norm, index=labels, columns=labels)
    plt.figure(figsize=(20,7))
    sns.heatmap(df_cm, annot=True, fmt='.2f', cmap="Blues")
    plt.show()
```

### Load data

```
TRAIN_PATH = '../input/train_simplified/'
TEST_PATH = '../input/test_simplified.csv'
SUBMISSION_NAME = 'submission.csv'

train = pd.DataFrame()
for file in os.listdir(TRAIN_PATH)[:5]:
    train = train.append(pd.read_csv(TRAIN_PATH + file, usecols=[1, 5], nrows=2000))

# Shuffle data
train = shuffle(train, random_state=123)

test = pd.read_csv(TEST_PATH, usecols=[0, 2], nrows=100)
```

### Parameters

```
# Model parameters
BATCH_SIZE = 64
EPOCHS = 60
LEARNING_RATE = 0.001
N_CLASSES = train['word'].nunique()
HEIGHT = 28
WIDTH = 28
CHANNEL = 1
```

### Let us get a glimpse at the raw data

```
print('Train set shape: ', train.shape)
print('Train set features: %s' % train.columns.values)
print('Train number of label categories: %s' % N_CLASSES)

train.head()
```

### Pre-process

```
# Fixing label
train['word'] = train['word'].replace(' ', '_', regex=True)

# Get labels and one-hot encode them.
classes_names = train['word'].unique()
labels = pd.get_dummies(train['word']).values
train.drop(['word'], axis=1, inplace=True)

# Transform drawing into numpy arrays
train['drawing_np'] = train['drawing'].apply(drawing_to_np)

# Reshape arrays
train_drawings = np.asarray([x.reshape(HEIGHT, WIDTH, CHANNEL) for x in train['drawing_np'].values])

train.head()
```

### Split data in train and validation (90% ~ 10%)

```
x_train, x_val, y_train, y_val = train_test_split(train_drawings, labels, test_size=0.1, random_state=1)
```

### Model

```
model = Sequential()

model.add(Conv2D(32, kernel_size=(5,5),padding='Same', activation='relu', input_shape=(HEIGHT, WIDTH, CHANNEL)))
model.add(Conv2D(32, kernel_size=(5,5),padding='Same', activation='relu'))
model.add(MaxPool2D(pool_size=(2,2)))
model.add(Dropout(0.25))

model.add(Conv2D(64, kernel_size=(3,3),padding='Same', activation='relu'))
model.add(Conv2D(64, kernel_size=(3,3),padding='Same', activation='relu'))
model.add(MaxPool2D(pool_size=(2,2), strides=(2,2)))
model.add(Dropout(0.25))

model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(N_CLASSES, activation = "softmax"))

optimizer = optimizers.adam(lr=LEARNING_RATE)
model.compile(optimizer=optimizer , loss="categorical_crossentropy", metrics=["accuracy"])

print('Dataset size: %s' % train.shape[0])
print('Epochs: %s' % EPOCHS)
print('Learning rate: %s' % LEARNING_RATE)
print('Batch size: %s' % BATCH_SIZE)
print('Input dimension: (%s, %s, %s)' % (HEIGHT, WIDTH, CHANNEL))

model.summary()

history = model.fit(x_train, y_train, batch_size=BATCH_SIZE, epochs=EPOCHS, validation_data=(x_val, y_val))
```

Let's take a look at the training and validation loss and accuracy curves.

```
plot_metrics_primary(history.history['acc'], history.history['val_acc'], history.history['loss'], history.history['val_loss'])
```

A good way to evaluate a classification model is to look at its confusion matrix; this gives better insight into what the model is getting right and what it is not.

```
cnf_matrix = confusion_matrix(np.argmax(y_val, axis=1), model.predict_classes(x_val))
plot_confusion_matrix(cnf_matrix, classes_names)
```

Finally, let's predict the test data and output our predictions.

### Process test

```
# Transform drawing into numpy arrays.
test['drawing_np'] = test['drawing'].apply(drawing_to_np)

# Reshape arrays.
test_drawings = np.asarray([x.reshape(HEIGHT, WIDTH, CHANNEL) for x in test['drawing_np'].values])

predictions = model.predict(test_drawings)
top_3_predictions = np.asarray([np.argpartition(pred, -3)[-3:] for pred in predictions])
top_3_predictions = ['%s %s %s' % (classes_names[pred[0]], classes_names[pred[1]], classes_names[pred[2]]) for pred in top_3_predictions]

test['word'] = top_3_predictions
submission = test[['key_id', 'word']]
submission.to_csv(SUBMISSION_NAME, index=False)

submission.head()
```
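Since the submission keeps the top-3 guesses per drawing, a top-3 metric on the validation split is often more informative than plain accuracy. The snippet below is a small sketch of that idea; the MAP@3 formula used here is the usual one for ranked predictions and is not taken from the notebook itself.

```
# Sketch: top-3 accuracy and MAP@3 on the validation split.
val_probs = model.predict(x_val)
top3 = np.argsort(-val_probs, axis=1)[:, :3]   # three most likely classes per sample
y_true = np.argmax(y_val, axis=1)

top3_accuracy = np.mean([t in row for t, row in zip(y_true, top3)])

# MAP@3: 1, 1/2 or 1/3 depending on where the true class ranks, 0 if it is absent.
map3 = np.mean([1.0 / (list(row).index(t) + 1) if t in row else 0.0
                for t, row in zip(y_true, top3)])

print('Top-3 accuracy: %.4f | MAP@3: %.4f' % (top3_accuracy, map3))
```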
# Convert PDF of gSlides to Images (PNG)

- store 'architectures.pdf' in /vertex-ai-mlops/slides
- store 'thumbnails.pdf' in /vertex-ai-mlops/thumbnails
- run this notebook in /vertex-ai-mlops/architectures
- slides are stored as slide_X.png in /vertex-ai-mlops/architectures/slides
- thumbnails are stored as tn_X.png in /vertex-ai-mlops/architectures/thumbnails (/plain, and /playbutton)

---
## Setup

```
!ls
!pip install pdf2image -q -U
!conda install -c conda-forge poppler -y -q

from pdf2image import convert_from_path
```

---
## Mapping

```
import os, glob

notebooks = []
for nb in glob.glob('../*.ipynb'):
    notebooks.append(nb.split(' - ')[0][3:])
notebooks.sort()
notebooks = ['readme'] + notebooks
notebooks
```

---
## Architectures.pdf

```
images = convert_from_path('slides/architectures.pdf', 350)
for i, image in enumerate(images):
    if i > 0:
        # div by 2: int part is index for notebooks, remainder is 0=arch, 1=console
        slide = notebooks[int((i-1)/2)]
        if ((i-1) % 2) == 0:
            suffix = 'arch'
        else:
            suffix = 'console'
        image.save(f'slides/{slide}_{suffix}.png')
```

---
## Thumbnails.pdf

```
from PIL import Image
import os

images = convert_from_path('thumbnails/thumbnails.pdf', size=(1920, 1080))
```

/plain versions

```
for i, image in enumerate(images):
    if i > 0:
        image.save(f'thumbnails/plain/{notebooks[i-1]}.png')
```

/prepared versions
- add the architecture slide to the plain version

```
for filename in os.listdir('thumbnails/plain'):
    if not (filename.endswith('.png')):
        continue
    if filename == 'readme.png':
        thumb = Image.open(f'thumbnails/plain/{filename}')
        thumb.save(f'thumbnails/prepared/{filename}')
        continue

    # grab plain thumbnail
    thumb = Image.open(f'thumbnails/plain/{filename}')
    tWidth, tHeight = thumb.size

    # grab related architecture slide
    slide = Image.open(f"slides/{filename.split('.')[0]}_arch.png").convert("RGBA")
    sWidth, sHeight = slide.size
    slide = slide.resize((int(tWidth/1.6), int(tHeight/1.6)))

    # save the prepared version with architecture added to plain thumbnail
    thumb.paste(slide, (int(tWidth/3), int(tHeight/7)), slide)
    thumb.save(f'thumbnails/prepared/{filename}')
```

/playbutton versions
- add playbutton to the prepared versions

```
playbutton = Image.open('thumbnails/logo_youtube_color_1x_web_512dp.png').convert("RGBA")
pbWidth, pbHeight = playbutton.size
#playbutton.show()

for filename in os.listdir('thumbnails/prepared'):
    if not (filename.endswith('.png')):
        continue
    tn = Image.open(f'thumbnails/prepared/{filename}')
    tnWidth, tnHeight = tn.size
    print(filename)
    tn.paste(playbutton, (int(tnWidth/2 - pbWidth/2), int(tnHeight/2 - pbHeight/2)), playbutton)
    tn.save(f'thumbnails/playbutton/{filename}')
```
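After the three passes above it is easy to end up with a notebook that silently has no thumbnail, for example when the PDF has fewer pages than expected. The cell below is a small sanity check along those lines; it is an addition of mine, not part of the original workflow.

```
# Sketch: compare expected notebook names against the PNGs actually produced.
import os

expected = set(notebooks)
produced = {os.path.splitext(f)[0] for f in os.listdir('thumbnails/plain') if f.endswith('.png')}

print('Missing thumbnails:', sorted(expected - produced) or 'none')
print('Unexpected thumbnails:', sorted(produced - expected) or 'none')
```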
<a href="https://colab.research.google.com/github/msrana172/Big-Data-Engineering-Coursera-Yandex/blob/master/mnf.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ``` pip install --upgrade google-cloud-translate import os from google.cloud import translate_v2 as translate os.environ["GOOGLE_APPLICATION_CREDENTIALS"]="/content/unique-decker-278617-ead19fb5dcc9.json" translate_client = translate.Client() #text ="my name is lokesh." text = "My name is Goat cheese" target = "fr" result = translate_client.translate(text, target_language=target) print(u"Text: {}".format(result["input"])) print(u"Translation: {}".format(result["translatedText"])) print(u"Detected source language: {}".format(result["detectedSourceLanguage"])) result def create_glossary(languages, project_id, glossary_name, glossary_uri): timeout=180 location = "us-central1" client = translate.TranslationServiceClient() name = client.glossary_path(project_id, location, glossary_name) language_codes_set = translate.Glossary.LanguageCodesSet( language_codes=languages) gcs_source = translate.GcsSource(input_uri=glossary_uri) input_config = translate.GlossaryInputConfig(gcs_source=gcs_source) glossary = translate.Glossary( name=name, language_codes_set=language_codes_set, input_config=input_config ) parent = f"projects/{project_id}/locations/{location}" #print("hello") operation = client.create_glossary(parent=parent, glossary=glossary) #print("hello") result = operation.result(timeout) print("Created: {}".format(result.name)) print("Input Uri: {}".format(result.input_config.gcs_source.input_uri)) import os from google.api_core.exceptions import AlreadyExists from google.cloud import translate_v3 as translate os.environ["GOOGLE_APPLICATION_CREDENTIALS"]="/content/unique-decker-278617-ead19fb5dcc9.json" os.environ["GCLOUD_PROJECT"]="unique-decker-278617" glossary_langs = ["fr", "en"] PROJECT_ID = "unique-decker-278617" glossary_name = "lokeshgupta1" glossary_uri = "gs://gupta/Book1.csv" create_glossary(glossary_langs, PROJECT_ID, glossary_name, glossary_uri) from google.cloud import translate def translate_text_with_glossary( text = "YOUR_TEXT_TO_TRANSLATE", project_id = "YOUR_PROJECT_ID", glossary_id = "YOUR_GLOSSARY_ID", ): """Translates a given text using a glossary.""" client = translate.TranslationServiceClient() location = "us-central1" parent = f"projects/{project_id}/locations/{location}" glossary = client.glossary_path( project_id, "us-central1", glossary_id # The location of the glossary ) glossary_config = translate.TranslateTextGlossaryConfig(glossary=glossary) # Supported language codes: https://cloud.google.com/translate/docs/languages response = client.translate_text( request={ "contents": [text], "target_language_code": "fr", "source_language_code": "en", "parent": parent, "glossary_config": glossary_config, } ) print("Translated text: \n") for translation in response.glossary_translations: print("\t {}".format(translation.translated_text)) translate_text_with_glossary("My name is Goat cheese", "unique-decker-278617", "lokeshgupta" ) Mon nom est lokesh Mon nom est fromage de chèvre from google.cloud import translate def list_glossaries(project_id="YOUR_PROJECT_ID"): """List Glossaries.""" client = translate.TranslationServiceClient() location = "us-central1" parent = f"projects/{project_id}/locations/{location}" # Iterate over all results for glossary in client.list_glossaries(parent=parent): print("Name: {}".format(glossary.name)) print("Entry count: 
{}".format(glossary.entry_count)) print("Input uri: {}".format(glossary.input_config.gcs_source.input_uri)) # Note: You can create a glossary using one of two modes: # language_code_set or language_pair. When listing the information for # a glossary, you can only get information for the mode you used # when creating the glossary. for language_code in glossary.language_codes_set.language_codes: print("Language code: {}".format(language_code)) list_glossaries("unique-decker-278617") from google.cloud import translate_v3 as translate def delete_glossary( project_id="YOUR_PROJECT_ID", glossary_id="YOUR_GLOSSARY_ID", timeout=180, ): """Delete a specific glossary based on the glossary ID.""" client = translate.TranslationServiceClient() name = client.glossary_path(project_id, "us-central1", glossary_id) operation = client.delete_glossary(name=name) result = operation.result(timeout) print("Deleted: {}".format(result.name)) delete_glossary("unique-decker-278617", "bistro-glossary" ) from google.cloud import translate_v3 as translate def get_glossary(project_id = "YOUR_PROJECT_ID", glossary_id="YOUR_GLOSSARY_ID"): """Get a particular glossary based on the glossary ID.""" client = translate.TranslationServiceClient() name = client.glossary_path(project_id, "us-central1", glossary_id) response = client.get_glossary(name=name) print(u"Glossary name: {}".format(response.name)) print(u"Entry count: {}".format(response.entry_count)) print(u"Input URI: {}".format(response.input_config.gcs_source.input_uri)) get_glossary("unique-decker-278617", "lokeshgupta") glossary_langs = ["fr", "en"] PROJECT_ID = "unique-decker-278617" glossary_name = "gl" glossary_uri = "gs://cloud-samples-data/translation/bistro_glossary.csv" create_glossary(glossary_langs, PROJECT_ID, glossary_name, glossary_uri) glossary_langs = ["fr", "en"] PROJECT_ID = "unique-decker-278617" glossary_name = "hmm" glossary_uri = "gs://gupta/Book1.csv" create_glossary(glossary_langs, PROJECT_ID, glossary_name, glossary_uri) ```
## Autograd

The autograd package is central to the whole backend that PyTorch runs on. It gives us automatic differentiation (the set of techniques that let the computer compute derivatives of numerical code automatically) and works as a "define-by-run" framework, which means every run can be different.

From the documentation:

**torch.Tensor** is the central class of the package. If you set its attribute **.requires_grad** as True, it starts to track all operations on it. When you finish your computation you can call **.backward()** and have all the gradients computed automatically. The gradient for this tensor will be accumulated into **.grad** attribute.

To stop a tensor from tracking history, you can call **.detach()** to detach it from the computation history, and to prevent future computation from being tracked.

To prevent tracking history (and using memory), you can also wrap the code block in **with torch.no_grad():**. This can be particularly helpful when evaluating a model because the model may have trainable parameters with **requires_grad=True**, but for which we don't need the gradients.

There's one more class which is very important for autograd implementation - a **Function**. **Tensor** and **Function** are interconnected and build up an acyclic graph, that encodes a complete history of computation. Each tensor has a **.grad_fn** attribute that references a **Function** that has created the **Tensor** (except for Tensors created by the user - their grad_fn is None).

If you want to compute the derivatives, you can call .backward() on a Tensor. If Tensor is a scalar (i.e. it holds a one element data), you don't need to specify any arguments to backward(), however if it has more elements, you need to specify a gradient argument that is a tensor of matching shape.

This tutorial is based on: https://pytorch.org/tutorials/beginner/blitz/autograd_tutorial.html#sphx-glr-beginner-blitz-autograd-tutorial-py

More information about autograd: https://pytorch.org/docs/autograd

Let's look at some examples.

```
import torch

x = torch.ones(2, 2, requires_grad=True)
print(x)

y = x + 2
print(y)
```

Because y was created as the result of an operation, it now has a **grad_fn**.

```
print(y.grad_fn)

z = y * y * 3
out = z.mean()

print(z, out)
```

The .requires_grad_ method changes the tensor's .requires_grad attribute in place. The default value is False.

```
a = torch.randn(2, 2)
a = ((a * 3) / (a - 1))
print(a.requires_grad)
a.requires_grad_(True)
print(a.requires_grad)
b = (a * a).sum()
print(b.grad_fn)
```

## Gradient

Let's now run the backprop (back propagation) step. If we do it on a scalar, such as the variable "out", the call out.backward() is equivalent to out.backward(torch.tensor(1.)).

```
out.backward()
out
print(x.grad)
```

Mathematically, for a vector-valued function y = f(x), the gradient of y with respect to the vector x is the Jacobian matrix

$$
J = \begin{pmatrix}
\frac{\partial y_1}{\partial x_1} & \cdots & \frac{\partial y_1}{\partial x_n} \\
\vdots & \ddots & \vdots \\
\frac{\partial y_m}{\partial x_1} & \cdots & \frac{\partial y_m}{\partial x_n}
\end{pmatrix}
$$

The torch.autograd package lets us carry out this kind of computation in a vectorised and efficient way.

```
x = torch.randn(3, requires_grad=True)

y = x * 2
while y.data.norm() < 1000:
    y = y * 2

print(y)
```

"y" is no longer a scalar, so torch.autograd cannot compute the full Jacobian matrix directly, but we can compute the vector-Jacobian product.

```
print(y.grad)
v = torch.tensor([0.1, 1.0, 0.0001], dtype=torch.float)
y.backward(v)

print(x.grad)
```
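The quoted documentation above mentions `.detach()` and `with torch.no_grad():`, but none of the cells exercise them. A short sketch of both follows.

```
import torch

x = torch.ones(2, 2, requires_grad=True)

# .detach(): same values, but cut out of the autograd graph
y = (x * 3).detach()
print(y.requires_grad)             # False

# torch.no_grad(): operations inside the block are not tracked
print((x ** 2).requires_grad)      # True
with torch.no_grad():
    print((x ** 2).requires_grad)  # False
```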
```
import pandas as pd
import numpy as np
from datetime import datetime, timedelta
import math
import plotly.graph_objects as go
from plotly.subplots import make_subplots

# Import xlsx file and store each sheet in to a df list
xl_file = pd.ExcelFile('./data.xlsx',)

dfs = {sheet_name: xl_file.parse(sheet_name) for sheet_name in xl_file.sheet_names}

# Data from each sheet can be accessed via key
keyList = list(dfs.keys())

# Data cleansing
for key, df in dfs.items():
    dfs[key].loc[:,'Confirmed'].fillna(value=0, inplace=True)
    dfs[key].loc[:,'Deaths'].fillna(value=0, inplace=True)
    dfs[key].loc[:,'Recovered'].fillna(value=0, inplace=True)
    dfs[key]=dfs[key].astype({'Confirmed':'int64', 'Deaths':'int64', 'Recovered':'int64'})
    # Change as China for coordinate search
    dfs[key]=dfs[key].replace({'Country/Region':'Mainland China'}, 'China')
    dfs[key]=dfs[key].replace({'Province/State':'Queensland'}, 'Brisbane')
    dfs[key]=dfs[key].replace({'Province/State':'New South Wales'}, 'Sydney')
    dfs[key]=dfs[key].replace({'Province/State':'Victoria'}, 'Melbourne')
    # Add a zero to the date so it can be converted by datetime.strptime as a 0-padded date
    dfs[key]['Last Update'] = '0' + dfs[key]['Last Update']
    # Convert time as Australian eastern daylight time
    dfs[key]['Date_last_updated_AEDT'] = [datetime.strptime(d, '%m/%d/%Y %H:%M') for d in dfs[key]['Last Update']]
    dfs[key]['Date_last_updated_AEDT'] = dfs[key]['Date_last_updated_AEDT'] + timedelta(hours=16)

# Check
dfs[keyList[0]].head()

# Import data with coordinates (coordinates were called separately in "Updated_coordinates")
dfs[keyList[0]]=pd.read_csv('{}_data.csv'.format(keyList[0]))

dfs[keyList[1]]['Deaths'].sum()

# Save numbers into variables to use in the app
confirmedCases=dfs[keyList[0]]['Confirmed'].sum()
deathsCases=dfs[keyList[0]]['Deaths'].sum()
recoveredCases=dfs[keyList[0]]['Recovered'].sum()

# Construct new dataframe for line plot
DateList = []
ChinaList =[]
OtherList = []

for key, df in dfs.items():
    dfTpm = df.groupby(['Country/Region'])['Confirmed'].agg(np.sum)
    dfTpm = pd.DataFrame({'Code':dfTpm.index, 'Confirmed':dfTpm.values})
    dfTpm = dfTpm.sort_values(by='Confirmed', ascending=False).reset_index(drop=True)
    DateList.append(df['Date_last_updated_AEDT'][0])
    ChinaList.append(dfTpm['Confirmed'][0])
    OtherList.append(dfTpm['Confirmed'][1:].sum())

df_confirmed = pd.DataFrame({'Date':DateList, 'Mainland China':ChinaList, 'Other locations':OtherList})
df_confirmed['date_day']=[d.date() for d in df_confirmed['Date']]
df_confirmed=df_confirmed.groupby(by=df_confirmed['date_day'], sort=False).transform(max).drop_duplicates(['Date'])
df_confirmed['Total']=df_confirmed['Mainland China']+df_confirmed['Other locations']
df_confirmed=df_confirmed.reset_index(drop=True)
df_confirmed

# Construct new dataframe for 24-hour window case difference
DateList = []
ChinaList =[]
OtherList = []

for key, df in dfs.items():
    dfTpm = df.groupby(['Country/Region'])['Confirmed'].agg(np.sum)
    dfTpm = pd.DataFrame({'Code':dfTpm.index, 'Confirmed':dfTpm.values})
    dfTpm = dfTpm.sort_values(by='Confirmed', ascending=False).reset_index(drop=True)
    DateList.append(df['Date_last_updated_AEDT'][0])
    ChinaList.append(dfTpm['Confirmed'][0])
    OtherList.append(dfTpm['Confirmed'][1:].sum())

df_confirmed_diff = pd.DataFrame({'Date':DateList, 'Mainland China':ChinaList, 'Other locations':OtherList})
df_confirmed_diff['Total']=df_confirmed_diff['Mainland China']+df_confirmed_diff['Other locations']

# Calculate difference in a 24-hour window
for index, _ in df_confirmed_diff.iterrows():
    # Calculate the time difference in hours
    diff=(df_confirmed_diff['Date'][0] - df_confirmed_diff['Date'][index]).total_seconds()/3600
    # find out the latest time after 24-hour
    if diff >= 24:
        break

plusConfirmedNum = df_confirmed_diff['Total'][0] - df_confirmed_diff['Total'][index]
plusPercentNum1 = (df_confirmed_diff['Total'][0] - df_confirmed_diff['Total'][index])/df_confirmed_diff['Total'][index]

# Construct new dataframe for line plot
DateList = []
ChinaList =[]
OtherList = []

for key, df in dfs.items():
    dfTpm = df.groupby(['Country/Region'])['Recovered'].agg(np.sum)
    dfTpm = pd.DataFrame({'Code':dfTpm.index, 'Recovered':dfTpm.values})
    dfTpm = dfTpm.sort_values(by='Recovered', ascending=False).reset_index(drop=True)
    DateList.append(df['Date_last_updated_AEDT'][0])
    ChinaList.append(dfTpm['Recovered'][0])
    OtherList.append(dfTpm['Recovered'][1:].sum())

df_recovered = pd.DataFrame({'Date':DateList, 'Mainland China':ChinaList, 'Other locations':OtherList})
df_recovered['date_day']=[d.date() for d in df_recovered['Date']]
df_recovered=df_recovered.groupby(by=df_recovered['date_day'], sort=False).transform(max).drop_duplicates(['Date'])
df_recovered['Total']=df_recovered['Mainland China']+df_recovered['Other locations']
df_recovered=df_recovered.reset_index(drop=True)
df_recovered

# Construct new dataframe for 24-hour window case difference
DateList = []
ChinaList =[]
OtherList = []

for key, df in dfs.items():
    dfTpm = df.groupby(['Country/Region'])['Recovered'].agg(np.sum)
    dfTpm = pd.DataFrame({'Code':dfTpm.index, 'Recovered':dfTpm.values})
    dfTpm = dfTpm.sort_values(by='Recovered', ascending=False).reset_index(drop=True)
    DateList.append(df['Date_last_updated_AEDT'][0])
    ChinaList.append(dfTpm['Recovered'][0])
    OtherList.append(dfTpm['Recovered'][1:].sum())

df_recovered_diff = pd.DataFrame({'Date':DateList, 'Mainland China':ChinaList, 'Other locations':OtherList})
# total recovered = recovered in Mainland China + recovered in other locations
df_recovered_diff['Total']=df_recovered_diff['Mainland China']+df_recovered_diff['Other locations']

# Calculate difference in a 24-hour window
for index, _ in df_recovered_diff.iterrows():
    # Calculate the time difference in hours
    diff=(df_recovered_diff['Date'][0] - df_recovered_diff['Date'][index]).total_seconds()/3600
    # find out the latest time after 24-hour
    if diff >= 24:
        break

plusRecoveredNum = df_recovered_diff['Total'][0] - df_recovered_diff['Total'][index]
plusPercentNum2 = (df_recovered_diff['Total'][0] - df_recovered_diff['Total'][index])/df_recovered_diff['Total'][index]

plusPercentNum2

# Construct new dataframe for line plot
DateList = []
ChinaList =[]
OtherList = []

for key, df in dfs.items():
    dfTpm = df.groupby(['Country/Region'])['Deaths'].agg(np.sum)
    dfTpm = pd.DataFrame({'Code':dfTpm.index, 'Deaths':dfTpm.values})
    dfTpm = dfTpm.sort_values(by='Deaths', ascending=False).reset_index(drop=True)
    DateList.append(df['Date_last_updated_AEDT'][0])
    ChinaList.append(dfTpm['Deaths'][0])
    OtherList.append(dfTpm['Deaths'][1:].sum())

df_deaths = pd.DataFrame({'Date':DateList, 'Mainland China':ChinaList, 'Other locations':OtherList})
df_deaths['date_day']=[d.date() for d in df_deaths['Date']]
df_deaths=df_deaths.groupby(by='date_day', sort=False).transform(max).drop_duplicates(['Date'])
df_deaths['Total']=df_deaths['Mainland China']+df_deaths['Other locations']
df_deaths=df_deaths.reset_index(drop=True)
df_deaths

# Construct new dataframe for 24-hour window case difference
DateList = []
ChinaList =[]
OtherList = []

for key, df in dfs.items():
    dfTpm = df.groupby(['Country/Region'])['Deaths'].agg(np.sum)
    dfTpm = pd.DataFrame({'Code':dfTpm.index,
'Deaths':dfTpm.values}) dfTpm = dfTpm.sort_values(by='Deaths', ascending=False).reset_index(drop=True) DateList.append(df['Date_last_updated_AEDT'][0]) ChinaList.append(dfTpm['Deaths'][0]) OtherList.append(dfTpm['Deaths'][1:].sum()) df_deaths_diff = pd.DataFrame({'Date':DateList, 'Mainland China':ChinaList, 'Other locations':OtherList}) df_deaths_diff['Total']=df_deaths_diff['Mainland China']+df_deaths_diff['Other locations'] # Calculate differenec in a 24-hour window for index, _ in df_deaths_diff.iterrows(): # Calculate the time differnece in hour diff=(df_deaths_diff['Date'][0] - df_deaths_diff['Date'][index]).total_seconds()/3600 # find out the latest time after 24-hour if diff >= 24: break plusDeathNum = df_deaths_diff['Total'][0] - df_deaths_diff['Total'][index] plusPercentNum3 = (df_deaths_diff['Total'][0] - df_deaths_diff['Total'][index])/df_deaths_diff['Total'][index] plusPercentNum3 # Generate sum values for Country/Region level dfCase = dfs[keyList[0]].groupby(by='Country/Region', sort=False).sum().reset_index() dfCase = dfCase.sort_values(by=['Confirmed'], ascending=False).reset_index(drop=True) # As lat and lon also underwent sum(), which is not desired, remove from this table. dfCase = dfCase.drop(columns=['lat','lon']) dfCase.head() # Grep lat and lon by the first instance to represent its Country/Region dfGPS = dfs[keyList[0]].groupby(by=['Country/Region'], sort=False).first().reset_index() dfGPS = dfGPS[['Country/Region','lat','lon']] dfGPS.head() # Merge two dataframes dfSum = pd.merge(dfCase, dfGPS, how='inner', on='Country/Region') dfSum = dfSum.replace({'Country/Region':'China'}, 'Mainland China') dfSum = dfSum[['Country/Region','Confirmed','Recovered','Deaths','lat','lon']] dfSum.head() # Save numbers into variables to use in the app latestDate=datetime.strftime(df_confirmed['Date'][0], '%b %d, %Y %H:%M AEDT') secondLastDate=datetime.strftime(df_confirmed['Date'][1], '%b %d') daysOutbreak=(df_confirmed['Date'][0] - datetime.strptime('12/31/2019', '%m/%d/%Y')).days latestDate # Line plot for confirmed cases # Set up tick scale based on confirmed case number tickList = list(np.arange(0, df_confirmed['Mainland China'].max()+1000, 5000)) # Create empty figure canvas fig_confirmed = go.Figure() # Add trace to the figure fig_confirmed.add_trace(go.Scatter(x=df_confirmed['Date'], y=df_confirmed['Mainland China'], mode='lines+markers', line_shape='spline', name='Mainland China', line=dict(color='#921113', width=3), marker=dict(size=8, color='#f4f4f2', line=dict(width=1,color='#921113')), text=[datetime.strftime(d, '%b %d %Y AEDT') for d in df_confirmed['Date']], hovertext=['Mainland China confirmed<br>{:,d} cases<br>'.format(i) for i in df_confirmed['Mainland China']], hovertemplate='<b>%{text}</b><br></br>'+ '%{hovertext}'+ '<extra></extra>')) fig_confirmed.add_trace(go.Scatter(x=df_confirmed['Date'], y=df_confirmed['Other locations'], mode='lines+markers', line_shape='spline', name='Other Region', line=dict(color='#eb5254', width=3), marker=dict(size=8, color='#f4f4f2', line=dict(width=1,color='#eb5254')), text=[datetime.strftime(d, '%b %d %Y AEDT') for d in df_confirmed['Date']], hovertext=['Other locations confirmed<br>{:,d} cases<br>'.format(i) for i in df_confirmed['Other locations']], hovertemplate='<b>%{text}</b><br></br>'+ '%{hovertext}'+ '<extra></extra>')) # Customise layout fig_confirmed.update_layout( #title=dict( # text="<b>Confirmed Cases Timeline<b>", # y=0.96, x=0.5, xanchor='center', yanchor='top', # font=dict(size=20, color="#292929", family="Playfair 
Display") #), margin=go.layout.Margin( l=10, r=10, b=10, t=5, pad=0 ), yaxis=dict( showline=True, linecolor='#272e3e', zeroline=False, gridcolor='rgba(203, 210, 211,.3)', gridwidth = .1, tickmode='array', # Set tick range based on the maximum number tickvals=tickList, # Set tick label accordingly ticktext=["{:.0f}k".format(i/1000) for i in tickList] ), # yaxis_title="Total Confirmed Case Number", xaxis=dict( showline=True, linecolor='#272e3e', gridcolor='rgba(203, 210, 211,.3)', gridwidth = .1, zeroline=False ), xaxis_tickformat='%b %d', hovermode = 'x', legend_orientation="h", # legend=dict(x=.35, y=-.05), plot_bgcolor='#f4f4f2', paper_bgcolor='#cbd2d3', font=dict(color='#292929') ) fig_confirmed.show() # Line plot for both recovered and death cases # Set up tick scale based on confirmed case number tickList = list(np.arange(0, df_recovered['Mainland China'].max()+200, 500)) # Create empty figure canvas fig_combine = go.Figure() # Add trace to the figure fig_combine.add_trace(go.Scatter(x=df_recovered['Date'], y=df_recovered['Total'], mode='lines+markers', line_shape='spline', name='Total Recovered Cases', line=dict(color='#168038', width=3), marker=dict(size=8, color='#f4f4f2', line=dict(width=1,color='#168038')), text=[datetime.strftime(d, '%b %d %Y AEDT') for d in df_recovered['Date']], hovertext=['Total recovered<br>{:,d} cases<br>'.format(i) for i in df_recovered['Total']], hovertemplate='<b>%{text}</b><br></br>'+ '%{hovertext}'+ '<extra></extra>')) fig_combine.add_trace(go.Scatter(x=df_deaths['Date'], y=df_deaths['Total'], mode='lines+markers', line_shape='spline', name='Total Death Cases', line=dict(color='#626262', width=3), marker=dict(size=8, color='#f4f4f2', line=dict(width=1,color='#626262')), text=[datetime.strftime(d, '%b %d %Y AEDT') for d in df_deaths['Date']], hovertext=['Total death<br>{:,d} cases<br>'.format(i) for i in df_deaths['Total']], hovertemplate='<b>%{text}</b><br></br>'+ '%{hovertext}'+ '<extra></extra>')) # Customise layout fig_combine.update_layout( #title=dict( # text="<b>Recovered Cases Timeline<b>", # y=0.96, x=0.5, xanchor='center', yanchor='top', # font=dict(size=20, color="#292929", family="Playfair Display") #), margin=go.layout.Margin( l=10, r=10, b=10, t=5, pad=0 ), yaxis=dict( showline=True, linecolor='#272e3e', zeroline=False, gridcolor='rgba(203, 210, 211,.3)', gridwidth = .1, tickmode='array', # Set tick range based on the maximum number tickvals=tickList, # Set tick label accordingly ticktext=['{:.0f}'.format(i) for i in tickList] ), # yaxis_title="Total Recovered Case Number", xaxis=dict( showline=True, linecolor='#272e3e', gridcolor='rgba(203, 210, 211,.3)', gridwidth = .1, zeroline=False ), xaxis_tickformat='%b %d', hovermode = 'x', legend_orientation="h", # legend=dict(x=.35, y=-.05), plot_bgcolor='#f4f4f2', paper_bgcolor='#cbd2d3', font=dict(color='#292929') ) fig_combine.show() # Line plot for death rate cases # Set up tick scale based on confirmed case number tickList = list(np.arange(0, (df_deaths['Mainland China']/df_confirmed['Mainland China']*100).max(), 0.5)) # Create empty figure canvas fig_rate = go.Figure() # Add trace to the figure fig_rate.add_trace(go.Scatter(x=df_deaths['Date'], y=df_deaths['Mainland China']/df_confirmed['Mainland China']*100, mode='lines+markers', line_shape='spline', name='Mainland China Death Rate', line=dict(color='#626262', width=3), marker=dict(size=8, color='#f4f4f2', line=dict(width=1,color='#626262')), text=[datetime.strftime(d, '%b %d %Y AEDT') for d in df_deaths['Date']], hovertext=['Mainland China 
death rate<br>{:.2f}%'.format(i) for i in df_deaths['Mainland China']/df_confirmed['Mainland China']*100], hovertemplate='<b>%{text}</b><br></br>'+ '%{hovertext}'+ '<extra></extra>')) fig_rate.add_trace(go.Scatter(x=df_deaths['Date'], y=df_deaths['Other locations']/df_confirmed['Other locations']*100, mode='lines+markers', line_shape='spline', name='Other region Death Rate', line=dict(color='#a7a7a7', width=3), marker=dict(size=8, color='#f4f4f2', line=dict(width=1,color='#a7a7a7')), text=[datetime.strftime(d, '%b %d %Y AEDT') for d in df_deaths['Date']], hovertext=['Mainland China death rate<br>{:.2f}%'.format(i) for i in df_deaths['Other locations']/df_confirmed['Other locations']*100], hovertemplate='<b>%{text}</b><br></br>'+ '%{hovertext}'+ '<extra></extra>')) # Customise layout fig_rate.update_layout( margin=go.layout.Margin( l=10, r=10, b=10, t=5, pad=0 ), yaxis=dict( showline=True, linecolor='#272e3e', zeroline=False, gridcolor='rgba(203, 210, 211,.3)', gridwidth = .1, tickmode='array', # Set tick range based on the maximum number tickvals=tickList, # Set tick label accordingly ticktext=['{:.1f}'.format(i) for i in tickList] ), # yaxis_title="Total Death Case Number", xaxis=dict( showline=True, linecolor='#272e3e', gridcolor='rgba(203, 210, 211,.3)', gridwidth = .1, zeroline=False ), xaxis_tickformat='%b %d', hovermode = 'x', legend_orientation="h", # legend=dict(x=.35, y=-.05), plot_bgcolor='#f4f4f2', paper_bgcolor='#cbd2d3', font=dict(color='#292929') ) fig_rate.show() mapbox_access_token = "pk.eyJ1IjoicGxvdGx5bWFwYm94IiwiYSI6ImNqdnBvNDMyaTAxYzkzeW5ubWdpZ2VjbmMifQ.TXcBE-xg9BFdV2ocecc_7g" # Generate a list for hover text display textList=[] for area, region in zip(dfs[keyList[0]]['Province/State'], dfs[keyList[0]]['Country/Region']): if type(area) is str: if region == "Hong Kong" or region == "Macau" or region == "Taiwan": textList.append(area) else: textList.append(area+', '+region) else: textList.append(region) fig2 = go.Figure(go.Scattermapbox( lat=dfs[keyList[0]]['lat'], lon=dfs[keyList[0]]['lon'], mode='markers', marker=go.scattermapbox.Marker( color='#ca261d', size=[math.sqrt(i) for i in dfs[keyList[0]]['Confirmed']], sizemin=1, sizemode='area', sizeref=2.*max([math.sqrt(i) for i in dfs[keyList[0]]['Confirmed']])/(100.**2), ), text=textList, hovertext=['Comfirmed: {}<br>Recovered: {}<br>Death: {}'.format(i, j, k) for i, j, k in zip(dfs[keyList[0]]['Confirmed'], dfs[keyList[0]]['Recovered'], dfs[keyList[0]]['Deaths'])], hovertemplate = "<b>%{text}</b><br><br>" + "%{hovertext}<br>" + "<extra></extra>") ) fig2.update_layout( # title=dict( # text="<b>Latest Coronavirus Outbreak Map<b>", # y=0.96, x=0.5, xanchor='center', yanchor='top', # font=dict(size=20, color="#292929", family="Playfair Display") # ), plot_bgcolor='#151920', paper_bgcolor='#cbd2d3', margin=go.layout.Margin( l=10, r=10, b=10, t=0, pad=40 ), hovermode='closest', mapbox=go.layout.Mapbox( accesstoken=mapbox_access_token, style="light", bearing=0, center=go.layout.mapbox.Center( lat=31.1517252, lon=112.8783222 ), pitch=0, zoom=4 ) ) fig2.show() import dash import dash_table import dash_core_components as dcc import dash_html_components as html import dash_bootstrap_components as dbc from dash.dependencies import Input, Output app = dash.Dash(__name__, assets_folder='./assets/', meta_tags=[ {"name": "viewport", "content": "width=device-width, height=device-height, initial-scale=1.0"} ] ) app.layout = html.Div(style={'backgroundColor':'#f4f4f2'}, children=[ html.Div( id="header", children=[ html.H4(children="Wuhan 
Coronavirus (2019-nCoV) Outbreak Monitor"), html.P( id="description", children="On Dec 31, 2019, the World Health Organization (WHO) was informed of \ an outbreak of “pneumonia of unknown cause” detected in Wuhan City, Hubei Province, China – the \ seventh-largest city in China with 11 million residents. As of {}, there are over {:,d} cases \ of 2019-nCoV confirmed globally.\ This dash board is developed to visualise and track the recent reported \ cases on a daily timescale.".format(latestDate, confirmedCases), ), html.P(style={'fontWeight':'bold'}, children="Last updated on {}.".format(latestDate)) ] ), html.Div( id="number-plate", style={'marginLeft':'1.5%','marginRight':'1.5%','marginBottom':'.5%'}, children=[ html.Div( style={'width':'24.4%','backgroundColor':'#cbd2d3','display':'inline-block', 'marginRight':'.8%','verticalAlign':'top'}, children=[ html.H3(style={'textAlign':'center', 'fontWeight':'bold','color':'#ffffbf'}, children=[ html.P(style={'color':'#cbd2d3','padding':'.5rem'}, children='x'), '{}'.format(daysOutbreak), ]), html.H5(style={'textAlign':'center', 'fontWeight':'bold','color':'#ffffbf','padding':'.1rem'}, children="Days Since Outbreak") ]), html.Div( style={'width':'24.4%','backgroundColor':'#cbd2d3','display':'inline-block', 'marginRight':'.8%','verticalAlign':'top'}, children=[ html.H3(style={'textAlign':'center', 'fontWeight':'bold','color':'#d7191c'}, children=[ html.P(style={'padding':'.5rem'}, children='+ {:,d} from yesterday ({:.1%})'.format(plusConfirmedNum, plusPercentNum1)), '{:,d}'.format(confirmedCases) ]), html.H5(style={'textAlign':'center', 'fontWeight':'bold','color':'#d7191c','padding':'.1rem'}, children="Confirmed Cases") ]), html.Div( style={'width':'24.4%','backgroundColor':'#cbd2d3','display':'inline-block', 'marginRight':'.8%','verticalAlign':'top'}, children=[ html.H3(style={'textAlign':'center', 'fontWeight':'bold','color':'#1a9622'}, children=[ html.P(style={'padding':'.5rem'}, children='+ {:,d} from yesterday ({:.1%})'.format(plusRecoveredNum, plusPercentNum2)), '{:,d}'.format(recoveredCases), ]), html.H5(style={'textAlign':'center', 'fontWeight':'bold','color':'#1a9622','padding':'.1rem'}, children="Recovered Cases") ]), html.Div( style={'width':'24.4%','backgroundColor':'#cbd2d3','display':'inline-block', 'verticalAlign':'top'}, children=[ html.H3(style={'textAlign':'center', 'fontWeight':'bold','color':'#6c6c6c'}, children=[ html.P(style={'padding':'.5rem'}, children='+ {:,d} from yesterday ({:.1%})'.format(plusDeathNum, plusPercentNum3)), '{:,d}'.format(deathsCases) ]), html.H5(style={'textAlign':'center', 'fontWeight':'bold','color':'#6c6c6c','padding':'.1rem'}, children="Death Cases") ]) ]), html.Div( id='dcc-plot', style={'marginLeft':'1.5%','marginRight':'1.5%','marginBottom':'.35%','marginTop':'.5%'}, children=[ html.Div( style={'width':'32.79%','display':'inline-block','marginRight':'.8%','verticalAlign':'top'}, children=[ html.H5(style={'textAlign':'center','backgroundColor':'#cbd2d3', 'color':'#292929','padding':'1rem','marginBottom':'0'}, children='Confirmed Case Timeline'), dcc.Graph(style={'height':'300px'},figure=fig_confirmed)]), html.Div( style={'width':'32.79%','display':'inline-block','marginRight':'.8%','verticalAlign':'top'}, children=[ html.H5(style={'textAlign':'center','backgroundColor':'#cbd2d3', 'color':'#292929','padding':'1rem','marginBottom':'0'}, children='Recovered Case Timeline'), dcc.Graph(style={'height':'300px'},figure=fig_combine)]), html.Div( 
style={'width':'32.79%','display':'inline-block','verticalAlign':'top'}, children=[ html.H5(style={'textAlign':'center','backgroundColor':'#cbd2d3', 'color':'#292929','padding':'1rem','marginBottom':'0'}, children='Death Case Timeline'), dcc.Graph(style={'height':'300px'},figure=fig_rate)])]), html.Div( id='dcc-map', style={'marginLeft':'1.5%','marginRight':'1.5%','marginBottom':'.5%'}, children=[ html.Div(style={'width':'72.6%','marginRight':'.8%','display':'inline-block','verticalAlign':'top'}, children=[ html.H5(style={'textAlign':'center','backgroundColor':'#cbd2d3', 'color':'#292929','padding':'1rem','marginBottom':'0'}, children='Latest Coronavirus Outbreak Map'), dcc.Graph(style={'height':'500px'}, figure=fig2)]), html.Div(style={'width':'26.6%','display':'inline-block','verticalAlign':'top'}, children=[ html.H5(style={'textAlign':'center','backgroundColor':'#cbd2d3', 'color':'#292929','padding':'1rem','marginBottom':'0'}, children='Cases by Country/Regions'), dash_table.DataTable( columns=[{"name": i, "id": i} for i in dfSum.columns[0:4]], data=dfSum.to_dict("rows"), row_selectable="single", selected_rows=[], sort_action="native", style_as_list_view=True, style_cell={ 'font_family':'Arial', 'font_size':'1.5rem', 'padding':'.1rem', 'backgroundColor':'#f4f4f2' }, fixed_rows={ 'headers': True, 'data': 0 }, style_header={ 'backgroundColor': '#f4f4f2', 'fontWeight':'bold'}, style_table={ 'maxHeight':'500px', 'overflowX':'scroll', }, style_cell_conditional=[ {'if': {'column_id':'Country/Regions'},'width':'40%'}, {'if': {'column_id':'Confirmed'},'width':'20%'}, {'if': {'column_id':'Recovered'},'width':'20%'}, {'if': {'column_id':'Deaths'},'width':'20%'}, {'if': {'column_id':'Confirmed'},'color':'#d7191c'}, {'if': {'column_id':'Recovered'},'color':'#1a9622'}, {'if': {'column_id':'Deaths'},'color':'#6c6c6c'}, {'textAlign': 'center'} ], ) ]) ]), html.Div(style={'marginLeft':'1.5%','marginRight':'1.5%'}, children=[ html.P(style={'textAlign':'center','margin':'auto'}, children=["Data source from ", html.A('JHU CSSE,', href='https://docs.google.com/spreadsheets/d/1yZv9w9z\ RKwrGTaR-YzmAqMefw4wMlaXocejdxZaTs6w/htmlview?usp=sharing&sle=true#'), html.A(' Dingxiangyuan', href='https://ncov.dxy.cn/ncovh5/view/pneumonia?sce\ ne=2&clicktime=1579582238&enterid=1579582238&from=singlemessage&isappinstalled=0'), " | 🙏 Pray for China, Pray for the World 🙏 |", " Developed by ",html.A('Jun', href='https://junye0798.com/')," with ❤️"])]) ]) if __name__ == '__main__': app.run_server(port=8882) # This is the version with app.callback app = dash.Dash(__name__, assets_folder='./assets/', meta_tags=[ {"name": "viewport", "content": "width=device-width, height=device-height, initial-scale=1.0"} ] ) app.layout = html.Div(style={'backgroundColor':'#f4f4f2'}, children=[ html.Div( id="header", children=[ html.H4(children="Coronavirus (2019-nCoV) Outbreak Global Cases Monitor"), html.P( id="description", children="On Dec 31, 2019, the World Health Organization (WHO) was informed of \ an outbreak of “pneumonia of unknown cause” detected in Wuhan City, Hubei Province, China – the \ seventh-largest city in China with 11 million residents. 
As of {}, there are over {:,d} cases \ of 2019-nCoV confirmed globally.\ This dash board is developed to visualise and track the recent reported \ cases on a daily timescale.".format(latestDate, confirmedCases), ), html.P(style={'fontWeight':'bold'}, children="Last updated on {}.".format(latestDate)) ] ), html.Div( id="number-plate", style={'marginLeft':'1.5%','marginRight':'1.5%','marginBottom':'.5%'}, children=[ html.Div( style={'width':'24.4%','backgroundColor':'#cbd2d3','display':'inline-block', 'marginRight':'.8%','verticalAlign':'top'}, children=[ html.H3(style={'textAlign':'center', 'fontWeight':'bold','color':'#2674f6'}, children=[ html.P(style={'color':'#cbd2d3','padding':'.5rem'}, children='xxxx xxxx xxxx xxx xxxxx'), '{}'.format(daysOutbreak), ]), html.H5(style={'textAlign':'center','color':'#2674f6','padding':'.1rem'}, children="Days Since Outbreak") ]), html.Div( style={'width':'24.4%','backgroundColor':'#cbd2d3','display':'inline-block', 'marginRight':'.8%','verticalAlign':'top'}, children=[ html.H3(style={'textAlign':'center', 'fontWeight':'bold','color':'#d7191c'}, children=[ html.P(style={'padding':'.5rem'}, children='+ {:,d} from past 24h ({:.1%})'.format(plusConfirmedNum, plusPercentNum1)), '{:,d}'.format(confirmedCases) ]), html.H5(style={'textAlign':'center','color':'#d7191c','padding':'.1rem'}, children="Confirmed Cases") ]), html.Div( style={'width':'24.4%','backgroundColor':'#cbd2d3','display':'inline-block', 'marginRight':'.8%','verticalAlign':'top'}, children=[ html.H3(style={'textAlign':'center', 'fontWeight':'bold','color':'#1a9622'}, children=[ html.P(style={'padding':'.5rem'}, children='+ {:,d} from past 24h ({:.1%})'.format(plusRecoveredNum, plusPercentNum2)), '{:,d}'.format(recoveredCases), ]), html.H5(style={'textAlign':'center','color':'#1a9622','padding':'.1rem'}, children="Recovered Cases") ]), html.Div( style={'width':'24.4%','backgroundColor':'#cbd2d3','display':'inline-block', 'verticalAlign':'top'}, children=[ html.H3(style={'textAlign':'center', 'fontWeight':'bold','color':'#6c6c6c'}, children=[ html.P(style={'padding':'.5rem'}, children='+ {:,d} from past 24h ({:.1%})'.format(plusDeathNum, plusPercentNum3)), '{:,d}'.format(deathsCases) ]), html.H5(style={'textAlign':'center','color':'#6c6c6c','padding':'.1rem'}, children="Death Cases") ]) ]), html.Div( id='dcc-plot', style={'marginLeft':'1.5%','marginRight':'1.5%','marginBottom':'.35%','marginTop':'.5%'}, children=[ html.Div( style={'width':'32.79%','display':'inline-block','marginRight':'.8%','verticalAlign':'top'}, children=[ html.H5(style={'textAlign':'center','backgroundColor':'#cbd2d3', 'color':'#292929','padding':'1rem','marginBottom':'0'}, children='Confirmed Case Timeline'), dcc.Graph(style={'height':'300px'},figure=fig_confirmed)]), html.Div( style={'width':'32.79%','display':'inline-block','marginRight':'.8%','verticalAlign':'top'}, children=[ html.H5(style={'textAlign':'center','backgroundColor':'#cbd2d3', 'color':'#292929','padding':'1rem','marginBottom':'0'}, children='Recovered/Death Case Timeline'), dcc.Graph(style={'height':'300px'},figure=fig_combine)]), html.Div( style={'width':'32.79%','display':'inline-block','verticalAlign':'top'}, children=[ html.H5(style={'textAlign':'center','backgroundColor':'#cbd2d3', 'color':'#292929','padding':'1rem','marginBottom':'0'}, children='Death Rate Timeline'), dcc.Graph(style={'height':'300px'},figure=fig_rate)])]), html.Div( id='dcc-map', style={'marginLeft':'1.5%','marginRight':'1.5%','marginBottom':'.5%'}, children=[ 
html.Div(style={'width':'66.41%','marginRight':'.8%','display':'inline-block','verticalAlign':'top'}, children=[ html.H5(style={'textAlign':'center','backgroundColor':'#cbd2d3', 'color':'#292929','padding':'1rem','marginBottom':'0'}, children='Latest Coronavirus Outbreak Map'), dcc.Graph( id='datatable-interact-map', style={'height':'500px'}, ) ]), html.Div(style={'width':'32.79%','display':'inline-block','verticalAlign':'top'}, children=[ html.H5(style={'textAlign':'center','backgroundColor':'#cbd2d3', 'color':'#292929','padding':'1rem','marginBottom':'0'}, children='Cases by Country/Regions'), dash_table.DataTable( id='datatable-interact-location', # Don't show coordinates columns=[{"name": i, "id": i} for i in dfSum.columns[0:4]], # But still store coordinates in the table for interactivity data=dfSum.to_dict("rows"), row_selectable="single", #selected_rows=[], sort_action="native", style_as_list_view=True, style_cell={ 'font_family':'Arial', 'font_size':'1.5rem', 'padding':'.1rem', 'backgroundColor':'#f4f4f2', }, fixed_rows={'headers':True,'data':0}, style_table={ 'maxHeight':'500px', #'overflowY':'scroll', 'overflowX':'scroll', }, style_header={ 'backgroundColor':'#f4f4f2', 'fontWeight':'bold'}, style_cell_conditional=[ {'if': {'column_id':'Country/Regions'},'width':'40%'}, {'if': {'column_id':'Confirmed'},'width':'20%'}, {'if': {'column_id':'Recovered'},'width':'20%'}, {'if': {'column_id':'Deaths'},'width':'20%'}, {'if': {'column_id':'Confirmed'},'color':'#d7191c'}, {'if': {'column_id':'Recovered'},'color':'#1a9622'}, {'if': {'column_id':'Deaths'},'color':'#6c6c6c'}, {'textAlign': 'center'} ], ) ]) ]), html.Div(style={'marginLeft':'1.5%','marginRight':'1.5%'}, children=[ html.P(style={'textAlign':'center','margin':'auto'}, children=["Data source from ", html.A('Dingxiangyuan, ', href='https://ncov.dxy.cn/ncovh5/view/pneumonia?sce\ ne=2&clicktime=1579582238&enterid=1579582238&from=singlemessage&isappinstalled=0'), html.A('Tencent News, ', href='https://news.qq.com//zt2020/page/feiyan.htm#charts'), 'and ', html.A('JHU CSSE', href='https://docs.google.com/spreadsheets/d/1yZv9w9z\ RKwrGTaR-YzmAqMefw4wMlaXocejdxZaTs6w/htmlview?usp=sharing&sle=true#'), " | 🙏 Pray for China, Pray for the World 🙏 |", " Developed by ",html.A('Jun', href='https://junye0798.com/')," with ❤️"])]) ]) @app.callback( Output('datatable-interact-map', 'figure'), [Input('datatable-interact-location', 'derived_virtual_selected_rows')] ) def update_figures(derived_virtual_selected_rows): # When the table is first rendered, `derived_virtual_data` and # `derived_virtual_selected_rows` will be `None`. This is due to an # idiosyncracy in Dash (unsupplied properties are always None and Dash # calls the dependent callbacks when the component is first rendered). # So, if `rows` is `None`, then the component was just rendered # and its value will be the same as the component's dataframe. # Instead of setting `None` in here, you could also set # `derived_virtual_data=df.to_rows('dict')` when you initialize # the component. 
if derived_virtual_selected_rows is None: derived_virtual_selected_rows = [] dff = dfSum mapbox_access_token = "pk.eyJ1IjoicGxvdGx5bWFwYm94IiwiYSI6ImNqdnBvNDMyaTAxYzkzeW5ubWdpZ2VjbmMifQ.TXcBE-xg9BFdV2ocecc_7g" # Generate a list for hover text display textList=[] for area, region in zip(dfs[keyList[0]]['Province/State'], dfs[keyList[0]]['Country/Region']): if type(area) is str: if region == "Hong Kong" or region == "Macau" or region == "Taiwan": textList.append(area) else: textList.append(area+', '+region) else: textList.append(region) fig2 = go.Figure(go.Scattermapbox( lat=dfs[keyList[0]]['lat'], lon=dfs[keyList[0]]['lon'], mode='markers', marker=go.scattermapbox.Marker( color='#ca261d', size=dfs[keyList[0]]['Confirmed'].tolist(), sizemin=4, sizemode='area', sizeref=2.*max(dfs[keyList[0]]['Confirmed'].tolist())/(150.**2), ), text=textList, hovertext=['Comfirmed: {}<br>Recovered: {}<br>Death: {}'.format(i, j, k) for i, j, k in zip(dfs[keyList[0]]['Confirmed'], dfs[keyList[0]]['Recovered'], dfs[keyList[0]]['Deaths'])], hovertemplate = "<b>%{text}</b><br><br>" + "%{hovertext}<br>" + "<extra></extra>") ) fig2.update_layout( plot_bgcolor='#151920', paper_bgcolor='#cbd2d3', margin=go.layout.Margin(l=10,r=10,b=10,t=0,pad=40), hovermode='closest', transition = {'duration':1000}, mapbox=go.layout.Mapbox( accesstoken=mapbox_access_token, style="light", # The direction you're facing, measured clockwise as an angle from true north on a compass bearing=0, center=go.layout.mapbox.Center( lat=3.684188 if len(derived_virtual_selected_rows)==0 else dff['lat'][derived_virtual_selected_rows[0]], lon=148.374024 if len(derived_virtual_selected_rows)==0 else dff['lon'][derived_virtual_selected_rows[0]] ), pitch=0, zoom=1.2 if len(derived_virtual_selected_rows)==0 else 4 ) ) return fig2 if __name__ == '__main__': app.run_server(port=8882) ```
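The callback above has to cope with Dash passing `None` for `derived_virtual_selected_rows` the first time the table renders. The snippet below is a minimal, self-contained sketch of that guard pattern, separate from the dashboard itself; the component ids, the toy dataframe and the bar chart are all hypothetical and only illustrate the table-selection-to-figure wiring.

```
import dash
import dash_table
import dash_core_components as dcc
import dash_html_components as html
from dash.dependencies import Input, Output
import pandas as pd
import plotly.graph_objects as go

# Toy data, purely for illustration
df = pd.DataFrame({'Country/Regions': ['A', 'B', 'C'], 'Confirmed': [10, 20, 30]})

app = dash.Dash(__name__)

app.layout = html.Div([
    dash_table.DataTable(
        id='demo-table',
        columns=[{'name': c, 'id': c} for c in df.columns],
        data=df.to_dict('records'),
        row_selectable='single'),
    dcc.Graph(id='demo-graph'),
])

@app.callback(
    Output('demo-graph', 'figure'),
    [Input('demo-table', 'derived_virtual_selected_rows')])
def update_graph(derived_virtual_selected_rows):
    # Dash passes None on the very first render, before the user selects anything.
    if derived_virtual_selected_rows is None:
        derived_virtual_selected_rows = []
    # Colour the selected row differently from the rest.
    colors = ['#d7191c' if i in derived_virtual_selected_rows else '#6c6c6c'
              for i in range(len(df))]
    return go.Figure(go.Bar(x=df['Country/Regions'], y=df['Confirmed'],
                            marker=dict(color=colors)))

if __name__ == '__main__':
    app.run_server(port=8883)
```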
# First Steps in AI - Loading a Pre-trained Model

In this unit we will learn how to use a model that someone else has already built. There are already many tools on the market that offer model training as a service; you only need to prepare your data to obtain a model, for example Microsoft's [Custom Vision](https://azure.microsoft.com/zh-tw/services/cognitive-services/custom-vision-service/), Amazon's [Rekognition](https://aws.amazon.com/tw/rekognition/), and Google's [AutoML](https://cloud.google.com/automl/?hl=zh-tw). You simply load the model file and predict on your test data directly. Since there is more than one deep learning framework, this tutorial only covers how to use Keras and Tensorflow model files.

&copy; 2016 Chih-Chang Yu@CYCU MIT License

First, we load the required libraries.

```
import numpy as np
import matplotlib.pyplot as plt
import random
from tensorflow.keras.models import load_model
```

## Using a Keras model

Loading a Keras model is simple: a single call to `load_model()` does the job. You can reuse the model you trained in the previous lesson here. We also provide another model file, *my_vgg_model.h5*, which achieves better accuracy on the [Cifar-10](https://www.cs.toronto.edu/~kriz/cifar.html) dataset; feel free to load it and compare the two.

Note: loading a model sometimes hangs. A likely cause is that another ipynb project is holding the GPU; closing the other processes from the Notebook home page usually lets it run normally.

```
# the filepath depends on where you put your model file.
k_model = load_model('model/my_vgg_model.h5')
```

We first load the class names so the recognition results are easier to interpret.

```
# the filepath depends on where you put your labels file.
with open('model/labels.txt','r') as f:
    labels = (f.read().splitlines())
```

Before using the model, we have to confirm its input and output specifications, so we print the structure of the top (output) layer and the bottom (input) layer:

```
print(k_model.layers[0].output_shape)
print(k_model.layers[-1].output_shape) # -1 means the last layer
```

We can see that the input layer expects 32x32x3 and the output layer has 10 units, so the input data must first be normalised into 32x32x3 images before this model can be used. Here we again use the test data provided by cifar-10.

```
from tensorflow.keras.datasets import cifar10

(_, _),(X_test, y_test) = cifar10.load_data()

# We only normalize test data because we don't have to train a model here.
X_test = X_test/255
```

You can run the following block repeatedly to observe the results.

```
idx = random.randint(1,100)
im = X_test[idx]

k_output = k_model.predict(im.reshape((1,)+im.shape))
k_predict = np.argmax(k_output)

plt.imshow(im)
plt.title('{:.2f}% '.format(k_output[0][k_predict]*100) + labels[k_predict])
plt.axis('off')
plt.show()
```

## Using a Tensorflow model

Because the code for loading a Tensorflow model is more involved, we wrap it in a function called *load_tf_model*, so that later we can simply call `load_tf_model()` to load a model. Interested readers can study the code on their own.

```
import tensorflow as tf

def load_tf_model(model_filename):
    sess = tf.Session()
    with tf.gfile.FastGFile(model_filename,'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def)
    return sess
```

As before, we provide a slightly better model, *my_vgg_model.pb*, for comparison.

```
# call load_tf_model() to load tensorflow model.
sess = load_tf_model('model/my_vgg_model.pb')
```

Tensorflow is a bit more cumbersome: we need the names of the input and output layers before we can run a prediction, so we must first find those names. The block below prints the name of every layer in the model. The names we want carry the `import` prefix; the input layer's name usually contains Placeholder/input, and the output layer's name contains Softmax (for classification problems).

Note: if the model was originally built with keras and then converted to a tensorflow model, the input layer's name may differ slightly, so please pay attention.

```
for op in tf.get_default_graph().get_operations():
    print(str(op.name))
```

After inspecting the output we find that the input layer is named `import/input` (or similar) and the output layer is named `output/Softmax` (or similar). In tensorflow we use the `get_shape()` function to obtain the input/output formats.

```
input_tensor = sess.graph.get_tensor_by_name('import/input:0')
softmax_tensor = sess.graph.get_tensor_by_name('import/predictions/Softmax:0')
print(input_tensor.get_shape())
print(softmax_tensor.get_shape())
```

The following shows how to run a prediction with Tensorflow. The code is very similar to the keras version; the difference is the function used to perform the prediction.
```
# randomly pick an image
idx = random.randint(1,1000)
im = X_test[idx]

# predict
softmax_tensor = sess.graph.get_tensor_by_name('import/predictions/Softmax:0')
tf_output = sess.run(softmax_tensor, {'import/input:0': im.reshape((1,)+im.shape)})
tf_predict = np.argmax(tf_output)

plt.imshow(im)
plt.title('{:.2f}% '.format(tf_output[0][tf_predict]*100) + labels[tf_predict])
plt.axis('off')
plt.show()
```

You can also grab some images from the web to test the model. Python provides the `requests` library for fetching data over the network, so we use it to download test images. Here are a few image links you can practise with:

- Airplane https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcSMxAvTYUR5hSe3Y_V-HB0XmDaG3ZcX-p-CXZaKI-7g-rZH3bCj
- Car https://image.shutterstock.com/image-vector/car-cartoon-sticker-retro-style-260nw-566814880.jpg
- Frog https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcSJErznWjCWv70L2VudttjRkSYBiiUmo0uLgWBBNm96ftOYlnZ1kQ
- Truck https://target.scene7.com/is/image/Target/GUEST_5ffb9be8-fcc2-4728-87e2-cb021932896a
- Bird https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQRPLQBnaubH2Z2b9Bw5U0MbZGReBgqsUXPvsrJt8UwEUdqW8Yk
- Deer https://huntfish.mdc.mo.gov/sites/default/files/styles/species/public/images/species/deer.jpg

Because the model was trained on 32x32 colour images, any downloaded image must first be resized to 32x32 before it can be fed to the model correctly. You can replace the string inside `get()` and run the block below to observe the results.

Note: this part (fetching images from the web) is beyond the scope of this course, so we do not explain it here; interested readers can consult references such as [https://blog.gtwang.org/programming/python-requests-module-tutorial/](https://blog.gtwang.org/programming/python-requests-module-tutorial/).

```
import pandas as pd
from IPython.display import display
from PIL import Image
from io import BytesIO
import requests

# you can get new images by replacing the string in get() function.
response = requests.get("https://target.scene7.com/is/image/Target/GUEST_5ffb9be8-fcc2-4728-87e2-cb021932896a")
if not response.ok:
    print(response)

im_data = response.content
im = Image.open(BytesIO(im_data))

plt.imshow(im)
plt.title('original image')
plt.axis('off')
plt.show()

# The input size of the image should be 32x32x3
# remember to normalize the image, use PIL library to resize it
im = im.resize((32,32))
im_arr = np.array(im)
im_arr = im_arr / 255

# use the keras model to predict
# output = k_model.predict(im_arr.reshape((1,)+im_arr.shape))

# use tensorflow model to predict
softmax_tensor = sess.graph.get_tensor_by_name('import/predictions/Softmax:0')
output = sess.run(softmax_tensor, {'import/input:0': im_arr.reshape((1,)+im_arr.shape)})

# find the largest one
predict = np.argmax(output)

# this part is just for beautiful printed texts
pd.options.display.float_format = '{:.4f}'.format
df = pd.DataFrame(output, columns=labels)
display(df)

predict = np.argmax(output)
plt.imshow(im_arr)
plt.title('{:.2f}% '.format(output[0][predict]*100) + labels[predict])
plt.axis('off')
plt.show()
```

### Summary

In this unit you learned how to load pre-trained models, both Keras and Tensorflow models, including how to:

* load a Keras model and run predictions with it
* load a Tensorflow model, inspect the names of its input and output layers, and determine the expected input format
* fetch some images from the web and have the model classify them
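As a small appendix to this unit, the resize/normalise/predict steps used above can be collected into one helper so they are not repeated in every cell. This is only a sketch under the assumptions of this notebook: it reuses the `sess` and `labels` objects created earlier and the same tensor names; the function name itself is ours.

```
import numpy as np
from PIL import Image

def classify_pil_image(pil_img, sess, labels,
                       input_name='import/input:0',
                       output_name='import/predictions/Softmax:0',
                       size=(32, 32)):
    """Resize a PIL image to the model input size, normalise it to [0, 1],
    run the frozen graph and return (predicted_label, confidence)."""
    arr = np.array(pil_img.resize(size)).astype('float32') / 255.0
    softmax = sess.graph.get_tensor_by_name(output_name)
    probs = sess.run(softmax, {input_name: arr.reshape((1,) + arr.shape)})
    idx = int(np.argmax(probs))
    return labels[idx], float(probs[0][idx])

# Example usage with the `im` downloaded above:
# label, conf = classify_pil_image(im, sess, labels)
# print('{} ({:.2f}%)'.format(label, conf * 100))
```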
```
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import scipy.optimize as spo

def f(X):
    """Given a scalar X, return some value (a real number)"""
    Y = (X - 1.5)**2 + 0.5
    print("X = {0}, Y = {1}".format(X, Y))
    return Y

def test_run():
    Xguess = 2.0
    min_result = spo.minimize(f, Xguess, method="SLSQP", options={"disp": True})
    print("Minimum found at:")
    print("X = {0}, Y = {1}".format(min_result.x, min_result.fun))
    print("Number of iterations: {0}".format(min_result.nit))

    Xplot = np.linspace(0.5, 2.5, 21)
    Yplot = f(Xplot)
    plt.plot(Xplot, Yplot)
    plt.plot(min_result.x, min_result.fun, 'ro')
    plt.title('Minimum of an objective function')
    plt.show()
    plt.close()

if __name__ == "__main__":
    test_run()

def error(line, data):
    """
    Compute error between given line model and observed data

    Parameters:
    line: tuple/list/array (C0, C1) where C0 is slope and C1 is Y-intercept
    data: 2D array where each row is a point (x, y)

    Returns error as a single real value
    """
    err = np.sum((data[:, 1] - (line[0] * data[:, 0] + line[1])) ** 2)
    return err

def error_poly(poly, data):
    """
    Compute error between given polynomial and observed data

    Parameters:
    poly: np.poly1d or equivalent polynomial coefficients
    data: 2D array where each row is a point (x, y)

    Returns error as a single real value
    """
    err = np.sum((data[:, 1] - np.polyval(poly, data[:, 0])) ** 2)
    return err

def run():
    l_orig = np.float32([4, 2])
    print('Original line: C0 = {0}, C1 = {1}'.format(l_orig[0], l_orig[1]))
    Xorig = np.linspace(0, 10, 21)
    Yorig = l_orig[0] * Xorig + l_orig[1]
    plt.plot(Xorig, Yorig, "b--", linewidth=2.0, label="Original line")

    # Generate noisy data points
    noise_sigma = 3.0
    noise = np.random.normal(0, noise_sigma, Yorig.shape)
    data = np.asarray([Xorig, Yorig + noise]).T
    plt.plot(data[:, 0], data[:, 1], 'go', label="Data points")
    plt.show()

def fit_line(data, error_func):
    """
    Fit a line to given data, using a supplied error function

    Parameters:
    data: 2D array where each row is a point (X0, Y)
    error_func: function that computes the error between a line and observed data

    Returns line that minimizes the error function
    """
    # Generate initial guess for line model
    l = np.float32([0, np.mean(data[:, 1])])

    # Plot initial guess (optional)
    x_ends = np.float32([-5, 5])
    plt.plot(x_ends, l[0] * x_ends + l[1], 'm--', linewidth=2.0, label="Initial guess")
    plt.show()

    result = spo.minimize(error_func, l, args=(data, ), method='SLSQP', options={'disp': True})
    return result.x

def fit_poly(data, error_func, degree=3):
    """
    Fit a polynomial to given data, using a supplied error function

    Parameters:
    data: 2D array where each row is a point (X0, Y)
    error_func: function that computes the error between a polynomial and observed data

    Returns polynomial that minimizes the error function
    """
    # Generate initial guess for polynomial model
    guess = np.poly1d(np.ones(degree+1, dtype=np.float32))

    # Plot initial guess (optional)
    x = np.linspace(-5, 5, 21)
    plt.plot(x, np.polyval(guess, x), 'm--', linewidth=2.0, label="Initial guess")
    plt.show()

    result = spo.minimize(error_func, guess, args=(data, ), method='SLSQP', options={'disp': True})
    return np.poly1d(result.x)

if __name__ == "__main__":
    run()
```
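The cell above defines `error()`, `fit_line()` and `fit_poly()` but, in the portion shown, never actually calls the fitters on the noisy data. The sketch below is one possible driver, assuming the definitions above are in scope; the helper name `demo_fit_line` is ours.

```
# A short driver sketch: generate noisy data from a known line, then recover
# the coefficients with fit_line(). Assumes error() and fit_line() from above.
import numpy as np
import matplotlib.pyplot as plt

def demo_fit_line():
    l_orig = np.float32([4, 2])                      # true slope and intercept
    Xorig = np.linspace(0, 10, 21)
    Yorig = l_orig[0] * Xorig + l_orig[1]
    noise = np.random.normal(0, 3.0, Yorig.shape)    # same noise level as run()
    data = np.asarray([Xorig, Yorig + noise]).T

    l_fit = fit_line(data, error)                    # minimise the squared error
    print("Fitted line: C0 = {:.3f}, C1 = {:.3f}".format(l_fit[0], l_fit[1]))

    plt.plot(Xorig, Yorig, 'b--', label='Original line')
    plt.plot(data[:, 0], data[:, 1], 'go', label='Data points')
    plt.plot(data[:, 0], l_fit[0] * data[:, 0] + l_fit[1], 'r-', label='Fitted line')
    plt.legend(loc='upper left')
    plt.show()

# demo_fit_line()
```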
```
%matplotlib inline
%load_ext autoreload
%autoreload 2

import pandas as pd
import numpy as np
import networkx as nx
from networkx.drawing.nx_agraph import graphviz_layout, to_agraph
import pygraphviz as pgv
from IPython.display import Image

def draw(A):
    return Image(A.draw(format='png', prog='dot'))

import sys
from pathlib import Path
home = str(Path.home())
sys.path.insert(0,"%s/rankability_toolbox_dev"%home)
import pyrankability

D2005 = pd.read_csv(home+'/college_football_analysis/data/Big12/2005.csv',header=None)
D2005

pyrankability.plot.D_as_graph(D2005,file='D2005_graph.png')
```

## Hillside BILP

```
k,details = pyrankability.rank.solve(D2005,method='hillside',cont=False)
k
```

### One solution

```
pd.Series(details['P'][0])

pd.DataFrame(details['x'])
```

## Hillside LP

```
k,details = pyrankability.rank.solve(D2005,method='hillside',cont=True)
k

pd.DataFrame(pyrankability.common.threshold_x(details['x']))
```

## Most distant pairs

```
k_two_distant,details_two_distant = pyrankability.search.solve_pair_max_tau(D2005,method='hillside',verbose=False)
details_two_distant['obj']

def calc_tau(n,obj):
    nchoose2 = pyrankability.common.nCr(n,2)
    tau = (nchoose2 - obj)/nchoose2
    return tau

calc_tau(len(D2005),details_two_distant['obj'])

details_two_distant['perm_x']
details_two_distant['perm_y']
list(details_two_distant['perm_x'])

pyrankability.plot.spider(pyrankability.plot.AB_to_P2(1+np.array(details_two_distant['perm_x']),1+np.array(details_two_distant['perm_y'])),file="example_1_max_pair.png",width=3,height=4)

details_two_distant['obj']
```

## LOP

```
k,details = pyrankability.rank.solve(D2005,method='lop',cont=False)
k

pd.Series(details['P'][0])

k_two_distant,details_two_distant = pyrankability.search.solve_pair_max_tau(D2005,method='lop',verbose=False)
details_two_distant['obj']

calc_tau(len(D2005),details_two_distant['obj'])

pyrankability.plot.spider(pyrankability.plot.AB_to_P2(1+np.array(details_two_distant['perm_x']),1+np.array(details_two_distant['perm_y'])),file="example_1_max_pair_lop")
```

### LOP LP

```
k,details = pyrankability.rank.solve(D2005,method='lop',cont=True)
k

pd.DataFrame(pyrankability.common.threshold_x(details['x']))

label = "A"
xstars = {}
indices = {}
details_cont = {}
details_cont['lop'] = details
for method in details_cont.keys():
    details = details_cont[method]
    xstar = pd.DataFrame(details['x'],index=D2005.index,columns=D2005.columns)
    xstars["%s. %s"%(label,method)] = xstar
    indices["%s. %s"%(label,method)] = details['indices']
    label = chr(ord(label)+1)

g,score_df,ordered_xstars = pyrankability.plot.show_score_xstar2(xstars, group_label="Group",width=300,height=300, columns=2,resolve_scale=True)
g
```
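`calc_tau()` above recovers Kendall's tau from the solver objective. As a sanity check, the same statistic can be computed directly from the two orderings with `scipy.stats.kendalltau`; the sketch below assumes `perm_x`/`perm_y` are 0-based orderings of the teams, as returned by `solve_pair_max_tau` above, and converts them to rank vectors first (the tie handling in scipy may differ slightly from the solver's normalisation).

```
# Cross-check sketch: Kendall's tau computed directly from the two orderings.
import numpy as np
from scipy.stats import kendalltau

def tau_from_orderings(perm_x, perm_y):
    perm_x = np.asarray(perm_x)
    perm_y = np.asarray(perm_y)
    ranks_x = np.argsort(perm_x)   # ranks_x[i] = position of item i in ordering x
    ranks_y = np.argsort(perm_y)
    tau, _ = kendalltau(ranks_x, ranks_y)
    return tau

# Example usage with the solution found above:
# tau_from_orderings(details_two_distant['perm_x'], details_two_distant['perm_y'])
```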
# 使用序列到序列模型完成数字加法 **作者:** [jm12138](https://github.com/jm12138) <br> **日期:** 2021.10 <br> **摘要:** 本示例介绍如何使用飞桨完成一个数字加法任务,将会使用飞桨提供的`LSTM`,组建一个序列到序列模型,并在随机生成的数据集上完成数字加法任务的模型训练与预测。 ## 一、环境配置 本教程基于Paddle 2.2.0-rc0 编写,如果你的环境不是本版本,请先参考官网[安装](https://www.paddlepaddle.org.cn/install/quick) Paddle 2.2.0-rc0 。 ``` # 导入项目运行所需的包 import paddle import paddle.nn as nn import random import numpy as np from visualdl import LogWriter # 打印Paddle版本 print('paddle version: %s' % paddle.__version__) ``` ## 二、构建数据集 * 随机生成数据,并使用生成的数据构造数据集 * 通过继承 ``paddle.io.Dataset`` 来完成数据集的构造 ``` # 编码函数 def encoder(text, LEN, label_dict): # 文本转ID ids = [label_dict[word] for word in text] # 对长度进行补齐 ids += [label_dict[' ']]*(LEN-len(ids)) return ids # 单个数据生成函数 def make_data(inputs, labels, DIGITS, label_dict): MAXLEN = DIGITS + 1 + DIGITS # 对输入输出文本进行ID编码 inputs = encoder(inputs, MAXLEN, label_dict) labels = encoder(labels, DIGITS + 1, label_dict) return inputs, labels # 批量数据生成函数 def gen_datas(DATA_NUM, MAX_NUM, DIGITS, label_dict): datas = [] while len(datas)<DATA_NUM: # 随机取两个数 a = random.randint(0,MAX_NUM) b = random.randint(0,MAX_NUM) # 生成输入文本 inputs = '%d+%d' % (a, b) # 生成输出文本 labels = str(eval(inputs)) # 生成单个数据 inputs, labels = [np.array(_).astype('int64') for _ in make_data(inputs, labels, DIGITS, label_dict)] datas.append([inputs, labels]) return datas # 继承paddle.io.Dataset来构造数据集 class Addition_Dataset(paddle.io.Dataset): # 重写数据集初始化函数 def __init__(self, datas): super(Addition_Dataset, self).__init__() self.datas = datas # 重写生成样本的函数 def __getitem__(self, index): data, label = [paddle.to_tensor(_) for _ in self.datas[index]] return data, label # 重写返回数据集大小的函数 def __len__(self): return len(self.datas) print('generating datas..') # 定义字符表 label_dict = { '0': 0, '1': 1, '2': 2, '3': 3, '4': 4, '5': 5, '6': 6, '7': 7, '8': 8, '9': 9, '+': 10, ' ': 11 } # 输入数字最大位数 DIGITS = 2 # 数据数量 train_num = 5000 dev_num = 500 # 数据批大小 batch_size = 32 # 读取线程数 num_workers = 8 # 定义一些所需变量 MAXLEN = DIGITS + 1 + DIGITS MAX_NUM = 10**(DIGITS)-1 # 生成数据 train_datas = gen_datas( train_num, MAX_NUM, DIGITS, label_dict ) dev_datas = gen_datas( dev_num, MAX_NUM, DIGITS, label_dict ) # 实例化数据集 train_dataset = Addition_Dataset(train_datas) dev_dataset = Addition_Dataset(dev_datas) print('making the dataset...') # 实例化数据读取器 train_reader = paddle.io.DataLoader( train_dataset, batch_size=batch_size, shuffle=True, drop_last=True ) dev_reader = paddle.io.DataLoader( dev_dataset, batch_size=batch_size, shuffle=False, drop_last=True ) print('finish') ``` ## 三、模型组网 * 通过继承 ``paddle.nn.Layer`` 类来搭建模型 * 本次介绍的模型是一个简单的基于 ``LSTM`` 的 ``Seq2Seq`` 模型 * 一共有如下四个主要的网络层: 1. 嵌入层(``Embedding``):将输入的文本序列转为嵌入向量 2. 编码层(``LSTM``):将嵌入向量进行编码 3. 解码层(``LSTM``):将编码向量进行解码 4. 
全连接层(``Linear``):对解码完成的向量进行线性映射 * 损失函数为交叉熵损失函数 ``` # 继承paddle.nn.Layer类 class Addition_Model(nn.Layer): # 重写初始化函数 # 参数:字符表长度、嵌入层大小、隐藏层大小、解码器层数、处理数字的最大位数 def __init__(self, char_len=12, embedding_size=128, hidden_size=128, num_layers=1, DIGITS=2): super(Addition_Model, self).__init__() # 初始化变量 self.DIGITS = DIGITS self.MAXLEN = DIGITS + 1 + DIGITS self.hidden_size = hidden_size self.char_len = char_len # 嵌入层 self.emb = nn.Embedding( char_len, embedding_size ) # 编码器 self.encoder = nn.LSTM( input_size=embedding_size, hidden_size=hidden_size, num_layers=1 ) # 解码器 self.decoder = nn.LSTM( input_size=hidden_size, hidden_size=hidden_size, num_layers=num_layers ) # 全连接层 self.fc = nn.Linear( hidden_size, char_len ) # 重写模型前向计算函数 # 参数:输入[None, MAXLEN]、标签[None, DIGITS + 1] def forward(self, inputs, labels=None): # 嵌入层 out = self.emb(inputs) # 编码器 out, (_, _) = self.encoder(out) # 按时间步切分编码器输出 out = paddle.split(out, self.MAXLEN, axis=1) # 取最后一个时间步的输出并复制 DIGITS + 1 次 out = paddle.expand(out[-1], [out[-1].shape[0], self.DIGITS + 1, self.hidden_size]) # 解码器 out, (_, _) = self.decoder(out) # 全连接 out = self.fc(out) # 如果标签存在,则计算其损失和准确率 if labels is not None: # 计算交叉熵损失 loss = nn.functional.cross_entropy(out, labels) # 计算准确率 acc = paddle.metric.accuracy(paddle.reshape(out, [-1, self.char_len]), paddle.reshape(labels, [-1, 1])) # 返回损失和准确率 return loss, acc # 返回输出 return out ``` ## 四、模型训练与评估 * 使用 ``Adam`` 作为优化器进行模型训练 * 以模型准确率作为评价指标 * 使用 ``VisualDL`` 对训练数据进行可视化 * 训练过程中会同时进行模型评估和最佳模型的保存 ``` # 初始化log写入器 log_writer = LogWriter(logdir="./log") # 模型参数设置 embedding_size = 128 hidden_size=128 num_layers=1 # 训练参数设置 epoch_num = 50 learning_rate = 0.001 log_iter = 2000 eval_iter = 500 # 定义一些所需变量 global_step = 0 log_step = 0 max_acc = 0 # 实例化模型 model = Addition_Model( char_len=len(label_dict), embedding_size=embedding_size, hidden_size=hidden_size, num_layers=num_layers, DIGITS=DIGITS) # 将模型设置为训练模式 model.train() # 设置优化器,学习率,并且把模型参数给优化器 opt = paddle.optimizer.Adam( learning_rate=learning_rate, parameters=model.parameters() ) # 启动训练,循环epoch_num个轮次 for epoch in range(epoch_num): # 遍历数据集读取数据 for batch_id, data in enumerate(train_reader()): # 读取数据 inputs, labels = data # 模型前向计算 loss, acc = model(inputs, labels=labels) # 打印训练数据 if global_step%log_iter==0: print('train epoch:%d step: %d loss:%f acc:%f' % (epoch, global_step, loss.numpy(), acc.numpy())) log_writer.add_scalar(tag="train/loss", step=log_step, value=loss.numpy()) log_writer.add_scalar(tag="train/acc", step=log_step, value=acc.numpy()) log_step+=1 # 模型验证 if global_step%eval_iter==0: model.eval() losses = [] accs = [] for data in dev_reader(): loss_eval, acc_eval = model(inputs, labels=labels) losses.append(loss_eval.numpy()) accs.append(acc_eval.numpy()) avg_loss = np.concatenate(losses).mean() avg_acc = np.concatenate(accs).mean() print('eval epoch:%d step: %d loss:%f acc:%f' % (epoch, global_step, avg_loss, avg_acc)) log_writer.add_scalar(tag="dev/loss", step=log_step, value=avg_loss) log_writer.add_scalar(tag="dev/acc", step=log_step, value=avg_acc) # 保存最佳模型 if avg_acc>max_acc: max_acc = avg_acc print('saving the best_model...') paddle.save(model.state_dict(), 'best_model') model.train() # 反向传播 loss.backward() # 使用优化器进行参数优化 opt.step() # 清除梯度 opt.clear_grad() # 全局步数加一 global_step += 1 # 保存最终模型 paddle.save(model.state_dict(),'final_model') ``` ## 五、模型测试 * 使用保存的最佳模型进行测试 ``` # 反转字符表 label_dict_adv = {v: k for k, v in label_dict.items()} # 输入计算题目 input_text = '12+40' # 编码输入为ID inputs = encoder(input_text, MAXLEN, label_dict) # 转换输入为向量形式 inputs = np.array(inputs).reshape(-1, 
MAXLEN) inputs = paddle.to_tensor(inputs) # 加载模型 params_dict= paddle.load('best_model') model.set_dict(params_dict) # 设置为评估模式 model.eval() # 模型推理 out = model(inputs) # 结果转换 result = ''.join([label_dict_adv[_] for _ in np.argmax(out.numpy(), -1).reshape(-1)]) # 打印结果 print('the model answer: %s=%s' % (input_text, result)) print('the true answer: %s=%s' % (input_text, eval(input_text))) ``` ## 六、总结 * 你还可以通过变换网络结构,调整数据集,尝试不同的参数的方式来进一步提升本示例当中的数字加法的效果 * 同时,也可以尝试在其他的类似的任务中用飞桨来完成实际的实践
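As a quick worked example of the `encoder` helper defined in the notebook above (this check is an addition; the dictionary and the function body are copied from the notebook so the snippet runs on its own): with `DIGITS = 2` the padded input length is `MAXLEN = 5`, so a four-character expression gets one trailing pad id.

```
label_dict = {'0': 0, '1': 1, '2': 2, '3': 3, '4': 4, '5': 5,
              '6': 6, '7': 7, '8': 8, '9': 9, '+': 10, ' ': 11}

def encoder(text, LEN, label_dict):
    # Same logic as the notebook: map characters to ids, then pad with ' '
    ids = [label_dict[word] for word in text]
    ids += [label_dict[' ']] * (LEN - len(ids))
    return ids

print(encoder('12+4', 5, label_dict))  # [1, 2, 10, 4, 11] -- the trailing 11 is the pad token ' '
```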
```
from loguru import logger
import sys

logger.remove()
logger.add(
    sys.stdout,
    level="DEBUG",
    colorize=True,
    format="<green>{time:YYYY-MM-DD HH:mm:ss}</green> <level>{message}</level>"
)
```

# Insertion Sort

```
def _insert(int_list, i):
    # Bubble the element at position i towards the front while it is
    # smaller than its left neighbour
    if int_list[i] < int_list[i-1]:
        logger.debug("value in position {} < value in position {}, swap their values".format(i, i - 1))
        int_list[i-1], int_list[i] = int_list[i], int_list[i-1]
        if i >= 2:
            _insert(int_list, i - 1)

def insertion_sort(int_list):
    int_list = int_list.copy()
    if len(int_list) >= 2:
        for i in range(1, len(int_list)):
            _insert(int_list, i)
    return int_list

assert insertion_sort([]) == []
assert insertion_sort([1]) == [1]
assert insertion_sort([3, 1, 2, 4, 5]) == [1,2,3,4,5]
assert insertion_sort([1,2,3,4,5]) == [1,2,3,4,5]
assert insertion_sort([5,4,3,2,1]) == [1,2,3,4,5]
assert insertion_sort([1,1,2,2]) == [1,1,2,2]
```

# Heap Sort

```
def _heapify(data_list, n, i):
    # Sift the value at index i down so that the subtree rooted at i satisfies
    # the max-heap property; n is the last valid index of the heap
    logger.debug('='*10)
    logger.debug('n:{}'.format(n))
    logger.debug('i:{}'.format(i))
    logger.debug('{}'.format(data_list))
    root_index = i
    left_index = 2 * i + 1
    right_index = 2 * i + 2
    largest_index = root_index
    if left_index <= n and data_list[left_index] > data_list[largest_index]:
        largest_index = left_index
    if right_index <= n and data_list[right_index] > data_list[largest_index]:
        largest_index = right_index
    logger.debug('largest index:{}'.format(largest_index))
    if largest_index != root_index:
        data_list[largest_index], data_list[root_index] = data_list[root_index], data_list[largest_index]
        _heapify(data_list, n, largest_index)

def heap_sort(data_list):
    data_list = data_list.copy()
    n = len(data_list)
    if n <= 1:
        return data_list
    # Build a max heap by sifting down every internal node
    for i in range(n // 2 - 1, -1, -1):
        _heapify(data_list, n - 1, i)
    logger.debug('='*20)
    logger.debug('Max heap built')
    # Repeatedly move the current maximum to the end and restore the heap
    # on the remaining prefix
    for i in range(n - 1, 0, -1):
        data_list[0], data_list[i] = data_list[i], data_list[0]
        _heapify(data_list, i - 1, 0)
    return data_list

assert heap_sort([]) == []
assert heap_sort([1]) == [1]
assert heap_sort([3, 1, 2, 4, 5]) == [1,2,3,4,5]
assert heap_sort([1,2,3,4,5]) == [1,2,3,4,5]
assert heap_sort([5,4,3,2,1]) == [1,2,3,4,5]
assert heap_sort([1,1,2,2]) == [1,1,2,2]
```

# QuickSort

```
def _quick_sort(data_list, low_index, high_index):
    logger.debug('Before partitioning:{}'.format(data_list))
    logger.debug('low_index:{}, high_index:{}'.format(low_index, high_index))
    if (high_index - low_index) <= 0:
        pass
    elif (high_index - low_index) == 1:
        if data_list[high_index] <= data_list[low_index]:
            data_list[high_index], data_list[low_index] = data_list[low_index], data_list[high_index]
    else:
        # Lomuto partition with the last element as the pivot
        pivot_value = data_list[high_index]
        i = low_index - 1
        for j in range(low_index, high_index):
            if data_list[j] < pivot_value:
                i += 1
                data_list[i], data_list[j] = data_list[j], data_list[i]
        # Place the pivot just after the last element smaller than it
        data_list[high_index], data_list[i + 1] = data_list[i + 1], data_list[high_index]
        logger.debug('After partitioning:{}, i:{}, j:{}'.format(data_list, i, j))
        logger.debug('='*20)
        # Recurse on the parts to the left and right of the pivot
        _quick_sort(data_list, low_index, i)
        _quick_sort(data_list, i + 2, high_index)
    return data_list

def quick_sort(data_list):
    data_list = data_list.copy()
    _quick_sort(data_list, 0, len(data_list) - 1)
    return data_list

assert quick_sort([]) == []
assert quick_sort([1]) == [1]
assert quick_sort([3, 1, 2, 4, 5]) == [1,2,3,4,5]
assert quick_sort([1,2,3,4,5]) == [1,2,3,4,5]
assert quick_sort([5,4,3,2,1]) == [1,2,3,4,5]
assert quick_sort([1,1,2,2]) == [1,1,2,2]
```

# Merge Sort

```
def _merge(a, b):
    # Merge two already sorted lists into one sorted list
    merged = []
    i = 0
    j = 0
    for k in range(len(a) + len(b)):
        if (j < len(b)) and (i < len(a)):
            if a[i] <= b[j]:
                merged.append(a[i])
                i += 1
            else:
                merged.append(b[j])
                j += 1
        else:
            if j < len(b):
                merged += b[j:]
                j = len(b)
            else:
                merged += a[i:]
                i = len(a)
    return merged

def merge_sort(data_list):
    data_list = data_list.copy()
    if len(data_list) <= 1:
        return data_list
    mid_index = len(data_list) // 2
    left_part = merge_sort(data_list[:mid_index])
    right_part = merge_sort(data_list[mid_index:])
    return _merge(left_part, right_part)

assert merge_sort([]) == []
assert merge_sort([1]) == [1]
assert merge_sort([3, 1, 2, 4, 5]) == [1,2,3,4,5]
assert merge_sort([1,2,3,4,5]) == [1,2,3,4,5]
assert merge_sort([5,4,3,2,1]) == [1,2,3,4,5]
assert merge_sort([1,1,2,2]) == [1,1,2,2]
```
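Beyond the handful of hand-written asserts, a randomized comparison against Python's built-in `sorted` exercises many more inputs. This check is an addition to the notebook and assumes the four sort functions above are in scope; the DEBUG sink configured at the top would make its output very noisy, so it is silenced first.

```
import random

logger.remove()  # drop the DEBUG sink so the check does not flood the output

def check_sorters(trials=200, max_len=30, max_val=50):
    # Compare every sorter above against sorted() on random integer lists
    sorters = [insertion_sort, heap_sort, quick_sort, merge_sort]
    for _ in range(trials):
        data = [random.randint(0, max_val) for _ in range(random.randint(0, max_len))]
        expected = sorted(data)
        for sorter in sorters:
            assert sorter(data) == expected, (sorter.__name__, data)
    return True

print(check_sorters())
```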
``` !pip install eli5 import pandas as pd import numpy as np from sklearn.tree import DecisionTreeRegressor from sklearn.ensemble import RandomForestRegressor from sklearn.metrics import mean_absolute_error from sklearn.model_selection import cross_val_score import eli5 from eli5.sklearn import PermutationImportance from ast import literal_eval from tqdm import tqdm_notebook cd "/content/drive/My Drive/Colab Notebooks/dw_matrix" df = pd.read_csv("data/men_shoes.csv", low_memory=False) df.columns def run_model(feats,model=DecisionTreeRegressor(max_depth=5)): X = df[feats].values y = df['prices_amountmin'].values scores = cross_val_score(model,X,y,scoring= 'neg_mean_absolute_error') return np.mean(scores), np.std(scores) df['brand_cat'] = df['brand'].factorize()[0] print("DecisionTreeRegressor model outcome: ", run_model(['brand_cat'])) model = RandomForestRegressor(max_depth=5, n_estimators=100, random_state=0) print("RandomForestRegressor model outcome: ", run_model(['brand_cat'],model)) df['brand_cat2'] = df['brand'].map(lambda x:str(x).lower()).factorize()[0] print("DecisionTreeRegressor model outcome: ", run_model(['brand_cat2'])) print("RandomForestRegressor model outcome: ", run_model(['brand_cat2'],model)) df.features.head().values # widzimy, ze features to slownik zapisany jako str, musimy wrocic do formy slownika aby uzywac tej kolumny # ponizej funkcja literal_eval ktora nam ten process ulatwi str_dict = '[{"key":"Gender","value":["Men"]},{"key":"Shoe Size","value":["M"]},{"key":"Shoe Category","value":["Men\'s Shoes"]},{"key":"Color","value":["Multicolor"]},{"key":"Manufacturer Part Number","value":["8190-W-NAVY-7.5"]},{"key":"Brand","value":["Josmo"]}]' literal_eval(str_dict) # chcemy je miec w takiej formie jak ponizej { 'Gender': 'Men', 'Shoe Size': 'M', 'Shoe Category': "Men's shoes", 'Color': 'Multicolor', 'Manufacturer Part Number': '8190-W-NAVY-7.5', 'Brand': 'Josmo' } def parse_features(x): output_dict = {} if str(x) == 'nan': return output_dict features = literal_eval(x.replace('\\"','"')) for item in features: # theat's how item look right now # {'key': 'Gender', 'value': ['Men']} key = item['key'].lower().strip() value = item['value'][0].lower().strip() output_dict[key] = value return output_dict df['features_parsed'] = df['features'].map(parse_features) df['features_parsed'].head().values keys = set() df['features_parsed'].map(lambda x: keys.update(x.keys())) len(keys) def get_name_feat(key): return "feat_" + key for key in tqdm_notebook(keys): df[get_name_feat(key)] = df.features_parsed.map(lambda feats: feats[key] if key in feats else np.nan) df.columns keys_stats = {} for key in keys: keys_stats[key] = df [ False == df[get_name_feat(key)].isnull()].shape[0] / df.shape[0] * 100 {k:v for k,v in keys_stats.items() if v > 30} df['feat_brand_cat'] = df['feat_brand'].factorize()[0] df['feat_color_cat'] = df['feat_color'].factorize()[0] df['feat_gender_cat'] = df['feat_brand'].factorize()[0] df['feat_manufacturer part number_cat'] = df['feat_manufacturer part number'].factorize()[0] df['feat_material_cat'] = df['feat_material'].factorize()[0] df['feat_sport_cat'] = df['feat_sport'].factorize()[0] df['feat_style_cat'] = df['feat_style'].factorize()[0] for key in keys: df[get_name_feat(key)+'_cat'] = df[get_name_feat(key)].factorize()[0] df['brand'] = df['brand'].map(lambda x: str(x).lower()) df [ df.brand == df.feat_brand].shape model = RandomForestRegressor(max_depth=5, n_estimators=100) run_model(['brand_cat2'],model) feats_cat = [x for x in df.columns if 'cat' in x] 
#feats_cat feats = [ 'brand_cat2', 'feat_brand_cat', 'feat_gender_cat', 'feat_material_cat', 'feat_movement_cat', 'feat_adjustable_cat', 'feat_resizable_cat', 'feat_fabric content_cat', 'feat_case thickness_cat', 'weight_converted_cat'] #feats += feats_cat #feats = list(set(feats)) model = RandomForestRegressor(max_depth=5, n_estimators=100) result = run_model(feats,model) X = df[feats].values y = df['prices_amountmin'].values m = RandomForestRegressor(max_depth=5, n_estimators=100, random_state=0) m.fit(X,y) print(result) perm = PermutationImportance(m, random_state=1).fit(X,y); eli5.show_weights(perm, feature_names=feats) df['brand'].value_counts(normalize=True) df[df['brand'] == 'nike'].features_parsed.sample(5).values #df['weight'].unique() df['weight_string'] = df['weight'].astype(str) def convert_to_grams(weight): if 'nan' in weight: return '0' elif 'g' in weight: return weight[0:-2] elif 'lbs' in weight: x = float(weight[0:-4]) * 453.592 return str(x) elif 'pounds' in weight: x = float(weight[0:-7]) * 453.592 return str(x) elif 'ounces' in weight: x = float(weight[0:-7])* 28.35 return str(x) elif 'Kg' in weight: x - float(weight[0:-3]) * 1000 return str(x) df['weight_converted'] = df['weight_string'].map(convert_to_grams) df['weight_converted_cat'] = df['weight_converted'].factorize()[0] df['weight_converted_cat'] def addGitcommit(filepath, message): !git add /filepath/ !git config --global user.email '[email protected]' !git config --global user.name 'SirMatix' !git commit -m message !git push origin master ls addGitcommit('day5.ipynb',"Trying out commit function") ```
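The notebook leans heavily on the `Series.factorize()[0]` pattern to turn string columns into integer codes. A tiny standalone example (an addition, with made-up values) shows what those codes look like, including the `-1` sentinel that missing values receive, which matters here because many of the `feat_*` columns are mostly NaN:

```
import pandas as pd

codes, uniques = pd.Series(['nike', 'adidas', 'nike', None]).factorize()
print(codes)    # [ 0  1  0 -1] -- missing values get the sentinel code -1
print(uniques)  # Index(['nike', 'adidas'], dtype='object')
```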
# Evaluation of feature selection results ## Importing some packages ``` import os.path import numpy as np import pandas as pd from scipy.stats import ttest_ind, ttest_rel import matplotlib.pyplot as plt from matplotlib.transforms import Affine2D from statsmodels.stats.contingency_tables import mcnemar from config import * ``` ## Loading the results ``` results= pd.read_csv('feature_selection_rankings.csv') runtimes= pd.read_csv('feature_selection_runtimes.csv') ``` ## The analysis ``` results results.columns multiindex= pd.MultiIndex.from_tuples([('', 'database'), ('MI', 3), ('MI', 7), ('MI', 11), ('MI', 21), ('MI', 31), ('EQW', 2), ('EQF', 2), ('kmeans', 2), ('DA', 2), ('EQW', 5), ('EQF', 5), ('kmeans', 5), ('DA', 5), ('EQW', 'square-root'), ('EQF', 'square-root'), ('kmeans', 'square-root'), ('DA', 'square-root'), ('EQW', 'Struges-form.'), ('EQF', 'Struges-form.'), ('kmeans', 'Struges-form.'), ('DA', 'Struges-form.'), ('EQW', 'Rice-rule'), ('EQF', 'Rice-rule'), ('kmeans', 'Rice-rule'), ('DA', 'Rice-rule')]) results.columns=multiindex results= results[[('', 'database'), ('MI', 3), ('MI', 7), ('MI', 11), ('MI', 21), ('MI', 31), ('EQW', 2), ('EQW', 5), ('EQW', 'square-root'), ('EQW', 'Struges-form.'),('EQW', 'Rice-rule'), ('EQF', 2), ('EQF', 5), ('EQF', 'square-root'), ('EQF', 'Struges-form.'), ('EQF', 'Rice-rule'), ('kmeans', 2), ('kmeans', 5), ('kmeans', 'square-root'), ('kmeans', 'Struges-form.'), ('kmeans', 'Rice-rule'), ('DA', 2), ('DA', 5), ('DA', 'square-root'), ('DA', 'Struges-form.'), ('DA', 'Rice-rule')]] results for c in results.columns[1:]: results[c]= results[c].apply(lambda x: np.round(x, 1)) results tmp= results.mean().reset_index(drop=False) tmp plt.figure(figsize=(7, 3)) tmp1= tmp[tmp['level_0'] == 'EQW'] tmp1= tmp1.iloc[[0, 1, 3, 4, 2]] plt.plot(np.arange(len(tmp1)), tmp1[0], label='nEV EQW binning', linestyle='-', linewidth=2.0) #tmp1= tmp[tmp['level_0'] == 'EQF'] #plt.plot(np.arange(len(tmp1)), tmp1[0], label='nEV EQF binning', linestyle='-', linewidth=2.0) tmp1= tmp[tmp['level_0'] == 'kmeans'] tmp1= tmp1.iloc[[0, 1, 3, 4, 2]] plt.plot(np.arange(len(tmp1)), tmp1[0], label='nEV kmeans binning', linestyle='dashed', linewidth=2.0) tmp1= tmp[tmp['level_0'] == 'DA'] tmp1= tmp1.iloc[[0, 1, 3, 4, 2]] plt.plot(np.arange(len(tmp1)), tmp1[0], label='nEV DA binning', linestyle='dotted', linewidth=2.0) tmp1= tmp[(tmp['level_0'] == 'MI') & (tmp['level_1'] == 7)] plt.plot(np.arange(5), np.repeat(tmp1[0], 5), label='nMI (7 neighb.)', linestyle='dashed', linewidth=2.0, color='black') plt.legend(loc='center left', bbox_to_anchor=(1.0, 0.5)) plt.xlabel('number of bins') plt.ylabel('average rank') plt.title('Average ranks in feature selection') plt.xticks(np.arange(5), ['2', '5', 'Struges-form.', 'Rice-rule', 'Square-root']) plt.tight_layout() plt.savefig('fs_results.pdf') results from scipy.stats import ranksums, wilcoxon eqw= results['EQW']['square-root'].values.flatten() da= results['DA']['square-root'].values.flatten() kmeans= results['kmeans']['square-root'].values.flatten() eqw= results['EQW'][['Struges-form.', 'Rice-rule', 'square-root']].values.flatten() da= results['DA'][['Struges-form.', 'Rice-rule', 'square-root']].values.flatten() kmeans= results['kmeans'][['Struges-form.', 'Rice-rule', 'square-root']].values.flatten() eqw= results['EQW'][['Rice-rule', 'square-root']].values.flatten() da= results['DA'][['Rice-rule', 'square-root']].values.flatten() kmeans= results['kmeans'][['Rice-rule', 'square-root']].values.flatten() nMI= results['MI'][7].values.flatten() da_sq= 
results['DA'][['square-root']].values.flatten() kmeans_sq= results['kmeans'][['square-root']].values.flatten() eqw.mean(), da.mean(), kmeans.mean(), nMI.mean() wilcoxon(eqw, da) wilcoxon(eqw, kmeans) wilcoxon(nMI, da_sq) wilcoxon(nMI, kmeans_sq) ```
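The `wilcoxon` calls above perform a paired signed-rank test: each entry of the two arrays must come from the same database, and the test is run on the per-database differences. A tiny illustration with made-up rank vectors follows (the values are arbitrary; only the call pattern mirrors the notebook):

```
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical average ranks of two methods on the same ten databases
method_a = np.array([1.5, 2.0, 1.0, 3.0, 2.5, 1.5, 2.0, 3.5, 2.0, 1.0])
method_b = np.array([2.5, 2.5, 2.0, 2.5, 3.0, 2.0, 3.0, 3.0, 2.5, 2.0])

stat, p_value = wilcoxon(method_a, method_b)
print(stat, p_value)  # a small p-value suggests a systematic rank difference
```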
# Use Spark to predict product line with `ibm-watson-machine-learning` This notebook contains steps and code to get data from the IBM Data Science Experience Community, create a predictive model, and start scoring new data. It introduces commands for getting data and for basic data cleaning and exploration, pipeline creation, model training, model persistance to Watson Machine Learning repository, model deployment, and scoring. Some familiarity with Python is helpful. This notebook uses Python 3.6 and Apache® Spark 2.4. You will use a publicly available data set, **GoSales Transactions**, which details anonymous outdoor equipment purchases. Use the details of this data set to predict clients' interests in terms of product line, such as golf accessories, camping equipment, and others. ## Learning goals The learning goals of this notebook are: - Load a CSV file into an Apache® Spark DataFrame. - Explore data. - Prepare data for training and evaluation. - Create an Apache® Spark machine learning pipeline. - Train and evaluate a model. - Persist a pipeline and model in Watson Machine Learning repository. - Deploy a model for online scoring using Wastson Machine Learning API. - Score sample scoring data using the Watson Machine Learning API. - Explore and visualize prediction result using the plotly package. ## Contents This notebook contains the following parts: 1. [Setup](#setup) 2. [Load and explore data](#load) 3. [Create spark ml model](#model) 4. [Persist model](#persistence) 5. [Predict locally](#visualization) 6. [Deploy and score](#scoring) 7. [Clean up](#cleanup) 8. [Summary and next steps](#summary) <a id="setup"></a> ## 1. Set up the environment Before you use the sample code in this notebook, you must perform the following setup tasks: - Contact with your Cloud Pack for Data administrator and ask him for your account credentials ### Connection to WML Authenticate the Watson Machine Learning service on IBM Cloud Pack for Data. You need to provide platform `url`, your `username` and `password`. ``` username = 'PASTE YOUR USERNAME HERE' password = 'PASTE YOUR PASSWORD HERE' url = 'PASTE THE PLATFORM URL HERE' wml_credentials = { "username": username, "password": password, "url": url, "instance_id": 'openshift', "version": '3.5' } ``` ### Install and import the `ibm-watson-machine-learning` package **Note:** `ibm-watson-machine-learning` documentation can be found <a href="http://ibm-wml-api-pyclient.mybluemix.net/" target="_blank" rel="noopener no referrer">here</a>. ``` !pip install -U ibm-watson-machine-learning from ibm_watson_machine_learning import APIClient client = APIClient(wml_credentials) ``` ### Working with spaces First of all, you need to create a space that will be used for your work. If you do not have space already created, you can use `{PLATFORM_URL}/ml-runtime/spaces?context=icp4data` to create one. - Click New Deployment Space - Create an empty space - Go to space `Settings` tab - Copy `space_id` and paste it below **Tip**: You can also use SDK to prepare the space for your work. More information can be found [here](https://github.com/IBM/watson-machine-learning-samples/blob/master/cpd3.5/notebooks/python_sdk/instance-management/Space%20management.ipynb). **Action**: Assign space ID below ``` space_id = 'PASTE YOUR SPACE ID HERE' ``` You can use `list` method to print all existing spaces. ``` client.spaces.list(limit=10) ``` To be able to interact with all resources available in Watson Machine Learning, you need to set **space** which you will be using. 
``` client.set.default_space(space_id) ``` <a id="load"></a> ## 2. Load and explore data In this section you will load the data as an Apache® Spark DataFrame and perform a basic exploration. Load the data to the Spark DataFrame by using *wget* to upload the data to gpfs and then *read* method. ### Test Spark ``` try: from pyspark.sql import SparkSession except: print('Error: Spark runtime is missing. If you are using Watson Studio change the notebook runtime to Spark.') raise ``` The csv file GoSales_Tx.csv is availble on the same repository where this notebook is located. Load the file to Apache® Spark DataFrame using code below. ``` import os from wget import download sample_dir = 'spark_sample_model' if not os.path.isdir(sample_dir): os.mkdir(sample_dir) filename = os.path.join(sample_dir, 'GoSales_Tx.csv') if not os.path.isfile(filename): filename = download('https://github.com/IBM/watson-machine-learning-samples/raw/master/cpd3.5/data/product-line-prediction/GoSales_Tx.csv', out=sample_dir) spark = SparkSession.builder.getOrCreate() df_data = spark.read\ .format('org.apache.spark.sql.execution.datasources.csv.CSVFileFormat')\ .option('header', 'true')\ .option('inferSchema', 'true')\ .load(filename) df_data.take(3) ``` Explore the loaded data by using the following Apache® Spark DataFrame methods: - print schema - print top ten records - count all records ``` df_data.printSchema() ``` As you can see, the data contains five fields. PRODUCT_LINE field is the one we would like to predict (label). ``` df_data.show() df_data.count() ``` As you can see, the data set contains 60252 records. <a id="model"></a> ## 3. Create an Apache® Spark machine learning model In this section you will learn how to prepare data, create an Apache® Spark machine learning pipeline, and train a model. ### 3.1: Prepare data In this subsection you will split your data into: train, test and predict datasets. ``` splitted_data = df_data.randomSplit([0.8, 0.18, 0.02], 24) train_data = splitted_data[0] test_data = splitted_data[1] predict_data = splitted_data[2] print("Number of training records: " + str(train_data.count())) print("Number of testing records : " + str(test_data.count())) print("Number of prediction records : " + str(predict_data.count())) ``` As you can see our data has been successfully split into three datasets: - The train data set, which is the largest group, is used for training. - The test data set will be used for model evaluation and is used to test the assumptions of the model. - The predict data set will be used for prediction. ### 3.2: Create pipeline and train a model In this section you will create an Apache® Spark machine learning pipeline and then train the model. In the first step you need to import the Apache® Spark machine learning packages that will be needed in the subsequent steps. ``` from pyspark.ml.feature import OneHotEncoder, StringIndexer, IndexToString, VectorAssembler from pyspark.ml.classification import RandomForestClassifier from pyspark.ml.evaluation import MulticlassClassificationEvaluator from pyspark.ml import Pipeline, Model ``` In the following step, convert all the string fields to numeric ones by using the StringIndexer transformer. 
``` stringIndexer_label = StringIndexer(inputCol="PRODUCT_LINE", outputCol="label").fit(df_data) stringIndexer_prof = StringIndexer(inputCol="PROFESSION", outputCol="PROFESSION_IX") stringIndexer_gend = StringIndexer(inputCol="GENDER", outputCol="GENDER_IX") stringIndexer_mar = StringIndexer(inputCol="MARITAL_STATUS", outputCol="MARITAL_STATUS_IX") ``` In the following step, create a feature vector by combining all features together. ``` vectorAssembler_features = VectorAssembler(inputCols=["GENDER_IX", "AGE", "MARITAL_STATUS_IX", "PROFESSION_IX"], outputCol="features") ``` Next, define estimators you want to use for classification. Random Forest is used in the following example. ``` rf = RandomForestClassifier(labelCol="label", featuresCol="features") ``` Finally, indexed labels back to original labels. ``` labelConverter = IndexToString(inputCol="prediction", outputCol="predictedLabel", labels=stringIndexer_label.labels) ``` Let's build the pipeline now. A pipeline consists of transformers and an estimator. ``` pipeline_rf = Pipeline(stages=[stringIndexer_label, stringIndexer_prof, stringIndexer_gend, stringIndexer_mar, vectorAssembler_features, rf, labelConverter]) ``` Now, you can train your Random Forest model by using the previously defined **pipeline** and **train data**. ``` train_data.printSchema() model_rf = pipeline_rf.fit(train_data) ``` You can check your **model accuracy** now. To evaluate the model, use **test data**. ``` predictions = model_rf.transform(test_data) evaluatorRF = MulticlassClassificationEvaluator(labelCol="label", predictionCol="prediction", metricName="accuracy") accuracy = evaluatorRF.evaluate(predictions) print("Accuracy = %g" % accuracy) print("Test Error = %g" % (1.0 - accuracy)) ``` You can tune your model now to achieve better accuracy. For simplicity of this example tuning section is omitted. <a id="persistence"></a> ## 4. Persist model In this section you will learn how to store your pipeline and model in Watson Machine Learning repository by using python client libraries. **Note**: Apache® Spark 2.4 is required. ### 4.1: Save pipeline and model In this subsection you will learn how to save pipeline and model artifacts to your Watson Machine Learning instance. ``` saved_model = client.repository.store_model( model=model_rf, meta_props={ client.repository.ModelMetaNames.NAME:'Product Line model', client.repository.ModelMetaNames.SPACE_UID: space_id, client.repository.ModelMetaNames.TYPE: "mllib_2.4", client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: client.software_specifications.get_id_by_name('spark-mllib_2.4'), client.repository.ModelMetaNames.LABEL_FIELD: "PRODUCT_LINE", }, training_data=train_data, pipeline=pipeline_rf) ``` Get saved model metadata from Watson Machine Learning. ``` published_model_id = client.repository.get_model_uid(saved_model) print("Model Id: " + str(published_model_id)) ``` **Model Id** can be used to retrive latest model version from Watson Machine Learning instance. Below you can see stored model details. ``` client.repository.get_model_details(published_model_id) ``` ### 4.2: Load model In this subsection you will learn how to load back saved model from specified instance of Watson Machine Learning. ``` loaded_model = client.repository.load(published_model_id) print(type(loaded_model)) ``` As you can see the name is correct. You have already learned how save and load the model from Watson Machine Learning repository. <a id="visualization"></a> ## 5. 
Predict locally In this section you will learn how to score test data using loaded model and visualize the prediction results with plotly package. ### 5.1: Make local prediction using previously loaded model and test data In this subsection you will score *predict_data* data set. ``` predictions = loaded_model.transform(predict_data) ``` Preview the results by calling the *show()* method on the predictions DataFrame. ``` predictions.show(5) ``` By tabulating a count, you can see which product line is the most popular. ``` predictions.select("predictedLabel").groupBy("predictedLabel").count().show() ``` <a id="scoring"></a> ## 6. Deploy and score In this section you will learn how to create online scoring and to score a new data record using `ibm-watson-machine-learning`. **Note:** You can also use REST API to deploy and score. For more information about REST APIs, see the [Swagger Documentation](https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html#/Deployments/deployments_create). ### 6.1: Create online scoring endpoint Now you can create an online scoring endpoint. #### Create online deployment for published model ``` deployment_details = client.deployments.create( published_model_id, meta_props={ client.deployments.ConfigurationMetaNames.NAME: "Product Line model deployment", client.deployments.ConfigurationMetaNames.ONLINE: {} } ) deployment_details ``` Now, you can use above scoring url to make requests from your external application. <a id="cleanup"></a> ## 7. Clean up If you want to clean up all created assets: - experiments - trainings - pipelines - model definitions - models - functions - deployments please follow up this sample [notebook](https://github.com/IBM/watson-machine-learning-samples/blob/master/cpd3.5/notebooks/python_sdk/instance-management/Machine%20Learning%20artifacts%20management.ipynb). <a id="summary"></a> ## 8. Summary and next steps You successfully completed this notebook! You learned how to use Apache Spark machine learning as well as Watson Machine Learning for model creation and deployment. Check out our [Online Documentation](https://dataplatform.cloudibm.com/docs/content/analyze-data/wml-setup.html) for more samples, tutorials, documentation, how-tos, and blog posts. ### Authors **Amadeusz Masny**, Python Software Developer in Watson Machine Learning at IBM Copyright © 2020 IBM. This notebook and its source code are released under the terms of the MIT License.
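Section 6.1 above creates the online deployment but stops before sending a scoring request. A sketch of how such a request could look with the same Python client is shown below; the exact helper names (`get_uid`, `score`) and the example feature values are assumptions based on the client library's usual scoring samples rather than something shown in this notebook, so adjust them to your environment.

```
# Hypothetical scoring call against the deployment created above
deployment_uid = client.deployments.get_uid(deployment_details)  # assumed helper

scoring_payload = {
    "input_data": [{
        "fields": ["GENDER", "AGE", "MARITAL_STATUS", "PROFESSION"],
        "values": [["M", 23, "Single", "Student"], ["F", 55, "Single", "Executive"]]  # made-up records
    }]
}

predictions = client.deployments.score(deployment_uid, scoring_payload)  # assumed helper
print(predictions)
```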
github_jupyter
## The implementation of the article "Learning Classifiers from Only Positive and Unlabeled Data" Made by Nurlanov Zhakshylyk, 2020 ``` import pandas as pd import numpy as np from sklearn import linear_model as lm import seaborn as sns sns.set(style="white") import matplotlib.pyplot as plt np.random.seed(47) pos_size = 1000 neg_size = 2000 validation_percent = 20 test_percent = 30 pos_val_size = pos_size * validation_percent // 100 neg_val_size = neg_size * validation_percent // 100 pos_test_size = pos_size * test_percent // 100 neg_test_size = neg_size * test_percent // 100 labeled_percent = 20 ``` ## Generate synthetic data ### Add extra features to allow the decision boundary to be nonlinear. So, feature vector is $$ [x_1, x_2, x_1 x_2, x_1^2, x_2^2] $$ The feature $ x_1 x_2 $ was especially added to eliminate the restriction (which is in the article) allowing ellipses that are only parallel to the axis. ``` def generate_add_features(X): # return np.hstack([X, (X[:, 0]**2).reshape(-1, 1), (X[:, 1]**2).reshape(-1, 1)]) return np.hstack([X, (X[:, 0]*X[:, 1]).reshape(-1, 1), (X[:, 0]**2).reshape(-1, 1), (X[:, 1]**2).reshape(-1, 1)]) positive_mean = np.array([1.0, 5.0]) positive_cov = np.array([[ 1.0, 0.6], [ 0.6, 2.0]]) X_pos = np.random.multivariate_normal(positive_mean, positive_cov, pos_size) X_pos = generate_add_features(X_pos) negative_mean = np.array([-1.0, -2.0]) negative_cov = np.array([[ 3.0, 0.8], [ 0.8, 4.0]]) X_neg = np.random.multivariate_normal(negative_mean, negative_cov, neg_size) X_neg = generate_add_features(X_neg) f, ax = plt.subplots(figsize=(8, 6)) ax.scatter(X_pos[:,0], X_pos[:, 1], color='g', s=50, cmap="RdBu", vmin=-.2, vmax=1.2, edgecolor="white", linewidth=1) ax.scatter(X_neg[:,0], X_neg[:, 1], color='r', s=50, cmap="RdBu", vmin=-.2, vmax=1.2, edgecolor="white", linewidth=1) ax.set(xlabel="$X_1$", ylabel="$X_2$") ``` ## Split data to train/test/val ``` np.random.shuffle(X_pos) np.random.shuffle(X_neg) X_test_pos, X_val_pos = X_pos[: pos_test_size], X_pos[pos_test_size: pos_test_size+pos_val_size] X_train_pos = X_pos[pos_test_size+pos_val_size: ] X_test_neg, X_val_neg = X_neg[: neg_test_size], X_neg[neg_test_size: neg_test_size+neg_val_size] X_train_neg = X_neg[neg_test_size+neg_val_size: ] print(X_train_pos.shape, X_train_neg.shape) ``` ## Split training data to labeled and unlabeled ``` ## randomness of sampling np.random.shuffle(X_train_pos) np.random.shuffle(X_train_neg) labeled_size = X_train_pos.shape[0] * labeled_percent // 100 X_labeled = X_train_pos[: labeled_size] X_unlabeled = np.vstack([X_train_pos[labeled_size :], X_train_neg]) X_lab_size = X_labeled.shape[0] X_unlab_size = X_unlabeled.shape[0] print(X_labeled.shape, X_unlabeled.shape) ``` _______________ # Method 1 ## Lerning traditional classifier from nontraditional input ## Approximate $g(x)$ using Logistic Regression as a parametric model: $$ g_{\theta}(x) \approx \mathbb{P}(s=1|x) $$ ``` y_lab = np.ones((X_lab_size, 1)) y_unlab = np.zeros((X_unlab_size, 1)) y = np.vstack([y_lab, y_unlab]) X = np.vstack([X_labeled, X_unlabeled]) X_y = np.hstack([X, y]) np.random.shuffle(X_y) g_x = lm.LogisticRegression(solver='lbfgs') g_x.fit(X_y[:, :-1], X_y[:, -1]) ``` ## Approximate sampling constant $c$ on validation set $V$ $$c = \mathbb{P}(s=1|y=1)$$ by $$e_1 = \frac{1}{n} \sum_{x \in P}g(x) \approx c,$$ where $P$ is subset of positive elements in $V$, and $n = |P|$ ``` print(X_val_pos.shape) val_size_to_approx = 30 P = X_val_pos[:val_size_to_approx] sumP = sum(g_x.predict_proba(P)[:, 1]) e_1 = sumP / 
val_size_to_approx print(f"real c = {labeled_percent}, estimated e_1 = {e_1*100:.2f} on {val_size_to_approx} objects") ``` $$ e_2 = \frac{\sum_{x \in P}g(x)}{\sum_{x \in V}g(x)} $$ ``` rest_pos_size = 30 * 80 // 20 restP = X_val_pos[val_size_to_approx: val_size_to_approx+rest_pos_size] N = X_val_neg[:(val_size_to_approx+rest_pos_size)*2] e_2 = sumP / (sumP + sum(g_x.predict_proba(N)[:, 1]) + sum(g_x.predict_proba(restP)[:, 1])) print(f"real c = {labeled_percent}, estimated e_2 = {e_2*100:.2f} on {3*(val_size_to_approx+rest_pos_size)} objects") ``` $$ e_3 = \max_{x \in V}g(x) $$ ``` e_3 = max(np.max(g_x.predict_proba(X_val_pos[: val_size_to_approx+rest_pos_size])[:, 1]), np.max(g_x.predict_proba(N)[:, 1])) print(f"real c = {labeled_percent}, estimated e_3 = {e_3*100:.2f} on {3*(val_size_to_approx+rest_pos_size)} objects") ``` ### Conclusion: The first and second approximations of constant $c$ are acceptable, third one is not stable. ## Apply using formula: $$ \mathbb{P}(y=1|x) = f(x) = g(x)/c $$ ``` prob_pos = g_x.predict_proba(X_test_pos)[:, 1]/e_1 prob_neg = g_x.predict_proba(X_test_neg)[:, 1]/e_1 print("Accuracy = ", (np.sum(prob_pos >= 0.5) + np.sum(prob_neg < 0.5))/(prob_pos.shape[0] + prob_neg.shape[0])) ``` ## Visualize decision boundary ``` xx, yy = np.mgrid[-6:5:.01, -10:11:.01] grid = np.c_[xx.ravel(), yy.ravel()] grid = generate_add_features(grid) probs = g_x.predict_proba(grid)[:, 1].reshape(xx.shape) / e_1 f, ax = plt.subplots(figsize=(8, 6)) ax.contour(xx, yy, probs, levels=[.5], cmap="Greys", vmin=0, vmax=.6) ax.scatter(X_pos[:,0], X_pos[:, 1], color='g', s=50, cmap="RdBu", vmin=-.2, vmax=1.2, edgecolor="white", linewidth=1) ax.scatter(X_neg[:,0], X_neg[:, 1], color='r', s=50, cmap="RdBu", vmin=-.2, vmax=1.2, edgecolor="white", linewidth=1) ax.set(xlabel="$X_1$", ylabel="$X_2$") ``` ## Conclusion: We got acceptable results using only positive labeled and unlabeled data. It is noteworthy that we labeled only 20% of all positive data, and used only 30 positive examples to estimate this percentage. ______________ # Method 2: ## Weighting unlabeled data using $g(x)$ from previous method ## Calculate weights of unlabeled data using formula: $$ w(x) = \mathbb{P}(y=1|x, s=0) = \frac{1-c}{c} \cdot \frac{\mathbb{P}(s=1|x)}{1 - \mathbb{P}(s=1|x)} = \frac{1-c}{c} \cdot \frac{g(x)}{1-g(x)} $$ ``` g_unlab = g_x.predict_proba(X_unlabeled)[:, 1] weights_unlab = (1-e_1)/(e_1)*(g_unlab)/(1-g_unlab) ``` ## Train traditional classifier with weighted unlabeled data. Make one copy of unlabeled data positive, i.e. $y=1$, with weights $w(x)$, make second copy of unlabeled data negative, i.e. $y=0$, with weights $1-w(x)$. 
``` X_lab_size = X_labeled.shape[0] X_unlab_size = X_unlabeled.shape[0] # array of answers y_lab = np.ones((X_lab_size, 1)) y_unlab_pos = np.ones((X_unlab_size, 1)) y_unlab_neg = np.zeros((X_unlab_size, 1)) y = np.vstack([y_lab, y_unlab_pos, y_unlab_neg]) # array of weights w_lab = np.ones((X_lab_size, 1)) w_unlab_pos = weights_unlab.reshape(-1, 1) w_unlab_neg = 1 - weights_unlab.reshape(-1, 1) w = np.vstack([w_lab, w_unlab_pos, w_unlab_neg]) # array of data and all together X = np.vstack([X_labeled, X_unlabeled, X_unlabeled]) X_y = np.hstack([X, y, w]) np.random.shuffle(X_y) f_x_weighted = lm.LogisticRegression(solver='lbfgs') f_x_weighted.fit(X_y[:, :-2], X_y[:, -2], sample_weight=X_y[:, -1]) ``` ## Apply directly to test set ``` all_tp = np.sum(f_x_weighted.predict(X_test_pos)) + np.sum(f_x_weighted.predict(X_test_neg) == 0) print("Test accuracy = ", all_tp / (X_test_pos.shape[0] + X_test_neg.shape[0])) ``` ## Visualize decision boundary ``` xx, yy = np.mgrid[-6:5:.01, -10:11:.01] grid = np.c_[xx.ravel(), yy.ravel()] grid = generate_add_features(grid) probs = f_x_weighted.predict_proba(grid)[:, 1].reshape(xx.shape) f, ax = plt.subplots(figsize=(8, 6)) ax.contour(xx, yy, probs, levels=[.5], cmap="Greys", vmin=0, vmax=.6) ax.scatter(X_pos[:,0], X_pos[:, 1], color='g', s=50, cmap="RdBu", vmin=-.2, vmax=1.2, edgecolor="white", linewidth=1) ax.scatter(X_neg[:,0], X_neg[:, 1], color='r', s=50, cmap="RdBu", vmin=-.2, vmax=1.2, edgecolor="white", linewidth=1) ax.set(xlabel="$X_1$", ylabel="$X_2$") ``` ## Conclusion We again obtained acceptable results using only positive (20%) labeled and unlabeled data. Interestingly, method #2 pays more attention to negative data trying to separate them from positive examples, and method #1, on the contrary, tried to separate positive data from negative ones. It is likely that for the one-class classification task the first method will be preferable.
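To make the comparison in the two conclusions concrete, the added snippet below evaluates both approaches on the same test set. It only reuses objects defined earlier in this notebook (`g_x`, `e_1`, `f_x_weighted`, `X_test_pos`, `X_test_neg`); the helper function names are introduced here purely for illustration.

```
import numpy as np

def accuracy_method1(g, c, X_pos, X_neg):
    """Accuracy of Method 1: threshold f(x) = g(x)/c at 0.5."""
    prob_pos = g.predict_proba(X_pos)[:, 1] / c
    prob_neg = g.predict_proba(X_neg)[:, 1] / c
    return (np.sum(prob_pos >= 0.5) + np.sum(prob_neg < 0.5)) / (X_pos.shape[0] + X_neg.shape[0])

def accuracy_method2(clf, X_pos, X_neg):
    """Accuracy of Method 2: the classifier trained on weighted copies of the unlabeled data."""
    correct = np.sum(clf.predict(X_pos) == 1) + np.sum(clf.predict(X_neg) == 0)
    return correct / (X_pos.shape[0] + X_neg.shape[0])

print("Method 1 accuracy:", accuracy_method1(g_x, e_1, X_test_pos, X_test_neg))
print("Method 2 accuracy:", accuracy_method2(f_x_weighted, X_test_pos, X_test_neg))
```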
# Homework 4 and 5 - Solution Key ``` import quandl #to get econ data import pandas as pd #to deal with dataframes import matplotlib.pyplot as plt #to plot ``` ### Problem 1 - Superify 1 (2 points) For this problem create a list of 5 strings (e.g. a list consisting of words like men, women, cool etc.). Create a for loop that will iterate over the elements of the list. If the current element has more than 4 characters, then print that element after adding the prefix super and a space: e.g. superwomen. For all other cases (less than or equal to 4 characters) print that element adding the prefix super without any space: e.g. supercool. ``` my_list = ["star","natural","man","bowl","sonic"] for i in my_list: if len(i)>4: print("super" + " " + i) else: print("super" + i) ``` ### Problem 2 - Stockplotter 1 (2 points) Create a list of 5 stocks (e.g. ["AAPL","GOOGL","IBM","MSFT","FB"]. Use Quandl to download the data on those stocks from the WIKI database (i.e. "WIKI/FB") and plot the opening price for all 5 of them inside one single plot. ``` stocks = ["AAPL","GOOGL","IBM","MSFT","FB"] for i in stocks: stock_code = "Wiki/"+i data = quandl.get(stock_code) data.Open.plot() plt.show() ``` ### Problem 3 - Stockplotter 2 (1 point) Define a function that will get only one argument: stock ticker (name). Once the argument is given the function must download that stock's data from Quandl (WIKI database) and plot it. ``` def stockplotter(stock_name): stock_code = "Wiki/"+stock_name data = quandl.get(stock_code) data.Open.plot() plt.show() stockplotter("TSLA") ``` ### Problem 4 - Listpop (2 points) While append() is a function used to add an element to the list, pop() is a function (again available only for lists) that is used to delete an element. For example, if we have a list my_list = ['a','b','c','d'], then my_list.pop(2) will delete the 2nd (in Python terminology) element of the list: 'c'. Similarly, my_list.pop(-1) will delete the very last element of it: 'd'. The remaining list after those two operations will be my_list = ['a','b'] Create a while loop, that will delete elements from the stock list in Problem 2, as long as the list is not empty (i.e. the while loop must stop once the stock_list becomes empty). Note: make sure not to create an infinite loop. ``` print("Before: ",my_list) while len(my_list)>0: my_list.pop(-1) print("After: ",my_list) ``` ### Problem 5 - Numlength 2 (4 points) Create a list of positive integers with different digits (some one digit, some two digits, some three etc.). For each element in this list check whether that integer has one, two or three (or four, if you have it in your list) digits. The one digit elements to be written into a new list called one_digit, two digit elements into a list called two_digits and so on. ``` one_digit = [] two_digit = [] three_or_more = [] my_digits = [1,10,100] for i in my_digits: if i <10: one_digit.append(i) elif i<100: two_digit.append(i) else: three_or_more.append(i) ``` ### Problem 6 - Superify 2 (2 points) Solve problem 1 with a while loop. ``` my_list = ["star","natural","man","bowl","sonic"] i=0 while i<len(my_list): current_string = my_list[i] if len(current_string)>4: print("super" + " " + current_string) else: print("super" + current_string) i = i + 1 ``` ### Problem 7 - Descriptive Analytics 1 (4 points) Use Quandl to get data on Youth Unemployment in Armenia (Code: FRED/SLUEM1524ZSARM). 
Based on your data: - Plot the unemployment trend over years, - Plot the unemployment relative change over years, - What was the unemployment rate on 2001? - When did Armenia observe the highest rate of unemployment? - What are the mean, mode and median of unemployment rates? ``` unemp_data = quandl.get("FRED/SLUEM1524ZSARM") #plot the trend unemp_data.plot() plt.show() print("---------------------------------------------------") #plot the relative change change = unemp_data.pct_change() change.plot() plt.show() print("---------------------------------------------------") #rate on 2001 was 21.724001 print(unemp_data["2001"]) print("---------------------------------------------------") #highest rate - 2009 condition = unemp_data.Value==unemp_data.Value.max() print(unemp_data[condition]) print("---------------------------------------------------") #mean, mode and median print("Mean is ", unemp_data.mean()) print("---------------------------------------------------") print("Mode is ", unemp_data.mode()) print("---------------------------------------------------") print("Median is ", unemp_data.median()) ``` ### Problem 8 - Descriptive Analytics 2 (3 points) Use Quandl to get data on monthly median listing property price in Armenia oer sq. foot (Code: ZILLOW/C25499_MLPFAH). Based on your data: - Calculate how many times the median price has been in the open range (160,170), - Plot the histogram of prices with only 15 bins, some specified color and add title for the graph as well as x and y axis, - Define a function that will return High, Medium or Low when the price>=170, 170>price>160, 160>=price respectively, - Use apply function to create a new column in the dataframe with price classifications, - Use pivot tables in pandas to calculate standard deviation of prices in each class. ``` prop_data = quandl.get("ZILLOW/C25499_MLPFAH") #prop_data = quandl.get("ZILLOW/C25499_MLPFAH") #median in range (160,170) condition1 = prop_data<170 condition2 = prop_data>160 cropped_data = prop_data[condition1 & condition2] print(len(cropped_data)) #plot the histogram prop_data.hist(bins=15,color="palegreen") plt.title("Histogram of montly median propoerty prices, RA") plt.xlabel("Price") plt.ylabel("Frequency") plt.show() #classifier function def classifier(x): if x<=160: result = "Low" elif x>=170: result="High" else: result = "Medium" return result #apply function prop_data["Classes"] = prop_data.Value.apply(classifier) print(prop_data.head(3)) #pivot_table my_pivot = pd.pivot_table(prop_data,index="Classes",values="Value",aggfunc="std") print(my_pivot.head(3)) ```
# Math is Fun - BIOSTAT 823 Assignment #1

This post contains my solutions to 3 questions taken from the [Euler Project](https://projecteuler.net/archives):

1. How many reversible numbers are there below one-billion? (ID: 145, solved by 16438 people)
2. Permuted multiples. (ID: 52, solved by 65547 people)
3. Summation of primes. (ID: 10, solved by 330347 people)

The post can be read on [Pu's Blog for Biostat823](https://puzeng.github.io/BIOSTAT823_Blog_PuZeng/). It is auto-converted by Fastpages from a Jupyter notebook kept in [Pu's BIOSTAT823 repo](https://github.com/puzeng/BIOSTAT823_Blog_PuZeng) under the _notebooks folder.

## 1. How many reversible numbers are there below one-billion?

"Some positive integers n have the property that the sum \[ n + reverse(n) \] consists entirely of odd (decimal) digits. For instance, 36 + 63 = 99 and 409 + 904 = 1313. We will call such numbers reversible; so 36, 63, 409, and 904 are reversible. Leading zeroes are not allowed in either n or reverse(n). There are 120 reversible numbers below one-thousand. How many reversible numbers are there below one-billion ($10^9$)?"

This question is posted on the [Euler Project](https://projecteuler.net/problem=145) and has been solved by 16435 people so far. I'm going to walk you through the analytical process I took to solve it. The question can be solved by brute force, that is, by checking for each number whether the sum of the number and its reverse has only odd digits. The brute force approach is shown below.

### Brute Force Approach

First, we need a helper function (is_reversible(input_number)) to decide whether a number satisfies the property, i.e. whether the sum of the number and its reverse consists only of odd digits. Within the helper function, a number that is divisible by 10 is immediately disqualified from being reversible. Then we generate the sum of the number and its reverse: to reverse the number, we convert it to a string, reverse the string with \[::-1\], and convert it back to an integer. Once we have the sum, we check its digits one by one. If a digit is even, the number is disqualified; if it is odd, we keep checking the next digit. A reversible number makes it through the whole while loop, and the function returns True on its last line.

We then use this helper inside a function that counts the reversible numbers in a user-defined range. The count function generates the list of numbers in that range, loops over the list with a for loop, checks each number with the helper function, and increases the count by 1 for every reversible number it finds. Finally, the count function returns the count, which tells us how many reversible numbers lie within the defined range. 
``` def is_reversible(input_number): """Helper function for checking whether the numeber is reversible or not.""" if input_number % 10 == 0: return False reversed_number = int(str(input_number)[::-1]) sum_n_reverse = reversed_number + input_number while sum_n_reverse > 0: if (sum_n_reverse % 10) % 2 == 0: return False sum_n_reverse //= 10 return True def count_reversible_numbers(input_numbers): """Count the number of reversible numbers below one billion.""" input_list = range(input_numbers) #target_list = list() count = 0 for number in input_list: if is_reversible(number): #target_list.append(number) count += 1 #return(target_list) return count count_reversible_numbers(1000) ``` But since we are dealing with a range like one billion numbers, the brute force method is very insufficient in computation. Therefore, we can analyze the question by finding a pattern in the base cases. We can approach this question by analyzing the different scenarios causing by the addition between digits. I will give detailed explanation about how this approach works in the following. ### Analytical Approach ### range(10^1) In the range of numbers are in 1 digits, those number are all disqualified to be reversible since the addition of the numbers between 1-9 to itself is an even number. Therefore, we can't find any reversible number when the numbers are in 1 digit. There is no solution. ### range(10^2) When the numbers are in 2 digits, we can use ab to represent the number in 2 digits. Then, the sum of itself and the reversed number is represented by: a+b_b+a. To meet the property, a+b must be an odd number and cannot have a carryover. In the other words, a+b < 10 and a+b is an odd number. There are 20 pairs of a and b to meet this requirement. ### range(10^3) When the numbers are in 3 digits, we us abc to represent. Then, the sum of itself and the reversed number = a+c_b+b_c+a. We can refer c+a as the outer pair and b+b as the inner pair. Since the inner pair is the addition to itself. Like we analyzed in the 1 digit scenario, the addition to itself will always produce an even number. Therefore, the middle pair needs a 1 from the carryover of the pair c+a to become an odd number. In addition, it implies that the middle pair must not have a carryover otherwise the carryover will cause the pair a+c become an even number based on the fact that a+c needs to be an odd number. Thus, we restrict those solutions to meet the following requirements: 1. b+b < 10 and b+b is an even number. 2. 20 > a+c > 10 and a+c is an odd number. Therefore, there are 5 * 20 = 100 solutions. ### range(10^4) We can use abcd to represent numbers in 4 digits. Then, the sum can be written as: a+d_b+c_c+b_d+a. We are referring a+d as the outer pair and b+c as the inner pair. This is like the scenario that we analyzed in the 2 digits case that d+a must not have a carryover and has to be an odd number and so does the pair, c+b. However, since the pair c and b is in the middle, c+b can be 0. Thus, the solutions need to meet those requirements: 1. a+d < 10 and a+b is an odd number. 2. c+b < 10 and c+b is an odd number and can be 0. Therefore, there are 20*30 = 600 solutions. ### range(10^5) Numbers in 5 digits can be represented as abcde. Then, the sum = a+e_b+d_c+c_d+b_e+a. Since c+c is the addition to itself, it will generate the even number. It also implied that c+c will borrow the 1 from the carryover of d+b. And this also tells us that a+e will take the 1 from the carryover of b+d to become an odd number. 
Thus, we have the follow restrictions: 1. c+c is an even number. 2. b+d > 10 and is an odd number. 3. a+e is an odd number. However, there is no solution since the a+e needs to be an odd number and needs to take 1 from the carryover of b+d. ### range(10^6) Numbers in 6 digits can be represented as abcdef. The sum = a+f_b+e_c+d_d+c_e+b_f+a. This scenario is also pretty much like the 2 digits case where: 1. c+d < 10 and c+d can be an odd number including 0. 2. b+e <10 and has to be an odd number including 0. 3. a+f <10 and has to be an odd number excluding 0. Therefore, we have 20*30^2 solutions. ### range(10^7) Numbers in 7 digits can be represented as abcdefg. The sum = a+g_b+f_c+e_d+d_e+c_f+b_g+a. The solutions must meet the following restrictions: 1. d+d < 10 and must be an even number. 2. c+e > 10 and must be an odd number including 0. 3. f+b < 10 and must be an odd number including 0. 4. a+g < 10 and must be an odd number excluding 0. Thus, there are 5 * 20 * 20 * 25 = 100 * 500 solutions. ### range(10^8) Numbers in 8 digits can be represented as abcdefgh. The sum = a+h_b+g_c+f_d+e_e+d_f+c_g+b_h+a. The solutions must meet the following restrictions like the case in 2 or 4 or 6 digits: 1. a+h < 10 and must be an odd number excluding 0. 2. b+g, c+f, and d+e < 10 and must be an odd number including 0. Thus, there are 20 * 30^3 solutions. ### range(10^9) Numbers in 9 digits can be represented as abcdefghi. The sum = a+i_b+h_c+g_d+f_e+e_f+d_g+c_h+b_i+a. This case is pretty much similar like the case for 5 digits that we cannot find solutions. The reason why is that e+e needs the 1 from the carryover of d+f since the addition of e itself will only generate the even number. However, it also means that c+g, h+b are both >10 and they are even numbers. Thus, since a+i has to be an odd number, a+i will be changed to even due to the carryover from b+h. Thus, there is no solution. ### Analytical Approach: Summary Since we have done with analyzing those base cases from 1 digit to 9 digits, we can generalize the solutions based on the pattern. For the number of digits in 1, 5, and 9, solutions = 0. For number of digits in 2, 4, 6, and 8, solutions = 20 * 30^n where n = # of digits / 2 - 1. For number of digits in 3 and 7, solutions = 100 * 500^n where n = (# of digits -3) / 4. Based on the pattern, we can transform it into a function. First, we need to figure out what those cases in that number range by finding the maximum numbers of digits that this numer range can hold. For that, we just need to take the log of the input number of base 10. And then, we evaluate the digit number based on the pattern that we found: 1. If the digit number can be divided by 2, then the count = 20 * 30^(digits/2-1). 2. If the digit number can be expressed as 4*i+3 for i = 0,1,2,3,..., then the count = 100 * 500^\[(digits-3)/4\]. 3. If the digit number failed to meet the above two requirements, then there is no solution. Below is the function for counting the number of reversible numbers within a defined number range based on analytical approach: ``` import math def count_reversible_nums(num_range): """Count the number of the reversible numbers within a input-range.""" pow_number = int(math.log10(num_range)) count = 0 for power in range(2,pow_number + 1): if power % 2 == 0: count += 20 * math.pow(30 ,power / 2 - 1) if (power - 3) % 4 == 0: count += 100 * math.pow(500, (power - 3)/4) return int(count) count_reversible_nums(1000000000) ``` # 2. 
Permuted Multiples "It can be seen that the number, 125874, and its double, 251748, contain exactly the same digits, but in a different order. Find the smallest positive integer, x, such that 2x, 3x, 4x, 5x, and 6x, contain the same digits." This is a question posted on the [Euler Project](https://projecteuler.net/problem=52) which was solved by 65547 people so far (ID: 52). I'm going to walk you through the process that I took for solving the problem. We can start with number 2 and move up by 1 until the smallest positive integer that satisfies the requirement is found. To check the requirement, we need to generate several numbers that are corresponding to 2-6 times of that positive integer. And then, a helper function will check whether the positive integer and the corresponding multiple of that number contain the same digits or not. Within the helper function, the checking process will be accomplished by comparing the two sorted numbers after converting them into strings. Once we confirm that those multiples of that positive integer have the same digits as the positive integer itself, we will stop the increments and return to that current number. Below is showing how the solution is coded into a function. ``` def same_digits(num1, num2): """Helper function to check whether two numbers contain same digits.""" if sorted(str(num1)) == sorted(str(num2)): return True return False def permuted_multiple(): """Find the smallest positive integer x, such that 2x, 3x, 4x, 5x, and 6x, contain the same digits.""" found = False num = 2 while not found: two_times = num * 2 three_times = num * 3 four_times = num * 4 five_times = num * 5 six_times = num * 6 if (same_digits(num, two_times) and same_digits(num, three_times) and same_digits(num, four_times) and same_digits(num, five_times) and same_digits(num, six_times) ): found = True return num num += 1 return num permuted_multiple() ``` ## 3. Summation of Primes "The sum of the primes below 10 is 2 + 3 + 5 + 7 = 17. Find the sum of all the primes below two million." This is a question posted on the [Euler Project](https://projecteuler.net/problem=10) which was solved by 330347 people so far (ID: 10). To approach this question, we can start with creating a list that contains every positive integers below the number range and then looping through the list to find all the primes. Since we are looping through the number list, we can exclude some integers from the list prior to the looping process to save some energy. As we know that the prime is a number that can only be divided by 1 or itself, therefore any even number greater than 2 will automatically excluded from being a prime number since even numbers can be divided by 2. Thus, we can remove those even numbers greater than 2 from the number list. In addition, any number ends in 5 can also be removed from the list since those numbers can be divided by 5. Then, we will send the cleaned number list into a for loop to loop through the remainning numbers in the list and sum up all the primes in the list. While we are looping through the number list, we will need a helper function to check whether the current number is a prime or not. If we find the prime, we will add that number to the sum. Within the helper function, we are trying to check whether the input number is a prime or not. This is accomplished by dividing the factors of the input number into two halves. These two halves of factors are mirroring with each other if the number is not prime. We can use number 64 as the illustration. 
64 can be obtained from the multiplication of the following pairs:

1. 1 * 64
2. 2 * 32
3. 4 * 16
4. 8 * 8
5. 16 * 4
6. 32 * 2
7. 64 * 1

We can see that the square root of the number is the mirror line that splits the factors into two halves. You may wonder what happens when the square root is not an integer; in that case we use the integer part of the square root as the mirror line. Thus, it is enough to check the candidate factors up to that mirror number and see whether the input number is divisible by any of them. As soon as we find a factor other than 1 and the number itself, we return False to indicate that the number is not a prime, and the summing function moves on to the next number in the list. Below is how the solution is coded into functions:

```
import math

def is_prime(num):
    """Helper function to check whether the number is a prime or not."""
    for i in range(2, int(math.sqrt(num)) + 1):
        if num % i == 0:
            return False
    return True

def sum_primes_below_num_range(num_range):
    """Sum all the primes below the input number range."""
    # keep only the odd numbers from 3 up to the limit (even numbers above 2 cannot be prime)
    list_nums = range(3, num_range, 2)
    # remove the numbers above 5 that end in 5 (they are divisible by 5)
    removed_nums = range(15, num_range, 10)
    list_nums = list(set(list_nums) - set(removed_nums))
    sum = 2
    for ele in list_nums:
        if is_prime(ele):
            sum += ele
    return sum

sum_primes_below_num_range(2000000)
```
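As a quick sanity check before trusting the two-million result, we can test these helpers against the small example given in the problem statement (the primes below 10 sum to 17). This snippet is an added check, reusing `is_prime` and `sum_primes_below_num_range` defined above:

```
# The problem statement says the primes below 10 are 2, 3, 5 and 7, which sum to 17.
assert sum_primes_below_num_range(10) == 17

# Spot-check the primality helper on a few known primes and composites.
assert all(is_prime(p) for p in [3, 5, 7, 11, 13])
assert not any(is_prime(c) for c in [9, 15, 21, 25, 49])

print("Sanity checks passed.")
```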
<a href="https://colab.research.google.com/github/krakowiakpawel9/data-science-bootcamp/blob/master/06_uczenie_maszynowe/07_k_najblizszych_sasiadow.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> * @author: [email protected] * @site: e-smartdata.org ### scikit-learn >Strona biblioteki: [https://scikit-learn.org](https://scikit-learn.org) > >Dokumentacja/User Guide: [https://scikit-learn.org/stable/user_guide.html](https://scikit-learn.org/stable/user_guide.html) > >Podstawowa biblioteka do uczenia maszynowego w języku Python. > >Aby zainstalować bibliotekę scikit-learn, użyj polecenia poniżej: ``` pip install scikit-learn ``` ### Spis treści: 1. [Import bibliotek](#a1) 2. [K-nearest Neighbour Algorithm - Algorytm K-najbliższych sąsiadów](#a2) 3. [Wykres Rozproszenia](#a3) 4. [K-nearest Neighbors Classifier](#a4) 5. [Wykres granic decyzyjnych](#a5) 6. [Grid Search](#6) ### <a name='a1'></a> Import bibliotek ``` import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import plotly.express as px sns.set() ``` ### <a name='a2'></a> K-nearest Neighbour Algorithm - Algorytm K-najbliższych sąsiadów Podstawą działania algorytmu jest: * znalezienie z góry określonej liczby próbek treningowych znajdujących się najbliżej naszej obserwacji * przewidzenie na ich podstawie etykiety Liczba sąsiadów jest określana przez użytkownika. Odległości zwykle kalkuluje sie przy pomocy metryki euklidesowej. ``` from sklearn.datasets import load_iris raw_data = load_iris() raw_data.data raw_data.target df1 = pd.DataFrame(data=raw_data.data, columns=raw_data.feature_names) df2 = pd.DataFrame(data=raw_data.target, columns=['class']) df = pd.concat([df1, df2], axis=1) df.head() df.info() ``` ### <a name='a3'></a> Wykres Rozproszenia ``` _ = sns.pairplot(df, hue='class') df.corr() X = raw_data.data y = raw_data.target X = X[:, :2] print('X shape:', X.shape) print('y shape:', y.shape) plt.scatter(X[:, 0], X[:, 1], c=y, cmap='viridis') plt.title('Wykres punktowy') plt.xlabel('cecha_1: sepal_length') plt.ylabel('cecha_2: sepal_width') plt.show() df = pd.DataFrame(X, columns=['sepal_length', 'sepal_width']) target = pd.DataFrame(y, columns=['class']) df = pd.concat([df, target], axis=1) px.scatter(df, x='sepal_length', y='sepal_width', color='class', width=600, height=400) ``` ### <a name='a4'></a> K-nearest Neighbors Classifier ``` from sklearn.neighbors import KNeighborsClassifier classifier = KNeighborsClassifier(n_neighbors=5) classifier.fit(X, y) accuracy = classifier.score(X, y) accuracy ``` ### <a name='a5'></a> Wykres granic decyzyjnych ``` x_min, x_max = X[:, 0].min() - 0.5, X[:, 0].max() + 0.5 y_min, y_max = X[:, 1].min() - 0.5, X[:, 1].max() + 0.5 xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.01), np.arange(y_min, y_max, 0.01)) mesh = np.c_[xx.ravel(), yy.ravel()] Z = classifier.predict(mesh) Z = Z.reshape(xx.shape) plt.figure(figsize=(9, 7)) plt.pcolormesh(xx, yy, Z, cmap='gnuplot', alpha=0.1) plt.scatter(X[:, 0], X[:, 1], c=y, cmap='gnuplot', edgecolors='r') plt.xlim(xx.min(), xx.max()) plt.ylim(yy.min(), yy.max()) plt.title(f'3-class classification k=5, accuracy: {accuracy:.4f}') plt.show() plt.figure(figsize=(12, 12)) for i in range(1, 7): plt.subplot(3, 2, i) classifier = KNeighborsClassifier(n_neighbors=i) classifier.fit(X, y) accuracy = classifier.score(X, y) xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.01), np.arange(y_min, y_max, 0.01)) mesh = np.c_[xx.ravel(), yy.ravel()] Z = 
classifier.predict(mesh) Z = Z.reshape(xx.shape) plt.pcolormesh(xx, yy, Z, cmap='gnuplot', alpha=0.1) plt.scatter(X[:, 0], X[:, 1], c=y, cmap='gnuplot', edgecolors='r') plt.xlim(xx.min(), xx.max()) plt.ylim(yy.min(), yy.max()) plt.title(f'3-class classification k={i}, accuracy: {accuracy:.4f}') plt.show() ``` ### <a name='6'></a> Grid Search ``` from sklearn.model_selection import GridSearchCV grid_params = {'n_neighbors': range(2, 30)} classifier = KNeighborsClassifier() gs = GridSearchCV(classifier, grid_params, cv=3) gs.fit(X, y) gs.best_params_ k = gs.best_params_['n_neighbors'] k classifier = gs.best_estimator_ classifier x_min, x_max = X[:, 0].min() - 0.5, X[:, 0].max() + 0.5 y_min, y_max = X[:, 1].min() - 0.5, X[:, 1].max() + 0.5 xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.01), np.arange(y_min, y_max, 0.01)) mesh = np.c_[xx.ravel(), yy.ravel()] Z = gs.predict(mesh) Z = Z.reshape(xx.shape) plt.figure(figsize=(9, 7)) plt.pcolormesh(xx, yy, Z, cmap='gnuplot', alpha=0.1) plt.scatter(X[:, 0], X[:, 1], c=y, cmap='gnuplot', edgecolors='r') plt.xlim(xx.min(), xx.max()) plt.ylim(yy.min(), yy.max()) plt.title(f'3-class classification k: {k}, accuracy: {accuracy:.4f}') plt.show() ```
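Note that the accuracy values above are computed on the same data the models were fitted on, which is usually optimistic. Below is a minimal added sketch of a held-out evaluation; it assumes `X`, `y`, and the selected `k` from the cells above are still in scope.

```
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Hold out 30% of the samples for testing; stratify to keep the class proportions.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y)

clf = KNeighborsClassifier(n_neighbors=k)
clf.fit(X_train, y_train)

print(f'Train accuracy: {clf.score(X_train, y_train):.4f}')
print(f'Test accuracy:  {clf.score(X_test, y_test):.4f}')
```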
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.png) # Automated Machine Learning _**Orange Juice Sales Forecasting**_ ## Contents 1. [Introduction](#introduction) 1. [Setup](#setup) 1. [Compute](#compute) 1. [Data](#data) 1. [Train](#train) 1. [Forecast](#forecast) 1. [Operationalize](#operationalize) ## Introduction<a id="introduction"></a> In this example, we use AutoML to train, select, and operationalize a time-series forecasting model for multiple time-series. Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook. The examples in the follow code samples use the University of Chicago's Dominick's Finer Foods dataset to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area. ## Setup<a id="setup"></a> ``` import azureml.core import pandas as pd import logging from azureml.core.workspace import Workspace from azureml.core.experiment import Experiment from azureml.train.automl import AutoMLConfig from azureml.automl.core.featurization import FeaturizationConfig ``` This sample notebook may use features that are not available in previous versions of the Azure ML SDK. ``` print("This notebook was created using version 1.36.0 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") ``` As part of the setup you have already created a <b>Workspace</b>. To run AutoML, you also need to create an <b>Experiment</b>. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem. ``` ws = Workspace.from_config() # choose a name for the run history container in the workspace experiment_name = "automl-ojforecasting" experiment = Experiment(ws, experiment_name) output = {} output["Subscription ID"] = ws.subscription_id output["Workspace"] = ws.name output["SKU"] = ws.sku output["Resource Group"] = ws.resource_group output["Location"] = ws.location output["Run History Name"] = experiment_name pd.set_option("display.max_colwidth", -1) outputDf = pd.DataFrame(data=output, index=[""]) outputDf.T ``` ## Compute<a id="compute"></a> You will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource. > Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist. #### Creation of AmlCompute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace this code will skip the creation process. As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota. 
``` from azureml.core.compute import ComputeTarget, AmlCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your CPU cluster amlcompute_cluster_name = "oj-cluster" # Verify that cluster does not exist already try: compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name) print("Found existing cluster, use it.") except ComputeTargetException: compute_config = AmlCompute.provisioning_configuration( vm_size="STANDARD_D12_V2", max_nodes=6 ) compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config) compute_target.wait_for_completion(show_output=True) ``` ## Data<a id="data"></a> You are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type. ``` time_column_name = "WeekStarting" data = pd.read_csv("dominicks_OJ.csv", parse_dates=[time_column_name]) # Drop the columns 'logQuantity' as it is a leaky feature. data.drop("logQuantity", axis=1, inplace=True) data.head() ``` Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we define the **time_series_id_column_names** - the columns whose values determine the boundaries between time-series: ``` time_series_id_column_names = ["Store", "Brand"] nseries = data.groupby(time_series_id_column_names).ngroups print("Data contains {0} individual time-series.".format(nseries)) ``` For demonstration purposes, we extract sales time-series for just a few of the stores: ``` use_stores = [2, 5, 8] data_subset = data[data.Store.isin(use_stores)] nseries = data_subset.groupby(time_series_id_column_names).ngroups print("Data subset contains {0} individual time-series.".format(nseries)) ``` ### Data Splitting We now split the data into a training and a testing set for later forecast evaluation. The test set will contain the final 20 weeks of observed sales for each time-series. The splits should be stratified by series, so we use a group-by statement on the time series identifier columns. ``` n_test_periods = 20 def split_last_n_by_series_id(df, n): """Group df by series identifiers and split on last n rows for each group.""" df_grouped = df.sort_values(time_column_name).groupby( # Sort by ascending time time_series_id_column_names, group_keys=False ) df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n]) df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:]) return df_head, df_tail train, test = split_last_n_by_series_id(data_subset, n_test_periods) ``` ### Upload data to datastore The [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace), is paired with the storage account, which contains the default data store. 
We will use it to upload the train and test data and create [tabular datasets](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training and testing. A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into tabular representation. ``` train.to_csv(r"./dominicks_OJ_train.csv", index=None, header=True) test.to_csv(r"./dominicks_OJ_test.csv", index=None, header=True) datastore = ws.get_default_datastore() datastore.upload_files( files=["./dominicks_OJ_train.csv", "./dominicks_OJ_test.csv"], target_path="dataset/", overwrite=True, show_progress=True, ) ``` ### Create dataset for training ``` from azureml.core.dataset import Dataset train_dataset = Dataset.Tabular.from_delimited_files( path=datastore.path("dataset/dominicks_OJ_train.csv") ) test_dataset = Dataset.Tabular.from_delimited_files( path=datastore.path("dataset/dominicks_OJ_test.csv") ) train_dataset.to_pandas_dataframe().tail() ``` ## Modeling For forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps: * Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span * Impute missing values in the target (via forward-fill) and feature columns (using median column values) * Create features based on time series identifiers to enable fixed effects across different series * Create time-based features to assist in learning seasonal patterns * Encode categorical variables to numeric quantities In this notebook, AutoML will train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series. If you're looking for training multiple models for different time-series, please see the many-models notebook. You are almost ready to start an AutoML training job. First, we need to separate the target column from the rest of the DataFrame: ``` target_column_name = "Quantity" ``` ## Customization The featurization customization in forecasting is an advanced feature in AutoML which allows our customers to change the default forecasting featurization behaviors and column types through `FeaturizationConfig`. The supported scenarios include: 1. Column purposes update: Override feature type for the specified column. Currently supports DateTime, Categorical and Numeric. This customization can be used in the scenario that the type of the column cannot correctly reflect its purpose. Some numerical columns, for instance, can be treated as Categorical columns which need to be converted to categorical while some can be treated as epoch timestamp which need to be converted to datetime. To tell our SDK to correctly preprocess these columns, a configuration need to be add with the columns and their desired types. 2. Transformer parameters update: Currently supports parameter change for Imputer only. User can customize imputation methods. The supported imputing methods for target column are constant and ffill (forward fill). The supported imputing methods for feature columns are mean, median, most frequent, constant and ffill (forward fill). This customization can be used for the scenario that our customers know which imputation methods fit best to the input data. 
For instance, some datasets use NaN to represent 0, in which case the correct behavior is to impute all of the missing values with 0. To achieve this behavior, these columns need to be configured for constant imputation with `fill_value` 0. 3. Drop columns: Columns to drop from being featurized. These are usually columns that are leaky or that contain no useful data. ``` featurization_config = FeaturizationConfig() # Force the CPWVOL5 feature to be numeric type. featurization_config.add_column_purpose("CPWVOL5", "Numeric") # Fill missing values in the target column, Quantity, with zeros. featurization_config.add_transformer_params( "Imputer", ["Quantity"], {"strategy": "constant", "fill_value": 0} ) # Fill missing values in the INCOME column with median value. featurization_config.add_transformer_params( "Imputer", ["INCOME"], {"strategy": "median"} ) # Fill missing values in the Price column with forward fill (last value carried forward). featurization_config.add_transformer_params("Imputer", ["Price"], {"strategy": "ffill"}) ``` ## Forecasting Parameters To define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameters we will be passing into our experiment. |Property|Description| |-|-| |**time_column_name**|The name of your time column.| |**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).| |**time_series_id_column_names**|The column names used to uniquely identify the time series in data that has multiple rows with the same timestamp. If the time series identifiers are not defined, the data set is assumed to be one time series.| |**freq**|Forecast frequency. This optional parameter represents the period with which the forecast is desired, for example, daily, weekly, yearly, etc. Use this parameter for the correction of time series containing irregular data points or for padding of short time series. The frequency needs to be a pandas offset alias. Please refer to [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#dateoffset-objects) for more information.| ## Train<a id="train"></a> The [AutoMLConfig](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.automlconfig.automlconfig?view=azure-ml-py) object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, the training data, and cross-validation parameters. For forecasting tasks, there are some additional parameters that can be set in the `ForecastingParameters` class: the name of the column holding the date/time, the timeseries id column names, and the maximum forecast horizon. A time column is required for forecasting, while the time_series_id is optional. If time_series_id columns are not given, AutoML assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak. The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. 
In this example, we set the forecast horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning application that estimates the next month of sales should set the horizon according to suitable planning time-scales. Please see the [energy_demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) for more discussion of forecast horizon. We note here that AutoML can sweep over two types of time-series models: * Models that are trained for each series such as ARIMA and Facebook's Prophet. * Models trained across multiple time-series using a regression approach. In the first case, AutoML loops over all time-series in your dataset and trains one model (e.g. AutoArima or Prophet, as the case may be) for each series. This can result in long runtimes to train these models if there are a lot of series in the data. One way to mitigate this problem is to fit models for different series in parallel if you have multiple compute cores available. To enable this behavior, set the `max_cores_per_iteration` parameter in your AutoMLConfig as shown in the example in the next cell. Finally, a note about the cross-validation (CV) procedure for time-series data. AutoML uses out-of-sample error estimates to select a best pipeline/model, so it is important that the CV fold splitting is done correctly. Time-series can violate the basic statistical assumptions of the canonical K-Fold CV strategy, so AutoML implements a [rolling origin validation](https://robjhyndman.com/hyndsight/tscv/) procedure to create CV folds for time-series data. To use this procedure, you just need to specify the desired number of CV folds in the AutoMLConfig object. It is also possible to bypass CV and use your own validation set by setting the *validation_data* parameter of AutoMLConfig. Here is a summary of AutoMLConfig parameters used for training the OJ model: |Property|Description| |-|-| |**task**|forecasting| |**primary_metric**|This is the metric that you want to optimize.<br> Forecasting supports the following primary metrics <br><i>spearman_correlation</i><br><i>normalized_root_mean_squared_error</i><br><i>r2_score</i><br><i>normalized_mean_absolute_error</i> |**experiment_timeout_hours**|Experimentation timeout in hours.| |**enable_early_stopping**|If early stopping is on, training will stop when the primary metric is no longer improving.| |**training_data**|Input dataset, containing both features and label column.| |**label_column_name**|The name of the label column.| |**compute_target**|The remote compute for training.| |**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection| |**enable_voting_ensemble**|Allow AutoML to create a Voting ensemble of the best performing models| |**enable_stack_ensemble**|Allow AutoML to create a Stack ensemble of the best performing models| |**debug_log**|Log file path for writing debugging information| |**featurization**| 'auto' / 'off' / FeaturizationConfig Indicator for whether featurization step should be done automatically or not, or whether customized featurization should be used. Setting this enables AutoML to perform featurization on the input to handle *missing data*, and to perform some common *feature extraction*.| |**max_cores_per_iteration**|Maximum number of cores to utilize per iteration. 
A value of -1 indicates all available cores should be used.| ``` from azureml.automl.core.forecasting_parameters import ForecastingParameters forecasting_parameters = ForecastingParameters( time_column_name=time_column_name, forecast_horizon=n_test_periods, time_series_id_column_names=time_series_id_column_names, freq="W-THU", # Set the forecast frequency to be weekly (start on each Thursday) ) automl_config = AutoMLConfig( task="forecasting", debug_log="automl_oj_sales_errors.log", primary_metric="normalized_mean_absolute_error", experiment_timeout_hours=0.25, training_data=train_dataset, label_column_name=target_column_name, compute_target=compute_target, enable_early_stopping=True, featurization=featurization_config, n_cross_validations=3, verbosity=logging.INFO, max_cores_per_iteration=-1, forecasting_parameters=forecasting_parameters, ) ``` You can now submit a new training run. Depending on the data and number of iterations, this operation may take several minutes. Information from each iteration will be printed to the console. Validation errors and current status will be shown when setting `show_output=True`, and the execution will be synchronous. ``` remote_run = experiment.submit(automl_config, show_output=False) remote_run.wait_for_completion() ``` ### Retrieve the Best Model Each run within an Experiment stores serialized (i.e. pickled) pipelines from the AutoML iterations. We can now retrieve the pipeline with the best performance on the validation dataset: ``` best_run, fitted_model = remote_run.get_output() print(fitted_model.steps) model_name = best_run.properties["model_name"] ``` ## Transparency View the updated featurization summary ``` custom_featurizer = fitted_model.named_steps["timeseriestransformer"] custom_featurizer.get_featurization_summary() ``` # Forecast<a id="forecast"></a> Now that we have retrieved the best pipeline/model, it can be used to make predictions on test data. We will do batch scoring on the test dataset, which should have the same schema as the training dataset. The inference will run on a remote compute. In this example, it will re-use the training compute. ``` test_experiment = Experiment(ws, experiment_name + "_inference") ``` ### Retrieving forecasts from the model We have created a function called `run_forecast` that submits the test data to the best model determined during the training run and retrieves forecasts. This function uses a helper script `forecasting_script`, which is uploaded and executed on the remote compute. To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics, which are approximately constant for each store over the 20 week forecast horizon in the testing data. ``` from run_forecast import run_remote_inference remote_run_infer = run_remote_inference( test_experiment=test_experiment, compute_target=compute_target, train_run=best_run, test_dataset=test_dataset, target_column_name=target_column_name, ) remote_run_infer.wait_for_completion(show_output=False) # download the forecast file to the local machine remote_run_infer.download_file("outputs/predictions.csv", "predictions.csv") ``` # Evaluate To evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, including the mean absolute percentage error (MAPE). 
For more metrics that can be used for evaluation after training, please see [supported metrics](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-ml#regressionforecasting-metrics), and [how to calculate residuals](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-ml#residuals). We'll add predictions and actuals into a single dataframe for convenience in calculating the metrics. ``` # load forecast data frame fcst_df = pd.read_csv("predictions.csv", parse_dates=[time_column_name]) fcst_df.head() from azureml.automl.core.shared import constants from azureml.automl.runtime.shared.score import scoring from matplotlib import pyplot as plt # use automl scoring module scores = scoring.score_regression( y_test=fcst_df[target_column_name], y_pred=fcst_df["predicted"], metrics=list(constants.Metric.SCALAR_REGRESSION_SET), ) print("[Test data scores]\n") for key, value in scores.items(): print("{}: {:.3f}".format(key, value)) # Plot outputs %matplotlib inline test_pred = plt.scatter(fcst_df[target_column_name], fcst_df["predicted"], color="b") test_test = plt.scatter( fcst_df[target_column_name], fcst_df[target_column_name], color="g" ) plt.legend( (test_pred, test_test), ("prediction", "truth"), loc="upper left", fontsize=8 ) plt.show() ``` # Operationalize<a id="operationalize"></a> _Operationalization_ means getting the model into the cloud so that others can run it after you close the notebook. We will create a Docker container running on Azure Container Instances with the model. ``` description = "AutoML OJ forecaster" tags = None model = remote_run.register_model( model_name=model_name, description=description, tags=tags ) print(remote_run.model_id) ``` ### Develop the scoring script For the deployment we need a function which will run the forecast on serialized data. It can be obtained from the best_run. ``` script_file_name = "score_fcast.py" best_run.download_file("outputs/scoring_file_v_1_0_0.py", script_file_name) ``` ### Deploy the model as a Web Service on Azure Container Instances ``` from azureml.core.model import InferenceConfig from azureml.core.webservice import AciWebservice from azureml.core.webservice import Webservice from azureml.core.model import Model inference_config = InferenceConfig( environment=best_run.get_environment(), entry_script=script_file_name ) aciconfig = AciWebservice.deploy_configuration( cpu_cores=2, memory_gb=4, tags={"type": "automl-forecasting"}, description="Automl forecasting sample service", ) aci_service_name = "automl-oj-forecast-01" print(aci_service_name) aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig) aci_service.wait_for_deployment(True) print(aci_service.state) aci_service.get_logs() ``` ### Call the service ``` import json X_query = test.copy() X_query.pop(target_column_name) # We have to convert datetime to string, because Timestamps cannot be serialized to JSON. X_query[time_column_name] = X_query[time_column_name].astype(str) # The Service object accepts a dictionary, which is internally converted to a JSON string. # The section 'data' contains the data frame in the form of a dictionary. 
sample_quantiles = [0.025, 0.975] test_sample = json.dumps( {"data": X_query.to_dict(orient="records"), "quantiles": sample_quantiles} ) response = aci_service.run(input_data=test_sample) # translate from networkese to datascientese try: res_dict = json.loads(response) y_fcst_all = pd.DataFrame(res_dict["index"]) y_fcst_all[time_column_name] = pd.to_datetime( y_fcst_all[time_column_name], unit="ms" ) y_fcst_all["forecast"] = res_dict["forecast"] y_fcst_all["prediction_interval"] = res_dict["prediction_interval"] except Exception: print(response) # if parsing fails, show the raw response instead y_fcst_all.head() ``` ### Delete the web service if desired ``` serv = Webservice(ws, "automl-oj-forecast-01") serv.delete() # don't do it accidentally ```
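If you are completely done experimenting, you may also want to clean up the training compute so the cluster is not left sitting in your workspace. The snippet below is a minimal optional sketch that is not part of the original walkthrough; it assumes the `compute_target` object created at the top of this notebook is still in scope and that you no longer need the cluster for other experiments.

```
# Optional cleanup (assumption: the AmlCompute cluster is no longer needed).
# This permanently removes the compute target from the workspace.
compute_target.delete()
```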
``` import psycopg2 import pandas as pd import psycopg2.extras class PostgresConnection(object): def __init__(self): self.connection = psycopg2.connect(database="ecomdb", user = "postgres", password = "admin", host = "127.0.0.1", port = "5432") def getConnection(self): print("Connection to DB established!") return self.connection con = PostgresConnection().getConnection() cur = con.cursor() select_stmt = "SELECT s.division, s.district, COUNT(*) " \ "FROM star_schema.fact_table t " \ "JOIN star_schema.store_dim s on s.store_key=t.store_key " \ "JOIN star_schema.time_dim tim on tim.time_key=t.time_key " \ "WHERE tim.month=12 " \ "GROUP BY CUBE(s.division, s.district, tim.month) " \ "ORDER BY s.division" cur.execute(select_stmt) record = cur.fetchall() record record_load = pd.DataFrame(list(record), columns=['divison', 'district', 'sales']) record_load select_stmt2 = "SELECT s.division, COUNT(*) " \ "FROM star_schema.fact_table t " \ "JOIN star_schema.store_dim s on s.store_key=t.store_key " \ "JOIN star_schema.time_dim tim on tim.time_key=t.time_key " \ "WHERE tim.month=12 " \ "GROUP BY s.division " \ "ORDER BY s.division" cur.execute(select_stmt2) record_n = cur.fetchall() df = pd.DataFrame(list(record_n), columns=['divison', 'sales']) df pip install matplotlib plot = df.plot.pie(x='division', y='sales', figsize=(5, 5)) df.dtypes df['sales'] = df['sales'].astype('float64') df = df.set_index(['divison']) plot = df.plot.pie(x='division', y='sales', figsize=(5, 5)) df ``` # Find the division/district/year/month wise total_sale_price joining fact table and respective dimension table ``` # CUBE cur = con.cursor() q1 = "SELECT s.division, s.district, tim.year, tim.month, COUNT(t.total_price) " \ "FROM star_schema.fact_table t " \ "JOIN star_schema.store_dim s on s.store_key=t.store_key " \ "JOIN star_schema.time_dim tim on tim.time_key=t.time_key " \ "GROUP BY CUBE(s.division, s.district, tim.year, tim.month) " \ "ORDER BY s.division" cur.execute(q1) record_n = cur.fetchall() record_n df_q1 = pd.DataFrame(list(record_n), columns=['divison', 'district', 'year', 'month','total_sales_price']) df_q1 # ROLLUP cur = con.cursor() q1 = "SELECT s.division, s.district, tim.year, tim.month, COUNT(t.total_price) " \ "FROM star_schema.fact_table t " \ "JOIN star_schema.store_dim s on s.store_key=t.store_key " \ "JOIN star_schema.time_dim tim on tim.time_key=t.time_key " \ "GROUP BY ROLLUP(s.division, s.district, tim.year, tim.month) " \ "ORDER BY s.division" cur.execute(q1) record_n = cur.fetchall() df_q1 = pd.DataFrame(list(record_n), columns=['divison', 'district', 'year', 'month','total_sales_price']) df_q1 class PostgresConnection(object): def __init__(self): self.connection = psycopg2.connect(database="ecomdb", user = "postgres", password = "admin", host = "127.0.0.1", port = "5432") def getConnection(self): print("Connection to DB established!") return self.connection con = PostgresConnection().getConnection() cur = con.cursor() q1 = "SELECT s.division, SUM(ft.total_price) " \ "FROM star_schema.fact_table ft " \ "JOIN star_schema.store_dim s ON ft.store_key=s.store_key " \ "GROUP BY s.division " \ "ORDER BY s.division" cur.execute(q1) record_n = cur.fetchall() div = pd.DataFrame(list(record_n), columns=['division', 'sales']) div #Change type function def changeType2Float(input): input['sales'] = input['sales'].astype('float64') changeType2Float(div) div_res=div.set_index(['division']) div_res.plot.pie(y='sales',figsize=(5,5)) ``` # DISTRICT ``` cur = con.cursor() q1_dis = "SELECT s.district, 
SUM(ft.total_price) " \ "FROM star_schema.fact_table ft " \ "JOIN star_schema.store_dim s ON ft.store_key=s.store_key " \ "GROUP BY s.district " \ "ORDER BY s.district" cur.execute(q1_dis) record_n = cur.fetchall() dis_res = pd.DataFrame(list(record_n), columns=['district', 'sales']) dis_res changeType2Float(dis_res) dis_res=dis_res.set_index(['district']) dis_res import matplotlib as plt dis_res.plot.pie(y='sales',figsize=(25,25)) ``` # YEAR ``` cur = con.cursor() q1_year = "SELECT tim.year, SUM(ft.total_price) " \ "FROM star_schema.fact_table ft " \ "JOIN star_schema.time_dim tim ON ft.time_key=tim.time_key " \ "GROUP BY tim.year " \ "ORDER BY tim.year" cur.execute(q1_year) record_n = cur.fetchall() year_res = pd.DataFrame(list(record_n), columns=['year', 'sales']) year_res changeType2Float(year_res) year_res=year_res.set_index(['year']) year_res year_res.plot.pie(y='sales',figsize=(10,10), shadow='True') ``` # MONTH ``` cur = con.cursor() q1_month = "SELECT tim.month, SUM(ft.total_price) " \ "FROM star_schema.fact_table ft " \ "JOIN star_schema.time_dim tim ON ft.time_key=tim.time_key " \ "GROUP BY tim.month " \ "ORDER BY tim.month" cur.execute(q1_month) record_n = cur.fetchall() month_res = pd.DataFrame(list(record_n), columns=['month', 'sales']) month_res changeType2Float(month_res) month_res=month_res.set_index(['month']) month_res month_res.plot.pie(y='sales',figsize=(10,10)) ``` # Q2: Find the customer/bank/transaction(cash/online) wise total_sale_price joining fact table and respective dimension table # CUSTOMER ``` cur = con.cursor() q2_customer = "SELECT cus.name, SUM(ft.total_price) " \ "FROM star_schema.fact_table ft " \ "JOIN star_schema.customer_dim cus ON ft.customer_key=cus.customer_key " \ "GROUP BY cus.name " \ "ORDER BY cus.name" cur.execute(q2_customer) record_n = cur.fetchall() customer_res = pd.DataFrame(list(record_n), columns=['name', 'sales']) customer_res changeType2Float(customer_res) customer_res=customer_res.set_index(['name']) customer_res customer_res.plot.pie(y='sales',figsize=(10,10)) customer_res = customer_res[:10] customer_res customer_res.plot.pie(y='sales',figsize=(10,10)) ```
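Once all of the queries above have been run, it is good practice to release the database resources. The following is a small sketch that is not part of the original notebook; it assumes the `cur` cursor and `con` connection objects created earlier are still open.

```
# Close the cursor and the underlying psycopg2 connection when the analysis is finished.
cur.close()
con.close()
```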
``` import pandas as pd dataset = pd.read_csv("data/train.csv") dataset.drop_duplicates(inplace=True) dataset.shape dataset.head() dataset[ dataset["is_parent"] == False][:20].values # using this strategy to fix the problem (stated in the paper) for pairs order def preprocess(dataset): aliased_snippet = [] companies = dataset["company1"].append(dataset["company2"]).value_counts().keys() for i in range(dataset.shape[0]): current_row = dataset.iloc[i] snippet = current_row["snippet"] # I am adding more spaces because in some samples the words are concatenated for company in companies: snippet = snippet.replace(company, ' ' + company +' ') preprocessed = snippet.replace(current_row["company1"]," company1 ").replace(current_row["company2"]," company2 ").replace("\xa0", " ").replace("\n", " ") aliased_snippet.append(preprocessed) dataset['aliased_snippet'] = aliased_snippet dataset['aliased_snippet'] = dataset['aliased_snippet'].str.lower() print("Companies shape",companies.shape) return dataset dataset = preprocess(dataset) dataset.shape # I will split the train data into train/dev/test in a 70/20/10 ratio from sklearn.model_selection import train_test_split train, other = train_test_split(dataset, stratify=dataset["is_parent"],test_size=0.3,random_state=26) train.shape, other.shape train["is_parent"].value_counts() other["is_parent"].value_counts() from sklearn.model_selection import train_test_split dev,test = train_test_split(other, stratify=other["is_parent"], test_size=(1/3), random_state=26) dev.shape, test.shape ``` Let's check whether we split it correctly ``` def in_percent(ratio): return ratio*100 print(in_percent(train.shape[0]/dataset.shape[0])) print(in_percent(dev.shape[0]/dataset.shape[0])) print(in_percent(test.shape[0]/dataset.shape[0])) %mkdir split train.to_csv("split/train.csv") dev.to_csv("split/dev.csv") test.to_csv("split/test.csv") train["is_parent"].value_counts() dev["is_parent"].value_counts() test["is_parent"].value_counts() ``` ### Now let's preprocess the unlabeled test set in order to use it as a corpus for more words and prepare it for input to the models ``` onto_test = pd.read_csv("data/test-labeled.csv") onto_test.drop_duplicates(inplace=True) onto_test.shape onto_test.head() onto_test["company1"] = onto_test["label1"] onto_test["company2"] = onto_test["label2"] onto_test["is_parent"] = onto_test["relation.1"] onto_test["relation"].value_counts() onto_test = preprocess(onto_test) onto_test.head() %mkdir processed onto_test.to_csv("processed/test.csv", index_label=False) ```
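As a quick sanity check on the aliasing strategy implemented in `preprocess` above, the sketch below runs the function on a single made-up row. This example is not part of the original notebook; the company names and snippet are purely hypothetical, and the exact spacing of the output may vary.

```
# Hypothetical toy row: both company names should be replaced by the
# 'company1' / 'company2' aliases and the text lower-cased.
toy = pd.DataFrame({
    "company1": ["Alphabet"],
    "company2": ["Google"],
    "snippet": ["Google is a subsidiary of Alphabet."],
})
toy = preprocess(toy)
print(toy["aliased_snippet"].iloc[0])
# expected: something like ' company2  is a subsidiary of  company1 .'
```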
``` import scipy import statsmodels import sklearn import theano import tensorflow import keras import glob import os import numpy as np from keras.models import Sequential from keras.layers import Dense import pandas as pd import math import matplotlib.pyplot as plt #read data df = pd.read_csv('./Data/gill128_2021-02-01-0000_2021-02-08-0000.csv', index_col=False) print(len(df)) df.columns #process raw data df['elevation']=0 #fix here df['elevdiff']=df['elevation'].diff() #ft df['elevdiff']=df['elevdiff']*0.000189394 #convert ft to mile df['distdiff']=df['Analysis - other - Distance driven [mi]'].diff() df['roadGrade']=df['elevdiff']/df['distdiff'] df['temp']=df['Vehicle - Ambient Air Temperature [°F]'] df['speed'] = df['Vehicle - Wheel Based Vehicle Speed [mi/h]']*1.60934 #convert to km/h #interpolate if raw data is unfilled FuelRate = df['Engine - Engine Fuel Rate [gal/h]'] FuelRate = FuelRate.interpolate() df['FuelRate'] = FuelRate Speed = df['speed'] Speed = Speed.interpolate() df['speed'] = Speed df=df[['speed','FuelRate']] #calculate acceleration speedms = df['speed']*1000/3600 df['acceleration']=speedms.diff() #unit: m/s^2 df = df.drop(df[df.FuelRate == 0].index) df=df.dropna() #split train and test datasets train = df.sample(n=math.floor(0.8*df.shape[0])) test = df.sample(n=math.ceil(0.2*df.shape[0])) #build ann model Y_train = train['FuelRate'] #unit: gal/h X_train = train[['speed','acceleration']] Y_test = test['FuelRate'] X_test = test[['speed','acceleration']] model = Sequential() model.add(Dense(6,kernel_initializer='normal', input_dim=2, activation ='relu')) model.add(Dense(6, kernel_initializer='normal', activation ='relu')) model.add(Dense(1,kernel_initializer='normal', activation ='linear')) model.compile(loss='mean_absolute_error', optimizer='adam') #fit model history = model.fit(X_train, Y_train, validation_data=(X_test, Y_test), epochs=30, batch_size=256, verbose = 0) #performance plt.plot(history.history['loss'], label='train') plt.plot(history.history['val_loss'], label='test') plt.legend() plt.show() #predict all trips in a for loop path = r'path/' all_files = glob.glob(os.path.join(path, "Trajectory*.csv")) colnames=['time_ms','speed','acceleration','vehicle_ref','actorConfig_id','actorConfig_emissionClass','actorConfig_fuel','actorConfig_ref','actorConfig_vehicleClass'] for f in all_files: # print(f[65:72]) trip=pd.read_csv(f,names=colnames, header=None) trip['speed']=trip['speed']*(0.01*3.6) #km/h trip['acceleration']=trip['acceleration']*(0.001) #m/s2 input2esti=trip[['speed','acceleration']] #prediction and plot results pre = model.predict(input2esti) tripf=pd.concat([trip,pd.DataFrame(pre,columns=['FuelRate'])], axis=1) with open('./Data/diesel/' + 'diesel' + f[65:73] +'_'+ f[-12:-4] + '.csv', 'w', newline='') as oFile: tripf.to_csv(oFile, index = False) #read trajectory data that needs prediction trip = pd.read_csv("./Route1_trip151687020_065500.csv") trip['speed']=trip['speed']*(0.01*3.6) #km/h trip['acceleration']=trip['acceleration']*(0.001) #m/s2 input2esti=trip[['speed','acceleration']] #prediction and plot results pre = model.predict(input2esti) tripf=pd.concat([trip,pd.DataFrame(pre,columns=['FuelRate'])], axis=1) fig, ax1 = plt.subplots(figsize=(6, 4)) ax1.plot(tripf.index, tripf.FuelRate, color='blue', linewidth=1) ax1.set_xticks(tripf.index[::360]) ax1.set_xticklabels(tripf.time[::360], rotation=45) plt.tight_layout(pad=4) plt.subplots_adjust(bottom=0.15) plt.xlabel("Time",fontsize = 14) plt.ylabel("Fuel consumption rate (gal/h)",fontsize = 14) 
plt.show() ```
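Beyond plotting the instantaneous fuel rate, it can be useful to estimate the total fuel consumed over the trip. The sketch below is not part of the original workflow; it assumes the trajectory rows are evenly spaced one second apart, so if the actual sampling interval differs, the time step should instead be derived from the trip's timestamp column.

```
# Rough estimate of total fuel use: sum of (gal/h) * (hours per sample).
dt_hours = 1.0 / 3600.0  # assumed 1-second spacing between samples
total_gallons = (tripf['FuelRate'] * dt_hours).sum()
print('Estimated total fuel consumed on this trip: {:.3f} gal'.format(total_gallons))
```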
<a href="https://colab.research.google.com/github/minhphan03/AMATH-301-Python-Notebooks/blob/main/interpolation_and_extrapolation.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ``` import numpy as np import scipy.interpolate import matplotlib.pyplot as plt import pandas as pd ``` # Interpolation and Extrapolation All of the methods we have discussed in the last two lectures have followed a certain philosophy: We wanted to find a curve that came close to all of our data points, but we were more concerned with finding a relatively simple formula than with actually hitting any of the points. In other words, we were more concerned with avoiding overfitting than with finding a perfect fit. This is usually exactly the right approach. In many scientific disciplines your data comes with enough noise/uncertainty that you cannot hope to find a model that exactly predicts all of it. If you try to predict every single point exactly correctly, then you will end up drastically overfitting your data and finding a very complicated curve that does not do a good job at predicting new data points or capturing any interesting patterns in the data set. The most you can do is find a relatively simple model that captures important trends. However, there are other situations where we really trust our data points but simply don't have enough of them. For example, you might be tracking an animal using a GPS collar. At regular time intervals, these collars report their position via a radio signal. Modern GPS is extremely accurate, so the position data is often close enough to exact that we can treat our data as perfect. The main constraint on tracking is not accuracy, but battery life. As you might imagine, it is difficult to change the battery on a collar/tag attached to a migrating bird or a wild python. To extend the battery life, these collars might only report their position every few hours. A similar situation arises in computer graphics. For example, you might use motion capture technology to track the position of an actors joints (or set the positions of these joints when making an animation). The position data from this technology are quite accurate, but we need to fill in data for the rest of the body. Such situations also arise in a purely mathematical context. For instance, when solving a partial differential equation numerically, you typically only find the solution at regularly spaced grid points instead of an arbitrary $(x, y)$. You can often ensure that your solutions are highly accurate at those grid points, but it is still important to know how the solution behaves in between them. In all of these situations, we want to find a function that hits every one of our data points exactly. We will then use that function to predict values in between (or possibly outside of) those from our data. Predicting values in between your existing data points is known as interpolation, while predicting values outside of the range of your existing data points is known as extrapolation. For mathematical convenience, we will only consider 1-dimensional data like the data sets in `data1.csv`, `data2.csv` and `data3.csv`. That is, we will assume that we have a vector of $x$ values $\mathbf{x} = [x_1, x_2, \dotsc, x_n]$ and a corresponding vector of $y$ values $\mathbf{y} = [y_1, y_2, \dotsc, y_n]$. We will also assume that the $x$'s are in order, so $x_1 < x_2 < \cdots < x_n$. 
The $y$ data is of the form $y_k = f(x_k)$, but we do not know the function $f$. Our goal is to find a function $f$ so that $f(x_k)$ does actually equal $y_k$ for each one of our data points and then use that function to predict $f(x)$ at some other $x$ value. When $x_1 < x < x_n$, we call this *interpolation*, but when $x < x_1$ or $x > x_n$ then we call this *extrapolation*. We will focus on interpolation for most of this lecture, and briefly discuss the challenges with extrapolation at the end. As an example, we will use the data from `data4.csv`. ``` from google.colab import files uploaded = files.upload() df = pd.read_csv('Week6_data4.csv') data = np.genfromtxt('Week6_data4.csv', delimiter=',') x = data[0, :] y = data[1, :] n = x.size plt.plot(x, y, 'ko') ``` ## Interpolation We already know one (quite bad) method of interpolation. We could fit an $(n-1)$st degree polynomial to our data and then use that polynomial to predict other points. With the data from `data4.csv`, this would mean ``` coeffs = np.polyfit(x, y, n - 1) xplot = np.linspace(1, 2, 1000) yplot = np.polyval(coeffs, xplot) ``` The last line, where we call `polyval`, is an interpolation. We are interpolating our data at all of the $x$ values in `xplot`. To visualize this, we can plot our interpolated values alongside the original data. ``` plt.plot(xplot, yplot, 'b', x, y, 'ko') ``` Because our data set is relatively small (i.e., $n$ is only 11), this was not a complete disaster, but it is not a particularly good strategy. For instance, it is not clear why we would want our interpolation to include large spikes near 1.05 or 1.95. As a general rule, this method is only an acceptable idea if you have six data points or fewer. With any more points, finding a high order polynomial that matches every data point will only lead to problems. However, if you only have a few data points (two to four is ideal, but you can get away with five or six), then a polynomial fit like this often provides a very good interpolation. The key idea in most interpolation methods is to divide our data into small subsets of points and fit each subset separately. The simplest method is to divide our data up into pairs. If you want to interpolate at a point $x$, then you find the two data points closest to $x$ (i.e., $x_k < x < x_{k+1}$) and then fit a line between those two points. This is called a *linear interpolation*. We could implement it with the following code: ``` for k in range(n - 1): # Choose two neighboring points x_subset = np.array([x[k], x[k + 1]]) y_subset = np.array([y[k], y[k + 1]]) # Find the line between x[k] and x[k+1] coeffs = np.polyfit(x_subset, y_subset, 1) # Choose x's between x[k] and x[k+1] to interpolate at xplot = np.linspace(x[k], x[k + 1], 1000) # Use polyval to interpolate yplot = np.polyval(coeffs, xplot) # Add to our graph plt.plot(xplot, yplot, 'b') # Add more of one line for visualization purposes if k == 4: xplot = np.linspace(x[k] - 0.2, x[k + 1] + 0.2, 1000) yplot = np.polyval(coeffs, xplot) plt.plot(xplot, yplot, 'r--') plt.plot(x, y, 'ko') ``` Note that this is actually the default method that Python uses to graph data, so the command `plt.plot(x, y, 'b')` would have produced the same graph (without the extra dotted lines). There is also a predefined function in the package `scipy.interpolate` that finds a linear interpolation like this. The function is called `interp1d`. 
It takes two arguments, an array of x data and an array of y data, and it returns a function that calculates the linear interpolation at any given x value. For example, ``` interp_func = scipy.interpolate.interp1d(x, y) print(interp_func(1.45)) ``` We could therefore make the same plot as above (without the extra dotted lines) with the code ``` xplot = np.linspace(1, 2, 1000) yplot = interp_func(xplot) plt.plot(xplot, yplot, 'b', x, y, 'ko') ``` There is no real reason to stop at two data points. We could also pick the nearest three or four (or more, but that starts to risk overfitting) data points. For example, we could pick the four closest data points and then fit a cubic to those points. For most $x$ values, this means that we would choose data points $x_k < x_{k+1} < x < x_{k+2} < x_{k+3}$, but at the far left and right (i.e., to the left of $x_2$ and to the right of $x_{n-1}$), we need to choose slightly different sets of points. We could accomplish this with the following code: ``` for k in range(n - 3): # Choose four neighboring points x_subset = np.array([x[k], x[k + 1], x[k + 2], x[k + 3]]) y_subset = np.array([y[k], y[k + 1], y[k + 2], y[k + 3]]) # Find the cubic fit between these four points coeffs = np.polyfit(x_subset, y_subset, 3) # Choose x's between x[k + 1] and x[k + 2] to interpolate at xplot = np.linspace(x[k + 1], x[k + 2], 1000) # Use polyval to interpolate yplot = np.polyval(coeffs, xplot) # Add to our graph plt.plot(xplot, yplot, 'b') # Add more of one line for visualization purposes if k == 4: xplot = np.linspace(x[k] - 0.2, x[k + 3] + 0.2, 1000) yplot = np.polyval(coeffs, xplot) plt.plot(xplot, yplot, 'r--') # Now handle the first and last interval separately x_subset = x[:4] y_subset = y[:4] coeffs = np.polyfit(x_subset, y_subset, 3) xplot = np.linspace(x[0], x[1], 1000) yplot = np.polyval(coeffs, xplot) plt.plot(xplot, yplot, 'b') x_subset = x[-4:] y_subset = y[-4:] coeffs = np.polyfit(x_subset, y_subset, 3) xplot = np.linspace(x[-2], x[-1], 1000) yplot = np.polyval(coeffs, xplot) plt.plot(xplot, yplot, 'b') plt.plot(x, y, 'ko') ``` This is called a *cubic interpolation*, but it is probably not what you will find if you look up cubic interpolations online, because our version has a problem. It's hard to tell unless you zoom in quite close, but our interpolation function is not smooth: There are still sharp corners at each data point. We are often interested in finding smooth interpolations without any such corners. There are several methods for finding smooth interpolations, and almost all of them are based on the same idea: We are creating our best fit cubic using four data points, but it only actually has to match two of them. (If you zoom in on the graph above, you can see that the dotted red line, which is one of our best fit cubics, exactly matches the four neighboring data points at $x = 1.4$, $x = 1.5$, $x = 1.6$ and $x = 1.7$, but we only used that curve to interpolate between $x = 1.5$ and $x = 1.6$.) Since we don't actually care if the cubic matches the data points at $x = 1.4$ or $x = 1.7$, we could instead insist that our cubic have the correct derivatives at the two points we actually care about. The algebra involved gets somewhat complicated, so we will skip it here, but such an approach is called a *cubic spline*. The `interp1d` function can also find cubic splines, but not the non-smooth cubic interpolation that we did above. 
We could find a cubic spline interpolation with the following code: ``` xplot = np.linspace(1, 2, 1000) interp_func = scipy.interpolate.interp1d(x, y, kind='cubic') yplot = interp_func(xplot) plt.plot(xplot, yplot, 'b', x, y, 'ko') ``` If you zoom in on this graph, you will see that it is actually smooth. (Technically speaking, the cubic spline is $C^2$, so its first and second derivatives are continuous, but the third derivative is discontinuous at each data point.) Linear interpolations and cubic splines are two of the most common methods for interpolation, but the `interp1d` function includes several other methods. Most are variations of the cubic spline with different orders or different ways of enforcing smoothness. In this class, we will only use the default (linear) interpolation or cubic splines. ## Extrapolation So far, we have only focused on interpolating values. That is, we have been interested in finding $f(x)$ when $x$ is in between some of our data points. In principle, there isn't anything stopping us from using the same techniques to extrapolate data. For instance, we could use a linear interpolation or a spline to predict $f(x)$ even if $x > x_n$ or $x < x_1$. The `interp1d` function allows you to do this by adding the extra option `fill_value='extrapolate'` at the end of the function call. For example, we could predict $f(5)$ with the code ``` interp_func = scipy.interpolate.interp1d(x, y, fill_value='extrapolate') print(interp_func(5)) ``` or ``` interp_func = scipy.interpolate.interp1d(x, y, kind='cubic', fill_value='extrapolate') print(interp_func(5)) ``` In general, this is a bad idea. To see why, let's extrapolate at a lot of different $x$ values and plot our predictions. ``` xplot = np.linspace(-2, 5, 1000) interp_func = scipy.interpolate.interp1d(x, y, fill_value='extrapolate') yplot = interp_func(xplot) plt.plot(xplot, yplot, 'b', x, y, 'ko') interp_func = scipy.interpolate.interp1d(x, y, kind='cubic', fill_value='extrapolate') yplot = interp_func(xplot) plt.plot(xplot, yplot, 'b', x, y, 'ko') ``` Which of these is "less bad" is a matter of taste, but it is clear that neither has done a particularly good job of capturing any interesting patterns in our data. As a general rule, you shouldn't use any interpolation method for extrapolation if your new $x$ value is farther away than the typical spacing between your data points. Since our $x$'s are each 0.1 apart, this means that we shouldn't be using `interp1d` to extrapolate for any $x < 0.9$ or $x > 2.1$. Even within those limits, your extrapolated values will have as much to do with your chosen method as with the original data set. The problem isn't really with our methods; it's that extrapolation is hard. The only real way to extrapolate well is to have some outside knowledge of what function $f(x)$ fits your data. For instance, if you know that your data really are linear then you should use the methods from lecture 17 to find a best fit line for your data and then use that line to extrapolate. Likewise, if you know that your data really are exponential, you should use the methods from lecture 18 to find a best fit exponential curve for your data and then use that curve to extrapolate. Simply put, there is no one-size-fits-all extrapolation method.
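To make that last point concrete, here is a small sketch that is not part of the lecture above: if we had an outside reason to believe the data were roughly linear, we could fit a first-degree polynomial to all of the data and evaluate it outside the range of our $x$ values, rather than extending an interpolant past the data.

```
# Extrapolate with a best fit line (only sensible if we truly believe the data are linear).
line_coeffs = np.polyfit(x, y, 1)
print(np.polyval(line_coeffs, 5))  # prediction at x = 5, well outside the data
```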
## Project: Image Captioning

---

In this notebook, you will learn how to load and pre-process data from the [COCO dataset](http://cocodataset.org/#home). You will also design a CNN-RNN model for automatically generating image captions.

<a id='step1'></a>
## Step 1: Explore the Data Loader

We have already written a [data loader](http://pytorch.org/docs/master/data.html#torch.utils.data.DataLoader) that you can use to load the COCO dataset in batches.

In the code cell below, you will initialize the data loader by using the `get_loader` function in **data_loader.py**.

> For this project, you are not permitted to change the **data_loader.py** file, which must be used as-is.

The `get_loader` function takes as input a number of arguments that can be explored in **data_loader.py**. Take the time to explore these arguments now by opening **data_loader.py** in a new window. Most of the arguments must be left at their default values, and you are only allowed to amend the values of the arguments below:
1. **`transform`** - an [image transform](http://pytorch.org/docs/master/torchvision/transforms.html) specifying how to pre-process the images and convert them to PyTorch tensors before using them as input to the CNN encoder. For now, you are encouraged to keep the transform as provided in `transform_train`. You will have the opportunity later to choose your own image transform to pre-process the COCO images.
2. **`mode`** - one of `'train'` (loads the training data in batches) or `'test'` (for the test data). We will say that the data loader is in training or test mode, respectively. While following the instructions in this notebook, please keep the data loader in training mode by setting `mode='train'`.
3. **`batch_size`** - determines the batch size. When training the model, this is the number of image-caption pairs used to amend the model weights in each training step.
4. **`vocab_threshold`** - the total number of times that a word must appear in the training captions before it is used as part of the vocabulary. Words that have fewer than `vocab_threshold` occurrences in the training captions are considered unknown words.
5. **`vocab_from_file`** - a Boolean that decides whether to load the vocabulary from file.

We will describe the `vocab_threshold` and `vocab_from_file` arguments in more detail soon. For now, run the code cell below. Be patient - it may take a couple of minutes to run!

```
import sys
sys.path.append('/opt/cocoapi/PythonAPI')
from pycocotools.coco import COCO
!pip install nltk
import nltk
nltk.download('punkt')
from data_loader import get_loader
from torchvision import transforms

# Define a transform to pre-process the training images.
transform_train = transforms.Compose([
    transforms.Resize(256),                          # smaller edge of image resized to 256
    transforms.RandomCrop(224),                      # get 224x224 crop from random location
    transforms.RandomHorizontalFlip(),               # horizontally flip image with probability=0.5
    transforms.ToTensor(),                           # convert the PIL Image to a tensor
    transforms.Normalize((0.485, 0.456, 0.406),      # normalize image for pre-trained model
                         (0.229, 0.224, 0.225))])

# Set the minimum word count threshold.
vocab_threshold = 5

# Specify the batch size.
batch_size = 10

# Obtain the data loader.
data_loader = get_loader(transform=transform_train,
                         mode='train',
                         batch_size=batch_size,
                         vocab_threshold=vocab_threshold,
                         vocab_from_file=True)
```

When you ran the code cell above, the data loader was stored in the variable `data_loader`.

You can access the corresponding dataset as `data_loader.dataset`.
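For instance, a quick sanity check (a small sketch, not part of the project template) confirms what kind of objects are hiding behind the loader:

```
# Sketch: peek at the dataset and vocabulary objects behind the data loader.
print(type(data_loader.dataset))        # the CoCoDataset instance
print(type(data_loader.dataset.vocab))  # the Vocabulary instance described below
```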
This dataset is an instance of the `CoCoDataset` class in **data_loader.py**. If you are unfamiliar with data loaders and datasets, you are encouraged to review [this PyTorch tutorial](http://pytorch.org/tutorials/beginner/data_loading_tutorial.html). ### Exploring the `__getitem__` Method The `__getitem__` method in the `CoCoDataset` class determines how an image-caption pair is pre-processed before being incorporated into a batch. This is true for all `Dataset` classes in PyTorch; if this is unfamiliar to you, please review [the tutorial linked above](http://pytorch.org/tutorials/beginner/data_loading_tutorial.html). When the data loader is in training mode, this method begins by first obtaining the filename (`path`) of a training image and its corresponding caption (`caption`). #### Image Pre-Processing Image pre-processing is relatively straightforward (from the `__getitem__` method in the `CoCoDataset` class): ```python # Convert image to tensor and pre-process using transform image = Image.open(os.path.join(self.img_folder, path)).convert('RGB') image = self.transform(image) ``` After loading the image in the training folder with name `path`, the image is pre-processed using the same transform (`transform_train`) that was supplied when instantiating the data loader. #### Caption Pre-Processing The captions also need to be pre-processed and prepped for training. In this example, for generating captions, we are aiming to create a model that predicts the next token of a sentence from previous tokens, so we turn the caption associated with any image into a list of tokenized words, before casting it to a PyTorch tensor that we can use to train the network. To understand in more detail how COCO captions are pre-processed, we'll first need to take a look at the `vocab` instance variable of the `CoCoDataset` class. The code snippet below is pulled from the `__init__` method of the `CoCoDataset` class: ```python def __init__(self, transform, mode, batch_size, vocab_threshold, vocab_file, start_word, end_word, unk_word, annotations_file, vocab_from_file, img_folder): ... self.vocab = Vocabulary(vocab_threshold, vocab_file, start_word, end_word, unk_word, annotations_file, vocab_from_file) ... ``` From the code snippet above, you can see that `data_loader.dataset.vocab` is an instance of the `Vocabulary` class from **vocabulary.py**. Take the time now to verify this for yourself by looking at the full code in **data_loader.py**. We use this instance to pre-process the COCO captions (from the `__getitem__` method in the `CoCoDataset` class): ```python # Convert caption to tensor of word ids. tokens = nltk.tokenize.word_tokenize(str(caption).lower()) # line 1 caption = [] # line 2 caption.append(self.vocab(self.vocab.start_word)) # line 3 caption.extend([self.vocab(token) for token in tokens]) # line 4 caption.append(self.vocab(self.vocab.end_word)) # line 5 caption = torch.Tensor(caption).long() # line 6 ``` As you will see soon, this code converts any string-valued caption to a list of integers, before casting it to a PyTorch tensor. To see how this code works, we'll apply it to the sample caption in the next code cell. ``` sample_caption = 'A person doing a trick on a rail while riding a skateboard.' ``` In **`line 1`** of the code snippet, every letter in the caption is converted to lowercase, and the [`nltk.tokenize.word_tokenize`](http://www.nltk.org/) function is used to obtain a list of string-valued tokens. Run the next code cell to visualize the effect on `sample_caption`. 
``` import nltk sample_tokens = nltk.tokenize.word_tokenize(str(sample_caption).lower()) print(sample_tokens) ``` In **`line 2`** and **`line 3`** we initialize an empty list and append an integer to mark the start of a caption. The [paper](https://arxiv.org/pdf/1411.4555.pdf) that you are encouraged to implement uses a special start word (and a special end word, which we'll examine below) to mark the beginning (and end) of a caption. This special start word (`"<start>"`) is decided when instantiating the data loader and is passed as a parameter (`start_word`). You are **required** to keep this parameter at its default value (`start_word="<start>"`). As you will see below, the integer `0` is always used to mark the start of a caption. ``` sample_caption = [] start_word = data_loader.dataset.vocab.start_word print('Special start word:', start_word) sample_caption.append(data_loader.dataset.vocab(start_word)) print(sample_caption) ``` In **`line 4`**, we continue the list by adding integers that correspond to each of the tokens in the caption. ``` sample_caption.extend([data_loader.dataset.vocab(token) for token in sample_tokens]) print(sample_caption) ``` In **`line 5`**, we append a final integer to mark the end of the caption. Identical to the case of the special start word (above), the special end word (`"<end>"`) is decided when instantiating the data loader and is passed as a parameter (`end_word`). You are **required** to keep this parameter at its default value (`end_word="<end>"`). As you will see below, the integer `1` is always used to mark the end of a caption. ``` end_word = data_loader.dataset.vocab.end_word print('Special end word:', end_word) sample_caption.append(data_loader.dataset.vocab(end_word)) print(sample_caption) ``` Finally, in **`line 6`**, we convert the list of integers to a PyTorch tensor and cast it to [long type](http://pytorch.org/docs/master/tensors.html#torch.Tensor.long). You can read more about the different types of PyTorch tensors on the [website](http://pytorch.org/docs/master/tensors.html). ``` import torch sample_caption = torch.Tensor(sample_caption).long() print(sample_caption) ``` And that's it! In summary, any caption is converted to a list of tokens, with _special_ start and end tokens marking the beginning and end of the sentence: ``` [<start>, 'a', 'person', 'doing', 'a', 'trick', 'while', 'riding', 'a', 'skateboard', '.', <end>] ``` This list of tokens is then turned into a list of integers, where every distinct word in the vocabulary has an associated integer value: ``` [0, 3, 98, 754, 3, 396, 207, 139, 3, 753, 18, 1] ``` Finally, this list is converted to a PyTorch tensor. All of the captions in the COCO dataset are pre-processed using this same procedure from **`lines 1-6`** described above. As you saw, in order to convert a token to its corresponding integer, we call `data_loader.dataset.vocab` as a function. The details of how this call works can be explored in the `__call__` method in the `Vocabulary` class in **vocabulary.py**. ```python def __call__(self, word): if not word in self.word2idx: return self.word2idx[self.unk_word] return self.word2idx[word] ``` The `word2idx` instance variable is a Python [dictionary](https://docs.python.org/3/tutorial/datastructures.html#dictionaries) that is indexed by string-valued keys (mostly tokens obtained from training captions). For each key, the corresponding value is the integer that the token is mapped to in the pre-processing step. Use the code cell below to view a subset of this dictionary. 
``` # Preview the word2idx dictionary. dict(list(data_loader.dataset.vocab.word2idx.items())[:10]) ``` We also print the total number of keys. ``` # Print the total number of keys in the word2idx dictionary. print('Total number of tokens in vocabulary:', len(data_loader.dataset.vocab)) ``` As you will see if you examine the code in **vocabulary.py**, the `word2idx` dictionary is created by looping over the captions in the training dataset. If a token appears no less than `vocab_threshold` times in the training set, then it is added as a key to the dictionary and assigned a corresponding unique integer. You will have the option later to amend the `vocab_threshold` argument when instantiating your data loader. Note that in general, **smaller** values for `vocab_threshold` yield a **larger** number of tokens in the vocabulary. You are encouraged to check this for yourself in the next code cell by decreasing the value of `vocab_threshold` before creating a new data loader. ``` # Modify the minimum word count threshold. vocab_threshold = 4 # Obtain the data loader. data_loader = get_loader(transform=transform_train, mode='train', batch_size=batch_size, vocab_threshold=vocab_threshold, vocab_from_file=False) # Print the total number of keys in the word2idx dictionary. print('Total number of tokens in vocabulary:', len(data_loader.dataset.vocab)) ``` There are also a few special keys in the `word2idx` dictionary. You are already familiar with the special start word (`"<start>"`) and special end word (`"<end>"`). There is one more special token, corresponding to unknown words (`"<unk>"`). All tokens that don't appear anywhere in the `word2idx` dictionary are considered unknown words. In the pre-processing step, any unknown tokens are mapped to the integer `2`. ``` unk_word = data_loader.dataset.vocab.unk_word print('Special unknown word:', unk_word) print('All unknown words are mapped to this integer:', data_loader.dataset.vocab(unk_word)) ``` Check this for yourself below, by pre-processing the provided nonsense words that never appear in the training captions. ``` print(data_loader.dataset.vocab('jfkafejw')) print(data_loader.dataset.vocab('ieowoqjf')) ``` The final thing to mention is the `vocab_from_file` argument that is supplied when creating a data loader. To understand this argument, note that when you create a new data loader, the vocabulary (`data_loader.dataset.vocab`) is saved as a [pickle](https://docs.python.org/3/library/pickle.html) file in the project folder, with filename `vocab.pkl`. If you are still tweaking the value of the `vocab_threshold` argument, you **must** set `vocab_from_file=False` to have your changes take effect. But once you are happy with the value that you have chosen for the `vocab_threshold` argument, you need only run the data loader *one more time* with your chosen `vocab_threshold` to save the new vocabulary to file. Then, you can henceforth set `vocab_from_file=True` to load the vocabulary from file and speed the instantiation of the data loader. Note that building the vocabulary from scratch is the most time-consuming part of instantiating the data loader, and so you are strongly encouraged to set `vocab_from_file=True` as soon as you are able. Note that if `vocab_from_file=True`, then any supplied argument for `vocab_threshold` when instantiating the data loader is completely ignored. ``` # Obtain the data loader (from file). Note that it runs much faster than before! 
data_loader = get_loader(transform=transform_train, mode='train', batch_size=batch_size, vocab_from_file=True) ``` In the next section, you will learn how to use the data loader to obtain batches of training data. <a id='step2'></a> ## Step 2: Use the Data Loader to Obtain Batches The captions in the dataset vary greatly in length. You can see this by examining `data_loader.dataset.caption_lengths`, a Python list with one entry for each training caption (where the value stores the length of the corresponding caption). In the code cell below, we use this list to print the total number of captions in the training data with each length. As you will see below, the majority of captions have length 10. Likewise, very short and very long captions are quite rare. ``` from collections import Counter # Tally the total number of training captions with each length. counter = Counter(data_loader.dataset.caption_lengths) lengths = sorted(counter.items(), key=lambda pair: pair[1], reverse=True) for value, count in lengths: print('value: %2d --- count: %5d' % (value, count)) ``` To generate batches of training data, we begin by first sampling a caption length (where the probability that any length is drawn is proportional to the number of captions with that length in the dataset). Then, we retrieve a batch of size `batch_size` of image-caption pairs, where all captions have the sampled length. This approach for assembling batches matches the procedure in [this paper](https://arxiv.org/pdf/1502.03044.pdf) and has been shown to be computationally efficient without degrading performance. Run the code cell below to generate a batch. The `get_train_indices` method in the `CoCoDataset` class first samples a caption length, and then samples `batch_size` indices corresponding to training data points with captions of that length. These indices are stored below in `indices`. These indices are supplied to the data loader, which then is used to retrieve the corresponding data points. The pre-processed images and captions in the batch are stored in `images` and `captions`. ``` import numpy as np import torch.utils.data as data # Randomly sample a caption length, and sample indices with that length. indices = data_loader.dataset.get_train_indices() print('sampled indices:', indices) # Create and assign a batch sampler to retrieve a batch with the sampled indices. new_sampler = data.sampler.SubsetRandomSampler(indices=indices) data_loader.batch_sampler.sampler = new_sampler # Obtain the batch. images, captions = next(iter(data_loader)) print('images.shape:', images.shape) print('captions.shape:', captions.shape) # (Optional) Uncomment the lines of code below to print the pre-processed images and captions. # print('images:', images) # print('captions:', captions) ``` Each time you run the code cell above, a different caption length is sampled, and a different batch of training data is returned. Run the code cell multiple times to check this out!
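If you'd rather see that variation programmatically, a small loop along the following lines (a sketch reusing the `data_loader`, the `data` module alias, and the helper method from the cells above) samples a few batches and prints their shapes; the second dimension of `captions.shape` should change from draw to draw.

```
# Sketch: draw a few batches to watch the sampled caption length change.
for _ in range(3):
    indices = data_loader.dataset.get_train_indices()
    data_loader.batch_sampler.sampler = data.sampler.SubsetRandomSampler(indices=indices)
    images, captions = next(iter(data_loader))
    print('captions.shape:', captions.shape)
```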
# The Boy or Girl Paradox

The following problem was posed by the mathematician Martin Gardner in Scientific American in 1959. The original wording is as follows:

* (Q1) Mr. Jones has two children. The older child is a boy. What is the probability that both children are boys?
* (Q2) Mr. Smith has two children. At least one of them is a boy. What is the probability that both children are boys?

Without changing the substance of the problem, it can be restated like this:

* (Problem 1) A family has two children, and the first child is a boy. What is the probability that both children are boys?
* (Problem 2) A family has two children, and at least one of the two children is a boy. What is the probability that both children are boys?

The possible combinations of the two children's sexes are:

<table class="table-bordered">
<tbody>
 <tr>
  <td></td>
  <td>Second child = Boy</td>
  <td>Second child = Girl</td>
 </tr>
 <tr>
  <td>First child = Boy</td>
  <td>BB</td>
  <td>BG</td>
 </tr>
 <tr>
  <td>First child = Girl</td>
  <td>GB</td>
  <td>GG</td>
 </tr>
</tbody></table>

The answer to the first problem is $\dfrac{1}{2}$.

<table class="table-bordered">
<tbody>
 <tr>
  <td></td>
  <td>Second child = Boy</td>
  <td>Second child = Girl</td>
 </tr>
 <tr>
  <td>First child = Boy</td>
  <td>BB</td>
  <td>BG</td>
 </tr>
 <tr>
  <td><s>First child = Girl</s></td>
  <td><s>GB</s></td>
  <td><s>GG</s></td>
 </tr>
</tbody></table>

The reason this problem is a paradox is that the second problem actually admits two different answers. The answer depends on the quality of the information "at least one of the two children is a boy." Consider the following two cases.

* Case 1: The parents answered "yes" to the question "Is at least one of your two children a boy?"
* Case 2: You happened to see a child coming out of the house and the child was a boy, or you called the house and a young boy answered the phone.

In Case 1, the probability that both children are boys is $\dfrac{1}{3}$, as the table shows.

<table class="table-bordered">
<tbody>
 <tr>
  <td></td>
  <td>Second child = Boy</td>
  <td>Second child = Girl</td>
 </tr>
 <tr>
  <td>First child = Boy</td>
  <td>BB</td>
  <td>BG</td>
 </tr>
 <tr>
  <td>First child = Girl</td>
  <td>GB</td>
  <td><s>GG</s></td>
 </tr>
</tbody></table>

Solving this with Bayes' theorem gives the following, where $Y$ denotes the event that the parents answered "yes" to the question "Is at least one of your two children a boy?"

$$
\begin{eqnarray}
P(BB|Y) 
&=& \dfrac{P(Y|BB)P(BB)}{P(Y)} \\
&=& \dfrac{P(Y|BB)P(BB)}{P(Y|BB)P(BB) + P(Y|BG)P(BG) + P(Y|GB)P(GB) + P(Y|GG)P(GG)} \\
&=& \dfrac{1\cdot 0.25}{1\cdot 0.25 + 1\cdot 0.25 + 1\cdot 0.25 + 0\cdot 0.25} \\
&=& \dfrac{0.25}{0.75} = \dfrac{1}{3}
\end{eqnarray}
$$

In Case 2, a boy may exist in the family even when no boy happened to be observed, so the answer becomes $\dfrac{1}{2}$:

<table>
<tbody>
 <tr>
  <td></td>
  <td>Second child = Boy</td>
  <td>Second child = Girl</td>
 </tr>
 <tr>
  <td>First child = Boy</td>
  <td>BB</td>
  <td>BG (probability of observing a boy: 1/2)</td>
 </tr>
 <tr>
  <td>First child = Girl</td>
  <td>GB (probability of observing a boy: 1/2)</td>
  <td><s>GG</s></td>
 </tr>
</tbody></table>

Solving with Bayes' theorem gives the following, where $Y$ denotes the event "you happened to see a child coming out of the house and the child was a boy."

$$
\begin{eqnarray}
P(BB|Y) 
&=& \dfrac{P(Y|BB)P(BB)}{P(Y)} \\
&=& \dfrac{P(Y|BB)P(BB)}{P(Y|BB)P(BB) + P(Y|BG)P(BG) + P(Y|GB)P(GB) + P(Y|GG)P(GG)} \\
&=& \dfrac{1\cdot 0.25}{1\cdot 0.25 + 0.5\cdot 0.25 + 0.5\cdot 0.25 + 0\cdot 0.25} \\
&=& \dfrac{0.25}{0.50} = \dfrac{1}{2}
\end{eqnarray}
$$
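As a quick numerical check (a Monte Carlo sketch added for illustration; it is not part of the original text), both conditional probabilities can be estimated by simulating two-child families:

```
# Sketch: simulate two-child families and estimate both conditional probabilities.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
first = rng.integers(0, 2, n)    # 1 = boy, 0 = girl
second = rng.integers(0, 2, n)
both_boys = (first == 1) & (second == 1)

# Case 1: the parents confirm "at least one is a boy"
at_least_one_boy = (first == 1) | (second == 1)
print(both_boys[at_least_one_boy].mean())   # ~1/3

# Case 2: one child is observed at random and happens to be a boy
seen_first = rng.integers(0, 2, n).astype(bool)
seen_is_boy = np.where(seen_first, first, second) == 1
print(both_boys[seen_is_boy].mean())        # ~1/2
```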
# Chapter 2: Working in NetworkX

```
# Configure plotting in Jupyter
from matplotlib import pyplot as plt
%matplotlib inline
plt.rcParams.update({
    'figure.figsize': (7.5, 7.5),
    'axes.spines.right': False,
    'axes.spines.left': False,
    'axes.spines.top': False,
    'axes.spines.bottom': False})
# Seed random number generator
import random
from numpy import random as nprand
seed = hash("Network Science in Python") % 2**32
nprand.seed(seed)
random.seed(seed)
# Import networkx
import networkx as nx
```

## The Graph Class: Working with undirected networks

```
G = nx.karate_club_graph()
karate_pos = nx.spring_layout(G, k=0.3)
nx.draw_networkx(G, karate_pos)
list(G.nodes)
list(G.edges)
```

### Checking for nodes

```
mr_hi = 0
mr_hi in G
G.has_node(mr_hi)
wild_goose = 1337
wild_goose in G
G.has_node(wild_goose)
```

### Finding node neighbors

```
list(G.neighbors(mr_hi))
member_id = 1
(mr_hi, member_id) in G.edges
G.has_edge(mr_hi, member_id)
john_a = 33
(mr_hi, john_a) in G.edges
G.has_edge(mr_hi, john_a)
```

## Adding attributes to nodes and edges

```
member_club = [
    0, 0, 0, 0, 0, 0, 0, 0, 1, 1,
    0, 0, 0, 0, 1, 1, 0, 0, 1, 0,
    1, 0, 1, 1, 1, 1, 1, 1, 1, 1,
    1, 1, 1, 1]
for node_id in G.nodes:
    G.nodes[node_id]["club"] = member_club[node_id]
G.add_node(11, club=0)
G.nodes[mr_hi]
G.nodes[john_a]
node_color = [
    '#1f78b4' if G.nodes[v]["club"] == 0
    else '#33a02c' for v in G]
nx.draw_networkx(G, karate_pos, label=True, node_color=node_color)
# Iterate through all edges
for v, w in G.edges:
    # Compare `club` property of edge endpoints
    # Set edge `internal` property to True if they match
    if G.nodes[v]["club"] == G.nodes[w]["club"]:
        G.edges[v, w]["internal"] = True
    else:
        G.edges[v, w]["internal"] = False
internal = [e for e in G.edges if G.edges[e]["internal"]]
external = [e for e in G.edges if not G.edges[e]["internal"]]
# Draw nodes and node labels
nx.draw_networkx_nodes(G, karate_pos, node_color=node_color)
nx.draw_networkx_labels(G, karate_pos)
# Draw internal edges as solid lines
nx.draw_networkx_edges(G, karate_pos, edgelist=internal)
# Draw external edges as dashed lines
nx.draw_networkx_edges(G, karate_pos, edgelist=external, style="dashed")
```

## Adding Edge Weights

```
def tie_strength(G, v, w):
    # Get neighbors of nodes v and w in G
    v_neighbors = set(G.neighbors(v))
    w_neighbors = set(G.neighbors(w))
    # Return 1 plus the number of common neighbors
    return 1 + len(v_neighbors & w_neighbors)
# Calculate weight for each edge
for v, w in G.edges:
    G.edges[v, w]["weight"] = tie_strength(G, v, w)
# Store weights in a list
edge_weights = [G.edges[v, w]["weight"] for v, w in G.edges]

weighted_pos = nx.spring_layout(G, pos=karate_pos, k=0.3, weight="weight")
# Draw network with edge color determined by weight
nx.draw_networkx(
    G, weighted_pos, width=8, node_color=node_color,
    edge_color=edge_weights, edge_vmin=0, edge_vmax=6,
    edge_cmap=plt.cm.Blues)
# Draw solid/dashed lines on top of internal/external edges
nx.draw_networkx_edges(G, weighted_pos, edgelist=internal, edge_color="gray")
nx.draw_networkx_edges(G, weighted_pos, edgelist=external, edge_color="gray", style="dashed")
```

## The DiGraph Class: When direction matters

```
G = nx.read_gexf("data/knecht2008/klas12b-net-1.gexf", node_type=int)
student_pos = nx.spring_layout(G, k=1.5)
nx.draw_networkx(G, student_pos, arrowsize=20)
list(G.neighbors(0))
list(G.successors(0))
list(G.predecessors(0))
# Create undirected copies of G
G_either = G.to_undirected()
G_both = G.to_undirected(reciprocal=True)
# Set up a figure
plt.figure(figsize=(10,5))
# Draw G_either on left
plt.subplot(1, 2, 1) nx.draw_networkx(G_either, student_pos) # Draw G_both on right plt.subplot(1, 2, 2) nx.draw_networkx(G_both, student_pos) ``` ## MultiGraph and MultiDiGraph: Parallel edges ``` # The seven bridges of Königsberg G = nx.MultiGraph() G.add_edges_from([ ("North Bank", "Kneiphof", {"bridge": "Krämerbrücke"}), ("North Bank", "Kneiphof", {"bridge": "Schmiedebrücke"}), ("North Bank", "Lomse", {"bridge": "Holzbrücke"}), ("Lomse", "Kneiphof", {"bridge": "Dombrücke"}), ("South Bank", "Kneiphof", {"bridge": "Grüne Brücke"}), ("South Bank", "Kneiphof", {"bridge": "Köttelbrücke"}), ("South Bank", "Lomse", {"bridge": "Hohe Brücke"}) ]) list(G.edges)[0] G.edges['North Bank', 'Kneiphof', 0] ```
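As a small follow-up (a sketch added here, not taken from the chapter), parallel edges in a `MultiGraph` can be listed together with their keys and attributes, and the node degrees confirm Euler's classic observation that no walk can cross every bridge exactly once; `nx.has_eulerian_path` is assumed to be available in your NetworkX version (2.4 or later).

```
# Sketch: enumerate parallel edges with their keys and check Euler's condition.
for v, w, key, attrs in G.edges(keys=True, data=True):
    print(v, '--', w, key, attrs["bridge"])

# An Eulerian path needs at most two nodes of odd degree;
# in Königsberg all four land masses have odd degree.
print(dict(G.degree()))
print(nx.has_eulerian_path(G))
```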
# Plan for this notebook

We will attempt to attack the discriminator.

# Technicalities & getting the data

Don't forget to change the runtime to include a GPU (Runtime -> Change runtime type)

```
import matplotlib.pyplot as plt
import matplotlib
import numpy as np
import pandas as pd
```

Also, let's mount the Google Drive to save the models. As you surely remember, the filesystem in a colab runtime is not persistent.

```
import os

GDRIVE_PATH = '/data/apsidorenko'

THIS_EXERCISE_PATH = os.path.join(GDRIVE_PATH, "GAN_exercises")
MODELS_HOME = os.path.join(THIS_EXERCISE_PATH, "mnist guns")

os.makedirs(THIS_EXERCISE_PATH, exist_ok=True)
os.makedirs(MODELS_HOME, exist_ok=True)
```

Get the data:

```
df = pd.read_csv('https://query.data.world/s/nap7jvxtupud25z5ljvtbzzjjsqqay')
df.head()
target = pd.read_csv('https://query.data.world/s/sn3dximsq5sw3a6wtqoc3okulevugz')
target.head()
from sklearn.model_selection import train_test_split
train, test, tar_train, tar_test = train_test_split(df, target, test_size=0.2, random_state=12345)
train = np.array(train, dtype='float')
test = np.array(test, dtype='float')
tar_train = np.array(tar_train, dtype='float')
tar_test = np.array(tar_test, dtype='float')
train = train.reshape((-1, 1, 28, 28)) / 255.
test = test.reshape((-1, 1, 28, 28)) / 255.
```

Let's see what we've got. Here's a function to plot an (optionally random) subset of images:

```
def plot_images(images: np.ndarray,
                nrows: int=5,
                ncols: int=5,
                shuffle: bool=True,
                title: str="",
                figure: matplotlib.figure.Figure=None) -> matplotlib.figure.Figure:
    """
    Plots a subset of images.

    Args:
        images[n_images, n_channels, width, height]: a dataset with images to plot
        nrows: number of images in a plotted row
        ncols: number of images in a plotted column
        shuffle: if True draw a random subset of images, if False -- the first ones
        figure: if not None, it's used for plotting, if None, a new one is created

    Returns:
        a figure containing the plotted images
    """
    if shuffle:
        images_to_plot = images[np.random.permutation(len(images))[:nrows*ncols]]
    else:
        images_to_plot = images[:nrows * ncols]

    h, w = images_to_plot.shape[2:]

    if figure is None:
        figure = plt.figure(figsize=(8,8))
    axes = figure.subplots(nrows=nrows, ncols=ncols)
    for row_idx, ax_row in enumerate(axes):
        for col_idx, ax in enumerate(ax_row):
            ax.imshow(images_to_plot[row_idx * ncols + col_idx, 0], interpolation="none")
            ax.set_axis_off()
    figure.suptitle(title, fontsize=18)
    return figure

plot_images(train, title="Some digits");
```

# Building the GAN

Finally, let's import torch and define the Reshape layer (same as in the introduction to PyTorch):

```
import torch
from torch import nn
from torch.nn.functional import logsigmoid

class Reshape(torch.nn.Module):
    """
    Reshapes a tensor starting from the 1st dimension (not 0th),
    i.e. without influencing the batch dimension.
""" def __init__(self, *shape): super(Reshape, self).__init__() self.shape = shape def forward(self, x): return x.view(x.shape[0], *self.shape) class Flatten(nn.Module): def forward(self, input): return input.view(input.shape[0], -1) ``` ### Generator & Discriminator ``` GENERATOR_FILE = os.path.join(MODELS_HOME, 'generator_other.pt') DISCRIMINATOR_FILE = os.path.join(MODELS_HOME, 'discriminator_other.pt') CODE_SIZE = 256 DROPOUT_RATE = 0.1 try: generator = torch.load(GENERATOR_FILE) discriminator = torch.load(DISCRIMINATOR_FILE) except FileNotFoundError: print('FUUUUUU!') def sample_fake(batch_size): noise = torch.randn(batch_size, CODE_SIZE, device="cuda") return generator(noise) # A small check that the generator output has the right size test_generated_data = sample_fake(1) assert tuple(test_generated_data.shape[1:]) == train.shape[1:] # As advertised, a discriminator outputs a single number per image assert discriminator(test_generated_data).shape == (1, 1) ``` Check that generator and discriminator complexity is roughly the same: ``` def get_n_params(model): return sum(p.reshape(-1).shape[0] for p in model.parameters()) print('generator params:', get_n_params(generator)) print('discriminator params:', get_n_params(discriminator)) ``` Then, we need a function to sample real and fake images: ``` def sample_images(batch_size): ids = np.random.choice(len(train), size=batch_size) return torch.tensor(train[ids], device="cuda").float() ``` Let's have a look what we can generate before any training: ``` generator.eval() imgs = sample_fake(25).cpu().detach().numpy() plot_images(imgs.clip(0, 1)); ``` Unsurprisingly, the core loss math is the same as we had in the 1D GAN. And we add some noise. Question to you: what for? In this notebook, we want to reach overfitting. 
```
noise_power = 0
gradient_penalty = 0.000

def generator_loss(fake):
    return -logsigmoid(discriminator(
        fake + torch.randn(*fake.shape, device="cuda") * noise_power
    )).mean()

def discriminator_loss(real, fake):
    return -logsigmoid(discriminator(
        real + torch.randn(*real.shape, device="cuda") * noise_power
    )).mean() - \
        logsigmoid(-discriminator(
            fake + torch.randn(*fake.shape, device="cuda") * noise_power
        )).mean()

def discriminator_penalty(real, size=gradient_penalty):
    scores = discriminator(real)
    grad_params = torch.autograd.grad(scores.mean(), discriminator.parameters(), create_graph=True)
    penalty = sum((grad**2).sum() for grad in grad_params)
    return penalty * size
```

Let's do some more set-up and run the learning process:

```
optimizer_generator = \
    torch.optim.RMSprop(generator.parameters(), lr=0.001)
optimizer_discriminator = \
    torch.optim.RMSprop(discriminator.parameters(), lr=0.001)

disc_scheduler = torch.optim.lr_scheduler.StepLR(optimizer_discriminator, step_size=10, gamma=0.999)
gen_scheduler = torch.optim.lr_scheduler.StepLR(optimizer_generator, step_size=10, gamma=0.999)

VALIDATION_INTERVAL = 200
SAVE_INTERVAL = 500
DISCRIMINATOR_ITERATIONS_PER_GENERATOR = 1
BATCH_SIZE = 128

losses = np.zeros(100)

from IPython.display import clear_output

for i in range(100):
    # Set our models to training mode:
    generator.train()
    discriminator.train()
    gen_scheduler.step()
    disc_scheduler.step()

    # Several discriminator updates per step:
    for j in range(DISCRIMINATOR_ITERATIONS_PER_GENERATOR):
        # Sampling reals and fakes
        real = sample_images(BATCH_SIZE)
        fake = sample_fake(BATCH_SIZE)

        # Calculating the loss
        discriminator_loss_this_iter = discriminator_loss(real, fake) #discriminator_penalty(real)

        # Doing our regular optimization step for the discriminator
        optimizer_discriminator.zero_grad()
        discriminator_loss_this_iter.backward()
        optimizer_discriminator.step()

        # Pass the discriminator loss to Tensorboard for plotting
        #summary_writer.add_scalar("discriminator loss", discriminator_loss_this_iter,
        #                          global_step=i)

    # Now it's generator's time to learn:
    #generator_loss_this_iter = generator_loss(sample_fake(BATCH_SIZE))
    #summary_writer.add_scalar("generator loss", generator_loss_this_iter,
    #                          global_step=i)
    #optimizer_generator.zero_grad()
    #generator_loss_this_iter.backward()
    #optimizer_generator.step()

    losses[i] = discriminator_loss_this_iter.item()

    if i % SAVE_INTERVAL == 0:
        torch.save(generator, GENERATOR_FILE)
        torch.save(discriminator, DISCRIMINATOR_FILE)

    if i % VALIDATION_INTERVAL == 0:
        clear_output(wait=True)
        generator.eval()
        imgs = sample_fake(25).cpu().detach().numpy()
        plot_images(imgs.clip(0, 1), title='Iteration '+str(i));
        plt.show();
```

We've obtained satisfactory results; the generated digits look quite realistic.

```
plt.plot(losses)
```

# Trying to attack

```
n1 = 500
n2 = 500
sample_train = train[np.random.permutation(train.shape[0])[0:n1]]
sample_test = test[np.random.permutation(test.shape[0])[0:n2]]
our_sample = np.concatenate((sample_train, sample_test), axis=0)
(our_sample[500+45] == sample_test[45]).all()
our_sample.shape
labels = np.array([x < n1 for x in range(n1+n2)], dtype=int)
discriminator.eval()
ans = discriminator(torch.tensor(our_sample).float().cuda()).cpu().detach().numpy().reshape(-1)
df = pd.DataFrame({'disc':ans, 'label':labels})
df.head()
df.sort_values(by='disc', ascending=False).iloc[0:n1]['label'].sum() / n1
df.sort_values(by='disc', ascending=False)
# random-choice baseline for comparison
df['label'].sample(n=n1).sum() / n1
```

The discriminator is too stupid to show overfitting.
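To put a number on the (absent) membership leakage, one option — a sketch assuming scikit-learn is installed, not part of the original exercise — is to treat the discriminator score as a membership classifier and compute its ROC AUC; a value close to 0.5 means the score carries essentially no train-versus-test information.

```
# Sketch: quantify membership leakage with ROC AUC (0.5 = no leakage).
from sklearn.metrics import roc_auc_score

print('membership AUC:', roc_auc_score(df['label'], df['disc']))
```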
# Energy mix analysis

In the results of the ``westeros_bis_baseline.ipynb`` scenario, we notice that the gas power plant is not in use and that wind generation decreases until year 710, when it goes out of service because of its 20-year lifetime. At that point, it is more economical to rely on the coal plant alone rather than build a new wind plant: all investment costs are avoided by the cost-minimization algorithm. For the same reason, the construction of a gas plant is not even planned in year 690. The price of electricity decreases because of the increasing share of coal generation.

## Proposed adjustments

Add a sensitivity analysis starting the baseline scenario in year 690 with
- 40% share of coal and 20% gas generation (``westeros_bis_energymix1.ipynb``)
- 20% share of coal and 40% gas generation (``westeros_bis_energymix2.ipynb``)

and compare the results.

### Pre-requisites
- You have the MESSAGEix framework installed and working
- You have run the ``westeros_bis_baseline.ipynb`` scenario and solved it successfully

Importing the baseline scenario

```
import pandas as pd
import ixmp
import message_ix
from message_ix.util import make_df

%matplotlib inline

mp = ixmp.Platform()
model = 'Westeros Electrified'
base = message_ix.Scenario(mp, model=model, scenario='baseline')
scen_mix1 = base.clone(model, 'energy mix1', 'exploring the share of fossil generation', keep_solution=False)
scen_mix1.check_out()
year_df = scen_mix1.vintage_and_active_years()
vintage_years, act_years = year_df['year_vtg'], year_df['year_act']
model_horizon = scen_mix1.set('year')
country = 'Westeros'
```

## Editing the energy mix at year 690

Re-introducing the parameters needed to compute the base capacity.

```
history = [690]
demand_per_year = 40 * 12 * 1000 / 8760
grid_efficiency = 0.9
historic_demand = 0.85 * demand_per_year
historic_generation = historic_demand / grid_efficiency

# key parameters!
coal_fraction = 0.4
ngcc_fraction = 0.2
```

Re-introducing the capacity factors, base capacity and base activity

```
base_capacity_factor = {
    'node_loc': country,
    'year_vtg': vintage_years,
    'year_act': act_years,
    'time': 'year',
    'unit': '-',
}

capacity_factor = {
    # power plants cf as indicated in the reference https://doi.org/10.1016/B978-0-12-810448-4.00001-X
    'coal_ppl': 0.85, # used to be 1 in the absence of other fossil generation, now it is lower than one
    'wind_ppl': 0.36,
    'ngcc_ppl': 0.87,
    'bulb': 1,
}

for tec, val in capacity_factor.items():
    df = make_df(base_capacity_factor, technology=tec, value=val)
    scen_mix1.add_par('capacity_factor', df)

base_capacity = {
    'node_loc': country,
    'year_vtg': history,
    'unit': 'GWa',
}

base_activity = {
    'node_loc': country,
    'year_act': history,
    'mode': 'standard',
    'time': 'year',
    'unit': 'GWa',
}

old_activity = {
    'coal_ppl': coal_fraction * historic_generation,
    'ngcc_ppl': ngcc_fraction * historic_generation,
    'wind_ppl': (1 - coal_fraction - ngcc_fraction) * historic_generation,
}

for tec, val in old_activity.items():
    df = make_df(base_activity, technology=tec, value=val)
    scen_mix1.add_par('historical_activity', df)

act_to_cap = {
    'coal_ppl': 1 / 10 / capacity_factor['coal_ppl'] / 3.5, # 35 year lifetime
    'wind_ppl': 1 / 10 / capacity_factor['wind_ppl'] / 2,
    'ngcc_ppl': 1 / 10 / capacity_factor['ngcc_ppl'] / 2.5, # 25 year lifetime
}

for tec in act_to_cap:
    value = old_activity[tec] * act_to_cap[tec]
    df = make_df(base_capacity, technology=tec, value=value)
    scen_mix1.add_par('historical_new_capacity', df)
```

Solving the model

```
scen_mix1.commit(comment='setting the year-690 energy mix (40% coal, 20% gas)')
scen_mix1.set_as_default()
scen_mix1.solve()
scen_mix1.var('OBJ')['lvl']
```

Plotting the results

```
from message_ix.reporting import Reporter
from message_ix.util.tutorial import prepare_plots

rep = Reporter.from_scenario(scen_mix1)
prepare_plots(rep)
```

Activity

```
rep.set_filters(t=["coal_ppl", "wind_ppl", "ngcc_ppl"])
rep.get("plot activity")
```

Capacity

```
rep.get("plot capacity")
```

Electricity price

```
rep.set_filters(t=None, c=["light"])
rep.get("plot prices")
```

# Comments on the results

In ``westeros_bis_energymix1.ipynb``, with respect to ``westeros_bis_baseline.ipynb``:
- The total cost is reduced from 466358.3125 (coal only) to 408462.5625 (a 12.4% reduction).
- Coal generation increases in year 700 and stays constant over the last two decades.
- Gas generation is present, increasing, and replaces the whole wind share in the last decade, since it is a cheaper technology.
- The price decreases progressively from about 5, to about 3, to about 1.5.

Close the connection to the database

```
mp.close_db()
```
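To double-check the cost comparison quoted above, one could reconnect to the platform and read the objective values of the two scenarios directly (a sketch; it assumes the baseline scenario is stored on the same platform and that both scenarios have a default version):

```
# Sketch: compare total system cost of the baseline and 'energy mix1' scenarios.
mp = ixmp.Platform()
base = message_ix.Scenario(mp, model='Westeros Electrified', scenario='baseline')
mix1 = message_ix.Scenario(mp, model='Westeros Electrified', scenario='energy mix1')

obj_base = base.var('OBJ')['lvl']
obj_mix1 = mix1.var('OBJ')['lvl']
print('baseline:   ', obj_base)
print('energy mix1:', obj_mix1)
print('reduction: %.1f%%' % (100 * (1 - obj_mix1 / obj_base)))

mp.close_db()
```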
```
import mmf_setup;mmf_setup.nbinit()
```

# Parameters

```
KEY = 'ALF4'

import constants as u
import tools
data = tools.Data(KEY)
print(data.dataset._dims['params'])
```

# Mass/Radius Relationships

```
%pylab inline --no-import-all
import tools
data = tools.Data(KEY)
data.explore_parameters();
```

# Principal Component Analysis

As described in the [`Equation of State.ipynb`](Equation of State.ipynb) notebook, our equation of state is parameterized by 18 parameters, with values chosen to roughly match the ALF4 equation of state as tabulated in [Read:2009].

We now present a principal component analysis. Here one should generate a sample of binary neutron star systems characterized by their two masses $m_1$ and $m_2$ (in units of solar masses $M_\circ$) and their distance $d$ in Mpc from the earth. We provide below a `PopulationModel` which will generate a Gaussian population with a specified mean and standard deviation for each parameter, but one can simply input any list of `(m_1, m_2, d)` tuples.

From this sample, we plot the eigenvectors and eigenvalues of the combined Fisher information matrix $\mat{F}$ that would result from LIGO observations, assuming signal-to-noise ratios compatible with the Einstein Telescope. The matrix $\mat{F}$ characterizes the relative errors of the various EoS parameters $p_a$ as follows:

$$
\sum_{ab}\frac{\delta p_a}{p_a} F_{ab} \frac{\delta p_b}{p_b}
= \sum_{abi}\frac{\delta p_a}{p_a} U_{ai}d_i U_{bi} \frac{\delta p_b}{p_b}
= \sum_{i}(\delta\xi_i)^2 d_i \leq 1,
\qquad
\delta\xi_i = \sum_a \frac{\delta p_a}{p_a} U_{ai},
\qquad
\xi_i = \sum_{a}U_{ai}\ln{p_a}.
$$

These covariance ellipsoids correspond to the 1-$\sigma$ variation, assuming that all the parameter variances are well described by Gaussian errors. By diagonalizing $\mat{F} = \mat{U} \cdot \diag(\mat{d})\cdot\mat{U}^\dagger$ we obtain independent constraints on each of the principal components $\xi_i$:

$$
\abs{\delta\xi_i} \leq \sqrt{d_i^{-1}} = \sigma_i.
$$

*Note: in the following, we use the notation $\sigma_i = 1/\sqrt{d_i}$ to represent the constraint imposed by the $i$'th most significant component.*

We now plot the various eigenvalues and display the corresponding constraints as a percentage error $100/\sqrt{d_i}$:

```
np.random.seed(1)
population_model = tools.PopulationModel(
    m1=1.2, m2=1.5, distance=40, constant_distance=True)
pca = tools.PCA_Explorer(data)
pca.plot_PCA(population_model.get_samples(1), significance=50)

np.random.seed(2)
population_model = tools.PopulationModel(m1=1.2+0.2j, m2=1.5+0.2j, distance=100)
display(pca.plot_PCA(population_model.get_samples(100), significance=10))
```

Here we have instead assumed 100 observations from a population of objects with Gaussian mass distributions $m_1 = 1.2(2)M_\circ$ and $m_2=1.5(2)M_\circ$, uniformly distributed within a sphere of radius $d=100$ Mpc. We now see that a variety of parameter combinations are constrained.

# Parameter Information

Here we test the hypothesis that each star gives essentially only one relevant principal component. We can check this by showing the distribution of the principal component eigenvalues over the range of masses.
```
d, U = pca.get_PCA();
sigmas = 1./np.ma.sqrt(d)
plt.figure(figsize=(13., 5.))
for n, gs in enumerate(GridSpec(1,2)):
    ax = plt.subplot(gs)
    ax.set_aspect(1)
    plt.pcolormesh(data.M/u.M0, data.M/u.M0, 1./sigmas[..., -1-n].T)
    cb = plt.colorbar(label=r'$\sigma_{}$'.format(n))
    ticks_ = np.array(cb.get_ticks())
    cb.set_ticks(ticks_)
    cb.set_ticklabels(["{:.2g}%".format(_sigma) for _sigma in 100.0/ticks_])
    plt.xlabel('$m_1$ [M0]');
    plt.ylabel('$m_2$ [M0]')
plt.tight_layout()
```

Notice that the relative constraint provided by the dominant principal component - even in the worst case - is 3 orders of magnitude larger than that of the second component. Thus, it is a good approximation to consider only the dominant principal components. Independent information results from averaging over different masses which have different eigenvectors for the dominant principal component.

A related question is: which combinations of masses will provide the most information about a given parameter? We can obtain a qualitative estimate of this by looking at the diagonal entries. *(The maximum $\sqrt{F}$ is shown in the title of each plot with larger values indicating more information.)*

```
masses = data.M
params = data.params
Np = len(params)
F = data.dataset.F
gs = GridSpec(3, 6)
plt.figure(figsize=(15, 8))
z_max = [np.sqrt(F.data[:,:,n,n]).max() for n in range(Np)]
inds = reversed(np.argsort(z_max))
for _n, n in enumerate(inds):
    ax = plt.subplot(gs[_n])
    z = np.sqrt(F.data[:,:,n,n].T)
    plt.pcolormesh(masses, masses, z)
    plt.title("{} ({})".format(params[n], int(z.max())))
    ax.set_aspect(1)
plt.tight_layout()
```
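The $\sigma_i$ values reported above are nothing more than the inverse square roots of the eigenvalues of the Fisher matrix. As a self-contained reminder of that linear algebra, here is a small sketch on a made-up 2×2 matrix (`F_toy` is purely illustrative, not taken from the dataset):

```
import numpy as np

# Toy symmetric Fisher matrix (illustrative values only)
F_toy = np.array([[4.0, 1.0],
                  [1.0, 9.0]])

# Diagonalize F = U diag(d) U^T; eigh returns eigenvalues in ascending order,
# so the last column of U is the most significant (best constrained) component.
d, U = np.linalg.eigh(F_toy)

# Constraint on each principal component: |delta xi_i| <= 1/sqrt(d_i)
sigmas = 1.0 / np.sqrt(d)
for i, (d_i, sigma_i) in enumerate(zip(d, sigmas)):
    print("component {}: d_i = {:.3f}, sigma_i = {:.3f}".format(i, d_i, sigma_i))
```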
# IO and Threads

#### Marcel Lüthi, Departement Mathematik und Informatik, Universität Basel

### Input / Output

> Java provides a flexible library for input/output operations

* Difficult to use for small applications
* but flexible and powerful for professional applications

Strategy:

* Learn by example / tutorials
* Read the documentation

[Java IO Tutorial](https://docs.oracle.com/javase/tutorial/essential/io/)

[API documentation](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/io/package-summary.html)

### Streams

The most important abstraction for input and output

* A sequence of data

![streams](images/io-stream1.gif)
![streams](images/io-stream2.gif)

*Source: Oracle Java Tutorial*

### Kinds of streams

* Byte streams - read individual bytes
    * Important subclasses: FileInputStream and FileOutputStream
* Character streams - read characters
    * Important subclasses: FileReader, FileWriter, PrintWriter
* Filter streams - adapters around streams, e.g. for automatic buffering of data
    * Important subclasses: BufferedReader, BufferedWriter, BufferedInputStream, BufferedOutputStream
* Data streams - read/write data types in binary format

### Nesting streams

Filters can be nested inside each other

![io-filters](images/io-filters.png)

```java
new DataOutputStream(new BufferedOutputStream(new FileOutputStream("file.txt")));
```

### Example: reading and writing a text file

```
// Example is developed here
```

### File: an abstraction for file names

> Platform-independent definition of file and directory names.

Creating a File object

```java
File myfile = new File("/some/path/to/a/file");
```

Querying the path:

```java
myfile.getName();         // returns the file name
myfile.getPath();         // returns the relative path
myfile.getAbsolutePath(); // returns the absolute path
```

### Mini exercise

* Experiment with the File object. Which methods does it provide?
* How do you build paths from individual directories?
* How can you check whether a file exists?
* Can you create a directory?

```
// Your experiments
```

# Threads

### Running parts of a program concurrently

Sometimes different parts of a program should run "at the same time"

* Example: updating a progress bar while a computation is running

#### Realization in Java: Threads

![threads](images/threads.png)

Quasi-parallel execution of different parts of a program

### Creating threads

A simple recipe for creating a thread

1. Derive your own class from java.lang.Thread.
2. Write your own code in the run() method.
3. Start the thread with the start() method.
### Example

The following computation is to be executed several times in different threads

```
// Pointless but long-running computation
// that does not take the same amount of time on every call
double longRunningComputation() {
    java.util.Random rng = new java.util.Random();
    int n = rng.nextInt() / 8;
    double sum = 0;
    for (int j = 0; j < n; j++) {
        sum += Math.sin(j);
    }
    return sum;
}

longRunningComputation()
```

### Example (cont.): define a subclass of Thread

```
class MyThread extends Thread {
    PrintWriter writer;
    String name;

    MyThread(String name, PrintWriter writer) {
        this.name = name;
        this.writer = writer;
    }

    @Override
    public void run() {
        for (int i = 0; i < 10; i++) {
            double res = longRunningComputation();
            writer.println("Resultat in thread " + name + res);
        }
    }
}
```

### Starting the threads

```
StringWriter writer = new StringWriter();
PrintWriter outputStream = new PrintWriter(writer);

MyThread thread1 = new MyThread("Thread1", outputStream);
MyThread thread2 = new MyThread("Thread2", outputStream);

thread1.start();
thread2.start();

// Wait until both threads have finished before reading the output;
// otherwise writer.toString() may still be empty or incomplete.
thread1.join();
thread2.join();

writer.toString()
```
# "Understanding and Creating your own Classification Loss Function" > "In this blog, I firstly try to explain the classification loss function under geometrical perspective. Multiple loss functions will be tested with the Oxford-IIIT Pet Dataset using fastaiv2. The most simple loss function - dot product - will be introduced first, then we will try to create our own loss function to beat the most popular one - cross entropy loss function" - toc: true - branch: master - badges: true - comments: true - author: Dien-Hoa Truong - categories: [loss function, classification] - hide: false - search_exclude: true # Introduction Last week, I suddenly thought about Cross-Entropy loss function (the most popular loss function chosen for classification problem) while following fastai 2020 course. Actually I read about the proof of the function one or two times before and can understand it mathematically. However, I can not explain it naturally, and for me, naturally means there must have some images appear in my head when I think about it. The explanation should be natural, using normal language without using so much advanced math. I then tried to review the topic, searched on the Internet for several resources, found some popular explanations with cross-entropy, probabilities or maximum likelihood which I was too lazy to read to the end. However, at this short article (https://medium.com/data-science-bootcamp/understand-cross-entropy-loss-in-minutes-9fb263caee9a), it has a part using dot-product to compare 2 vectors that shed a light to me. The SoftMax final layer make the output likely be a probability distribution (its sum is 1), but it doesn't have to be thought in this way (even the Softmax layer is not obligatory). After all, it is a vector, and all we need is to predict a vector that is as close as possible to the target vector (for me, geometry is easier to imagine than probability) # Review the topic ## How to compare 2 vectors We start firtly about what is a vector. A vector is just an arrow with a direction and length. So for the binary classification problem, we have an output vector which has 2 elements and sum up to 1. [p1, p2] and p1+p2 = 1 Imagine we want our target vector is [0,1]. The worst prediction is [0,1] and a good predition could be [0.99,0.01] ![](imgs/vec1.png) We notice that $cos(\theta)$ for $\theta$ from 0&deg; to 90&deg; decrese strictly from 1 to 0 (from the best to worst prediction) so it might be a good indicator for our prediction (And actually it exists, you can check for cosine similarity). Any function that have value increasing (or decreasing and we can multiply by -1 or inverse it) strictly from the best prediction to the worst prediction can be considered a loss function But what about the dot-product ? The dot-product has some relevances to the cosine that mentioned above. The dot-product in geometrical point of view is the projection of a vector to the direction of another vector and multiply them both. And the projection is calculated by multiplying the cosine of the angle between these 2 vectors. But in this simple case, the projection is just the y value if our predicted vector is (x,y) and the target vector is (0,1). And the **y value decrease strictly from 1 to 0 from vector (1,0) to vector (0,1)** . So the dot-product can also be a great candidate for our loss function too ![](imgs/vec2.png) In the multiclass classification problem with the target vector encoded by one-hot vector (Vector has just one 1 value and 0 for all others position). 
The dot-product calculation is very simple: we just take the value of the predicted vector at the position where the target vector is 1. (In algebra, the dot product is just the sum of the element-wise multiplication.)

```
v1 = np.array([0,1,0,0]) # target vector
v2 = np.array([0.2,0.3,0.1,0.4]) # predicted vector
print(sum(v1*v2))
```

For the cross-entropy loss function, instead of multiplying by the predicted vector, we multiply by the logarithm of the predicted vector.

```
print(sum(v1*np.log(v2)))
print(np.log(0.3))
```

In the next section, we will experiment with the dot-product loss function and the cross-entropy loss function, and try to invent our own loss function by changing the function applied to the predicted vector (like the logarithm in the case of cross-entropy).

# Compare Different Loss Functions

In this part, we will experiment with our dot-product loss function, compare its performance with the famous cross-entropy loss function and, finally, try to invent a new loss function that is comparable to the cross-entropy loss function. The experiments use data from the Oxford-IIIT Pet Dataset and the resnet18 model from the fastai2 library.

## Getting Data

This part is simply data preparation: putting all the images and their labels into the corresponding dataloader.

```
from fastai2.vision.all import *

path = untar_data(URLs.PETS)
items = get_image_files(path/'images')

def label_func(fname):
    return "cat" if fname.name[0].isupper() else "dog"

labeller = RegexLabeller(pat=r"(.+)_\d+.jpg")

pets = DataBlock(blocks=(ImageBlock, CategoryBlock),
                 get_items=get_image_files,
                 splitter=RandomSplitter(),
                 get_y = Pipeline([lambda x: getattr(x,'name'), labeller]),
                 item_tfms=Resize(224),
                 batch_tfms=aug_transforms(),
                )

dls = pets.dataloaders(path/'images')

dls.c # number of categories in this dataset

dls.show_batch()
```

## Experimenting

All our loss functions have two parts. The first part is the softmax function, scaling our output to [0,1]. The second part is how we penalize our prediction - high loss if the predicted vector is far from the target.

### Cross-Entropy Loss

```
def softmax(x):
    return x.exp() / (x.exp().sum(-1)).unsqueeze(-1)

def nl(input, target):
    return -input[range(target.shape[0]), target].log().mean()

def our_cross_entropy(input, target):
    pred = softmax(input)
    loss = nl(pred, target)
    return loss

learn = cnn_learner(dls, resnet18, loss_func=our_cross_entropy, metrics=error_rate)
learn.fine_tune(1)
```

### Dot Product Loss

This is actually a negative dot-product loss function, because we multiply the result by -1 so that it increases from the best to the worst prediction.

```
def dot_product_loss(input, target):
    pred = softmax(input)
    return -(pred[range(target.shape[0]), target]).mean()

learn = cnn_learner(dls, resnet18, loss_func=dot_product_loss, metrics=error_rate)
learn.fine_tune(1)
```

Wow! Despite the simplicity of the dot-product loss function, we get a decent result (0.14 error rate) after 2 epochs. Our dataset has 37 categories of pets, so a random prediction would give us an error rate of (1-1/37)=0.97. However, can we do better, somehow getting closer to the performance of the cross-entropy loss function?

### The difference between cross-entropy loss and dot-product loss

How these 2 loss functions penalize the prediction is described below.
The target vector is always [0,1]; `x` below plays the role of the predicted value at the target position.

```
x = np.linspace(0.01,0.99,100) # the predicted value at the target position
y_dot_product = -x
y_cross_entropy = -np.log(x)

plt.plot(x, y_dot_product, label='dot_prod')
plt.plot(x, y_cross_entropy, label='cross_entropy')
plt.legend()
plt.show()
```

The shape of the plot is what interests us here. Intuitively, we can see that the cross-entropy function penalizes more heavily when the prediction is wrong (a roughly exponential shape), which may be what causes its better predictions.

In the next section we will try other loss functions, but the core idea is still based on the dot-product loss function.

### Inverse Loss

Instead of multiplying by -1, we can invert the predicted value to make the loss increase from the best to the worst prediction. Let's look at the plot below:

```
y_inv = 1/x

plt.plot(x, y_dot_product, label='dot_prod')
plt.plot(x, y_cross_entropy, label='cross_entropy')
plt.plot(x, y_inv, label='inverse loss')
plt.legend()
plt.show()
```

Hmmm! The inverse loss may penalize too much compared to the 2 previous ones; having no tolerance at all might not be so good. But let's try it anyway.

```
def inverse_loss(input, target):
    pred = softmax(input)
    return (1/((pred[range(target.shape[0]), target]))).mean()

learn = cnn_learner(dls, resnet18, loss_func=inverse_loss, metrics=error_rate)
learn.fine_tune(1)
```

OK, we get a worse result. But with this idea, we can easily tune the loss function. We can raise the denominator to a power < 1 to decrease the penalization. For example: 0.2.

```
y_inv_tuning = 1/(x**0.2)

plt.plot(x, y_dot_product, label='dot_prod')
plt.plot(x, y_cross_entropy, label='cross_entropy')
plt.plot(x, y_inv_tuning, label='inverse loss tuning')
plt.legend()
plt.show()
```

Great! Let's try this new loss function.

```
def inverse_loss_tunning(input, target):
    pred = softmax(input)
    return (1/((pred[range(target.shape[0]), target]).pow(0.2))).mean()

learn = cnn_learner(dls, resnet18, loss_func=inverse_loss_tunning, metrics=error_rate)
learn.fine_tune(1)
```

We get a very similar error rate: 0.091 compared to 0.092 for the cross-entropy loss function.

# Conclusion

I hope that after reading this blog, you understand the loss function of the multi-class classification problem more deeply. My purpose is not really to find a new loss function to replace cross-entropy, but to give you an idea of how to define your own, perhaps for another problem. I also want to remind you (and myself) that in machine learning, your intuition, or your sense, counts. I developed all the experiments above just from my natural intuition, and the most popular approach that everybody uses may not be the best approach, nor is it set in stone. While following the fastai course, I found the story of the Learning Rate Finder from Leslie Smith, which was developed not long ago (2015) and doesn't use very advanced math. So be patient, be brave and be creative on the Deep Learning road.
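If you want to keep experimenting with the exponent, the tuned inverse loss can be wrapped in a small factory function. This is only a refactoring sketch of the idea above (the name `make_inverse_loss` is mine, not part of fastai):

```
def make_inverse_loss(p=0.2):
    "Return an inverse-power loss with a tunable exponent p (p=1 is the plain inverse loss)."
    def loss(input, target):
        pred = softmax(input)  # reuses the softmax defined earlier in this post
        return (1 / (pred[range(target.shape[0]), target].pow(p))).mean()
    return loss

# e.g. learn = cnn_learner(dls, resnet18, loss_func=make_inverse_loss(0.2), metrics=error_rate)
```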
# Logistic regression

- Implement logistic regression and apply it to a classification task.
- We will also improve the robustness of the implementation by adding regularization to the training algorithm and testing it on a more difficult problem.

```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline

import os
path = os.getcwd() + '\data\ex2data1.txt'
data = pd.read_csv(path, header=None, names=['Exam 1', 'Exam 2', 'Admitted'])
data.head()
```

Let's create a scatter plot of the two exam scores and use color coding to visualize whether an example is positive or negative.

```
positive = data[data['Admitted'].isin([1])]
negative = data[data['Admitted'].isin([0])]

fig, ax = plt.subplots(figsize=(12,8))
ax.scatter(positive['Exam 1'], positive['Exam 2'], s=50, c='b', marker='o', label='Admitted')
ax.scatter(negative['Exam 1'], negative['Exam 2'], s=50, c='r', marker='x', label='Not Admitted')
ax.legend()
ax.set_xlabel('Exam 1 Score')
ax.set_ylabel('Exam 2 Score')
```

A clear decision boundary is visible between these 2 sets.

Creating the sigmoid function

```
def sigmoid(z):
    return 1 / (1 + np.exp(-z))
```

Checking the function

```
nums = np.arange(-10, 10, step=1)

fig, ax = plt.subplots(figsize=(12, 8))
ax.plot(nums, sigmoid(nums), 'r')
```

Cost function to evaluate a solution

```
def cost(theta, X, y):
    theta = np.matrix(theta)
    X = np.matrix(X)
    y = np.matrix(y)
    first = np.multiply(-y, np.log(sigmoid(X * theta.T)))
    second = np.multiply((1-y), np.log(1 - sigmoid(X * theta.T)))
    return np.sum(first - second) / len(X)
```

Setup as in the previous part

```
# add ones column
data.insert(0, 'Ones', 1)

# set X (training data) and y (target data)
cols = data.shape[1]
X = data.iloc[:, 0:cols-1]
y = data.iloc[:, cols-1:cols]

# convert to numpy arrays and init parameter array theta
X = np.array(X.values)
y = np.array(y.values)
theta = np.zeros(3)
```

Check the shapes of the arrays

```
X.shape, theta.shape, y.shape
```

Compute the cost of the initial solution

```
cost(theta, X, y)
```

Gradient function

```
def gradient(theta, X, y):
    theta = np.matrix(theta)
    X = np.matrix(X)
    y = np.matrix(y)

    parameters = int(theta.ravel().shape[1])
    gradient = np.zeros(parameters)

    error = sigmoid(X * theta.T) - y

    for i in range(parameters):
        term = np.multiply(error, X[:, i])
        gradient[i] = np.sum(term) / len(X)

    return gradient
```

Note that this function only computes the gradient for a single step; it does not perform the parameter update itself.
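The cost and gradient functions are usually handed to an off-the-shelf optimizer rather than looped by hand. A minimal sketch using SciPy's TNC optimizer (one reasonable choice, not the only one):

```
import scipy.optimize as opt

# fmin_tnc minimizes cost(theta, X, y) using the analytic gradient defined above
result = opt.fmin_tnc(func=cost, x0=theta, fprime=gradient, args=(X, y))
theta_opt = result[0]

# the optimized parameters should give a much lower cost than the all-zeros start
cost(theta_opt, X, y)
```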
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import datetime as dt
from datetime import timedelta
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from statsmodels.tsa.api import Holt

data = pd.read_csv("/content/covid_19_data.csv")
data.head()
data.tail()
```

The data covers 22nd Jan 2020 to 27th Feb 2021.

```
print("shape of the covid data is {}".format(data.shape))
print("data types in data set are: {}".format(data.dtypes))
```

Checking for null values:

```
print("Null Values in the data set: {}".format(data.isnull().sum()))
```

Dropping the serial number column

```
data.drop(["SNo"],axis=1,inplace=True)
data.head()
```

Converting ObservationDate to year-month-date format

```
data["ObservationDate"] = pd.to_datetime(data["ObservationDate"])
data.head()
```

Grouping the different types of cases by date

```
date_wise_cases = data.groupby("ObservationDate").agg({"Confirmed":"sum","Recovered":"sum","Deaths":"sum"})
```

Getting information from the data set such as the total confirmed, active, recovered, dead and closed cases

```
print("Total number of Confirmed cases around the world",date_wise_cases["Confirmed"].iloc[-1])
print("Total number of Recovered cases around the world",date_wise_cases["Recovered"].iloc[-1])
print("Total number of Death cases around the world",date_wise_cases["Deaths"].iloc[-1])
print("Total number of Active cases around the world",(date_wise_cases["Confirmed"].iloc[-1]-date_wise_cases["Recovered"].iloc[-1]-date_wise_cases["Deaths"].iloc[-1]))
print("Total number of Closed cases around the world",(date_wise_cases["Recovered"].iloc[-1]+date_wise_cases["Deaths"].iloc[-1]))
```

Let's see the distribution of active cases

```
plt.figure(figsize=(100,50))
sns.barplot(x=date_wise_cases.index.date,y=date_wise_cases["Confirmed"]-date_wise_cases["Recovered"]-date_wise_cases["Deaths"])
plt.title("Distribution of Active Cases")
plt.xticks(rotation=90)
```

From the above distribution of active cases we can see that the cases are increasing.

Let's see the distribution of closed cases

```
plt.figure(figsize=(100,50))
sns.barplot(x=date_wise_cases.index.date,y=date_wise_cases["Recovered"]+date_wise_cases["Deaths"])
plt.title("Distribution of Closed Cases")
plt.xticks(rotation=90)
```

Checking the weekly progress of the different types of cases:

```
date_wise_cases["Weekly_cases"] = date_wise_cases.index.weekofyear

week_num = []
weekwise_confirmed = []
weekwise_recovered = []
weekwise_deaths = []
w = 1
for i in list(date_wise_cases["Weekly_cases"].unique()):
    weekwise_confirmed.append(date_wise_cases[date_wise_cases["Weekly_cases"]==i]["Confirmed"].iloc[-1])
    weekwise_recovered.append(date_wise_cases[date_wise_cases["Weekly_cases"]==i]["Recovered"].iloc[-1])
    weekwise_deaths.append(date_wise_cases[date_wise_cases["Weekly_cases"]==i]["Deaths"].iloc[-1])
    week_num.append(w)
    w=w+1

plt.figure(figsize=(8,5))
plt.plot(week_num,weekwise_confirmed,linewidth=3,color="Red",label="Confirmed")
plt.plot(week_num,weekwise_recovered,linewidth=3,color="Green",label="Recovered")
plt.plot(week_num,weekwise_deaths,linewidth=3,color="Blue",label="deaths")
plt.xlabel("WeekNumber")
plt.ylabel("Number of cases")
plt.title("Weekly Progress of different types of cases")
plt.legend()
```

Here we can see that the cases increased between week 0 and somewhere around week 5 to 6.
From week 10 onwards, the cases started increasing exponentially.

Let's see the weekly increase in confirmed cases and deaths

```
fig,(ax1,ax2) = plt.subplots(1,2,figsize=(20,4))
sns.barplot(x=week_num,y=pd.Series(weekwise_confirmed).diff().fillna(0),ax=ax1)
sns.barplot(x=week_num,y=pd.Series(weekwise_deaths).diff().fillna(0),ax=ax2)
ax1.set_xlabel("Week Number")
ax2.set_xlabel("Week Number")
ax1.set_ylabel("Number of Confirmed cases")
ax2.set_ylabel("Number of Death cases")
ax1.set_title("Weekly increase in number of Confirmed cases")
ax2.set_title("Weekly increase in number of Death Cases")
plt.show()
```

Let's see the average daily increase in the different types of cases

```
print("Average increase in number of Confirmed cases everyday: {}".format(np.round(date_wise_cases["Confirmed"].diff().fillna(0).mean())))
print("Average increase in number of Recovered cases everyday: {}".format(np.round(date_wise_cases["Recovered"].diff().fillna(0).mean())))
print("Average increase in number of Death cases everyday: {}".format(np.round(date_wise_cases["Deaths"].diff().fillna(0).mean())))

plt.figure(figsize=(20,5))
plt.plot(date_wise_cases["Confirmed"].diff().fillna(0),label="Daily increase in confirmed cases",linewidth=3,color="Red")
plt.plot(date_wise_cases["Recovered"].diff().fillna(0),label="Daily increase in recovered cases",linewidth=3,color="Green")
plt.plot(date_wise_cases["Deaths"].diff().fillna(0),label="Daily increase in death cases",linewidth=3,color="blue")
plt.xlabel("Timestamp")
plt.ylabel("Daily increase")
plt.title("Daily increase")
plt.legend()
plt.show()
```

Let's see the country-wise analysis

```
countrywise = data[data["ObservationDate"]==data["ObservationDate"].max()].groupby("Country/Region").agg({"Confirmed":"sum","Recovered":"sum","Deaths":"sum"}).sort_values(["Confirmed"],ascending=False)
countrywise
```

Here we can see that the US, India, Brazil and Russia, followed by the UK, have the highest numbers of confirmed cases.
Finding the mortality rate (defined here as deaths per recovered case) and the recovery rate

```
countrywise["Mortality"]=(countrywise["Deaths"]/countrywise["Recovered"])*100
countrywise["Recovered"]=(countrywise["Recovered"]/countrywise["Confirmed"])*100
print("country wise Mortality {}".format(countrywise["Mortality"]))
print("country wise Recovered {}".format(countrywise["Recovered"]))
```

Let's see the top 15 countries by confirmed cases and by deaths

```
fig,(ax1,ax2)=plt.subplots(1,2,figsize=(25,10))
top_15confirmed = countrywise.sort_values(["Confirmed"],ascending=False).head(15)
top_15deaths = countrywise.sort_values(["Deaths"],ascending=False).head(15)
sns.barplot(x=top_15confirmed["Confirmed"],y=top_15confirmed.index,ax=ax1)
ax1.set_title("Top 15 countries as per number of confirmed cases")
sns.barplot(x=top_15deaths["Deaths"],y=top_15deaths.index,ax=ax2)
ax2.set_title("Top 15 countries as per number of death cases")
```

ANALYSIS FOR INDIA:

```
india_data = data[data["Country/Region"]=="India"]
datewise_india = india_data.groupby(["ObservationDate"]).agg({"Confirmed":"sum","Recovered":"sum","Deaths":"sum"})
print(datewise_india.iloc[-1])
print("Total Active Cases",datewise_india["Confirmed"].iloc[-1]-datewise_india["Recovered"].iloc[-1]-datewise_india["Deaths"].iloc[-1])
print("Total Closed Cases",datewise_india["Recovered"].iloc[-1]+datewise_india["Deaths"].iloc[-1])
```

ANALYSIS FOR THE US:

```
Us_data = data[data["Country/Region"]=="US"]
datewise_Us = Us_data.groupby(["ObservationDate"]).agg({"Confirmed":"sum","Recovered":"sum","Deaths":"sum"})
print(datewise_Us.iloc[-1])
print("Total Active Cases",datewise_Us["Confirmed"].iloc[-1]-datewise_Us["Recovered"].iloc[-1]-datewise_Us["Deaths"].iloc[-1])
print("Total Closed Cases",datewise_Us["Recovered"].iloc[-1]+datewise_Us["Deaths"].iloc[-1])

datewise_india["Weekly_cases"] = datewise_india.index.weekofyear

week_num_india = []
india_weekwise_confirmed = []
india_weekwise_recovered = []
india_weekwise_deaths = []
w = 1
for i in list(datewise_india["Weekly_cases"].unique()):
    india_weekwise_confirmed.append(datewise_india[datewise_india["Weekly_cases"]==i]["Confirmed"].iloc[-1])
    india_weekwise_recovered.append(datewise_india[datewise_india["Weekly_cases"]==i]["Recovered"].iloc[-1])
    india_weekwise_deaths.append(datewise_india[datewise_india["Weekly_cases"]==i]["Deaths"].iloc[-1])
    week_num_india.append(w)
    w=w+1

plt.figure(figsize=(8,5))
plt.plot(week_num_india,india_weekwise_confirmed,linewidth=3,label="confirmed",color="Red")
plt.plot(week_num_india,india_weekwise_recovered,linewidth=3,label="Recovered",color="Green")
plt.plot(week_num_india,india_weekwise_deaths,linewidth=3,label="Deaths",color="Blue")
plt.xlabel("Week Number")
plt.ylabel("Number of cases")
plt.title("Weekly Progress of different types of cases in INDIA")
plt.legend()
plt.show()
```

Comparing how long different countries took to reach India's total number of confirmed cases:

```
max_ind = datewise_india["Confirmed"].max()

china_data = data[data["Country/Region"]=="Mainland China"]
Italy_data = data[data["Country/Region"]=="Italy"]
US_data = data[data["Country/Region"]=="US"]
spain_data = data[data["Country/Region"]=="Spain"]

datewise_china = china_data.groupby(["ObservationDate"]).agg({"Confirmed":"sum","Recovered":"sum","Deaths":"sum"})
datewise_Italy = Italy_data.groupby(["ObservationDate"]).agg({"Confirmed":"sum","Recovered":"sum","Deaths":"sum"})
datewise_US=US_data.groupby(["ObservationDate"]).agg({"Confirmed":"sum","Recovered":"sum","Deaths":"sum"})
datewise_Spain=spain_data.groupby(["ObservationDate"]).agg({"Confirmed":"sum","Recovered":"sum","Deaths":"sum"})

print("It took",datewise_india[datewise_india["Confirmed"]>0].shape[0],"days in India to reach",max_ind,"Confirmed Cases")
print("It took",datewise_Italy[(datewise_Italy["Confirmed"]>0)&(datewise_Italy["Confirmed"]<=max_ind)].shape[0],"days in Italy to reach number of Confirmed Cases")
print("It took",datewise_US[(datewise_US["Confirmed"]>0)&(datewise_US["Confirmed"]<=max_ind)].shape[0],"days in US to reach number of Confirmed Cases")
print("It took",datewise_Spain[(datewise_Spain["Confirmed"]>0)&(datewise_Spain["Confirmed"]<=max_ind)].shape[0],"days in Spain to reach number of Confirmed Cases")
print("It took",datewise_china[(datewise_china["Confirmed"]>0)&(datewise_china["Confirmed"]<=max_ind)].shape[0],"days in China to reach number of Confirmed Cases")

date_wise_cases["Days Since"]=date_wise_cases.index-date_wise_cases.index[0]
date_wise_cases["Days Since"] = date_wise_cases["Days Since"].dt.days

train = date_wise_cases.iloc[:int(date_wise_cases.shape[0]*0.95)]
test = date_wise_cases.iloc[int(date_wise_cases.shape[0]*0.95):]  # last 5% of the days held out for validation
model_scores=[]

date_wise_cases["Days Since"]

lin_reg = LinearRegression(normalize=True)
svm = SVR(C=1,degree=5,kernel='poly',epsilon=0.001)

lin_reg.fit(np.array(train["Days Since"]).reshape(-1,1),np.array(train["Confirmed"]).reshape(-1,1))
svm.fit(np.array(train["Days Since"]).reshape(-1,1),np.array(train["Confirmed"]).reshape(-1,1))

prediction_valid_lin_reg = lin_reg.predict(np.array(test["Days Since"]).reshape(-1,1))
prediction_valid_svm = svm.predict(np.array(test["Days Since"]).reshape(-1,1))

new_date = []
new_prediction_lr=[]
new_prediction_svm=[]
for i in range(1,50):
    new_date.append(date_wise_cases.index[-1]+timedelta(days=i))
    new_prediction_lr.append(lin_reg.predict(np.array(date_wise_cases["Days Since"].max()+i).reshape(-1,1))[0][0])
    new_prediction_svm.append(svm.predict(np.array(date_wise_cases["Days Since"].max()+i).reshape(-1,1))[0])

pd.set_option("display.float_format",lambda x: '%.f' % x)
model_predictions=pd.DataFrame(zip(new_date,new_prediction_lr,new_prediction_svm),columns = ["Dates","LR","SVR"])
model_predictions.head(50)
```

Let's forecast with time series analysis

```
model_train = date_wise_cases.iloc[:int(date_wise_cases.shape[0]*0.85)]
model_test = date_wise_cases.iloc[int(date_wise_cases.shape[0]*0.85):]

holt=Holt(np.asarray(model_train["Confirmed"])).fit(smoothing_level=1.4,smoothing_slope=0.2)

y_pred = model_test.copy()
y_pred["Holt"]=holt.forecast(len(model_test))

holt_new_date=[]
holt_new_prediction=[]
for i in range(1,50):
    holt_new_date.append(date_wise_cases.index[-1]+timedelta(days=i))
    holt_new_prediction.append(holt.forecast((len(model_test)+i))[-1])

model_predictions["Holts Linear Model Prediction"]=holt_new_prediction
model_predictions.head(50)
```
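The `model_scores` list created above is never filled. A sketch of how the three models could be compared on their held-out data with RMSE (assuming scikit-learn is available):

```
from sklearn.metrics import mean_squared_error

model_scores.append(("Linear Regression",
                     np.sqrt(mean_squared_error(test["Confirmed"], prediction_valid_lin_reg))))
model_scores.append(("SVR",
                     np.sqrt(mean_squared_error(test["Confirmed"], prediction_valid_svm))))
model_scores.append(("Holt's Linear",
                     np.sqrt(mean_squared_error(model_test["Confirmed"], y_pred["Holt"]))))

for name, rmse in model_scores:
    print(name, rmse)
```

Note that the linear and SVR models are scored on the last 5% of days, while Holt's model is scored on the last 15%, so the numbers are only a rough comparison.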
# Working with demand data

```
import pandas as pd
import geopandas as gpd

from covidcaremap.data import (PUBLISHED_DATA_DIR, published_data_path,
                               PROCESSED_DATA_DIR, processed_data_path,
                               EXTERNAL_DATA_DIR, external_data_path)
```

## Cases (actuals)

There are open, updated datasets of confirmed cases and deaths from two sources: USAFacts and NYTimes.

### NY Times Data

The NY Times data shows cumulative cases and deaths per state or county per day. This data is pulled from their GitHub repository dynamically via these `covidcaremap.data` package methods:

```
from covidcaremap.cases import get_nytimes_cases_by_county, get_nytimes_cases_by_state

nytimes_county_cases = get_nytimes_cases_by_county()
nytimes_state_cases = get_nytimes_cases_by_state()

nytimes_county_cases
```

### USAFacts Data

The USAFacts data is by county, and is in a different format than the NYTimes data. It shows total accumulated counts per date and separates the cases and deaths into separate files:

```
from covidcaremap.cases import get_usafacts_cases_by_county, get_usafacts_deaths_by_county

usafacts_cases_df = get_usafacts_cases_by_county()
usafacts_deaths_df = get_usafacts_deaths_by_county()

usafacts_cases_df
```

We can begin to compare the datasets, e.g. to determine the total counts for Philadelphia County on 3/20:

```
usafacts_cases_df[usafacts_cases_df['County Name'] == 'Philadelphia County'].loc[:,'3/20/2020'].to_frame()

nytimes_county_cases[
    (nytimes_county_cases['county'] == 'Philadelphia') &
    (nytimes_county_cases['date'] == '2020-03-20')]
```

## Forecasts

Forecasting demand on the healthcare system is an essential part of identifying the capacity gap. We rely on groups experienced in epidemiological modeling to produce models we can integrate and data we can ingest.

### IHME - by State

The Institute for Health Metrics and Evaluation, University of Washington (IHME) produced a fantastic [report](http://www.healthdata.org/research-article/forecasting-covid-19-impact-hospital-bed-days-icu-days-ventilator-days-and-deaths) along with a [data explorer](http://covid19.healthdata.org/projections). They are releasing new data every Monday, with predictions around bed needs per day.
Data dictionary taken from the 2020_03_27 data release:

- **location_name**: Name of the state
- **date_reported**: Date
- **allbed_mean**: Mean covid beds needed by day
- **allbed_lower**: Lower uncertainty bound of covid beds needed by day
- **allbed_upper**: Upper uncertainty bound of covid beds needed by day
- **ICUbed_mean**: Mean ICU covid beds needed by day
- **ICUbed_lower**: Lower uncertainty bound of ICU covid beds needed by day
- **ICUbed_upper**: Upper uncertainty bound of ICU covid beds needed by day
- **InvVen_mean**: Mean invasive ventilation needed by day
- **InvVen_lower**: Lower uncertainty bound of invasive ventilation needed by day
- **InvVen_upper**: Upper uncertainty bound of invasive ventilation needed by day
- **deaths_mean**: Mean daily covid deaths
- **deaths_lower**: Lower uncertainty bound of daily covid deaths
- **deaths_upper**: Upper uncertainty bound of daily covid deaths
- **admis_mean**: Mean hospital admissions by day
- **admis_lower**: Lower uncertainty bound of hospital admissions by day
- **admis_upper**: Upper uncertainty bound of hospital admissions by day
- **newICU_mean**: Mean number of new people going to the ICU by day
- **newICU_lower**: Lower uncertainty bound of the number of new people going to the ICU by day
- **newICU_upper**: Upper uncertainty bound of the number of new people going to the ICU by day
- **totdea_mean**: Mean cumulative covid deaths
- **totdea_lower**: Lower uncertainty bound of cumulative covid deaths
- **totdea_upper**: Upper uncertainty bound of cumulative covid deaths
- **bedover_mean**: `covid all beds needed` - (`total bed capacity` - `average all bed usage`)
- **bedover_lower**: Lower uncertainty bound of bedover (above)
- **bedover_upper**: Upper uncertainty bound of bedover (above)
- **icuover_mean**: `covid ICU beds needed` - (`total ICU capacity` - `average ICU bed usage`)
- **icuover_lower**: Lower uncertainty bound of icuover (above)
- **icuover_upper**: Upper uncertainty bound of icuover (above)

```
from covidcaremap.data import get_ihme_forecast

ihme_df = get_ihme_forecast()

list(ihme_df.columns)

# Join in case data and compare projected total deaths for NY on 2020-03-26
nytimes_state_df = get_nytimes_cases_by_state()

forecast_and_cases = ihme_df.rename(columns={
    'location_name': 'state',
    'date_reported': 'date'
}).merge(nytimes_state_df, on=['state', 'date'])

forecast_and_cases[(forecast_and_cases['state'] == 'New York') &
                   (forecast_and_cases['date'] == '2020-03-26')][['totdea_mean', 'deaths']]
```

### CHIME

[CHIME](https://github.com/CodeForPhilly/chime) is a tool developed by the Predictive Healthcare team at Penn Medicine. It [implements a SIR model](https://code-for-philly.gitbook.io/chime/what-is-chime/sir-modeling) that takes a set of parameters, population, and current confirmed cases to produce a several-week estimate of hospitalized, ICU, and ventilated patients.

The parameters with their default values can be found in the `covidcaremap.chime` package:

```
import covidcaremap.chime as ccm_chime

ccm_chime.DEFAULT_PARAMS
```

The parameters are documented in `covidcaremap/chime.py`:

```
DEFAULT_PARAMS = {
    # Detection Probability: Used to infer infected population from confirmed cases.
"detection_probability": 0.14, # Doubling time before social distancing (days) "doubling_time" : 4, # Social Distancing Reduction Rate: 0.0 - 1.0 "relative_contact_rate": 0.3, # Hospitalized Rate: 0.00001 - 1.0 "hospitalized_rate": 0.025, # Hospitalized Length of Stay (days) "hospitalized_los": 7, # ICU Length of Stay (days) "icu_rate": 0.0075, # ICU Rate: 0.0 - 1.0 "icu_los": 9, # Ventilated Rate: 0.0 - 1.0 "ventilated_rate": 0.005, #Ventilated Length of Stay (days) "ventilated_los": 10, "recovery_days": 14 } ``` This package also has a method to run CHIME over a region: ``` help(ccm_chime.get_regional_predictions) ``` We can use this to create predictions over every county in the US: ``` from covidcaremap.cases import get_county_case_info # Gets confirmed cases from USA Facts per county for date. cases_by_county = get_county_case_info('3/26/2020') chime_county_df = ccm_chime.get_regional_predictions(cases_by_county, region_id_column='County Name') chime_county_df ``` ### HGHI The data from the [Harvard Global Health Institute (HGHI)](https://globalepidemics.org/2020-03-17-caring-for-covid-19-patients/) study also includes forecasts. The columns for projections are: - **Projected Infected Individuals** – How many individuals over the age of 18 are expected to get infected with COVID-19 over the entire course of the pandemic - **Projected Hospitalized Individuals** – How many individuals over the age of 18 are expected to need hospitalization due to COVID-19 over the entire course of the pandemic - **Projected Individuals Needing ICU Care** – How many individuals over the age of 18 are expected to need ICU care due to COVID-19 over the entire course of the pandemic These numbers are based on rough percentages of infected population and hospitalization rates. See their [data dictionary](https://globalepidemics.org/2020-03-17-caring-for-covid-19-patients/#dictionary) for more column descriptions. ``` hghi_state_gdf = gpd.read_file(processed_data_path('hghi_state_data.geojson')) hghi_state_gdf[[ 'State Name', 'Projected Infected Individuals', 'Projected Hospitalized Individuals', 'Projected Individuals Needing ICU Care' ]] ``` Here we can roughly compare of HGHI and IHME total ICU patients per state. ``` # Sum up all the mean new ICU patient forecasts per day for a state to get the # total number of patients needing ICU care. ihme_hghi_df = ihme_df.rename(columns={'location_name': 'State Name'}) \ .groupby('State Name')[['newICU_mean', 'newICU_lower', 'newICU_upper']].sum() \ .merge(hghi_state_gdf, on='State Name') ihme_hghi_df['Difference (Mean)'] = (ihme_hghi_df['newICU_mean'] - ihme_hghi_df['Projected Individuals Needing ICU Care']) ihme_hghi_df[['State Name', 'newICU_mean', 'newICU_lower', 'newICU_upper', 'Projected Individuals Needing ICU Care', 'Difference (Mean)']] ```
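To make the size and direction of these state-level gaps easier to scan, here is a quick plotting sketch built on the merged frame above. This is an illustration only: it assumes the `ihme_hghi_df` frame from the previous cell and imports matplotlib directly, which is not otherwise loaded in this notebook.

```
import matplotlib.pyplot as plt

# Positive bars: IHME projects more total ICU patients than HGHI for that state;
# negative bars: fewer.
plot_df = ihme_hghi_df.sort_values('Difference (Mean)')

plt.figure(figsize=(8, 14))
plt.barh(plot_df['State Name'], plot_df['Difference (Mean)'])
plt.axvline(0, color='black', linewidth=0.8)
plt.xlabel('IHME summed newICU_mean minus HGHI projected ICU need')
plt.title('Rough IHME vs HGHI comparison of projected ICU patients, by state')
plt.tight_layout()
plt.show()
```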
# Working with Forms in Jupyter ##Inspired by: ###https://jakevdp.github.io/blog/2013/06/01/ipython-notebook-javascript-python-communication/ ##Code modernized by reviewing slider code:<br/> ###http://nbviewer.ipython.org/github/jakevdp/mpld3/blob/gh-pages/notebooks/sliderPlugin.ipynb #Changes >Call to execute has not changed, but the information needed by the callback argument and information passed to the callback function has changed. Additional information on messaging in iPython/Jupyter can be found [here](https://ipython.org/ipython-doc/stable/development/messaging.html). // 2.x: // Message was passed back in two arguments: // out_type: type of output {stream, pyout, pyerr} // out: returned output with additional metadata // out_type='stream' out.data was a stream of output // out_type='pyout' out.data["text/plain"] is the returned output // out_type='pyerr' out.ename/out.evalue contained error type and message function handle_output(out_type, out) {//handler code} var callbacks = {'output' : handle_output}; // 3.x: // Everything is returned in one argument // out.msg_type replaces out_type // 'error' replaces 'pyerr' // 'execute_result' replaces 'pyout' // 'stream' is 'stream' // out.content now contains returned messages/output // msg_type = 'error' // out.content.ename = error name // out.content.evalue = error value // out.content.traceback = python traceback // msg_type = 'execute_result' // out.content.data['text/plain'] = returned output // msg_type = 'stream' // if out.content.name = 'stdout' output is in out.content.text function handle_output(out) {//handler code} var callbacks = { 'iopub' : {'output' : handle_output}}; #Types * 'execute_result' : A python object has been returned * 'stream' : Output from a print statement * 'error' : An error of some sort has occurred ``` from IPython.display import HTML input_form = """ <div style="background-color:gainsboro; border:solid black; width:300px; padding:20px;"> Variable Name: <input type="text" id="var_name" value="foo"><br> Variable Value: <input type="text" id="var_value" value="bar"><br> <button onclick="set_value()">Set Value</button> </div> """ javascript = """ <script type="text/Javascript"> function set_value(){ var var_name = document.getElementById('var_name').value; var var_value = document.getElementById('var_value').value; var command = var_name + " = '" + var_value + "'"; console.log("Executing Command: " + command); var kernel = IPython.notebook.kernel; kernel.execute(command); } </script> """ HTML(input_form + javascript) from math import pi, sin # As a test put 'print(a)' into the Code: text box to test 'stream' return types a=5 print(a) # Add an input form similar to what we saw above from IPython.display import HTML input_form = """ <div style="background-color:gainsboro; border:solid black; width:600px; padding:20px;"> Code: <input type="text" id="code_input" size="50" height="2" value="sin(pi / 2)"><br> Result: <input type="text" id="result_output" size="50" value="1.0"><br> <button onclick="exec_code()">Execute</button> </div> """ # here the javascript has a function to execute the code # within the input box, and a callback to handle the output. 
javascript = """ <script type="text/Javascript"> function handle_output(out){ console.log(out); var res = null; switch (out.msg_type) { case 'stream': if (out.content.name == "stdout") { res = out.content.text; } break; case 'execute_result': res = out.content.data["text/plain"]; break; case 'error': res = out.content.ename + ": " + out.content.evalue; break; default: res = '[Return type undefined: ' + out.msg_type + ' ]'; } document.getElementById("result_output").value = res; } function exec_code(){ var code_input = document.getElementById('code_input').value; var kernel = IPython.notebook.kernel; var callbacks = { 'iopub' : {'output' : handle_output}}; document.getElementById("result_output").value = ""; // clear output box var msg_id = kernel.execute(code_input, callbacks, {silent:false}); console.log("Execute pressed"); } </script> """ HTML(input_form + javascript) ```
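As a closing aside, if the JavaScript plumbing above feels heavy for simple forms, the `ipywidgets` library offers a pure-Python way to build a similar input/result form. The sketch below is only a rough equivalent and assumes `ipywidgets` is installed; it evaluates the expression in the kernel directly rather than round-tripping through `kernel.execute` and the iopub callbacks.

```
from math import pi, sin

import ipywidgets as widgets
from IPython.display import display

code_box = widgets.Text(value='sin(pi / 2)', description='Code:')
result_box = widgets.Text(description='Result:')
button = widgets.Button(description='Execute')

def exec_code(_):
    # Evaluate in the kernel directly (no JavaScript callback involved).
    try:
        result_box.value = str(eval(code_box.value))
    except Exception as e:
        result_box.value = '{}: {}'.format(type(e).__name__, e)

button.on_click(exec_code)
display(code_box, result_box, button)
```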
# Lecture 10: Array Indexing, Slicing, and Broadcasting CSCI 1360E: Foundations for Informatics and Analytics ## Overview and Objectives Most of this lecture will be a review of basic indexing and slicing operations, albeit within the context of NumPy arrays. Therefore, there will be some additional functionalities that are critical to understand. By the end of this lecture, you should be able to: - Use "fancy indexing" in NumPy arrays - Create boolean masks to pull out subsets of a NumPy array - Understand array broadcasting for performing operations on subsets of NumPy arrays ## Part 1: NumPy Array Indexing and Slicing Hopefully, you recall basic indexing and slicing from Lecture 4. If not, [please go back and refresh your understanding of the concept](L4.slides.html). ``` li = ["this", "is", "a", "list"] print(li) print(li[1:3]) # Print element 1 (inclusive) to 3 (exclusive) print(li[2:]) # Print element 2 and everything after that print(li[:-1]) # Print everything BEFORE element -1 (the last one) ``` With NumPy arrays, all the same functionality you know and love from lists is still there. ``` import numpy as np x = np.array([1, 2, 3, 4, 5]) print(x) print(x[1:3]) print(x[2:]) print(x[:-1]) ``` These operations all work whether you're using Python lists or NumPy arrays. The first place in which Python lists and NumPy arrays differ is when we get to multidimensional arrays. We'll start with matrices. To build matrices using Python lists, you basically needed "nested" lists, or a list containing lists: ``` python_matrix = [ [1, 2, 3], [4, 5, 6], [7, 8, 9] ] print(python_matrix) ``` To build the NumPy equivalent, you can basically just feed the Python list-matrix into the NumPy `array` method: ``` numpy_matrix = np.array(python_matrix) print(numpy_matrix) ``` The real difference, though, comes with actually indexing these elements. With Python lists, you can index individual elements *only* in this way: ``` print(python_matrix) # The full list-of-lists print(python_matrix[0]) # The inner-list at the 0th position of the outer-list print(python_matrix[0][0]) # The 0th element of the 0th inner-list ``` With NumPy arrays, you can use that same notation...*or* you can use comma-separated indices: ``` print(numpy_matrix) print(numpy_matrix[0]) print(numpy_matrix[0, 0]) # Note the comma-separated format! ``` It's not earth-shattering, but enough to warrant a heads-up. When you index NumPy arrays, the nomenclature used is that of an **axis**: you are indexing specific *axes* of a NumPy array object. In particular, when access the `.shape` attribute on a NumPy array, that tells you two things: 1: How many axes there are. This number is `len(ndarray.shape)`, or the number of elements in the tuple returned by `.shape`. In our above example, `numpy_matrix.shape` would return `(3, 3)`, so it would have 2 axes (since there are two numbers--both 3s). 2: How many elements are in each axis. In our above example, where `numpy_matrix.shape` returns `(3, 3)`, there are 2 axes (since the length of that tuple is 2), and both axes have 3 elements (hence the numbers--3 elements in the first axis, 3 in the second). Here's the breakdown of axis notation and indices used in a 2D NumPy array: ![numpymatrix](Lecture10/httpatomoreillycomsourceoreillyimages1346880.png) As with lists, if you want an *entire* axis, just use the colon operator all by itself: ``` x = np.array([ [1, 2, 3], [4, 5, 6], [7, 8, 9] ]) print(x) print(x[:, 1]) # Take ALL of axis 0, and one index of axis 1. 
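# For comparison, the complementary slice: one index of axis 0, ALL of axis 1.
print(x[1, :])  # pulls out the middle row: [4 5 6]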
``` Here's a great visual summary of slicing NumPy arrays, assuming you're starting from an array with shape (3, 3): **STUDY THIS CAREFULLY**. This more or less sums up everything you need to know about slicing with NumPy arrays. ![numpyslicing](Lecture10/httpatomoreillycomsourceoreillyimages1346882.png) Depending on your field, it's entirely possible that you'll go beyond 2D matrices. If so, it's important to be able to recognize what these structures "look" like. For example, a video can be thought of as a 3D cube. Put another way, it's a NumPy array with 3 axes: the first axis is height, the second axis is width, and the third axis is number of frames. ``` video = np.empty(shape = (1920, 1080, 5000)) print("Axis 0 length:", video.shape[0]) # How many rows? print("Axis 1 length:", video.shape[1]) # How many columns? print("Axis 2 length:", video.shape[2]) # How many frames? ``` We know `video` is 3D because we can also access its `ndim` attribute. ``` print(video.ndim) del video ``` Another example--to go straight to cutting-edge academic research--is 3D video microscope data of multiple tagged fluorescent markers. This would result in a *five-axis* NumPy object: ``` tensor = np.empty(shape = (2, 640, 480, 360, 100)) print(tensor.shape) # Axis 0: color channel--used to differentiate between fluorescent markers # Axis 1: height--same as before # Axis 2: width--same as before # Axis 3: depth--capturing 3D depth at each time interval, like a 3D movie # Axis 4: frame--same as before ``` We can also ask how many elements there are *total*, using the `size` attribute: ``` print(tensor.size) del tensor ``` These are extreme examples, but they're to illustrate how flexible NumPy arrays are. If in doubt: once you index the first axis, the NumPy array you get back has the shape of all the *remaining* axes. ``` example = np.empty(shape = (3, 5, 9)) print(example.shape) sliced = example[0] # Indexed the first axis. print(sliced.shape) sliced_again = example[0, 0] # Indexed the first and second axes. print(sliced_again.shape) ``` Notice how the number "9", initially the third axis, steadily marches to the front as the axes before it are accessed. ## Part 2: NumPy Array Broadcasting "Broadcasting" is a fancy term for how Python--specifically, NumPy--handles vectorized operations when arrays of differing shapes are involved. (this is, in some sense, "how the sausage is made") When you write code like this: ``` x = np.array([1, 2, 3, 4, 5]) x += 10 print(x) ``` how does Python know that you want to add the scalar value 10 to each element of the vector `x`? Because (in a word) **broadcasting.** *Broadcasting* is the operation through which a low(er)-dimensional array is in some way "replicated" to be the same shape as a high(er)-dimensional array. We saw this in our previous example: the low-dimensional **scalar** was replicated, or *broadcast*, to each element of the array `x` so that the addition operation could be performed element-wise. This concept can be generalized to higher-dimensional NumPy arrays. ``` zeros = np.zeros(shape = (3, 4)) print(zeros) zeros += 1 # Just add 1. print(zeros) ``` In this example, the scalar value 1 is broadcast to all the elements of `zeros`, converting the operation to element-wise addition. This all happens under the NumPy hood--we don't see it! It "just works"...most of the time. There are some rules that broadcasting abides by. Essentially, dimensions of arrays need to be "compatible" in order for broadcasting to work. 
"Compatible" is defined as - both dimensions are of equal size (e.g., both have the same number of rows) - one of them is 1 (the scalar case) If these rules aren't met, you get all kinds of strange errors: ``` x = np.zeros(shape = (3, 3)) y = np.ones(4) x + y ``` But on some intuitive level, this hopefully makes sense: there's no reasonable arithmetic operation that can be performed when you have one $3 \times 3$ matrix and a vector of length 4. Draw them out if you need to convince yourself--how would add a $3 \times 3$ matrix and a 4-length vector? Or subtract them? There's no way to do it, and Python knows that. To be rigorous: it's the *trailing* dimensions / axes that you want to make sure line up (as in, the last number that shows up when you do the `.shape` property): ``` x = np.zeros(shape = (3, 4)) y = np.array([1, 2, 3, 4]) z = x + y print(z) ``` In this example, the shape of `x` is (3, 4). The shape of `y` is just 4. Their trailing axes are both 4, therefore the "smaller" array will be broadcast to fit the size of the larger array, and the operation (addition, in this case) is performed element-wise. ## Part 3: "Fancy" Indexing Hopefully you have at least an intuitive understanding of how indexing works so far. Unfortunately, it gets more complicated, but still retains a modicum of simplicity. First: indexing by boolean masks. ### Boolean indexing We've already seen that you can index by integers. Using the colon operator, you can even specify ranges, slicing out entire swaths of rows and columns. But suppose we want something very specific; data in our array which satisfies certain criteria, as opposed to data which is found at certain indices? Put another way: can we pull data out of an array that meets certain conditions? Let's say you have some data. ``` x = np.random.standard_normal(size = (7, 4)) print(x) ``` This is randomly generated data, yes, but it could easily be 7 data points in 4 dimensions. That is, we have 7 observations of variables with 4 descriptors. Perhaps it's - 7 people who are described by their height, weight, age, and 40-yard dash time, or - Data on 7 video games, each described by their PC Gamer rating, Steam downloads count, average number of active players, and total cheating complaints - ...insert your own example here! Whatever our data, a common first step before any analysis involves some kind of preprocessing (this is just a fancy term for "making sure the data make sense"). If the example we're looking at is the video game scenario from the previous slide, then we know that any negative numbers are junk. After all, how can you have a negative rating? Or a negative number of active players? Perhaps some goofy players decided to make bogus ratings just for the lulz. Funny to them, perhaps, but not exactly useful to you when you're trying to write an algorithm to recommend games to players based on their ratings. So, you have to "clean" the data a bit. So our first course of action might be to set all negative numbers in the data to 0. We could potentially set up a pair of loops--you should know how to do this!--but it's much easier (and faster) to use *boolean indexing*. First, we create a *mask*. This is what it sounds like: it "masks" certain portions of the data we don't want to change (in this case, all the numbers greater than 0, since we're assuming they're already valid). 
``` mask = x < 0 print(mask) ``` Just for your reference, here's the original data: notice how, in looking at the data below and the boolean mask above, all the spots where there are negative numbers also correspond to "`True`" in the mask? ``` print(x) ``` Now, we can use our mask to access *only* the indices we want to set to 0. ``` x[mask] = 0 print(x) ``` *voilà!* Every negative number has been set to 0, and all the other values were left unchanged. Now we can continue with whatever analysis we may have had in mind. One small caveat with boolean indexing. - Yes, you can string multiple boolean conditions together, as you may recall doing in the lecture with conditionals. - But... **`and` and `or` DO NOT WORK.** You have to use the arithmetic versions of the operators: `&` (for `and`) and `|` (for `or`). ``` mask = (x < 1) & (x > 0.5) # True for any value less than 1 but greater than 0.5 x[mask] = 99 # We're setting any value in this matrix < 1 but > 0.5 to 99 print(x) ``` ### Fancy Indexing "Fancy" indexing is a term coined by the NumPy community to refer to this little indexing trick. To explain is simple enough: **fancy indexing allows you to index arrays with other [integer] arrays.** Before you go down the Indexing Inception rabbit hole, just keep in mind: it's basically like slicing, but you're condensing the ability to perform multiple slicings all at one time, instead of one at a time. Now, to demonstrate: Let's build a 2D array that, for the sake of simplicity, has across each row the index of that row. ``` matrix = np.empty(shape = (8, 4)) for i in range(8): matrix[i] = i # Broadcasting is happening here! print(matrix) ``` We have 8 rows and 4 columns, where each row is a 4-element vector of the same value repeated across the columns, and that value is the index of the row. In addition to slicing and boolean indexing, we can also use *other NumPy arrays* to very selectively pick and choose what elements we want, and **even the order in which we want them**. Let's say I want rows 7, 0, 5, and 2. In that order. ``` indices = np.array([7, 0, 5, 2]) # Here's my "indexing" array--note the order of the numbers. print(matrix[indices]) ``` Ta-daaa! Pretty spiffy! Row 7 shows up first (we know that because of the straight 7s), followed by row 0, then row 5, then row 2. You could get the same thing if you did `matrix[7]`, then `matrix[0]`, then `matrix[5]`, and finally `matrix[2]`, and then stacked the results into that final matrix. But this just condenses all those steps. But wait, there's more! Rather than just specifying one dimension, you can provide *tuples* of NumPy arrays that very explicitly pick out certain elements (in a certain order) from another NumPy array. ``` matrix = np.arange(32).reshape((8, 4)) print(matrix) # This 8x4 matrix has integer elements that increment by 1 column-wise, then row-wise. indices = ( np.array([1, 7, 4]), np.array([3, 0, 1]) ) # This is a tuple of 2 NumPy arrays! print(matrix[indices]) ``` Ok, this will take a little explaining, bear with me: When you pass in tuples of NumPy arrays as indices, they act as $(x, y)$ coordinate pairs: the first NumPy array of the tuple is the list of $x$ coordinates, while the second NumPy array is the list of corresponding $y$ coordinates. In this way, the corresponding elements of the two NumPy arrays in the tuple give you the row and column indices to be selected from the original NumPy array. 
In our previous example, this was our tuple of indices: ``` ( np.array([1, 7, 4]), np.array([3, 0, 1]) ) ``` The $x$ coordinates are in `array([1, 7, 4])`, and the $y$ coordinates are in `array([3, 0, 1])`. More concretely: - The first element to take from the matrix is `(1, 3)`--this is the 7 that was printed! - The second element is at `(7, 0)`--this is the 28 that followed. - The final element is at `(4, 1)`--this corresponds to the 17! **Go back a few slides to the $8 \times 4$ `matrix` array to convince yourself this is what is happening.** Fancy indexing can be tricky at first, but it can be very useful when you want to pull very specific elements out of a NumPy array and in a very specific order. Fancy indexing is **super advanced stuff**, but if you put in the time to practice, it can all but completely eliminate the need to use loops. Don't worry if you're confused right now. That's absolutely alright--this lecture and last Friday's are **easily the most difficult if you've never done any programming before**. Be patient with yourself, practice what you see in this lecture using the code (and tweaking it to see what happens), and ask questions! ## Review Questions Some questions to discuss and consider: 1: Given some arbitrary NumPy array and only access to its `.shape` attribute (as well as its elements), describe (in words or in Python pseudocode) how you would compute exactly how many individual elements exist in the array. 2: Broadcasting hints that there is more happening under the hood than meets the eye with NumPy. With this in mind, do you think it would be more or less efficient to write a loop yourself in Python to add a scalar to each element in a Python list, rather than use NumPy broadcasting? Why or why not? 3: I have a 2D matrix, where the rows represent individual gamers, and the columns represent games. There's a "1" in the column if the gamer won that game, and a "0" if they lost. Describe how you might use boolean indexing to select only the rows corresponding to gamers whose average score was above a certain `threshold`. 4: Show how you could reverse the elements of a 1D NumPy array using one line of code, no loops, and fancy indexing. 5: Let's say I create the following NumPy array: `a = np.zeros(shape = (100, 50, 25, 10))`. What is the shape of the resulting array when I index it as follows: `a[:, 0]`? ## Course Administrivia - **How is A4 going?** Due tonight! - **On Wednesday, June 28, I'll host an online midterm Q&A session in Slack.** There will be a link posted in the Slack chat to a Google Hangouts room, where you're welcome to join and post questions for me to answer. This review session will be held from **12:30pm - 2:30pm**. - The midterm (on **Thursday, June 29**) will be held entirely on JupyterHub. It will be available starting at midnight on the 29th, and will be collected by JupyterHub exactly 24 hours later (midnight to midnight). **You are welcome to take the exam anytime in that 24-hour window**, but once you click the "Fetch" button in JupyterHub, **you will have precisely 90 minutes from that moment to complete and submit the midterm**. I'll re-post these details in the write-up for Wednesday's review session. - Please post in `#questions` if you are confused about any of this--lecture material, homework assignments, or midterm logistics! ## Additional Resources 1. McKinney, Wes. *Python for Data Analysis*. 2012. ISBN-13: 860-1400898857 2.
NumPy documentation on array broadcasting http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html 3. NumPy documentation on indexing http://docs.scipy.org/doc/numpy/user/basics.indexing.html 4. *Broadcasting Arrays in NumPy*. http://eli.thegreenplace.net/2015/broadcasting-arrays-in-numpy/
# Generative Adversarial Network In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits! GANs were [first reported on](https://arxiv.org/abs/1406.2661) in 2014 by Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out: * [Pix2Pix](https://affinelayer.com/pixsrv/) * [CycleGAN](https://github.com/junyanz/CycleGAN) * [A whole list](https://github.com/wiseodd/generative-models) The idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts if the data it's received is real or fake. The generator is trained to fool the discriminator; it wants to output data that looks _as close as possible_ to real data. And the discriminator is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistinguishable from real data to the discriminator. ![GAN diagram](assets/gan_diagram.png) The general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to construct its fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can fool the discriminator. The output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates a real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow. ``` %matplotlib inline import pickle as pkl import numpy as np import tensorflow as tf import matplotlib.pyplot as plt from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets('MNIST_data') ``` ## Model Inputs First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input `inputs_real` and the generator input `inputs_z`. We'll assign them the appropriate sizes for each of the networks. ``` def model_inputs(real_dim, z_dim): inputs_real = tf.placeholder(tf.float32, (None, real_dim), name='input_real') inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z') return inputs_real, inputs_z ``` ## Generator network ![GAN Network](assets/gan_network.png) Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values. #### Variable Scope Here we need to use `tf.variable_scope` for two reasons. Firstly, we're going to make sure all the variable names start with `generator`. Similarly, we'll prepend `discriminator` to the discriminator variables. This will help out later when we're training the separate networks. We could just use `tf.name_scope` to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also _sample from it_ as we're training and after training.
The discriminator will need to share variables between the fake and real input images. So, we can use the `reuse` keyword for `tf.variable_scope` to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again. To use `tf.variable_scope`, you use a `with` statement: ```python with tf.variable_scope('scope_name', reuse=False): # code here ``` Here's more from [the TensorFlow documentation](https://www.tensorflow.org/programmers_guide/variable_scope#the_problem) to get another look at using `tf.variable_scope`. #### Leaky ReLU TensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one. For this, you can take the outputs from a linear fully connected layer and pass them to `tf.maximum`. Typically, a parameter `alpha` sets the magnitude of the output for negative values. So, the output for negative input (`x`) values is `alpha*x`, and the output for positive `x` is `x`: $$ f(x) = max(\alpha * x, x) $$ #### Tanh Output The generator has been found to perform best with $tanh$ for the generator output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1. ``` def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01): with tf.variable_scope('generator', reuse=reuse): # Hidden layer h1 = tf.layers.dense(z, n_units, activation=None) # Leaky ReLU h1 = tf.maximum(alpha * h1, h1) # Logits and tanh output logits = tf.layers.dense(h1, out_dim, activation=None) out = tf.tanh(logits) return out ``` ## Discriminator The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer. ``` def discriminator(x, n_units=128, reuse=False, alpha=0.01): with tf.variable_scope('discriminator', reuse=reuse): # Hidden layer h1 = tf.layers.dense(x, n_units, activation=None) # Leaky ReLU h1 = tf.maximum(alpha * h1, h1) logits = tf.layers.dense(h1, 1, activation=None) out = tf.sigmoid(logits) return out, logits ``` ## Hyperparameters ``` # Size of input image to discriminator input_size = 784 # Size of latent vector to generator z_size = 100 # Sizes of hidden layers in generator and discriminator g_hidden_size = 128 d_hidden_size = 128 # Leak factor for leaky ReLU alpha = 0.01 # Smoothing smooth = 0.1 ``` ## Build network Now we're building the network from the functions defined above. The first step is to get our inputs, `input_real, input_z` from `model_inputs` using the sizes of the input and z. Then, we'll create the generator, `generator(input_z, input_size)`. This builds the generator with the appropriate input and output sizes. Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as `g_model`. So the real data discriminator is `discriminator(input_real)` while the fake discriminator is `discriminator(g_model, reuse=True)`. ``` tf.reset_default_graph() # Create our input placeholders input_real, input_z = model_inputs(input_size, z_size) # Build the model, using the hidden layer sizes defined in the hyperparameters above g_model = generator(input_z, input_size, n_units=g_hidden_size, alpha=alpha) # g_model is the generator output d_model_real, d_logits_real = discriminator(input_real, n_units=d_hidden_size, alpha=alpha) d_model_fake, d_logits_fake = discriminator(g_model, reuse=True, n_units=d_hidden_size, alpha=alpha) ``` ## Discriminator and Generator Losses Now we need to calculate the losses, which is a little tricky.
For the discriminator, the total loss is the sum of the losses for real and fake images, `d_loss = d_loss_real + d_loss_fake`. The losses will be sigmoid cross-entropies, which we can get with `tf.nn.sigmoid_cross_entropy_with_logits`. We'll also wrap that in `tf.reduce_mean` to get the mean for all the images in the batch. So the losses will look something like ```python tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels)) ``` For the real image logits, we'll use `d_logits_real` which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter `smooth`. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like `labels = tf.ones_like(tensor) * (1 - smooth)` The discriminator loss for the fake data is similar. The logits are `d_logits_fake`, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that. Finally, the generator losses are using `d_logits_fake`, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images. ``` # Calculate losses d_loss_real = tf.reduce_mean( tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_logits_real) * (1 - smooth))) d_loss_fake = tf.reduce_mean( tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_logits_fake))) d_loss = d_loss_real + d_loss_fake g_loss = tf.reduce_mean( tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_logits_fake))) ``` ## Optimizers We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use `tf.trainable_variables()`. This creates a list of all the variables we've defined in our graph. For the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with `generator`. So, we just need to iterate through the list from `tf.trainable_variables()` and keep the variables that start with `generator`. Each variable object has an attribute `name` which holds the name of the variable as a string (`var.name == 'weights_0'` for instance). We can do something similar with the discriminator. All the variables in the discriminator start with `discriminator`. Then, in the optimizer we pass the variable lists to `var_list` in the `minimize` method. This tells the optimizer to only update the listed variables. Something like `tf.train.AdamOptimizer().minimize(loss, var_list=var_list)` will only train the variables in `var_list`.
``` # Optimizers learning_rate = 0.002 # Get the trainable_variables, split into G and D parts t_vars = tf.trainable_variables() g_vars = [var for var in t_vars if var.name.startswith('generator')] d_vars = [var for var in t_vars if var.name.startswith('discriminator')] d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars) g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars) ``` ## Training ``` batch_size = 100 epochs = 100 samples = [] losses = [] # Only save generator variables saver = tf.train.Saver(var_list=g_vars) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for e in range(epochs): for ii in range(mnist.train.num_examples//batch_size): batch = mnist.train.next_batch(batch_size) # Get images, reshape and rescale to pass to D batch_images = batch[0].reshape((batch_size, 784)) batch_images = batch_images*2 - 1 # Sample random noise for G batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size)) # Run optimizers _ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z}) _ = sess.run(g_train_opt, feed_dict={input_z: batch_z}) # At the end of each epoch, get the losses and print them out train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images}) train_loss_g = g_loss.eval({input_z: batch_z}) print("Epoch {}/{}...".format(e+1, epochs), "Discriminator Loss: {:.4f}...".format(train_loss_d), "Generator Loss: {:.4f}".format(train_loss_g)) # Save losses to view after training losses.append((train_loss_d, train_loss_g)) # Sample from generator as we're training for viewing afterwards sample_z = np.random.uniform(-1, 1, size=(16, z_size)) gen_samples = sess.run( generator(input_z, input_size, reuse=True), feed_dict={input_z: sample_z}) samples.append(gen_samples) saver.save(sess, './checkpoints/generator.ckpt') # Save training generator samples with open('train_samples.pkl', 'wb') as f: pkl.dump(samples, f) ``` ## Training loss Here we'll check out the training losses for the generator and discriminator. ``` fig, ax = plt.subplots() losses = np.array(losses) plt.plot(losses.T[0], label='Discriminator') plt.plot(losses.T[1], label='Generator') plt.title("Training Losses") plt.legend() ``` ## Generator samples from training Here we can view samples of images from the generator. First we'll look at images taken while training. ``` def view_samples(epoch, samples): fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True) for ax, img in zip(axes.flatten(), samples[epoch]): ax.xaxis.set_visible(False) ax.yaxis.set_visible(False) im = ax.imshow(img.reshape((28,28)), cmap='Greys_r') return fig, axes # Load samples from generator taken while training with open('train_samples.pkl', 'rb') as f: samples = pkl.load(f) ``` These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 1, 7, 3, 2. Since this is just a sample, it isn't representative of the full range of images this generator can make. ``` _ = view_samples(-1, samples) ``` Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion! 
``` rows, cols = 10, 6 fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True) for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes): for img, ax in zip(sample[::int(len(sample)/cols)], ax_row): ax.imshow(img.reshape((28,28)), cmap='Greys_r') ax.xaxis.set_visible(False) ax.yaxis.set_visible(False) ``` It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise like 1s and 9s. ## Sampling from the generator We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples! ``` saver = tf.train.Saver(var_list=g_vars) with tf.Session() as sess: saver.restore(sess, tf.train.latest_checkpoint('checkpoints')) sample_z = np.random.uniform(-1, 1, size=(16, z_size)) gen_samples = sess.run( generator(input_z, input_size, reuse=True), feed_dict={input_z: sample_z}) _ = view_samples(0, [gen_samples]) ```
This notebook will walk through the problem of a naive estimate in the context of program evaluation! This type of work is usually done in Stata (within academia); however, I have chosen to use Python instead! ``` import pandas as pd import seaborn as sns import matplotlib.pyplot as plt import statsmodels.api as sm from session1_helper import * %matplotlib inline plt.rcParams["figure.figsize"] = (15,10) plt.rcParams["xtick.labelsize"] = 16 plt.rcParams["ytick.labelsize"] = 16 plt.rcParams["axes.labelsize"] = 20 plt.rcParams['legend.fontsize'] = 20 ``` Below I am loading in an unbalanced dataset in order to highlight the selection problem that results in a biased naive estimate. Later on, I will load a balanced dataset and we will explore what implications that has for the naive estimate. ``` data = pd.read_stata("Dataset_Unbalanced.dta") data.head() ``` - income is the person’s annual salary - D is a dummy of 1 if person went to Harris, 0 if rejected applicant - collegegpa is the person’s GPA from college - collegetop50 is a dummy of 1 if person went to a college in the top-50 ranking, 0 if didn’t - salarybefore is how much the person was making before applying to Harris - parentsincome is how much the person’s parents made per year when she applied to Harris As such, the column D represents whether a person had the treatment (i.e. is a Harris student) or was denied treatment (rejected from Harris). ``` data.shape ``` The treated average mean is equivalent to $Y_{1, D=1}$ - imagine I had put a "bar" over the $Y$, because we are talking about the average. I omitted the bar as it is not rendering properly! ``` treated_avg_mean = data[data['D'] == 1.0]['income'].mean() treated_avg_mean ``` The untreated average mean is equivalent to $Y_{0, D=0}$ - again, don't forget the bar over the $Y$ ``` untreated_avg_mean = data[data['D'] == 0.0]['income'].mean() untreated_avg_mean naive_estimate = treated_avg_mean - untreated_avg_mean naive_estimate ``` Let's now explore the data in a bunch of different ways to see if there is any hint of a selection problem. By selection problem, I mean that there are significant differences between the treatment and non-treatment groups. That is to say, there are statistically significant differences in the two groups regardless of treatment; thus, we will show that the naive estimate is a **biased** estimate for estimating the impact of obtaining treatment. By biased, I mean the naive estimate over- or under-emphasizes the effect of treatment.
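To make the source of that bias explicit, here is a standard decomposition of the naive estimate, written with expectations (add and subtract $E[Y_0 \mid D=1]$, the average untreated outcome of the treated group, which we never observe):

$$ \underbrace{E[Y_1 \mid D=1] - E[Y_0 \mid D=0]}_{\text{naive estimate}} = \underbrace{E[Y_1 \mid D=1] - E[Y_0 \mid D=1]}_{\text{effect of treatment on the treated}} + \underbrace{E[Y_0 \mid D=1] - E[Y_0 \mid D=0]}_{\text{selection bias}} $$

If Harris admits applicants who would have earned more even without attending, the selection bias term is positive and the naive estimate overstates the treatment effect - which is exactly what the unbalanced data below will show.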
``` sns.boxplot(x=data['D'], y=data['collegegpa'], width=0.2); plt.ylabel("GPA in College") plt.xlabel("0.0 : Untreated, 1.0: Treated") treated_group = data[data['D'] == 1.0] treated_group.head() untreated_group = data[data['D'] == 0.0] untreated_group.head() subset = data.groupby(by=['D', 'collegetop50']).count().reset_index() sns.barplot(x=subset['D'], y=subset['income'], hue=subset['collegetop50'], hue_order=[1,0]); sns.boxplot(x=data['D'], y=data['salarybefore'], width=0.2); g = sns.FacetGrid(data, hue="D", size=15, aspect=2) g = g.map(plt.hist, "salarybefore").add_legend() plt.ylabel("Density") plt.xlabel("Salary") g = sns.FacetGrid(data, hue="D", size=15, aspect=2) g = g.map(sns.distplot, "salarybefore").add_legend() plt.ylabel("Density") plt.xlabel("Salary") sns.boxplot(x=data['D'], y=data['parentsincome'], width=0.2); g = sns.FacetGrid(data, hue="D", size=15, aspect=2) g = g.map(plt.hist, "parentsincome").add_legend() plt.ylabel("Density") plt.xlabel("Parents Income") g = sns.FacetGrid(data, hue="D", size=8, aspect=2) g = g.map(sns.distplot, "parentsincome").add_legend() plt.ylabel("Density") plt.xlabel("Parents Income") plt.ylim((0,0.000009)) # Regress collegegpa on D X = sm.add_constant(data['D'].values) y = data['collegegpa'].values model = sm.OLS(y, X) results = model.fit() print(results.summary()) # Regress collegetop50 on D X = sm.add_constant(data['D'].values) y = data['collegetop50'].values model = sm.OLS(y, X) results = model.fit() print(results.summary()) # Regress salarybefore on D X = sm.add_constant(data['D'].values) y = data['salarybefore'].values model = sm.OLS(y, X) results = model.fit() print(results.summary()) # Regress parentsincome on D X = sm.add_constant(data['D'].values) y = data['parentsincome'].values model = sm.OLS(y, X) results = model.fit() print(results.summary()) # Regress income on D, collegegpa, collegetop50, salarybefore, parentsincome X = sm.add_constant(data[['D', 'collegegpa', 'collegetop50', 'salarybefore', 'parentsincome']].values) y = data['income'].values model = sm.OLS(y, X) results = model.fit() print(results.summary()) naive_estimate treatment_coeff = results.params[1] treatment_coeff naive_estimate - treatment_coeff ``` Now, if we assume that all the observable featurs we have fully capture the selection problem, then we get a coefficient on the treatment ($D$) of \$27,829. That is to say, the effect of going to harris, while controlling for all other variables, results in an average increase of \$27,829 in income. Our naive calculation implied that the average increase was: $88,033 Thus, our naive calculation was overestimating by: $60,203 The takeaway here is that when we have a selection problem, out naive estimate will be biased! ### The naive estimator with no selection problem! ``` data = pd.read_stata("Dataset_Balanced.dta") data.head() ``` Box plots look pretty identical! - Doesnt seem to be a selection problem! ``` gen_boxplot(data, "collegegpa", "D", "Untreated: 0, Treated: 1", "College GPA", width=0.2) ``` Same with the barplots! ``` gen_grouped_bar(data, ['D', 'collegetop50'], x='D', y='income', hue='collegetop50', hue_order=[1,0], xlabel="Untreated: 0, Treated: 1") ``` Looking pretty good here too! 
```
gen_overlapped_dist(data, "salarybefore", 'D', 15, 2, "Salary", "Density")

data.head()

regress(data=data, feature_lst=['D'], target='collegegpa')
regress(data=data, feature_lst=['D'], target='collegetop50')
regress(data=data, feature_lst=['D'], target='salarybefore')
regress(data=data, feature_lst=['D'], target='parentsincome')
```

We see no statistical significance in any of the above! Thus, there are no significant differences on these features between the two groups! Let's now recalculate the naive estimator!

```
treated_avg_mean = data[data['D'] == 1.0]['income'].mean()
treated_avg_mean

untreated_avg_mean = data[data['D'] == 0.0]['income'].mean()
untreated_avg_mean

naive_estimate = treated_avg_mean - untreated_avg_mean
naive_estimate
```

Whoa - we get exactly the same difference. Hmm, let's now control for all our features and see what happens!

```
regress(data, feature_lst=list(data.columns[1:].values), target="income")
```

From the above, we see that the coefficient for $D$ is 88530.0.

```
naive_estimate - 8.853e04
```

We see that the naive estimate now slightly underestimates the effect, but the regression coefficient is very, very close to the naive estimate. It is safe to say that in this instance there is no selection problem! Furthermore, notice that the F-statistic is NOT significant at the 5% level! Let's change our regression a little to see if our adjusted $R^2$ changes.

```
regress(data, ['D'], target='income')
```

Yup - our adjusted $R^2$ increased! As such, it's better to leave out all the other covariates altogether!

### Conclusion

- The naive estimator is unbiased when there are no selection problems.
- Empirically, there will **always** be selection problems.
- When faced with selection problems, controlling for covariates helps give a better estimate of the actual treatment effect. However, this is not an easy task!

#### What could go wrong?

- We might make the wrong selection of covariates! We might never find **the** covariate that explains the relationship.
- Collecting data on certain observable covariates might be costly or unethical!
- There are plenty of unobserved covariates that may explain our Y - but we will never really know. Furthermore, let's say you did identify an unobservable covariate that could play a significant role - you still wouldn't be able to measure it!

Program evaluation is about solving this selection problem amongst observable and unobservable covariates. There are many ways to help observe the counterfactual $E(Y_{0i})$:

- RCTs
- diff-in-diff (see the sketch below)
- instrumental variables
- regression discontinuity
- matching

By using statistical models, we hope to estimate the counterfactuals - that is, the hypothetical situation we would never be able to 'see'. In essence, we are trying to estimate what we would have observed if the treated had **not** been treated!
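Since the course dataset does not lend itself to demonstrating all of these designs, here is a minimal, self-contained sketch of one of them - difference-in-differences - on simulated data. Everything in it (variable names, numbers, and the "true" effect of 5,000) is made up purely for illustration and is not part of the original material:

```
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_units = 1000

units = pd.DataFrame({"unit": np.arange(n_units),
                      "treated_group": rng.integers(0, 2, n_units)})
# Treated units start from a higher income level -> a selection problem on levels
units["baseline"] = rng.normal(50_000, 10_000, n_units) + 8_000 * units["treated_group"]

rows = []
for period in (0, 1):                                    # 0 = before, 1 = after
    df = units.copy()
    df["post"] = period
    true_effect = 5_000 * df["treated_group"] * period   # effect only for treated units, post-treatment
    df["income"] = (df["baseline"]
                    + 2_000 * period                     # common time trend shared by both groups
                    + true_effect
                    + rng.normal(0, 3_000, n_units))
    rows.append(df)
panel = pd.concat(rows, ignore_index=True)

# Naive post-period comparison absorbs the baseline gap between groups (biased)
post_df = panel[panel["post"] == 1]
naive = (post_df[post_df["treated_group"] == 1]["income"].mean()
         - post_df[post_df["treated_group"] == 0]["income"].mean())

# Difference-in-differences: the interaction coefficient recovers roughly 5,000
did = smf.ols("income ~ treated_group * post", data=panel).fit()
print(naive, did.params["treated_group:post"])
```

The naive comparison picks up the pre-existing income gap on top of the treatment effect, while the interaction coefficient strips the gap out by comparing *changes* across the two groups - which is exactly the kind of counterfactual reasoning the methods listed above formalize.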
# **Working memory training**: Motion and outliers control Step 0: Loading libraries and basic settings ---------------------------------------- ``` %matplotlib inline import warnings warnings.filterwarnings('ignore') import sys sys.path.append("..") import os import numpy as np import pandas as pd import seaborn as sns from scipy import stats from fctools import denoise, figures, stats from nistats.design_matrix import make_first_level_design_matrix # Matplotlib settings import matplotlib.pyplot as plt plt.style.use('seaborn-white') plt.rcParams['font.family'] = 'Helvetica' small = 25 medium = 25 bigger = 25 plt.rc('font', size=small) # controls default text sizes plt.rc('axes', titlesize=small) # fontsize of the axes title plt.rc('axes', linewidth=2.2) plt.rc('axes', labelsize=medium) # fontsize of the x and y labels plt.rc('xtick', labelsize=small) # fontsize of the tick labels plt.rc('ytick', labelsize=small) # fontsize of the tick labels plt.rc('legend', fontsize=small) # legend fontsize plt.rc('figure', titlesize=bigger) # fontsize of the figure title plt.rc('lines', linewidth=2.2, color='gray') ``` Step 1: Data preparation ---------------------------------------- ``` # Setting main input directory top_dir = '/media/finc/Elements/LearningBrain_fmriprep/' out_dir = '/home/finc/Dropbox/Projects/LearningBrain/figures/' # Selecting subjects who finished the study groups = pd.read_csv('../data/behavioral/group_assignment.csv') trained = (groups.group == 'Experimental') | (groups.group == 'Control') trained_subs = groups[trained] subs = trained_subs['sub'].values print(f'Sample size: {len(subs)}') # Setting sessions and task names sess = ['ses-1', 'ses-2', 'ses-3', 'ses-4'] #tasks = ['rest'] tasks = ['dualnback'] # Loading events events = pd.read_csv('../support/onsets_dualnback.csv') condition = denoise.get_condition_column(events) condition['no'] = np.arange(len(condition)) condition.head() ``` Step 2: Looping over subjects and merging their confound files ----------------------------------------------------------------- ``` tasks = ['rest', 'dualnback'] confounds = pd.DataFrame() for sub in subs: for ses in sess: for task in tasks: # Getting directory/file names sub_dir = f'{top_dir}{sub}/{ses}/func/' sub_name = f'{sub}_{ses}_task-{task}' # Loading confound data confounds_path1 = f'{sub_dir}{sub_name}_bold_confounds_clean_acompcor.csv' confounds_path2 = f'{sub_dir}{sub_name}_bold_confounds.tsv' if not os.path.exists(confounds_path1): print(f'{sub}{ses}{task} does not exist') else: conf1 = pd.read_csv(confounds_path1) conf1 = pd.DataFrame(conf1, columns =['scrubbing']) conf1['sub'] = sub conf1['ses'] = ses conf1['task'] = task conf1['no'] = np.arange(len(conf1)) conf2 = pd.read_csv(confounds_path2, delimiter = '\t') conf2 = pd.DataFrame(conf2, columns =['FramewiseDisplacement']) conf2.FramewiseDisplacement[0] = 0 conf2['no'] = np.arange(len(conf2)) conf_all = pd.merge(conf1, conf2, on = 'no') if task == 'rest': conf_all['condition'] = 'rest' else: conf_all = pd.merge(conf_all, condition, on = 'no') confounds = pd.concat((confounds, conf_all)) confounds = pd.merge(confounds, trained_subs, on = 'sub') confounds = confounds.rename(index=str, columns={"group": "Group", "ses": "Session", "condition": "Condition" }) confounds.to_csv('/home/finc/Dropbox/Projects/LearningBrain/github/LearningBrain_networks/data/neuroimaging/coundfounds_summary.csv', sep = ',', index = False) confounds.head() # Read confounds from .csv confounds = 
pd.read_csv('/home/finc/Dropbox/Projects/LearningBrain/data/neuroimaging/01-extracted_timeseries/coundfounds_summary.csv') confounds.head() ``` Step 3: Summarizing pandas dataframe -------------------------------------- ``` f = {'scrubbing':['sum'], 'FramewiseDisplacement':['mean']} # Total outlier_all = confounds.groupby(['sub','Session','Group','task']).agg(f).reset_index() outlier_all['OutlierPerc'] = [((row.scrubbing['sum']/340)*100) if row.task[0] == 'dualnback' else ((row.scrubbing['sum']/305)*100) for i, row in outlier_all.iterrows()] #outlier_all['OutlierPerc'] = (outlier_all.scrubbing['sum']/340)*100 outlier_all['FD'] = outlier_all.FramewiseDisplacement['mean'] # Grouped by condition outlier_cond = confounds.groupby(['sub','Session','Group', 'Condition']).agg(f).reset_index() outlier_cond['OutlierPerc'] = (outlier_cond.scrubbing['sum']/150)*100 outlier_cond['FD'] = outlier_cond.FramewiseDisplacement['mean'] outlier_cond = outlier_cond[outlier_cond.Condition != 'intro'] outlier_all ``` Step 4: Plotting ----------------- ``` # Setting colors for groups col_groups = ['#379dbc','#ee8c00'] sns.set_palette(col_groups) sns.palplot(sns.color_palette(col_groups)) # Plotting mean total framewise displacement (FD) ax = figures.swarm_box_plot(x="Session", y="FD", hue = 'Group', data = outlier_all[outlier_all.task == 'dualnback']) ax.set(title=' ') ax.set(ylabel='Mean FD (mm)') plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.) plt.hlines(0.2, -1, 4, colors='darkgray', linestyles ='dashed') plt.setp(ax.spines.values(), linewidth=2.2) plt.ylim(0.02, 0.35) plt.savefig(f'{out_dir}fig_S1a.pdf', bbox_inches="tight", dpi=300) # Plotting total percent of outlier scans ax = figures.swarm_box_plot(x="Session", y="OutlierPerc", hue = 'Group', data = outlier_all[outlier_all.task == 'dualnback']) ax.set(title=' ') ax.set(ylabel='Outlier volumes (%)') plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.) plt.hlines(10, -1, 4, colors='darkgray', linestyles ='dashed') plt.setp(ax.spines.values(), linewidth=2.2) plt.ylim(-1, 23) plt.savefig(f'{out_dir}fig_S1b.pdf', bbox_inches="tight", dpi=300) # Plotting mean total framewise displacement (FD) ax = figures.swarm_box_plot(x="Session", y="FD", hue = 'Group', data = outlier_all[outlier_all.task == 'rest']) ax.set(title=' ') ax.set(ylabel='Mean FD (mm)') plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.) plt.hlines(0.2, -1, 4, colors='darkgray', linestyles ='dashed') plt.setp(ax.spines.values(), linewidth=2.2) plt.ylim(0.02, 0.35) plt.savefig(f'{out_dir}fig_S1a.pdf', bbox_inches="tight", dpi=300) # Plotting total percent of outlier scans ax = figures.swarm_box_plot(x="Session", y="OutlierPerc", hue = 'Group', data = outlier_all[(outlier_all.task == 'rest')]) ax.set(title=' ') ax.set(ylabel='Outlier volumes (%)') plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.) plt.hlines(10, -1, 4, colors='darkgray', linestyles ='dashed') plt.setp(ax.spines.values(), linewidth=2.2) plt.ylim(-1, 23) plt.savefig(f'{out_dir}fig_S1b.pdf', bbox_inches="tight", dpi=300) #--- setting colors for conditions col_cond = ['#88d958','#f98766'] sns.set_palette(col_cond) sns.palplot(sns.color_palette(col_cond)) outlier_cond.head() # Plotting mean framewise displacement (FD) for each condition: Control ax = figures.swarm_box_plot(x="Session", y="FD", hue = 'Condition', data = outlier_cond[outlier_cond.Group == 'Control']) ax.set(title='Control') ax.set(ylabel='Mean FD (mm)') plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.) 
plt.hlines(0.2, -1, 4, colors='darkgray', linestyles ='dashed') plt.ylim(0.02, 0.35) plt.setp(ax.spines.values(), linewidth=2.2) plt.savefig(f'{out_dir}fig_S1c.pdf', bbox_inches="tight", dpi=300) # Plotting mean framewise displacement (FD) for each condition: Control ax = figures.swarm_box_plot(x="Session", y="FD", hue = 'Condition', data = outlier_cond[outlier_cond.Group == 'Experimental']) ax.set(title='Experimental') ax.set(ylabel='Mean FD (mm)') plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.) plt.hlines(0.2, -1, 4, colors='darkgray', linestyles ='dashed') plt.setp(ax.spines.values(), linewidth=2.2) plt.ylim(0.02, 0.35) plt.savefig(f'{out_dir}fig_S1d.pdf', bbox_inches="tight", dpi=300) # Plotting % outlier scans for each condition: Control ax = figures.swarm_box_plot(x="Session", y="OutlierPerc", hue = 'Condition', data = outlier_cond[outlier_cond.Group == 'Control']) ax.set(title='Control') ax.set(ylabel='Outlier volumes (%)') plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.) plt.hlines(10, -1, 4, colors='darkgray', linestyles ='dashed') plt.setp(ax.spines.values(), linewidth=2.2) plt.ylim(-1, 23) plt.savefig(f'{out_dir}fig_S1e.pdf', bbox_inches="tight", dpi=300) # Plotting % outlier scans for each condition: Control ax = figures.swarm_box_plot(x="Session", y="OutlierPerc", hue = 'Condition', data = outlier_cond[outlier_cond.Group == 'Experimental']) ax.set(title='Experimental') ax.set(ylabel='Outlier volumes (%)') plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.) plt.hlines(10, -1, 4, colors='darkgray', linestyles ='dashed') plt.setp(ax.spines.values(), linewidth=2.2) plt.ylim(-1, 23) plt.savefig(f'{out_dir}fig_S1f.pdf', bbox_inches="tight", dpi=300) ``` Step 5: Deciding which subjects to exclude ----------------------- ``` outlier_all.head() criteria = (outlier_all.FD > 0.2) | (outlier_all.OutlierPerc > 10) excluded = outlier_all[criteria] ex = np.unique(excluded['sub'].values) print(f'Subjects to exclude due to FD > 0.2 and % of outlier scans > 10%: {ex}') excluded criteria_cond = ((outlier_cond.FD > 0.2) | (outlier_cond.OutlierPerc > 10)) & (outlier_cond.Condition != 'rest') excluded_cond = outlier_cond[criteria_cond] ex_cond = np.unique(excluded_cond['sub'].values) print(f'Subjects to exclude due to FD > 0.2 and % of outlier scans > 10%: {ex_cond}') excluded_cond to_exclude = ['sub-13', 'sub-21', 'sub-23', 'sub-50'] ``` Step 6: Calculate mean of FD and % of outlier scans ----------------------- ``` clean_cond = outlier_cond[~outlier_cond['sub'].isin(to_exclude)] clean_all = outlier_all[~outlier_all['sub'].isin(to_exclude)] clean_cond.groupby(['Group', 'Session','Condition']).mean()[['Group', 'FD', 'OutlierPerc']] clean_all.groupby(['Group', 'Session']).mean()[['Group', 'FD', 'OutlierPerc']] ``` Step 7: Calculate test statistic to compare groups/sessions/conditions ----------------------- ``` sess = ['ses-1', 'ses-2', 'ses-3', 'ses-4'] conds = ['1-back', '2-back'] groups = ['Control', 'Experimental'] ``` Comparing conditions ``` # Differences in FD between conditions for experimental group stats.ttest_rel_cond('Experimental','FD', data = clean_cond) # Differences in FD between conditions for control group stats.ttest_rel_cond('Control','FD', data = clean_cond) # Differences in % of oultier scans between conditions for experimental group stats.ttest_rel_cond('Experimental','OutlierPerc', data = clean_cond) # Differences in % of oultier scans between conditions for control group stats.ttest_rel_cond('Control','OutlierPerc', data = clean_cond) ``` 
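A note for readers without the project's `fctools` package: `ttest_rel_cond` is a custom helper, so its exact behavior is defined in this repository rather than in a public library. Assuming it runs paired t-tests between the 1-back and 2-back conditions within each session, and assuming the summary dataframe has been flattened to plain columns (`sub`, `Session`, `Group`, `Condition`, `FD`, `OutlierPerc`), a rough scipy-only equivalent for a single comparison could look like this sketch:

```
from scipy.stats import ttest_rel

def paired_condition_test(df, group, measure, session):
    """Paired t-test between 1-back and 2-back for one group and session.
    Illustrative sketch only - the actual fctools.stats helper may differ."""
    sub = df[(df['Group'] == group) & (df['Session'] == session)]
    wide = sub.pivot(index='sub', columns='Condition', values=measure).dropna()
    return ttest_rel(wide['1-back'], wide['2-back'])

# Example (hypothetical call): FD difference between conditions,
# experimental group, first session
# paired_condition_test(clean_cond, 'Experimental', 'FD', 'ses-1')
```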
Comparing sessions --------------------------------- ``` stats.ttest_rel_sess('Experimental','FD', data = clean_all)[:,:,0] stats.ttest_rel_sess('Experimental','OutlierPerc', data = clean_all)[:,:,0] stats.ttest_rel_sess('Control','FD', data = clean_all)[:,:,0] stats.ttest_rel_sess('Control','OutlierPerc', data = clean_all)[:,:,0] ``` Comparing groups --------------------------------- ``` stats.ttest_ind_groups('FD', clean_all) stats.ttest_ind_groups('OutlierPerc', clean_all) ```
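Similarly, `ttest_ind_groups` comes from the project-specific `fctools.stats` module. Assuming it compares the Experimental and Control groups with an independent-samples t-test for each session (same flat-column assumption as above), a plain scipy version of one such comparison might look like:

```
from scipy.stats import ttest_ind

def group_test(df, measure, session):
    """Independent-samples t-test between groups for one session.
    Illustrative sketch only - the actual fctools.stats helper may differ."""
    sub = df[df['Session'] == session]
    experimental = sub[sub['Group'] == 'Experimental'][measure]
    control = sub[sub['Group'] == 'Control'][measure]
    return ttest_ind(experimental, control)

# Example (hypothetical call): mean FD difference between groups in session 1
# group_test(clean_all, 'FD', 'ses-1')
```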
``` import spacy from spacy.tokens import Doc, Token, Span from spacy.matcher import PhraseMatcher, Matcher import random import matplotlib.pyplot as plt plt.style.use('fivethirtyeight') ``` # Training and updating models ## Using `Matcher` to get training data ``` nlp = spacy.load('en_core_web_sm') # Initialize Matcher matcher = Matcher(nlp.vocab) # Create patterns to match iPhone X and other iPhone models pattern1 = [{'LOWER': 'iphone'}, {'LOWER': 'x'}] # pattern2 = [{'LOWER': 'iphone'}, {'IS_DIGIT': True, 'OP': '?'}] # Add patterns to the matcher matcher.add('GADGET', None, pattern1) # Define phrases phone_pharses = ['I just bought a new iPhone X!', 'Had iPhone X for a month, but it broke', 'iPhone 6 was my favorite!', 'Need a new phone, any tips?', 'Best iPhone X deals in Boston!'] for doc in nlp.pipe(phone_pharses): # Find the matches in the doc matches = matcher(doc) # Print results entities = [(start, end, 'GADGET') for match_id, start, end in matches] print(doc.text, entities) # Build a training set TRAINING_DATA = [] # Create a Doc object for each text in TEXTS for doc in nlp.pipe(phone_pharses): # Match on the doc and create a list of matched spans spans = [doc[start:end] for match_id, start, end in matcher(doc)] # Get (start character, end character, label) tuples of matches entities = [(span.start_char, span.end_char, 'GADGET') for span in spans] # Format the matches as a (doc.text, entities) tuple training_example = (doc.text, {'entities': entities}) # Append the example to the training data TRAINING_DATA.append(training_example) # Print out the training data print(*TRAINING_DATA, sep='\n') ``` ## Train a model from scratch ``` # Create a blank 'en' model nlp = spacy.blank('en') # Create a new entity recognizer and add it to the pipeline ner = nlp.create_pipe('ner') nlp.add_pipe(ner) # Add the label 'GADGET' to the entity recognizer ner.add_label('GADGET') # Start the training nlp.begin_training() # Loop for 10 iterations loss = [] for i in range(10): # Shuffle the training data random.shuffle(TRAINING_DATA) losses = {} # Batch the examples and iterate over them for batch in spacy.util.minibatch(TRAINING_DATA, size = 3): texts = [text for text, entities in batch] annotations = [entities for text, entities in batch] # Update the model nlp.update(texts, annotations, losses = losses) loss.append(losses) # Visualize the loss loss_ = [x['ner'] for x in loss] plt.figure(figsize = (12, 3)) plt.plot(loss_) plt.title('NER training loss over 10 iterations') plt.plot() ```
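To sanity-check what the freshly trained entity recognizer has picked up, it can be run on a few unseen sentences and the predicted entities inspected. The test sentences below are made up for illustration, and with only a handful of training examples and 10 iterations the predictions will vary from run to run:

```
# Try the trained model on unseen text (example sentences are made up)
test_texts = ['Apple is slashing prices on the iPhone X',
              'I finally upgraded from my old iPhone X yesterday']

for doc in nlp.pipe(test_texts):
    print(doc.text, [(ent.text, ent.label_) for ent in doc.ents])
```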
# Efficient Grammar Fuzzing In the [chapter on grammars](Grammars.ipynb), we have seen how to use _grammars_ for very effective and efficient testing. In this chapter, we refine the previous _string-based_ algorithm into a _tree-based_ algorithm, which is much faster and allows for much more control over the production of fuzz inputs. The algorithm in this chapter serves as a foundation for several more techniques; this chapter thus is a "hub" in the book. ``` from bookutils import YouTubeVideo YouTubeVideo('84k8AO_3ChY') ``` **Prerequisites** * You should know how grammar-based fuzzing works, e.g. from the [chapter on grammars](Grammars.ipynb). ## Synopsis <!-- Automatically generated. Do not edit. --> To [use the code provided in this chapter](Importing.ipynb), write ```python >>> from fuzzingbook.GrammarFuzzer import <identifier> ``` and then make use of the following features. ### Efficient Grammar Fuzzing This chapter introduces `GrammarFuzzer`, an efficient grammar fuzzer that takes a grammar to produce syntactically valid input strings. Here's a typical usage: ```python >>> from Grammars import US_PHONE_GRAMMAR >>> phone_fuzzer = GrammarFuzzer(US_PHONE_GRAMMAR) >>> phone_fuzzer.fuzz() '(613)417-7523' ``` The `GrammarFuzzer` constructor takes a number of keyword arguments to control its behavior. `start_symbol`, for instance, allows to set the symbol that expansion starts with (instead of `<start>`): ```python >>> area_fuzzer = GrammarFuzzer(US_PHONE_GRAMMAR, start_symbol='<area>') >>> area_fuzzer.fuzz() '367' ``` Here's how to parameterize the `GrammarFuzzer` constructor: ```python Produce strings from `grammar`, starting with `start_symbol`. If `min_nonterminals` or `max_nonterminals` is given, use them as limits for the number of nonterminals produced. If `disp` is set, display the intermediate derivation trees. If `log` is set, show intermediate steps as text on standard output. ``` ![](PICS/GrammarFuzzer-synopsis-1.svg) ### Derivation Trees Internally, `GrammarFuzzer` makes use of [derivation trees](#Derivation-Trees), which it expands step by step. After producing a string, the tree produced can be accessed in the `derivation_tree` attribute. ```python >>> display_tree(phone_fuzzer.derivation_tree) ``` ![](PICS/GrammarFuzzer-synopsis-2.svg) In the internal representation of a derivation tree, a _node_ is a pair (`symbol`, `children`). For nonterminals, `symbol` is the symbol that is being expanded, and `children` is a list of further nodes. For terminals, `symbol` is the terminal string, and `children` is empty. ```python >>> phone_fuzzer.derivation_tree ('<start>', [('<phone-number>', [('(', []), ('<area>', [('<lead-digit>', [('6', [])]), ('<digit>', [('1', [])]), ('<digit>', [('3', [])])]), (')', []), ('<exchange>', [('<lead-digit>', [('4', [])]), ('<digit>', [('1', [])]), ('<digit>', [('7', [])])]), ('-', []), ('<line>', [('<digit>', [('7', [])]), ('<digit>', [('5', [])]), ('<digit>', [('2', [])]), ('<digit>', [('3', [])])])])]) ``` The chapter contains various helpers to work with derivation trees, including visualization tools – notably, `display_tree()`, above. ## An Insufficient Algorithm In the [previous chapter](Grammars.ipynb), we have introduced the `simple_grammar_fuzzer()` function which takes a grammar and automatically produces a syntactically valid string from it. However, `simple_grammar_fuzzer()` is just what its name suggests – simple. 
To illustrate the problem, let us get back to the `expr_grammar` we created from `EXPR_GRAMMAR_BNF` in the [chapter on grammars](Grammars.ipynb): ``` import bookutils from bookutils import quiz from typing import Tuple, List, Optional, Any, Union, Set, Callable, Dict from bookutils import unicode_escape from Grammars import EXPR_EBNF_GRAMMAR, convert_ebnf_grammar, Grammar, Expansion from Grammars import simple_grammar_fuzzer, is_valid_grammar, exp_string expr_grammar = convert_ebnf_grammar(EXPR_EBNF_GRAMMAR) expr_grammar ``` `expr_grammar` has an interesting property. If we feed it into `simple_grammar_fuzzer()`, the function gets stuck: ``` from ExpectError import ExpectTimeout with ExpectTimeout(1): simple_grammar_fuzzer(grammar=expr_grammar, max_nonterminals=3) ``` Why is that so? Have a look at the grammar; remember what you know about `simple_grammar_fuzzer()`; and run `simple_grammar_fuzzer()` with `log=true` argument to see the expansions. ``` quiz("Why does `simple_grammar_fuzzer()` hang?", [ "It produces an infinite number of additions", "It produces an infinite number of digits", "It produces an infinite number of parentheses", "It produces an infinite number of signs", ], '(3 * 3 * 3) ** (3 / (3 * 3))') ``` Indeed! The problem is in this rule: ``` expr_grammar['<factor>'] ``` Here, any choice except for `(expr)` increases the number of symbols, even if only temporary. Since we place a hard limit on the number of symbols to expand, the only choice left for expanding `<factor>` is `(<expr>)`, which leads to an _infinite addition of parentheses._ The problem of potentially infinite expansion is only one of the problems with `simple_grammar_fuzzer()`. More problems include: 1. *It is inefficient*. With each iteration, this fuzzer would go search the string produced so far for symbols to expand. This becomes inefficient as the production string grows. 2. *It is hard to control.* Even while limiting the number of symbols, it is still possible to obtain very long strings – and even infinitely long ones, as discussed above. Let us illustrate both problems by plotting the time required for strings of different lengths. ``` from Grammars import simple_grammar_fuzzer from Grammars import START_SYMBOL, EXPR_GRAMMAR, URL_GRAMMAR, CGI_GRAMMAR from Grammars import RE_NONTERMINAL, nonterminals, is_nonterminal from Timer import Timer trials = 50 xs = [] ys = [] for i in range(trials): with Timer() as t: s = simple_grammar_fuzzer(EXPR_GRAMMAR, max_nonterminals=15) xs.append(len(s)) ys.append(t.elapsed_time()) print(i, end=" ") print() average_time = sum(ys) / trials print("Average time:", average_time) %matplotlib inline import matplotlib.pyplot as plt plt.scatter(xs, ys) plt.title('Time required for generating an output'); ``` We see that (1) the time needed to generate an output increases quadratically with the length of that ouptut, and that (2) a large portion of the produced outputs are tens of thousands of characters long. To address these problems, we need a _smarter algorithm_ – one that is more efficient, that gets us better control over expansions, and that is able to foresee in `expr_grammar` that the `(expr)` alternative yields a potentially infinite expansion, in contrast to the other two. ## Derivation Trees To both obtain a more efficient algorithm _and_ exercise better control over expansions, we will use a special representation for the strings that our grammar produces. 
The general idea is to use a *tree* structure that will be subsequently expanded – a so-called *derivation tree*. This representation allows us to always keep track of our expansion status – answering questions such as which elements have been expanded into which others, and which symbols still need to be expanded. Furthermore, adding new elements to a tree is far more efficient than replacing strings again and again. Like other trees used in programming, a derivation tree (also known as *parse tree* or *concrete syntax tree*) consists of *nodes* which have other nodes (called *child nodes*) as their *children*. The tree starts with one node that has no parent; this is called the *root node*; a node without children is called a *leaf*. The grammar expansion process with derivation trees is illustrated in the following steps, using the arithmetic grammar [from the chapter on grammars](Grammars.ipynb). We start with a single node as root of the tree, representing the *start symbol* – in our case `<start>`. ``` # ignore from graphviz import Digraph # ignore tree = Digraph("root") tree.attr('node', shape='plain') tree.node(r"\<start\>") # ignore tree ``` To expand the tree, we traverse it, searching for a nonterminal symbol $S$ without children. $S$ thus is a symbol that still has to be expanded. We then chose an expansion for $S$ from the grammar. Then, we add the expansion as a new child of $S$. For our start symbol `<start>`, the only expansion is `<expr>`, so we add it as a child. ``` # ignore tree.edge(r"\<start\>", r"\<expr\>") # ignore tree ``` To construct the produced string from a derivation tree, we traverse the tree in order and collect the symbols at the leaves of the tree. In the case above, we obtain the string `"<expr>"`. To further expand the tree, we choose another symbol to expand, and add its expansion as new children. This would get us the `<expr>` symbol, which gets expanded into `<expr> + <term>`, adding three children. ``` # ignore tree.edge(r"\<expr\>", r"\<expr\> ") tree.edge(r"\<expr\>", r"+") tree.edge(r"\<expr\>", r"\<term\>") # ignore tree ``` We repeat the expansion until there are no symbols left to expand: ``` # ignore tree.edge(r"\<expr\> ", r"\<term\> ") tree.edge(r"\<term\> ", r"\<factor\> ") tree.edge(r"\<factor\> ", r"\<integer\> ") tree.edge(r"\<integer\> ", r"\<digit\> ") tree.edge(r"\<digit\> ", r"2 ") tree.edge(r"\<term\>", r"\<factor\>") tree.edge(r"\<factor\>", r"\<integer\>") tree.edge(r"\<integer\>", r"\<digit\>") tree.edge(r"\<digit\>", r"2") # ignore tree ``` We now have a representation for the string `2 + 2`. In contrast to the string alone, though, the derivation tree records _the entire structure_ (and production history, or _derivation_ history) of the produced string. It also allows for simple comparison and manipulation – say, replacing one subtree (substructure) against another. ## Representing Derivation Trees To represent a derivation tree in Python, we use the following format. A node is a pair ```python (SYMBOL_NAME, CHILDREN) ``` where `SYMBOL_NAME` is a string representing the node (i.e. `"<start>"` or `"+"`) and `CHILDREN` is a list of children nodes. `CHILDREN` can take some special values: 1. `None` as a placeholder for future expansion. This means that the node is a *nonterminal symbol* that should be expanded further. 2. `[]` (i.e., the empty list) to indicate _no_ children. This means that the node is a *terminal symbol* that can no longer be expanded. The type `DerivationTree` captures this very structure. 
(`Any` should actually read `DerivationTree`, but the Python static type checker cannot handle recursive types well.) ``` DerivationTree = Tuple[str, Optional[List[Any]]] ``` Let us take a very simple derivation tree, representing the intermediate step `<expr> + <term>`, above. ``` derivation_tree: DerivationTree = ("<start>", [("<expr>", [("<expr>", None), (" + ", []), ("<term>", None)] )]) ``` To better understand the structure of this tree, let us introduce a function `display_tree()` that visualizes this tree. #### Excursion: Implementing `display_tree()` We use the `dot` drawing program from the `graphviz` package algorithmically, traversing the above structure. (Unless you're deeply interested in tree visualization, you can directly skip to the example below.) ``` from graphviz import Digraph from IPython.display import display import re def dot_escape(s: str) -> str: """Return s in a form suitable for dot""" s = re.sub(r'([^a-zA-Z0-9" ])', r"\\\1", s) return s assert dot_escape("hello") == "hello" assert dot_escape("<hello>, world") == "\\<hello\\>\\, world" assert dot_escape("\\n") == "\\\\n" ``` While we are interested at present in visualizing a `derivation_tree`, it is in our interest to generalize the visualization procedure. In particular, it would be helpful if our method `display_tree()` can display *any* tree like data structure. To enable this, we define a helper method `extract_node()` that extract the current symbol and children from a given data structure. The default implementation simply extracts the symbol, children, and annotation from any `derivation_tree` node. ``` def extract_node(node, id): symbol, children, *annotation = node return symbol, children, ''.join(str(a) for a in annotation) ``` While visualizing a tree, it is often useful to display certain nodes differently. For example, it is sometimes useful to distinguish between non-processed nodes and processed nodes. We define a helper procedure `default_node_attr()` that provides the basic display, which can be customized by the user. ``` def default_node_attr(dot, nid, symbol, ann): dot.node(repr(nid), dot_escape(unicode_escape(symbol))) ``` Similar to nodes, the edges may also require modifications. We define `default_edge_attr()` as a helper procedure that can be customized by the user. ``` def default_edge_attr(dot, start_node, stop_node): dot.edge(repr(start_node), repr(stop_node)) ``` While visualizing a tree, one may sometimes wish to change the appearance of the tree. For example, it is sometimes easier to view the tree if it was laid out left to right rather than top to bottom. We define another helper procedure `default_graph_attr()` for that. ``` def default_graph_attr(dot): dot.attr('node', shape='plain') ``` Finally, we define a method `display_tree()` that accepts these four functions `extract_node()`, `default_edge_attr()`, `default_node_attr()` and `default_graph_attr()` and uses them to display the tree. 
``` def display_tree(derivation_tree: DerivationTree, log: bool = False, extract_node: Callable = extract_node, node_attr: Callable = default_node_attr, edge_attr: Callable = default_edge_attr, graph_attr: Callable = default_graph_attr) -> Any: # If we import display_tree, we also have to import its functions from graphviz import Digraph counter = 0 def traverse_tree(dot, tree, id=0): (symbol, children, annotation) = extract_node(tree, id) node_attr(dot, id, symbol, annotation) if children: for child in children: nonlocal counter counter += 1 child_id = counter edge_attr(dot, id, child_id) traverse_tree(dot, child, child_id) dot = Digraph(comment="Derivation Tree") graph_attr(dot) traverse_tree(dot, derivation_tree) if log: print(dot) return dot ``` #### End of Excursion This is what our tree visualizes into: ``` display_tree(derivation_tree) quiz("And which of these is the internal representation of `derivation_tree`?", [ "`('<start>', [('<expr>', (['<expr> + <term>']))])`", "`('<start>', [('<expr>', (['<expr>', ' + ', <term>']))])`", "`" + repr(derivation_tree) + "`", "`(" + repr(derivation_tree) + ", None)`" ], len("eleven") - len("one")) ``` You can check it out yourself: ``` derivation_tree ``` Within this book, we also occasionally use a function `display_annotated_tree()` which allows to add annotations to individual nodes. #### Excursion: Source code and example for `display_annotated_tree()` `display_annotated_tree()` displays an annotated tree structure, and lays out the graph left to right. ``` def display_annotated_tree(tree: DerivationTree, a_nodes: Dict[int, str], a_edges: Dict[Tuple[int, int], str], log: bool = False): def graph_attr(dot): dot.attr('node', shape='plain') dot.graph_attr['rankdir'] = 'LR' def annotate_node(dot, nid, symbol, ann): if nid in a_nodes: dot.node(repr(nid), "%s (%s)" % (dot_escape(unicode_escape(symbol)), a_nodes[nid])) else: dot.node(repr(nid), dot_escape(unicode_escape(symbol))) def annotate_edge(dot, start_node, stop_node): if (start_node, stop_node) in a_edges: dot.edge(repr(start_node), repr(stop_node), a_edges[(start_node, stop_node)]) else: dot.edge(repr(start_node), repr(stop_node)) return display_tree(tree, log=log, node_attr=annotate_node, edge_attr=annotate_edge, graph_attr=graph_attr) display_annotated_tree(derivation_tree, {3: 'plus'}, {(1, 3): 'op'}, log=False) ``` #### End of Excursion If we want to see all the leaf nodes in a tree as a string, the following `all_terminals()` function comes in handy: ``` def all_terminals(tree: DerivationTree) -> str: (symbol, children) = tree if children is None: # This is a nonterminal symbol not expanded yet return symbol if len(children) == 0: # This is a terminal symbol return symbol # This is an expanded symbol: # Concatenate all terminal symbols from all children return ''.join([all_terminals(c) for c in children]) all_terminals(derivation_tree) ``` The alternative `tree_to_string()` function also converts the tree to a string; however, it replaces nonterminal symbols by empty strings. ``` def tree_to_string(tree: DerivationTree) -> str: symbol, children, *_ = tree if children: return ''.join(tree_to_string(c) for c in children) else: return '' if is_nonterminal(symbol) else symbol tree_to_string(derivation_tree) ``` ## Expanding a Node Let us now develop an algorithm that takes a tree with unexpanded symbols (say, `derivation_tree`, above), and expands all these symbols one after the other. As with earlier fuzzers, we create a special subclass of `Fuzzer` – in this case, `GrammarFuzzer`. 
A `GrammarFuzzer` gets a grammar and a start symbol; the other parameters will be used later to further control creation and to support debugging. ``` from Fuzzer import Fuzzer class GrammarFuzzer(Fuzzer): """Produce strings from grammars efficiently, using derivation trees.""" def __init__(self, grammar: Grammar, start_symbol: str = START_SYMBOL, min_nonterminals: int = 0, max_nonterminals: int = 10, disp: bool = False, log: Union[bool, int] = False) -> None: """Produce strings from `grammar`, starting with `start_symbol`. If `min_nonterminals` or `max_nonterminals` is given, use them as limits for the number of nonterminals produced. If `disp` is set, display the intermediate derivation trees. If `log` is set, show intermediate steps as text on standard output.""" self.grammar = grammar self.start_symbol = start_symbol self.min_nonterminals = min_nonterminals self.max_nonterminals = max_nonterminals self.disp = disp self.log = log self.check_grammar() # Invokes is_valid_grammar() ``` To add further methods to `GrammarFuzzer`, we use the hack already introduced for [the `MutationFuzzer` class](MutationFuzzer.ipynb). The construct ```python class GrammarFuzzer(GrammarFuzzer): def new_method(self, args): pass ``` allows us to add a new method `new_method()` to the `GrammarFuzzer` class. (Actually, we get a new `GrammarFuzzer` class that extends the old one, but for all our purposes, this does not matter.) #### Excursion: `check_grammar()` implementation We can use the above hack to define the helper method `check_grammar()`, which checks the given grammar for consistency: ``` class GrammarFuzzer(GrammarFuzzer): def check_grammar(self) -> None: """Check the grammar passed""" assert self.start_symbol in self.grammar assert is_valid_grammar( self.grammar, start_symbol=self.start_symbol, supported_opts=self.supported_opts()) def supported_opts(self) -> Set[str]: """Set of supported options. To be overloaded in subclasses.""" return set() # We don't support specific options ``` #### End of Excursion Let us now define a helper method `init_tree()` that constructs a tree with just the start symbol: ``` class GrammarFuzzer(GrammarFuzzer): def init_tree(self) -> DerivationTree: return (self.start_symbol, None) f = GrammarFuzzer(EXPR_GRAMMAR) display_tree(f.init_tree()) ``` This is the tree we want to expand. ### Picking a Children Alternative to be Expanded One of the central methods in `GrammarFuzzer` is `choose_node_expansion()`. This method gets a node (say, the `<start>` node and a list of possible lists of children to be expanded (one for every possible expansion from the grammar), chooses one of them, and returns its index in the possible children list. By overloading this method (notably in later chapters), we can implement different strategies – for now, it simply randomly picks one of the given lists of children (which in turn are lists of derivation trees). ``` class GrammarFuzzer(GrammarFuzzer): def choose_node_expansion(self, node: DerivationTree, children_alternatives: List[List[DerivationTree]]) -> int: """Return index of expansion in `children_alternatives` to be selected. 'children_alternatives`: a list of possible children for `node`. Defaults to random. 
To be overloaded in subclasses.""" return random.randrange(0, len(children_alternatives)) ``` ### Getting a List of Possible Expansions To actually obtain the list of possible children, we will need a helper function `expansion_to_children()` that takes an expansion string and decomposes it into a list of derivation trees – one for each symbol (terminal or nonterminal) in the string. #### Excursion: Implementing `expansion_to_children()` The function `expansion_to_children()` uses the `re.split()` method to split an expansion string into a list of children nodes: ``` def expansion_to_children(expansion: Expansion) -> List[DerivationTree]: # print("Converting " + repr(expansion)) # strings contains all substrings -- both terminals and nonterminals such # that ''.join(strings) == expansion expansion = exp_string(expansion) assert isinstance(expansion, str) if expansion == "": # Special case: epsilon expansion return [("", [])] strings = re.split(RE_NONTERMINAL, expansion) return [(s, None) if is_nonterminal(s) else (s, []) for s in strings if len(s) > 0] ``` #### End of Excursion ``` expansion_to_children("<term> + <expr>") ``` The case of an *epsilon expansion*, i.e. expanding into an empty string as in `<symbol> ::=` needs special treatment: ``` expansion_to_children("") ``` Just like `nonterminals()` in the [chapter on Grammars](Grammars.ipynb), we provide for future extensions, allowing the expansion to be a tuple with extra data (which will be ignored). ``` expansion_to_children(("+<term>", {"extra_data": 1234})) ``` We realize this helper as a method in `GrammarFuzzer` such that it can be overloaded by subclasses: ``` class GrammarFuzzer(GrammarFuzzer): def expansion_to_children(self, expansion: Expansion) -> List[DerivationTree]: return expansion_to_children(expansion) ``` ### Putting Things Together With this, we can now take 1. some unexpanded node in the tree, 2. choose a random expansion, and 3. return the new tree. This is what the method `expand_node_randomly()` does. #### Excursion: `expand_node_randomly()` implementation The function `expand_node_randomly()` uses a helper function `choose_node_expansion()` to randomly pick an index from an array of possible children. (`choose_node_expansion()` can be overloaded in subclasses.) ``` import random class GrammarFuzzer(GrammarFuzzer): def expand_node_randomly(self, node: DerivationTree) -> DerivationTree: """Choose a random expansion for `node` and return it""" (symbol, children) = node assert children is None if self.log: print("Expanding", all_terminals(node), "randomly") # Fetch the possible expansions from grammar... expansions = self.grammar[symbol] children_alternatives: List[List[DerivationTree]] = [ self.expansion_to_children(expansion) for expansion in expansions ] # ... and select a random expansion index = self.choose_node_expansion(node, children_alternatives) chosen_children = children_alternatives[index] # Process children (for subclasses) chosen_children = self.process_chosen_children(chosen_children, expansions[index]) # Return with new children return (symbol, chosen_children) ``` The generic `expand_node()` method can later be used to select different expansion strategies; as of now, it only uses `expand_node_randomly()`. ``` class GrammarFuzzer(GrammarFuzzer): def expand_node(self, node: DerivationTree) -> DerivationTree: return self.expand_node_randomly(node) ``` The helper function `process_chosen_children()` does nothing; it can be overloaded by subclasses to process the children once chosen. 
``` class GrammarFuzzer(GrammarFuzzer): def process_chosen_children(self, chosen_children: List[DerivationTree], expansion: Expansion) -> List[DerivationTree]: """Process children after selection. By default, does nothing.""" return chosen_children ``` #### End of Excursion This is how `expand_node_randomly()` works: ``` f = GrammarFuzzer(EXPR_GRAMMAR, log=True) print("Before expand_node_randomly():") expr_tree = ("<integer>", None) display_tree(expr_tree) print("After expand_node_randomly():") expr_tree = f.expand_node_randomly(expr_tree) display_tree(expr_tree) # docassert assert expr_tree[1][0][0] == '<digit>' quiz("What tree do we get if we expand the `<digit>` subtree?", [ "We get another `<digit>` as new child of `<digit>`", "We get some digit as child of `<digit>`", "We get another `<digit>` as second child of `<integer>`", "The entire tree becomes a single node with a digit" ], 'len("2") + len("2")') ``` We can surely put this to the test, right? Here we go: ``` digit_subtree = expr_tree[1][0] # type: ignore display_tree(digit_subtree) print("After expanding the <digit> subtree:") digit_subtree = f.expand_node_randomly(digit_subtree) display_tree(digit_subtree) ``` We see that `<digit>` gets expanded again according to the grammar rules – namely, into a single digit. ``` quiz("Is the original `expr_tree` affected by this change?", [ "Yes, it has also gained a new child", "No, it is unchanged" ], "1 ** (1 - 1)") ``` Although we have changed one of the subtrees, the original `expr_tree` is unaffected: ``` display_tree(expr_tree) ``` That is because `expand_node_randomly()` returns a new (expanded) tree and does not change the tree passed as argument. ## Expanding a Tree Let us now apply our functions for expanding a single node to some node in the tree. To this end, we first need to _search the tree for unexpanded nodes_. `possible_expansions()` counts how many unexpanded symbols there are in a tree: ``` class GrammarFuzzer(GrammarFuzzer): def possible_expansions(self, node: DerivationTree) -> int: (symbol, children) = node if children is None: return 1 return sum(self.possible_expansions(c) for c in children) f = GrammarFuzzer(EXPR_GRAMMAR) print(f.possible_expansions(derivation_tree)) ``` The method `any_possible_expansions()` returns True if the tree has any unexpanded nodes. ``` class GrammarFuzzer(GrammarFuzzer): def any_possible_expansions(self, node: DerivationTree) -> bool: (symbol, children) = node if children is None: return True return any(self.any_possible_expansions(c) for c in children) f = GrammarFuzzer(EXPR_GRAMMAR) f.any_possible_expansions(derivation_tree) ``` Here comes `expand_tree_once()`, the core method of our tree expansion algorithm. It first checks whether it is currently being applied on a nonterminal symbol without expansion; if so, it invokes `expand_node()` on it, as discussed above. If the node is already expanded (i.e. has children), it checks the subset of children which still have unexpanded symbols, randomly selects one of them, and applies itself recursively on that child. #### Excursion: `expand_tree_once()` implementation The `expand_tree_once()` method replaces the child _in place_, meaning that it actually mutates the tree being passed as an argument rather than returning a new tree. This in-place mutation is what makes this function particularly efficient. Again, we use a helper method (`choose_tree_expansion()`) to return the chosen index from a list of children that can be expanded. 
``` class GrammarFuzzer(GrammarFuzzer): def choose_tree_expansion(self, tree: DerivationTree, children: List[DerivationTree]) -> int: """Return index of subtree in `children` to be selected for expansion. Defaults to random.""" return random.randrange(0, len(children)) def expand_tree_once(self, tree: DerivationTree) -> DerivationTree: """Choose an unexpanded symbol in tree; expand it. Can be overloaded in subclasses.""" (symbol, children) = tree if children is None: # Expand this node return self.expand_node(tree) # Find all children with possible expansions expandable_children = [ c for c in children if self.any_possible_expansions(c)] # `index_map` translates an index in `expandable_children` # back into the original index in `children` index_map = [i for (i, c) in enumerate(children) if c in expandable_children] # Select a random child child_to_be_expanded = \ self.choose_tree_expansion(tree, expandable_children) # Expand in place children[index_map[child_to_be_expanded]] = \ self.expand_tree_once(expandable_children[child_to_be_expanded]) return tree ``` #### End of Excursion Let us illustrate how `expand_tree_once()` works. We start with our derivation tree from above... ``` derivation_tree = ("<start>", [("<expr>", [("<expr>", None), (" + ", []), ("<term>", None)] )]) display_tree(derivation_tree) ``` ... and now expand it twice: ``` f = GrammarFuzzer(EXPR_GRAMMAR, log=True) derivation_tree = f.expand_tree_once(derivation_tree) display_tree(derivation_tree) derivation_tree = f.expand_tree_once(derivation_tree) display_tree(derivation_tree) ``` We see that with each step, one more symbol is expanded. Now all it takes is to apply this again and again, expanding the tree further and further. ## Closing the Expansion With `expand_tree_once()`, we can keep on expanding the tree – but how do we actually stop? The key idea here, introduced by Luke in \cite{Luke2000}, is that after inflating the derivation tree to some maximum size, we _only want to apply expansions that increase the size of the tree by a minimum_. For `<factor>`, for instance, we would prefer an expansion into `<integer>`, as this will not introduce further recursion (and potential size inflation); for `<integer>`, likewise, an expansion into `<digit>` is preferred, as it will increase the tree size less than `<digit><integer>`. To identify the _cost_ of expanding a symbol, we introduce two functions that mutually rely on each other: * `symbol_cost()` returns the minimum cost of all expansions of a symbol, using `expansion_cost()` to compute the cost for each expansion. * `expansion_cost()` returns the cost of a single expansion: the sum of the costs of all nonterminals it contains, plus one. If a nonterminal is encountered again during traversal, the cost of the expansion is $\infty$, indicating (potentially infinite) recursion.
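For instance, here is a quick hand calculation on `EXPR_GRAMMAR` (the implementation and matching assertions follow in the excursion below): every expansion of `<digit>` is a single terminal without nonterminals, so each costs 1, and the symbol cost of `<digit>` is 1. `<integer>` expands into `<digit><integer>` or `<digit>`; the first alternative revisits `<integer>` and therefore costs $\infty$, while the second costs $1 + 1 = 2$ (the cost of `<digit>` plus one for the expansion itself). The cheapest alternatives of `<factor>`, `<term>`, and `<expr>` each add one more step, yielding symbol costs of 3, 4, and 5, respectively.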
### Excursion: Implementing Cost Functions ``` class GrammarFuzzer(GrammarFuzzer): def symbol_cost(self, symbol: str, seen: Set[str] = set()) \ -> Union[int, float]: expansions = self.grammar[symbol] return min(self.expansion_cost(e, seen | {symbol}) for e in expansions) def expansion_cost(self, expansion: Expansion, seen: Set[str] = set()) -> Union[int, float]: symbols = nonterminals(expansion) if len(symbols) == 0: return 1 # no symbol if any(s in seen for s in symbols): return float('inf') # the value of a expansion is the sum of all expandable variables # inside + 1 return sum(self.symbol_cost(s, seen) for s in symbols) + 1 ``` ### End of Excursion Here's two examples: The minimum cost of expanding a digit is 1, since we have to choose from one of its expansions. ``` f = GrammarFuzzer(EXPR_GRAMMAR) assert f.symbol_cost("<digit>") == 1 ``` The minimum cost of expanding `<expr>`, though, is five, as this is the minimum number of expansions required. (`<expr>` $\rightarrow$ `<term>` $\rightarrow$ `<factor>` $\rightarrow$ `<integer>` $\rightarrow$ `<digit>` $\rightarrow$ 1) ``` assert f.symbol_cost("<expr>") == 5 ``` We define `expand_node_by_cost(self, node, choose)`, a variant of `expand_node()` that takes the above cost into account. It determines the minimum cost `cost` across all children and then chooses a child from the list using the `choose` function, which by default is the minimum cost. If multiple children all have the same minimum cost, it chooses randomly between these. #### Excursion: `expand_node_by_cost()` implementation ``` class GrammarFuzzer(GrammarFuzzer): def expand_node_by_cost(self, node: DerivationTree, choose: Callable = min) -> DerivationTree: (symbol, children) = node assert children is None # Fetch the possible expansions from grammar... expansions = self.grammar[symbol] children_alternatives_with_cost = [(self.expansion_to_children(expansion), self.expansion_cost(expansion, {symbol}), expansion) for expansion in expansions] costs = [cost for (child, cost, expansion) in children_alternatives_with_cost] chosen_cost = choose(costs) children_with_chosen_cost = [child for (child, child_cost, _) in children_alternatives_with_cost if child_cost == chosen_cost] expansion_with_chosen_cost = [expansion for (_, child_cost, expansion) in children_alternatives_with_cost if child_cost == chosen_cost] index = self.choose_node_expansion(node, children_with_chosen_cost) chosen_children = children_with_chosen_cost[index] chosen_expansion = expansion_with_chosen_cost[index] chosen_children = self.process_chosen_children( chosen_children, chosen_expansion) # Return with a new list return (symbol, chosen_children) ``` #### End of Excursion The shortcut `expand_node_min_cost()` passes `min()` as the `choose` function, which makes it expand nodes at minimum cost. ``` class GrammarFuzzer(GrammarFuzzer): def expand_node_min_cost(self, node: DerivationTree) -> DerivationTree: if self.log: print("Expanding", all_terminals(node), "at minimum cost") return self.expand_node_by_cost(node, min) ``` We can now apply this function to close the expansion of our derivation tree, using `expand_tree_once()` with the above `expand_node_min_cost()` as expansion function. 
``` class GrammarFuzzer(GrammarFuzzer): def expand_node(self, node: DerivationTree) -> DerivationTree: return self.expand_node_min_cost(node) f = GrammarFuzzer(EXPR_GRAMMAR, log=True) display_tree(derivation_tree) # docassert assert f.any_possible_expansions(derivation_tree) if f.any_possible_expansions(derivation_tree): derivation_tree = f.expand_tree_once(derivation_tree) display_tree(derivation_tree) # docassert assert f.any_possible_expansions(derivation_tree) if f.any_possible_expansions(derivation_tree): derivation_tree = f.expand_tree_once(derivation_tree) display_tree(derivation_tree) # docassert assert f.any_possible_expansions(derivation_tree) if f.any_possible_expansions(derivation_tree): derivation_tree = f.expand_tree_once(derivation_tree) display_tree(derivation_tree) ``` We keep on expanding until all nonterminals are expanded. ``` while f.any_possible_expansions(derivation_tree): derivation_tree = f.expand_tree_once(derivation_tree) ``` Here is the final tree: ``` display_tree(derivation_tree) ``` We see that in each step, `expand_node_min_cost()` chooses an expansion that does not increase the number of symbols, eventually closing all open expansions. ## Node Inflation Especially at the beginning of an expansion, we may be interested in getting _as many nodes as possible_ – that is, we'd like to prefer expansions that give us _more_ nonterminals to expand. This is actually the exact opposite of what `expand_node_min_cost()` gives us, and we can implement a method `expand_node_max_cost()` that will always choose among the nodes with the _highest_ cost: ``` class GrammarFuzzer(GrammarFuzzer): def expand_node_max_cost(self, node: DerivationTree) -> DerivationTree: if self.log: print("Expanding", all_terminals(node), "at maximum cost") return self.expand_node_by_cost(node, max) ``` To illustrate `expand_node_max_cost()`, we can again redefine `expand_node()` to use it, and then use `expand_tree_once()` to show a few expansion steps: ``` class GrammarFuzzer(GrammarFuzzer): def expand_node(self, node: DerivationTree) -> DerivationTree: return self.expand_node_max_cost(node) derivation_tree = ("<start>", [("<expr>", [("<expr>", None), (" + ", []), ("<term>", None)] )]) f = GrammarFuzzer(EXPR_GRAMMAR, log=True) display_tree(derivation_tree) # docassert assert f.any_possible_expansions(derivation_tree) if f.any_possible_expansions(derivation_tree): derivation_tree = f.expand_tree_once(derivation_tree) display_tree(derivation_tree) # docassert assert f.any_possible_expansions(derivation_tree) if f.any_possible_expansions(derivation_tree): derivation_tree = f.expand_tree_once(derivation_tree) display_tree(derivation_tree) # docassert assert f.any_possible_expansions(derivation_tree) if f.any_possible_expansions(derivation_tree): derivation_tree = f.expand_tree_once(derivation_tree) display_tree(derivation_tree) ``` We see that with each step, the number of nonterminals increases. Obviously, we have to put a limit on this number. ## Three Expansion Phases We can now put all three phases together in a single function `expand_tree()` which will work as follows: 1. **Max cost expansion.** Expand the tree using expansions with maximum cost until we have at least `min_nonterminals` nonterminals. This phase can be easily skipped by setting `min_nonterminals` to zero. 2. **Random expansion.** Keep on expanding the tree randomly until we reach `max_nonterminals` nonterminals. 3. **Min cost expansion.** Close the expansion with minimum cost. 
We implement these three phases by having `expand_node` reference the expansion method to apply. This is controlled by setting `expand_node` (the method reference) to first `expand_node_max_cost` (i.e., calling `expand_node()` invokes `expand_node_max_cost()`), then `expand_node_randomly`, and finally `expand_node_min_cost`. In the first two phases, we also set a maximum limit of `min_nonterminals` and `max_nonterminals`, respectively. #### Excursion: Implementation of three-phase `expand_tree()` ``` class GrammarFuzzer(GrammarFuzzer): def log_tree(self, tree: DerivationTree) -> None: """Output a tree if self.log is set; if self.display is also set, show the tree structure""" if self.log: print("Tree:", all_terminals(tree)) if self.disp: display(display_tree(tree)) # print(self.possible_expansions(tree), "possible expansion(s) left") def expand_tree_with_strategy(self, tree: DerivationTree, expand_node_method: Callable, limit: Optional[int] = None): """Expand tree using `expand_node_method` as node expansion function until the number of possible expansions reaches `limit`.""" self.expand_node = expand_node_method # type: ignore while ((limit is None or self.possible_expansions(tree) < limit) and self.any_possible_expansions(tree)): tree = self.expand_tree_once(tree) self.log_tree(tree) return tree def expand_tree(self, tree: DerivationTree) -> DerivationTree: """Expand `tree` in a three-phase strategy until all expansions are complete.""" self.log_tree(tree) tree = self.expand_tree_with_strategy( tree, self.expand_node_max_cost, self.min_nonterminals) tree = self.expand_tree_with_strategy( tree, self.expand_node_randomly, self.max_nonterminals) tree = self.expand_tree_with_strategy( tree, self.expand_node_min_cost) assert self.possible_expansions(tree) == 0 return tree ``` #### End of Excursion Let us try this out on our example. We start with a half-expanded derivation tree: ``` initial_derivation_tree: DerivationTree = ("<start>", [("<expr>", [("<expr>", None), (" + ", []), ("<term>", None)] )]) display_tree(initial_derivation_tree) ``` We now apply our expansion strategy on this tree. We see that initially, nodes are expanded at maximum cost, then randomly, and then closing the expansion at minimum cost. ``` f = GrammarFuzzer( EXPR_GRAMMAR, min_nonterminals=3, max_nonterminals=5, log=True) derivation_tree = f.expand_tree(initial_derivation_tree) ``` This is the final derivation tree: ``` display_tree(derivation_tree) ``` And this is the resulting string: ``` all_terminals(derivation_tree) ``` ## Putting it all Together Based on this, we can now define a function `fuzz()` that – like `simple_grammar_fuzzer()` – simply takes a grammar and produces a string from it. It thus no longer exposes the complexity of derivation trees. 
``` class GrammarFuzzer(GrammarFuzzer): def fuzz_tree(self) -> DerivationTree: """Produce a derivation tree from the grammar.""" tree = self.init_tree() # print(tree) # Expand all nonterminals tree = self.expand_tree(tree) if self.log: print(repr(all_terminals(tree))) if self.disp: display(display_tree(tree)) return tree def fuzz(self) -> str: """Produce a string from the grammar.""" self.derivation_tree = self.fuzz_tree() return all_terminals(self.derivation_tree) ``` We can now apply this on all our defined grammars (and visualize the derivation tree along) ``` f = GrammarFuzzer(EXPR_GRAMMAR) f.fuzz() ``` After calling `fuzz()`, the produced derivation tree is accessible in the `derivation_tree` attribute: ``` display_tree(f.derivation_tree) ``` Let us try out the grammar fuzzer (and its trees) on other grammar formats. ``` f = GrammarFuzzer(URL_GRAMMAR) f.fuzz() display_tree(f.derivation_tree) f = GrammarFuzzer(CGI_GRAMMAR, min_nonterminals=3, max_nonterminals=5) f.fuzz() display_tree(f.derivation_tree) ``` How do we stack up against `simple_grammar_fuzzer()`? ``` trials = 50 xs = [] ys = [] f = GrammarFuzzer(EXPR_GRAMMAR, max_nonterminals=20) for i in range(trials): with Timer() as t: s = f.fuzz() xs.append(len(s)) ys.append(t.elapsed_time()) print(i, end=" ") print() average_time = sum(ys) / trials print("Average time:", average_time) %matplotlib inline import matplotlib.pyplot as plt plt.scatter(xs, ys) plt.title('Time required for generating an output'); ``` Our test generation is much faster, but also our inputs are much smaller. We see that with derivation trees, we can get much better control over grammar production. Finally, how does `GrammarFuzzer` work with `expr_grammar`, where `simple_grammar_fuzzer()` failed? It works without any issue: ``` f = GrammarFuzzer(expr_grammar, max_nonterminals=10) f.fuzz() ``` With `GrammarFuzzer`, we now have a solid foundation on which to build further fuzzers and illustrate more exciting concepts from the world of generating software tests. Many of these do not even require writing a grammar – instead, they _infer_ a grammar from the domain at hand, and thus allow to use grammar-based fuzzing even without writing a grammar. Stay tuned! ## Synopsis ### Efficient Grammar Fuzzing This chapter introduces `GrammarFuzzer`, an efficient grammar fuzzer that takes a grammar to produce syntactically valid input strings. Here's a typical usage: ``` from Grammars import US_PHONE_GRAMMAR phone_fuzzer = GrammarFuzzer(US_PHONE_GRAMMAR) phone_fuzzer.fuzz() ``` The `GrammarFuzzer` constructor takes a number of keyword arguments to control its behavior. `start_symbol`, for instance, allows to set the symbol that expansion starts with (instead of `<start>`): ``` area_fuzzer = GrammarFuzzer(US_PHONE_GRAMMAR, start_symbol='<area>') area_fuzzer.fuzz() ``` Here's how to parameterize the `GrammarFuzzer` constructor: ``` # ignore import inspect # ignore print(inspect.getdoc(GrammarFuzzer.__init__)) # ignore from ClassDiagram import display_class_hierarchy # ignore display_class_hierarchy([GrammarFuzzer], public_methods=[ Fuzzer.__init__, Fuzzer.fuzz, Fuzzer.run, Fuzzer.runs, GrammarFuzzer.__init__, GrammarFuzzer.fuzz, GrammarFuzzer.fuzz_tree, ], types={ 'DerivationTree': DerivationTree, 'Expansion': Expansion, 'Grammar': Grammar }, project='fuzzingbook') ``` ### Derivation Trees Internally, `GrammarFuzzer` makes use of [derivation trees](#Derivation-Trees), which it expands step by step. 
After producing a string, the tree produced can be accessed in the `derivation_tree` attribute. ``` display_tree(phone_fuzzer.derivation_tree) ``` In the internal representation of a derivation tree, a _node_ is a pair (`symbol`, `children`). For nonterminals, `symbol` is the symbol that is being expanded, and `children` is a list of further nodes. For terminals, `symbol` is the terminal string, and `children` is empty. ``` phone_fuzzer.derivation_tree ``` The chapter contains various helpers to work with derivation trees, including visualization tools – notably, `display_tree()`, above. ## Lessons Learned * _Derivation trees_ are important for expressing input structure * _Grammar fuzzing based on derivation trees_ 1. is much more efficient than string-based grammar fuzzing, 2. gives much better control over input generation, and 3. effectively avoids running into infinite expansions. ## Next Steps Congratulations! You have reached one of the central "hubs" of the book. From here, there is a wide range of techniques that build on grammar fuzzing. ### Extending Grammars First, we have a number of techniques that all _extend_ grammars in some form: * [Parsing and recombining inputs](Parser.ipynb) allows to make use of existing inputs, again using derivation trees * [Covering grammar expansions](GrammarCoverageFuzzer.ipynb) allows for _combinatorial_ coverage * [Assigning _probabilities_ to individual expansions](ProbabilisticGrammarFuzzer.ipynb) gives additional control over expansions * [Assigning _constraints_ to individual expansions](GeneratorGrammarFuzzer.ipynb) allows to express _semantic constraints_ on individual rules. ### Applying Grammars Second, we can _apply_ grammars in a variety of contexts that all involve some form of learning it automatically: * [Fuzzing APIs](APIFuzzer.ipynb), learning a grammar from APIs * [Fuzzing graphical user interfaces](WebFuzzer.ipynb), learning a grammar from user interfaces for subsequent fuzzing * [Mining grammars](GrammarMiner.ipynb), learning a grammar for arbitrary input formats Keep on expanding! ## Background Derivation trees (then frequently called _parse trees_) are a standard data structure into which *parsers* decompose inputs. The *Dragon Book* (also known as *Compilers: Principles, Techniques, and Tools*) \cite{Aho2006} discusses parsing into derivation trees as part of compiling programs. We also use derivation trees [when parsing and recombining inputs](Parser.ipynb). The key idea in this chapter, namely expanding until a limit of symbols is reached, and then always choosing the shortest path, stems from Luke \cite{Luke2000}. ## Exercises ### Exercise 1: Caching Method Results Tracking `GrammarFuzzer` reveals that some methods are called again and again, always with the same values. Set up a class `FasterGrammarFuzzer` with a _cache_ that checks whether the method has been called before, and if so, return the previously computed "memoized" value. Do this for `expansion_to_children()`. Compare the number of invocations before and after the optimization. **Important**: For `expansion_to_children()`, make sure that each list returned is an individual copy. If you return the same (cached) list, this will interfere with the in-place modification of `GrammarFuzzer`. Use the Python `copy.deepcopy()` function for this purpose. 
**Solution.** Let us demonstrate this for `expansion_to_children()`: ``` import copy class FasterGrammarFuzzer(GrammarFuzzer): """Variant of `GrammarFuzzer` with memoized values""" def __init__(self, *args, **kwargs) -> None: super().__init__(*args, **kwargs) self._expansion_cache: Dict[Expansion, List[DerivationTree]] = {} self._expansion_invocations = 0 self._expansion_invocations_cached = 0 def expansion_to_children(self, expansion: Expansion) \ -> List[DerivationTree]: self._expansion_invocations += 1 if expansion in self._expansion_cache: self._expansion_invocations_cached += 1 cached_result = copy.deepcopy(self._expansion_cache[expansion]) return cached_result result = super().expansion_to_children(expansion) self._expansion_cache[expansion] = result return result f = FasterGrammarFuzzer(EXPR_GRAMMAR, min_nonterminals=3, max_nonterminals=5) f.fuzz() f._expansion_invocations f._expansion_invocations_cached print("%.2f%% of invocations can be cached" % (f._expansion_invocations_cached * 100 / f._expansion_invocations)) ``` ### Exercise 2: Grammar Pre-Compilation Some methods such as `symbol_cost()` or `expansion_cost()` return a value that is dependent on the grammar only. Set up a class `EvenFasterGrammarFuzzer()` that pre-computes these values once upon initialization, such that later invocations of `symbol_cost()` or `expansion_cost()` need only look up these values. **Solution.** Here's a possible solution, using a hack to substitute the `symbol_cost()` and `expansion_cost()` functions once the pre-computed values are set up. ``` class EvenFasterGrammarFuzzer(GrammarFuzzer): """Variant of `GrammarFuzzer` with precomputed costs""" def __init__(self, *args, **kwargs) -> None: super().__init__(*args, **kwargs) self._symbol_costs: Dict[str, Union[int, float]] = {} self._expansion_costs: Dict[Expansion, Union[int, float]] = {} self.precompute_costs() def new_symbol_cost(self, symbol: str, seen: Set[str] = set()) -> Union[int, float]: return self._symbol_costs[symbol] def new_expansion_cost(self, expansion: Expansion, seen: Set[str] = set()) -> Union[int, float]: return self._expansion_costs[expansion] def precompute_costs(self) -> None: for symbol in self.grammar: self._symbol_costs[symbol] = super().symbol_cost(symbol) for expansion in self.grammar[symbol]: self._expansion_costs[expansion] = \ super().expansion_cost(expansion) # Make sure we now call the caching methods self.symbol_cost = self.new_symbol_cost # type: ignore self.expansion_cost = self.new_expansion_cost # type: ignore f = EvenFasterGrammarFuzzer(EXPR_GRAMMAR) ``` Here are the individual costs: ``` f._symbol_costs f._expansion_costs f = EvenFasterGrammarFuzzer(EXPR_GRAMMAR) f.fuzz() ``` ### Exercise 3: Maintaining Trees to be Expanded In `expand_tree_once()`, the algorithm traverses the tree again and again to find nonterminals that still can be extended. Speed up the process by keeping a list of nonterminal symbols in the tree that still can be expanded. **Solution.** Left as exercise for the reader. ### Exercise 4: Alternate Random Expansions We could define `expand_node_randomly()` such that it simply invokes `expand_node_by_cost(node, random.choice)`: ``` class ExerciseGrammarFuzzer(GrammarFuzzer): def expand_node_randomly(self, node: DerivationTree) -> DerivationTree: if self.log: print("Expanding", all_terminals(node), "randomly by cost") return self.expand_node_by_cost(node, random.choice) ``` What is the difference between the original implementation and this alternative? 
**Solution.** The alternative in `ExerciseGrammarFuzzer` has another probability distribution. In the original `GrammarFuzzer`, all expansions have the same likelihood of being expanded. In `ExerciseGrammarFuzzer`, first, a cost is chosen (randomly); then, one of the expansions with this cost is chosen (again randomly). This means that expansions whose cost is unique have a higher chance of being selected.
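One way to make this concrete is to inspect the cost values that `expand_node_by_cost()` chooses from. The following snippet is an illustrative sketch, not part of the original exercise; it assumes the `GrammarFuzzer` and `EXPR_GRAMMAR` definitions from this chapter and prints the cost of each `<factor>` alternative. Alternatives that lead back into `<factor>` are reported with cost $\infty$, while the non-recursive ones have finite costs; these cost groups are what `ExerciseGrammarFuzzer` draws from before picking an individual expansion.

```
f = GrammarFuzzer(EXPR_GRAMMAR)
for expansion in EXPR_GRAMMAR["<factor>"]:
    # `expansion_cost()` treats `<factor>` as already seen, so recursive
    # alternatives are reported as infinity
    print(repr(expansion), "costs", f.expansion_cost(expansion, {"<factor>"}))
```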
``` import csv import seaborn as sns from matplotlib import pyplot as plt import numpy as np import pandas as pd %matplotlib inline %load_ext autoreload %autoreload 2 ``` ## STU Expansion ``` # Opening data with open("results/STU.csv", 'rt') as f: reader = csv.reader(f) data = list(reader) # Passing data to lists, then to arrays (should change this to make it all in one) days_stu = [] spell_stu = [] dtin_stu = [] for row in data[1:]: if row[0]== '' or row[1] == '': pass else: days_stu.append(float(row[0])) spell_stu.append(float(row[1])) dtin_stu.append(int(row[2])) first_spell_stu = [] second_spell_stu = [] for idx in range(len(days_stu)): if spell_stu[idx]==1: first_spell_stu.append(days_stu[idx]) elif spell_stu[idx]==2: second_spell_stu.append(days_stu[idx]) ``` ## Non-Employment ``` # Opening data with open("results/NE.csv", 'rt') as f: reader = csv.reader(f) data = list(reader) # Passing data to lists, then to arrays (should change this to make it all in one) days_ne = [] spell_ne = [] dtin_ne = [] for row in data[1:]: if row[0]== '' or row[1] == '': pass else: days_ne.append(float(row[0])) spell_ne.append(float(row[1])) dtin_ne.append(int(row[2])) first_spell_ne = [] second_spell_ne = [] for idx in range(len(days_ne)): if spell_ne[idx]==1: first_spell_ne.append(days_ne[idx]) elif spell_ne[idx]==2: second_spell_ne.append(days_ne[idx]) ``` ## Spell Adjustment ``` # Opening data with open("results/Upper.csv", 'rt') as f: reader = csv.reader(f) data = list(reader) # Passing data to lists, then to arrays (should change this to make it all in one) days_SAdj = [] spell_SAdj = [] dtin_SAdj = [] for row in data[1:]: if row[0]== '' or row[1] == '': pass else: days_SAdj.append(float(row[0])) spell_SAdj.append(float(row[1])) dtin_SAdj.append(int(row[2])) first_spell_SAdj = [] second_spell_SAdj = [] for idx in range(len(days_SAdj)): if spell_SAdj[idx]==1: first_spell_SAdj.append(days_SAdj[idx]) elif spell_SAdj[idx]==2: second_spell_SAdj.append(days_SAdj[idx]) ``` ## LTU Expansion ``` # Opening data with open("results/Lower.csv", 'rt') as f: reader = csv.reader(f) data = list(reader) # Passing data to lists, then to arrays (should change this to make it all in one) days2 = [] spell2 = [] dtin2 = [] for row in data[1:]: if row[0]== '' or row[1] == '': pass else: days2.append(float(row[0])) spell2.append(float(row[1])) dtin2.append(int(row[2])) first_spell2 = [] second_spell2 = [] for idx in range(len(days2)): if spell2[idx]==1: first_spell2.append(days2[idx]) elif spell2[idx]==2: second_spell2.append(days2[idx]) ``` ## Raw data ``` # Opening data with open("results/LLower.csv", 'rt') as f: reader = csv.reader(f) data = list(reader) # Passing data to lists, then to arrays (should change this to make it all in one) days3 = [] spell3 = [] for row in data[1:]: if row[0]== '' or row[1] == '': pass else: days3.append(float(row[0])) spell3.append(float(row[1])) first_spell3 = [] second_spell3 = [] for idx in range(len(days3)): if spell3[idx]==1: first_spell3.append(days3[idx]) elif spell3[idx]==2: second_spell3.append(days3[idx]) ``` # Plots ``` sns.set_style("whitegrid") week_range = np.arange(0,1092,7) # LTU data_21, bins21 = np.histogram(first_spell2,week_range) data_22, bins22 = np.histogram(second_spell2,week_range) # Raw data_31, bins31 = np.histogram(first_spell3,week_range) data_32, bins32 = np.histogram(second_spell3,week_range) # STU data_stu, bins_stu = np.histogram(first_spell_stu,week_range) data_stu2, bins_stu2 = np.histogram(second_spell_stu,week_range) # NE data_ne, bins_ne = 
np.histogram(first_spell_ne,week_range) data_21 = data_21 / float(sum(data_21)) data_31 = data_31 / float(sum(data_31)) data_stu = data_stu / float(sum(data_stu)) data_ne = data_ne / float(sum(data_ne)) data_22 = data_22 / float(sum(data_22)) data_32 = data_32 / float(sum(data_32)) data_stu2 = data_stu2 / float(sum(data_stu2)) sns.set_palette('deep',4) plt.figure(figsize=(28,12)) spikes = (4,8,13,17,26,52,78,104) plt.plot(data_31, lw=3, label='RU') #c='grey' # plt.plot(data_21,lw=3, label='LTU Expansion',ls='--') # plt.plot(data_stu, lw= 3.5, label='STU Expansion', ) #c='red' # plt.plot(data_ne, lw= 3, label='Non-Employment',ls='-.') #c='darkorange' plt.legend(loc='best', framealpha=1.0, fontsize=32) # plt.xticks(spikes,fontsize=28) plt.xticks(np.arange(0,120,4),fontsize=24) plt.yticks(fontsize=28) for i in spikes: plt.axvline(i, c='navy', alpha=.3) plt.xlim(0,120) plt.xlabel('First Spell, duration in weeks',fontsize=32, labelpad=30 ) plt.ylim(0,0.08) plt.savefig("./plots/RU_only_histogram.eps", format="eps", bbox_inches='tight') plt.show() sns.set_palette('Greys',4) plt.figure(figsize=(28,12)) spikes = (4,8,13,17,26,52,78,104) plt.plot(data_31, lw=3, label='Raw') #c='grey' plt.plot(data_21,lw=3, label='LTU Expansion',ls='--') plt.plot(data_stu, lw= 3.5, label='STU Expansion', ) #c='red' # plt.plot(data_ne, lw= 3, label='Non-Employment',ls='-.') #c='darkorange' plt.legend(loc='best', framealpha=1.0, fontsize=32) plt.xticks(spikes,fontsize=28) plt.yticks(fontsize=28) for i in spikes: plt.axvline(i, c='white', alpha=0.2) plt.xlim(0,120) plt.xlabel('First Spell, duration in weeks',fontsize=32 ) plt.ylim(0,0.08) # plt.savefig("Add2_spikes_bw.png", format="png", bbox_inches='tight') plt.show() sns.set_palette('deep',4) plt.figure(figsize=(28,12)) spikes = (4,8,13,17,26,52,78,104) plt.plot(data_31, lw=4, label='Raw Data',c='grey',alpha=0.5) plt.plot(data_21,lw=4, label='LTU Expansion',alpha=0.5) plt.plot(data_stu, lw= 4, label='STU Expansion', c='purple' ) plt.plot(data_ne, lw= 4, label='Non-Employment',c='darkorange') plt.legend(loc='best', framealpha=1.0, fontsize=32) plt.xticks(spikes,fontsize=28) plt.yticks(fontsize=28) for i in spikes: plt.axvline(i, c='white', alpha=0.2) plt.xlim(0,120) plt.xlabel('First Spell, duration in weeks',fontsize=32 ) plt.ylim(0,0.08) # plt.savefig("plots/Add2_spikes.png", format="png", bbox_inches='tight') # plt.savefig("Add2_spikes_bw.png", format="png", bbox_inches='tight') # plt.savefig("plots/NE_only_histogram.pdf", format="pdf", bbox_inches='tight') plt.show() plt.figure(figsize=(28,12)) spikes = (4,8,13,17,26,52,78,104) plt.plot(data_31, lw=3.5, label='RU',c='grey') plt.plot(data_21,lw=3.5, label='LTU Expansion') plt.legend(loc='best', framealpha=1.0, fontsize=32) plt.xticks(spikes,fontsize=28) plt.yticks(fontsize=28) for i in spikes: plt.axvline(i, c='white', alpha=0.2) plt.xlim(0,120) plt.xlabel('First Spell, duration in weeks',fontsize=32 ) plt.ylim(0,0.08) plt.savefig("plots/Add1_spikes.png", format="png", bbox_inches='tight') plt.show() ``` # First and Second Spells ``` plt.figure(figsize=(20,8)) #plt.suptitle('Hazard rate by Spell Number', fontsize=22) plt.subplot(131) plt.plot(data_31, c= 'purple', label='First Spell') plt.plot(data_32, c='b', label='Second Spell') plt.legend(loc='best', fontsize=16) plt.title('MCVL Original', fontsize=20) plt.ylim(0,0.08) plt.xticks(fontsize=14) plt.yticks(fontsize=14) plt.subplot(132) plt.plot(data_21, c='purple', label='First Spell') plt.plot(data_22, c='b', label='Second Spell') plt.legend(loc='best', 
fontsize=16) plt.title('LTU Expansion', fontsize=20) plt.ylim(0,0.08) plt.xlabel('Spell duration in weeks',fontsize=16 ) plt.xticks(fontsize=14) plt.yticks(fontsize=14) plt.subplot(133) plt.plot(data_stu, c='purple', label='First Spell') plt.plot(data_stu2, c='b', label='Second Spell') plt.legend(loc='best', fontsize=16) plt.title('STU Expansion', fontsize=20) plt.ylim(0,0.08) plt.xticks(fontsize=14) plt.yticks(fontsize=14) plt.tight_layout() plt.savefig("plots/n_spell.png", format='png', bbox_inches='tight') plt.show() ```
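The four loading blocks above differ only in the file read and the variable names, as the inline comments already note. A possible consolidation is sketched below; it assumes the same CSV layout as the cells above (duration in days in the first column, spell number in the second, and an entry date in the third, which the `LLower.csv` block does not read). The helper name and keyword argument are just one possible choice.

```
def load_spells(path, has_dtin=True):
    """Read a results CSV and split spell durations by spell number."""
    days, spells, dtin = [], [], []
    with open(path, 'rt') as f:
        rows = list(csv.reader(f))
    for row in rows[1:]:
        if row[0] == '' or row[1] == '':
            continue  # skip incomplete rows, as in the cells above
        days.append(float(row[0]))
        spells.append(float(row[1]))
        if has_dtin:
            dtin.append(int(row[2]))
    first = [d for d, s in zip(days, spells) if s == 1]
    second = [d for d, s in zip(days, spells) if s == 2]
    return first, second, dtin

# Example usage, mirroring the cells above:
# first_spell_stu, second_spell_stu, dtin_stu = load_spells("results/STU.csv")
# first_spell3, second_spell3, _ = load_spells("results/LLower.csv", has_dtin=False)
```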
github_jupyter
# Hyperparameter Tuning for a TensorFlow Script

#### What this notebook covers
- How to run hyperparameter tuning against a TensorFlow script
- An overview of hyperparameter tuning and its basic usage

#### Details of the method used in this notebook
- Algorithm: CNN
- Data: MNIST

## Setup

Set up the required parameters.

```
import boto3
from time import gmtime, strftime
import sagemaker

role = sagemaker.get_execution_role()
```

Before running the cell below, change the **<span style="color: red;">`XX` in `sagemaker/hpo-tensorflow-high/XX` to the number you have been assigned</span>**.

```
bucket = sagemaker.Session().default_bucket()
prefix = 'sagemaker/hpo-tensorflow-high/XX'
```

## Loading the data

Load the MNIST data via TensorFlow, then split it into three sets: training, validation, and test.

```
import utils
from tensorflow.contrib.learn.python.learn.datasets import mnist
import tensorflow as tf

data_sets = mnist.read_data_sets('data',
                                 dtype=tf.uint8,
                                 reshape=False,
                                 validation_size=5000)

utils.convert_to(data_sets.train, 'train', 'data')
utils.convert_to(data_sets.validation, 'validation', 'data')
utils.convert_to(data_sets.test, 'test', 'data')
```

Once the data is ready, upload it to S3 using `sagemaker.Session().upload_data()`.

```
inputs = sagemaker.Session().upload_data(path='data', bucket=bucket, key_prefix=prefix+'/data/mnist')
print(inputs)
```

## Running the hyperparameter tuning job

Next, we set up and run the tuning job. This involves the following four steps:

1. Create a TensorFlow estimator object, just as for a regular training job
1. Specify the names and ranges of the hyperparameters to tune as a dictionary
1. Specify the target metric used to evaluate the tuning
1. Run the tuning job

### 1. Creating the TensorFlow object

This is exactly the same procedure as for a regular training job.

```
from sagemaker.tensorflow import TensorFlow

estimator = TensorFlow(entry_point='mnist.py',
                       role=role,
                       training_steps=100,
                       evaluation_steps=10,
                       train_instance_count=1,
                       train_instance_type='ml.m4.xlarge')
```

### 2. Creating the list of hyperparameters

Next, create the list of hyperparameters you want to tune. There is a dedicated object for each kind of hyperparameter, so use the one that matches: for a categorical hyperparameter pass the list of categories to search, and for a continuous one specify a range. Integers are specified with an object different from the one for ordinary continuous values.

- Categorical: `CategoricalParameter(list)`
- Continuous: `ContinuousParameter(min, max)`
- Integer: `IntegerParameter(min, max)`

Here we chose `learning_rate` as the tuning target.

```
from sagemaker.tuner import IntegerParameter, CategoricalParameter, ContinuousParameter, HyperparameterTuner

hyperparameter_ranges = {'learning_rate': ContinuousParameter(0.01, 0.2)}
```

### 3. Specifying the target metric

Next, specify the metric used to evaluate the tuning. SageMaker extracts this metric from the job's standard output with a regular expression, so write your script so that the target metric is printed to the standard output log. Here, the standard output log means the log that is written to CloudWatch Logs while the job runs.

You can choose whether to minimize or maximize this target metric; the default is to maximize it.

Here we use the value of the loss function as the target and aim to minimize it.

```
objective_metric_name = 'loss'
objective_type = 'Minimize'
metric_definitions = [{'Name': 'loss',
                       'Regex': 'loss = ([0-9\\.]+)'}]
```

### 4. Running the tuning job

Once the preparation above is done, create the tuning job object and run it with `fit()`. When doing so, change the **<span style="color: red;">`XX` in the `base_tuning_job_name` value `hpo-tensorflow-XX` to the number you have been assigned</span>**.

See the [documentation](https://sagemaker.readthedocs.io/en/latest/tuner.html) for details on `HyperparameterTuner`.

```
tuner = HyperparameterTuner(estimator,
                            objective_metric_name,
                            hyperparameter_ranges,
                            metric_definitions,
                            max_jobs=9,
                            max_parallel_jobs=3,
                            objective_type=objective_type,
                            base_tuning_job_name='hpo-tensorflow-XX')

tuner.fit(inputs)
```

The status of the tuning job can be checked via the `boto3` client.

```
boto3.client('sagemaker').describe_hyper_parameter_tuning_job(
    HyperParameterTuningJobName=tuner.latest_tuning_job.job_name)['HyperParameterTuningJobStatus']
```
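The search space above tunes only `learning_rate`. As an illustration of step 2, the sketch below mixes all three parameter types; the extra hyperparameter names (`batch_size`, `optimizer`) are assumptions for illustration only and would take effect only if `mnist.py` actually reads hyperparameters with those names.

```
from sagemaker.tuner import IntegerParameter, CategoricalParameter, ContinuousParameter

# Hypothetical wider search space: 'batch_size' and 'optimizer' are example names
# and must match hyperparameters that the training script accepts.
hyperparameter_ranges = {
    'learning_rate': ContinuousParameter(0.01, 0.2),      # continuous range
    'batch_size': IntegerParameter(32, 256),              # integer range (assumed name)
    'optimizer': CategoricalParameter(['sgd', 'adam']),   # categorical choices (assumed name)
}
```

The rest of the tuning setup (metric definition, `HyperparameterTuner`, `fit()`) stays unchanged.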
# Static vs Dynamic Neural Networks in NNabla

NNabla allows you to define static and dynamic neural networks. Static neural networks have a fixed layer architecture, i.e., a static computation graph. In contrast, dynamic neural networks use a dynamic computation graph, e.g., randomly dropping layers for each minibatch. This tutorial compares both computation graphs.

```
# python2/3 compatibility
from __future__ import print_function
from __future__ import absolute_import
from __future__ import division

%matplotlib inline
import nnabla as nn
import nnabla.functions as F
import nnabla.parametric_functions as PF
import nnabla.solvers as S
import numpy as np
np.random.seed(0)
GPU = 0  # ID of GPU that we will use
batch_size = 64  # Reduce to fit your device memory
```

### Dataset loading

We will first set up the digits dataset from scikit-learn:

```
from tiny_digits import *

digits = load_digits()
data = data_iterator_tiny_digits(digits, batch_size=batch_size, shuffle=True)
```

Each sample in this dataset is a grayscale image of size 8x8 and belongs to one of the ten classes `0`, `1`, ..., `9`.

```
img, label = data.next()
print(img.shape, label.shape)
```

### Network definition

As an example, we define a (unnecessarily) deep CNN:

```
def cnn(x):
    """Unnecessarily Deep CNN.

    Args:
        x : Variable, shape (B, 1, 8, 8)

    Returns:
        y : Variable, shape (B, 10)
    """
    with nn.parameter_scope("cnn"):  # Parameter scope can be nested
        with nn.parameter_scope("conv1"):
            h = F.tanh(PF.batch_normalization(
                PF.convolution(x, 64, (3, 3), pad=(1, 1))))
        for i in range(10):  # unnecessarily deep
            with nn.parameter_scope("conv{}".format(i + 2)):
                h = F.tanh(PF.batch_normalization(
                    PF.convolution(h, 128, (3, 3), pad=(1, 1))))
        with nn.parameter_scope("conv_last"):
            h = F.tanh(PF.batch_normalization(
                PF.convolution(h, 512, (3, 3), pad=(1, 1))))
            h = F.average_pooling(h, (2, 2))
        with nn.parameter_scope("fc"):
            h = F.tanh(PF.affine(h, 1024))
        with nn.parameter_scope("classifier"):
            y = PF.affine(h, 10)
    return y
```

## Static computation graph

First, we will look at the case of a static computation graph, where the neural network does not change during training.

```
from nnabla.ext_utils import get_extension_context

# setup cuda extension
ctx_cuda = get_extension_context('cudnn', device_id=GPU)  # replace 'cudnn' by 'cpu' if you want to run the example on the CPU
nn.set_default_context(ctx_cuda)

# create variables for network input and label
x = nn.Variable(img.shape)
t = nn.Variable(label.shape)

# create network
static_y = cnn(x)
static_y.persistent = True

# define loss function for training
static_l = F.mean(F.softmax_cross_entropy(static_y, t))
```

Set up the solver for training:

```
solver = S.Adam(alpha=1e-3)
solver.set_parameters(nn.get_parameters())
```

Create the data iterator:

```
loss = []

def epoch_end_callback(epoch):
    global loss
    print("[", epoch, np.mean(loss), itr, "]", end='')
    loss = []

data = data_iterator_tiny_digits(digits, batch_size=batch_size, shuffle=True)
data.register_epoch_end_callback(epoch_end_callback)
```

Perform training iterations and output the training loss:

```
%%time
for epoch in range(30):
    itr = 0
    while data.epoch == epoch:
        x.d, t.d = data.next()
        static_l.forward(clear_no_need_grad=True)
        solver.zero_grad()
        static_l.backward(clear_buffer=True)
        solver.update()
        loss.append(static_l.d.copy())
        itr += 1
print('')
```

## Dynamic computation graph

Now, we will use a dynamic computation graph, where the neural network is set up each time we want to do a forward/backward pass through it. This allows us to, e.g., randomly drop layers or to have network architectures that depend on the input data. In this example, for simplicity, we use the same neural network structure and only create it dynamically. For example, adding an `if np.random.rand() > dropout_probability:` check inside `cnn()` would let us randomly drop layers (a sketch of this idea is shown at the end of this notebook).

First, we set up the solver and the data iterator for training:

```
nn.clear_parameters()
solver = S.Adam(alpha=1e-3)
solver.set_parameters(nn.get_parameters())

loss = []

def epoch_end_callback(epoch):
    global loss
    print("[", epoch, np.mean(loss), itr, "]", end='')
    loss = []

data = data_iterator_tiny_digits(digits, batch_size=batch_size, shuffle=True)
data.register_epoch_end_callback(epoch_end_callback)

%%time
for epoch in range(30):
    itr = 0
    while data.epoch == epoch:
        x.d, t.d = data.next()
        with nn.auto_forward():
            dynamic_y = cnn(x)
            dynamic_l = F.mean(F.softmax_cross_entropy(dynamic_y, t))
        solver.set_parameters(nn.get_parameters(), reset=False, retain_state=True)  # this can be done dynamically
        solver.zero_grad()
        dynamic_l.backward(clear_buffer=True)
        solver.update()
        loss.append(dynamic_l.d.copy())
        itr += 1
print('')
```

Comparing the two processing times, we can observe that both schemes ("static" and "dynamic") take roughly the same execution time, i.e., although we created the computation graph dynamically, we did not lose performance.
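As a concrete illustration of the randomly dropped layers mentioned above, here is a minimal sketch of a dynamic variant of `cnn()`. It is an illustrative assumption rather than part of the original tutorial: the `dynamic_cnn` name, the `dropout_probability` argument, and the choice to give `conv1` 128 output channels (so every optional block sees the same input shape no matter which earlier blocks were skipped) are all decisions made here for the sketch.

```
import numpy as np
import nnabla as nn
import nnabla.functions as F
import nnabla.parametric_functions as PF

def dynamic_cnn(x, dropout_probability=0.3):
    # Sketch of a per-minibatch dynamic network: each of the ten middle blocks
    # is skipped at random, so the graph can differ on every forward pass.
    with nn.parameter_scope("dynamic_cnn"):
        with nn.parameter_scope("conv1"):
            # 128 channels here so that every optional block below always
            # receives a 128-channel input, regardless of which blocks ran.
            h = F.tanh(PF.batch_normalization(
                PF.convolution(x, 128, (3, 3), pad=(1, 1))))
        for i in range(10):
            if np.random.rand() < dropout_probability:
                continue  # randomly drop this block for the current minibatch
            with nn.parameter_scope("conv{}".format(i + 2)):
                h = F.tanh(PF.batch_normalization(
                    PF.convolution(h, 128, (3, 3), pad=(1, 1))))
        with nn.parameter_scope("conv_last"):
            h = F.tanh(PF.batch_normalization(
                PF.convolution(h, 512, (3, 3), pad=(1, 1))))
            h = F.average_pooling(h, (2, 2))
        with nn.parameter_scope("fc"):
            h = F.tanh(PF.affine(h, 1024))
        with nn.parameter_scope("classifier"):
            y = PF.affine(h, 10)
    return y
```

To try it, replace `cnn(x)` with `dynamic_cnn(x)` inside the `nn.auto_forward()` block of the dynamic training loop; because `solver.set_parameters(nn.get_parameters(), reset=False, retain_state=True)` is called every iteration, parameters of blocks that appear for the first time are picked up automatically. This kind of per-minibatch structural change is exactly what a static graph cannot express.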
```
class Solution:
    def longestWPI(self, hours) -> int:
        n = len(hours)
        # a day with more than 8 hours scores +1, 8 hours or less scores -1
        score = [0] * n
        for i in range(n):
            if hours[i] > 8:
                score[i] = 1
            else:
                score[i] = -1
        # prefix sums
        presum = [0] * (n + 1)
        for i in range(1, n + 1):
            presum[i] = presum[i - 1] + score[i - 1]
        print(presum)
        ans = 0
        stack = []
        # build a monotonic stack from left to right; the prefix sums at the stored
        # indices are strictly decreasing, so the last entry is the position of the
        # minimum prefix sum
        for i in range(n + 1):
            if not stack or presum[stack[-1]] > presum[i]:
                stack.append(i)
        print(stack)
        # scan the array from the right and pop to find the longest "ramp"
        i = n
        while i > ans:
            while stack and presum[stack[-1]] < presum[i]:
                ans = max(ans, i - stack[-1])
                stack.pop()
            i -= 1
        return ans

solution = Solution()
solution.longestWPI(hours=[9, 9, 6, 0, 6, 6, 9])


class Solution:
    def longestWPI(self, hours):
        presum = [0]
        # build prefix sums: +1 when h > 8, otherwise -1
        for h in hours:
            item = presum[-1] + 1 if h > 8 else presum[-1] - 1
            presum.append(item)
        print(presum)
        stack = []
        # build a strictly decreasing stack holding candidate interval start indices
        for i, v in enumerate(presum):
            if not stack or v < presum[stack[-1]]:
                stack.append(i)
        print(stack)
        # effectively: scanning from the right, look for an earlier index with a
        # smaller prefix sum, so that presum[j] - presum[i] > 0
        ans = 0
        idx = len(hours)
        # scan from right to left
        while idx > ans:
            while stack and presum[stack[-1]] < presum[idx]:
                ans = max(ans, idx - stack[-1])
                stack.pop()
            idx -= 1
        return ans

solution = Solution()
solution.longestWPI(hours=[9, 9, 6, 0, 6, 6, 9])

# [0, -1]  -1 > -2  0 > -1: 0:4, 1:5

class Solution:
    def longestWPI(self, hours):
        n = len(hours)
        stack = []
        res = 0
        for i, t in enumerate(hours):
            if t > 8:
                no_tir = 0
                tir = 1  # counts of non-tiring / tiring days
                fir = -1
                while tir > no_tir and stack:
                    idx = stack.pop()
                    if hours[idx] > 8:
                        tir += 1
                    else:
                        no_tir += 1
                    fir = idx
                res = max(res, i - fir)
            stack.append(i)
        return res

class Solution:
    def longestWPI(self, hours):
        # Note: the "exists a window of exactly `gap` days with positive score"
        # predicate is not monotone in `gap`, so this binary search is only a
        # heuristic, unlike the stack-based solutions above.
        def check(gap):
            for s in range(n - gap + 1):
                e = s + (gap - 1)
                tir = presum[e + 1] - presum[s]
                print(hours[s:e + 1], gap, tir)
                if tir > 0:
                    return True
            return False

        n = len(hours)
        presum = [0]
        for h in hours:
            if h > 8:
                presum.append(presum[-1] + 1)
            else:
                presum.append(presum[-1] - 1)
        left, right = 1, n
        res = 0
        while left <= right:
            mid = left + (right - left) // 2
            if check(mid):
                res = mid
                left = mid + 1
            else:
                right = mid - 1
        return res

solution = Solution()
solution.longestWPI(hours=[9, 9, 6, 0, 6, 6, 9])

round(3 / 2)

round(5 / 2)

vals = [1, 2, 3, 4, 5, 6]
presum = [0]
for v in vals:
    presum.append(v + presum[-1])
print(presum)

presum[3] - presum[0]

a = [-1, 0]
if any(a) > 0:
    print(2)
```
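The cells above experiment with several approaches to the "longest well-performing interval" problem. As an optional sanity check, the sketch below (my own addition, not part of the original notebook) compares a plain O(n^2) brute force against a print-free version of the prefix-sum-plus-decreasing-stack idea on random inputs.

```
import random

def longest_wpi_bruteforce(hours):
    # O(n^2): score every interval directly; a day counts +1 if > 8 hours, else -1.
    n = len(hours)
    best = 0
    for i in range(n):
        balance = 0
        for j in range(i, n):
            balance += 1 if hours[j] > 8 else -1
            if balance > 0:
                best = max(best, j - i + 1)
    return best

def longest_wpi_stack(hours):
    # Same idea as the first Solution above, without the debug prints.
    presum = [0]
    for h in hours:
        presum.append(presum[-1] + (1 if h > 8 else -1))
    stack = []
    for i, v in enumerate(presum):
        if not stack or v < presum[stack[-1]]:
            stack.append(i)
    ans = 0
    for i in range(len(presum) - 1, 0, -1):
        while stack and presum[stack[-1]] < presum[i]:
            ans = max(ans, i - stack.pop())
    return ans

for _ in range(200):
    hrs = [random.randint(0, 12) for _ in range(random.randint(1, 30))]
    assert longest_wpi_bruteforce(hrs) == longest_wpi_stack(hrs), hrs
print("brute force and stack solution agree")
```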
```
%matplotlib inline
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import umap

model_folder = '../../results/NCOUNT_2009/'

base_umaps = {}
guided_umaps = {}
all_base = {}
all_guided = {}
type_ = ''
i = 1
for layer in ['mixed10', 'finetuned_features1', 'finetuned_features2', 'finetuned_features3']:
    print(layer)
    feats = np.load('{}/{}features{}.npy'.format(model_folder, type_, layer))
    idxs = np.random.choice(np.arange(len(feats)), size=4000, replace=False)
    to_transform = feats[idxs]
    embedding = umap.UMAP().fit_transform(to_transform)
    features_df = pd.DataFrame(data=embedding, columns=['x', 'y'])
    labs = np.load('{}/{}labels{}.npy'.format(model_folder, type_, layer))
    cvals = np.load('{}/{}cvalues{}.npy'.format(model_folder, type_, layer))
    features_df['label'] = labs[idxs]
    features_df['cval'] = cvals[idxs]

    feats = np.load('{}/{}base_features{}.npy'.format(model_folder, type_, layer))
    to_transform = feats[idxs]
    embedding = umap.UMAP().fit_transform(to_transform)
    base_features_df = pd.DataFrame(data=embedding, columns=['x', 'y'])
    labs = np.load('{}/{}base_labels{}.npy'.format(model_folder, type_, layer))
    cvals = np.load('{}/{}base_cvalues{}.npy'.format(model_folder, type_, layer))
    base_features_df['label'] = labs[idxs]
    base_features_df['cval'] = cvals[idxs]

    small = features_df.where(features_df['cval'] < -1.2)
    big = features_df.where(features_df['cval'] > 1.2)
    print('concatenating')
    df = pd.concat([small, big])
    base_small = base_features_df.where(base_features_df['cval'] < -1.2)
    base_big = base_features_df.where(base_features_df['cval'] > 1.2)
    base_df = pd.concat([base_small, base_big])

    base_umaps[layer] = base_df
    guided_umaps[layer] = df
    all_base[layer] = base_features_df
    all_guided[layer] = features_df

import statsmodels.api as sm
import statsmodels.formula.api as smf
from sklearn.linear_model import LinearRegression  # used for the linear fit below

fig, ax = plt.subplots(figsize=(11, 8.5))
sns.scatterplot(x="x", y="y",
                hue="label",
                size="cval", sizes=(5, 100),
                #palette="Set2",
                data=base_umaps['finetuned_features1'],
                legend=False)
ax.axhline(y=0, color='k', linewidth=1)  # added because i want the origin
ax.axvline(x=0, color='k', linewidth=1)

fitted = smf.ols(formula='x ~ cval', data=base_umaps['finetuned_features1']).fit(cov_type='HC3')

#x_pred = np.linspace(x.min() - 1, x.max() + 1, 50)
data = all_guided['finetuned_features1']  # guided embedding for this layer (also assigned below)
X = data['x'].values.reshape(-1, 1)  # values converts it into a numpy array
Y = data['cval'].values.reshape(-1, 1)  # -1 means that calculate the dimension of rows, but have 1 column
linear_regressor = LinearRegression()  # create object for the class
linear_regressor.fit(X, Y)  # perform linear regression
Y_pred = linear_regressor.predict(X)  # make predictions
#plt.plot(X, Y_pred)#, ax=ax)
#sns.regplot(x="x", y="cval",
#            hue="label",
#            size="cval", sizes=(5,100),
#            palette="Set2",
#            data=base_umaps['finetuned_features1'],)
plt.plot(X, Y_pred)#, ax=ax)

x_pred = np.linspace(X.min() - 1, X.max() + 1, 50)

len(X)

from sklearn.linear_model import LinearRegression

data = all_guided['finetuned_features1']

Y_pred.ravel()

x_pred

X = data.iloc[:, 0:1].values.reshape(-1, 1)  # values converts it into a numpy array
Y = data['cval'].values.reshape(-1, 1)  # -1 means that calculate the dimension of rows, but have 1 column
linear_regressor = LinearRegression()  # create object for the class
linear_regressor.fit(X, Y)  # perform linear regression
Y_pred = linear_regressor.predict(X)  # make predictions

Y_pred

plt.rcParams['figure.figsize'] = (15, 24)
legend_bool = False
i = 1
fig = plt.figure()
for layer in ['mixed10', 'finetuned_features1']:  #, 'finetuned_features2']:#, 'finetuned_features3']:
    ax = plt.subplot(4, 2, i)
    if i == 1:
        plt.title('baseline')
    sns.scatterplot(x="x", y="y",
                    hue="label",
                    size="cval", sizes=(5, 100),
                    #palette="Set2",
                    data=base_umaps[layer], legend=False)
    #ax.axhline(y=0, color='k', linewidth=1)  # added because i want the origin
    #ax.axvline(x=0, color='k', linewidth=1)
    #plt.ylabel(layer)#, rotation='horizontal')
    #plt.text(-,5, layer)
    plt.axis('off')
    i += 1

    ax = plt.subplot(4, 2, i)
    if i == 2:
        plt.title('guided: model-ID3')
    if i == 4:
        legend_bool = True
    sns.scatterplot(x="x", y="y",
                    hue="label",
                    size="cval", sizes=(5, 100),
                    #palette="Set2",
                    data=guided_umaps[layer],
                    legend=legend_bool)
    #ax.axhline(y=0, color='k', linewidth=1)  # added because i want the origin
    #ax.axvline(x=0, color='k', linewidth=1)
    if i == 5:
        plt.legend(loc='lower right')
    plt.axis('off')
    i += 1

plt.rcParams['figure.figsize'] = (15, 24)
legend_bool = False
i = 1
fig = plt.figure()
for layer in ['mixed10']:  #,'finetuned_features1', 'finetuned_features2', 'finetuned_features3']:
    plt.subplot(4, 2, i)
    if i == 1:
        plt.title('base')
    sns.scatterplot(x="x", y="y",
                    hue="label",
                    size="cval", sizes=(5, 100),
                    palette="pastel",
                    data=all_base[layer], legend=False)
    #plt.ylabel(layer)#, rotation='horizontal')
    #plt.text(-,5, layer)
    plt.axis('off')
    i += 1

    plt.subplot(4, 2, i)
    if i == 2:
        plt.title('guided')
    if i == 8:
        legend_bool = True
    sns.scatterplot(x="x", y="y",
                    hue="label",
                    size="cval", sizes=(5, 100),
                    palette="pastel",
                    data=all_guided[layer],
                    legend=legend_bool)
    if i == 8:
        plt.legend(loc='lower right')
    plt.axis('off')
    i += 1

guided_umaps.
```
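The embedding cells above repeat the same load, embed, and label steps for the guided and base feature files of every layer. A small helper along the lines of the sketch below could replace that repetition; the function name `embed_features` and its signature are my own, and it assumes the `.npy` naming scheme used above (`features*.npy`, `labels*.npy`, `cvalues*.npy`, with a `base_` prefix for the baseline model).

```
import numpy as np
import pandas as pd
import umap

def embed_features(model_folder, layer, idxs, prefix='', type_=''):
    # Hypothetical helper: load features/labels/cvalues for one layer, embed a
    # fixed subsample with UMAP, and return a tidy DataFrame with x, y, label, cval.
    feats = np.load('{}/{}{}features{}.npy'.format(model_folder, type_, prefix, layer))
    labs = np.load('{}/{}{}labels{}.npy'.format(model_folder, type_, prefix, layer))
    cvals = np.load('{}/{}{}cvalues{}.npy'.format(model_folder, type_, prefix, layer))
    embedding = umap.UMAP().fit_transform(feats[idxs])
    df = pd.DataFrame(embedding, columns=['x', 'y'])
    df['label'] = labs[idxs]
    df['cval'] = cvals[idxs]
    return df

# Assumed usage: the same subsample indices for the guided and baseline features.
# guided_df = embed_features(model_folder, 'mixed10', idxs)
# base_df = embed_features(model_folder, 'mixed10', idxs, prefix='base_')
```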