Dataset schema (string lengths): markdown (0 to 1.02M), code (0 to 832k), output (0 to 1.02M), license (3 to 36), path (6 to 265), repo_name (6 to 127)
Downloading: The `Tabulator` also supports triggering a download of the data as a CSV or JSON file, depending on the filename extension. The download can be triggered with the `.download()` method, which optionally accepts the filename as the first argument. To trigger the download client-side (i.e. without involving the server) you can use the `.download_menu()` method, which creates a `TextInput` and a `Button` widget that allow setting the filename and triggering the download, respectively:
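For reference, a minimal sketch of triggering the server-side download programmatically (the `table` variable and the filename below are illustrative, not part of the notebook); the file extension selects CSV or JSON output:

```python
import numpy as np
import pandas as pd
import panel as pn

# Sketch: build a small table and trigger a download of its contents.
table = pn.widgets.Tabulator(pd.DataFrame(np.random.randn(5, 3), columns=list('ABC')))
table.download('table_data.csv')  # the filename argument is optional
```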
download_df = pd.DataFrame(np.random.randn(10, 5), columns=list('ABCDE'))
download_table = pn.widgets.Tabulator(download_df)

filename, button = download_table.download_menu(
    text_kwargs={'name': 'Enter filename', 'value': 'default.csv'},
    button_kwargs={'name': 'Download table'}
)

pn.Row(
    pn.Column(filename, button),
    download_table
)
_____no_output_____
BSD-3-Clause
examples/reference/widgets/Tabulator.ipynb
datalayer-contrib/holoviz-panel
Streaming: When we are monitoring some source of data that updates over time, we may want to update the table with the newly arriving data. However, we do not want to transmit the entire dataset each time. To handle efficient transfer of just the latest data, we can use the `.stream` method on the `Tabulator` object:
stream_df = pd.DataFrame(np.random.randn(10, 5), columns=list('ABCDE'))

stream_table = pn.widgets.Tabulator(stream_df, layout='fit_columns', width=450)
stream_table
_____no_output_____
BSD-3-Clause
examples/reference/widgets/Tabulator.ipynb
datalayer-contrib/holoviz-panel
As an example, we will schedule a periodic callback that streams new data every 1000 ms (i.e. 1 s), five times in a row:
def stream_data(follow=True):
    stream_df = pd.DataFrame(np.random.randn(10, 5), columns=list('ABCDE'))
    stream_table.stream(stream_df, follow=follow)

pn.state.add_periodic_callback(stream_data, period=1000, count=5)
_____no_output_____
BSD-3-Clause
examples/reference/widgets/Tabulator.ipynb
datalayer-contrib/holoviz-panel
If you are viewing this example with a live Python kernel you will be able to watch the table update and scroll along. If we want to disable the scrolling behavior, we can set `follow=False`:
stream_data(follow=False)
_____no_output_____
BSD-3-Clause
examples/reference/widgets/Tabulator.ipynb
datalayer-contrib/holoviz-panel
Patching: In certain cases we don't want to update the table with new data but just patch existing data.
patch_table = pn.widgets.Tabulator(df[['int', 'float', 'str', 'bool']])
patch_table
_____no_output_____
BSD-3-Clause
examples/reference/widgets/Tabulator.ipynb
datalayer-contrib/holoviz-panel
The easiest way to patch the data is by supplying a dictionary as the patch value. The dictionary should have the following structure:

```python
{
    column: [
        (index: int or slice, value),
        ...
    ],
    ...
}
```

As an example, below we will patch the `'bool'` and `'int'` columns. On the `'bool'` column we will replace the 0th and 2nd rows, and on the `'int'` column we will replace the first two rows:
patch_table.patch({
    'bool': [
        (0, False),
        (2, False)
    ],
    'int': [
        (slice(0, 2), [3, 2])
    ]
})
_____no_output_____
BSD-3-Clause
examples/reference/widgets/Tabulator.ipynb
datalayer-contrib/holoviz-panel
Static Configuration: Panel does not expose all options available from Tabulator; if a desired option is not natively supported, it can be set via the `configuration` argument. This dictionary can be seen as a base dictionary which the Tabulator object fills in and passes to the Tabulator JavaScript library. As an example, we can turn off sorting and resizing of columns by disabling the `headerSort` and `resizableColumns` options.
df = pd.DataFrame({
    'int': [1, 2, 3],
    'float': [3.14, 6.28, 9.42],
    'str': ['A', 'B', 'C'],
    'bool': [True, False, True],
    'date': [dt.date(2019, 1, 1), dt.date(2020, 1, 1), dt.date(2020, 1, 10)]
}, index=[1, 2, 3])

df_widget = pn.widgets.Tabulator(df, configuration={"headerSort": False, "resizableColumns": False})
df_widget.servable()
_____no_output_____
BSD-3-Clause
examples/reference/widgets/Tabulator.ipynb
datalayer-contrib/holoviz-panel
Full name (Nom et prénom). TP1: Probability and Statistics. HTML Base Tag Example
from __future__ import print_function

import numpy as np
import pandas as pd

from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
_____no_output_____
Unlicense
Tp_xy/TP1_mde amine_.ipynb
nevermind78/Proba_stat_4_LM
Probabilities, the frequentist approach. Definition via the relative frequency:
* An experiment with a given sample space is run several times under the same conditions.
* For each event E of the sample space, n(E) is the number of times the event E occurs during the first n repetitions of the experiment.
* P(E), the probability of the event E, is defined as follows:
$$P(E)=\lim_{n\to\infty}\dfrac{n(E)}{n}$$
Simulation of a fair die
# seed the random number generator
np.random.seed(1)

# Example: sampling
#
# do not forget that Python arrays are zero-indexed,
# and the 2nd argument to NumPy arange must be incremented by 1
# if you want to include that value
n = 6
k = 200000

T = np.random.choice(np.arange(1, n + 1), k, replace=True)

unique, counts = np.unique(T, return_counts=True)
dic = dict(zip(unique, counts))
df = pd.DataFrame(list(dic.items()), columns=['i', 'Occurence'])
df.set_index(['i'], inplace=True)
df['Freq'] = df['Occurence'] / k
df['P({i})'] = '{}'.format(1 / 6)
df
_____no_output_____
Unlicense
Tp_xy/TP1_mde amine_.ipynb
nevermind78/Proba_stat_4_LM
Adding interaction
def dice_sim(k=100):
    n = 6
    T = np.random.choice(np.arange(1, n + 1), k, replace=True)
    unique, counts = np.unique(T, return_counts=True)
    dic = dict(zip(unique, counts))
    df = pd.DataFrame(list(dic.items()), columns=['i', 'Occurence'])
    df.set_index(['i'], inplace=True)
    df['Freq'] = df['Occurence'] / k
    df['P({i})'] = '{0:.3f}'.format(1 / 6)
    return df

dice_sim(100)

interact(dice_sim, k=widgets.IntSlider(min=1000, max=50000, step=500, value=10));
_____no_output_____
Unlicense
Tp_xy/TP1_mde amine_.ipynb
nevermind78/Proba_stat_4_LM
The case of a loaded die
p = [0.1, 0.1, 0.1, 0.1, 0.1, 0.5]
sum(p)

# With `interact`, the list-of-lists default turns `q` into a dropdown of the two
# probability vectors, so inside the function `q` is a single vector of six probabilities.
def dice_sim(k=100, q=[[0.1, 0.1, 0.1, 0.1, 0.1, 0.5], [0.2, 0.1, 0.2, 0.1, 0.1, 0.3]]):
    n = 6
    qq = q
    T = np.random.choice(np.arange(1, n + 1), k, replace=True, p=qq)
    unique, counts = np.unique(T, return_counts=True)
    dic = dict(zip(unique, counts))
    df = pd.DataFrame(list(dic.items()), columns=['i', 'Occurence'])
    df.set_index(['i'], inplace=True)
    df['Freq'] = df['Occurence'] / k
    df['P({i})'] = ['{0:.3f}'.format(j) for j in q]
    return df

interact(dice_sim, k=widgets.IntSlider(min=1000, max=50000, step=500, value=10));
_____no_output_____
Unlicense
Tp_xy/TP1_mde amine_.ipynb
nevermind78/Proba_stat_4_LM
Exercise 1: Test the previous interaction for several values of `p`. Give your conclusion:
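A minimal sketch of how one might try a few probability vectors by hand (the vectors below are illustrative assumptions, not values prescribed by the exercise):

```python
# Hypothetical probability vectors to compare; each must sum to 1.
candidate_ps = [
    [1/6] * 6,                           # fair die
    [0.1, 0.1, 0.1, 0.1, 0.1, 0.5],      # heavily loaded towards 6
    [0.05, 0.05, 0.1, 0.1, 0.2, 0.5],    # another loaded die
]

for p_vec in candidate_ps:
    print(dice_sim(k=20000, q=p_vec))
```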
# Conclusion
_____no_output_____
Unlicense
Tp_xy/TP1_mde amine_.ipynb
nevermind78/Proba_stat_4_LM
Random Permutation
np.random.seed(2)

m = 1
n = 10

v = np.arange(m, n + 1)
print('v =', v)

np.random.shuffle(v)
print('v, shuffled =', v)
v = [ 1 2 3 4 5 6 7 8 9 10] v, shuffled = [ 5 2 6 1 8 3 4 7 10 9]
Unlicense
Tp_xy/TP1_mde amine_.ipynb
nevermind78/Proba_stat_4_LM
Exercise 2: Check that the random permutations are uniform, i.e. that the probability of generating any given permutation of the elements of {1,2,3} is 1/6. Indeed, the permutations of {1,2,3} are:
* 1 2 3
* 1 3 2
* 2 1 3
* 2 3 1
* 3 1 2
* 3 2 1
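As a quick cross-check of the count (an illustrative sketch, not part of the original exercise), the 3! = 6 permutations can be enumerated directly, so the uniform probability of each is 1/6:

```python
from itertools import permutations
from math import factorial

perms = list(permutations([1, 2, 3]))
print(perms)                      # the 6 orderings listed above
print(len(perms), factorial(3))   # both are 6
print('uniform probability per permutation:', 1 / factorial(3))
```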
k = 10
m = 1
n = 3

v = np.arange(m, n + 1)
T = []
for i in range(k):
    np.random.shuffle(v)
    w = np.copy(v)
    T.append(w)
TT = [str(i) for i in T]
TT

k = 1000
m = 1
n = 3

v = np.arange(m, n + 1)
T = []
for i in range(k):
    np.random.shuffle(v)
    w = np.copy(v)
    T.append(w)
TT = [str(i) for i in T]

unique, counts = np.unique(TT, return_counts=True)
dic = dict(zip(unique, counts))
df = pd.DataFrame(list(dic.items()), columns=['i', 'Occurence'])
df.set_index(['i'], inplace=True)
df['Freq'] = df['Occurence'] / k
df['P({i,j,k})'] = '{0:.3f}'.format(1 / 6)
df
_____no_output_____
Unlicense
Tp_xy/TP1_mde amine_.ipynb
nevermind78/Proba_stat_4_LM
Give your conclusion, explaining the script.
## Explication
_____no_output_____
Unlicense
Tp_xy/TP1_mde amine_.ipynb
nevermind78/Proba_stat_4_LM
Conditional probability. Recall that the frequentist interpretation of conditional probability, based on a large number `n` of repetitions of an experiment, is $P(A|B) \approx n_{AB}/n_{B}$, where $n_{AB}$ is the number of times $A \cap B$ occurs and $n_{B}$ is the number of times $B$ occurs. Let us try this by simulation and check the results of Example 2.2.5. We therefore use [`numpy.random.choice`](https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.random.choice.html) to simulate `n` families, each with two children.
np.random.seed(34)

n = 10**5
child1 = np.random.choice([1, 2], n, replace=True)
child2 = np.random.choice([1, 2], n, replace=True)

print('child1:\n{}\n'.format(child1))
print('child2:\n{}\n'.format(child2))
child1: [2 1 1 ... 1 2 1] child2: [2 2 2 ... 2 2 1]
Unlicense
Tp_xy/TP1_mde amine_.ipynb
nevermind78/Proba_stat_4_LM
Here, `child1` is a NumPy array of length `n`, where each element is a 1 or a 2. Letting 1 stand for "girl" and 2 for "boy", this array represents the sex of the elder child in each of the `n` families. Similarly, `child2` represents the sex of the younger child in each family.
np.random.choice(["girl", "boy"], n, replace=True)
_____no_output_____
Unlicense
Tp_xy/TP1_mde amine_.ipynb
nevermind78/Proba_stat_4_LM
We could have sampled the labels "girl" and "boy" directly, as in the cell above, but it is more convenient to work with numeric values. Let $A$ be the event that both children are girls and $B$ the event that the elder child is a girl. Following the frequentist interpretation, we count the number of repetitions in which $B$ occurred and call it `n_b`, and we also count the number of repetitions in which $A \cap B$ occurred and call it `n_ab`. Finally, we divide `n_ab` by `n_b` to approximate $P(A|B)$.
n_b = np.sum(child1 == 1)
n_ab = np.sum((child1 == 1) & (child2 == 1))

print('P(both girls | elder is girl) = {:0.2F}'.format(n_ab / n_b))
P(both girls | elder is girl) = 0.50
Unlicense
Tp_xy/TP1_mde amine_.ipynb
nevermind78/Proba_stat_4_LM
The ampersand `&` is an element-wise $AND$, so `n_ab` is the number of families in which both the first and the second child are girls. When we ran this code, we obtained 0.50, confirming our answer $P(\text{both girls} \mid \text{elder is a girl}) = 1/2$. Now let $A$ be the event that both children are girls and $B$ the event that at least one of the children is a girl. Then $A \cap B$ is the same, but `n_b` must count the number of families in which at least one child is a girl. This is accomplished with the element-wise $OR$ operator `|` (this is not a conditioning bar; it is an inclusive $OR$, returning `True` if at least one element is `True`).
# "at least one girl": 1 codes for girl, so the condition on child2 must also be == 1
n_b = np.sum((child1 == 1) | (child2 == 1))
n_ab = np.sum((child1 == 1) & (child2 == 1))

print('P(both girls | at least one girl) = {:0.2F}'.format(n_ab / n_b))
P(both girls | at least one girl) = 0.33
Unlicense
Tp_xy/TP1_mde amine_.ipynb
nevermind78/Proba_stat_4_LM
[Math-Bot] Siamese LSTM: Detecting duplicates

Author: Alin-Andrei Georgescu 2021

Welcome to my notebook! It explores Siamese networks applied to natural language processing. The model is intended to detect duplicates, in other words to check whether two sentences are similar.

The model uses "Long short-term memory" (LSTM) neural networks, which are a type of artificial recurrent neural network (RNN). This version uses GloVe pretrained vectors.

Outline
- [Overview](0)
- [Part 1: Importing the Data](1)
  - [1.1 Loading in the data](1.1)
  - [1.2 Converting a sentence to a tensor](1.2)
  - [1.3 Understanding and building the iterator](1.3)
- [Part 2: Defining the Siamese model](2)
  - [2.1 Understanding and building the Siamese Network](2.1)
  - [2.2 Implementing Hard Negative Mining](2.2)
- [Part 3: Training](3)
- [Part 4: Evaluation](4)
- [Part 5: Making predictions](5)

Overview

General ideas:
- Designing a Siamese network model
- Implementing the triplet loss
- Evaluating accuracy
- Using cosine similarity between the model's output vectors
- Working with the Trax and NumPy libraries in Python 3

The LSTM cell's architecture (source: https://www.researchgate.net/figure/The-structure-of-the-LSTM-unit_fig2_331421650):

I will start by preprocessing the data, then I will build a classifier that will identify whether two sentences are the same or not. I tokenized the data, then split the dataset into training and testing sets. I loaded pretrained GloVe word embeddings and built a sentence's vector by averaging the composing words' vectors. The model takes in the two sentence embeddings, runs them through an LSTM, and then compares the outputs of the two sub-networks using cosine similarity.

This notebook has been built based on Coursera's Natural Language Processing Specialization.

Part 1: Importing the Data

1.1 Loading in the data

The first step in building a model is building a dataset. I used three datasets in building my model:
- the Quora Question Pairs dataset
- an edited SICK dataset
- a custom Maths duplicates dataset

Run the cell below to import some of the needed packages.
import os
import re

import nltk
from nltk.corpus import stopwords
nltk.download('punkt')
nltk.download('stopwords')
nltk.download('wordnet')

import numpy as np
import pandas as pd
import random as rnd

!pip install textcleaner
import textcleaner as tc

!pip install trax
import trax
from trax import layers as tl
from trax.supervised import training
from trax.fastmath import numpy as fastnp

!pip install gensim
from gensim.models import KeyedVectors
from gensim.scripts.glove2word2vec import glove2word2vec

from collections import defaultdict

# set random seeds
rnd.seed(34)
_____no_output_____
MIT
src/math_bot/model/GloVeSiameseLSTM.ipynb
AlinGeorgescu/Math-Bot
**Notice that in this notebook Trax's numpy is referred to as `fastnp`, while regular numpy is referred to as `np`.** Now the dataset and the word embeddings will be loaded and the data processed.
data = pd.read_csv("data/merged_dataset.csv", encoding="utf-8")
N = len(data)
print("Number of sentence pairs: ", N)
data.head()

!wget -O data/glove.840B.300d.zip nlp.stanford.edu/data/glove.840B.300d.zip
!unzip -d data data/glove.840B.300d.zip
!rm data/glove.840B.300d.zip

vec_model = KeyedVectors.load_word2vec_format("data/glove.840B.300d.txt", binary=False, no_header=True)
_____no_output_____
MIT
src/math_bot/model/GloVeSiameseLSTM.ipynb
AlinGeorgescu/Math-Bot
Then I split the data into a train and test set. The test set will be used later to evaluate the model.
N_dups = len(data[data.is_duplicate == 1])

# Take 90% of the duplicates for the train set
N_train = int(N_dups * 0.9)
print(N_train)

# Take the rest of the duplicates for the test set + an equal number of non-dups
N_test = (N_dups - N_train) * 2
print(N_test)

data_train = data[: N_train]
# Shuffle the train set
data_train = data_train.sample(frac=1)

data_test = data[N_train : N_train + N_test]
# Shuffle the test set
data_test = data_test.sample(frac=1)

print("Train set: ", len(data_train), "; Test set: ", len(data_test))

# Remove the unneeded data to free some memory
del(data)

S1_train_words = np.array(data_train["sentence1"])
S2_train_words = np.array(data_train["sentence2"])

S1_test_words = np.array(data_test["sentence1"])
S2_test_words = np.array(data_test["sentence2"])
y_test = np.array(data_test["is_duplicate"])

del(data_train)
del(data_test)
_____no_output_____
MIT
src/math_bot/model/GloVeSiameseLSTM.ipynb
AlinGeorgescu/Math-Bot
Above, you can see that the model only takes the duplicated sentences for training. All this has a purpose, as the data generator will produce batches $([s1_1, s1_2, s1_3, ...]$, $[s2_1, s2_2, s2_3, ...])$, where $s1_i$ and $s2_k$ are duplicates if and only if $i = k$. An example of how the data looks is shown below.
print("TRAINING SENTENCES:\n") print("Sentence 1: ", S1_train_words[0]) print("Sentence 2: ", S2_train_words[0], "\n") print("Sentence 1: ", S1_train_words[5]) print("Sentence 2: ", S2_train_words[5], "\n") print("TESTING SENTENCES:\n") print("Sentence 1: ", S1_test_words[0]) print("Sentence 2: ", S2_test_words[0], "\n") print("is_duplicate =", y_test[0], "\n")
_____no_output_____
MIT
src/math_bot/model/GloVeSiameseLSTM.ipynb
AlinGeorgescu/Math-Bot
The first step is to tokenize the sentences using a custom tokenizer defined below.
# Create arrays
S1_train = np.empty_like(S1_train_words)
S2_train = np.empty_like(S2_train_words)

S1_test = np.empty_like(S1_test_words)
S2_test = np.empty_like(S2_test_words)

def data_tokenizer(sentence):
    """Tokenizer function - cleans and tokenizes the data

    Args:
        sentence (str): The input sentence.
    Returns:
        list: The transformed input sentence.
    """
    if sentence == "":
        return ""

    sentence = tc.lower_all(sentence)[0]

    # Change tabs to spaces
    sentence = re.sub(r"\t+_+", " ", sentence)
    # Change short forms
    sentence = re.sub(r"\'ve", " have", sentence)
    sentence = re.sub(r"(can\'t|can not)", "cannot", sentence)
    sentence = re.sub(r"n\'t", " not", sentence)
    sentence = re.sub(r"I\'m", "I am", sentence)
    sentence = re.sub(r" m ", " am ", sentence)
    sentence = re.sub(r"(\'re| r )", " are ", sentence)
    sentence = re.sub(r"\'d", " would ", sentence)
    sentence = re.sub(r"\'ll", " will ", sentence)
    sentence = re.sub(r"(\d+)(k)", r"\g<1>000", sentence)
    # Make word separations (Python's re uses \1 backreferences in replacements)
    sentence = re.sub(r"(\+|-|\*|\/|\^|\.)", r" \1 ", sentence)
    # Remove irrelevant stuff, nonprintable characters and spaces
    sentence = re.sub(r"(\'s|\'S|\'|\"|,|[^ -~]+)", "", sentence)
    sentence = tc.strip_all(sentence)[0]

    if sentence == "":
        return ""

    return tc.token_it(tc.lemming(sentence))[0]

for idx in range(len(S1_train_words)):
    S1_train[idx] = data_tokenizer(S1_train_words[idx])

for idx in range(len(S2_train_words)):
    S2_train[idx] = data_tokenizer(S2_train_words[idx])

for idx in range(len(S1_test_words)):
    S1_test[idx] = data_tokenizer(S1_test_words[idx])

for idx in range(len(S2_test_words)):
    S2_test[idx] = data_tokenizer(S2_test_words[idx])
_____no_output_____
MIT
src/math_bot/model/GloVeSiameseLSTM.ipynb
AlinGeorgescu/Math-Bot
1.2 Converting a sentence to a tensor. The next step is to convert every sentence to a tensor, or an array of numbers, using the word embeddings loaded above.
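A minimal sketch of the idea for a single sentence (the helper name `sentence_to_vector` is illustrative and not part of the notebook), assuming `vec_model` holds 300-dimensional GloVe vectors and `stop_words` is a set of English stopwords, both set up in the next cell:

```python
import numpy as np

def sentence_to_vector(tokens, vec_model, stop_words, dim=300):
    """Average the GloVe vectors of the in-vocabulary, non-stopword tokens."""
    vec = np.zeros((dim,))
    for word in tokens:
        if word not in stop_words and word in vec_model.key_to_index:
            vec += vec_model[word]
    # As in the notebook, divide by the total number of tokens in the sentence
    return vec / max(len(tokens), 1)
```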
stop_words = set(stopwords.words('english')) # Converting sentences to vectors. OOV words or stopwords will be discarded. S1_train_vec = np.empty_like(S1_train) for i in range(len(S1_train)): S1_train_vec[i] = np.zeros((300,)) for word in S1_train[i]: if word not in stop_words and word in vec_model.key_to_index: S1_train_vec[i] += vec_model[word] S1_train[i] = S1_train_vec[i] / len(S1_train[i]) S2_train_vec = np.empty_like(S2_train) for i in range(len(S2_train)): S2_train_vec[i] = np.zeros((300,)) for word in S2_train[i]: if word not in stop_words and word in vec_model.key_to_index: S2_train_vec[i] += vec_model[word] S2_train[i] = S2_train_vec[i] / len(S2_train[i]) S1_test_vec = np.empty_like(S1_test) for i in range(len(S1_test)): S1_test_vec[i] = np.zeros((300,)) for word in S1_test[i]: if word not in stop_words and word in vec_model.key_to_index: S1_test_vec[i] += vec_model[word] S1_test[i] = S1_test_vec[i] / len(S1_test[i]) S2_test_vec = np.empty_like(S2_test) for i in range(len(S2_test)): S2_test_vec[i] = np.zeros((300,)) for word in S2_test[i]: if word not in stop_words and word in vec_model.key_to_index: S2_test_vec[i] += vec_model[word] S2_test[i] = S2_test_vec[i] / len(S2_test[i]) print("FIRST SENTENCE IN TRAIN SET:\n") print(S1_train_words[0], "\n") print("ENCODED VERSION:") print(S1_train[0],"\n") del(S1_train_words) del(S2_train_words) print("FIRST SENTENCE IN TEST SET:\n") print(S1_test_words[0], "\n") print("ENCODED VERSION:") print(S1_test[0]) del(S1_test_words) del(S2_test_words)
_____no_output_____
MIT
src/math_bot/model/GloVeSiameseLSTM.ipynb
AlinGeorgescu/Math-Bot
Now, the train set must be split into a training/validation set so that it can be used to train and evaluate the Siamese model.
# Splitting the data
cut_off = int(len(S1_train) * .8)

train_S1, train_S2 = S1_train[: cut_off], S2_train[: cut_off]
val_S1, val_S2 = S1_train[cut_off :], S2_train[cut_off :]

print("Number of duplicate sentences: ", len(S1_train))
print("The length of the training set is: ", len(train_S1))
print("The length of the validation set is: ", len(val_S1))
_____no_output_____
MIT
src/math_bot/model/GloVeSiameseLSTM.ipynb
AlinGeorgescu/Math-Bot
1.3 Understanding and building the iterator

Given the computational limits, we need to split our data into batches. In this notebook, I built a data generator that takes in $S1$ and $S2$ and returns a batch of size `batch_size` in the following format: $([s1_1, s1_2, s1_3, ...]$, $[s2_1, s2_2, s2_3, ...])$. The tuple consists of two arrays and each array has `batch_size` sentences. Again, $s1_i$ and $s2_i$ are duplicates, but they are not duplicates of any other element in the batch. The command `next(data_generator)` returns the next batch. This iterator returns a pair of arrays of sentences, which will later be used in the model.

**The ideas behind it:**
- The generator returns shuffled batches of data. To achieve this without modifying the actual sentence lists, a list containing the indexes of the sentences is created. This list can be shuffled and used to get random batches every time the index is reset.
- Elements of $S1$ and $S2$ are appended to `input1` and `input2`, respectively.
def data_generator(S1, S2, batch_size, shuffle=False): """Generator function that yields batches of data Args: S1 (list): List of transformed (to tensor) sentences. S2 (list): List of transformed (to tensor) sentences. batch_size (int): Number of elements per batch. shuffle (bool, optional): If the batches should be randomnized or not. Defaults to False. Yields: tuple: Of the form (input1, input2) with types (numpy.ndarray, numpy.ndarray) NOTE: input1: inputs to your model [s1a, s2a, s3a, ...] i.e. (s1a,s1b) are duplicates input2: targets to your model [s1b, s2b,s3b, ...] i.e. (s1a,s2i) i!=a are not duplicates """ input1 = [] input2 = [] idx = 0 len_s = len(S1) sentence_indexes = [*range(len_s)] if shuffle: rnd.shuffle(sentence_indexes) while True: if idx >= len_s: # If idx is greater than or equal to len_q, reset it idx = 0 # Shuffle to get random batches if shuffle is set to True if shuffle: rnd.shuffle(sentence_indexes) s1 = S1[sentence_indexes[idx]] s2 = S2[sentence_indexes[idx]] idx += 1 input1.append(s1) input2.append(s2) if len(input1) == batch_size: b1 = [] b2 = [] for s1, s2 in zip(input1, input2): # Append s1 b1.append(s1) # Append s2 b2.append(s2) # Use b1 and b2 yield np.array(b1).reshape((batch_size, 1, -1)), np.array(b2).reshape((batch_size, 1, -1)) # reset the batches input1, input2 = [], [] batch_size = 2 res1, res2 = next(data_generator(train_S1, train_S2, batch_size)) print("First sentences :\n", res1, "\n Shape: ", res1.shape) print("Second sentences :\n", res2, "\n Shape: ", res2.shape)
_____no_output_____
MIT
src/math_bot/model/GloVeSiameseLSTM.ipynb
AlinGeorgescu/Math-Bot
Now that we have a data generator, we can go ahead and start building the neural network.

Part 2: Defining the Siamese model

2.1 Understanding and building the Siamese Network

A Siamese network is a neural network which uses the same weights while working in tandem on two different input vectors to compute comparable output vectors. The Siamese network model proposed in this notebook looks like this:

The sentences' embeddings are passed to an LSTM layer, the output vectors, $v_1$ and $v_2$, are normalized, and finally a triplet loss is used to get the corresponding cosine similarity for each pair of sentences. The triplet loss makes use of a baseline (anchor) input that is compared to a positive (truthy) input and a negative (falsy) input. The distance from the baseline (anchor) input to the positive (truthy) input is minimized, and the distance from the baseline (anchor) input to the negative (falsy) input is maximized. In math equations, the following is minimized:

$$\mathcal{L}(A, P, N)=\max \left(\|\mathrm{f}(A)-\mathrm{f}(P)\|^{2}-\|\mathrm{f}(A)-\mathrm{f}(N)\|^{2}+\alpha, 0\right)$$

$A$ is the anchor input, for example $s1_1$, $P$ is the duplicate input, for example $s2_1$, and $N$ is the negative input (the non-duplicate sentence), for example $s2_2$. $\alpha$ is a margin; it controls how far the duplicates are pushed from the non-duplicates.

**The ideas behind it:**
- The Trax library is used to implement the model.
- `tl.Serial`: combinator that applies layers serially (by function composition), allowing us to set up the overall structure of the feedforward pass.
- `tl.LSTM`: the LSTM layer.
- `tl.Mean`: computes the mean across a desired axis. Mean uses one tensor axis to form groups of values and replaces each group with the mean value of that group.
- `tl.Fn`: a layer with no weights that applies the function `f`, vector normalization in this case.
- `tl.Parallel`: a combinator layer (like `Serial`) that applies a list of layers in parallel to its inputs.
def Siamese(d_model=300):
    """Returns a Siamese model.

    Args:
        d_model (int, optional): Depth of the model. Defaults to 300.

    Returns:
        trax.layers.combinators.Parallel: A Siamese model.
    """

    def normalize(x):  # normalizes the vectors to have L2 norm 1
        return x / fastnp.sqrt(fastnp.sum(x * x, axis=-1, keepdims=True))

    s_processor = tl.Serial(  # Processor will run on S1 and S2.
        tl.LSTM(d_model),                           # LSTM layer
        tl.Mean(axis=1),                            # Mean over columns
        tl.Fn('Normalize', lambda x: normalize(x))  # Apply normalize function
    )  # Returns one vector of shape [batch_size, d_model].

    # Run on S1 and S2 in parallel.
    model = tl.Parallel(s_processor, s_processor)
    return model
_____no_output_____
MIT
src/math_bot/model/GloVeSiameseLSTM.ipynb
AlinGeorgescu/Math-Bot
Set up the Siamese network model.
# Check the model
model = Siamese(d_model=300)
print(model)
_____no_output_____
MIT
src/math_bot/model/GloVeSiameseLSTM.ipynb
AlinGeorgescu/Math-Bot
2.2 Implementing Hard Negative Mining

Now it's time to implement the `TripletLoss`. As explained earlier, the loss is composed of two terms. One term utilizes the mean of all the non-duplicates, the second utilizes the *closest negative*. The loss expression is then:

\begin{align}
 \mathcal{Loss_1(A,P,N)} &=\max \left( -cos(A,P) + mean_{neg} +\alpha, 0\right) \\
 \mathcal{Loss_2(A,P,N)} &=\max \left( -cos(A,P) + closest_{neg} +\alpha, 0\right) \\
\mathcal{Loss(A,P,N)} &= mean(Loss_1 + Loss_2) \\
\end{align}
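As an illustrative worked example (the numbers are chosen here for illustration, not taken from the notebook), take a batch of two pairs with cosine-similarity matrix rows $[0.50, 0.40]$ and $[0.45, 0.30]$ and margin $\alpha = 0.25$:

$$
\begin{aligned}
\text{scores} &= \begin{pmatrix} 0.50 & 0.40 \\ 0.45 & 0.30 \end{pmatrix},\qquad \text{positives} = (0.50,\ 0.30) \ \text{(diagonal)}\\
mean_{neg} = closest_{neg} &= (0.40,\ 0.45) \ \text{(only one off-diagonal entry per row)}\\
Loss_1 = Loss_2 &= \big(\max(-0.50 + 0.40 + 0.25,\ 0),\ \max(-0.30 + 0.45 + 0.25,\ 0)\big) = (0.15,\ 0.40)\\
Loss &= mean\big((0.15, 0.40) + (0.15, 0.40)\big) = mean(0.30,\ 0.80) = 0.55
\end{aligned}
$$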
def TripletLossFn(v1, v2, margin=0.25): """Custom Loss function. Args: v1 (numpy.ndarray): Array with dimension (batch_size, model_dimension) associated to S1. v2 (numpy.ndarray): Array with dimension (batch_size, model_dimension) associated to S2. margin (float, optional): Desired margin. Defaults to 0.25. Returns: jax.interpreters.xla.DeviceArray: Triplet Loss. """ scores = fastnp.dot(v1, v2.T) # pairwise cosine sim batch_size = len(scores) positive = fastnp.diagonal(scores) # the positive ones (duplicates) negative_without_positive = scores - 2.0 * fastnp.eye(batch_size) closest_negative = fastnp.max(negative_without_positive, axis=1) negative_zero_on_duplicate = (1.0 - fastnp.eye(batch_size)) * scores mean_negative = fastnp.sum(negative_zero_on_duplicate, axis=1) / (batch_size - 1) triplet_loss1 = fastnp.maximum(0.0, margin - positive + closest_negative) triplet_loss2 = fastnp.maximum(0.0, margin - positive + mean_negative) triplet_loss = fastnp.mean(triplet_loss1 + triplet_loss2) return triplet_loss v1 = np.array([[0.26726124, 0.53452248, 0.80178373],[0.5178918 , 0.57543534, 0.63297887]]) v2 = np.array([[0.26726124, 0.53452248, 0.80178373],[-0.5178918 , -0.57543534, -0.63297887]]) TripletLossFn(v2,v1) print("Triplet Loss:", TripletLossFn(v2,v1))
_____no_output_____
MIT
src/math_bot/model/GloVeSiameseLSTM.ipynb
AlinGeorgescu/Math-Bot
**Expected Output:**

```
Triplet Loss: 1.0
```
from functools import partial

def TripletLoss(margin=1):
    # Trax layer creation
    triplet_loss_fn = partial(TripletLossFn, margin=margin)
    return tl.Fn("TripletLoss", triplet_loss_fn)
_____no_output_____
MIT
src/math_bot/model/GloVeSiameseLSTM.ipynb
AlinGeorgescu/Math-Bot
Part 3: Training

The next step is model training: defining the cost function and the optimizer, and feeding in the built model. But first I will define the data generators used by the model.
batch_size = 512

train_generator = data_generator(train_S1, train_S2, batch_size)
val_generator = data_generator(val_S1, val_S2, batch_size)

print("train_S1.shape ", train_S1.shape)
print("val_S1.shape ", val_S1.shape)
_____no_output_____
MIT
src/math_bot/model/GloVeSiameseLSTM.ipynb
AlinGeorgescu/Math-Bot
Now, I will define the training step. Each training iteration is defined as an `epoch`, each epoch being an iteration over all the data, using the training iterator.

**The ideas behind it:**
- Two tasks are needed: `TrainTask` and `EvalTask`.
- The training runs in a Trax loop, `trax.supervised.training.Loop`.
- The other parameters are passed to the loop.
def train_model(Siamese, TripletLoss, lr_schedule, train_generator=train_generator, val_generator=val_generator, output_dir="trax_model/"): """Training the Siamese Model Args: Siamese (function): Function that returns the Siamese model. TripletLoss (function): Function that defines the TripletLoss loss function. lr_schedule (function): Trax multifactor schedule function. train_generator (generator, optional): Training generator. Defaults to train_generator. val_generator (generator, optional): Validation generator. Defaults to val_generator. output_dir (str, optional): Path to save model to. Defaults to "trax_model/". Returns: trax.supervised.training.Loop: Training loop for the model. """ output_dir = os.path.expanduser(output_dir) train_task = training.TrainTask( labeled_data=train_generator, loss_layer=TripletLoss(), optimizer=trax.optimizers.Adam(0.01), lr_schedule=lr_schedule ) eval_task = training.EvalTask( labeled_data=val_generator, metrics=[TripletLoss()] ) training_loop = training.Loop(Siamese(), train_task, eval_tasks=[eval_task], output_dir=output_dir, random_seed=34) return training_loop train_steps = 1500 lr_schedule = trax.lr.warmup_and_rsqrt_decay(400, 0.01) training_loop = train_model(Siamese, TripletLoss, lr_schedule) training_loop.run(train_steps)
_____no_output_____
MIT
src/math_bot/model/GloVeSiameseLSTM.ipynb
AlinGeorgescu/Math-Bot
Part 4: Evaluation

To determine the accuracy of the model, the test set that was configured earlier is used. While the training used only positive examples, the test data (S1_test, S2_test and y_test) is set up as pairs of sentences, some of which are duplicates and some of which are not. This routine runs all the test sentence pairs through the model, computes the cosine similarity of each pair, thresholds it and compares the result to y_test, the correct response from the dataset. The results are accumulated to produce the metrics.

**The ideas behind it:**
- The model loops through the incoming data in `batch_size` chunks.
- The output vectors are computed and their cosine similarity is thresholded.
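For reference, the metrics computed in the cell below follow the usual definitions in terms of true/false positives and negatives:

$$
\text{accuracy} = \frac{TP + TN}{TP + TN + FP + FN},\quad
\text{precision} = \frac{TP}{TP + FP},\quad
\text{recall} = \frac{TP}{TP + FN},\quad
F_1 = \frac{2 \cdot \text{precision} \cdot \text{recall}}{\text{precision} + \text{recall}}
$$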
def classify(test_S1, test_S2, y, threshold, model, data_generator=data_generator, batch_size=64): """Function to test the model. Calculates some metrics, such as precision, accuracy, recall and F1 score. Args: test_S1 (numpy.ndarray): Array of S1 sentences. test_S2 (numpy.ndarray): Array of S2 sentences. y (numpy.ndarray): Array of actual target. threshold (float): Desired threshold. model (trax.layers.combinators.Parallel): The Siamese model. data_generator (function): Data generator function. Defaults to data_generator. batch_size (int, optional): Size of the batches. Defaults to 64. Returns: (float, float, float, float): Accuracy, precision, recall and F1 score of the model. """ true_pos = 0 true_neg = 0 false_pos = 0 false_neg = 0 for i in range(0, len(test_S1), batch_size): to_process = len(test_S1) - i if to_process < batch_size: batch_size = to_process s1, s2 = next(data_generator(test_S1[i : i + batch_size], test_S2[i : i + batch_size], batch_size, shuffle=False)) y_test = y[i : i + batch_size] v1, v2 = model((s1, s2)) for j in range(batch_size): d = np.dot(v1[j], v2[j].T) res = d > threshold if res == 1: if y_test[j] == res: true_pos += 1 else: false_pos += 1 else: if y_test[j] == res: true_neg += 1 else: false_neg += 1 accuracy = (true_pos + true_neg) / (true_pos + true_neg + false_pos + false_neg) precision = true_pos / (true_pos + false_pos) recall = true_pos / (true_pos + false_neg) f1_score = 2 * precision * recall / (precision + recall) return (accuracy, precision, recall, f1_score) print(len(S1_test)) # Loading in the saved model model = Siamese() model.init_from_file("trax_model/model.pkl.gz") # Evaluating it accuracy, precision, recall, f1_score = classify(S1_test, S2_test, y_test, 0.7, model, batch_size=512) print("Accuracy", accuracy) print("Precision", precision) print("Recall", recall) print("F1 score", f1_score)
_____no_output_____
MIT
src/math_bot/model/GloVeSiameseLSTM.ipynb
AlinGeorgescu/Math-Bot
Part 5: Making predictions

In this section the model will be put to work. It will be wrapped in a function called `predict`, which takes two sentences as input and returns `True` or `False`, depending on whether the pair is a duplicate or not. But first, we need to embed the sentences.
def predict(sentence1, sentence2, threshold, model, data_generator=data_generator, verbose=False):
    """Function for predicting if two sentences are duplicates.

    Args:
        sentence1 (str): First sentence.
        sentence2 (str): Second sentence.
        threshold (float): Desired threshold.
        model (trax.layers.combinators.Parallel): The Siamese model.
        data_generator (function): Data generator function. Defaults to data_generator.
        verbose (bool, optional): If the results should be printed out. Defaults to False.

    Returns:
        bool: True if the sentences are duplicates, False otherwise.
    """
    s1 = data_tokenizer(sentence1)  # tokenize
    S1 = np.zeros((300,))
    for word in s1:
        if word not in stop_words and word in vec_model.key_to_index:
            S1 += vec_model[word]
    S1 = S1 / len(s1)

    s2 = data_tokenizer(sentence2)  # tokenize
    S2 = np.zeros((300,))
    for word in s2:
        if word not in stop_words and word in vec_model.key_to_index:
            S2 += vec_model[word]
    S2 = S2 / len(s2)

    S1, S2 = next(data_generator([S1], [S2], 1))

    v1, v2 = model((S1, S2))
    d = np.dot(v1[0], v2[0].T)
    res = d > threshold

    if verbose == True:
        print("S1 = ", S1, "\nS2 = ", S2)
        print("d = ", d)
        print("res = ", res)

    return res
_____no_output_____
MIT
src/math_bot/model/GloVeSiameseLSTM.ipynb
AlinGeorgescu/Math-Bot
Now we can test the model's ability to make predictions.
sentence1 = "I love running in the park." sentence2 = "I like running in park?" # 1 means it is duplicated, 0 otherwise predict(sentence1 , sentence2, 0.7, model, verbose=True)
_____no_output_____
MIT
src/math_bot/model/GloVeSiameseLSTM.ipynb
AlinGeorgescu/Math-Bot
Regression
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np

# Set the TensorFlow random seed
tf.set_random_seed(1)
# Set the NumPy random seed
np.random.seed(1)

# Split the range from -1 to 1 into 100 points
x = np.linspace(-1, 1, 100)[:, np.newaxis]        # shape (100, 1)
# Draw noise with mean 0 and standard deviation 0.1, with the same shape as x
noise = np.random.normal(0, 0.1, size=x.shape)
# Compute x^2 + noise
y = np.power(x, 2) + noise                         # shape (100, 1) + some noise

# Draw the data on a chart
plt.scatter(x, y)
# Show the chart
plt.show()

tf_x = tf.placeholder(tf.float32, x.shape)     # input x
tf_y = tf.placeholder(tf.float32, y.shape)     # input y

# Create hidden layers using the relu activation function.
# tf.layers.dense(inputs, number of units, activation function)
# The more hidden units, the more flexible the fitted curve becomes.
l1 = tf.layers.dense(tf_x, 10, tf.nn.relu)     # hidden layer 1
l2 = tf.layers.dense(l1, 5, tf.nn.relu)        # hidden layer 2
# Output node
output = tf.layers.dense(l2, 1)                # output layer

# Compute the mean squared error between the network output and the true values
loss = tf.losses.mean_squared_error(tf_y, output)  # compute cost
# Create the optimizer
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.5)
# Perform optimization in the direction that minimizes the loss
train_op = optimizer.minimize(loss)

sess = tf.Session()                            # create a session
sess.run(tf.global_variables_initializer())    # initialize the graph's variables

plt.ion()    # create a new chart (interactive mode)
plt.show()

# Run training 100 times
for step in range(100):
    # Train in the direction that minimizes the loss, comparing outputs with the true values
    _, l, pred = sess.run([train_op, loss, output], {tf_x: x, tf_y: y})
    if step % 5 == 0:
        # plot and show learning process
        plt.cla()
        plt.scatter(x, y)
        plt.plot(x, pred, 'r-', lw=5)
        plt.text(0.5, 0, 'Loss=%.4f' % l, fontdict={'size': 20, 'color': 'red'})
        # Pause 0.1 s between frames of the animation
        plt.pause(0.1)

plt.ioff()
plt.show()
_____no_output_____
MIT
notebook/self-study/2.regression.ipynb
KangByungWook/tensorflow
Interpreting BERT Models (Part 1)

In this notebook we demonstrate how to interpret BERT models using the `Captum` library. In this particular case study we focus on a Question Answering model fine-tuned on the SQuAD dataset, using the transformers library from Hugging Face: https://huggingface.co/transformers/

We show how to use interpretation hooks to examine and better understand embeddings, sub-embeddings, BERT, and attention layers.

Note: Before running this tutorial, please install the `seaborn`, `pandas`, `matplotlib` and `transformers` (from Hugging Face) Python packages.
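For instance, the dependencies can be installed from a notebook cell roughly as follows (a sketch; `torch` and `captum` are assumed to be required as well):

```python
!pip install seaborn pandas matplotlib transformers captum torch
```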
import os
import sys

import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

import torch
import torch.nn as nn

from transformers import BertTokenizer, BertForQuestionAnswering, BertConfig

from captum.attr import visualization as viz
from captum.attr import IntegratedGradients, LayerConductance, LayerIntegratedGradients
from captum.attr import configure_interpretable_embedding_layer, remove_interpretable_embedding_layer

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
_____no_output_____
BSD-3-Clause
tutorials/Bert_SQUAD_Interpret.ipynb
cspanda/captum
The first step is to fine-tune the BERT model on the SQuAD dataset. This can be easily accomplished by following the steps described in Hugging Face's official web site: https://github.com/huggingface/transformers#run_squadpy-fine-tuning-on-squad-for-question-answering

Note that the fine-tuning is done on a `bert-base-uncased` pre-trained model. After we fine-tune the model, we can load the tokenizer and pre-trained BERT model using the commands described below.
# replace <PATH-TO-SAVED-MODEL> with the real path of the saved model
model_path = '<PATH-TO-SAVED-MODEL>'

# load model
model = BertForQuestionAnswering.from_pretrained(model_path)
model.to(device)
model.eval()
model.zero_grad()

# load tokenizer
tokenizer = BertTokenizer.from_pretrained(model_path)
_____no_output_____
BSD-3-Clause
tutorials/Bert_SQUAD_Interpret.ipynb
cspanda/captum
A helper function to perform forward pass of the model and make predictions.
def predict(inputs, token_type_ids=None, position_ids=None, attention_mask=None): return model(inputs, token_type_ids=token_type_ids, position_ids=position_ids, attention_mask=attention_mask, )
_____no_output_____
BSD-3-Clause
tutorials/Bert_SQUAD_Interpret.ipynb
cspanda/captum
Defining a custom forward function that will allow us to access the start and end positions of our prediction using the `position` input argument.
def squad_pos_forward_func(inputs, token_type_ids=None, position_ids=None, attention_mask=None, position=0): pred = predict(inputs, token_type_ids=token_type_ids, position_ids=position_ids, attention_mask=attention_mask) pred = pred[position] return pred.max(1).values
_____no_output_____
BSD-3-Clause
tutorials/Bert_SQUAD_Interpret.ipynb
cspanda/captum
Let's compute attributions with respect to the `BertEmbeddings` layer. To do so, we need to define baselines / references and numericalize both the baselines and the inputs. We will define helper functions to achieve that. The cell below defines numericalized special tokens that will later be used for constructing the inputs and corresponding baselines / references.
ref_token_id = tokenizer.pad_token_id  # A token used for generating token reference
sep_token_id = tokenizer.sep_token_id  # A token used as a separator between question and text; it is also added to the end of the text.
cls_token_id = tokenizer.cls_token_id  # A token used for prepending to the concatenated question-text word sequence
_____no_output_____
BSD-3-Clause
tutorials/Bert_SQUAD_Interpret.ipynb
cspanda/captum
Below we define a set of helper functions for constructing references / baselines for word tokens, token types and position ids. We also provide separate helper functions that allow us to construct the sub-embeddings and corresponding baselines / references for all sub-embeddings of the `BertEmbeddings` layer.
def construct_input_ref_pair(question, text, ref_token_id, sep_token_id, cls_token_id): question_ids = tokenizer.encode(question, add_special_tokens=False) text_ids = tokenizer.encode(text, add_special_tokens=False) # construct input token ids input_ids = [cls_token_id] + question_ids + [sep_token_id] + text_ids + [sep_token_id] # construct reference token ids ref_input_ids = [cls_token_id] + [ref_token_id] * len(question_ids) + [sep_token_id] + \ [ref_token_id] * len(text_ids) + [sep_token_id] return torch.tensor([input_ids], device=device), torch.tensor([ref_input_ids], device=device), len(question_ids) def construct_input_ref_token_type_pair(input_ids, sep_ind=0): seq_len = input_ids.size(1) token_type_ids = torch.tensor([[0 if i <= sep_ind else 1 for i in range(seq_len)]], device=device) ref_token_type_ids = torch.zeros_like(token_type_ids, device=device)# * -1 return token_type_ids, ref_token_type_ids def construct_input_ref_pos_id_pair(input_ids): seq_length = input_ids.size(1) position_ids = torch.arange(seq_length, dtype=torch.long, device=device) # we could potentially also use random permutation with `torch.randperm(seq_length, device=device)` ref_position_ids = torch.zeros(seq_length, dtype=torch.long, device=device) position_ids = position_ids.unsqueeze(0).expand_as(input_ids) ref_position_ids = ref_position_ids.unsqueeze(0).expand_as(input_ids) return position_ids, ref_position_ids def construct_attention_mask(input_ids): return torch.ones_like(input_ids) def construct_bert_sub_embedding(input_ids, ref_input_ids, token_type_ids, ref_token_type_ids, position_ids, ref_position_ids): input_embeddings = interpretable_embedding1.indices_to_embeddings(input_ids) ref_input_embeddings = interpretable_embedding1.indices_to_embeddings(ref_input_ids) input_embeddings_token_type = interpretable_embedding2.indices_to_embeddings(token_type_ids) ref_input_embeddings_token_type = interpretable_embedding2.indices_to_embeddings(ref_token_type_ids) input_embeddings_position_ids = interpretable_embedding3.indices_to_embeddings(position_ids) ref_input_embeddings_position_ids = interpretable_embedding3.indices_to_embeddings(ref_position_ids) return (input_embeddings, ref_input_embeddings), \ (input_embeddings_token_type, ref_input_embeddings_token_type), \ (input_embeddings_position_ids, ref_input_embeddings_position_ids) def construct_whole_bert_embeddings(input_ids, ref_input_ids, \ token_type_ids=None, ref_token_type_ids=None, \ position_ids=None, ref_position_ids=None): input_embeddings = interpretable_embedding.indices_to_embeddings(input_ids, token_type_ids=token_type_ids, position_ids=position_ids) ref_input_embeddings = interpretable_embedding.indices_to_embeddings(ref_input_ids, token_type_ids=token_type_ids, position_ids=position_ids) return input_embeddings, ref_input_embeddings
_____no_output_____
BSD-3-Clause
tutorials/Bert_SQUAD_Interpret.ipynb
cspanda/captum
Let's define the `question - text` pair that we'd like to use as an input for our BERT model and interpret what the model was focusing on when predicting an answer to the question from the given input text.
question, text = "What is important to us?", "It is important to us to include, empower and support humans of all kinds."
_____no_output_____
BSD-3-Clause
tutorials/Bert_SQUAD_Interpret.ipynb
cspanda/captum
Let's numericalize the question and the input text, and generate corresponding baselines / references for all three sub-embedding types (word, token type and position embeddings) using our helper functions defined above.
input_ids, ref_input_ids, sep_id = construct_input_ref_pair(question, text, ref_token_id, sep_token_id, cls_token_id)
token_type_ids, ref_token_type_ids = construct_input_ref_token_type_pair(input_ids, sep_id)
position_ids, ref_position_ids = construct_input_ref_pos_id_pair(input_ids)
attention_mask = construct_attention_mask(input_ids)

indices = input_ids[0].detach().tolist()
all_tokens = tokenizer.convert_ids_to_tokens(indices)
_____no_output_____
BSD-3-Clause
tutorials/Bert_SQUAD_Interpret.ipynb
cspanda/captum
Also, let's define the ground truth for prediction's start and end positions.
ground_truth = 'to include, empower and support humans of all kinds'

ground_truth_tokens = tokenizer.encode(ground_truth, add_special_tokens=False)
ground_truth_end_ind = indices.index(ground_truth_tokens[-1])
ground_truth_start_ind = ground_truth_end_ind - len(ground_truth_tokens) + 1
_____no_output_____
BSD-3-Clause
tutorials/Bert_SQUAD_Interpret.ipynb
cspanda/captum
Now let's make predictions using input, token type, position id and a default attention mask.
start_scores, end_scores = predict(input_ids, \ token_type_ids=token_type_ids, \ position_ids=position_ids, \ attention_mask=attention_mask) print('Question: ', question) print('Predicted Answer: ', ' '.join(all_tokens[torch.argmax(start_scores) : torch.argmax(end_scores)+1]))
Question: What is important to us? Predicted Answer: to include , em ##power and support humans of all kinds
BSD-3-Clause
tutorials/Bert_SQUAD_Interpret.ipynb
cspanda/captum
There are two different ways of computing the attributions for the `BertEmbeddings` layer. One option is to use `LayerIntegratedGradients` and compute the attributions with respect to that layer. The second option is to pre-compute the embeddings and wrap the actual embeddings with `InterpretableEmbeddingBase`. The pre-computation of embeddings for the second option is necessary because integrated gradients scales the inputs, and that won't be meaningful on the level of word / token indices. Since using `LayerIntegratedGradients` is simpler, let's use it here.
lig = LayerIntegratedGradients(squad_pos_forward_func, model.bert.embeddings) attributions_start, delta_start = lig.attribute(inputs=input_ids, baselines=ref_input_ids, additional_forward_args=(token_type_ids, position_ids, attention_mask, 0), return_convergence_delta=True) attributions_end, delta_end = lig.attribute(inputs=input_ids, baselines=ref_input_ids, additional_forward_args=(token_type_ids, position_ids, attention_mask, 1), return_convergence_delta=True)
_____no_output_____
BSD-3-Clause
tutorials/Bert_SQUAD_Interpret.ipynb
cspanda/captum
A helper function to summarize attributions for each word token in the sequence.
def summarize_attributions(attributions): attributions = attributions.sum(dim=-1).squeeze(0) attributions = attributions / torch.norm(attributions) return attributions attributions_start_sum = summarize_attributions(attributions_start) attributions_end_sum = summarize_attributions(attributions_end) # storing couple samples in an array for visualization purposes start_position_vis = viz.VisualizationDataRecord( attributions_start_sum, torch.max(torch.softmax(start_scores[0], dim=0)), torch.argmax(start_scores), torch.argmax(start_scores), str(ground_truth_start_ind), attributions_start_sum.sum(), all_tokens, delta_start) end_position_vis = viz.VisualizationDataRecord( attributions_end_sum, torch.max(torch.softmax(end_scores[0], dim=0)), torch.argmax(end_scores), torch.argmax(end_scores), str(ground_truth_end_ind), attributions_end_sum.sum(), all_tokens, delta_end) print('\033[1m', 'Visualizations For Start Position', '\033[0m') viz.visualize_text([start_position_vis]) print('\033[1m', 'Visualizations For End Position', '\033[0m') viz.visualize_text([end_position_vis]) from IPython.display import Image Image(filename='img/bert/visuals_of_start_end_predictions.png')
_____no_output_____
BSD-3-Clause
tutorials/Bert_SQUAD_Interpret.ipynb
cspanda/captum
From the results above we can tell that for predicting the start position our model is focusing more on the question side, more specifically on the tokens `what` and `important`. It also has a slight focus on the token sequence `to us` on the text side. In contrast to that, for predicting the end position, our model focuses more on the text side and has relatively high attribution on the last end-position token `kinds`.

Multi-Embedding attribution

Now let's look into the sub-embeddings of `BertEmbeddings` and try to understand the contributions and roles of each of them for both the start and end predicted positions. To do so, we need to place interpretation hooks in each of the three of them. Note that we could perform attribution by using `LayerIntegratedGradients` as well, but in that case we would have to call attribute three times, once for each sub-layer, since currently `LayerIntegratedGradients` takes only one layer at a time. In the future we plan to support multi-layer attribution and will be able to perform attribution by calling attribute only once.

The `configure_interpretable_embedding_layer` function will help us to place interpretation hooks on each sub-layer. It returns an `InterpretableEmbeddingBase` layer for each sub-embedding and can be used to access the embedding vectors. Note that we need to remove the `InterpretableEmbeddingBase` wrapper from our model using the `remove_interpretable_embedding_layer` function after we finish the interpretation.
interpretable_embedding1 = configure_interpretable_embedding_layer(model, 'bert.embeddings.word_embeddings')
interpretable_embedding2 = configure_interpretable_embedding_layer(model, 'bert.embeddings.token_type_embeddings')
interpretable_embedding3 = configure_interpretable_embedding_layer(model, 'bert.embeddings.position_embeddings')
_____no_output_____
BSD-3-Clause
tutorials/Bert_SQUAD_Interpret.ipynb
cspanda/captum
`BertEmbeddings` has three sub-embeddings, namely `word_embeddings`, `token_type_embeddings` and `position_embeddings`, and this time we would like to attribute to each of them independently. The `construct_bert_sub_embedding` helper function helps us to construct the input embeddings and corresponding references separately.
(input_embed, ref_input_embed), (token_type_ids_embed, ref_token_type_ids_embed), (position_ids_embed, ref_position_ids_embed) = construct_bert_sub_embedding(input_ids, ref_input_ids, \ token_type_ids=token_type_ids, ref_token_type_ids=ref_token_type_ids, \ position_ids=position_ids, ref_position_ids=ref_position_ids)
_____no_output_____
BSD-3-Clause
tutorials/Bert_SQUAD_Interpret.ipynb
cspanda/captum
Now let's create an instance of `IntegratedGradients` and compute the attributions with respect to all those embeddings both for the start and end positions and summarize them for each word token.
ig = IntegratedGradients(squad_pos_forward_func) attributions_start = ig.attribute(inputs=(input_embed, token_type_ids_embed, position_ids_embed), baselines=(ref_input_embed, ref_token_type_ids_embed, ref_position_ids_embed), additional_forward_args=(attention_mask, 0)) attributions_end = ig.attribute(inputs=(input_embed, token_type_ids_embed, position_ids_embed), baselines=(ref_input_embed, ref_token_type_ids_embed, ref_position_ids_embed), additional_forward_args=(attention_mask, 1)) attributions_start_word = summarize_attributions(attributions_start[0]) attributions_end_word = summarize_attributions(attributions_end[0]) attributions_start_token_type = summarize_attributions(attributions_start[1]) attributions_end_token_type = summarize_attributions(attributions_end[1]) attributions_start_position = summarize_attributions(attributions_start[2]) attributions_end_position = summarize_attributions(attributions_end[2])
_____no_output_____
BSD-3-Clause
tutorials/Bert_SQUAD_Interpret.ipynb
cspanda/captum
An auxiliary function that will help us to compute the top-k attributions and corresponding indices.
def get_topk_attributed_tokens(attrs, k=5):
    values, indices = torch.topk(attrs, k)
    top_tokens = [all_tokens[idx] for idx in indices]
    return top_tokens, values, indices
_____no_output_____
BSD-3-Clause
tutorials/Bert_SQUAD_Interpret.ipynb
cspanda/captum
Removing interpretation hooks from all layers after finishing attribution.
remove_interpretable_embedding_layer(model, interpretable_embedding1)
remove_interpretable_embedding_layer(model, interpretable_embedding2)
remove_interpretable_embedding_layer(model, interpretable_embedding3)
_____no_output_____
BSD-3-Clause
tutorials/Bert_SQUAD_Interpret.ipynb
cspanda/captum
Computing topk attributions for all sub-embeddings and placing them in pandas dataframes for better visualization.
top_words_start, top_words_val_start, top_word_ind_start = get_topk_attributed_tokens(attributions_start_word) top_words_end, top_words_val_end, top_words_ind_end = get_topk_attributed_tokens(attributions_end_word) top_token_type_start, top_token_type_val_start, top_token_type_ind_start = get_topk_attributed_tokens(attributions_start_token_type) top_token_type_end, top_token_type_val_end, top_token_type_ind_end = get_topk_attributed_tokens(attributions_end_token_type) top_pos_start, top_pos_val_start, pos_ind_start = get_topk_attributed_tokens(attributions_start_position) top_pos_end, top_pos_val_end, pos_ind_end = get_topk_attributed_tokens(attributions_end_position) df_start = pd.DataFrame({'Word(Index), Attribution': ["{} ({}), {}".format(word, pos, round(val.item(),2)) for word, pos, val in zip(top_words_start, top_word_ind_start, top_words_val_start)], 'Token Type(Index), Attribution': ["{} ({}), {}".format(ttype, pos, round(val.item(),2)) for ttype, pos, val in zip(top_token_type_start, top_token_type_ind_start, top_words_val_start)], 'Position(Index), Attribution': ["{} ({}), {}".format(position, pos, round(val.item(),2)) for position, pos, val in zip(top_pos_start, pos_ind_start, top_pos_val_start)]}) df_start.style.apply(['cell_ids: False']) df_end = pd.DataFrame({'Word(Index), Attribution': ["{} ({}), {}".format(word, pos, round(val.item(),2)) for word, pos, val in zip(top_words_end, top_words_ind_end, top_words_val_end)], 'Token Type(Index), Attribution': ["{} ({}), {}".format(ttype, pos, round(val.item(),2)) for ttype, pos, val in zip(top_token_type_end, top_token_type_ind_end, top_words_val_end)], 'Position(Index), Attribution': ["{} ({}), {}".format(position, pos, round(val.item(),2)) for position, pos, val in zip(top_pos_end, pos_ind_end, top_pos_val_end)]}) df_end.style.apply(['cell_ids: False']) ['{}({})'.format(token, str(i)) for i, token in enumerate(all_tokens)]
_____no_output_____
BSD-3-Clause
tutorials/Bert_SQUAD_Interpret.ipynb
cspanda/captum
Below we can see top 5 attribution results from all three embedding types in predicting start positions. Top 5 attributed embeddings for start position
df_start
_____no_output_____
BSD-3-Clause
tutorials/Bert_SQUAD_Interpret.ipynb
cspanda/captum
Word embeddings help to focus more on the tokens surrounding the predicted answer's start position `to`, such as `em`, `power` and `,`. They also have high attribution for the tokens in the question, such as `what` and `?`. In contrast to the word embeddings, the token type embeddings focus more on the tokens in the text part, such as `important`, `em` and the start token `to`. The position embeddings also have a high attribution score for the tokens surrounding `to`, such as `us` and `important`. In addition to that, similar to the word embeddings, we observe important tokens from the question. We can perform a similar analysis, and visualize the top 5 attributed tokens for all three embedding types, also for the end position prediction.

Top 5 attributed embeddings for end position
df_end
_____no_output_____
BSD-3-Clause
tutorials/Bert_SQUAD_Interpret.ipynb
cspanda/captum
It is interesting to observe a high concentration of highly attributed tokens such as `of`, `kinds`, `support` and `power` for the end position prediction. The token `kinds`, which is the correctly predicted token, appears to have a high attribution score according to both the word and the position embeddings.

Interpreting BERT Layers

Now let's look into the layers of our network. More specifically, we would like to look into the distribution of attribution scores for each token across all layers of the BERT model and dive deeper into specific tokens. We do that using one of the layer attribution algorithms, namely layer conductance. However, we encourage you to try out and compare the results with other algorithms as well.

Let's configure `InterpretableEmbeddingsBase` again, in this case in order to interpret the layers of our model.
interpretable_embedding = configure_interpretable_embedding_layer(model, 'bert.embeddings')
_____no_output_____
BSD-3-Clause
tutorials/Bert_SQUAD_Interpret.ipynb
cspanda/captum
Let's iterate over all layers and compute the attributions for all tokens. In addition, let's also choose a specific token that we would like to examine in detail, specified by the index `token_to_explain`, and store the related information in a separate array. Note: since the code below iterates over all layers, it can take over 5 seconds to run. Please be patient!
layer_attrs_start = [] layer_attrs_end = [] # The token that we would like to examine separately. token_to_explain = 23 # the index of the token that we would like to examine more thoroughly layer_attrs_start_dist = [] layer_attrs_end_dist = [] input_embeddings, ref_input_embeddings = construct_whole_bert_embeddings(input_ids, ref_input_ids, \ token_type_ids=token_type_ids, ref_token_type_ids=ref_token_type_ids, \ position_ids=position_ids, ref_position_ids=ref_position_ids) for i in range(model.config.num_hidden_layers): lc = LayerConductance(squad_pos_forward_func, model.bert.encoder.layer[i]) layer_attributions_start = lc.attribute(inputs=input_embeddings, baselines=ref_input_embeddings, additional_forward_args=(token_type_ids, position_ids,attention_mask, 0))[0] layer_attributions_end = lc.attribute(inputs=input_embeddings, baselines=ref_input_embeddings, additional_forward_args=(token_type_ids, position_ids,attention_mask, 1))[0] layer_attrs_start.append(summarize_attributions(layer_attributions_start).cpu().detach().tolist()) layer_attrs_end.append(summarize_attributions(layer_attributions_end).cpu().detach().tolist()) # storing attributions of the token id that we would like to examine in more detail in token_to_explain layer_attrs_start_dist.append(layer_attributions_start[0,token_to_explain,:].cpu().detach().tolist()) layer_attrs_end_dist.append(layer_attributions_end[0,token_to_explain,:].cpu().detach().tolist())
_____no_output_____
BSD-3-Clause
tutorials/Bert_SQUAD_Interpret.ipynb
cspanda/captum
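As mentioned above, other layer attribution algorithms can be swapped into the same loop for comparison. The snippet below is only a minimal sketch, not part of the original tutorial: it applies Layer Integrated Gradients to a single layer, reusing `squad_pos_forward_func`, `input_embeddings`, `ref_input_embeddings`, `summarize_attributions` and the id tensors defined earlier, and its results will generally differ from conductance because the two methods attribute differently.

# Hedged sketch: comparing one layer against LayerIntegratedGradients
from captum.attr import LayerIntegratedGradients

layer_idx = 0  # any layer index to compare against the conductance results
lig = LayerIntegratedGradients(squad_pos_forward_func, model.bert.encoder.layer[layer_idx])

lig_attributions_start = lig.attribute(inputs=input_embeddings,
                                       baselines=ref_input_embeddings,
                                       additional_forward_args=(token_type_ids, position_ids, attention_mask, 0))[0]

# Summarize per token, exactly as in the conductance loop above
lig_attrs_start = summarize_attributions(lig_attributions_start).cpu().detach().tolist()
print(lig_attrs_start[:10])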
The plot below represents a heat map of attributions across all layers and tokens for the start position prediction. It is interesting to observe that the question word `what` gains increasingly high attribution from layer one to layer nine; in the last three layers that importance slowly diminishes. In contrast to the `what` token, many other tokens have negative or close-to-zero attribution in the first 6 layers. We start seeing slightly higher attribution for the tokens `important`, `us` and `to`. Interestingly, the token `em` is also assigned a high attribution score, which is remarkably high in the last three layers. And lastly, our correctly predicted token `to` for the start position gains increasingly positive attribution and has relatively high attribution especially in the last two layers.
fig, ax = plt.subplots(figsize=(15,5)) xticklabels=all_tokens yticklabels=list(range(1,13)) ax = sns.heatmap(np.array(layer_attrs_start), xticklabels=xticklabels, yticklabels=yticklabels, linewidth=0.2) plt.xlabel('Tokens') plt.ylabel('Layers') plt.show()
_____no_output_____
BSD-3-Clause
tutorials/Bert_SQUAD_Interpret.ipynb
cspanda/captum
Now let's examine the heat map of the attributions for the end position prediction. In the case of the end position prediction we again observe high attribution scores for the token `what` in the last 11 layers. The correctly predicted end token `kinds` has positive attribution across all layers, and it is especially prominent in the last two layers.
fig, ax = plt.subplots(figsize=(15,5)) xticklabels=all_tokens yticklabels=list(range(1,13)) ax = sns.heatmap(np.array(layer_attrs_end), xticklabels=xticklabels, yticklabels=yticklabels, linewidth=0.2) #, annot=True plt.xlabel('Tokens') plt.ylabel('Layers') plt.show()
_____no_output_____
BSD-3-Clause
tutorials/Bert_SQUAD_Interpret.ipynb
cspanda/captum
It is interesting to note that when we compare the heat maps of the start and end positions, overall the colors for the start position prediction have darker intensities. This implies that there are fewer tokens that attribute positively to the start position prediction and more tokens which are negative indicators or signals of the start position prediction. Now let's dig deeper into specific tokens and look into the distribution of attributions per layer for the token `kinds` in the start and end positions. The box plot diagram below shows the presence of outliers, especially in the first four layers and in layer 8. We also observe that for the start position prediction the interquartile range slowly decreases as we go deeper into the layers and finally diminishes.
fig, ax = plt.subplots(figsize=(20,10)) ax = sns.boxplot(data=layer_attrs_start_dist) plt.xlabel('Layers') plt.ylabel('Attribution') plt.show()
_____no_output_____
BSD-3-Clause
tutorials/Bert_SQUAD_Interpret.ipynb
cspanda/captum
Now let's plot the same distribution, but for the prediction of the end position. Here the attributions have larger positive values across all layers, and the interquartile range doesn't change much when moving deeper into the layers.
fig, ax = plt.subplots(figsize=(20,10)) ax = sns.boxplot(data=layer_attrs_end_dist) plt.xlabel('Layers') plt.ylabel('Attribution') plt.show()
_____no_output_____
BSD-3-Clause
tutorials/Bert_SQUAD_Interpret.ipynb
cspanda/captum
Now let's remove the interpretation hooks, since we have finished the interpretation at this point.
remove_interpretable_embedding_layer(model, interpretable_embedding)
_____no_output_____
BSD-3-Clause
tutorials/Bert_SQUAD_Interpret.ipynb
cspanda/captum
In addition to that, we can also look into the distribution of attributions in each layer for any input token. This will help us to better understand and compare the distributional patterns of attributions across multiple layers. We can, for example, represent the attributions as a probability density function (pdf) and compute its entropy in order to estimate the entropy of the attributions in each layer. This can be easily computed using a histogram, as the helper function below and the short standalone sketch after it show.
def pdf_attr(attrs, bins=100): return np.histogram(attrs, bins=bins, density=True)[0]
_____no_output_____
BSD-3-Clause
tutorials/Bert_SQUAD_Interpret.ipynb
cspanda/captum
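To make the idea concrete, here is a small self-contained sketch (not part of the tutorial) of the Shannon entropy of a histogram-based pdf; the random sample is only a stand-in for a layer's attribution values.

# Minimal sketch: Shannon entropy (in bits) of a histogram-based pdf
import numpy as np

def histogram_entropy(values, bins=100):
    counts, _ = np.histogram(values, bins=bins)
    p = counts / counts.sum()   # normalize the histogram to a probability mass function
    p = p[p > 0]                # drop empty bins so that log2 is well defined
    return -(p * np.log2(p)).sum()

sample = np.random.randn(768)   # stand-in for one layer's attribution values
print(histogram_entropy(sample))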
In this particular case let's compute the pdf for the attributions of the end position token `kinds`. We could, however, do it for all tokens. We will compute and visualize the pdfs and the entropies, using Shannon's entropy measure, for each layer for the token `kinds`.
layer_attrs_end_pdf = map(lambda layer_attrs_end_dist: pdf_attr(layer_attrs_end_dist), layer_attrs_end_dist)
layer_attrs_end_pdf = np.array(list(layer_attrs_end_pdf))

# summing attribution along the embedding dimension for each layer
# size: #layers
attr_sum = np.array(layer_attrs_end_dist).sum(-1)

# size: #layers
layer_attrs_end_pdf_norm = np.linalg.norm(layer_attrs_end_pdf, axis=-1, ord=1)

# size: #bins x #layers
layer_attrs_end_pdf = np.transpose(layer_attrs_end_pdf)

# size: #bins x #layers
layer_attrs_end_pdf = np.divide(layer_attrs_end_pdf, layer_attrs_end_pdf_norm, where=layer_attrs_end_pdf_norm!=0)
_____no_output_____
BSD-3-Clause
tutorials/Bert_SQUAD_Interpret.ipynb
cspanda/captum
The plot below visualizes the probability mass function (pmf) of attributions for each layer for the end position token `kinds`. From the plot we can observe that the distributions take bell-curved shapes with different means and variances. We can now use the attribution pdfs to compute entropies in the next cell.
fig, ax = plt.subplots(figsize=(20,10)) plt.plot(layer_attrs_end_pdf) plt.xlabel('Bins') plt.ylabel('Density') plt.legend(['Layer '+ str(i) for i in range(1,13)]) plt.show()
_____no_output_____
BSD-3-Clause
tutorials/Bert_SQUAD_Interpret.ipynb
cspanda/captum
Below we calculate and visualize the attribution entropies based on the Shannon entropy measure, where the x-axis corresponds to the layer index and the y-axis corresponds to the total attribution in that layer. The size of the circle for each (layer, total_attribution) pair corresponds to the normalized entropy value at that point. In this particular example we observe that the entropy doesn't change much from layer to layer; however, in a general case entropy can give us an intuition about the distributional characteristics of attributions in each layer and can be especially useful when comparing it across multiple tokens.
fig, ax = plt.subplots(figsize=(20,10)) # replacing 0s with 1s. np.log(1) = 0 and np.log(0) = -inf layer_attrs_end_pdf[layer_attrs_end_pdf == 0] = 1 layer_attrs_end_pdf_log = np.log2(layer_attrs_end_pdf) # size: #layers entropies= -(layer_attrs_end_pdf * layer_attrs_end_pdf_log).sum(0) plt.scatter(np.arange(12), attr_sum, s=entropies * 100) plt.xlabel('Layers') plt.ylabel('Total Attribution') plt.show()
_____no_output_____
BSD-3-Clause
tutorials/Bert_SQUAD_Interpret.ipynb
cspanda/captum
Parameter Configuration
# %% global parameters spk_ch = 4 spk_dim = 64 # for Wave_Clus # spk_dim = 48 # for HC1 and Neuropixels log_interval = 10 beta = 0.15 vq_num = 128 cardinality = 32 dropRate = 0.2 batch_size = 48 test_batch_size = 1000 """ org_dim = param[0] conv1_ch = param[1] conv2_ch = param[2] conv0_ker = param[3] conv1_ker = param[4] conv2_ker = param[5] self.vq_dim = param[6] self.vq_num = param[7] cardinality = param[8] dropRate = param[9] """ param_resnet_v2 = [spk_ch, 256, 16, 1, 3, 1, int(spk_dim/4), vq_num, cardinality, dropRate]
_____no_output_____
Apache-2.0
spk_vq_cae.ipynb
tong-wu-umn/spike-compression-autoencoder
Preparing data loaders
noise_file = './data/noisy_spks.mat'
clean_file = './data/clean_spks.mat'

args = collections.namedtuple

# training set purposely distorted to train denoising autoencoder
args.data_path = noise_file
args.train_portion = .5
args.train_mode = True
train_noise = SpikeDataset(args)

# clean dataset for training
args.data_path = clean_file
args.train_portion = .5
args.train_mode = True
train_clean = SpikeDataset(args)

# noisy dataset for testing
args.data_path = noise_file
args.train_portion = .5
args.train_mode = False
test_noise = SpikeDataset(args)

# clean dataset for testing
args.data_path = clean_file
args.train_portion = .5
args.train_mode = False
test_clean = SpikeDataset(args)

batch_cnt = int(math.ceil(len(train_noise) / batch_size))

# normalization
d_mean, d_std = train_clean.get_normalizer()
train_clean.apply_norm(d_mean, d_std)
train_noise.apply_norm(d_mean, d_std)
test_clean.apply_norm(d_mean, d_std)
test_noise.apply_norm(d_mean, d_std)
_____no_output_____
Apache-2.0
spk_vq_cae.ipynb
tong-wu-umn/spike-compression-autoencoder
Model definition
# %% create model model = spk_vq_vae_resnet(param_resnet_v2).to(gpu) # %% loss and optimization function def loss_function(recon_x, x, commit_loss, vq_loss): recon_loss = F.mse_loss(recon_x, x, reduction='sum') return recon_loss + beta * commit_loss + vq_loss, recon_loss optimizer = optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4, amsgrad=True) def train(epoch): model.train() train_loss = 0 batch_sampler = BatchSampler(RandomSampler(range(len(train_noise))), batch_size=batch_size, drop_last=False) for batch_idx, ind in enumerate(batch_sampler): in_data = train_noise[ind].to(gpu) out_data = train_clean[ind].to(gpu) optimizer.zero_grad() recon_batch, commit_loss, vq_loss = model(in_data) loss, recon_loss = loss_function(recon_batch, out_data, commit_loss, vq_loss) loss.backward(retain_graph=True) model.bwd() optimizer.step() train_loss += recon_loss.item() / (spk_dim * spk_ch) if batch_idx % log_interval == 0: print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.4f}'.format( epoch, batch_idx * len(in_data), len(train_noise), 100. * batch_idx / batch_cnt, recon_loss.item())) average_train_loss = train_loss / len(train_noise) print('====> Epoch: {} Average train loss: {:.5f}'.format( epoch, average_train_loss)) return average_train_loss # model logging best_val_loss = 10 cur_train_loss = 1 def save_model(val_loss, train_loss): global best_val_loss, cur_train_loss if val_loss < best_val_loss: best_val_loss = val_loss cur_train_loss = train_loss torch.save(model.state_dict(), './spk_vq_vae_temp.pt') def test(epoch, test_mode=True): if test_mode: model.eval() model.embed_reset() test_loss = 0 recon_sig = torch.rand(1, spk_ch, spk_dim) org_sig = torch.rand(1, spk_ch, spk_dim) with torch.no_grad(): batch_sampler = BatchSampler(RandomSampler(range(len(test_noise))), batch_size=test_batch_size, drop_last=False) for batch_idx, ind in enumerate(batch_sampler): in_data = test_noise[ind].to(gpu) out_data = test_clean[ind].to(gpu) recon_batch, commit_loss, vq_loss = model(in_data) _, recon_loss = loss_function(recon_batch, out_data, commit_loss, vq_loss) recon_sig = torch.cat((recon_sig, recon_batch.data.cpu()), dim=0) org_sig = torch.cat((org_sig, out_data.data.cpu()), dim=0) test_loss += recon_loss.item() / (spk_dim * spk_ch) average_test_loss = test_loss / len(test_noise) print('====> Epoch: {} Average test loss: {:.5f}'.format( epoch, average_test_loss)) if epoch % 10 == 0: plt.figure(figsize=(7,5)) plt.bar(np.arange(vq_num), model.embed_freq / model.embed_freq.sum()) plt.ylabel('Probability of Activation', fontsize=16) plt.xlabel('Index of codewords', fontsize=16) plt.show() return average_test_loss, recon_sig[1:], org_sig[1:]
_____no_output_____
Apache-2.0
spk_vq_cae.ipynb
tong-wu-umn/spike-compression-autoencoder
Training
train_loss_history = [] test_loss_history = [] epochs = 500 start_time = time.time() for epoch in range(1, epochs + 1): train_loss = train(epoch) test_loss, _, _ = test(epoch) save_model(test_loss, train_loss) train_loss_history.append(train_loss) test_loss_history.append(test_loss) print("--- %s seconds ---" % (time.time() - start_time)) print('Minimal train/testing losses are {:.4f} and {:.4f} with index {}\n' .format(cur_train_loss, best_val_loss, test_loss_history.index(min(test_loss_history)))) # plot train and test loss history over epochs plt.figure(1) epoch_axis = range(1, len(train_loss_history) + 1) plt.plot(epoch_axis, train_loss_history, 'bo') plt.plot(epoch_axis, test_loss_history, 'b+') plt.xlabel('Epochs') plt.ylabel('Loss') plt.show()
_____no_output_____
Apache-2.0
spk_vq_cae.ipynb
tong-wu-umn/spike-compression-autoencoder
Result evaluation a. Visualization of the most frequently used VQ vectors
# select the best performing model model.load_state_dict(torch.load('./spk_vq_vae_temp.pt')) embed_idx = np.argsort(model.embed_freq) embed_sort = model.embed.weight.data.cpu().numpy()[embed_idx] # Visualizing activation pattern of VQ codes on testing dataset (the first 8 mostly activated) plt.figure() n_row, n_col = 1, 8 f, axarr = plt.subplots(n_row, n_col, figsize=(n_col*2, n_row*2)) for i in range(8): axarr[i].plot(embed_sort[i], 'r') axarr[i].axis('off') plt.show()
_____no_output_____
Apache-2.0
spk_vq_cae.ipynb
tong-wu-umn/spike-compression-autoencoder
b. Compression ratio
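As a reading aid for the cell below (this only restates what the code computes, it is not an independent derivation): the average code length is estimated from the empirical codeword usage frequencies, and the compression ratio compares the raw 16-bit spike representation against the compressed code,

$$b = -\sum_{i\,:\,p_i>0} p_i \log_2 p_i, \qquad \mathrm{CR} = \frac{C_{\mathrm{spk}} \times D_{\mathrm{spk}} \times 16}{C_{\mathrm{code}} \times b},$$

where $p_i$ are the normalized activation frequencies of the VQ codewords, $C_{\mathrm{spk}}$ and $D_{\mathrm{spk}}$ are the spike channel count and spike dimension, and $C_{\mathrm{code}}$ is the number of compressed channels (`param_resnet_v2[2]` in the code).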
# %% spike recon train_mean, train_std = torch.from_numpy(d_mean), torch.from_numpy(d_std) _, val_spks, test_spks = test(10) # calculate compression ratio vq_freq = model.embed_freq / sum(model.embed_freq) vq_freq = vq_freq[vq_freq != 0] vq_log2 = np.log2(vq_freq) bits = -sum(np.multiply(vq_freq, vq_log2)) cr = spk_ch * spk_dim * 16 / (param_resnet_v2[2] * bits) print('compression ratio is {:.2f} with {:.2f}-bit.'.format(cr, bits))
_____no_output_____
Apache-2.0
spk_vq_cae.ipynb
tong-wu-umn/spike-compression-autoencoder
c. Reconstruction error (relative L2 norm)
recon_spks = val_spks * train_std + train_mean test_spks_v2 = test_spks * train_std + train_mean recon_spks = recon_spks.view(-1, spk_dim) test_spks_v2 = test_spks_v2.view(-1, spk_dim) recon_err = torch.norm(recon_spks-test_spks_v2, p=2, dim=1) / torch.norm(test_spks_v2, p=2, dim=1) print('mean of recon_err is {:.4f}'.format(torch.mean(recon_err))) print('std of recon_err is {:.4f}'.format(torch.std(recon_err)))
_____no_output_____
Apache-2.0
spk_vq_cae.ipynb
tong-wu-umn/spike-compression-autoencoder
d. SNDR of reconstructed spikes
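For reference, the quantity computed in the cell below is the signal-to-noise-and-distortion ratio of each reconstructed spike, averaged over all spikes:

$$\mathrm{SNDR} = 20 \log_{10} \frac{\lVert \mathbf{x} \rVert_2}{\lVert \mathbf{x} - \hat{\mathbf{x}} \rVert_2},$$

where $\mathbf{x}$ is the original spike waveform and $\hat{\mathbf{x}}$ its reconstruction.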
recon_spks_new = recon_spks.numpy() test_spks_new = test_spks_v2.numpy() def cal_sndr(org_data, recon_data): org_norm = np.linalg.norm(org_data, axis=1) err_norm = np.linalg.norm(org_data-recon_data, axis=1) return np.mean(20*np.log10(org_norm / err_norm)), np.std(20*np.log10(org_norm / err_norm)) cur_sndr, sndr_std = cal_sndr(test_spks_new, recon_spks_new) print('SNDR is {:.4f} with std {:.4f}'.format(cur_sndr, sndr_std))
_____no_output_____
Apache-2.0
spk_vq_cae.ipynb
tong-wu-umn/spike-compression-autoencoder
e. Visualization of reconstructed spikes chosen at random
rand_val_idx = np.random.permutation(len(recon_spks_new)) plt.figure() n_row, n_col = 3, 8 spks_to_show = test_spks_new[rand_val_idx[:n_row*n_col]] ymax, ymin = np.amax(spks_to_show), np.amin(spks_to_show) f, axarr = plt.subplots(n_row, n_col, figsize=(n_col*3, n_row*3)) for i in range(n_row): for j in range(n_col): axarr[i, j].plot(recon_spks_new[rand_val_idx[i*n_col+j]], 'r') axarr[i, j].plot(test_spks_new[rand_val_idx[i*n_col+j]], 'b') axarr[i, j].set_ylim([ymin*1.1, ymax*1.1]) axarr[i, j].axis('off') plt.show()
_____no_output_____
Apache-2.0
spk_vq_cae.ipynb
tong-wu-umn/spike-compression-autoencoder
Neural Memory System - Centre building Environment setup
import os from pathlib import Path CURRENT_FOLDER = Path(os.getcwd()) CD_KEY = "--CENTRE_BUILDING_DEMO_IN_ROOT" if ( CD_KEY not in os.environ or os.environ[CD_KEY] is None or len(os.environ[CD_KEY]) == 0 or os.environ[CD_KEY] == "false" ): %cd -q ../../.. ROOT_FOLDER = Path(os.getcwd()).relative_to(os.getcwd()) CURRENT_FOLDER = CURRENT_FOLDER.relative_to(ROOT_FOLDER.absolute()) os.environ[CD_KEY] = "true" print(f"Root folder: {ROOT_FOLDER}") print(f"Current folder: {CURRENT_FOLDER}")
Root folder: . Current folder: nemesys/demo/tentative
Apache-2.0
demo/tentative/demo_centre-building.ipynb
suflaj/nemesys
Modules
from itertools import product import math import struct import numpy as np import torch import torch.nn from nemesys.hashing.minhashing.numpy_minhash import NumPyMinHash from nemesys.modelling.analysers.modules.pytorch_analyser_lstm import PyTorchAnalyserLSTM from nemesys.modelling.decoders.modules.pytorch_decoder_conv2d import PyTorchDecoderConv2D from nemesys.modelling.encoders.modules.pytorch_encoder_linear import PyTorchEncoderLinear from nemesys.modelling.routers.concatenation.minhash.minhash_concatenation_router import ( MinHashConcatenationRouter ) from nemesys.modelling.stores.pytorch_list_store import PyTorchListStore from nemesys.modelling.synthesisers.modules.pytorch_synthesiser_linear import PyTorchSynthesiserLinear torch.set_printoptions(sci_mode=False)
_____no_output_____
Apache-2.0
demo/tentative/demo_centre-building.ipynb
suflaj/nemesys
Components setup Sizes
EMBEDDING_SIZE = 4 ANALYSER_CLASS_NAMES = ("statement",) ANALYSER_OUTPUT_SIZE = EMBEDDING_SIZE ENCODER_OUTPUT_SIZE = 3 DECODER_IN_CHANNELS = 1 DECODER_OUT_CHANNELS = 3 DECODER_KERNEL_SIZE = (1, ENCODER_OUTPUT_SIZE) MINHASH_N_PERMUTATIONS = 4 MINHASH_SEED = 0
_____no_output_____
Apache-2.0
demo/tentative/demo_centre-building.ipynb
suflaj/nemesys
Embedding setup
allowed_letters = [chr(x) for x in range(ord("A"), ord("Z") + 1)] vocabulary = ["".join(x) for x in product(*([allowed_letters] * 3))] word_to_index = {word: i for i, word in enumerate(vocabulary)} embedding = torch.nn.Embedding( num_embeddings=len(word_to_index), embedding_dim=EMBEDDING_SIZE, max_norm=math.sqrt(EMBEDDING_SIZE), )
_____no_output_____
Apache-2.0
demo/tentative/demo_centre-building.ipynb
suflaj/nemesys
Analyser setup
analyser = PyTorchAnalyserLSTM( class_names=ANALYSER_CLASS_NAMES, input_size=EMBEDDING_SIZE, hidden_size=ANALYSER_OUTPUT_SIZE, batch_first=True, )
_____no_output_____
Apache-2.0
demo/tentative/demo_centre-building.ipynb
suflaj/nemesys
Encoder setup
encoder = PyTorchEncoderLinear( in_features=ANALYSER_OUTPUT_SIZE, out_features=ENCODER_OUTPUT_SIZE, content_key="content", )
_____no_output_____
Apache-2.0
demo/tentative/demo_centre-building.ipynb
suflaj/nemesys
Store setup
store = PyTorchListStore()
_____no_output_____
Apache-2.0
demo/tentative/demo_centre-building.ipynb
suflaj/nemesys
Decoder setup
decoder = PyTorchDecoderConv2D( in_channels = DECODER_IN_CHANNELS, out_channels = DECODER_OUT_CHANNELS, kernel_size = DECODER_KERNEL_SIZE, )
_____no_output_____
Apache-2.0
demo/tentative/demo_centre-building.ipynb
suflaj/nemesys
Router setup MinHash setup
def tensor_to_numpy(x: torch.Tensor):
    x = x.reshape((x.shape[0], -1)) # Preserve batches
    x = np.array(x, dtype=np.float32)
    return x

def preprocess_function(element):
    element_as_bytes = struct.pack("<f", float(element))
    element_as_int = np.frombuffer(
        element_as_bytes, dtype=np.uint32
    ).astype(np.uint64)[0]
    return element_as_int

def numpy_to_tensor(x: np.ndarray):
    x_floats = np.vectorize(lambda x: x / ((2 ** 32) - 1))(x)
    return torch.tensor(x_floats, dtype=torch.float32)

minhash = NumPyMinHash(
    n_permutations=MINHASH_N_PERMUTATIONS,
    seed=MINHASH_SEED,
    preprocess_function=preprocess_function,
)
_____no_output_____
Apache-2.0
demo/tentative/demo_centre-building.ipynb
suflaj/nemesys
Continuing router setup
router = MinHashConcatenationRouter(minhash_instance=minhash)
_____no_output_____
Apache-2.0
demo/tentative/demo_centre-building.ipynb
suflaj/nemesys
Synthesiser setup Runs Data preparation
inputs = ["AAA", "ABA", "BDC"] has_a = [1 if "A" in x else 0 for x in inputs] input_indices = [word_to_index[word] for word in inputs] input_indices = torch.tensor(input_indices) print(input_indices) output_tensor = torch.tensor(has_a) print(output_tensor)
tensor([1, 1, 0])
Apache-2.0
demo/tentative/demo_centre-building.ipynb
suflaj/nemesys
Embedding run
embeddings = embedding(input_indices) print(embeddings)
tensor([[ 0.0733, 1.6564, 0.0727, 1.0186], [ 0.3968, -0.2372, -1.5747, -0.4124], [ 0.2069, 0.6105, -0.5933, -0.8433]], grad_fn=<EmbeddingBackward>)
Apache-2.0
demo/tentative/demo_centre-building.ipynb
suflaj/nemesys
Analyser run
analyser_output = analyser(embeddings.reshape(len(inputs), 1, -1)) for class_name in ANALYSER_CLASS_NAMES: print(f"{class_name}:") print(analyser_output[class_name]["content"])
statement: tensor([[ 0.0044, 0.1220, -0.0175, -0.2743], [-0.0601, -0.0471, -0.1271, 0.2379], [-0.1024, 0.0011, -0.0486, 0.1101]], grad_fn=<IndexBackward>)
Apache-2.0
demo/tentative/demo_centre-building.ipynb
suflaj/nemesys
Encoder run
encoder_output = encoder(analyser_output["statement"]) print(encoder_output)
{'content': tensor([[ 0.1218, -0.0339, 0.0212], [-0.0465, 0.0300, 0.0055], [-0.0192, -0.0080, 0.0010]], grad_fn=<MmBackward>)}
Apache-2.0
demo/tentative/demo_centre-building.ipynb
suflaj/nemesys
Store run
store.append(encoder_output["content"]) print(store)
[tensor([[ 0.1218, -0.0339, 0.0212], [-0.0465, 0.0300, 0.0055], [-0.0192, -0.0080, 0.0010]])]
Apache-2.0
demo/tentative/demo_centre-building.ipynb
suflaj/nemesys
Decoder run
decoder_output = decoder(store) print(decoder_output)
{'content': tensor([[[[ 0.4530], [ 0.5088], [ 0.4872]], [[ 0.4030], [ 0.4953], [ 0.4768]], [[-0.1114], [-0.0418], [-0.0639]]]], grad_fn=<ThnnConv2DBackward>)}
Apache-2.0
demo/tentative/demo_centre-building.ipynb
suflaj/nemesys
Router run
router_input = decoder_output["content"].squeeze(dim=0) router_output = router(router_input) router_output = numpy_to_tensor(router_output) print(router_output)
tensor([[0.4881, 0.8373, 0.6695, 0.0170, 0.7944, 0.6838, 0.8554, 0.2267, 0.2587, 0.0597, 0.3019, 0.7954], [0.8945, 0.7646, 0.3306, 0.5174, 0.5584, 0.1036, 0.1118, 0.1061, 0.2825, 0.2752, 0.6448, 0.2994], [0.9799, 0.3582, 0.8451, 0.7665, 0.2921, 0.9583, 0.6857, 0.8045, 0.6184, 0.1507, 0.1753, 0.6196]])
Apache-2.0
demo/tentative/demo_centre-building.ipynb
suflaj/nemesys
Detect Model Bias with Amazon SageMaker Clarify Amazon Science: _[How Clarify helps machine learning developers detect unintended bias](https://www.amazon.science/latest-news/how-clarify-helps-machine-learning-developers-detect-unintended-bias)_ [](https://www.amazon.science/latest-news/how-clarify-helps-machine-learning-developers-detect-unintended-bias)

Terminology

* **Bias**: An imbalance in the training data or the prediction behavior of the model across different groups, such as age or income bracket. Biases can result from the data or algorithm used to train your model. For instance, if an ML model is trained primarily on data from middle-aged individuals, it may be less accurate when making predictions involving younger and older people.
* **Bias metric**: A function that returns numerical values indicating the level of a potential bias.
* **Bias report**: A collection of bias metrics for a given dataset, or a combination of a dataset and a model.
* **Label**: Feature that is the target for training a machine learning model. Referred to as the observed label or observed outcome.
* **Positive label values**: Label values that are favorable to a demographic group observed in a sample. In other words, designates a sample as having a positive result.
* **Negative label values**: Label values that are unfavorable to a demographic group observed in a sample. In other words, designates a sample as having a negative result.
* **Facet**: A column or feature that contains the attributes with respect to which bias is measured.
* **Facet value**: The feature values of attributes that bias might favor or disfavor.

Posttraining Bias Metrics https://docs.aws.amazon.com/sagemaker/latest/dg/clarify-measure-post-training-bias.html

* **Difference in Positive Proportions in Predicted Labels (DPPL)**: Measures the difference in the proportion of positive predictions between the favored facet a and the disfavored facet d.
* **Disparate Impact (DI)**: Measures the ratio of proportions of the predicted labels for the favored facet a and the disfavored facet d.
* **Difference in Conditional Acceptance (DCAcc)**: Compares the observed labels to the labels predicted by a model and assesses whether this is the same across facets for predicted positive outcomes (acceptances).
* **Difference in Conditional Rejection (DCR)**: Compares the observed labels to the labels predicted by a model and assesses whether this is the same across facets for negative outcomes (rejections).
* **Recall Difference (RD)**: Compares the recall of the model for the favored and disfavored facets.
* **Difference in Acceptance Rates (DAR)**: Measures the difference in the ratios of the observed positive outcomes (TP) to the predicted positives (TP + FP) between the favored and disfavored facets.
* **Difference in Rejection Rates (DRR)**: Measures the difference in the ratios of the observed negative outcomes (TN) to the predicted negatives (TN + FN) between the disfavored and favored facets.
* **Accuracy Difference (AD)**: Measures the difference between the prediction accuracy for the favored and disfavored facets.
* **Treatment Equality (TE)**: Measures the difference in the ratio of false positives to false negatives between the favored and disfavored facets.
* **Conditional Demographic Disparity in Predicted Labels (CDDPL)**: Measures the disparity of predicted labels between the facets as a whole, but also by subgroups.
* **Counterfactual Fliptest (FT)**: Examines each member of facet d and assesses whether similar members of facet a have different model predictions.
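To make the first two metrics concrete, here is a rough sketch (not part of this notebook) of how DPPL and DI could be computed by hand from predicted labels and a binary facet column. The arrays are toy values for illustration only; in practice Clarify computes these metrics for you from the real dataset and model.

# Toy illustration of DPPL and DI for a binary facet (values are made up)
import numpy as np

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])   # 1 = positive predicted outcome
facet_d = np.array([0, 0, 1, 0, 1, 0, 1, 1, 0, 1])  # 1 = disfavored facet d, 0 = favored facet a

q_a = y_pred[facet_d == 0].mean()  # proportion of positive predictions for facet a
q_d = y_pred[facet_d == 1].mean()  # proportion of positive predictions for facet d

dppl = q_a - q_d   # Difference in Positive Proportions in Predicted Labels
di = q_d / q_a     # Disparate Impact

print('DPPL = {:.2f}, DI = {:.2f}'.format(dppl, di))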
import boto3 import sagemaker import pandas as pd import numpy as np sess = sagemaker.Session() bucket = sess.default_bucket() region = boto3.Session().region_name import botocore.config config = botocore.config.Config( user_agent_extra='dsoaws/1.0' ) sm = boto3.Session().client(service_name="sagemaker", region_name=region, config=config) %store -r role import matplotlib.pyplot as plt %matplotlib inline %config InlineBackend.figure_format='retina'
_____no_output_____
Apache-2.0
00_quickstart/09_Detect_Model_Bias_Clarify.ipynb
MarcusFra/workshop
Test data for biasWe created test data in JSONLines format to match the model inputs.
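As a quick illustration of the JSON Lines layout (one JSON object per line), the record structure can also be inspected from Python. This is only a convenience sketch; the `head` command in the next cell prints the actual first record.

# Sketch: inspect the first JSON Lines record from Python
import json

with open('./data-clarify/test_data_bias.jsonl') as f:
    first_record = json.loads(f.readline())   # each line is a standalone JSON object

print(first_record.keys())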
test_data_bias_path = "./data-clarify/test_data_bias.jsonl" !head -n 1 $test_data_bias_path
_____no_output_____
Apache-2.0
00_quickstart/09_Detect_Model_Bias_Clarify.ipynb
MarcusFra/workshop
Upload the data
test_data_bias_s3_uri = sess.upload_data(bucket=bucket, key_prefix="bias/test_data_bias", path=test_data_bias_path) test_data_bias_s3_uri !aws s3 ls $test_data_bias_s3_uri %store test_data_bias_s3_uri
_____no_output_____
Apache-2.0
00_quickstart/09_Detect_Model_Bias_Clarify.ipynb
MarcusFra/workshop
Run Posttraining Model Bias Analysis
%store -r pipeline_name print(pipeline_name) %%time import time from pprint import pprint executions_response = sm.list_pipeline_executions(PipelineName=pipeline_name)["PipelineExecutionSummaries"] pipeline_execution_status = executions_response[0]["PipelineExecutionStatus"] print(pipeline_execution_status) while pipeline_execution_status == "Executing": try: executions_response = sm.list_pipeline_executions(PipelineName=pipeline_name)["PipelineExecutionSummaries"] pipeline_execution_status = executions_response[0]["PipelineExecutionStatus"] except Exception as e: print("Please wait...") time.sleep(30) pprint(executions_response)
_____no_output_____
Apache-2.0
00_quickstart/09_Detect_Model_Bias_Clarify.ipynb
MarcusFra/workshop
List Pipeline Execution Steps
pipeline_execution_status = executions_response[0]["PipelineExecutionStatus"] print(pipeline_execution_status) pipeline_execution_arn = executions_response[0]["PipelineExecutionArn"] print(pipeline_execution_arn) from pprint import pprint steps = sm.list_pipeline_execution_steps(PipelineExecutionArn=pipeline_execution_arn) pprint(steps)
_____no_output_____
Apache-2.0
00_quickstart/09_Detect_Model_Bias_Clarify.ipynb
MarcusFra/workshop
View Created Model_Note: If the trained model did not pass the Evaluation step (> accuracy threshold), it will not be created._
for execution_step in steps["PipelineExecutionSteps"]: if execution_step["StepName"] == "CreateModel": model_arn = execution_step["Metadata"]["Model"]["Arn"] break print(model_arn) pipeline_model_name = model_arn.split("/")[-1] print(pipeline_model_name)
_____no_output_____
Apache-2.0
00_quickstart/09_Detect_Model_Bias_Clarify.ipynb
MarcusFra/workshop
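With the model name retrieved above, the post-training bias analysis itself is typically configured with the SageMaker Clarify SDK. The snippet below is only a hedged sketch of that setup, not the exact configuration used in this workshop; the label, features, facet and output path values are placeholders that would need to be replaced with the real field names and S3 locations for this dataset.

from sagemaker import clarify

bias_report_output_path = 's3://{}/bias/report'.format(bucket)   # placeholder output location

clarify_processor = clarify.SageMakerClarifyProcessor(
    role=role,
    instance_count=1,
    instance_type='ml.c5.xlarge',
    sagemaker_session=sess,
)

data_config = clarify.DataConfig(
    s3_data_input_path=test_data_bias_s3_uri,
    s3_output_path=bias_report_output_path,
    label='label',                        # placeholder: name of the label field
    features='features',                  # placeholder: JMESPath to the features in each JSON line
    dataset_type='application/jsonlines',
)

bias_config = clarify.BiasConfig(
    label_values_or_threshold=[1],        # placeholder: which label values count as positive
    facet_name='facet',                   # placeholder: field the bias is measured against
)

model_config = clarify.ModelConfig(
    model_name=pipeline_model_name,
    instance_type='ml.m5.xlarge',
    instance_count=1,
    content_type='application/jsonlines',
    accept_type='application/jsonlines',
)

predictions_config = clarify.ModelPredictedLabelConfig(label='predicted_label')  # placeholder JMESPath

clarify_processor.run_post_training_bias(
    data_config=data_config,
    data_bias_config=bias_config,
    model_config=model_config,
    model_predicted_label_config=predictions_config,
    methods=['DPPL', 'DI', 'DCAcc', 'DCR'],
)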