Let’s plot the accuracy of the model on the training and validation data during training:
fig, ax = plt.subplots(1, 1, figsize=(10, 6))
ax.plot(history.history['acc'], 'r', label='train')
ax.plot(history.history['val_acc'], 'b', label='val')
ax.set_xlabel(r'Epoch', fontsize=20)
ax.set_ylabel(r'Accuracy', fontsize=20)
ax.legend()
ax.tick_params(labelsize=20)
Let's try data augmentation
datagen = ImageDataGenerator(
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest')
These are just a few of the options available (for more, see the Keras documentation). Let’s quickly go over this code:

- `rotation_range` is a value in degrees (0–180), a range within which to randomly rotate pictures.
- `width_shift_range` and `height_shift_range` are ranges (as a fraction of total width or height) within which to randomly translate pictures horizontally or vertically.
- `shear_range` is for randomly applying shearing transformations.
- `zoom_range` is for randomly zooming inside pictures.
- `horizontal_flip` is for randomly flipping half the images horizontally, which is relevant when there are no assumptions of horizontal asymmetry (for example, real-world pictures).
- `fill_mode` is the strategy used for filling in newly created pixels, which can appear after a rotation or a width/height shift.

Let’s look at the augmented images:
from keras.preprocessing import image

fnames = [os.path.join(train_dogs_dir, fname)
          for fname in os.listdir(train_dogs_dir)]
img_path = fnames[3]                                  # Chooses one image to augment
img = image.load_img(img_path, target_size=(150, 150))  # Reads the image and resizes it
x = image.img_to_array(img)   # Converts it to a Numpy array with shape (150, 150, 3)
x = x.reshape((1,) + x.shape) # Reshapes it to (1, 150, 150, 3)

i = 0
for batch in datagen.flow(x, batch_size=1):
    plt.figure(i)
    imgplot = plt.imshow(image.array_to_img(batch[0]))
    i += 1
    if i % 4 == 0:
        break
plt.show()
If you train a new network using this data-augmentation configuration, the network will never see the same input twice. But the inputs it sees are still heavily intercorrelated, because they come from a small number of original images—you can’t produce new information, you can only remix existing information. As such, this may not be enough to completely get rid of overfitting. To further fight overfitting, you’ll also add a **Dropout** layer to your model right before the densely connected classifier.
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dropout(0.5))
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))

model.compile(loss='binary_crossentropy',
              optimizer=optimizers.RMSprop(lr=1e-4),
              metrics=['acc'])

# Let’s train the network using data augmentation and dropout.
train_datagen = ImageDataGenerator(
    rescale=1./255,
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,)

# Note that the validation data shouldn’t be augmented!
test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
    train_dir,
    target_size=(150, 150),
    batch_size=32,
    class_mode='binary')

validation_generator = test_datagen.flow_from_directory(
    validation_dir,
    target_size=(150, 150),
    batch_size=32,
    class_mode='binary')

history = model.fit_generator(
    train_generator,
    steps_per_epoch=100,
    epochs=5,  # TODO: should be 100
    validation_data=validation_generator,
    validation_steps=50)

model.save('cats_and_dogs_small_2.h5')
Found 2000 images belonging to 2 classes.
Found 1000 images belonging to 2 classes.
Epoch 1/5
100/100 [==============================] - 94s 935ms/step - loss: 0.6902 - acc: 0.5294 - val_loss: 0.7003 - val_acc: 0.4924
Epoch 2/5
100/100 [==============================] - 88s 882ms/step - loss: 0.6763 - acc: 0.5703 - val_loss: 0.7350 - val_acc: 0.5090
Epoch 3/5
100/100 [==============================] - 90s 899ms/step - loss: 0.6681 - acc: 0.5816 - val_loss: 0.6458 - val_acc: 0.6098
Epoch 4/5
100/100 [==============================] - 88s 877ms/step - loss: 0.6496 - acc: 0.6241 - val_loss: 0.6431 - val_acc: 0.6192
Epoch 5/5
100/100 [==============================] - 89s 886ms/step - loss: 0.6298 - acc: 0.6369 - val_loss: 0.5897 - val_acc: 0.6580
And let’s plot the results again. Thanks to data augmentation and dropout, you’re no longer overfitting: the training curves closely track the validation curves. You now reach an accuracy of 82%, a 15% relative improvement over the non-regularized model. (Note: these numbers are for 100 epochs; the 5-epoch run above stops well short of them.)
fig, ax = plt.subplots(1, 1, figsize=(10, 6))
ax.plot(history.history['acc'], 'r', label='train')
ax.plot(history.history['val_acc'], 'b', label='val')
ax.set_xlabel(r'Epoch', fontsize=20)
ax.set_ylabel(r'Accuracy', fontsize=20)
ax.legend()
ax.tick_params(labelsize=20)
By pushing regularization even further, and by tuning the network’s hyperparameters (such as the number of filters per convolution layer, or the number of layers in the network), you may be able to get an even better accuracy, likely up to 86% or 87%. But it would prove difficult to go any higher just by training your own convnet from scratch, because you have so little data to work with. As a next step to improve your accuracy on this problem, you’ll have to use a pretrained model.
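To give a feel for what that next step looks like, here is a minimal sketch of feature extraction with a frozen pretrained convolutional base. It assumes `keras.applications.VGG16` with ImageNet weights; treat it as an illustration only, not this lab’s method:

from keras.applications import VGG16

# Pretrained convolutional base (ImageNet weights), without the dense classifier on top.
conv_base = VGG16(weights='imagenet',
                  include_top=False,
                  input_shape=(150, 150, 3))
conv_base.trainable = False  # Freeze it, so training only updates the new classifier.

pretrained_model = models.Sequential()
pretrained_model.add(conv_base)
pretrained_model.add(layers.Flatten())
pretrained_model.add(layers.Dense(256, activation='relu'))
pretrained_model.add(layers.Dense(1, activation='sigmoid'))
pretrained_model.compile(loss='binary_crossentropy',
                         optimizer=optimizers.RMSprop(lr=2e-5),
                         metrics=['acc'])

Part 4: keras viz toolkit

https://github.com/raghakot/keras-vis/blob/master/examples/mnist/attention.ipynb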
# (train_images, train_labels, test_images, test_labels are assumed to be the
# one-hot-encoded MNIST arrays loaded in an earlier cell.)
import numpy as np
import keras
from keras import layers
from keras.models import Sequential
from matplotlib import pyplot as plt
%matplotlib inline

class_idx = 0
indices = np.where(test_labels[:, class_idx] == 1.)[0]

# pick some random input from here.
idx = indices[0]

# Lets sanity check the picked image.
plt.rcParams['figure.figsize'] = (18, 6)
plt.imshow(test_images[idx][..., 0])

input_shape = (28, 28, 1)
num_classes = 10
batch_size = 128
epochs = 5

model = Sequential()
model.add(layers.Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input_shape))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D(pool_size=(2, 2)))
model.add(layers.Dropout(0.25))
model.add(layers.Flatten())
model.add(layers.Dense(128, activation='relu'))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(num_classes, activation='softmax', name='preds'))

model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adam(),
              metrics=['accuracy'])

model.fit(train_images, train_labels,
          batch_size=batch_size,
          epochs=epochs,
          verbose=1,
          validation_data=(test_images, test_labels))

score = model.evaluate(test_images, test_labels, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])

from vis.visualization import visualize_saliency, visualize_cam
from vis.utils import utils
from keras import activations

# Utility to search for layer index by name.
# Alternatively we can specify this as -1 since it corresponds to the last layer.
# This corresponds to the Dense linear layer.
layer_idx = utils.find_layer_idx(model, 'preds')

plt.rcParams["figure.figsize"] = (5, 5)

import warnings
warnings.filterwarnings('ignore')

# Visualize Grad-CAM for each digit class, under different backprop modifiers.
for class_idx in np.arange(10):
    indices = np.where(test_labels[:, class_idx] == 1.)[0]
    idx = indices[0]

    f, ax = plt.subplots(1, 4)
    ax[0].imshow(test_images[idx][..., 0])

    for i, modifier in enumerate([None, 'guided', 'relu']):
        grads = visualize_cam(model, layer_idx, filter_indices=class_idx,
                              seed_input=test_images[idx], backprop_modifier=modifier)
        if modifier is None:
            modifier = 'vanilla'
        ax[i + 1].set_title(modifier)
        ax[i + 1].imshow(grads, cmap='jet')
Pragmatic color describers
__author__ = "Christopher Potts"
__version__ = "CS224u, Stanford, Spring 2020"
Contents

1. [Overview](#Overview)
1. [Set-up](#Set-up)
1. [The corpus](#The-corpus)
    1. [Corpus reader](#Corpus-reader)
    1. [ColorsCorpusExample instances](#ColorsCorpusExample-instances)
    1. [Displaying examples](#Displaying-examples)
    1. [Color representations](#Color-representations)
    1. [Utterance texts](#Utterance-texts)
    1. [Far, Split, and Close conditions](#Far,-Split,-and-Close-conditions)
1. [Toy problems for development work](#Toy-problems-for-development-work)
1. [Core model](#Core-model)
    1. [Toy dataset illustration](#Toy-dataset-illustration)
    1. [Predicting sequences](#Predicting-sequences)
    1. [Listener-based evaluation](#Listener-based-evaluation)
    1. [Other prediction and evaluation methods](#Other-prediction-and-evaluation-methods)
    1. [Cross-validation](#Cross-validation)
1. [Baseline SCC model](#Baseline-SCC-model)
1. [Modifying the core model](#Modifying-the-core-model)
    1. [Illustration: LSTM Cells](#Illustration:-LSTM-Cells)
    1. [Illustration: Deeper models](#Illustration:-Deeper-models)

Overview

This notebook is part of our unit on grounding. It illustrates core concepts from the unit, and it provides useful background material for the associated homework and bake-off.

Set-up
from colors import ColorsCorpusReader
import os
import pandas as pd
from sklearn.model_selection import train_test_split
import torch
from torch_color_describer import (
    ContextualColorDescriber, create_example_dataset)
import utils
from utils import START_SYMBOL, END_SYMBOL, UNK_SYMBOL

utils.fix_random_seeds()
The [Stanford English Colors in Context corpus](https://cocolab.stanford.edu/datasets/colors.html) (SCC) is included in the data distribution for this course. If you store the data in a non-standard place, you'll need to update the following:
COLORS_SRC_FILENAME = os.path.join(
    "data", "colors", "filteredCorpus.csv")
The corpus

The SCC corpus is based on a two-player interactive game. The two players share a context consisting of three color patches, with the display order randomized between them so that they can't use positional information when communicating.

The __speaker__ is privately assigned a target color and asked to produce a description of it that will enable the __listener__ to identify the speaker's target. The listener makes a choice based on the speaker's message, and the two succeed if and only if the listener identifies the target correctly.

In the game, the two players played repeated reference games and could communicate with each other in a free-form way. This opens up the possibility of modeling these repeated interactions as task-oriented dialogues. However, for this unit, we'll ignore most of this structure. We'll treat the corpus as a bunch of independent reference games played by anonymous players, and we will ignore the listener and their choices entirely.

For the bake-off, we will be distributing a separate test set. Thus, all of the data in the SCC can be used for exploration and development.

Corpus reader

The corpus reader class is `ColorsCorpusReader` in `colors.py`. The reader's primary function is to let you iterate over corpus examples:
corpus = ColorsCorpusReader(
    COLORS_SRC_FILENAME,
    word_count=None,
    normalize_colors=True)
The two keyword arguments have their default values here.

* If you supply `word_count` with an integer value, the reader restricts to just the examples where the utterance has that number of words (using a whitespace heuristic). This creates smaller corpora that are useful for development.
* The colors in the corpus are in [HLS format](https://en.wikipedia.org/wiki/HSL_and_HSV). With `normalize_colors=False`, the first (hue) value is an integer between 1 and 360 inclusive, and the L (lightness) and S (saturation) values are between 1 and 100 inclusive. With `normalize_colors=True`, these values are all scaled to between 0 and 1 inclusive. The default is `normalize_colors=True` because this is a better choice for all the machine learning models we'll consider.
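For instance, here is a quick sketch (the comments describe expected value ranges, not checked output) contrasting raw and normalized colors via the `colors` attribute introduced below:

raw_corpus = ColorsCorpusReader(COLORS_SRC_FILENAME, normalize_colors=False)
raw_ex = next(raw_corpus.read())
print(raw_ex.colors)   # HLS triples: H in [1, 360], L and S in [1, 100]

norm_corpus = ColorsCorpusReader(COLORS_SRC_FILENAME, normalize_colors=True)
norm_ex = next(norm_corpus.read())
print(norm_ex.colors)  # the same triples scaled into [0, 1]

Now let's read in all of the examples: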
examples = list(corpus.read())
We can verify that we read in the same number of examples as reported in [Monroe et al. 2017](https://transacl.org/ojs/index.php/tacl/article/view/1142):
# Should be 46994:
len(examples)
ColorsCorpusExample instances

The examples are `ColorsCorpusExample` instances:
ex1 = next(corpus.read())
These objects have a lot of attributes and methods designed to help you study the corpus and use it for our machine learning tasks. Let's review some highlights.

Displaying examples

You can see what the speaker saw, with the utterance they chose written above the patches:
ex1.display(typ='speaker')
The darker blue one
This is the original order of patches for the speaker. The target happens to be the leftmost patch, as indicated by the black box around it.

Here's what the listener saw, with the speaker's message printed above the patches:
ex1.display(typ='listener')
The darker blue one
The listener isn't shown the target, of course, so no patches are highlighted. If `display` is called with no arguments, then the target is placed in the final position and the other two are given in an order determined by the corpus metadata:
ex1.display()
The darker blue one
This is the representation order we use for our machine learning models.

Color representations

For machine learning, we'll often need to access the color representations directly. The primary attribute for this is `colors`:
ex1.colors
In this display order, the third element is the target color and the first two are the distractors. The attributes `speaker_context` and `listener_context` return the same colors but in the order that those players saw them. For example:
ex1.speaker_context
Utterance texts

Utterances are just strings:
ex1.contents
There are cases where the speaker made a sequence of utterances for the same trial. We follow [Monroe et al. 2017](https://transacl.org/ojs/index.php/tacl/article/view/1142) in concatenating these into a single utterance. To preserve the original information, the individual turns are separated by `" ### "`. Example 3 is the first with this property; let's check it out:
ex3 = examples[2]

ex3.contents
The method `parse_turns` will parse this into individual turns:
ex3.parse_turns()
For examples consisting of a single turn, `parse_turns` returns a list of length 1:
ex1.parse_turns()
Far, Split, and Close conditions

The SCC contains three conditions:

__Far condition__: All three colors are far apart in color space. Example:
print("Condition type:", examples[1].condition)
examples[1].display()
Condition type: far
purple
__Split condition__: The target is close to one of the distractors, and the other is far away from both of them. Example:
print("Condition type:", examples[3].condition)
examples[3].display()
Condition type: split
lime
__Close condition__: The target is similar to both distractors. Example:
print("Condition type:", examples[2].condition)
examples[2].display()
Condition type: close
Medium pink ### the medium dark one
These conditions go from easiest to hardest when it comes to reliable communication. In the __Far__ condition, the context is hardly relevant, whereas the nature of the distractors reliably shapes the speaker's choices in the other two conditions. You can begin to see how this affects speaker choices in the above examples: "purple" suffices for the __Far__ condition, a more marked single word ("lime") suffices in the __Split__ condition, and the __Close__ condition triggers a pretty long, complex description. The `condition` attribute provides access to this value:
ex1.condition
The following verifies that we have the same number of examples per condition as reported in [Monroe et al. 2017](https://transacl.org/ojs/index.php/tacl/article/view/1142):
pd.Series([ex.condition for ex in examples]).value_counts()
Toy problems for development work

The SCC corpus is fairly large and quite challenging as an NLU task. This means it isn't ideal when it comes to testing hypotheses and debugging code. Poor performance could trace to a mistake, but it could just as easily trace to the fact that the problem is very challenging from the point of view of optimization.

To address this, the module `torch_color_describer.py` includes a function `create_example_dataset` for creating small, easy datasets with the same basic properties as the SCC corpus.

Here's a toy problem containing just six examples:
tiny_contexts, tiny_words, tiny_vocab = create_example_dataset(
    group_size=2, vec_dim=2)

tiny_vocab
tiny_words
tiny_contexts
Each member of `tiny_contexts` contains three vectors. The final (target) vector always has values in a range that determines the corresponding word sequence, which is drawn from a set of three fixed sequences. Thus, the model basically just needs to learn to ignore the distractors and find the association between the target vector and the corresponding sequence. All the models we study have the capacity to solve this task with very little data, so you should see perfect or near-perfect performance on reasonably sized versions of this task.

Core model

Our core model for this problem is implemented in `torch_color_describer.py` as `ContextualColorDescriber`. At its heart, this is a pretty standard encoder–decoder model:

* `Encoder`: Processes the color contexts as a sequence. We always place the target in final position so that it is closest to the supervision signals that we get when decoding.
* `Decoder`: A neural language model whose initial hidden representation is the final hidden representation of the `Encoder`.
* `EncoderDecoder`: Coordinates the operations of the `Encoder` and `Decoder`.

Finally, `ContextualColorDescriber` is a wrapper around these model components. It handles the details of training and implements the prediction and evaluation functions that we will use.

Many additional details about this model are included in the slides for this unit.

Toy dataset illustration

To highlight the core functionality of `ContextualColorDescriber`, let's create a small toy dataset and use it to train and evaluate a model:
toy_color_seqs, toy_word_seqs, toy_vocab = create_example_dataset(
    group_size=50, vec_dim=2)

toy_color_seqs_train, toy_color_seqs_test, toy_word_seqs_train, toy_word_seqs_test = \
    train_test_split(toy_color_seqs, toy_word_seqs)
Here we expose all of the available parameters with their default values:
toy_mod = ContextualColorDescriber(
    toy_vocab,
    embedding=None,  # Option to supply a pretrained matrix as an `np.array`.
    embed_dim=10,
    hidden_dim=10,
    max_iter=100,
    eta=0.01,
    optimizer=torch.optim.Adam,
    batch_size=128,
    l2_strength=0.0,
    warm_start=False,
    device=None)

_ = toy_mod.fit(toy_color_seqs_train, toy_word_seqs_train)
Epoch 100; err = 0.13451486825942993
Predicting sequences

The `predict` method takes a list of color contexts as input and returns model descriptions:
toy_preds = toy_mod.predict(toy_color_seqs_test)

toy_preds[0]
We can then check what fraction of the predicted sequences exactly match the gold sequences:
toy_correct = sum(1 for x, p in zip(toy_word_seqs_test, toy_preds) if x == p)

toy_correct / len(toy_word_seqs_test)
For real problems, this is too stringent a requirement, since there are generally many equally good descriptions. This insight gives rise to metrics like [BLEU](https://en.wikipedia.org/wiki/BLEU), [METEOR](https://en.wikipedia.org/wiki/METEOR), [ROUGE](https://en.wikipedia.org/wiki/ROUGE_(metric)), [CIDEr](https://arxiv.org/pdf/1411.5726.pdf), and others, which seek to relax the requirement of an exact match with the test sequence. These are reasonable options to explore, but we will instead adopt a communication-based evaluation, as discussed in the next section.

Listener-based evaluation

`ContextualColorDescriber` implements a method `listener_accuracy` that we will use for our primary evaluations in the assignment and bake-off. The essence of the method is that we can calculate

$$c^{*} = \text{argmax}_{c \in C} P_S(\text{utterance} \mid c)$$

where $P_S$ is our describer model and $C$ is the set of all permutations of the three colors in the color context. We take $c^{*}$ to be a correct prediction if it is one where the target is in the privileged final position. (There are two such contexts; we try both in case the order of the distractors influences the predictions, and the model is correct if one of them has the highest probability.)

Here's the listener accuracy of our toy model:
toy_mod.listener_accuracy(toy_color_seqs_test, toy_word_seqs_test)
Other prediction and evaluation methods

You can get the perplexities for test examples with `perplexities`:
toy_perp = toy_mod.perplexities(toy_color_seqs_test, toy_word_seqs_test)

toy_perp[0]
You can use `predict_proba` to see the full probability distributions assigned to test examples:
toy_proba = toy_mod.predict_proba(toy_color_seqs_test, toy_word_seqs_test)

toy_proba[0].shape

for timestep in toy_proba[0]:
    print(dict(zip(toy_vocab, timestep)))
{'<s>': 1.0, '</s>': 0.0, 'A': 0.0, 'B': 0.0, '$UNK': 0.0}
{'<s>': 0.0036859103, '</s>': 0.0002668097, 'A': 0.9854643, 'B': 0.00914348, '$UNK': 0.0014396048}
{'<s>': 0.004782134, '</s>': 0.024507374, 'A': 0.0019362223, 'B': 0.96381474, '$UNK': 0.0049594548}
{'<s>': 0.0050890064, '</s>': 0.9780351, 'A': 0.014443797, 'B': 0.0008280464, '$UNK': 0.0016041624}
Cross-validation

You can use `utils.fit_classifier_with_crossvalidation` to cross-validate these models. Just be sure to set `scoring=None` so that the sklearn model selection methods use the `score` method of `ContextualColorDescriber`, which is an alias for `listener_accuracy`:
best_mod = utils.fit_classifier_with_crossvalidation(
    toy_color_seqs_train,
    toy_word_seqs_train,
    toy_mod,
    cv=2,
    scoring=None,
    param_grid={'hidden_dim': [10, 20]})
Epoch 100; err = 0.12754583358764648
Baseline SCC model

Just to show how all the pieces come together, here's a very basic SCC experiment using the core code and very simplistic assumptions (which you will revisit in the assignment) about how to represent the examples.

To facilitate quick development, we'll restrict attention to the two-word examples:
dev_corpus = ColorsCorpusReader(COLORS_SRC_FILENAME, word_count=2)

dev_examples = list(dev_corpus.read())

len(dev_examples)
Here we extract the raw colors and texts (as strings):
dev_cols, dev_texts = zip(*[[ex.colors, ex.contents] for ex in dev_examples])
To tokenize the examples, we'll just split on whitespace, taking care to add the required boundary symbols:
dev_word_seqs = [[START_SYMBOL] + text.split() + [END_SYMBOL] for text in dev_texts]
We'll use a random train–test split:
dev_cols_train, dev_cols_test, dev_word_seqs_train, dev_word_seqs_test = \
    train_test_split(dev_cols, dev_word_seqs)
Our vocab is determined by the train set, and we take care to include the `$UNK` token:
dev_vocab = sorted({w for toks in dev_word_seqs_train for w in toks}) + [UNK_SYMBOL]
And now we're ready to train a model:
dev_mod = ContextualColorDescriber(
    dev_vocab,
    embed_dim=10,
    hidden_dim=10,
    max_iter=10,
    batch_size=128)

_ = dev_mod.fit(dev_cols_train, dev_word_seqs_train)
Epoch 10; err = 101.7589635848999
And finally an evaluation in terms of listener accuracy:
dev_mod.listener_accuracy(dev_cols_test, dev_word_seqs_test)
Modifying the core model

The first few assignment problems concern how you preprocess the data for your model. After that, the goal is to subclass model components in `torch_color_describer.py`. For the bake-off submission, you can do whatever you like in terms of modeling, but my hope is that you'll be able to continue subclassing based on `torch_color_describer.py`.

This section provides some illustrative examples designed to give you a feel for how the code is structured and what your options are in terms of creating subclasses.

Illustration: LSTM Cells

Both the `Encoder` and the `Decoder` of `torch_color_describer` are currently GRU cells. Switching to another cell type is easy:

__Step 1__: Subclass the `Encoder`; all we have to do here is change `GRU` from the original to `LSTM`:
import torch.nn as nn
from torch_color_describer import Encoder

class LSTMEncoder(Encoder):
    def __init__(self, color_dim, hidden_dim):
        super().__init__(color_dim, hidden_dim)
        self.rnn = nn.LSTM(
            input_size=self.color_dim,
            hidden_size=self.hidden_dim,
            batch_first=True)
__Step 2__: Subclass the `Decoder`, making the same simple change as above:
import torch.nn as nn
from torch_color_describer import Encoder, Decoder

class LSTMDecoder(Decoder):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.rnn = nn.LSTM(
            input_size=self.embed_dim,
            hidden_size=self.hidden_dim,
            batch_first=True)
__Step 3__: `ContextualColorDescriber` has a method called `build_graph` that sets up the `Encoder` and `Decoder`. The needed revision just uses `LSTMEncoder` and `LSTMDecoder`:
from torch_color_describer import EncoderDecoder

class LSTMContextualColorDescriber(ContextualColorDescriber):

    def build_graph(self):

        # Use the new Encoder:
        encoder = LSTMEncoder(
            color_dim=self.color_dim,
            hidden_dim=self.hidden_dim)

        # Use the new Decoder:
        decoder = LSTMDecoder(
            vocab_size=self.vocab_size,
            embed_dim=self.embed_dim,
            embedding=self.embedding,
            hidden_dim=self.hidden_dim)

        return EncoderDecoder(encoder, decoder)
Here's an example run:
lstm_mod = LSTMContextualColorDescriber(
    toy_vocab,
    embed_dim=10,
    hidden_dim=10,
    max_iter=100,
    batch_size=128)

_ = lstm_mod.fit(toy_color_seqs_train, toy_word_seqs_train)

lstm_mod.listener_accuracy(toy_color_seqs_test, toy_word_seqs_test)
Illustration: Deeper models

The `Encoder` and `Decoder` are both currently hard-coded to have just one hidden layer. It is straightforward to make them deeper as long as we ensure that both the `Encoder` and `Decoder` have the same depth; since the `Encoder` final states are the initial hidden states for the `Decoder`, we need this alignment. (Strictly speaking, we could have different numbers of `Encoder` and `Decoder` layers, as long as we did some kind of averaging or copying to achieve the hand-off from `Encoder` to `Decoder`. I'll set this possibility aside.)

__Step 1__: We need to subclass the `Encoder` and `Decoder` so that they have a `num_layers` argument that is fed into the RNN cell:
import torch.nn as nn
from torch_color_describer import Encoder, Decoder

class DeepEncoder(Encoder):
    def __init__(self, *args, num_layers=2, **kwargs):
        super().__init__(*args, **kwargs)
        self.num_layers = num_layers
        self.rnn = nn.GRU(
            input_size=self.color_dim,
            hidden_size=self.hidden_dim,
            num_layers=self.num_layers,
            batch_first=True)

class DeepDecoder(Decoder):
    def __init__(self, *args, num_layers=2, **kwargs):
        super().__init__(*args, **kwargs)
        self.num_layers = num_layers
        self.rnn = nn.GRU(
            input_size=self.embed_dim,
            hidden_size=self.hidden_dim,
            num_layers=self.num_layers,
            batch_first=True)
__Step 2__: As before, we need to update the `build_graph` method of `ContextualColorDescriber`. The needed revision just uses `DeepEncoder` and `DeepDecoder`. To expose this new argument to the user, we also add a new keyword argument to `ContextualColorDescriber`:
from torch_color_describer import EncoderDecoder

class DeepContextualColorDescriber(ContextualColorDescriber):
    def __init__(self, *args, num_layers=2, **kwargs):
        self.num_layers = num_layers
        super().__init__(*args, **kwargs)

    def build_graph(self):
        encoder = DeepEncoder(
            color_dim=self.color_dim,
            hidden_dim=self.hidden_dim,
            num_layers=self.num_layers)  # The new piece is this argument.

        decoder = DeepDecoder(
            vocab_size=self.vocab_size,
            embed_dim=self.embed_dim,
            embedding=self.embedding,
            hidden_dim=self.hidden_dim,
            num_layers=self.num_layers)  # The new piece is this argument.

        return EncoderDecoder(encoder, decoder)
An example/test run:
mod_deep = DeepContextualColorDescriber(
    toy_vocab,
    embed_dim=10,
    hidden_dim=10,
    max_iter=100,
    batch_size=128)

_ = mod_deep.fit(toy_color_seqs_train, toy_word_seqs_train)

mod_deep.listener_accuracy(toy_color_seqs_test, toy_word_seqs_test)
Introduction

In the `numpy` package the terminology used for vectors, matrices and higher-dimensional data sets is *array*.

Creating `numpy` arrays

There are a number of ways to initialize new numpy arrays, for example from

* a Python list or tuple
* functions dedicated to generating numpy arrays, such as `arange`, `linspace`, etc.
* data read from files

From lists

We can use the `numpy.array` function.
# This notebook assumes numpy's names are in the global namespace:
from numpy import *

# a vector: the argument to the array function is a Python list
v = array([1, 2, 3, 4])
v

# a matrix: the argument to the array function is a nested Python list
M = array([[1, 2], [3, 4]])
M
The `v` and `M` objects are both of the type `numpy.ndarray`
type(v), type(M)
The difference between the `v` and `M` arrays is only their shapes. We can check it with the `ndarray.shape` property.
v.shape
M.shape
The number of elements in the array is available through the `ndarray.size` property:
M.size
Equivalently, we could use the functions `numpy.shape` and `numpy.size`:
shape(M)
size(M)
So far the `numpy.ndarray` looks awfully much like a Python list (or nested list). Why not simply use Python lists for computations instead of creating a new array type? **There are several reasons:**

* Python lists are very general.
    - They can contain any kind of object.
    - They are dynamically typed.
* They do not support mathematical functions such as matrix and dot multiplications, etc.
    - Implementing such functions for Python lists would not be very efficient, because of the dynamic typing.
* Numpy arrays are **statically typed** and **homogeneous**.
    - The type of the elements is determined when the array is created.
    - By already knowing the static type, numpy can implement low-level optimizations.
* Numpy arrays are memory efficient.
    - Fast implementations of mathematical functions can be written in a compiled language (C and Fortran are used).

Using the `dtype` (data type) property of an `ndarray`, we can see what type the data of an array has:
M.dtype
We get an error if we try to assign a value of the wrong type to an element in a numpy array:
# Raises ValueError: invalid literal for int() with base 10: 'hello'
M[0,0] = "hello"
If we want, we can explicitly define the type of the array data when we create it, using the `dtype` keyword argument:
M = array([[1, 2], [3, 4]], dtype=complex)
M
Common types that can be used with `dtype` are `int`, `float`, `complex`, `bool`, `object`, etc. We can also explicitly define the bit size of the data types: `int64`, `int16`, `float128`, `complex128`.

If I don't see it, I don't believe it

`ndarray` = n-dimensional array. A quick benchmark:
# Normal python vector
dim = 10000
a = range(dim)
t1 = %timeit -o [i**2 for i in a]

# Numpy vector with normal python loop
b = arange(dim)
t2 = %timeit -o [i**2 for i in b]

# Numpy vector with numpy loop
c = arange(dim)
t3 = %timeit -n 1000 -o [c**2]

print("Python loops (no) speedup: ", t1.best / t2.best)
print("Numpy loops speedup:", int(t1.best / t3.best), "x")
We want to make sure...
print("Type", type(a), [i**2 for i in a][0:10])
print(type(b), (b**2)[0:10])
Using more array-generating functions

arange
# create a range
x = arange(0, 10, 1)  # arguments: start, stop, step
x

x = arange(-1, 1, 0.1)
x
type(x)
mgrid
import numpy
print(numpy.mgrid.__doc__.split('\n')[0])

x, y = mgrid[0:5, 0:5]  # similar to meshgrid in MATLAB
x
y
random data
from numpy import random

# uniform random numbers in [0,1]
random.rand(5,5)

# standard normal distributed random numbers
random.randn(5,5)
diag
# a diagonal matrix
diag([1,2,3])

# diagonal with offset from the main diagonal
diag([1,2,3], k=1)
zeros and ones
zeros((3,3))
ones((3,3))
More properties of arrays
M.itemsize  # bytes per element
M.nbytes    # number of bytes
M.ndim      # number of dimensions

# With `newaxis`, we can insert new dimensions in an array
v = array([1,2,3])
print("Original:", shape(v))

# column matrix
print("Col:", v[:,newaxis].shape)

# row matrix
print("Row:", v[newaxis,:].shape)
Exercise

Create your own matrix and try some of the operations shown so far. A possible solution sketch follows.
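A minimal sketch of one possible solution (any matrix and operations will do; `B` is just an illustrative name):

B = array([[1, 2, 3], [4, 5, 6]])
B.shape          # (2, 3)
B.size           # 6
B.dtype          # an integer dtype, e.g. int64
diag([7, 8, 9])  # one of the generating functions
zeros(B.shape)   # an all-zero array with the same shape

Manipulating arrays

Indexing

We can index elements in an array using square brackets and indices: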
# v is a vector, and has only one dimension, taking one index
v[0]

# M is a matrix, or a 2 dimensional array, taking two indices
M[1,1]
If we omit an index of a multidimensional array, it returns the whole row (or, in general, an (N-1)-dimensional array):
M
M[1]
The same thing can be achieved by using `:` instead of an index:
M[1,:]  # row 1
M[:,1]  # column 1
We can assign new values to elements in an array using indexing
M[0,0] = 1
M

# also works for rows and columns
# (note: this assumes M has at least 3 columns; recreate it as, e.g., a 3x3 first)
M[1,:] = 0
M[:,2] = -1
M
Index slicing

Index slicing is the technical name for the syntax `M[lower:upper:step]` to extract part of an array:
A = array([1,2,3,4,5])
A
A[1:3]
Array slices are *mutable*: if they are assigned a new value, the original array from which the slice was extracted is modified:
A[1:3] = [-2,-3]
A
We can omit any of the three parameters in `M[lower:upper:step]`:
A[::]   # lower, upper, step all take the default values
A[::2]  # step is 2, lower and upper default to the beginning and end of the array
A[:3]   # first three elements
A[3:]   # elements from index 3
Negative indices count from the end of the array (positive indices from the beginning):
A = array([1,2,3,4,5])
A[-1]   # the last element in the array
A[-3:]  # the last three elements
Index slicing works exactly the same way for multidimensional arrays:
A = array([[n+m*10 for n in range(5)] for m in range(5)])
A

# a block from the original array
A[1:4, 1:4]

# strides
A[::2, ::2]
Fancy indexing

Fancy indexing is the name for when **an array or list** is used in place of an *index*:
row_indices = [1, 2, 3]
A[row_indices]

col_indices = [1, 2, -1]  # remember, index -1 means the last element
A[row_indices, col_indices]
Exercise

- Define two odd numbers: n, m
- Create a random matrix with shape n x m
- Compute the middle cell position and get the center element
import random as stdrand

odd_list = range(1, 10, 2)
n = stdrand.choice(odd_list)
m = stdrand.choice(odd_list)
print("Dimensions:", n, m)

MAT = random.randn(n, m)
matrix_center = (n // 2, m // 2)  # integer division, since these are index positions
print(MAT)
MAT[matrix_center]
Dimensions: 3 1
[[-2.19577913]
 [ 1.64321869]
 [-1.65874573]]
We can also index with masks:

* e.g. a Numpy array of data type `bool`
    - an element is selected (True) or not (False)
    - depending on the value of the index mask at the position of each element
B = array([n for n in range(5)])
B

row_mask = array([True, False, True, False, False])
B[row_mask]

# same thing
row_mask = array([1,0,1,0,0], dtype=bool)
B[row_mask]
This feature is very useful to conditionally select elements from an array, using for example comparison operators:
x = arange(0, 10, 0.5)
x

mask = (5 < x) * (x < 7.5)
x[mask]

print(mask)
print(x[mask])
[False False False False False False False False False False False  True
  True  True  True False False False False False]
[ 5.5  6.   6.5  7. ]
Other functions for extracting data from arrays and creating arrays

where

The index mask can be converted to a position index using the `where` function:
indices = where(mask)
indices

x[indices]  # this indexing is equivalent to the fancy indexing x[mask]
diag

With the diag function we can also extract the diagonal and subdiagonals of an array:
diag(A)
diag(A, -1)
choose

Constructs an array by picking elements from several arrays:
which = [1, 0, 1, 0]
choices = [[-2,-2,-2,-2], [5,5,5,5]]
choose(which, choices)
Linear algebra

For efficient numerical calculation with Numpy, computations should always be formulated in terms of matrix and vector operations, like matrix-matrix multiplication.

Scalar-array operations

We can use the usual arithmetic operators to multiply, add, subtract, and divide arrays with scalar numbers:
v1 = arange(0, 5)

v1 * 2
v1 + 2

# Also works on a matrix
A * 2, A + 2
Exercise

Can you list the first 20 *powers of two* using scalar-array operations?
from numpy import array, arange

elements = 20
two = array([2] * elements)
for i in range(len(two)):
    two[i:] = two[i:] * 2
print(two)

# Note: this doubles the tail at each step, so it actually prints 2**2 .. 2**21.
# A fully vectorized alternative would be: 2 ** arange(1, elements + 1)
[ 4 8 16 32 64 128 256 512 1024 2048 4096 8192 16384 32768 65536 131072 262144 524288 1048576 2097152]
Element-wise array-array operations

When we add, subtract, multiply and divide arrays with each other, the default behaviour is **element-wise** operations:
print(A)
print(A * A)  # element-wise multiplication

v1 * v1
If we multiply arrays with compatible shapes, we get an element-wise multiplication of each row:
A.shape, v1.shape

A * v1
Matrix algebra

What about matrix multiplication? We can either use the `dot` function, which applies a matrix-matrix, matrix-vector, or inner vector multiplication to its two arguments:
dot(A, A)
dot(A, v1)
dot(v1, v1)
Alternatively, we can cast the array objects to the type `matrix`. Note: this changes the behavior of the standard arithmetic operators `+, -, *` to use matrix algebra.
M = matrix(A)
M

v = matrix(v1).T  # make it a column vector
v

M * M
M * v    # inner product
v.T * v

# with matrix objects, standard matrix algebra applies
v + M*v
Warning

If we try to add, subtract or multiply objects with incompatible shapes, we get an error:
v = matrix([1,2,3,4,5,6]).T

shape(M), shape(v)

M * v
See also the related functions: `inner`, `outer`, `cross`, `kron`, `tensordot`. A quick sketch of the first three follows.
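A small illustration (not from the original lecture) of `inner`, `outer`, and `cross` on 3-vectors:

a = array([1, 2, 3])
b = array([4, 5, 6])

inner(a, b)  # 1*4 + 2*5 + 3*6 = 32; same as dot for 1-d arrays
outer(a, b)  # 3x3 matrix with entries a[i]*b[j]
cross(a, b)  # vector cross product: array([-3, 6, -3])

Matrix computations

Inverse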
C = matrix([[1j, 2j], [3j, 4j]])

inv(C)  # equivalent to C.I

C.I * C
Determinant
det(C)
det(C.I)
Data processing: file input/output

Comma-separated values (CSV)

A very common file format for data files is comma-separated values (CSV).
# To read data from such a file into Numpy arrays we can use the `numpy.genfromtxt` function
?genfromtxt
data source: https://archive.ics.uci.edu/ml/datasets/Covertype
A = genfromtxt('data/num.csv.gz', delimiter=',')

A.shape
A.size
A[:4,:3]
Using `numpy.savetxt` we can store a Numpy array to a text file. Note that the default delimiter is a space, not a comma, despite the `.csv` name used below:
M = rand(3,3)
M

savetxt("random-matrix.csv", M)

!cat random-matrix.csv
3.703227066684050550e-01 7.210743066150180347e-01 8.177851226846111210e-03
5.481568888557009078e-01 4.370262664386908025e-01 4.854412252923472337e-01
9.957061580132914314e-01 8.303192596515113211e-01 3.483319463859742005e-01
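For an actual comma-separated file, pass an explicit `delimiter` (and optionally a format string). A small sketch, not in the original notebook:

# comma-separated, 5 decimal places per value
savetxt("random-matrix.csv", M, delimiter=",", fmt="%.5f")
!cat random-matrix.csv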
Exercise

Read from the gzipped csv:

- Only rows 11 to 20
- Only the third and the sixth column
- Truncate values to integer
import numpy as np

read = 10
skip = 10
a_len = len(A)

B = genfromtxt('data/num.csv.gz',
               delimiter=',',
               usecols=(2, 5),
               skip_header=skip,
               skip_footer=a_len - (read + skip),
               dtype=np.int16)
Numpy's native file format

Useful when storing and reading back numpy array data. Use the functions `numpy.save` and `numpy.load`:
# numpy binary file saving
save("random-matrix.npy", M)

# check type of file
!file random-matrix.npy

# very fast, but not portable
load("random-matrix.npy")
Statistics

Numpy provides a number of functions to calculate statistics of datasets in arrays.
data = A[:1000,:5]
data.shape
mean
# The mean of the 4th column
mean(data[:,3])
standard deviation and variance
std(data[:,3]), var(data[:,3])
min and max
# search the lowest value of a column
col = 4
print("Min value for", col, "is", data[:,col].min())
print("Max value for", col, "is", data[:,col].max())
Min value for 4 is -45.0
Max value for 4 is 245.0