Rather than using batches, we could just import all the data into an array to save some processing time. (In most examples I'm using the batches, however - just because that's how I happened to start out.)
trn = get_data(path+'train') val = get_data(path+'valid') save_array(path+'results/val.dat', val) save_array(path+'results/trn.dat', trn) val = load_array(path+'results/val.dat') trn = load_array(path+'results/trn.dat')
_____no_output_____
Apache-2.0
deeplearning1/nbs/statefarm.ipynb
Fandekasp/fastai_courses
Re-run sample experiments on full dataset We should find that everything that worked on the sample (see statefarm-sample.ipynb), works on the full dataset too. Only better! Because now we have more data. So let's see how they go - the models in this section are exact copies of the sample notebook models. Single conv layer
def conv1(batches): model = Sequential([ BatchNormalization(axis=1, input_shape=(3,224,224)), Convolution2D(32,3,3, activation='relu'), BatchNormalization(axis=1), MaxPooling2D((3,3)), Convolution2D(64,3,3, activation='relu'), BatchNormalization(axis=1), MaxPooling2D((3,3)), Flatten(), Dense(200, activation='relu'), BatchNormalization(), Dense(10, activation='softmax') ]) model.compile(Adam(lr=1e-4), loss='categorical_crossentropy', metrics=['accuracy']) model.fit_generator(batches, batches.nb_sample, nb_epoch=2, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) model.optimizer.lr = 0.001 model.fit_generator(batches, batches.nb_sample, nb_epoch=4, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) return model model = conv1(batches)
Epoch 1/2 18946/18946 [==============================] - 114s - loss: 0.2273 - acc: 0.9405 - val_loss: 2.4946 - val_acc: 0.2826 Epoch 2/2 18946/18946 [==============================] - 114s - loss: 0.0120 - acc: 0.9990 - val_loss: 1.5872 - val_acc: 0.5253 Epoch 1/4 18946/18946 [==============================] - 114s - loss: 0.0093 - acc: 0.9992 - val_loss: 1.4836 - val_acc: 0.5825 Epoch 2/4 18946/18946 [==============================] - 114s - loss: 0.0032 - acc: 1.0000 - val_loss: 1.3142 - val_acc: 0.6162 Epoch 3/4 18946/18946 [==============================] - 114s - loss: 0.0035 - acc: 0.9996 - val_loss: 1.5061 - val_acc: 0.5771 Epoch 4/4 18946/18946 [==============================] - 114s - loss: 0.0036 - acc: 0.9997 - val_loss: 1.4528 - val_acc: 0.5808
Apache-2.0
deeplearning1/nbs/statefarm.ipynb
Fandekasp/fastai_courses
Interestingly, with no regularization or augmentation we're getting some reasonable results from our simple convolutional model. So with augmentation, we hopefully will see some very good results. Data augmentation
gen_t = image.ImageDataGenerator(rotation_range=15, height_shift_range=0.05, shear_range=0.1, channel_shift_range=20, width_shift_range=0.1) batches = get_batches(path+'train', gen_t, batch_size=batch_size) model = conv1(batches) model.optimizer.lr = 0.0001 model.fit_generator(batches, batches.nb_sample, nb_epoch=15, validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
Epoch 1/15 18946/18946 [==============================] - 114s - loss: 0.2391 - acc: 0.9361 - val_loss: 1.2511 - val_acc: 0.6886 Epoch 2/15 18946/18946 [==============================] - 114s - loss: 0.2075 - acc: 0.9430 - val_loss: 1.1327 - val_acc: 0.7294 Epoch 3/15 18946/18946 [==============================] - 114s - loss: 0.1800 - acc: 0.9529 - val_loss: 1.1099 - val_acc: 0.7294 Epoch 4/15 18946/18946 [==============================] - 114s - loss: 0.1675 - acc: 0.9557 - val_loss: 1.0660 - val_acc: 0.7363 Epoch 5/15 18946/18946 [==============================] - 114s - loss: 0.1432 - acc: 0.9625 - val_loss: 1.1585 - val_acc: 0.7073 Epoch 6/15 18946/18946 [==============================] - 114s - loss: 0.1358 - acc: 0.9627 - val_loss: 1.1389 - val_acc: 0.6947 Epoch 7/15 18946/18946 [==============================] - 114s - loss: 0.1283 - acc: 0.9665 - val_loss: 1.1329 - val_acc: 0.7369 Epoch 8/15 18946/18946 [==============================] - 114s - loss: 0.1180 - acc: 0.9686 - val_loss: 1.1817 - val_acc: 0.7194 Epoch 9/15 18946/18946 [==============================] - 114s - loss: 0.1137 - acc: 0.9704 - val_loss: 1.0923 - val_acc: 0.7142 Epoch 10/15 18946/18946 [==============================] - 114s - loss: 0.1076 - acc: 0.9720 - val_loss: 1.0983 - val_acc: 0.7358 Epoch 11/15 18946/18946 [==============================] - 114s - loss: 0.1032 - acc: 0.9736 - val_loss: 1.0206 - val_acc: 0.7458 Epoch 12/15 18946/18946 [==============================] - 114s - loss: 0.0956 - acc: 0.9740 - val_loss: 0.9039 - val_acc: 0.7809 Epoch 13/15 18946/18946 [==============================] - 114s - loss: 0.0962 - acc: 0.9740 - val_loss: 1.3386 - val_acc: 0.6587 Epoch 14/15 18946/18946 [==============================] - 114s - loss: 0.0892 - acc: 0.9777 - val_loss: 1.1150 - val_acc: 0.7470 Epoch 15/15 18946/18946 [==============================] - 114s - loss: 0.0886 - acc: 0.9773 - val_loss: 1.9190 - val_acc: 0.5802
Apache-2.0
deeplearning1/nbs/statefarm.ipynb
Fandekasp/fastai_courses
I'm shocked by *how* good these results are! We're regularly seeing 75-80% accuracy on the validation set, which puts us into the top third or better of the competition. With such a simple model and no dropout or semi-supervised learning, this really speaks to the power of this approach to data augmentation. Four conv/pooling pairs + dropout Unfortunately, the results are still very unstable - the validation accuracy jumps from epoch to epoch. Perhaps a deeper model with some dropout would help.
gen_t = image.ImageDataGenerator(rotation_range=15, height_shift_range=0.05, shear_range=0.1, channel_shift_range=20, width_shift_range=0.1) batches = get_batches(path+'train', gen_t, batch_size=batch_size) model = Sequential([ BatchNormalization(axis=1, input_shape=(3,224,224)), Convolution2D(32,3,3, activation='relu'), BatchNormalization(axis=1), MaxPooling2D(), Convolution2D(64,3,3, activation='relu'), BatchNormalization(axis=1), MaxPooling2D(), Convolution2D(128,3,3, activation='relu'), BatchNormalization(axis=1), MaxPooling2D(), Flatten(), Dense(200, activation='relu'), BatchNormalization(), Dropout(0.5), Dense(200, activation='relu'), BatchNormalization(), Dropout(0.5), Dense(10, activation='softmax') ]) model.compile(Adam(lr=10e-5), loss='categorical_crossentropy', metrics=['accuracy']) model.fit_generator(batches, batches.nb_sample, nb_epoch=2, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) model.optimizer.lr=0.001 model.fit_generator(batches, batches.nb_sample, nb_epoch=10, validation_data=val_batches, nb_val_samples=val_batches.nb_sample) model.optimizer.lr=0.00001 model.fit_generator(batches, batches.nb_sample, nb_epoch=10, validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
Epoch 1/10 18946/18946 [==============================] - 159s - loss: 0.3183 - acc: 0.8976 - val_loss: 1.0359 - val_acc: 0.7688 Epoch 2/10 18946/18946 [==============================] - 158s - loss: 0.2788 - acc: 0.9109 - val_loss: 1.5806 - val_acc: 0.6705 Epoch 3/10 18946/18946 [==============================] - 158s - loss: 0.2810 - acc: 0.9124 - val_loss: 0.9836 - val_acc: 0.7887 Epoch 4/10 18946/18946 [==============================] - 158s - loss: 0.2403 - acc: 0.9244 - val_loss: 1.1832 - val_acc: 0.7493 Epoch 5/10 18946/18946 [==============================] - 159s - loss: 0.2195 - acc: 0.9303 - val_loss: 1.1524 - val_acc: 0.7510 Epoch 6/10 18946/18946 [==============================] - 159s - loss: 0.2085 - acc: 0.9359 - val_loss: 1.2245 - val_acc: 0.7415 Epoch 7/10 18946/18946 [==============================] - 158s - loss: 0.1961 - acc: 0.9399 - val_loss: 1.1232 - val_acc: 0.7654 Epoch 8/10 18946/18946 [==============================] - 158s - loss: 0.1851 - acc: 0.9416 - val_loss: 1.0956 - val_acc: 0.6892 Epoch 9/10 18946/18946 [==============================] - 158s - loss: 0.1798 - acc: 0.9451 - val_loss: 1.0586 - val_acc: 0.7740 Epoch 10/10 18946/18946 [==============================] - 159s - loss: 0.1669 - acc: 0.9471 - val_loss: 1.4633 - val_acc: 0.6656
Apache-2.0
deeplearning1/nbs/statefarm.ipynb
Fandekasp/fastai_courses
This is looking quite a bit better - the accuracy is similar, but the stability is higher. There's still some way to go however... Imagenet conv features Since we have so little data, and it is similar to imagenet images (full color photos), using pre-trained VGG weights is likely to be helpful - in fact it seems likely that we won't need to fine-tune the convolutional layer weights much, if at all. So we can pre-compute the output of the last convolutional layer, as we did in lesson 3 when we experimented with dropout. (However this means that we can't use full data augmentation, since we can't pre-compute something that changes every image.)
vgg = Vgg16() model=vgg.model last_conv_idx = [i for i,l in enumerate(model.layers) if type(l) is Convolution2D][-1] conv_layers = model.layers[:last_conv_idx+1] conv_model = Sequential(conv_layers) (val_classes, trn_classes, val_labels, trn_labels, val_filenames, filenames, test_filenames) = get_classes(path) conv_feat = conv_model.predict_generator(batches, batches.nb_sample) conv_val_feat = conv_model.predict_generator(val_batches, val_batches.nb_sample) conv_test_feat = conv_model.predict_generator(test_batches, test_batches.nb_sample) save_array(path+'results/conv_val_feat.dat', conv_val_feat) save_array(path+'results/conv_test_feat.dat', conv_test_feat) save_array(path+'results/conv_feat.dat', conv_feat) conv_feat = load_array(path+'results/conv_feat.dat') conv_val_feat = load_array(path+'results/conv_val_feat.dat') conv_val_feat.shape
_____no_output_____
Apache-2.0
deeplearning1/nbs/statefarm.ipynb
Fandekasp/fastai_courses
Batchnorm dense layers on pretrained conv layers Since we've pre-computed the output of the last convolutional layer, we need to create a network that takes that as input, and predicts our 10 classes. Let's try using a simplified version of VGG's dense layers.
def get_bn_layers(p): return [ MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]), Flatten(), Dropout(p/2), Dense(128, activation='relu'), BatchNormalization(), Dropout(p/2), Dense(128, activation='relu'), BatchNormalization(), Dropout(p), Dense(10, activation='softmax') ] p=0.8 bn_model = Sequential(get_bn_layers(p)) bn_model.compile(Adam(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy']) bn_model.fit(conv_feat, trn_labels, batch_size=batch_size, nb_epoch=1, validation_data=(conv_val_feat, val_labels)) bn_model.optimizer.lr=0.01 bn_model.fit(conv_feat, trn_labels, batch_size=batch_size, nb_epoch=2, validation_data=(conv_val_feat, val_labels)) bn_model.save_weights(path+'models/conv8.h5')
_____no_output_____
Apache-2.0
deeplearning1/nbs/statefarm.ipynb
Fandekasp/fastai_courses
Looking good! Let's try pre-computing 5 epochs worth of augmented data, so we can experiment with combining dropout and augmentation on the pre-trained model. Pre-computed data augmentation + dropout We'll use our usual data augmentation parameters:
gen_t = image.ImageDataGenerator(rotation_range=15, height_shift_range=0.05, shear_range=0.1, channel_shift_range=20, width_shift_range=0.1) da_batches = get_batches(path+'train', gen_t, batch_size=batch_size, shuffle=False)
Found 18946 images belonging to 10 classes.
Apache-2.0
deeplearning1/nbs/statefarm.ipynb
Fandekasp/fastai_courses
We use those to create a dataset of convolutional features 5x bigger than the training set.
da_conv_feat = conv_model.predict_generator(da_batches, da_batches.nb_sample*5) save_array(path+'results/da_conv_feat2.dat', da_conv_feat) da_conv_feat = load_array(path+'results/da_conv_feat2.dat')
_____no_output_____
Apache-2.0
deeplearning1/nbs/statefarm.ipynb
Fandekasp/fastai_courses
Let's include the real training data as well in its non-augmented form.
da_conv_feat = np.concatenate([da_conv_feat, conv_feat])
_____no_output_____
Apache-2.0
deeplearning1/nbs/statefarm.ipynb
Fandekasp/fastai_courses
Since we've now got a dataset 6x bigger than before, we'll need to copy our labels 6 times too.
da_trn_labels = np.concatenate([trn_labels]*6)
_____no_output_____
Apache-2.0
deeplearning1/nbs/statefarm.ipynb
Fandekasp/fastai_courses
Based on some experiments, the previous model works well with bigger dense layers.
def get_bn_da_layers(p): return [ MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]), Flatten(), Dropout(p), Dense(256, activation='relu'), BatchNormalization(), Dropout(p), Dense(256, activation='relu'), BatchNormalization(), Dropout(p), Dense(10, activation='softmax') ] p=0.8 bn_model = Sequential(get_bn_da_layers(p)) bn_model.compile(Adam(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy'])
_____no_output_____
Apache-2.0
deeplearning1/nbs/statefarm.ipynb
Fandekasp/fastai_courses
Now we can train the model as usual, with pre-computed augmented data.
bn_model.fit(da_conv_feat, da_trn_labels, batch_size=batch_size, nb_epoch=1, validation_data=(conv_val_feat, val_labels)) bn_model.optimizer.lr=0.01 bn_model.fit(da_conv_feat, da_trn_labels, batch_size=batch_size, nb_epoch=4, validation_data=(conv_val_feat, val_labels)) bn_model.optimizer.lr=0.0001 bn_model.fit(da_conv_feat, da_trn_labels, batch_size=batch_size, nb_epoch=4, validation_data=(conv_val_feat, val_labels))
Train on 113676 samples, validate on 3478 samples Epoch 1/4 113676/113676 [==============================] - 16s - loss: 0.3837 - acc: 0.8775 - val_loss: 0.6904 - val_acc: 0.8197 Epoch 2/4 113676/113676 [==============================] - 16s - loss: 0.3576 - acc: 0.8872 - val_loss: 0.6593 - val_acc: 0.8209 Epoch 3/4 113676/113676 [==============================] - 16s - loss: 0.3384 - acc: 0.8939 - val_loss: 0.7057 - val_acc: 0.8085 Epoch 4/4 113676/113676 [==============================] - 16s - loss: 0.3254 - acc: 0.8977 - val_loss: 0.6867 - val_acc: 0.8128
Apache-2.0
deeplearning1/nbs/statefarm.ipynb
Fandekasp/fastai_courses
Looks good - let's save those weights.
bn_model.save_weights(path+'models/da_conv8_1.h5')
_____no_output_____
Apache-2.0
deeplearning1/nbs/statefarm.ipynb
Fandekasp/fastai_courses
Pseudo labeling We're going to try using a combination of [pseudo labeling](http://deeplearning.net/wp-content/uploads/2013/03/pseudo_label_final.pdf) and [knowledge distillation](https://arxiv.org/abs/1503.02531) to allow us to use unlabeled data (i.e. do semi-supervised learning). For our initial experiment we'll use the validation set as the unlabeled data, so that we can see that it is working without using the test set. At a later date we'll try using the test set. To do this, we simply calculate the predictions of our model...
val_pseudo = bn_model.predict(conv_val_feat, batch_size=batch_size)
_____no_output_____
Apache-2.0
deeplearning1/nbs/statefarm.ipynb
Fandekasp/fastai_courses
...concatenate them with our training labels...
comb_pseudo = np.concatenate([da_trn_labels, val_pseudo]) comb_feat = np.concatenate([da_conv_feat, conv_val_feat])
_____no_output_____
Apache-2.0
deeplearning1/nbs/statefarm.ipynb
Fandekasp/fastai_courses
...and fine-tune our model using that data.
bn_model.load_weights(path+'models/da_conv8_1.h5') bn_model.fit(comb_feat, comb_pseudo, batch_size=batch_size, nb_epoch=1, validation_data=(conv_val_feat, val_labels)) bn_model.fit(comb_feat, comb_pseudo, batch_size=batch_size, nb_epoch=4, validation_data=(conv_val_feat, val_labels)) bn_model.optimizer.lr=0.00001 bn_model.fit(comb_feat, comb_pseudo, batch_size=batch_size, nb_epoch=4, validation_data=(conv_val_feat, val_labels))
Train on 117154 samples, validate on 3478 samples Epoch 1/4 117154/117154 [==============================] - 17s - loss: 0.2837 - acc: 0.9134 - val_loss: 0.7901 - val_acc: 0.8200 Epoch 2/4 117154/117154 [==============================] - 17s - loss: 0.2760 - acc: 0.9155 - val_loss: 0.7648 - val_acc: 0.8275 Epoch 3/4 117154/117154 [==============================] - 17s - loss: 0.2723 - acc: 0.9183 - val_loss: 0.7382 - val_acc: 0.8358 Epoch 4/4 117154/117154 [==============================] - 17s - loss: 0.2657 - acc: 0.9191 - val_loss: 0.7227 - val_acc: 0.8329
Apache-2.0
deeplearning1/nbs/statefarm.ipynb
Fandekasp/fastai_courses
That's a distinct improvement - even though the validation set isn't very big. This looks encouraging for when we try this on the test set.
bn_model.save_weights(path+'models/bn-ps8.h5')
_____no_output_____
Apache-2.0
deeplearning1/nbs/statefarm.ipynb
Fandekasp/fastai_courses
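To recap the recipe used in this section, here is a framework-agnostic sketch of pseudo labeling (not the notebook's code: the array names and the scikit-learn model are made up for illustration, and it uses hard pseudo-labels where the cells above use the soft predicted probabilities):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical arrays: (X_lab, y_lab) are labelled, X_unlab is unlabelled.
rng = np.random.RandomState(0)
X_lab = rng.randn(200, 20)
y_lab = (X_lab[:, 0] > 0).astype(int)
X_unlab = rng.randn(1000, 20)

# 1. Fit an initial model on the labelled data only.
model = LogisticRegression().fit(X_lab, y_lab)

# 2. Predict pseudo-labels for the unlabelled data.
pseudo = model.predict(X_unlab)

# 3. Retrain on the combined data, down-weighting the pseudo-labelled part.
X_comb = np.concatenate([X_lab, X_unlab])
y_comb = np.concatenate([y_lab, pseudo])
weights = np.concatenate([np.ones(len(y_lab)), np.full(len(pseudo), 0.3)])
model = LogisticRegression().fit(X_comb, y_comb, sample_weight=weights)
```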
Submit We'll find a good clipping amount using the validation set, prior to submitting.
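Why clip at all? Kaggle's multi-class log loss charges -log(p) for the probability p assigned to the true class, so a single confidently wrong prediction can dominate the score. The `do_clip` helper defined in the next cell floors each probability at (1-mx)/9; a rough illustration of the effect (not part of the original notebook):

```python
import numpy as np

p_unclipped = 1e-15          # a confidently wrong prediction
p_clipped = (1 - 0.93) / 9   # the floor that clipping at mx=0.93 imposes
print(-np.log(p_unclipped))  # ~34.5 per image
print(-np.log(p_clipped))    # ~4.86 per image
```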
def do_clip(arr, mx): return np.clip(arr, (1-mx)/9, mx) val_preds = bn_model.predict(conv_val_feat, batch_size=batch_size*2) keras.metrics.categorical_crossentropy(val_labels, do_clip(val_preds, 0.93)).eval() conv_test_feat = load_array(path+'results/conv_test_feat.dat') preds = bn_model.predict(conv_test_feat, batch_size=batch_size*2) subm = do_clip(preds,0.93) subm_name = path+'results/subm.gz' classes = sorted(batches.class_indices, key=batches.class_indices.get) submission = pd.DataFrame(subm, columns=classes) submission.insert(0, 'img', [a[4:] for a in test_filenames]) submission.head() submission.to_csv(subm_name, index=False, compression='gzip') FileLink(subm_name)
_____no_output_____
Apache-2.0
deeplearning1/nbs/statefarm.ipynb
Fandekasp/fastai_courses
This gets 0.534 on the leaderboard. The "things that didn't really work" section You can safely ignore everything from here on, because they didn't really help. Finetune some conv layers too
for l in get_bn_layers(p): conv_model.add(l) for l1,l2 in zip(bn_model.layers, conv_model.layers[last_conv_idx+1:]): l2.set_weights(l1.get_weights()) for l in conv_model.layers: l.trainable =False for l in conv_model.layers[last_conv_idx+1:]: l.trainable =True comb = np.concatenate([trn, val]) gen_t = image.ImageDataGenerator(rotation_range=8, height_shift_range=0.04, shear_range=0.03, channel_shift_range=10, width_shift_range=0.08) batches = gen_t.flow(comb, comb_pseudo, batch_size=batch_size) val_batches = get_batches(path+'valid', batch_size=batch_size*2, shuffle=False) conv_model.compile(Adam(lr=0.00001), loss='categorical_crossentropy', metrics=['accuracy']) conv_model.fit_generator(batches, batches.N, nb_epoch=1, validation_data=val_batches, nb_val_samples=val_batches.N) conv_model.optimizer.lr = 0.0001 conv_model.fit_generator(batches, batches.N, nb_epoch=3, validation_data=val_batches, nb_val_samples=val_batches.N) for l in conv_model.layers[16:]: l.trainable =True conv_model.optimizer.lr = 0.00001 conv_model.fit_generator(batches, batches.N, nb_epoch=8, validation_data=val_batches, nb_val_samples=val_batches.N) conv_model.save_weights(path+'models/conv8_ps.h5') conv_model.load_weights(path+'models/conv8_da.h5') val_pseudo = conv_model.predict(val, batch_size=batch_size*2) save_array(path+'models/pseudo8_da.dat', val_pseudo)
_____no_output_____
Apache-2.0
deeplearning1/nbs/statefarm.ipynb
Fandekasp/fastai_courses
Ensembling
drivers_ds = pd.read_csv(path+'driver_imgs_list.csv') drivers_ds.head() img2driver = drivers_ds.set_index('img')['subject'].to_dict() driver2imgs = {k: g["img"].tolist() for k,g in drivers_ds[['subject', 'img']].groupby("subject")} def get_idx(driver_list): return [i for i,f in enumerate(filenames) if img2driver[f[3:]] in driver_list] drivers = driver2imgs.keys() rnd_drivers = np.random.permutation(drivers) ds1 = rnd_drivers[:len(rnd_drivers)//2] ds2 = rnd_drivers[len(rnd_drivers)//2:] models=[fit_conv([d]) for d in drivers] models=[m for m in models if m is not None] all_preds = np.stack([m.predict(conv_test_feat, batch_size=128) for m in models]) avg_preds = all_preds.mean(axis=0) avg_preds = avg_preds/np.expand_dims(avg_preds.sum(axis=1), 1) keras.metrics.categorical_crossentropy(val_labels, np.clip(avg_val_preds,0.01,0.99)).eval() keras.metrics.categorical_accuracy(val_labels, np.clip(avg_val_preds,0.01,0.99)).eval()
_____no_output_____
Apache-2.0
deeplearning1/nbs/statefarm.ipynb
Fandekasp/fastai_courses
Built-in function
print(abs(4.5)) print(abs(-4.5))
4.5 4.5
MIT
mod_05_list_tuple_dict/Untitled0.ipynb
merazlab/python
all - Return True if all elements of the iterable are true (or if the iterable is empty)
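For reference, `all` behaves like this pure-Python equivalent (the reference implementation given in the Python documentation):

```python
def all(iterable):
    for element in iterable:
        if not element:
            return False
    return True
```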
a = [2, 3, 4, 5] b = [0] c = [] d = [2, 3, 4, 0] print(all(a)) print(all(b)) print(all(c)) print(all(d))
True False True False
MIT
mod_05_list_tuple_dict/Untitled0.ipynb
merazlab/python
any - Return True if any element of the iterable is true. If the iterable is empty, return False. Equivalent to:
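The equivalent referred to above (from the Python documentation) is:

```python
def any(iterable):
    for element in iterable:
        if element:
            return True
    return False
```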
a = [2, 3, 4, 5] b = [0] c = [] d = [2, 3, 4, 0] print(any(a)) print(any(b)) print(any(c)) print(any(d))
True False False True
MIT
mod_05_list_tuple_dict/Untitled0.ipynb
merazlab/python
ascii
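The cell below only calls `ascii` on an integer; a small extra example (added here for illustration) shows the escaping behaviour that distinguishes `ascii` from `repr`:

```python
text = 'café'
print(repr(text))   # 'café'    - repr keeps non-ASCII characters
print(ascii(text))  # 'caf\xe9' - ascii escapes them
```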
print(ascii(1111)) dir() import struct dir(struct) meta = {} item = 'example_key' meta[item] = [] # `import struct` and a placeholder `item` added so the cell runs
_____no_output_____
MIT
mod_05_list_tuple_dict/Untitled0.ipynb
merazlab/python
Using notebooks we will explore the process of training and consuming a model. First let's load some packages to manipulate images
#r "nuget:SixLabors.ImageSharp,1.0.2"
_____no_output_____
MIT
Notebooks/Classifier.ipynb
colombod/MachineTeaching
Get images. We can download images from the web; let's create a helper function
using SixLabors.ImageSharp; using SixLabors.ImageSharp.PixelFormats; using System.Net.Http; Image GetImage(string url) { var client = new HttpClient(); var image = client.GetByteArrayAsync(url).Result; return Image.Load(image); } var image = GetImage("https://user-images.githubusercontent.com/2546640/56708992-deee8780-66ec-11e9-9991-eb85abb1d10a.png"); image
_____no_output_____
MIT
Notebooks/Classifier.ipynb
colombod/MachineTeaching
It would be better to see the image; let's use the formatter API
using System.IO; using SixLabors.ImageSharp.Formats.Png; using Microsoft.DotNet.Interactive.Formatting; Formatter.Register<Image>((image, writer) => { var id = Guid.NewGuid().ToString("N"); using var stream = new MemoryStream(); image.Save(stream, new PngEncoder()); stream.Flush(); var data = stream.ToArray(); var imageSource = $"data:image/png;base64, {Convert.ToBase64String(data)}"; PocketView imgTag = PocketViewTags.img[id: id, src: imageSource, height: image.Height, width: image.Width](); writer.Write(imgTag); }, HtmlFormatter.MimeType); image
_____no_output_____
MIT
Notebooks/Classifier.ipynb
colombod/MachineTeaching
Good but something smaller would be better
using SixLabors.ImageSharp.Processing; Image Reduce(Image source, int maxSize = 300){ var max = Math.Max(source.Width, source.Height); var ratio = ((double)(maxSize)) / max; return source.Clone(c => c.Resize((int)(source.Width * ratio), (int)(source.Height * ratio))); } Reduce(image)
_____no_output_____
MIT
Notebooks/Classifier.ipynb
colombod/MachineTeaching
Better. Now I am interested in Beyblades; let's display some
var urls = new string[]{ "https://cdn.shopify.com/s/files/1/0016/0674/6186/products/B154_1_1024x1024.jpg?v=1573909023", "https://i.ytimg.com/vi/yUH2QeluaIU/maxresdefault.jpg", "https://www.biggerbids.com/members/images/29371/public/8065336_-DSC5628-32467-26524-.jpg", "https://i.ytimg.com/vi/BT4SwVmnqqQ/maxresdefault.jpg", "https://cdn.shopify.com/s/files/1/0016/0674/6186/products/B160covercopy2_1200x1200.jpg?v=1585425105", "https://animeukiyo.com/wp-content/uploads/2020/05/king-helios-zone-1B-1140x570.jpg", "https://http2.mlstatic.com/beyblade-burn-phoenix-ice-blue-90wf-takara-tomy-frete-pac-D_NQ_NP_19415-MLB20171031427_092014-F.jpg" }; var beyBlades = urls.Select(url => new { Image = Reduce(GetImage(url))}); beyBlades
_____no_output_____
MIT
Notebooks/Classifier.ipynb
colombod/MachineTeaching
Enter lobe. We will now use lobe and its .NET bindings to develop a model to classify those images. Let's start lobe and have a look first, then we will proceed with loading the packages we need.
#r "nuget:lobe" #r "nuget:lobe.ImageSharp" using lobe; using lobe.ImageSharp;
_____no_output_____
MIT
Notebooks/Classifier.ipynb
colombod/MachineTeaching
Lobe can be accessed via a web API; let's use that for fast iteration loops
#r "nuget:lobe.Http" using lobe.Http; var beyblades_start = new Uri("http://localhost:38100/predict/3af915df-14b7-4834-afbd-6615deca4e26"); var beyblades = new Uri("http://localhost:38100/predict/f56e1050-391e-4cd6-9bb9-ff74dc4d84f5"); var beyblades_2 = new Uri("http://localhost:38100/predict/f56e1050-391e-4cd6-9bb9-ff74dc4d84f5"); var beyblades_3 = new Uri("http://localhost:38100/predict/a3271b3a-f63b-4c00-9304-beda43375284"); var beyblade_remote = new Uri("http://lobe-diego.ngrok.io/predict/2a6a3005-a8cc-4bc1-a71a-a0fe85f258bb"); var httpClassifier = new LobeClient(beyblades_3); httpClassifier.Classify(beyBlades.First().Image.CloneAs<Rgb24>()) var imageSources = urls.Select(url => Reduce(GetImage(url),800).CloneAs<Rgb24>()).ToList(); var classifications = imageSources.Select((img) => { var cls = httpClassifier.Classify(img); return new { Image = Reduce(img), Label = cls.Prediction.Label, Confidence = cls.Prediction.Confidence }; }); classifications
_____no_output_____
MIT
Notebooks/Classifier.ipynb
colombod/MachineTeaching
- Install TPOT within Anaconda - https://anaconda.org/conda-forge/tpot - More details - https://epistasislab.github.io/tpot/using/ - GitHub - https://github.com/EpistasisLab/tpot/
import pandas as pd from tpot import TPOTClassifier from sklearn.model_selection import train_test_split train = pd.read_csv("excel_full_train.csv") test = pd.read_csv("excel_test.csv") X = train.drop(['PassengerId','Survived'], axis = 1) y = train['Survived'] #### Use Test Train Split to divide into train and test import numpy as np from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=21)
_____no_output_____
MIT
Titanic Solution using Excel and Python/21_Titanic_Solution_TPOT_All_feat_30Jan2019.ipynb
KunaalNaik/YT_Kaggle_Titanic_Solution
Run AutoML with TPOT **Verbose** - How much information TPOT communicates while it is running.- 0 = none, 1 = minimal, 2 = high, 3 = all.- A setting of 2 or higher will add a progress bar during the optimization procedure.
#Set max time to 1 minute tpot = TPOTClassifier(verbosity=2, max_time_mins=1) tpot.fit(X_train, y_train) print(f'Test : {tpot.score(X_test, y_test):.3f}') print(f'Train : {tpot.score(X_train, y_train):.3f}')
Test : 0.840 Train : 0.867
MIT
Titanic Solution using Excel and Python/21_Titanic_Solution_TPOT_All_feat_30Jan2019.ipynb
KunaalNaik/YT_Kaggle_Titanic_Solution
Export Best Pipeline
tpot.export('Auto_ML_TPOT_titanic_pipeline2.py')
_____no_output_____
MIT
Titanic Solution using Excel and Python/21_Titanic_Solution_TPOT_All_feat_30Jan2019.ipynb
KunaalNaik/YT_Kaggle_Titanic_Solution
Prediction of Test
sub_test = test.drop(['PassengerId'], axis = 1) sub_test_pred = tpot.predict(sub_test).astype(int) AllSub = pd.DataFrame({ 'PassengerId': test['PassengerId'], 'Survived' : sub_test_pred }) AllSub.to_csv("Auto_ML_TPOT_Titanic_Solution.csv", index = False) #Kaggle LB Score - 0.78468
_____no_output_____
MIT
Titanic Solution using Excel and Python/21_Titanic_Solution_TPOT_All_feat_30Jan2019.ipynb
KunaalNaik/YT_Kaggle_Titanic_Solution
Display Sample Records
import gzip import json import re import os import sys import numpy as np import pandas as pd
_____no_output_____
Apache-2.0
samples.ipynb
nancywen25/goodreads
**Specify your directory here:**
DIR = './data'
_____no_output_____
Apache-2.0
samples.ipynb
nancywen25/goodreads
**This function shows how to load datasets**
def load_data(file_name, head = 500): ''' Given a *.json.gz file, returns a list of dictionaries, optionally can select the first n records ''' count = 0 data = [] with gzip.open(file_name) as fin: for l in fin: d = json.loads(l) count += 1 data.append(d) # break if reaches the 500th line if (head is not None) and (count >= head): break return data
_____no_output_____
Apache-2.0
samples.ipynb
nancywen25/goodreads
**Load and display sample records of books/authors/works/series**
poetry = load_data(os.path.join(DIR, 'goodreads_books_poetry.json.gz')) # books = load_data(os.path.join(DIR, 'goodreads_books.json.gz')) # authors = load_data(os.path.join(DIR, 'goodreads_book_authors.json.gz')) # works = load_data(os.path.join(DIR, 'goodreads_book_works.json.gz')) # series = load_data(os.path.join(DIR, 'goodreads_book_series.json.gz')) len(poetry) poetry[0] # print(' == sample record (books) ==') # display(np.random.choice(books)) # print(' == sample record (authors) ==') # display(np.random.choice(authors)) # print(' == sample record (works) ==') # display(np.random.choice(works)) # print(' == sample record (series) ==') # display(np.random.choice(series))
_____no_output_____
Apache-2.0
samples.ipynb
nancywen25/goodreads
**Load and display sample records of user-book interactions (shelves)**
interactions = load_data(os.path.join(DIR, 'goodreads_interactions_poetry.json.gz')) np.random.choice(interactions)
_____no_output_____
Apache-2.0
samples.ipynb
nancywen25/goodreads
**Load and display sample records of book reviews**
reviews = load_data(os.path.join(DIR, 'goodreads_reviews_poetry.json.gz')) np.random.choice(reviews)
_____no_output_____
Apache-2.0
samples.ipynb
nancywen25/goodreads
**Load and display sample records of book reviews (with spoiler tags)**
spoilers = load_data(os.path.join(DIR, 'goodreads_reviews_spoiler.json.gz')) np.random.choice([s for s in spoilers if s['has_spoiler']]) # spoilers = load_data(os.path.join(DIR, 'goodreads_reviews_spoiler_raw.json.gz')) # np.random.choice([s for s in spoilers if 'view spoiler' in s['review_text']])
_____no_output_____
Apache-2.0
samples.ipynb
nancywen25/goodreads
Introduction to the Interstellar Medium Jonathan Williams Figure 4.2: Extinction curve uses extcurve_s16.py and cubicspline.py from https://faun.rc.fas.harvard.edu/eschlafly/apored/extcurve.html
import numpy as np import matplotlib.pyplot as plt %matplotlib inline import extcurve_s16 fig = plt.figure(figsize=(6,4)) ax1 = fig.add_subplot(1,1,1) #ax1.set_xlabel('$\lambda$ (nm)', fontsize=16) ax1.set_xlabel('$\lambda\ (\mu m)$', fontsize=16) ax1.set_ylabel('$A(\lambda)/A_K$', fontsize=16) #ax1.set_xlim(350,2500) ax1.set_xlim(0.350,2.500) #ax1.set_ylim(0,1.3) ax1.set_ylim(0,15) lam = np.linspace(500,2500, 100) lam_ext = np.linspace(350,500, 10) oir = np.nonzero((lam > 500) & (lam < 3000)) ec = extcurve_s16.extcurve(0.0) #f = ec(5420)/ec(5510) f = ec(5420)/ec(21900) x = np.log10(lam) y = f*ec(10*lam) w = 500/lam[oir] w = lam[oir] * 0 + 1 a,b = np.polyfit(x[oir],np.log10(y[oir]),1,w=w) print("R_V = 3.3 power law index = {0:4.2f}".format(a)) #ax1.plot(10**x,10**(a*x+b),'r-') #ax1.plot(lam, y, 'k-', lw=2) #ax1.plot(lam_ext, f*ec(10*lam_ext), 'k:', lw=2) ax1.plot(lam/1000, y, 'k-', lw=2) ax1.plot(lam_ext/1000, f*ec(10*lam_ext), 'k:', lw=2) ec = extcurve_s16.extcurve(0.04) #f = ec(5420)/ec(5510) f = ec(5420)/ec(21900) y = f*ec(10*lam) a,b = np.polyfit(x[oir],np.log10(y[oir]),1,w=w) print("R_V = 3.6 power law index = {0:4.2f}".format(a)) #ax1.plot(10**x,10**(a*x+b),'r-') #ax1.plot(lam, f*ec(10*lam), 'k-', lw=0.5) #ax1.plot(lam_ext, f*ec(10*lam_ext), 'k:', lw=0.5) ax1.plot(lam/1000, f*ec(10*lam), 'k-', lw=0.5) ax1.plot(lam_ext/1000, f*ec(10*lam_ext), 'k:', lw=0.5) ec = extcurve_s16.extcurve(-0.04) #f = ec(5420)/ec(5510) f = ec(5420)/ec(21900) y = f*ec(10*lam) a,b = np.polyfit(x[oir],np.log10(y[oir]),1,w=w) print("R_V = 3.0 power law index = {0:4.2f}".format(a)) ax1.plot(lam/1000, f*ec(10*lam), 'k-', lw=0.5) ax1.plot(lam_ext/1000, f*ec(10*lam_ext), 'k:', lw=0.5) ylab = 3.7 plt.text(.445, ylab, 'B', fontsize=16, ha='center') plt.text(.551, ylab, 'V', fontsize=16, ha='center') plt.text(.656, ylab, 'R', fontsize=16, ha='center') plt.text(.806, ylab, 'I', fontsize=16, ha='center') plt.text(1.220,ylab, 'J', fontsize=16, ha='center') plt.text(1.630,ylab, 'H', fontsize=16, ha='center') plt.text(2.190,ylab, 'K', fontsize=16, ha='center') plt.savefig('extinction.pdf')
R_V = 3.3 power law index = -1.76 R_V = 3.6 power law index = -1.72 R_V = 3.0 power law index = -1.79
CC0-1.0
dust/.ipynb_checkpoints/extinction-checkpoint.ipynb
CambridgeUniversityPress/IntroductionInterstellarMedium
dataset = [ ['i1','i2','i5'], ['i2', 'i4'], ['i2', 'i3'], ['i1', 'i2', 'i4'], ['i1', 'i3'], ['i2', 'i3'], ['i1','i3'], ['i1', 'i2', 'i3','i5'], ['i1', 'i2','i3']] import pandas as pd from mlxtend.preprocessing import TransactionEncoder te = TransactionEncoder() te_ary = te.fit(dataset).transform(dataset) df = pd.DataFrame(te_ary, columns=te.columns_) df from mlxtend.frequent_patterns import apriori apriori(df, min_support=0.22) apriori(df, min_support=0.22, use_colnames=True) frequent_itemsets = apriori(df, min_support=0.2, use_colnames=True) frequent_itemsets['length'] = frequent_itemsets['itemsets'].apply(lambda x: len(x)) frequent_itemsets frequent_itemsets[ (frequent_itemsets['length'] == 2) & (frequent_itemsets['support'] >= 0.2) ] frequent_itemsets[ (frequent_itemsets['length'] == 3) & (frequent_itemsets['support'] >= 0.2) ] dataset = [ ['i1','i2','i5'], ['i2', 'i4'], ['i1', 'i2', 'i4'], ['i1', 'i3'], ['i2', 'i3'], ['i1','i3'], ['i1', 'i2','i3']] print(dataset) te_ary = te.fit(dataset).transform(dataset) df = pd.DataFrame(te_ary, columns=te.columns_) frequent_itemsets = apriori(df, min_support=0.22, use_colnames=True) frequent_itemsets['length'] = frequent_itemsets['itemsets'].apply(lambda x: len(x)) frequent_itemsets dataset = [ ['i1','i2','i4'], ['i1', 'i4'], ['i2', 'i3', 'i4'], ['i2', 'i3'], ['i2', 'i4'], ['i1','i5'], ['i1', 'i4','i5']] te_ary = te.fit(dataset).transform(dataset) df = pd.DataFrame(te_ary, columns=te.columns_) frequent_itemsets = apriori(df, min_support=0.20, use_colnames=True) frequent_itemsets['length'] = frequent_itemsets['itemsets'].apply(lambda x: len(x)) frequent_itemsets
_____no_output_____
Apache-2.0
ASSIGNMENT_7.ipynb
archana1822/DMDW
.. _data_tutorial:.. currentmodule:: seaborn Data structures accepted by seaborn===================================.. raw:: html As a data visualization library, seaborn requires that you provide it with data. This chapter explains the various ways to accomplish that task. Seaborn supports several different dataset formats, and most functions accept data represented with objects from the `pandas `_ or `numpy `_ libraries as well as built-in Python types like lists and dictionaries. Understanding the usage patterns associated with these different options will help you quickly create useful visualizations for nearly any dataset... note:: As of current writing (v0.11.0), the full breadth of options covered here are supported by only a subset of the modules in seaborn (namely, the :ref:`relational ` and :ref:`distribution ` modules). The other modules offer much of the same flexibility, but have some exceptions (e.g., :func:`catplot` and :func:`lmplot` are limited to long-form data with named variables). The data-ingest code will be standardized over the next few release cycles, but until that point, be mindful of the specific documentation for each function if it is not doing what you expect with your dataset.
import numpy as np import pandas as pd import seaborn as sns sns.set_theme()
_____no_output_____
MIT
doc/tutorial/data_structure.ipynb
luzpaz/seaborn
Long-form vs. wide-form data----------------------------Most plotting functions in seaborn are oriented towards *vectors* of data. When plotting ``x`` against ``y``, each variable should be a vector. Seaborn accepts data *sets* that have more than one vector organized in some tabular fashion. There is a fundamental distinction between "long-form" and "wide-form" data tables, and seaborn will treat each differently.Long-form data~~~~~~~~~~~~~~A long-form data table has the following characteristics:- Each variable is a column- Each observation is a row As a simple example, consider the "flights" dataset, which records the number of airline passengers who flew in each month from 1949 to 1960. This dataset has three variables (*year*, *month*, and number of *passengers*):
flights = sns.load_dataset("flights") flights.head()
_____no_output_____
MIT
doc/tutorial/data_structure.ipynb
luzpaz/seaborn
With long-form data, columns in the table are given roles in the plot by explicitly assigning them to one of the variables. For example, making a monthly plot of the number of passengers per year looks like this:
sns.relplot(data=flights, x="year", y="passengers", hue="month", kind="line")
_____no_output_____
MIT
doc/tutorial/data_structure.ipynb
luzpaz/seaborn
The advantage of long-form data is that it lends itself well to this explicit specification of the plot. It can accommodate datasets of arbitrary complexity, so long as the variables and observations can be clearly defined. But this format takes some getting used to, because it is often not the model of the data that one has in their head.Wide-form data~~~~~~~~~~~~~~For simple datasets, it is often more intuitive to think about data the way it might be viewed in a spreadsheet, where the columns and rows contain *levels* of different variables. For example, we can convert the flights dataset into a wide-form organization by "pivoting" it so that each column has each month's time series over years:
flights_wide = flights.pivot(index="year", columns="month", values="passengers") flights_wide.head()
_____no_output_____
MIT
doc/tutorial/data_structure.ipynb
luzpaz/seaborn
Here we have the same three variables, but they are organized differently. The variables in this dataset are linked to the *dimensions* of the table, rather than to named fields. Each observation is defined by both the value at a cell in the table and the coordinates of that cell with respect to the row and column indices. With long-form data, we can access variables in the dataset by their name. That is not the case with wide-form data. Nevertheless, because there is a clear association between the dimensions of the table and the variable in the dataset, seaborn is able to assign those variables roles in the plot... note:: Seaborn treats the argument to ``data`` as wide form when neither ``x`` nor ``y`` are assigned.
sns.relplot(data=flights_wide, kind="line")
_____no_output_____
MIT
doc/tutorial/data_structure.ipynb
luzpaz/seaborn
This plot looks very similar to the one before. Seaborn has assigned the index of the dataframe to ``x``, the values of the dataframe to ``y``, and it has drawn a separate line for each month. There is a notable difference between the two plots, however. When the dataset went through the "pivot" operation that converted it from long-form to wide-form, the information about what the values mean was lost. As a result, there is no y axis label. (The lines also have dashes here, because :func:`relplot` has mapped the column variable to both the ``hue`` and ``style`` semantic so that the plot is more accessible. We didn't do that in the long-form case, but we could have by setting ``style="month"``).Thus far, we did much less typing while using wide-form data and made nearly the same plot. This seems easier! But a big advantage of long-form data is that, once you have the data in the correct format, you no longer need to think about its *structure*. You can design your plots by thinking only about the variables contained within it. For example, to draw lines that represent the monthly time series for each year, simply reassign the variables:
sns.relplot(data=flights, x="month", y="passengers", hue="year", kind="line")
_____no_output_____
MIT
doc/tutorial/data_structure.ipynb
luzpaz/seaborn
To achieve the same remapping with the wide-form dataset, we would need to transpose the table:
sns.relplot(data=flights_wide.transpose(), kind="line")
_____no_output_____
MIT
doc/tutorial/data_structure.ipynb
luzpaz/seaborn
(This example also illustrates another wrinkle, which is that seaborn currently considers the column variable in a wide-form dataset to be categorical regardless of its datatype, whereas, because the long-form variable is numeric, it is assigned a quantitative color palette and legend. This may change in the future).The absence of explicit variable assignments also means that each plot type needs to define a fixed mapping between the dimensions of the wide-form data and the roles in the plot. Because this natural mapping may vary across plot types, the results are less predictable when using wide-form data. For example, the :ref:`categorical ` plots assign the *column* dimension of the table to ``x`` and then aggregate across the rows (ignoring the index):
sns.catplot(data=flights_wide, kind="box")
_____no_output_____
MIT
doc/tutorial/data_structure.ipynb
luzpaz/seaborn
When using pandas to represent wide-form data, you are limited to just a few variables (no more than three). This is because seaborn does not make use of multi-index information, which is how pandas represents additional variables in a tabular format. The `xarray `_ project offers labeled N-dimensional array objects, which can be considered a generalization of wide-form data to higher dimensions. At present, seaborn does not directly support objects from ``xarray``, but they can be transformed into a long-form :class:`pandas.DataFrame` using the ``to_pandas`` method and then plotted in seaborn like any other long-form data set.In summary, we can think of long-form and wide-form datasets as looking something like this:
import matplotlib.pyplot as plt f = plt.figure(figsize=(7, 5)) gs = plt.GridSpec( ncols=6, nrows=2, figure=f, left=0, right=.35, bottom=0, top=.9, height_ratios=(1, 20), wspace=.1, hspace=.01 ) colors = [c + (.5,) for c in sns.color_palette()] f.add_subplot(gs[0, :], facecolor=".8") [ f.add_subplot(gs[1:, i], facecolor=colors[i]) for i in range(gs.ncols) ] gs = plt.GridSpec( ncols=2, nrows=2, figure=f, left=.4, right=1, bottom=.2, top=.8, height_ratios=(1, 8), width_ratios=(1, 11), wspace=.015, hspace=.02 ) f.add_subplot(gs[0, 1:], facecolor=colors[2]) f.add_subplot(gs[1:, 0], facecolor=colors[1]) f.add_subplot(gs[1, 1], facecolor=colors[0]) for ax in f.axes: ax.set(xticks=[], yticks=[]) f.text(.35 / 2, .91, "Long-form", ha="center", va="bottom", size=15) f.text(.7, .81, "Wide-form", ha="center", va="bottom", size=15)
_____no_output_____
MIT
doc/tutorial/data_structure.ipynb
luzpaz/seaborn
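As a minimal sketch of the xarray route mentioned above (the `DataArray` here is hypothetical, built to mirror the flights data): convert to pandas, then melt into long form before plotting.

```python
import numpy as np
import pandas as pd
import xarray as xr
import seaborn as sns

# A hypothetical 2-D labelled array: passenger counts indexed by year and month.
da = xr.DataArray(
    np.random.randint(100, 600, size=(12, 3)),
    coords={"year": np.arange(1949, 1961), "month": ["Jan", "Feb", "Mar"]},
    dims=("year", "month"),
    name="passengers",
)

# .to_pandas() yields a wide-form DataFrame (rows=year, columns=month);
# melting it produces the long-form table that seaborn expects.
long_df = da.to_pandas().reset_index().melt(
    id_vars="year", var_name="month", value_name="passengers"
)
sns.relplot(data=long_df, x="year", y="passengers", hue="month", kind="line")
```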
Messy data~~~~~~~~~~Many datasets cannot be clearly interpreted using either long-form or wide-form rules. If datasets that are clearly long-form or wide-form are `"tidy" `_, we might say that these more ambiguous datasets are "messy". In a messy dataset, the variables are neither uniquely defined by the keys nor by the dimensions of the table. This often occurs with *repeated-measures* data, where it is natural to organize a table such that each row corresponds to the *unit* of data collection. Consider this simple dataset from a psychology experiment in which twenty subjects performed a memory task where they studied anagrams while their attention was either divided or focused:
anagrams = sns.load_dataset("anagrams") anagrams
_____no_output_____
MIT
doc/tutorial/data_structure.ipynb
luzpaz/seaborn
The attention variable is *between-subjects*, but there is also a *within-subjects* variable: the number of possible solutions to the anagrams, which varied from 1 to 3. The dependent measure is a score of memory performance. These two variables (number and score) are jointly encoded across several columns. As a result, the whole dataset is neither clearly long-form nor clearly wide-form.How might we tell seaborn to plot the average score as a function of attention and number of solutions? We'd first need to coerce the data into one of our two structures. Let's transform it to a tidy long-form table, such that each variable is a column and each row is an observation. We can use the method :meth:`pandas.DataFrame.melt` to accomplish this task:
anagrams_long = anagrams.melt(id_vars=["subidr", "attnr"], var_name="solutions", value_name="score") anagrams_long.head()
_____no_output_____
MIT
doc/tutorial/data_structure.ipynb
luzpaz/seaborn
Now we can make the plot that we want:
sns.catplot(data=anagrams_long, x="solutions", y="score", hue="attnr", kind="point")
_____no_output_____
MIT
doc/tutorial/data_structure.ipynb
luzpaz/seaborn
Further reading and take-home points~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~For a longer discussion about tabular data structures, you could read the `"Tidy Data" `_ paper by Hadley Wickham. Note that seaborn uses a slightly different set of concepts than are defined in the paper. While the paper associates tidiness with long-form structure, we have drawn a distinction between "tidy wide-form" data, where there is a clear mapping between variables in the dataset and the dimensions of the table, and "messy data", where no such mapping exists. The long-form structure has clear advantages. It allows you to create figures by explicitly assigning variables in the dataset to roles in the plot, and you can do so with more than three variables. When possible, try to represent your data with a long-form structure when embarking on serious analysis. Most of the examples in the seaborn documentation will use long-form data. But in cases where it is more natural to keep the dataset wide, remember that seaborn can remain useful. Options for visualizing long-form data--------------------------------------While long-form data has a precise definition, seaborn is fairly flexible in terms of how it is actually organized across the data structures in memory. The examples in the rest of the documentation will typically use :class:`pandas.DataFrame` objects and reference variables in them by assigning names of their columns to the variables in the plot. But it is also possible to store vectors in a Python dictionary or a class that implements that interface:
flights_dict = flights.to_dict() sns.relplot(data=flights_dict, x="year", y="passengers", hue="month", kind="line")
_____no_output_____
MIT
doc/tutorial/data_structure.ipynb
luzpaz/seaborn
Many pandas operations, such as the split-apply-combine operations of a group-by, will produce a dataframe where information has moved from the columns of the input dataframe to the index of the output. So long as the name is retained, you can still reference the data as normal:
flights_avg = flights.groupby("year").mean() sns.relplot(data=flights_avg, x="year", y="passengers", kind="line")
_____no_output_____
MIT
doc/tutorial/data_structure.ipynb
luzpaz/seaborn
Additionally, it's possible to pass vectors of data directly as arguments to ``x``, ``y``, and other plotting variables. If these vectors are pandas objects, the ``name`` attribute will be used to label the plot:
year = flights_avg.index passengers = flights_avg["passengers"] sns.relplot(x=year, y=passengers, kind="line")
_____no_output_____
MIT
doc/tutorial/data_structure.ipynb
luzpaz/seaborn
Numpy arrays and other objects that implement the Python sequence interface work too, but if they don't have names, the plot will not be as informative without further tweaking:
sns.relplot(x=year.to_numpy(), y=passengers.to_list(), kind="line")
_____no_output_____
MIT
doc/tutorial/data_structure.ipynb
luzpaz/seaborn
Options for visualizing wide-form data--------------------------------------The options for passing wide-form data are even more flexible. As with long-form data, pandas objects are preferable because the name (and, in some cases, index) information can be used. But in essence, any format that can be viewed as a single vector or a collection of vectors can be passed to ``data``, and a valid plot can usually be constructed.The example we saw above used a rectangular :class:`pandas.DataFrame`, which can be thought of as a collection of its columns. A dict or list of pandas objects will also work, but we'll lose the axis labels:
flights_wide_list = [col for _, col in flights_wide.items()] sns.relplot(data=flights_wide_list, kind="line")
_____no_output_____
MIT
doc/tutorial/data_structure.ipynb
luzpaz/seaborn
The vectors in a collection do not need to have the same length. If they have an ``index``, it will be used to align them:
two_series = [flights_wide.loc[:1955, "Jan"], flights_wide.loc[1952:, "Aug"]] sns.relplot(data=two_series, kind="line")
_____no_output_____
MIT
doc/tutorial/data_structure.ipynb
luzpaz/seaborn
Whereas an ordinal index will be used for numpy arrays or simple Python sequences:
two_arrays = [s.to_numpy() for s in two_series] sns.relplot(data=two_arrays, kind="line")
_____no_output_____
MIT
doc/tutorial/data_structure.ipynb
luzpaz/seaborn
But a dictionary of such vectors will at least use the keys:
two_arrays_dict = {s.name: s.to_numpy() for s in two_series} sns.relplot(data=two_arrays_dict, kind="line")
_____no_output_____
MIT
doc/tutorial/data_structure.ipynb
luzpaz/seaborn
Rectangular numpy arrays are treated just like a dataframe without index information, so they are viewed as a collection of column vectors. Note that this is different from how numpy indexing operations work, where a single indexer will access a row. But it is consistent with how pandas would turn the array into a dataframe or how matplotlib would plot it:
flights_array = flights_wide.to_numpy() sns.relplot(data=flights_array, kind="line") # TODO once the categorical module is refactored, its single vectors will get special treatment # (they'll look like collection of singletons, rather than a single collection). That should be noted.
_____no_output_____
MIT
doc/tutorial/data_structure.ipynb
luzpaz/seaborn
Implementing style transfer. This file is the companion source code for the "Deep Learning on PyTorch" (火炬上的深度学习) course developed by Swarma Campus (集智学园). We explain the principles behind how the Prisma app implements style transfer. In this lesson we will learn to play with image style transfer: we need to prepare two images, one to serve as the painting style and one to serve as the image content. This file also shows how to use a GPU for the computation. This file is the companion source code for Lesson IV of the "Deep Learning on PyTorch" course produced by Swarma Campus, http://campus.swarma.org
# Import the necessary packages from __future__ import print_function import torch import torch.nn as nn import torch.optim as optim from PIL import Image import matplotlib.pyplot as plt import torchvision.transforms as transforms import torchvision.models as models import copy # Use the GPU if a properly installed GPU is detected, otherwise fall back to CPU use_cuda = torch.cuda.is_available() dtype = torch.cuda.FloatTensor if use_cuda else torch.FloatTensor
_____no_output_____
Apache-2.0
07_StyleTransfer/图像风格迁移_Style_Transfer.ipynb
swarmapytorch/book_DeepLearning_in_PyTorch_Source
1. Preparing the input files. We need to prepare two files of the same size, one as the style and one as the content
# Path to the style image; set it as you like style = 'images/escher.jpg' # Path to the content image; set it as you like content = 'images/portrait1.jpg' # Relative weight of the style loss style_weight=1000 # Relative weight of the content loss content_weight=1 # Desired image size (larger is sharper but slower to compute) imsize = 128 loader = transforms.Compose([ transforms.Resize(imsize), # resize the loaded image to the specified size transforms.ToTensor()]) # convert the image to a tensor # Image loading function def image_loader(image_name): image = Image.open(image_name) image = loader(image).clone().detach().requires_grad_(True) # add a dummy batch dimension required by the convolutional network image = image.unsqueeze(0) return image # Load the images and check their sizes style_img = image_loader(style).type(dtype) content_img = image_loader(content).type(dtype) assert style_img.size() == content_img.size(), \ "the style and content images must have the same size" # Function for displaying an image def imshow(tensor, title=None): image = tensor.clone().cpu() # clone the tensor so the original is not modified image = image.view(3, imsize, imsize) # drop the added batch dimension image = unloader(image) plt.imshow(image) if title is not None: plt.title(title) plt.pause(0.001) # pause briefly so the view updates # Display the images for inspection unloader = transforms.ToPILImage() # convert back to a PIL image (Python Imaging Library) plt.ion() plt.figure() imshow(style_img.data, title='Style Image') plt.figure() imshow(content_img.data, title='Content Image')
_____no_output_____
Apache-2.0
07_StyleTransfer/图像风格迁移_Style_Transfer.ipynb
swarmapytorch/book_DeepLearning_in_PyTorch_Source
2. Implementing the style-transfer network. Note that style transfer does not train a neural network from scratch; instead it takes an already trained convolutional network and transfers it over directly. The learning process is not reflected in updates to the network's weights; rather, we train the input image itself, making it match the content of the content image and the style of the style image as closely as possible. To implement style transfer, we build an additional computation graph on top of the transferred network, which speeds up the computation. Building this graph has two steps: 1. load a trained CNN; 2. add new layers on top of the original network that compute the style loss and the content loss. 1. Load the pre-trained large network VGG
cnn = models.vgg19(pretrained=True).features # Use the GPU for the computation if possible: if use_cuda: cnn = cnn.cuda()
_____no_output_____
Apache-2.0
07_StyleTransfer/图像风格迁移_Style_Transfer.ipynb
swarmapytorch/book_DeepLearning_in_PyTorch_Source
2. Redefining the new computation modules
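For reference, the modules defined below implement the standard Gatys-style losses. Writing $F^{l}$ for the feature maps of the generated image at layer $l$, $P^{l}$ for those of the content image, and $A^{l}$ for the Gram matrix of the style image's features, the code computes (up to the MSE normalisation)

$$\mathcal{L}_{\text{content}} = \mathrm{MSE}\bigl(F^{l}, P^{l}\bigr), \qquad G^{l}_{ij} = \sum_{k} F^{l}_{ik} F^{l}_{jk}, \qquad \mathcal{L}_{\text{style}} = \sum_{l} \mathrm{MSE}\bigl(G^{l}, A^{l}\bigr),$$

with the total objective $\mathcal{L} = \alpha\,\mathcal{L}_{\text{content}} + \beta\,\mathcal{L}_{\text{style}}$, where here $\alpha$ = `content_weight` = 1 and $\beta$ = `style_weight` = 1000.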
# Content loss module class ContentLoss(nn.Module): def __init__(self, target, weight): super(ContentLoss, self).__init__() # the target's values come from the pretrained network, so detach it from the original computation graph when computing gradients self.target = target.detach() * weight self.weight = weight self.criterion = nn.MSELoss() def forward(self, input): # the input is a feature map # compute the error: the mean squared error between the current content and the target self.loss = self.criterion(input * self.weight, self.target) self.output = input return self.output def backward(self, retain_graph=True): # run backpropagation self.loss.backward(retain_graph=retain_graph) return self.loss class StyleLoss(nn.Module): # neural module that computes the style loss def __init__(self, target, weight): super(StyleLoss, self).__init__() self.target = target.detach() * weight self.weight = weight #self.gram = GramMatrix() self.criterion = nn.MSELoss() def forward(self, input): # the input is a feature map self.output = input.clone() # compute this image's gram matrix and compare it with the target input = input.cuda() if use_cuda else input self_G = Gram(input) self_G.mul_(self.weight) # the loss is the difference between the gram matrix of the input feature map and that of the target self.loss = self.criterion(self_G, self.target) return self.output def backward(self, retain_graph=True): # backpropagation self.loss.backward(retain_graph=retain_graph) return self.loss # Define the Gram matrix def Gram(input): # given a feature map, compute its gram matrix a, b, c, d = input.size() # a=batch size(=1) # b=number of feature maps # (c,d)=spatial size of a feature map (N=c*d) features = input.view(a * b, c * d) # flatten each feature map into a vector G = torch.mm(features, features.t()) # compute the products between every pair of vectors # normalise by dividing by the number of elements in the feature maps return G.div(a * b * c * d) # Content or style layers we want to compute: content_layers = ['conv_4'] # only the content of the fourth convolutional layer style_layers = ['conv_1', 'conv_2', 'conv_3', 'conv_4', 'conv_5'] # style losses from layers 1, 2, 3, 4 and 5 # lists to store the computed losses content_losses = [] style_losses = [] model = nn.Sequential() # a new sequential network model # if a GPU is available, move these computations onto it: if use_cuda: model = model.cuda() # What follows: loop over every layer of vgg while building a brand-new network `model` # The new network is essentially the same as vgg, with extra layers added to compute the style and content losses. # Load every layer's weights into the new model i = 1 for layer in list(cnn): if isinstance(layer, nn.Conv2d): name = "conv_" + str(i) # put the loaded module into the new model model.add_module(name, layer) if name in content_layers: # if the current layer is one of the designated content layers: target = model(content_img).clone() # copy the content image's features at this layer into target content_loss = ContentLoss(target, content_weight) # define the content_loss objective content_loss = content_loss if use_cuda else content_loss model.add_module("content_loss_" + str(i), content_loss) # add a content_loss layer to the new network content_losses.append(content_loss) if name in style_layers: # if the current layer is one of the designated style layers, compute the style loss target_feature = model(style_img).clone() target_feature = target_feature.cuda() if use_cuda else target_feature target_feature_gram = Gram(target_feature) style_loss = StyleLoss(target_feature_gram, style_weight) style_loss = style_loss.cuda() if use_cuda else style_loss model.add_module("style_loss_" + str(i), style_loss) style_losses.append(style_loss) if isinstance(layer, nn.ReLU): # non-conv layers are handled the same way name = "relu_" + str(i) model.add_module(name, layer) i += 1 if isinstance(layer, nn.MaxPool2d): name = "pool_" + str(i) model.add_module(name, layer) # ***
_____no_output_____
Apache-2.0
07_StyleTransfer/图像风格迁移_Style_Transfer.ipynb
swarmapytorch/book_DeepLearning_in_PyTorch_Source
3. Training the style transfer. 1. First, we need to prepare an initial image, which can be a noise image or simply the content image
# To start the optimization from a noise image, use the following line input_img = torch.randn(content_img.data.size()) if use_cuda: input_img = input_img.cuda() content_img = content_img.cuda() style_img = style_img.cuda() # Show the image that will be optimized: plt.figure() imshow(input_img.data, title='Input Image')
_____no_output_____
Apache-2.0
07_StyleTransfer/图像风格迁移_Style_Transfer.ipynb
swarmapytorch/book_DeepLearning_in_PyTorch_Source
2. Optimize the input image (the training process)
# First, turn the input image into a parameter of the network, so that backpropagation can adjust the input image itself input_param = nn.Parameter(input_img.data) # Define an optimizer, using the LBFGS algorithm (it works well in practice and handles gradient descent on large-scale data) optimizer = optim.LBFGS([input_param]) # Number of iteration steps num_steps=300 """Run the main style-transfer algorithm.""" print('Building the style transfer model..') print('Starting optimization..') for i in range(num_steps): # each training iteration # constrain the colour values of the input image to the range 0-1 input_param.data.clamp_(0, 1) # clear the gradients optimizer.zero_grad() # feed the image through the constructed network model(input_param) style_score = 0 content_score = 0 # run backpropagation from every loss layer for sl in style_losses: style_score += sl.backward() for cl in content_losses: content_score += cl.backward() # print training information every 50 iterations if i % 50 == 0: print("Iteration {}:".format(i)) print('Style loss : {:4f} Content loss: {:4f}'.format( style_score.data.item(), content_score.data.item())) print() def closure(): return style_score + content_score # one optimization step optimizer.step(closure) # clamp the result to keep values in range... output = input_param.data.clamp_(0, 1) # show the resulting image plt.figure() imshow(output, title='Output Image') plt.ioff() plt.show()
_____no_output_____
Apache-2.0
07_StyleTransfer/图像风格迁移_Style_Transfer.ipynb
swarmapytorch/book_DeepLearning_in_PyTorch_Source
Overfitting and regularization (with ``gluon``)Now that we've built a [regularized logistic regression model from scratch](regularization-scratch.html), let's make this more efficient with ``gluon``. We recommend that you read that section for a description as to why regularization is a good idea. As always, we begin by loading libraries and some data.[**REFINED DRAFT - RELEASE STAGE: CATFOOD**]
from __future__ import print_function import mxnet as mx from mxnet import autograd from mxnet import gluon import mxnet.ndarray as nd import numpy as np ctx = mx.cpu() # for plotting purposes %matplotlib inline import matplotlib import matplotlib.pyplot as plt
_____no_output_____
Apache-2.0
chapter02_supervised-learning/regularization-gluon.ipynb
andylamp/mxnet-the-straight-dope
The MNIST Dataset
mnist = mx.test_utils.get_mnist()
num_examples = 1000
batch_size = 64
train_data = mx.gluon.data.DataLoader(
    mx.gluon.data.ArrayDataset(mnist["train_data"][:num_examples],
                               mnist["train_label"][:num_examples].astype(np.float32)),
    batch_size, shuffle=True)
test_data = mx.gluon.data.DataLoader(
    mx.gluon.data.ArrayDataset(mnist["test_data"][:num_examples],
                               mnist["test_label"][:num_examples].astype(np.float32)),
    batch_size, shuffle=False)
_____no_output_____
Apache-2.0
chapter02_supervised-learning/regularization-gluon.ipynb
andylamp/mxnet-the-straight-dope
Multiclass Logistic Regression
net = gluon.nn.Sequential()
with net.name_scope():
    net.add(gluon.nn.Dense(10))
_____no_output_____
Apache-2.0
chapter02_supervised-learning/regularization-gluon.ipynb
andylamp/mxnet-the-straight-dope
Parameter initialization
net.collect_params().initialize(mx.init.Xavier(magnitude=2.24), ctx=ctx)
_____no_output_____
Apache-2.0
chapter02_supervised-learning/regularization-gluon.ipynb
andylamp/mxnet-the-straight-dope
Softmax Cross Entropy Loss
loss = gluon.loss.SoftmaxCrossEntropyLoss()
_____no_output_____
Apache-2.0
chapter02_supervised-learning/regularization-gluon.ipynb
andylamp/mxnet-the-straight-dope
Optimizer

By default ``gluon`` tries to keep the coefficients from diverging by using a *weight decay* penalty. So, to get the real overfitting experience we need to switch it off. We do this by passing `'wd': 0.0` when we instantiate the trainer.
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.01, 'wd': 0.0})
_____no_output_____
Apache-2.0
chapter02_supervised-learning/regularization-gluon.ipynb
andylamp/mxnet-the-straight-dope
Evaluation Metric
def evaluate_accuracy(data_iterator, net, loss_fun):
    acc = mx.metric.Accuracy()
    loss_avg = 0.
    for i, (data, label) in enumerate(data_iterator):
        data = data.as_in_context(ctx).reshape((-1,784))
        label = label.as_in_context(ctx)
        output = net(data)
        loss = loss_fun(output, label)
        predictions = nd.argmax(output, axis=1)
        acc.update(preds=predictions, labels=label)
        # running average of the per-batch loss
        loss_avg = loss_avg*i/(i+1) + nd.mean(loss).asscalar()/(i+1)
    return acc.get()[1], loss_avg

def plot_learningcurves(loss_tr, loss_ts, acc_tr, acc_ts):
    xs = list(range(len(loss_tr)))

    f = plt.figure(figsize=(12,6))
    fg1 = f.add_subplot(121)
    fg2 = f.add_subplot(122)

    fg1.set_xlabel('epoch', fontsize=14)
    fg1.set_title('Comparing loss functions')
    fg1.semilogy(xs, loss_tr)
    fg1.semilogy(xs, loss_ts)
    fg1.grid(True, which="both")
    fg1.legend(['training loss', 'testing loss'], fontsize=14)

    fg2.set_title('Comparing accuracy')
    fg2.set_xlabel('epoch', fontsize=14)
    fg2.plot(xs, acc_tr)
    fg2.plot(xs, acc_ts)
    fg2.grid(True, which="both")
    fg2.legend(['training accuracy', 'testing accuracy'], fontsize=14)
    plt.show()
_____no_output_____
Apache-2.0
chapter02_supervised-learning/regularization-gluon.ipynb
andylamp/mxnet-the-straight-dope
Execute training loop
epochs = 700
moving_loss = 0.
niter = 0

loss_seq_train = []
loss_seq_test = []
acc_seq_train = []
acc_seq_test = []

for e in range(epochs):
    for i, (data, label) in enumerate(train_data):
        data = data.as_in_context(ctx).reshape((-1,784))
        label = label.as_in_context(ctx)
        with autograd.record():
            output = net(data)
            cross_entropy = loss(output, label)
            cross_entropy.backward()
        trainer.step(data.shape[0])

        ##########################
        #  Keep a moving average of the losses
        ##########################
        niter += 1
        moving_loss = .99 * moving_loss + .01 * nd.mean(cross_entropy).asscalar()
        est_loss = moving_loss/(1-0.99**niter)

    test_accuracy, test_loss = evaluate_accuracy(test_data, net, loss)
    train_accuracy, train_loss = evaluate_accuracy(train_data, net, loss)

    # save them for later
    loss_seq_train.append(train_loss)
    loss_seq_test.append(test_loss)
    acc_seq_train.append(train_accuracy)
    acc_seq_test.append(test_accuracy)

    if e % 20 == 0:
        print("Completed epoch %s. Train Loss: %s, Test Loss %s, Train_acc %s, Test_acc %s" %
              (e+1, train_loss, test_loss, train_accuracy, test_accuracy))

## Plotting the learning curves
plot_learningcurves(loss_seq_train, loss_seq_test, acc_seq_train, acc_seq_test)
_____no_output_____
Apache-2.0
chapter02_supervised-learning/regularization-gluon.ipynb
andylamp/mxnet-the-straight-dope
Regularization

Now let's see what this mysterious *weight decay* is all about. We begin with a bit of math. When we add an L2 penalty to the weights we are effectively adding $\frac{\lambda}{2} \|w\|^2$ to the loss. Hence, every time we compute the gradient it gets an additional $\lambda w$ term that is added to $g_t$, since this is the very derivative of the L2 penalty. As a result we end up taking a descent step not in the direction $-\eta g_t$ but rather in the direction $-\eta (g_t + \lambda w)$. This effectively shrinks $w$ at each step by $\eta \lambda w$, thus the name weight decay. To make this work in practice we just need to set the weight decay to something nonzero.
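As a quick illustration, here is a hand-rolled sketch of a single SGD step with weight decay, using raw `ndarray`s and made-up numbers (the names `eta`, `wd`, `w`, and `grad` are illustrative only; the actual update below is handled for us by the `gluon` Trainer):

```python
import mxnet.ndarray as nd

eta, wd = 0.01, 0.001              # learning rate and weight-decay coefficient
w = nd.array([0.5, -1.0, 2.0])     # a toy parameter vector
grad = nd.array([0.1, 0.2, -0.3])  # a stand-in for the gradient from backprop

# plain SGD step:      w <- w - eta * grad
# SGD + weight decay:  w <- w - eta * (grad + wd * w), which shrinks w a little each step
w_new = w - eta * (grad + wd * w)
```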
net.collect_params().initialize(mx.init.Xavier(magnitude=2.24), ctx=ctx, force_reinit=True)
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.01, 'wd': 0.001})

moving_loss = 0.
niter = 0
loss_seq_train = []
loss_seq_test = []
acc_seq_train = []
acc_seq_test = []

for e in range(epochs):
    for i, (data, label) in enumerate(train_data):
        data = data.as_in_context(ctx).reshape((-1,784))
        label = label.as_in_context(ctx)
        with autograd.record():
            output = net(data)
            cross_entropy = loss(output, label)
            cross_entropy.backward()
        trainer.step(data.shape[0])

        ##########################
        #  Keep a moving average of the losses
        ##########################
        niter += 1
        moving_loss = .99 * moving_loss + .01 * nd.mean(cross_entropy).asscalar()
        est_loss = moving_loss/(1-0.99**niter)

    test_accuracy, test_loss = evaluate_accuracy(test_data, net, loss)
    train_accuracy, train_loss = evaluate_accuracy(train_data, net, loss)

    # save them for later
    loss_seq_train.append(train_loss)
    loss_seq_test.append(test_loss)
    acc_seq_train.append(train_accuracy)
    acc_seq_test.append(test_accuracy)

    if e % 20 == 0:
        print("Completed epoch %s. Train Loss: %s, Test Loss %s, Train_acc %s, Test_acc %s" %
              (e+1, train_loss, test_loss, train_accuracy, test_accuracy))

## Plotting the learning curves
plot_learningcurves(loss_seq_train, loss_seq_test, acc_seq_train, acc_seq_test)
_____no_output_____
Apache-2.0
chapter02_supervised-learning/regularization-gluon.ipynb
andylamp/mxnet-the-straight-dope
Introduction

**Prerequisites**

- Python Fundamentals

**Outcomes**

- Understand the core pandas objects
  - Series
  - DataFrame
- Index into particular elements of a Series and DataFrame
- Understand what `.dtype`/`.dtypes` do
- Make basic visualizations

**Data**

- US regional unemployment data from Bureau of Labor Statistics

Pandas

This notebook begins the material on `pandas`

To start we will import the pandas package and give it the nickname `"pd"`, which is the conventional way to import pandas
import pandas as pd

# Don't worry about this line for now!
%matplotlib inline
_____no_output_____
MIT
2019-07-09__InDepthPandas/03_pandas_intro.ipynb
snowdj/UCF-MSDA-workshop
Sometimes it will be helpful to know which version of pandas we are using

We can check this by running the code below

Series

The first main pandas type we will introduce is called Series

A Series is a single column of data, with row labels for each observation

Pandas refers to the row labels as the *index* of the Series

Below we create a Series which contains the US unemployment rate every other year starting in 1995
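The version-check cell referenced above isn't reproduced here, so the snippet below is a minimal stand-in; the Series itself is created in the cell after this one.

```python
# Which version of pandas is installed?
pd.__version__
```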
values = [5.6, 5.3, 4.3, 4.2, 5.8, 5.3, 4.6, 7.8, 9.1, 8., 5.7]
years = list(range(1995, 2017, 2))

unemp = pd.Series(data=values, index=years, name="Unemployment")
unemp
_____no_output_____
MIT
2019-07-09__InDepthPandas/03_pandas_intro.ipynb
snowdj/UCF-MSDA-workshop
We can look at the index and values in our Series
unemp.index

unemp.values
_____no_output_____
MIT
2019-07-09__InDepthPandas/03_pandas_intro.ipynb
snowdj/UCF-MSDA-workshop
What can we do with a Series object?

`.head` and `.tail`

Often our data will have many rows and we won’t want to display it all at once

The methods `.head` and `.tail` show rows at the beginning and end of our Series, respectively
unemp.head()

unemp.tail()
_____no_output_____
MIT
2019-07-09__InDepthPandas/03_pandas_intro.ipynb
snowdj/UCF-MSDA-workshop
Basic Plotting

We can also plot data using the `.plot` method

This is why we needed the `%matplotlib inline` — it tells the notebook to display figures inside the notebook itself

*Note*: Pandas can do much more in terms of visualization

We will talk about more advanced visualization features later
unemp.plot(kind="bar")
_____no_output_____
MIT
2019-07-09__InDepthPandas/03_pandas_intro.ipynb
snowdj/UCF-MSDA-workshop
Unique values

In this dataset it doesn’t make much sense, but we may want to find the unique values in a Series

This can be done with the `.unique` method
unemp.unique()
_____no_output_____
MIT
2019-07-09__InDepthPandas/03_pandas_intro.ipynb
snowdj/UCF-MSDA-workshop
Indexing

Sometimes we will want to select particular elements from a Series

We can do this using `.loc[index_things]`, where `index_things` is an item from the index, or a list of items in the index

We will see this more in depth in a coming lecture, but for now we demonstrate how to select one or multiple elements of the Series
unemp

unemp.loc[[2009, 1995]]

unemp.iloc[-1]

unemp.loc[[1995, 2005, 2015]]
_____no_output_____
MIT
2019-07-09__InDepthPandas/03_pandas_intro.ipynb
snowdj/UCF-MSDA-workshop
**Check for understanding**

For each of the following exercises, we recommend reading the documentation for help

- Display only the first 2 elements of the Series using the `.head` method
- Using the `plot` method, make a bar plot
- Use `.loc` to select the lowest/highest unemployment rate shown in the Series
- Run the code `unemp.dtype` below. What does it give you? Talk with your neighbor about where it might come from
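The next cell selects the lowest/highest rates; for the other exercises, one possible sketch (not part of the original notebook, just illustrative answers) is:

```python
# First two elements of the Series
unemp.head(2)

# Bar plot of the Series
unemp.plot(kind="bar")

# The dtype of the values stored in the Series (float64 for this data)
unemp.dtype
```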
unemp.loc[[unemp.idxmin(), unemp.idxmax()]]
_____no_output_____
MIT
2019-07-09__InDepthPandas/03_pandas_intro.ipynb
snowdj/UCF-MSDA-workshop
DataFrame

A DataFrame is how pandas stores one or more columns of data

We can think of a DataFrame as multiple Series stacked side by side as columns

This is similar to a sheet in an Excel workbook or a table in a SQL database

In addition to row labels (an index), DataFrames also have column labels

We refer to these column labels as the columns or column names

Below we create a DataFrame that contains the unemployment rate every other year by region of the US starting in 1995.
data = {"NorthEast": [5.9, 5.6, 4.4, 3.8, 5.8, 4.9, 4.3, 7.1, 8.3, 7.9, 5.7], "MidWest": [4.5, 4.3, 3.6, 4. , 5.7, 5.7, 4.9, 8.1, 8.7, 7.4, 5.1], "South": [5.3, 5.2, 4.2, 4. , 5.7, 5.2, 4.3, 7.6, 9.1, 7.4, 5.5], "West": [6.6, 6., 5.2, 4.6, 6.5, 5.5, 4.5, 8.6, 10.7, 8.5, 6.1], "National": [5.6, 5.3, 4.3, 4.2, 5.8, 5.3, 4.6, 7.8, 9.1, 8., 5.7]} unemp_region = pd.DataFrame(data, index=years) unemp_region
_____no_output_____
MIT
2019-07-09__InDepthPandas/03_pandas_intro.ipynb
snowdj/UCF-MSDA-workshop
We can retrieve the index and the DataFrame values in the same way we did with a Series
unemp_region.index

unemp_region.values
_____no_output_____
MIT
2019-07-09__InDepthPandas/03_pandas_intro.ipynb
snowdj/UCF-MSDA-workshop
What can we do with a DataFrame?

Pretty much everything we can do with a Series

`.head` and `.tail`

As with Series, we can use `.head` and `.tail` to show only the first or last `n` rows
unemp_region.head()

unemp_region.tail(3)
_____no_output_____
MIT
2019-07-09__InDepthPandas/03_pandas_intro.ipynb
snowdj/UCF-MSDA-workshop
Plotting

We can generate plots with the `.plot` method

Notice we now have a separate line for each column of data
unemp_region.plot()
_____no_output_____
MIT
2019-07-09__InDepthPandas/03_pandas_intro.ipynb
snowdj/UCF-MSDA-workshop
Indexing

We can also do indexing using `.loc`

However, there is a little more to it than before because we can choose subsets of both rows and columns
unemp_region.head()

unemp_region.loc[1995, "NorthEast"]

unemp_region.loc[[1995, 2005], "South"]

unemp_region.loc[1995, ["NorthEast", "National"]]

unemp_region.loc[:, "NorthEast"]

# `[string]` with no `.loc` extracts a whole column
unemp_region["MidWest"]
_____no_output_____
MIT
2019-07-09__InDepthPandas/03_pandas_intro.ipynb
snowdj/UCF-MSDA-workshop
Computations with columns

Pandas can do various computations and mathematical operations on columns

Let’s take a look at a few of them
# Divide by 100 to move from percent units to a rate
unemp_region["West"] / 100

# Find maximum
unemp_region["West"].max()

unemp_region["West"].iloc[1:5]

unemp_region["MidWest"].head(6)

# Find the difference between two columns
# Notice that pandas applies `-` to _all rows_ at one time
# We'll see more of this throughout these materials
unemp_region["West"].iloc[1:5] - unemp_region["MidWest"].head(6)

# Find correlation between two columns
unemp_region.West.corr(unemp_region["MidWest"])

# find correlation between all column pairs
unemp_region.corr()
_____no_output_____
MIT
2019-07-09__InDepthPandas/03_pandas_intro.ipynb
snowdj/UCF-MSDA-workshop
**Check for understanding**

For each of the following, we recommend reading the documentation for help

- Use introspection (or google-fu) to find a way to obtain a list with all of the column names in `unemp_region`
- Using the `plot` method, make a bar plot. What does it look like now?
- Use `.loc` to select the unemployment data for the `NorthEast` and `West` for the years 1995, 2005, 2011, and 2015.
- Run the code `unemp_region.dtypes` below. What does it give you? How does this compare with `unemp.dtype`?

Data types

We asked you to run the commands `unemp.dtype` and `unemp_region.dtypes` and think about what these methods output

You might have guessed that they return the type of the values inside each column

Occasionally, you might need to investigate what types you have in your DataFrame when an operation is not doing what you expect it to
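The `dtypes` question is answered by the next cell; for the other exercises above, one possible sketch (not from the original notebook, just illustrative answers) is:

```python
# All of the column names, as a plain list
list(unemp_region.columns)

# Bar plot: one group of bars per year, one bar per region
unemp_region.plot(kind="bar")

# NorthEast and West unemployment for selected years
unemp_region.loc[[1995, 2005, 2011, 2015], ["NorthEast", "West"]]
```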
unemp.dtype

unemp_region.dtypes
_____no_output_____
MIT
2019-07-09__InDepthPandas/03_pandas_intro.ipynb
snowdj/UCF-MSDA-workshop
DataFrames will only distinguish between a few types

- Booleans (`bool`)
- Floating point numbers (`float64`)
- Integers (`int64`)
- Dates (`datetime`) — we will learn this soon
- Categorical data (`categorical`)
- Everything else, including strings (`object`)

In the future, we will often refer to the type of data stored in a column as its `dtype`

Let’s look at an example for when having an incorrect `dtype` can cause problems

Suppose that when we imported the data the `South` column was interpreted as a string
str_unemp = unemp_region.copy()
str_unemp["South"] = str_unemp["South"].astype(str)
str_unemp.dtypes
_____no_output_____
MIT
2019-07-09__InDepthPandas/03_pandas_intro.ipynb
snowdj/UCF-MSDA-workshop
Everything *looks* ok…
str_unemp.head()
_____no_output_____
MIT
2019-07-09__InDepthPandas/03_pandas_intro.ipynb
snowdj/UCF-MSDA-workshop
But if we try to do something like compute the sum of all the columns, we get unexpected results…
str_unemp.sum()
_____no_output_____
MIT
2019-07-09__InDepthPandas/03_pandas_intro.ipynb
snowdj/UCF-MSDA-workshop
This happened because `.sum` effectively calls `+` on all rows in each column

Recall that when we apply `+` to two strings, the result is the strings mashed together

So in this case we saw that the entries in all the rows of the South column were stitched together into one long string

Changing DataFrames

We can change the data inside of a DataFrame in various ways:

- Adding new columns
- Changing index labels or column names
- Altering existing data (e.g. doing some arithmetic or making a column of strings lowercase)

Some of these “mutations” will be topics of future notebooks, so we will only briefly discuss a few of the things we can do below

Creating new columns

We can create new data by “assigning values to a column” similar to how we assign values to a variable

In pandas, we create a new column of a DataFrame by writing

```python
df["New Column Name"] = new_values
```

Below we create an unweighted mean of the unemployment rate across the four regions of the US — notice this differs from the national unemployment rate
unemp_region["UnweightedMean"] = (unemp_region["NorthEast"] + unemp_region["MidWest"] + unemp_region["South"] + unemp_region["West"])/4 unemp_region.head()
_____no_output_____
MIT
2019-07-09__InDepthPandas/03_pandas_intro.ipynb
snowdj/UCF-MSDA-workshop
Changing values

Changing the values inside of a DataFrame should be done sparingly

However, it can be done by assigning a value to a location in the DataFrame

`df.loc[index, column] = value`
unemp_region.loc[1995, "UnweightedMean"] = 0.0

unemp_region.head()
_____no_output_____
MIT
2019-07-09__InDepthPandas/03_pandas_intro.ipynb
snowdj/UCF-MSDA-workshop