path | concatenated_notebook |
---|---|
notebooks/api/examples/BioGRID.ipynb | ###Markdown
BioGRID API methods
###Code
import api_doc
api_doc.get_api_methods_by_tag('BioGRID')
###Output
_____no_output_____ |
autoencoder/Convolutional_Autoencoder.ipynb | ###Markdown
Convolutional Autoencoder
Sticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
###Code
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.axis("off")
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
###Output
_____no_output_____
###Markdown
Network Architecture
The encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.

Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data.

What's going on with the decoder
Okay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers *aren't*. Usually, you'll see **transposed convolution** layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 patch in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, [`tf.nn.conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose). However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels, which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al.*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with [`tf.image.resize_images`](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/image/resize_images), followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.

> **Exercise:** Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used to reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena *et al.* claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in `tf.image.resize_images` or use [`tf.image.resize_nearest_neighbor`](https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor). For convolutional layers, use [`tf.layers.conv2d`](https://www.tensorflow.org/api_docs/python/tf/layers/conv2d). For example, you would write `conv1 = tf.layers.conv2d(inputs, 32, (5,5), padding='same', activation=tf.nn.relu)` for a layer with a depth of 32, a 5x5 kernel, a stride of (1,1), 'same' padding, and a ReLU activation. Similarly, for the max-pool layers, use [`tf.layers.max_pooling2d`](https://www.tensorflow.org/api_docs/python/tf/layers/max_pooling2d).
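For a concrete point of comparison (this sketch is not part of the original exercise and assumes TensorFlow 1.x), here are the two upsampling routes discussed above side by side; both take a 4x4x8 tensor up to 8x8x8, but the resize-then-convolve route avoids the kernel overlap that produces checkerboard artifacts:
###Code
import tensorflow as tf

small = tf.placeholder(tf.float32, (None, 4, 4, 8))

# Route 1: nearest-neighbor resize followed by a stride-1 convolution
resized = tf.image.resize_nearest_neighbor(small, (8, 8))              # (None, 8, 8, 8)
up_conv = tf.layers.conv2d(resized, 8, (3, 3), padding='same',
                           activation=tf.nn.relu)                      # (None, 8, 8, 8)

# Route 2: a transposed convolution with stride 2 does the upsampling itself
up_tconv = tf.layers.conv2d_transpose(small, 8, (3, 3), strides=(2, 2),
                                      padding='same',
                                      activation=tf.nn.relu)           # (None, 8, 8, 8)

print(up_conv.get_shape(), up_tconv.get_shape())
###Output
_____no_output_____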
###Code
learning_rate = 0.001
img_size = mnist.train.images.shape[1]  # 784 pixels per flattened image (not used below)
# Input and target placeholders
inputs_ = tf.placeholder(tf.float32,[None,28,28,1])
targets_ = tf.placeholder(tf.float32,[None,28,28,1])
### Encoder
conv1 = tf.layers.conv2d(inputs_,filters = 16,kernel_size=2,activation=tf.nn.relu,padding="same")
print(conv1)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1,strides =2,pool_size=2)
print(maxpool1)
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1,filters = 8,kernel_size=2,activation = tf.nn.relu,padding="same")
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2,strides = 2,pool_size=2)
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, filters = 8,kernel_size=2,activation=tf.nn.relu,padding="same")
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3,strides=2,padding="same",pool_size=2)
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded,[7,7])
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1,kernel_size=2,filters=8,activation=tf.nn.relu,padding="same")
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4,[14,14])
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2,kernel_size=2,filters=8,activation=tf.nn.relu,padding="same")
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5,[28,28])
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3,kernel_size=2,activation=tf.nn.relu,filters=16,padding="same")
# Now 28x28x16
logits = tf.layers.conv2d(conv6,kernel_size=3,activation=None,filters=1,padding="same")
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(logits=logits,labels=targets_)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
###Output
_____no_output_____
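###Markdown
The loss line above folds the sigmoid and the cross-entropy into one numerically stable op. As a reference (a small NumPy sketch, not part of the original notebook), the per-pixel quantity that `tf.nn.sigmoid_cross_entropy_with_logits` computes is the stable form max(x, 0) - x*z + log(1 + exp(-|x|)):
###Code
import numpy as np

def sigmoid_xent(logits, labels):
    # Stable form of -labels*log(sigmoid(x)) - (1 - labels)*log(1 - sigmoid(x))
    return np.maximum(logits, 0) - logits * labels + np.log1p(np.exp(-np.abs(logits)))

x = np.array([-2.0, 0.0, 3.0])   # example logits
z = np.array([ 0.0, 0.5, 1.0])   # example pixel targets in [0, 1]
print(sigmoid_xent(x, z))        # element-wise loss; tf.reduce_mean averages this over all pixels
###Output
_____no_output_____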
###Markdown
Training
As before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
###Code
sess = tf.Session()
epochs = 5
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
###Output
_____no_output_____
###Markdown
Denoising
As I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.

Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, with more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.

> **Exercise:** Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
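The noising step described above is just additive Gaussian noise followed by a clip back into the valid pixel range. A one-image sketch (assuming the `mnist` object loaded earlier and the same `noise_factor` idea used in the training loop below):
###Code
import numpy as np

noise_factor = 0.5
clean = mnist.train.images[0].reshape((28, 28, 1))            # clean image, values in [0, 1]
noisy = clean + noise_factor * np.random.randn(*clean.shape)  # add Gaussian noise
noisy = np.clip(noisy, 0., 1.)                                # keep pixel values in [0, 1]
###Output
_____no_output_____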
###Code
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, kernel_size=2, activation=tf.nn.relu, filters=32, padding="same")
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1,pool_size=2,strides=2,padding="same")
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, kernel_size=2, activation=tf.nn.relu, filters=32, padding="same")
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2,pool_size=2,strides=2,padding="same")
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2,filters=16,kernel_size=2,activation=tf.nn.relu,padding="same")
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3,pool_size=2,strides=2,padding="same")
print(encoded)
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded,[7,7])
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1,filters=16,activation=tf.nn.relu,kernel_size=2,padding="same")
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4,[14,14])
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2,filters=32,kernel_size=2,padding="same",activation = tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5,[28,28])
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3,filters=32,activation=tf.nn.relu,kernel_size=2,padding="same")
# Now 28x28x32
logits = tf.layers.conv2d(conv6,filters=1,activation=None,padding="same",kernel_size=2)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded =tf.nn.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(logits=logits,labels=targets_)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
###Output
_____no_output_____
###Markdown
Checking out the performance
Here I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
###Code
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
###Output
_____no_output_____
###Markdown
Convolutional Autoencoder
Sticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
###Code
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
###Output
_____no_output_____
###Markdown
Network Architecture
The encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.

Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data.

What's going on with the decoder
Okay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers *aren't*. Usually, you'll see **deconvolutional** layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the deconvolutional layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 patch in a deconvolutional layer. Deconvolution is often called "transpose convolution", which is what you'll find in the TensorFlow API, with [`tf.nn.conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose). However, deconvolutional layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels, which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al.*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with [`tf.image.resize_images`](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/image/resize_images), followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.

> **Exercise:** Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used to reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena *et al.* claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in `tf.image.resize_images` or use [`tf.image.resize_nearest_neighbor`](https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor).
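To see what nearest-neighbor upsampling does on its own (a small sketch, not part of the exercise), note that when the size doubles each pixel is simply replicated into a 2x2 block:
###Code
import numpy as np
import tensorflow as tf

tiny = np.arange(4, dtype=np.float32).reshape((1, 2, 2, 1))   # a single 2x2 "image"
doubled = tf.image.resize_nearest_neighbor(tiny, (4, 4))      # replicate each pixel into a 2x2 block
with tf.Session() as s:
    print(s.run(doubled)[0, :, :, 0])
###Output
_____no_output_____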
###Code
learning_rate = 0.001
n_elements = 28*28
inputs_ = tf.placeholder(tf.float32,(None,28,28,1))
targets_ = tf.placeholder(tf.float32,(None,28,28,1))
### Encoder
conv1 = tf.layers.conv2d(inputs_,16,(3,3),padding='same',activation=tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1,(2,2),(2,2),padding='same')
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1,8,(3,3),padding='same',activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2,(2,2),(2,2),padding='same')
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2,8,(3,3),padding='same',activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3,(2,2),(2,2),padding='same')
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_images(encoded,(7,7))
# Now 7x7x8
conv4 = tf.layers.conv2d_transpose(upsample1,8,(3,3), padding='same')
# Now 7x7x8
upsample2 = tf.image.resize_images(conv4,(14,14))
# Now 14x14x8
conv5 = tf.layers.conv2d_transpose(upsample2,8,(3,3), padding='same')
# Now 14x14x8
upsample3 = tf.image.resize_images(conv5,(28,28))
# Now 28x28x8
conv6 = tf.layers.conv2d_transpose(upsample3,16,(3,3), padding='same')
# Now 28x28x16
logits = tf.layers.conv2d(conv6,1,(3,3),padding='same')
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(logits=logits,labels=targets_)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
###Output
_____no_output_____
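###Markdown
The cell above leans on the shape arithmetic of 'same' max-pooling with a stride of 2: each pooling layer takes the ceiling of height/2, so the spatial size goes 28 -> 14 -> 7 -> 4. A quick sanity check of that arithmetic (a sketch, independent of the graph above):
###Code
import math

size = 28
for _ in range(3):
    size = math.ceil(size / 2)   # 'same' pooling with stride 2
    print(size)                  # 14, then 7, then 4
###Output
_____no_output_____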
###Markdown
Training
As before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
###Code
sess = tf.Session()
epochs = 1
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
###Output
_____no_output_____
###Markdown
Denoising
As I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.

Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, with more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.

> **Exercise:** Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
###Code
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 =
# Now 28x28x32
maxpool1 =
# Now 14x14x32
conv2 =
# Now 14x14x32
maxpool2 =
# Now 7x7x32
conv3 =
# Now 7x7x16
encoded =
# Now 4x4x16
### Decoder
upsample1 =
# Now 7x7x16
conv4 =
# Now 7x7x16
upsample2 =
# Now 14x14x16
conv5 =
# Now 14x14x32
upsample3 =
# Now 28x28x32
conv6 =
# Now 28x28x32
logits =
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded =
# Pass logits through sigmoid and calculate the cross-entropy loss
loss =
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
###Output
_____no_output_____
###Markdown
Checking out the performance
Here I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
###Code
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
###Output
_____no_output_____
###Markdown
Convolutional Autoencoder
Sticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
###Code
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
###Output
_____no_output_____
###Markdown
Network Architecture
The encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.

Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data.

What's going on with the decoder
Okay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers *aren't*. Usually, you'll see **transposed convolution** layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 patch in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, [`tf.nn.conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose). However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels, which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al.*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with [`tf.image.resize_images`](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/image/resize_images), followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.

> **Exercise:** Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used to reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena *et al.* claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in `tf.image.resize_images` or use [`tf.image.resize_nearest_neighbor`](https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor). For convolutional layers, use [`tf.layers.conv2d`](https://www.tensorflow.org/api_docs/python/tf/layers/conv2d). For example, you would write `conv1 = tf.layers.conv2d(inputs, 32, (5,5), padding='same', activation=tf.nn.relu)` for a layer with a depth of 32, a 5x5 kernel, a stride of (1,1), 'same' padding, and a ReLU activation. Similarly, for the max-pool layers, use [`tf.layers.max_pooling2d`](https://www.tensorflow.org/api_docs/python/tf/layers/max_pooling2d).
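The compression ratio quoted above is easy to verify: the 4x4x8 bottleneck holds 128 values against 784 input pixels, i.e. roughly 16%. A quick check:
###Code
encoded_size = 4 * 4 * 8    # values in the bottleneck
input_size = 28 * 28        # pixels in one MNIST image
print(encoded_size, input_size, encoded_size / input_size)
###Output
_____no_output_____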
###Code
learning_rate = 0.001
img_size = mnist.train.images.shape[1]
# Input and target placeholders
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1))
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1))
### Encoder
conv1 = tf.layers.conv2d(inputs_, 16, (3,3), padding='same', activation=tf.nn.relu,
kernel_initializer=tf.truncated_normal_initializer(stddev=0.1))
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded =tf.nn.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
###Output
_____no_output_____
###Markdown
Training
As before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
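The only data wrangling needed is reshaping each flattened 784-vector back into a 28x28x1 array before feeding it. A small sketch of that step (using the `mnist` object loaded earlier):
###Code
batch = mnist.train.next_batch(4)
flat = batch[0]                        # shape (4, 784)
imgs = flat.reshape((-1, 28, 28, 1))   # shape (4, 28, 28, 1), ready for inputs_
print(flat.shape, imgs.shape)
###Output
_____no_output_____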
###Code
sess = tf.Session()
epochs = 10
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
###Output
_____no_output_____
###Markdown
Denoising
As I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.

Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, with more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.

> **Exercise:** Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
###Code
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits, name='decoded')
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
###Output
_____no_output_____
###Markdown
Checking out the performance
Here I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
###Code
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
###Output
_____no_output_____
###Markdown
Convolutional Autoencoder
Sticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
###Code
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
###Output
_____no_output_____
###Markdown
Network Architecture
The encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.

Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data.

What's going on with the decoder
Okay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers *aren't*. Usually, you'll see **transposed convolution** layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 patch in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, [`tf.nn.conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose). However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels, which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al.*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with [`tf.image.resize_images`](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/image/resize_images), followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.

> **Exercise:** Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used to reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena *et al.* claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in `tf.image.resize_images` or use [`tf.image.resize_nearest_neighbor`](https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor). For convolutional layers, use [`tf.layers.conv2d`](https://www.tensorflow.org/api_docs/python/tf/layers/conv2d). For example, you would write `conv1 = tf.layers.conv2d(inputs, 32, (5,5), padding='same', activation=tf.nn.relu)` for a layer with a depth of 32, a 5x5 kernel, a stride of (1,1), 'same' padding, and a ReLU activation. Similarly, for the max-pool layers, use [`tf.layers.max_pooling2d`](https://www.tensorflow.org/api_docs/python/tf/layers/max_pooling2d).
###Code
learning_rate = 0.001
# Input and target placeholders
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits, name='decoded')
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
###Output
_____no_output_____
###Markdown
Training
As before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
###Code
sess = tf.Session()
epochs = 2
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
###Output
_____no_output_____
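###Markdown
A stylistic alternative to the explicit `sess.close()` used above (assuming TensorFlow 1.x) is to run the same loop inside a context manager, which closes the session automatically even if training raises an error:
###Code
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for e in range(epochs):
        for ii in range(mnist.train.num_examples // batch_size):
            batch = mnist.train.next_batch(batch_size)
            imgs = batch[0].reshape((-1, 28, 28, 1))
            batch_cost, _ = sess.run([cost, opt],
                                     feed_dict={inputs_: imgs, targets_: imgs})
# The session is closed here automatically; no explicit sess.close() needed.
###Output
_____no_output_____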
###Markdown
Denoising
As I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.

Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, with more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.

> **Exercise:** Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
###Code
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits, name='decoded')
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 2
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
###Output
Epoch: 1/2... Training loss: 0.6907
Epoch: 1/2... Training loss: 0.6744
Epoch: 1/2... Training loss: 0.6431
Epoch: 1/2... Training loss: 0.5992
Epoch: 1/2... Training loss: 0.5492
Epoch: 1/2... Training loss: 0.5149
Epoch: 1/2... Training loss: 0.5464
Epoch: 1/2... Training loss: 0.5319
Epoch: 1/2... Training loss: 0.5023
Epoch: 1/2... Training loss: 0.4810
Epoch: 1/2... Training loss: 0.4711
Epoch: 1/2... Training loss: 0.4742
Epoch: 1/2... Training loss: 0.4674
Epoch: 1/2... Training loss: 0.4564
Epoch: 1/2... Training loss: 0.4497
Epoch: 1/2... Training loss: 0.4416
Epoch: 1/2... Training loss: 0.4278
Epoch: 1/2... Training loss: 0.4032
Epoch: 1/2... Training loss: 0.3786
Epoch: 1/2... Training loss: 0.3717
Epoch: 1/2... Training loss: 0.3667
Epoch: 1/2... Training loss: 0.3451
Epoch: 1/2... Training loss: 0.3311
Epoch: 1/2... Training loss: 0.3146
Epoch: 1/2... Training loss: 0.3127
Epoch: 1/2... Training loss: 0.2983
Epoch: 1/2... Training loss: 0.2911
Epoch: 1/2... Training loss: 0.2856
Epoch: 1/2... Training loss: 0.2790
Epoch: 1/2... Training loss: 0.2739
Epoch: 1/2... Training loss: 0.2700
Epoch: 1/2... Training loss: 0.2784
Epoch: 1/2... Training loss: 0.2674
Epoch: 1/2... Training loss: 0.2696
Epoch: 1/2... Training loss: 0.2614
Epoch: 1/2... Training loss: 0.2655
Epoch: 1/2... Training loss: 0.2655
Epoch: 1/2... Training loss: 0.2664
Epoch: 1/2... Training loss: 0.2681
Epoch: 1/2... Training loss: 0.2613
Epoch: 1/2... Training loss: 0.2528
Epoch: 1/2... Training loss: 0.2635
Epoch: 1/2... Training loss: 0.2679
Epoch: 1/2... Training loss: 0.2640
Epoch: 1/2... Training loss: 0.2642
Epoch: 1/2... Training loss: 0.2642
Epoch: 1/2... Training loss: 0.2621
Epoch: 1/2... Training loss: 0.2581
Epoch: 1/2... Training loss: 0.2591
Epoch: 1/2... Training loss: 0.2540
Epoch: 1/2... Training loss: 0.2566
Epoch: 1/2... Training loss: 0.2556
Epoch: 1/2... Training loss: 0.2554
Epoch: 1/2... Training loss: 0.2523
Epoch: 1/2... Training loss: 0.2411
Epoch: 1/2... Training loss: 0.2573
Epoch: 1/2... Training loss: 0.2482
Epoch: 1/2... Training loss: 0.2544
Epoch: 1/2... Training loss: 0.2437
Epoch: 1/2... Training loss: 0.2525
Epoch: 1/2... Training loss: 0.2345
Epoch: 1/2... Training loss: 0.2405
Epoch: 1/2... Training loss: 0.2346
Epoch: 1/2... Training loss: 0.2327
Epoch: 1/2... Training loss: 0.2265
Epoch: 1/2... Training loss: 0.2311
Epoch: 1/2... Training loss: 0.2241
Epoch: 1/2... Training loss: 0.2238
Epoch: 1/2... Training loss: 0.2127
Epoch: 1/2... Training loss: 0.2137
Epoch: 1/2... Training loss: 0.2197
Epoch: 1/2... Training loss: 0.2162
Epoch: 1/2... Training loss: 0.2187
Epoch: 1/2... Training loss: 0.2132
Epoch: 1/2... Training loss: 0.2165
Epoch: 1/2... Training loss: 0.2109
Epoch: 1/2... Training loss: 0.2135
Epoch: 1/2... Training loss: 0.2079
Epoch: 1/2... Training loss: 0.2070
Epoch: 1/2... Training loss: 0.2032
Epoch: 1/2... Training loss: 0.2028
Epoch: 1/2... Training loss: 0.2004
Epoch: 1/2... Training loss: 0.1981
Epoch: 1/2... Training loss: 0.1972
Epoch: 1/2... Training loss: 0.1891
Epoch: 1/2... Training loss: 0.1896
Epoch: 1/2... Training loss: 0.1878
Epoch: 1/2... Training loss: 0.1899
Epoch: 1/2... Training loss: 0.1934
Epoch: 1/2... Training loss: 0.1913
Epoch: 1/2... Training loss: 0.1812
Epoch: 1/2... Training loss: 0.1915
Epoch: 1/2... Training loss: 0.1904
Epoch: 1/2... Training loss: 0.1877
Epoch: 1/2... Training loss: 0.1869
Epoch: 1/2... Training loss: 0.1849
Epoch: 1/2... Training loss: 0.1809
Epoch: 1/2... Training loss: 0.1796
Epoch: 1/2... Training loss: 0.1783
Epoch: 1/2... Training loss: 0.1803
Epoch: 1/2... Training loss: 0.1836
Epoch: 1/2... Training loss: 0.1767
Epoch: 1/2... Training loss: 0.1719
Epoch: 1/2... Training loss: 0.1760
Epoch: 1/2... Training loss: 0.1707
Epoch: 1/2... Training loss: 0.1703
Epoch: 1/2... Training loss: 0.1725
Epoch: 1/2... Training loss: 0.1684
Epoch: 1/2... Training loss: 0.1667
Epoch: 1/2... Training loss: 0.1692
Epoch: 1/2... Training loss: 0.1714
Epoch: 1/2... Training loss: 0.1669
Epoch: 1/2... Training loss: 0.1633
Epoch: 1/2... Training loss: 0.1638
Epoch: 1/2... Training loss: 0.1664
Epoch: 1/2... Training loss: 0.1685
Epoch: 1/2... Training loss: 0.1598
Epoch: 1/2... Training loss: 0.1645
Epoch: 1/2... Training loss: 0.1675
Epoch: 1/2... Training loss: 0.1666
Epoch: 1/2... Training loss: 0.1642
Epoch: 1/2... Training loss: 0.1571
Epoch: 1/2... Training loss: 0.1656
Epoch: 1/2... Training loss: 0.1592
Epoch: 1/2... Training loss: 0.1569
Epoch: 1/2... Training loss: 0.1606
Epoch: 1/2... Training loss: 0.1595
Epoch: 1/2... Training loss: 0.1567
Epoch: 1/2... Training loss: 0.1568
Epoch: 1/2... Training loss: 0.1566
Epoch: 1/2... Training loss: 0.1559
Epoch: 1/2... Training loss: 0.1556
Epoch: 1/2... Training loss: 0.1576
Epoch: 1/2... Training loss: 0.1574
Epoch: 1/2... Training loss: 0.1550
Epoch: 1/2... Training loss: 0.1513
Epoch: 1/2... Training loss: 0.1543
Epoch: 1/2... Training loss: 0.1499
Epoch: 1/2... Training loss: 0.1494
Epoch: 1/2... Training loss: 0.1532
Epoch: 1/2... Training loss: 0.1520
Epoch: 1/2... Training loss: 0.1524
Epoch: 1/2... Training loss: 0.1523
Epoch: 1/2... Training loss: 0.1490
Epoch: 1/2... Training loss: 0.1551
Epoch: 1/2... Training loss: 0.1481
Epoch: 1/2... Training loss: 0.1505
Epoch: 1/2... Training loss: 0.1527
Epoch: 1/2... Training loss: 0.1505
Epoch: 1/2... Training loss: 0.1527
Epoch: 1/2... Training loss: 0.1516
Epoch: 1/2... Training loss: 0.1497
Epoch: 1/2... Training loss: 0.1496
Epoch: 1/2... Training loss: 0.1455
Epoch: 1/2... Training loss: 0.1485
Epoch: 1/2... Training loss: 0.1541
Epoch: 1/2... Training loss: 0.1459
Epoch: 1/2... Training loss: 0.1512
Epoch: 1/2... Training loss: 0.1510
Epoch: 1/2... Training loss: 0.1455
Epoch: 1/2... Training loss: 0.1496
Epoch: 1/2... Training loss: 0.1495
Epoch: 1/2... Training loss: 0.1437
Epoch: 1/2... Training loss: 0.1450
Epoch: 1/2... Training loss: 0.1444
Epoch: 1/2... Training loss: 0.1445
Epoch: 1/2... Training loss: 0.1432
Epoch: 1/2... Training loss: 0.1421
Epoch: 1/2... Training loss: 0.1444
Epoch: 1/2... Training loss: 0.1462
Epoch: 1/2... Training loss: 0.1431
Epoch: 1/2... Training loss: 0.1436
Epoch: 1/2... Training loss: 0.1392
Epoch: 1/2... Training loss: 0.1401
Epoch: 1/2... Training loss: 0.1430
Epoch: 1/2... Training loss: 0.1455
Epoch: 1/2... Training loss: 0.1432
Epoch: 1/2... Training loss: 0.1411
Epoch: 1/2... Training loss: 0.1358
Epoch: 1/2... Training loss: 0.1411
Epoch: 1/2... Training loss: 0.1432
Epoch: 1/2... Training loss: 0.1443
Epoch: 1/2... Training loss: 0.1427
Epoch: 1/2... Training loss: 0.1434
Epoch: 1/2... Training loss: 0.1405
Epoch: 1/2... Training loss: 0.1379
Epoch: 1/2... Training loss: 0.1344
Epoch: 1/2... Training loss: 0.1424
Epoch: 1/2... Training loss: 0.1392
Epoch: 1/2... Training loss: 0.1375
Epoch: 1/2... Training loss: 0.1329
Epoch: 1/2... Training loss: 0.1376
Epoch: 1/2... Training loss: 0.1372
Epoch: 1/2... Training loss: 0.1412
Epoch: 1/2... Training loss: 0.1352
Epoch: 1/2... Training loss: 0.1400
Epoch: 1/2... Training loss: 0.1376
Epoch: 1/2... Training loss: 0.1359
Epoch: 1/2... Training loss: 0.1368
Epoch: 1/2... Training loss: 0.1405
Epoch: 1/2... Training loss: 0.1334
Epoch: 1/2... Training loss: 0.1362
Epoch: 1/2... Training loss: 0.1362
Epoch: 1/2... Training loss: 0.1372
Epoch: 1/2... Training loss: 0.1403
Epoch: 1/2... Training loss: 0.1362
Epoch: 1/2... Training loss: 0.1315
Epoch: 1/2... Training loss: 0.1349
Epoch: 1/2... Training loss: 0.1333
Epoch: 1/2... Training loss: 0.1361
Epoch: 1/2... Training loss: 0.1310
Epoch: 1/2... Training loss: 0.1394
Epoch: 1/2... Training loss: 0.1352
Epoch: 1/2... Training loss: 0.1379
Epoch: 1/2... Training loss: 0.1339
Epoch: 1/2... Training loss: 0.1348
Epoch: 1/2... Training loss: 0.1388
Epoch: 1/2... Training loss: 0.1332
Epoch: 1/2... Training loss: 0.1343
Epoch: 1/2... Training loss: 0.1321
Epoch: 1/2... Training loss: 0.1365
Epoch: 1/2... Training loss: 0.1398
Epoch: 1/2... Training loss: 0.1329
Epoch: 1/2... Training loss: 0.1332
Epoch: 1/2... Training loss: 0.1295
Epoch: 1/2... Training loss: 0.1295
Epoch: 1/2... Training loss: 0.1351
Epoch: 1/2... Training loss: 0.1316
###Markdown
Checking out the performance
Here I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
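Beyond eyeballing the reconstructions, the same `cost` node can put a number on that claim by scoring noisy test images against their clean versions (a small sketch using the tensors already defined above):
###Code
test_imgs = mnist.test.images[:1000].reshape((-1, 28, 28, 1))
test_noisy = np.clip(test_imgs + noise_factor * np.random.randn(*test_imgs.shape), 0., 1.)
test_cost = sess.run(cost, feed_dict={inputs_: test_noisy, targets_: test_imgs})
print("Mean sigmoid cross-entropy on noisy test images: {:.4f}".format(test_cost))
###Output
_____no_output_____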
###Code
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
###Output
_____no_output_____
###Markdown
Convolutional Autoencoder
Sticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
###Code
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
###Output
_____no_output_____
###Markdown
Network ArchitectureThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoderOkay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers *aren't*. Usually, you'll see **transposed convolution** layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, [`tf.nn.conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose). However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with [`tf.image.resize_images`](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/image/resize_images), followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.> **Exercise:** Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena *et al* claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in `tf.image.resize_images` or use [`tf.image.resize_nearest_neighbor`]( `https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor). For convolutional layers, use [`tf.layers.conv2d`](https://www.tensorflow.org/api_docs/python/tf/layers/conv2d). 
For example, you would write `conv1 = tf.layers.conv2d(inputs, 32, (5,5), padding='same', activation=tf.nn.relu)` for a layer with a depth of 32, a 5x5 kernel, stride of (1,1), padding is 'same', and a ReLU activation. Similarly, for the max-pool layers, use [`tf.layers.max_pooling2d`](https://www.tensorflow.org/api_docs/python/tf/layers/max_pooling2d).
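To make the contrast concrete, here is a minimal sketch (not the exercise solution; `x`, `up_transpose`, and `up_resize` are illustrative names) of one decoder step written both ways:

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, (None, 7, 7, 8))   # e.g. a 7x7x8 decoder layer

# 1) Transposed convolution: a stride of 2 doubles height and width, but the
#    overlapping kernels can produce checkerboard artifacts.
up_transpose = tf.layers.conv2d_transpose(x, 8, (3, 3), strides=(2, 2), padding='same')

# 2) Nearest-neighbor upsampling followed by a regular convolution, the
#    approach recommended in the Distill article.
up_resize = tf.image.resize_nearest_neighbor(x, (14, 14))
up_resize = tf.layers.conv2d(up_resize, 8, (3, 3), padding='same', activation=tf.nn.relu)
```

Both versions produce a 14x14x8 tensor; only the second avoids the checkerboard issue by construction.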
###Code
learning_rate = 0.001
# Input and target placeholders
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 16, (3,3),
                         padding='same',
                         activation=tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, 8, (3,3),
                         padding='same',
                         activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, (3,3),
                         padding='same',
                         activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x8
### Decoder
# Upsample with nearest-neighbor interpolation, then convolve
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
# Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
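# Note: tf.nn.sigmoid_cross_entropy_with_logits applies the sigmoid internally,
# so the raw logits are what go into the loss; `decoded` above is only needed
# when we want to look at the reconstructed images.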
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
###Output
_____no_output_____
###Markdown
TrainingAs before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
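To be concrete, a quick sketch (using the `mnist` object loaded above) of the reshape that turns a flattened batch into 28x28x1 arrays:

```python
# MNIST batches come flattened as (batch_size, 784); the convolutional network
# expects (batch_size, 28, 28, 1), so reshape before feeding the placeholders.
batch = mnist.train.next_batch(200)
print(batch[0].shape)                     # (200, 784)
imgs = batch[0].reshape((-1, 28, 28, 1))
print(imgs.shape)                         # (200, 28, 28, 1)
```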
###Code
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
###Output
_____no_output_____
###Markdown
DenoisingAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.> **Exercise:** Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
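A minimal sketch of the noising step described above (here `imgs` is just a stand-in array; the training loop below applies the same two lines to each MNIST batch):

```python
import numpy as np

imgs = np.random.rand(10, 28, 28, 1)   # stand-in for a batch of images scaled to [0, 1]
noise_factor = 0.5
# Add zero-mean Gaussian noise scaled by noise_factor, then clip back into [0, 1].
noisy_imgs = np.clip(imgs + noise_factor * np.random.randn(*imgs.shape), 0., 1.)
```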
###Code
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
# Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
###Output
_____no_output_____
###Markdown
Checking out the performanceHere I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
###Code
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
###Output
_____no_output_____
###Markdown
Convolutional AutoencoderSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
###Code
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
###Output
_____no_output_____
###Markdown
Network ArchitectureThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoderOkay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers *aren't*. Usually, you'll see **transposed convolution** layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, [`tf.nn.conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose). However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with [`tf.image.resize_images`](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/image/resize_images), followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.> **Exercise:** Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena *et al* claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in `tf.image.resize_images` or use [`tf.image.resize_nearest_neighbor`]( `https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor). For convolutional layers, use [`tf.layers.conv2d`](https://www.tensorflow.org/api_docs/python/tf/layers/conv2d). 
For example, you would write `conv1 = tf.layers.conv2d(inputs, 32, (5,5), padding='same', activation=tf.nn.relu)` for a layer with a depth of 32, a 5x5 kernel, stride of (1,1), padding is 'same', and a ReLU activation. Similarly, for the max-pool layers, use [`tf.layers.max_pooling2d`](https://www.tensorflow.org/api_docs/python/tf/layers/max_pooling2d).
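One way to sanity-check the "Now HxWxD" comments as you build the graph is to print each layer's static shape; a minimal sketch (the names `x`, `pool1`, `pool2`, `pool3` are only for illustration):

```python
import tensorflow as tf

# Check how pool size 2 / stride 2 with 'same' padding shrinks the spatial dims.
x = tf.placeholder(tf.float32, (None, 28, 28, 1))
pool1 = tf.layers.max_pooling2d(x, (2, 2), (2, 2), padding='same')
pool2 = tf.layers.max_pooling2d(pool1, (2, 2), (2, 2), padding='same')
pool3 = tf.layers.max_pooling2d(pool2, (2, 2), (2, 2), padding='same')
print(pool1.get_shape().as_list())   # [None, 14, 14, 1]
print(pool2.get_shape().as_list())   # [None, 7, 7, 1]
print(pool3.get_shape().as_list())   # [None, 4, 4, 1] -- 'same' padding rounds 7 up to 4
```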
###Code
# Input and target placeholders
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1))
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1))
### Encoder
conv1 = tf.layers.conv2d(inputs_, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1,2,2, padding="same")
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2,2,2, padding="same")
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3,2,2, padding="same")
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=targets_)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(.001).minimize(cost)
###Output
_____no_output_____
###Markdown
TrainingAs before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
###Code
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
###Output
_____no_output_____
###Markdown
DenoisingAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.> **Exercise:** Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
###Code
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
# Input and target placeholders
### Encoder
conv1 = tf.layers.conv2d(inputs_, 32, (3,3), padding='same', activation=tf.nn.relu, name="conv1")
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1,2,2, padding="same", name="maxpool1")
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, (3,3), padding='same', activation=tf.nn.relu, name="conv2")
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2,2,2, padding="same", name="maxpool2")
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, (3,3), padding='same', activation=tf.nn.relu, name="conv3")
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3,2,2, padding="same", name="maxpool3")
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7), name="upsample1")
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, (3,3), padding='same', activation=tf.nn.relu, name="conv4")
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14), name="upsample2")
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, (3,3), padding='same', activation=tf.nn.relu, name="conv5")
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28), name="upsample3")
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, (3,3), padding='same', activation=tf.nn.relu, name="conv6")
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None, name="logits")
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits, name="decoded")
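# Note: giving key tensors a name (e.g. name="decoded") makes them easy to look
# up later, for instance with tf.get_default_graph().get_tensor_by_name("decoded:0")
# after restoring a saved model.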
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=targets_)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
###Output
Epoch: 1/100... Training loss: 0.6943
Epoch: 1/100... Training loss: 0.6617
Epoch: 1/100... Training loss: 0.6244
Epoch: 1/100... Training loss: 0.5784
Epoch: 1/100... Training loss: 0.5297
Epoch: 1/100... Training loss: 0.5069
Epoch: 1/100... Training loss: 0.5103
Epoch: 1/100... Training loss: 0.5297
Epoch: 1/100... Training loss: 0.5104
Epoch: 1/100... Training loss: 0.4974
Epoch: 1/100... Training loss: 0.4719
Epoch: 1/100... Training loss: 0.4594
Epoch: 1/100... Training loss: 0.4712
Epoch: 1/100... Training loss: 0.4552
Epoch: 1/100... Training loss: 0.4584
Epoch: 1/100... Training loss: 0.4563
Epoch: 1/100... Training loss: 0.4415
Epoch: 1/100... Training loss: 0.4365
Epoch: 1/100... Training loss: 0.4258
Epoch: 1/100... Training loss: 0.4144
Epoch: 1/100... Training loss: 0.4048
Epoch: 1/100... Training loss: 0.3923
Epoch: 1/100... Training loss: 0.3838
Epoch: 1/100... Training loss: 0.3776
Epoch: 1/100... Training loss: 0.3627
Epoch: 1/100... Training loss: 0.3527
Epoch: 1/100... Training loss: 0.3463
Epoch: 1/100... Training loss: 0.3307
Epoch: 1/100... Training loss: 0.3269
Epoch: 1/100... Training loss: 0.3132
Epoch: 1/100... Training loss: 0.3132
Epoch: 1/100... Training loss: 0.3014
Epoch: 1/100... Training loss: 0.2896
Epoch: 1/100... Training loss: 0.2833
Epoch: 1/100... Training loss: 0.2891
Epoch: 1/100... Training loss: 0.2813
Epoch: 1/100... Training loss: 0.2711
Epoch: 1/100... Training loss: 0.2656
Epoch: 1/100... Training loss: 0.2665
Epoch: 1/100... Training loss: 0.2702
Epoch: 1/100... Training loss: 0.2696
Epoch: 1/100... Training loss: 0.2614
Epoch: 1/100... Training loss: 0.2622
Epoch: 1/100... Training loss: 0.2661
Epoch: 1/100... Training loss: 0.2642
Epoch: 1/100... Training loss: 0.2679
Epoch: 1/100... Training loss: 0.2618
Epoch: 1/100... Training loss: 0.2627
Epoch: 1/100... Training loss: 0.2560
Epoch: 1/100... Training loss: 0.2570
Epoch: 1/100... Training loss: 0.2557
Epoch: 1/100... Training loss: 0.2469
Epoch: 1/100... Training loss: 0.2496
Epoch: 1/100... Training loss: 0.2409
Epoch: 1/100... Training loss: 0.2477
Epoch: 1/100... Training loss: 0.2477
Epoch: 1/100... Training loss: 0.2373
Epoch: 1/100... Training loss: 0.2398
Epoch: 1/100... Training loss: 0.2388
Epoch: 1/100... Training loss: 0.2357
Epoch: 1/100... Training loss: 0.2456
Epoch: 1/100... Training loss: 0.2382
Epoch: 1/100... Training loss: 0.2272
Epoch: 1/100... Training loss: 0.2446
Epoch: 1/100... Training loss: 0.2484
Epoch: 1/100... Training loss: 0.2421
Epoch: 1/100... Training loss: 0.2323
Epoch: 1/100... Training loss: 0.2315
Epoch: 1/100... Training loss: 0.2386
Epoch: 1/100... Training loss: 0.2348
Epoch: 1/100... Training loss: 0.2339
Epoch: 1/100... Training loss: 0.2329
Epoch: 1/100... Training loss: 0.2410
Epoch: 1/100... Training loss: 0.2384
Epoch: 1/100... Training loss: 0.2326
Epoch: 1/100... Training loss: 0.2261
Epoch: 1/100... Training loss: 0.2290
Epoch: 1/100... Training loss: 0.2355
Epoch: 1/100... Training loss: 0.2339
Epoch: 1/100... Training loss: 0.2313
Epoch: 1/100... Training loss: 0.2285
Epoch: 1/100... Training loss: 0.2247
Epoch: 1/100... Training loss: 0.2285
Epoch: 1/100... Training loss: 0.2248
Epoch: 1/100... Training loss: 0.2309
Epoch: 1/100... Training loss: 0.2245
Epoch: 1/100... Training loss: 0.2281
Epoch: 1/100... Training loss: 0.2255
Epoch: 1/100... Training loss: 0.2276
Epoch: 1/100... Training loss: 0.2253
Epoch: 1/100... Training loss: 0.2258
Epoch: 1/100... Training loss: 0.2248
Epoch: 1/100... Training loss: 0.2226
Epoch: 1/100... Training loss: 0.2201
Epoch: 1/100... Training loss: 0.2192
Epoch: 1/100... Training loss: 0.2233
Epoch: 1/100... Training loss: 0.2191
Epoch: 1/100... Training loss: 0.2223
Epoch: 1/100... Training loss: 0.2160
Epoch: 1/100... Training loss: 0.2181
Epoch: 1/100... Training loss: 0.2199
Epoch: 1/100... Training loss: 0.2177
Epoch: 1/100... Training loss: 0.2257
Epoch: 1/100... Training loss: 0.2305
Epoch: 1/100... Training loss: 0.2196
Epoch: 1/100... Training loss: 0.2139
Epoch: 1/100... Training loss: 0.2221
Epoch: 1/100... Training loss: 0.2131
Epoch: 1/100... Training loss: 0.2301
Epoch: 1/100... Training loss: 0.2245
Epoch: 1/100... Training loss: 0.2215
Epoch: 1/100... Training loss: 0.2217
Epoch: 1/100... Training loss: 0.2192
Epoch: 1/100... Training loss: 0.2232
Epoch: 1/100... Training loss: 0.2135
Epoch: 1/100... Training loss: 0.2274
Epoch: 1/100... Training loss: 0.2145
Epoch: 1/100... Training loss: 0.2137
Epoch: 1/100... Training loss: 0.2110
Epoch: 1/100... Training loss: 0.2157
Epoch: 1/100... Training loss: 0.2152
Epoch: 1/100... Training loss: 0.2141
Epoch: 1/100... Training loss: 0.2193
Epoch: 1/100... Training loss: 0.2090
Epoch: 1/100... Training loss: 0.2116
Epoch: 1/100... Training loss: 0.2081
Epoch: 1/100... Training loss: 0.2110
Epoch: 1/100... Training loss: 0.2116
Epoch: 1/100... Training loss: 0.2114
Epoch: 1/100... Training loss: 0.2049
Epoch: 1/100... Training loss: 0.2065
Epoch: 1/100... Training loss: 0.2085
Epoch: 1/100... Training loss: 0.2107
Epoch: 1/100... Training loss: 0.2096
Epoch: 1/100... Training loss: 0.2163
Epoch: 1/100... Training loss: 0.2095
Epoch: 1/100... Training loss: 0.2073
Epoch: 1/100... Training loss: 0.2083
Epoch: 1/100... Training loss: 0.2017
Epoch: 1/100... Training loss: 0.2102
Epoch: 1/100... Training loss: 0.2085
Epoch: 1/100... Training loss: 0.2024
Epoch: 1/100... Training loss: 0.2016
Epoch: 1/100... Training loss: 0.2025
Epoch: 1/100... Training loss: 0.2023
Epoch: 1/100... Training loss: 0.2069
Epoch: 1/100... Training loss: 0.2006
Epoch: 1/100... Training loss: 0.2070
Epoch: 1/100... Training loss: 0.1962
Epoch: 1/100... Training loss: 0.2071
Epoch: 1/100... Training loss: 0.1939
Epoch: 1/100... Training loss: 0.1941
Epoch: 1/100... Training loss: 0.1934
Epoch: 1/100... Training loss: 0.1983
Epoch: 1/100... Training loss: 0.2040
Epoch: 1/100... Training loss: 0.2022
Epoch: 1/100... Training loss: 0.1974
Epoch: 1/100... Training loss: 0.2072
Epoch: 1/100... Training loss: 0.1920
Epoch: 1/100... Training loss: 0.2000
Epoch: 1/100... Training loss: 0.1998
Epoch: 1/100... Training loss: 0.2007
Epoch: 1/100... Training loss: 0.1980
Epoch: 1/100... Training loss: 0.1918
Epoch: 1/100... Training loss: 0.2010
Epoch: 1/100... Training loss: 0.1949
Epoch: 1/100... Training loss: 0.2020
Epoch: 1/100... Training loss: 0.1959
Epoch: 1/100... Training loss: 0.1978
Epoch: 1/100... Training loss: 0.1924
Epoch: 1/100... Training loss: 0.1931
Epoch: 1/100... Training loss: 0.1964
Epoch: 1/100... Training loss: 0.1964
Epoch: 1/100... Training loss: 0.1938
Epoch: 1/100... Training loss: 0.1896
Epoch: 1/100... Training loss: 0.1936
Epoch: 1/100... Training loss: 0.1951
Epoch: 1/100... Training loss: 0.1944
Epoch: 1/100... Training loss: 0.1963
Epoch: 1/100... Training loss: 0.1948
Epoch: 1/100... Training loss: 0.1942
Epoch: 1/100... Training loss: 0.1947
Epoch: 1/100... Training loss: 0.1922
Epoch: 1/100... Training loss: 0.1866
Epoch: 1/100... Training loss: 0.1953
Epoch: 1/100... Training loss: 0.1873
Epoch: 1/100... Training loss: 0.1895
Epoch: 1/100... Training loss: 0.1873
Epoch: 1/100... Training loss: 0.1904
Epoch: 1/100... Training loss: 0.1874
Epoch: 1/100... Training loss: 0.1916
Epoch: 1/100... Training loss: 0.1926
Epoch: 1/100... Training loss: 0.1943
Epoch: 1/100... Training loss: 0.1922
Epoch: 1/100... Training loss: 0.1894
Epoch: 1/100... Training loss: 0.1889
Epoch: 1/100... Training loss: 0.1879
Epoch: 1/100... Training loss: 0.1838
Epoch: 1/100... Training loss: 0.1865
Epoch: 1/100... Training loss: 0.1900
Epoch: 1/100... Training loss: 0.1862
Epoch: 1/100... Training loss: 0.1832
Epoch: 1/100... Training loss: 0.1911
Epoch: 1/100... Training loss: 0.1869
Epoch: 1/100... Training loss: 0.1877
Epoch: 1/100... Training loss: 0.1885
Epoch: 1/100... Training loss: 0.1880
Epoch: 1/100... Training loss: 0.1855
Epoch: 1/100... Training loss: 0.1885
Epoch: 1/100... Training loss: 0.1848
Epoch: 1/100... Training loss: 0.1858
Epoch: 1/100... Training loss: 0.1907
Epoch: 1/100... Training loss: 0.1837
Epoch: 1/100... Training loss: 0.1863
Epoch: 1/100... Training loss: 0.1857
Epoch: 1/100... Training loss: 0.1845
###Markdown
Checking out the performanceHere I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
###Code
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
###Output
_____no_output_____
###Markdown
Convolutional AutoencoderSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
###Code
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
###Output
_____no_output_____
###Markdown
Network ArchitectureThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoderOkay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers *aren't*. Usually, you'll see **transposed convolution** layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, [`tf.nn.conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose). However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with [`tf.image.resize_images`](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/image/resize_images), followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.> **Exercise:** Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena *et al* claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in `tf.image.resize_images` or use [`tf.image.resize_nearest_neighbor`]( `https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor). For convolutional layers, use [`tf.layers.conv2d`](https://www.tensorflow.org/api_docs/python/tf/layers/conv2d). 
For example, you would write `conv1 = tf.layers.conv2d(inputs, 32, (5,5), padding='same', activation=tf.nn.relu)` for a layer with a depth of 32, a 5x5 kernel, stride of (1,1), padding is 'same', and a ReLU activation. Similarly, for the max-pool layers, use [`tf.layers.max_pooling2d`](https://www.tensorflow.org/api_docs/python/tf/layers/max_pooling2d).
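Equivalently, the more general `tf.image.resize_images` linked above can do the nearest-neighbor upsampling; a minimal sketch (the names `x` and `upsampled` are only for illustration):

```python
import tensorflow as tf

# Nearest-neighbor upsampling via the generic resize_images API.
x = tf.placeholder(tf.float32, (None, 7, 7, 8))
upsampled = tf.image.resize_images(x, (14, 14),
                                   method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
```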
###Code
learning_rate = 0.001
height = 28
width = 28
# Input and target placeholders
inputs_ = tf.placeholder(tf.float32, (None, height, width, 1))
targets_ = tf.placeholder(tf.float32, (None, height, width, 1))
### Encoder
conv1 = tf.layers.conv2d(inputs_, 16, (3, 3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, (2, 2), (2, 2), padding='same')
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, 8, (3, 3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, (2, 2), (2, 2), padding='same')
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, (3, 3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, (2, 2), (2, 2), padding='same')
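# Note: each max-pool above uses pool size 2 with stride 2, halving the spatial
# dimensions at every step: 28 -> 14 -> 7 -> 4 (with 'same' padding, the odd
# size 7 rounds up to 4).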
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7, 7))
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, (3, 3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14, 14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (3, 3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28, 28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, (3, 3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(conv6, 1, (3, 3), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=targets_)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
###Output
_____no_output_____
###Markdown
TrainingAs before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
###Code
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
###Output
_____no_output_____
###Markdown
DenoisingAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.> **Exercise:** Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
###Code
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 32, (3, 3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, (2, 2), (2, 2), padding='same')
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, (3, 3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, (2, 2), (2, 2), padding='same')
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, (3, 3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, (2, 2), (2, 2), padding='same')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7, 7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, (3, 3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14, 14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, (3, 3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28, 28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, (3, 3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (3, 3), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=targets_)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
###Output
_____no_output_____
###Markdown
Checking out the performanceHere I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
###Code
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
###Output
_____no_output_____
###Markdown
Convolutional AutoencoderSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
###Code
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
###Output
_____no_output_____
###Markdown
Network ArchitectureThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoderOkay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers *aren't*. Usually, you'll see **transposed convolution** layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, [`tf.nn.conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose). However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with [`tf.image.resize_images`](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/image/resize_images), followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.> **Exercise:** Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena *et al* claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in `tf.image.resize_images` or use [`tf.image.resize_nearest_neighbor`]( `https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor). For convolutional layers, use [`tf.layers.conv2d`](https://www.tensorflow.org/api_docs/python/tf/layers/conv2d). 
For example, you would write `conv1 = tf.layers.conv2d(inputs, 32, (5,5), padding='same', activation=tf.nn.relu)` for a layer with a depth of 32, a 5x5 kernel, stride of (1,1), padding is 'same', and a ReLU activation. Similarly, for the max-pool layers, use [`tf.layers.max_pooling2d`](https://www.tensorflow.org/api_docs/python/tf/layers/max_pooling2d).
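As a quick sanity check of the "roughly 16%" figure quoted above, the arithmetic is just:

```python
encoded_size = 4 * 4 * 8      # values in the 4x4x8 encoded layer
original_size = 28 * 28       # pixels in one MNIST image
print(encoded_size / original_size)   # ~0.163, i.e. roughly 16%
```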
###Code
learning_rate = 0.001
# Input and target placeholders
image_size = mnist.train.images.shape[1]
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits, name='decoded')
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
###Output
_____no_output_____
###Markdown
TrainingAs before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
###Code
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
###Output
_____no_output_____
###Markdown
DenoisingAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.> **Exercise:** Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
###Code
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits, name='decoded')
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
###Output
Epoch: 1/100... Training loss: 0.6739
Epoch: 1/100... Training loss: 0.6408
Epoch: 1/100... Training loss: 0.5968
Epoch: 1/100... Training loss: 0.5471
Epoch: 1/100... Training loss: 0.5089
Epoch: 1/100... Training loss: 0.5263
Epoch: 1/100... Training loss: 0.5378
Epoch: 1/100... Training loss: 0.5173
Epoch: 1/100... Training loss: 0.4971
Epoch: 1/100... Training loss: 0.4717
Epoch: 1/100... Training loss: 0.4652
Epoch: 1/100... Training loss: 0.4677
Epoch: 1/100... Training loss: 0.4590
Epoch: 1/100... Training loss: 0.4532
Epoch: 1/100... Training loss: 0.4474
Epoch: 1/100... Training loss: 0.4306
Epoch: 1/100... Training loss: 0.4316
Epoch: 1/100... Training loss: 0.4183
Epoch: 1/100... Training loss: 0.4082
Epoch: 1/100... Training loss: 0.3897
Epoch: 1/100... Training loss: 0.3793
Epoch: 1/100... Training loss: 0.3696
Epoch: 1/100... Training loss: 0.3616
Epoch: 1/100... Training loss: 0.3436
Epoch: 1/100... Training loss: 0.3408
Epoch: 1/100... Training loss: 0.3264
Epoch: 1/100... Training loss: 0.3264
Epoch: 1/100... Training loss: 0.3198
Epoch: 1/100... Training loss: 0.3055
Epoch: 1/100... Training loss: 0.2960
Epoch: 1/100... Training loss: 0.2891
Epoch: 1/100... Training loss: 0.2893
Epoch: 1/100... Training loss: 0.2764
Epoch: 1/100... Training loss: 0.2767
Epoch: 1/100... Training loss: 0.2703
Epoch: 1/100... Training loss: 0.2742
Epoch: 1/100... Training loss: 0.2662
Epoch: 1/100... Training loss: 0.2717
Epoch: 1/100... Training loss: 0.2722
Epoch: 1/100... Training loss: 0.2726
Epoch: 1/100... Training loss: 0.2662
Epoch: 1/100... Training loss: 0.2678
Epoch: 1/100... Training loss: 0.2648
Epoch: 1/100... Training loss: 0.2630
Epoch: 1/100... Training loss: 0.2597
Epoch: 1/100... Training loss: 0.2586
Epoch: 1/100... Training loss: 0.2620
Epoch: 1/100... Training loss: 0.2552
Epoch: 1/100... Training loss: 0.2636
Epoch: 1/100... Training loss: 0.2618
Epoch: 1/100... Training loss: 0.2547
Epoch: 1/100... Training loss: 0.2563
Epoch: 1/100... Training loss: 0.2522
Epoch: 1/100... Training loss: 0.2475
Epoch: 1/100... Training loss: 0.2401
Epoch: 1/100... Training loss: 0.2560
Epoch: 1/100... Training loss: 0.2463
Epoch: 1/100... Training loss: 0.2427
Epoch: 1/100... Training loss: 0.2516
Epoch: 1/100... Training loss: 0.2353
Epoch: 1/100... Training loss: 0.2459
Epoch: 1/100... Training loss: 0.2467
Epoch: 1/100... Training loss: 0.2323
Epoch: 1/100... Training loss: 0.2364
Epoch: 1/100... Training loss: 0.2306
Epoch: 1/100... Training loss: 0.2358
Epoch: 1/100... Training loss: 0.2335
Epoch: 1/100... Training loss: 0.2315
Epoch: 1/100... Training loss: 0.2257
Epoch: 1/100... Training loss: 0.2344
Epoch: 1/100... Training loss: 0.2260
Epoch: 1/100... Training loss: 0.2307
Epoch: 1/100... Training loss: 0.2283
Epoch: 1/100... Training loss: 0.2238
Epoch: 1/100... Training loss: 0.2252
Epoch: 1/100... Training loss: 0.2269
Epoch: 1/100... Training loss: 0.2251
Epoch: 1/100... Training loss: 0.2264
Epoch: 1/100... Training loss: 0.2192
Epoch: 1/100... Training loss: 0.2208
Epoch: 1/100... Training loss: 0.2246
Epoch: 1/100... Training loss: 0.2236
Epoch: 1/100... Training loss: 0.2207
Epoch: 1/100... Training loss: 0.2240
Epoch: 1/100... Training loss: 0.2155
Epoch: 1/100... Training loss: 0.2100
Epoch: 1/100... Training loss: 0.2150
Epoch: 1/100... Training loss: 0.2170
Epoch: 1/100... Training loss: 0.2189
Epoch: 1/100... Training loss: 0.2054
Epoch: 1/100... Training loss: 0.2212
Epoch: 1/100... Training loss: 0.2108
Epoch: 1/100... Training loss: 0.2094
Epoch: 1/100... Training loss: 0.2123
Epoch: 1/100... Training loss: 0.2149
Epoch: 1/100... Training loss: 0.2153
Epoch: 1/100... Training loss: 0.2118
Epoch: 1/100... Training loss: 0.2061
Epoch: 1/100... Training loss: 0.2120
Epoch: 1/100... Training loss: 0.2128
Epoch: 1/100... Training loss: 0.2086
Epoch: 1/100... Training loss: 0.2046
Epoch: 1/100... Training loss: 0.2044
Epoch: 1/100... Training loss: 0.2066
Epoch: 1/100... Training loss: 0.2082
Epoch: 1/100... Training loss: 0.2106
Epoch: 1/100... Training loss: 0.2099
Epoch: 1/100... Training loss: 0.2042
Epoch: 1/100... Training loss: 0.2063
Epoch: 1/100... Training loss: 0.2061
Epoch: 1/100... Training loss: 0.2075
Epoch: 1/100... Training loss: 0.2075
Epoch: 1/100... Training loss: 0.2070
Epoch: 1/100... Training loss: 0.2017
Epoch: 1/100... Training loss: 0.2055
Epoch: 1/100... Training loss: 0.2016
Epoch: 1/100... Training loss: 0.1961
Epoch: 1/100... Training loss: 0.2029
Epoch: 1/100... Training loss: 0.1998
Epoch: 1/100... Training loss: 0.2073
Epoch: 1/100... Training loss: 0.1954
Epoch: 1/100... Training loss: 0.2015
Epoch: 1/100... Training loss: 0.2039
Epoch: 1/100... Training loss: 0.1982
Epoch: 1/100... Training loss: 0.2005
Epoch: 1/100... Training loss: 0.1984
Epoch: 1/100... Training loss: 0.2006
Epoch: 1/100... Training loss: 0.2019
Epoch: 1/100... Training loss: 0.1988
Epoch: 1/100... Training loss: 0.1937
Epoch: 1/100... Training loss: 0.1998
Epoch: 1/100... Training loss: 0.1947
Epoch: 1/100... Training loss: 0.1989
Epoch: 1/100... Training loss: 0.1990
Epoch: 1/100... Training loss: 0.1957
Epoch: 1/100... Training loss: 0.1929
Epoch: 1/100... Training loss: 0.1938
Epoch: 1/100... Training loss: 0.1999
Epoch: 1/100... Training loss: 0.1939
Epoch: 1/100... Training loss: 0.1931
Epoch: 1/100... Training loss: 0.1937
Epoch: 1/100... Training loss: 0.1911
Epoch: 1/100... Training loss: 0.2055
Epoch: 1/100... Training loss: 0.1935
Epoch: 1/100... Training loss: 0.1952
Epoch: 1/100... Training loss: 0.1931
Epoch: 1/100... Training loss: 0.1903
Epoch: 1/100... Training loss: 0.1922
Epoch: 1/100... Training loss: 0.1925
Epoch: 1/100... Training loss: 0.1901
Epoch: 1/100... Training loss: 0.1882
Epoch: 1/100... Training loss: 0.1884
Epoch: 1/100... Training loss: 0.1857
Epoch: 1/100... Training loss: 0.1874
Epoch: 1/100... Training loss: 0.1912
Epoch: 1/100... Training loss: 0.1906
Epoch: 1/100... Training loss: 0.1835
Epoch: 1/100... Training loss: 0.1855
Epoch: 1/100... Training loss: 0.1859
Epoch: 1/100... Training loss: 0.1857
Epoch: 1/100... Training loss: 0.1864
Epoch: 1/100... Training loss: 0.1843
Epoch: 1/100... Training loss: 0.1878
Epoch: 1/100... Training loss: 0.1893
Epoch: 1/100... Training loss: 0.1906
Epoch: 1/100... Training loss: 0.1884
Epoch: 1/100... Training loss: 0.1886
Epoch: 1/100... Training loss: 0.1858
Epoch: 1/100... Training loss: 0.1887
Epoch: 1/100... Training loss: 0.1832
Epoch: 1/100... Training loss: 0.1854
Epoch: 1/100... Training loss: 0.1844
Epoch: 1/100... Training loss: 0.1850
Epoch: 1/100... Training loss: 0.1795
Epoch: 1/100... Training loss: 0.1800
Epoch: 1/100... Training loss: 0.1833
Epoch: 1/100... Training loss: 0.1905
Epoch: 1/100... Training loss: 0.1839
Epoch: 1/100... Training loss: 0.1834
Epoch: 1/100... Training loss: 0.1801
Epoch: 1/100... Training loss: 0.1857
Epoch: 1/100... Training loss: 0.1836
Epoch: 1/100... Training loss: 0.1830
Epoch: 1/100... Training loss: 0.1823
Epoch: 1/100... Training loss: 0.1797
Epoch: 1/100... Training loss: 0.1853
Epoch: 1/100... Training loss: 0.1815
Epoch: 1/100... Training loss: 0.1824
Epoch: 1/100... Training loss: 0.1805
Epoch: 1/100... Training loss: 0.1796
Epoch: 1/100... Training loss: 0.1824
Epoch: 1/100... Training loss: 0.1797
Epoch: 1/100... Training loss: 0.1851
Epoch: 1/100... Training loss: 0.1845
Epoch: 1/100... Training loss: 0.1753
Epoch: 1/100... Training loss: 0.1784
Epoch: 1/100... Training loss: 0.1813
Epoch: 1/100... Training loss: 0.1826
Epoch: 1/100... Training loss: 0.1790
Epoch: 1/100... Training loss: 0.1780
Epoch: 1/100... Training loss: 0.1831
Epoch: 1/100... Training loss: 0.1796
Epoch: 1/100... Training loss: 0.1775
Epoch: 1/100... Training loss: 0.1786
Epoch: 1/100... Training loss: 0.1795
Epoch: 1/100... Training loss: 0.1821
Epoch: 1/100... Training loss: 0.1772
Epoch: 1/100... Training loss: 0.1735
Epoch: 1/100... Training loss: 0.1821
Epoch: 1/100... Training loss: 0.1780
Epoch: 1/100... Training loss: 0.1736
Epoch: 1/100... Training loss: 0.1803
Epoch: 1/100... Training loss: 0.1821
Epoch: 1/100... Training loss: 0.1761
Epoch: 1/100... Training loss: 0.1722
Epoch: 1/100... Training loss: 0.1841
Epoch: 1/100... Training loss: 0.1775
Epoch: 1/100... Training loss: 0.1697
Epoch: 1/100... Training loss: 0.1801
Epoch: 1/100... Training loss: 0.1786
###Markdown
Checking out the performanceHere I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
###Code
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
###Output
_____no_output_____
###Markdown
Convolutional AutoencoderSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
###Code
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
###Output
_____no_output_____
###Markdown
Network ArchitectureThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoderOkay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers *aren't*. Usually, you'll see **transposed convolution** layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, [`tf.nn.conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose). However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with [`tf.image.resize_images`](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/image/resize_images), followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.> **Exercise:** Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena *et al* claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in `tf.image.resize_images` or use [`tf.image.resize_nearest_neighbor`]( `https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor). For convolutional layers, use [`tf.layers.conv2d`](https://www.tensorflow.org/api_docs/python/tf/layers/conv2d). 
For example, you would write `conv1 = tf.layers.conv2d(inputs, 32, (5,5), padding='same', activation=tf.nn.relu)` for a layer with a depth of 32, a 5x5 kernel, stride of (1,1), padding is 'same', and a ReLU activation. Similarly, for the max-pool layers, use [`tf.layers.max_pooling2d`](https://www.tensorflow.org/api_docs/python/tf/layers/max_pooling2d).
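Just to make the contrast above concrete, here's a minimal sketch (my own illustration, not part of the exercise) of the two upsampling options; the placeholder `small_layer`, its 4x4x8 shape, and the 8x8 target size are assumptions chosen only for this example.
###Code
# Sketch only: contrast the two upsampling strategies discussed above.
# `small_layer` is a hypothetical 4x4x8 feature map; `tf` is TensorFlow 1.x,
# imported in the first cell of this notebook.
small_layer = tf.placeholder(tf.float32, (None, 4, 4, 8))
# Transposed convolution: a stride of 2 doubles the spatial size (4x4 -> 8x8),
# but the overlapping kernels can produce checkerboard artifacts.
up_transpose = tf.layers.conv2d_transpose(small_layer, 8, (3,3), strides=(2,2), padding='same')
# Upsample-then-convolve: nearest-neighbor resize to 8x8 followed by a
# stride-1 convolution, which avoids the kernel overlap.
up_resized = tf.image.resize_nearest_neighbor(small_layer, (8,8))
up_conv = tf.layers.conv2d(up_resized, 8, (3,3), padding='same', activation=tf.nn.relu)
###Output
_____no_output_____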
###Code
learning_rate = 0.001
# Input and target placeholders
image_size = mnist.train.images.shape[1]
inputs_ = tf.placeholder(tf.float32, shape=(None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, shape=(None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 16, (3,3) , strides=(1,1), padding='same', activation=tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, 8, (3,3), strides=(1,1), padding='same', activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, (3,3), strides=(1,1), padding='same', activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, (3,3), strides=(1,1), padding='same', activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (3,3), strides=(1,1), padding='same', activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3,16,(3,3), strides=(1,1), padding='same', activation=tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(conv6, 1, (3,3), strides=(1,1), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits, name='decoded')
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
###Output
_____no_output_____
###Markdown
TrainingAs before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
###Code
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
###Output
_____no_output_____
###Markdown
DenoisingAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.> **Exercise:** Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
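The noising step itself is just two lines of NumPy; as a quick sketch, it could be wrapped in a small helper (a hypothetical function of my own, not used by the cells below):
###Code
# Hypothetical helper (not used below): add Gaussian noise scaled by
# `noise_factor`, then clip the pixel values back into [0, 1].
def make_noisy(imgs, noise_factor=0.5):
    noisy = imgs + noise_factor * np.random.randn(*imgs.shape)
    return np.clip(noisy, 0., 1.)
###Output
_____no_output_____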
###Code
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same' )
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=targets_)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
###Output
Epoch: 1/100... Training loss: 0.7004
Epoch: 1/100... Training loss: 0.6772
Epoch: 1/100... Training loss: 0.6541
Epoch: 1/100... Training loss: 0.6243
Epoch: 1/100... Training loss: 0.5807
Epoch: 1/100... Training loss: 0.5341
Epoch: 1/100... Training loss: 0.4937
Epoch: 1/100... Training loss: 0.4954
Epoch: 1/100... Training loss: 0.5104
Epoch: 1/100... Training loss: 0.5392
Epoch: 1/100... Training loss: 0.4934
Epoch: 1/100... Training loss: 0.4868
Epoch: 1/100... Training loss: 0.4713
Epoch: 1/100... Training loss: 0.4618
Epoch: 1/100... Training loss: 0.4689
Epoch: 1/100... Training loss: 0.4641
Epoch: 1/100... Training loss: 0.4574
Epoch: 1/100... Training loss: 0.4432
Epoch: 1/100... Training loss: 0.4254
Epoch: 1/100... Training loss: 0.4375
Epoch: 1/100... Training loss: 0.4402
Epoch: 1/100... Training loss: 0.4191
Epoch: 1/100... Training loss: 0.4101
Epoch: 1/100... Training loss: 0.3850
Epoch: 1/100... Training loss: 0.3834
Epoch: 1/100... Training loss: 0.3681
Epoch: 1/100... Training loss: 0.3628
Epoch: 1/100... Training loss: 0.3457
Epoch: 1/100... Training loss: 0.3451
Epoch: 1/100... Training loss: 0.3312
Epoch: 1/100... Training loss: 0.3291
Epoch: 1/100... Training loss: 0.3135
Epoch: 1/100... Training loss: 0.3098
Epoch: 1/100... Training loss: 0.2998
Epoch: 1/100... Training loss: 0.2910
Epoch: 1/100... Training loss: 0.2826
Epoch: 1/100... Training loss: 0.2806
Epoch: 1/100... Training loss: 0.2821
Epoch: 1/100... Training loss: 0.2851
Epoch: 1/100... Training loss: 0.2800
Epoch: 1/100... Training loss: 0.2781
Epoch: 1/100... Training loss: 0.2761
Epoch: 1/100... Training loss: 0.2823
Epoch: 1/100... Training loss: 0.2716
Epoch: 1/100... Training loss: 0.2711
Epoch: 1/100... Training loss: 0.2766
Epoch: 1/100... Training loss: 0.2659
Epoch: 1/100... Training loss: 0.2630
Epoch: 1/100... Training loss: 0.2650
Epoch: 1/100... Training loss: 0.2655
Epoch: 1/100... Training loss: 0.2721
Epoch: 1/100... Training loss: 0.2624
Epoch: 1/100... Training loss: 0.2716
Epoch: 1/100... Training loss: 0.2622
Epoch: 1/100... Training loss: 0.2623
Epoch: 1/100... Training loss: 0.2620
Epoch: 1/100... Training loss: 0.2593
Epoch: 1/100... Training loss: 0.2604
Epoch: 1/100... Training loss: 0.2595
Epoch: 1/100... Training loss: 0.2530
Epoch: 1/100... Training loss: 0.2575
Epoch: 1/100... Training loss: 0.2557
Epoch: 1/100... Training loss: 0.2572
Epoch: 1/100... Training loss: 0.2538
Epoch: 1/100... Training loss: 0.2554
Epoch: 1/100... Training loss: 0.2618
Epoch: 1/100... Training loss: 0.2539
Epoch: 1/100... Training loss: 0.2516
Epoch: 1/100... Training loss: 0.2490
Epoch: 1/100... Training loss: 0.2574
Epoch: 1/100... Training loss: 0.2510
Epoch: 1/100... Training loss: 0.2433
Epoch: 1/100... Training loss: 0.2578
Epoch: 1/100... Training loss: 0.2372
Epoch: 1/100... Training loss: 0.2471
Epoch: 1/100... Training loss: 0.2360
Epoch: 1/100... Training loss: 0.2424
Epoch: 1/100... Training loss: 0.2396
Epoch: 1/100... Training loss: 0.2361
Epoch: 1/100... Training loss: 0.2476
Epoch: 1/100... Training loss: 0.2392
Epoch: 1/100... Training loss: 0.2388
Epoch: 1/100... Training loss: 0.2333
Epoch: 1/100... Training loss: 0.2385
Epoch: 1/100... Training loss: 0.2356
Epoch: 1/100... Training loss: 0.2303
Epoch: 1/100... Training loss: 0.2307
Epoch: 1/100... Training loss: 0.2307
Epoch: 1/100... Training loss: 0.2251
Epoch: 1/100... Training loss: 0.2223
Epoch: 1/100... Training loss: 0.2211
Epoch: 1/100... Training loss: 0.2239
Epoch: 1/100... Training loss: 0.2229
Epoch: 1/100... Training loss: 0.2230
Epoch: 1/100... Training loss: 0.2198
Epoch: 1/100... Training loss: 0.2217
Epoch: 1/100... Training loss: 0.2214
Epoch: 1/100... Training loss: 0.2204
Epoch: 1/100... Training loss: 0.2178
Epoch: 1/100... Training loss: 0.2196
Epoch: 1/100... Training loss: 0.2195
Epoch: 1/100... Training loss: 0.2191
Epoch: 1/100... Training loss: 0.2211
Epoch: 1/100... Training loss: 0.2169
Epoch: 1/100... Training loss: 0.2203
Epoch: 1/100... Training loss: 0.2171
Epoch: 1/100... Training loss: 0.2110
Epoch: 1/100... Training loss: 0.2100
Epoch: 1/100... Training loss: 0.2147
Epoch: 1/100... Training loss: 0.2152
Epoch: 1/100... Training loss: 0.2199
Epoch: 1/100... Training loss: 0.2146
Epoch: 1/100... Training loss: 0.2102
Epoch: 1/100... Training loss: 0.2110
Epoch: 1/100... Training loss: 0.2165
Epoch: 1/100... Training loss: 0.2053
Epoch: 1/100... Training loss: 0.2086
Epoch: 1/100... Training loss: 0.2096
Epoch: 1/100... Training loss: 0.2075
Epoch: 1/100... Training loss: 0.2074
Epoch: 1/100... Training loss: 0.2031
Epoch: 1/100... Training loss: 0.2027
Epoch: 1/100... Training loss: 0.2077
Epoch: 1/100... Training loss: 0.2045
Epoch: 1/100... Training loss: 0.2077
Epoch: 1/100... Training loss: 0.2085
Epoch: 1/100... Training loss: 0.2055
Epoch: 1/100... Training loss: 0.2010
Epoch: 1/100... Training loss: 0.2033
Epoch: 1/100... Training loss: 0.2069
Epoch: 1/100... Training loss: 0.2028
Epoch: 1/100... Training loss: 0.2091
Epoch: 1/100... Training loss: 0.2080
Epoch: 1/100... Training loss: 0.1997
Epoch: 1/100... Training loss: 0.2006
Epoch: 1/100... Training loss: 0.2008
Epoch: 1/100... Training loss: 0.1948
Epoch: 1/100... Training loss: 0.2016
Epoch: 1/100... Training loss: 0.2021
Epoch: 1/100... Training loss: 0.1979
Epoch: 1/100... Training loss: 0.2000
Epoch: 1/100... Training loss: 0.1969
Epoch: 1/100... Training loss: 0.2001
Epoch: 1/100... Training loss: 0.1956
Epoch: 1/100... Training loss: 0.1901
Epoch: 1/100... Training loss: 0.1930
Epoch: 1/100... Training loss: 0.2001
Epoch: 1/100... Training loss: 0.2041
Epoch: 1/100... Training loss: 0.1999
Epoch: 1/100... Training loss: 0.1934
Epoch: 1/100... Training loss: 0.1938
Epoch: 1/100... Training loss: 0.1940
Epoch: 1/100... Training loss: 0.1927
Epoch: 1/100... Training loss: 0.1925
Epoch: 1/100... Training loss: 0.1957
Epoch: 1/100... Training loss: 0.1956
Epoch: 1/100... Training loss: 0.1955
Epoch: 1/100... Training loss: 0.1944
Epoch: 1/100... Training loss: 0.1941
Epoch: 1/100... Training loss: 0.1878
Epoch: 1/100... Training loss: 0.2005
Epoch: 1/100... Training loss: 0.1957
Epoch: 1/100... Training loss: 0.2021
Epoch: 1/100... Training loss: 0.2042
Epoch: 1/100... Training loss: 0.1978
Epoch: 1/100... Training loss: 0.1943
Epoch: 1/100... Training loss: 0.1992
Epoch: 1/100... Training loss: 0.1944
Epoch: 1/100... Training loss: 0.1909
Epoch: 1/100... Training loss: 0.1922
Epoch: 1/100... Training loss: 0.1968
Epoch: 1/100... Training loss: 0.1889
Epoch: 1/100... Training loss: 0.1867
Epoch: 1/100... Training loss: 0.1876
Epoch: 1/100... Training loss: 0.1893
Epoch: 1/100... Training loss: 0.1906
Epoch: 1/100... Training loss: 0.1937
Epoch: 1/100... Training loss: 0.1921
Epoch: 1/100... Training loss: 0.1872
Epoch: 1/100... Training loss: 0.1949
Epoch: 1/100... Training loss: 0.1913
Epoch: 1/100... Training loss: 0.1846
Epoch: 1/100... Training loss: 0.1898
Epoch: 1/100... Training loss: 0.1851
Epoch: 1/100... Training loss: 0.1818
Epoch: 1/100... Training loss: 0.1843
Epoch: 1/100... Training loss: 0.1897
Epoch: 1/100... Training loss: 0.1866
Epoch: 1/100... Training loss: 0.1876
Epoch: 1/100... Training loss: 0.1910
Epoch: 1/100... Training loss: 0.1864
Epoch: 1/100... Training loss: 0.1800
Epoch: 1/100... Training loss: 0.1821
Epoch: 1/100... Training loss: 0.1839
Epoch: 1/100... Training loss: 0.1865
Epoch: 1/100... Training loss: 0.1891
Epoch: 1/100... Training loss: 0.1881
Epoch: 1/100... Training loss: 0.1810
Epoch: 1/100... Training loss: 0.1795
Epoch: 1/100... Training loss: 0.1892
Epoch: 1/100... Training loss: 0.1881
Epoch: 1/100... Training loss: 0.1762
Epoch: 1/100... Training loss: 0.1839
Epoch: 1/100... Training loss: 0.1837
Epoch: 1/100... Training loss: 0.1786
Epoch: 1/100... Training loss: 0.1819
Epoch: 1/100... Training loss: 0.1866
Epoch: 1/100... Training loss: 0.1834
Epoch: 1/100... Training loss: 0.1818
Epoch: 1/100... Training loss: 0.1781
Epoch: 1/100... Training loss: 0.1835
Epoch: 1/100... Training loss: 0.1793
Epoch: 1/100... Training loss: 0.1853
Epoch: 1/100... Training loss: 0.1851
Epoch: 1/100... Training loss: 0.1784
Epoch: 1/100... Training loss: 0.1847
###Markdown
Checking out the performanceHere I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
###Code
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
###Output
_____no_output_____
###Markdown
Convolutional AutoencoderSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
###Code
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
###Output
_____no_output_____
###Markdown
Network ArchitectureThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoderOkay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers *aren't*. Usually, you'll see **transposed convolution** layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, [`tf.nn.conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose). However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with [`tf.image.resize_images`](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/image/resize_images), followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.> **Exercise:** Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena *et al* claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in `tf.image.resize_images` or use [`tf.image.resize_nearest_neighbor`]( `https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor). For convolutional layers, use [`tf.layers.conv2d`](https://www.tensorflow.org/api_docs/python/tf/layers/conv2d). 
For example, you would write `conv1 = tf.layers.conv2d(inputs, 32, (5,5), padding='same', activation=tf.nn.relu)` for a layer with a depth of 32, a 5x5 kernel, stride of (1,1), padding is 'same', and a ReLU activation. Similarly, for the max-pool layers, use [`tf.layers.max_pooling2d`](https://www.tensorflow.org/api_docs/python/tf/layers/max_pooling2d).
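As a side note, here's a minimal sketch (my own, not part of the exercise) showing that `tf.image.resize_images` with the nearest-neighbor method can stand in for `tf.image.resize_nearest_neighbor`; the placeholder and its shape are assumptions for illustration only.
###Code
# Sketch only: `tf.image.resize_images` with the NEAREST_NEIGHBOR method does
# the same upsampling as `tf.image.resize_nearest_neighbor` used below.
# `encoded_example` and its 4x4x8 shape are assumptions for illustration.
encoded_example = tf.placeholder(tf.float32, (None, 4, 4, 8))
upsampled_example = tf.image.resize_images(encoded_example, (7, 7),
                                           method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
###Output
_____no_output_____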
###Code
learning_rate = 0.001
# Input and target placeholders
image_size = img.shape[0]
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name = 'inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name = 'targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 16, (3, 3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, strides = 2, pool_size=2, padding = 'same')
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, 8, (3, 3), padding = 'same', activation = tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, strides = 2, pool_size=2, padding = 'same')
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, (3, 3), padding = 'same', activation = tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, strides = 2, pool_size=2, padding = 'same')
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7, 7))
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, (3, 3), padding = 'same', activation = tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14, 14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (3, 3), padding = 'same', activation = tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28, 28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, (3, 3), padding = 'same', activation = tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(conv6, 1, (3, 3), padding = 'same', activation = None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits, name = 'decoded')
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels = targets_, logits = logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
###Output
_____no_output_____
###Markdown
TrainingAs before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
###Code
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
###Output
_____no_output_____
###Markdown
DenoisingAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.> **Exercise:** Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
###Code
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 32, (3, 3), padding = 'same', activation = tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, strides = 2, pool_size = 2, padding = 'same')
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, (3, 3), padding = 'same', activation = tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, strides = 2, pool_size = 2, padding = 'same')
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, (3, 3), padding = 'same', activation = tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, strides = 2, pool_size = 2, padding = 'same')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7, 7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, (3, 3), padding = 'same', activation = tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14, 14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, (3, 3), padding = 'same', activation = tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28, 28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, (3, 3), padding = 'same', activation = tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (3, 3), padding = 'same', activation = None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits, name = 'decoded')
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels = targets_, logits = logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 10
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
###Output
Epoch: 1/10... Training loss: 0.1752
Epoch: 2/10... Training loss: 0.1522
Epoch: 3/10... Training loss: 0.1369
Epoch: 4/10... Training loss: 0.1381
Epoch: 5/10... Training loss: 0.1306
Epoch: 6/10... Training loss: 0.1283
Epoch: 7/10... Training loss: 0.1269
Epoch: 8/10... Training loss: 0.1247
Epoch: 9/10... Training loss: 0.1251
Epoch: 10/10... Training loss: 0.1228
###Markdown
Checking out the performanceHere I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
###Code
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
###Output
_____no_output_____
###Markdown
Convolutional AutoencoderSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
###Code
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
###Output
_____no_output_____
###Markdown
Network ArchitectureThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoderOkay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers *aren't*. Usually, you'll see **transposed convolution** layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, [`tf.nn.conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose). However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with [`tf.image.resize_images`](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/image/resize_images), followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.> **Exercise:** Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena *et al* claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in `tf.image.resize_images` or use [`tf.image.resize_nearest_neighbor`]( `https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor). For convolutional layers, use [`tf.layers.conv2d`](https://www.tensorflow.org/api_docs/python/tf/layers/conv2d). 
For example, you would write `conv1 = tf.layers.conv2d(inputs, 32, (5,5), padding='same', activation=tf.nn.relu)` for a layer with a depth of 32, a 5x5 kernel, stride of (1,1), padding is 'same', and a ReLU activation. Similarly, for the max-pool layers, use [`tf.layers.max_pooling2d`](https://www.tensorflow.org/api_docs/python/tf/layers/max_pooling2d).
###Code
learning_rate = 0.001
# Input and target placeholders
inputs_ = tf.placeholder(tf.float32,(None,28,28,1),name='inputs')
targets_ = tf.placeholder(tf.float32, (None,28,28,1),name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_,16,(3,3),padding='same',activation=tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1,(2,2),(2,2),padding='same')
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1,8,(3,3),padding='same',activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2,(2,2),(2,2),padding='same')
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2,8,(3,3),padding='same',activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3,(2,2),(2,2),padding='same')
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded,(7,7))
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1,8,(3,3),padding='same',activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4,(14,14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2,8,(3,3),padding='same',activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5,(28,28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3,16,(3,3),padding='same',activation=tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(conv6,1,(3,3),padding='same',activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits,name='decoded')
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_,logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
###Output
_____no_output_____
###Markdown
TrainingAs before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
###Code
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
###Output
_____no_output_____
###Markdown
DenoisingAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.> **Exercise:** Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
###Code
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
# One possible completion of the exercise, matching the 32-32-16 depths suggested above
conv1 = tf.layers.conv2d(inputs_, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits, name='decoded')
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
###Output
_____no_output_____
###Markdown
Checking out the performanceHere I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
###Code
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
###Output
_____no_output_____
###Markdown
Convolutional AutoencoderSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
###Code
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
###Output
_____no_output_____
###Markdown
Network ArchitectureThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoderOkay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers *aren't*. Usually, you'll see **transposed convolution** layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, [`tf.nn.conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose). However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with [`tf.image.resize_images`](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/image/resize_images), followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.> **Exercise:** Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena *et al* claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in `tf.image.resize_images` or use [`tf.image.resize_nearest_neighbor`]( `https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor). For convolutional layers, use [`tf.layers.conv2d`](https://www.tensorflow.org/api_docs/python/tf/layers/conv2d). 
For example, you would write `conv1 = tf.layers.conv2d(inputs, 32, (5,5), padding='same', activation=tf.nn.relu)` for a layer with a depth of 32, a 5x5 kernel, stride of (1,1), padding is 'same', and a ReLU activation. Similarly, for the max-pool layers, use [`tf.layers.max_pooling2d`](https://www.tensorflow.org/api_docs/python/tf/layers/max_pooling2d).
###Code
mnist.train.images.shape[1]
learning_rate = 0.001
image_size = mnist.train.images.shape[1]
# Input and target placeholders
inputs_ = tf.placeholder(dtype = tf.float32,shape=(None,28,28,1),name='inputs')
targets_ = tf.placeholder(dtype = tf.float32,shape = (None,28,28,1), name='targets' )
### Encoder
conv1 = tf.layers.conv2d(inputs = inputs_, filters = 16, kernel_size=(3,3), strides=(1,1), padding='same', activation=tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(inputs =conv1, pool_size=2, strides = 2, padding='same')
# Now 14x14x16
conv2 = tf.layers.conv2d(inputs = maxpool1, filters=8, kernel_size=(3,3), strides=1, padding='same', activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=2, strides=2, padding='same')
# Now 7x7x8
conv3 = tf.layers.conv2d(inputs = maxpool2,filters = 8, kernel_size=3, strides = 1, padding='same', activation = tf.nn.relu)
# Now 7x7x8
# encoded = tf.layers.dense(inputs = conv3,units = 8,activation = None )
encoded = tf.layers.max_pooling2d(inputs = conv3, pool_size=2, strides=2, padding='same')
print('encoded shape = ',encoded.shape)
# Now 4x4x8, smaller than input of 28x28x1 (~16% of original )
### Decoder
# upsample1 = tf.image.resize_nearest_neighbor(images=encoded, size=7)
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
print('upsample1 shape = ',upsample1.shape)
# Now 7x7x8
conv4 = tf.layers.conv2d(inputs = upsample1, filters = 8, kernel_size = 2, strides = 1, padding = 'same', activation = tf.nn.relu)
print('conv4 shape = ', conv4.shape)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(images = conv4, size = (14,14))
print('upsample2 shape = ', upsample2.shape)
# Now 14x14x8
conv5 = tf.layers.conv2d(inputs = upsample2, filters = 8, kernel_size = 3, strides = 1, padding='same', activation = tf.nn.relu)
print('conv5 shape = ', conv5.shape)
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(images = conv5, size = (28,28))
print('upsample3 shape = ',upsample3.shape)
# Now 28x28x8
conv6 = tf.layers.conv2d(inputs = upsample3, filters = 16, kernel_size = 3, strides = 1, padding = 'same', activation = tf.nn.relu)
print('conv6 shape = ',conv6.shape)
# Now 28x28x16
logits = tf.layers.conv2d(inputs = conv6, filters =1, kernel_size = 3, padding='same',activation = None)
print('logits shape = ',logits.shape)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits, name = 'decoded')
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(logits = logits, labels = targets_)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
###Output
encoded shape = (?, 4, 4, 8)
upsample1 shape = (?, 7, 7, 8)
conv4 shape = (?, 7, 7, 8)
upsample2 shape = (?, 14, 14, 8)
conv5 shape = (?, 14, 14, 8)
upsample3 shape = (?, 28, 28, 8)
conv6 shape = (?, 28, 28, 16)
logits shape = (?, 28, 28, 1)
###Markdown
TrainingAs before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
###Code
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
###Output
_____no_output_____
###Markdown
DenoisingAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.> **Exercise:** Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
###Code
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')#not flattening images, so input is size of images
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs= inputs_,filters = 32, kernel_size =2 ,strides = 1, padding='same', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(inputs = conv1,pool_size = 2, padding='same',strides = 2)
# Now 14x14x32
conv2 = tf.layers.conv2d(inputs = maxpool1, filters = 32, kernel_size = 2, strides = 1, padding = 'same', activation = tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(inputs = conv2, pool_size = 3, padding = 'same', strides = 2)
# Now 7x7x32
conv3 = tf.layers.conv2d(inputs = maxpool2, filters = 16, kernel_size = 3, strides = 1, padding = 'same', activation = tf.nn.relu)
print('conv3 shape = ',conv3.shape)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(inputs = conv3, pool_size = 3, padding = 'same', strides = 2)
print('encoded shape = ', encoded.shape)
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(images = encoded, size = (7,7))
# Now 7x7x16
conv4 = tf.layers.conv2d(inputs = upsample1, filters = 16, kernel_size = 3, strides = 1, padding = 'same', activation = tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(images=conv4, size =(14,14))
# Now 14x14x16
conv5 = tf.layers.conv2d(inputs = upsample2, filters = 32, kernel_size = 3, strides = 1, padding = 'same', activation = tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(images = conv5, size = (28,28))
# Now 28x28x32
conv6 = tf.layers.conv2d(inputs = upsample3, filters = 32, kernel_size = 3, strides = 1, padding = 'same', activation = tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(inputs = conv6, filters = 1, kernel_size = 3, strides = 1, padding = 'same', activation = None)
print('logits shape', logits.shape)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded =tf.nn.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(logits = logits, labels = targets_)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
###Output
_____no_output_____
###Markdown
Checking out the performanceHere I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
###Code
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
###Output
_____no_output_____
###Markdown
Convolutional AutoencoderSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
###Code
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
###Output
_____no_output_____
###Markdown
Network ArchitectureThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoderOkay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers *aren't*. Usually, you'll see **transposed convolution** layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, [`tf.nn.conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose). However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with [`tf.image.resize_images`](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/image/resize_images), followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.> **Exercise:** Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena *et al* claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in `tf.image.resize_images` or use [`tf.image.resize_nearest_neighbor`]( `https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor). For convolutional layers, use [`tf.layers.conv2d`](https://www.tensorflow.org/api_docs/python/tf/layers/conv2d). 
For example, you would write `conv1 = tf.layers.conv2d(inputs, 32, (5,5), padding='same', activation=tf.nn.relu)` for a layer with a depth of 32, a 5x5 kernel, stride of (1,1), padding is 'same', and a ReLU activation. Similarly, for the max-pool layers, use [`tf.layers.max_pooling2d`](https://www.tensorflow.org/api_docs/python/tf/layers/max_pooling2d).
###Code
learning_rate = 0.001
# Input and target placeholders
inputs_ = tf.placeholder(tf.float32, [None,28,28,1], name='inputs')
targets_ = tf.placeholder(tf.float32, [None,28,28,1], name='labels')
### Encoder
conv1 = tf.layers.conv2d(inputs=inputs_, filters=16, kernel_size=(3,3),
padding='same', activation=tf.nn.relu, name='enc_conv1')
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=(2,2),
strides=(2,2), padding='same', name='enc_maxpool1')
# Now 14x14x16
conv2 = tf.layers.conv2d(inputs=maxpool1, filters=8, kernel_size=(3,3),
padding='same', activation=tf.nn.relu, name='enc_conv2')
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=(2,2),
strides=(2,2), padding='same', name='enc_maxpool2')
# Now 7x7x8
conv3 = tf.layers.conv2d(inputs=maxpool2, filters=8, kernel_size=(3,3),
padding='same', activation=tf.nn.relu, name='enc_conv3')
# Now 7x7x8
encoded = tf.layers.max_pooling2d(inputs=conv3, pool_size=(2,2),
strides=(2,2), padding='same', name='encoded')
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_bilinear(images=encoded, size=(7,7), name='dec_upsample1')
# Now 7x7x8
conv4 = tf.layers.conv2d(inputs=upsample1, filters=8, kernel_size=(3,3),
padding='same', activation=tf.nn.relu, name='dec_conv4')
# Now 7x7x8
upsample2 = tf.image.resize_bilinear(images=conv4, size=(14,14), name='dec_upsample2')
# Now 14x14x8
conv5 = tf.layers.conv2d(inputs=upsample2, filters=8, kernel_size=(3,3),
padding='same', activation=tf.nn.relu, name='dec_conv5')
# Now 14x14x8
upsample3 = tf.image.resize_bilinear(images=conv5, size=(28,28), name='dec_upsample3')
# Now 28x28x8
conv6 = tf.layers.conv2d(inputs=upsample3, filters=16, kernel_size=(3,3),
padding='same', activation=tf.nn.relu, name='dec_conv6')
# Now 28x28x16
logits = tf.layers.conv2d(inputs=conv6, filters=1, kernel_size=(3,3),
padding='same', activation=None, name='logits')
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits, name='decoded')
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=targets_, name='loss')
# Get cost and define the optimizer
cost = tf.reduce_mean(loss, name='cost')
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
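# Aside, not used by the graph above: the transposed-convolution alternative
# discussed in the markdown would look roughly like this (a sketch; `conv5_alt`
# is a made-up name). Setting kernel_size equal to strides means the kernels
# don't overlap, which avoids the checkerboard artifacts described in the
# Distill article.
conv5_alt = tf.layers.conv2d_transpose(conv5, filters=16, kernel_size=2,
                                       strides=2, padding='same')
# conv5 is 14x14x8, so conv5_alt is 28x28x16 -- the same spatial size that
# upsample3 followed by conv6 produces with resize + conv.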
###Output
_____no_output_____
###Markdown
TrainingAs before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
###Code
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
###Output
_____no_output_____
###Markdown
DenoisingAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.> **Exercise:** Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
###Code
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs=inputs_, filters=32, kernel_size=(3,3),
padding='same', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=(2,2),
strides=(2,2), padding='same')
# Now 14x14x32
conv2 = tf.layers.conv2d(inputs=maxpool1, filters=32, kernel_size=(3,3),
                         padding='same', activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=(2,2),
                                   strides=(2,2), padding='same')
# Now 7x7x32
conv3 = tf.layers.conv2d(inputs=maxpool2, filters=16, kernel_size=(3,3),
                         padding='same', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(inputs=conv3, pool_size=(2,2),
                                  strides=(2,2), padding='same')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_bilinear(images=encoded, size=(7,7))
# Now 7x7x16
conv4 = tf.layers.conv2d(inputs=upsample1, filters=16, kernel_size=(3,3),
                         padding='same', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_bilinear(images=conv4, size=(14,14))
# Now 14x14x16
conv5 = tf.layers.conv2d(inputs=upsample2, filters=32, kernel_size=(3,3),
                         padding='same', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_bilinear(images=conv5, size=(28,28))
# Now 28x28x32
conv6 = tf.layers.conv2d(inputs=upsample3, filters=32, kernel_size=(3,3),
                         padding='same', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(inputs=conv6, filters=1, kernel_size=(3,3),
padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits, name='output')
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=targets_)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
###Output
Epoch: 1/100... Training loss: 0.1236
Epoch: 2/100... Training loss: 0.1207
Epoch: 3/100... Training loss: 0.1181
Epoch: 4/100... Training loss: 0.1163
Epoch: 5/100... Training loss: 0.1185
Epoch: 6/100... Training loss: 0.1158
Epoch: 7/100... Training loss: 0.1123
Epoch: 8/100... Training loss: 0.1129
Epoch: 9/100... Training loss: 0.1138
Epoch: 10/100... Training loss: 0.1118
Epoch: 11/100... Training loss: 0.1146
Epoch: 12/100... Training loss: 0.1138
Epoch: 13/100... Training loss: 0.1115
Epoch: 14/100... Training loss: 0.1126
Epoch: 15/100... Training loss: 0.1089
Epoch: 16/100... Training loss: 0.1100
Epoch: 17/100... Training loss: 0.1133
Epoch: 18/100... Training loss: 0.1126
Epoch: 19/100... Training loss: 0.1119
Epoch: 20/100... Training loss: 0.1098
Epoch: 21/100... Training loss: 0.1084
Epoch: 22/100... Training loss: 0.1103
Epoch: 23/100... Training loss: 0.1101
Epoch: 24/100... Training loss: 0.1089
Epoch: 25/100... Training loss: 0.1099
Epoch: 26/100... Training loss: 0.1108
Epoch: 27/100... Training loss: 0.1114
Epoch: 28/100... Training loss: 0.1082
Epoch: 29/100... Training loss: 0.1109
Epoch: 30/100... Training loss: 0.1088
Epoch: 31/100... Training loss: 0.1098
Epoch: 32/100... Training loss: 0.1063
Epoch: 33/100... Training loss: 0.1097
Epoch: 34/100... Training loss: 0.1085
Epoch: 35/100... Training loss: 0.1081
Epoch: 36/100... Training loss: 0.1105
Epoch: 37/100... Training loss: 0.1094
Epoch: 38/100... Training loss: 0.1050
Epoch: 39/100... Training loss: 0.1068
Epoch: 40/100... Training loss: 0.1084
Epoch: 41/100... Training loss: 0.1073
Epoch: 42/100... Training loss: 0.1094
Epoch: 43/100... Training loss: 0.1071
Epoch: 44/100... Training loss: 0.1135
Epoch: 45/100... Training loss: 0.1052
Epoch: 46/100... Training loss: 0.1110
Epoch: 47/100... Training loss: 0.1086
Epoch: 48/100... Training loss: 0.1095
Epoch: 49/100... Training loss: 0.1086
Epoch: 50/100... Training loss: 0.1091
Epoch: 51/100... Training loss: 0.1070
Epoch: 52/100... Training loss: 0.1098
Epoch: 53/100... Training loss: 0.1086
Epoch: 54/100... Training loss: 0.1118
Epoch: 55/100... Training loss: 0.1091
Epoch: 56/100... Training loss: 0.1077
Epoch: 57/100... Training loss: 0.1077
Epoch: 58/100... Training loss: 0.1105
Epoch: 59/100... Training loss: 0.1103
Epoch: 60/100... Training loss: 0.1068
Epoch: 61/100... Training loss: 0.1089
Epoch: 62/100... Training loss: 0.1090
Epoch: 63/100... Training loss: 0.1050
Epoch: 64/100... Training loss: 0.1083
Epoch: 65/100... Training loss: 0.1073
Epoch: 66/100... Training loss: 0.1062
Epoch: 67/100... Training loss: 0.1065
Epoch: 68/100... Training loss: 0.1081
Epoch: 69/100... Training loss: 0.1074
Epoch: 70/100... Training loss: 0.1065
Epoch: 71/100... Training loss: 0.1090
Epoch: 72/100... Training loss: 0.1068
Epoch: 73/100... Training loss: 0.1082
Epoch: 74/100... Training loss: 0.1073
Epoch: 75/100... Training loss: 0.1096
Epoch: 76/100... Training loss: 0.1110
Epoch: 77/100... Training loss: 0.1086
Epoch: 78/100... Training loss: 0.1037
Epoch: 79/100... Training loss: 0.1092
Epoch: 80/100... Training loss: 0.1073
Epoch: 81/100... Training loss: 0.1072
Epoch: 82/100... Training loss: 0.1089
Epoch: 83/100... Training loss: 0.1085
Epoch: 84/100... Training loss: 0.1067
Epoch: 85/100... Training loss: 0.1073
Epoch: 86/100... Training loss: 0.1071
Epoch: 87/100... Training loss: 0.1079
Epoch: 88/100... Training loss: 0.1050
Epoch: 89/100... Training loss: 0.1081
Epoch: 90/100... Training loss: 0.1082
Epoch: 91/100... Training loss: 0.1076
Epoch: 92/100... Training loss: 0.1087
Epoch: 93/100... Training loss: 0.1068
Epoch: 94/100... Training loss: 0.1114
Epoch: 95/100... Training loss: 0.1061
Epoch: 96/100... Training loss: 0.1106
Epoch: 97/100... Training loss: 0.1088
Epoch: 98/100... Training loss: 0.1083
Epoch: 99/100... Training loss: 0.1086
Epoch: 100/100... Training loss: 0.1061
###Markdown
Checking out the performanceHere I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
###Code
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
###Output
_____no_output_____
###Markdown
Convolutional AutoencoderSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
###Code
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
###Output
_____no_output_____
###Markdown
Network ArchitectureThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoderOkay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers *aren't*. Usually, you'll see **transposed convolution** layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, [`tf.nn.conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose). However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with [`tf.image.resize_images`](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/image/resize_images), followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.> **Exercise:** Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena *et al* claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in `tf.image.resize_images` or use [`tf.image.resize_nearest_neighbor`]( `https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor). For convolutional layers, use [`tf.layers.conv2d`](https://www.tensorflow.org/api_docs/python/tf/layers/conv2d). 
For example, you would write `conv1 = tf.layers.conv2d(inputs, 32, (5,5), padding='same', activation=tf.nn.relu)` for a layer with a depth of 32, a 5x5 kernel, stride of (1,1), padding is 'same', and a ReLU activation. Similarly, for the max-pool layers, use [`tf.layers.max_pooling2d`](https://www.tensorflow.org/api_docs/python/tf/layers/max_pooling2d).
###Code
learning_rate = 0.0001
# Input and target placeholders
inputs_ = tf.placeholder(tf.float32, [None,28,28,1])
targets_ = tf.placeholder(tf.float32, [None,28,28,1])
### Encoder
conv1 = tf.layers.conv2d(inputs_, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, 2, 2)
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, 2, 2)
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, 2, 2, padding='same')
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, [7,7])
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, (3,3), padding='same',activation=tf.nn.relu )
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, [14,14])
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (4,4), padding='same',activation=tf.nn.relu )
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5, [28,28])
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, (4,4), padding='same',activation=tf.nn.relu )
# Now 28x28x16
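# Note: instead of a final 1-filter convolution (used in the other versions in this notebook), the line below collapses the 16 feature maps to a single channel by summing over the channel axis.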
logits = tf.reduce_sum(conv6, 3, keepdims=True)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits )
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
###Output
_____no_output_____
###Markdown
TrainingAs before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
###Code
sess = tf.Session()
epochs = 40
batch_size = 50
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.6f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
###Output
_____no_output_____
###Markdown
DenoisingAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.> **Exercise:** Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
###Code
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
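# (This version uses 64-64-32 depths rather than the suggested 32-32-16.)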
conv1 = tf.layers.conv2d(inputs_,64, (3,3), padding='SAME', activation=tf.nn.relu)
# Now 28x28x64
maxpool1 = tf.layers.max_pooling2d(conv1, 2, 2)
# Now 14x14x64
conv2 = tf.layers.conv2d(maxpool1, 64, (3,3), padding='SAME', activation=tf.nn.relu)
# Now 14x14x64
maxpool2 = tf.layers.max_pooling2d(conv2, 2, 2)
# Now 7x7x64
conv3 = tf.layers.conv2d(maxpool2, 32, (3,3), padding='SAME', activation=tf.nn.relu)
# Now 7x7x32
encoded = tf.layers.max_pooling2d(conv3, 2, 2)
# Now 3x3x32 (max_pooling2d defaults to 'valid' padding, so 7x7 pools down to 3x3)
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, [7,7])
# Now 7x7x32
conv4 = tf.layers.conv2d(upsample1, 32, (3,3), padding='SAME', activation=tf.nn.relu)
# Now 7x7x32
upsample2 = tf.image.resize_nearest_neighbor(conv4, [14,14])
# Now 14x14x32
conv5 = tf.layers.conv2d(upsample2, 64, (3,3), padding='SAME', activation=tf.nn.relu)
# Now 14x14x64
upsample3 = tf.image.resize_nearest_neighbor(conv5, [28,28])
# Now 28x28x64
conv6 = tf.layers.conv2d(upsample3, 64, (3,3), padding='SAME', activation=tf.nn.relu)
# Now 28x28x64
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='SAME', activation=None)
#tf.reduce_sum(conv6,3,keepdims=True)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits )
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 20
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.8f}".format(batch_cost))
###Output
Epoch: 1/20... Training loss: 0.15730238
Epoch: 2/20... Training loss: 0.13428178
Epoch: 3/20... Training loss: 0.12409125
Epoch: 4/20... Training loss: 0.11945432
Epoch: 5/20... Training loss: 0.11523636
Epoch: 6/20... Training loss: 0.11374103
Epoch: 7/20... Training loss: 0.10671496
Epoch: 8/20... Training loss: 0.11078617
Epoch: 9/20... Training loss: 0.11382782
Epoch: 10/20... Training loss: 0.10428642
Epoch: 11/20... Training loss: 0.10865628
Epoch: 12/20... Training loss: 0.10374803
Epoch: 13/20... Training loss: 0.10416833
Epoch: 14/20... Training loss: 0.10447748
Epoch: 15/20... Training loss: 0.10091510
Epoch: 16/20... Training loss: 0.10157769
Epoch: 17/20... Training loss: 0.10307117
Epoch: 18/20... Training loss: 0.10475995
Epoch: 19/20... Training loss: 0.10058136
Epoch: 20/20... Training loss: 0.10329575
###Markdown
Checking out the performanceHere I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
###Code
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
###Output
_____no_output_____
###Markdown
Convolutional AutoencoderSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
###Code
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
###Output
_____no_output_____
###Markdown
Network ArchitectureThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoderOkay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers *aren't*. Usually, you'll see **transposed convolution** layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, [`tf.nn.conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose). However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with [`tf.image.resize_images`](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/image/resize_images), followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.> **Exercise:** Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena *et al* claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in `tf.image.resize_images` or use [`tf.image.resize_nearest_neighbor`]( `https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor). For convolutional layers, use [`tf.layers.conv2d`](https://www.tensorflow.org/api_docs/python/tf/layers/conv2d). 
For example, you would write `conv1 = tf.layers.conv2d(inputs, 32, (5,5), padding='same', activation=tf.nn.relu)` for a layer with a depth of 32, a 5x5 kernel, stride of (1,1), padding is 'same', and a ReLU activation. Similarly, for the max-pool layers, use [`tf.layers.max_pooling2d`](https://www.tensorflow.org/api_docs/python/tf/layers/max_pooling2d).
###Code
import tensorflow as tf
conv2d = tf.layers.conv2d
img_shape = mnist.train.images.shape[1]
print("Shape: {}".format(str(img_shape)))
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
decoded = tf.nn.sigmoid(logits, name='decoded')
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(0.001).minimize(cost)
###Output
_____no_output_____
###Markdown
TrainingAs before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
###Code
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
###Output
_____no_output_____
###Markdown
DenoisingAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.> **Exercise:** Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
###Code
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
# One possible completion, following the suggested 32-32-16 depths with nearest-neighbor upsampling in the decoder:
conv1 = tf.layers.conv2d(inputs_, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
###Output
_____no_output_____
###Markdown
Checking out the performanceHere I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
###Code
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
###Output
_____no_output_____
###Markdown
Convolutional AutoencoderSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
###Code
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
###Output
_____no_output_____
###Markdown
Network ArchitectureThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoderOkay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers *aren't*. Usually, you'll see **deconvolutional** layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but it reverse. A stride in the input layer results in a larger stride in the deconvolutional layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a deconvolutional layer. Deconvolution is often called "transpose convolution" which is what you'll find with the TensorFlow API, with [`tf.nn.conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose). However, deconvolutional layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with [`tf.image.resize_images`](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/image/resize_images), followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.> **Exercise:** Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by 2. Odena *et al* claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in `tf.image.resize_images` or use [`tf.image.resize_nearest_neighbor`]( `https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor).
###Code
"""
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
decoded = tf.nn.sigmoid(logits, name='decoded')
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(0.001).minimize(cost)
"""
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 16, (3,3), padding='SAME', activation=tf.nn.relu) # Now 28x28x16
maxpool1 = tf.layers.max_pooling2d( conv1, pool_size = (2,2), strides = (2,2), padding='SAME') # Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, 8, (3,3), padding='SAME', activation=tf.nn.relu) # Now 14x14x8
maxpool2 = tf.layers.max_pooling2d( conv2, pool_size = (2,2), strides = (2,2), padding='SAME') # Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, (3,3), padding='SAME', activation=tf.nn.relu) # Now 7x7x8
maxpool3 = tf.layers.max_pooling2d( conv3, pool_size = (2,2), strides = (2,2), padding='SAME') # Now 4x4x8
### Decoder
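# Note: tf.image.resize_images defaults to bilinear interpolation; to get the
# nearest-neighbor upsampling suggested above, pass
# method=tf.image.ResizeMethod.NEAREST_NEIGHBOR.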
upsample1 = tf.image.resize_images(maxpool3, size = [7, 7] ) # Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, (3,3), padding='SAME', activation=tf.nn.relu) # Now 7x7x8
upsample2 = tf.image.resize_images(conv4, size = [14, 14] ) # Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (3,3), padding='SAME', activation=tf.nn.relu) # Now 14x14x8
upsample3 = tf.image.resize_images(conv5, size = [28, 28] ) # Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, (3,3), padding='SAME', activation=tf.nn.relu) # Now 28x28x16
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='SAME', activation=None) # Now 28x28x1
decoded = tf.nn.sigmoid(logits, name='decoded')
loss = tf.nn.sigmoid_cross_entropy_with_logits(
labels=targets_,
logits=logits
)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
###Output
_____no_output_____
###Markdown
TrainingAs before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
###Code
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
###Output
_____no_output_____
###Markdown
DenoisingAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.> **Exercise:** Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
###Code
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
# One possible completion, following the suggested 32-32-16 depths with nearest-neighbor upsampling in the decoder:
conv1 = tf.layers.conv2d(inputs_, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
###Output
_____no_output_____
###Markdown
Checking out the performanceHere I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
###Code
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
###Output
_____no_output_____
###Markdown
Convolutional AutoencoderSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
###Code
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
###Output
_____no_output_____
###Markdown
Network ArchitectureThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoderOkay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers *aren't*. Usually, you'll see **transposed convolution** layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, [`tf.nn.conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose). However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with [`tf.image.resize_images`](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/image/resize_images), followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.> **Exercise:** Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena *et al* claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in `tf.image.resize_images` or use [`tf.image.resize_nearest_neighbor`]( `https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor). For convolutional layers, use [`tf.layers.conv2d`](https://www.tensorflow.org/api_docs/python/tf/layers/conv2d). 
For example, you would write `conv1 = tf.layers.conv2d(inputs, 32, (5,5), padding='same', activation=tf.nn.relu)` for a layer with a depth of 32, a 5x5 kernel, stride of (1,1), padding is 'same', and a ReLU activation. Similarly, for the max-pool layers, use [`tf.layers.max_pooling2d`](https://www.tensorflow.org/api_docs/python/tf/layers/max_pooling2d).
###Code
learning_rate = 0.001
# Input and target placeholders
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name="inputs")
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name="outputs")
### Encoder
# https://www.tensorflow.org/api_docs/python/tf/layers/conv2d
conv1 = tf.layers.conv2d(inputs_, 16, (4,4), (1,1), padding="same", activation=tf.nn.relu)
# Now 28x28x16
#https://www.tensorflow.org/api_docs/python/tf/layers/max_pooling2d
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding="same")
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, 8, (4,4), (1,1), padding="same", activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding="same")
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, (4,4), (1,1), padding="same", activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding="same")
# Now 4x4x8
### Decoder
#https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, (4,4), (1,1), padding="same", activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (4,4), (1,1), padding="same", activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, (4,4), (1,1), padding="same", activation=tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(conv6, 1, (4,4), (1,1), padding="same", activation=None)
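# (The logits layer is left linear; tf.nn.sigmoid_cross_entropy_with_logits below applies the sigmoid itself.)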
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
###Output
_____no_output_____
###Markdown
TrainingAs before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
###Code
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
###Output
_____no_output_____
###Markdown
DenoisingAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.> **Exercise:** Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
###Code
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
# Using the suggested 32-32-16 depths with 3x3 kernels and 'same' padding, mirrored in the decoder.
conv1 = tf.layers.conv2d(inputs_, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits, name='decoded')
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
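# Optional illustration (not part of the original exercise): corrupt one training image the
# same way the loop below does, so the effect of `noise_factor` can be inspected, e.g. with plt.imshow.
example_img = mnist.train.images[0].reshape((28, 28))
noisy_example = np.clip(example_img + noise_factor * np.random.randn(*example_img.shape), 0., 1.)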
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
###Output
_____no_output_____
###Markdown
Checking out the performanceHere I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
###Code
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
###Output
_____no_output_____
###Markdown
Convolutional AutoencoderSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
###Code
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
print(mnist.train.images.shape[1])
###Output
784
###Markdown
Network ArchitectureThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoderOkay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers *aren't*. Usually, you'll see **transposed convolution** layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, [`tf.nn.conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose). However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with [`tf.image.resize_images`](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/image/resize_images), followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.> **Exercise:** Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by 2. Odena *et al* claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in `tf.image.resize_images` or use [`tf.image.resize_nearest_neighbor`]( `https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor). For convolutional layers, use [`tf.layers.conv2d`](https://www.tensorflow.org/api_docs/python/tf/layers/conv2d). 
For example, you would write `conv1 = tf.layers.conv2d(inputs, 32, (5,5), padding='same', activation=tf.nn.relu)` for a layer with a depth of 32, a 5x5 kernel, stride of (1,1), padding is 'same', and a ReLU activation. Similarly, for the max-pool layers, use [`tf.layers.max_pooling2d`](https://www.tensorflow.org/api_docs/python/tf/layers/max_pooling2d).
###Code
learning_rate = 0.001
# Input and target placeholders
image_size = mnist.train.images.shape[1]
inputs_ = tf.placeholder(tf.float32, shape=[None, 28, 28, 1])
targets_ = tf.placeholder(tf.float32, shape=[None, 28, 28, 1])
### Encoder
conv1 = tf.layers.conv2d(inputs_, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits, name='decoded')
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
###Output
_____no_output_____
###Markdown
TrainingAs before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
###Code
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
###Output
_____no_output_____
###Markdown
DenoisingAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.> **Exercise:** Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
###Code
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
# Using the suggested 32-32-16 depths with 3x3 kernels and 'same' padding, mirrored in the decoder.
conv1 = tf.layers.conv2d(inputs_, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits, name='decoded')
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
###Output
_____no_output_____
###Markdown
Checking out the performanceHere I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
###Code
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
###Output
_____no_output_____
###Markdown
Convolutional AutoencoderSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
###Code
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
###Output
_____no_output_____
###Markdown
Network ArchitectureThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoderOkay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers *aren't*. Usually, you'll see **transposed convolution** layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, [`tf.nn.conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose). However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with [`tf.image.resize_images`](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/image/resize_images), followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.> **Exercise:** Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena *et al* claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in `tf.image.resize_images` or use [`tf.image.resize_nearest_neighbor`]( `https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor). For convolutional layers, use [`tf.layers.conv2d`](https://www.tensorflow.org/api_docs/python/tf/layers/conv2d). 
For example, you would write `conv1 = tf.layers.conv2d(inputs, 32, (5,5), padding='same', activation=tf.nn.relu)` for a layer with a depth of 32, a 5x5 kernel, stride of (1,1), padding is 'same', and a ReLU activation. Similarly, for the max-pool layers, use [`tf.layers.max_pooling2d`](https://www.tensorflow.org/api_docs/python/tf/layers/max_pooling2d).
###Code
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x8
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28, 28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
decoded = tf.nn.sigmoid(logits, name='decoded')
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
###Output
_____no_output_____
###Markdown
TrainingAs before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
###Code
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
###Output
_____no_output_____
###Markdown
DenoisingAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.> **Exercise:** Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
###Code
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x16
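# Note: 4x4x16 = 256 values per image, roughly a third of the 784 input pixels.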
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28, 28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
decoded = tf.nn.sigmoid(logits, name='decoded')
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
# I changed epochs from 100 to 5; the results are not much different
epochs = 5
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
###Output
Epoch: 1/5... Training loss: 0.6848
Epoch: 1/5... Training loss: 0.6611
Epoch: 1/5... Training loss: 0.6255
Epoch: 1/5... Training loss: 0.5772
Epoch: 1/5... Training loss: 0.5241
Epoch: 1/5... Training loss: 0.4930
Epoch: 1/5... Training loss: 0.5094
Epoch: 1/5... Training loss: 0.5244
Epoch: 1/5... Training loss: 0.5128
Epoch: 1/5... Training loss: 0.4900
Epoch: 1/5... Training loss: 0.4624
Epoch: 1/5... Training loss: 0.4549
Epoch: 1/5... Training loss: 0.4503
Epoch: 1/5... Training loss: 0.4530
Epoch: 1/5... Training loss: 0.4430
Epoch: 1/5... Training loss: 0.4321
Epoch: 1/5... Training loss: 0.4273
Epoch: 1/5... Training loss: 0.4182
Epoch: 1/5... Training loss: 0.3932
Epoch: 1/5... Training loss: 0.3954
Epoch: 1/5... Training loss: 0.3853
Epoch: 1/5... Training loss: 0.3668
Epoch: 1/5... Training loss: 0.3568
Epoch: 1/5... Training loss: 0.3420
Epoch: 1/5... Training loss: 0.3428
Epoch: 1/5... Training loss: 0.3176
Epoch: 1/5... Training loss: 0.3099
Epoch: 1/5... Training loss: 0.2972
Epoch: 1/5... Training loss: 0.3074
Epoch: 1/5... Training loss: 0.2866
Epoch: 1/5... Training loss: 0.2836
Epoch: 1/5... Training loss: 0.2731
Epoch: 1/5... Training loss: 0.2734
Epoch: 1/5... Training loss: 0.2768
Epoch: 1/5... Training loss: 0.2710
Epoch: 1/5... Training loss: 0.2702
Epoch: 1/5... Training loss: 0.2630
Epoch: 1/5... Training loss: 0.2710
Epoch: 1/5... Training loss: 0.2645
Epoch: 1/5... Training loss: 0.2579
Epoch: 1/5... Training loss: 0.2583
Epoch: 1/5... Training loss: 0.2630
Epoch: 1/5... Training loss: 0.2615
Epoch: 1/5... Training loss: 0.2593
Epoch: 1/5... Training loss: 0.2536
Epoch: 1/5... Training loss: 0.2522
Epoch: 1/5... Training loss: 0.2498
Epoch: 1/5... Training loss: 0.2496
Epoch: 1/5... Training loss: 0.2495
Epoch: 1/5... Training loss: 0.2494
Epoch: 1/5... Training loss: 0.2492
Epoch: 1/5... Training loss: 0.2382
Epoch: 1/5... Training loss: 0.2474
Epoch: 1/5... Training loss: 0.2465
Epoch: 1/5... Training loss: 0.2482
Epoch: 1/5... Training loss: 0.2288
Epoch: 1/5... Training loss: 0.2534
Epoch: 1/5... Training loss: 0.2415
Epoch: 1/5... Training loss: 0.2417
Epoch: 1/5... Training loss: 0.2466
Epoch: 1/5... Training loss: 0.2330
Epoch: 1/5... Training loss: 0.2481
Epoch: 1/5... Training loss: 0.2341
Epoch: 1/5... Training loss: 0.2287
Epoch: 1/5... Training loss: 0.2346
Epoch: 1/5... Training loss: 0.2333
Epoch: 1/5... Training loss: 0.2332
Epoch: 1/5... Training loss: 0.2307
Epoch: 1/5... Training loss: 0.2278
Epoch: 1/5... Training loss: 0.2333
Epoch: 1/5... Training loss: 0.2267
Epoch: 1/5... Training loss: 0.2287
Epoch: 1/5... Training loss: 0.2251
Epoch: 1/5... Training loss: 0.2282
Epoch: 1/5... Training loss: 0.2223
Epoch: 1/5... Training loss: 0.2254
Epoch: 1/5... Training loss: 0.2263
Epoch: 1/5... Training loss: 0.2247
Epoch: 1/5... Training loss: 0.2185
Epoch: 1/5... Training loss: 0.2183
Epoch: 1/5... Training loss: 0.2222
Epoch: 1/5... Training loss: 0.2149
Epoch: 1/5... Training loss: 0.2228
Epoch: 1/5... Training loss: 0.2167
Epoch: 1/5... Training loss: 0.2203
Epoch: 1/5... Training loss: 0.2157
Epoch: 1/5... Training loss: 0.2210
Epoch: 1/5... Training loss: 0.2225
Epoch: 1/5... Training loss: 0.2128
Epoch: 1/5... Training loss: 0.2177
Epoch: 1/5... Training loss: 0.2141
Epoch: 1/5... Training loss: 0.2070
Epoch: 1/5... Training loss: 0.2142
Epoch: 1/5... Training loss: 0.2197
Epoch: 1/5... Training loss: 0.2079
Epoch: 1/5... Training loss: 0.2156
Epoch: 1/5... Training loss: 0.2140
Epoch: 1/5... Training loss: 0.2132
Epoch: 1/5... Training loss: 0.2157
Epoch: 1/5... Training loss: 0.2128
Epoch: 1/5... Training loss: 0.2036
Epoch: 1/5... Training loss: 0.2100
Epoch: 1/5... Training loss: 0.2132
Epoch: 1/5... Training loss: 0.2125
Epoch: 1/5... Training loss: 0.2146
Epoch: 1/5... Training loss: 0.2085
Epoch: 1/5... Training loss: 0.2113
Epoch: 1/5... Training loss: 0.2071
Epoch: 1/5... Training loss: 0.2057
Epoch: 1/5... Training loss: 0.2080
Epoch: 1/5... Training loss: 0.2053
Epoch: 1/5... Training loss: 0.2063
Epoch: 1/5... Training loss: 0.2008
Epoch: 1/5... Training loss: 0.2083
Epoch: 1/5... Training loss: 0.2031
Epoch: 1/5... Training loss: 0.2033
Epoch: 1/5... Training loss: 0.2077
Epoch: 1/5... Training loss: 0.2007
Epoch: 1/5... Training loss: 0.2061
Epoch: 1/5... Training loss: 0.1995
Epoch: 1/5... Training loss: 0.2043
Epoch: 1/5... Training loss: 0.2077
Epoch: 1/5... Training loss: 0.2028
Epoch: 1/5... Training loss: 0.1973
Epoch: 1/5... Training loss: 0.1996
Epoch: 1/5... Training loss: 0.2029
Epoch: 1/5... Training loss: 0.2002
Epoch: 1/5... Training loss: 0.2018
Epoch: 1/5... Training loss: 0.2035
Epoch: 1/5... Training loss: 0.2029
Epoch: 1/5... Training loss: 0.1990
Epoch: 1/5... Training loss: 0.2022
Epoch: 1/5... Training loss: 0.2005
Epoch: 1/5... Training loss: 0.1938
Epoch: 1/5... Training loss: 0.1972
Epoch: 1/5... Training loss: 0.1957
Epoch: 1/5... Training loss: 0.1917
Epoch: 1/5... Training loss: 0.1991
Epoch: 1/5... Training loss: 0.2013
Epoch: 1/5... Training loss: 0.1887
Epoch: 1/5... Training loss: 0.2007
Epoch: 1/5... Training loss: 0.1954
Epoch: 1/5... Training loss: 0.1959
Epoch: 1/5... Training loss: 0.1951
Epoch: 1/5... Training loss: 0.1912
Epoch: 1/5... Training loss: 0.1918
Epoch: 1/5... Training loss: 0.1977
Epoch: 1/5... Training loss: 0.1987
Epoch: 1/5... Training loss: 0.1929
Epoch: 1/5... Training loss: 0.1961
Epoch: 1/5... Training loss: 0.1961
Epoch: 1/5... Training loss: 0.1967
Epoch: 1/5... Training loss: 0.1901
Epoch: 1/5... Training loss: 0.1930
Epoch: 1/5... Training loss: 0.1968
Epoch: 1/5... Training loss: 0.1955
Epoch: 1/5... Training loss: 0.1902
Epoch: 1/5... Training loss: 0.1900
Epoch: 1/5... Training loss: 0.1954
Epoch: 1/5... Training loss: 0.1943
Epoch: 1/5... Training loss: 0.1924
Epoch: 1/5... Training loss: 0.1947
Epoch: 1/5... Training loss: 0.1937
Epoch: 1/5... Training loss: 0.1975
Epoch: 1/5... Training loss: 0.1859
Epoch: 1/5... Training loss: 0.1835
Epoch: 1/5... Training loss: 0.1945
Epoch: 1/5... Training loss: 0.1921
Epoch: 1/5... Training loss: 0.1880
Epoch: 1/5... Training loss: 0.1814
Epoch: 1/5... Training loss: 0.1883
Epoch: 1/5... Training loss: 0.1888
Epoch: 1/5... Training loss: 0.1908
Epoch: 1/5... Training loss: 0.1827
Epoch: 1/5... Training loss: 0.1857
Epoch: 1/5... Training loss: 0.1926
Epoch: 1/5... Training loss: 0.1893
Epoch: 1/5... Training loss: 0.1879
Epoch: 1/5... Training loss: 0.1849
Epoch: 1/5... Training loss: 0.1840
Epoch: 1/5... Training loss: 0.1848
Epoch: 1/5... Training loss: 0.1820
Epoch: 1/5... Training loss: 0.1884
Epoch: 1/5... Training loss: 0.1821
Epoch: 1/5... Training loss: 0.1852
Epoch: 1/5... Training loss: 0.1896
Epoch: 1/5... Training loss: 0.1849
Epoch: 1/5... Training loss: 0.1873
Epoch: 1/5... Training loss: 0.1914
Epoch: 1/5... Training loss: 0.1909
Epoch: 1/5... Training loss: 0.1812
Epoch: 1/5... Training loss: 0.1806
Epoch: 1/5... Training loss: 0.1811
Epoch: 1/5... Training loss: 0.1882
Epoch: 1/5... Training loss: 0.1878
Epoch: 1/5... Training loss: 0.1835
Epoch: 1/5... Training loss: 0.1883
Epoch: 1/5... Training loss: 0.1805
Epoch: 1/5... Training loss: 0.1835
Epoch: 1/5... Training loss: 0.1816
Epoch: 1/5... Training loss: 0.1770
Epoch: 1/5... Training loss: 0.1831
Epoch: 1/5... Training loss: 0.1790
Epoch: 1/5... Training loss: 0.1846
Epoch: 1/5... Training loss: 0.1822
Epoch: 1/5... Training loss: 0.1871
Epoch: 1/5... Training loss: 0.1821
Epoch: 1/5... Training loss: 0.1821
Epoch: 1/5... Training loss: 0.1821
Epoch: 1/5... Training loss: 0.1854
Epoch: 1/5... Training loss: 0.1792
Epoch: 1/5... Training loss: 0.1829
Epoch: 1/5... Training loss: 0.1847
Epoch: 1/5... Training loss: 0.1819
Epoch: 1/5... Training loss: 0.1877
Epoch: 1/5... Training loss: 0.1785
Epoch: 1/5... Training loss: 0.1820
Epoch: 1/5... Training loss: 0.1839
Epoch: 1/5... Training loss: 0.1792
Epoch: 1/5... Training loss: 0.1840
Epoch: 1/5... Training loss: 0.1778
Epoch: 1/5... Training loss: 0.1777
Epoch: 1/5... Training loss: 0.1742
Epoch: 1/5... Training loss: 0.1752
Epoch: 1/5... Training loss: 0.1812
Epoch: 1/5... Training loss: 0.1776
Epoch: 1/5... Training loss: 0.1800
Epoch: 1/5... Training loss: 0.1777
###Markdown
Checking out the performanceHere I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
###Code
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
###Output
_____no_output_____
###Markdown
Convolutional AutoencoderSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
###Code
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
###Output
_____no_output_____
###Markdown
Network ArchitectureThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoderOkay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers *aren't*. Usually, you'll see **transposed convolution** layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, [`tf.nn.conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose). However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with [`tf.image.resize_images`](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/image/resize_images), followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.> **Exercise:** Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena *et al* claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in `tf.image.resize_images` or use [`tf.image.resize_nearest_neighbor`]( `https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor). For convolutional layers, use [`tf.layers.conv2d`](https://www.tensorflow.org/api_docs/python/tf/layers/conv2d). 
For example, you would write `conv1 = tf.layers.conv2d(inputs, 32, (5,5), padding='same', activation=tf.nn.relu)` for a layer with a depth of 32, a 5x5 kernel, stride of (1,1), padding is 'same', and a ReLU activation. Similarly, for the max-pool layers, use [`tf.layers.max_pooling2d`](https://www.tensorflow.org/api_docs/python/tf/layers/max_pooling2d).
###Code
learning_rate = 0.001
# Input and target placeholders
inputs_ = tf.placeholder(tf.float32, (None, 28,28,1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28,28,1), name='targets')
### Encoder
#Start with 28x28x1
conv1 = tf.layers.conv2d(inputs_, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits, name='decoded')
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
###Output
_____no_output_____
###Markdown
TrainingAs before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
###Code
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
###Output
_____no_output_____
###Markdown
DenoisingAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.> **Exercise:** Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
###Code
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
# Using the suggested 32-32-16 depths with 3x3 kernels and 'same' padding, mirrored in the decoder.
conv1 = tf.layers.conv2d(inputs_, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits, name='decoded')
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
###Output
_____no_output_____
###Markdown
Checking out the performanceHere I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
###Code
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
###Output
_____no_output_____
###Markdown
Convolutional AutoencoderSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
###Code
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
###Output
_____no_output_____
###Markdown
Network ArchitectureThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoderOkay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers *aren't*. Usually, you'll see **transposed convolution** layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, [`tf.nn.conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose). However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with [`tf.image.resize_images`](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/image/resize_images), followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.> **Exercise:** Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena *et al* claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in `tf.image.resize_images` or use [`tf.image.resize_nearest_neighbor`]( `https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor). For convolutional layers, use [`tf.layers.conv2d`](https://www.tensorflow.org/api_docs/python/tf/layers/conv2d). 
For example, you would write `conv1 = tf.layers.conv2d(inputs, 32, (5,5), padding='same', activation=tf.nn.relu)` for a layer with a depth of 32, a 5x5 kernel, stride of (1,1), padding is 'same', and a ReLU activation. Similarly, for the max-pool layers, use [`tf.layers.max_pooling2d`](https://www.tensorflow.org/api_docs/python/tf/layers/max_pooling2d).
###Code
learning_rate = 0.001
# Input and target placeholders
inputs_ = tf.placeholder(tf.float32, shape=(None, 28, 28, 1))
targets_ = tf.placeholder(tf.float32, shape=(None, 28, 28, 1))
### Encoder
conv1 = tf.layers.conv2d(inputs_, 16, (5, 5), padding='same', activation=tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, (2, 2), 2)
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, 8, (5, 5), padding='same', activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, (2, 2), 2)
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, (5, 5), padding='same', activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, (2, 2), 2, padding='same')
# Now 4x4x8
### Decoder
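# Note: resize_images with ResizeMethod.NEAREST_NEIGHBOR matches the tf.image.resize_nearest_neighbor
# calls used in the earlier cells.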
upsample1 = tf.image.resize_images(encoded, (7, 7), method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, (5, 5), padding='same', activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_images(conv4, (14, 14), method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (5, 5), padding='same', activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_images(conv5, (28, 28), method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, (5, 5), padding='same', activation=tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(conv6, 1, (5, 5), padding='same')
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
###Output
_____no_output_____
###Markdown
TrainingAs before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
###Code
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
###Output
_____no_output_____
###Markdown
DenoisingAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.> **Exercise:** Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
###Code
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 32, (5, 5), padding='same', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, (2, 2), 2)
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, (5, 5), padding='same', activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, (2, 2), 2)
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, (5, 5), padding='same', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, (2, 2), 2, padding='same')
# Now 4x4x16
### Decoder
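# Note: these conv2d_transpose layers use the default stride of 1 with 'same' padding, so they
# keep the spatial size; the upsampling comes entirely from the resize_images calls.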
upsample1 = tf.image.resize_images(encoded, (7, 7), method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
# Now 7x7x16
conv4 = tf.layers.conv2d_transpose(upsample1, 16, (5, 5), padding='same', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_images(conv4, (14, 14), method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
# Now 14x14x16
conv5 = tf.layers.conv2d_transpose(upsample2, 32, (5, 5), padding='same', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_images(conv5, (28, 28), method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
# Now 28x28x32
conv6 = tf.layers.conv2d_transpose(upsample3, 32, (5, 5), padding='same', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d_transpose(conv6, 1, (5, 5), padding='same')
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 5
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
###Output
Epoch: 1/5... Training loss: 0.6997
Epoch: 1/5... Training loss: 0.6892
Epoch: 1/5... Training loss: 0.6751
Epoch: 1/5... Training loss: 0.6445
Epoch: 1/5... Training loss: 0.5871
Epoch: 1/5... Training loss: 0.5088
Epoch: 1/5... Training loss: 0.5170
Epoch: 1/5... Training loss: 0.5019
Epoch: 1/5... Training loss: 0.4609
Epoch: 1/5... Training loss: 0.4272
Epoch: 1/5... Training loss: 0.4260
Epoch: 1/5... Training loss: 0.4107
Epoch: 1/5... Training loss: 0.3915
Epoch: 1/5... Training loss: 0.3803
Epoch: 1/5... Training loss: 0.3543
Epoch: 1/5... Training loss: 0.3252
Epoch: 1/5... Training loss: 0.3192
Epoch: 1/5... Training loss: 0.3086
Epoch: 1/5... Training loss: 0.2942
Epoch: 1/5... Training loss: 0.2923
Epoch: 1/5... Training loss: 0.2934
Epoch: 1/5... Training loss: 0.2854
Epoch: 1/5... Training loss: 0.2805
Epoch: 1/5... Training loss: 0.2756
Epoch: 1/5... Training loss: 0.2734
Epoch: 1/5... Training loss: 0.2721
Epoch: 1/5... Training loss: 0.2803
Epoch: 1/5... Training loss: 0.2715
Epoch: 1/5... Training loss: 0.2738
Epoch: 1/5... Training loss: 0.2773
Epoch: 1/5... Training loss: 0.2685
Epoch: 1/5... Training loss: 0.2691
Epoch: 1/5... Training loss: 0.2651
Epoch: 1/5... Training loss: 0.2753
Epoch: 1/5... Training loss: 0.2615
Epoch: 1/5... Training loss: 0.2766
Epoch: 1/5... Training loss: 0.2732
Epoch: 1/5... Training loss: 0.2661
Epoch: 1/5... Training loss: 0.2697
Epoch: 1/5... Training loss: 0.2712
Epoch: 1/5... Training loss: 0.2715
Epoch: 1/5... Training loss: 0.2630
Epoch: 1/5... Training loss: 0.2663
Epoch: 1/5... Training loss: 0.2641
Epoch: 1/5... Training loss: 0.2702
Epoch: 1/5... Training loss: 0.2618
Epoch: 1/5... Training loss: 0.2716
Epoch: 1/5... Training loss: 0.2662
Epoch: 1/5... Training loss: 0.2612
Epoch: 1/5... Training loss: 0.2640
Epoch: 1/5... Training loss: 0.2787
Epoch: 1/5... Training loss: 0.2639
Epoch: 1/5... Training loss: 0.2625
Epoch: 1/5... Training loss: 0.2625
Epoch: 1/5... Training loss: 0.2537
Epoch: 1/5... Training loss: 0.2604
Epoch: 1/5... Training loss: 0.2677
Epoch: 1/5... Training loss: 0.2593
Epoch: 1/5... Training loss: 0.2638
Epoch: 1/5... Training loss: 0.2527
Epoch: 1/5... Training loss: 0.2614
Epoch: 1/5... Training loss: 0.2693
Epoch: 1/5... Training loss: 0.2605
Epoch: 1/5... Training loss: 0.2640
Epoch: 1/5... Training loss: 0.2650
Epoch: 1/5... Training loss: 0.2555
Epoch: 1/5... Training loss: 0.2557
Epoch: 1/5... Training loss: 0.2682
Epoch: 1/5... Training loss: 0.2550
Epoch: 1/5... Training loss: 0.2622
Epoch: 1/5... Training loss: 0.2550
Epoch: 1/5... Training loss: 0.2582
Epoch: 1/5... Training loss: 0.2571
Epoch: 1/5... Training loss: 0.2528
Epoch: 1/5... Training loss: 0.2434
Epoch: 1/5... Training loss: 0.2510
Epoch: 1/5... Training loss: 0.2487
Epoch: 1/5... Training loss: 0.2467
Epoch: 1/5... Training loss: 0.2523
Epoch: 1/5... Training loss: 0.2589
Epoch: 1/5... Training loss: 0.2545
Epoch: 1/5... Training loss: 0.2503
Epoch: 1/5... Training loss: 0.2525
Epoch: 1/5... Training loss: 0.2476
Epoch: 1/5... Training loss: 0.2505
Epoch: 1/5... Training loss: 0.2405
Epoch: 1/5... Training loss: 0.2368
Epoch: 1/5... Training loss: 0.2396
Epoch: 1/5... Training loss: 0.2394
Epoch: 1/5... Training loss: 0.2410
Epoch: 1/5... Training loss: 0.2426
Epoch: 1/5... Training loss: 0.2405
Epoch: 1/5... Training loss: 0.2378
Epoch: 1/5... Training loss: 0.2367
Epoch: 1/5... Training loss: 0.2372
Epoch: 1/5... Training loss: 0.2399
Epoch: 1/5... Training loss: 0.2337
Epoch: 1/5... Training loss: 0.2280
Epoch: 1/5... Training loss: 0.2421
Epoch: 1/5... Training loss: 0.2364
Epoch: 1/5... Training loss: 0.2383
Epoch: 1/5... Training loss: 0.2430
Epoch: 1/5... Training loss: 0.2365
Epoch: 1/5... Training loss: 0.2460
Epoch: 1/5... Training loss: 0.2357
Epoch: 1/5... Training loss: 0.2349
Epoch: 1/5... Training loss: 0.2376
Epoch: 1/5... Training loss: 0.2443
Epoch: 1/5... Training loss: 0.2266
Epoch: 1/5... Training loss: 0.2394
Epoch: 1/5... Training loss: 0.2258
Epoch: 1/5... Training loss: 0.2320
Epoch: 1/5... Training loss: 0.2290
Epoch: 1/5... Training loss: 0.2232
Epoch: 1/5... Training loss: 0.2327
Epoch: 1/5... Training loss: 0.2300
Epoch: 1/5... Training loss: 0.2356
Epoch: 1/5... Training loss: 0.2321
Epoch: 1/5... Training loss: 0.2294
Epoch: 1/5... Training loss: 0.2254
Epoch: 1/5... Training loss: 0.2219
Epoch: 1/5... Training loss: 0.2201
Epoch: 1/5... Training loss: 0.2297
Epoch: 1/5... Training loss: 0.2289
Epoch: 1/5... Training loss: 0.2196
Epoch: 1/5... Training loss: 0.2257
Epoch: 1/5... Training loss: 0.2174
Epoch: 1/5... Training loss: 0.2138
Epoch: 1/5... Training loss: 0.2185
Epoch: 1/5... Training loss: 0.2203
Epoch: 1/5... Training loss: 0.2117
Epoch: 1/5... Training loss: 0.2097
Epoch: 1/5... Training loss: 0.2212
Epoch: 1/5... Training loss: 0.2188
Epoch: 1/5... Training loss: 0.2125
Epoch: 1/5... Training loss: 0.2102
Epoch: 1/5... Training loss: 0.2136
Epoch: 1/5... Training loss: 0.2137
Epoch: 1/5... Training loss: 0.2143
Epoch: 1/5... Training loss: 0.2069
Epoch: 1/5... Training loss: 0.2077
Epoch: 1/5... Training loss: 0.2082
Epoch: 1/5... Training loss: 0.2029
Epoch: 1/5... Training loss: 0.2064
Epoch: 1/5... Training loss: 0.2105
Epoch: 1/5... Training loss: 0.2179
Epoch: 1/5... Training loss: 0.2127
Epoch: 1/5... Training loss: 0.2013
Epoch: 1/5... Training loss: 0.2225
Epoch: 1/5... Training loss: 0.2155
Epoch: 1/5... Training loss: 0.2177
Epoch: 1/5... Training loss: 0.1994
Epoch: 1/5... Training loss: 0.2199
Epoch: 1/5... Training loss: 0.2021
Epoch: 1/5... Training loss: 0.2126
Epoch: 1/5... Training loss: 0.2004
Epoch: 1/5... Training loss: 0.2004
Epoch: 1/5... Training loss: 0.2060
Epoch: 1/5... Training loss: 0.2001
Epoch: 1/5... Training loss: 0.1976
Epoch: 1/5... Training loss: 0.2020
Epoch: 1/5... Training loss: 0.1985
Epoch: 1/5... Training loss: 0.1973
Epoch: 1/5... Training loss: 0.1939
Epoch: 1/5... Training loss: 0.1999
Epoch: 1/5... Training loss: 0.1936
Epoch: 1/5... Training loss: 0.1990
Epoch: 1/5... Training loss: 0.1987
Epoch: 1/5... Training loss: 0.1917
Epoch: 1/5... Training loss: 0.1946
Epoch: 1/5... Training loss: 0.1908
Epoch: 1/5... Training loss: 0.1922
Epoch: 1/5... Training loss: 0.1909
Epoch: 1/5... Training loss: 0.1875
Epoch: 1/5... Training loss: 0.1943
Epoch: 1/5... Training loss: 0.1891
Epoch: 1/5... Training loss: 0.1902
Epoch: 1/5... Training loss: 0.1865
Epoch: 1/5... Training loss: 0.1856
Epoch: 1/5... Training loss: 0.1839
Epoch: 1/5... Training loss: 0.1884
Epoch: 1/5... Training loss: 0.1828
Epoch: 1/5... Training loss: 0.1859
Epoch: 1/5... Training loss: 0.1830
Epoch: 1/5... Training loss: 0.1850
Epoch: 1/5... Training loss: 0.1882
Epoch: 1/5... Training loss: 0.1830
Epoch: 1/5... Training loss: 0.1828
Epoch: 1/5... Training loss: 0.1837
Epoch: 1/5... Training loss: 0.1871
Epoch: 1/5... Training loss: 0.1896
Epoch: 1/5... Training loss: 0.1907
Epoch: 1/5... Training loss: 0.1835
Epoch: 1/5... Training loss: 0.1807
Epoch: 1/5... Training loss: 0.1846
Epoch: 1/5... Training loss: 0.1837
Epoch: 1/5... Training loss: 0.1793
Epoch: 1/5... Training loss: 0.1761
Epoch: 1/5... Training loss: 0.1853
Epoch: 1/5... Training loss: 0.1778
Epoch: 1/5... Training loss: 0.1757
Epoch: 1/5... Training loss: 0.1805
Epoch: 1/5... Training loss: 0.1755
Epoch: 1/5... Training loss: 0.1829
Epoch: 1/5... Training loss: 0.1715
Epoch: 1/5... Training loss: 0.1790
Epoch: 1/5... Training loss: 0.1797
Epoch: 1/5... Training loss: 0.1731
Epoch: 1/5... Training loss: 0.1786
Epoch: 1/5... Training loss: 0.1766
Epoch: 1/5... Training loss: 0.1819
Epoch: 1/5... Training loss: 0.1801
Epoch: 1/5... Training loss: 0.1705
Epoch: 1/5... Training loss: 0.1743
Epoch: 1/5... Training loss: 0.1787
Epoch: 1/5... Training loss: 0.1786
Epoch: 1/5... Training loss: 0.1772
Epoch: 1/5... Training loss: 0.1674
Epoch: 1/5... Training loss: 0.1774
Epoch: 1/5... Training loss: 0.1706
Epoch: 1/5... Training loss: 0.1693
Epoch: 1/5... Training loss: 0.1741
Epoch: 1/5... Training loss: 0.1736
Epoch: 1/5... Training loss: 0.1779
Epoch: 1/5... Training loss: 0.1800
Epoch: 1/5... Training loss: 0.1802
Epoch: 1/5... Training loss: 0.1778
Epoch: 1/5... Training loss: 0.1848
Epoch: 1/5... Training loss: 0.1673
Epoch: 1/5... Training loss: 0.1687
Epoch: 1/5... Training loss: 0.1735
Epoch: 1/5... Training loss: 0.1700
Epoch: 1/5... Training loss: 0.1709
Epoch: 1/5... Training loss: 0.1734
Epoch: 1/5... Training loss: 0.1747
Epoch: 1/5... Training loss: 0.1683
Epoch: 1/5... Training loss: 0.1750
Epoch: 1/5... Training loss: 0.1725
Epoch: 1/5... Training loss: 0.1729
Epoch: 1/5... Training loss: 0.1684
Epoch: 1/5... Training loss: 0.1716
Epoch: 1/5... Training loss: 0.1705
Epoch: 1/5... Training loss: 0.1711
Epoch: 1/5... Training loss: 0.1681
Epoch: 1/5... Training loss: 0.1738
Epoch: 1/5... Training loss: 0.1724
Epoch: 1/5... Training loss: 0.1695
Epoch: 1/5... Training loss: 0.1691
Epoch: 1/5... Training loss: 0.1698
Epoch: 1/5... Training loss: 0.1744
Epoch: 1/5... Training loss: 0.1654
Epoch: 1/5... Training loss: 0.1697
Epoch: 1/5... Training loss: 0.1661
Epoch: 1/5... Training loss: 0.1700
Epoch: 1/5... Training loss: 0.1637
Epoch: 1/5... Training loss: 0.1696
Epoch: 1/5... Training loss: 0.1678
Epoch: 1/5... Training loss: 0.1617
Epoch: 1/5... Training loss: 0.1664
Epoch: 1/5... Training loss: 0.1676
Epoch: 1/5... Training loss: 0.1647
Epoch: 1/5... Training loss: 0.1643
Epoch: 1/5... Training loss: 0.1662
Epoch: 1/5... Training loss: 0.1649
Epoch: 1/5... Training loss: 0.1630
Epoch: 1/5... Training loss: 0.1659
Epoch: 1/5... Training loss: 0.1651
Epoch: 1/5... Training loss: 0.1634
Epoch: 1/5... Training loss: 0.1610
Epoch: 1/5... Training loss: 0.1628
Epoch: 1/5... Training loss: 0.1664
Epoch: 1/5... Training loss: 0.1649
Epoch: 1/5... Training loss: 0.1575
Epoch: 1/5... Training loss: 0.1679
Epoch: 1/5... Training loss: 0.1720
Epoch: 1/5... Training loss: 0.1637
Epoch: 1/5... Training loss: 0.1600
Epoch: 1/5... Training loss: 0.1633
Epoch: 1/5... Training loss: 0.1642
Epoch: 1/5... Training loss: 0.1659
Epoch: 1/5... Training loss: 0.1613
Epoch: 1/5... Training loss: 0.1587
Epoch: 1/5... Training loss: 0.1617
Epoch: 1/5... Training loss: 0.1631
Epoch: 1/5... Training loss: 0.1551
Epoch: 1/5... Training loss: 0.1611
Epoch: 1/5... Training loss: 0.1592
Epoch: 1/5... Training loss: 0.1679
Epoch: 1/5... Training loss: 0.1629
Epoch: 1/5... Training loss: 0.1634
Epoch: 1/5... Training loss: 0.1677
Epoch: 1/5... Training loss: 0.1587
Epoch: 1/5... Training loss: 0.1599
Epoch: 1/5... Training loss: 0.1602
Epoch: 1/5... Training loss: 0.1637
Epoch: 1/5... Training loss: 0.1603
Epoch: 1/5... Training loss: 0.1627
Epoch: 1/5... Training loss: 0.1597
Epoch: 1/5... Training loss: 0.1538
Epoch: 1/5... Training loss: 0.1581
Epoch: 2/5... Training loss: 0.1583
Epoch: 2/5... Training loss: 0.1647
Epoch: 2/5... Training loss: 0.1602
Epoch: 2/5... Training loss: 0.1560
Epoch: 2/5... Training loss: 0.1596
Epoch: 2/5... Training loss: 0.1618
Epoch: 2/5... Training loss: 0.1553
Epoch: 2/5... Training loss: 0.1640
Epoch: 2/5... Training loss: 0.1585
Epoch: 2/5... Training loss: 0.1594
Epoch: 2/5... Training loss: 0.1595
Epoch: 2/5... Training loss: 0.1592
Epoch: 2/5... Training loss: 0.1632
Epoch: 2/5... Training loss: 0.1557
Epoch: 2/5... Training loss: 0.1575
Epoch: 2/5... Training loss: 0.1588
Epoch: 2/5... Training loss: 0.1534
Epoch: 2/5... Training loss: 0.1559
Epoch: 2/5... Training loss: 0.1525
Epoch: 2/5... Training loss: 0.1566
Epoch: 2/5... Training loss: 0.1564
Epoch: 2/5... Training loss: 0.1601
Epoch: 2/5... Training loss: 0.1582
Epoch: 2/5... Training loss: 0.1566
Epoch: 2/5... Training loss: 0.1612
Epoch: 2/5... Training loss: 0.1529
Epoch: 2/5... Training loss: 0.1547
Epoch: 2/5... Training loss: 0.1484
Epoch: 2/5... Training loss: 0.1525
Epoch: 2/5... Training loss: 0.1566
Epoch: 2/5... Training loss: 0.1524
Epoch: 2/5... Training loss: 0.1538
Epoch: 2/5... Training loss: 0.1531
Epoch: 2/5... Training loss: 0.1573
Epoch: 2/5... Training loss: 0.1511
Epoch: 2/5... Training loss: 0.1600
Epoch: 2/5... Training loss: 0.1563
Epoch: 2/5... Training loss: 0.1501
Epoch: 2/5... Training loss: 0.1547
Epoch: 2/5... Training loss: 0.1563
Epoch: 2/5... Training loss: 0.1535
Epoch: 2/5... Training loss: 0.1575
Epoch: 2/5... Training loss: 0.1531
Epoch: 2/5... Training loss: 0.1573
Epoch: 2/5... Training loss: 0.1489
Epoch: 2/5... Training loss: 0.1474
Epoch: 2/5... Training loss: 0.1529
Epoch: 2/5... Training loss: 0.1536
Epoch: 2/5... Training loss: 0.1528
Epoch: 2/5... Training loss: 0.1493
Epoch: 2/5... Training loss: 0.1519
Epoch: 2/5... Training loss: 0.1552
Epoch: 2/5... Training loss: 0.1533
Epoch: 2/5... Training loss: 0.1523
Epoch: 2/5... Training loss: 0.1492
Epoch: 2/5... Training loss: 0.1540
Epoch: 2/5... Training loss: 0.1472
Epoch: 2/5... Training loss: 0.1496
Epoch: 2/5... Training loss: 0.1499
Epoch: 2/5... Training loss: 0.1538
Epoch: 2/5... Training loss: 0.1585
Epoch: 2/5... Training loss: 0.1541
Epoch: 2/5... Training loss: 0.1511
Epoch: 2/5... Training loss: 0.1563
Epoch: 2/5... Training loss: 0.1493
Epoch: 2/5... Training loss: 0.1535
Epoch: 2/5... Training loss: 0.1501
Epoch: 2/5... Training loss: 0.1498
Epoch: 2/5... Training loss: 0.1471
Epoch: 2/5... Training loss: 0.1445
Epoch: 2/5... Training loss: 0.1525
Epoch: 2/5... Training loss: 0.1503
Epoch: 2/5... Training loss: 0.1550
Epoch: 2/5... Training loss: 0.1500
Epoch: 2/5... Training loss: 0.1473
Epoch: 2/5... Training loss: 0.1454
Epoch: 2/5... Training loss: 0.1488
Epoch: 2/5... Training loss: 0.1476
Epoch: 2/5... Training loss: 0.1507
Epoch: 2/5... Training loss: 0.1530
Epoch: 2/5... Training loss: 0.1469
Epoch: 2/5... Training loss: 0.1513
Epoch: 2/5... Training loss: 0.1482
Epoch: 2/5... Training loss: 0.1509
Epoch: 2/5... Training loss: 0.1496
Epoch: 2/5... Training loss: 0.1495
Epoch: 2/5... Training loss: 0.1544
Epoch: 2/5... Training loss: 0.1440
Epoch: 2/5... Training loss: 0.1493
Epoch: 2/5... Training loss: 0.1488
Epoch: 2/5... Training loss: 0.1451
Epoch: 2/5... Training loss: 0.1509
Epoch: 2/5... Training loss: 0.1532
Epoch: 2/5... Training loss: 0.1424
Epoch: 2/5... Training loss: 0.1487
Epoch: 2/5... Training loss: 0.1447
Epoch: 2/5... Training loss: 0.1439
Epoch: 2/5... Training loss: 0.1438
Epoch: 2/5... Training loss: 0.1409
Epoch: 2/5... Training loss: 0.1426
Epoch: 2/5... Training loss: 0.1452
Epoch: 2/5... Training loss: 0.1433
Epoch: 2/5... Training loss: 0.1407
Epoch: 2/5... Training loss: 0.1363
Epoch: 2/5... Training loss: 0.1468
Epoch: 2/5... Training loss: 0.1402
Epoch: 2/5... Training loss: 0.1484
Epoch: 2/5... Training loss: 0.1468
Epoch: 2/5... Training loss: 0.1464
Epoch: 2/5... Training loss: 0.1474
Epoch: 2/5... Training loss: 0.1478
Epoch: 2/5... Training loss: 0.1458
Epoch: 2/5... Training loss: 0.1520
Epoch: 2/5... Training loss: 0.1500
Epoch: 2/5... Training loss: 0.1466
Epoch: 2/5... Training loss: 0.1509
Epoch: 2/5... Training loss: 0.1421
Epoch: 2/5... Training loss: 0.1522
Epoch: 2/5... Training loss: 0.1482
Epoch: 2/5... Training loss: 0.1487
Epoch: 2/5... Training loss: 0.1476
Epoch: 2/5... Training loss: 0.1480
Epoch: 2/5... Training loss: 0.1398
Epoch: 2/5... Training loss: 0.1385
Epoch: 2/5... Training loss: 0.1431
Epoch: 2/5... Training loss: 0.1474
Epoch: 2/5... Training loss: 0.1423
Epoch: 2/5... Training loss: 0.1420
Epoch: 2/5... Training loss: 0.1458
Epoch: 2/5... Training loss: 0.1413
Epoch: 2/5... Training loss: 0.1452
Epoch: 2/5... Training loss: 0.1442
Epoch: 2/5... Training loss: 0.1434
Epoch: 2/5... Training loss: 0.1448
Epoch: 2/5... Training loss: 0.1433
Epoch: 2/5... Training loss: 0.1410
Epoch: 2/5... Training loss: 0.1401
Epoch: 2/5... Training loss: 0.1457
Epoch: 2/5... Training loss: 0.1439
Epoch: 2/5... Training loss: 0.1427
Epoch: 2/5... Training loss: 0.1429
Epoch: 2/5... Training loss: 0.1480
Epoch: 2/5... Training loss: 0.1388
Epoch: 2/5... Training loss: 0.1414
Epoch: 2/5... Training loss: 0.1455
Epoch: 2/5... Training loss: 0.1389
Epoch: 2/5... Training loss: 0.1439
Epoch: 2/5... Training loss: 0.1465
Epoch: 2/5... Training loss: 0.1357
Epoch: 2/5... Training loss: 0.1445
Epoch: 2/5... Training loss: 0.1440
Epoch: 2/5... Training loss: 0.1421
Epoch: 2/5... Training loss: 0.1405
Epoch: 2/5... Training loss: 0.1406
Epoch: 2/5... Training loss: 0.1477
Epoch: 2/5... Training loss: 0.1476
Epoch: 2/5... Training loss: 0.1425
Epoch: 2/5... Training loss: 0.1408
Epoch: 2/5... Training loss: 0.1455
Epoch: 2/5... Training loss: 0.1397
Epoch: 2/5... Training loss: 0.1454
Epoch: 2/5... Training loss: 0.1384
Epoch: 2/5... Training loss: 0.1440
Epoch: 2/5... Training loss: 0.1468
Epoch: 2/5... Training loss: 0.1481
Epoch: 2/5... Training loss: 0.1407
Epoch: 2/5... Training loss: 0.1398
Epoch: 2/5... Training loss: 0.1425
Epoch: 2/5... Training loss: 0.1436
Epoch: 2/5... Training loss: 0.1434
Epoch: 2/5... Training loss: 0.1379
Epoch: 2/5... Training loss: 0.1403
Epoch: 2/5... Training loss: 0.1424
Epoch: 2/5... Training loss: 0.1440
Epoch: 2/5... Training loss: 0.1416
Epoch: 2/5... Training loss: 0.1509
Epoch: 2/5... Training loss: 0.1417
Epoch: 2/5... Training loss: 0.1407
Epoch: 2/5... Training loss: 0.1413
Epoch: 2/5... Training loss: 0.1359
Epoch: 2/5... Training loss: 0.1477
Epoch: 2/5... Training loss: 0.1378
Epoch: 2/5... Training loss: 0.1407
Epoch: 2/5... Training loss: 0.1424
Epoch: 2/5... Training loss: 0.1447
Epoch: 2/5... Training loss: 0.1410
Epoch: 2/5... Training loss: 0.1430
Epoch: 2/5... Training loss: 0.1399
Epoch: 2/5... Training loss: 0.1447
Epoch: 2/5... Training loss: 0.1394
Epoch: 2/5... Training loss: 0.1428
Epoch: 2/5... Training loss: 0.1394
Epoch: 2/5... Training loss: 0.1421
Epoch: 2/5... Training loss: 0.1390
Epoch: 2/5... Training loss: 0.1421
Epoch: 2/5... Training loss: 0.1411
Epoch: 2/5... Training loss: 0.1410
Epoch: 2/5... Training loss: 0.1398
Epoch: 2/5... Training loss: 0.1416
Epoch: 2/5... Training loss: 0.1344
Epoch: 2/5... Training loss: 0.1381
Epoch: 2/5... Training loss: 0.1380
Epoch: 2/5... Training loss: 0.1403
Epoch: 2/5... Training loss: 0.1447
Epoch: 2/5... Training loss: 0.1426
Epoch: 2/5... Training loss: 0.1427
Epoch: 2/5... Training loss: 0.1351
Epoch: 2/5... Training loss: 0.1403
Epoch: 2/5... Training loss: 0.1420
Epoch: 2/5... Training loss: 0.1400
Epoch: 2/5... Training loss: 0.1377
Epoch: 2/5... Training loss: 0.1396
Epoch: 2/5... Training loss: 0.1427
Epoch: 2/5... Training loss: 0.1395
Epoch: 2/5... Training loss: 0.1414
Epoch: 2/5... Training loss: 0.1420
Epoch: 2/5... Training loss: 0.1405
Epoch: 2/5... Training loss: 0.1387
Epoch: 2/5... Training loss: 0.1408
Epoch: 2/5... Training loss: 0.1393
Epoch: 2/5... Training loss: 0.1388
Epoch: 2/5... Training loss: 0.1400
Epoch: 2/5... Training loss: 0.1395
Epoch: 2/5... Training loss: 0.1343
Epoch: 2/5... Training loss: 0.1377
Epoch: 2/5... Training loss: 0.1411
Epoch: 2/5... Training loss: 0.1396
Epoch: 2/5... Training loss: 0.1435
Epoch: 2/5... Training loss: 0.1377
Epoch: 2/5... Training loss: 0.1357
Epoch: 2/5... Training loss: 0.1368
Epoch: 2/5... Training loss: 0.1372
Epoch: 2/5... Training loss: 0.1355
Epoch: 2/5... Training loss: 0.1401
Epoch: 2/5... Training loss: 0.1386
Epoch: 2/5... Training loss: 0.1367
Epoch: 2/5... Training loss: 0.1409
Epoch: 2/5... Training loss: 0.1368
Epoch: 2/5... Training loss: 0.1361
Epoch: 2/5... Training loss: 0.1391
Epoch: 2/5... Training loss: 0.1390
Epoch: 2/5... Training loss: 0.1394
Epoch: 2/5... Training loss: 0.1358
Epoch: 2/5... Training loss: 0.1370
Epoch: 2/5... Training loss: 0.1429
Epoch: 2/5... Training loss: 0.1327
Epoch: 2/5... Training loss: 0.1381
Epoch: 2/5... Training loss: 0.1366
Epoch: 2/5... Training loss: 0.1372
Epoch: 2/5... Training loss: 0.1367
Epoch: 2/5... Training loss: 0.1377
Epoch: 2/5... Training loss: 0.1369
Epoch: 2/5... Training loss: 0.1365
Epoch: 2/5... Training loss: 0.1338
Epoch: 2/5... Training loss: 0.1364
Epoch: 2/5... Training loss: 0.1345
Epoch: 2/5... Training loss: 0.1344
Epoch: 2/5... Training loss: 0.1284
Epoch: 2/5... Training loss: 0.1354
Epoch: 2/5... Training loss: 0.1354
Epoch: 2/5... Training loss: 0.1382
Epoch: 2/5... Training loss: 0.1327
Epoch: 2/5... Training loss: 0.1332
Epoch: 2/5... Training loss: 0.1356
Epoch: 2/5... Training loss: 0.1359
Epoch: 2/5... Training loss: 0.1327
Epoch: 2/5... Training loss: 0.1369
Epoch: 2/5... Training loss: 0.1346
Epoch: 2/5... Training loss: 0.1364
Epoch: 2/5... Training loss: 0.1361
Epoch: 2/5... Training loss: 0.1384
Epoch: 2/5... Training loss: 0.1355
Epoch: 2/5... Training loss: 0.1338
Epoch: 2/5... Training loss: 0.1324
Epoch: 2/5... Training loss: 0.1330
Epoch: 2/5... Training loss: 0.1313
Epoch: 2/5... Training loss: 0.1372
Epoch: 2/5... Training loss: 0.1363
Epoch: 2/5... Training loss: 0.1383
Epoch: 2/5... Training loss: 0.1328
Epoch: 2/5... Training loss: 0.1394
Epoch: 2/5... Training loss: 0.1354
Epoch: 2/5... Training loss: 0.1363
Epoch: 2/5... Training loss: 0.1351
Epoch: 2/5... Training loss: 0.1358
Epoch: 2/5... Training loss: 0.1399
Epoch: 2/5... Training loss: 0.1394
Epoch: 2/5... Training loss: 0.1374
Epoch: 2/5... Training loss: 0.1311
Epoch: 2/5... Training loss: 0.1386
Epoch: 2/5... Training loss: 0.1355
Epoch: 2/5... Training loss: 0.1300
Epoch: 2/5... Training loss: 0.1365
Epoch: 2/5... Training loss: 0.1332
Epoch: 2/5... Training loss: 0.1306
Epoch: 2/5... Training loss: 0.1311
Epoch: 2/5... Training loss: 0.1354
Epoch: 2/5... Training loss: 0.1252
Epoch: 2/5... Training loss: 0.1369
Epoch: 2/5... Training loss: 0.1325
Epoch: 3/5... Training loss: 0.1295
Epoch: 3/5... Training loss: 0.1391
Epoch: 3/5... Training loss: 0.1338
Epoch: 3/5... Training loss: 0.1336
Epoch: 3/5... Training loss: 0.1346
Epoch: 3/5... Training loss: 0.1355
Epoch: 3/5... Training loss: 0.1336
Epoch: 3/5... Training loss: 0.1353
Epoch: 3/5... Training loss: 0.1353
Epoch: 3/5... Training loss: 0.1350
Epoch: 3/5... Training loss: 0.1407
Epoch: 3/5... Training loss: 0.1322
Epoch: 3/5... Training loss: 0.1339
Epoch: 3/5... Training loss: 0.1327
Epoch: 3/5... Training loss: 0.1336
Epoch: 3/5... Training loss: 0.1347
Epoch: 3/5... Training loss: 0.1280
Epoch: 3/5... Training loss: 0.1346
Epoch: 3/5... Training loss: 0.1305
Epoch: 3/5... Training loss: 0.1318
Epoch: 3/5... Training loss: 0.1365
Epoch: 3/5... Training loss: 0.1390
Epoch: 3/5... Training loss: 0.1319
Epoch: 3/5... Training loss: 0.1279
Epoch: 3/5... Training loss: 0.1311
Epoch: 3/5... Training loss: 0.1365
Epoch: 3/5... Training loss: 0.1367
Epoch: 3/5... Training loss: 0.1359
Epoch: 3/5... Training loss: 0.1293
Epoch: 3/5... Training loss: 0.1322
Epoch: 3/5... Training loss: 0.1324
Epoch: 3/5... Training loss: 0.1350
Epoch: 3/5... Training loss: 0.1310
Epoch: 3/5... Training loss: 0.1332
Epoch: 3/5... Training loss: 0.1392
Epoch: 3/5... Training loss: 0.1344
Epoch: 3/5... Training loss: 0.1321
Epoch: 3/5... Training loss: 0.1330
Epoch: 3/5... Training loss: 0.1350
Epoch: 3/5... Training loss: 0.1338
Epoch: 3/5... Training loss: 0.1316
Epoch: 3/5... Training loss: 0.1312
Epoch: 3/5... Training loss: 0.1292
Epoch: 3/5... Training loss: 0.1315
Epoch: 3/5... Training loss: 0.1316
Epoch: 3/5... Training loss: 0.1315
Epoch: 3/5... Training loss: 0.1310
Epoch: 3/5... Training loss: 0.1323
Epoch: 3/5... Training loss: 0.1353
Epoch: 3/5... Training loss: 0.1344
Epoch: 3/5... Training loss: 0.1344
Epoch: 3/5... Training loss: 0.1301
Epoch: 3/5... Training loss: 0.1297
Epoch: 3/5... Training loss: 0.1321
Epoch: 3/5... Training loss: 0.1298
Epoch: 3/5... Training loss: 0.1291
Epoch: 3/5... Training loss: 0.1342
Epoch: 3/5... Training loss: 0.1314
Epoch: 3/5... Training loss: 0.1316
Epoch: 3/5... Training loss: 0.1357
Epoch: 3/5... Training loss: 0.1338
Epoch: 3/5... Training loss: 0.1327
Epoch: 3/5... Training loss: 0.1349
Epoch: 3/5... Training loss: 0.1277
Epoch: 3/5... Training loss: 0.1264
Epoch: 3/5... Training loss: 0.1377
Epoch: 3/5... Training loss: 0.1260
Epoch: 3/5... Training loss: 0.1338
Epoch: 3/5... Training loss: 0.1340
Epoch: 3/5... Training loss: 0.1338
Epoch: 3/5... Training loss: 0.1334
Epoch: 3/5... Training loss: 0.1313
Epoch: 3/5... Training loss: 0.1280
Epoch: 3/5... Training loss: 0.1308
Epoch: 3/5... Training loss: 0.1311
Epoch: 3/5... Training loss: 0.1250
Epoch: 3/5... Training loss: 0.1300
Epoch: 3/5... Training loss: 0.1315
Epoch: 3/5... Training loss: 0.1312
Epoch: 3/5... Training loss: 0.1280
Epoch: 3/5... Training loss: 0.1315
Epoch: 3/5... Training loss: 0.1285
Epoch: 3/5... Training loss: 0.1299
Epoch: 3/5... Training loss: 0.1344
Epoch: 3/5... Training loss: 0.1349
Epoch: 3/5... Training loss: 0.1312
Epoch: 3/5... Training loss: 0.1260
Epoch: 3/5... Training loss: 0.1271
Epoch: 3/5... Training loss: 0.1315
Epoch: 3/5... Training loss: 0.1280
Epoch: 3/5... Training loss: 0.1315
Epoch: 3/5... Training loss: 0.1258
Epoch: 3/5... Training loss: 0.1272
Epoch: 3/5... Training loss: 0.1297
Epoch: 3/5... Training loss: 0.1294
Epoch: 3/5... Training loss: 0.1293
Epoch: 3/5... Training loss: 0.1334
Epoch: 3/5... Training loss: 0.1328
Epoch: 3/5... Training loss: 0.1313
Epoch: 3/5... Training loss: 0.1328
Epoch: 3/5... Training loss: 0.1355
Epoch: 3/5... Training loss: 0.1304
Epoch: 3/5... Training loss: 0.1287
Epoch: 3/5... Training loss: 0.1355
Epoch: 3/5... Training loss: 0.1309
Epoch: 3/5... Training loss: 0.1265
Epoch: 3/5... Training loss: 0.1279
Epoch: 3/5... Training loss: 0.1330
Epoch: 3/5... Training loss: 0.1264
Epoch: 3/5... Training loss: 0.1278
Epoch: 3/5... Training loss: 0.1311
Epoch: 3/5... Training loss: 0.1307
Epoch: 3/5... Training loss: 0.1284
Epoch: 3/5... Training loss: 0.1254
Epoch: 3/5... Training loss: 0.1274
Epoch: 3/5... Training loss: 0.1298
Epoch: 3/5... Training loss: 0.1272
Epoch: 3/5... Training loss: 0.1320
Epoch: 3/5... Training loss: 0.1321
Epoch: 3/5... Training loss: 0.1283
Epoch: 3/5... Training loss: 0.1282
Epoch: 3/5... Training loss: 0.1272
Epoch: 3/5... Training loss: 0.1290
Epoch: 3/5... Training loss: 0.1224
Epoch: 3/5... Training loss: 0.1242
Epoch: 3/5... Training loss: 0.1281
Epoch: 3/5... Training loss: 0.1327
Epoch: 3/5... Training loss: 0.1303
Epoch: 3/5... Training loss: 0.1289
Epoch: 3/5... Training loss: 0.1266
Epoch: 3/5... Training loss: 0.1348
Epoch: 3/5... Training loss: 0.1274
Epoch: 3/5... Training loss: 0.1336
Epoch: 3/5... Training loss: 0.1240
Epoch: 3/5... Training loss: 0.1349
Epoch: 3/5... Training loss: 0.1361
Epoch: 3/5... Training loss: 0.1280
Epoch: 3/5... Training loss: 0.1304
Epoch: 3/5... Training loss: 0.1248
Epoch: 3/5... Training loss: 0.1301
Epoch: 3/5... Training loss: 0.1315
Epoch: 3/5... Training loss: 0.1269
Epoch: 3/5... Training loss: 0.1297
Epoch: 3/5... Training loss: 0.1244
Epoch: 3/5... Training loss: 0.1263
Epoch: 3/5... Training loss: 0.1296
Epoch: 3/5... Training loss: 0.1272
Epoch: 3/5... Training loss: 0.1306
Epoch: 3/5... Training loss: 0.1305
Epoch: 3/5... Training loss: 0.1270
Epoch: 3/5... Training loss: 0.1239
Epoch: 3/5... Training loss: 0.1319
Epoch: 3/5... Training loss: 0.1313
Epoch: 3/5... Training loss: 0.1281
Epoch: 3/5... Training loss: 0.1288
Epoch: 3/5... Training loss: 0.1262
Epoch: 3/5... Training loss: 0.1264
Epoch: 3/5... Training loss: 0.1292
Epoch: 3/5... Training loss: 0.1277
Epoch: 3/5... Training loss: 0.1240
Epoch: 3/5... Training loss: 0.1310
Epoch: 3/5... Training loss: 0.1266
Epoch: 3/5... Training loss: 0.1248
Epoch: 3/5... Training loss: 0.1285
Epoch: 3/5... Training loss: 0.1254
Epoch: 3/5... Training loss: 0.1285
Epoch: 3/5... Training loss: 0.1284
Epoch: 3/5... Training loss: 0.1274
Epoch: 3/5... Training loss: 0.1276
Epoch: 3/5... Training loss: 0.1262
Epoch: 3/5... Training loss: 0.1270
Epoch: 3/5... Training loss: 0.1265
Epoch: 3/5... Training loss: 0.1288
Epoch: 3/5... Training loss: 0.1213
Epoch: 3/5... Training loss: 0.1262
Epoch: 3/5... Training loss: 0.1295
Epoch: 3/5... Training loss: 0.1238
Epoch: 3/5... Training loss: 0.1266
Epoch: 3/5... Training loss: 0.1284
Epoch: 3/5... Training loss: 0.1282
Epoch: 3/5... Training loss: 0.1234
Epoch: 3/5... Training loss: 0.1262
Epoch: 3/5... Training loss: 0.1226
Epoch: 3/5... Training loss: 0.1311
Epoch: 3/5... Training loss: 0.1317
Epoch: 3/5... Training loss: 0.1259
Epoch: 3/5... Training loss: 0.1316
Epoch: 3/5... Training loss: 0.1268
Epoch: 3/5... Training loss: 0.1230
Epoch: 3/5... Training loss: 0.1245
Epoch: 3/5... Training loss: 0.1331
Epoch: 3/5... Training loss: 0.1236
Epoch: 3/5... Training loss: 0.1281
Epoch: 3/5... Training loss: 0.1279
Epoch: 3/5... Training loss: 0.1292
Epoch: 3/5... Training loss: 0.1248
Epoch: 3/5... Training loss: 0.1274
Epoch: 3/5... Training loss: 0.1273
Epoch: 3/5... Training loss: 0.1304
Epoch: 3/5... Training loss: 0.1270
Epoch: 3/5... Training loss: 0.1240
Epoch: 3/5... Training loss: 0.1214
Epoch: 3/5... Training loss: 0.1275
Epoch: 3/5... Training loss: 0.1266
Epoch: 3/5... Training loss: 0.1262
Epoch: 3/5... Training loss: 0.1269
Epoch: 3/5... Training loss: 0.1222
Epoch: 3/5... Training loss: 0.1287
Epoch: 3/5... Training loss: 0.1270
Epoch: 3/5... Training loss: 0.1233
Epoch: 3/5... Training loss: 0.1247
Epoch: 3/5... Training loss: 0.1243
Epoch: 3/5... Training loss: 0.1259
Epoch: 3/5... Training loss: 0.1267
Epoch: 3/5... Training loss: 0.1264
Epoch: 3/5... Training loss: 0.1249
Epoch: 3/5... Training loss: 0.1235
Epoch: 3/5... Training loss: 0.1270
Epoch: 3/5... Training loss: 0.1288
Epoch: 3/5... Training loss: 0.1235
Epoch: 3/5... Training loss: 0.1269
Epoch: 3/5... Training loss: 0.1293
Epoch: 3/5... Training loss: 0.1309
Epoch: 3/5... Training loss: 0.1234
Epoch: 3/5... Training loss: 0.1258
Epoch: 3/5... Training loss: 0.1318
Epoch: 3/5... Training loss: 0.1304
Epoch: 3/5... Training loss: 0.1222
Epoch: 3/5... Training loss: 0.1200
Epoch: 3/5... Training loss: 0.1226
Epoch: 3/5... Training loss: 0.1275
Epoch: 3/5... Training loss: 0.1234
Epoch: 3/5... Training loss: 0.1256
Epoch: 3/5... Training loss: 0.1189
Epoch: 3/5... Training loss: 0.1294
Epoch: 3/5... Training loss: 0.1220
Epoch: 3/5... Training loss: 0.1268
Epoch: 3/5... Training loss: 0.1226
Epoch: 3/5... Training loss: 0.1236
Epoch: 3/5... Training loss: 0.1243
Epoch: 3/5... Training loss: 0.1308
Epoch: 3/5... Training loss: 0.1227
Epoch: 3/5... Training loss: 0.1303
Epoch: 3/5... Training loss: 0.1275
Epoch: 3/5... Training loss: 0.1233
Epoch: 3/5... Training loss: 0.1252
Epoch: 3/5... Training loss: 0.1254
Epoch: 3/5... Training loss: 0.1280
Epoch: 3/5... Training loss: 0.1234
Epoch: 3/5... Training loss: 0.1256
Epoch: 3/5... Training loss: 0.1234
Epoch: 3/5... Training loss: 0.1225
Epoch: 3/5... Training loss: 0.1273
Epoch: 3/5... Training loss: 0.1285
Epoch: 3/5... Training loss: 0.1259
Epoch: 3/5... Training loss: 0.1251
Epoch: 3/5... Training loss: 0.1251
Epoch: 3/5... Training loss: 0.1258
Epoch: 3/5... Training loss: 0.1238
Epoch: 3/5... Training loss: 0.1259
Epoch: 3/5... Training loss: 0.1246
Epoch: 3/5... Training loss: 0.1271
Epoch: 3/5... Training loss: 0.1274
Epoch: 3/5... Training loss: 0.1273
Epoch: 3/5... Training loss: 0.1187
Epoch: 3/5... Training loss: 0.1210
Epoch: 3/5... Training loss: 0.1207
Epoch: 3/5... Training loss: 0.1256
Epoch: 3/5... Training loss: 0.1261
Epoch: 3/5... Training loss: 0.1246
Epoch: 3/5... Training loss: 0.1282
Epoch: 3/5... Training loss: 0.1204
Epoch: 3/5... Training loss: 0.1307
Epoch: 3/5... Training loss: 0.1250
Epoch: 3/5... Training loss: 0.1275
Epoch: 3/5... Training loss: 0.1217
Epoch: 3/5... Training loss: 0.1267
Epoch: 3/5... Training loss: 0.1271
Epoch: 3/5... Training loss: 0.1211
Epoch: 3/5... Training loss: 0.1264
Epoch: 3/5... Training loss: 0.1238
Epoch: 3/5... Training loss: 0.1250
Epoch: 3/5... Training loss: 0.1264
Epoch: 3/5... Training loss: 0.1231
Epoch: 3/5... Training loss: 0.1236
Epoch: 3/5... Training loss: 0.1314
Epoch: 3/5... Training loss: 0.1212
Epoch: 3/5... Training loss: 0.1265
Epoch: 3/5... Training loss: 0.1185
Epoch: 3/5... Training loss: 0.1244
Epoch: 3/5... Training loss: 0.1295
Epoch: 3/5... Training loss: 0.1193
Epoch: 3/5... Training loss: 0.1232
Epoch: 3/5... Training loss: 0.1208
Epoch: 3/5... Training loss: 0.1245
Epoch: 3/5... Training loss: 0.1240
Epoch: 3/5... Training loss: 0.1272
Epoch: 3/5... Training loss: 0.1218
Epoch: 3/5... Training loss: 0.1192
Epoch: 3/5... Training loss: 0.1273
Epoch: 4/5... Training loss: 0.1235
Epoch: 4/5... Training loss: 0.1230
Epoch: 4/5... Training loss: 0.1200
Epoch: 4/5... Training loss: 0.1222
Epoch: 4/5... Training loss: 0.1210
Epoch: 4/5... Training loss: 0.1261
Epoch: 4/5... Training loss: 0.1273
Epoch: 4/5... Training loss: 0.1281
Epoch: 4/5... Training loss: 0.1255
Epoch: 4/5... Training loss: 0.1252
Epoch: 4/5... Training loss: 0.1207
Epoch: 4/5... Training loss: 0.1259
Epoch: 4/5... Training loss: 0.1259
Epoch: 4/5... Training loss: 0.1208
Epoch: 4/5... Training loss: 0.1180
Epoch: 4/5... Training loss: 0.1261
Epoch: 4/5... Training loss: 0.1210
Epoch: 4/5... Training loss: 0.1270
Epoch: 4/5... Training loss: 0.1263
Epoch: 4/5... Training loss: 0.1242
Epoch: 4/5... Training loss: 0.1224
Epoch: 4/5... Training loss: 0.1211
Epoch: 4/5... Training loss: 0.1220
Epoch: 4/5... Training loss: 0.1190
Epoch: 4/5... Training loss: 0.1220
Epoch: 4/5... Training loss: 0.1231
Epoch: 4/5... Training loss: 0.1237
Epoch: 4/5... Training loss: 0.1263
Epoch: 4/5... Training loss: 0.1194
Epoch: 4/5... Training loss: 0.1198
Epoch: 4/5... Training loss: 0.1176
Epoch: 4/5... Training loss: 0.1245
Epoch: 4/5... Training loss: 0.1202
Epoch: 4/5... Training loss: 0.1224
Epoch: 4/5... Training loss: 0.1193
Epoch: 4/5... Training loss: 0.1245
Epoch: 4/5... Training loss: 0.1220
Epoch: 4/5... Training loss: 0.1253
Epoch: 4/5... Training loss: 0.1231
Epoch: 4/5... Training loss: 0.1279
Epoch: 4/5... Training loss: 0.1255
Epoch: 4/5... Training loss: 0.1238
Epoch: 4/5... Training loss: 0.1254
Epoch: 4/5... Training loss: 0.1207
Epoch: 4/5... Training loss: 0.1237
Epoch: 4/5... Training loss: 0.1270
Epoch: 4/5... Training loss: 0.1225
Epoch: 4/5... Training loss: 0.1277
Epoch: 4/5... Training loss: 0.1273
Epoch: 4/5... Training loss: 0.1222
Epoch: 4/5... Training loss: 0.1265
Epoch: 4/5... Training loss: 0.1236
Epoch: 4/5... Training loss: 0.1283
Epoch: 4/5... Training loss: 0.1241
Epoch: 4/5... Training loss: 0.1254
Epoch: 4/5... Training loss: 0.1236
Epoch: 4/5... Training loss: 0.1218
Epoch: 4/5... Training loss: 0.1244
Epoch: 4/5... Training loss: 0.1238
Epoch: 4/5... Training loss: 0.1260
Epoch: 4/5... Training loss: 0.1235
Epoch: 4/5... Training loss: 0.1253
Epoch: 4/5... Training loss: 0.1209
Epoch: 4/5... Training loss: 0.1203
Epoch: 4/5... Training loss: 0.1220
Epoch: 4/5... Training loss: 0.1220
Epoch: 4/5... Training loss: 0.1275
Epoch: 4/5... Training loss: 0.1231
Epoch: 4/5... Training loss: 0.1212
Epoch: 4/5... Training loss: 0.1214
Epoch: 4/5... Training loss: 0.1240
Epoch: 4/5... Training loss: 0.1276
Epoch: 4/5... Training loss: 0.1214
Epoch: 4/5... Training loss: 0.1176
Epoch: 4/5... Training loss: 0.1239
Epoch: 4/5... Training loss: 0.1214
Epoch: 4/5... Training loss: 0.1187
Epoch: 4/5... Training loss: 0.1167
Epoch: 4/5... Training loss: 0.1194
Epoch: 4/5... Training loss: 0.1202
Epoch: 4/5... Training loss: 0.1174
Epoch: 4/5... Training loss: 0.1230
Epoch: 4/5... Training loss: 0.1203
Epoch: 4/5... Training loss: 0.1241
Epoch: 4/5... Training loss: 0.1237
Epoch: 4/5... Training loss: 0.1228
Epoch: 4/5... Training loss: 0.1214
Epoch: 4/5... Training loss: 0.1164
Epoch: 4/5... Training loss: 0.1146
Epoch: 4/5... Training loss: 0.1235
Epoch: 4/5... Training loss: 0.1220
Epoch: 4/5... Training loss: 0.1190
Epoch: 4/5... Training loss: 0.1227
Epoch: 4/5... Training loss: 0.1234
Epoch: 4/5... Training loss: 0.1184
Epoch: 4/5... Training loss: 0.1227
Epoch: 4/5... Training loss: 0.1182
Epoch: 4/5... Training loss: 0.1227
Epoch: 4/5... Training loss: 0.1189
Epoch: 4/5... Training loss: 0.1224
Epoch: 4/5... Training loss: 0.1263
Epoch: 4/5... Training loss: 0.1211
Epoch: 4/5... Training loss: 0.1218
Epoch: 4/5... Training loss: 0.1219
Epoch: 4/5... Training loss: 0.1243
Epoch: 4/5... Training loss: 0.1220
Epoch: 4/5... Training loss: 0.1197
Epoch: 4/5... Training loss: 0.1222
Epoch: 4/5... Training loss: 0.1186
Epoch: 4/5... Training loss: 0.1219
Epoch: 4/5... Training loss: 0.1242
Epoch: 4/5... Training loss: 0.1165
Epoch: 4/5... Training loss: 0.1193
Epoch: 4/5... Training loss: 0.1211
Epoch: 4/5... Training loss: 0.1209
Epoch: 4/5... Training loss: 0.1221
Epoch: 4/5... Training loss: 0.1214
Epoch: 4/5... Training loss: 0.1248
Epoch: 4/5... Training loss: 0.1206
Epoch: 4/5... Training loss: 0.1233
Epoch: 4/5... Training loss: 0.1216
Epoch: 4/5... Training loss: 0.1194
Epoch: 4/5... Training loss: 0.1263
Epoch: 4/5... Training loss: 0.1194
Epoch: 4/5... Training loss: 0.1216
Epoch: 4/5... Training loss: 0.1192
Epoch: 4/5... Training loss: 0.1238
Epoch: 4/5... Training loss: 0.1278
Epoch: 4/5... Training loss: 0.1205
Epoch: 4/5... Training loss: 0.1194
Epoch: 4/5... Training loss: 0.1205
Epoch: 4/5... Training loss: 0.1175
Epoch: 4/5... Training loss: 0.1176
Epoch: 4/5... Training loss: 0.1247
Epoch: 4/5... Training loss: 0.1193
Epoch: 4/5... Training loss: 0.1219
Epoch: 4/5... Training loss: 0.1188
Epoch: 4/5... Training loss: 0.1230
Epoch: 4/5... Training loss: 0.1217
Epoch: 4/5... Training loss: 0.1210
Epoch: 4/5... Training loss: 0.1187
Epoch: 4/5... Training loss: 0.1212
Epoch: 4/5... Training loss: 0.1209
Epoch: 4/5... Training loss: 0.1232
Epoch: 4/5... Training loss: 0.1165
Epoch: 4/5... Training loss: 0.1192
Epoch: 4/5... Training loss: 0.1181
Epoch: 4/5... Training loss: 0.1235
Epoch: 4/5... Training loss: 0.1192
Epoch: 4/5... Training loss: 0.1244
Epoch: 4/5... Training loss: 0.1192
Epoch: 4/5... Training loss: 0.1196
Epoch: 4/5... Training loss: 0.1212
Epoch: 4/5... Training loss: 0.1193
Epoch: 4/5... Training loss: 0.1223
Epoch: 4/5... Training loss: 0.1234
Epoch: 4/5... Training loss: 0.1176
Epoch: 4/5... Training loss: 0.1179
Epoch: 4/5... Training loss: 0.1189
Epoch: 4/5... Training loss: 0.1258
Epoch: 4/5... Training loss: 0.1226
Epoch: 4/5... Training loss: 0.1227
Epoch: 4/5... Training loss: 0.1193
Epoch: 4/5... Training loss: 0.1190
Epoch: 4/5... Training loss: 0.1193
Epoch: 4/5... Training loss: 0.1211
Epoch: 4/5... Training loss: 0.1234
Epoch: 4/5... Training loss: 0.1208
Epoch: 4/5... Training loss: 0.1223
Epoch: 4/5... Training loss: 0.1197
Epoch: 4/5... Training loss: 0.1223
Epoch: 4/5... Training loss: 0.1198
Epoch: 4/5... Training loss: 0.1173
Epoch: 4/5... Training loss: 0.1180
Epoch: 4/5... Training loss: 0.1222
Epoch: 4/5... Training loss: 0.1146
Epoch: 4/5... Training loss: 0.1208
Epoch: 4/5... Training loss: 0.1204
Epoch: 4/5... Training loss: 0.1233
Epoch: 4/5... Training loss: 0.1263
Epoch: 4/5... Training loss: 0.1227
Epoch: 4/5... Training loss: 0.1217
Epoch: 4/5... Training loss: 0.1238
Epoch: 4/5... Training loss: 0.1177
Epoch: 4/5... Training loss: 0.1211
Epoch: 4/5... Training loss: 0.1194
Epoch: 4/5... Training loss: 0.1187
Epoch: 4/5... Training loss: 0.1214
Epoch: 4/5... Training loss: 0.1179
Epoch: 4/5... Training loss: 0.1188
Epoch: 4/5... Training loss: 0.1195
Epoch: 4/5... Training loss: 0.1176
Epoch: 4/5... Training loss: 0.1217
Epoch: 4/5... Training loss: 0.1253
Epoch: 4/5... Training loss: 0.1217
Epoch: 4/5... Training loss: 0.1227
Epoch: 4/5... Training loss: 0.1157
Epoch: 4/5... Training loss: 0.1203
Epoch: 4/5... Training loss: 0.1192
Epoch: 4/5... Training loss: 0.1189
Epoch: 4/5... Training loss: 0.1197
Epoch: 4/5... Training loss: 0.1191
Epoch: 4/5... Training loss: 0.1220
Epoch: 4/5... Training loss: 0.1199
Epoch: 4/5... Training loss: 0.1200
Epoch: 4/5... Training loss: 0.1201
Epoch: 4/5... Training loss: 0.1160
Epoch: 4/5... Training loss: 0.1226
Epoch: 4/5... Training loss: 0.1215
Epoch: 4/5... Training loss: 0.1190
Epoch: 4/5... Training loss: 0.1183
Epoch: 4/5... Training loss: 0.1173
Epoch: 4/5... Training loss: 0.1223
Epoch: 4/5... Training loss: 0.1165
Epoch: 4/5... Training loss: 0.1221
Epoch: 4/5... Training loss: 0.1238
Epoch: 4/5... Training loss: 0.1196
Epoch: 4/5... Training loss: 0.1229
Epoch: 4/5... Training loss: 0.1174
Epoch: 4/5... Training loss: 0.1174
Epoch: 4/5... Training loss: 0.1180
Epoch: 4/5... Training loss: 0.1187
Epoch: 4/5... Training loss: 0.1202
Epoch: 4/5... Training loss: 0.1206
Epoch: 4/5... Training loss: 0.1170
Epoch: 4/5... Training loss: 0.1218
Epoch: 4/5... Training loss: 0.1142
Epoch: 4/5... Training loss: 0.1149
Epoch: 4/5... Training loss: 0.1186
Epoch: 4/5... Training loss: 0.1213
Epoch: 4/5... Training loss: 0.1194
Epoch: 4/5... Training loss: 0.1192
Epoch: 4/5... Training loss: 0.1169
Epoch: 4/5... Training loss: 0.1215
Epoch: 4/5... Training loss: 0.1188
Epoch: 4/5... Training loss: 0.1188
Epoch: 4/5... Training loss: 0.1197
Epoch: 4/5... Training loss: 0.1210
Epoch: 4/5... Training loss: 0.1201
Epoch: 4/5... Training loss: 0.1245
Epoch: 4/5... Training loss: 0.1189
Epoch: 4/5... Training loss: 0.1189
Epoch: 4/5... Training loss: 0.1214
Epoch: 4/5... Training loss: 0.1168
Epoch: 4/5... Training loss: 0.1177
Epoch: 4/5... Training loss: 0.1182
Epoch: 4/5... Training loss: 0.1142
Epoch: 4/5... Training loss: 0.1206
Epoch: 4/5... Training loss: 0.1229
Epoch: 4/5... Training loss: 0.1181
Epoch: 4/5... Training loss: 0.1189
Epoch: 4/5... Training loss: 0.1168
Epoch: 4/5... Training loss: 0.1212
Epoch: 4/5... Training loss: 0.1196
Epoch: 4/5... Training loss: 0.1213
Epoch: 4/5... Training loss: 0.1180
Epoch: 4/5... Training loss: 0.1170
Epoch: 4/5... Training loss: 0.1173
Epoch: 4/5... Training loss: 0.1196
Epoch: 4/5... Training loss: 0.1203
Epoch: 4/5... Training loss: 0.1185
Epoch: 4/5... Training loss: 0.1216
Epoch: 4/5... Training loss: 0.1201
Epoch: 4/5... Training loss: 0.1213
Epoch: 4/5... Training loss: 0.1176
Epoch: 4/5... Training loss: 0.1189
Epoch: 4/5... Training loss: 0.1172
Epoch: 4/5... Training loss: 0.1193
Epoch: 4/5... Training loss: 0.1202
Epoch: 4/5... Training loss: 0.1185
Epoch: 4/5... Training loss: 0.1163
Epoch: 4/5... Training loss: 0.1282
Epoch: 4/5... Training loss: 0.1223
Epoch: 4/5... Training loss: 0.1211
Epoch: 4/5... Training loss: 0.1182
Epoch: 4/5... Training loss: 0.1210
Epoch: 4/5... Training loss: 0.1159
Epoch: 4/5... Training loss: 0.1177
Epoch: 4/5... Training loss: 0.1167
Epoch: 4/5... Training loss: 0.1236
Epoch: 4/5... Training loss: 0.1153
Epoch: 4/5... Training loss: 0.1164
Epoch: 4/5... Training loss: 0.1184
Epoch: 4/5... Training loss: 0.1180
Epoch: 4/5... Training loss: 0.1193
Epoch: 4/5... Training loss: 0.1194
Epoch: 4/5... Training loss: 0.1147
Epoch: 4/5... Training loss: 0.1151
Epoch: 4/5... Training loss: 0.1145
Epoch: 4/5... Training loss: 0.1212
Epoch: 4/5... Training loss: 0.1208
Epoch: 4/5... Training loss: 0.1176
Epoch: 4/5... Training loss: 0.1167
Epoch: 4/5... Training loss: 0.1181
Epoch: 4/5... Training loss: 0.1166
Epoch: 4/5... Training loss: 0.1163
Epoch: 4/5... Training loss: 0.1209
Epoch: 4/5... Training loss: 0.1165
Epoch: 4/5... Training loss: 0.1171
Epoch: 4/5... Training loss: 0.1210
Epoch: 5/5... Training loss: 0.1208
Epoch: 5/5... Training loss: 0.1167
Epoch: 5/5... Training loss: 0.1208
Epoch: 5/5... Training loss: 0.1202
Epoch: 5/5... Training loss: 0.1219
Epoch: 5/5... Training loss: 0.1208
Epoch: 5/5... Training loss: 0.1222
Epoch: 5/5... Training loss: 0.1203
Epoch: 5/5... Training loss: 0.1201
Epoch: 5/5... Training loss: 0.1170
Epoch: 5/5... Training loss: 0.1150
Epoch: 5/5... Training loss: 0.1212
Epoch: 5/5... Training loss: 0.1201
Epoch: 5/5... Training loss: 0.1158
Epoch: 5/5... Training loss: 0.1172
Epoch: 5/5... Training loss: 0.1161
Epoch: 5/5... Training loss: 0.1131
Epoch: 5/5... Training loss: 0.1194
Epoch: 5/5... Training loss: 0.1173
Epoch: 5/5... Training loss: 0.1165
Epoch: 5/5... Training loss: 0.1186
Epoch: 5/5... Training loss: 0.1175
Epoch: 5/5... Training loss: 0.1203
Epoch: 5/5... Training loss: 0.1142
Epoch: 5/5... Training loss: 0.1204
Epoch: 5/5... Training loss: 0.1153
Epoch: 5/5... Training loss: 0.1142
Epoch: 5/5... Training loss: 0.1197
Epoch: 5/5... Training loss: 0.1123
Epoch: 5/5... Training loss: 0.1167
Epoch: 5/5... Training loss: 0.1207
Epoch: 5/5... Training loss: 0.1136
Epoch: 5/5... Training loss: 0.1193
Epoch: 5/5... Training loss: 0.1174
Epoch: 5/5... Training loss: 0.1169
Epoch: 5/5... Training loss: 0.1244
Epoch: 5/5... Training loss: 0.1153
Epoch: 5/5... Training loss: 0.1163
Epoch: 5/5... Training loss: 0.1164
Epoch: 5/5... Training loss: 0.1194
Epoch: 5/5... Training loss: 0.1155
Epoch: 5/5... Training loss: 0.1208
Epoch: 5/5... Training loss: 0.1187
Epoch: 5/5... Training loss: 0.1158
Epoch: 5/5... Training loss: 0.1160
Epoch: 5/5... Training loss: 0.1224
Epoch: 5/5... Training loss: 0.1153
Epoch: 5/5... Training loss: 0.1173
Epoch: 5/5... Training loss: 0.1163
Epoch: 5/5... Training loss: 0.1163
Epoch: 5/5... Training loss: 0.1197
Epoch: 5/5... Training loss: 0.1187
Epoch: 5/5... Training loss: 0.1153
Epoch: 5/5... Training loss: 0.1195
Epoch: 5/5... Training loss: 0.1136
Epoch: 5/5... Training loss: 0.1142
Epoch: 5/5... Training loss: 0.1174
Epoch: 5/5... Training loss: 0.1180
Epoch: 5/5... Training loss: 0.1193
Epoch: 5/5... Training loss: 0.1239
Epoch: 5/5... Training loss: 0.1167
Epoch: 5/5... Training loss: 0.1174
Epoch: 5/5... Training loss: 0.1189
Epoch: 5/5... Training loss: 0.1212
Epoch: 5/5... Training loss: 0.1253
Epoch: 5/5... Training loss: 0.1161
Epoch: 5/5... Training loss: 0.1236
Epoch: 5/5... Training loss: 0.1241
Epoch: 5/5... Training loss: 0.1194
Epoch: 5/5... Training loss: 0.1244
Epoch: 5/5... Training loss: 0.1203
Epoch: 5/5... Training loss: 0.1225
Epoch: 5/5... Training loss: 0.1140
Epoch: 5/5... Training loss: 0.1194
Epoch: 5/5... Training loss: 0.1195
Epoch: 5/5... Training loss: 0.1195
Epoch: 5/5... Training loss: 0.1176
Epoch: 5/5... Training loss: 0.1168
Epoch: 5/5... Training loss: 0.1162
Epoch: 5/5... Training loss: 0.1161
Epoch: 5/5... Training loss: 0.1198
Epoch: 5/5... Training loss: 0.1219
Epoch: 5/5... Training loss: 0.1163
Epoch: 5/5... Training loss: 0.1149
Epoch: 5/5... Training loss: 0.1139
Epoch: 5/5... Training loss: 0.1180
Epoch: 5/5... Training loss: 0.1169
Epoch: 5/5... Training loss: 0.1137
Epoch: 5/5... Training loss: 0.1150
Epoch: 5/5... Training loss: 0.1159
Epoch: 5/5... Training loss: 0.1174
Epoch: 5/5... Training loss: 0.1162
Epoch: 5/5... Training loss: 0.1169
Epoch: 5/5... Training loss: 0.1178
Epoch: 5/5... Training loss: 0.1143
Epoch: 5/5... Training loss: 0.1158
Epoch: 5/5... Training loss: 0.1157
Epoch: 5/5... Training loss: 0.1188
Epoch: 5/5... Training loss: 0.1078
Epoch: 5/5... Training loss: 0.1167
Epoch: 5/5... Training loss: 0.1135
Epoch: 5/5... Training loss: 0.1182
Epoch: 5/5... Training loss: 0.1164
Epoch: 5/5... Training loss: 0.1163
Epoch: 5/5... Training loss: 0.1167
Epoch: 5/5... Training loss: 0.1179
Epoch: 5/5... Training loss: 0.1149
Epoch: 5/5... Training loss: 0.1177
Epoch: 5/5... Training loss: 0.1169
Epoch: 5/5... Training loss: 0.1131
Epoch: 5/5... Training loss: 0.1166
Epoch: 5/5... Training loss: 0.1184
Epoch: 5/5... Training loss: 0.1196
Epoch: 5/5... Training loss: 0.1173
Epoch: 5/5... Training loss: 0.1203
Epoch: 5/5... Training loss: 0.1186
Epoch: 5/5... Training loss: 0.1161
Epoch: 5/5... Training loss: 0.1095
Epoch: 5/5... Training loss: 0.1150
Epoch: 5/5... Training loss: 0.1208
Epoch: 5/5... Training loss: 0.1159
Epoch: 5/5... Training loss: 0.1171
Epoch: 5/5... Training loss: 0.1144
Epoch: 5/5... Training loss: 0.1211
Epoch: 5/5... Training loss: 0.1131
Epoch: 5/5... Training loss: 0.1167
Epoch: 5/5... Training loss: 0.1177
Epoch: 5/5... Training loss: 0.1157
Epoch: 5/5... Training loss: 0.1164
Epoch: 5/5... Training loss: 0.1163
Epoch: 5/5... Training loss: 0.1183
Epoch: 5/5... Training loss: 0.1165
Epoch: 5/5... Training loss: 0.1167
Epoch: 5/5... Training loss: 0.1175
Epoch: 5/5... Training loss: 0.1111
Epoch: 5/5... Training loss: 0.1175
Epoch: 5/5... Training loss: 0.1171
Epoch: 5/5... Training loss: 0.1169
Epoch: 5/5... Training loss: 0.1211
Epoch: 5/5... Training loss: 0.1184
Epoch: 5/5... Training loss: 0.1154
Epoch: 5/5... Training loss: 0.1137
Epoch: 5/5... Training loss: 0.1173
Epoch: 5/5... Training loss: 0.1142
Epoch: 5/5... Training loss: 0.1166
Epoch: 5/5... Training loss: 0.1143
Epoch: 5/5... Training loss: 0.1147
Epoch: 5/5... Training loss: 0.1141
Epoch: 5/5... Training loss: 0.1142
Epoch: 5/5... Training loss: 0.1185
Epoch: 5/5... Training loss: 0.1194
Epoch: 5/5... Training loss: 0.1139
Epoch: 5/5... Training loss: 0.1162
Epoch: 5/5... Training loss: 0.1182
Epoch: 5/5... Training loss: 0.1199
Epoch: 5/5... Training loss: 0.1206
Epoch: 5/5... Training loss: 0.1173
Epoch: 5/5... Training loss: 0.1155
Epoch: 5/5... Training loss: 0.1184
Epoch: 5/5... Training loss: 0.1168
Epoch: 5/5... Training loss: 0.1125
Epoch: 5/5... Training loss: 0.1160
Epoch: 5/5... Training loss: 0.1167
Epoch: 5/5... Training loss: 0.1178
Epoch: 5/5... Training loss: 0.1213
Epoch: 5/5... Training loss: 0.1180
Epoch: 5/5... Training loss: 0.1185
Epoch: 5/5... Training loss: 0.1204
Epoch: 5/5... Training loss: 0.1176
Epoch: 5/5... Training loss: 0.1119
Epoch: 5/5... Training loss: 0.1185
Epoch: 5/5... Training loss: 0.1184
Epoch: 5/5... Training loss: 0.1142
Epoch: 5/5... Training loss: 0.1156
Epoch: 5/5... Training loss: 0.1160
Epoch: 5/5... Training loss: 0.1195
Epoch: 5/5... Training loss: 0.1115
Epoch: 5/5... Training loss: 0.1162
Epoch: 5/5... Training loss: 0.1159
Epoch: 5/5... Training loss: 0.1179
Epoch: 5/5... Training loss: 0.1144
Epoch: 5/5... Training loss: 0.1180
Epoch: 5/5... Training loss: 0.1151
Epoch: 5/5... Training loss: 0.1172
Epoch: 5/5... Training loss: 0.1158
Epoch: 5/5... Training loss: 0.1147
Epoch: 5/5... Training loss: 0.1206
Epoch: 5/5... Training loss: 0.1165
Epoch: 5/5... Training loss: 0.1159
Epoch: 5/5... Training loss: 0.1158
Epoch: 5/5... Training loss: 0.1147
Epoch: 5/5... Training loss: 0.1148
Epoch: 5/5... Training loss: 0.1197
Epoch: 5/5... Training loss: 0.1133
Epoch: 5/5... Training loss: 0.1149
Epoch: 5/5... Training loss: 0.1124
Epoch: 5/5... Training loss: 0.1144
Epoch: 5/5... Training loss: 0.1173
Epoch: 5/5... Training loss: 0.1178
Epoch: 5/5... Training loss: 0.1159
Epoch: 5/5... Training loss: 0.1197
Epoch: 5/5... Training loss: 0.1128
Epoch: 5/5... Training loss: 0.1215
Epoch: 5/5... Training loss: 0.1125
Epoch: 5/5... Training loss: 0.1164
Epoch: 5/5... Training loss: 0.1189
Epoch: 5/5... Training loss: 0.1149
Epoch: 5/5... Training loss: 0.1123
Epoch: 5/5... Training loss: 0.1126
Epoch: 5/5... Training loss: 0.1165
Epoch: 5/5... Training loss: 0.1186
Epoch: 5/5... Training loss: 0.1165
Epoch: 5/5... Training loss: 0.1146
Epoch: 5/5... Training loss: 0.1133
Epoch: 5/5... Training loss: 0.1135
Epoch: 5/5... Training loss: 0.1134
Epoch: 5/5... Training loss: 0.1127
Epoch: 5/5... Training loss: 0.1142
Epoch: 5/5... Training loss: 0.1128
Epoch: 5/5... Training loss: 0.1163
Epoch: 5/5... Training loss: 0.1181
Epoch: 5/5... Training loss: 0.1171
Epoch: 5/5... Training loss: 0.1157
Epoch: 5/5... Training loss: 0.1151
Epoch: 5/5... Training loss: 0.1144
Epoch: 5/5... Training loss: 0.1138
Epoch: 5/5... Training loss: 0.1162
Epoch: 5/5... Training loss: 0.1159
Epoch: 5/5... Training loss: 0.1182
Epoch: 5/5... Training loss: 0.1125
Epoch: 5/5... Training loss: 0.1179
Epoch: 5/5... Training loss: 0.1087
Epoch: 5/5... Training loss: 0.1146
Epoch: 5/5... Training loss: 0.1128
Epoch: 5/5... Training loss: 0.1147
Epoch: 5/5... Training loss: 0.1121
Epoch: 5/5... Training loss: 0.1145
Epoch: 5/5... Training loss: 0.1124
Epoch: 5/5... Training loss: 0.1126
Epoch: 5/5... Training loss: 0.1135
Epoch: 5/5... Training loss: 0.1127
Epoch: 5/5... Training loss: 0.1139
Epoch: 5/5... Training loss: 0.1160
Epoch: 5/5... Training loss: 0.1149
Epoch: 5/5... Training loss: 0.1121
Epoch: 5/5... Training loss: 0.1136
Epoch: 5/5... Training loss: 0.1162
Epoch: 5/5... Training loss: 0.1135
Epoch: 5/5... Training loss: 0.1179
Epoch: 5/5... Training loss: 0.1161
Epoch: 5/5... Training loss: 0.1191
Epoch: 5/5... Training loss: 0.1149
Epoch: 5/5... Training loss: 0.1139
Epoch: 5/5... Training loss: 0.1166
Epoch: 5/5... Training loss: 0.1176
Epoch: 5/5... Training loss: 0.1170
Epoch: 5/5... Training loss: 0.1128
Epoch: 5/5... Training loss: 0.1140
Epoch: 5/5... Training loss: 0.1160
Epoch: 5/5... Training loss: 0.1148
Epoch: 5/5... Training loss: 0.1116
Epoch: 5/5... Training loss: 0.1163
Epoch: 5/5... Training loss: 0.1163
Epoch: 5/5... Training loss: 0.1110
Epoch: 5/5... Training loss: 0.1164
Epoch: 5/5... Training loss: 0.1172
Epoch: 5/5... Training loss: 0.1125
Epoch: 5/5... Training loss: 0.1193
Epoch: 5/5... Training loss: 0.1199
Epoch: 5/5... Training loss: 0.1156
Epoch: 5/5... Training loss: 0.1158
Epoch: 5/5... Training loss: 0.1123
Epoch: 5/5... Training loss: 0.1180
Epoch: 5/5... Training loss: 0.1202
Epoch: 5/5... Training loss: 0.1144
Epoch: 5/5... Training loss: 0.1199
Epoch: 5/5... Training loss: 0.1174
Epoch: 5/5... Training loss: 0.1098
Epoch: 5/5... Training loss: 0.1141
Epoch: 5/5... Training loss: 0.1131
Epoch: 5/5... Training loss: 0.1153
Epoch: 5/5... Training loss: 0.1143
Epoch: 5/5... Training loss: 0.1165
Epoch: 5/5... Training loss: 0.1113
Epoch: 5/5... Training loss: 0.1186
Epoch: 5/5... Training loss: 0.1139
Epoch: 5/5... Training loss: 0.1145
Epoch: 5/5... Training loss: 0.1180
Epoch: 5/5... Training loss: 0.1128
Epoch: 5/5... Training loss: 0.1110
Epoch: 5/5... Training loss: 0.1121
Epoch: 5/5... Training loss: 0.1175
Epoch: 5/5... Training loss: 0.1180
Epoch: 5/5... Training loss: 0.1138
Epoch: 5/5... Training loss: 0.1112
Epoch: 5/5... Training loss: 0.1139
Epoch: 5/5... Training loss: 0.1135
Epoch: 5/5... Training loss: 0.1137
Epoch: 5/5... Training loss: 0.1151
Epoch: 5/5... Training loss: 0.1097
###Markdown
Checking out the performanceHere I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
###Code
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
###Output
_____no_output_____
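###Markdown
Beyond eyeballing the reconstructions, a quick numeric check can confirm that the denoiser actually reduces the error relative to the clean images. This is only a sketch; it assumes the trained session and the `decoded`, `inputs_`, and `noise_factor` names from the cells above are still available.
###Code
# Sketch: compare mean squared error before and after denoising on a
# small batch of test images (assumes the trained session above).
clean = mnist.test.images[:100].reshape((-1, 28, 28, 1))
noisy = np.clip(clean + noise_factor * np.random.randn(*clean.shape), 0., 1.)
denoised = sess.run(decoded, feed_dict={inputs_: noisy})
print("MSE noisy vs. clean:    {:.4f}".format(np.mean((noisy - clean) ** 2)))
print("MSE denoised vs. clean: {:.4f}".format(np.mean((denoised - clean) ** 2)))
###Output
_____no_output_____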
###Markdown
Convolutional AutoencoderSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
###Code
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
###Output
_____no_output_____
###Markdown
Network ArchitectureThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoderOkay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers *aren't*. Usually, you'll see **transposed convolution** layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, [`tf.nn.conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose). However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with [`tf.image.resize_images`](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/image/resize_images), followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.> **Exercise:** Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena *et al* claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in `tf.image.resize_images` or use [`tf.image.resize_nearest_neighbor`]( `https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor). For convolutional layers, use [`tf.layers.conv2d`](https://www.tensorflow.org/api_docs/python/tf/layers/conv2d). 
For example, you would write `conv1 = tf.layers.conv2d(inputs, 32, (5,5), padding='same', activation=tf.nn.relu)` for a layer with a depth of 32, a 5x5 kernel, stride of (1,1), padding is 'same', and a ReLU activation. Similarly, for the max-pool layers, use [`tf.layers.max_pooling2d`](https://www.tensorflow.org/api_docs/python/tf/layers/max_pooling2d).
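If it helps, a max-pool layer that halves the height and width could be written as `maxpool1 = tf.layers.max_pooling2d(conv1, pool_size=(2,2), strides=(2,2), padding='same')` — the variable names here are only illustrative, not part of the exercise template.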
###Code
learning_rate = 0.001
# Input and target placeholders
inputs_ = tf.placeholder(tf.float32, shape=[None, 28, 28, 1])
targets_ = tf.placeholder(tf.float32, shape=[None, 28, 28, 1])
kernel_size = 3
### Encoder
conv1 = tf.layers.conv2d(inputs_, filters=16, kernel_size=kernel_size, padding='SAME', activation=tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, pool_size=2, strides=(2, 2), padding='SAME')
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, filters=8, kernel_size=kernel_size, padding='SAME', activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, pool_size=2, strides=(2, 2), padding='SAME')
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, filters=8, kernel_size=kernel_size, padding='SAME', activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, pool_size=2, strides=(2, 2), padding='SAME')
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, size=(7, 7))
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, filters=8, kernel_size=kernel_size, padding='SAME', activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, size=(14, 14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, filters=8, kernel_size=kernel_size, padding='SAME', activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5, size=(28, 28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, filters=16, kernel_size=kernel_size, padding='SAME', activation=tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(conv6, filters=1, kernel_size=kernel_size, padding='SAME', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=targets_)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
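# A quick sanity check (added as a sketch, not part of the original exercise): the
# static shapes should match the sizes noted in the comments above.
print(encoded.get_shape().as_list())   # expect [None, 4, 4, 8]
print(decoded.get_shape().as_list())   # expect [None, 28, 28, 1]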
###Output
_____no_output_____
###Markdown
TrainingAs before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
###Code
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
if ii == 200:
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
###Output
_____no_output_____
###Markdown
DenoisingAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.> **Exercise:** Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
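As a minimal sketch of that noising recipe (the helper name is just illustrative, not part of the exercise):
def add_noise(images, noise_factor=0.5):
    """Add Gaussian noise, then clip back to the valid [0, 1] pixel range."""
    noisy = images + noise_factor * np.random.randn(*images.shape)
    return np.clip(noisy, 0., 1.)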
###Code
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
kernel_size = 3
### Encoder
conv1 = tf.layers.conv2d(inputs_, filters=32, kernel_size=kernel_size, padding='SAME', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, pool_size=2, strides=2, padding='SAME')
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, filters=32, kernel_size=kernel_size, padding='SAME', activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, pool_size=2, strides=2, padding='SAME')
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, filters=16, kernel_size=kernel_size, padding='SAME', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, pool_size=2, strides=2, padding='SAME')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, size=(7, 7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, filters=16, kernel_size=kernel_size, padding='SAME', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, size=(14, 14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, filters=32, kernel_size=kernel_size, padding='SAME', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, size=(28, 28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, filters=32, kernel_size=kernel_size, padding='SAME', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, filters=1, kernel_size=kernel_size, padding='SAME', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=targets_)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
if ii == 200:
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
###Output
_____no_output_____
###Markdown
Checking out the performanceHere I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
###Code
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
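# Besides eyeballing the reconstructions, the denoising quality can be quantified
# (a sketch added for illustration): evaluate the same cost tensor on this noisy batch.
test_cost = sess.run(cost, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1)),
                                      targets_: in_imgs.reshape((10, 28, 28, 1))})
print("Cost on the 10 noisy test images: {:.4f}".format(test_cost))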
###Output
_____no_output_____
###Markdown
Convolutional AutoencoderSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
###Code
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
###Output
_____no_output_____
###Markdown
Network ArchitectureThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoderOkay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers *aren't*. Usually, you'll see **transposed convolution** layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, [`tf.nn.conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose). However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with [`tf.image.resize_images`](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/image/resize_images), followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.> **Exercise:** Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena *et al* claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in `tf.image.resize_images` or use [`tf.image.resize_nearest_neighbor`]( `https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor). For convolutional layers, use [`tf.layers.conv2d`](https://www.tensorflow.org/api_docs/python/tf/layers/conv2d). 
For example, you would write `conv1 = tf.layers.conv2d(inputs, 32, (5,5), padding='same', activation=tf.nn.relu)` for a layer with a depth of 32, a 5x5 kernel, stride of (1,1), padding is 'same', and a ReLU activation. Similarly, for the max-pool layers, use [`tf.layers.max_pooling2d`](https://www.tensorflow.org/api_docs/python/tf/layers/max_pooling2d).
###Code
learning_rate = 0.001
# Input and target placeholders
inputs_ =
targets_ =
### Encoder
conv1 =
# Now 28x28x16
maxpool1 =
# Now 14x14x16
conv2 =
# Now 14x14x8
maxpool2 =
# Now 7x7x8
conv3 =
# Now 7x7x8
encoded =
# Now 4x4x8
### Decoder
upsample1 =
# Now 7x7x8
conv4 =
# Now 7x7x8
upsample2 =
# Now 14x14x8
conv5 =
# Now 14x14x8
upsample3 =
# Now 28x28x8
conv6 =
# Now 28x28x16
logits =
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded =
# Pass logits through sigmoid and calculate the cross-entropy loss
loss =
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
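# One possible completion of the placeholders above (a sketch, assuming the suggested
# 16-8-8 depths with 3x3 kernels; names mirror the placeholders, so the cost and
# optimizer lines already in this cell work unchanged once the blanks are filled in):
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1))
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1))
conv1 = tf.layers.conv2d(inputs_, 16, (3, 3), padding='same', activation=tf.nn.relu)    # 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, (2, 2), (2, 2), padding='same')               # 14x14x16
conv2 = tf.layers.conv2d(maxpool1, 8, (3, 3), padding='same', activation=tf.nn.relu)    # 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, (2, 2), (2, 2), padding='same')               # 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, (3, 3), padding='same', activation=tf.nn.relu)    # 7x7x8
encoded = tf.layers.max_pooling2d(conv3, (2, 2), (2, 2), padding='same')                # 4x4x8
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7, 7))                           # 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, (3, 3), padding='same', activation=tf.nn.relu)   # 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14, 14))                           # 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (3, 3), padding='same', activation=tf.nn.relu)   # 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28, 28))                           # 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, (3, 3), padding='same', activation=tf.nn.relu)  # 28x28x16
logits = tf.layers.conv2d(conv6, 1, (3, 3), padding='same', activation=None)            # 28x28x1
decoded = tf.nn.sigmoid(logits)
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)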
###Output
_____no_output_____
###Markdown
TrainingAs before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
###Code
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
###Output
_____no_output_____
###Markdown
DenoisingAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.> **Exercise:** Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
###Code
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 =
# Now 28x28x32
maxpool1 =
# Now 14x14x32
conv2 =
# Now 14x14x32
maxpool2 =
# Now 7x7x32
conv3 =
# Now 7x7x16
encoded =
# Now 4x4x16
### Decoder
upsample1 =
# Now 7x7x16
conv4 =
# Now 7x7x16
upsample2 =
# Now 14x14x16
conv5 =
# Now 14x14x32
upsample3 =
# Now 28x28x32
conv6 =
# Now 28x28x32
logits =
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded =
# Pass logits through sigmoid and calculate the cross-entropy loss
loss =
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
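# A more compact way to fill in the placeholders above (a sketch, assuming the suggested
# 32-32-16 depths and 3x3 kernels): build the encoder and decoder with small loops.
x = inputs_
for depth in [32, 32, 16]:
    x = tf.layers.conv2d(x, depth, (3, 3), padding='same', activation=tf.nn.relu)
    x = tf.layers.max_pooling2d(x, (2, 2), (2, 2), padding='same')
encoded = x  # 4x4x16
for size, depth in zip([(7, 7), (14, 14), (28, 28)], [16, 32, 32]):
    x = tf.image.resize_nearest_neighbor(x, size)
    x = tf.layers.conv2d(x, depth, (3, 3), padding='same', activation=tf.nn.relu)
logits = tf.layers.conv2d(x, 1, (3, 3), padding='same', activation=None)  # 28x28x1
decoded = tf.nn.sigmoid(logits)
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# The cost and optimizer definitions earlier in this cell then apply to this loss unchanged.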
sess = tf.Session()
epochs = 100
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
###Output
_____no_output_____
###Markdown
Checking out the performanceHere I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
###Code
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
###Output
_____no_output_____
###Markdown
Convolutional AutoencoderSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
###Code
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[3]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
###Output
_____no_output_____
###Markdown
Network ArchitectureThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoderOkay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers *aren't*. Usually, you'll see **transposed convolution** layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, [`tf.nn.conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose). However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with [`tf.image.resize_images`](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/image/resize_images), followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.> **Exercise:** Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena *et al* claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in `tf.image.resize_images` or use [`tf.image.resize_nearest_neighbor`]( `https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor). For convolutional layers, use [`tf.layers.conv2d`](https://www.tensorflow.org/api_docs/python/tf/layers/conv2d). 
For example, you would write `conv1 = tf.layers.conv2d(inputs, 32, (5,5), padding='same', activation=tf.nn.relu)` for a layer with a depth of 32, a 5x5 kernel, stride of (1,1), padding is 'same', and a ReLU activation. Similarly, for the max-pool layers, use [`tf.layers.max_pooling2d`](https://www.tensorflow.org/api_docs/python/tf/layers/max_pooling2d).
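For contrast, the transposed-convolution approach discussed above can do the same upsampling in a single op. The sketch below is purely illustrative (it is not what this exercise asks for, and `small_layer` is just a stand-in name for a 7x7x8 activation); note that using a kernel size equal to the stride is the setting the Distill article suggests to avoid checkerboard artifacts.
# Hypothetical 7x7x8 -> 14x14x8 step with a transposed convolution instead of resize + conv
deconv = tf.layers.conv2d_transpose(small_layer, filters=8, kernel_size=(2, 2),
                                    strides=(2, 2), padding='same', activation=tf.nn.relu)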
###Code
learning_rate = 0.001
# Input and target placeholders
inputs_ = tf.placeholder('float',[None,28,28,1])
targets_ = tf.placeholder('float',[None,28,28,1])
### Encoder
conv1 = tf.layers.conv2d(inputs_, 16, (5,5), padding= 'same', activation = tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2))
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, 8, (3,3), padding= 'same', activation = tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2))
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, (3,3), padding= 'same', activation = tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, (3,3), padding='same', activation=tf.nn.relu)
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=targets_)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
###Output
_____no_output_____
###Markdown
TrainingAs before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
###Code
sess = tf.Session()
epochs = 1
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
###Output
_____no_output_____
###Markdown
DenoisingAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.> **Exercise:** Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
###Code
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
decoded = tf.nn.sigmoid(logits, name='decoded')
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(0.001).minimize(cost)
sess = tf.Session()
epochs = 1
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
###Output
Epoch: 1/1... Training loss: 0.6829
Epoch: 1/1... Training loss: 0.6523
Epoch: 1/1... Training loss: 0.6081
Epoch: 1/1... Training loss: 0.5524
Epoch: 1/1... Training loss: 0.5087
Epoch: 1/1... Training loss: 0.5156
Epoch: 1/1... Training loss: 0.5153
Epoch: 1/1... Training loss: 0.5155
Epoch: 1/1... Training loss: 0.4886
Epoch: 1/1... Training loss: 0.4855
Epoch: 1/1... Training loss: 0.4714
Epoch: 1/1... Training loss: 0.4799
Epoch: 1/1... Training loss: 0.4782
Epoch: 1/1... Training loss: 0.4741
Epoch: 1/1... Training loss: 0.4791
Epoch: 1/1... Training loss: 0.4538
Epoch: 1/1... Training loss: 0.4467
Epoch: 1/1... Training loss: 0.4467
Epoch: 1/1... Training loss: 0.4291
Epoch: 1/1... Training loss: 0.4361
Epoch: 1/1... Training loss: 0.4119
Epoch: 1/1... Training loss: 0.4096
Epoch: 1/1... Training loss: 0.3941
Epoch: 1/1... Training loss: 0.3825
Epoch: 1/1... Training loss: 0.3830
Epoch: 1/1... Training loss: 0.3576
Epoch: 1/1... Training loss: 0.3505
Epoch: 1/1... Training loss: 0.3311
Epoch: 1/1... Training loss: 0.3249
Epoch: 1/1... Training loss: 0.3169
Epoch: 1/1... Training loss: 0.3006
Epoch: 1/1... Training loss: 0.2901
Epoch: 1/1... Training loss: 0.2883
Epoch: 1/1... Training loss: 0.2807
Epoch: 1/1... Training loss: 0.2797
Epoch: 1/1... Training loss: 0.2714
Epoch: 1/1... Training loss: 0.2662
Epoch: 1/1... Training loss: 0.2729
Epoch: 1/1... Training loss: 0.2677
Epoch: 1/1... Training loss: 0.2625
Epoch: 1/1... Training loss: 0.2665
Epoch: 1/1... Training loss: 0.2586
Epoch: 1/1... Training loss: 0.2532
Epoch: 1/1... Training loss: 0.2550
Epoch: 1/1... Training loss: 0.2493
Epoch: 1/1... Training loss: 0.2467
Epoch: 1/1... Training loss: 0.2480
Epoch: 1/1... Training loss: 0.2531
Epoch: 1/1... Training loss: 0.2420
Epoch: 1/1... Training loss: 0.2493
Epoch: 1/1... Training loss: 0.2416
Epoch: 1/1... Training loss: 0.2409
Epoch: 1/1... Training loss: 0.2439
Epoch: 1/1... Training loss: 0.2404
Epoch: 1/1... Training loss: 0.2440
Epoch: 1/1... Training loss: 0.2359
Epoch: 1/1... Training loss: 0.2353
Epoch: 1/1... Training loss: 0.2437
Epoch: 1/1... Training loss: 0.2577
Epoch: 1/1... Training loss: 0.2440
Epoch: 1/1... Training loss: 0.2489
Epoch: 1/1... Training loss: 0.2347
Epoch: 1/1... Training loss: 0.2372
Epoch: 1/1... Training loss: 0.2260
Epoch: 1/1... Training loss: 0.2256
Epoch: 1/1... Training loss: 0.2392
Epoch: 1/1... Training loss: 0.2318
Epoch: 1/1... Training loss: 0.2363
Epoch: 1/1... Training loss: 0.2328
Epoch: 1/1... Training loss: 0.2292
Epoch: 1/1... Training loss: 0.2321
Epoch: 1/1... Training loss: 0.2252
Epoch: 1/1... Training loss: 0.2287
Epoch: 1/1... Training loss: 0.2293
Epoch: 1/1... Training loss: 0.2208
Epoch: 1/1... Training loss: 0.2321
Epoch: 1/1... Training loss: 0.2279
Epoch: 1/1... Training loss: 0.2248
Epoch: 1/1... Training loss: 0.2310
Epoch: 1/1... Training loss: 0.2271
Epoch: 1/1... Training loss: 0.2185
Epoch: 1/1... Training loss: 0.2251
Epoch: 1/1... Training loss: 0.2241
Epoch: 1/1... Training loss: 0.2223
Epoch: 1/1... Training loss: 0.2259
Epoch: 1/1... Training loss: 0.2165
Epoch: 1/1... Training loss: 0.2192
Epoch: 1/1... Training loss: 0.2172
Epoch: 1/1... Training loss: 0.2223
Epoch: 1/1... Training loss: 0.2198
Epoch: 1/1... Training loss: 0.2196
Epoch: 1/1... Training loss: 0.2192
Epoch: 1/1... Training loss: 0.2180
Epoch: 1/1... Training loss: 0.2142
Epoch: 1/1... Training loss: 0.2140
Epoch: 1/1... Training loss: 0.2150
Epoch: 1/1... Training loss: 0.2090
Epoch: 1/1... Training loss: 0.2068
Epoch: 1/1... Training loss: 0.2175
Epoch: 1/1... Training loss: 0.2090
Epoch: 1/1... Training loss: 0.2140
Epoch: 1/1... Training loss: 0.2153
Epoch: 1/1... Training loss: 0.2070
Epoch: 1/1... Training loss: 0.2102
Epoch: 1/1... Training loss: 0.2135
Epoch: 1/1... Training loss: 0.2100
Epoch: 1/1... Training loss: 0.2076
Epoch: 1/1... Training loss: 0.2076
Epoch: 1/1... Training loss: 0.2023
Epoch: 1/1... Training loss: 0.2098
Epoch: 1/1... Training loss: 0.2056
Epoch: 1/1... Training loss: 0.2104
Epoch: 1/1... Training loss: 0.2108
Epoch: 1/1... Training loss: 0.2061
Epoch: 1/1... Training loss: 0.2051
Epoch: 1/1... Training loss: 0.1950
Epoch: 1/1... Training loss: 0.2064
Epoch: 1/1... Training loss: 0.2023
Epoch: 1/1... Training loss: 0.2017
Epoch: 1/1... Training loss: 0.2011
Epoch: 1/1... Training loss: 0.2027
Epoch: 1/1... Training loss: 0.2017
Epoch: 1/1... Training loss: 0.2016
Epoch: 1/1... Training loss: 0.2039
Epoch: 1/1... Training loss: 0.1972
Epoch: 1/1... Training loss: 0.1931
Epoch: 1/1... Training loss: 0.1986
Epoch: 1/1... Training loss: 0.2041
Epoch: 1/1... Training loss: 0.1978
Epoch: 1/1... Training loss: 0.1994
Epoch: 1/1... Training loss: 0.2021
Epoch: 1/1... Training loss: 0.1928
Epoch: 1/1... Training loss: 0.2013
Epoch: 1/1... Training loss: 0.1979
Epoch: 1/1... Training loss: 0.2033
Epoch: 1/1... Training loss: 0.1965
Epoch: 1/1... Training loss: 0.2002
Epoch: 1/1... Training loss: 0.1967
Epoch: 1/1... Training loss: 0.2043
Epoch: 1/1... Training loss: 0.1978
Epoch: 1/1... Training loss: 0.1895
Epoch: 1/1... Training loss: 0.1921
Epoch: 1/1... Training loss: 0.1936
Epoch: 1/1... Training loss: 0.1930
Epoch: 1/1... Training loss: 0.1941
Epoch: 1/1... Training loss: 0.1941
Epoch: 1/1... Training loss: 0.1929
Epoch: 1/1... Training loss: 0.1936
Epoch: 1/1... Training loss: 0.2016
Epoch: 1/1... Training loss: 0.1919
Epoch: 1/1... Training loss: 0.1932
Epoch: 1/1... Training loss: 0.1927
Epoch: 1/1... Training loss: 0.1914
Epoch: 1/1... Training loss: 0.1916
Epoch: 1/1... Training loss: 0.1894
Epoch: 1/1... Training loss: 0.1939
Epoch: 1/1... Training loss: 0.1949
Epoch: 1/1... Training loss: 0.1878
Epoch: 1/1... Training loss: 0.1902
Epoch: 1/1... Training loss: 0.1890
Epoch: 1/1... Training loss: 0.1929
Epoch: 1/1... Training loss: 0.1909
Epoch: 1/1... Training loss: 0.1929
Epoch: 1/1... Training loss: 0.1900
Epoch: 1/1... Training loss: 0.1872
Epoch: 1/1... Training loss: 0.1902
Epoch: 1/1... Training loss: 0.1870
Epoch: 1/1... Training loss: 0.1894
Epoch: 1/1... Training loss: 0.1891
Epoch: 1/1... Training loss: 0.1867
Epoch: 1/1... Training loss: 0.1860
Epoch: 1/1... Training loss: 0.1884
Epoch: 1/1... Training loss: 0.1834
Epoch: 1/1... Training loss: 0.1880
Epoch: 1/1... Training loss: 0.1893
Epoch: 1/1... Training loss: 0.1882
Epoch: 1/1... Training loss: 0.1865
Epoch: 1/1... Training loss: 0.1881
Epoch: 1/1... Training loss: 0.1833
Epoch: 1/1... Training loss: 0.1833
Epoch: 1/1... Training loss: 0.1794
Epoch: 1/1... Training loss: 0.1805
Epoch: 1/1... Training loss: 0.1812
Epoch: 1/1... Training loss: 0.1840
Epoch: 1/1... Training loss: 0.1805
Epoch: 1/1... Training loss: 0.1808
Epoch: 1/1... Training loss: 0.1790
Epoch: 1/1... Training loss: 0.1780
Epoch: 1/1... Training loss: 0.1824
Epoch: 1/1... Training loss: 0.1846
Epoch: 1/1... Training loss: 0.1834
Epoch: 1/1... Training loss: 0.1799
Epoch: 1/1... Training loss: 0.1797
Epoch: 1/1... Training loss: 0.1794
Epoch: 1/1... Training loss: 0.1826
Epoch: 1/1... Training loss: 0.1800
Epoch: 1/1... Training loss: 0.1826
Epoch: 1/1... Training loss: 0.1799
Epoch: 1/1... Training loss: 0.1709
Epoch: 1/1... Training loss: 0.1855
Epoch: 1/1... Training loss: 0.1740
Epoch: 1/1... Training loss: 0.1808
Epoch: 1/1... Training loss: 0.1779
Epoch: 1/1... Training loss: 0.1752
Epoch: 1/1... Training loss: 0.1794
Epoch: 1/1... Training loss: 0.1749
Epoch: 1/1... Training loss: 0.1838
Epoch: 1/1... Training loss: 0.1736
Epoch: 1/1... Training loss: 0.1719
Epoch: 1/1... Training loss: 0.1755
Epoch: 1/1... Training loss: 0.1758
Epoch: 1/1... Training loss: 0.1788
Epoch: 1/1... Training loss: 0.1747
Epoch: 1/1... Training loss: 0.1786
Epoch: 1/1... Training loss: 0.1758
Epoch: 1/1... Training loss: 0.1760
Epoch: 1/1... Training loss: 0.1724
Epoch: 1/1... Training loss: 0.1749
Epoch: 1/1... Training loss: 0.1716
Epoch: 1/1... Training loss: 0.1657
Epoch: 1/1... Training loss: 0.1704
Epoch: 1/1... Training loss: 0.1720
Epoch: 1/1... Training loss: 0.1770
Epoch: 1/1... Training loss: 0.1733
Epoch: 1/1... Training loss: 0.1726
Epoch: 1/1... Training loss: 0.1705
Epoch: 1/1... Training loss: 0.1727
Epoch: 1/1... Training loss: 0.1716
###Markdown
Checking out the performanceHere I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
###Code
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
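# A small optional addition (a sketch, not part of the original notebook): persist the
# trained weights with a Saver so the denoiser can be restored later without retraining.
# The checkpoint path is just an example.
saver = tf.train.Saver()
saver.save(sess, './denoising_autoencoder.ckpt')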
###Output
_____no_output_____
###Markdown
Convolutional AutoencoderSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
###Code
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
###Output
_____no_output_____
###Markdown
Network ArchitectureThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoderOkay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers *aren't*. Usually, you'll see **transposed convolution** layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, [`tf.nn.conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose). However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with [`tf.image.resize_images`](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/image/resize_images), followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.> **Exercise:** Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena *et al* claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in `tf.image.resize_images` or use [`tf.image.resize_nearest_neighbor`]( `https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor). For convolutional layers, use [`tf.layers.conv2d`](https://www.tensorflow.org/api_docs/python/tf/layers/conv2d). 
For example, you would write `conv1 = tf.layers.conv2d(inputs, 32, (5,5), padding='same', activation=tf.nn.relu)` for a layer with a depth of 32, a 5x5 kernel, stride of (1,1), padding is 'same', and a ReLU activation. Similarly, for the max-pool layers, use [`tf.layers.max_pooling2d`](https://www.tensorflow.org/api_docs/python/tf/layers/max_pooling2d).
###Code
learning_rate = 0.001
# Input and target placeholders
inputs_ =
targets_ =
### Encoder
conv1 =
# Now 28x28x16
maxpool1 =
# Now 14x14x16
conv2 =
# Now 14x14x8
maxpool2 =
# Now 7x7x8
conv3 =
# Now 7x7x8
encoded =
# Now 4x4x8
### Decoder
upsample1 =
# Now 7x7x8
conv4 =
# Now 7x7x8
upsample2 =
# Now 14x14x8
conv5 =
# Now 14x14x8
upsample3 =
# Now 28x28x8
conv6 =
# Now 28x28x16
logits =
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded =
# Pass logits through sigmoid and calculate the cross-entropy loss
loss =
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
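# The markdown above also mentions tf.image.resize_images; a single nearest-neighbor
# upsampling step written with that API looks like the sketch below. `small_layer` is
# just an illustrative placeholder standing in for a 7x7x8 activation.
small_layer = tf.placeholder(tf.float32, (None, 7, 7, 8))
upsampled = tf.image.resize_images(small_layer, size=(14, 14),
                                   method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)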
###Output
_____no_output_____
###Markdown
TrainingAs before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
###Code
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
###Output
_____no_output_____
###Markdown
DenoisingAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.> **Exercise:** Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
###Code
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 =
# Now 28x28x32
maxpool1 =
# Now 14x14x32
conv2 =
# Now 14x14x32
maxpool2 =
# Now 7x7x32
conv3 =
# Now 7x7x16
encoded =
# Now 4x4x16
### Decoder
upsample1 =
# Now 7x7x16
conv4 =
# Now 7x7x16
upsample2 =
# Now 14x14x16
conv5 =
# Now 14x14x32
upsample3 =
# Now 28x28x32
conv6 =
# Now 28x28x32
logits =
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded =
# Pass logits through sigmoid and calculate the cross-entropy loss
loss =
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
###Output
_____no_output_____
###Markdown
Checking out the performanceHere I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
###Code
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
###Output
_____no_output_____
###Markdown
Convolutional AutoencoderSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
###Code
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
###Output
_____no_output_____
###Markdown
Network ArchitectureThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoderOkay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers *aren't*. Usually, you'll see **transposed convolution** layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, [`tf.nn.conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose). However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with [`tf.image.resize_images`](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/image/resize_images), followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.> **Exercise:** Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena *et al* claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in `tf.image.resize_images` or use [`tf.image.resize_nearest_neighbor`]( `https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor). For convolutional layers, use [`tf.layers.conv2d`](https://www.tensorflow.org/api_docs/python/tf/layers/conv2d). 
For example, you would write `conv1 = tf.layers.conv2d(inputs, 32, (5,5), padding='same', activation=tf.nn.relu)` for a layer with a depth of 32, a 5x5 kernel, stride of (1,1), padding is 'same', and a ReLU activation. Similarly, for the max-pool layers, use [`tf.layers.max_pooling2d`](https://www.tensorflow.org/api_docs/python/tf/layers/max_pooling2d).
###Code
learning_rate = 0.001
image_size = mnist.train.images.shape[1]
# Input and target placeholders
inputs_ = tf.placeholder(tf.float32,[None,28,28,1])
targets_ = tf.placeholder(tf.float32,[None,28,28,1])
### Encoder
#increase depth from 1 to 16 (units=16)
conv1 = tf.layers.conv2d(inputs_,16,2,padding='same',activation=tf.nn.relu)
# Now 28x28x16
# decrease hxw by 2 (pool size =2)
maxpool1 = tf.layers.max_pooling2d(conv1,2,2)
# Now 14x14x16
# decrease depth from 16 to 8 (units=8)
conv2 = tf.layers.conv2d(maxpool1,8,2,padding='same',activation=tf.nn.relu)
# Now 14x14x8
# decrease hxw by 2 (pool_size=2)
maxpool2 = tf.layers.max_pooling2d(conv2,2,2)
# Now 7x7x8
# keep depth at 8
conv3 = tf.layers.conv2d(maxpool2,8,2,padding='same',activation=tf.nn.relu)
# Now 7x7x8
# decrease hxw from 7 to 4
encoded = tf.layers.max_pooling2d(conv3,2,2,padding='same')
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded,[7,7])
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1,8,2,padding='same',activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4,[14,14])
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2,8,2,padding='same',activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5,[28,28])
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3,16,2,padding='same',activation=tf.nn.relu)
# Now 28x28x16
# output depth is reduced to 1
logits = tf.layers.dense(conv6, units=1,activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(logits=logits,labels=targets_)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
###Output
_____no_output_____
###Markdown
TrainingAs before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
###Code
sess = tf.Session()
epochs = 8
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
###Output
_____no_output_____
###Markdown
DenoisingAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.> **Exercise:** Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
###Code
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_,32,2,padding='same',activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1,2,2)
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1,32,2,padding='same',activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2,2,2)
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2,16,2,padding='same',activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3,2,2,padding='same')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded,[7,7])
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1,16,2,padding='same',activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4,[14,14])
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2,32,2,padding='same',activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5,[28,28])
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3,32,2,padding='same',activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.dense(conv6,1,activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(logits=logits,labels=targets_)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
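# A quick shape check (added as a sketch): the decoder should reproduce the 28x28x1
# input shape; catching a mismatch here is easier than debugging a feed error later.
assert decoded.get_shape().as_list()[1:] == [28, 28, 1]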
sess = tf.Session()
epochs = 100
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
if (batch_cost < 0.18):
break
if (batch_cost < 0.18):
break
###Output
Epoch: 1/100... Training loss: 0.7083
Epoch: 1/100... Training loss: 0.6958
Epoch: 1/100... Training loss: 0.6853
Epoch: 1/100... Training loss: 0.6752
Epoch: 1/100... Training loss: 0.6648
Epoch: 1/100... Training loss: 0.6546
Epoch: 1/100... Training loss: 0.6424
Epoch: 1/100... Training loss: 0.6310
Epoch: 1/100... Training loss: 0.6176
Epoch: 1/100... Training loss: 0.6044
Epoch: 1/100... Training loss: 0.5917
Epoch: 1/100... Training loss: 0.5724
Epoch: 1/100... Training loss: 0.5619
Epoch: 1/100... Training loss: 0.5448
Epoch: 1/100... Training loss: 0.5278
Epoch: 1/100... Training loss: 0.5181
Epoch: 1/100... Training loss: 0.5003
Epoch: 1/100... Training loss: 0.4891
Epoch: 1/100... Training loss: 0.4815
Epoch: 1/100... Training loss: 0.4702
Epoch: 1/100... Training loss: 0.4689
Epoch: 1/100... Training loss: 0.4710
Epoch: 1/100... Training loss: 0.4703
Epoch: 1/100... Training loss: 0.4692
Epoch: 1/100... Training loss: 0.4897
Epoch: 1/100... Training loss: 0.4881
Epoch: 1/100... Training loss: 0.4702
Epoch: 1/100... Training loss: 0.4778
Epoch: 1/100... Training loss: 0.4847
Epoch: 1/100... Training loss: 0.4708
Epoch: 1/100... Training loss: 0.4479
Epoch: 1/100... Training loss: 0.4474
Epoch: 1/100... Training loss: 0.4528
Epoch: 1/100... Training loss: 0.4406
Epoch: 1/100... Training loss: 0.4442
Epoch: 1/100... Training loss: 0.4406
Epoch: 1/100... Training loss: 0.4419
Epoch: 1/100... Training loss: 0.4315
Epoch: 1/100... Training loss: 0.4413
Epoch: 1/100... Training loss: 0.4317
Epoch: 1/100... Training loss: 0.4213
Epoch: 1/100... Training loss: 0.4171
Epoch: 1/100... Training loss: 0.4179
Epoch: 1/100... Training loss: 0.4137
Epoch: 1/100... Training loss: 0.4055
Epoch: 1/100... Training loss: 0.4005
Epoch: 1/100... Training loss: 0.4018
Epoch: 1/100... Training loss: 0.3936
Epoch: 1/100... Training loss: 0.3980
Epoch: 1/100... Training loss: 0.3895
Epoch: 1/100... Training loss: 0.3816
Epoch: 1/100... Training loss: 0.3682
Epoch: 1/100... Training loss: 0.3693
Epoch: 1/100... Training loss: 0.3718
Epoch: 1/100... Training loss: 0.3709
Epoch: 1/100... Training loss: 0.3555
Epoch: 1/100... Training loss: 0.3576
Epoch: 1/100... Training loss: 0.3550
Epoch: 1/100... Training loss: 0.3544
Epoch: 1/100... Training loss: 0.3428
Epoch: 1/100... Training loss: 0.3427
Epoch: 1/100... Training loss: 0.3349
Epoch: 1/100... Training loss: 0.3315
Epoch: 1/100... Training loss: 0.3291
Epoch: 1/100... Training loss: 0.3262
Epoch: 1/100... Training loss: 0.3187
Epoch: 1/100... Training loss: 0.3209
Epoch: 1/100... Training loss: 0.3091
Epoch: 1/100... Training loss: 0.3037
Epoch: 1/100... Training loss: 0.2941
Epoch: 1/100... Training loss: 0.2908
Epoch: 1/100... Training loss: 0.2901
Epoch: 1/100... Training loss: 0.2909
Epoch: 1/100... Training loss: 0.2867
Epoch: 1/100... Training loss: 0.2780
Epoch: 1/100... Training loss: 0.2772
Epoch: 1/100... Training loss: 0.2717
Epoch: 1/100... Training loss: 0.2773
Epoch: 1/100... Training loss: 0.2627
Epoch: 1/100... Training loss: 0.2662
Epoch: 1/100... Training loss: 0.2558
Epoch: 1/100... Training loss: 0.2586
Epoch: 1/100... Training loss: 0.2503
Epoch: 1/100... Training loss: 0.2419
Epoch: 1/100... Training loss: 0.2599
Epoch: 1/100... Training loss: 0.2548
Epoch: 1/100... Training loss: 0.2449
Epoch: 1/100... Training loss: 0.2447
Epoch: 1/100... Training loss: 0.2466
Epoch: 1/100... Training loss: 0.2369
Epoch: 1/100... Training loss: 0.2355
Epoch: 1/100... Training loss: 0.2391
Epoch: 1/100... Training loss: 0.2340
Epoch: 1/100... Training loss: 0.2295
Epoch: 1/100... Training loss: 0.2305
Epoch: 1/100... Training loss: 0.2270
Epoch: 1/100... Training loss: 0.2330
Epoch: 1/100... Training loss: 0.2294
Epoch: 1/100... Training loss: 0.2303
Epoch: 1/100... Training loss: 0.2269
Epoch: 1/100... Training loss: 0.2223
Epoch: 1/100... Training loss: 0.2291
Epoch: 1/100... Training loss: 0.2192
Epoch: 1/100... Training loss: 0.2286
Epoch: 1/100... Training loss: 0.2228
Epoch: 1/100... Training loss: 0.2236
Epoch: 1/100... Training loss: 0.2223
Epoch: 1/100... Training loss: 0.2226
Epoch: 1/100... Training loss: 0.2211
Epoch: 1/100... Training loss: 0.2219
Epoch: 1/100... Training loss: 0.2190
Epoch: 1/100... Training loss: 0.2221
Epoch: 1/100... Training loss: 0.2176
Epoch: 1/100... Training loss: 0.2174
Epoch: 1/100... Training loss: 0.2133
Epoch: 1/100... Training loss: 0.2187
Epoch: 1/100... Training loss: 0.2129
Epoch: 1/100... Training loss: 0.2075
Epoch: 1/100... Training loss: 0.2158
Epoch: 1/100... Training loss: 0.2108
Epoch: 1/100... Training loss: 0.2122
Epoch: 1/100... Training loss: 0.2149
Epoch: 1/100... Training loss: 0.2100
Epoch: 1/100... Training loss: 0.2099
Epoch: 1/100... Training loss: 0.2110
Epoch: 1/100... Training loss: 0.2158
Epoch: 1/100... Training loss: 0.2018
Epoch: 1/100... Training loss: 0.2092
Epoch: 1/100... Training loss: 0.2059
Epoch: 1/100... Training loss: 0.2004
Epoch: 1/100... Training loss: 0.2024
Epoch: 1/100... Training loss: 0.1998
Epoch: 1/100... Training loss: 0.2006
Epoch: 1/100... Training loss: 0.2033
Epoch: 1/100... Training loss: 0.2016
Epoch: 1/100... Training loss: 0.1990
Epoch: 1/100... Training loss: 0.1959
Epoch: 1/100... Training loss: 0.2000
Epoch: 1/100... Training loss: 0.2008
Epoch: 1/100... Training loss: 0.2014
Epoch: 1/100... Training loss: 0.1970
Epoch: 1/100... Training loss: 0.1973
Epoch: 1/100... Training loss: 0.1990
Epoch: 1/100... Training loss: 0.2003
Epoch: 1/100... Training loss: 0.1964
Epoch: 1/100... Training loss: 0.1937
Epoch: 1/100... Training loss: 0.1965
Epoch: 1/100... Training loss: 0.1947
Epoch: 1/100... Training loss: 0.2028
Epoch: 1/100... Training loss: 0.1934
Epoch: 1/100... Training loss: 0.1951
Epoch: 1/100... Training loss: 0.1945
Epoch: 1/100... Training loss: 0.1977
Epoch: 1/100... Training loss: 0.1976
Epoch: 1/100... Training loss: 0.1902
Epoch: 1/100... Training loss: 0.2003
Epoch: 1/100... Training loss: 0.1911
Epoch: 1/100... Training loss: 0.1950
Epoch: 1/100... Training loss: 0.1999
Epoch: 1/100... Training loss: 0.1930
Epoch: 1/100... Training loss: 0.1945
Epoch: 1/100... Training loss: 0.1905
Epoch: 1/100... Training loss: 0.1944
Epoch: 1/100... Training loss: 0.1944
Epoch: 1/100... Training loss: 0.1869
Epoch: 1/100... Training loss: 0.1925
Epoch: 1/100... Training loss: 0.1903
Epoch: 1/100... Training loss: 0.1907
Epoch: 1/100... Training loss: 0.1893
Epoch: 1/100... Training loss: 0.1884
Epoch: 1/100... Training loss: 0.1937
Epoch: 1/100... Training loss: 0.1954
Epoch: 1/100... Training loss: 0.1882
Epoch: 1/100... Training loss: 0.1867
Epoch: 1/100... Training loss: 0.1910
Epoch: 1/100... Training loss: 0.1917
Epoch: 1/100... Training loss: 0.1941
Epoch: 1/100... Training loss: 0.1936
Epoch: 1/100... Training loss: 0.1937
Epoch: 1/100... Training loss: 0.1936
Epoch: 1/100... Training loss: 0.1932
Epoch: 1/100... Training loss: 0.1880
Epoch: 1/100... Training loss: 0.1820
Epoch: 1/100... Training loss: 0.1903
Epoch: 1/100... Training loss: 0.1861
Epoch: 1/100... Training loss: 0.1896
Epoch: 1/100... Training loss: 0.1945
Epoch: 1/100... Training loss: 0.1914
Epoch: 1/100... Training loss: 0.1947
Epoch: 1/100... Training loss: 0.1879
Epoch: 1/100... Training loss: 0.1870
Epoch: 1/100... Training loss: 0.1831
Epoch: 1/100... Training loss: 0.1863
Epoch: 1/100... Training loss: 0.1869
Epoch: 1/100... Training loss: 0.1909
Epoch: 1/100... Training loss: 0.1895
Epoch: 1/100... Training loss: 0.1904
Epoch: 1/100... Training loss: 0.1833
Epoch: 1/100... Training loss: 0.1896
Epoch: 1/100... Training loss: 0.1896
Epoch: 1/100... Training loss: 0.1896
Epoch: 1/100... Training loss: 0.1896
Epoch: 1/100... Training loss: 0.1863
Epoch: 1/100... Training loss: 0.1805
Epoch: 1/100... Training loss: 0.1838
Epoch: 1/100... Training loss: 0.1831
Epoch: 1/100... Training loss: 0.1838
Epoch: 1/100... Training loss: 0.1916
Epoch: 1/100... Training loss: 0.1805
Epoch: 1/100... Training loss: 0.1805
Epoch: 1/100... Training loss: 0.1845
Epoch: 1/100... Training loss: 0.1862
Epoch: 1/100... Training loss: 0.1845
Epoch: 1/100... Training loss: 0.1852
Epoch: 1/100... Training loss: 0.1831
Epoch: 1/100... Training loss: 0.1862
###Markdown
Checking out the performanceHere I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
###Code
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
###Output
_____no_output_____
###Markdown
Convolutional AutoencoderSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
###Code
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
###Output
_____no_output_____
###Markdown
Network ArchitectureThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoderOkay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers *aren't*. Usually, you'll see **transposed convolution** layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, [`tf.nn.conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose). However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with [`tf.image.resize_images`](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/image/resize_images), followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.> **Exercise:** Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena *et al* claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in `tf.image.resize_images` or use [`tf.image.resize_nearest_neighbor`]( `https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor). For convolutional layers, use [`tf.layers.conv2d`](https://www.tensorflow.org/api_docs/python/tf/layers/conv2d). 
For example, you would write `conv1 = tf.layers.conv2d(inputs, 32, (5,5), padding='same', activation=tf.nn.relu)` for a layer with a depth of 32, a 5x5 kernel, stride of (1,1), padding is 'same', and a ReLU activation. Similarly, for the max-pool layers, use [`tf.layers.max_pooling2d`](https://www.tensorflow.org/api_docs/python/tf/layers/max_pooling2d).
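A matching pooling call could look something like `maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')`, which halves the height and width while keeping the depth.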
###Code
learning_rate = 0.001
# Input and target placeholders
inputs_ = tf.placeholder(dtype=tf.float32, shape=(None, 28, 28, 1))
targets_ = tf.placeholder(dtype=tf.float32, shape=(None, 28, 28, 1))
### Encoder
conv1 = tf.layers.conv2d(inputs=inputs_, filters=16, kernel_size=[1, 1], padding='same')
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=[2, 2], strides=2)
# Now 14x14x16
conv2 = tf.layers.conv2d(inputs=maxpool1, filters=8, kernel_size=[1, 1], padding='same')
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=[2, 2], strides=2)
# Now 7x7x8
conv3 = tf.layers.conv2d(inputs=maxpool2, filters=8, kernel_size=[1, 1])
# Now 7x7x8
encoded = tf.layers.max_pooling2d(inputs=conv3, pool_size=[2, 2], strides=2, padding='same')
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
decoded = tf.nn.sigmoid(logits, name='decoded')
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
###Output
WARNING:tensorflow:From /home/hvlpr/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
WARNING:tensorflow:From <ipython-input-6-f0a818452cce>:9: max_pooling2d (from tensorflow.python.layers.pooling) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.max_pooling2d instead.
###Markdown
TrainingAs before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
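The `reshape((-1, 28, 28, 1))` call in the loop below just adds the single grayscale channel dimension that `tf.layers.conv2d` expects.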
###Code
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
total = 0
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
total += batch_cost
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(total))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
###Output
_____no_output_____
###Markdown
DenoisingAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.> **Exercise:** Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
###Code
learning_rate = 0.001
# Input and target placeholders
inputs_ = tf.placeholder(dtype=tf.float32, shape=(None, 28, 28, 1))
targets_ = tf.placeholder(dtype=tf.float32, shape=(None, 28, 28, 1))
### Encoder
conv1 = tf.layers.conv2d(inputs=inputs_, filters=16, kernel_size=[1, 1], padding='same')
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=[2, 2], strides=2)
# Now 14x14x16
conv2 = tf.layers.conv2d(inputs=maxpool1, filters=8, kernel_size=[1, 1], padding='same')
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=[2, 2], strides=2)
# Now 7x7x8
conv3 = tf.layers.conv2d(inputs=maxpool2, filters=8, kernel_size=[1, 1])
# Now 7x7x8
encoded = tf.layers.max_pooling2d(inputs=conv3, pool_size=[2, 2], strides=2, padding='same')
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
decoded = tf.nn.sigmoid(logits, name='decoded')
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
total = 0
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
total += batch_cost
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
###Output
Epoch: 1/100... Training loss: 0.2165
Epoch: 2/100... Training loss: 0.2140
Epoch: 3/100... Training loss: 0.2181
Epoch: 4/100... Training loss: 0.2118
Epoch: 5/100... Training loss: 0.2066
Epoch: 6/100... Training loss: 0.2141
Epoch: 7/100... Training loss: 0.2114
Epoch: 8/100... Training loss: 0.2109
Epoch: 9/100... Training loss: 0.2162
Epoch: 10/100... Training loss: 0.2139
Epoch: 11/100... Training loss: 0.2084
Epoch: 12/100... Training loss: 0.2127
Epoch: 13/100... Training loss: 0.2082
Epoch: 14/100... Training loss: 0.2151
Epoch: 15/100... Training loss: 0.2079
Epoch: 16/100... Training loss: 0.2076
Epoch: 17/100... Training loss: 0.2085
Epoch: 18/100... Training loss: 0.2054
Epoch: 19/100... Training loss: 0.2074
Epoch: 20/100... Training loss: 0.2110
Epoch: 21/100... Training loss: 0.2083
Epoch: 22/100... Training loss: 0.2064
Epoch: 23/100... Training loss: 0.2049
Epoch: 24/100... Training loss: 0.2049
Epoch: 25/100... Training loss: 0.2067
Epoch: 26/100... Training loss: 0.2080
Epoch: 27/100... Training loss: 0.2008
Epoch: 28/100... Training loss: 0.2029
Epoch: 29/100... Training loss: 0.2071
Epoch: 30/100... Training loss: 0.2020
Epoch: 31/100... Training loss: 0.2050
Epoch: 32/100... Training loss: 0.2037
Epoch: 33/100... Training loss: 0.2025
Epoch: 34/100... Training loss: 0.2074
Epoch: 35/100... Training loss: 0.2067
Epoch: 36/100... Training loss: 0.2060
Epoch: 37/100... Training loss: 0.2068
Epoch: 38/100... Training loss: 0.2013
Epoch: 39/100... Training loss: 0.2026
Epoch: 40/100... Training loss: 0.2092
Epoch: 41/100... Training loss: 0.2024
Epoch: 42/100... Training loss: 0.2058
Epoch: 43/100... Training loss: 0.1990
Epoch: 44/100... Training loss: 0.2001
Epoch: 45/100... Training loss: 0.2010
Epoch: 46/100... Training loss: 0.2057
Epoch: 47/100... Training loss: 0.2048
Epoch: 48/100... Training loss: 0.2010
Epoch: 49/100... Training loss: 0.2056
Epoch: 50/100... Training loss: 0.2033
Epoch: 51/100... Training loss: 0.2121
Epoch: 52/100... Training loss: 0.2082
Epoch: 53/100... Training loss: 0.2048
Epoch: 54/100... Training loss: 0.2012
Epoch: 55/100... Training loss: 0.2059
Epoch: 56/100... Training loss: 0.2044
Epoch: 57/100... Training loss: 0.2059
Epoch: 58/100... Training loss: 0.2022
Epoch: 59/100... Training loss: 0.1976
Epoch: 60/100... Training loss: 0.1982
Epoch: 61/100... Training loss: 0.2100
Epoch: 62/100... Training loss: 0.2059
Epoch: 63/100... Training loss: 0.2002
Epoch: 64/100... Training loss: 0.2011
Epoch: 65/100... Training loss: 0.2038
Epoch: 66/100... Training loss: 0.2022
Epoch: 67/100... Training loss: 0.2086
Epoch: 68/100... Training loss: 0.2074
Epoch: 69/100... Training loss: 0.2030
Epoch: 70/100... Training loss: 0.2045
Epoch: 71/100... Training loss: 0.1985
Epoch: 72/100... Training loss: 0.2117
Epoch: 73/100... Training loss: 0.2022
Epoch: 74/100... Training loss: 0.2053
Epoch: 75/100... Training loss: 0.2038
Epoch: 76/100... Training loss: 0.2093
Epoch: 77/100... Training loss: 0.1980
Epoch: 78/100... Training loss: 0.2038
Epoch: 79/100... Training loss: 0.1989
Epoch: 80/100... Training loss: 0.2036
Epoch: 81/100... Training loss: 0.2056
Epoch: 82/100... Training loss: 0.2047
Epoch: 83/100... Training loss: 0.2016
Epoch: 84/100... Training loss: 0.2010
Epoch: 85/100... Training loss: 0.2038
Epoch: 86/100... Training loss: 0.1990
Epoch: 87/100... Training loss: 0.1995
Epoch: 88/100... Training loss: 0.2097
Epoch: 89/100... Training loss: 0.1998
Epoch: 90/100... Training loss: 0.2052
Epoch: 91/100... Training loss: 0.2103
Epoch: 92/100... Training loss: 0.1985
Epoch: 93/100... Training loss: 0.2020
Epoch: 94/100... Training loss: 0.2020
Epoch: 95/100... Training loss: 0.2006
Epoch: 96/100... Training loss: 0.1967
Epoch: 97/100... Training loss: 0.2052
Epoch: 98/100... Training loss: 0.2066
Epoch: 99/100... Training loss: 0.1982
Epoch: 100/100... Training loss: 0.2086
###Markdown
Checking out the performanceHere I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
###Code
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
###Output
_____no_output_____
###Markdown
Convolutional AutoencoderSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
###Code
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
mnist.train.labels.shape
###Output
_____no_output_____
###Markdown
Network ArchitectureThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoderOkay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers *aren't*. Usually, you'll see **transposed convolution** layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, [`tf.nn.conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose). However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with [`tf.image.resize_images`](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/image/resize_images), followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.> **Exercise:** Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena *et al* claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in `tf.image.resize_images` or use [`tf.image.resize_nearest_neighbor`]( `https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor). For convolutional layers, use [`tf.layers.conv2d`](https://www.tensorflow.org/api_docs/python/tf/layers/conv2d). 
For example, you would write `conv1 = tf.layers.conv2d(inputs, 32, (5,5), padding='same', activation=tf.nn.relu)` for a layer with a depth of 32, a 5x5 kernel, stride of (1,1), padding is 'same', and a ReLU activation. Similarly, for the max-pool layers, use [`tf.layers.max_pooling2d`](https://www.tensorflow.org/api_docs/python/tf/layers/max_pooling2d).
###Code
learning_rate = 0.001
# Input and target placeholders
inputs_ = tf.placeholder(tf.float32,
# Input shape is (Samples, Height, Width, Channels)
shape=(None, 28, 28, 1),
name='inputs')
# Remember, target image = input image so they have the same shape
targets_ = tf.placeholder(tf.float32,
shape=(None, 28, 28, 1),
name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_,
filters=16, # We'll try to generate 16 useful filters
kernel_size=(3, 3), # Our filters are 3x3 centered at the target pixel
strides=(1, 1), # Applying our filter for each pixel in the original image
padding='same', # Adds 0's to the outside so our image is the same size on the other end
activation=tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1,
pool_size=(2, 2),
strides=(2, 2),
padding='same')
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1,
filters=8,
kernel_size=(3, 3),
padding='same',
activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2,
pool_size=(2, 2),
strides=(2, 2),
padding='same')
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2,
filters=8,
kernel_size=(3, 3),
padding='same',
activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3,
pool_size=(2, 2),
strides=(2, 2),
padding='same')
# Now 4x4x8
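# 4*4*8 = 128 values per image, about 16% of the 784 input pixels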
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded,
size=(7, 7))
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1,
filters=8,
kernel_size=(3, 3),
padding='same',
activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4,
size=(14, 14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2,
filters=8,
kernel_size=(3, 3),
padding='same',
activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5,
size=(28, 28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3,
filters=16,
kernel_size=(3, 3),
padding='same',
activation=tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(conv6,
filters=1,
kernel_size=(3, 3),
padding='same',
activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits, name='decoded')
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
###Output
_____no_output_____
###Markdown
TrainingAs before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
###Code
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
from itertools import product
fig, axes = plt.subplots(nrows=8, ncols=10, sharex=True, sharey=True, figsize=(10,10))
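# Each column shows one test digit; each row shows one of the 8 channels of its 4x4 encoded representation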
for rix, cix in product(range(8), range(10)):
image = compressed[cix]
axes[rix, cix].imshow(image[:, :, rix], cmap='Greys_r')
axes[rix, cix].get_xaxis().set_visible(False)
axes[rix, cix].get_yaxis().set_visible(False)
sess.close()
###Output
_____no_output_____
###Markdown
DenoisingAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.> **Exercise:** Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
###Code
learning_rate = 0.001
# Input and target placeholders
inputs_ = tf.placeholder(tf.float32,
# Input shape is (Samples, Height, Width, Channels)
shape=(None, 28, 28, 1),
name='inputs')
# Remember, target image = input image so they have the same shape
targets_ = tf.placeholder(tf.float32,
shape=(None, 28, 28, 1),
name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_,
filters=32, # We'll try to generate 32 useful filters
kernel_size=(3, 3), # Our filters are 3x3 centered at the target pixel
strides=(1, 1), # Applying our filter for each pixel in the original image
padding='same', # Adds 0's to the outside so our image is the same size on the other end
activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1,
pool_size=(2, 2),
strides=(2, 2),
padding='same')
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1,
filters=32,
kernel_size=(3, 3),
padding='same',
activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2,
pool_size=(2, 2),
strides=(2, 2),
padding='same')
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2,
filters=16,
kernel_size=(3, 3),
padding='same',
activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3,
pool_size=(2, 2),
strides=(2, 2),
padding='same')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded,
size=(7, 7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1,
filters=16,
kernel_size=(3, 3),
padding='same',
activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4,
size=(14, 14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2,
filters=32,
kernel_size=(3, 3),
padding='same',
activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5,
size=(28, 28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3,
filters=32,
kernel_size=(3, 3),
padding='same',
activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6,
filters=1,
kernel_size=(3, 3),
padding='same',
activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits, name='decoded')
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 20
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
###Output
Epoch: 1/20... Training loss: 0.7079
Epoch: 1/20... Training loss: 0.6781
Epoch: 1/20... Training loss: 0.6519
Epoch: 1/20... Training loss: 0.6224
Epoch: 1/20... Training loss: 0.5891
Epoch: 1/20... Training loss: 0.5534
Epoch: 1/20... Training loss: 0.5185
Epoch: 1/20... Training loss: 0.4938
Epoch: 1/20... Training loss: 0.4959
Epoch: 1/20... Training loss: 0.5260
Epoch: 1/20... Training loss: 0.5100
Epoch: 1/20... Training loss: 0.5209
Epoch: 1/20... Training loss: 0.5010
Epoch: 1/20... Training loss: 0.4873
Epoch: 1/20... Training loss: 0.4709
Epoch: 1/20... Training loss: 0.4746
Epoch: 1/20... Training loss: 0.4690
Epoch: 1/20... Training loss: 0.4654
Epoch: 1/20... Training loss: 0.4511
Epoch: 1/20... Training loss: 0.4492
Epoch: 1/20... Training loss: 0.4326
Epoch: 1/20... Training loss: 0.4362
Epoch: 1/20... Training loss: 0.4379
Epoch: 1/20... Training loss: 0.4238
Epoch: 1/20... Training loss: 0.4184
Epoch: 1/20... Training loss: 0.4113
Epoch: 1/20... Training loss: 0.3910
Epoch: 1/20... Training loss: 0.3850
Epoch: 1/20... Training loss: 0.3823
Epoch: 1/20... Training loss: 0.3750
Epoch: 1/20... Training loss: 0.3529
Epoch: 1/20... Training loss: 0.3522
Epoch: 1/20... Training loss: 0.3473
Epoch: 1/20... Training loss: 0.3302
Epoch: 1/20... Training loss: 0.3269
Epoch: 1/20... Training loss: 0.3201
Epoch: 1/20... Training loss: 0.3108
Epoch: 1/20... Training loss: 0.3028
Epoch: 1/20... Training loss: 0.2975
Epoch: 1/20... Training loss: 0.2950
Epoch: 1/20... Training loss: 0.2855
Epoch: 1/20... Training loss: 0.2872
Epoch: 1/20... Training loss: 0.2779
Epoch: 1/20... Training loss: 0.2826
Epoch: 1/20... Training loss: 0.2732
Epoch: 1/20... Training loss: 0.2769
Epoch: 1/20... Training loss: 0.2742
Epoch: 1/20... Training loss: 0.2687
Epoch: 1/20... Training loss: 0.2657
Epoch: 1/20... Training loss: 0.2688
Epoch: 1/20... Training loss: 0.2644
Epoch: 1/20... Training loss: 0.2581
Epoch: 1/20... Training loss: 0.2552
Epoch: 1/20... Training loss: 0.2536
Epoch: 1/20... Training loss: 0.2584
Epoch: 1/20... Training loss: 0.2551
Epoch: 1/20... Training loss: 0.2579
Epoch: 1/20... Training loss: 0.2513
Epoch: 1/20... Training loss: 0.2510
Epoch: 1/20... Training loss: 0.2434
Epoch: 1/20... Training loss: 0.2482
Epoch: 1/20... Training loss: 0.2487
Epoch: 1/20... Training loss: 0.2440
Epoch: 1/20... Training loss: 0.2429
Epoch: 1/20... Training loss: 0.2443
Epoch: 1/20... Training loss: 0.2440
Epoch: 1/20... Training loss: 0.2430
Epoch: 1/20... Training loss: 0.2292
Epoch: 1/20... Training loss: 0.2371
Epoch: 1/20... Training loss: 0.2541
Epoch: 1/20... Training loss: 0.2266
Epoch: 1/20... Training loss: 0.2432
Epoch: 1/20... Training loss: 0.2347
Epoch: 1/20... Training loss: 0.2420
Epoch: 1/20... Training loss: 0.2399
Epoch: 1/20... Training loss: 0.2397
Epoch: 1/20... Training loss: 0.2418
Epoch: 1/20... Training loss: 0.2357
Epoch: 1/20... Training loss: 0.2393
Epoch: 1/20... Training loss: 0.2369
Epoch: 1/20... Training loss: 0.2347
Epoch: 1/20... Training loss: 0.2332
Epoch: 1/20... Training loss: 0.2369
Epoch: 1/20... Training loss: 0.2224
Epoch: 1/20... Training loss: 0.2364
Epoch: 1/20... Training loss: 0.2336
Epoch: 1/20... Training loss: 0.2344
Epoch: 1/20... Training loss: 0.2362
Epoch: 1/20... Training loss: 0.2320
Epoch: 1/20... Training loss: 0.2318
Epoch: 1/20... Training loss: 0.2297
Epoch: 1/20... Training loss: 0.2240
Epoch: 1/20... Training loss: 0.2332
Epoch: 1/20... Training loss: 0.2262
Epoch: 1/20... Training loss: 0.2256
Epoch: 1/20... Training loss: 0.2333
Epoch: 1/20... Training loss: 0.2265
Epoch: 1/20... Training loss: 0.2236
Epoch: 1/20... Training loss: 0.2275
Epoch: 1/20... Training loss: 0.2266
Epoch: 1/20... Training loss: 0.2254
Epoch: 1/20... Training loss: 0.2254
Epoch: 1/20... Training loss: 0.2279
Epoch: 1/20... Training loss: 0.2177
Epoch: 1/20... Training loss: 0.2239
Epoch: 1/20... Training loss: 0.2245
Epoch: 1/20... Training loss: 0.2221
Epoch: 1/20... Training loss: 0.2209
Epoch: 1/20... Training loss: 0.2232
Epoch: 1/20... Training loss: 0.2206
Epoch: 1/20... Training loss: 0.2240
Epoch: 1/20... Training loss: 0.2232
Epoch: 1/20... Training loss: 0.2220
Epoch: 1/20... Training loss: 0.2193
Epoch: 1/20... Training loss: 0.2167
Epoch: 1/20... Training loss: 0.2201
Epoch: 1/20... Training loss: 0.2137
Epoch: 1/20... Training loss: 0.2197
Epoch: 1/20... Training loss: 0.2148
Epoch: 1/20... Training loss: 0.2229
Epoch: 1/20... Training loss: 0.2141
Epoch: 1/20... Training loss: 0.2169
Epoch: 1/20... Training loss: 0.2202
Epoch: 1/20... Training loss: 0.2187
Epoch: 1/20... Training loss: 0.2086
Epoch: 1/20... Training loss: 0.2125
Epoch: 1/20... Training loss: 0.2136
Epoch: 1/20... Training loss: 0.2146
Epoch: 1/20... Training loss: 0.2085
Epoch: 1/20... Training loss: 0.2055
Epoch: 1/20... Training loss: 0.2167
Epoch: 1/20... Training loss: 0.2136
Epoch: 1/20... Training loss: 0.2076
Epoch: 1/20... Training loss: 0.2099
Epoch: 1/20... Training loss: 0.2167
Epoch: 1/20... Training loss: 0.2117
Epoch: 1/20... Training loss: 0.2092
Epoch: 1/20... Training loss: 0.2094
Epoch: 1/20... Training loss: 0.2059
Epoch: 1/20... Training loss: 0.2049
Epoch: 1/20... Training loss: 0.2100
Epoch: 1/20... Training loss: 0.2077
Epoch: 1/20... Training loss: 0.2014
Epoch: 1/20... Training loss: 0.2072
Epoch: 1/20... Training loss: 0.2021
Epoch: 1/20... Training loss: 0.2091
Epoch: 1/20... Training loss: 0.2005
Epoch: 1/20... Training loss: 0.2084
Epoch: 1/20... Training loss: 0.2027
Epoch: 1/20... Training loss: 0.2030
Epoch: 1/20... Training loss: 0.1995
Epoch: 1/20... Training loss: 0.2002
Epoch: 1/20... Training loss: 0.1995
Epoch: 1/20... Training loss: 0.2051
Epoch: 1/20... Training loss: 0.2027
Epoch: 1/20... Training loss: 0.2040
Epoch: 1/20... Training loss: 0.2052
Epoch: 1/20... Training loss: 0.2078
Epoch: 1/20... Training loss: 0.2001
Epoch: 1/20... Training loss: 0.2022
Epoch: 1/20... Training loss: 0.2018
Epoch: 1/20... Training loss: 0.2000
Epoch: 1/20... Training loss: 0.2009
Epoch: 1/20... Training loss: 0.1982
Epoch: 1/20... Training loss: 0.2000
Epoch: 1/20... Training loss: 0.1961
Epoch: 1/20... Training loss: 0.1968
Epoch: 1/20... Training loss: 0.1934
Epoch: 1/20... Training loss: 0.1941
Epoch: 1/20... Training loss: 0.1988
Epoch: 1/20... Training loss: 0.1954
Epoch: 1/20... Training loss: 0.1975
Epoch: 1/20... Training loss: 0.1955
Epoch: 1/20... Training loss: 0.1948
Epoch: 1/20... Training loss: 0.1955
Epoch: 1/20... Training loss: 0.1918
Epoch: 1/20... Training loss: 0.1951
Epoch: 1/20... Training loss: 0.1903
Epoch: 1/20... Training loss: 0.1966
Epoch: 1/20... Training loss: 0.1941
Epoch: 1/20... Training loss: 0.1942
Epoch: 1/20... Training loss: 0.1887
Epoch: 1/20... Training loss: 0.1921
Epoch: 1/20... Training loss: 0.1918
Epoch: 1/20... Training loss: 0.1925
Epoch: 1/20... Training loss: 0.1870
Epoch: 1/20... Training loss: 0.1839
Epoch: 1/20... Training loss: 0.1885
Epoch: 1/20... Training loss: 0.1891
Epoch: 1/20... Training loss: 0.1903
Epoch: 1/20... Training loss: 0.1928
Epoch: 1/20... Training loss: 0.1848
Epoch: 1/20... Training loss: 0.1865
Epoch: 1/20... Training loss: 0.1905
Epoch: 1/20... Training loss: 0.1881
Epoch: 1/20... Training loss: 0.1885
Epoch: 1/20... Training loss: 0.1877
Epoch: 1/20... Training loss: 0.1890
Epoch: 1/20... Training loss: 0.1869
Epoch: 1/20... Training loss: 0.1865
Epoch: 1/20... Training loss: 0.1890
Epoch: 1/20... Training loss: 0.1848
Epoch: 1/20... Training loss: 0.1886
Epoch: 1/20... Training loss: 0.1845
Epoch: 1/20... Training loss: 0.1814
Epoch: 1/20... Training loss: 0.1869
Epoch: 1/20... Training loss: 0.1873
Epoch: 1/20... Training loss: 0.1868
Epoch: 1/20... Training loss: 0.1912
Epoch: 1/20... Training loss: 0.1847
Epoch: 1/20... Training loss: 0.1789
Epoch: 1/20... Training loss: 0.1865
Epoch: 1/20... Training loss: 0.1899
Epoch: 1/20... Training loss: 0.1806
Epoch: 1/20... Training loss: 0.1863
Epoch: 1/20... Training loss: 0.1873
Epoch: 1/20... Training loss: 0.1874
Epoch: 1/20... Training loss: 0.1864
Epoch: 1/20... Training loss: 0.1835
Epoch: 1/20... Training loss: 0.1848
Epoch: 1/20... Training loss: 0.1905
Epoch: 1/20... Training loss: 0.1833
Epoch: 1/20... Training loss: 0.1780
Epoch: 1/20... Training loss: 0.1896
Epoch: 1/20... Training loss: 0.1804
Epoch: 1/20... Training loss: 0.1849
Epoch: 1/20... Training loss: 0.1876
Epoch: 1/20... Training loss: 0.1864
Epoch: 1/20... Training loss: 0.1811
Epoch: 1/20... Training loss: 0.1833
Epoch: 1/20... Training loss: 0.1805
Epoch: 1/20... Training loss: 0.1861
Epoch: 1/20... Training loss: 0.1817
Epoch: 1/20... Training loss: 0.1837
Epoch: 1/20... Training loss: 0.1809
Epoch: 1/20... Training loss: 0.1820
Epoch: 1/20... Training loss: 0.1837
Epoch: 1/20... Training loss: 0.1755
Epoch: 1/20... Training loss: 0.1811
Epoch: 1/20... Training loss: 0.1812
Epoch: 1/20... Training loss: 0.1778
Epoch: 1/20... Training loss: 0.1831
Epoch: 1/20... Training loss: 0.1807
Epoch: 1/20... Training loss: 0.1747
Epoch: 1/20... Training loss: 0.1822
Epoch: 1/20... Training loss: 0.1862
Epoch: 1/20... Training loss: 0.1810
Epoch: 1/20... Training loss: 0.1786
Epoch: 1/20... Training loss: 0.1816
Epoch: 1/20... Training loss: 0.1794
Epoch: 1/20... Training loss: 0.1845
Epoch: 1/20... Training loss: 0.1767
Epoch: 1/20... Training loss: 0.1777
Epoch: 1/20... Training loss: 0.1790
Epoch: 1/20... Training loss: 0.1826
Epoch: 1/20... Training loss: 0.1840
Epoch: 1/20... Training loss: 0.1764
Epoch: 1/20... Training loss: 0.1768
Epoch: 1/20... Training loss: 0.1831
Epoch: 1/20... Training loss: 0.1782
Epoch: 1/20... Training loss: 0.1779
Epoch: 1/20... Training loss: 0.1767
Epoch: 1/20... Training loss: 0.1786
Epoch: 1/20... Training loss: 0.1764
Epoch: 1/20... Training loss: 0.1756
Epoch: 1/20... Training loss: 0.1748
Epoch: 1/20... Training loss: 0.1783
Epoch: 1/20... Training loss: 0.1807
Epoch: 1/20... Training loss: 0.1723
Epoch: 1/20... Training loss: 0.1773
Epoch: 1/20... Training loss: 0.1774
Epoch: 1/20... Training loss: 0.1796
Epoch: 1/20... Training loss: 0.1788
Epoch: 1/20... Training loss: 0.1789
Epoch: 1/20... Training loss: 0.1779
Epoch: 1/20... Training loss: 0.1737
Epoch: 1/20... Training loss: 0.1751
Epoch: 1/20... Training loss: 0.1747
Epoch: 1/20... Training loss: 0.1795
Epoch: 1/20... Training loss: 0.1721
Epoch: 1/20... Training loss: 0.1711
Epoch: 1/20... Training loss: 0.1732
Epoch: 1/20... Training loss: 0.1761
Epoch: 1/20... Training loss: 0.1710
Epoch: 1/20... Training loss: 0.1782
Epoch: 1/20... Training loss: 0.1790
Epoch: 1/20... Training loss: 0.1747
Epoch: 1/20... Training loss: 0.1773
Epoch: 1/20... Training loss: 0.1728
Epoch: 1/20... Training loss: 0.1712
Epoch: 1/20... Training loss: 0.1778
Epoch: 1/20... Training loss: 0.1801
Epoch: 1/20... Training loss: 0.1806
Epoch: 1/20... Training loss: 0.1699
Epoch: 1/20... Training loss: 0.1755
Epoch: 1/20... Training loss: 0.1751
Epoch: 1/20... Training loss: 0.1777
Epoch: 1/20... Training loss: 0.1698
Epoch: 1/20... Training loss: 0.1737
Epoch: 1/20... Training loss: 0.1799
Epoch: 2/20... Training loss: 0.1728
Epoch: 2/20... Training loss: 0.1754
Epoch: 2/20... Training loss: 0.1716
Epoch: 2/20... Training loss: 0.1729
Epoch: 2/20... Training loss: 0.1765
Epoch: 2/20... Training loss: 0.1731
Epoch: 2/20... Training loss: 0.1655
Epoch: 2/20... Training loss: 0.1779
Epoch: 2/20... Training loss: 0.1772
Epoch: 2/20... Training loss: 0.1705
Epoch: 2/20... Training loss: 0.1722
Epoch: 2/20... Training loss: 0.1737
Epoch: 2/20... Training loss: 0.1678
Epoch: 2/20... Training loss: 0.1752
Epoch: 2/20... Training loss: 0.1681
Epoch: 2/20... Training loss: 0.1706
Epoch: 2/20... Training loss: 0.1692
Epoch: 2/20... Training loss: 0.1784
Epoch: 2/20... Training loss: 0.1679
Epoch: 2/20... Training loss: 0.1731
Epoch: 2/20... Training loss: 0.1704
Epoch: 2/20... Training loss: 0.1758
Epoch: 2/20... Training loss: 0.1724
Epoch: 2/20... Training loss: 0.1746
Epoch: 2/20... Training loss: 0.1749
Epoch: 2/20... Training loss: 0.1699
Epoch: 2/20... Training loss: 0.1742
Epoch: 2/20... Training loss: 0.1722
Epoch: 2/20... Training loss: 0.1706
Epoch: 2/20... Training loss: 0.1738
Epoch: 2/20... Training loss: 0.1636
Epoch: 2/20... Training loss: 0.1715
Epoch: 2/20... Training loss: 0.1736
Epoch: 2/20... Training loss: 0.1720
Epoch: 2/20... Training loss: 0.1749
Epoch: 2/20... Training loss: 0.1734
Epoch: 2/20... Training loss: 0.1697
Epoch: 2/20... Training loss: 0.1714
Epoch: 2/20... Training loss: 0.1672
Epoch: 2/20... Training loss: 0.1725
Epoch: 2/20... Training loss: 0.1700
Epoch: 2/20... Training loss: 0.1695
Epoch: 2/20... Training loss: 0.1652
Epoch: 2/20... Training loss: 0.1641
Epoch: 2/20... Training loss: 0.1738
Epoch: 2/20... Training loss: 0.1709
Epoch: 2/20... Training loss: 0.1677
Epoch: 2/20... Training loss: 0.1671
Epoch: 2/20... Training loss: 0.1747
Epoch: 2/20... Training loss: 0.1735
Epoch: 2/20... Training loss: 0.1691
Epoch: 2/20... Training loss: 0.1712
Epoch: 2/20... Training loss: 0.1699
Epoch: 2/20... Training loss: 0.1706
Epoch: 2/20... Training loss: 0.1703
Epoch: 2/20... Training loss: 0.1704
Epoch: 2/20... Training loss: 0.1644
Epoch: 2/20... Training loss: 0.1718
Epoch: 2/20... Training loss: 0.1628
Epoch: 2/20... Training loss: 0.1636
Epoch: 2/20... Training loss: 0.1677
Epoch: 2/20... Training loss: 0.1674
Epoch: 2/20... Training loss: 0.1673
Epoch: 2/20... Training loss: 0.1730
Epoch: 2/20... Training loss: 0.1641
Epoch: 2/20... Training loss: 0.1704
Epoch: 2/20... Training loss: 0.1703
Epoch: 2/20... Training loss: 0.1635
Epoch: 2/20... Training loss: 0.1709
Epoch: 2/20... Training loss: 0.1683
Epoch: 2/20... Training loss: 0.1627
Epoch: 2/20... Training loss: 0.1684
Epoch: 2/20... Training loss: 0.1619
Epoch: 2/20... Training loss: 0.1654
Epoch: 2/20... Training loss: 0.1664
Epoch: 2/20... Training loss: 0.1678
Epoch: 2/20... Training loss: 0.1625
Epoch: 2/20... Training loss: 0.1696
Epoch: 2/20... Training loss: 0.1711
Epoch: 2/20... Training loss: 0.1684
Epoch: 2/20... Training loss: 0.1661
Epoch: 2/20... Training loss: 0.1645
Epoch: 2/20... Training loss: 0.1679
Epoch: 2/20... Training loss: 0.1697
Epoch: 2/20... Training loss: 0.1695
Epoch: 2/20... Training loss: 0.1663
Epoch: 2/20... Training loss: 0.1681
Epoch: 2/20... Training loss: 0.1696
Epoch: 2/20... Training loss: 0.1657
Epoch: 2/20... Training loss: 0.1636
Epoch: 2/20... Training loss: 0.1657
Epoch: 2/20... Training loss: 0.1668
Epoch: 2/20... Training loss: 0.1696
Epoch: 2/20... Training loss: 0.1658
Epoch: 2/20... Training loss: 0.1673
Epoch: 2/20... Training loss: 0.1640
Epoch: 2/20... Training loss: 0.1631
Epoch: 2/20... Training loss: 0.1667
Epoch: 2/20... Training loss: 0.1656
Epoch: 2/20... Training loss: 0.1675
Epoch: 2/20... Training loss: 0.1667
Epoch: 2/20... Training loss: 0.1658
Epoch: 2/20... Training loss: 0.1662
Epoch: 2/20... Training loss: 0.1643
Epoch: 2/20... Training loss: 0.1677
Epoch: 2/20... Training loss: 0.1649
Epoch: 2/20... Training loss: 0.1632
Epoch: 2/20... Training loss: 0.1646
Epoch: 2/20... Training loss: 0.1677
Epoch: 2/20... Training loss: 0.1661
Epoch: 2/20... Training loss: 0.1679
Epoch: 2/20... Training loss: 0.1711
Epoch: 2/20... Training loss: 0.1633
Epoch: 2/20... Training loss: 0.1583
Epoch: 2/20... Training loss: 0.1622
Epoch: 2/20... Training loss: 0.1594
Epoch: 2/20... Training loss: 0.1667
Epoch: 2/20... Training loss: 0.1644
Epoch: 2/20... Training loss: 0.1672
Epoch: 2/20... Training loss: 0.1592
Epoch: 2/20... Training loss: 0.1665
Epoch: 2/20... Training loss: 0.1609
Epoch: 2/20... Training loss: 0.1658
Epoch: 2/20... Training loss: 0.1641
Epoch: 2/20... Training loss: 0.1632
Epoch: 2/20... Training loss: 0.1646
Epoch: 2/20... Training loss: 0.1667
Epoch: 2/20... Training loss: 0.1636
Epoch: 2/20... Training loss: 0.1691
Epoch: 2/20... Training loss: 0.1642
Epoch: 2/20... Training loss: 0.1602
Epoch: 2/20... Training loss: 0.1647
Epoch: 2/20... Training loss: 0.1631
Epoch: 2/20... Training loss: 0.1588
Epoch: 2/20... Training loss: 0.1689
Epoch: 2/20... Training loss: 0.1654
Epoch: 2/20... Training loss: 0.1613
Epoch: 2/20... Training loss: 0.1628
Epoch: 2/20... Training loss: 0.1573
Epoch: 2/20... Training loss: 0.1608
Epoch: 2/20... Training loss: 0.1623
Epoch: 2/20... Training loss: 0.1616
Epoch: 2/20... Training loss: 0.1641
Epoch: 2/20... Training loss: 0.1663
Epoch: 2/20... Training loss: 0.1562
Epoch: 2/20... Training loss: 0.1622
Epoch: 2/20... Training loss: 0.1655
Epoch: 2/20... Training loss: 0.1585
Epoch: 2/20... Training loss: 0.1642
Epoch: 2/20... Training loss: 0.1668
Epoch: 2/20... Training loss: 0.1670
Epoch: 2/20... Training loss: 0.1661
Epoch: 2/20... Training loss: 0.1626
Epoch: 2/20... Training loss: 0.1639
Epoch: 2/20... Training loss: 0.1648
Epoch: 2/20... Training loss: 0.1604
Epoch: 2/20... Training loss: 0.1636
Epoch: 2/20... Training loss: 0.1644
Epoch: 2/20... Training loss: 0.1622
Epoch: 2/20... Training loss: 0.1613
Epoch: 2/20... Training loss: 0.1627
Epoch: 2/20... Training loss: 0.1624
Epoch: 2/20... Training loss: 0.1663
Epoch: 2/20... Training loss: 0.1583
Epoch: 2/20... Training loss: 0.1619
Epoch: 2/20... Training loss: 0.1578
Epoch: 2/20... Training loss: 0.1642
Epoch: 2/20... Training loss: 0.1629
Epoch: 2/20... Training loss: 0.1691
Epoch: 2/20... Training loss: 0.1632
Epoch: 2/20... Training loss: 0.1569
Epoch: 2/20... Training loss: 0.1614
Epoch: 2/20... Training loss: 0.1605
Epoch: 2/20... Training loss: 0.1615
Epoch: 2/20... Training loss: 0.1608
Epoch: 2/20... Training loss: 0.1607
Epoch: 2/20... Training loss: 0.1639
Epoch: 2/20... Training loss: 0.1616
Epoch: 2/20... Training loss: 0.1557
Epoch: 2/20... Training loss: 0.1588
Epoch: 2/20... Training loss: 0.1576
Epoch: 2/20... Training loss: 0.1620
Epoch: 2/20... Training loss: 0.1656
Epoch: 2/20... Training loss: 0.1654
Epoch: 2/20... Training loss: 0.1609
Epoch: 2/20... Training loss: 0.1657
Epoch: 2/20... Training loss: 0.1586
Epoch: 2/20... Training loss: 0.1543
Epoch: 2/20... Training loss: 0.1579
Epoch: 2/20... Training loss: 0.1613
Epoch: 2/20... Training loss: 0.1611
Epoch: 2/20... Training loss: 0.1598
Epoch: 2/20... Training loss: 0.1588
Epoch: 2/20... Training loss: 0.1591
Epoch: 2/20... Training loss: 0.1576
Epoch: 2/20... Training loss: 0.1583
Epoch: 2/20... Training loss: 0.1598
Epoch: 2/20... Training loss: 0.1603
Epoch: 2/20... Training loss: 0.1612
Epoch: 2/20... Training loss: 0.1546
Epoch: 2/20... Training loss: 0.1578
Epoch: 2/20... Training loss: 0.1648
Epoch: 2/20... Training loss: 0.1605
Epoch: 2/20... Training loss: 0.1585
Epoch: 2/20... Training loss: 0.1652
Epoch: 2/20... Training loss: 0.1587
Epoch: 2/20... Training loss: 0.1564
Epoch: 2/20... Training loss: 0.1607
Epoch: 2/20... Training loss: 0.1623
Epoch: 2/20... Training loss: 0.1566
Epoch: 2/20... Training loss: 0.1656
Epoch: 2/20... Training loss: 0.1635
Epoch: 2/20... Training loss: 0.1561
Epoch: 2/20... Training loss: 0.1589
Epoch: 2/20... Training loss: 0.1637
Epoch: 2/20... Training loss: 0.1599
Epoch: 2/20... Training loss: 0.1585
Epoch: 2/20... Training loss: 0.1622
Epoch: 2/20... Training loss: 0.1622
Epoch: 2/20... Training loss: 0.1559
Epoch: 2/20... Training loss: 0.1561
Epoch: 2/20... Training loss: 0.1608
Epoch: 2/20... Training loss: 0.1544
Epoch: 2/20... Training loss: 0.1570
Epoch: 2/20... Training loss: 0.1616
Epoch: 2/20... Training loss: 0.1620
Epoch: 2/20... Training loss: 0.1597
Epoch: 2/20... Training loss: 0.1607
Epoch: 2/20... Training loss: 0.1618
Epoch: 2/20... Training loss: 0.1532
Epoch: 2/20... Training loss: 0.1588
Epoch: 2/20... Training loss: 0.1566
Epoch: 2/20... Training loss: 0.1600
Epoch: 2/20... Training loss: 0.1525
Epoch: 2/20... Training loss: 0.1564
Epoch: 2/20... Training loss: 0.1564
Epoch: 2/20... Training loss: 0.1588
Epoch: 2/20... Training loss: 0.1598
Epoch: 2/20... Training loss: 0.1546
Epoch: 2/20... Training loss: 0.1556
Epoch: 2/20... Training loss: 0.1550
Epoch: 2/20... Training loss: 0.1522
Epoch: 2/20... Training loss: 0.1574
Epoch: 2/20... Training loss: 0.1604
Epoch: 2/20... Training loss: 0.1595
Epoch: 2/20... Training loss: 0.1549
Epoch: 2/20... Training loss: 0.1569
Epoch: 2/20... Training loss: 0.1579
Epoch: 2/20... Training loss: 0.1580
Epoch: 2/20... Training loss: 0.1531
Epoch: 2/20... Training loss: 0.1547
Epoch: 2/20... Training loss: 0.1541
Epoch: 2/20... Training loss: 0.1536
Epoch: 2/20... Training loss: 0.1565
Epoch: 2/20... Training loss: 0.1530
Epoch: 2/20... Training loss: 0.1606
Epoch: 2/20... Training loss: 0.1575
Epoch: 2/20... Training loss: 0.1602
Epoch: 2/20... Training loss: 0.1561
Epoch: 2/20... Training loss: 0.1566
Epoch: 2/20... Training loss: 0.1546
Epoch: 2/20... Training loss: 0.1560
Epoch: 2/20... Training loss: 0.1474
Epoch: 2/20... Training loss: 0.1576
Epoch: 2/20... Training loss: 0.1543
Epoch: 2/20... Training loss: 0.1543
Epoch: 2/20... Training loss: 0.1535
Epoch: 2/20... Training loss: 0.1559
Epoch: 2/20... Training loss: 0.1558
Epoch: 2/20... Training loss: 0.1564
Epoch: 2/20... Training loss: 0.1573
Epoch: 2/20... Training loss: 0.1516
Epoch: 2/20... Training loss: 0.1514
Epoch: 2/20... Training loss: 0.1518
Epoch: 2/20... Training loss: 0.1550
Epoch: 2/20... Training loss: 0.1547
Epoch: 2/20... Training loss: 0.1591
Epoch: 2/20... Training loss: 0.1546
Epoch: 2/20... Training loss: 0.1498
Epoch: 2/20... Training loss: 0.1600
Epoch: 2/20... Training loss: 0.1574
Epoch: 2/20... Training loss: 0.1579
Epoch: 2/20... Training loss: 0.1572
Epoch: 2/20... Training loss: 0.1597
Epoch: 2/20... Training loss: 0.1553
Epoch: 2/20... Training loss: 0.1573
Epoch: 2/20... Training loss: 0.1546
Epoch: 2/20... Training loss: 0.1529
Epoch: 2/20... Training loss: 0.1531
Epoch: 2/20... Training loss: 0.1569
Epoch: 2/20... Training loss: 0.1534
Epoch: 2/20... Training loss: 0.1565
Epoch: 2/20... Training loss: 0.1530
Epoch: 2/20... Training loss: 0.1515
Epoch: 2/20... Training loss: 0.1548
Epoch: 2/20... Training loss: 0.1540
Epoch: 2/20... Training loss: 0.1540
Epoch: 2/20... Training loss: 0.1535
Epoch: 2/20... Training loss: 0.1507
Epoch: 2/20... Training loss: 0.1509
Epoch: 3/20... Training loss: 0.1522
Epoch: 3/20... Training loss: 0.1531
Epoch: 3/20... Training loss: 0.1535
Epoch: 3/20... Training loss: 0.1548
Epoch: 3/20... Training loss: 0.1561
Epoch: 3/20... Training loss: 0.1498
Epoch: 3/20... Training loss: 0.1488
Epoch: 3/20... Training loss: 0.1515
Epoch: 3/20... Training loss: 0.1524
Epoch: 3/20... Training loss: 0.1517
Epoch: 3/20... Training loss: 0.1543
Epoch: 3/20... Training loss: 0.1502
Epoch: 3/20... Training loss: 0.1534
Epoch: 3/20... Training loss: 0.1518
Epoch: 3/20... Training loss: 0.1513
Epoch: 3/20... Training loss: 0.1515
Epoch: 3/20... Training loss: 0.1485
Epoch: 3/20... Training loss: 0.1525
Epoch: 3/20... Training loss: 0.1553
Epoch: 3/20... Training loss: 0.1528
Epoch: 3/20... Training loss: 0.1480
Epoch: 3/20... Training loss: 0.1508
Epoch: 3/20... Training loss: 0.1537
Epoch: 3/20... Training loss: 0.1585
Epoch: 3/20... Training loss: 0.1547
Epoch: 3/20... Training loss: 0.1543
Epoch: 3/20... Training loss: 0.1537
Epoch: 3/20... Training loss: 0.1494
Epoch: 3/20... Training loss: 0.1510
Epoch: 3/20... Training loss: 0.1451
Epoch: 3/20... Training loss: 0.1529
Epoch: 3/20... Training loss: 0.1537
Epoch: 3/20... Training loss: 0.1528
Epoch: 3/20... Training loss: 0.1500
Epoch: 3/20... Training loss: 0.1517
Epoch: 3/20... Training loss: 0.1512
Epoch: 3/20... Training loss: 0.1535
Epoch: 3/20... Training loss: 0.1514
Epoch: 3/20... Training loss: 0.1535
Epoch: 3/20... Training loss: 0.1560
Epoch: 3/20... Training loss: 0.1505
Epoch: 3/20... Training loss: 0.1565
Epoch: 3/20... Training loss: 0.1472
Epoch: 3/20... Training loss: 0.1555
Epoch: 3/20... Training loss: 0.1465
Epoch: 3/20... Training loss: 0.1575
Epoch: 3/20... Training loss: 0.1552
Epoch: 3/20... Training loss: 0.1515
Epoch: 3/20... Training loss: 0.1535
Epoch: 3/20... Training loss: 0.1566
Epoch: 3/20... Training loss: 0.1504
Epoch: 3/20... Training loss: 0.1505
Epoch: 3/20... Training loss: 0.1574
Epoch: 3/20... Training loss: 0.1535
Epoch: 3/20... Training loss: 0.1539
Epoch: 3/20... Training loss: 0.1516
Epoch: 3/20... Training loss: 0.1497
Epoch: 3/20... Training loss: 0.1509
Epoch: 3/20... Training loss: 0.1500
Epoch: 3/20... Training loss: 0.1536
Epoch: 3/20... Training loss: 0.1527
Epoch: 3/20... Training loss: 0.1520
Epoch: 3/20... Training loss: 0.1499
Epoch: 3/20... Training loss: 0.1563
Epoch: 3/20... Training loss: 0.1521
Epoch: 3/20... Training loss: 0.1562
Epoch: 3/20... Training loss: 0.1561
Epoch: 3/20... Training loss: 0.1531
Epoch: 3/20... Training loss: 0.1510
Epoch: 3/20... Training loss: 0.1504
Epoch: 3/20... Training loss: 0.1506
Epoch: 3/20... Training loss: 0.1519
Epoch: 3/20... Training loss: 0.1520
Epoch: 3/20... Training loss: 0.1511
Epoch: 3/20... Training loss: 0.1513
Epoch: 3/20... Training loss: 0.1483
Epoch: 3/20... Training loss: 0.1465
Epoch: 3/20... Training loss: 0.1534
Epoch: 3/20... Training loss: 0.1511
Epoch: 3/20... Training loss: 0.1485
Epoch: 3/20... Training loss: 0.1475
Epoch: 3/20... Training loss: 0.1515
Epoch: 3/20... Training loss: 0.1500
Epoch: 3/20... Training loss: 0.1515
Epoch: 3/20... Training loss: 0.1487
Epoch: 3/20... Training loss: 0.1512
Epoch: 3/20... Training loss: 0.1559
Epoch: 3/20... Training loss: 0.1548
Epoch: 3/20... Training loss: 0.1536
Epoch: 3/20... Training loss: 0.1483
Epoch: 3/20... Training loss: 0.1512
Epoch: 3/20... Training loss: 0.1520
Epoch: 3/20... Training loss: 0.1530
Epoch: 3/20... Training loss: 0.1480
Epoch: 3/20... Training loss: 0.1455
Epoch: 3/20... Training loss: 0.1511
Epoch: 3/20... Training loss: 0.1486
Epoch: 3/20... Training loss: 0.1508
Epoch: 3/20... Training loss: 0.1523
Epoch: 3/20... Training loss: 0.1508
Epoch: 3/20... Training loss: 0.1519
Epoch: 3/20... Training loss: 0.1451
Epoch: 3/20... Training loss: 0.1508
Epoch: 3/20... Training loss: 0.1501
Epoch: 3/20... Training loss: 0.1514
Epoch: 3/20... Training loss: 0.1562
Epoch: 3/20... Training loss: 0.1453
Epoch: 3/20... Training loss: 0.1465
Epoch: 3/20... Training loss: 0.1503
Epoch: 3/20... Training loss: 0.1496
Epoch: 3/20... Training loss: 0.1475
Epoch: 3/20... Training loss: 0.1523
Epoch: 3/20... Training loss: 0.1483
Epoch: 3/20... Training loss: 0.1526
Epoch: 3/20... Training loss: 0.1460
Epoch: 3/20... Training loss: 0.1525
Epoch: 3/20... Training loss: 0.1511
Epoch: 3/20... Training loss: 0.1482
Epoch: 3/20... Training loss: 0.1373
Epoch: 3/20... Training loss: 0.1443
Epoch: 3/20... Training loss: 0.1447
Epoch: 3/20... Training loss: 0.1487
Epoch: 3/20... Training loss: 0.1465
Epoch: 3/20... Training loss: 0.1519
Epoch: 3/20... Training loss: 0.1509
Epoch: 3/20... Training loss: 0.1521
Epoch: 3/20... Training loss: 0.1438
Epoch: 3/20... Training loss: 0.1467
Epoch: 3/20... Training loss: 0.1514
Epoch: 3/20... Training loss: 0.1439
Epoch: 3/20... Training loss: 0.1433
Epoch: 3/20... Training loss: 0.1455
Epoch: 3/20... Training loss: 0.1483
Epoch: 3/20... Training loss: 0.1496
Epoch: 3/20... Training loss: 0.1519
Epoch: 3/20... Training loss: 0.1441
Epoch: 3/20... Training loss: 0.1485
Epoch: 3/20... Training loss: 0.1502
Epoch: 3/20... Training loss: 0.1493
Epoch: 3/20... Training loss: 0.1498
Epoch: 3/20... Training loss: 0.1422
Epoch: 3/20... Training loss: 0.1441
Epoch: 3/20... Training loss: 0.1415
Epoch: 3/20... Training loss: 0.1438
Epoch: 3/20... Training loss: 0.1463
Epoch: 3/20... Training loss: 0.1471
Epoch: 3/20... Training loss: 0.1443
Epoch: 3/20... Training loss: 0.1454
Epoch: 3/20... Training loss: 0.1479
Epoch: 3/20... Training loss: 0.1464
Epoch: 3/20... Training loss: 0.1424
Epoch: 3/20... Training loss: 0.1476
Epoch: 3/20... Training loss: 0.1541
Epoch: 3/20... Training loss: 0.1485
Epoch: 3/20... Training loss: 0.1482
Epoch: 3/20... Training loss: 0.1484
Epoch: 3/20... Training loss: 0.1463
Epoch: 3/20... Training loss: 0.1441
Epoch: 3/20... Training loss: 0.1507
Epoch: 3/20... Training loss: 0.1489
Epoch: 3/20... Training loss: 0.1442
Epoch: 3/20... Training loss: 0.1452
Epoch: 3/20... Training loss: 0.1476
Epoch: 3/20... Training loss: 0.1448
Epoch: 3/20... Training loss: 0.1453
Epoch: 3/20... Training loss: 0.1438
Epoch: 3/20... Training loss: 0.1482
Epoch: 3/20... Training loss: 0.1465
Epoch: 3/20... Training loss: 0.1466
Epoch: 3/20... Training loss: 0.1464
Epoch: 3/20... Training loss: 0.1497
Epoch: 3/20... Training loss: 0.1488
Epoch: 3/20... Training loss: 0.1490
Epoch: 3/20... Training loss: 0.1479
Epoch: 3/20... Training loss: 0.1481
Epoch: 3/20... Training loss: 0.1452
Epoch: 3/20... Training loss: 0.1488
Epoch: 3/20... Training loss: 0.1472
Epoch: 3/20... Training loss: 0.1424
Epoch: 3/20... Training loss: 0.1398
Epoch: 3/20... Training loss: 0.1441
Epoch: 3/20... Training loss: 0.1522
Epoch: 3/20... Training loss: 0.1487
Epoch: 3/20... Training loss: 0.1417
Epoch: 3/20... Training loss: 0.1402
Epoch: 3/20... Training loss: 0.1428
Epoch: 3/20... Training loss: 0.1487
Epoch: 3/20... Training loss: 0.1508
Epoch: 3/20... Training loss: 0.1466
Epoch: 3/20... Training loss: 0.1442
Epoch: 3/20... Training loss: 0.1453
Epoch: 3/20... Training loss: 0.1509
Epoch: 3/20... Training loss: 0.1446
Epoch: 3/20... Training loss: 0.1495
Epoch: 3/20... Training loss: 0.1455
Epoch: 3/20... Training loss: 0.1475
Epoch: 3/20... Training loss: 0.1411
Epoch: 3/20... Training loss: 0.1443
Epoch: 3/20... Training loss: 0.1458
Epoch: 3/20... Training loss: 0.1486
Epoch: 3/20... Training loss: 0.1440
Epoch: 3/20... Training loss: 0.1412
Epoch: 3/20... Training loss: 0.1472
Epoch: 3/20... Training loss: 0.1462
Epoch: 3/20... Training loss: 0.1456
Epoch: 3/20... Training loss: 0.1458
Epoch: 3/20... Training loss: 0.1412
Epoch: 3/20... Training loss: 0.1460
Epoch: 3/20... Training loss: 0.1489
Epoch: 3/20... Training loss: 0.1461
Epoch: 3/20... Training loss: 0.1466
Epoch: 3/20... Training loss: 0.1468
Epoch: 3/20... Training loss: 0.1396
Epoch: 3/20... Training loss: 0.1468
Epoch: 3/20... Training loss: 0.1432
Epoch: 3/20... Training loss: 0.1439
Epoch: 3/20... Training loss: 0.1417
Epoch: 3/20... Training loss: 0.1475
Epoch: 3/20... Training loss: 0.1431
Epoch: 3/20... Training loss: 0.1454
Epoch: 3/20... Training loss: 0.1502
Epoch: 3/20... Training loss: 0.1469
Epoch: 3/20... Training loss: 0.1469
Epoch: 3/20... Training loss: 0.1454
Epoch: 3/20... Training loss: 0.1445
Epoch: 3/20... Training loss: 0.1437
Epoch: 3/20... Training loss: 0.1478
Epoch: 3/20... Training loss: 0.1468
Epoch: 3/20... Training loss: 0.1484
Epoch: 3/20... Training loss: 0.1460
Epoch: 3/20... Training loss: 0.1432
Epoch: 3/20... Training loss: 0.1406
Epoch: 3/20... Training loss: 0.1428
Epoch: 3/20... Training loss: 0.1391
Epoch: 3/20... Training loss: 0.1491
Epoch: 3/20... Training loss: 0.1442
Epoch: 3/20... Training loss: 0.1400
Epoch: 3/20... Training loss: 0.1452
Epoch: 3/20... Training loss: 0.1459
Epoch: 3/20... Training loss: 0.1482
Epoch: 3/20... Training loss: 0.1419
Epoch: 3/20... Training loss: 0.1444
Epoch: 3/20... Training loss: 0.1408
Epoch: 3/20... Training loss: 0.1415
Epoch: 3/20... Training loss: 0.1400
Epoch: 3/20... Training loss: 0.1461
Epoch: 3/20... Training loss: 0.1402
Epoch: 3/20... Training loss: 0.1506
Epoch: 3/20... Training loss: 0.1426
Epoch: 3/20... Training loss: 0.1411
Epoch: 3/20... Training loss: 0.1425
Epoch: 3/20... Training loss: 0.1447
Epoch: 3/20... Training loss: 0.1472
Epoch: 3/20... Training loss: 0.1444
Epoch: 3/20... Training loss: 0.1452
Epoch: 3/20... Training loss: 0.1413
Epoch: 3/20... Training loss: 0.1433
Epoch: 3/20... Training loss: 0.1450
Epoch: 3/20... Training loss: 0.1447
Epoch: 3/20... Training loss: 0.1398
Epoch: 3/20... Training loss: 0.1447
Epoch: 3/20... Training loss: 0.1440
Epoch: 3/20... Training loss: 0.1450
Epoch: 3/20... Training loss: 0.1478
Epoch: 3/20... Training loss: 0.1422
Epoch: 3/20... Training loss: 0.1412
Epoch: 3/20... Training loss: 0.1454
Epoch: 3/20... Training loss: 0.1428
Epoch: 3/20... Training loss: 0.1414
Epoch: 3/20... Training loss: 0.1436
Epoch: 3/20... Training loss: 0.1429
Epoch: 3/20... Training loss: 0.1452
Epoch: 3/20... Training loss: 0.1454
Epoch: 3/20... Training loss: 0.1458
Epoch: 3/20... Training loss: 0.1450
Epoch: 3/20... Training loss: 0.1479
Epoch: 3/20... Training loss: 0.1459
Epoch: 3/20... Training loss: 0.1390
Epoch: 3/20... Training loss: 0.1476
Epoch: 3/20... Training loss: 0.1426
Epoch: 3/20... Training loss: 0.1416
Epoch: 3/20... Training loss: 0.1457
Epoch: 3/20... Training loss: 0.1412
Epoch: 3/20... Training loss: 0.1422
Epoch: 3/20... Training loss: 0.1444
Epoch: 3/20... Training loss: 0.1443
Epoch: 3/20... Training loss: 0.1408
Epoch: 3/20... Training loss: 0.1401
Epoch: 3/20... Training loss: 0.1428
Epoch: 3/20... Training loss: 0.1393
Epoch: 3/20... Training loss: 0.1413
Epoch: 3/20... Training loss: 0.1521
Epoch: 3/20... Training loss: 0.1421
Epoch: 3/20... Training loss: 0.1439
Epoch: 3/20... Training loss: 0.1397
Epoch: 3/20... Training loss: 0.1456
Epoch: 3/20... Training loss: 0.1424
Epoch: 3/20... Training loss: 0.1455
Epoch: 3/20... Training loss: 0.1412
Epoch: 3/20... Training loss: 0.1421
Epoch: 4/20... Training loss: 0.1459
Epoch: 4/20... Training loss: 0.1404
Epoch: 4/20... Training loss: 0.1407
Epoch: 4/20... Training loss: 0.1432
Epoch: 4/20... Training loss: 0.1406
Epoch: 4/20... Training loss: 0.1413
Epoch: 4/20... Training loss: 0.1447
Epoch: 4/20... Training loss: 0.1409
Epoch: 4/20... Training loss: 0.1418
Epoch: 4/20... Training loss: 0.1394
Epoch: 4/20... Training loss: 0.1464
Epoch: 4/20... Training loss: 0.1425
Epoch: 4/20... Training loss: 0.1494
Epoch: 4/20... Training loss: 0.1481
Epoch: 4/20... Training loss: 0.1416
Epoch: 4/20... Training loss: 0.1452
Epoch: 4/20... Training loss: 0.1459
Epoch: 4/20... Training loss: 0.1410
Epoch: 4/20... Training loss: 0.1453
Epoch: 4/20... Training loss: 0.1444
Epoch: 4/20... Training loss: 0.1471
Epoch: 4/20... Training loss: 0.1417
Epoch: 4/20... Training loss: 0.1443
Epoch: 4/20... Training loss: 0.1462
Epoch: 4/20... Training loss: 0.1412
Epoch: 4/20... Training loss: 0.1438
Epoch: 4/20... Training loss: 0.1408
Epoch: 4/20... Training loss: 0.1437
Epoch: 4/20... Training loss: 0.1412
Epoch: 4/20... Training loss: 0.1412
Epoch: 4/20... Training loss: 0.1402
Epoch: 4/20... Training loss: 0.1379
Epoch: 4/20... Training loss: 0.1451
Epoch: 4/20... Training loss: 0.1428
Epoch: 4/20... Training loss: 0.1409
Epoch: 4/20... Training loss: 0.1369
Epoch: 4/20... Training loss: 0.1447
Epoch: 4/20... Training loss: 0.1397
Epoch: 4/20... Training loss: 0.1442
Epoch: 4/20... Training loss: 0.1387
Epoch: 4/20... Training loss: 0.1444
Epoch: 4/20... Training loss: 0.1433
Epoch: 4/20... Training loss: 0.1472
Epoch: 4/20... Training loss: 0.1391
Epoch: 4/20... Training loss: 0.1366
Epoch: 4/20... Training loss: 0.1398
Epoch: 4/20... Training loss: 0.1411
Epoch: 4/20... Training loss: 0.1410
Epoch: 4/20... Training loss: 0.1446
Epoch: 4/20... Training loss: 0.1412
Epoch: 4/20... Training loss: 0.1379
Epoch: 4/20... Training loss: 0.1359
Epoch: 4/20... Training loss: 0.1400
Epoch: 4/20... Training loss: 0.1432
Epoch: 4/20... Training loss: 0.1416
Epoch: 4/20... Training loss: 0.1439
Epoch: 4/20... Training loss: 0.1433
Epoch: 4/20... Training loss: 0.1386
Epoch: 4/20... Training loss: 0.1409
Epoch: 4/20... Training loss: 0.1413
Epoch: 4/20... Training loss: 0.1427
Epoch: 4/20... Training loss: 0.1375
Epoch: 4/20... Training loss: 0.1412
Epoch: 4/20... Training loss: 0.1338
Epoch: 4/20... Training loss: 0.1387
Epoch: 4/20... Training loss: 0.1423
Epoch: 4/20... Training loss: 0.1431
Epoch: 4/20... Training loss: 0.1427
Epoch: 4/20... Training loss: 0.1428
Epoch: 4/20... Training loss: 0.1370
Epoch: 4/20... Training loss: 0.1454
Epoch: 4/20... Training loss: 0.1458
Epoch: 4/20... Training loss: 0.1399
Epoch: 4/20... Training loss: 0.1411
Epoch: 4/20... Training loss: 0.1409
Epoch: 4/20... Training loss: 0.1397
Epoch: 4/20... Training loss: 0.1446
Epoch: 4/20... Training loss: 0.1400
Epoch: 4/20... Training loss: 0.1466
Epoch: 4/20... Training loss: 0.1435
Epoch: 4/20... Training loss: 0.1404
Epoch: 4/20... Training loss: 0.1367
Epoch: 4/20... Training loss: 0.1424
Epoch: 4/20... Training loss: 0.1414
Epoch: 4/20... Training loss: 0.1418
Epoch: 4/20... Training loss: 0.1393
Epoch: 4/20... Training loss: 0.1377
Epoch: 4/20... Training loss: 0.1423
Epoch: 4/20... Training loss: 0.1426
Epoch: 4/20... Training loss: 0.1432
Epoch: 4/20... Training loss: 0.1399
Epoch: 4/20... Training loss: 0.1409
Epoch: 4/20... Training loss: 0.1400
Epoch: 4/20... Training loss: 0.1356
Epoch: 4/20... Training loss: 0.1439
Epoch: 4/20... Training loss: 0.1364
Epoch: 4/20... Training loss: 0.1412
Epoch: 4/20... Training loss: 0.1374
Epoch: 4/20... Training loss: 0.1406
Epoch: 4/20... Training loss: 0.1350
Epoch: 4/20... Training loss: 0.1384
Epoch: 4/20... Training loss: 0.1418
Epoch: 4/20... Training loss: 0.1410
Epoch: 4/20... Training loss: 0.1356
Epoch: 4/20... Training loss: 0.1458
Epoch: 4/20... Training loss: 0.1427
Epoch: 4/20... Training loss: 0.1401
Epoch: 4/20... Training loss: 0.1405
Epoch: 4/20... Training loss: 0.1371
Epoch: 4/20... Training loss: 0.1388
Epoch: 4/20... Training loss: 0.1401
Epoch: 4/20... Training loss: 0.1426
Epoch: 4/20... Training loss: 0.1378
Epoch: 4/20... Training loss: 0.1462
Epoch: 4/20... Training loss: 0.1399
Epoch: 4/20... Training loss: 0.1390
Epoch: 4/20... Training loss: 0.1356
Epoch: 4/20... Training loss: 0.1386
Epoch: 4/20... Training loss: 0.1408
Epoch: 4/20... Training loss: 0.1371
Epoch: 4/20... Training loss: 0.1376
Epoch: 4/20... Training loss: 0.1416
Epoch: 4/20... Training loss: 0.1397
Epoch: 4/20... Training loss: 0.1426
Epoch: 4/20... Training loss: 0.1452
Epoch: 4/20... Training loss: 0.1391
Epoch: 4/20... Training loss: 0.1424
Epoch: 4/20... Training loss: 0.1398
Epoch: 4/20... Training loss: 0.1424
Epoch: 4/20... Training loss: 0.1422
Epoch: 4/20... Training loss: 0.1407
Epoch: 4/20... Training loss: 0.1399
Epoch: 4/20... Training loss: 0.1458
Epoch: 4/20... Training loss: 0.1431
Epoch: 4/20... Training loss: 0.1425
Epoch: 4/20... Training loss: 0.1370
Epoch: 4/20... Training loss: 0.1369
Epoch: 4/20... Training loss: 0.1397
Epoch: 4/20... Training loss: 0.1368
Epoch: 4/20... Training loss: 0.1363
Epoch: 4/20... Training loss: 0.1363
Epoch: 4/20... Training loss: 0.1428
Epoch: 4/20... Training loss: 0.1406
Epoch: 4/20... Training loss: 0.1374
Epoch: 4/20... Training loss: 0.1360
Epoch: 4/20... Training loss: 0.1391
Epoch: 4/20... Training loss: 0.1340
Epoch: 4/20... Training loss: 0.1405
Epoch: 4/20... Training loss: 0.1349
Epoch: 4/20... Training loss: 0.1444
Epoch: 4/20... Training loss: 0.1422
Epoch: 4/20... Training loss: 0.1396
Epoch: 4/20... Training loss: 0.1419
Epoch: 4/20... Training loss: 0.1407
Epoch: 4/20... Training loss: 0.1367
Epoch: 4/20... Training loss: 0.1443
Epoch: 4/20... Training loss: 0.1354
Epoch: 4/20... Training loss: 0.1413
Epoch: 4/20... Training loss: 0.1390
Epoch: 4/20... Training loss: 0.1348
Epoch: 4/20... Training loss: 0.1429
Epoch: 4/20... Training loss: 0.1370
Epoch: 4/20... Training loss: 0.1393
Epoch: 4/20... Training loss: 0.1391
Epoch: 4/20... Training loss: 0.1389
Epoch: 4/20... Training loss: 0.1368
Epoch: 4/20... Training loss: 0.1373
Epoch: 4/20... Training loss: 0.1386
Epoch: 4/20... Training loss: 0.1423
Epoch: 4/20... Training loss: 0.1431
Epoch: 4/20... Training loss: 0.1430
Epoch: 4/20... Training loss: 0.1371
Epoch: 4/20... Training loss: 0.1399
Epoch: 4/20... Training loss: 0.1336
Epoch: 4/20... Training loss: 0.1399
Epoch: 4/20... Training loss: 0.1399
Epoch: 4/20... Training loss: 0.1358
Epoch: 4/20... Training loss: 0.1394
Epoch: 4/20... Training loss: 0.1361
Epoch: 4/20... Training loss: 0.1359
Epoch: 4/20... Training loss: 0.1451
Epoch: 4/20... Training loss: 0.1367
Epoch: 4/20... Training loss: 0.1390
Epoch: 4/20... Training loss: 0.1364
Epoch: 4/20... Training loss: 0.1388
Epoch: 4/20... Training loss: 0.1438
Epoch: 4/20... Training loss: 0.1373
Epoch: 4/20... Training loss: 0.1393
Epoch: 4/20... Training loss: 0.1364
Epoch: 4/20... Training loss: 0.1426
Epoch: 4/20... Training loss: 0.1399
Epoch: 4/20... Training loss: 0.1334
Epoch: 4/20... Training loss: 0.1334
Epoch: 4/20... Training loss: 0.1397
Epoch: 4/20... Training loss: 0.1414
Epoch: 4/20... Training loss: 0.1419
Epoch: 4/20... Training loss: 0.1373
Epoch: 4/20... Training loss: 0.1325
Epoch: 4/20... Training loss: 0.1421
Epoch: 4/20... Training loss: 0.1354
Epoch: 4/20... Training loss: 0.1372
Epoch: 4/20... Training loss: 0.1366
Epoch: 4/20... Training loss: 0.1411
Epoch: 4/20... Training loss: 0.1387
Epoch: 4/20... Training loss: 0.1324
Epoch: 4/20... Training loss: 0.1421
Epoch: 4/20... Training loss: 0.1393
Epoch: 4/20... Training loss: 0.1355
Epoch: 4/20... Training loss: 0.1342
Epoch: 4/20... Training loss: 0.1400
Epoch: 4/20... Training loss: 0.1342
Epoch: 4/20... Training loss: 0.1393
Epoch: 4/20... Training loss: 0.1369
Epoch: 4/20... Training loss: 0.1400
Epoch: 4/20... Training loss: 0.1339
Epoch: 4/20... Training loss: 0.1397
Epoch: 4/20... Training loss: 0.1419
Epoch: 4/20... Training loss: 0.1423
Epoch: 4/20... Training loss: 0.1390
Epoch: 4/20... Training loss: 0.1350
Epoch: 4/20... Training loss: 0.1379
Epoch: 4/20... Training loss: 0.1375
Epoch: 4/20... Training loss: 0.1360
Epoch: 4/20... Training loss: 0.1365
Epoch: 4/20... Training loss: 0.1403
Epoch: 4/20... Training loss: 0.1388
Epoch: 4/20... Training loss: 0.1347
Epoch: 4/20... Training loss: 0.1402
Epoch: 4/20... Training loss: 0.1381
Epoch: 4/20... Training loss: 0.1386
Epoch: 4/20... Training loss: 0.1334
Epoch: 4/20... Training loss: 0.1411
Epoch: 4/20... Training loss: 0.1371
Epoch: 4/20... Training loss: 0.1394
Epoch: 4/20... Training loss: 0.1407
Epoch: 4/20... Training loss: 0.1372
Epoch: 4/20... Training loss: 0.1367
Epoch: 4/20... Training loss: 0.1398
Epoch: 4/20... Training loss: 0.1356
Epoch: 4/20... Training loss: 0.1409
Epoch: 4/20... Training loss: 0.1403
Epoch: 4/20... Training loss: 0.1437
Epoch: 4/20... Training loss: 0.1340
Epoch: 4/20... Training loss: 0.1438
Epoch: 4/20... Training loss: 0.1342
Epoch: 4/20... Training loss: 0.1385
Epoch: 4/20... Training loss: 0.1392
Epoch: 4/20... Training loss: 0.1394
Epoch: 4/20... Training loss: 0.1375
Epoch: 4/20... Training loss: 0.1336
Epoch: 4/20... Training loss: 0.1344
Epoch: 4/20... Training loss: 0.1401
Epoch: 4/20... Training loss: 0.1352
Epoch: 4/20... Training loss: 0.1374
Epoch: 4/20... Training loss: 0.1399
Epoch: 4/20... Training loss: 0.1367
Epoch: 4/20... Training loss: 0.1372
Epoch: 4/20... Training loss: 0.1417
Epoch: 4/20... Training loss: 0.1378
Epoch: 4/20... Training loss: 0.1360
Epoch: 4/20... Training loss: 0.1393
Epoch: 4/20... Training loss: 0.1359
Epoch: 4/20... Training loss: 0.1341
Epoch: 4/20... Training loss: 0.1367
Epoch: 4/20... Training loss: 0.1391
Epoch: 4/20... Training loss: 0.1360
Epoch: 4/20... Training loss: 0.1369
Epoch: 4/20... Training loss: 0.1350
Epoch: 4/20... Training loss: 0.1349
Epoch: 4/20... Training loss: 0.1342
Epoch: 4/20... Training loss: 0.1390
Epoch: 4/20... Training loss: 0.1357
Epoch: 4/20... Training loss: 0.1346
Epoch: 4/20... Training loss: 0.1375
Epoch: 4/20... Training loss: 0.1366
Epoch: 4/20... Training loss: 0.1344
Epoch: 4/20... Training loss: 0.1371
Epoch: 4/20... Training loss: 0.1383
Epoch: 4/20... Training loss: 0.1335
Epoch: 4/20... Training loss: 0.1392
Epoch: 4/20... Training loss: 0.1362
Epoch: 4/20... Training loss: 0.1403
Epoch: 4/20... Training loss: 0.1362
Epoch: 4/20... Training loss: 0.1379
Epoch: 4/20... Training loss: 0.1390
Epoch: 4/20... Training loss: 0.1374
Epoch: 4/20... Training loss: 0.1386
Epoch: 4/20... Training loss: 0.1376
Epoch: 4/20... Training loss: 0.1333
Epoch: 4/20... Training loss: 0.1393
Epoch: 4/20... Training loss: 0.1390
Epoch: 4/20... Training loss: 0.1362
Epoch: 4/20... Training loss: 0.1353
Epoch: 4/20... Training loss: 0.1363
Epoch: 4/20... Training loss: 0.1347
Epoch: 4/20... Training loss: 0.1378
Epoch: 4/20... Training loss: 0.1323
Epoch: 4/20... Training loss: 0.1417
Epoch: 4/20... Training loss: 0.1353
Epoch: 4/20... Training loss: 0.1368
Epoch: 5/20... Training loss: 0.1364
Epoch: 5/20... Training loss: 0.1365
Epoch: 5/20... Training loss: 0.1383
Epoch: 5/20... Training loss: 0.1375
Epoch: 5/20... Training loss: 0.1338
Epoch: 5/20... Training loss: 0.1353
Epoch: 5/20... Training loss: 0.1329
Epoch: 5/20... Training loss: 0.1383
Epoch: 5/20... Training loss: 0.1343
Epoch: 5/20... Training loss: 0.1365
Epoch: 5/20... Training loss: 0.1359
Epoch: 5/20... Training loss: 0.1385
Epoch: 5/20... Training loss: 0.1318
Epoch: 5/20... Training loss: 0.1368
Epoch: 5/20... Training loss: 0.1422
Epoch: 5/20... Training loss: 0.1373
Epoch: 5/20... Training loss: 0.1373
Epoch: 5/20... Training loss: 0.1399
Epoch: 5/20... Training loss: 0.1345
Epoch: 5/20... Training loss: 0.1348
Epoch: 5/20... Training loss: 0.1375
Epoch: 5/20... Training loss: 0.1326
Epoch: 5/20... Training loss: 0.1367
Epoch: 5/20... Training loss: 0.1318
Epoch: 5/20... Training loss: 0.1334
Epoch: 5/20... Training loss: 0.1331
Epoch: 5/20... Training loss: 0.1323
Epoch: 5/20... Training loss: 0.1378
Epoch: 5/20... Training loss: 0.1358
Epoch: 5/20... Training loss: 0.1419
Epoch: 5/20... Training loss: 0.1365
Epoch: 5/20... Training loss: 0.1311
Epoch: 5/20... Training loss: 0.1381
Epoch: 5/20... Training loss: 0.1313
Epoch: 5/20... Training loss: 0.1367
Epoch: 5/20... Training loss: 0.1339
Epoch: 5/20... Training loss: 0.1361
Epoch: 5/20... Training loss: 0.1345
Epoch: 5/20... Training loss: 0.1333
Epoch: 5/20... Training loss: 0.1412
Epoch: 5/20... Training loss: 0.1337
Epoch: 5/20... Training loss: 0.1354
Epoch: 5/20... Training loss: 0.1402
Epoch: 5/20... Training loss: 0.1408
Epoch: 5/20... Training loss: 0.1367
Epoch: 5/20... Training loss: 0.1386
Epoch: 5/20... Training loss: 0.1361
Epoch: 5/20... Training loss: 0.1380
Epoch: 5/20... Training loss: 0.1302
Epoch: 5/20... Training loss: 0.1354
Epoch: 5/20... Training loss: 0.1347
Epoch: 5/20... Training loss: 0.1365
Epoch: 5/20... Training loss: 0.1407
Epoch: 5/20... Training loss: 0.1356
Epoch: 5/20... Training loss: 0.1396
Epoch: 5/20... Training loss: 0.1376
Epoch: 5/20... Training loss: 0.1362
Epoch: 5/20... Training loss: 0.1332
Epoch: 5/20... Training loss: 0.1336
Epoch: 5/20... Training loss: 0.1372
Epoch: 5/20... Training loss: 0.1331
Epoch: 5/20... Training loss: 0.1387
Epoch: 5/20... Training loss: 0.1418
Epoch: 5/20... Training loss: 0.1392
Epoch: 5/20... Training loss: 0.1401
Epoch: 5/20... Training loss: 0.1367
Epoch: 5/20... Training loss: 0.1345
Epoch: 5/20... Training loss: 0.1306
Epoch: 5/20... Training loss: 0.1389
Epoch: 5/20... Training loss: 0.1338
Epoch: 5/20... Training loss: 0.1299
Epoch: 5/20... Training loss: 0.1369
Epoch: 5/20... Training loss: 0.1335
Epoch: 5/20... Training loss: 0.1386
Epoch: 5/20... Training loss: 0.1360
Epoch: 5/20... Training loss: 0.1371
Epoch: 5/20... Training loss: 0.1350
Epoch: 5/20... Training loss: 0.1342
Epoch: 5/20... Training loss: 0.1327
Epoch: 5/20... Training loss: 0.1343
Epoch: 5/20... Training loss: 0.1357
Epoch: 5/20... Training loss: 0.1322
Epoch: 5/20... Training loss: 0.1358
Epoch: 5/20... Training loss: 0.1348
Epoch: 5/20... Training loss: 0.1348
Epoch: 5/20... Training loss: 0.1315
Epoch: 5/20... Training loss: 0.1354
Epoch: 5/20... Training loss: 0.1366
Epoch: 5/20... Training loss: 0.1413
Epoch: 5/20... Training loss: 0.1380
Epoch: 5/20... Training loss: 0.1339
Epoch: 5/20... Training loss: 0.1328
Epoch: 5/20... Training loss: 0.1370
Epoch: 5/20... Training loss: 0.1338
Epoch: 5/20... Training loss: 0.1373
Epoch: 5/20... Training loss: 0.1368
Epoch: 5/20... Training loss: 0.1340
Epoch: 5/20... Training loss: 0.1343
Epoch: 5/20... Training loss: 0.1347
Epoch: 5/20... Training loss: 0.1304
Epoch: 5/20... Training loss: 0.1306
Epoch: 5/20... Training loss: 0.1417
Epoch: 5/20... Training loss: 0.1314
Epoch: 5/20... Training loss: 0.1343
Epoch: 5/20... Training loss: 0.1371
Epoch: 5/20... Training loss: 0.1358
Epoch: 5/20... Training loss: 0.1366
Epoch: 5/20... Training loss: 0.1346
Epoch: 5/20... Training loss: 0.1322
Epoch: 5/20... Training loss: 0.1359
Epoch: 5/20... Training loss: 0.1402
Epoch: 5/20... Training loss: 0.1381
Epoch: 5/20... Training loss: 0.1307
Epoch: 5/20... Training loss: 0.1381
Epoch: 5/20... Training loss: 0.1365
Epoch: 5/20... Training loss: 0.1344
Epoch: 5/20... Training loss: 0.1354
Epoch: 5/20... Training loss: 0.1364
Epoch: 5/20... Training loss: 0.1337
Epoch: 5/20... Training loss: 0.1360
Epoch: 5/20... Training loss: 0.1341
Epoch: 5/20... Training loss: 0.1362
Epoch: 5/20... Training loss: 0.1325
Epoch: 5/20... Training loss: 0.1301
Epoch: 5/20... Training loss: 0.1310
Epoch: 5/20... Training loss: 0.1289
Epoch: 5/20... Training loss: 0.1350
Epoch: 5/20... Training loss: 0.1369
Epoch: 5/20... Training loss: 0.1322
Epoch: 5/20... Training loss: 0.1307
Epoch: 5/20... Training loss: 0.1397
Epoch: 5/20... Training loss: 0.1355
Epoch: 5/20... Training loss: 0.1321
Epoch: 5/20... Training loss: 0.1356
Epoch: 5/20... Training loss: 0.1379
Epoch: 5/20... Training loss: 0.1344
Epoch: 5/20... Training loss: 0.1348
Epoch: 5/20... Training loss: 0.1379
Epoch: 5/20... Training loss: 0.1353
Epoch: 5/20... Training loss: 0.1380
Epoch: 5/20... Training loss: 0.1302
Epoch: 5/20... Training loss: 0.1302
Epoch: 5/20... Training loss: 0.1380
Epoch: 5/20... Training loss: 0.1351
Epoch: 5/20... Training loss: 0.1314
Epoch: 5/20... Training loss: 0.1291
Epoch: 5/20... Training loss: 0.1349
Epoch: 5/20... Training loss: 0.1373
Epoch: 5/20... Training loss: 0.1390
Epoch: 5/20... Training loss: 0.1358
Epoch: 5/20... Training loss: 0.1345
Epoch: 5/20... Training loss: 0.1315
Epoch: 5/20... Training loss: 0.1344
Epoch: 5/20... Training loss: 0.1325
Epoch: 5/20... Training loss: 0.1316
Epoch: 5/20... Training loss: 0.1381
Epoch: 5/20... Training loss: 0.1331
Epoch: 5/20... Training loss: 0.1338
Epoch: 5/20... Training loss: 0.1353
Epoch: 5/20... Training loss: 0.1345
Epoch: 5/20... Training loss: 0.1330
Epoch: 5/20... Training loss: 0.1317
Epoch: 5/20... Training loss: 0.1370
Epoch: 5/20... Training loss: 0.1311
Epoch: 5/20... Training loss: 0.1328
Epoch: 5/20... Training loss: 0.1359
Epoch: 5/20... Training loss: 0.1383
###Markdown
Checking out the performanceHere I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
###Code
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
###Output
_____no_output_____
###Markdown
Convolutional AutoencoderSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
###Code
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
###Output
_____no_output_____
###Markdown
Network ArchitectureThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoderOkay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers *aren't*. Usually, you'll see **transposed convolution** layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, [`tf.nn.conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose). However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with [`tf.image.resize_images`](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/image/resize_images), followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.> **Exercise:** Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena *et al* claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in `tf.image.resize_images` or use [`tf.image.resize_nearest_neighbor`]( `https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor). For convolutional layers, use [`tf.layers.conv2d`](https://www.tensorflow.org/api_docs/python/tf/layers/conv2d). 
For example, you would write `conv1 = tf.layers.conv2d(inputs, 32, (5,5), padding='same', activation=tf.nn.relu)` for a layer with a depth of 32, a 5x5 kernel, a stride of (1,1), 'same' padding, and a ReLU activation. Similarly, for the max-pool layers, use [`tf.layers.max_pooling2d`](https://www.tensorflow.org/api_docs/python/tf/layers/max_pooling2d).
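If you're curious what the transposed-convolution route mentioned above looks like, here is a minimal sketch. It is not part of the solution cell below, and it assumes the 4x4x8 `encoded` layer from the network you're about to build; making the kernel size equal to the stride is what avoids the overlapping kernels that cause checkerboard artifacts.

```python
# Hypothetical single upsampling step with a transposed convolution instead of resize + conv.
# kernel_size == strides, so the kernels don't overlap and no checkerboard pattern appears.
up_alt = tf.layers.conv2d_transpose(encoded, filters=8, kernel_size=(2,2),
                                    strides=(2,2), padding='same',
                                    activation=tf.nn.relu)
# This doubles 4x4 to 8x8; the solution below uses resize_nearest_neighbor to (7,7) instead,
# so the decoder exactly mirrors the 7x7 encoder layer.
```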
###Code
learning_rate = 0.001
image_size = mnist.train.images.shape[1]
# Input and target placeholders
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits, name='output')
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
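# For reference, sigmoid_cross_entropy_with_logits computes, elementwise,
#   targets * -log(sigmoid(logits)) + (1 - targets) * -log(1 - sigmoid(logits))
# in a numerically stable way, which is why we feed it the raw logits here and keep
# `decoded` (the sigmoid output) only for viewing the reconstructed images.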
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
###Output
_____no_output_____
###Markdown
TrainingAs before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
###Code
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
###Output
_____no_output_____
###Markdown
DenoisingAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, with more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.> **Exercise:** Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
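As a quick standalone illustration of the noising step described above (toy pixel values, not the training code that follows):

```python
import numpy as np

noise_factor = 0.5
clean = np.array([0.0, 0.2, 0.9, 1.0])                        # a few example pixel values
noisy = clean + noise_factor * np.random.randn(*clean.shape)  # add Gaussian noise
noisy = np.clip(noisy, 0., 1.)                                # keep pixel values in [0, 1]
```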
###Code
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits, name='output')
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
###Output
_____no_output_____
###Markdown
Checking out the performanceHere I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
###Code
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
###Output
_____no_output_____
###Markdown
Convolutional AutoencoderSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
###Code
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
###Output
_____no_output_____
###Markdown
Network ArchitectureThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoderOkay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers *aren't*. Usually, you'll see **deconvolutional** layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the deconvolutional layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 patch in a deconvolutional layer. Deconvolution is often called "transpose convolution", which is what you'll find in the TensorFlow API, with [`tf.nn.conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose). However, deconvolutional layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels, which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with [`tf.image.resize_images`](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/image/resize_images), followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.> **Exercise:** Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used to reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena *et al* claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in `tf.image.resize_images` or use [`tf.image.resize_nearest_neighbor`](https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor).
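For reference, here is a minimal sketch of the two equivalent ways to request nearest-neighbor upsampling mentioned above, on a hypothetical 7x7 tensor `x` (the solution cell below uses the second form):

```python
up_a = tf.image.resize_images(x, size=(14, 14),
                              method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
up_b = tf.image.resize_nearest_neighbor(x, size=(14, 14))
```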
###Code
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, shape=(None, 28, 28, 1), name="inputs")
targets_ = tf.placeholder(tf.float32, shape=(None, 28, 28, 1), name="targets")
# This solution uses the tf.layers API; the alternative is the lower-level tf.nn API,
# but that is more complicated (we would have to initialize the weights and biases and
# apply the activation function ourselves). We used tf.nn.conv2d in the image classification
# project; here we can use the higher-level functions to keep the code shorter and easier to read.
### Encoder
conv1 = tf.layers.conv2d(inputs=inputs_, filters=16, kernel_size=(3,3), strides=(1,1), padding='SAME',
activation=tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=(2, 2), strides=(2, 2), padding='SAME')
# Now 14x14x16
conv2 = tf.layers.conv2d(inputs=maxpool1, filters=8, kernel_size=(3,3), strides=(1,1), padding='SAME',
activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=(2, 2), strides=(2, 2), padding='SAME')
# Now 7x7x8
conv3 = tf.layers.conv2d(inputs=maxpool2, filters=8, kernel_size=(3,3), strides=(1,1), padding='SAME',
activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(inputs=conv3, pool_size=(2, 2), strides=(2, 2), padding='SAME')
# Now 4x4x8
# A first version used resize + conv2d_transpose instead of a plain conv2d and it also works,
# probably because conv2d_transpose is not an actual deconvolution (see the documentation of
# tf.layers.conv2d_transpose). The instructions say to use either resize + conv or only
# transposed convolutions (which can produce checkerboard artifacts), so conv2d_transpose was
# substituted with conv2d in this final solution.
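# As a rough sketch (not part of this solution), a decoder conv in that earlier version
# might have looked like:
#   conv4 = tf.layers.conv2d_transpose(inputs=upsample1, filters=8, kernel_size=(3,3),
#                                      strides=(1,1), padding='SAME', activation=tf.nn.relu)
# With strides of (1,1) the spatial size is unchanged, so the actual upsampling is still
# done by the resize_nearest_neighbor calls.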
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(images=encoded, size=(7, 7))
# Now 7x7x8
conv4 = tf.layers.conv2d(inputs=upsample1, kernel_size=(3,3), filters=8, padding='SAME',
activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(images=conv4, size=(14, 14))
# Now 14x14x8
conv5 = tf.layers.conv2d(inputs=upsample2, kernel_size=(3,3), filters=8, padding='SAME',
activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(images=conv5, size=(28, 28))
# Now 28x28x8
conv6 = tf.layers.conv2d(inputs=upsample3, kernel_size=(3,3), filters=16, padding='SAME',
activation=tf.nn.relu)
# Now 28x28x16
# Final convolution with linear activation (produces the logits)
logits = tf.layers.conv2d(inputs=conv6, kernel_size=(3,3), filters=1, padding='SAME',
activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=targets_)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
###Output
_____no_output_____
###Markdown
TrainingAs before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
###Code
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
###Output
_____no_output_____
###Markdown
DenoisingAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, with more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.> **Exercise:** Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
###Code
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs=inputs_, filters=32, kernel_size=(3,3), strides=(1,1), padding='SAME',
activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=(2, 2), strides=(2, 2), padding='SAME')
# Now 14x14x32
conv2 = tf.layers.conv2d(inputs=maxpool1, filters=32, kernel_size=(3,3), strides=(1,1), padding='SAME',
activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=(2, 2), strides=(2, 2), padding='SAME')
# Now 7x7x32
conv3 = tf.layers.conv2d(inputs=maxpool2, filters=16, kernel_size=(3,3), strides=(1,1), padding='SAME',
activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(inputs=conv3, pool_size=(2, 2), strides=(2, 2), padding='SAME')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(images=encoded, size=(7, 7))
# Now 7x7x16
conv4 = tf.layers.conv2d(inputs=upsample1, kernel_size=(3,3), filters=16, padding='SAME',
activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(images=conv4, size=(14, 14))
# Now 14x14x16
conv5 = tf.layers.conv2d(inputs=upsample2, kernel_size=(3,3), filters=32, padding='SAME',
activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(images=conv5, size=(28, 28))
# Now 28x28x32
conv6 = tf.layers.conv2d(inputs=upsample3, kernel_size=(3,3), filters=32, padding='SAME',
activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(inputs=conv6, kernel_size=(3,3), filters=1, padding='SAME',
activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=targets_)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
# epochs = 100
epochs = 25  # 100 epochs takes too long to train; 25 already gives good results
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
###Output
Epoch: 1/25... Training loss: 0.6782
Epoch: 1/25... Training loss: 0.6428
Epoch: 1/25... Training loss: 0.5984
Epoch: 1/25... Training loss: 0.5503
Epoch: 1/25... Training loss: 0.5129
Epoch: 1/25... Training loss: 0.5174
Epoch: 1/25... Training loss: 0.5482
Epoch: 1/25... Training loss: 0.5102
Epoch: 1/25... Training loss: 0.4934
Epoch: 1/25... Training loss: 0.4892
Epoch: 1/25... Training loss: 0.4770
Epoch: 1/25... Training loss: 0.4746
Epoch: 1/25... Training loss: 0.4776
Epoch: 1/25... Training loss: 0.4709
Epoch: 1/25... Training loss: 0.4508
Epoch: 1/25... Training loss: 0.4413
Epoch: 1/25... Training loss: 0.4356
Epoch: 1/25... Training loss: 0.4310
Epoch: 1/25... Training loss: 0.4182
Epoch: 1/25... Training loss: 0.3769
Epoch: 1/25... Training loss: 0.3802
Epoch: 1/25... Training loss: 0.3707
Epoch: 1/25... Training loss: 0.3507
Epoch: 1/25... Training loss: 0.3436
Epoch: 1/25... Training loss: 0.3245
Epoch: 1/25... Training loss: 0.3093
Epoch: 1/25... Training loss: 0.3088
Epoch: 1/25... Training loss: 0.2888
Epoch: 1/25... Training loss: 0.2734
Epoch: 1/25... Training loss: 0.2752
Epoch: 1/25... Training loss: 0.2730
Epoch: 1/25... Training loss: 0.2734
Epoch: 1/25... Training loss: 0.2664
Epoch: 1/25... Training loss: 0.2660
Epoch: 1/25... Training loss: 0.2600
Epoch: 1/25... Training loss: 0.2551
Epoch: 1/25... Training loss: 0.2612
Epoch: 1/25... Training loss: 0.2587
Epoch: 1/25... Training loss: 0.2595
Epoch: 1/25... Training loss: 0.2474
Epoch: 1/25... Training loss: 0.2510
Epoch: 1/25... Training loss: 0.2550
Epoch: 1/25... Training loss: 0.2472
Epoch: 1/25... Training loss: 0.2426
Epoch: 1/25... Training loss: 0.2484
Epoch: 1/25... Training loss: 0.2352
Epoch: 1/25... Training loss: 0.2411
Epoch: 1/25... Training loss: 0.2331
Epoch: 1/25... Training loss: 0.2353
Epoch: 1/25... Training loss: 0.2284
Epoch: 1/25... Training loss: 0.2356
Epoch: 1/25... Training loss: 0.2283
Epoch: 1/25... Training loss: 0.2372
Epoch: 1/25... Training loss: 0.2327
Epoch: 1/25... Training loss: 0.2313
Epoch: 1/25... Training loss: 0.2315
Epoch: 1/25... Training loss: 0.2277
Epoch: 1/25... Training loss: 0.2269
Epoch: 1/25... Training loss: 0.2274
Epoch: 1/25... Training loss: 0.2248
Epoch: 1/25... Training loss: 0.2262
Epoch: 1/25... Training loss: 0.2184
Epoch: 1/25... Training loss: 0.2230
Epoch: 1/25... Training loss: 0.2250
Epoch: 1/25... Training loss: 0.2263
Epoch: 1/25... Training loss: 0.2184
Epoch: 1/25... Training loss: 0.2262
Epoch: 1/25... Training loss: 0.2190
Epoch: 1/25... Training loss: 0.2194
Epoch: 1/25... Training loss: 0.2240
Epoch: 1/25... Training loss: 0.2260
Epoch: 1/25... Training loss: 0.2180
Epoch: 1/25... Training loss: 0.2183
Epoch: 1/25... Training loss: 0.2159
Epoch: 1/25... Training loss: 0.2217
Epoch: 1/25... Training loss: 0.2142
Epoch: 1/25... Training loss: 0.2198
Epoch: 1/25... Training loss: 0.2084
Epoch: 1/25... Training loss: 0.2128
Epoch: 1/25... Training loss: 0.2157
Epoch: 1/25... Training loss: 0.2125
Epoch: 1/25... Training loss: 0.2114
Epoch: 1/25... Training loss: 0.2088
Epoch: 1/25... Training loss: 0.2181
Epoch: 1/25... Training loss: 0.2112
Epoch: 1/25... Training loss: 0.2126
Epoch: 1/25... Training loss: 0.2083
Epoch: 1/25... Training loss: 0.2087
Epoch: 1/25... Training loss: 0.2063
Epoch: 1/25... Training loss: 0.2048
Epoch: 1/25... Training loss: 0.2013
Epoch: 1/25... Training loss: 0.2003
Epoch: 1/25... Training loss: 0.2041
Epoch: 1/25... Training loss: 0.2097
Epoch: 1/25... Training loss: 0.2033
Epoch: 1/25... Training loss: 0.2042
Epoch: 1/25... Training loss: 0.2074
Epoch: 1/25... Training loss: 0.2039
Epoch: 1/25... Training loss: 0.2050
Epoch: 1/25... Training loss: 0.2033
Epoch: 1/25... Training loss: 0.1986
Epoch: 1/25... Training loss: 0.2019
Epoch: 1/25... Training loss: 0.2069
Epoch: 1/25... Training loss: 0.1951
Epoch: 1/25... Training loss: 0.1995
Epoch: 1/25... Training loss: 0.2032
Epoch: 1/25... Training loss: 0.2025
Epoch: 1/25... Training loss: 0.2018
Epoch: 1/25... Training loss: 0.1934
Epoch: 1/25... Training loss: 0.2050
Epoch: 1/25... Training loss: 0.1992
Epoch: 1/25... Training loss: 0.1956
Epoch: 1/25... Training loss: 0.1980
Epoch: 1/25... Training loss: 0.2034
Epoch: 1/25... Training loss: 0.1955
Epoch: 1/25... Training loss: 0.1978
Epoch: 1/25... Training loss: 0.1937
Epoch: 1/25... Training loss: 0.1944
Epoch: 1/25... Training loss: 0.1977
Epoch: 1/25... Training loss: 0.1950
Epoch: 1/25... Training loss: 0.1958
Epoch: 1/25... Training loss: 0.1961
Epoch: 1/25... Training loss: 0.1962
Epoch: 1/25... Training loss: 0.2005
Epoch: 1/25... Training loss: 0.1926
Epoch: 1/25... Training loss: 0.1904
Epoch: 1/25... Training loss: 0.1973
Epoch: 1/25... Training loss: 0.1974
Epoch: 1/25... Training loss: 0.1891
Epoch: 1/25... Training loss: 0.1949
Epoch: 1/25... Training loss: 0.1943
Epoch: 1/25... Training loss: 0.1819
Epoch: 1/25... Training loss: 0.1903
Epoch: 1/25... Training loss: 0.1917
Epoch: 1/25... Training loss: 0.1860
Epoch: 1/25... Training loss: 0.1909
Epoch: 1/25... Training loss: 0.1869
Epoch: 1/25... Training loss: 0.1893
Epoch: 1/25... Training loss: 0.1904
Epoch: 1/25... Training loss: 0.1824
Epoch: 1/25... Training loss: 0.1814
Epoch: 1/25... Training loss: 0.1893
Epoch: 1/25... Training loss: 0.1898
Epoch: 1/25... Training loss: 0.1865
Epoch: 1/25... Training loss: 0.1833
Epoch: 1/25... Training loss: 0.1901
Epoch: 1/25... Training loss: 0.1843
Epoch: 1/25... Training loss: 0.1904
Epoch: 1/25... Training loss: 0.1853
Epoch: 1/25... Training loss: 0.1832
Epoch: 1/25... Training loss: 0.1847
Epoch: 1/25... Training loss: 0.1886
Epoch: 1/25... Training loss: 0.1851
Epoch: 1/25... Training loss: 0.1875
Epoch: 1/25... Training loss: 0.1838
Epoch: 1/25... Training loss: 0.1867
Epoch: 1/25... Training loss: 0.1902
Epoch: 1/25... Training loss: 0.1800
Epoch: 1/25... Training loss: 0.1823
Epoch: 1/25... Training loss: 0.1864
Epoch: 1/25... Training loss: 0.1810
Epoch: 1/25... Training loss: 0.1894
Epoch: 1/25... Training loss: 0.1837
Epoch: 1/25... Training loss: 0.1852
Epoch: 1/25... Training loss: 0.1838
Epoch: 1/25... Training loss: 0.1833
Epoch: 1/25... Training loss: 0.1762
Epoch: 1/25... Training loss: 0.1799
Epoch: 1/25... Training loss: 0.1799
Epoch: 1/25... Training loss: 0.1816
Epoch: 1/25... Training loss: 0.1860
Epoch: 1/25... Training loss: 0.1859
Epoch: 1/25... Training loss: 0.1787
Epoch: 1/25... Training loss: 0.1776
Epoch: 1/25... Training loss: 0.1779
Epoch: 1/25... Training loss: 0.1802
Epoch: 1/25... Training loss: 0.1746
Epoch: 1/25... Training loss: 0.1806
Epoch: 1/25... Training loss: 0.1782
Epoch: 1/25... Training loss: 0.1774
Epoch: 1/25... Training loss: 0.1773
Epoch: 1/25... Training loss: 0.1797
Epoch: 1/25... Training loss: 0.1794
Epoch: 1/25... Training loss: 0.1763
Epoch: 1/25... Training loss: 0.1776
Epoch: 1/25... Training loss: 0.1788
Epoch: 1/25... Training loss: 0.1733
Epoch: 1/25... Training loss: 0.1755
Epoch: 1/25... Training loss: 0.1754
Epoch: 1/25... Training loss: 0.1784
Epoch: 1/25... Training loss: 0.1798
Epoch: 1/25... Training loss: 0.1774
Epoch: 1/25... Training loss: 0.1732
Epoch: 1/25... Training loss: 0.1763
Epoch: 1/25... Training loss: 0.1726
Epoch: 1/25... Training loss: 0.1712
Epoch: 1/25... Training loss: 0.1744
Epoch: 1/25... Training loss: 0.1703
Epoch: 1/25... Training loss: 0.1738
Epoch: 1/25... Training loss: 0.1744
Epoch: 1/25... Training loss: 0.1747
Epoch: 1/25... Training loss: 0.1738
Epoch: 1/25... Training loss: 0.1793
Epoch: 1/25... Training loss: 0.1801
Epoch: 1/25... Training loss: 0.1770
Epoch: 1/25... Training loss: 0.1713
Epoch: 1/25... Training loss: 0.1726
Epoch: 1/25... Training loss: 0.1743
Epoch: 1/25... Training loss: 0.1731
Epoch: 1/25... Training loss: 0.1721
Epoch: 1/25... Training loss: 0.1680
Epoch: 1/25... Training loss: 0.1704
Epoch: 1/25... Training loss: 0.1696
Epoch: 1/25... Training loss: 0.1720
Epoch: 1/25... Training loss: 0.1773
Epoch: 1/25... Training loss: 0.1787
Epoch: 1/25... Training loss: 0.1749
Epoch: 1/25... Training loss: 0.1790
Epoch: 1/25... Training loss: 0.1720
Epoch: 1/25... Training loss: 0.1738
Epoch: 1/25... Training loss: 0.1758
Epoch: 1/25... Training loss: 0.1722
Epoch: 1/25... Training loss: 0.1766
Epoch: 1/25... Training loss: 0.1746
Epoch: 1/25... Training loss: 0.1716
Epoch: 1/25... Training loss: 0.1764
Epoch: 1/25... Training loss: 0.1769
Epoch: 1/25... Training loss: 0.1696
Epoch: 1/25... Training loss: 0.1741
Epoch: 1/25... Training loss: 0.1684
Epoch: 1/25... Training loss: 0.1675
Epoch: 1/25... Training loss: 0.1709
Epoch: 1/25... Training loss: 0.1705
Epoch: 1/25... Training loss: 0.1737
Epoch: 1/25... Training loss: 0.1704
Epoch: 1/25... Training loss: 0.1692
Epoch: 1/25... Training loss: 0.1687
Epoch: 1/25... Training loss: 0.1731
Epoch: 1/25... Training loss: 0.1638
Epoch: 1/25... Training loss: 0.1736
Epoch: 1/25... Training loss: 0.1670
Epoch: 1/25... Training loss: 0.1685
Epoch: 1/25... Training loss: 0.1680
Epoch: 1/25... Training loss: 0.1631
Epoch: 1/25... Training loss: 0.1723
Epoch: 1/25... Training loss: 0.1685
Epoch: 1/25... Training loss: 0.1692
Epoch: 1/25... Training loss: 0.1652
Epoch: 1/25... Training loss: 0.1682
Epoch: 1/25... Training loss: 0.1682
Epoch: 1/25... Training loss: 0.1693
Epoch: 1/25... Training loss: 0.1648
Epoch: 1/25... Training loss: 0.1671
Epoch: 1/25... Training loss: 0.1719
Epoch: 1/25... Training loss: 0.1689
Epoch: 1/25... Training loss: 0.1686
Epoch: 1/25... Training loss: 0.1666
Epoch: 1/25... Training loss: 0.1674
Epoch: 1/25... Training loss: 0.1658
Epoch: 1/25... Training loss: 0.1653
Epoch: 1/25... Training loss: 0.1618
Epoch: 1/25... Training loss: 0.1727
Epoch: 1/25... Training loss: 0.1653
Epoch: 1/25... Training loss: 0.1708
Epoch: 1/25... Training loss: 0.1669
Epoch: 1/25... Training loss: 0.1667
Epoch: 1/25... Training loss: 0.1747
Epoch: 1/25... Training loss: 0.1671
Epoch: 1/25... Training loss: 0.1676
Epoch: 1/25... Training loss: 0.1658
Epoch: 1/25... Training loss: 0.1752
Epoch: 1/25... Training loss: 0.1648
Epoch: 1/25... Training loss: 0.1615
Epoch: 1/25... Training loss: 0.1660
Epoch: 1/25... Training loss: 0.1623
Epoch: 1/25... Training loss: 0.1636
Epoch: 1/25... Training loss: 0.1605
Epoch: 1/25... Training loss: 0.1669
Epoch: 1/25... Training loss: 0.1640
Epoch: 1/25... Training loss: 0.1654
Epoch: 1/25... Training loss: 0.1652
Epoch: 1/25... Training loss: 0.1658
Epoch: 1/25... Training loss: 0.1653
Epoch: 1/25... Training loss: 0.1622
Epoch: 1/25... Training loss: 0.1596
Epoch: 1/25... Training loss: 0.1608
Epoch: 1/25... Training loss: 0.1650
Epoch: 1/25... Training loss: 0.1573
Epoch: 1/25... Training loss: 0.1645
Epoch: 1/25... Training loss: 0.1582
Epoch: 1/25... Training loss: 0.1622
Epoch: 1/25... Training loss: 0.1630
Epoch: 1/25... Training loss: 0.1621
Epoch: 1/25... Training loss: 0.1695
Epoch: 1/25... Training loss: 0.1595
Epoch: 1/25... Training loss: 0.1619
Epoch: 1/25... Training loss: 0.1635
Epoch: 1/25... Training loss: 0.1669
Epoch: 1/25... Training loss: 0.1627
Epoch: 1/25... Training loss: 0.1614
Epoch: 2/25... Training loss: 0.1630
Epoch: 2/25... Training loss: 0.1682
Epoch: 2/25... Training loss: 0.1706
Epoch: 2/25... Training loss: 0.1633
Epoch: 2/25... Training loss: 0.1663
Epoch: 2/25... Training loss: 0.1618
Epoch: 2/25... Training loss: 0.1658
Epoch: 2/25... Training loss: 0.1604
Epoch: 2/25... Training loss: 0.1664
Epoch: 2/25... Training loss: 0.1642
Epoch: 2/25... Training loss: 0.1650
Epoch: 2/25... Training loss: 0.1603
Epoch: 2/25... Training loss: 0.1603
Epoch: 2/25... Training loss: 0.1599
Epoch: 2/25... Training loss: 0.1600
Epoch: 2/25... Training loss: 0.1602
Epoch: 2/25... Training loss: 0.1613
Epoch: 2/25... Training loss: 0.1638
Epoch: 2/25... Training loss: 0.1647
Epoch: 2/25... Training loss: 0.1657
Epoch: 2/25... Training loss: 0.1633
Epoch: 2/25... Training loss: 0.1564
Epoch: 2/25... Training loss: 0.1626
Epoch: 2/25... Training loss: 0.1666
Epoch: 2/25... Training loss: 0.1560
Epoch: 2/25... Training loss: 0.1603
Epoch: 2/25... Training loss: 0.1595
Epoch: 2/25... Training loss: 0.1601
Epoch: 2/25... Training loss: 0.1610
Epoch: 2/25... Training loss: 0.1652
Epoch: 2/25... Training loss: 0.1666
Epoch: 2/25... Training loss: 0.1586
Epoch: 2/25... Training loss: 0.1583
Epoch: 2/25... Training loss: 0.1596
Epoch: 2/25... Training loss: 0.1571
Epoch: 2/25... Training loss: 0.1570
Epoch: 2/25... Training loss: 0.1582
Epoch: 2/25... Training loss: 0.1566
Epoch: 2/25... Training loss: 0.1591
Epoch: 2/25... Training loss: 0.1560
Epoch: 2/25... Training loss: 0.1589
Epoch: 2/25... Training loss: 0.1556
Epoch: 2/25... Training loss: 0.1621
Epoch: 2/25... Training loss: 0.1672
Epoch: 2/25... Training loss: 0.1590
Epoch: 2/25... Training loss: 0.1589
Epoch: 2/25... Training loss: 0.1607
Epoch: 2/25... Training loss: 0.1576
Epoch: 2/25... Training loss: 0.1522
Epoch: 2/25... Training loss: 0.1537
Epoch: 2/25... Training loss: 0.1562
Epoch: 2/25... Training loss: 0.1610
Epoch: 2/25... Training loss: 0.1549
Epoch: 2/25... Training loss: 0.1585
Epoch: 2/25... Training loss: 0.1514
Epoch: 2/25... Training loss: 0.1561
Epoch: 2/25... Training loss: 0.1551
Epoch: 2/25... Training loss: 0.1606
Epoch: 2/25... Training loss: 0.1557
Epoch: 2/25... Training loss: 0.1619
Epoch: 2/25... Training loss: 0.1631
Epoch: 2/25... Training loss: 0.1563
Epoch: 2/25... Training loss: 0.1565
Epoch: 2/25... Training loss: 0.1603
Epoch: 2/25... Training loss: 0.1565
Epoch: 2/25... Training loss: 0.1583
Epoch: 2/25... Training loss: 0.1594
Epoch: 2/25... Training loss: 0.1560
Epoch: 2/25... Training loss: 0.1535
Epoch: 2/25... Training loss: 0.1631
Epoch: 2/25... Training loss: 0.1581
Epoch: 2/25... Training loss: 0.1612
Epoch: 2/25... Training loss: 0.1598
Epoch: 2/25... Training loss: 0.1549
Epoch: 2/25... Training loss: 0.1562
Epoch: 2/25... Training loss: 0.1573
Epoch: 2/25... Training loss: 0.1547
Epoch: 2/25... Training loss: 0.1573
Epoch: 2/25... Training loss: 0.1529
Epoch: 2/25... Training loss: 0.1595
Epoch: 2/25... Training loss: 0.1558
Epoch: 2/25... Training loss: 0.1494
Epoch: 2/25... Training loss: 0.1541
Epoch: 2/25... Training loss: 0.1526
Epoch: 2/25... Training loss: 0.1576
Epoch: 2/25... Training loss: 0.1550
Epoch: 2/25... Training loss: 0.1572
Epoch: 2/25... Training loss: 0.1526
Epoch: 2/25... Training loss: 0.1508
Epoch: 2/25... Training loss: 0.1680
Epoch: 2/25... Training loss: 0.1656
Epoch: 2/25... Training loss: 0.1522
Epoch: 2/25... Training loss: 0.1597
Epoch: 2/25... Training loss: 0.1554
Epoch: 2/25... Training loss: 0.1622
Epoch: 2/25... Training loss: 0.1595
Epoch: 2/25... Training loss: 0.1500
Epoch: 2/25... Training loss: 0.1576
Epoch: 2/25... Training loss: 0.1530
Epoch: 2/25... Training loss: 0.1570
Epoch: 2/25... Training loss: 0.1548
Epoch: 2/25... Training loss: 0.1527
Epoch: 2/25... Training loss: 0.1546
Epoch: 2/25... Training loss: 0.1545
Epoch: 2/25... Training loss: 0.1569
Epoch: 2/25... Training loss: 0.1543
Epoch: 2/25... Training loss: 0.1502
Epoch: 2/25... Training loss: 0.1548
Epoch: 2/25... Training loss: 0.1554
Epoch: 2/25... Training loss: 0.1516
Epoch: 2/25... Training loss: 0.1529
Epoch: 2/25... Training loss: 0.1486
Epoch: 2/25... Training loss: 0.1610
Epoch: 2/25... Training loss: 0.1560
Epoch: 2/25... Training loss: 0.1536
Epoch: 2/25... Training loss: 0.1542
Epoch: 2/25... Training loss: 0.1557
Epoch: 2/25... Training loss: 0.1462
Epoch: 2/25... Training loss: 0.1567
Epoch: 2/25... Training loss: 0.1570
Epoch: 2/25... Training loss: 0.1519
Epoch: 2/25... Training loss: 0.1517
Epoch: 2/25... Training loss: 0.1556
Epoch: 2/25... Training loss: 0.1556
Epoch: 2/25... Training loss: 0.1498
Epoch: 2/25... Training loss: 0.1519
Epoch: 2/25... Training loss: 0.1475
Epoch: 2/25... Training loss: 0.1546
Epoch: 2/25... Training loss: 0.1556
Epoch: 2/25... Training loss: 0.1557
Epoch: 2/25... Training loss: 0.1558
Epoch: 2/25... Training loss: 0.1502
Epoch: 2/25... Training loss: 0.1513
Epoch: 2/25... Training loss: 0.1478
Epoch: 2/25... Training loss: 0.1508
Epoch: 2/25... Training loss: 0.1505
Epoch: 2/25... Training loss: 0.1545
Epoch: 2/25... Training loss: 0.1525
Epoch: 2/25... Training loss: 0.1535
Epoch: 2/25... Training loss: 0.1455
Epoch: 2/25... Training loss: 0.1501
Epoch: 2/25... Training loss: 0.1504
Epoch: 2/25... Training loss: 0.1493
Epoch: 2/25... Training loss: 0.1479
Epoch: 2/25... Training loss: 0.1520
Epoch: 2/25... Training loss: 0.1536
Epoch: 2/25... Training loss: 0.1549
Epoch: 2/25... Training loss: 0.1516
Epoch: 2/25... Training loss: 0.1500
Epoch: 2/25... Training loss: 0.1433
Epoch: 2/25... Training loss: 0.1485
Epoch: 2/25... Training loss: 0.1446
Epoch: 2/25... Training loss: 0.1501
Epoch: 2/25... Training loss: 0.1491
Epoch: 2/25... Training loss: 0.1505
Epoch: 2/25... Training loss: 0.1511
Epoch: 2/25... Training loss: 0.1474
Epoch: 2/25... Training loss: 0.1482
Epoch: 2/25... Training loss: 0.1457
Epoch: 2/25... Training loss: 0.1464
Epoch: 2/25... Training loss: 0.1543
Epoch: 2/25... Training loss: 0.1458
Epoch: 2/25... Training loss: 0.1479
Epoch: 2/25... Training loss: 0.1476
Epoch: 2/25... Training loss: 0.1464
Epoch: 2/25... Training loss: 0.1491
Epoch: 2/25... Training loss: 0.1515
Epoch: 2/25... Training loss: 0.1467
Epoch: 2/25... Training loss: 0.1531
Epoch: 2/25... Training loss: 0.1415
Epoch: 2/25... Training loss: 0.1474
Epoch: 2/25... Training loss: 0.1470
Epoch: 2/25... Training loss: 0.1486
Epoch: 2/25... Training loss: 0.1478
Epoch: 2/25... Training loss: 0.1471
Epoch: 2/25... Training loss: 0.1487
Epoch: 2/25... Training loss: 0.1475
Epoch: 2/25... Training loss: 0.1509
Epoch: 2/25... Training loss: 0.1458
Epoch: 2/25... Training loss: 0.1532
Epoch: 2/25... Training loss: 0.1446
Epoch: 2/25... Training loss: 0.1463
Epoch: 2/25... Training loss: 0.1458
Epoch: 2/25... Training loss: 0.1539
Epoch: 2/25... Training loss: 0.1506
Epoch: 2/25... Training loss: 0.1446
Epoch: 2/25... Training loss: 0.1509
Epoch: 2/25... Training loss: 0.1478
Epoch: 2/25... Training loss: 0.1427
Epoch: 2/25... Training loss: 0.1471
Epoch: 2/25... Training loss: 0.1452
Epoch: 2/25... Training loss: 0.1488
Epoch: 2/25... Training loss: 0.1442
Epoch: 2/25... Training loss: 0.1496
Epoch: 2/25... Training loss: 0.1462
Epoch: 2/25... Training loss: 0.1452
Epoch: 2/25... Training loss: 0.1492
Epoch: 2/25... Training loss: 0.1470
Epoch: 2/25... Training loss: 0.1449
Epoch: 2/25... Training loss: 0.1483
Epoch: 2/25... Training loss: 0.1411
Epoch: 2/25... Training loss: 0.1487
Epoch: 2/25... Training loss: 0.1482
Epoch: 2/25... Training loss: 0.1486
Epoch: 2/25... Training loss: 0.1536
Epoch: 2/25... Training loss: 0.1471
Epoch: 2/25... Training loss: 0.1465
Epoch: 2/25... Training loss: 0.1429
Epoch: 2/25... Training loss: 0.1442
Epoch: 2/25... Training loss: 0.1449
Epoch: 2/25... Training loss: 0.1487
Epoch: 2/25... Training loss: 0.1465
Epoch: 2/25... Training loss: 0.1428
Epoch: 2/25... Training loss: 0.1445
Epoch: 2/25... Training loss: 0.1509
Epoch: 2/25... Training loss: 0.1437
Epoch: 2/25... Training loss: 0.1494
Epoch: 2/25... Training loss: 0.1411
Epoch: 2/25... Training loss: 0.1464
Epoch: 2/25... Training loss: 0.1481
Epoch: 2/25... Training loss: 0.1464
Epoch: 2/25... Training loss: 0.1399
Epoch: 2/25... Training loss: 0.1403
Epoch: 2/25... Training loss: 0.1465
Epoch: 2/25... Training loss: 0.1457
Epoch: 2/25... Training loss: 0.1455
Epoch: 2/25... Training loss: 0.1458
Epoch: 2/25... Training loss: 0.1493
Epoch: 2/25... Training loss: 0.1476
Epoch: 2/25... Training loss: 0.1444
Epoch: 2/25... Training loss: 0.1442
Epoch: 2/25... Training loss: 0.1424
Epoch: 2/25... Training loss: 0.1451
Epoch: 2/25... Training loss: 0.1425
Epoch: 2/25... Training loss: 0.1441
Epoch: 2/25... Training loss: 0.1457
Epoch: 2/25... Training loss: 0.1426
Epoch: 2/25... Training loss: 0.1443
Epoch: 2/25... Training loss: 0.1485
Epoch: 2/25... Training loss: 0.1450
Epoch: 2/25... Training loss: 0.1412
Epoch: 2/25... Training loss: 0.1450
Epoch: 2/25... Training loss: 0.1426
Epoch: 2/25... Training loss: 0.1434
Epoch: 2/25... Training loss: 0.1498
Epoch: 2/25... Training loss: 0.1472
Epoch: 2/25... Training loss: 0.1467
Epoch: 2/25... Training loss: 0.1486
Epoch: 2/25... Training loss: 0.1469
Epoch: 2/25... Training loss: 0.1449
Epoch: 2/25... Training loss: 0.1448
Epoch: 2/25... Training loss: 0.1445
Epoch: 2/25... Training loss: 0.1414
Epoch: 2/25... Training loss: 0.1402
Epoch: 2/25... Training loss: 0.1477
Epoch: 2/25... Training loss: 0.1421
Epoch: 2/25... Training loss: 0.1472
Epoch: 2/25... Training loss: 0.1436
Epoch: 2/25... Training loss: 0.1412
Epoch: 2/25... Training loss: 0.1463
Epoch: 2/25... Training loss: 0.1419
Epoch: 2/25... Training loss: 0.1437
Epoch: 2/25... Training loss: 0.1486
Epoch: 2/25... Training loss: 0.1467
Epoch: 2/25... Training loss: 0.1467
Epoch: 2/25... Training loss: 0.1448
Epoch: 2/25... Training loss: 0.1454
Epoch: 2/25... Training loss: 0.1457
Epoch: 2/25... Training loss: 0.1429
Epoch: 2/25... Training loss: 0.1412
Epoch: 2/25... Training loss: 0.1406
Epoch: 2/25... Training loss: 0.1455
Epoch: 2/25... Training loss: 0.1446
Epoch: 2/25... Training loss: 0.1431
Epoch: 2/25... Training loss: 0.1399
Epoch: 2/25... Training loss: 0.1398
Epoch: 2/25... Training loss: 0.1417
Epoch: 2/25... Training loss: 0.1447
Epoch: 2/25... Training loss: 0.1428
Epoch: 2/25... Training loss: 0.1428
Epoch: 2/25... Training loss: 0.1444
Epoch: 2/25... Training loss: 0.1380
Epoch: 2/25... Training loss: 0.1389
Epoch: 2/25... Training loss: 0.1361
Epoch: 2/25... Training loss: 0.1394
Epoch: 2/25... Training loss: 0.1402
Epoch: 2/25... Training loss: 0.1437
Epoch: 2/25... Training loss: 0.1472
Epoch: 2/25... Training loss: 0.1372
Epoch: 2/25... Training loss: 0.1350
Epoch: 2/25... Training loss: 0.1440
Epoch: 2/25... Training loss: 0.1396
Epoch: 2/25... Training loss: 0.1394
Epoch: 2/25... Training loss: 0.1419
Epoch: 2/25... Training loss: 0.1453
Epoch: 2/25... Training loss: 0.1399
Epoch: 2/25... Training loss: 0.1375
Epoch: 2/25... Training loss: 0.1444
Epoch: 2/25... Training loss: 0.1412
Epoch: 2/25... Training loss: 0.1448
Epoch: 3/25... Training loss: 0.1398
Epoch: 3/25... Training loss: 0.1416
Epoch: 3/25... Training loss: 0.1454
Epoch: 3/25... Training loss: 0.1412
Epoch: 3/25... Training loss: 0.1375
Epoch: 3/25... Training loss: 0.1384
Epoch: 3/25... Training loss: 0.1358
Epoch: 3/25... Training loss: 0.1436
Epoch: 3/25... Training loss: 0.1331
Epoch: 3/25... Training loss: 0.1376
Epoch: 3/25... Training loss: 0.1403
Epoch: 3/25... Training loss: 0.1423
Epoch: 3/25... Training loss: 0.1418
Epoch: 3/25... Training loss: 0.1438
Epoch: 3/25... Training loss: 0.1439
Epoch: 3/25... Training loss: 0.1436
Epoch: 3/25... Training loss: 0.1417
Epoch: 3/25... Training loss: 0.1389
Epoch: 3/25... Training loss: 0.1392
Epoch: 3/25... Training loss: 0.1412
Epoch: 3/25... Training loss: 0.1396
Epoch: 3/25... Training loss: 0.1399
Epoch: 3/25... Training loss: 0.1418
Epoch: 3/25... Training loss: 0.1432
Epoch: 3/25... Training loss: 0.1405
Epoch: 3/25... Training loss: 0.1395
Epoch: 3/25... Training loss: 0.1403
Epoch: 3/25... Training loss: 0.1406
Epoch: 3/25... Training loss: 0.1397
Epoch: 3/25... Training loss: 0.1363
Epoch: 3/25... Training loss: 0.1411
Epoch: 3/25... Training loss: 0.1444
Epoch: 3/25... Training loss: 0.1414
Epoch: 3/25... Training loss: 0.1446
Epoch: 3/25... Training loss: 0.1425
Epoch: 3/25... Training loss: 0.1359
Epoch: 3/25... Training loss: 0.1413
Epoch: 3/25... Training loss: 0.1401
Epoch: 3/25... Training loss: 0.1416
Epoch: 3/25... Training loss: 0.1434
Epoch: 3/25... Training loss: 0.1428
Epoch: 3/25... Training loss: 0.1403
Epoch: 3/25... Training loss: 0.1382
Epoch: 3/25... Training loss: 0.1360
Epoch: 3/25... Training loss: 0.1403
Epoch: 3/25... Training loss: 0.1385
Epoch: 3/25... Training loss: 0.1388
Epoch: 3/25... Training loss: 0.1404
Epoch: 3/25... Training loss: 0.1349
Epoch: 3/25... Training loss: 0.1434
Epoch: 3/25... Training loss: 0.1407
Epoch: 3/25... Training loss: 0.1395
Epoch: 3/25... Training loss: 0.1402
Epoch: 3/25... Training loss: 0.1448
Epoch: 3/25... Training loss: 0.1388
Epoch: 3/25... Training loss: 0.1347
Epoch: 3/25... Training loss: 0.1396
Epoch: 3/25... Training loss: 0.1393
Epoch: 3/25... Training loss: 0.1391
Epoch: 3/25... Training loss: 0.1401
Epoch: 3/25... Training loss: 0.1392
Epoch: 3/25... Training loss: 0.1405
Epoch: 3/25... Training loss: 0.1403
Epoch: 3/25... Training loss: 0.1371
Epoch: 3/25... Training loss: 0.1418
Epoch: 3/25... Training loss: 0.1358
Epoch: 3/25... Training loss: 0.1334
Epoch: 3/25... Training loss: 0.1396
Epoch: 3/25... Training loss: 0.1365
Epoch: 3/25... Training loss: 0.1396
Epoch: 3/25... Training loss: 0.1365
Epoch: 3/25... Training loss: 0.1397
Epoch: 3/25... Training loss: 0.1378
Epoch: 3/25... Training loss: 0.1402
Epoch: 3/25... Training loss: 0.1404
Epoch: 3/25... Training loss: 0.1397
Epoch: 3/25... Training loss: 0.1419
Epoch: 3/25... Training loss: 0.1378
Epoch: 3/25... Training loss: 0.1406
Epoch: 3/25... Training loss: 0.1368
Epoch: 3/25... Training loss: 0.1401
Epoch: 3/25... Training loss: 0.1398
Epoch: 3/25... Training loss: 0.1377
Epoch: 3/25... Training loss: 0.1425
Epoch: 3/25... Training loss: 0.1359
Epoch: 3/25... Training loss: 0.1395
Epoch: 3/25... Training loss: 0.1413
Epoch: 3/25... Training loss: 0.1334
Epoch: 3/25... Training loss: 0.1359
Epoch: 3/25... Training loss: 0.1406
Epoch: 3/25... Training loss: 0.1342
Epoch: 3/25... Training loss: 0.1401
Epoch: 3/25... Training loss: 0.1406
Epoch: 3/25... Training loss: 0.1301
Epoch: 3/25... Training loss: 0.1367
Epoch: 3/25... Training loss: 0.1385
Epoch: 3/25... Training loss: 0.1384
Epoch: 3/25... Training loss: 0.1396
Epoch: 3/25... Training loss: 0.1382
Epoch: 3/25... Training loss: 0.1418
Epoch: 3/25... Training loss: 0.1332
Epoch: 3/25... Training loss: 0.1348
Epoch: 3/25... Training loss: 0.1363
Epoch: 3/25... Training loss: 0.1396
Epoch: 3/25... Training loss: 0.1419
Epoch: 3/25... Training loss: 0.1348
Epoch: 3/25... Training loss: 0.1341
Epoch: 3/25... Training loss: 0.1391
Epoch: 3/25... Training loss: 0.1374
Epoch: 3/25... Training loss: 0.1345
Epoch: 3/25... Training loss: 0.1358
Epoch: 3/25... Training loss: 0.1367
Epoch: 3/25... Training loss: 0.1343
Epoch: 3/25... Training loss: 0.1391
Epoch: 3/25... Training loss: 0.1412
Epoch: 3/25... Training loss: 0.1370
Epoch: 3/25... Training loss: 0.1299
Epoch: 3/25... Training loss: 0.1341
Epoch: 3/25... Training loss: 0.1357
Epoch: 3/25... Training loss: 0.1366
Epoch: 3/25... Training loss: 0.1337
Epoch: 3/25... Training loss: 0.1333
Epoch: 3/25... Training loss: 0.1350
Epoch: 3/25... Training loss: 0.1360
Epoch: 3/25... Training loss: 0.1306
Epoch: 3/25... Training loss: 0.1377
Epoch: 3/25... Training loss: 0.1308
Epoch: 3/25... Training loss: 0.1394
Epoch: 3/25... Training loss: 0.1326
Epoch: 3/25... Training loss: 0.1385
Epoch: 3/25... Training loss: 0.1343
Epoch: 3/25... Training loss: 0.1361
Epoch: 3/25... Training loss: 0.1332
Epoch: 3/25... Training loss: 0.1367
Epoch: 3/25... Training loss: 0.1415
Epoch: 3/25... Training loss: 0.1306
Epoch: 3/25... Training loss: 0.1388
Epoch: 3/25... Training loss: 0.1340
Epoch: 3/25... Training loss: 0.1361
Epoch: 3/25... Training loss: 0.1340
Epoch: 3/25... Training loss: 0.1345
Epoch: 3/25... Training loss: 0.1286
Epoch: 3/25... Training loss: 0.1372
Epoch: 3/25... Training loss: 0.1377
Epoch: 3/25... Training loss: 0.1377
Epoch: 3/25... Training loss: 0.1343
Epoch: 3/25... Training loss: 0.1305
Epoch: 3/25... Training loss: 0.1371
Epoch: 3/25... Training loss: 0.1342
Epoch: 3/25... Training loss: 0.1424
Epoch: 3/25... Training loss: 0.1347
Epoch: 3/25... Training loss: 0.1356
Epoch: 3/25... Training loss: 0.1379
Epoch: 3/25... Training loss: 0.1368
Epoch: 3/25... Training loss: 0.1352
Epoch: 3/25... Training loss: 0.1377
Epoch: 3/25... Training loss: 0.1400
Epoch: 3/25... Training loss: 0.1344
Epoch: 3/25... Training loss: 0.1289
Epoch: 3/25... Training loss: 0.1380
Epoch: 3/25... Training loss: 0.1350
Epoch: 3/25... Training loss: 0.1301
Epoch: 3/25... Training loss: 0.1340
Epoch: 3/25... Training loss: 0.1298
Epoch: 3/25... Training loss: 0.1337
Epoch: 3/25... Training loss: 0.1326
Epoch: 3/25... Training loss: 0.1357
Epoch: 3/25... Training loss: 0.1404
Epoch: 3/25... Training loss: 0.1357
Epoch: 3/25... Training loss: 0.1373
Epoch: 3/25... Training loss: 0.1356
Epoch: 3/25... Training loss: 0.1438
Epoch: 3/25... Training loss: 0.1300
Epoch: 3/25... Training loss: 0.1380
Epoch: 3/25... Training loss: 0.1361
Epoch: 3/25... Training loss: 0.1336
Epoch: 3/25... Training loss: 0.1373
Epoch: 3/25... Training loss: 0.1358
Epoch: 3/25... Training loss: 0.1387
Epoch: 3/25... Training loss: 0.1366
Epoch: 3/25... Training loss: 0.1365
Epoch: 3/25... Training loss: 0.1386
Epoch: 3/25... Training loss: 0.1389
Epoch: 3/25... Training loss: 0.1362
Epoch: 3/25... Training loss: 0.1290
Epoch: 3/25... Training loss: 0.1321
Epoch: 3/25... Training loss: 0.1325
Epoch: 3/25... Training loss: 0.1309
Epoch: 3/25... Training loss: 0.1360
Epoch: 3/25... Training loss: 0.1316
Epoch: 3/25... Training loss: 0.1369
Epoch: 3/25... Training loss: 0.1331
Epoch: 3/25... Training loss: 0.1336
Epoch: 3/25... Training loss: 0.1362
Epoch: 3/25... Training loss: 0.1356
Epoch: 3/25... Training loss: 0.1305
Epoch: 3/25... Training loss: 0.1343
Epoch: 3/25... Training loss: 0.1363
Epoch: 3/25... Training loss: 0.1349
Epoch: 3/25... Training loss: 0.1340
Epoch: 3/25... Training loss: 0.1357
Epoch: 3/25... Training loss: 0.1385
Epoch: 3/25... Training loss: 0.1321
Epoch: 3/25... Training loss: 0.1330
Epoch: 3/25... Training loss: 0.1287
Epoch: 3/25... Training loss: 0.1331
Epoch: 3/25... Training loss: 0.1329
Epoch: 3/25... Training loss: 0.1395
Epoch: 3/25... Training loss: 0.1336
Epoch: 3/25... Training loss: 0.1354
Epoch: 3/25... Training loss: 0.1350
Epoch: 3/25... Training loss: 0.1339
Epoch: 3/25... Training loss: 0.1306
Epoch: 3/25... Training loss: 0.1333
Epoch: 3/25... Training loss: 0.1322
Epoch: 3/25... Training loss: 0.1371
Epoch: 3/25... Training loss: 0.1332
Epoch: 3/25... Training loss: 0.1325
Epoch: 3/25... Training loss: 0.1334
Epoch: 3/25... Training loss: 0.1374
Epoch: 3/25... Training loss: 0.1386
Epoch: 3/25... Training loss: 0.1283
Epoch: 3/25... Training loss: 0.1325
Epoch: 3/25... Training loss: 0.1345
Epoch: 3/25... Training loss: 0.1327
Epoch: 3/25... Training loss: 0.1339
Epoch: 3/25... Training loss: 0.1365
Epoch: 3/25... Training loss: 0.1309
Epoch: 3/25... Training loss: 0.1297
Epoch: 3/25... Training loss: 0.1343
Epoch: 3/25... Training loss: 0.1367
Epoch: 3/25... Training loss: 0.1311
Epoch: 3/25... Training loss: 0.1329
Epoch: 3/25... Training loss: 0.1396
Epoch: 3/25... Training loss: 0.1315
Epoch: 3/25... Training loss: 0.1342
Epoch: 3/25... Training loss: 0.1375
Epoch: 3/25... Training loss: 0.1299
Epoch: 3/25... Training loss: 0.1366
Epoch: 3/25... Training loss: 0.1292
Epoch: 3/25... Training loss: 0.1386
Epoch: 3/25... Training loss: 0.1361
Epoch: 3/25... Training loss: 0.1338
Epoch: 3/25... Training loss: 0.1319
Epoch: 3/25... Training loss: 0.1321
Epoch: 3/25... Training loss: 0.1319
Epoch: 3/25... Training loss: 0.1345
Epoch: 3/25... Training loss: 0.1341
Epoch: 3/25... Training loss: 0.1334
Epoch: 3/25... Training loss: 0.1339
Epoch: 3/25... Training loss: 0.1315
Epoch: 3/25... Training loss: 0.1314
Epoch: 3/25... Training loss: 0.1377
Epoch: 3/25... Training loss: 0.1340
Epoch: 3/25... Training loss: 0.1292
Epoch: 3/25... Training loss: 0.1331
Epoch: 3/25... Training loss: 0.1346
Epoch: 3/25... Training loss: 0.1289
Epoch: 3/25... Training loss: 0.1313
Epoch: 3/25... Training loss: 0.1282
Epoch: 3/25... Training loss: 0.1311
Epoch: 3/25... Training loss: 0.1303
Epoch: 3/25... Training loss: 0.1328
Epoch: 3/25... Training loss: 0.1303
Epoch: 3/25... Training loss: 0.1350
Epoch: 3/25... Training loss: 0.1327
Epoch: 3/25... Training loss: 0.1356
Epoch: 3/25... Training loss: 0.1347
Epoch: 3/25... Training loss: 0.1271
Epoch: 3/25... Training loss: 0.1339
Epoch: 3/25... Training loss: 0.1332
Epoch: 3/25... Training loss: 0.1308
Epoch: 3/25... Training loss: 0.1350
Epoch: 3/25... Training loss: 0.1353
Epoch: 3/25... Training loss: 0.1267
Epoch: 3/25... Training loss: 0.1380
Epoch: 3/25... Training loss: 0.1344
Epoch: 3/25... Training loss: 0.1348
Epoch: 3/25... Training loss: 0.1398
Epoch: 3/25... Training loss: 0.1340
Epoch: 3/25... Training loss: 0.1345
Epoch: 3/25... Training loss: 0.1375
Epoch: 3/25... Training loss: 0.1308
Epoch: 3/25... Training loss: 0.1340
Epoch: 3/25... Training loss: 0.1308
Epoch: 3/25... Training loss: 0.1306
Epoch: 3/25... Training loss: 0.1278
Epoch: 3/25... Training loss: 0.1264
Epoch: 3/25... Training loss: 0.1291
Epoch: 3/25... Training loss: 0.1321
Epoch: 3/25... Training loss: 0.1272
Epoch: 3/25... Training loss: 0.1311
Epoch: 3/25... Training loss: 0.1301
Epoch: 3/25... Training loss: 0.1295
Epoch: 3/25... Training loss: 0.1297
Epoch: 3/25... Training loss: 0.1314
Epoch: 3/25... Training loss: 0.1343
Epoch: 3/25... Training loss: 0.1299
Epoch: 3/25... Training loss: 0.1324
Epoch: 3/25... Training loss: 0.1337
Epoch: 4/25... Training loss: 0.1273
Epoch: 4/25... Training loss: 0.1331
Epoch: 4/25... Training loss: 0.1310
Epoch: 4/25... Training loss: 0.1327
Epoch: 4/25... Training loss: 0.1360
Epoch: 4/25... Training loss: 0.1303
Epoch: 4/25... Training loss: 0.1310
Epoch: 4/25... Training loss: 0.1317
Epoch: 4/25... Training loss: 0.1313
Epoch: 4/25... Training loss: 0.1232
Epoch: 4/25... Training loss: 0.1314
Epoch: 4/25... Training loss: 0.1326
Epoch: 4/25... Training loss: 0.1319
Epoch: 4/25... Training loss: 0.1330
Epoch: 4/25... Training loss: 0.1331
Epoch: 4/25... Training loss: 0.1300
Epoch: 4/25... Training loss: 0.1337
Epoch: 4/25... Training loss: 0.1323
Epoch: 4/25... Training loss: 0.1308
Epoch: 4/25... Training loss: 0.1320
Epoch: 4/25... Training loss: 0.1318
Epoch: 4/25... Training loss: 0.1315
Epoch: 4/25... Training loss: 0.1363
Epoch: 4/25... Training loss: 0.1339
Epoch: 4/25... Training loss: 0.1348
Epoch: 4/25... Training loss: 0.1286
Epoch: 4/25... Training loss: 0.1364
Epoch: 4/25... Training loss: 0.1301
Epoch: 4/25... Training loss: 0.1306
Epoch: 4/25... Training loss: 0.1328
Epoch: 4/25... Training loss: 0.1287
Epoch: 4/25... Training loss: 0.1300
Epoch: 4/25... Training loss: 0.1263
Epoch: 4/25... Training loss: 0.1352
Epoch: 4/25... Training loss: 0.1306
Epoch: 4/25... Training loss: 0.1299
Epoch: 4/25... Training loss: 0.1321
Epoch: 4/25... Training loss: 0.1284
Epoch: 4/25... Training loss: 0.1325
Epoch: 4/25... Training loss: 0.1330
Epoch: 4/25... Training loss: 0.1309
Epoch: 4/25... Training loss: 0.1290
Epoch: 4/25... Training loss: 0.1313
Epoch: 4/25... Training loss: 0.1357
Epoch: 4/25... Training loss: 0.1304
Epoch: 4/25... Training loss: 0.1332
Epoch: 4/25... Training loss: 0.1324
Epoch: 4/25... Training loss: 0.1252
Epoch: 4/25... Training loss: 0.1276
Epoch: 4/25... Training loss: 0.1297
Epoch: 4/25... Training loss: 0.1329
Epoch: 4/25... Training loss: 0.1336
Epoch: 4/25... Training loss: 0.1304
Epoch: 4/25... Training loss: 0.1262
Epoch: 4/25... Training loss: 0.1262
Epoch: 4/25... Training loss: 0.1367
Epoch: 4/25... Training loss: 0.1320
Epoch: 4/25... Training loss: 0.1334
Epoch: 4/25... Training loss: 0.1327
Epoch: 4/25... Training loss: 0.1234
Epoch: 4/25... Training loss: 0.1271
Epoch: 4/25... Training loss: 0.1262
Epoch: 4/25... Training loss: 0.1263
Epoch: 4/25... Training loss: 0.1315
Epoch: 4/25... Training loss: 0.1332
Epoch: 4/25... Training loss: 0.1331
Epoch: 4/25... Training loss: 0.1339
Epoch: 4/25... Training loss: 0.1303
Epoch: 4/25... Training loss: 0.1285
Epoch: 4/25... Training loss: 0.1303
Epoch: 4/25... Training loss: 0.1269
Epoch: 4/25... Training loss: 0.1293
Epoch: 4/25... Training loss: 0.1243
Epoch: 4/25... Training loss: 0.1245
Epoch: 4/25... Training loss: 0.1278
Epoch: 4/25... Training loss: 0.1282
Epoch: 4/25... Training loss: 0.1296
Epoch: 4/25... Training loss: 0.1308
Epoch: 4/25... Training loss: 0.1286
Epoch: 4/25... Training loss: 0.1308
Epoch: 4/25... Training loss: 0.1278
Epoch: 4/25... Training loss: 0.1291
Epoch: 4/25... Training loss: 0.1287
Epoch: 4/25... Training loss: 0.1326
Epoch: 4/25... Training loss: 0.1324
Epoch: 4/25... Training loss: 0.1289
Epoch: 4/25... Training loss: 0.1294
Epoch: 4/25... Training loss: 0.1268
Epoch: 4/25... Training loss: 0.1329
Epoch: 4/25... Training loss: 0.1295
Epoch: 4/25... Training loss: 0.1293
Epoch: 4/25... Training loss: 0.1326
Epoch: 4/25... Training loss: 0.1323
Epoch: 4/25... Training loss: 0.1277
Epoch: 4/25... Training loss: 0.1287
Epoch: 4/25... Training loss: 0.1309
Epoch: 4/25... Training loss: 0.1300
Epoch: 4/25... Training loss: 0.1302
Epoch: 4/25... Training loss: 0.1280
Epoch: 4/25... Training loss: 0.1290
Epoch: 4/25... Training loss: 0.1273
Epoch: 4/25... Training loss: 0.1324
Epoch: 4/25... Training loss: 0.1264
Epoch: 4/25... Training loss: 0.1286
Epoch: 4/25... Training loss: 0.1276
Epoch: 4/25... Training loss: 0.1302
Epoch: 4/25... Training loss: 0.1286
Epoch: 4/25... Training loss: 0.1233
Epoch: 4/25... Training loss: 0.1281
Epoch: 4/25... Training loss: 0.1313
Epoch: 4/25... Training loss: 0.1309
Epoch: 4/25... Training loss: 0.1335
Epoch: 4/25... Training loss: 0.1248
Epoch: 4/25... Training loss: 0.1282
Epoch: 4/25... Training loss: 0.1328
Epoch: 4/25... Training loss: 0.1312
Epoch: 4/25... Training loss: 0.1296
Epoch: 4/25... Training loss: 0.1316
Epoch: 4/25... Training loss: 0.1292
Epoch: 4/25... Training loss: 0.1302
Epoch: 4/25... Training loss: 0.1282
Epoch: 4/25... Training loss: 0.1295
Epoch: 4/25... Training loss: 0.1247
Epoch: 4/25... Training loss: 0.1273
Epoch: 4/25... Training loss: 0.1310
Epoch: 4/25... Training loss: 0.1275
Epoch: 4/25... Training loss: 0.1247
Epoch: 4/25... Training loss: 0.1256
Epoch: 4/25... Training loss: 0.1290
Epoch: 4/25... Training loss: 0.1312
Epoch: 4/25... Training loss: 0.1272
Epoch: 4/25... Training loss: 0.1280
Epoch: 4/25... Training loss: 0.1334
Epoch: 4/25... Training loss: 0.1269
Epoch: 4/25... Training loss: 0.1279
Epoch: 4/25... Training loss: 0.1237
Epoch: 4/25... Training loss: 0.1293
Epoch: 4/25... Training loss: 0.1245
Epoch: 4/25... Training loss: 0.1264
Epoch: 4/25... Training loss: 0.1299
Epoch: 4/25... Training loss: 0.1261
Epoch: 4/25... Training loss: 0.1285
Epoch: 4/25... Training loss: 0.1239
Epoch: 4/25... Training loss: 0.1307
Epoch: 4/25... Training loss: 0.1226
Epoch: 4/25... Training loss: 0.1273
Epoch: 4/25... Training loss: 0.1335
Epoch: 4/25... Training loss: 0.1293
Epoch: 4/25... Training loss: 0.1271
Epoch: 4/25... Training loss: 0.1267
Epoch: 4/25... Training loss: 0.1278
Epoch: 4/25... Training loss: 0.1262
Epoch: 4/25... Training loss: 0.1214
Epoch: 4/25... Training loss: 0.1280
Epoch: 4/25... Training loss: 0.1301
Epoch: 4/25... Training loss: 0.1280
Epoch: 4/25... Training loss: 0.1267
Epoch: 4/25... Training loss: 0.1309
Epoch: 4/25... Training loss: 0.1341
Epoch: 4/25... Training loss: 0.1317
Epoch: 4/25... Training loss: 0.1283
Epoch: 4/25... Training loss: 0.1303
Epoch: 4/25... Training loss: 0.1256
Epoch: 4/25... Training loss: 0.1238
Epoch: 4/25... Training loss: 0.1245
Epoch: 4/25... Training loss: 0.1297
Epoch: 4/25... Training loss: 0.1287
Epoch: 4/25... Training loss: 0.1310
Epoch: 4/25... Training loss: 0.1291
Epoch: 4/25... Training loss: 0.1266
Epoch: 4/25... Training loss: 0.1272
Epoch: 4/25... Training loss: 0.1294
Epoch: 4/25... Training loss: 0.1278
Epoch: 4/25... Training loss: 0.1285
Epoch: 4/25... Training loss: 0.1262
Epoch: 4/25... Training loss: 0.1236
Epoch: 4/25... Training loss: 0.1300
Epoch: 4/25... Training loss: 0.1269
Epoch: 4/25... Training loss: 0.1309
Epoch: 4/25... Training loss: 0.1209
Epoch: 4/25... Training loss: 0.1319
Epoch: 4/25... Training loss: 0.1256
Epoch: 4/25... Training loss: 0.1295
Epoch: 4/25... Training loss: 0.1339
Epoch: 4/25... Training loss: 0.1263
Epoch: 4/25... Training loss: 0.1262
Epoch: 4/25... Training loss: 0.1342
Epoch: 4/25... Training loss: 0.1253
Epoch: 4/25... Training loss: 0.1233
Epoch: 4/25... Training loss: 0.1269
Epoch: 4/25... Training loss: 0.1286
Epoch: 4/25... Training loss: 0.1304
Epoch: 4/25... Training loss: 0.1304
Epoch: 4/25... Training loss: 0.1305
Epoch: 4/25... Training loss: 0.1276
Epoch: 4/25... Training loss: 0.1294
Epoch: 4/25... Training loss: 0.1278
Epoch: 4/25... Training loss: 0.1272
Epoch: 4/25... Training loss: 0.1328
Epoch: 4/25... Training loss: 0.1290
Epoch: 4/25... Training loss: 0.1288
Epoch: 4/25... Training loss: 0.1265
Epoch: 4/25... Training loss: 0.1299
Epoch: 4/25... Training loss: 0.1298
Epoch: 4/25... Training loss: 0.1265
Epoch: 4/25... Training loss: 0.1232
Epoch: 4/25... Training loss: 0.1268
Epoch: 4/25... Training loss: 0.1286
Epoch: 4/25... Training loss: 0.1284
Epoch: 4/25... Training loss: 0.1308
Epoch: 4/25... Training loss: 0.1291
Epoch: 4/25... Training loss: 0.1243
Epoch: 4/25... Training loss: 0.1278
Epoch: 4/25... Training loss: 0.1233
Epoch: 4/25... Training loss: 0.1253
Epoch: 4/25... Training loss: 0.1267
Epoch: 4/25... Training loss: 0.1266
Epoch: 4/25... Training loss: 0.1271
Epoch: 4/25... Training loss: 0.1290
Epoch: 4/25... Training loss: 0.1312
Epoch: 4/25... Training loss: 0.1274
Epoch: 4/25... Training loss: 0.1224
Epoch: 4/25... Training loss: 0.1241
Epoch: 4/25... Training loss: 0.1275
Epoch: 4/25... Training loss: 0.1261
Epoch: 4/25... Training loss: 0.1252
Epoch: 4/25... Training loss: 0.1230
Epoch: 4/25... Training loss: 0.1297
Epoch: 4/25... Training loss: 0.1270
Epoch: 4/25... Training loss: 0.1277
Epoch: 4/25... Training loss: 0.1297
Epoch: 4/25... Training loss: 0.1256
Epoch: 4/25... Training loss: 0.1291
Epoch: 4/25... Training loss: 0.1271
Epoch: 4/25... Training loss: 0.1276
Epoch: 4/25... Training loss: 0.1290
Epoch: 4/25... Training loss: 0.1235
Epoch: 4/25... Training loss: 0.1267
Epoch: 4/25... Training loss: 0.1216
Epoch: 4/25... Training loss: 0.1315
Epoch: 4/25... Training loss: 0.1259
Epoch: 4/25... Training loss: 0.1291
Epoch: 4/25... Training loss: 0.1252
Epoch: 4/25... Training loss: 0.1248
Epoch: 4/25... Training loss: 0.1248
Epoch: 4/25... Training loss: 0.1234
Epoch: 4/25... Training loss: 0.1251
Epoch: 4/25... Training loss: 0.1281
Epoch: 4/25... Training loss: 0.1249
Epoch: 4/25... Training loss: 0.1255
Epoch: 4/25... Training loss: 0.1239
Epoch: 4/25... Training loss: 0.1274
Epoch: 4/25... Training loss: 0.1268
Epoch: 4/25... Training loss: 0.1277
Epoch: 4/25... Training loss: 0.1292
Epoch: 4/25... Training loss: 0.1302
Epoch: 4/25... Training loss: 0.1254
Epoch: 4/25... Training loss: 0.1281
Epoch: 4/25... Training loss: 0.1232
Epoch: 4/25... Training loss: 0.1260
Epoch: 4/25... Training loss: 0.1246
Epoch: 4/25... Training loss: 0.1272
Epoch: 4/25... Training loss: 0.1275
Epoch: 4/25... Training loss: 0.1233
Epoch: 4/25... Training loss: 0.1267
Epoch: 4/25... Training loss: 0.1275
Epoch: 4/25... Training loss: 0.1219
Epoch: 4/25... Training loss: 0.1259
Epoch: 4/25... Training loss: 0.1279
Epoch: 4/25... Training loss: 0.1250
Epoch: 4/25... Training loss: 0.1314
Epoch: 4/25... Training loss: 0.1277
Epoch: 4/25... Training loss: 0.1250
Epoch: 4/25... Training loss: 0.1295
Epoch: 4/25... Training loss: 0.1282
Epoch: 4/25... Training loss: 0.1286
Epoch: 4/25... Training loss: 0.1294
Epoch: 4/25... Training loss: 0.1274
Epoch: 4/25... Training loss: 0.1264
Epoch: 4/25... Training loss: 0.1233
Epoch: 4/25... Training loss: 0.1256
Epoch: 4/25... Training loss: 0.1256
Epoch: 4/25... Training loss: 0.1265
Epoch: 4/25... Training loss: 0.1243
Epoch: 4/25... Training loss: 0.1231
Epoch: 4/25... Training loss: 0.1247
Epoch: 4/25... Training loss: 0.1291
Epoch: 4/25... Training loss: 0.1215
Epoch: 4/25... Training loss: 0.1278
Epoch: 4/25... Training loss: 0.1239
Epoch: 4/25... Training loss: 0.1251
Epoch: 4/25... Training loss: 0.1286
Epoch: 4/25... Training loss: 0.1224
Epoch: 4/25... Training loss: 0.1235
Epoch: 4/25... Training loss: 0.1282
Epoch: 4/25... Training loss: 0.1265
Epoch: 4/25... Training loss: 0.1208
Epoch: 4/25... Training loss: 0.1273
Epoch: 4/25... Training loss: 0.1218
Epoch: 4/25... Training loss: 0.1243
Epoch: 5/25... Training loss: 0.1253
Epoch: 5/25... Training loss: 0.1230
Epoch: 5/25... Training loss: 0.1246
Epoch: 5/25... Training loss: 0.1265
Epoch: 5/25... Training loss: 0.1243
Epoch: 5/25... Training loss: 0.1300
Epoch: 5/25... Training loss: 0.1259
Epoch: 5/25... Training loss: 0.1260
Epoch: 5/25... Training loss: 0.1234
Epoch: 5/25... Training loss: 0.1272
Epoch: 5/25... Training loss: 0.1289
Epoch: 5/25... Training loss: 0.1242
Epoch: 5/25... Training loss: 0.1248
Epoch: 5/25... Training loss: 0.1265
Epoch: 5/25... Training loss: 0.1266
Epoch: 5/25... Training loss: 0.1231
Epoch: 5/25... Training loss: 0.1267
Epoch: 5/25... Training loss: 0.1220
Epoch: 5/25... Training loss: 0.1276
Epoch: 5/25... Training loss: 0.1240
Epoch: 5/25... Training loss: 0.1253
Epoch: 5/25... Training loss: 0.1296
Epoch: 5/25... Training loss: 0.1277
Epoch: 5/25... Training loss: 0.1320
Epoch: 5/25... Training loss: 0.1207
Epoch: 5/25... Training loss: 0.1211
Epoch: 5/25... Training loss: 0.1296
Epoch: 5/25... Training loss: 0.1295
Epoch: 5/25... Training loss: 0.1204
Epoch: 5/25... Training loss: 0.1237
Epoch: 5/25... Training loss: 0.1247
Epoch: 5/25... Training loss: 0.1266
Epoch: 5/25... Training loss: 0.1260
Epoch: 5/25... Training loss: 0.1239
Epoch: 5/25... Training loss: 0.1246
Epoch: 5/25... Training loss: 0.1262
Epoch: 5/25... Training loss: 0.1282
Epoch: 5/25... Training loss: 0.1299
Epoch: 5/25... Training loss: 0.1294
Epoch: 5/25... Training loss: 0.1243
Epoch: 5/25... Training loss: 0.1254
Epoch: 5/25... Training loss: 0.1280
Epoch: 5/25... Training loss: 0.1249
Epoch: 5/25... Training loss: 0.1267
Epoch: 5/25... Training loss: 0.1304
Epoch: 5/25... Training loss: 0.1244
Epoch: 5/25... Training loss: 0.1259
Epoch: 5/25... Training loss: 0.1231
Epoch: 5/25... Training loss: 0.1220
Epoch: 5/25... Training loss: 0.1239
Epoch: 5/25... Training loss: 0.1278
Epoch: 5/25... Training loss: 0.1259
Epoch: 5/25... Training loss: 0.1271
Epoch: 5/25... Training loss: 0.1270
Epoch: 5/25... Training loss: 0.1229
Epoch: 5/25... Training loss: 0.1283
Epoch: 5/25... Training loss: 0.1256
Epoch: 5/25... Training loss: 0.1249
Epoch: 5/25... Training loss: 0.1283
Epoch: 5/25... Training loss: 0.1188
Epoch: 5/25... Training loss: 0.1240
Epoch: 5/25... Training loss: 0.1268
Epoch: 5/25... Training loss: 0.1278
Epoch: 5/25... Training loss: 0.1249
Epoch: 5/25... Training loss: 0.1256
Epoch: 5/25... Training loss: 0.1281
Epoch: 5/25... Training loss: 0.1215
Epoch: 5/25... Training loss: 0.1240
Epoch: 5/25... Training loss: 0.1223
Epoch: 5/25... Training loss: 0.1225
Epoch: 5/25... Training loss: 0.1223
Epoch: 5/25... Training loss: 0.1255
Epoch: 5/25... Training loss: 0.1267
Epoch: 5/25... Training loss: 0.1235
Epoch: 5/25... Training loss: 0.1279
Epoch: 5/25... Training loss: 0.1270
Epoch: 5/25... Training loss: 0.1255
Epoch: 5/25... Training loss: 0.1226
Epoch: 5/25... Training loss: 0.1259
Epoch: 5/25... Training loss: 0.1281
Epoch: 5/25... Training loss: 0.1205
Epoch: 5/25... Training loss: 0.1266
Epoch: 5/25... Training loss: 0.1232
Epoch: 5/25... Training loss: 0.1275
Epoch: 5/25... Training loss: 0.1234
Epoch: 5/25... Training loss: 0.1208
Epoch: 5/25... Training loss: 0.1232
Epoch: 5/25... Training loss: 0.1215
Epoch: 5/25... Training loss: 0.1251
Epoch: 5/25... Training loss: 0.1249
Epoch: 5/25... Training loss: 0.1240
Epoch: 5/25... Training loss: 0.1225
Epoch: 5/25... Training loss: 0.1255
Epoch: 5/25... Training loss: 0.1242
Epoch: 5/25... Training loss: 0.1209
Epoch: 5/25... Training loss: 0.1203
Epoch: 5/25... Training loss: 0.1208
Epoch: 5/25... Training loss: 0.1246
Epoch: 5/25... Training loss: 0.1207
Epoch: 5/25... Training loss: 0.1264
Epoch: 5/25... Training loss: 0.1207
Epoch: 5/25... Training loss: 0.1230
Epoch: 5/25... Training loss: 0.1244
Epoch: 5/25... Training loss: 0.1262
Epoch: 5/25... Training loss: 0.1226
Epoch: 5/25... Training loss: 0.1224
Epoch: 5/25... Training loss: 0.1256
Epoch: 5/25... Training loss: 0.1247
Epoch: 5/25... Training loss: 0.1307
Epoch: 5/25... Training loss: 0.1253
Epoch: 5/25... Training loss: 0.1227
Epoch: 5/25... Training loss: 0.1251
Epoch: 5/25... Training loss: 0.1274
Epoch: 5/25... Training loss: 0.1239
Epoch: 5/25... Training loss: 0.1251
Epoch: 5/25... Training loss: 0.1258
Epoch: 5/25... Training loss: 0.1268
Epoch: 5/25... Training loss: 0.1287
Epoch: 5/25... Training loss: 0.1245
Epoch: 5/25... Training loss: 0.1262
Epoch: 5/25... Training loss: 0.1223
Epoch: 5/25... Training loss: 0.1251
Epoch: 5/25... Training loss: 0.1219
Epoch: 5/25... Training loss: 0.1237
Epoch: 5/25... Training loss: 0.1296
Epoch: 5/25... Training loss: 0.1261
Epoch: 5/25... Training loss: 0.1241
Epoch: 5/25... Training loss: 0.1265
Epoch: 5/25... Training loss: 0.1205
Epoch: 5/25... Training loss: 0.1230
Epoch: 5/25... Training loss: 0.1226
Epoch: 5/25... Training loss: 0.1229
Epoch: 5/25... Training loss: 0.1258
Epoch: 5/25... Training loss: 0.1225
Epoch: 5/25... Training loss: 0.1237
Epoch: 5/25... Training loss: 0.1252
Epoch: 5/25... Training loss: 0.1223
Epoch: 5/25... Training loss: 0.1222
Epoch: 5/25... Training loss: 0.1210
Epoch: 5/25... Training loss: 0.1248
Epoch: 5/25... Training loss: 0.1210
Epoch: 5/25... Training loss: 0.1252
Epoch: 5/25... Training loss: 0.1235
Epoch: 5/25... Training loss: 0.1235
Epoch: 5/25... Training loss: 0.1256
Epoch: 5/25... Training loss: 0.1228
Epoch: 5/25... Training loss: 0.1260
Epoch: 5/25... Training loss: 0.1249
Epoch: 5/25... Training loss: 0.1217
Epoch: 5/25... Training loss: 0.1218
Epoch: 5/25... Training loss: 0.1217
Epoch: 5/25... Training loss: 0.1223
Epoch: 5/25... Training loss: 0.1254
Epoch: 5/25... Training loss: 0.1209
Epoch: 5/25... Training loss: 0.1200
Epoch: 5/25... Training loss: 0.1221
Epoch: 5/25... Training loss: 0.1245
Epoch: 5/25... Training loss: 0.1255
Epoch: 5/25... Training loss: 0.1266
Epoch: 5/25... Training loss: 0.1212
Epoch: 5/25... Training loss: 0.1269
Epoch: 5/25... Training loss: 0.1236
Epoch: 5/25... Training loss: 0.1208
Epoch: 5/25... Training loss: 0.1242
Epoch: 5/25... Training loss: 0.1244
Epoch: 5/25... Training loss: 0.1268
Epoch: 5/25... Training loss: 0.1261
Epoch: 5/25... Training loss: 0.1246
Epoch: 5/25... Training loss: 0.1230
Epoch: 5/25... Training loss: 0.1288
Epoch: 5/25... Training loss: 0.1263
Epoch: 5/25... Training loss: 0.1275
Epoch: 5/25... Training loss: 0.1259
Epoch: 5/25... Training loss: 0.1208
Epoch: 5/25... Training loss: 0.1255
Epoch: 5/25... Training loss: 0.1259
Epoch: 5/25... Training loss: 0.1235
Epoch: 5/25... Training loss: 0.1265
Epoch: 5/25... Training loss: 0.1268
Epoch: 5/25... Training loss: 0.1207
Epoch: 5/25... Training loss: 0.1254
Epoch: 5/25... Training loss: 0.1233
Epoch: 5/25... Training loss: 0.1267
Epoch: 5/25... Training loss: 0.1201
Epoch: 5/25... Training loss: 0.1268
Epoch: 5/25... Training loss: 0.1238
Epoch: 5/25... Training loss: 0.1207
Epoch: 5/25... Training loss: 0.1263
Epoch: 5/25... Training loss: 0.1236
Epoch: 5/25... Training loss: 0.1272
Epoch: 5/25... Training loss: 0.1252
Epoch: 5/25... Training loss: 0.1236
Epoch: 5/25... Training loss: 0.1228
Epoch: 5/25... Training loss: 0.1224
Epoch: 5/25... Training loss: 0.1225
Epoch: 5/25... Training loss: 0.1220
Epoch: 5/25... Training loss: 0.1215
Epoch: 5/25... Training loss: 0.1236
Epoch: 5/25... Training loss: 0.1217
Epoch: 5/25... Training loss: 0.1228
Epoch: 5/25... Training loss: 0.1184
Epoch: 5/25... Training loss: 0.1252
Epoch: 5/25... Training loss: 0.1258
Epoch: 5/25... Training loss: 0.1258
Epoch: 5/25... Training loss: 0.1196
Epoch: 5/25... Training loss: 0.1176
Epoch: 5/25... Training loss: 0.1186
Epoch: 5/25... Training loss: 0.1225
Epoch: 5/25... Training loss: 0.1249
Epoch: 5/25... Training loss: 0.1254
Epoch: 5/25... Training loss: 0.1236
Epoch: 5/25... Training loss: 0.1237
Epoch: 5/25... Training loss: 0.1185
Epoch: 5/25... Training loss: 0.1200
Epoch: 5/25... Training loss: 0.1287
Epoch: 5/25... Training loss: 0.1211
Epoch: 5/25... Training loss: 0.1259
Epoch: 5/25... Training loss: 0.1254
Epoch: 5/25... Training loss: 0.1268
Epoch: 5/25... Training loss: 0.1199
Epoch: 5/25... Training loss: 0.1263
Epoch: 5/25... Training loss: 0.1225
Epoch: 5/25... Training loss: 0.1183
Epoch: 5/25... Training loss: 0.1226
Epoch: 5/25... Training loss: 0.1256
Epoch: 5/25... Training loss: 0.1254
Epoch: 5/25... Training loss: 0.1240
Epoch: 5/25... Training loss: 0.1253
Epoch: 5/25... Training loss: 0.1213
Epoch: 5/25... Training loss: 0.1190
Epoch: 5/25... Training loss: 0.1210
Epoch: 5/25... Training loss: 0.1257
Epoch: 5/25... Training loss: 0.1198
Epoch: 5/25... Training loss: 0.1237
Epoch: 5/25... Training loss: 0.1261
Epoch: 5/25... Training loss: 0.1217
Epoch: 5/25... Training loss: 0.1268
Epoch: 5/25... Training loss: 0.1185
Epoch: 5/25... Training loss: 0.1191
Epoch: 5/25... Training loss: 0.1228
Epoch: 5/25... Training loss: 0.1231
Epoch: 5/25... Training loss: 0.1234
Epoch: 5/25... Training loss: 0.1207
Epoch: 5/25... Training loss: 0.1250
Epoch: 5/25... Training loss: 0.1252
Epoch: 5/25... Training loss: 0.1214
Epoch: 5/25... Training loss: 0.1247
Epoch: 5/25... Training loss: 0.1271
Epoch: 5/25... Training loss: 0.1230
Epoch: 5/25... Training loss: 0.1198
Epoch: 5/25... Training loss: 0.1204
Epoch: 5/25... Training loss: 0.1249
Epoch: 5/25... Training loss: 0.1211
Epoch: 5/25... Training loss: 0.1186
Epoch: 5/25... Training loss: 0.1224
Epoch: 5/25... Training loss: 0.1250
Epoch: 5/25... Training loss: 0.1200
Epoch: 5/25... Training loss: 0.1262
Epoch: 5/25... Training loss: 0.1223
Epoch: 5/25... Training loss: 0.1240
Epoch: 5/25... Training loss: 0.1244
Epoch: 5/25... Training loss: 0.1236
Epoch: 5/25... Training loss: 0.1232
Epoch: 5/25... Training loss: 0.1188
Epoch: 5/25... Training loss: 0.1212
Epoch: 5/25... Training loss: 0.1254
Epoch: 5/25... Training loss: 0.1223
Epoch: 5/25... Training loss: 0.1218
Epoch: 5/25... Training loss: 0.1209
Epoch: 5/25... Training loss: 0.1196
Epoch: 5/25... Training loss: 0.1203
Epoch: 5/25... Training loss: 0.1213
Epoch: 5/25... Training loss: 0.1203
Epoch: 5/25... Training loss: 0.1248
Epoch: 5/25... Training loss: 0.1231
Epoch: 5/25... Training loss: 0.1243
Epoch: 5/25... Training loss: 0.1219
Epoch: 5/25... Training loss: 0.1249
Epoch: 5/25... Training loss: 0.1234
Epoch: 5/25... Training loss: 0.1210
Epoch: 5/25... Training loss: 0.1276
Epoch: 5/25... Training loss: 0.1201
Epoch: 5/25... Training loss: 0.1201
Epoch: 5/25... Training loss: 0.1222
Epoch: 5/25... Training loss: 0.1199
Epoch: 5/25... Training loss: 0.1218
Epoch: 5/25... Training loss: 0.1239
Epoch: 5/25... Training loss: 0.1222
Epoch: 5/25... Training loss: 0.1204
Epoch: 5/25... Training loss: 0.1189
Epoch: 5/25... Training loss: 0.1219
Epoch: 5/25... Training loss: 0.1206
Epoch: 5/25... Training loss: 0.1243
Epoch: 5/25... Training loss: 0.1250
Epoch: 5/25... Training loss: 0.1232
Epoch: 5/25... Training loss: 0.1227
Epoch: 5/25... Training loss: 0.1221
Epoch: 5/25... Training loss: 0.1225
Epoch: 5/25... Training loss: 0.1261
Epoch: 5/25... Training loss: 0.1178
Epoch: 6/25... Training loss: 0.1210
Epoch: 6/25... Training loss: 0.1226
Epoch: 6/25... Training loss: 0.1262
Epoch: 6/25... Training loss: 0.1218
Epoch: 6/25... Training loss: 0.1242
Epoch: 6/25... Training loss: 0.1220
Epoch: 6/25... Training loss: 0.1202
Epoch: 6/25... Training loss: 0.1211
Epoch: 6/25... Training loss: 0.1196
Epoch: 6/25... Training loss: 0.1250
Epoch: 6/25... Training loss: 0.1274
Epoch: 6/25... Training loss: 0.1205
Epoch: 6/25... Training loss: 0.1225
Epoch: 6/25... Training loss: 0.1243
Epoch: 6/25... Training loss: 0.1243
Epoch: 6/25... Training loss: 0.1205
Epoch: 6/25... Training loss: 0.1221
Epoch: 6/25... Training loss: 0.1217
Epoch: 6/25... Training loss: 0.1238
Epoch: 6/25... Training loss: 0.1230
Epoch: 6/25... Training loss: 0.1226
Epoch: 6/25... Training loss: 0.1216
Epoch: 6/25... Training loss: 0.1258
Epoch: 6/25... Training loss: 0.1215
Epoch: 6/25... Training loss: 0.1235
Epoch: 6/25... Training loss: 0.1263
Epoch: 6/25... Training loss: 0.1228
Epoch: 6/25... Training loss: 0.1250
Epoch: 6/25... Training loss: 0.1256
Epoch: 6/25... Training loss: 0.1221
Epoch: 6/25... Training loss: 0.1245
Epoch: 6/25... Training loss: 0.1237
Epoch: 6/25... Training loss: 0.1233
Epoch: 6/25... Training loss: 0.1234
Epoch: 6/25... Training loss: 0.1288
Epoch: 6/25... Training loss: 0.1226
Epoch: 6/25... Training loss: 0.1221
Epoch: 6/25... Training loss: 0.1249
Epoch: 6/25... Training loss: 0.1244
Epoch: 6/25... Training loss: 0.1190
Epoch: 6/25... Training loss: 0.1242
Epoch: 6/25... Training loss: 0.1178
Epoch: 6/25... Training loss: 0.1184
Epoch: 6/25... Training loss: 0.1222
Epoch: 6/25... Training loss: 0.1194
Epoch: 6/25... Training loss: 0.1257
Epoch: 6/25... Training loss: 0.1205
Epoch: 6/25... Training loss: 0.1174
Epoch: 6/25... Training loss: 0.1224
Epoch: 6/25... Training loss: 0.1232
Epoch: 6/25... Training loss: 0.1210
Epoch: 6/25... Training loss: 0.1215
Epoch: 6/25... Training loss: 0.1174
Epoch: 6/25... Training loss: 0.1195
Epoch: 6/25... Training loss: 0.1156
Epoch: 6/25... Training loss: 0.1239
Epoch: 6/25... Training loss: 0.1192
Epoch: 6/25... Training loss: 0.1208
Epoch: 6/25... Training loss: 0.1189
Epoch: 6/25... Training loss: 0.1219
Epoch: 6/25... Training loss: 0.1248
Epoch: 6/25... Training loss: 0.1192
Epoch: 6/25... Training loss: 0.1213
Epoch: 6/25... Training loss: 0.1208
Epoch: 6/25... Training loss: 0.1235
Epoch: 6/25... Training loss: 0.1234
Epoch: 6/25... Training loss: 0.1231
Epoch: 6/25... Training loss: 0.1197
Epoch: 6/25... Training loss: 0.1234
Epoch: 6/25... Training loss: 0.1194
Epoch: 6/25... Training loss: 0.1209
Epoch: 6/25... Training loss: 0.1241
Epoch: 6/25... Training loss: 0.1227
Epoch: 6/25... Training loss: 0.1206
Epoch: 6/25... Training loss: 0.1234
Epoch: 6/25... Training loss: 0.1246
Epoch: 6/25... Training loss: 0.1226
Epoch: 6/25... Training loss: 0.1201
Epoch: 6/25... Training loss: 0.1178
Epoch: 6/25... Training loss: 0.1227
Epoch: 6/25... Training loss: 0.1193
Epoch: 6/25... Training loss: 0.1222
Epoch: 6/25... Training loss: 0.1179
Epoch: 6/25... Training loss: 0.1223
Epoch: 6/25... Training loss: 0.1261
Epoch: 6/25... Training loss: 0.1227
Epoch: 6/25... Training loss: 0.1189
Epoch: 6/25... Training loss: 0.1167
Epoch: 6/25... Training loss: 0.1252
Epoch: 6/25... Training loss: 0.1170
Epoch: 6/25... Training loss: 0.1205
Epoch: 6/25... Training loss: 0.1167
Epoch: 6/25... Training loss: 0.1215
Epoch: 6/25... Training loss: 0.1227
Epoch: 6/25... Training loss: 0.1218
Epoch: 6/25... Training loss: 0.1197
Epoch: 6/25... Training loss: 0.1192
Epoch: 6/25... Training loss: 0.1194
Epoch: 6/25... Training loss: 0.1194
Epoch: 6/25... Training loss: 0.1230
Epoch: 6/25... Training loss: 0.1244
Epoch: 6/25... Training loss: 0.1208
Epoch: 6/25... Training loss: 0.1234
Epoch: 6/25... Training loss: 0.1211
Epoch: 6/25... Training loss: 0.1195
Epoch: 6/25... Training loss: 0.1210
Epoch: 6/25... Training loss: 0.1251
Epoch: 6/25... Training loss: 0.1186
Epoch: 6/25... Training loss: 0.1264
Epoch: 6/25... Training loss: 0.1198
Epoch: 6/25... Training loss: 0.1255
Epoch: 6/25... Training loss: 0.1209
Epoch: 6/25... Training loss: 0.1200
Epoch: 6/25... Training loss: 0.1160
Epoch: 6/25... Training loss: 0.1224
Epoch: 6/25... Training loss: 0.1184
Epoch: 6/25... Training loss: 0.1226
Epoch: 6/25... Training loss: 0.1207
Epoch: 6/25... Training loss: 0.1250
Epoch: 6/25... Training loss: 0.1161
Epoch: 6/25... Training loss: 0.1186
Epoch: 6/25... Training loss: 0.1231
Epoch: 6/25... Training loss: 0.1272
Epoch: 6/25... Training loss: 0.1234
Epoch: 6/25... Training loss: 0.1268
Epoch: 6/25... Training loss: 0.1219
Epoch: 6/25... Training loss: 0.1202
Epoch: 6/25... Training loss: 0.1218
Epoch: 6/25... Training loss: 0.1209
Epoch: 6/25... Training loss: 0.1222
Epoch: 6/25... Training loss: 0.1231
Epoch: 6/25... Training loss: 0.1218
Epoch: 6/25... Training loss: 0.1197
Epoch: 6/25... Training loss: 0.1211
Epoch: 6/25... Training loss: 0.1239
Epoch: 6/25... Training loss: 0.1218
Epoch: 6/25... Training loss: 0.1145
Epoch: 6/25... Training loss: 0.1169
Epoch: 6/25... Training loss: 0.1204
Epoch: 6/25... Training loss: 0.1200
Epoch: 6/25... Training loss: 0.1219
Epoch: 6/25... Training loss: 0.1193
Epoch: 6/25... Training loss: 0.1242
Epoch: 6/25... Training loss: 0.1213
Epoch: 6/25... Training loss: 0.1209
Epoch: 6/25... Training loss: 0.1228
Epoch: 6/25... Training loss: 0.1198
Epoch: 6/25... Training loss: 0.1202
Epoch: 6/25... Training loss: 0.1205
Epoch: 6/25... Training loss: 0.1243
Epoch: 6/25... Training loss: 0.1200
Epoch: 6/25... Training loss: 0.1231
Epoch: 6/25... Training loss: 0.1215
Epoch: 6/25... Training loss: 0.1193
Epoch: 6/25... Training loss: 0.1257
Epoch: 6/25... Training loss: 0.1188
Epoch: 6/25... Training loss: 0.1164
Epoch: 6/25... Training loss: 0.1196
Epoch: 6/25... Training loss: 0.1203
Epoch: 6/25... Training loss: 0.1206
Epoch: 6/25... Training loss: 0.1211
Epoch: 6/25... Training loss: 0.1202
Epoch: 6/25... Training loss: 0.1198
Epoch: 6/25... Training loss: 0.1215
Epoch: 6/25... Training loss: 0.1189
Epoch: 6/25... Training loss: 0.1211
Epoch: 6/25... Training loss: 0.1190
Epoch: 6/25... Training loss: 0.1180
Epoch: 6/25... Training loss: 0.1226
Epoch: 6/25... Training loss: 0.1179
Epoch: 6/25... Training loss: 0.1254
Epoch: 6/25... Training loss: 0.1231
Epoch: 6/25... Training loss: 0.1240
Epoch: 6/25... Training loss: 0.1188
Epoch: 6/25... Training loss: 0.1218
Epoch: 6/25... Training loss: 0.1197
Epoch: 6/25... Training loss: 0.1169
Epoch: 6/25... Training loss: 0.1189
Epoch: 6/25... Training loss: 0.1230
Epoch: 6/25... Training loss: 0.1194
Epoch: 6/25... Training loss: 0.1161
Epoch: 6/25... Training loss: 0.1223
Epoch: 6/25... Training loss: 0.1181
Epoch: 6/25... Training loss: 0.1186
Epoch: 6/25... Training loss: 0.1236
Epoch: 6/25... Training loss: 0.1219
Epoch: 6/25... Training loss: 0.1247
Epoch: 6/25... Training loss: 0.1154
Epoch: 6/25... Training loss: 0.1225
Epoch: 6/25... Training loss: 0.1220
Epoch: 6/25... Training loss: 0.1193
Epoch: 6/25... Training loss: 0.1186
Epoch: 6/25... Training loss: 0.1167
Epoch: 6/25... Training loss: 0.1186
Epoch: 6/25... Training loss: 0.1219
Epoch: 6/25... Training loss: 0.1208
Epoch: 6/25... Training loss: 0.1185
Epoch: 6/25... Training loss: 0.1159
Epoch: 6/25... Training loss: 0.1221
Epoch: 6/25... Training loss: 0.1190
Epoch: 6/25... Training loss: 0.1229
Epoch: 6/25... Training loss: 0.1174
Epoch: 6/25... Training loss: 0.1208
Epoch: 6/25... Training loss: 0.1220
Epoch: 6/25... Training loss: 0.1209
Epoch: 6/25... Training loss: 0.1204
Epoch: 6/25... Training loss: 0.1188
Epoch: 6/25... Training loss: 0.1192
Epoch: 6/25... Training loss: 0.1185
Epoch: 6/25... Training loss: 0.1183
Epoch: 6/25... Training loss: 0.1191
Epoch: 6/25... Training loss: 0.1170
Epoch: 6/25... Training loss: 0.1157
Epoch: 6/25... Training loss: 0.1202
Epoch: 6/25... Training loss: 0.1199
Epoch: 6/25... Training loss: 0.1191
Epoch: 6/25... Training loss: 0.1216
Epoch: 6/25... Training loss: 0.1193
Epoch: 6/25... Training loss: 0.1209
Epoch: 6/25... Training loss: 0.1154
Epoch: 6/25... Training loss: 0.1160
Epoch: 6/25... Training loss: 0.1185
Epoch: 6/25... Training loss: 0.1218
Epoch: 6/25... Training loss: 0.1201
Epoch: 6/25... Training loss: 0.1163
Epoch: 6/25... Training loss: 0.1178
Epoch: 6/25... Training loss: 0.1193
Epoch: 6/25... Training loss: 0.1181
Epoch: 6/25... Training loss: 0.1206
Epoch: 6/25... Training loss: 0.1187
Epoch: 6/25... Training loss: 0.1146
Epoch: 6/25... Training loss: 0.1173
Epoch: 6/25... Training loss: 0.1175
Epoch: 6/25... Training loss: 0.1227
Epoch: 6/25... Training loss: 0.1196
Epoch: 6/25... Training loss: 0.1164
Epoch: 6/25... Training loss: 0.1191
Epoch: 6/25... Training loss: 0.1232
Epoch: 6/25... Training loss: 0.1197
Epoch: 6/25... Training loss: 0.1247
Epoch: 6/25... Training loss: 0.1195
Epoch: 6/25... Training loss: 0.1193
Epoch: 6/25... Training loss: 0.1249
Epoch: 6/25... Training loss: 0.1142
Epoch: 6/25... Training loss: 0.1232
Epoch: 6/25... Training loss: 0.1179
Epoch: 6/25... Training loss: 0.1225
Epoch: 6/25... Training loss: 0.1232
Epoch: 6/25... Training loss: 0.1206
Epoch: 6/25... Training loss: 0.1216
Epoch: 6/25... Training loss: 0.1185
Epoch: 6/25... Training loss: 0.1222
Epoch: 6/25... Training loss: 0.1187
Epoch: 6/25... Training loss: 0.1196
Epoch: 6/25... Training loss: 0.1215
Epoch: 6/25... Training loss: 0.1183
Epoch: 6/25... Training loss: 0.1223
Epoch: 6/25... Training loss: 0.1212
Epoch: 6/25... Training loss: 0.1151
Epoch: 6/25... Training loss: 0.1195
Epoch: 6/25... Training loss: 0.1164
Epoch: 6/25... Training loss: 0.1197
Epoch: 6/25... Training loss: 0.1201
Epoch: 6/25... Training loss: 0.1176
Epoch: 6/25... Training loss: 0.1184
Epoch: 6/25... Training loss: 0.1196
Epoch: 6/25... Training loss: 0.1178
Epoch: 6/25... Training loss: 0.1182
Epoch: 6/25... Training loss: 0.1187
Epoch: 6/25... Training loss: 0.1194
Epoch: 6/25... Training loss: 0.1197
Epoch: 6/25... Training loss: 0.1186
Epoch: 6/25... Training loss: 0.1202
Epoch: 6/25... Training loss: 0.1162
Epoch: 6/25... Training loss: 0.1240
Epoch: 6/25... Training loss: 0.1236
Epoch: 6/25... Training loss: 0.1210
Epoch: 6/25... Training loss: 0.1190
Epoch: 6/25... Training loss: 0.1196
Epoch: 6/25... Training loss: 0.1169
Epoch: 6/25... Training loss: 0.1187
Epoch: 6/25... Training loss: 0.1192
Epoch: 6/25... Training loss: 0.1171
Epoch: 6/25... Training loss: 0.1206
Epoch: 6/25... Training loss: 0.1196
Epoch: 6/25... Training loss: 0.1190
Epoch: 6/25... Training loss: 0.1172
Epoch: 6/25... Training loss: 0.1168
Epoch: 6/25... Training loss: 0.1193
Epoch: 6/25... Training loss: 0.1171
Epoch: 6/25... Training loss: 0.1218
Epoch: 6/25... Training loss: 0.1196
Epoch: 6/25... Training loss: 0.1191
Epoch: 6/25... Training loss: 0.1137
Epoch: 6/25... Training loss: 0.1195
Epoch: 6/25... Training loss: 0.1202
Epoch: 6/25... Training loss: 0.1208
Epoch: 6/25... Training loss: 0.1237
Epoch: 6/25... Training loss: 0.1195
Epoch: 6/25... Training loss: 0.1199
Epoch: 7/25... Training loss: 0.1183
Epoch: 7/25... Training loss: 0.1185
Epoch: 7/25... Training loss: 0.1215
Epoch: 7/25... Training loss: 0.1171
Epoch: 7/25... Training loss: 0.1174
Epoch: 7/25... Training loss: 0.1188
Epoch: 7/25... Training loss: 0.1225
Epoch: 7/25... Training loss: 0.1187
Epoch: 7/25... Training loss: 0.1159
Epoch: 7/25... Training loss: 0.1206
Epoch: 7/25... Training loss: 0.1223
Epoch: 7/25... Training loss: 0.1227
Epoch: 7/25... Training loss: 0.1224
Epoch: 7/25... Training loss: 0.1214
Epoch: 7/25... Training loss: 0.1223
Epoch: 7/25... Training loss: 0.1241
Epoch: 7/25... Training loss: 0.1179
Epoch: 7/25... Training loss: 0.1158
Epoch: 7/25... Training loss: 0.1170
Epoch: 7/25... Training loss: 0.1206
Epoch: 7/25... Training loss: 0.1150
Epoch: 7/25... Training loss: 0.1230
Epoch: 7/25... Training loss: 0.1141
Epoch: 7/25... Training loss: 0.1194
Epoch: 7/25... Training loss: 0.1211
Epoch: 7/25... Training loss: 0.1168
Epoch: 7/25... Training loss: 0.1183
Epoch: 7/25... Training loss: 0.1177
Epoch: 7/25... Training loss: 0.1195
Epoch: 7/25... Training loss: 0.1178
Epoch: 7/25... Training loss: 0.1217
Epoch: 7/25... Training loss: 0.1219
Epoch: 7/25... Training loss: 0.1162
Epoch: 7/25... Training loss: 0.1172
Epoch: 7/25... Training loss: 0.1189
Epoch: 7/25... Training loss: 0.1194
Epoch: 7/25... Training loss: 0.1166
Epoch: 7/25... Training loss: 0.1171
Epoch: 7/25... Training loss: 0.1189
Epoch: 7/25... Training loss: 0.1189
Epoch: 7/25... Training loss: 0.1238
Epoch: 7/25... Training loss: 0.1202
Epoch: 7/25... Training loss: 0.1203
Epoch: 7/25... Training loss: 0.1213
Epoch: 7/25... Training loss: 0.1180
Epoch: 7/25... Training loss: 0.1188
Epoch: 7/25... Training loss: 0.1160
Epoch: 7/25... Training loss: 0.1186
Epoch: 7/25... Training loss: 0.1217
Epoch: 7/25... Training loss: 0.1187
Epoch: 7/25... Training loss: 0.1199
Epoch: 7/25... Training loss: 0.1211
Epoch: 7/25... Training loss: 0.1193
Epoch: 7/25... Training loss: 0.1162
Epoch: 7/25... Training loss: 0.1173
Epoch: 7/25... Training loss: 0.1196
Epoch: 7/25... Training loss: 0.1188
Epoch: 7/25... Training loss: 0.1157
Epoch: 7/25... Training loss: 0.1160
Epoch: 7/25... Training loss: 0.1159
Epoch: 7/25... Training loss: 0.1192
Epoch: 7/25... Training loss: 0.1198
Epoch: 7/25... Training loss: 0.1195
Epoch: 7/25... Training loss: 0.1164
Epoch: 7/25... Training loss: 0.1144
Epoch: 7/25... Training loss: 0.1221
Epoch: 7/25... Training loss: 0.1141
Epoch: 7/25... Training loss: 0.1218
Epoch: 7/25... Training loss: 0.1136
Epoch: 7/25... Training loss: 0.1207
Epoch: 7/25... Training loss: 0.1196
Epoch: 7/25... Training loss: 0.1197
Epoch: 7/25... Training loss: 0.1197
Epoch: 7/25... Training loss: 0.1167
Epoch: 7/25... Training loss: 0.1156
Epoch: 7/25... Training loss: 0.1198
Epoch: 7/25... Training loss: 0.1187
Epoch: 7/25... Training loss: 0.1219
Epoch: 7/25... Training loss: 0.1160
Epoch: 7/25... Training loss: 0.1168
Epoch: 7/25... Training loss: 0.1207
Epoch: 7/25... Training loss: 0.1214
Epoch: 7/25... Training loss: 0.1196
Epoch: 7/25... Training loss: 0.1207
Epoch: 7/25... Training loss: 0.1192
Epoch: 7/25... Training loss: 0.1167
Epoch: 7/25... Training loss: 0.1204
Epoch: 7/25... Training loss: 0.1134
Epoch: 7/25... Training loss: 0.1194
Epoch: 7/25... Training loss: 0.1189
Epoch: 7/25... Training loss: 0.1184
Epoch: 7/25... Training loss: 0.1170
Epoch: 7/25... Training loss: 0.1158
Epoch: 7/25... Training loss: 0.1189
Epoch: 7/25... Training loss: 0.1212
Epoch: 7/25... Training loss: 0.1154
Epoch: 7/25... Training loss: 0.1184
Epoch: 7/25... Training loss: 0.1229
Epoch: 7/25... Training loss: 0.1156
Epoch: 7/25... Training loss: 0.1188
Epoch: 7/25... Training loss: 0.1202
Epoch: 7/25... Training loss: 0.1183
Epoch: 7/25... Training loss: 0.1216
Epoch: 7/25... Training loss: 0.1156
Epoch: 7/25... Training loss: 0.1155
Epoch: 7/25... Training loss: 0.1168
Epoch: 7/25... Training loss: 0.1177
Epoch: 7/25... Training loss: 0.1153
Epoch: 7/25... Training loss: 0.1164
Epoch: 7/25... Training loss: 0.1180
Epoch: 7/25... Training loss: 0.1146
Epoch: 7/25... Training loss: 0.1185
Epoch: 7/25... Training loss: 0.1185
Epoch: 7/25... Training loss: 0.1166
Epoch: 7/25... Training loss: 0.1197
Epoch: 7/25... Training loss: 0.1152
Epoch: 7/25... Training loss: 0.1172
Epoch: 7/25... Training loss: 0.1159
Epoch: 7/25... Training loss: 0.1200
Epoch: 7/25... Training loss: 0.1149
Epoch: 7/25... Training loss: 0.1181
Epoch: 7/25... Training loss: 0.1192
Epoch: 7/25... Training loss: 0.1177
Epoch: 7/25... Training loss: 0.1220
Epoch: 7/25... Training loss: 0.1189
Epoch: 7/25... Training loss: 0.1171
Epoch: 7/25... Training loss: 0.1186
Epoch: 7/25... Training loss: 0.1159
Epoch: 7/25... Training loss: 0.1170
Epoch: 7/25... Training loss: 0.1178
Epoch: 7/25... Training loss: 0.1190
Epoch: 7/25... Training loss: 0.1189
Epoch: 7/25... Training loss: 0.1151
Epoch: 7/25... Training loss: 0.1189
Epoch: 7/25... Training loss: 0.1161
Epoch: 7/25... Training loss: 0.1124
Epoch: 7/25... Training loss: 0.1135
Epoch: 7/25... Training loss: 0.1200
Epoch: 7/25... Training loss: 0.1197
Epoch: 7/25... Training loss: 0.1178
Epoch: 7/25... Training loss: 0.1204
Epoch: 7/25... Training loss: 0.1200
Epoch: 7/25... Training loss: 0.1186
Epoch: 7/25... Training loss: 0.1199
Epoch: 7/25... Training loss: 0.1219
Epoch: 7/25... Training loss: 0.1209
Epoch: 7/25... Training loss: 0.1219
Epoch: 7/25... Training loss: 0.1237
Epoch: 7/25... Training loss: 0.1212
Epoch: 7/25... Training loss: 0.1193
Epoch: 7/25... Training loss: 0.1173
Epoch: 7/25... Training loss: 0.1174
Epoch: 7/25... Training loss: 0.1195
Epoch: 7/25... Training loss: 0.1197
Epoch: 7/25... Training loss: 0.1140
Epoch: 7/25... Training loss: 0.1198
Epoch: 7/25... Training loss: 0.1196
Epoch: 7/25... Training loss: 0.1168
Epoch: 7/25... Training loss: 0.1195
Epoch: 7/25... Training loss: 0.1186
Epoch: 7/25... Training loss: 0.1166
Epoch: 7/25... Training loss: 0.1226
Epoch: 7/25... Training loss: 0.1181
Epoch: 7/25... Training loss: 0.1218
Epoch: 7/25... Training loss: 0.1196
Epoch: 7/25... Training loss: 0.1209
Epoch: 7/25... Training loss: 0.1163
Epoch: 7/25... Training loss: 0.1180
Epoch: 7/25... Training loss: 0.1182
Epoch: 7/25... Training loss: 0.1204
Epoch: 7/25... Training loss: 0.1173
Epoch: 7/25... Training loss: 0.1172
Epoch: 7/25... Training loss: 0.1150
Epoch: 7/25... Training loss: 0.1166
Epoch: 7/25... Training loss: 0.1159
Epoch: 7/25... Training loss: 0.1153
Epoch: 7/25... Training loss: 0.1195
Epoch: 7/25... Training loss: 0.1138
Epoch: 7/25... Training loss: 0.1205
Epoch: 7/25... Training loss: 0.1179
Epoch: 7/25... Training loss: 0.1169
Epoch: 7/25... Training loss: 0.1214
Epoch: 7/25... Training loss: 0.1205
Epoch: 7/25... Training loss: 0.1191
Epoch: 7/25... Training loss: 0.1172
Epoch: 7/25... Training loss: 0.1245
Epoch: 7/25... Training loss: 0.1146
Epoch: 7/25... Training loss: 0.1179
Epoch: 7/25... Training loss: 0.1176
Epoch: 7/25... Training loss: 0.1208
Epoch: 7/25... Training loss: 0.1199
Epoch: 7/25... Training loss: 0.1186
Epoch: 7/25... Training loss: 0.1203
Epoch: 7/25... Training loss: 0.1184
Epoch: 7/25... Training loss: 0.1162
Epoch: 7/25... Training loss: 0.1168
Epoch: 7/25... Training loss: 0.1155
Epoch: 7/25... Training loss: 0.1160
Epoch: 7/25... Training loss: 0.1179
Epoch: 7/25... Training loss: 0.1175
Epoch: 7/25... Training loss: 0.1172
Epoch: 7/25... Training loss: 0.1226
Epoch: 7/25... Training loss: 0.1181
Epoch: 7/25... Training loss: 0.1142
Epoch: 7/25... Training loss: 0.1147
Epoch: 7/25... Training loss: 0.1169
Epoch: 7/25... Training loss: 0.1181
Epoch: 7/25... Training loss: 0.1159
Epoch: 7/25... Training loss: 0.1194
Epoch: 7/25... Training loss: 0.1181
Epoch: 7/25... Training loss: 0.1181
Epoch: 7/25... Training loss: 0.1125
Epoch: 7/25... Training loss: 0.1184
Epoch: 7/25... Training loss: 0.1153
Epoch: 7/25... Training loss: 0.1200
Epoch: 7/25... Training loss: 0.1184
Epoch: 7/25... Training loss: 0.1155
Epoch: 7/25... Training loss: 0.1157
Epoch: 7/25... Training loss: 0.1196
Epoch: 7/25... Training loss: 0.1150
Epoch: 7/25... Training loss: 0.1186
Epoch: 7/25... Training loss: 0.1190
Epoch: 7/25... Training loss: 0.1208
Epoch: 7/25... Training loss: 0.1161
Epoch: 7/25... Training loss: 0.1192
Epoch: 7/25... Training loss: 0.1191
Epoch: 7/25... Training loss: 0.1191
Epoch: 7/25... Training loss: 0.1185
Epoch: 7/25... Training loss: 0.1178
Epoch: 7/25... Training loss: 0.1171
Epoch: 7/25... Training loss: 0.1171
Epoch: 7/25... Training loss: 0.1185
Epoch: 7/25... Training loss: 0.1168
Epoch: 7/25... Training loss: 0.1179
Epoch: 7/25... Training loss: 0.1193
Epoch: 7/25... Training loss: 0.1160
Epoch: 7/25... Training loss: 0.1176
Epoch: 7/25... Training loss: 0.1138
Epoch: 7/25... Training loss: 0.1205
Epoch: 7/25... Training loss: 0.1148
Epoch: 7/25... Training loss: 0.1159
Epoch: 7/25... Training loss: 0.1186
Epoch: 7/25... Training loss: 0.1164
Epoch: 7/25... Training loss: 0.1159
Epoch: 7/25... Training loss: 0.1149
Epoch: 7/25... Training loss: 0.1162
Epoch: 7/25... Training loss: 0.1196
Epoch: 7/25... Training loss: 0.1172
Epoch: 7/25... Training loss: 0.1218
Epoch: 7/25... Training loss: 0.1166
Epoch: 7/25... Training loss: 0.1173
Epoch: 7/25... Training loss: 0.1146
Epoch: 7/25... Training loss: 0.1201
Epoch: 7/25... Training loss: 0.1179
Epoch: 7/25... Training loss: 0.1147
Epoch: 7/25... Training loss: 0.1173
Epoch: 7/25... Training loss: 0.1225
Epoch: 7/25... Training loss: 0.1135
Epoch: 7/25... Training loss: 0.1170
Epoch: 7/25... Training loss: 0.1193
Epoch: 7/25... Training loss: 0.1138
Epoch: 7/25... Training loss: 0.1146
Epoch: 7/25... Training loss: 0.1135
Epoch: 7/25... Training loss: 0.1132
Epoch: 7/25... Training loss: 0.1218
Epoch: 7/25... Training loss: 0.1184
Epoch: 7/25... Training loss: 0.1179
Epoch: 7/25... Training loss: 0.1208
Epoch: 7/25... Training loss: 0.1198
Epoch: 7/25... Training loss: 0.1170
Epoch: 7/25... Training loss: 0.1179
Epoch: 7/25... Training loss: 0.1194
Epoch: 7/25... Training loss: 0.1197
Epoch: 7/25... Training loss: 0.1133
Epoch: 7/25... Training loss: 0.1186
Epoch: 7/25... Training loss: 0.1169
Epoch: 7/25... Training loss: 0.1124
Epoch: 7/25... Training loss: 0.1168
Epoch: 7/25... Training loss: 0.1154
Epoch: 7/25... Training loss: 0.1166
Epoch: 7/25... Training loss: 0.1158
Epoch: 7/25... Training loss: 0.1158
Epoch: 7/25... Training loss: 0.1161
Epoch: 7/25... Training loss: 0.1126
Epoch: 7/25... Training loss: 0.1211
Epoch: 7/25... Training loss: 0.1194
Epoch: 7/25... Training loss: 0.1191
Epoch: 7/25... Training loss: 0.1204
Epoch: 7/25... Training loss: 0.1186
Epoch: 7/25... Training loss: 0.1150
Epoch: 7/25... Training loss: 0.1186
Epoch: 7/25... Training loss: 0.1152
Epoch: 7/25... Training loss: 0.1161
Epoch: 7/25... Training loss: 0.1134
Epoch: 7/25... Training loss: 0.1141
Epoch: 7/25... Training loss: 0.1129
Epoch: 7/25... Training loss: 0.1187
Epoch: 7/25... Training loss: 0.1169
Epoch: 7/25... Training loss: 0.1220
Epoch: 7/25... Training loss: 0.1180
Epoch: 8/25... Training loss: 0.1166
Epoch: 8/25... Training loss: 0.1187
Epoch: 8/25... Training loss: 0.1139
Epoch: 8/25... Training loss: 0.1168
Epoch: 8/25... Training loss: 0.1198
Epoch: 8/25... Training loss: 0.1146
Epoch: 8/25... Training loss: 0.1176
Epoch: 8/25... Training loss: 0.1178
Epoch: 8/25... Training loss: 0.1177
Epoch: 8/25... Training loss: 0.1180
Epoch: 8/25... Training loss: 0.1144
Epoch: 8/25... Training loss: 0.1159
Epoch: 8/25... Training loss: 0.1154
Epoch: 8/25... Training loss: 0.1180
Epoch: 8/25... Training loss: 0.1137
Epoch: 8/25... Training loss: 0.1174
Epoch: 8/25... Training loss: 0.1181
Epoch: 8/25... Training loss: 0.1190
Epoch: 8/25... Training loss: 0.1173
Epoch: 8/25... Training loss: 0.1174
Epoch: 8/25... Training loss: 0.1134
Epoch: 8/25... Training loss: 0.1170
Epoch: 8/25... Training loss: 0.1182
Epoch: 8/25... Training loss: 0.1158
Epoch: 8/25... Training loss: 0.1149
Epoch: 8/25... Training loss: 0.1143
Epoch: 8/25... Training loss: 0.1193
Epoch: 8/25... Training loss: 0.1201
Epoch: 8/25... Training loss: 0.1193
Epoch: 8/25... Training loss: 0.1191
Epoch: 8/25... Training loss: 0.1164
Epoch: 8/25... Training loss: 0.1125
Epoch: 8/25... Training loss: 0.1149
Epoch: 8/25... Training loss: 0.1198
Epoch: 8/25... Training loss: 0.1177
Epoch: 8/25... Training loss: 0.1169
Epoch: 8/25... Training loss: 0.1154
Epoch: 8/25... Training loss: 0.1184
Epoch: 8/25... Training loss: 0.1171
Epoch: 8/25... Training loss: 0.1187
Epoch: 8/25... Training loss: 0.1153
Epoch: 8/25... Training loss: 0.1196
Epoch: 8/25... Training loss: 0.1148
Epoch: 8/25... Training loss: 0.1187
Epoch: 8/25... Training loss: 0.1159
Epoch: 8/25... Training loss: 0.1177
Epoch: 8/25... Training loss: 0.1190
Epoch: 8/25... Training loss: 0.1191
Epoch: 8/25... Training loss: 0.1181
Epoch: 8/25... Training loss: 0.1202
Epoch: 8/25... Training loss: 0.1140
Epoch: 8/25... Training loss: 0.1193
Epoch: 8/25... Training loss: 0.1145
Epoch: 8/25... Training loss: 0.1174
Epoch: 8/25... Training loss: 0.1189
Epoch: 8/25... Training loss: 0.1223
Epoch: 8/25... Training loss: 0.1123
Epoch: 8/25... Training loss: 0.1160
Epoch: 8/25... Training loss: 0.1238
Epoch: 8/25... Training loss: 0.1178
Epoch: 8/25... Training loss: 0.1134
Epoch: 8/25... Training loss: 0.1171
Epoch: 8/25... Training loss: 0.1138
Epoch: 8/25... Training loss: 0.1155
Epoch: 8/25... Training loss: 0.1128
Epoch: 8/25... Training loss: 0.1122
Epoch: 8/25... Training loss: 0.1138
Epoch: 8/25... Training loss: 0.1196
Epoch: 8/25... Training loss: 0.1152
Epoch: 8/25... Training loss: 0.1143
Epoch: 8/25... Training loss: 0.1140
Epoch: 8/25... Training loss: 0.1164
Epoch: 8/25... Training loss: 0.1188
Epoch: 8/25... Training loss: 0.1150
Epoch: 8/25... Training loss: 0.1172
Epoch: 8/25... Training loss: 0.1161
Epoch: 8/25... Training loss: 0.1198
Epoch: 8/25... Training loss: 0.1124
Epoch: 8/25... Training loss: 0.1173
Epoch: 8/25... Training loss: 0.1133
Epoch: 8/25... Training loss: 0.1150
Epoch: 8/25... Training loss: 0.1175
Epoch: 8/25... Training loss: 0.1083
Epoch: 8/25... Training loss: 0.1150
Epoch: 8/25... Training loss: 0.1173
Epoch: 8/25... Training loss: 0.1175
Epoch: 8/25... Training loss: 0.1169
Epoch: 8/25... Training loss: 0.1185
Epoch: 8/25... Training loss: 0.1182
Epoch: 8/25... Training loss: 0.1111
Epoch: 8/25... Training loss: 0.1183
Epoch: 8/25... Training loss: 0.1141
Epoch: 8/25... Training loss: 0.1165
Epoch: 8/25... Training loss: 0.1186
Epoch: 8/25... Training loss: 0.1160
Epoch: 8/25... Training loss: 0.1189
Epoch: 8/25... Training loss: 0.1180
Epoch: 8/25... Training loss: 0.1145
Epoch: 8/25... Training loss: 0.1175
Epoch: 8/25... Training loss: 0.1181
Epoch: 8/25... Training loss: 0.1154
Epoch: 8/25... Training loss: 0.1176
Epoch: 8/25... Training loss: 0.1157
Epoch: 8/25... Training loss: 0.1174
Epoch: 8/25... Training loss: 0.1139
Epoch: 8/25... Training loss: 0.1190
Epoch: 8/25... Training loss: 0.1199
Epoch: 8/25... Training loss: 0.1197
Epoch: 8/25... Training loss: 0.1154
Epoch: 8/25... Training loss: 0.1153
Epoch: 8/25... Training loss: 0.1168
Epoch: 8/25... Training loss: 0.1155
Epoch: 8/25... Training loss: 0.1162
Epoch: 8/25... Training loss: 0.1176
Epoch: 8/25... Training loss: 0.1178
Epoch: 8/25... Training loss: 0.1136
Epoch: 8/25... Training loss: 0.1167
Epoch: 8/25... Training loss: 0.1188
Epoch: 8/25... Training loss: 0.1187
Epoch: 8/25... Training loss: 0.1224
Epoch: 8/25... Training loss: 0.1150
Epoch: 8/25... Training loss: 0.1147
Epoch: 8/25... Training loss: 0.1156
Epoch: 8/25... Training loss: 0.1178
Epoch: 8/25... Training loss: 0.1175
Epoch: 8/25... Training loss: 0.1155
Epoch: 8/25... Training loss: 0.1110
Epoch: 8/25... Training loss: 0.1126
Epoch: 8/25... Training loss: 0.1179
Epoch: 8/25... Training loss: 0.1181
Epoch: 8/25... Training loss: 0.1151
Epoch: 8/25... Training loss: 0.1183
Epoch: 8/25... Training loss: 0.1184
Epoch: 8/25... Training loss: 0.1163
Epoch: 8/25... Training loss: 0.1132
Epoch: 8/25... Training loss: 0.1109
Epoch: 8/25... Training loss: 0.1184
Epoch: 8/25... Training loss: 0.1190
Epoch: 8/25... Training loss: 0.1153
Epoch: 8/25... Training loss: 0.1119
Epoch: 8/25... Training loss: 0.1193
Epoch: 8/25... Training loss: 0.1132
Epoch: 8/25... Training loss: 0.1175
Epoch: 8/25... Training loss: 0.1140
Epoch: 8/25... Training loss: 0.1190
Epoch: 8/25... Training loss: 0.1126
Epoch: 8/25... Training loss: 0.1170
Epoch: 8/25... Training loss: 0.1146
Epoch: 8/25... Training loss: 0.1150
Epoch: 8/25... Training loss: 0.1164
Epoch: 8/25... Training loss: 0.1120
Epoch: 8/25... Training loss: 0.1181
Epoch: 8/25... Training loss: 0.1210
Epoch: 8/25... Training loss: 0.1154
Epoch: 8/25... Training loss: 0.1141
Epoch: 8/25... Training loss: 0.1129
Epoch: 8/25... Training loss: 0.1167
Epoch: 8/25... Training loss: 0.1153
Epoch: 8/25... Training loss: 0.1204
Epoch: 8/25... Training loss: 0.1126
Epoch: 8/25... Training loss: 0.1202
Epoch: 8/25... Training loss: 0.1145
Epoch: 8/25... Training loss: 0.1147
Epoch: 8/25... Training loss: 0.1198
Epoch: 8/25... Training loss: 0.1147
Epoch: 8/25... Training loss: 0.1153
Epoch: 8/25... Training loss: 0.1175
Epoch: 8/25... Training loss: 0.1167
Epoch: 8/25... Training loss: 0.1150
Epoch: 8/25... Training loss: 0.1176
Epoch: 8/25... Training loss: 0.1173
Epoch: 8/25... Training loss: 0.1192
Epoch: 8/25... Training loss: 0.1135
Epoch: 8/25... Training loss: 0.1142
Epoch: 8/25... Training loss: 0.1166
Epoch: 8/25... Training loss: 0.1152
Epoch: 8/25... Training loss: 0.1113
Epoch: 8/25... Training loss: 0.1159
Epoch: 8/25... Training loss: 0.1131
Epoch: 8/25... Training loss: 0.1181
Epoch: 8/25... Training loss: 0.1117
Epoch: 8/25... Training loss: 0.1160
Epoch: 8/25... Training loss: 0.1165
Epoch: 8/25... Training loss: 0.1137
Epoch: 8/25... Training loss: 0.1174
Epoch: 8/25... Training loss: 0.1194
Epoch: 8/25... Training loss: 0.1114
Epoch: 8/25... Training loss: 0.1171
Epoch: 8/25... Training loss: 0.1140
Epoch: 8/25... Training loss: 0.1172
Epoch: 8/25... Training loss: 0.1158
Epoch: 8/25... Training loss: 0.1152
Epoch: 8/25... Training loss: 0.1165
Epoch: 8/25... Training loss: 0.1152
Epoch: 8/25... Training loss: 0.1178
Epoch: 8/25... Training loss: 0.1185
Epoch: 8/25... Training loss: 0.1127
Epoch: 8/25... Training loss: 0.1194
Epoch: 8/25... Training loss: 0.1177
Epoch: 8/25... Training loss: 0.1216
Epoch: 8/25... Training loss: 0.1185
Epoch: 8/25... Training loss: 0.1196
Epoch: 8/25... Training loss: 0.1178
Epoch: 8/25... Training loss: 0.1147
Epoch: 8/25... Training loss: 0.1150
Epoch: 8/25... Training loss: 0.1193
Epoch: 8/25... Training loss: 0.1137
Epoch: 8/25... Training loss: 0.1126
Epoch: 8/25... Training loss: 0.1147
Epoch: 8/25... Training loss: 0.1219
Epoch: 8/25... Training loss: 0.1140
Epoch: 8/25... Training loss: 0.1135
Epoch: 8/25... Training loss: 0.1159
Epoch: 8/25... Training loss: 0.1140
Epoch: 8/25... Training loss: 0.1142
Epoch: 8/25... Training loss: 0.1159
Epoch: 8/25... Training loss: 0.1171
Epoch: 8/25... Training loss: 0.1163
Epoch: 8/25... Training loss: 0.1155
Epoch: 8/25... Training loss: 0.1146
Epoch: 8/25... Training loss: 0.1159
Epoch: 8/25... Training loss: 0.1190
Epoch: 8/25... Training loss: 0.1154
Epoch: 8/25... Training loss: 0.1149
Epoch: 8/25... Training loss: 0.1126
Epoch: 8/25... Training loss: 0.1185
Epoch: 8/25... Training loss: 0.1168
Epoch: 8/25... Training loss: 0.1175
Epoch: 8/25... Training loss: 0.1222
Epoch: 8/25... Training loss: 0.1203
Epoch: 8/25... Training loss: 0.1165
Epoch: 8/25... Training loss: 0.1149
Epoch: 8/25... Training loss: 0.1124
Epoch: 8/25... Training loss: 0.1196
Epoch: 8/25... Training loss: 0.1175
Epoch: 8/25... Training loss: 0.1169
Epoch: 8/25... Training loss: 0.1173
Epoch: 8/25... Training loss: 0.1158
Epoch: 8/25... Training loss: 0.1131
Epoch: 8/25... Training loss: 0.1169
Epoch: 8/25... Training loss: 0.1177
Epoch: 8/25... Training loss: 0.1151
Epoch: 8/25... Training loss: 0.1136
Epoch: 8/25... Training loss: 0.1135
Epoch: 8/25... Training loss: 0.1174
Epoch: 8/25... Training loss: 0.1192
Epoch: 8/25... Training loss: 0.1184
Epoch: 8/25... Training loss: 0.1199
Epoch: 8/25... Training loss: 0.1129
Epoch: 8/25... Training loss: 0.1166
Epoch: 8/25... Training loss: 0.1171
Epoch: 8/25... Training loss: 0.1150
Epoch: 8/25... Training loss: 0.1221
Epoch: 8/25... Training loss: 0.1167
Epoch: 8/25... Training loss: 0.1173
Epoch: 8/25... Training loss: 0.1183
Epoch: 8/25... Training loss: 0.1145
Epoch: 8/25... Training loss: 0.1182
Epoch: 8/25... Training loss: 0.1169
Epoch: 8/25... Training loss: 0.1173
Epoch: 8/25... Training loss: 0.1177
Epoch: 8/25... Training loss: 0.1134
Epoch: 8/25... Training loss: 0.1179
Epoch: 8/25... Training loss: 0.1174
Epoch: 8/25... Training loss: 0.1153
Epoch: 8/25... Training loss: 0.1172
Epoch: 8/25... Training loss: 0.1137
Epoch: 8/25... Training loss: 0.1173
Epoch: 8/25... Training loss: 0.1146
Epoch: 8/25... Training loss: 0.1177
Epoch: 8/25... Training loss: 0.1140
Epoch: 8/25... Training loss: 0.1174
Epoch: 8/25... Training loss: 0.1137
Epoch: 8/25... Training loss: 0.1114
Epoch: 8/25... Training loss: 0.1155
Epoch: 8/25... Training loss: 0.1165
Epoch: 8/25... Training loss: 0.1168
Epoch: 8/25... Training loss: 0.1129
Epoch: 8/25... Training loss: 0.1144
Epoch: 8/25... Training loss: 0.1125
Epoch: 8/25... Training loss: 0.1139
Epoch: 8/25... Training loss: 0.1130
Epoch: 8/25... Training loss: 0.1142
Epoch: 8/25... Training loss: 0.1180
Epoch: 8/25... Training loss: 0.1158
Epoch: 8/25... Training loss: 0.1164
Epoch: 8/25... Training loss: 0.1131
Epoch: 8/25... Training loss: 0.1154
Epoch: 8/25... Training loss: 0.1200
Epoch: 8/25... Training loss: 0.1159
Epoch: 8/25... Training loss: 0.1144
Epoch: 8/25... Training loss: 0.1186
Epoch: 8/25... Training loss: 0.1139
Epoch: 8/25... Training loss: 0.1136
Epoch: 8/25... Training loss: 0.1139
Epoch: 8/25... Training loss: 0.1142
Epoch: 8/25... Training loss: 0.1139
Epoch: 8/25... Training loss: 0.1160
Epoch: 8/25... Training loss: 0.1115
Epoch: 8/25... Training loss: 0.1159
Epoch: 9/25... Training loss: 0.1185
Epoch: 9/25... Training loss: 0.1164
Epoch: 9/25... Training loss: 0.1171
Epoch: 9/25... Training loss: 0.1149
Epoch: 9/25... Training loss: 0.1153
Epoch: 9/25... Training loss: 0.1139
Epoch: 9/25... Training loss: 0.1167
Epoch: 9/25... Training loss: 0.1201
Epoch: 9/25... Training loss: 0.1172
Epoch: 9/25... Training loss: 0.1150
Epoch: 9/25... Training loss: 0.1130
Epoch: 9/25... Training loss: 0.1153
Epoch: 9/25... Training loss: 0.1146
Epoch: 9/25... Training loss: 0.1130
Epoch: 9/25... Training loss: 0.1197
Epoch: 9/25... Training loss: 0.1161
Epoch: 9/25... Training loss: 0.1141
Epoch: 9/25... Training loss: 0.1157
Epoch: 9/25... Training loss: 0.1193
Epoch: 9/25... Training loss: 0.1164
Epoch: 9/25... Training loss: 0.1168
Epoch: 9/25... Training loss: 0.1151
Epoch: 9/25... Training loss: 0.1131
Epoch: 9/25... Training loss: 0.1170
Epoch: 9/25... Training loss: 0.1156
Epoch: 9/25... Training loss: 0.1131
Epoch: 9/25... Training loss: 0.1213
Epoch: 9/25... Training loss: 0.1166
Epoch: 9/25... Training loss: 0.1144
Epoch: 9/25... Training loss: 0.1147
Epoch: 9/25... Training loss: 0.1189
Epoch: 9/25... Training loss: 0.1175
Epoch: 9/25... Training loss: 0.1157
Epoch: 9/25... Training loss: 0.1157
Epoch: 9/25... Training loss: 0.1181
Epoch: 9/25... Training loss: 0.1167
Epoch: 9/25... Training loss: 0.1180
Epoch: 9/25... Training loss: 0.1137
Epoch: 9/25... Training loss: 0.1189
Epoch: 9/25... Training loss: 0.1115
Epoch: 9/25... Training loss: 0.1145
Epoch: 9/25... Training loss: 0.1135
Epoch: 9/25... Training loss: 0.1215
Epoch: 9/25... Training loss: 0.1162
Epoch: 9/25... Training loss: 0.1108
Epoch: 9/25... Training loss: 0.1156
Epoch: 9/25... Training loss: 0.1103
Epoch: 9/25... Training loss: 0.1121
Epoch: 9/25... Training loss: 0.1195
Epoch: 9/25... Training loss: 0.1136
Epoch: 9/25... Training loss: 0.1142
Epoch: 9/25... Training loss: 0.1161
Epoch: 9/25... Training loss: 0.1113
Epoch: 9/25... Training loss: 0.1167
Epoch: 9/25... Training loss: 0.1169
Epoch: 9/25... Training loss: 0.1158
Epoch: 9/25... Training loss: 0.1138
Epoch: 9/25... Training loss: 0.1152
Epoch: 9/25... Training loss: 0.1178
Epoch: 9/25... Training loss: 0.1168
Epoch: 9/25... Training loss: 0.1166
Epoch: 9/25... Training loss: 0.1204
Epoch: 9/25... Training loss: 0.1183
Epoch: 9/25... Training loss: 0.1100
Epoch: 9/25... Training loss: 0.1126
Epoch: 9/25... Training loss: 0.1157
Epoch: 9/25... Training loss: 0.1134
Epoch: 9/25... Training loss: 0.1130
Epoch: 9/25... Training loss: 0.1151
Epoch: 9/25... Training loss: 0.1164
Epoch: 9/25... Training loss: 0.1128
Epoch: 9/25... Training loss: 0.1137
Epoch: 9/25... Training loss: 0.1139
Epoch: 9/25... Training loss: 0.1131
Epoch: 9/25... Training loss: 0.1135
Epoch: 9/25... Training loss: 0.1138
Epoch: 9/25... Training loss: 0.1129
Epoch: 9/25... Training loss: 0.1153
Epoch: 9/25... Training loss: 0.1113
Epoch: 9/25... Training loss: 0.1169
Epoch: 9/25... Training loss: 0.1149
Epoch: 9/25... Training loss: 0.1187
Epoch: 9/25... Training loss: 0.1163
Epoch: 9/25... Training loss: 0.1158
Epoch: 9/25... Training loss: 0.1143
Epoch: 9/25... Training loss: 0.1125
Epoch: 9/25... Training loss: 0.1114
Epoch: 9/25... Training loss: 0.1153
Epoch: 9/25... Training loss: 0.1165
Epoch: 9/25... Training loss: 0.1141
Epoch: 9/25... Training loss: 0.1143
Epoch: 9/25... Training loss: 0.1166
Epoch: 9/25... Training loss: 0.1129
Epoch: 9/25... Training loss: 0.1127
Epoch: 9/25... Training loss: 0.1132
Epoch: 9/25... Training loss: 0.1148
Epoch: 9/25... Training loss: 0.1185
Epoch: 9/25... Training loss: 0.1148
Epoch: 9/25... Training loss: 0.1195
Epoch: 9/25... Training loss: 0.1139
Epoch: 9/25... Training loss: 0.1150
Epoch: 9/25... Training loss: 0.1139
Epoch: 9/25... Training loss: 0.1134
Epoch: 9/25... Training loss: 0.1140
Epoch: 9/25... Training loss: 0.1135
Epoch: 9/25... Training loss: 0.1138
Epoch: 9/25... Training loss: 0.1155
Epoch: 9/25... Training loss: 0.1150
Epoch: 9/25... Training loss: 0.1131
Epoch: 9/25... Training loss: 0.1163
Epoch: 9/25... Training loss: 0.1165
Epoch: 9/25... Training loss: 0.1137
Epoch: 9/25... Training loss: 0.1164
Epoch: 9/25... Training loss: 0.1151
Epoch: 9/25... Training loss: 0.1136
Epoch: 9/25... Training loss: 0.1124
Epoch: 9/25... Training loss: 0.1097
Epoch: 9/25... Training loss: 0.1179
Epoch: 9/25... Training loss: 0.1104
Epoch: 9/25... Training loss: 0.1158
Epoch: 9/25... Training loss: 0.1145
Epoch: 9/25... Training loss: 0.1132
Epoch: 9/25... Training loss: 0.1149
Epoch: 9/25... Training loss: 0.1122
Epoch: 9/25... Training loss: 0.1128
Epoch: 9/25... Training loss: 0.1130
Epoch: 9/25... Training loss: 0.1124
Epoch: 9/25... Training loss: 0.1134
Epoch: 9/25... Training loss: 0.1129
Epoch: 9/25... Training loss: 0.1145
Epoch: 9/25... Training loss: 0.1182
Epoch: 9/25... Training loss: 0.1158
Epoch: 9/25... Training loss: 0.1137
Epoch: 9/25... Training loss: 0.1163
Epoch: 9/25... Training loss: 0.1160
Epoch: 9/25... Training loss: 0.1164
Epoch: 9/25... Training loss: 0.1151
Epoch: 9/25... Training loss: 0.1121
Epoch: 9/25... Training loss: 0.1137
Epoch: 9/25... Training loss: 0.1147
Epoch: 9/25... Training loss: 0.1111
Epoch: 9/25... Training loss: 0.1138
Epoch: 9/25... Training loss: 0.1161
Epoch: 9/25... Training loss: 0.1122
Epoch: 9/25... Training loss: 0.1148
Epoch: 9/25... Training loss: 0.1176
Epoch: 9/25... Training loss: 0.1181
Epoch: 9/25... Training loss: 0.1180
Epoch: 9/25... Training loss: 0.1164
Epoch: 9/25... Training loss: 0.1154
Epoch: 9/25... Training loss: 0.1132
Epoch: 9/25... Training loss: 0.1141
Epoch: 9/25... Training loss: 0.1146
Epoch: 9/25... Training loss: 0.1135
Epoch: 9/25... Training loss: 0.1127
Epoch: 9/25... Training loss: 0.1143
Epoch: 9/25... Training loss: 0.1143
Epoch: 9/25... Training loss: 0.1159
Epoch: 9/25... Training loss: 0.1163
Epoch: 9/25... Training loss: 0.1161
Epoch: 9/25... Training loss: 0.1189
Epoch: 9/25... Training loss: 0.1172
Epoch: 9/25... Training loss: 0.1132
Epoch: 9/25... Training loss: 0.1165
Epoch: 9/25... Training loss: 0.1170
Epoch: 9/25... Training loss: 0.1135
Epoch: 9/25... Training loss: 0.1117
Epoch: 9/25... Training loss: 0.1105
Epoch: 9/25... Training loss: 0.1152
Epoch: 9/25... Training loss: 0.1140
Epoch: 9/25... Training loss: 0.1147
Epoch: 9/25... Training loss: 0.1147
Epoch: 9/25... Training loss: 0.1167
Epoch: 9/25... Training loss: 0.1153
Epoch: 9/25... Training loss: 0.1140
Epoch: 9/25... Training loss: 0.1143
Epoch: 9/25... Training loss: 0.1131
Epoch: 9/25... Training loss: 0.1172
Epoch: 9/25... Training loss: 0.1159
Epoch: 9/25... Training loss: 0.1181
Epoch: 9/25... Training loss: 0.1123
Epoch: 9/25... Training loss: 0.1178
Epoch: 9/25... Training loss: 0.1107
Epoch: 9/25... Training loss: 0.1159
Epoch: 9/25... Training loss: 0.1170
Epoch: 9/25... Training loss: 0.1132
Epoch: 9/25... Training loss: 0.1123
Epoch: 9/25... Training loss: 0.1153
Epoch: 9/25... Training loss: 0.1163
Epoch: 9/25... Training loss: 0.1141
Epoch: 9/25... Training loss: 0.1120
Epoch: 9/25... Training loss: 0.1163
Epoch: 9/25... Training loss: 0.1168
Epoch: 9/25... Training loss: 0.1190
Epoch: 9/25... Training loss: 0.1148
Epoch: 9/25... Training loss: 0.1144
Epoch: 9/25... Training loss: 0.1138
Epoch: 9/25... Training loss: 0.1136
Epoch: 9/25... Training loss: 0.1127
Epoch: 9/25... Training loss: 0.1153
Epoch: 9/25... Training loss: 0.1145
Epoch: 9/25... Training loss: 0.1135
Epoch: 9/25... Training loss: 0.1156
Epoch: 9/25... Training loss: 0.1123
Epoch: 9/25... Training loss: 0.1156
Epoch: 9/25... Training loss: 0.1117
Epoch: 9/25... Training loss: 0.1156
Epoch: 9/25... Training loss: 0.1127
Epoch: 9/25... Training loss: 0.1112
Epoch: 9/25... Training loss: 0.1160
Epoch: 9/25... Training loss: 0.1102
Epoch: 9/25... Training loss: 0.1143
Epoch: 9/25... Training loss: 0.1167
Epoch: 9/25... Training loss: 0.1169
Epoch: 9/25... Training loss: 0.1146
Epoch: 9/25... Training loss: 0.1157
Epoch: 9/25... Training loss: 0.1162
Epoch: 9/25... Training loss: 0.1153
Epoch: 9/25... Training loss: 0.1149
Epoch: 9/25... Training loss: 0.1118
Epoch: 9/25... Training loss: 0.1161
Epoch: 9/25... Training loss: 0.1141
Epoch: 9/25... Training loss: 0.1119
Epoch: 9/25... Training loss: 0.1092
Epoch: 9/25... Training loss: 0.1145
Epoch: 9/25... Training loss: 0.1142
Epoch: 9/25... Training loss: 0.1147
Epoch: 9/25... Training loss: 0.1153
Epoch: 9/25... Training loss: 0.1133
Epoch: 9/25... Training loss: 0.1124
Epoch: 9/25... Training loss: 0.1095
Epoch: 9/25... Training loss: 0.1129
Epoch: 9/25... Training loss: 0.1135
Epoch: 9/25... Training loss: 0.1144
Epoch: 9/25... Training loss: 0.1110
Epoch: 9/25... Training loss: 0.1119
Epoch: 9/25... Training loss: 0.1163
Epoch: 9/25... Training loss: 0.1150
Epoch: 9/25... Training loss: 0.1186
Epoch: 9/25... Training loss: 0.1079
Epoch: 9/25... Training loss: 0.1149
Epoch: 9/25... Training loss: 0.1143
Epoch: 9/25... Training loss: 0.1141
Epoch: 9/25... Training loss: 0.1140
Epoch: 9/25... Training loss: 0.1109
Epoch: 9/25... Training loss: 0.1107
Epoch: 9/25... Training loss: 0.1139
Epoch: 9/25... Training loss: 0.1152
Epoch: 9/25... Training loss: 0.1138
Epoch: 9/25... Training loss: 0.1143
Epoch: 9/25... Training loss: 0.1139
Epoch: 9/25... Training loss: 0.1143
Epoch: 9/25... Training loss: 0.1136
Epoch: 9/25... Training loss: 0.1125
Epoch: 9/25... Training loss: 0.1143
Epoch: 9/25... Training loss: 0.1171
Epoch: 9/25... Training loss: 0.1149
Epoch: 9/25... Training loss: 0.1160
Epoch: 9/25... Training loss: 0.1095
Epoch: 9/25... Training loss: 0.1140
Epoch: 9/25... Training loss: 0.1172
Epoch: 9/25... Training loss: 0.1144
Epoch: 9/25... Training loss: 0.1151
Epoch: 9/25... Training loss: 0.1177
Epoch: 9/25... Training loss: 0.1080
Epoch: 9/25... Training loss: 0.1114
Epoch: 9/25... Training loss: 0.1185
Epoch: 9/25... Training loss: 0.1132
Epoch: 9/25... Training loss: 0.1140
Epoch: 9/25... Training loss: 0.1141
Epoch: 9/25... Training loss: 0.1152
Epoch: 9/25... Training loss: 0.1139
Epoch: 9/25... Training loss: 0.1133
Epoch: 9/25... Training loss: 0.1116
Epoch: 9/25... Training loss: 0.1148
Epoch: 9/25... Training loss: 0.1151
Epoch: 9/25... Training loss: 0.1160
Epoch: 9/25... Training loss: 0.1140
Epoch: 9/25... Training loss: 0.1125
Epoch: 9/25... Training loss: 0.1135
Epoch: 9/25... Training loss: 0.1131
Epoch: 9/25... Training loss: 0.1122
Epoch: 9/25... Training loss: 0.1152
Epoch: 9/25... Training loss: 0.1131
Epoch: 9/25... Training loss: 0.1110
Epoch: 9/25... Training loss: 0.1112
Epoch: 9/25... Training loss: 0.1127
Epoch: 9/25... Training loss: 0.1114
Epoch: 9/25... Training loss: 0.1155
Epoch: 9/25... Training loss: 0.1111
Epoch: 9/25... Training loss: 0.1121
Epoch: 9/25... Training loss: 0.1157
Epoch: 9/25... Training loss: 0.1096
Epoch: 9/25... Training loss: 0.1158
Epoch: 9/25... Training loss: 0.1124
Epoch: 9/25... Training loss: 0.1136
Epoch: 9/25... Training loss: 0.1157
Epoch: 9/25... Training loss: 0.1129
Epoch: 9/25... Training loss: 0.1168
Epoch: 9/25... Training loss: 0.1162
Epoch: 10/25... Training loss: 0.1123
Epoch: 10/25... Training loss: 0.1165
Epoch: 10/25... Training loss: 0.1146
Epoch: 10/25... Training loss: 0.1201
Epoch: 10/25... Training loss: 0.1156
Epoch: 10/25... Training loss: 0.1134
Epoch: 10/25... Training loss: 0.1100
Epoch: 10/25... Training loss: 0.1143
Epoch: 10/25... Training loss: 0.1133
Epoch: 10/25... Training loss: 0.1123
Epoch: 10/25... Training loss: 0.1141
Epoch: 10/25... Training loss: 0.1141
Epoch: 10/25... Training loss: 0.1148
Epoch: 10/25... Training loss: 0.1151
Epoch: 10/25... Training loss: 0.1120
Epoch: 10/25... Training loss: 0.1159
Epoch: 10/25... Training loss: 0.1135
Epoch: 10/25... Training loss: 0.1175
Epoch: 10/25... Training loss: 0.1121
Epoch: 10/25... Training loss: 0.1113
Epoch: 10/25... Training loss: 0.1156
Epoch: 10/25... Training loss: 0.1103
Epoch: 10/25... Training loss: 0.1156
Epoch: 10/25... Training loss: 0.1198
Epoch: 10/25... Training loss: 0.1117
Epoch: 10/25... Training loss: 0.1192
Epoch: 10/25... Training loss: 0.1132
Epoch: 10/25... Training loss: 0.1159
Epoch: 10/25... Training loss: 0.1160
Epoch: 10/25... Training loss: 0.1153
Epoch: 10/25... Training loss: 0.1194
Epoch: 10/25... Training loss: 0.1140
Epoch: 10/25... Training loss: 0.1149
Epoch: 10/25... Training loss: 0.1146
Epoch: 10/25... Training loss: 0.1183
Epoch: 10/25... Training loss: 0.1170
Epoch: 10/25... Training loss: 0.1147
Epoch: 10/25... Training loss: 0.1187
Epoch: 10/25... Training loss: 0.1122
Epoch: 10/25... Training loss: 0.1159
Epoch: 10/25... Training loss: 0.1149
Epoch: 10/25... Training loss: 0.1173
Epoch: 10/25... Training loss: 0.1142
Epoch: 10/25... Training loss: 0.1131
Epoch: 10/25... Training loss: 0.1134
Epoch: 10/25... Training loss: 0.1151
Epoch: 10/25... Training loss: 0.1136
Epoch: 10/25... Training loss: 0.1210
Epoch: 10/25... Training loss: 0.1157
Epoch: 10/25... Training loss: 0.1171
Epoch: 10/25... Training loss: 0.1135
Epoch: 10/25... Training loss: 0.1138
Epoch: 10/25... Training loss: 0.1151
Epoch: 10/25... Training loss: 0.1134
Epoch: 10/25... Training loss: 0.1152
Epoch: 10/25... Training loss: 0.1161
Epoch: 10/25... Training loss: 0.1132
Epoch: 10/25... Training loss: 0.1125
Epoch: 10/25... Training loss: 0.1164
Epoch: 10/25... Training loss: 0.1134
Epoch: 10/25... Training loss: 0.1150
Epoch: 10/25... Training loss: 0.1163
Epoch: 10/25... Training loss: 0.1207
Epoch: 10/25... Training loss: 0.1128
Epoch: 10/25... Training loss: 0.1164
Epoch: 10/25... Training loss: 0.1140
Epoch: 10/25... Training loss: 0.1119
Epoch: 10/25... Training loss: 0.1154
Epoch: 10/25... Training loss: 0.1103
Epoch: 10/25... Training loss: 0.1144
Epoch: 10/25... Training loss: 0.1100
Epoch: 10/25... Training loss: 0.1111
Epoch: 10/25... Training loss: 0.1134
Epoch: 10/25... Training loss: 0.1122
Epoch: 10/25... Training loss: 0.1102
Epoch: 10/25... Training loss: 0.1136
Epoch: 10/25... Training loss: 0.1093
Epoch: 10/25... Training loss: 0.1084
Epoch: 10/25... Training loss: 0.1171
Epoch: 10/25... Training loss: 0.1152
Epoch: 10/25... Training loss: 0.1137
Epoch: 10/25... Training loss: 0.1111
Epoch: 10/25... Training loss: 0.1116
Epoch: 10/25... Training loss: 0.1138
Epoch: 10/25... Training loss: 0.1132
Epoch: 10/25... Training loss: 0.1127
Epoch: 10/25... Training loss: 0.1127
Epoch: 10/25... Training loss: 0.1090
Epoch: 10/25... Training loss: 0.1105
Epoch: 10/25... Training loss: 0.1116
Epoch: 10/25... Training loss: 0.1122
Epoch: 10/25... Training loss: 0.1112
Epoch: 10/25... Training loss: 0.1140
Epoch: 10/25... Training loss: 0.1118
Epoch: 10/25... Training loss: 0.1120
Epoch: 10/25... Training loss: 0.1120
Epoch: 10/25... Training loss: 0.1144
Epoch: 10/25... Training loss: 0.1129
Epoch: 10/25... Training loss: 0.1143
Epoch: 10/25... Training loss: 0.1161
Epoch: 10/25... Training loss: 0.1130
Epoch: 10/25... Training loss: 0.1118
Epoch: 10/25... Training loss: 0.1125
Epoch: 10/25... Training loss: 0.1150
Epoch: 10/25... Training loss: 0.1126
Epoch: 10/25... Training loss: 0.1153
Epoch: 10/25... Training loss: 0.1113
Epoch: 10/25... Training loss: 0.1116
Epoch: 10/25... Training loss: 0.1133
Epoch: 10/25... Training loss: 0.1149
Epoch: 10/25... Training loss: 0.1122
Epoch: 10/25... Training loss: 0.1099
Epoch: 10/25... Training loss: 0.1093
Epoch: 10/25... Training loss: 0.1133
Epoch: 10/25... Training loss: 0.1117
Epoch: 10/25... Training loss: 0.1157
Epoch: 10/25... Training loss: 0.1113
Epoch: 10/25... Training loss: 0.1118
Epoch: 10/25... Training loss: 0.1129
Epoch: 10/25... Training loss: 0.1154
Epoch: 10/25... Training loss: 0.1150
Epoch: 10/25... Training loss: 0.1161
Epoch: 10/25... Training loss: 0.1183
Epoch: 10/25... Training loss: 0.1173
Epoch: 10/25... Training loss: 0.1134
Epoch: 10/25... Training loss: 0.1125
Epoch: 10/25... Training loss: 0.1147
Epoch: 10/25... Training loss: 0.1108
Epoch: 10/25... Training loss: 0.1127
Epoch: 10/25... Training loss: 0.1128
Epoch: 10/25... Training loss: 0.1080
Epoch: 10/25... Training loss: 0.1166
Epoch: 10/25... Training loss: 0.1145
Epoch: 10/25... Training loss: 0.1135
Epoch: 10/25... Training loss: 0.1141
Epoch: 10/25... Training loss: 0.1110
Epoch: 10/25... Training loss: 0.1132
Epoch: 10/25... Training loss: 0.1123
Epoch: 10/25... Training loss: 0.1161
Epoch: 10/25... Training loss: 0.1111
Epoch: 10/25... Training loss: 0.1110
Epoch: 10/25... Training loss: 0.1115
Epoch: 10/25... Training loss: 0.1089
Epoch: 10/25... Training loss: 0.1156
Epoch: 10/25... Training loss: 0.1184
Epoch: 10/25... Training loss: 0.1126
Epoch: 10/25... Training loss: 0.1111
Epoch: 10/25... Training loss: 0.1135
Epoch: 10/25... Training loss: 0.1126
Epoch: 10/25... Training loss: 0.1125
Epoch: 10/25... Training loss: 0.1124
Epoch: 10/25... Training loss: 0.1083
Epoch: 10/25... Training loss: 0.1108
Epoch: 10/25... Training loss: 0.1116
Epoch: 10/25... Training loss: 0.1109
Epoch: 10/25... Training loss: 0.1128
Epoch: 10/25... Training loss: 0.1122
Epoch: 10/25... Training loss: 0.1159
Epoch: 10/25... Training loss: 0.1164
Epoch: 10/25... Training loss: 0.1124
Epoch: 10/25... Training loss: 0.1158
Epoch: 10/25... Training loss: 0.1131
Epoch: 10/25... Training loss: 0.1134
Epoch: 10/25... Training loss: 0.1137
Epoch: 10/25... Training loss: 0.1154
Epoch: 10/25... Training loss: 0.1126
Epoch: 10/25... Training loss: 0.1112
Epoch: 10/25... Training loss: 0.1144
Epoch: 10/25... Training loss: 0.1139
Epoch: 10/25... Training loss: 0.1127
Epoch: 10/25... Training loss: 0.1159
Epoch: 10/25... Training loss: 0.1121
Epoch: 10/25... Training loss: 0.1120
Epoch: 10/25... Training loss: 0.1108
Epoch: 10/25... Training loss: 0.1106
Epoch: 10/25... Training loss: 0.1083
Epoch: 10/25... Training loss: 0.1107
Epoch: 10/25... Training loss: 0.1147
Epoch: 10/25... Training loss: 0.1120
Epoch: 10/25... Training loss: 0.1108
Epoch: 10/25... Training loss: 0.1158
Epoch: 10/25... Training loss: 0.1172
Epoch: 10/25... Training loss: 0.1122
Epoch: 10/25... Training loss: 0.1110
Epoch: 10/25... Training loss: 0.1147
Epoch: 10/25... Training loss: 0.1131
Epoch: 10/25... Training loss: 0.1065
Epoch: 10/25... Training loss: 0.1170
Epoch: 10/25... Training loss: 0.1138
Epoch: 10/25... Training loss: 0.1145
Epoch: 10/25... Training loss: 0.1138
Epoch: 10/25... Training loss: 0.1133
Epoch: 10/25... Training loss: 0.1181
Epoch: 10/25... Training loss: 0.1083
Epoch: 10/25... Training loss: 0.1126
Epoch: 10/25... Training loss: 0.1150
Epoch: 10/25... Training loss: 0.1127
Epoch: 10/25... Training loss: 0.1139
Epoch: 10/25... Training loss: 0.1143
Epoch: 10/25... Training loss: 0.1140
Epoch: 10/25... Training loss: 0.1182
Epoch: 10/25... Training loss: 0.1137
Epoch: 10/25... Training loss: 0.1138
Epoch: 10/25... Training loss: 0.1142
Epoch: 10/25... Training loss: 0.1134
Epoch: 10/25... Training loss: 0.1133
Epoch: 10/25... Training loss: 0.1160
Epoch: 10/25... Training loss: 0.1129
Epoch: 10/25... Training loss: 0.1112
Epoch: 10/25... Training loss: 0.1095
Epoch: 10/25... Training loss: 0.1140
Epoch: 10/25... Training loss: 0.1119
Epoch: 10/25... Training loss: 0.1135
Epoch: 10/25... Training loss: 0.1149
Epoch: 10/25... Training loss: 0.1087
Epoch: 10/25... Training loss: 0.1099
Epoch: 10/25... Training loss: 0.1159
Epoch: 10/25... Training loss: 0.1133
Epoch: 10/25... Training loss: 0.1141
Epoch: 10/25... Training loss: 0.1116
Epoch: 10/25... Training loss: 0.1110
Epoch: 10/25... Training loss: 0.1117
Epoch: 10/25... Training loss: 0.1130
Epoch: 10/25... Training loss: 0.1147
Epoch: 10/25... Training loss: 0.1159
Epoch: 10/25... Training loss: 0.1112
Epoch: 10/25... Training loss: 0.1152
Epoch: 10/25... Training loss: 0.1101
Epoch: 10/25... Training loss: 0.1124
Epoch: 10/25... Training loss: 0.1144
Epoch: 10/25... Training loss: 0.1103
Epoch: 10/25... Training loss: 0.1147
Epoch: 10/25... Training loss: 0.1150
Epoch: 10/25... Training loss: 0.1108
Epoch: 10/25... Training loss: 0.1107
Epoch: 10/25... Training loss: 0.1142
Epoch: 10/25... Training loss: 0.1122
Epoch: 10/25... Training loss: 0.1099
Epoch: 10/25... Training loss: 0.1109
Epoch: 10/25... Training loss: 0.1130
Epoch: 10/25... Training loss: 0.1120
Epoch: 10/25... Training loss: 0.1115
Epoch: 10/25... Training loss: 0.1113
Epoch: 10/25... Training loss: 0.1153
Epoch: 10/25... Training loss: 0.1119
Epoch: 10/25... Training loss: 0.1111
Epoch: 10/25... Training loss: 0.1117
Epoch: 10/25... Training loss: 0.1125
Epoch: 10/25... Training loss: 0.1093
Epoch: 10/25... Training loss: 0.1101
Epoch: 10/25... Training loss: 0.1111
Epoch: 10/25... Training loss: 0.1138
Epoch: 10/25... Training loss: 0.1155
Epoch: 10/25... Training loss: 0.1150
Epoch: 10/25... Training loss: 0.1117
Epoch: 10/25... Training loss: 0.1117
Epoch: 10/25... Training loss: 0.1113
Epoch: 10/25... Training loss: 0.1128
Epoch: 10/25... Training loss: 0.1129
Epoch: 10/25... Training loss: 0.1106
Epoch: 10/25... Training loss: 0.1143
Epoch: 10/25... Training loss: 0.1108
Epoch: 10/25... Training loss: 0.1147
Epoch: 10/25... Training loss: 0.1148
Epoch: 10/25... Training loss: 0.1099
Epoch: 10/25... Training loss: 0.1094
Epoch: 10/25... Training loss: 0.1135
Epoch: 10/25... Training loss: 0.1156
Epoch: 10/25... Training loss: 0.1077
Epoch: 10/25... Training loss: 0.1102
Epoch: 10/25... Training loss: 0.1127
Epoch: 10/25... Training loss: 0.1157
Epoch: 10/25... Training loss: 0.1125
Epoch: 10/25... Training loss: 0.1095
Epoch: 10/25... Training loss: 0.1078
Epoch: 10/25... Training loss: 0.1124
Epoch: 10/25... Training loss: 0.1133
Epoch: 10/25... Training loss: 0.1095
Epoch: 10/25... Training loss: 0.1123
Epoch: 10/25... Training loss: 0.1126
Epoch: 10/25... Training loss: 0.1145
Epoch: 10/25... Training loss: 0.1149
Epoch: 10/25... Training loss: 0.1137
Epoch: 10/25... Training loss: 0.1147
Epoch: 10/25... Training loss: 0.1100
Epoch: 10/25... Training loss: 0.1130
Epoch: 10/25... Training loss: 0.1097
Epoch: 10/25... Training loss: 0.1136
Epoch: 10/25... Training loss: 0.1144
Epoch: 10/25... Training loss: 0.1099
Epoch: 10/25... Training loss: 0.1100
Epoch: 10/25... Training loss: 0.1167
Epoch: 10/25... Training loss: 0.1121
Epoch: 10/25... Training loss: 0.1136
Epoch: 10/25... Training loss: 0.1104
Epoch: 10/25... Training loss: 0.1130
Epoch: 10/25... Training loss: 0.1154
Epoch: 10/25... Training loss: 0.1103
Epoch: 10/25... Training loss: 0.1113
Epoch: 10/25... Training loss: 0.1144
Epoch: 11/25... Training loss: 0.1138
Epoch: 11/25... Training loss: 0.1099
Epoch: 11/25... Training loss: 0.1166
Epoch: 11/25... Training loss: 0.1105
Epoch: 11/25... Training loss: 0.1142
Epoch: 11/25... Training loss: 0.1134
Epoch: 11/25... Training loss: 0.1116
Epoch: 11/25... Training loss: 0.1118
Epoch: 11/25... Training loss: 0.1104
Epoch: 11/25... Training loss: 0.1093
Epoch: 11/25... Training loss: 0.1134
Epoch: 11/25... Training loss: 0.1097
Epoch: 11/25... Training loss: 0.1135
Epoch: 11/25... Training loss: 0.1145
Epoch: 11/25... Training loss: 0.1095
Epoch: 11/25... Training loss: 0.1129
Epoch: 11/25... Training loss: 0.1157
Epoch: 11/25... Training loss: 0.1149
Epoch: 11/25... Training loss: 0.1125
Epoch: 11/25... Training loss: 0.1114
Epoch: 11/25... Training loss: 0.1134
Epoch: 11/25... Training loss: 0.1139
Epoch: 11/25... Training loss: 0.1102
Epoch: 11/25... Training loss: 0.1125
Epoch: 11/25... Training loss: 0.1111
Epoch: 11/25... Training loss: 0.1130
Epoch: 11/25... Training loss: 0.1129
Epoch: 11/25... Training loss: 0.1142
Epoch: 11/25... Training loss: 0.1119
Epoch: 11/25... Training loss: 0.1111
Epoch: 11/25... Training loss: 0.1104
Epoch: 11/25... Training loss: 0.1116
Epoch: 11/25... Training loss: 0.1157
Epoch: 11/25... Training loss: 0.1094
Epoch: 11/25... Training loss: 0.1134
...
Epoch: 11/25... Training loss: 0.1074
Epoch: 12/25... Training loss: 0.1108
...
Epoch: 12/25... Training loss: 0.1141
Epoch: 13/25... Training loss: 0.1108
...
Epoch: 13/25... Training loss: 0.1102
Epoch: 14/25... Training loss: 0.1116
...
Epoch: 14/25... Training loss: 0.1071
Epoch: 15/25... Training loss: 0.1075
...
Epoch: 15/25... Training loss: 0.1057
Epoch: 16/25... Training loss: 0.1083
...
Epoch: 16/25... Training loss: 0.1074
Epoch: 16/25... Training loss: 0.1066
Epoch: 16/25... Training loss: 0.1113
Epoch: 16/25... Training loss: 0.1075
Epoch: 16/25... Training loss: 0.1070
Epoch: 16/25... Training loss: 0.1065
Epoch: 16/25... Training loss: 0.1114
Epoch: 16/25... Training loss: 0.1066
Epoch: 16/25... Training loss: 0.1116
Epoch: 16/25... Training loss: 0.1091
Epoch: 16/25... Training loss: 0.1080
Epoch: 16/25... Training loss: 0.1092
Epoch: 16/25... Training loss: 0.1103
Epoch: 16/25... Training loss: 0.1051
Epoch: 16/25... Training loss: 0.1086
Epoch: 16/25... Training loss: 0.1038
Epoch: 16/25... Training loss: 0.1068
Epoch: 16/25... Training loss: 0.1035
Epoch: 16/25... Training loss: 0.1068
Epoch: 16/25... Training loss: 0.1069
Epoch: 16/25... Training loss: 0.1098
Epoch: 16/25... Training loss: 0.1087
Epoch: 16/25... Training loss: 0.1059
Epoch: 16/25... Training loss: 0.1032
Epoch: 16/25... Training loss: 0.1076
Epoch: 16/25... Training loss: 0.1081
Epoch: 16/25... Training loss: 0.1068
Epoch: 16/25... Training loss: 0.1063
Epoch: 16/25... Training loss: 0.1060
Epoch: 16/25... Training loss: 0.1087
Epoch: 16/25... Training loss: 0.1071
Epoch: 16/25... Training loss: 0.1027
Epoch: 16/25... Training loss: 0.1067
Epoch: 16/25... Training loss: 0.1080
Epoch: 16/25... Training loss: 0.1067
Epoch: 16/25... Training loss: 0.1090
Epoch: 16/25... Training loss: 0.1046
Epoch: 16/25... Training loss: 0.1081
Epoch: 16/25... Training loss: 0.1096
Epoch: 16/25... Training loss: 0.1081
Epoch: 16/25... Training loss: 0.1066
Epoch: 16/25... Training loss: 0.1108
Epoch: 16/25... Training loss: 0.1114
Epoch: 16/25... Training loss: 0.1107
Epoch: 16/25... Training loss: 0.1067
Epoch: 16/25... Training loss: 0.1028
Epoch: 16/25... Training loss: 0.1086
Epoch: 16/25... Training loss: 0.1092
Epoch: 16/25... Training loss: 0.1043
Epoch: 16/25... Training loss: 0.1066
Epoch: 16/25... Training loss: 0.1057
Epoch: 16/25... Training loss: 0.1111
Epoch: 16/25... Training loss: 0.1077
Epoch: 16/25... Training loss: 0.1109
Epoch: 16/25... Training loss: 0.1104
Epoch: 16/25... Training loss: 0.1108
Epoch: 16/25... Training loss: 0.1060
Epoch: 16/25... Training loss: 0.1057
Epoch: 16/25... Training loss: 0.1090
Epoch: 16/25... Training loss: 0.1105
Epoch: 16/25... Training loss: 0.1114
Epoch: 16/25... Training loss: 0.1079
Epoch: 16/25... Training loss: 0.1077
Epoch: 16/25... Training loss: 0.1083
Epoch: 16/25... Training loss: 0.1100
Epoch: 16/25... Training loss: 0.1088
Epoch: 16/25... Training loss: 0.1082
Epoch: 16/25... Training loss: 0.1068
Epoch: 16/25... Training loss: 0.1076
Epoch: 16/25... Training loss: 0.1035
Epoch: 16/25... Training loss: 0.1084
Epoch: 16/25... Training loss: 0.1109
Epoch: 16/25... Training loss: 0.1076
Epoch: 16/25... Training loss: 0.1085
Epoch: 16/25... Training loss: 0.1114
Epoch: 16/25... Training loss: 0.1115
Epoch: 16/25... Training loss: 0.1100
Epoch: 16/25... Training loss: 0.1071
Epoch: 16/25... Training loss: 0.1067
Epoch: 16/25... Training loss: 0.1110
Epoch: 16/25... Training loss: 0.1074
Epoch: 17/25... Training loss: 0.1089
Epoch: 17/25... Training loss: 0.1075
Epoch: 17/25... Training loss: 0.1114
Epoch: 17/25... Training loss: 0.1057
Epoch: 17/25... Training loss: 0.1090
Epoch: 17/25... Training loss: 0.1093
Epoch: 17/25... Training loss: 0.1076
Epoch: 17/25... Training loss: 0.1071
Epoch: 17/25... Training loss: 0.1063
Epoch: 17/25... Training loss: 0.1099
Epoch: 17/25... Training loss: 0.1099
Epoch: 17/25... Training loss: 0.1052
Epoch: 17/25... Training loss: 0.1065
Epoch: 17/25... Training loss: 0.1035
Epoch: 17/25... Training loss: 0.1048
Epoch: 17/25... Training loss: 0.1095
Epoch: 17/25... Training loss: 0.1026
Epoch: 17/25... Training loss: 0.1075
Epoch: 17/25... Training loss: 0.1068
Epoch: 17/25... Training loss: 0.1102
Epoch: 17/25... Training loss: 0.1062
Epoch: 17/25... Training loss: 0.1074
Epoch: 17/25... Training loss: 0.1065
Epoch: 17/25... Training loss: 0.1095
Epoch: 17/25... Training loss: 0.1063
Epoch: 17/25... Training loss: 0.1106
Epoch: 17/25... Training loss: 0.1081
Epoch: 17/25... Training loss: 0.1107
Epoch: 17/25... Training loss: 0.1108
Epoch: 17/25... Training loss: 0.1096
Epoch: 17/25... Training loss: 0.1092
Epoch: 17/25... Training loss: 0.1085
Epoch: 17/25... Training loss: 0.1073
Epoch: 17/25... Training loss: 0.1067
Epoch: 17/25... Training loss: 0.1089
Epoch: 17/25... Training loss: 0.1052
Epoch: 17/25... Training loss: 0.1009
Epoch: 17/25... Training loss: 0.1093
Epoch: 17/25... Training loss: 0.1082
Epoch: 17/25... Training loss: 0.1072
Epoch: 17/25... Training loss: 0.1039
Epoch: 17/25... Training loss: 0.1087
Epoch: 17/25... Training loss: 0.1108
Epoch: 17/25... Training loss: 0.1067
Epoch: 17/25... Training loss: 0.1054
Epoch: 17/25... Training loss: 0.1073
Epoch: 17/25... Training loss: 0.1087
Epoch: 17/25... Training loss: 0.1081
Epoch: 17/25... Training loss: 0.1058
Epoch: 17/25... Training loss: 0.1065
Epoch: 17/25... Training loss: 0.1073
Epoch: 17/25... Training loss: 0.1063
Epoch: 17/25... Training loss: 0.1077
Epoch: 17/25... Training loss: 0.1082
Epoch: 17/25... Training loss: 0.1101
Epoch: 17/25... Training loss: 0.1105
Epoch: 17/25... Training loss: 0.1045
Epoch: 17/25... Training loss: 0.1069
Epoch: 17/25... Training loss: 0.1069
Epoch: 17/25... Training loss: 0.1092
Epoch: 17/25... Training loss: 0.1030
Epoch: 17/25... Training loss: 0.1099
Epoch: 17/25... Training loss: 0.1052
Epoch: 17/25... Training loss: 0.1082
Epoch: 17/25... Training loss: 0.1082
Epoch: 17/25... Training loss: 0.1076
Epoch: 17/25... Training loss: 0.1079
Epoch: 17/25... Training loss: 0.1061
Epoch: 17/25... Training loss: 0.1069
Epoch: 17/25... Training loss: 0.1087
Epoch: 17/25... Training loss: 0.1071
Epoch: 17/25... Training loss: 0.1067
Epoch: 17/25... Training loss: 0.1046
Epoch: 17/25... Training loss: 0.1097
Epoch: 17/25... Training loss: 0.1107
Epoch: 17/25... Training loss: 0.1077
Epoch: 17/25... Training loss: 0.1081
Epoch: 17/25... Training loss: 0.1141
Epoch: 17/25... Training loss: 0.1068
Epoch: 17/25... Training loss: 0.1103
Epoch: 17/25... Training loss: 0.1084
Epoch: 17/25... Training loss: 0.1073
Epoch: 17/25... Training loss: 0.1058
Epoch: 17/25... Training loss: 0.1083
Epoch: 17/25... Training loss: 0.1109
Epoch: 17/25... Training loss: 0.1114
Epoch: 17/25... Training loss: 0.1088
Epoch: 17/25... Training loss: 0.1088
Epoch: 17/25... Training loss: 0.1074
Epoch: 17/25... Training loss: 0.1109
Epoch: 17/25... Training loss: 0.1095
Epoch: 17/25... Training loss: 0.1096
Epoch: 17/25... Training loss: 0.1073
Epoch: 17/25... Training loss: 0.1037
Epoch: 17/25... Training loss: 0.1071
Epoch: 17/25... Training loss: 0.1063
Epoch: 17/25... Training loss: 0.1055
Epoch: 17/25... Training loss: 0.1118
Epoch: 17/25... Training loss: 0.1036
Epoch: 17/25... Training loss: 0.1049
Epoch: 17/25... Training loss: 0.1081
Epoch: 17/25... Training loss: 0.1086
Epoch: 17/25... Training loss: 0.1089
Epoch: 17/25... Training loss: 0.1070
Epoch: 17/25... Training loss: 0.1092
Epoch: 17/25... Training loss: 0.1105
Epoch: 17/25... Training loss: 0.1104
Epoch: 17/25... Training loss: 0.1103
Epoch: 17/25... Training loss: 0.1054
Epoch: 17/25... Training loss: 0.1094
Epoch: 17/25... Training loss: 0.1092
Epoch: 17/25... Training loss: 0.1128
Epoch: 17/25... Training loss: 0.1071
Epoch: 17/25... Training loss: 0.1090
Epoch: 17/25... Training loss: 0.1108
Epoch: 17/25... Training loss: 0.1100
Epoch: 17/25... Training loss: 0.1103
Epoch: 17/25... Training loss: 0.1080
Epoch: 17/25... Training loss: 0.1057
Epoch: 17/25... Training loss: 0.1098
Epoch: 17/25... Training loss: 0.1037
Epoch: 17/25... Training loss: 0.1050
Epoch: 17/25... Training loss: 0.1078
Epoch: 17/25... Training loss: 0.1068
Epoch: 17/25... Training loss: 0.1042
Epoch: 17/25... Training loss: 0.1080
Epoch: 17/25... Training loss: 0.1034
Epoch: 17/25... Training loss: 0.1030
Epoch: 17/25... Training loss: 0.1075
Epoch: 17/25... Training loss: 0.1074
Epoch: 17/25... Training loss: 0.1081
Epoch: 17/25... Training loss: 0.1120
Epoch: 17/25... Training loss: 0.1089
Epoch: 17/25... Training loss: 0.1054
Epoch: 17/25... Training loss: 0.1109
Epoch: 17/25... Training loss: 0.1075
Epoch: 17/25... Training loss: 0.1076
Epoch: 17/25... Training loss: 0.1094
Epoch: 17/25... Training loss: 0.1088
Epoch: 17/25... Training loss: 0.1060
Epoch: 17/25... Training loss: 0.1052
Epoch: 17/25... Training loss: 0.1069
Epoch: 17/25... Training loss: 0.1080
Epoch: 17/25... Training loss: 0.1079
Epoch: 17/25... Training loss: 0.1048
Epoch: 17/25... Training loss: 0.1099
Epoch: 17/25... Training loss: 0.1085
Epoch: 17/25... Training loss: 0.1091
Epoch: 17/25... Training loss: 0.1094
Epoch: 17/25... Training loss: 0.1080
Epoch: 17/25... Training loss: 0.1039
Epoch: 17/25... Training loss: 0.1070
Epoch: 17/25... Training loss: 0.1073
Epoch: 17/25... Training loss: 0.1111
Epoch: 17/25... Training loss: 0.1074
Epoch: 17/25... Training loss: 0.1100
Epoch: 17/25... Training loss: 0.1054
Epoch: 17/25... Training loss: 0.1105
Epoch: 17/25... Training loss: 0.1085
Epoch: 17/25... Training loss: 0.1095
Epoch: 17/25... Training loss: 0.1079
Epoch: 17/25... Training loss: 0.1090
Epoch: 17/25... Training loss: 0.1093
Epoch: 17/25... Training loss: 0.1053
Epoch: 17/25... Training loss: 0.1068
Epoch: 17/25... Training loss: 0.1069
Epoch: 17/25... Training loss: 0.1060
Epoch: 17/25... Training loss: 0.1079
Epoch: 17/25... Training loss: 0.1067
Epoch: 17/25... Training loss: 0.1086
Epoch: 17/25... Training loss: 0.1084
Epoch: 17/25... Training loss: 0.1073
Epoch: 17/25... Training loss: 0.1047
Epoch: 17/25... Training loss: 0.1056
Epoch: 17/25... Training loss: 0.1056
Epoch: 17/25... Training loss: 0.1062
Epoch: 17/25... Training loss: 0.1105
Epoch: 17/25... Training loss: 0.1092
Epoch: 17/25... Training loss: 0.1085
Epoch: 17/25... Training loss: 0.1054
Epoch: 17/25... Training loss: 0.1053
Epoch: 17/25... Training loss: 0.1061
Epoch: 17/25... Training loss: 0.1094
Epoch: 17/25... Training loss: 0.1099
Epoch: 17/25... Training loss: 0.1096
Epoch: 17/25... Training loss: 0.1081
Epoch: 17/25... Training loss: 0.1110
Epoch: 17/25... Training loss: 0.1106
Epoch: 17/25... Training loss: 0.1042
Epoch: 17/25... Training loss: 0.1053
Epoch: 17/25... Training loss: 0.1099
Epoch: 17/25... Training loss: 0.1094
Epoch: 17/25... Training loss: 0.1060
Epoch: 17/25... Training loss: 0.1077
Epoch: 17/25... Training loss: 0.1098
Epoch: 17/25... Training loss: 0.1080
Epoch: 17/25... Training loss: 0.1070
Epoch: 17/25... Training loss: 0.1048
Epoch: 17/25... Training loss: 0.1095
Epoch: 17/25... Training loss: 0.1097
Epoch: 17/25... Training loss: 0.1072
Epoch: 17/25... Training loss: 0.1088
Epoch: 17/25... Training loss: 0.1078
Epoch: 17/25... Training loss: 0.1039
Epoch: 17/25... Training loss: 0.1082
Epoch: 17/25... Training loss: 0.1082
Epoch: 17/25... Training loss: 0.1080
Epoch: 17/25... Training loss: 0.1091
Epoch: 17/25... Training loss: 0.1087
Epoch: 17/25... Training loss: 0.1019
Epoch: 17/25... Training loss: 0.1091
Epoch: 17/25... Training loss: 0.1089
Epoch: 17/25... Training loss: 0.1086
Epoch: 17/25... Training loss: 0.1121
Epoch: 17/25... Training loss: 0.1083
Epoch: 17/25... Training loss: 0.1089
Epoch: 17/25... Training loss: 0.1120
Epoch: 17/25... Training loss: 0.1049
Epoch: 17/25... Training loss: 0.1103
Epoch: 17/25... Training loss: 0.1105
Epoch: 17/25... Training loss: 0.1091
Epoch: 17/25... Training loss: 0.1068
Epoch: 17/25... Training loss: 0.1105
Epoch: 17/25... Training loss: 0.1103
Epoch: 17/25... Training loss: 0.1079
Epoch: 17/25... Training loss: 0.1089
Epoch: 17/25... Training loss: 0.1090
Epoch: 17/25... Training loss: 0.1041
Epoch: 17/25... Training loss: 0.1063
Epoch: 17/25... Training loss: 0.1041
Epoch: 17/25... Training loss: 0.1081
Epoch: 17/25... Training loss: 0.1075
Epoch: 17/25... Training loss: 0.1055
Epoch: 17/25... Training loss: 0.1090
Epoch: 17/25... Training loss: 0.1051
Epoch: 17/25... Training loss: 0.1046
Epoch: 17/25... Training loss: 0.1095
Epoch: 17/25... Training loss: 0.1039
Epoch: 17/25... Training loss: 0.1070
Epoch: 17/25... Training loss: 0.1070
Epoch: 17/25... Training loss: 0.1072
Epoch: 17/25... Training loss: 0.1069
Epoch: 17/25... Training loss: 0.1047
Epoch: 17/25... Training loss: 0.1061
Epoch: 17/25... Training loss: 0.1081
Epoch: 17/25... Training loss: 0.1090
Epoch: 17/25... Training loss: 0.1061
Epoch: 17/25... Training loss: 0.1050
Epoch: 17/25... Training loss: 0.1085
Epoch: 17/25... Training loss: 0.1055
Epoch: 17/25... Training loss: 0.1067
Epoch: 17/25... Training loss: 0.1052
Epoch: 17/25... Training loss: 0.1076
Epoch: 17/25... Training loss: 0.1063
Epoch: 17/25... Training loss: 0.1033
Epoch: 17/25... Training loss: 0.1066
Epoch: 17/25... Training loss: 0.1082
Epoch: 17/25... Training loss: 0.1064
Epoch: 17/25... Training loss: 0.1077
Epoch: 17/25... Training loss: 0.1043
Epoch: 17/25... Training loss: 0.1080
Epoch: 17/25... Training loss: 0.1082
Epoch: 17/25... Training loss: 0.1059
Epoch: 17/25... Training loss: 0.1063
Epoch: 17/25... Training loss: 0.1055
Epoch: 17/25... Training loss: 0.1071
Epoch: 17/25... Training loss: 0.1061
Epoch: 17/25... Training loss: 0.1074
Epoch: 17/25... Training loss: 0.1065
Epoch: 17/25... Training loss: 0.1085
Epoch: 17/25... Training loss: 0.1079
Epoch: 17/25... Training loss: 0.1045
Epoch: 17/25... Training loss: 0.1068
Epoch: 17/25... Training loss: 0.1064
Epoch: 17/25... Training loss: 0.1059
Epoch: 17/25... Training loss: 0.1078
Epoch: 17/25... Training loss: 0.1100
Epoch: 17/25... Training loss: 0.1089
Epoch: 17/25... Training loss: 0.1057
Epoch: 17/25... Training loss: 0.1063
Epoch: 17/25... Training loss: 0.1081
Epoch: 17/25... Training loss: 0.1078
Epoch: 17/25... Training loss: 0.1058
Epoch: 17/25... Training loss: 0.1054
Epoch: 17/25... Training loss: 0.1103
Epoch: 17/25... Training loss: 0.1115
Epoch: 17/25... Training loss: 0.1094
Epoch: 17/25... Training loss: 0.1079
Epoch: 17/25... Training loss: 0.1062
Epoch: 17/25... Training loss: 0.1089
Epoch: 17/25... Training loss: 0.1070
Epoch: 17/25... Training loss: 0.1101
Epoch: 17/25... Training loss: 0.1074
Epoch: 17/25... Training loss: 0.1087
Epoch: 17/25... Training loss: 0.1091
Epoch: 17/25... Training loss: 0.1077
Epoch: 17/25... Training loss: 0.1076
Epoch: 17/25... Training loss: 0.1067
Epoch: 17/25... Training loss: 0.1031
Epoch: 17/25... Training loss: 0.1068
Epoch: 18/25... Training loss: 0.1097
Epoch: 18/25... Training loss: 0.1049
Epoch: 18/25... Training loss: 0.1068
Epoch: 18/25... Training loss: 0.1053
Epoch: 18/25... Training loss: 0.1090
Epoch: 18/25... Training loss: 0.1110
Epoch: 18/25... Training loss: 0.1040
Epoch: 18/25... Training loss: 0.1085
Epoch: 18/25... Training loss: 0.1063
Epoch: 18/25... Training loss: 0.1081
Epoch: 18/25... Training loss: 0.1052
Epoch: 18/25... Training loss: 0.1045
Epoch: 18/25... Training loss: 0.1044
Epoch: 18/25... Training loss: 0.1053
Epoch: 18/25... Training loss: 0.1098
Epoch: 18/25... Training loss: 0.1076
Epoch: 18/25... Training loss: 0.1093
Epoch: 18/25... Training loss: 0.1080
Epoch: 18/25... Training loss: 0.1094
Epoch: 18/25... Training loss: 0.1069
Epoch: 18/25... Training loss: 0.1096
Epoch: 18/25... Training loss: 0.1054
Epoch: 18/25... Training loss: 0.1066
Epoch: 18/25... Training loss: 0.1057
Epoch: 18/25... Training loss: 0.1054
Epoch: 18/25... Training loss: 0.1100
Epoch: 18/25... Training loss: 0.1085
Epoch: 18/25... Training loss: 0.1090
Epoch: 18/25... Training loss: 0.1079
Epoch: 18/25... Training loss: 0.1113
Epoch: 18/25... Training loss: 0.1049
Epoch: 18/25... Training loss: 0.1075
Epoch: 18/25... Training loss: 0.1087
Epoch: 18/25... Training loss: 0.1138
Epoch: 18/25... Training loss: 0.1071
Epoch: 18/25... Training loss: 0.1052
Epoch: 18/25... Training loss: 0.1039
Epoch: 18/25... Training loss: 0.1093
Epoch: 18/25... Training loss: 0.1084
Epoch: 18/25... Training loss: 0.1059
Epoch: 18/25... Training loss: 0.1044
Epoch: 18/25... Training loss: 0.1072
Epoch: 18/25... Training loss: 0.1027
Epoch: 18/25... Training loss: 0.1021
Epoch: 18/25... Training loss: 0.1060
Epoch: 18/25... Training loss: 0.0982
Epoch: 18/25... Training loss: 0.1103
Epoch: 18/25... Training loss: 0.1055
Epoch: 18/25... Training loss: 0.1093
Epoch: 18/25... Training loss: 0.1029
Epoch: 18/25... Training loss: 0.1070
Epoch: 18/25... Training loss: 0.1064
Epoch: 18/25... Training loss: 0.1054
Epoch: 18/25... Training loss: 0.1095
Epoch: 18/25... Training loss: 0.1095
Epoch: 18/25... Training loss: 0.1028
Epoch: 18/25... Training loss: 0.1068
Epoch: 18/25... Training loss: 0.1076
Epoch: 18/25... Training loss: 0.1064
Epoch: 18/25... Training loss: 0.1092
Epoch: 18/25... Training loss: 0.1030
Epoch: 18/25... Training loss: 0.1051
Epoch: 18/25... Training loss: 0.1057
Epoch: 18/25... Training loss: 0.1058
Epoch: 18/25... Training loss: 0.1063
Epoch: 18/25... Training loss: 0.1058
Epoch: 18/25... Training loss: 0.1065
Epoch: 18/25... Training loss: 0.1088
Epoch: 18/25... Training loss: 0.1076
Epoch: 18/25... Training loss: 0.1059
Epoch: 18/25... Training loss: 0.1064
Epoch: 18/25... Training loss: 0.1073
Epoch: 18/25... Training loss: 0.1044
Epoch: 18/25... Training loss: 0.1060
Epoch: 18/25... Training loss: 0.1063
Epoch: 18/25... Training loss: 0.1088
Epoch: 18/25... Training loss: 0.1067
Epoch: 18/25... Training loss: 0.1047
Epoch: 18/25... Training loss: 0.1063
Epoch: 18/25... Training loss: 0.1058
Epoch: 18/25... Training loss: 0.1077
Epoch: 18/25... Training loss: 0.1070
Epoch: 18/25... Training loss: 0.1088
Epoch: 18/25... Training loss: 0.1063
Epoch: 18/25... Training loss: 0.1032
Epoch: 18/25... Training loss: 0.1065
Epoch: 18/25... Training loss: 0.1084
Epoch: 18/25... Training loss: 0.1056
Epoch: 18/25... Training loss: 0.1105
Epoch: 18/25... Training loss: 0.1061
Epoch: 18/25... Training loss: 0.1068
Epoch: 18/25... Training loss: 0.1059
Epoch: 18/25... Training loss: 0.1049
Epoch: 18/25... Training loss: 0.1065
Epoch: 18/25... Training loss: 0.1048
Epoch: 18/25... Training loss: 0.1094
Epoch: 18/25... Training loss: 0.1044
Epoch: 18/25... Training loss: 0.1105
Epoch: 18/25... Training loss: 0.1112
Epoch: 18/25... Training loss: 0.1086
Epoch: 18/25... Training loss: 0.1059
Epoch: 18/25... Training loss: 0.1109
Epoch: 18/25... Training loss: 0.1082
Epoch: 18/25... Training loss: 0.1066
Epoch: 18/25... Training loss: 0.1092
Epoch: 18/25... Training loss: 0.1023
Epoch: 18/25... Training loss: 0.1053
Epoch: 18/25... Training loss: 0.1073
Epoch: 18/25... Training loss: 0.1059
Epoch: 18/25... Training loss: 0.1050
Epoch: 18/25... Training loss: 0.1106
Epoch: 18/25... Training loss: 0.1102
Epoch: 18/25... Training loss: 0.1083
Epoch: 18/25... Training loss: 0.1084
Epoch: 18/25... Training loss: 0.1063
Epoch: 18/25... Training loss: 0.1071
Epoch: 18/25... Training loss: 0.1081
Epoch: 18/25... Training loss: 0.1078
Epoch: 18/25... Training loss: 0.1063
Epoch: 18/25... Training loss: 0.1092
Epoch: 18/25... Training loss: 0.1087
Epoch: 18/25... Training loss: 0.1051
Epoch: 18/25... Training loss: 0.1080
Epoch: 18/25... Training loss: 0.1059
Epoch: 18/25... Training loss: 0.1063
Epoch: 18/25... Training loss: 0.1058
Epoch: 18/25... Training loss: 0.1072
Epoch: 18/25... Training loss: 0.1081
Epoch: 18/25... Training loss: 0.1054
Epoch: 18/25... Training loss: 0.1080
Epoch: 18/25... Training loss: 0.1036
Epoch: 18/25... Training loss: 0.1088
Epoch: 18/25... Training loss: 0.1099
Epoch: 18/25... Training loss: 0.1079
Epoch: 18/25... Training loss: 0.1053
Epoch: 18/25... Training loss: 0.1065
Epoch: 18/25... Training loss: 0.1098
Epoch: 18/25... Training loss: 0.1110
Epoch: 18/25... Training loss: 0.1044
Epoch: 18/25... Training loss: 0.1055
Epoch: 18/25... Training loss: 0.1074
Epoch: 18/25... Training loss: 0.1070
Epoch: 18/25... Training loss: 0.1029
Epoch: 18/25... Training loss: 0.1053
Epoch: 18/25... Training loss: 0.1058
Epoch: 18/25... Training loss: 0.1075
Epoch: 18/25... Training loss: 0.1081
Epoch: 18/25... Training loss: 0.1076
Epoch: 18/25... Training loss: 0.1083
Epoch: 18/25... Training loss: 0.1046
Epoch: 18/25... Training loss: 0.1055
Epoch: 18/25... Training loss: 0.1086
Epoch: 18/25... Training loss: 0.1080
Epoch: 18/25... Training loss: 0.1049
Epoch: 18/25... Training loss: 0.1092
Epoch: 18/25... Training loss: 0.1017
Epoch: 18/25... Training loss: 0.1062
Epoch: 18/25... Training loss: 0.1081
Epoch: 18/25... Training loss: 0.1049
Epoch: 18/25... Training loss: 0.1065
Epoch: 18/25... Training loss: 0.1101
Epoch: 18/25... Training loss: 0.1069
Epoch: 18/25... Training loss: 0.1093
Epoch: 18/25... Training loss: 0.1095
Epoch: 18/25... Training loss: 0.1091
Epoch: 18/25... Training loss: 0.1084
Epoch: 18/25... Training loss: 0.1049
Epoch: 18/25... Training loss: 0.1079
Epoch: 18/25... Training loss: 0.1080
Epoch: 18/25... Training loss: 0.1056
Epoch: 18/25... Training loss: 0.1086
Epoch: 18/25... Training loss: 0.1058
Epoch: 18/25... Training loss: 0.1065
Epoch: 18/25... Training loss: 0.1058
Epoch: 18/25... Training loss: 0.1102
Epoch: 18/25... Training loss: 0.1081
Epoch: 18/25... Training loss: 0.1121
Epoch: 18/25... Training loss: 0.1097
Epoch: 18/25... Training loss: 0.1057
Epoch: 18/25... Training loss: 0.1045
Epoch: 18/25... Training loss: 0.1080
Epoch: 18/25... Training loss: 0.1082
Epoch: 18/25... Training loss: 0.1051
Epoch: 18/25... Training loss: 0.1064
Epoch: 18/25... Training loss: 0.1086
Epoch: 18/25... Training loss: 0.1058
Epoch: 18/25... Training loss: 0.1076
Epoch: 18/25... Training loss: 0.1061
Epoch: 18/25... Training loss: 0.1079
Epoch: 18/25... Training loss: 0.1075
Epoch: 18/25... Training loss: 0.1046
Epoch: 18/25... Training loss: 0.1054
Epoch: 18/25... Training loss: 0.1068
Epoch: 18/25... Training loss: 0.1076
Epoch: 18/25... Training loss: 0.1051
Epoch: 18/25... Training loss: 0.1091
Epoch: 18/25... Training loss: 0.1070
Epoch: 18/25... Training loss: 0.1083
Epoch: 18/25... Training loss: 0.1075
Epoch: 18/25... Training loss: 0.1053
Epoch: 18/25... Training loss: 0.1076
Epoch: 18/25... Training loss: 0.1051
Epoch: 18/25... Training loss: 0.1064
Epoch: 18/25... Training loss: 0.1073
Epoch: 18/25... Training loss: 0.1055
Epoch: 18/25... Training loss: 0.1050
Epoch: 18/25... Training loss: 0.1054
Epoch: 18/25... Training loss: 0.1095
Epoch: 18/25... Training loss: 0.1079
Epoch: 18/25... Training loss: 0.1080
Epoch: 18/25... Training loss: 0.1057
Epoch: 18/25... Training loss: 0.1058
Epoch: 18/25... Training loss: 0.1076
Epoch: 18/25... Training loss: 0.1085
Epoch: 18/25... Training loss: 0.1077
Epoch: 18/25... Training loss: 0.1070
Epoch: 18/25... Training loss: 0.1095
Epoch: 18/25... Training loss: 0.1092
Epoch: 18/25... Training loss: 0.1080
Epoch: 18/25... Training loss: 0.1096
Epoch: 18/25... Training loss: 0.1083
Epoch: 18/25... Training loss: 0.1057
Epoch: 18/25... Training loss: 0.1112
Epoch: 18/25... Training loss: 0.1056
Epoch: 18/25... Training loss: 0.1070
Epoch: 18/25... Training loss: 0.1064
Epoch: 18/25... Training loss: 0.1070
Epoch: 18/25... Training loss: 0.1042
Epoch: 18/25... Training loss: 0.1078
Epoch: 18/25... Training loss: 0.1031
Epoch: 18/25... Training loss: 0.1058
Epoch: 18/25... Training loss: 0.1067
Epoch: 18/25... Training loss: 0.1049
Epoch: 18/25... Training loss: 0.1063
Epoch: 18/25... Training loss: 0.1050
Epoch: 18/25... Training loss: 0.1069
Epoch: 18/25... Training loss: 0.1058
Epoch: 18/25... Training loss: 0.1095
Epoch: 18/25... Training loss: 0.1079
Epoch: 18/25... Training loss: 0.1090
Epoch: 18/25... Training loss: 0.1064
Epoch: 18/25... Training loss: 0.1085
Epoch: 18/25... Training loss: 0.1082
Epoch: 18/25... Training loss: 0.1033
Epoch: 18/25... Training loss: 0.1080
Epoch: 18/25... Training loss: 0.1073
Epoch: 18/25... Training loss: 0.1070
Epoch: 18/25... Training loss: 0.1078
Epoch: 18/25... Training loss: 0.1076
Epoch: 18/25... Training loss: 0.1062
Epoch: 18/25... Training loss: 0.1062
Epoch: 18/25... Training loss: 0.1055
Epoch: 18/25... Training loss: 0.1061
Epoch: 18/25... Training loss: 0.1061
Epoch: 18/25... Training loss: 0.1051
Epoch: 18/25... Training loss: 0.1048
Epoch: 18/25... Training loss: 0.1051
Epoch: 18/25... Training loss: 0.1086
Epoch: 18/25... Training loss: 0.1098
Epoch: 18/25... Training loss: 0.1076
Epoch: 18/25... Training loss: 0.1108
Epoch: 18/25... Training loss: 0.1073
Epoch: 18/25... Training loss: 0.1099
Epoch: 18/25... Training loss: 0.1049
Epoch: 18/25... Training loss: 0.1065
Epoch: 18/25... Training loss: 0.1055
Epoch: 18/25... Training loss: 0.1087
Epoch: 18/25... Training loss: 0.1084
Epoch: 18/25... Training loss: 0.1062
Epoch: 18/25... Training loss: 0.1065
Epoch: 18/25... Training loss: 0.1085
Epoch: 18/25... Training loss: 0.1072
Epoch: 18/25... Training loss: 0.1085
Epoch: 18/25... Training loss: 0.1057
Epoch: 18/25... Training loss: 0.1067
Epoch: 18/25... Training loss: 0.1045
Epoch: 18/25... Training loss: 0.1064
Epoch: 18/25... Training loss: 0.1048
Epoch: 18/25... Training loss: 0.1079
Epoch: 18/25... Training loss: 0.1087
Epoch: 18/25... Training loss: 0.1053
Epoch: 18/25... Training loss: 0.1047
Epoch: 18/25... Training loss: 0.1088
Epoch: 18/25... Training loss: 0.1039
Epoch: 18/25... Training loss: 0.1066
Epoch: 18/25... Training loss: 0.1062
Epoch: 18/25... Training loss: 0.1091
Epoch: 18/25... Training loss: 0.1060
Epoch: 18/25... Training loss: 0.1082
Epoch: 18/25... Training loss: 0.1051
Epoch: 18/25... Training loss: 0.1063
Epoch: 18/25... Training loss: 0.1081
Epoch: 18/25... Training loss: 0.1089
Epoch: 18/25... Training loss: 0.1062
Epoch: 18/25... Training loss: 0.1105
Epoch: 18/25... Training loss: 0.1080
Epoch: 18/25... Training loss: 0.1073
Epoch: 18/25... Training loss: 0.1073
Epoch: 18/25... Training loss: 0.1056
Epoch: 18/25... Training loss: 0.1078
Epoch: 19/25... Training loss: 0.1080
Epoch: 19/25... Training loss: 0.1067
Epoch: 19/25... Training loss: 0.1043
Epoch: 19/25... Training loss: 0.1062
Epoch: 19/25... Training loss: 0.1040
Epoch: 19/25... Training loss: 0.1072
Epoch: 19/25... Training loss: 0.1047
Epoch: 19/25... Training loss: 0.1078
Epoch: 19/25... Training loss: 0.1085
Epoch: 19/25... Training loss: 0.1073
Epoch: 19/25... Training loss: 0.1098
Epoch: 19/25... Training loss: 0.1044
Epoch: 19/25... Training loss: 0.1090
Epoch: 19/25... Training loss: 0.1060
Epoch: 19/25... Training loss: 0.1100
Epoch: 19/25... Training loss: 0.1050
Epoch: 19/25... Training loss: 0.1056
Epoch: 19/25... Training loss: 0.1078
Epoch: 19/25... Training loss: 0.1092
Epoch: 19/25... Training loss: 0.1107
Epoch: 19/25... Training loss: 0.1054
Epoch: 19/25... Training loss: 0.1070
Epoch: 19/25... Training loss: 0.1079
Epoch: 19/25... Training loss: 0.1100
Epoch: 19/25... Training loss: 0.1083
Epoch: 19/25... Training loss: 0.1038
Epoch: 19/25... Training loss: 0.1093
Epoch: 19/25... Training loss: 0.1129
Epoch: 19/25... Training loss: 0.1110
Epoch: 19/25... Training loss: 0.1049
Epoch: 19/25... Training loss: 0.1074
Epoch: 19/25... Training loss: 0.1035
Epoch: 19/25... Training loss: 0.1086
Epoch: 19/25... Training loss: 0.1091
Epoch: 19/25... Training loss: 0.1025
Epoch: 19/25... Training loss: 0.1086
Epoch: 19/25... Training loss: 0.1091
Epoch: 19/25... Training loss: 0.1048
Epoch: 19/25... Training loss: 0.1065
Epoch: 19/25... Training loss: 0.1077
Epoch: 19/25... Training loss: 0.1042
Epoch: 19/25... Training loss: 0.1086
Epoch: 19/25... Training loss: 0.1084
Epoch: 19/25... Training loss: 0.1085
Epoch: 19/25... Training loss: 0.1065
Epoch: 19/25... Training loss: 0.1079
Epoch: 19/25... Training loss: 0.1073
Epoch: 19/25... Training loss: 0.1055
Epoch: 19/25... Training loss: 0.1074
Epoch: 19/25... Training loss: 0.1031
Epoch: 19/25... Training loss: 0.1072
Epoch: 19/25... Training loss: 0.1060
Epoch: 19/25... Training loss: 0.1051
Epoch: 19/25... Training loss: 0.1080
Epoch: 19/25... Training loss: 0.1054
Epoch: 19/25... Training loss: 0.1095
Epoch: 19/25... Training loss: 0.1061
Epoch: 19/25... Training loss: 0.1056
Epoch: 19/25... Training loss: 0.1042
Epoch: 19/25... Training loss: 0.1054
Epoch: 19/25... Training loss: 0.1068
Epoch: 19/25... Training loss: 0.1112
Epoch: 19/25... Training loss: 0.1066
Epoch: 19/25... Training loss: 0.1083
Epoch: 19/25... Training loss: 0.1078
Epoch: 19/25... Training loss: 0.1026
Epoch: 19/25... Training loss: 0.1057
Epoch: 19/25... Training loss: 0.1052
Epoch: 19/25... Training loss: 0.1071
Epoch: 19/25... Training loss: 0.1074
Epoch: 19/25... Training loss: 0.1084
Epoch: 19/25... Training loss: 0.1062
Epoch: 19/25... Training loss: 0.1055
Epoch: 19/25... Training loss: 0.1039
Epoch: 19/25... Training loss: 0.1034
Epoch: 19/25... Training loss: 0.1077
Epoch: 19/25... Training loss: 0.1072
Epoch: 19/25... Training loss: 0.1028
Epoch: 19/25... Training loss: 0.1067
Epoch: 19/25... Training loss: 0.1074
Epoch: 19/25... Training loss: 0.1073
Epoch: 19/25... Training loss: 0.1067
Epoch: 19/25... Training loss: 0.1082
Epoch: 19/25... Training loss: 0.1056
Epoch: 19/25... Training loss: 0.1030
Epoch: 19/25... Training loss: 0.1069
Epoch: 19/25... Training loss: 0.1056
Epoch: 19/25... Training loss: 0.1082
Epoch: 19/25... Training loss: 0.1065
Epoch: 19/25... Training loss: 0.1079
Epoch: 19/25... Training loss: 0.1047
Epoch: 19/25... Training loss: 0.1068
Epoch: 19/25... Training loss: 0.1030
Epoch: 19/25... Training loss: 0.1094
Epoch: 19/25... Training loss: 0.1032
Epoch: 19/25... Training loss: 0.1082
Epoch: 19/25... Training loss: 0.1063
Epoch: 19/25... Training loss: 0.1077
Epoch: 19/25... Training loss: 0.1073
Epoch: 19/25... Training loss: 0.1076
Epoch: 19/25... Training loss: 0.1052
Epoch: 19/25... Training loss: 0.1055
Epoch: 19/25... Training loss: 0.1083
Epoch: 19/25... Training loss: 0.1072
Epoch: 19/25... Training loss: 0.1083
Epoch: 19/25... Training loss: 0.1093
Epoch: 19/25... Training loss: 0.1033
Epoch: 19/25... Training loss: 0.1082
Epoch: 19/25... Training loss: 0.1069
Epoch: 19/25... Training loss: 0.1074
Epoch: 19/25... Training loss: 0.1060
Epoch: 19/25... Training loss: 0.1074
Epoch: 19/25... Training loss: 0.1068
Epoch: 19/25... Training loss: 0.1044
Epoch: 19/25... Training loss: 0.1086
Epoch: 19/25... Training loss: 0.1038
Epoch: 19/25... Training loss: 0.1060
Epoch: 19/25... Training loss: 0.1123
Epoch: 19/25... Training loss: 0.1039
Epoch: 19/25... Training loss: 0.1050
Epoch: 19/25... Training loss: 0.1100
Epoch: 19/25... Training loss: 0.1061
Epoch: 19/25... Training loss: 0.1075
Epoch: 19/25... Training loss: 0.1052
Epoch: 19/25... Training loss: 0.1092
Epoch: 19/25... Training loss: 0.1071
Epoch: 19/25... Training loss: 0.1025
Epoch: 19/25... Training loss: 0.1035
Epoch: 19/25... Training loss: 0.1042
Epoch: 19/25... Training loss: 0.1007
Epoch: 19/25... Training loss: 0.1078
Epoch: 19/25... Training loss: 0.1048
Epoch: 19/25... Training loss: 0.1077
Epoch: 19/25... Training loss: 0.1051
Epoch: 19/25... Training loss: 0.1092
Epoch: 19/25... Training loss: 0.1053
Epoch: 19/25... Training loss: 0.1065
Epoch: 19/25... Training loss: 0.1064
Epoch: 19/25... Training loss: 0.1060
Epoch: 19/25... Training loss: 0.1017
Epoch: 19/25... Training loss: 0.1057
Epoch: 19/25... Training loss: 0.1088
Epoch: 19/25... Training loss: 0.1068
Epoch: 19/25... Training loss: 0.1104
Epoch: 19/25... Training loss: 0.1052
Epoch: 19/25... Training loss: 0.1053
Epoch: 19/25... Training loss: 0.1082
Epoch: 19/25... Training loss: 0.1103
Epoch: 19/25... Training loss: 0.1079
Epoch: 19/25... Training loss: 0.1042
Epoch: 19/25... Training loss: 0.1091
Epoch: 19/25... Training loss: 0.1076
Epoch: 19/25... Training loss: 0.1060
Epoch: 19/25... Training loss: 0.1079
Epoch: 19/25... Training loss: 0.1069
Epoch: 19/25... Training loss: 0.1071
Epoch: 19/25... Training loss: 0.1060
Epoch: 19/25... Training loss: 0.1059
Epoch: 19/25... Training loss: 0.1052
Epoch: 19/25... Training loss: 0.1046
Epoch: 19/25... Training loss: 0.1036
Epoch: 19/25... Training loss: 0.1091
Epoch: 19/25... Training loss: 0.1041
Epoch: 19/25... Training loss: 0.1070
Epoch: 19/25... Training loss: 0.1111
Epoch: 19/25... Training loss: 0.1058
Epoch: 19/25... Training loss: 0.1115
Epoch: 19/25... Training loss: 0.1089
Epoch: 19/25... Training loss: 0.1057
Epoch: 19/25... Training loss: 0.1075
Epoch: 19/25... Training loss: 0.1083
Epoch: 19/25... Training loss: 0.1075
Epoch: 19/25... Training loss: 0.1083
Epoch: 19/25... Training loss: 0.1076
Epoch: 19/25... Training loss: 0.1075
Epoch: 19/25... Training loss: 0.1056
Epoch: 19/25... Training loss: 0.1057
Epoch: 19/25... Training loss: 0.1074
Epoch: 19/25... Training loss: 0.1071
Epoch: 19/25... Training loss: 0.1060
Epoch: 19/25... Training loss: 0.1119
Epoch: 19/25... Training loss: 0.1087
Epoch: 19/25... Training loss: 0.1035
Epoch: 19/25... Training loss: 0.1071
Epoch: 19/25... Training loss: 0.1051
Epoch: 19/25... Training loss: 0.1061
Epoch: 19/25... Training loss: 0.1083
Epoch: 19/25... Training loss: 0.1074
Epoch: 19/25... Training loss: 0.1026
Epoch: 19/25... Training loss: 0.1070
Epoch: 19/25... Training loss: 0.1092
Epoch: 19/25... Training loss: 0.1079
Epoch: 19/25... Training loss: 0.1051
Epoch: 19/25... Training loss: 0.1090
Epoch: 19/25... Training loss: 0.1114
Epoch: 19/25... Training loss: 0.1034
Epoch: 19/25... Training loss: 0.1076
Epoch: 19/25... Training loss: 0.1105
Epoch: 19/25... Training loss: 0.1048
Epoch: 19/25... Training loss: 0.1094
Epoch: 19/25... Training loss: 0.1089
Epoch: 19/25... Training loss: 0.1036
Epoch: 19/25... Training loss: 0.1047
Epoch: 19/25... Training loss: 0.1099
Epoch: 19/25... Training loss: 0.1078
Epoch: 19/25... Training loss: 0.1066
Epoch: 19/25... Training loss: 0.1079
Epoch: 19/25... Training loss: 0.1085
Epoch: 19/25... Training loss: 0.1021
Epoch: 19/25... Training loss: 0.1103
Epoch: 19/25... Training loss: 0.1079
Epoch: 19/25... Training loss: 0.1044
Epoch: 19/25... Training loss: 0.1063
Epoch: 19/25... Training loss: 0.1041
Epoch: 19/25... Training loss: 0.1051
Epoch: 19/25... Training loss: 0.1060
Epoch: 19/25... Training loss: 0.1050
Epoch: 19/25... Training loss: 0.1036
Epoch: 19/25... Training loss: 0.1072
Epoch: 19/25... Training loss: 0.1041
Epoch: 19/25... Training loss: 0.1047
Epoch: 19/25... Training loss: 0.1063
Epoch: 19/25... Training loss: 0.1058
Epoch: 19/25... Training loss: 0.1081
Epoch: 19/25... Training loss: 0.1074
Epoch: 19/25... Training loss: 0.1069
Epoch: 19/25... Training loss: 0.1059
Epoch: 19/25... Training loss: 0.1054
Epoch: 19/25... Training loss: 0.1072
Epoch: 19/25... Training loss: 0.1085
Epoch: 19/25... Training loss: 0.1111
Epoch: 19/25... Training loss: 0.1053
Epoch: 19/25... Training loss: 0.1057
Epoch: 19/25... Training loss: 0.1045
Epoch: 19/25... Training loss: 0.1073
Epoch: 19/25... Training loss: 0.1074
Epoch: 19/25... Training loss: 0.1037
Epoch: 19/25... Training loss: 0.1070
Epoch: 19/25... Training loss: 0.1080
Epoch: 19/25... Training loss: 0.1043
Epoch: 19/25... Training loss: 0.1031
Epoch: 19/25... Training loss: 0.1017
Epoch: 19/25... Training loss: 0.1060
Epoch: 19/25... Training loss: 0.1071
Epoch: 19/25... Training loss: 0.1075
Epoch: 19/25... Training loss: 0.1083
Epoch: 19/25... Training loss: 0.1082
Epoch: 19/25... Training loss: 0.1024
Epoch: 19/25... Training loss: 0.1045
Epoch: 19/25... Training loss: 0.1060
Epoch: 19/25... Training loss: 0.1066
Epoch: 19/25... Training loss: 0.1039
Epoch: 19/25... Training loss: 0.1041
Epoch: 19/25... Training loss: 0.1079
Epoch: 19/25... Training loss: 0.1057
Epoch: 19/25... Training loss: 0.1046
Epoch: 19/25... Training loss: 0.1070
Epoch: 19/25... Training loss: 0.1061
Epoch: 19/25... Training loss: 0.1094
Epoch: 19/25... Training loss: 0.1095
Epoch: 19/25... Training loss: 0.1080
Epoch: 19/25... Training loss: 0.1095
Epoch: 19/25... Training loss: 0.1079
Epoch: 19/25... Training loss: 0.1076
Epoch: 19/25... Training loss: 0.1048
Epoch: 19/25... Training loss: 0.1057
Epoch: 19/25... Training loss: 0.1069
Epoch: 19/25... Training loss: 0.1059
Epoch: 19/25... Training loss: 0.1070
Epoch: 19/25... Training loss: 0.1044
Epoch: 19/25... Training loss: 0.1055
Epoch: 19/25... Training loss: 0.1056
Epoch: 19/25... Training loss: 0.1065
Epoch: 19/25... Training loss: 0.1068
Epoch: 19/25... Training loss: 0.1079
Epoch: 19/25... Training loss: 0.1053
Epoch: 19/25... Training loss: 0.1061
Epoch: 19/25... Training loss: 0.1094
Epoch: 19/25... Training loss: 0.1081
Epoch: 19/25... Training loss: 0.1084
Epoch: 19/25... Training loss: 0.1045
Epoch: 19/25... Training loss: 0.1065
Epoch: 19/25... Training loss: 0.1043
Epoch: 19/25... Training loss: 0.1070
Epoch: 19/25... Training loss: 0.1021
Epoch: 19/25... Training loss: 0.1038
Epoch: 19/25... Training loss: 0.1059
Epoch: 19/25... Training loss: 0.1054
Epoch: 19/25... Training loss: 0.1066
Epoch: 19/25... Training loss: 0.1075
Epoch: 19/25... Training loss: 0.1089
Epoch: 19/25... Training loss: 0.1061
Epoch: 19/25... Training loss: 0.1066
Epoch: 19/25... Training loss: 0.1074
Epoch: 19/25... Training loss: 0.1044
Epoch: 19/25... Training loss: 0.1043
Epoch: 19/25... Training loss: 0.1049
Epoch: 19/25... Training loss: 0.1084
Epoch: 19/25... Training loss: 0.1038
Epoch: 19/25... Training loss: 0.1044
Epoch: 20/25... Training loss: 0.1085
Epoch: 20/25... Training loss: 0.1068
Epoch: 20/25... Training loss: 0.1067
Epoch: 20/25... Training loss: 0.1082
Epoch: 20/25... Training loss: 0.1069
Epoch: 20/25... Training loss: 0.1071
Epoch: 20/25... Training loss: 0.1078
Epoch: 20/25... Training loss: 0.1078
Epoch: 20/25... Training loss: 0.1045
Epoch: 20/25... Training loss: 0.1101
Epoch: 20/25... Training loss: 0.1076
Epoch: 20/25... Training loss: 0.1068
Epoch: 20/25... Training loss: 0.1045
Epoch: 20/25... Training loss: 0.1056
Epoch: 20/25... Training loss: 0.1066
Epoch: 20/25... Training loss: 0.1085
Epoch: 20/25... Training loss: 0.1076
Epoch: 20/25... Training loss: 0.1082
Epoch: 20/25... Training loss: 0.1032
Epoch: 20/25... Training loss: 0.1033
Epoch: 20/25... Training loss: 0.1060
Epoch: 20/25... Training loss: 0.1033
Epoch: 20/25... Training loss: 0.1023
Epoch: 20/25... Training loss: 0.1054
Epoch: 20/25... Training loss: 0.1043
Epoch: 20/25... Training loss: 0.1066
Epoch: 20/25... Training loss: 0.1060
Epoch: 20/25... Training loss: 0.1075
Epoch: 20/25... Training loss: 0.1059
Epoch: 20/25... Training loss: 0.1058
Epoch: 20/25... Training loss: 0.1058
Epoch: 20/25... Training loss: 0.1024
Epoch: 20/25... Training loss: 0.1050
Epoch: 20/25... Training loss: 0.1065
Epoch: 20/25... Training loss: 0.1069
Epoch: 20/25... Training loss: 0.1079
Epoch: 20/25... Training loss: 0.1037
Epoch: 20/25... Training loss: 0.0995
Epoch: 20/25... Training loss: 0.1052
Epoch: 20/25... Training loss: 0.1049
Epoch: 20/25... Training loss: 0.1058
Epoch: 20/25... Training loss: 0.1099
Epoch: 20/25... Training loss: 0.1068
Epoch: 20/25... Training loss: 0.1050
Epoch: 20/25... Training loss: 0.1047
Epoch: 20/25... Training loss: 0.1082
Epoch: 20/25... Training loss: 0.1034
Epoch: 20/25... Training loss: 0.1029
Epoch: 20/25... Training loss: 0.1046
Epoch: 20/25... Training loss: 0.1070
Epoch: 20/25... Training loss: 0.1053
Epoch: 20/25... Training loss: 0.1068
Epoch: 20/25... Training loss: 0.1050
Epoch: 20/25... Training loss: 0.1038
Epoch: 20/25... Training loss: 0.1077
Epoch: 20/25... Training loss: 0.1095
Epoch: 20/25... Training loss: 0.1049
Epoch: 20/25... Training loss: 0.1096
Epoch: 20/25... Training loss: 0.1073
Epoch: 20/25... Training loss: 0.1073
Epoch: 20/25... Training loss: 0.1064
Epoch: 20/25... Training loss: 0.1074
Epoch: 20/25... Training loss: 0.1075
Epoch: 20/25... Training loss: 0.1090
Epoch: 20/25... Training loss: 0.1077
Epoch: 20/25... Training loss: 0.1044
Epoch: 20/25... Training loss: 0.1062
Epoch: 20/25... Training loss: 0.1043
Epoch: 20/25... Training loss: 0.1097
Epoch: 20/25... Training loss: 0.1112
Epoch: 20/25... Training loss: 0.1029
Epoch: 20/25... Training loss: 0.1042
Epoch: 20/25... Training loss: 0.1079
Epoch: 20/25... Training loss: 0.1079
Epoch: 20/25... Training loss: 0.1061
Epoch: 20/25... Training loss: 0.1055
Epoch: 20/25... Training loss: 0.1050
Epoch: 20/25... Training loss: 0.1030
Epoch: 20/25... Training loss: 0.1059
Epoch: 20/25... Training loss: 0.1024
Epoch: 20/25... Training loss: 0.1085
Epoch: 20/25... Training loss: 0.1037
Epoch: 20/25... Training loss: 0.1050
Epoch: 20/25... Training loss: 0.1063
Epoch: 20/25... Training loss: 0.1044
Epoch: 20/25... Training loss: 0.1052
Epoch: 20/25... Training loss: 0.1067
Epoch: 20/25... Training loss: 0.1064
Epoch: 20/25... Training loss: 0.1064
Epoch: 20/25... Training loss: 0.1069
Epoch: 20/25... Training loss: 0.1064
Epoch: 20/25... Training loss: 0.1068
Epoch: 20/25... Training loss: 0.1051
Epoch: 20/25... Training loss: 0.1051
Epoch: 20/25... Training loss: 0.1083
Epoch: 20/25... Training loss: 0.1054
Epoch: 20/25... Training loss: 0.1069
Epoch: 20/25... Training loss: 0.1068
Epoch: 20/25... Training loss: 0.1056
Epoch: 20/25... Training loss: 0.1063
Epoch: 20/25... Training loss: 0.1052
Epoch: 20/25... Training loss: 0.1072
Epoch: 20/25... Training loss: 0.1089
Epoch: 20/25... Training loss: 0.1098
Epoch: 20/25... Training loss: 0.1100
Epoch: 20/25... Training loss: 0.1058
Epoch: 20/25... Training loss: 0.1085
Epoch: 20/25... Training loss: 0.1076
Epoch: 20/25... Training loss: 0.1029
Epoch: 20/25... Training loss: 0.1059
Epoch: 20/25... Training loss: 0.1067
Epoch: 20/25... Training loss: 0.1059
Epoch: 20/25... Training loss: 0.1046
Epoch: 20/25... Training loss: 0.1078
Epoch: 20/25... Training loss: 0.1063
Epoch: 20/25... Training loss: 0.1077
Epoch: 20/25... Training loss: 0.1053
Epoch: 20/25... Training loss: 0.1101
Epoch: 20/25... Training loss: 0.1081
Epoch: 20/25... Training loss: 0.1082
Epoch: 20/25... Training loss: 0.1066
Epoch: 20/25... Training loss: 0.1045
Epoch: 20/25... Training loss: 0.1084
Epoch: 20/25... Training loss: 0.1040
Epoch: 20/25... Training loss: 0.1074
Epoch: 20/25... Training loss: 0.1045
Epoch: 20/25... Training loss: 0.1067
Epoch: 20/25... Training loss: 0.1101
Epoch: 20/25... Training loss: 0.1086
Epoch: 20/25... Training loss: 0.1073
Epoch: 20/25... Training loss: 0.1065
Epoch: 20/25... Training loss: 0.1042
Epoch: 20/25... Training loss: 0.1055
Epoch: 20/25... Training loss: 0.1037
Epoch: 20/25... Training loss: 0.1118
Epoch: 20/25... Training loss: 0.1088
Epoch: 20/25... Training loss: 0.1036
Epoch: 20/25... Training loss: 0.1086
Epoch: 20/25... Training loss: 0.1075
Epoch: 20/25... Training loss: 0.1057
Epoch: 20/25... Training loss: 0.1059
Epoch: 20/25... Training loss: 0.1046
Epoch: 20/25... Training loss: 0.1063
Epoch: 20/25... Training loss: 0.1103
Epoch: 20/25... Training loss: 0.1095
Epoch: 20/25... Training loss: 0.1077
Epoch: 20/25... Training loss: 0.1043
Epoch: 20/25... Training loss: 0.1057
Epoch: 20/25... Training loss: 0.1042
Epoch: 20/25... Training loss: 0.1074
Epoch: 20/25... Training loss: 0.1052
Epoch: 20/25... Training loss: 0.1099
Epoch: 20/25... Training loss: 0.1059
Epoch: 20/25... Training loss: 0.1085
Epoch: 20/25... Training loss: 0.1066
Epoch: 20/25... Training loss: 0.1086
Epoch: 20/25... Training loss: 0.1063
Epoch: 20/25... Training loss: 0.1053
Epoch: 20/25... Training loss: 0.1074
Epoch: 20/25... Training loss: 0.1078
Epoch: 20/25... Training loss: 0.1057
Epoch: 20/25... Training loss: 0.1062
Epoch: 20/25... Training loss: 0.1050
Epoch: 20/25... Training loss: 0.1071
Epoch: 20/25... Training loss: 0.1063
Epoch: 20/25... Training loss: 0.1085
Epoch: 20/25... Training loss: 0.1026
Epoch: 20/25... Training loss: 0.1089
Epoch: 20/25... Training loss: 0.1072
Epoch: 20/25... Training loss: 0.1062
Epoch: 20/25... Training loss: 0.1055
Epoch: 20/25... Training loss: 0.1035
Epoch: 20/25... Training loss: 0.1039
Epoch: 20/25... Training loss: 0.1067
Epoch: 20/25... Training loss: 0.1066
Epoch: 20/25... Training loss: 0.1039
Epoch: 20/25... Training loss: 0.1041
Epoch: 20/25... Training loss: 0.1072
Epoch: 20/25... Training loss: 0.1052
Epoch: 20/25... Training loss: 0.1079
Epoch: 20/25... Training loss: 0.1086
Epoch: 20/25... Training loss: 0.1063
Epoch: 20/25... Training loss: 0.1033
Epoch: 20/25... Training loss: 0.1057
Epoch: 20/25... Training loss: 0.1084
Epoch: 20/25... Training loss: 0.1086
Epoch: 20/25... Training loss: 0.1041
Epoch: 20/25... Training loss: 0.1026
Epoch: 20/25... Training loss: 0.1041
Epoch: 20/25... Training loss: 0.1044
Epoch: 20/25... Training loss: 0.1071
Epoch: 20/25... Training loss: 0.1060
Epoch: 20/25... Training loss: 0.1079
Epoch: 20/25... Training loss: 0.1053
Epoch: 20/25... Training loss: 0.1063
Epoch: 20/25... Training loss: 0.1044
Epoch: 20/25... Training loss: 0.1015
Epoch: 20/25... Training loss: 0.1066
Epoch: 20/25... Training loss: 0.1058
Epoch: 20/25... Training loss: 0.1039
Epoch: 20/25... Training loss: 0.1082
Epoch: 20/25... Training loss: 0.1059
Epoch: 20/25... Training loss: 0.1077
Epoch: 20/25... Training loss: 0.1046
Epoch: 20/25... Training loss: 0.1051
Epoch: 20/25... Training loss: 0.1088
Epoch: 20/25... Training loss: 0.1069
Epoch: 20/25... Training loss: 0.1047
Epoch: 20/25... Training loss: 0.1066
Epoch: 20/25... Training loss: 0.1077
Epoch: 20/25... Training loss: 0.1086
Epoch: 20/25... Training loss: 0.1080
Epoch: 20/25... Training loss: 0.1073
Epoch: 20/25... Training loss: 0.1036
Epoch: 20/25... Training loss: 0.1033
Epoch: 20/25... Training loss: 0.1023
Epoch: 20/25... Training loss: 0.1048
Epoch: 20/25... Training loss: 0.1023
Epoch: 20/25... Training loss: 0.1021
Epoch: 20/25... Training loss: 0.1056
Epoch: 20/25... Training loss: 0.1079
Epoch: 20/25... Training loss: 0.1042
Epoch: 20/25... Training loss: 0.1048
Epoch: 20/25... Training loss: 0.1062
Epoch: 20/25... Training loss: 0.1065
Epoch: 20/25... Training loss: 0.1070
Epoch: 20/25... Training loss: 0.1062
Epoch: 20/25... Training loss: 0.1019
Epoch: 20/25... Training loss: 0.1061
Epoch: 20/25... Training loss: 0.1085
Epoch: 20/25... Training loss: 0.1063
Epoch: 20/25... Training loss: 0.1052
Epoch: 20/25... Training loss: 0.1018
Epoch: 20/25... Training loss: 0.1073
Epoch: 20/25... Training loss: 0.1012
Epoch: 20/25... Training loss: 0.1068
Epoch: 20/25... Training loss: 0.1023
Epoch: 20/25... Training loss: 0.1055
Epoch: 20/25... Training loss: 0.1011
Epoch: 20/25... Training loss: 0.1085
Epoch: 20/25... Training loss: 0.1065
Epoch: 20/25... Training loss: 0.1020
Epoch: 20/25... Training loss: 0.1050
Epoch: 20/25... Training loss: 0.1071
Epoch: 20/25... Training loss: 0.1065
Epoch: 20/25... Training loss: 0.1038
Epoch: 20/25... Training loss: 0.1093
Epoch: 20/25... Training loss: 0.1044
Epoch: 20/25... Training loss: 0.1074
Epoch: 20/25... Training loss: 0.1098
Epoch: 20/25... Training loss: 0.1055
Epoch: 20/25... Training loss: 0.1084
Epoch: 20/25... Training loss: 0.1058
Epoch: 20/25... Training loss: 0.1060
Epoch: 20/25... Training loss: 0.1052
Epoch: 20/25... Training loss: 0.1023
Epoch: 20/25... Training loss: 0.1065
Epoch: 20/25... Training loss: 0.1048
Epoch: 20/25... Training loss: 0.1044
Epoch: 20/25... Training loss: 0.1055
Epoch: 20/25... Training loss: 0.1045
Epoch: 20/25... Training loss: 0.1092
Epoch: 20/25... Training loss: 0.1044
Epoch: 20/25... Training loss: 0.1075
Epoch: 20/25... Training loss: 0.1050
Epoch: 20/25... Training loss: 0.1098
Epoch: 20/25... Training loss: 0.1066
Epoch: 20/25... Training loss: 0.1113
Epoch: 20/25... Training loss: 0.1058
Epoch: 20/25... Training loss: 0.1087
Epoch: 20/25... Training loss: 0.1059
Epoch: 20/25... Training loss: 0.1080
Epoch: 20/25... Training loss: 0.1061
Epoch: 20/25... Training loss: 0.1055
Epoch: 20/25... Training loss: 0.1053
Epoch: 20/25... Training loss: 0.1056
Epoch: 20/25... Training loss: 0.1020
Epoch: 20/25... Training loss: 0.1060
Epoch: 20/25... Training loss: 0.1082
Epoch: 20/25... Training loss: 0.1083
Epoch: 20/25... Training loss: 0.1078
Epoch: 20/25... Training loss: 0.1053
Epoch: 20/25... Training loss: 0.1090
Epoch: 20/25... Training loss: 0.1071
Epoch: 20/25... Training loss: 0.1067
Epoch: 20/25... Training loss: 0.1085
Epoch: 20/25... Training loss: 0.1034
Epoch: 20/25... Training loss: 0.1071
Epoch: 20/25... Training loss: 0.1045
Epoch: 20/25... Training loss: 0.1047
Epoch: 20/25... Training loss: 0.1046
Epoch: 20/25... Training loss: 0.1035
Epoch: 20/25... Training loss: 0.1061
Epoch: 20/25... Training loss: 0.1082
Epoch: 20/25... Training loss: 0.1086
Epoch: 20/25... Training loss: 0.1030
Epoch: 20/25... Training loss: 0.1075
Epoch: 20/25... Training loss: 0.1030
Epoch: 20/25... Training loss: 0.1050
Epoch: 20/25... Training loss: 0.1063
Epoch: 21/25... Training loss: 0.1047
Epoch: 21/25... Training loss: 0.1052
Epoch: 21/25... Training loss: 0.1085
Epoch: 21/25... Training loss: 0.1026
Epoch: 21/25... Training loss: 0.1102
Epoch: 21/25... Training loss: 0.1080
Epoch: 21/25... Training loss: 0.1036
Epoch: 21/25... Training loss: 0.1036
Epoch: 21/25... Training loss: 0.1051
Epoch: 21/25... Training loss: 0.1034
Epoch: 21/25... Training loss: 0.1075
Epoch: 21/25... Training loss: 0.1089
Epoch: 21/25... Training loss: 0.1035
Epoch: 21/25... Training loss: 0.1048
Epoch: 21/25... Training loss: 0.1067
Epoch: 21/25... Training loss: 0.1029
Epoch: 21/25... Training loss: 0.1029
Epoch: 21/25... Training loss: 0.1059
Epoch: 21/25... Training loss: 0.1016
Epoch: 21/25... Training loss: 0.1044
Epoch: 21/25... Training loss: 0.1050
Epoch: 21/25... Training loss: 0.1069
Epoch: 21/25... Training loss: 0.1035
Epoch: 21/25... Training loss: 0.1052
Epoch: 21/25... Training loss: 0.1072
Epoch: 21/25... Training loss: 0.1078
Epoch: 21/25... Training loss: 0.1099
Epoch: 21/25... Training loss: 0.1044
Epoch: 21/25... Training loss: 0.1073
Epoch: 21/25... Training loss: 0.1058
Epoch: 21/25... Training loss: 0.0999
[... per-batch training losses for epochs 21-25 continue in roughly the 0.098-0.113 range through the end of training ...]
Epoch: 25/25... Training loss: 0.1015
###Markdown
Checking out the performanceHere I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
###Code
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
###Output
_____no_output_____
###Markdown
Convolutional AutoencoderSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
###Code
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
###Output
_____no_output_____
###Markdown
Network ArchitectureThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoderOkay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers *aren't*. Usually, you'll see **transposed convolution** layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 patch in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, [`tf.nn.conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose). However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with [`tf.image.resize_images`](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/image/resize_images), followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.> **Exercise:** Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used to reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena *et al* claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in `tf.image.resize_images` or use [`tf.image.resize_nearest_neighbor`](https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor). For convolutional layers, use [`tf.layers.conv2d`](https://www.tensorflow.org/api_docs/python/tf/layers/conv2d).
For example, you would write `conv1 = tf.layers.conv2d(inputs, 32, (5,5), padding='same', activation=tf.nn.relu)` for a layer with a depth of 32, a 5x5 kernel, stride of (1,1), padding is 'same', and a ReLU activation. Similarly, for the max-pool layers, use [`tf.layers.max_pooling2d`](https://www.tensorflow.org/api_docs/python/tf/layers/max_pooling2d).
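As a quick aside before the exercise code, here is a minimal sketch (my addition, not part of the original exercise, using a throwaway placeholder as input) contrasting a transposed convolution with the resize-then-convolve pattern recommended above:
###Code
# Sketch only: two ways of growing a small 4x4x8 tensor back up (assumes tf is already imported)
small = tf.placeholder(tf.float32, (None, 4, 4, 8))
# 1) Transposed convolution -- prone to checkerboard artifacts; a stride of 2 turns 4x4 into 8x8 here
grown_t = tf.layers.conv2d_transpose(small, 8, (3, 3), strides=(2, 2), padding='same')
# 2) Nearest-neighbor resize to exactly 7x7, then an ordinary convolution (the approach used below)
grown_r = tf.layers.conv2d(tf.image.resize_nearest_neighbor(small, (7, 7)),
                           8, (3, 3), padding='same', activation=tf.nn.relu)
print(grown_t.get_shape(), grown_r.get_shape())
###Output
_____no_output_____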
###Code
learning_rate = 0.001
# Input and target placeholders
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 16, (3, 3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, (2, 2), (2, 2), padding='same')
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, 8, (3, 3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, (2, 2), (2, 2), padding='same')
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, (3, 3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, (2, 2), (2, 2), padding='same')
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7, 7))
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, (3, 3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14, 14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (3, 3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28, 28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, (3, 3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(conv6, 1, (3, 3), padding='same', activation=None)
# Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
###Output
_____no_output_____
###Markdown
TrainingAs before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
###Code
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
###Output
_____no_output_____
###Markdown
DenoisingAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.> **Exercise:** Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
###Code
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 32, (3, 3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, (2, 2), (2, 2), padding='same')
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, (3, 3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, (2, 2), (2, 2), padding='same')
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, (3, 3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, (2, 2), (2, 2), padding='same')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7, 7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, (3, 3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14, 14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, (3, 3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28, 28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, (3, 3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (3, 3), padding='same', activation=None)
# Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
###Output
_____no_output_____
###Markdown
Checking out the performanceHere I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
###Code
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
###Output
_____no_output_____
###Markdown
Convolutional AutoencoderSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
###Code
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
###Output
_____no_output_____
###Markdown
Network ArchitectureThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoderOkay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers *aren't*. Usually, you'll see **transposed convolution** layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, [`tf.nn.conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose). However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with [`tf.image.resize_images`](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/image/resize_images), followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.> **Exercise:** Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena *et al* claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in `tf.image.resize_images` or use [`tf.image.resize_nearest_neighbor`]( `https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor). For convolutional layers, use [`tf.layers.conv2d`](https://www.tensorflow.org/api_docs/python/tf/layers/conv2d). 
For example, you would write `conv1 = tf.layers.conv2d(inputs, 32, (5,5), padding='same', activation=tf.nn.relu)` for a layer with a depth of 32, a 5x5 kernel, stride of (1,1), padding is 'same', and a ReLU activation. Similarly, for the max-pool layers, use [`tf.layers.max_pooling2d`](https://www.tensorflow.org/api_docs/python/tf/layers/max_pooling2d).
###Code
learning_rate = 0.001
# Input and target placeholders
inputs_ = tf.placeholder(shape=[None, 28, 28, 1], dtype=tf.float32)
targets_ = tf.placeholder(shape=[None, 28, 28, 1], dtype=tf.float32)
### Encoder
conv1 = tf.layers.conv2d(inputs_, filters=16, kernel_size=(3,3), padding='SAME', activation=tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='SAME')
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, filters=8, kernel_size=(3,3), padding='SAME', activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='SAME')
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, filters=8, kernel_size=(3,3), padding='SAME', activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='SAME')
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, filters=8, kernel_size=(3,3), activation=tf.nn.relu, padding='same')
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, filters=8, kernel_size=(3,3), activation=tf.nn.relu, padding='same')
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, filters=16, kernel_size=(3,3), activation=tf.nn.relu, padding='same')
# Now 28x28x16
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=targets_)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
###Output
_____no_output_____
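###Markdown
A quick sanity check (my addition, assuming the previous cell has been run) is to print the shapes TensorFlow inferred for the graph above and compare them against the sizes in the comments.
###Code
# Shapes inferred at graph-construction time (sketch)
print("encoded:", encoded.get_shape())   # should report (?, 4, 4, 8)
print("logits: ", logits.get_shape())    # should report (?, 28, 28, 1)
###Output
_____no_output_____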
###Markdown
TrainingAs before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
###Code
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
###Output
_____no_output_____
###Markdown
DenoisingAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.> **Exercise:** Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
###Code
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
# Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
###Output
_____no_output_____
###Markdown
Checking out the performanceHere I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
###Code
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
###Output
_____no_output_____
###Markdown
Convolutional AutoencoderSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
###Code
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
###Output
_____no_output_____
###Markdown
Network ArchitectureThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoderOkay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers *aren't*. Usually, you'll see **transposed convolution** layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, [`tf.nn.conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose). However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with [`tf.image.resize_images`](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/image/resize_images), followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.> **Exercise:** Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena *et al* claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in `tf.image.resize_images` or use [`tf.image.resize_nearest_neighbor`](https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor). For convolutional layers, use [`tf.layers.conv2d`](https://www.tensorflow.org/api_docs/python/tf/layers/conv2d). 
For example, you would write `conv1 = tf.layers.conv2d(inputs, 32, (5,5), padding='same', activation=tf.nn.relu)` for a layer with a depth of 32, a 5x5 kernel, stride of (1,1), padding is 'same', and a ReLU activation. Similarly, for the max-pool layers, use [`tf.layers.max_pooling2d`](https://www.tensorflow.org/api_docs/python/tf/layers/max_pooling2d).
###Code
learning_rate = 0.001
# Input and target placeholders
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1))
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1))
### Encoder
conv1 = tf.layers.conv2d(inputs_, 16, (5, 5), padding='same', activation=tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, (2, 2), (2, 2), padding='same')
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, 8, (5, 5), padding='same', activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, (2, 2), (2, 2), padding='same')
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, (5, 5), padding='same', activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, (2, 2), (2, 2), padding='same')
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7, 7))
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, (5, 5), padding='same', activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14, 14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (5, 5), padding='same', activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28, 28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, (5, 5), padding='same', activation=tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
###Output
_____no_output_____
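###Markdown
The "pass logits through sigmoid and calculate the cross-entropy loss" comment hides a useful detail: `tf.nn.sigmoid_cross_entropy_with_logits` folds the sigmoid into the numerically stable formula max(x, 0) - x*z + log(1 + exp(-|x|)). A small NumPy sketch (my addition, with made-up logits and targets) shows it matches the naive sigmoid-then-cross-entropy computation:
###Code
# Sketch: stable formula vs. naive sigmoid + cross-entropy, on toy values
x = np.array([-2.0, 0.5, 3.0])          # example logits (made up)
z = np.array([0.0, 1.0, 1.0])           # example targets (made up)
stable = np.maximum(x, 0) - x * z + np.log1p(np.exp(-np.abs(x)))
s = 1 / (1 + np.exp(-x))                # sigmoid
naive = -(z * np.log(s) + (1 - z) * np.log(1 - s))
print(stable)
print(naive)                            # agrees with `stable` up to floating-point error
###Output
_____no_output_____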
###Markdown
TrainingAs before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
###Code
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
###Output
_____no_output_____
###Markdown
DenoisingAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.> **Exercise:** Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
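Before building the network, it can help to look at the corruption model on its own. The sketch below (my addition, on a few made-up pixel values) is exactly what the training loop does later: add scaled Gaussian noise, then clip back into [0, 1].
###Code
# Corruption model sketch: x_noisy = clip(x + 0.5 * N(0, 1), 0, 1), applied per pixel
x = np.array([0.0, 0.2, 0.9])                      # made-up pixel values
x_noisy = np.clip(x + 0.5 * np.random.randn(*x.shape), 0., 1.)
print(x_noisy)
###Output
_____no_output_____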
###Code
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits, name='decoded')
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(0.001).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
###Output
_____no_output_____
###Markdown
Checking out the performanceHere I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
###Code
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
###Output
_____no_output_____
###Markdown
Convolutional AutoencoderSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
###Code
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
###Output
_____no_output_____
###Markdown
Network ArchitectureThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoderOkay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers *aren't*. Usually, you'll see **transposed convolution** layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, [`tf.nn.conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose). However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with [`tf.image.resize_images`](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/image/resize_images), followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.> **Exercise:** Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena *et al* claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in `tf.image.resize_images` or use [`tf.image.resize_nearest_neighbor`]( `https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor). For convolutional layers, use [`tf.layers.conv2d`](https://www.tensorflow.org/api_docs/python/tf/layers/conv2d). 
For example, you would write `conv1 = tf.layers.conv2d(inputs, 32, (5,5), padding='same', activation=tf.nn.relu)` for a layer with a depth of 32, a 5x5 kernel, stride of (1,1), padding is 'same', and a ReLU activation. Similarly, for the max-pool layers, use [`tf.layers.max_pooling2d`](https://www.tensorflow.org/api_docs/python/tf/layers/max_pooling2d).
###Code
tf.reset_default_graph()
learning_rate = 0.001
# Input and target placeholders
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name="inputs")
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name="targets")
filter_size = (5, 5)
### Encoder
conv1 = tf.layers.conv2d(inputs_, 16, filter_size, padding="same", activation=tf.nn.relu, name="conv1")
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, (2, 2), (2, 2), name="maxpool1")
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, 8, filter_size, padding="same", activation=tf.nn.relu, name="conv2")
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, (2, 2), (2, 2), name="maxpool2")
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, filter_size, padding="same", activation=tf.nn.relu, name="conv3")
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, (2, 2), (2, 2), padding="same", name="encoded")
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7, 7), name="upsample1")
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, filter_size, padding="same", activation=tf.nn.relu, name="conv4")
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14, 14), name="upsample2")
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, filter_size, padding="same", activation=tf.nn.relu, name="conv5")
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28, 28), name="upsample3")
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, filter_size, padding="same", activation=tf.nn.relu, name="conv6")
# Now 28x28x16
logits = tf.layers.conv2d(conv6, 1, filter_size, padding="same", activation=None, name="logits")
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits, name="decoded")
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
###Output
_____no_output_____
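###Markdown
One payoff of the `name=` arguments used above is that pieces of the graph are easy to find again later. A small sketch (my addition, assuming the previous cell has been run) lists the trainable variables belonging to the first convolutional layer:
###Code
# List trainable variables created by the layer named "conv1" (sketch)
for v in tf.trainable_variables():
    if v.name.startswith('conv1'):
        print(v.name, v.get_shape())
###Output
_____no_output_____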
###Markdown
TrainingAs before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
###Code
sess = tf.Session()
epochs = 40
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
if(ii%30 == 0):
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
###Output
_____no_output_____
###Markdown
DenoisingAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.> **Exercise:** Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
###Code
tf.reset_default_graph()
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
filter_size= (3, 3)
### Encoder
conv1 = tf.layers.conv2d(inputs_, 32, filter_size, padding="same", activation=tf.nn.relu, name="conv1")
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, (2, 2), (2, 2), name="maxpool1")
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, filter_size, padding="same", activation=tf.nn.relu, name="conv2")
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, (2, 2), (2, 2), name="maxpool2")
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, filter_size, padding="same", activation=tf.nn.relu, name="conv3")
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, (2, 2), (2, 2), padding="same", name="encoded")
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7, 7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, filter_size, padding="same", activation=tf.nn.relu, name="conv4")
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14, 14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, filter_size, padding="same", activation=tf.nn.relu, name="conv5")
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28, 28), name="upsample3")
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, filter_size, padding="same", activation=tf.nn.relu, name="conv6")
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, filter_size, padding="same", activation=None, name="logits")
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits, name="decoded")
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
if(ii % 100 == 0):
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
###Output
Epoch: 1/100... Training loss: 0.6888
Epoch: 1/100... Training loss: 0.2124
Epoch: 1/100... Training loss: 0.1803
Epoch: 2/100... Training loss: 0.1669
Epoch: 2/100... Training loss: 0.1576
Epoch: 2/100... Training loss: 0.1551
Epoch: 3/100... Training loss: 0.1464
Epoch: 3/100... Training loss: 0.1446
Epoch: 3/100... Training loss: 0.1372
Epoch: 4/100... Training loss: 0.1353
Epoch: 4/100... Training loss: 0.1329
Epoch: 4/100... Training loss: 0.1296
Epoch: 5/100... Training loss: 0.1314
Epoch: 5/100... Training loss: 0.1259
Epoch: 5/100... Training loss: 0.1212
Epoch: 6/100... Training loss: 0.1235
Epoch: 6/100... Training loss: 0.1196
Epoch: 6/100... Training loss: 0.1199
Epoch: 7/100... Training loss: 0.1191
Epoch: 7/100... Training loss: 0.1224
Epoch: 7/100... Training loss: 0.1197
Epoch: 8/100... Training loss: 0.1190
Epoch: 8/100... Training loss: 0.1188
Epoch: 8/100... Training loss: 0.1204
Epoch: 9/100... Training loss: 0.1186
Epoch: 9/100... Training loss: 0.1179
Epoch: 9/100... Training loss: 0.1136
Epoch: 10/100... Training loss: 0.1108
Epoch: 10/100... Training loss: 0.1101
Epoch: 10/100... Training loss: 0.1136
Epoch: 11/100... Training loss: 0.1132
Epoch: 11/100... Training loss: 0.1093
Epoch: 11/100... Training loss: 0.1099
Epoch: 12/100... Training loss: 0.1095
Epoch: 12/100... Training loss: 0.1105
Epoch: 12/100... Training loss: 0.1078
Epoch: 13/100... Training loss: 0.1106
Epoch: 13/100... Training loss: 0.1115
Epoch: 13/100... Training loss: 0.1078
Epoch: 14/100... Training loss: 0.1091
Epoch: 14/100... Training loss: 0.1091
Epoch: 14/100... Training loss: 0.1075
Epoch: 15/100... Training loss: 0.1103
Epoch: 15/100... Training loss: 0.1095
Epoch: 15/100... Training loss: 0.1079
Epoch: 16/100... Training loss: 0.1096
Epoch: 16/100... Training loss: 0.1043
Epoch: 16/100... Training loss: 0.1055
Epoch: 17/100... Training loss: 0.1091
Epoch: 17/100... Training loss: 0.1084
Epoch: 17/100... Training loss: 0.1063
Epoch: 18/100... Training loss: 0.1075
Epoch: 18/100... Training loss: 0.1045
Epoch: 18/100... Training loss: 0.1085
Epoch: 19/100... Training loss: 0.1098
Epoch: 19/100... Training loss: 0.1073
Epoch: 19/100... Training loss: 0.1033
Epoch: 20/100... Training loss: 0.1045
Epoch: 20/100... Training loss: 0.1079
Epoch: 20/100... Training loss: 0.1047
Epoch: 21/100... Training loss: 0.1044
Epoch: 21/100... Training loss: 0.1045
Epoch: 21/100... Training loss: 0.1049
Epoch: 22/100... Training loss: 0.1091
Epoch: 22/100... Training loss: 0.1066
Epoch: 22/100... Training loss: 0.1045
Epoch: 23/100... Training loss: 0.1043
Epoch: 23/100... Training loss: 0.1093
Epoch: 23/100... Training loss: 0.1043
Epoch: 24/100... Training loss: 0.1073
Epoch: 24/100... Training loss: 0.1031
Epoch: 24/100... Training loss: 0.1031
Epoch: 25/100... Training loss: 0.1053
Epoch: 25/100... Training loss: 0.1039
Epoch: 25/100... Training loss: 0.1056
Epoch: 26/100... Training loss: 0.1042
Epoch: 26/100... Training loss: 0.1042
Epoch: 26/100... Training loss: 0.1063
Epoch: 27/100... Training loss: 0.1045
Epoch: 27/100... Training loss: 0.1016
Epoch: 27/100... Training loss: 0.1032
Epoch: 28/100... Training loss: 0.1045
Epoch: 28/100... Training loss: 0.1003
Epoch: 28/100... Training loss: 0.1005
Epoch: 29/100... Training loss: 0.1041
Epoch: 29/100... Training loss: 0.1029
Epoch: 29/100... Training loss: 0.1035
Epoch: 30/100... Training loss: 0.1052
Epoch: 30/100... Training loss: 0.1056
Epoch: 30/100... Training loss: 0.1025
Epoch: 31/100... Training loss: 0.1006
Epoch: 31/100... Training loss: 0.1001
Epoch: 31/100... Training loss: 0.1010
Epoch: 32/100... Training loss: 0.1052
Epoch: 32/100... Training loss: 0.1017
Epoch: 32/100... Training loss: 0.1021
Epoch: 33/100... Training loss: 0.1016
Epoch: 33/100... Training loss: 0.1034
Epoch: 33/100... Training loss: 0.1040
Epoch: 34/100... Training loss: 0.0990
Epoch: 34/100... Training loss: 0.1017
Epoch: 34/100... Training loss: 0.1046
Epoch: 35/100... Training loss: 0.1020
Epoch: 35/100... Training loss: 0.1016
Epoch: 35/100... Training loss: 0.1027
Epoch: 36/100... Training loss: 0.1045
Epoch: 36/100... Training loss: 0.1012
Epoch: 36/100... Training loss: 0.0987
Epoch: 37/100... Training loss: 0.1037
Epoch: 37/100... Training loss: 0.0996
Epoch: 37/100... Training loss: 0.1032
Epoch: 38/100... Training loss: 0.1033
Epoch: 38/100... Training loss: 0.1002
Epoch: 38/100... Training loss: 0.0994
Epoch: 39/100... Training loss: 0.1035
Epoch: 39/100... Training loss: 0.1018
Epoch: 39/100... Training loss: 0.0987
Epoch: 40/100... Training loss: 0.1035
Epoch: 40/100... Training loss: 0.1030
Epoch: 40/100... Training loss: 0.1046
Epoch: 41/100... Training loss: 0.1040
Epoch: 41/100... Training loss: 0.0995
Epoch: 41/100... Training loss: 0.1004
Epoch: 42/100... Training loss: 0.0990
Epoch: 42/100... Training loss: 0.1030
Epoch: 42/100... Training loss: 0.1010
Epoch: 43/100... Training loss: 0.1006
Epoch: 43/100... Training loss: 0.1022
Epoch: 43/100... Training loss: 0.1017
Epoch: 44/100... Training loss: 0.1044
Epoch: 44/100... Training loss: 0.1000
Epoch: 44/100... Training loss: 0.1025
Epoch: 45/100... Training loss: 0.0967
Epoch: 45/100... Training loss: 0.1004
Epoch: 45/100... Training loss: 0.1014
Epoch: 46/100... Training loss: 0.0996
Epoch: 46/100... Training loss: 0.1019
Epoch: 46/100... Training loss: 0.1025
Epoch: 47/100... Training loss: 0.1008
Epoch: 47/100... Training loss: 0.1047
Epoch: 47/100... Training loss: 0.1002
Epoch: 48/100... Training loss: 0.1020
Epoch: 48/100... Training loss: 0.1033
Epoch: 48/100... Training loss: 0.1001
Epoch: 49/100... Training loss: 0.1034
Epoch: 49/100... Training loss: 0.1017
Epoch: 49/100... Training loss: 0.1006
Epoch: 50/100... Training loss: 0.1000
Epoch: 50/100... Training loss: 0.0982
Epoch: 50/100... Training loss: 0.1027
Epoch: 51/100... Training loss: 0.0983
Epoch: 51/100... Training loss: 0.0997
Epoch: 51/100... Training loss: 0.1025
Epoch: 52/100... Training loss: 0.0959
Epoch: 52/100... Training loss: 0.1044
Epoch: 52/100... Training loss: 0.0995
Epoch: 53/100... Training loss: 0.0961
Epoch: 53/100... Training loss: 0.0925
Epoch: 53/100... Training loss: 0.0974
Epoch: 54/100... Training loss: 0.0999
Epoch: 54/100... Training loss: 0.1041
Epoch: 54/100... Training loss: 0.1014
Epoch: 55/100... Training loss: 0.1023
Epoch: 55/100... Training loss: 0.0973
Epoch: 55/100... Training loss: 0.0988
Epoch: 56/100... Training loss: 0.1001
Epoch: 56/100... Training loss: 0.1024
Epoch: 56/100... Training loss: 0.1030
Epoch: 57/100... Training loss: 0.0986
Epoch: 57/100... Training loss: 0.1019
Epoch: 57/100... Training loss: 0.1015
Epoch: 58/100... Training loss: 0.1004
Epoch: 58/100... Training loss: 0.1001
Epoch: 58/100... Training loss: 0.1025
Epoch: 59/100... Training loss: 0.1011
Epoch: 59/100... Training loss: 0.1014
Epoch: 59/100... Training loss: 0.1041
Epoch: 60/100... Training loss: 0.1013
Epoch: 60/100... Training loss: 0.1015
Epoch: 60/100... Training loss: 0.1031
Epoch: 61/100... Training loss: 0.1007
Epoch: 61/100... Training loss: 0.0994
Epoch: 61/100... Training loss: 0.0997
Epoch: 62/100... Training loss: 0.1020
Epoch: 62/100... Training loss: 0.1002
Epoch: 62/100... Training loss: 0.1012
Epoch: 63/100... Training loss: 0.1026
Epoch: 63/100... Training loss: 0.0983
Epoch: 63/100... Training loss: 0.1008
Epoch: 64/100... Training loss: 0.0988
Epoch: 64/100... Training loss: 0.1034
Epoch: 64/100... Training loss: 0.1024
Epoch: 65/100... Training loss: 0.0984
Epoch: 65/100... Training loss: 0.1027
Epoch: 65/100... Training loss: 0.0993
Epoch: 66/100... Training loss: 0.0986
Epoch: 66/100... Training loss: 0.0967
Epoch: 66/100... Training loss: 0.0984
Epoch: 67/100... Training loss: 0.0992
Epoch: 67/100... Training loss: 0.0984
Epoch: 67/100... Training loss: 0.0965
Epoch: 68/100... Training loss: 0.1015
Epoch: 68/100... Training loss: 0.0989
Epoch: 68/100... Training loss: 0.0948
Epoch: 69/100... Training loss: 0.0991
Epoch: 69/100... Training loss: 0.0982
Epoch: 69/100... Training loss: 0.0997
Epoch: 70/100... Training loss: 0.1018
Epoch: 70/100... Training loss: 0.1025
Epoch: 70/100... Training loss: 0.0995
Epoch: 71/100... Training loss: 0.0978
###Markdown
Checking out the performanceHere I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
###Code
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noise_factor = 0.3
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
###Output
_____no_output_____
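###Markdown
To go with the visual check above, a rough quantitative check (my addition, reusing the `cost` tensor and the open session from the cells above) is to compute the reconstruction loss on the same ten noisy test images:
###Code
# Reconstruction loss on the ten noisy test images shown above (sketch)
test_cost = sess.run(cost, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1)),
                                      targets_: in_imgs.reshape((10, 28, 28, 1))})
print("Test reconstruction loss: {:.4f}".format(test_cost))
###Output
_____no_output_____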
###Markdown
Convolutional AutoencoderSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
###Code
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[3]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
###Output
_____no_output_____
###Markdown
Network ArchitectureThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoderOkay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers *aren't*. Usually, you'll see **deconvolutional** layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the deconvolutional layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 patch in a deconvolutional layer. Deconvolution is often called "transpose convolution" which is what you'll find with the TensorFlow API, with [`tf.nn.conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose). However, deconvolutional layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with [`tf.image.resize_images`](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/image/resize_images), followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.> **Exercise:** Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used to reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena *et al* claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in `tf.image.resize_images` or use [`tf.image.resize_nearest_neighbor`](https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor).
###Code
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 16, (3, 3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, (2, 2), (2, 2), padding='same')
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, 8, (3, 3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, (2, 2), (2, 2), padding='same')
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, (3, 3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, (2, 2), (2, 2), padding='same')
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_images(encoded, (7, 7), method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, (3, 3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_images(conv4, (14, 14), method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (3, 3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_images(conv5, (28, 28), method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, (3, 3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(conv6, 1, (3, 3), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.sigmoid(logits, name='decoded')
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
###Output
_____no_output_____
###Markdown
TrainingAs before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
###Code
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
###Output
_____no_output_____
###Markdown
DenoisingAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.> **Exercise:** Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
###Code
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 32, (3, 3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, (2, 2), (2, 2), padding='same')
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, (3, 3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, (2, 2), (2, 2), padding='same')
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, (3, 3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, (2, 2), (2, 2), padding='same')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_images(encoded, (7, 7), method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, (3, 3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_images(conv4, (14, 14), method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, (3, 3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_images(conv5, (28, 28), method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, (3, 3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (3, 3), padding='same', activation=None)
# Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.sigmoid(logits, name='decoded')
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
###Output
_____no_output_____
###Markdown
Checking out the performanceHere I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
###Code
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
###Output
_____no_output_____
###Markdown
Convolutional AutoencoderSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
###Code
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
###Output
_____no_output_____
###Markdown
Network ArchitectureThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoderOkay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers *aren't*. Usually, you'll see **transposed convolution** layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, [`tf.nn.conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose). However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with [`tf.image.resize_images`](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/image/resize_images), followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.> **Exercise:** Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena *et al* claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in `tf.image.resize_images` or use [`tf.image.resize_nearest_neighbor`]( `https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor). For convolutional layers, use [`tf.layers.conv2d`](https://www.tensorflow.org/api_docs/python/tf/layers/conv2d). 
For example, you would write `conv1 = tf.layers.conv2d(inputs, 32, (5,5), padding='same', activation=tf.nn.relu)` for a layer with a depth of 32, a 5x5 kernel, stride of (1,1), padding is 'same', and a ReLU activation. Similarly, for the max-pool layers, use [`tf.layers.max_pooling2d`](https://www.tensorflow.org/api_docs/python/tf/layers/max_pooling2d).
###Code
learning_rate = 0.001
# Input and target placeholders
image_size = mnist.train.images.shape[1]
inputs_ = tf.placeholder(tf.float32, (None, 28,28,1 ), name = "inputs")
targets_ = tf.placeholder(tf.float32, (None, 28,28,1), name = "targets")
### Encoder
conv1 = tf.layers.conv2d(inputs_, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2) , (2,2) , padding="same")
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2) , (2,2) , padding="same")
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, (2,2) , (2,2) , padding="same")
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits, name='decoded')
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
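
# Optional check (an addition): the text above says the encoding is roughly 16% of the
# original image; compute that directly from the graph.
encoded_units = int(np.prod(encoded.get_shape().as_list()[1:]))  # 4*4*8 = 128
print('compression: {}/{} = {:.1%}'.format(encoded_units, 784, encoded_units / 784.))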
###Output
_____no_output_____
###Markdown
TrainingAs before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
###Code
sess = tf.Session()
epochs = 20
batch_size = 2000
sess.run(tf.global_variables_initializer())
for e in range(epochs):
total = mnist.train.num_examples//batch_size
for ii in range(total):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost), ii,total)
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
###Output
_____no_output_____
###Markdown
DenoisingAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images. Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.> **Exercise:** Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
###Code
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
# One possible completion (the exercise scaffold was left blank), following the suggested
# 32-32-16 depths and mirroring the worked solutions elsewhere in this document.
conv1 = tf.layers.conv2d(inputs_, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
###Output
_____no_output_____
###Markdown
Checking out the performanceHere I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
###Code
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
###Output
_____no_output_____
###Markdown
Convolutional AutoencoderSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
###Code
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
###Output
_____no_output_____
###Markdown
Network ArchitectureThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoderOkay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers *aren't*. Usually, you'll see **transposed convolution** layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, [`tf.nn.conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose). However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with [`tf.image.resize_images`](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/image/resize_images), followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.> **Exercise:** Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena *et al* claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in `tf.image.resize_images` or use [`tf.image.resize_nearest_neighbor`]( `https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor). For convolutional layers, use [`tf.layers.conv2d`](https://www.tensorflow.org/api_docs/python/tf/layers/conv2d). 
For example, you would write `conv1 = tf.layers.conv2d(inputs, 32, (5,5), padding='same', activation=tf.nn.relu)` for a layer with a depth of 32, a 5x5 kernel, stride of (1,1), padding is 'same', and a ReLU activation. Similarly, for the max-pool layers, use [`tf.layers.max_pooling2d`](https://www.tensorflow.org/api_docs/python/tf/layers/max_pooling2d).
###Code
learning_rate = 0.001
# Input and target placeholders
inputs_ = tf.placeholder(tf.float32, shape=[None, 28, 28, 1], name='inputs')
targets_ = tf.placeholder(tf.float32, shape=[None, 28, 28, 1], name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 16, kernel_size=(5,5), padding='same', activation=tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, pool_size=(2,2), strides=(2,2), padding='same')
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, 8, kernel_size=(5,5), padding='same', activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, pool_size=(2,2), strides=(2,2), padding='same')
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, kernel_size=(5,5), padding='same', activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, pool_size=(2,2), strides=(2,2), padding='same')
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, size=(7,7))
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, kernel_size=(5,5), padding='same', activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, size=(14,14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, kernel_size=(5,5), padding='same', activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5, size=(28,28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, kernel_size=(5,5), padding='same', activation=tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(conv6, 1, kernel_size=(5,5), padding='same', activation = None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels = targets_,
logits = logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
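
# Sketch of a hypothetical helper (an addition, not used by the network above): the
# resize-then-convolve pattern from the Distill article, wrapped into one function so a
# decoder could be written as a few calls. Names and defaults here are just illustrative.
def upsample_conv(x, size, depth, kernel=(5, 5)):
    """Nearest-neighbor resize to `size`, then a stride-1 'same' convolution with ReLU."""
    resized = tf.image.resize_nearest_neighbor(x, size)
    return tf.layers.conv2d(resized, depth, kernel, padding='same', activation=tf.nn.relu)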
###Output
_____no_output_____
###Markdown
TrainingAs before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
###Code
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
###Output
_____no_output_____
###Markdown
DenoisingAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images. Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.> **Exercise:** Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
###Code
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 32, kernel_size=(3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, pool_size=(2,2), strides=(2,2), padding='same')
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, kernel_size=(3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, pool_size=(2,2), strides=(2,2), padding='same')
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, kernel_size=(3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, pool_size=(2,2), strides=(2,2), padding='same')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, size=(7,7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, kernel_size=(3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, size=(14,14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, kernel_size=(3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, size=(28,28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, kernel_size=(3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, kernel_size=(3,3), padding='same', activation = None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels = targets_,
logits = logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
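
# Optional check (an addition): count the trainable parameters in the default graph. Since
# the notebook never calls tf.reset_default_graph(), this total also includes variables
# created by the earlier cells, so treat it as a rough upper bound for this model.
n_params = int(np.sum([np.prod(v.get_shape().as_list()) for v in tf.trainable_variables()]))
print('trainable parameters in the default graph:', n_params)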
sess = tf.Session()
epochs = 100
batch_size = 1000
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
###Output
Epoch: 1/100... Training loss: 0.6921
Epoch: 1/100... Training loss: 0.6774
Epoch: 1/100... Training loss: 0.6579
Epoch: 1/100... Training loss: 0.6286
Epoch: 1/100... Training loss: 0.5897
Epoch: 1/100... Training loss: 0.5450
Epoch: 1/100... Training loss: 0.5133
Epoch: 1/100... Training loss: 0.5167
Epoch: 1/100... Training loss: 0.5343
Epoch: 1/100... Training loss: 0.5308
Epoch: 1/100... Training loss: 0.5061
Epoch: 1/100... Training loss: 0.4909
Epoch: 1/100... Training loss: 0.4826
Epoch: 1/100... Training loss: 0.4802
Epoch: 1/100... Training loss: 0.4758
Epoch: 1/100... Training loss: 0.4681
Epoch: 1/100... Training loss: 0.4605
Epoch: 1/100... Training loss: 0.4457
Epoch: 1/100... Training loss: 0.4335
Epoch: 1/100... Training loss: 0.4329
Epoch: 1/100... Training loss: 0.4207
Epoch: 1/100... Training loss: 0.4043
Epoch: 1/100... Training loss: 0.3937
Epoch: 1/100... Training loss: 0.3771
Epoch: 1/100... Training loss: 0.3638
Epoch: 1/100... Training loss: 0.3512
Epoch: 1/100... Training loss: 0.3352
Epoch: 1/100... Training loss: 0.3238
Epoch: 1/100... Training loss: 0.3117
Epoch: 1/100... Training loss: 0.3056
Epoch: 1/100... Training loss: 0.2946
Epoch: 1/100... Training loss: 0.2889
Epoch: 1/100... Training loss: 0.2845
Epoch: 1/100... Training loss: 0.2794
Epoch: 1/100... Training loss: 0.2741
Epoch: 1/100... Training loss: 0.2755
Epoch: 1/100... Training loss: 0.2727
Epoch: 1/100... Training loss: 0.2729
Epoch: 1/100... Training loss: 0.2719
Epoch: 1/100... Training loss: 0.2707
Epoch: 1/100... Training loss: 0.2705
Epoch: 1/100... Training loss: 0.2708
Epoch: 1/100... Training loss: 0.2679
Epoch: 1/100... Training loss: 0.2666
Epoch: 1/100... Training loss: 0.2658
Epoch: 1/100... Training loss: 0.2686
Epoch: 1/100... Training loss: 0.2635
Epoch: 1/100... Training loss: 0.2641
Epoch: 1/100... Training loss: 0.2673
Epoch: 1/100... Training loss: 0.2591
Epoch: 1/100... Training loss: 0.2607
Epoch: 1/100... Training loss: 0.2582
Epoch: 1/100... Training loss: 0.2526
Epoch: 1/100... Training loss: 0.2551
Epoch: 1/100... Training loss: 0.2541
Epoch: 1/100... Training loss: 0.2517
Epoch: 1/100... Training loss: 0.2513
Epoch: 1/100... Training loss: 0.2525
Epoch: 1/100... Training loss: 0.2500
Epoch: 1/100... Training loss: 0.2500
Epoch: 2/100... Training loss: 0.2486
Epoch: 2/100... Training loss: 0.2461
Epoch: 2/100... Training loss: 0.2464
Epoch: 2/100... Training loss: 0.2468
Epoch: 2/100... Training loss: 0.2423
Epoch: 2/100... Training loss: 0.2458
Epoch: 2/100... Training loss: 0.2419
Epoch: 2/100... Training loss: 0.2430
Epoch: 2/100... Training loss: 0.2374
Epoch: 2/100... Training loss: 0.2439
Epoch: 2/100... Training loss: 0.2396
Epoch: 2/100... Training loss: 0.2366
Epoch: 2/100... Training loss: 0.2352
Epoch: 2/100... Training loss: 0.2346
Epoch: 2/100... Training loss: 0.2352
Epoch: 2/100... Training loss: 0.2378
Epoch: 2/100... Training loss: 0.2342
Epoch: 2/100... Training loss: 0.2308
Epoch: 2/100... Training loss: 0.2315
Epoch: 2/100... Training loss: 0.2328
Epoch: 2/100... Training loss: 0.2322
Epoch: 2/100... Training loss: 0.2283
Epoch: 2/100... Training loss: 0.2280
Epoch: 2/100... Training loss: 0.2293
Epoch: 2/100... Training loss: 0.2251
Epoch: 2/100... Training loss: 0.2267
Epoch: 2/100... Training loss: 0.2266
Epoch: 2/100... Training loss: 0.2225
Epoch: 2/100... Training loss: 0.2205
Epoch: 2/100... Training loss: 0.2215
Epoch: 2/100... Training loss: 0.2211
Epoch: 2/100... Training loss: 0.2224
Epoch: 2/100... Training loss: 0.2179
Epoch: 2/100... Training loss: 0.2177
Epoch: 2/100... Training loss: 0.2180
Epoch: 2/100... Training loss: 0.2177
Epoch: 2/100... Training loss: 0.2170
Epoch: 2/100... Training loss: 0.2157
Epoch: 2/100... Training loss: 0.2135
Epoch: 2/100... Training loss: 0.2122
Epoch: 2/100... Training loss: 0.2131
Epoch: 2/100... Training loss: 0.2133
Epoch: 2/100... Training loss: 0.2096
Epoch: 2/100... Training loss: 0.2076
Epoch: 2/100... Training loss: 0.2081
Epoch: 2/100... Training loss: 0.2067
Epoch: 2/100... Training loss: 0.2101
Epoch: 2/100... Training loss: 0.2081
Epoch: 2/100... Training loss: 0.2084
Epoch: 2/100... Training loss: 0.2031
Epoch: 2/100... Training loss: 0.2060
Epoch: 2/100... Training loss: 0.2031
Epoch: 2/100... Training loss: 0.2031
Epoch: 2/100... Training loss: 0.2012
Epoch: 2/100... Training loss: 0.2023
Epoch: 2/100... Training loss: 0.2012
Epoch: 2/100... Training loss: 0.2009
Epoch: 2/100... Training loss: 0.2003
Epoch: 2/100... Training loss: 0.1993
Epoch: 2/100... Training loss: 0.2006
Epoch: 3/100... Training loss: 0.1976
Epoch: 3/100... Training loss: 0.1970
Epoch: 3/100... Training loss: 0.2027
Epoch: 3/100... Training loss: 0.2039
Epoch: 3/100... Training loss: 0.2006
Epoch: 3/100... Training loss: 0.1955
Epoch: 3/100... Training loss: 0.1997
Epoch: 3/100... Training loss: 0.1976
Epoch: 3/100... Training loss: 0.1932
Epoch: 3/100... Training loss: 0.1962
Epoch: 3/100... Training loss: 0.1922
Epoch: 3/100... Training loss: 0.1966
Epoch: 3/100... Training loss: 0.1947
Epoch: 3/100... Training loss: 0.1929
Epoch: 3/100... Training loss: 0.1926
Epoch: 3/100... Training loss: 0.1919
Epoch: 3/100... Training loss: 0.1938
Epoch: 3/100... Training loss: 0.1913
Epoch: 3/100... Training loss: 0.1905
Epoch: 3/100... Training loss: 0.1921
Epoch: 3/100... Training loss: 0.1892
Epoch: 3/100... Training loss: 0.1898
Epoch: 3/100... Training loss: 0.1910
Epoch: 3/100... Training loss: 0.1896
Epoch: 3/100... Training loss: 0.1889
Epoch: 3/100... Training loss: 0.1874
Epoch: 3/100... Training loss: 0.1885
Epoch: 3/100... Training loss: 0.1889
Epoch: 3/100... Training loss: 0.1885
Epoch: 3/100... Training loss: 0.1899
Epoch: 3/100... Training loss: 0.1875
Epoch: 3/100... Training loss: 0.1862
Epoch: 3/100... Training loss: 0.1857
Epoch: 3/100... Training loss: 0.1863
Epoch: 3/100... Training loss: 0.1853
Epoch: 3/100... Training loss: 0.1852
Epoch: 3/100... Training loss: 0.1851
Epoch: 3/100... Training loss: 0.1856
Epoch: 3/100... Training loss: 0.1873
Epoch: 3/100... Training loss: 0.1832
Epoch: 3/100... Training loss: 0.1866
Epoch: 3/100... Training loss: 0.1829
Epoch: 3/100... Training loss: 0.1830
Epoch: 3/100... Training loss: 0.1833
Epoch: 3/100... Training loss: 0.1847
Epoch: 3/100... Training loss: 0.1844
Epoch: 3/100... Training loss: 0.1812
Epoch: 3/100... Training loss: 0.1811
Epoch: 3/100... Training loss: 0.1819
Epoch: 3/100... Training loss: 0.1838
Epoch: 3/100... Training loss: 0.1819
Epoch: 3/100... Training loss: 0.1783
Epoch: 3/100... Training loss: 0.1806
Epoch: 3/100... Training loss: 0.1790
Epoch: 3/100... Training loss: 0.1806
Epoch: 3/100... Training loss: 0.1803
Epoch: 3/100... Training loss: 0.1781
Epoch: 3/100... Training loss: 0.1808
Epoch: 3/100... Training loss: 0.1823
Epoch: 3/100... Training loss: 0.1798
Epoch: 4/100... Training loss: 0.1782
Epoch: 4/100... Training loss: 0.1761
Epoch: 4/100... Training loss: 0.1795
Epoch: 4/100... Training loss: 0.1781
Epoch: 4/100... Training loss: 0.1808
Epoch: 4/100... Training loss: 0.1774
Epoch: 4/100... Training loss: 0.1763
Epoch: 4/100... Training loss: 0.1753
Epoch: 4/100... Training loss: 0.1769
Epoch: 4/100... Training loss: 0.1763
Epoch: 4/100... Training loss: 0.1785
Epoch: 4/100... Training loss: 0.1749
Epoch: 4/100... Training loss: 0.1774
Epoch: 4/100... Training loss: 0.1761
Epoch: 4/100... Training loss: 0.1782
Epoch: 4/100... Training loss: 0.1772
Epoch: 4/100... Training loss: 0.1785
Epoch: 4/100... Training loss: 0.1764
Epoch: 4/100... Training loss: 0.1752
Epoch: 4/100... Training loss: 0.1750
Epoch: 4/100... Training loss: 0.1733
Epoch: 4/100... Training loss: 0.1787
Epoch: 4/100... Training loss: 0.1764
Epoch: 4/100... Training loss: 0.1733
Epoch: 4/100... Training loss: 0.1740
Epoch: 4/100... Training loss: 0.1762
Epoch: 4/100... Training loss: 0.1764
Epoch: 4/100... Training loss: 0.1754
Epoch: 4/100... Training loss: 0.1741
Epoch: 4/100... Training loss: 0.1781
Epoch: 4/100... Training loss: 0.1762
Epoch: 4/100... Training loss: 0.1728
Epoch: 4/100... Training loss: 0.1759
Epoch: 4/100... Training loss: 0.1731
Epoch: 4/100... Training loss: 0.1680
Epoch: 4/100... Training loss: 0.1732
Epoch: 4/100... Training loss: 0.1743
###Markdown
Checking out the performanceHere I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
###Code
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noise_factor = 0.5
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
###Output
_____no_output_____
###Markdown
Convolutional AutoencoderSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
###Code
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
mnist.train.images.shape
###Output
_____no_output_____
###Markdown
Network ArchitectureThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoderOkay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers *aren't*. Usually, you'll see **transposed convolution** layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, [`tf.nn.conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose). However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with [`tf.image.resize_images`](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/image/resize_images), followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.> **Exercise:** Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena *et al* claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in `tf.image.resize_images` or use [`tf.image.resize_nearest_neighbor`]( `https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor). For convolutional layers, use [`tf.layers.conv2d`](https://www.tensorflow.org/api_docs/python/tf/layers/conv2d). 
For example, you would write `conv1 = tf.layers.conv2d(inputs, 32, (5,5), padding='same', activation=tf.nn.relu)` for a layer with a depth of 32, a 5x5 kernel, stride of (1,1), padding is 'same', and a ReLU activation. Similarly, for the max-pool layers, use [`tf.layers.max_pooling2d`](https://www.tensorflow.org/api_docs/python/tf/layers/max_pooling2d).
###Code
learning_rate = 0.001
imagesize = mnist.train.images.shape[1]
# Input and target placeholders
inputs_ = tf.placeholder(tf.float32, [None, 28, 28, 1])
targets_ = tf.placeholder(tf.float32, [None, 28, 28, 1])
### Encoder
conv1 = tf.layers.conv2d(inputs_, 16, (5, 5), padding='same', activation=tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, (2, 2), (2, 2))
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, 8, (5, 5), padding='same', activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, (2, 2), (2, 2))
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, (5, 5), padding='same', activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, (2, 2), (2, 2), padding='same')
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7, 7))
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, (5, 5), padding='same', activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14, 14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (5, 5), padding='same', activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28, 28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, (5, 5), padding='same', activation=tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(conv6, 1, (5, 5), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=targets_)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
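
# For comparison only (an addition, not used by this network): the transposed-convolution
# alternative discussed in the markdown above. A stride of 2 doubles the spatial size
# (e.g. 4x4 -> 8x8) but, as the Distill article notes, can produce checkerboard artifacts.
transpose_demo = tf.layers.conv2d_transpose(encoded, 8, (5, 5), strides=(2, 2), padding='same')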
###Output
_____no_output_____
###Markdown
TrainingAs before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
###Code
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
###Output
_____no_output_____
###Markdown
DenoisingAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images. Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.> **Exercise:** Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
###Code
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 32, (5, 5), padding='same', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, (2, 2), (2, 2))
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 16, (5, 5), padding='same', activation=tf.nn.relu)
# Now 14x14x16 (this solution uses 16 feature maps here rather than the suggested 32)
maxpool2 = tf.layers.max_pooling2d(conv2, (2, 2), (2, 2))
# Now 7x7x16
conv3 = tf.layers.conv2d(maxpool2, 16, (5, 5), padding='same', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, (2, 2), (2, 2), padding='same')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, (5, 5), padding='same', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, (5, 5), padding='same', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, (5, 5), padding='same', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (5, 5), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=targets_)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
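
# Illustration only (an addition): what tf.nn.sigmoid_cross_entropy_with_logits computes,
# written out with an explicit sigmoid and a small epsilon for numerical safety. The
# built-in op above is preferred because it stays numerically stable for large logits.
naive_loss = -(targets_ * tf.log(tf.nn.sigmoid(logits) + 1e-10)
               + (1 - targets_) * tf.log(1 - tf.nn.sigmoid(logits) + 1e-10))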
sess = tf.Session()
epochs = 100
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
###Output
Epoch: 1/100... Training loss: 0.6906
Epoch: 1/100... Training loss: 0.6519
Epoch: 1/100... Training loss: 0.5718
Epoch: 1/100... Training loss: 0.5006
Epoch: 1/100... Training loss: 0.5798
Epoch: 1/100... Training loss: 0.5042
Epoch: 1/100... Training loss: 0.4761
Epoch: 1/100... Training loss: 0.4727
Epoch: 1/100... Training loss: 0.4650
Epoch: 1/100... Training loss: 0.4392
Epoch: 1/100... Training loss: 0.4222
Epoch: 1/100... Training loss: 0.4041
Epoch: 1/100... Training loss: 0.3858
Epoch: 1/100... Training loss: 0.3378
Epoch: 1/100... Training loss: 0.3335
Epoch: 1/100... Training loss: 0.3241
Epoch: 1/100... Training loss: 0.3087
Epoch: 1/100... Training loss: 0.3014
Epoch: 1/100... Training loss: 0.2997
Epoch: 1/100... Training loss: 0.2936
Epoch: 1/100... Training loss: 0.2788
Epoch: 1/100... Training loss: 0.2784
Epoch: 1/100... Training loss: 0.2732
Epoch: 1/100... Training loss: 0.2761
Epoch: 1/100... Training loss: 0.2762
Epoch: 1/100... Training loss: 0.2703
Epoch: 1/100... Training loss: 0.2784
Epoch: 1/100... Training loss: 0.2745
Epoch: 1/100... Training loss: 0.2751
Epoch: 1/100... Training loss: 0.2693
Epoch: 1/100... Training loss: 0.2780
Epoch: 1/100... Training loss: 0.2703
Epoch: 1/100... Training loss: 0.2666
Epoch: 1/100... Training loss: 0.2688
Epoch: 1/100... Training loss: 0.2724
Epoch: 1/100... Training loss: 0.2696
Epoch: 1/100... Training loss: 0.2660
Epoch: 1/100... Training loss: 0.2658
Epoch: 1/100... Training loss: 0.2637
Epoch: 1/100... Training loss: 0.2600
Epoch: 1/100... Training loss: 0.2632
Epoch: 1/100... Training loss: 0.2623
Epoch: 1/100... Training loss: 0.2643
Epoch: 1/100... Training loss: 0.2595
Epoch: 1/100... Training loss: 0.2577
Epoch: 1/100... Training loss: 0.2681
Epoch: 1/100... Training loss: 0.2603
Epoch: 1/100... Training loss: 0.2597
Epoch: 1/100... Training loss: 0.2608
Epoch: 1/100... Training loss: 0.2599
Epoch: 1/100... Training loss: 0.2492
Epoch: 1/100... Training loss: 0.2616
###Markdown
Checking out the performanceHere I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
###Code
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
###Output
_____no_output_____
###Markdown
Convolutional AutoencoderSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
###Code
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
###Output
_____no_output_____
###Markdown
Network ArchitectureThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoderOkay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers *aren't*. Usually, you'll see **transposed convolution** layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, [`tf.nn.conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose). However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with [`tf.image.resize_images`](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/image/resize_images), followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.> **Exercise:** Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena *et al* claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in `tf.image.resize_images` or use [`tf.image.resize_nearest_neighbor`]( `https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor). For convolutional layers, use [`tf.layers.conv2d`](https://www.tensorflow.org/api_docs/python/tf/layers/conv2d). 
For example, you would write `conv1 = tf.layers.conv2d(inputs, 32, (5,5), padding='same', activation=tf.nn.relu)` for a layer with a depth of 32, a 5x5 kernel, stride of (1,1), padding is 'same', and a ReLU activation. Similarly, for the max-pool layers, use [`tf.layers.max_pooling2d`](https://www.tensorflow.org/api_docs/python/tf/layers/max_pooling2d).
###Code
learning_rate = 0.001
# Input and target placeholders
# One possible completion (the exercise scaffold was left blank), using the 16-8-8 depths
# from the schematic above and mirroring the worked solutions elsewhere in this document.
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
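
# Optional assertion (an addition): the reconstruction must come out as 28x28x1 so the
# element-wise cross-entropy above lines up with the targets placeholder.
assert decoded.get_shape().as_list()[1:] == [28, 28, 1]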
###Output
_____no_output_____
###Markdown
TrainingAs before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
###Code
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
###Output
_____no_output_____
###Markdown
DenoisingAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images. Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.> **Exercise:** Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
###Code
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
# One possible completion (the exercise scaffold was left blank), following the suggested
# 32-32-16 depths and mirroring the worked solutions elsewhere in this document.
conv1 = tf.layers.conv2d(inputs_, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
###Output
_____no_output_____
###Markdown
Checking out the performanceHere I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
###Code
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
###Output
_____no_output_____
###Markdown
Convolutional AutoencoderSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
###Code
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
###Output
_____no_output_____
###Markdown
Network ArchitectureThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoderOkay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers *aren't*. Usually, you'll see **transposed convolution** layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, [`tf.nn.conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose). However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with [`tf.image.resize_images`](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/image/resize_images), followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.> **Exercise:** Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by 2. Odena *et al* claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in `tf.image.resize_images` or use [`tf.image.resize_nearest_neighbor`]( `https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor). For convolutional layers, use [`tf.layers.conv2d`](https://www.tensorflow.org/api_docs/python/tf/layers/conv2d). 
For example, you would write `conv1 = tf.layers.conv2d(inputs, 32, (5,5), padding='same', activation=tf.nn.relu)` for a layer with a depth of 32, a 5x5 kernel, stride of (1,1), padding is 'same', and a ReLU activation. Similarly, for the max-pool layers, use [`tf.layers.max_pooling2d`](https://www.tensorflow.org/api_docs/python/tf/layers/max_pooling2d).
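As a quick aside before the exercise, the cell below is a minimal sketch (not part of the exercise solution) contrasting the two upsampling routes discussed above; the placeholder `x` and its 7x7x8 shape are assumptions chosen only to mirror the decoder's input.
###Code
import tensorflow as tf   # already imported above; repeated so this sketch stands alone
# Hypothetical 7x7x8 input, standing in for the encoder output
x = tf.placeholder(tf.float32, (None, 7, 7, 8))
# Route 1: transposed convolution. A stride of 2 doubles height and width,
# but mismatched kernel size and stride can produce checkerboard artifacts.
up_transposed = tf.layers.conv2d_transpose(x, 8, (3,3), strides=(2,2), padding='same')
# Now 14x14x8
# Route 2: nearest-neighbor resize followed by a regular convolution,
# the approach recommended in the Distill article and used in this notebook.
up_resized = tf.layers.conv2d(tf.image.resize_nearest_neighbor(x, (14,14)),
                              8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
###Output
_____no_output_____
###Markdown
Now, on to the exercise itself: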
###Code
learning_rate = 0.001
# Input and target placeholders
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits, name='decoded')
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
###Output
_____no_output_____
###Markdown
TrainingAs before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
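As a small illustration first (the array below is a stand-in, not MNIST data), this is the reshape the training loop performs on every flattened batch:
###Code
import numpy as np   # already imported above; repeated so this sketch stands alone
example_batch = np.random.rand(200, 784)                # 200 flattened 784-pixel vectors
example_imgs = example_batch.reshape((-1, 28, 28, 1))   # N x 28 x 28 x 1, as fed to inputs_
###Output
_____no_output_____
###Markdown
And the training loop: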
###Code
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
###Output
_____no_output_____
###Markdown
DenoisingAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images. Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, with more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.> **Exercise:** Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
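Before building the network, here is a tiny self-contained sketch of just the noising step described above; the random array is a stand-in for a real batch of images scaled to [0, 1], not MNIST data.
###Code
import numpy as np   # already imported above; repeated so this sketch stands alone
clean = np.random.rand(4, 28, 28, 1)   # stand-in batch of images in [0, 1]
noise_factor = 0.5
# Add Gaussian noise, then clip back into the valid pixel range
noisy = np.clip(clean + noise_factor * np.random.randn(*clean.shape), 0., 1.)
###Output
_____no_output_____
###Markdown
Now the denoising network itself: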
###Code
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits, name='decoded')
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
###Output
_____no_output_____
###Markdown
Checking out the performanceHere I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
###Code
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
###Output
_____no_output_____
###Markdown
Convolutional AutoencoderSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
###Code
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
###Output
_____no_output_____
###Markdown
Network ArchitectureThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below. Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoderOkay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers *aren't*. Usually, you'll see **transposed convolution** layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 patch in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, [`tf.nn.conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose). However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with [`tf.image.resize_images`](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/image/resize_images), followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.> **Exercise:** Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used to reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena *et al* claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in `tf.image.resize_images` or use [`tf.image.resize_nearest_neighbor`](https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor). For convolutional layers, use [`tf.layers.conv2d`](https://www.tensorflow.org/api_docs/python/tf/layers/conv2d).
For example, you would write `conv1 = tf.layers.conv2d(inputs, 32, (5,5), padding='same', activation=tf.nn.relu)` for a layer with a depth of 32, a 5x5 kernel, stride of (1,1), padding is 'same', and a ReLU activation. Similarly, for the max-pool layers, use [`tf.layers.max_pooling2d`](https://www.tensorflow.org/api_docs/python/tf/layers/max_pooling2d).
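One detail the exercise text doesn't spell out: to go from 7x7 down to the 4x4 encoding with a 2x2 pool and a stride of 2, the pooling layer needs `padding='same'`; with the default `'valid'` padding the output would be 3x3. The cell below is a tiny illustrative sketch, with `x` a hypothetical 7x7x8 tensor.
###Code
import tensorflow as tf   # already imported above; repeated so this sketch stands alone
x = tf.placeholder(tf.float32, (None, 7, 7, 8))   # hypothetical 7x7x8 input
pool_valid = tf.layers.max_pooling2d(x, (2,2), (2,2))                   # default 'valid' padding -> 3x3x8
pool_same = tf.layers.max_pooling2d(x, (2,2), (2,2), padding='same')    # 'same' padding -> 4x4x8
###Output
_____no_output_____
###Markdown
With that in mind, here is one way to fill in the network: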
###Code
learning_rate = 0.001
# Input and target placeholders
inputs_ = tf.placeholder(tf.float32, shape=(None, 28, 28, 1))
targets_ = tf.placeholder(tf.float32, shape=(None, 28, 28, 1))
### Encoder
conv1 = tf.layers.conv2d(inputs_, filters=16, kernel_size=(5,5), strides=(1,1), padding='same', activation=tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, pool_size=(2,2), strides=(2,2))
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, 2, 2)
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, 2, 2, padding='same')
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, filters=8, kernel_size=(3,3), strides=(1,1), padding='same', activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(conv6, 1, (5,5), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
###Output
_____no_output_____
###Markdown
TrainingAs before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
###Code
sess = tf.Session()
epochs = 2
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
###Output
_____no_output_____
###Markdown
DenoisingAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images. Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, with more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.> **Exercise:** Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
###Code
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 32, (5,5), padding='same', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, pool_size=(2,2), strides=(2,2))
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x16
maxpool2 = tf.layers.max_pooling2d(conv2, 2, 2)
# Now 7x7x16
conv3 = tf.layers.conv2d(maxpool2, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, 2, 2, padding='same')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, (3,3), strides=(1,1), padding='same', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (5,5), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 2
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
###Output
Epoch: 1/2... Training loss: 0.6911
Epoch: 1/2... Training loss: 0.6751
Epoch: 1/2... Training loss: 0.6532
Epoch: 1/2... Training loss: 0.6203
Epoch: 1/2... Training loss: 0.5757
Epoch: 1/2... Training loss: 0.5287
Epoch: 1/2... Training loss: 0.4976
Epoch: 1/2... Training loss: 0.5146
Epoch: 1/2... Training loss: 0.5064
Epoch: 1/2... Training loss: 0.5220
Epoch: 1/2... Training loss: 0.4722
Epoch: 1/2... Training loss: 0.4659
Epoch: 1/2... Training loss: 0.4445
Epoch: 1/2... Training loss: 0.4510
Epoch: 1/2... Training loss: 0.4345
Epoch: 1/2... Training loss: 0.4245
Epoch: 1/2... Training loss: 0.4039
Epoch: 1/2... Training loss: 0.4083
Epoch: 1/2... Training loss: 0.3952
Epoch: 1/2... Training loss: 0.3684
Epoch: 1/2... Training loss: 0.3572
Epoch: 1/2... Training loss: 0.3468
Epoch: 1/2... Training loss: 0.3396
Epoch: 1/2... Training loss: 0.3203
Epoch: 1/2... Training loss: 0.3178
Epoch: 1/2... Training loss: 0.3070
Epoch: 1/2... Training loss: 0.3085
Epoch: 1/2... Training loss: 0.2936
Epoch: 1/2... Training loss: 0.2708
Epoch: 1/2... Training loss: 0.2759
Epoch: 1/2... Training loss: 0.2642
Epoch: 1/2... Training loss: 0.2611
Epoch: 1/2... Training loss: 0.2555
Epoch: 1/2... Training loss: 0.2442
Epoch: 1/2... Training loss: 0.2513
Epoch: 1/2... Training loss: 0.2394
Epoch: 1/2... Training loss: 0.2412
Epoch: 1/2... Training loss: 0.2337
Epoch: 1/2... Training loss: 0.2356
Epoch: 1/2... Training loss: 0.2348
Epoch: 1/2... Training loss: 0.2268
Epoch: 1/2... Training loss: 0.2274
Epoch: 1/2... Training loss: 0.2272
Epoch: 1/2... Training loss: 0.2247
Epoch: 1/2... Training loss: 0.2229
Epoch: 1/2... Training loss: 0.2207
Epoch: 1/2... Training loss: 0.2156
Epoch: 1/2... Training loss: 0.2117
Epoch: 1/2... Training loss: 0.2170
Epoch: 1/2... Training loss: 0.2176
Epoch: 1/2... Training loss: 0.2108
Epoch: 1/2... Training loss: 0.2129
Epoch: 1/2... Training loss: 0.2072
Epoch: 1/2... Training loss: 0.2062
Epoch: 1/2... Training loss: 0.2098
Epoch: 1/2... Training loss: 0.2086
Epoch: 1/2... Training loss: 0.2001
Epoch: 1/2... Training loss: 0.2012
Epoch: 1/2... Training loss: 0.1961
Epoch: 1/2... Training loss: 0.2138
Epoch: 1/2... Training loss: 0.2009
Epoch: 1/2... Training loss: 0.2048
Epoch: 1/2... Training loss: 0.1879
Epoch: 1/2... Training loss: 0.1987
Epoch: 1/2... Training loss: 0.1924
Epoch: 1/2... Training loss: 0.2007
Epoch: 1/2... Training loss: 0.1924
Epoch: 1/2... Training loss: 0.1881
Epoch: 1/2... Training loss: 0.1913
Epoch: 1/2... Training loss: 0.1810
Epoch: 1/2... Training loss: 0.1862
Epoch: 1/2... Training loss: 0.1783
Epoch: 1/2... Training loss: 0.1797
Epoch: 1/2... Training loss: 0.1720
Epoch: 1/2... Training loss: 0.1780
Epoch: 1/2... Training loss: 0.1751
Epoch: 1/2... Training loss: 0.1663
Epoch: 1/2... Training loss: 0.1775
Epoch: 1/2... Training loss: 0.1758
Epoch: 1/2... Training loss: 0.1701
Epoch: 1/2... Training loss: 0.1667
Epoch: 1/2... Training loss: 0.1794
Epoch: 1/2... Training loss: 0.1720
Epoch: 1/2... Training loss: 0.1690
Epoch: 1/2... Training loss: 0.1727
Epoch: 1/2... Training loss: 0.1658
Epoch: 1/2... Training loss: 0.1642
Epoch: 1/2... Training loss: 0.1648
Epoch: 1/2... Training loss: 0.1676
Epoch: 1/2... Training loss: 0.1712
Epoch: 1/2... Training loss: 0.1629
Epoch: 1/2... Training loss: 0.1678
Epoch: 1/2... Training loss: 0.1602
Epoch: 1/2... Training loss: 0.1608
Epoch: 1/2... Training loss: 0.1649
Epoch: 1/2... Training loss: 0.1655
Epoch: 1/2... Training loss: 0.1636
Epoch: 1/2... Training loss: 0.1620
Epoch: 1/2... Training loss: 0.1578
Epoch: 1/2... Training loss: 0.1627
Epoch: 1/2... Training loss: 0.1569
Epoch: 1/2... Training loss: 0.1622
Epoch: 1/2... Training loss: 0.1566
Epoch: 1/2... Training loss: 0.1529
Epoch: 1/2... Training loss: 0.1549
Epoch: 1/2... Training loss: 0.1531
Epoch: 1/2... Training loss: 0.1538
Epoch: 1/2... Training loss: 0.1578
Epoch: 1/2... Training loss: 0.1617
Epoch: 1/2... Training loss: 0.1546
Epoch: 1/2... Training loss: 0.1535
Epoch: 1/2... Training loss: 0.1528
Epoch: 1/2... Training loss: 0.1519
Epoch: 1/2... Training loss: 0.1531
Epoch: 1/2... Training loss: 0.1507
Epoch: 1/2... Training loss: 0.1505
Epoch: 1/2... Training loss: 0.1481
Epoch: 1/2... Training loss: 0.1531
Epoch: 1/2... Training loss: 0.1489
Epoch: 1/2... Training loss: 0.1553
Epoch: 1/2... Training loss: 0.1543
Epoch: 1/2... Training loss: 0.1482
Epoch: 1/2... Training loss: 0.1489
Epoch: 1/2... Training loss: 0.1486
Epoch: 1/2... Training loss: 0.1520
Epoch: 1/2... Training loss: 0.1457
Epoch: 1/2... Training loss: 0.1485
Epoch: 1/2... Training loss: 0.1501
Epoch: 1/2... Training loss: 0.1489
Epoch: 1/2... Training loss: 0.1471
Epoch: 1/2... Training loss: 0.1458
Epoch: 1/2... Training loss: 0.1446
Epoch: 1/2... Training loss: 0.1429
Epoch: 1/2... Training loss: 0.1486
Epoch: 1/2... Training loss: 0.1439
Epoch: 1/2... Training loss: 0.1424
Epoch: 1/2... Training loss: 0.1412
Epoch: 1/2... Training loss: 0.1433
Epoch: 1/2... Training loss: 0.1463
Epoch: 1/2... Training loss: 0.1489
Epoch: 1/2... Training loss: 0.1467
Epoch: 1/2... Training loss: 0.1431
Epoch: 1/2... Training loss: 0.1405
Epoch: 1/2... Training loss: 0.1447
Epoch: 1/2... Training loss: 0.1427
Epoch: 1/2... Training loss: 0.1470
Epoch: 1/2... Training loss: 0.1443
Epoch: 1/2... Training loss: 0.1393
Epoch: 1/2... Training loss: 0.1413
Epoch: 1/2... Training loss: 0.1460
Epoch: 1/2... Training loss: 0.1404
Epoch: 1/2... Training loss: 0.1427
Epoch: 1/2... Training loss: 0.1420
Epoch: 1/2... Training loss: 0.1440
Epoch: 1/2... Training loss: 0.1408
Epoch: 1/2... Training loss: 0.1404
Epoch: 1/2... Training loss: 0.1461
Epoch: 1/2... Training loss: 0.1393
Epoch: 1/2... Training loss: 0.1408
Epoch: 1/2... Training loss: 0.1341
Epoch: 1/2... Training loss: 0.1391
Epoch: 1/2... Training loss: 0.1426
Epoch: 1/2... Training loss: 0.1416
Epoch: 1/2... Training loss: 0.1438
Epoch: 1/2... Training loss: 0.1417
Epoch: 1/2... Training loss: 0.1387
Epoch: 1/2... Training loss: 0.1354
Epoch: 1/2... Training loss: 0.1370
Epoch: 1/2... Training loss: 0.1385
Epoch: 1/2... Training loss: 0.1392
Epoch: 1/2... Training loss: 0.1391
Epoch: 1/2... Training loss: 0.1378
Epoch: 1/2... Training loss: 0.1368
Epoch: 1/2... Training loss: 0.1379
Epoch: 1/2... Training loss: 0.1378
Epoch: 1/2... Training loss: 0.1376
Epoch: 1/2... Training loss: 0.1397
Epoch: 1/2... Training loss: 0.1358
Epoch: 1/2... Training loss: 0.1379
Epoch: 1/2... Training loss: 0.1373
Epoch: 1/2... Training loss: 0.1323
Epoch: 1/2... Training loss: 0.1386
Epoch: 1/2... Training loss: 0.1340
Epoch: 1/2... Training loss: 0.1336
Epoch: 1/2... Training loss: 0.1331
Epoch: 1/2... Training loss: 0.1344
Epoch: 1/2... Training loss: 0.1350
Epoch: 1/2... Training loss: 0.1351
Epoch: 1/2... Training loss: 0.1338
Epoch: 1/2... Training loss: 0.1412
Epoch: 1/2... Training loss: 0.1329
Epoch: 1/2... Training loss: 0.1351
Epoch: 1/2... Training loss: 0.1279
Epoch: 1/2... Training loss: 0.1351
Epoch: 1/2... Training loss: 0.1319
Epoch: 1/2... Training loss: 0.1384
Epoch: 1/2... Training loss: 0.1340
Epoch: 1/2... Training loss: 0.1375
Epoch: 1/2... Training loss: 0.1358
Epoch: 1/2... Training loss: 0.1291
Epoch: 1/2... Training loss: 0.1310
Epoch: 1/2... Training loss: 0.1321
Epoch: 1/2... Training loss: 0.1355
Epoch: 1/2... Training loss: 0.1268
Epoch: 1/2... Training loss: 0.1307
Epoch: 1/2... Training loss: 0.1367
Epoch: 1/2... Training loss: 0.1304
Epoch: 1/2... Training loss: 0.1293
Epoch: 1/2... Training loss: 0.1316
Epoch: 1/2... Training loss: 0.1340
Epoch: 1/2... Training loss: 0.1305
Epoch: 1/2... Training loss: 0.1311
Epoch: 1/2... Training loss: 0.1340
Epoch: 1/2... Training loss: 0.1293
Epoch: 1/2... Training loss: 0.1327
Epoch: 1/2... Training loss: 0.1335
Epoch: 1/2... Training loss: 0.1310
Epoch: 1/2... Training loss: 0.1277
Epoch: 1/2... Training loss: 0.1308
Epoch: 1/2... Training loss: 0.1278
Epoch: 1/2... Training loss: 0.1309
Epoch: 1/2... Training loss: 0.1305
Epoch: 1/2... Training loss: 0.1263
Epoch: 1/2... Training loss: 0.1334
Epoch: 1/2... Training loss: 0.1291
Epoch: 1/2... Training loss: 0.1314
Epoch: 1/2... Training loss: 0.1319
Epoch: 1/2... Training loss: 0.1317
Epoch: 1/2... Training loss: 0.1372
Epoch: 1/2... Training loss: 0.1307
Epoch: 1/2... Training loss: 0.1275
Epoch: 1/2... Training loss: 0.1271
Epoch: 1/2... Training loss: 0.1282
Epoch: 1/2... Training loss: 0.1322
Epoch: 1/2... Training loss: 0.1269
Epoch: 1/2... Training loss: 0.1327
Epoch: 1/2... Training loss: 0.1301
Epoch: 1/2... Training loss: 0.1261
Epoch: 1/2... Training loss: 0.1282
Epoch: 1/2... Training loss: 0.1276
Epoch: 1/2... Training loss: 0.1286
Epoch: 1/2... Training loss: 0.1267
Epoch: 1/2... Training loss: 0.1280
Epoch: 1/2... Training loss: 0.1313
Epoch: 1/2... Training loss: 0.1269
Epoch: 1/2... Training loss: 0.1279
Epoch: 1/2... Training loss: 0.1286
Epoch: 1/2... Training loss: 0.1287
Epoch: 1/2... Training loss: 0.1255
Epoch: 1/2... Training loss: 0.1244
Epoch: 1/2... Training loss: 0.1288
Epoch: 1/2... Training loss: 0.1255
Epoch: 1/2... Training loss: 0.1239
Epoch: 1/2... Training loss: 0.1268
Epoch: 1/2... Training loss: 0.1270
Epoch: 1/2... Training loss: 0.1242
Epoch: 1/2... Training loss: 0.1285
Epoch: 1/2... Training loss: 0.1275
Epoch: 1/2... Training loss: 0.1237
Epoch: 1/2... Training loss: 0.1279
Epoch: 1/2... Training loss: 0.1261
Epoch: 1/2... Training loss: 0.1241
Epoch: 1/2... Training loss: 0.1249
Epoch: 1/2... Training loss: 0.1237
Epoch: 1/2... Training loss: 0.1252
Epoch: 1/2... Training loss: 0.1218
Epoch: 1/2... Training loss: 0.1215
Epoch: 1/2... Training loss: 0.1243
Epoch: 1/2... Training loss: 0.1237
Epoch: 1/2... Training loss: 0.1261
Epoch: 1/2... Training loss: 0.1272
Epoch: 1/2... Training loss: 0.1202
Epoch: 1/2... Training loss: 0.1244
Epoch: 1/2... Training loss: 0.1248
Epoch: 1/2... Training loss: 0.1273
Epoch: 1/2... Training loss: 0.1269
Epoch: 1/2... Training loss: 0.1250
Epoch: 1/2... Training loss: 0.1269
Epoch: 1/2... Training loss: 0.1275
Epoch: 1/2... Training loss: 0.1274
Epoch: 1/2... Training loss: 0.1239
Epoch: 1/2... Training loss: 0.1263
Epoch: 1/2... Training loss: 0.1243
Epoch: 1/2... Training loss: 0.1238
Epoch: 1/2... Training loss: 0.1219
Epoch: 1/2... Training loss: 0.1217
Epoch: 1/2... Training loss: 0.1206
Epoch: 1/2... Training loss: 0.1251
Epoch: 1/2... Training loss: 0.1222
Epoch: 1/2... Training loss: 0.1247
Epoch: 1/2... Training loss: 0.1184
Epoch: 1/2... Training loss: 0.1233
Epoch: 1/2... Training loss: 0.1258
Epoch: 1/2... Training loss: 0.1264
Epoch: 1/2... Training loss: 0.1243
Epoch: 1/2... Training loss: 0.1238
Epoch: 1/2... Training loss: 0.1223
Epoch: 1/2... Training loss: 0.1217
Epoch: 1/2... Training loss: 0.1204
Epoch: 1/2... Training loss: 0.1202
Epoch: 2/2... Training loss: 0.1212
Epoch: 2/2... Training loss: 0.1231
Epoch: 2/2... Training loss: 0.1255
Epoch: 2/2... Training loss: 0.1257
Epoch: 2/2... Training loss: 0.1218
Epoch: 2/2... Training loss: 0.1247
Epoch: 2/2... Training loss: 0.1258
Epoch: 2/2... Training loss: 0.1223
Epoch: 2/2... Training loss: 0.1235
Epoch: 2/2... Training loss: 0.1276
Epoch: 2/2... Training loss: 0.1265
Epoch: 2/2... Training loss: 0.1220
Epoch: 2/2... Training loss: 0.1253
Epoch: 2/2... Training loss: 0.1220
Epoch: 2/2... Training loss: 0.1269
Epoch: 2/2... Training loss: 0.1244
Epoch: 2/2... Training loss: 0.1248
Epoch: 2/2... Training loss: 0.1196
Epoch: 2/2... Training loss: 0.1246
Epoch: 2/2... Training loss: 0.1222
Epoch: 2/2... Training loss: 0.1225
Epoch: 2/2... Training loss: 0.1239
Epoch: 2/2... Training loss: 0.1236
Epoch: 2/2... Training loss: 0.1250
Epoch: 2/2... Training loss: 0.1239
Epoch: 2/2... Training loss: 0.1254
Epoch: 2/2... Training loss: 0.1234
Epoch: 2/2... Training loss: 0.1217
Epoch: 2/2... Training loss: 0.1193
Epoch: 2/2... Training loss: 0.1241
Epoch: 2/2... Training loss: 0.1246
Epoch: 2/2... Training loss: 0.1271
Epoch: 2/2... Training loss: 0.1203
Epoch: 2/2... Training loss: 0.1199
Epoch: 2/2... Training loss: 0.1198
Epoch: 2/2... Training loss: 0.1224
Epoch: 2/2... Training loss: 0.1210
Epoch: 2/2... Training loss: 0.1227
Epoch: 2/2... Training loss: 0.1201
Epoch: 2/2... Training loss: 0.1227
Epoch: 2/2... Training loss: 0.1231
Epoch: 2/2... Training loss: 0.1216
Epoch: 2/2... Training loss: 0.1154
Epoch: 2/2... Training loss: 0.1204
Epoch: 2/2... Training loss: 0.1221
Epoch: 2/2... Training loss: 0.1187
Epoch: 2/2... Training loss: 0.1243
Epoch: 2/2... Training loss: 0.1237
Epoch: 2/2... Training loss: 0.1227
Epoch: 2/2... Training loss: 0.1234
Epoch: 2/2... Training loss: 0.1179
Epoch: 2/2... Training loss: 0.1208
Epoch: 2/2... Training loss: 0.1229
Epoch: 2/2... Training loss: 0.1190
Epoch: 2/2... Training loss: 0.1201
Epoch: 2/2... Training loss: 0.1209
Epoch: 2/2... Training loss: 0.1221
Epoch: 2/2... Training loss: 0.1176
Epoch: 2/2... Training loss: 0.1221
Epoch: 2/2... Training loss: 0.1169
Epoch: 2/2... Training loss: 0.1204
Epoch: 2/2... Training loss: 0.1164
Epoch: 2/2... Training loss: 0.1209
Epoch: 2/2... Training loss: 0.1215
Epoch: 2/2... Training loss: 0.1223
Epoch: 2/2... Training loss: 0.1178
Epoch: 2/2... Training loss: 0.1245
Epoch: 2/2... Training loss: 0.1199
Epoch: 2/2... Training loss: 0.1229
Epoch: 2/2... Training loss: 0.1197
Epoch: 2/2... Training loss: 0.1206
Epoch: 2/2... Training loss: 0.1208
Epoch: 2/2... Training loss: 0.1196
Epoch: 2/2... Training loss: 0.1200
Epoch: 2/2... Training loss: 0.1225
Epoch: 2/2... Training loss: 0.1233
Epoch: 2/2... Training loss: 0.1192
Epoch: 2/2... Training loss: 0.1210
Epoch: 2/2... Training loss: 0.1193
Epoch: 2/2... Training loss: 0.1207
Epoch: 2/2... Training loss: 0.1244
Epoch: 2/2... Training loss: 0.1186
Epoch: 2/2... Training loss: 0.1165
Epoch: 2/2... Training loss: 0.1200
Epoch: 2/2... Training loss: 0.1217
Epoch: 2/2... Training loss: 0.1262
Epoch: 2/2... Training loss: 0.1209
Epoch: 2/2... Training loss: 0.1181
Epoch: 2/2... Training loss: 0.1184
Epoch: 2/2... Training loss: 0.1202
Epoch: 2/2... Training loss: 0.1203
Epoch: 2/2... Training loss: 0.1211
Epoch: 2/2... Training loss: 0.1188
Epoch: 2/2... Training loss: 0.1154
Epoch: 2/2... Training loss: 0.1188
Epoch: 2/2... Training loss: 0.1198
Epoch: 2/2... Training loss: 0.1213
Epoch: 2/2... Training loss: 0.1172
Epoch: 2/2... Training loss: 0.1192
Epoch: 2/2... Training loss: 0.1208
Epoch: 2/2... Training loss: 0.1186
Epoch: 2/2... Training loss: 0.1226
Epoch: 2/2... Training loss: 0.1210
Epoch: 2/2... Training loss: 0.1169
Epoch: 2/2... Training loss: 0.1170
Epoch: 2/2... Training loss: 0.1158
Epoch: 2/2... Training loss: 0.1196
Epoch: 2/2... Training loss: 0.1203
Epoch: 2/2... Training loss: 0.1179
Epoch: 2/2... Training loss: 0.1205
Epoch: 2/2... Training loss: 0.1183
Epoch: 2/2... Training loss: 0.1184
Epoch: 2/2... Training loss: 0.1189
Epoch: 2/2... Training loss: 0.1137
Epoch: 2/2... Training loss: 0.1194
Epoch: 2/2... Training loss: 0.1198
Epoch: 2/2... Training loss: 0.1171
Epoch: 2/2... Training loss: 0.1207
Epoch: 2/2... Training loss: 0.1223
Epoch: 2/2... Training loss: 0.1177
Epoch: 2/2... Training loss: 0.1166
Epoch: 2/2... Training loss: 0.1164
Epoch: 2/2... Training loss: 0.1193
Epoch: 2/2... Training loss: 0.1197
Epoch: 2/2... Training loss: 0.1189
Epoch: 2/2... Training loss: 0.1194
Epoch: 2/2... Training loss: 0.1185
Epoch: 2/2... Training loss: 0.1185
Epoch: 2/2... Training loss: 0.1171
Epoch: 2/2... Training loss: 0.1190
Epoch: 2/2... Training loss: 0.1159
Epoch: 2/2... Training loss: 0.1147
Epoch: 2/2... Training loss: 0.1185
Epoch: 2/2... Training loss: 0.1215
Epoch: 2/2... Training loss: 0.1162
Epoch: 2/2... Training loss: 0.1164
Epoch: 2/2... Training loss: 0.1184
Epoch: 2/2... Training loss: 0.1164
Epoch: 2/2... Training loss: 0.1147
Epoch: 2/2... Training loss: 0.1176
Epoch: 2/2... Training loss: 0.1187
Epoch: 2/2... Training loss: 0.1156
Epoch: 2/2... Training loss: 0.1185
Epoch: 2/2... Training loss: 0.1198
Epoch: 2/2... Training loss: 0.1158
Epoch: 2/2... Training loss: 0.1209
Epoch: 2/2... Training loss: 0.1138
Epoch: 2/2... Training loss: 0.1189
Epoch: 2/2... Training loss: 0.1172
Epoch: 2/2... Training loss: 0.1154
Epoch: 2/2... Training loss: 0.1177
Epoch: 2/2... Training loss: 0.1186
Epoch: 2/2... Training loss: 0.1169
Epoch: 2/2... Training loss: 0.1176
Epoch: 2/2... Training loss: 0.1170
Epoch: 2/2... Training loss: 0.1186
Epoch: 2/2... Training loss: 0.1152
Epoch: 2/2... Training loss: 0.1181
Epoch: 2/2... Training loss: 0.1149
Epoch: 2/2... Training loss: 0.1171
Epoch: 2/2... Training loss: 0.1200
Epoch: 2/2... Training loss: 0.1188
Epoch: 2/2... Training loss: 0.1157
Epoch: 2/2... Training loss: 0.1147
Epoch: 2/2... Training loss: 0.1158
Epoch: 2/2... Training loss: 0.1164
Epoch: 2/2... Training loss: 0.1153
Epoch: 2/2... Training loss: 0.1159
Epoch: 2/2... Training loss: 0.1193
Epoch: 2/2... Training loss: 0.1188
Epoch: 2/2... Training loss: 0.1132
Epoch: 2/2... Training loss: 0.1162
Epoch: 2/2... Training loss: 0.1132
Epoch: 2/2... Training loss: 0.1178
Epoch: 2/2... Training loss: 0.1175
Epoch: 2/2... Training loss: 0.1180
Epoch: 2/2... Training loss: 0.1162
Epoch: 2/2... Training loss: 0.1168
Epoch: 2/2... Training loss: 0.1163
Epoch: 2/2... Training loss: 0.1179
Epoch: 2/2... Training loss: 0.1145
Epoch: 2/2... Training loss: 0.1129
Epoch: 2/2... Training loss: 0.1178
Epoch: 2/2... Training loss: 0.1180
Epoch: 2/2... Training loss: 0.1130
Epoch: 2/2... Training loss: 0.1122
Epoch: 2/2... Training loss: 0.1184
Epoch: 2/2... Training loss: 0.1154
Epoch: 2/2... Training loss: 0.1155
Epoch: 2/2... Training loss: 0.1151
Epoch: 2/2... Training loss: 0.1178
Epoch: 2/2... Training loss: 0.1154
Epoch: 2/2... Training loss: 0.1170
Epoch: 2/2... Training loss: 0.1160
Epoch: 2/2... Training loss: 0.1159
Epoch: 2/2... Training loss: 0.1140
Epoch: 2/2... Training loss: 0.1164
Epoch: 2/2... Training loss: 0.1166
Epoch: 2/2... Training loss: 0.1152
Epoch: 2/2... Training loss: 0.1204
Epoch: 2/2... Training loss: 0.1215
Epoch: 2/2... Training loss: 0.1182
Epoch: 2/2... Training loss: 0.1144
Epoch: 2/2... Training loss: 0.1163
Epoch: 2/2... Training loss: 0.1167
Epoch: 2/2... Training loss: 0.1114
Epoch: 2/2... Training loss: 0.1167
Epoch: 2/2... Training loss: 0.1205
Epoch: 2/2... Training loss: 0.1168
Epoch: 2/2... Training loss: 0.1150
Epoch: 2/2... Training loss: 0.1146
Epoch: 2/2... Training loss: 0.1108
Epoch: 2/2... Training loss: 0.1196
Epoch: 2/2... Training loss: 0.1163
Epoch: 2/2... Training loss: 0.1194
Epoch: 2/2... Training loss: 0.1164
Epoch: 2/2... Training loss: 0.1164
Epoch: 2/2... Training loss: 0.1143
Epoch: 2/2... Training loss: 0.1141
Epoch: 2/2... Training loss: 0.1099
Epoch: 2/2... Training loss: 0.1151
Epoch: 2/2... Training loss: 0.1175
Epoch: 2/2... Training loss: 0.1141
Epoch: 2/2... Training loss: 0.1117
Epoch: 2/2... Training loss: 0.1192
Epoch: 2/2... Training loss: 0.1126
Epoch: 2/2... Training loss: 0.1151
Epoch: 2/2... Training loss: 0.1160
Epoch: 2/2... Training loss: 0.1151
Epoch: 2/2... Training loss: 0.1159
Epoch: 2/2... Training loss: 0.1213
Epoch: 2/2... Training loss: 0.1142
Epoch: 2/2... Training loss: 0.1156
Epoch: 2/2... Training loss: 0.1149
Epoch: 2/2... Training loss: 0.1154
Epoch: 2/2... Training loss: 0.1127
Epoch: 2/2... Training loss: 0.1146
Epoch: 2/2... Training loss: 0.1123
Epoch: 2/2... Training loss: 0.1161
Epoch: 2/2... Training loss: 0.1141
Epoch: 2/2... Training loss: 0.1167
Epoch: 2/2... Training loss: 0.1124
Epoch: 2/2... Training loss: 0.1125
Epoch: 2/2... Training loss: 0.1135
Epoch: 2/2... Training loss: 0.1149
Epoch: 2/2... Training loss: 0.1150
Epoch: 2/2... Training loss: 0.1175
Epoch: 2/2... Training loss: 0.1197
Epoch: 2/2... Training loss: 0.1164
Epoch: 2/2... Training loss: 0.1142
Epoch: 2/2... Training loss: 0.1159
Epoch: 2/2... Training loss: 0.1122
Epoch: 2/2... Training loss: 0.1132
Epoch: 2/2... Training loss: 0.1139
Epoch: 2/2... Training loss: 0.1119
Epoch: 2/2... Training loss: 0.1141
Epoch: 2/2... Training loss: 0.1138
Epoch: 2/2... Training loss: 0.1163
Epoch: 2/2... Training loss: 0.1148
Epoch: 2/2... Training loss: 0.1119
Epoch: 2/2... Training loss: 0.1123
Epoch: 2/2... Training loss: 0.1172
Epoch: 2/2... Training loss: 0.1152
Epoch: 2/2... Training loss: 0.1107
Epoch: 2/2... Training loss: 0.1154
Epoch: 2/2... Training loss: 0.1110
Epoch: 2/2... Training loss: 0.1129
Epoch: 2/2... Training loss: 0.1142
Epoch: 2/2... Training loss: 0.1111
Epoch: 2/2... Training loss: 0.1112
Epoch: 2/2... Training loss: 0.1124
Epoch: 2/2... Training loss: 0.1099
Epoch: 2/2... Training loss: 0.1159
Epoch: 2/2... Training loss: 0.1128
Epoch: 2/2... Training loss: 0.1133
Epoch: 2/2... Training loss: 0.1127
Epoch: 2/2... Training loss: 0.1145
Epoch: 2/2... Training loss: 0.1115
Epoch: 2/2... Training loss: 0.1167
Epoch: 2/2... Training loss: 0.1127
Epoch: 2/2... Training loss: 0.1135
Epoch: 2/2... Training loss: 0.1107
Epoch: 2/2... Training loss: 0.1137
Epoch: 2/2... Training loss: 0.1131
Epoch: 2/2... Training loss: 0.1109
Epoch: 2/2... Training loss: 0.1181
Epoch: 2/2... Training loss: 0.1136
Epoch: 2/2... Training loss: 0.1140
Epoch: 2/2... Training loss: 0.1134
Epoch: 2/2... Training loss: 0.1154
Epoch: 2/2... Training loss: 0.1151
Epoch: 2/2... Training loss: 0.1152
Epoch: 2/2... Training loss: 0.1143
Epoch: 2/2... Training loss: 0.1129
Epoch: 2/2... Training loss: 0.1150
Epoch: 2/2... Training loss: 0.1120
Epoch: 2/2... Training loss: 0.1134
Epoch: 2/2... Training loss: 0.1141
Epoch: 2/2... Training loss: 0.1139
Epoch: 2/2... Training loss: 0.1169
###Markdown
Checking out the performanceHere I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
###Code
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
###Output
_____no_output_____
###Markdown
Convolutional AutoencoderSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
###Code
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
img.shape
###Output
_____no_output_____
###Markdown
Network ArchitectureThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below. Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoderOkay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers *aren't*. Usually, you'll see **transposed convolution** layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 patch in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, [`tf.nn.conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose). However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with [`tf.image.resize_images`](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/image/resize_images), followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.> **Exercise:** Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used to reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena *et al* claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in `tf.image.resize_images` or use [`tf.image.resize_nearest_neighbor`](https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor). For convolutional layers, use [`tf.layers.conv2d`](https://www.tensorflow.org/api_docs/python/tf/layers/conv2d).
For example, you would write `conv1 = tf.layers.conv2d(inputs, 32, (5,5), padding='same', activation=tf.nn.relu)` for a layer with a depth of 32, a 5x5 kernel, stride of (1,1), padding is 'same', and a ReLU activation. Similarly, for the max-pool layers, use [`tf.layers.max_pooling2d`](https://www.tensorflow.org/api_docs/python/tf/layers/max_pooling2d).
###Code
img.shape[0]
learning_rate = 0.001
# Input and target placeholders
img_size = img.shape[0]
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits, name='decoded')
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
###Output
_____no_output_____
###Markdown
TrainingAs before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
###Code
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
###Output
_____no_output_____
###Markdown
DenoisingAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images. Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, with more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.> **Exercise:** Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
###Code
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits, name='decoded')
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
###Output
Epoch: 1/100... Training loss: 0.6913
Epoch: 1/100... Training loss: 0.6683
Epoch: 1/100... Training loss: 0.6348
Epoch: 1/100... Training loss: 0.5883
Epoch: 1/100... Training loss: 0.5337
Epoch: 1/100... Training loss: 0.4981
Epoch: 1/100... Training loss: 0.5087
Epoch: 1/100... Training loss: 0.5253
Epoch: 1/100... Training loss: 0.5051
Epoch: 1/100... Training loss: 0.4867
Epoch: 1/100... Training loss: 0.4671
Epoch: 1/100... Training loss: 0.4788
Epoch: 1/100... Training loss: 0.4672
Epoch: 1/100... Training loss: 0.4685
Epoch: 1/100... Training loss: 0.4535
Epoch: 1/100... Training loss: 0.4406
Epoch: 1/100... Training loss: 0.4282
Epoch: 1/100... Training loss: 0.4345
Epoch: 1/100... Training loss: 0.4248
Epoch: 1/100... Training loss: 0.3934
Epoch: 1/100... Training loss: 0.3948
Epoch: 1/100... Training loss: 0.3707
Epoch: 1/100... Training loss: 0.3581
Epoch: 1/100... Training loss: 0.3488
Epoch: 1/100... Training loss: 0.3424
Epoch: 1/100... Training loss: 0.3230
Epoch: 1/100... Training loss: 0.3067
Epoch: 1/100... Training loss: 0.3040
Epoch: 1/100... Training loss: 0.2881
Epoch: 1/100... Training loss: 0.2858
Epoch: 1/100... Training loss: 0.2875
Epoch: 1/100... Training loss: 0.2801
Epoch: 1/100... Training loss: 0.2786
Epoch: 1/100... Training loss: 0.2758
Epoch: 1/100... Training loss: 0.2681
Epoch: 1/100... Training loss: 0.2667
Epoch: 1/100... Training loss: 0.2672
Epoch: 1/100... Training loss: 0.2617
Epoch: 1/100... Training loss: 0.2679
Epoch: 1/100... Training loss: 0.2664
Epoch: 1/100... Training loss: 0.2647
Epoch: 1/100... Training loss: 0.2712
Epoch: 1/100... Training loss: 0.2607
Epoch: 1/100... Training loss: 0.2649
Epoch: 1/100... Training loss: 0.2559
Epoch: 1/100... Training loss: 0.2538
Epoch: 1/100... Training loss: 0.2647
Epoch: 1/100... Training loss: 0.2541
Epoch: 1/100... Training loss: 0.2658
Epoch: 1/100... Training loss: 0.2529
Epoch: 1/100... Training loss: 0.2552
Epoch: 1/100... Training loss: 0.2485
Epoch: 1/100... Training loss: 0.2574
Epoch: 1/100... Training loss: 0.2552
Epoch: 1/100... Training loss: 0.2479
Epoch: 1/100... Training loss: 0.2528
Epoch: 1/100... Training loss: 0.2543
Epoch: 1/100... Training loss: 0.2485
Epoch: 1/100... Training loss: 0.2512
Epoch: 1/100... Training loss: 0.2445
Epoch: 1/100... Training loss: 0.2449
Epoch: 1/100... Training loss: 0.2415
Epoch: 1/100... Training loss: 0.2542
Epoch: 1/100... Training loss: 0.2418
Epoch: 1/100... Training loss: 0.2381
Epoch: 1/100... Training loss: 0.2398
Epoch: 1/100... Training loss: 0.2378
Epoch: 1/100... Training loss: 0.2427
Epoch: 1/100... Training loss: 0.2408
Epoch: 1/100... Training loss: 0.2325
Epoch: 1/100... Training loss: 0.2353
Epoch: 1/100... Training loss: 0.2364
Epoch: 1/100... Training loss: 0.2330
Epoch: 1/100... Training loss: 0.2356
Epoch: 1/100... Training loss: 0.2288
Epoch: 1/100... Training loss: 0.2296
Epoch: 1/100... Training loss: 0.2299
Epoch: 1/100... Training loss: 0.2255
Epoch: 1/100... Training loss: 0.2248
Epoch: 1/100... Training loss: 0.2300
Epoch: 1/100... Training loss: 0.2247
Epoch: 1/100... Training loss: 0.2245
Epoch: 1/100... Training loss: 0.2283
Epoch: 1/100... Training loss: 0.2228
Epoch: 1/100... Training loss: 0.2254
Epoch: 1/100... Training loss: 0.2231
Epoch: 1/100... Training loss: 0.2241
Epoch: 1/100... Training loss: 0.2170
Epoch: 1/100... Training loss: 0.2188
Epoch: 1/100... Training loss: 0.2237
Epoch: 1/100... Training loss: 0.2197
Epoch: 1/100... Training loss: 0.2277
Epoch: 1/100... Training loss: 0.2254
Epoch: 1/100... Training loss: 0.2106
Epoch: 1/100... Training loss: 0.2334
Epoch: 1/100... Training loss: 0.2297
Epoch: 1/100... Training loss: 0.2264
Epoch: 1/100... Training loss: 0.2206
Epoch: 1/100... Training loss: 0.2210
Epoch: 1/100... Training loss: 0.2270
Epoch: 1/100... Training loss: 0.2216
Epoch: 1/100... Training loss: 0.2171
Epoch: 1/100... Training loss: 0.2213
Epoch: 1/100... Training loss: 0.2140
Epoch: 1/100... Training loss: 0.2240
Epoch: 1/100... Training loss: 0.2158
Epoch: 1/100... Training loss: 0.2096
Epoch: 1/100... Training loss: 0.2198
Epoch: 1/100... Training loss: 0.2136
Epoch: 1/100... Training loss: 0.2199
Epoch: 1/100... Training loss: 0.2180
Epoch: 1/100... Training loss: 0.2104
Epoch: 1/100... Training loss: 0.2092
Epoch: 1/100... Training loss: 0.2107
Epoch: 1/100... Training loss: 0.2133
Epoch: 1/100... Training loss: 0.2099
Epoch: 1/100... Training loss: 0.2118
Epoch: 1/100... Training loss: 0.2129
Epoch: 1/100... Training loss: 0.2183
Epoch: 1/100... Training loss: 0.2145
Epoch: 1/100... Training loss: 0.2087
Epoch: 1/100... Training loss: 0.2104
Epoch: 1/100... Training loss: 0.2111
Epoch: 1/100... Training loss: 0.2086
Epoch: 1/100... Training loss: 0.2106
Epoch: 1/100... Training loss: 0.2081
Epoch: 1/100... Training loss: 0.2062
Epoch: 1/100... Training loss: 0.2088
Epoch: 1/100... Training loss: 0.2098
Epoch: 1/100... Training loss: 0.2069
Epoch: 1/100... Training loss: 0.2099
Epoch: 1/100... Training loss: 0.2082
Epoch: 1/100... Training loss: 0.2073
Epoch: 1/100... Training loss: 0.2049
Epoch: 1/100... Training loss: 0.2063
Epoch: 1/100... Training loss: 0.2014
Epoch: 1/100... Training loss: 0.2076
Epoch: 1/100... Training loss: 0.2006
Epoch: 1/100... Training loss: 0.2025
Epoch: 1/100... Training loss: 0.2068
Epoch: 1/100... Training loss: 0.2027
Epoch: 1/100... Training loss: 0.1982
Epoch: 1/100... Training loss: 0.2036
Epoch: 1/100... Training loss: 0.1951
Epoch: 1/100... Training loss: 0.2011
Epoch: 1/100... Training loss: 0.2007
Epoch: 1/100... Training loss: 0.2049
Epoch: 1/100... Training loss: 0.1977
Epoch: 1/100... Training loss: 0.2038
Epoch: 1/100... Training loss: 0.2011
Epoch: 1/100... Training loss: 0.1963
Epoch: 1/100... Training loss: 0.2002
Epoch: 1/100... Training loss: 0.2031
Epoch: 1/100... Training loss: 0.1987
Epoch: 1/100... Training loss: 0.1970
Epoch: 1/100... Training loss: 0.1978
Epoch: 1/100... Training loss: 0.2013
Epoch: 1/100... Training loss: 0.2016
Epoch: 1/100... Training loss: 0.1945
Epoch: 1/100... Training loss: 0.1947
Epoch: 1/100... Training loss: 0.1937
Epoch: 1/100... Training loss: 0.1986
Epoch: 1/100... Training loss: 0.2011
Epoch: 1/100... Training loss: 0.1973
Epoch: 1/100... Training loss: 0.1939
Epoch: 1/100... Training loss: 0.1945
Epoch: 1/100... Training loss: 0.1978
Epoch: 1/100... Training loss: 0.1996
Epoch: 1/100... Training loss: 0.1953
Epoch: 1/100... Training loss: 0.1896
Epoch: 1/100... Training loss: 0.1968
Epoch: 1/100... Training loss: 0.1955
Epoch: 1/100... Training loss: 0.1915
Epoch: 1/100... Training loss: 0.1935
Epoch: 1/100... Training loss: 0.1905
Epoch: 1/100... Training loss: 0.1922
Epoch: 1/100... Training loss: 0.1885
Epoch: 1/100... Training loss: 0.1949
Epoch: 1/100... Training loss: 0.1959
Epoch: 1/100... Training loss: 0.1902
Epoch: 1/100... Training loss: 0.1921
Epoch: 1/100... Training loss: 0.1900
Epoch: 1/100... Training loss: 0.1925
Epoch: 1/100... Training loss: 0.1868
Epoch: 1/100... Training loss: 0.1830
Epoch: 1/100... Training loss: 0.1853
Epoch: 1/100... Training loss: 0.1857
Epoch: 1/100... Training loss: 0.1903
Epoch: 1/100... Training loss: 0.1853
Epoch: 1/100... Training loss: 0.1871
Epoch: 1/100... Training loss: 0.1862
Epoch: 1/100... Training loss: 0.1893
Epoch: 1/100... Training loss: 0.1871
Epoch: 1/100... Training loss: 0.1905
Epoch: 1/100... Training loss: 0.1871
Epoch: 1/100... Training loss: 0.1896
Epoch: 1/100... Training loss: 0.1837
Epoch: 1/100... Training loss: 0.1849
Epoch: 1/100... Training loss: 0.1832
Epoch: 1/100... Training loss: 0.1789
Epoch: 1/100... Training loss: 0.1820
Epoch: 1/100... Training loss: 0.1834
Epoch: 1/100... Training loss: 0.1856
Epoch: 1/100... Training loss: 0.1822
Epoch: 1/100... Training loss: 0.1852
Epoch: 1/100... Training loss: 0.1838
Epoch: 1/100... Training loss: 0.1847
Epoch: 1/100... Training loss: 0.1772
Epoch: 1/100... Training loss: 0.1813
Epoch: 1/100... Training loss: 0.1802
Epoch: 1/100... Training loss: 0.1807
Epoch: 1/100... Training loss: 0.1827
Epoch: 1/100... Training loss: 0.1813
Epoch: 1/100... Training loss: 0.1815
Epoch: 1/100... Training loss: 0.1906
Epoch: 1/100... Training loss: 0.1788
###Markdown
Checking out the performanceHere I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
###Code
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
###Output
_____no_output_____
###Markdown
Convolutional AutoencoderSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
###Code
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
###Output
_____no_output_____
###Markdown
Network ArchitectureThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoderOkay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers *aren't*. Usually, you'll see **transposed convolution** layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, [`tf.nn.conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose). However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with [`tf.image.resize_images`](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/image/resize_images), followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.> **Exercise:** Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena *et al* claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in `tf.image.resize_images` or use [`tf.image.resize_nearest_neighbor`]( `https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor). For convolutional layers, use [`tf.layers.conv2d`](https://www.tensorflow.org/api_docs/python/tf/layers/conv2d). 
For example, you would write `conv1 = tf.layers.conv2d(inputs, 32, (5,5), padding='same', activation=tf.nn.relu)` for a layer with a depth of 32, a 5x5 kernel, stride of (1,1), padding is 'same', and a ReLU activation. Similarly, for the max-pool layers, use [`tf.layers.max_pooling2d`](https://www.tensorflow.org/api_docs/python/tf/layers/max_pooling2d).
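As a point of contrast, and purely as an illustration (none of these layers are built in the cell below, and the kernel/stride values are assumptions), a 14x14 -> 28x28 step done with a transposed convolution would read `tf.layers.conv2d_transpose(x, 16, (3,3), strides=(2,2), padding='same')` applied to a 14x14 feature map `x`. With a 3x3 kernel and a stride of 2 the kernel applications overlap, which is exactly the overlap that produces the checkerboard artifacts described above; making the kernel size equal to the stride, e.g. `(2,2)` with `strides=(2,2)`, avoids the overlap, while the resize-then-convolve approach used here sidesteps the issue entirely.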
###Code
learning_rate = 0.001
# Input and target placeholders
inputs_ = tf.placeholder(tf.float32, [None, 28, 28, 1])
targets_ = tf.placeholder(tf.float32, [None, 28, 28, 1])
# Note: here we are using tf.layers.conv2d(x, n_filters, (k,k), padding='SAME')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 16, (3,3), padding='SAME', activation=tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, pool_size=(2,2), strides=(2,2), padding='SAME')
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, 8, (3,3), padding='SAME', activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='SAME')
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, (3,3), padding='SAME', activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='SAME')
# Now 4x4x8
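# Shape check: with 'SAME' padding each 2x2, stride-2 pooling outputs ceil(size/2),
# so the spatial size goes 28 -> 14 -> 7 -> 4 and the bottleneck holds
# 4 * 4 * 8 = 128 values per image (roughly 16% of the 784 input pixels).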
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, (3,3), padding='SAME', activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (3,3), padding='SAME', activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, (3,3), padding='SAME', activation=tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='SAME', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits, name = 'decoded')
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=targets_)
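# Note: sigmoid_cross_entropy_with_logits takes the raw logits, so no sigmoid is
# applied before the loss; decoded above is only used for viewing reconstructions.
# Per the TF docs the per-pixel loss is evaluated in the numerically stable form
# max(x, 0) - x*z + log(1 + exp(-abs(x))), with x = logits and z = targets.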
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
###Output
_____no_output_____
###Markdown
TrainingAs before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
###Code
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
        # note: the inputs are batch_size x 784 -> need to reshape to 4d tensors for the conv2d layer
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
if ii % 10 == 0:
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
###Output
_____no_output_____
###Markdown
DenoisingAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.> **Exercise:** Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers. Note: Training: Noisy Img -> input, clean Img -> output
###Code
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 32, (3,3), padding='SAME', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='SAME')
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, (3,3), padding='SAME', activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='SAME')
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, (3,3), padding='SAME', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='SAME')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, (3,3), padding='SAME', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, (3,3), padding='SAME', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, (3,3), padding='SAME', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='SAME', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=targets_)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# MAIN DIFFERENCE -> we add noise to training data
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
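        # randn draws zero-mean, unit-variance Gaussian noise of the same shape as the
        # batch (the *imgs.shape unpacking), scaled by noise_factor; clipping keeps the
        # corrupted pixels in [0, 1], matching the range of the clean target images.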
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
        if ii % 10 == 0:
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
###Output
Epoch: 1/100... Training loss: 0.6985
Epoch: 1/100... Training loss: 0.6857
Epoch: 1/100... Training loss: 0.6739
Epoch: 1/100... Training loss: 0.6574
Epoch: 1/100... Training loss: 0.6351
Epoch: 1/100... Training loss: 0.6059
###Markdown
Checking out the performanceHere I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
###Code
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
###Output
_____no_output_____
###Markdown
Convolutional AutoencoderSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
###Code
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
###Output
_____no_output_____
###Markdown
Network ArchitectureThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoderOkay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers *aren't*. Usually, you'll see **transposed convolution** layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, [`tf.nn.conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose). However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with [`tf.image.resize_images`](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/image/resize_images), followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.> **Exercise:** Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena *et al* claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in `tf.image.resize_images` or use [`tf.image.resize_nearest_neighbor`]( `https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor). For convolutional layers, use [`tf.layers.conv2d`](https://www.tensorflow.org/api_docs/python/tf/layers/conv2d). 
For example, you would write `conv1 = tf.layers.conv2d(inputs, 32, (5,5), padding='same', activation=tf.nn.relu)` for a layer with a depth of 32, a 5x5 kernel, stride of (1,1), padding is 'same', and a ReLU activation. Similarly, for the max-pool layers, use [`tf.layers.max_pooling2d`](https://www.tensorflow.org/api_docs/python/tf/layers/max_pooling2d).
###Code
learning_rate = 0.001
image_size = mnist.train.images.shape[1]
# Input and target placeholders
inputs_ = tf.placeholder(dtype=tf.float32, shape=(None, 28, 28, 1), name="inputs")
targets_ = tf.placeholder(dtype=tf.float32, shape=(None, 28, 28, 1), name="targets")
### Encoder
'''
tf.layers.conv2d(inputs,
filters,
kernel_size,
strides=(1, 1), # stride of (1, 1) will not reduce size
padding='valid',
data_format='channels_last',
dilation_rate=(1, 1),
activation=None,
use_bias=True,
kernel_initializer=None,
bias_initializer=tf.zeros_initializer(),
kernel_regularizer=None,
bias_regularizer=None,
activity_regularizer=None,
trainable=True,
name=None,
reuse=None
)
max_pooling2d(
inputs,
pool_size,
strides,
padding='valid',
data_format='channels_last',
name=None
)
'''
conv1 = tf.layers.conv2d(inputs_, 16, (5, 5), padding="same", activation=tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, (2, 2), strides=(2, 2), padding="same")
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, 8, (5, 5), padding="same", activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, (2, 2), strides=(2, 2), padding="same")
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, (5, 5), padding="same", activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, (2, 2), strides=(2, 2), padding="same")
# Now 4x4x8
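# ('same' padding rounds up here, ceil(7/2) = 4; 'valid' padding would instead
# shrink the 7x7 maps to 3x3 and break the 4x4x8 bottleneck described above.)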
### Decoder
upsample1 = tf.image.resize_images(encoded, (7, 7))
# Now 7x7x8
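# tf.image.resize_images defaults to bilinear interpolation; to upsample with
# nearest neighbor as suggested above, pass method=tf.image.ResizeMethod.NEAREST_NEIGHBOR.
# Per the Distill article, either interpolation avoids the checkerboard artifacts.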
conv4 = tf.layers.conv2d(upsample1, 8, (5, 5), padding="same", activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_images(conv4, (14, 14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (5, 5), padding="same", activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_images(conv5, (28, 28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, (5, 5), padding="same", activation=tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(conv6, 1, (5, 5), padding="same", activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits, name='output')
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
###Output
_____no_output_____
###Markdown
TrainingAs before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
###Code
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
###Output
_____no_output_____
###Markdown
DenoisingAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.> **Exercise:** Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
###Code
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 32, (3, 3), padding="same", activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, (2, 2), strides=(2, 2), padding="same")
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, (3, 3), padding="same", activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, (2, 2), strides=(2, 2), padding="same")
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, (3, 3), padding="same", activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, (2, 2), strides=(2, 2), padding="same")
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7, 7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, (3, 3), padding="same", activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14, 14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, (3, 3), padding="same", activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28, 28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, (3, 3), padding="same", activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (3, 3), padding="same", activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
###Output
_____no_output_____
###Markdown
Checking out the performanceHere I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
###Code
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
###Output
_____no_output_____
###Markdown
Convolutional AutoencoderSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
###Code
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
###Output
_____no_output_____
###Markdown
Network ArchitectureThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoderOkay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers *aren't*. Usually, you'll see **transposed convolution** layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, [`tf.nn.conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose). However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with [`tf.image.resize_images`](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/image/resize_images), followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.> **Exercise:** Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by 2. Odena *et al* claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in `tf.image.resize_images` or use [`tf.image.resize_nearest_neighbor`]( `https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor). For convolutional layers, use [`tf.layers.conv2d`](https://www.tensorflow.org/api_docs/python/tf/layers/conv2d). 
For example, you would write `conv1 = tf.layers.conv2d(inputs, 32, (5,5), padding='same', activation=tf.nn.relu)` for a layer with a depth of 32, a 5x5 kernel, stride of (1,1), padding is 'same', and a ReLU activation. Similarly, for the max-pool layers, use [`tf.layers.max_pooling2d`](https://www.tensorflow.org/api_docs/python/tf/layers/max_pooling2d).
###Code
learning_rate = 0.001
# Input and target placeholders
inputs_ = tf.placeholder(dtype=tf.float32, shape=(None, 28, 28, 1))
targets_ = tf.placeholder(dtype=tf.float32, shape=(None, 28, 28, 1))
### Encoder
conv1 = tf.layers.conv2d(inputs=inputs_, filters=16, kernel_size=(3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=(2,2), strides=(2,2), padding='same')
# Now 14x14x16
conv2 = tf.layers.conv2d(inputs=maxpool1, filters=8, kernel_size=(3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=(2,2), strides=(2,2), padding='same')
# Now 7x7x8
conv3 = tf.layers.conv2d(inputs=maxpool2, filters=8, kernel_size=(3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(inputs=conv3, pool_size=(2,2), strides=(2,2), padding='same')
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_images(images=encoded, size=(7,7), method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
# Now 7x7x8
conv4 = tf.layers.conv2d(inputs=upsample1, filters=8, kernel_size=(3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_images(images=conv4, size=(14,14), method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
# Now 14x14x8
conv5 = tf.layers.conv2d(inputs=upsample2, filters=8, kernel_size=(3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_images(images=conv5, size=(28,28), method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
# Now 28x28x8
conv6 = tf.layers.conv2d(inputs=upsample3, filters=16, kernel_size=(3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(inputs=conv6, filters=1, kernel_size=(3,3), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
###Output
_____no_output_____
###Markdown
TrainingAs before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
###Code
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
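        # batch[0] holds flattened 784-pixel vectors; reshape them to NHWC tensors
        # (batch, 28, 28, 1) so they can feed the convolutional layers directly.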
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
###Output
_____no_output_____
###Markdown
DenoisingAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.> **Exercise:** Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
###Code
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs=inputs_, filters=32, kernel_size=(3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=(2,2), strides=(2,2), padding='same')
# Now 14x14x32
conv2 = tf.layers.conv2d(inputs=maxpool1, filters=32, kernel_size=(3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=(2,2), strides=(2,2), padding='same')
# Now 7x7x32
conv3 = tf.layers.conv2d(inputs=maxpool2, filters=16, kernel_size=(3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(inputs=conv3, pool_size=(2,2), strides=(2,2), padding='same')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_images(images=encoded, size=(7,7), method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
# Now 7x7x16
conv4 = tf.layers.conv2d(inputs=upsample1, filters=16, kernel_size=(3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_images(images=conv4, size=(14,14), method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
# Now 14x14x16
conv5 = tf.layers.conv2d(inputs=upsample2, filters=32, kernel_size=(3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_images(images=conv5, size=(28,28), method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
# Now 28x28x32
conv6 = tf.layers.conv2d(inputs=upsample3, filters=32, kernel_size=(3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(inputs=conv6, filters=1, kernel_size=(3,3), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 2
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
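# noise_factor scales the standard deviation of the zero-mean Gaussian noise added
# to each pixel below (np.random.randn draws from a standard normal distribution).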
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
###Output
Epoch: 1/2... Training loss: 0.6892
Epoch: 1/2... Training loss: 0.6591
Epoch: 1/2... Training loss: 0.6225
Epoch: 1/2... Training loss: 0.5764
Epoch: 1/2... Training loss: 0.5340
Epoch: 1/2... Training loss: 0.5021
Epoch: 1/2... Training loss: 0.4985
Epoch: 1/2... Training loss: 0.5179
Epoch: 1/2... Training loss: 0.5241
Epoch: 1/2... Training loss: 0.5011
Epoch: 1/2... Training loss: 0.4793
Epoch: 1/2... Training loss: 0.4782
Epoch: 1/2... Training loss: 0.4832
Epoch: 1/2... Training loss: 0.4787
Epoch: 1/2... Training loss: 0.4745
Epoch: 1/2... Training loss: 0.4618
Epoch: 1/2... Training loss: 0.4592
Epoch: 1/2... Training loss: 0.4494
Epoch: 1/2... Training loss: 0.4515
Epoch: 1/2... Training loss: 0.4257
Epoch: 1/2... Training loss: 0.4269
Epoch: 1/2... Training loss: 0.4152
Epoch: 1/2... Training loss: 0.3884
Epoch: 1/2... Training loss: 0.3976
Epoch: 1/2... Training loss: 0.3783
Epoch: 1/2... Training loss: 0.3757
Epoch: 1/2... Training loss: 0.3593
Epoch: 1/2... Training loss: 0.3519
Epoch: 1/2... Training loss: 0.3403
Epoch: 1/2... Training loss: 0.3258
Epoch: 1/2... Training loss: 0.3283
Epoch: 1/2... Training loss: 0.3064
Epoch: 1/2... Training loss: 0.3097
Epoch: 1/2... Training loss: 0.3003
Epoch: 1/2... Training loss: 0.2863
Epoch: 1/2... Training loss: 0.2867
Epoch: 1/2... Training loss: 0.2807
Epoch: 1/2... Training loss: 0.2801
Epoch: 1/2... Training loss: 0.2746
Epoch: 1/2... Training loss: 0.2722
Epoch: 1/2... Training loss: 0.2766
Epoch: 1/2... Training loss: 0.2671
Epoch: 1/2... Training loss: 0.2687
Epoch: 1/2... Training loss: 0.2681
Epoch: 1/2... Training loss: 0.2645
Epoch: 1/2... Training loss: 0.2677
Epoch: 1/2... Training loss: 0.2547
Epoch: 1/2... Training loss: 0.2629
Epoch: 1/2... Training loss: 0.2526
Epoch: 1/2... Training loss: 0.2549
Epoch: 1/2... Training loss: 0.2465
Epoch: 1/2... Training loss: 0.2507
Epoch: 1/2... Training loss: 0.2465
Epoch: 1/2... Training loss: 0.2426
Epoch: 1/2... Training loss: 0.2409
Epoch: 1/2... Training loss: 0.2393
Epoch: 1/2... Training loss: 0.2395
Epoch: 1/2... Training loss: 0.2409
Epoch: 1/2... Training loss: 0.2381
Epoch: 1/2... Training loss: 0.2278
Epoch: 1/2... Training loss: 0.2313
Epoch: 1/2... Training loss: 0.2384
Epoch: 1/2... Training loss: 0.2341
Epoch: 1/2... Training loss: 0.2284
Epoch: 1/2... Training loss: 0.2292
Epoch: 1/2... Training loss: 0.2263
Epoch: 1/2... Training loss: 0.2216
Epoch: 1/2... Training loss: 0.2279
Epoch: 1/2... Training loss: 0.2260
Epoch: 1/2... Training loss: 0.2249
Epoch: 1/2... Training loss: 0.2189
Epoch: 1/2... Training loss: 0.2210
Epoch: 1/2... Training loss: 0.2276
Epoch: 1/2... Training loss: 0.2227
Epoch: 1/2... Training loss: 0.2244
Epoch: 1/2... Training loss: 0.2135
Epoch: 1/2... Training loss: 0.2166
Epoch: 1/2... Training loss: 0.2212
Epoch: 1/2... Training loss: 0.2235
Epoch: 1/2... Training loss: 0.2185
Epoch: 1/2... Training loss: 0.2202
Epoch: 1/2... Training loss: 0.2237
Epoch: 1/2... Training loss: 0.2174
Epoch: 1/2... Training loss: 0.2226
Epoch: 1/2... Training loss: 0.2117
Epoch: 1/2... Training loss: 0.2178
Epoch: 1/2... Training loss: 0.2155
Epoch: 1/2... Training loss: 0.2212
Epoch: 1/2... Training loss: 0.2162
Epoch: 1/2... Training loss: 0.2128
Epoch: 1/2... Training loss: 0.2211
Epoch: 1/2... Training loss: 0.2179
Epoch: 1/2... Training loss: 0.2148
Epoch: 1/2... Training loss: 0.2135
Epoch: 1/2... Training loss: 0.2106
Epoch: 1/2... Training loss: 0.2110
Epoch: 1/2... Training loss: 0.2076
Epoch: 1/2... Training loss: 0.2074
Epoch: 1/2... Training loss: 0.2060
Epoch: 1/2... Training loss: 0.2097
Epoch: 1/2... Training loss: 0.2059
Epoch: 1/2... Training loss: 0.2103
Epoch: 1/2... Training loss: 0.2115
Epoch: 1/2... Training loss: 0.2051
Epoch: 1/2... Training loss: 0.2083
Epoch: 1/2... Training loss: 0.2078
Epoch: 1/2... Training loss: 0.2031
Epoch: 1/2... Training loss: 0.1981
Epoch: 1/2... Training loss: 0.2053
Epoch: 1/2... Training loss: 0.2064
Epoch: 1/2... Training loss: 0.2067
Epoch: 1/2... Training loss: 0.2058
Epoch: 1/2... Training loss: 0.2045
Epoch: 1/2... Training loss: 0.2098
Epoch: 1/2... Training loss: 0.2072
Epoch: 1/2... Training loss: 0.2061
Epoch: 1/2... Training loss: 0.2059
Epoch: 1/2... Training loss: 0.2014
Epoch: 1/2... Training loss: 0.2024
Epoch: 1/2... Training loss: 0.2065
Epoch: 1/2... Training loss: 0.2076
Epoch: 1/2... Training loss: 0.2067
Epoch: 1/2... Training loss: 0.1995
Epoch: 1/2... Training loss: 0.1975
Epoch: 1/2... Training loss: 0.2050
Epoch: 1/2... Training loss: 0.1945
Epoch: 1/2... Training loss: 0.2060
Epoch: 1/2... Training loss: 0.2024
Epoch: 1/2... Training loss: 0.2010
Epoch: 1/2... Training loss: 0.2010
Epoch: 1/2... Training loss: 0.1986
Epoch: 1/2... Training loss: 0.1986
Epoch: 1/2... Training loss: 0.1949
Epoch: 1/2... Training loss: 0.1946
Epoch: 1/2... Training loss: 0.1914
Epoch: 1/2... Training loss: 0.1994
Epoch: 1/2... Training loss: 0.1938
Epoch: 1/2... Training loss: 0.2034
Epoch: 1/2... Training loss: 0.1969
Epoch: 1/2... Training loss: 0.1932
Epoch: 1/2... Training loss: 0.1964
Epoch: 1/2... Training loss: 0.1977
Epoch: 1/2... Training loss: 0.1968
Epoch: 1/2... Training loss: 0.1888
Epoch: 1/2... Training loss: 0.1929
Epoch: 1/2... Training loss: 0.1992
Epoch: 1/2... Training loss: 0.1958
Epoch: 1/2... Training loss: 0.1937
Epoch: 1/2... Training loss: 0.1931
Epoch: 1/2... Training loss: 0.1924
Epoch: 1/2... Training loss: 0.1931
Epoch: 1/2... Training loss: 0.1927
Epoch: 1/2... Training loss: 0.1924
Epoch: 1/2... Training loss: 0.1911
Epoch: 1/2... Training loss: 0.1941
Epoch: 1/2... Training loss: 0.1939
Epoch: 1/2... Training loss: 0.1865
Epoch: 1/2... Training loss: 0.1853
Epoch: 1/2... Training loss: 0.1889
Epoch: 1/2... Training loss: 0.1921
Epoch: 1/2... Training loss: 0.1944
Epoch: 1/2... Training loss: 0.1911
Epoch: 1/2... Training loss: 0.1822
Epoch: 1/2... Training loss: 0.1916
Epoch: 1/2... Training loss: 0.1798
Epoch: 1/2... Training loss: 0.1842
Epoch: 1/2... Training loss: 0.1865
Epoch: 1/2... Training loss: 0.1906
Epoch: 1/2... Training loss: 0.1855
Epoch: 1/2... Training loss: 0.1912
Epoch: 1/2... Training loss: 0.1876
Epoch: 1/2... Training loss: 0.1840
Epoch: 1/2... Training loss: 0.1904
Epoch: 1/2... Training loss: 0.1887
Epoch: 1/2... Training loss: 0.1808
Epoch: 1/2... Training loss: 0.1853
Epoch: 1/2... Training loss: 0.1822
Epoch: 1/2... Training loss: 0.1817
Epoch: 1/2... Training loss: 0.1871
Epoch: 1/2... Training loss: 0.1817
Epoch: 1/2... Training loss: 0.1851
Epoch: 1/2... Training loss: 0.1830
Epoch: 1/2... Training loss: 0.1818
Epoch: 1/2... Training loss: 0.1871
Epoch: 1/2... Training loss: 0.1839
Epoch: 1/2... Training loss: 0.1891
Epoch: 1/2... Training loss: 0.1787
Epoch: 1/2... Training loss: 0.1814
Epoch: 1/2... Training loss: 0.1825
Epoch: 1/2... Training loss: 0.1857
Epoch: 1/2... Training loss: 0.1819
Epoch: 1/2... Training loss: 0.1816
Epoch: 1/2... Training loss: 0.1830
Epoch: 1/2... Training loss: 0.1804
Epoch: 1/2... Training loss: 0.1809
Epoch: 1/2... Training loss: 0.1816
Epoch: 1/2... Training loss: 0.1821
Epoch: 1/2... Training loss: 0.1804
Epoch: 1/2... Training loss: 0.1766
Epoch: 1/2... Training loss: 0.1795
Epoch: 1/2... Training loss: 0.1799
Epoch: 1/2... Training loss: 0.1845
Epoch: 1/2... Training loss: 0.1758
Epoch: 1/2... Training loss: 0.1797
Epoch: 1/2... Training loss: 0.1804
Epoch: 1/2... Training loss: 0.1772
Epoch: 1/2... Training loss: 0.1828
Epoch: 1/2... Training loss: 0.1791
Epoch: 1/2... Training loss: 0.1763
Epoch: 1/2... Training loss: 0.1757
Epoch: 1/2... Training loss: 0.1752
Epoch: 1/2... Training loss: 0.1778
Epoch: 1/2... Training loss: 0.1779
Epoch: 1/2... Training loss: 0.1772
Epoch: 1/2... Training loss: 0.1725
Epoch: 1/2... Training loss: 0.1742
Epoch: 1/2... Training loss: 0.1754
Epoch: 1/2... Training loss: 0.1748
Epoch: 1/2... Training loss: 0.1779
Epoch: 1/2... Training loss: 0.1722
Epoch: 1/2... Training loss: 0.1737
Epoch: 1/2... Training loss: 0.1716
Epoch: 1/2... Training loss: 0.1730
Epoch: 1/2... Training loss: 0.1728
Epoch: 1/2... Training loss: 0.1741
Epoch: 1/2... Training loss: 0.1703
Epoch: 1/2... Training loss: 0.1695
Epoch: 1/2... Training loss: 0.1749
Epoch: 1/2... Training loss: 0.1721
Epoch: 1/2... Training loss: 0.1704
Epoch: 1/2... Training loss: 0.1696
Epoch: 1/2... Training loss: 0.1700
Epoch: 1/2... Training loss: 0.1731
Epoch: 1/2... Training loss: 0.1753
Epoch: 1/2... Training loss: 0.1758
Epoch: 1/2... Training loss: 0.1696
Epoch: 1/2... Training loss: 0.1757
Epoch: 1/2... Training loss: 0.1734
Epoch: 1/2... Training loss: 0.1745
Epoch: 1/2... Training loss: 0.1719
Epoch: 1/2... Training loss: 0.1721
Epoch: 1/2... Training loss: 0.1672
Epoch: 1/2... Training loss: 0.1704
Epoch: 1/2... Training loss: 0.1689
Epoch: 1/2... Training loss: 0.1668
Epoch: 1/2... Training loss: 0.1737
Epoch: 1/2... Training loss: 0.1773
Epoch: 1/2... Training loss: 0.1700
Epoch: 1/2... Training loss: 0.1701
Epoch: 1/2... Training loss: 0.1644
Epoch: 1/2... Training loss: 0.1744
Epoch: 1/2... Training loss: 0.1737
Epoch: 1/2... Training loss: 0.1677
Epoch: 1/2... Training loss: 0.1738
Epoch: 1/2... Training loss: 0.1654
Epoch: 1/2... Training loss: 0.1730
Epoch: 1/2... Training loss: 0.1703
Epoch: 1/2... Training loss: 0.1643
Epoch: 1/2... Training loss: 0.1602
Epoch: 1/2... Training loss: 0.1753
Epoch: 1/2... Training loss: 0.1674
Epoch: 1/2... Training loss: 0.1668
Epoch: 1/2... Training loss: 0.1687
Epoch: 1/2... Training loss: 0.1653
Epoch: 1/2... Training loss: 0.1664
Epoch: 1/2... Training loss: 0.1723
Epoch: 1/2... Training loss: 0.1653
Epoch: 1/2... Training loss: 0.1673
Epoch: 1/2... Training loss: 0.1694
Epoch: 1/2... Training loss: 0.1665
Epoch: 1/2... Training loss: 0.1657
Epoch: 1/2... Training loss: 0.1675
Epoch: 1/2... Training loss: 0.1692
Epoch: 1/2... Training loss: 0.1634
Epoch: 1/2... Training loss: 0.1637
Epoch: 1/2... Training loss: 0.1622
Epoch: 1/2... Training loss: 0.1640
Epoch: 1/2... Training loss: 0.1596
Epoch: 1/2... Training loss: 0.1651
Epoch: 1/2... Training loss: 0.1646
Epoch: 1/2... Training loss: 0.1592
Epoch: 1/2... Training loss: 0.1672
Epoch: 1/2... Training loss: 0.1687
Epoch: 1/2... Training loss: 0.1604
Epoch: 1/2... Training loss: 0.1601
Epoch: 1/2... Training loss: 0.1660
Epoch: 1/2... Training loss: 0.1614
Epoch: 1/2... Training loss: 0.1634
Epoch: 1/2... Training loss: 0.1604
Epoch: 1/2... Training loss: 0.1642
Epoch: 1/2... Training loss: 0.1674
Epoch: 1/2... Training loss: 0.1650
Epoch: 1/2... Training loss: 0.1653
Epoch: 1/2... Training loss: 0.1563
Epoch: 1/2... Training loss: 0.1584
Epoch: 1/2... Training loss: 0.1603
Epoch: 1/2... Training loss: 0.1611
Epoch: 1/2... Training loss: 0.1616
Epoch: 1/2... Training loss: 0.1618
Epoch: 1/2... Training loss: 0.1568
Epoch: 2/2... Training loss: 0.1599
Epoch: 2/2... Training loss: 0.1601
Epoch: 2/2... Training loss: 0.1623
Epoch: 2/2... Training loss: 0.1634
Epoch: 2/2... Training loss: 0.1645
Epoch: 2/2... Training loss: 0.1608
Epoch: 2/2... Training loss: 0.1514
Epoch: 2/2... Training loss: 0.1575
Epoch: 2/2... Training loss: 0.1598
Epoch: 2/2... Training loss: 0.1602
Epoch: 2/2... Training loss: 0.1567
Epoch: 2/2... Training loss: 0.1613
Epoch: 2/2... Training loss: 0.1586
Epoch: 2/2... Training loss: 0.1560
Epoch: 2/2... Training loss: 0.1654
Epoch: 2/2... Training loss: 0.1628
Epoch: 2/2... Training loss: 0.1616
Epoch: 2/2... Training loss: 0.1601
Epoch: 2/2... Training loss: 0.1632
Epoch: 2/2... Training loss: 0.1633
Epoch: 2/2... Training loss: 0.1593
Epoch: 2/2... Training loss: 0.1585
Epoch: 2/2... Training loss: 0.1639
Epoch: 2/2... Training loss: 0.1594
Epoch: 2/2... Training loss: 0.1584
Epoch: 2/2... Training loss: 0.1600
Epoch: 2/2... Training loss: 0.1566
Epoch: 2/2... Training loss: 0.1559
Epoch: 2/2... Training loss: 0.1573
Epoch: 2/2... Training loss: 0.1567
Epoch: 2/2... Training loss: 0.1571
Epoch: 2/2... Training loss: 0.1587
Epoch: 2/2... Training loss: 0.1630
Epoch: 2/2... Training loss: 0.1595
Epoch: 2/2... Training loss: 0.1627
Epoch: 2/2... Training loss: 0.1587
Epoch: 2/2... Training loss: 0.1591
Epoch: 2/2... Training loss: 0.1602
Epoch: 2/2... Training loss: 0.1583
Epoch: 2/2... Training loss: 0.1565
Epoch: 2/2... Training loss: 0.1560
Epoch: 2/2... Training loss: 0.1593
Epoch: 2/2... Training loss: 0.1538
Epoch: 2/2... Training loss: 0.1585
Epoch: 2/2... Training loss: 0.1591
Epoch: 2/2... Training loss: 0.1603
Epoch: 2/2... Training loss: 0.1594
Epoch: 2/2... Training loss: 0.1538
Epoch: 2/2... Training loss: 0.1601
Epoch: 2/2... Training loss: 0.1551
Epoch: 2/2... Training loss: 0.1546
Epoch: 2/2... Training loss: 0.1592
Epoch: 2/2... Training loss: 0.1536
Epoch: 2/2... Training loss: 0.1541
Epoch: 2/2... Training loss: 0.1559
Epoch: 2/2... Training loss: 0.1549
Epoch: 2/2... Training loss: 0.1582
Epoch: 2/2... Training loss: 0.1546
Epoch: 2/2... Training loss: 0.1525
Epoch: 2/2... Training loss: 0.1536
Epoch: 2/2... Training loss: 0.1577
Epoch: 2/2... Training loss: 0.1552
Epoch: 2/2... Training loss: 0.1499
Epoch: 2/2... Training loss: 0.1504
Epoch: 2/2... Training loss: 0.1547
Epoch: 2/2... Training loss: 0.1559
Epoch: 2/2... Training loss: 0.1545
Epoch: 2/2... Training loss: 0.1557
Epoch: 2/2... Training loss: 0.1526
Epoch: 2/2... Training loss: 0.1573
Epoch: 2/2... Training loss: 0.1536
Epoch: 2/2... Training loss: 0.1573
Epoch: 2/2... Training loss: 0.1567
Epoch: 2/2... Training loss: 0.1533
Epoch: 2/2... Training loss: 0.1503
Epoch: 2/2... Training loss: 0.1521
Epoch: 2/2... Training loss: 0.1568
Epoch: 2/2... Training loss: 0.1538
Epoch: 2/2... Training loss: 0.1515
Epoch: 2/2... Training loss: 0.1517
Epoch: 2/2... Training loss: 0.1572
Epoch: 2/2... Training loss: 0.1564
Epoch: 2/2... Training loss: 0.1555
Epoch: 2/2... Training loss: 0.1541
Epoch: 2/2... Training loss: 0.1490
Epoch: 2/2... Training loss: 0.1509
Epoch: 2/2... Training loss: 0.1535
Epoch: 2/2... Training loss: 0.1531
Epoch: 2/2... Training loss: 0.1567
Epoch: 2/2... Training loss: 0.1557
Epoch: 2/2... Training loss: 0.1523
Epoch: 2/2... Training loss: 0.1581
Epoch: 2/2... Training loss: 0.1546
Epoch: 2/2... Training loss: 0.1513
Epoch: 2/2... Training loss: 0.1547
Epoch: 2/2... Training loss: 0.1523
Epoch: 2/2... Training loss: 0.1512
Epoch: 2/2... Training loss: 0.1529
Epoch: 2/2... Training loss: 0.1488
Epoch: 2/2... Training loss: 0.1488
Epoch: 2/2... Training loss: 0.1535
Epoch: 2/2... Training loss: 0.1499
Epoch: 2/2... Training loss: 0.1487
Epoch: 2/2... Training loss: 0.1564
Epoch: 2/2... Training loss: 0.1523
Epoch: 2/2... Training loss: 0.1501
Epoch: 2/2... Training loss: 0.1537
Epoch: 2/2... Training loss: 0.1532
Epoch: 2/2... Training loss: 0.1530
Epoch: 2/2... Training loss: 0.1537
Epoch: 2/2... Training loss: 0.1566
Epoch: 2/2... Training loss: 0.1501
Epoch: 2/2... Training loss: 0.1538
Epoch: 2/2... Training loss: 0.1533
Epoch: 2/2... Training loss: 0.1464
Epoch: 2/2... Training loss: 0.1447
Epoch: 2/2... Training loss: 0.1518
Epoch: 2/2... Training loss: 0.1507
Epoch: 2/2... Training loss: 0.1556
Epoch: 2/2... Training loss: 0.1480
Epoch: 2/2... Training loss: 0.1521
Epoch: 2/2... Training loss: 0.1487
Epoch: 2/2... Training loss: 0.1513
Epoch: 2/2... Training loss: 0.1476
Epoch: 2/2... Training loss: 0.1549
Epoch: 2/2... Training loss: 0.1508
Epoch: 2/2... Training loss: 0.1503
Epoch: 2/2... Training loss: 0.1544
Epoch: 2/2... Training loss: 0.1508
Epoch: 2/2... Training loss: 0.1536
Epoch: 2/2... Training loss: 0.1511
Epoch: 2/2... Training loss: 0.1499
Epoch: 2/2... Training loss: 0.1496
Epoch: 2/2... Training loss: 0.1485
Epoch: 2/2... Training loss: 0.1525
Epoch: 2/2... Training loss: 0.1489
Epoch: 2/2... Training loss: 0.1527
Epoch: 2/2... Training loss: 0.1457
Epoch: 2/2... Training loss: 0.1483
Epoch: 2/2... Training loss: 0.1514
Epoch: 2/2... Training loss: 0.1450
Epoch: 2/2... Training loss: 0.1474
Epoch: 2/2... Training loss: 0.1575
Epoch: 2/2... Training loss: 0.1462
Epoch: 2/2... Training loss: 0.1520
Epoch: 2/2... Training loss: 0.1570
Epoch: 2/2... Training loss: 0.1524
Epoch: 2/2... Training loss: 0.1451
Epoch: 2/2... Training loss: 0.1517
Epoch: 2/2... Training loss: 0.1497
Epoch: 2/2... Training loss: 0.1482
Epoch: 2/2... Training loss: 0.1524
Epoch: 2/2... Training loss: 0.1506
Epoch: 2/2... Training loss: 0.1519
Epoch: 2/2... Training loss: 0.1522
Epoch: 2/2... Training loss: 0.1453
Epoch: 2/2... Training loss: 0.1499
Epoch: 2/2... Training loss: 0.1477
Epoch: 2/2... Training loss: 0.1443
Epoch: 2/2... Training loss: 0.1445
Epoch: 2/2... Training loss: 0.1464
Epoch: 2/2... Training loss: 0.1458
Epoch: 2/2... Training loss: 0.1489
Epoch: 2/2... Training loss: 0.1447
Epoch: 2/2... Training loss: 0.1419
Epoch: 2/2... Training loss: 0.1429
Epoch: 2/2... Training loss: 0.1456
Epoch: 2/2... Training loss: 0.1437
Epoch: 2/2... Training loss: 0.1445
Epoch: 2/2... Training loss: 0.1451
Epoch: 2/2... Training loss: 0.1447
Epoch: 2/2... Training loss: 0.1499
Epoch: 2/2... Training loss: 0.1472
Epoch: 2/2... Training loss: 0.1465
Epoch: 2/2... Training loss: 0.1442
Epoch: 2/2... Training loss: 0.1512
Epoch: 2/2... Training loss: 0.1457
Epoch: 2/2... Training loss: 0.1489
Epoch: 2/2... Training loss: 0.1450
Epoch: 2/2... Training loss: 0.1437
Epoch: 2/2... Training loss: 0.1498
Epoch: 2/2... Training loss: 0.1471
Epoch: 2/2... Training loss: 0.1487
Epoch: 2/2... Training loss: 0.1494
Epoch: 2/2... Training loss: 0.1474
Epoch: 2/2... Training loss: 0.1501
Epoch: 2/2... Training loss: 0.1470
Epoch: 2/2... Training loss: 0.1489
Epoch: 2/2... Training loss: 0.1525
Epoch: 2/2... Training loss: 0.1488
Epoch: 2/2... Training loss: 0.1490
Epoch: 2/2... Training loss: 0.1501
Epoch: 2/2... Training loss: 0.1399
Epoch: 2/2... Training loss: 0.1417
Epoch: 2/2... Training loss: 0.1429
Epoch: 2/2... Training loss: 0.1509
Epoch: 2/2... Training loss: 0.1462
Epoch: 2/2... Training loss: 0.1481
Epoch: 2/2... Training loss: 0.1429
Epoch: 2/2... Training loss: 0.1431
Epoch: 2/2... Training loss: 0.1420
Epoch: 2/2... Training loss: 0.1450
Epoch: 2/2... Training loss: 0.1452
Epoch: 2/2... Training loss: 0.1396
Epoch: 2/2... Training loss: 0.1478
Epoch: 2/2... Training loss: 0.1490
Epoch: 2/2... Training loss: 0.1446
Epoch: 2/2... Training loss: 0.1449
Epoch: 2/2... Training loss: 0.1487
Epoch: 2/2... Training loss: 0.1512
Epoch: 2/2... Training loss: 0.1420
Epoch: 2/2... Training loss: 0.1461
Epoch: 2/2... Training loss: 0.1460
Epoch: 2/2... Training loss: 0.1452
Epoch: 2/2... Training loss: 0.1382
Epoch: 2/2... Training loss: 0.1451
Epoch: 2/2... Training loss: 0.1431
Epoch: 2/2... Training loss: 0.1459
Epoch: 2/2... Training loss: 0.1450
Epoch: 2/2... Training loss: 0.1432
Epoch: 2/2... Training loss: 0.1404
Epoch: 2/2... Training loss: 0.1430
Epoch: 2/2... Training loss: 0.1478
Epoch: 2/2... Training loss: 0.1410
Epoch: 2/2... Training loss: 0.1447
Epoch: 2/2... Training loss: 0.1502
Epoch: 2/2... Training loss: 0.1447
Epoch: 2/2... Training loss: 0.1474
Epoch: 2/2... Training loss: 0.1456
Epoch: 2/2... Training loss: 0.1453
Epoch: 2/2... Training loss: 0.1410
Epoch: 2/2... Training loss: 0.1452
Epoch: 2/2... Training loss: 0.1446
Epoch: 2/2... Training loss: 0.1409
Epoch: 2/2... Training loss: 0.1415
Epoch: 2/2... Training loss: 0.1439
Epoch: 2/2... Training loss: 0.1423
Epoch: 2/2... Training loss: 0.1449
Epoch: 2/2... Training loss: 0.1475
Epoch: 2/2... Training loss: 0.1404
Epoch: 2/2... Training loss: 0.1449
Epoch: 2/2... Training loss: 0.1443
Epoch: 2/2... Training loss: 0.1452
Epoch: 2/2... Training loss: 0.1415
Epoch: 2/2... Training loss: 0.1489
Epoch: 2/2... Training loss: 0.1382
Epoch: 2/2... Training loss: 0.1474
Epoch: 2/2... Training loss: 0.1463
Epoch: 2/2... Training loss: 0.1447
Epoch: 2/2... Training loss: 0.1410
Epoch: 2/2... Training loss: 0.1432
Epoch: 2/2... Training loss: 0.1375
Epoch: 2/2... Training loss: 0.1433
Epoch: 2/2... Training loss: 0.1411
Epoch: 2/2... Training loss: 0.1424
Epoch: 2/2... Training loss: 0.1423
Epoch: 2/2... Training loss: 0.1424
Epoch: 2/2... Training loss: 0.1407
Epoch: 2/2... Training loss: 0.1434
Epoch: 2/2... Training loss: 0.1472
Epoch: 2/2... Training loss: 0.1465
Epoch: 2/2... Training loss: 0.1428
Epoch: 2/2... Training loss: 0.1429
Epoch: 2/2... Training loss: 0.1412
Epoch: 2/2... Training loss: 0.1406
Epoch: 2/2... Training loss: 0.1415
Epoch: 2/2... Training loss: 0.1456
Epoch: 2/2... Training loss: 0.1455
Epoch: 2/2... Training loss: 0.1443
Epoch: 2/2... Training loss: 0.1440
Epoch: 2/2... Training loss: 0.1434
Epoch: 2/2... Training loss: 0.1475
Epoch: 2/2... Training loss: 0.1454
Epoch: 2/2... Training loss: 0.1455
Epoch: 2/2... Training loss: 0.1477
Epoch: 2/2... Training loss: 0.1421
Epoch: 2/2... Training loss: 0.1393
Epoch: 2/2... Training loss: 0.1456
Epoch: 2/2... Training loss: 0.1410
Epoch: 2/2... Training loss: 0.1413
Epoch: 2/2... Training loss: 0.1458
Epoch: 2/2... Training loss: 0.1390
Epoch: 2/2... Training loss: 0.1374
Epoch: 2/2... Training loss: 0.1434
Epoch: 2/2... Training loss: 0.1422
Epoch: 2/2... Training loss: 0.1471
Epoch: 2/2... Training loss: 0.1424
Epoch: 2/2... Training loss: 0.1452
Epoch: 2/2... Training loss: 0.1398
Epoch: 2/2... Training loss: 0.1367
Epoch: 2/2... Training loss: 0.1387
Epoch: 2/2... Training loss: 0.1440
Epoch: 2/2... Training loss: 0.1436
Epoch: 2/2... Training loss: 0.1377
Epoch: 2/2... Training loss: 0.1390
Epoch: 2/2... Training loss: 0.1419
Epoch: 2/2... Training loss: 0.1430
Epoch: 2/2... Training loss: 0.1415
Epoch: 2/2... Training loss: 0.1449
Epoch: 2/2... Training loss: 0.1430
###Markdown
Checking out the performanceHere I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
###Code
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
###Output
_____no_output_____
###Markdown
Convolutional AutoencoderSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
###Code
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
###Output
_____no_output_____
###Markdown
Network ArchitectureThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoderOkay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers *aren't*. Usually, you'll see **transposed convolution** layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, [`tf.nn.conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose). However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with [`tf.image.resize_images`](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/image/resize_images), followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.> **Exercise:** Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena *et al* claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in `tf.image.resize_images` or use [`tf.image.resize_nearest_neighbor`]( `https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor). For convolutional layers, use [`tf.layers.conv2d`](https://www.tensorflow.org/api_docs/python/tf/layers/conv2d). 
For example, you would write `conv1 = tf.layers.conv2d(inputs, 32, (5,5), padding='same', activation=tf.nn.relu)` for a layer with a depth of 32, a 5x5 kernel, stride of (1,1), padding is 'same', and a ReLU activation. Similarly, for the max-pool layers, use [`tf.layers.max_pooling2d`](https://www.tensorflow.org/api_docs/python/tf/layers/max_pooling2d).
###Code
learning_rate = 0.001
# Input and target placeholders
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, strides=(2,2), pool_size=(2, 2), padding='same')
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, strides=(2,2), pool_size=(2, 2), padding='same')
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, strides=(2,2), pool_size=(2, 2), padding='same')
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7, 7))
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14, 14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
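# For comparison only (an added sketch, not wired into the decoder below): the same
# 7x7 -> 14x14 step written with a transposed convolution, the alternative discussed
# in the markdown above. Matching the kernel size to the stride (2x2 kernel, stride 2)
# keeps the kernels from overlapping, which avoids the checkerboard artifacts.
# `conv5_alt` is a throwaway name introduced here for illustration.
conv5_alt = tf.layers.conv2d_transpose(conv4, 8, (2,2), strides=(2,2), padding='same',
                                       activation=tf.nn.relu)
# Also 14x14x8, but produced by a learned upsampling instead of resize + conv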
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
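# A note on the loss above: sigmoid_cross_entropy_with_logits treats every pixel as an
# independent binary prediction and computes, per pixel,
#   -( t * log(sigmoid(x)) + (1 - t) * log(1 - sigmoid(x)) )
# in a numerically stable way, so it must be fed the raw logits rather than `decoded`.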
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
###Output
_____no_output_____
###Markdown
TrainingAs before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
###Code
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
###Output
_____no_output_____
###Markdown
DenoisingAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.> **Exercise:** Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
###Code
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
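# Added check: with the 32-32-16 depths suggested above, the denoising encoder should
# still bottleneck at 4x4x16 and the reconstruction should come back out at 28x28x1.
assert encoded.get_shape().as_list() == [None, 4, 4, 16]
assert decoded.get_shape().as_list() == [None, 28, 28, 1]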
sess = tf.Session()
epochs = 100
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
###Output
_____no_output_____
###Markdown
Checking out the performanceHere I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
###Code
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
###Output
_____no_output_____
###Markdown
Convolutional AutoencoderSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
###Code
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
###Output
_____no_output_____
###Markdown
Network ArchitectureThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoderOkay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers *aren't*. Usually, you'll see **transposed convolution** layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, [`tf.nn.conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose). However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with [`tf.image.resize_images`](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/image/resize_images), followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.> **Exercise:** Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena *et al* claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in `tf.image.resize_images` or use [`tf.image.resize_nearest_neighbor`]( `https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor). For convolutional layers, use [`tf.layers.conv2d`](https://www.tensorflow.org/api_docs/python/tf/layers/conv2d). 
For example, you would write `conv1 = tf.layers.conv2d(inputs, 32, (5,5), padding='same', activation=tf.nn.relu)` for a layer with a depth of 32, a 5x5 kernel, stride of (1,1), padding is 'same', and a ReLU activation. Similarly, for the max-pool layers, use [`tf.layers.max_pooling2d`](https://www.tensorflow.org/api_docs/python/tf/layers/max_pooling2d).
###Code
learning_rate = 0.001
# Input and target placeholders
inputs_ = tf.placeholder(tf.float32, shape=(None, 28, 28, 1))
targets_ = tf.placeholder(tf.float32, shape=(None, 28, 28, 1))
### Encoder
conv1 = tf.layers.conv2d(inputs=inputs_, filters=16, kernel_size=(5,5), strides=(1,1), padding='same',activation=tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=(2,2), strides=(2,2), padding='same')
# Now 14x14x16
conv2 = tf.layers.conv2d(inputs=maxpool1, filters=8, strides=(1,1), kernel_size=(3,3), padding='same',activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=(2,2), strides=(2,2), padding='same')
# Now 7x7x8
conv3 = tf.layers.conv2d(inputs=maxpool2, strides=(1,1), kernel_size=(3,3), filters=8, padding='same',activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(inputs=conv3, pool_size=(2,2), strides=(2,2), padding='same')
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(images=encoded, size=(7,7))
# Now 7x7x8
conv4 = tf.layers.conv2d(inputs=upsample1, kernel_size=(3,3), filters=8, strides=(1,1), padding='same',activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(images=conv4, size=(14, 14))
# Now 14x14x8
conv5 = tf.layers.conv2d(inputs=upsample2, kernel_size=(3,3), filters=8, strides=(1,1), padding='same',activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(images=conv5, size=(28, 28))
# Now 28x28x8
conv6 = tf.layers.conv2d(inputs=upsample3, kernel_size=(3,3), filters=16, strides=(1,1), padding='same',activation=tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(inputs=conv6, kernel_size=(3,3), filters=1, strides=(1,1), padding='same')
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
###Output
_____no_output_____
###Markdown
TrainingAs before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
###Code
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
###Output
_____no_output_____
###Markdown
DenoisingAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.> **Exercise:** Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
###Code
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs=inputs_, filters=32, kernel_size=(5,5), strides=(1,1), padding='same', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=(2,2), padding='same',strides=(2,2))
# Now 14x14x32
conv2 = tf.layers.conv2d(inputs=maxpool1, filters=32, kernel_size=(3,3), strides=(1,1), padding='same', activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=(2,2), padding='same',strides=(2,2))
# Now 7x7x32
conv3 = tf.layers.conv2d(inputs=maxpool2, filters=16, kernel_size=(3,3), strides=(1,1), padding='same', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(inputs=conv3, pool_size=(2,2), padding='same',strides=(2,2))
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(images=encoded, size=(7,7))
# Now 7x7x16
conv4 = tf.layers.conv2d(inputs=upsample1, filters=16, kernel_size=(3,3), strides=(1,1), padding='same', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(images=conv4, size=(14,14))
# Now 14x14x16
conv5 = tf.layers.conv2d(inputs=upsample2, filters=32, kernel_size=(3,3), strides=(1,1), padding='same', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(images=conv5, size=(28,28))
# Now 28x28x32
conv6 = tf.layers.conv2d(inputs=upsample3, filters=32, strides=(1,1), kernel_size=(3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(inputs=conv6, filters=1, strides=(1,1), kernel_size=(3,3), padding='same')
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 10
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
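# Added illustration of the corruption step described in the markdown: scale Gaussian
# noise by noise_factor, add it to an image, and clip back into the valid [0, 1]
# pixel range. `example` and `example_noisy` are throwaway names, not used in training.
example = mnist.train.images[0].reshape((28, 28))
example_noisy = np.clip(example + noise_factor * np.random.randn(28, 28), 0., 1.)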
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
###Output
Epoch: 1/10... Training loss: 0.6750
Epoch: 1/10... Training loss: 0.6359
Epoch: 1/10... Training loss: 0.5841
Epoch: 1/10... Training loss: 0.5238
Epoch: 1/10... Training loss: 0.4942
Epoch: 1/10... Training loss: 0.5295
Epoch: 1/10... Training loss: 0.5391
Epoch: 1/10... Training loss: 0.4981
Epoch: 1/10... Training loss: 0.4836
Epoch: 1/10... Training loss: 0.4856
Epoch: 1/10... Training loss: 0.4835
Epoch: 1/10... Training loss: 0.4834
Epoch: 1/10... Training loss: 0.4774
Epoch: 1/10... Training loss: 0.4818
Epoch: 1/10... Training loss: 0.4562
Epoch: 1/10... Training loss: 0.4543
Epoch: 1/10... Training loss: 0.4349
Epoch: 1/10... Training loss: 0.4439
Epoch: 1/10... Training loss: 0.4384
Epoch: 1/10... Training loss: 0.4232
Epoch: 1/10... Training loss: 0.4148
Epoch: 1/10... Training loss: 0.4052
Epoch: 1/10... Training loss: 0.3986
Epoch: 1/10... Training loss: 0.3917
Epoch: 1/10... Training loss: 0.3723
Epoch: 1/10... Training loss: 0.3723
Epoch: 1/10... Training loss: 0.3665
Epoch: 1/10... Training loss: 0.3432
Epoch: 1/10... Training loss: 0.3409
Epoch: 1/10... Training loss: 0.3222
Epoch: 1/10... Training loss: 0.3215
Epoch: 1/10... Training loss: 0.3182
Epoch: 1/10... Training loss: 0.3025
Epoch: 1/10... Training loss: 0.2953
Epoch: 1/10... Training loss: 0.2848
Epoch: 1/10... Training loss: 0.2919
Epoch: 1/10... Training loss: 0.2792
Epoch: 1/10... Training loss: 0.2740
Epoch: 1/10... Training loss: 0.2735
Epoch: 1/10... Training loss: 0.2770
Epoch: 1/10... Training loss: 0.2676
Epoch: 1/10... Training loss: 0.2691
Epoch: 1/10... Training loss: 0.2723
Epoch: 1/10... Training loss: 0.2693
Epoch: 1/10... Training loss: 0.2719
Epoch: 1/10... Training loss: 0.2658
Epoch: 1/10... Training loss: 0.2631
Epoch: 1/10... Training loss: 0.2622
Epoch: 1/10... Training loss: 0.2602
Epoch: 1/10... Training loss: 0.2571
Epoch: 1/10... Training loss: 0.2586
Epoch: 1/10... Training loss: 0.2527
Epoch: 1/10... Training loss: 0.2569
Epoch: 1/10... Training loss: 0.2504
Epoch: 1/10... Training loss: 0.2486
Epoch: 1/10... Training loss: 0.2398
Epoch: 1/10... Training loss: 0.2478
Epoch: 1/10... Training loss: 0.2442
Epoch: 1/10... Training loss: 0.2396
Epoch: 1/10... Training loss: 0.2393
Epoch: 1/10... Training loss: 0.2360
Epoch: 1/10... Training loss: 0.2353
Epoch: 1/10... Training loss: 0.2347
Epoch: 1/10... Training loss: 0.2404
Epoch: 1/10... Training loss: 0.2288
Epoch: 1/10... Training loss: 0.2292
Epoch: 1/10... Training loss: 0.2311
Epoch: 1/10... Training loss: 0.2274
Epoch: 1/10... Training loss: 0.2312
Epoch: 1/10... Training loss: 0.2208
Epoch: 1/10... Training loss: 0.2296
Epoch: 1/10... Training loss: 0.2167
Epoch: 1/10... Training loss: 0.2276
Epoch: 1/10... Training loss: 0.2180
Epoch: 1/10... Training loss: 0.2186
Epoch: 1/10... Training loss: 0.2233
Epoch: 1/10... Training loss: 0.2139
Epoch: 1/10... Training loss: 0.2179
Epoch: 1/10... Training loss: 0.2180
Epoch: 1/10... Training loss: 0.2185
Epoch: 1/10... Training loss: 0.2125
Epoch: 1/10... Training loss: 0.2201
Epoch: 1/10... Training loss: 0.2158
Epoch: 1/10... Training loss: 0.2156
Epoch: 1/10... Training loss: 0.2158
Epoch: 1/10... Training loss: 0.2104
Epoch: 1/10... Training loss: 0.2054
Epoch: 1/10... Training loss: 0.2111
Epoch: 1/10... Training loss: 0.2133
Epoch: 1/10... Training loss: 0.2165
Epoch: 1/10... Training loss: 0.2149
Epoch: 1/10... Training loss: 0.2157
Epoch: 1/10... Training loss: 0.2041
Epoch: 1/10... Training loss: 0.2102
Epoch: 1/10... Training loss: 0.2142
Epoch: 1/10... Training loss: 0.2117
Epoch: 1/10... Training loss: 0.2108
Epoch: 1/10... Training loss: 0.2128
Epoch: 1/10... Training loss: 0.2054
Epoch: 1/10... Training loss: 0.2080
Epoch: 1/10... Training loss: 0.2111
Epoch: 1/10... Training loss: 0.2040
Epoch: 1/10... Training loss: 0.2110
Epoch: 1/10... Training loss: 0.2039
Epoch: 1/10... Training loss: 0.2117
Epoch: 1/10... Training loss: 0.2043
Epoch: 1/10... Training loss: 0.2048
Epoch: 1/10... Training loss: 0.2082
Epoch: 1/10... Training loss: 0.2081
Epoch: 1/10... Training loss: 0.2059
Epoch: 1/10... Training loss: 0.2038
Epoch: 1/10... Training loss: 0.2037
Epoch: 1/10... Training loss: 0.2000
Epoch: 1/10... Training loss: 0.2062
Epoch: 1/10... Training loss: 0.2064
Epoch: 1/10... Training loss: 0.2002
Epoch: 1/10... Training loss: 0.2035
Epoch: 1/10... Training loss: 0.2072
Epoch: 1/10... Training loss: 0.2060
Epoch: 1/10... Training loss: 0.1966
Epoch: 1/10... Training loss: 0.2004
Epoch: 1/10... Training loss: 0.1958
Epoch: 1/10... Training loss: 0.2060
Epoch: 1/10... Training loss: 0.2022
Epoch: 1/10... Training loss: 0.2058
Epoch: 1/10... Training loss: 0.1967
Epoch: 1/10... Training loss: 0.2031
Epoch: 1/10... Training loss: 0.2042
Epoch: 1/10... Training loss: 0.1997
Epoch: 1/10... Training loss: 0.2030
Epoch: 1/10... Training loss: 0.2002
Epoch: 1/10... Training loss: 0.1905
Epoch: 1/10... Training loss: 0.2006
Epoch: 1/10... Training loss: 0.1977
Epoch: 1/10... Training loss: 0.1996
Epoch: 1/10... Training loss: 0.1910
Epoch: 1/10... Training loss: 0.2007
Epoch: 1/10... Training loss: 0.1976
Epoch: 1/10... Training loss: 0.1957
Epoch: 1/10... Training loss: 0.1898
Epoch: 1/10... Training loss: 0.1950
Epoch: 1/10... Training loss: 0.2032
Epoch: 1/10... Training loss: 0.1920
Epoch: 1/10... Training loss: 0.1968
Epoch: 1/10... Training loss: 0.1928
Epoch: 1/10... Training loss: 0.1883
Epoch: 1/10... Training loss: 0.1911
Epoch: 1/10... Training loss: 0.1948
Epoch: 1/10... Training loss: 0.1963
Epoch: 1/10... Training loss: 0.1936
Epoch: 1/10... Training loss: 0.1968
Epoch: 1/10... Training loss: 0.1904
Epoch: 1/10... Training loss: 0.1912
Epoch: 1/10... Training loss: 0.1984
Epoch: 1/10... Training loss: 0.1898
Epoch: 1/10... Training loss: 0.1900
Epoch: 1/10... Training loss: 0.1926
Epoch: 1/10... Training loss: 0.1922
Epoch: 1/10... Training loss: 0.1881
Epoch: 1/10... Training loss: 0.1861
Epoch: 1/10... Training loss: 0.1904
Epoch: 1/10... Training loss: 0.1862
Epoch: 1/10... Training loss: 0.1924
Epoch: 1/10... Training loss: 0.1925
Epoch: 1/10... Training loss: 0.1871
Epoch: 1/10... Training loss: 0.1878
Epoch: 1/10... Training loss: 0.1851
Epoch: 1/10... Training loss: 0.1824
Epoch: 1/10... Training loss: 0.1875
Epoch: 1/10... Training loss: 0.1870
Epoch: 1/10... Training loss: 0.1890
Epoch: 1/10... Training loss: 0.1911
Epoch: 1/10... Training loss: 0.1925
Epoch: 1/10... Training loss: 0.1813
Epoch: 1/10... Training loss: 0.1869
Epoch: 1/10... Training loss: 0.1862
Epoch: 1/10... Training loss: 0.1882
Epoch: 1/10... Training loss: 0.1792
Epoch: 1/10... Training loss: 0.1849
Epoch: 1/10... Training loss: 0.1867
Epoch: 1/10... Training loss: 0.1830
Epoch: 1/10... Training loss: 0.1888
Epoch: 1/10... Training loss: 0.1889
Epoch: 1/10... Training loss: 0.1805
Epoch: 1/10... Training loss: 0.1956
Epoch: 1/10... Training loss: 0.1889
Epoch: 1/10... Training loss: 0.1842
Epoch: 1/10... Training loss: 0.1801
Epoch: 1/10... Training loss: 0.1831
Epoch: 1/10... Training loss: 0.1858
Epoch: 1/10... Training loss: 0.1800
Epoch: 1/10... Training loss: 0.1852
Epoch: 1/10... Training loss: 0.1747
Epoch: 1/10... Training loss: 0.1799
Epoch: 1/10... Training loss: 0.1831
Epoch: 1/10... Training loss: 0.1798
Epoch: 1/10... Training loss: 0.1828
Epoch: 1/10... Training loss: 0.1826
Epoch: 1/10... Training loss: 0.1828
Epoch: 1/10... Training loss: 0.1717
Epoch: 1/10... Training loss: 0.1833
Epoch: 1/10... Training loss: 0.1812
Epoch: 1/10... Training loss: 0.1760
Epoch: 1/10... Training loss: 0.1775
Epoch: 1/10... Training loss: 0.1795
Epoch: 1/10... Training loss: 0.1810
Epoch: 1/10... Training loss: 0.1782
Epoch: 1/10... Training loss: 0.1814
Epoch: 1/10... Training loss: 0.1811
Epoch: 1/10... Training loss: 0.1817
Epoch: 1/10... Training loss: 0.1814
Epoch: 1/10... Training loss: 0.1785
Epoch: 1/10... Training loss: 0.1766
Epoch: 1/10... Training loss: 0.1837
Epoch: 1/10... Training loss: 0.1731
Epoch: 1/10... Training loss: 0.1775
Epoch: 1/10... Training loss: 0.1723
Epoch: 1/10... Training loss: 0.1821
Epoch: 1/10... Training loss: 0.1805
Epoch: 1/10... Training loss: 0.1798
Epoch: 1/10... Training loss: 0.1774
Epoch: 1/10... Training loss: 0.1756
Epoch: 1/10... Training loss: 0.1765
###Markdown
Checking out the performanceHere I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
###Code
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
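# Added quantitative companion to the visual check above: the average per-pixel
# cross-entropy of the denoised reconstructions on this small noisy test batch.
# `test_cost` is a throwaway name introduced for this check.
test_cost = sess.run(cost, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1)),
                                      targets_: in_imgs.reshape((10, 28, 28, 1))})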
###Output
_____no_output_____
###Markdown
Convolutional AutoencoderSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
###Code
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
###Output
_____no_output_____
###Markdown
Network ArchitectureThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoderOkay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers *aren't*. Usually, you'll see **transposed convolution** layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, [`tf.nn.conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose). However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with [`tf.image.resize_images`](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/image/resize_images), followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.> **Exercise:** Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena *et al* claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in `tf.image.resize_images` or use [`tf.image.resize_nearest_neighbor`]( `https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor). For convolutional layers, use [`tf.layers.conv2d`](https://www.tensorflow.org/api_docs/python/tf/layers/conv2d). 
For example, you would write `conv1 = tf.layers.conv2d(inputs, 32, (5,5), padding='same', activation=tf.nn.relu)` for a layer with a depth of 32, a 5x5 kernel, stride of (1,1), padding is 'same', and a ReLU activation. Similarly, for the max-pool layers, use [`tf.layers.max_pooling2d`](https://www.tensorflow.org/api_docs/python/tf/layers/max_pooling2d).
###Code
learning_rate = 0.001
# Input and target placeholders
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1))
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1))
### Encoder
conv1 = tf.layers.conv2d(inputs_, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x8
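# Added side note: nearest-neighbor resizing simply repeats each value to fill the
# larger map (no smoothing), e.g. a 2x2 feature map becomes a 4x4 map of repeated
# values. `tiny` and `tiny_up` are throwaway tensors for illustration only.
tiny = tf.reshape(tf.constant([[1., 2.], [3., 4.]]), (1, 2, 2, 1))
tiny_up = tf.image.resize_nearest_neighbor(tiny, (4, 4))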
conv4 = tf.layers.conv2d(upsample1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits, name='decoded')
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
###Output
WARNING:tensorflow:From <ipython-input-5-fb6520ddb5a8>:7: conv2d (from tensorflow.python.layers.convolutional) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.conv2d instead.
WARNING:tensorflow:From /home/pavel/.local/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
WARNING:tensorflow:From <ipython-input-5-fb6520ddb5a8>:9: max_pooling2d (from tensorflow.python.layers.pooling) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.max_pooling2d instead.
###Markdown
TrainingAs before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
###Code
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
###Output
_____no_output_____
###Markdown
DenoisingAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.> **Exercise:** Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
###Code
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits, name='decoded')
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
###Output
Epoch: 1/100... Training loss: 0.6921
Epoch: 1/100... Training loss: 0.6636
Epoch: 1/100... Training loss: 0.6324
Epoch: 1/100... Training loss: 0.5900
Epoch: 1/100... Training loss: 0.5406
Epoch: 1/100... Training loss: 0.4989
Epoch: 1/100... Training loss: 0.4975
Epoch: 1/100... Training loss: 0.5206
Epoch: 1/100... Training loss: 0.5098
Epoch: 1/100... Training loss: 0.4862
Epoch: 1/100... Training loss: 0.4735
Epoch: 1/100... Training loss: 0.4621
Epoch: 1/100... Training loss: 0.4489
Epoch: 1/100... Training loss: 0.4520
Epoch: 1/100... Training loss: 0.4528
Epoch: 1/100... Training loss: 0.4455
Epoch: 1/100... Training loss: 0.4255
Epoch: 1/100... Training loss: 0.4257
Epoch: 1/100... Training loss: 0.4037
Epoch: 1/100... Training loss: 0.4036
Epoch: 1/100... Training loss: 0.4002
Epoch: 1/100... Training loss: 0.3738
Epoch: 1/100... Training loss: 0.3637
Epoch: 1/100... Training loss: 0.3591
Epoch: 1/100... Training loss: 0.3504
Epoch: 1/100... Training loss: 0.3374
Epoch: 1/100... Training loss: 0.3278
Epoch: 1/100... Training loss: 0.3216
Epoch: 1/100... Training loss: 0.3158
Epoch: 1/100... Training loss: 0.3094
Epoch: 1/100... Training loss: 0.3037
Epoch: 1/100... Training loss: 0.2875
Epoch: 1/100... Training loss: 0.2928
Epoch: 1/100... Training loss: 0.2843
Epoch: 1/100... Training loss: 0.2817
Epoch: 1/100... Training loss: 0.2836
Epoch: 1/100... Training loss: 0.2750
Epoch: 1/100... Training loss: 0.2807
Epoch: 1/100... Training loss: 0.2667
Epoch: 1/100... Training loss: 0.2739
Epoch: 1/100... Training loss: 0.2772
Epoch: 1/100... Training loss: 0.2757
Epoch: 1/100... Training loss: 0.2765
Epoch: 1/100... Training loss: 0.2707
Epoch: 1/100... Training loss: 0.2673
Epoch: 1/100... Training loss: 0.2757
Epoch: 1/100... Training loss: 0.2722
Epoch: 1/100... Training loss: 0.2681
Epoch: 1/100... Training loss: 0.2766
Epoch: 1/100... Training loss: 0.2629
Epoch: 1/100... Training loss: 0.2695
Epoch: 1/100... Training loss: 0.2619
Epoch: 1/100... Training loss: 0.2668
Epoch: 1/100... Training loss: 0.2650
Epoch: 1/100... Training loss: 0.2604
Epoch: 1/100... Training loss: 0.2513
Epoch: 1/100... Training loss: 0.2496
Epoch: 1/100... Training loss: 0.2548
Epoch: 1/100... Training loss: 0.2556
Epoch: 1/100... Training loss: 0.2466
Epoch: 1/100... Training loss: 0.2459
Epoch: 1/100... Training loss: 0.2404
Epoch: 1/100... Training loss: 0.2486
Epoch: 1/100... Training loss: 0.2427
Epoch: 1/100... Training loss: 0.2464
Epoch: 1/100... Training loss: 0.2444
Epoch: 1/100... Training loss: 0.2401
Epoch: 1/100... Training loss: 0.2490
Epoch: 1/100... Training loss: 0.2343
Epoch: 1/100... Training loss: 0.2422
Epoch: 1/100... Training loss: 0.2390
Epoch: 1/100... Training loss: 0.2442
Epoch: 1/100... Training loss: 0.2298
Epoch: 1/100... Training loss: 0.2438
Epoch: 1/100... Training loss: 0.2256
Epoch: 1/100... Training loss: 0.2374
Epoch: 1/100... Training loss: 0.2317
Epoch: 1/100... Training loss: 0.2326
Epoch: 1/100... Training loss: 0.2377
Epoch: 1/100... Training loss: 0.2343
Epoch: 1/100... Training loss: 0.2278
Epoch: 1/100... Training loss: 0.2273
Epoch: 1/100... Training loss: 0.2277
Epoch: 1/100... Training loss: 0.2290
Epoch: 1/100... Training loss: 0.2239
Epoch: 1/100... Training loss: 0.2257
Epoch: 1/100... Training loss: 0.2270
Epoch: 1/100... Training loss: 0.2193
Epoch: 1/100... Training loss: 0.2190
Epoch: 1/100... Training loss: 0.2191
Epoch: 1/100... Training loss: 0.2187
Epoch: 1/100... Training loss: 0.2256
Epoch: 1/100... Training loss: 0.2202
Epoch: 1/100... Training loss: 0.2197
Epoch: 1/100... Training loss: 0.2213
Epoch: 1/100... Training loss: 0.2138
Epoch: 1/100... Training loss: 0.2201
Epoch: 1/100... Training loss: 0.2131
Epoch: 1/100... Training loss: 0.2227
Epoch: 1/100... Training loss: 0.2178
Epoch: 1/100... Training loss: 0.2202
Epoch: 1/100... Training loss: 0.2094
Epoch: 1/100... Training loss: 0.2130
Epoch: 1/100... Training loss: 0.2141
Epoch: 1/100... Training loss: 0.2232
Epoch: 1/100... Training loss: 0.2145
Epoch: 1/100... Training loss: 0.2105
Epoch: 1/100... Training loss: 0.2177
Epoch: 1/100... Training loss: 0.2111
Epoch: 1/100... Training loss: 0.2076
Epoch: 1/100... Training loss: 0.2126
Epoch: 1/100... Training loss: 0.2161
Epoch: 1/100... Training loss: 0.2117
Epoch: 1/100... Training loss: 0.2052
Epoch: 1/100... Training loss: 0.2064
Epoch: 1/100... Training loss: 0.2121
Epoch: 1/100... Training loss: 0.2075
Epoch: 1/100... Training loss: 0.2099
Epoch: 1/100... Training loss: 0.2073
Epoch: 1/100... Training loss: 0.2067
Epoch: 1/100... Training loss: 0.1971
###Markdown
Checking out the performanceHere I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
###Code
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
###Output
_____no_output_____
###Markdown
Convolutional AutoencoderSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
###Code
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
###Output
_____no_output_____
###Markdown
Network ArchitectureThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoderOkay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers *aren't*. Usually, you'll see **transposed convolution** layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, [`tf.nn.conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose). However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with [`tf.image.resize_images`](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/image/resize_images), followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.> **Exercise:** Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena *et al* claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in `tf.image.resize_images` or use [`tf.image.resize_nearest_neighbor`]( `https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor). For convolutional layers, use [`tf.layers.conv2d`](https://www.tensorflow.org/api_docs/python/tf/layers/conv2d). 
For example, you would write `conv1 = tf.layers.conv2d(inputs, 32, (5,5), padding='same', activation=tf.nn.relu)` for a layer with a depth of 32, a 5x5 kernel, stride of (1,1), padding is 'same', and a ReLU activation. Similarly, for the max-pool layers, use [`tf.layers.max_pooling2d`](https://www.tensorflow.org/api_docs/python/tf/layers/max_pooling2d).
###Code
mnist.train.images[0].shape
learning_rate = 0.001
image_size = mnist.train.images.shape[1]
# Input and target placeholders
inputs_ = tf.placeholder(tf.float32, [None, 28, 28, 1], name='inputs')
targets_ = tf.placeholder(tf.float32, [None, 28, 28, 1], name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 16, 3, strides=(1,1), padding='same',
activation=tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, pool_size=(2,2), strides=(2,2))
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, 8, 3, strides=(1,1), padding='same',
activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, pool_size=(2,2), strides=(2,2))
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, 3, strides=(1,1), padding='same',
activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, pool_size=(2,2), strides=(2,2), padding='same')
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_images(encoded, (7,7), method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, 3, strides=(1,1), padding='same',
activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, 3, strides=(1,1), padding='same',
activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_images(conv5, (28,28), method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, 3, strides=(1,1), padding='same',
activation=tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
# Now 28x28x1
# # Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits, name="decoded")
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=targets_)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
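# Added shape check: whichever final layer is used, the logits must come out as
# 28x28x1 so they line up with the 28x28x1 target images.
assert logits.get_shape().as_list() == [None, 28, 28, 1]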
###Output
_____no_output_____
###Markdown
TrainingAs before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
###Code
sess = tf.Session()
epochs = 1
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
###Output
_____no_output_____
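###Markdown
One small note on the decoder above: it mixed `tf.image.resize_images` (which defaults to bilinear interpolation) with `tf.image.resize_nearest_neighbor`. If you want to follow the nearest-neighbor suggestion from the exercise while sticking with `resize_images`, the method can be passed explicitly. A minimal sketch assuming the TF 1.x API used throughout this notebook; the placeholder is only for illustration.
###Code
# Illustrative only: explicit nearest-neighbor upsampling with resize_images
feature_maps = tf.placeholder(tf.float32, (None, 7, 7, 8), name='feature_maps')
nn_upsampled = tf.image.resize_images(feature_maps, (14, 14), method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
print(nn_upsampled.shape)  # Expect (?, 14, 14, 8)
###Output
_____no_output_____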
###Markdown
DenoisingAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, with more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.> **Exercise:** Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
###Code
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 32, 3, strides=(1,1), padding='same', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, pool_size=(2,2), strides=(2,2))
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, 3, strides=(1,1), padding='same', activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, pool_size=(2,2), strides=(2,2))
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, 3, strides=(1,1), padding='same', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, pool_size=(2,2), strides=(2,2), padding='same')
# Now 4x4x16
print(encoded.shape)
### Decoder
upsample1 = tf.image.resize_images(encoded, (7,7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, 3, strides=(1,1), padding='same', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_images(conv4, (14,14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, 3, strides=(1,1), padding='same', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_images(conv5, (28,28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, 3, strides=(1,1), padding='same', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.dense(conv6, 1)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=targets_)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 5
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
###Output
Epoch: 1/5... Training loss: 0.6887
Epoch: 1/5... Training loss: 0.6783
Epoch: 1/5... Training loss: 0.6607
Epoch: 1/5... Training loss: 0.6323
Epoch: 1/5... Training loss: 0.5900
Epoch: 1/5... Training loss: 0.5441
Epoch: 1/5... Training loss: 0.4815
Epoch: 1/5... Training loss: 0.4609
Epoch: 1/5... Training loss: 0.4717
Epoch: 1/5... Training loss: 0.5000
Epoch: 1/5... Training loss: 0.4807
Epoch: 1/5... Training loss: 0.4548
Epoch: 1/5... Training loss: 0.4177
Epoch: 1/5... Training loss: 0.4074
Epoch: 1/5... Training loss: 0.3958
Epoch: 1/5... Training loss: 0.3850
Epoch: 1/5... Training loss: 0.3811
Epoch: 1/5... Training loss: 0.3654
Epoch: 1/5... Training loss: 0.3505
Epoch: 1/5... Training loss: 0.3403
Epoch: 1/5... Training loss: 0.3359
Epoch: 1/5... Training loss: 0.3187
Epoch: 1/5... Training loss: 0.3157
Epoch: 1/5... Training loss: 0.3111
Epoch: 1/5... Training loss: 0.2976
Epoch: 1/5... Training loss: 0.2948
Epoch: 1/5... Training loss: 0.2835
Epoch: 1/5... Training loss: 0.2898
Epoch: 1/5... Training loss: 0.2823
Epoch: 1/5... Training loss: 0.2816
Epoch: 1/5... Training loss: 0.2726
Epoch: 1/5... Training loss: 0.2749
Epoch: 1/5... Training loss: 0.2808
Epoch: 1/5... Training loss: 0.2724
Epoch: 1/5... Training loss: 0.2663
Epoch: 1/5... Training loss: 0.2722
Epoch: 1/5... Training loss: 0.2783
Epoch: 1/5... Training loss: 0.2682
Epoch: 1/5... Training loss: 0.2705
Epoch: 1/5... Training loss: 0.2653
Epoch: 1/5... Training loss: 0.2678
Epoch: 1/5... Training loss: 0.2654
Epoch: 1/5... Training loss: 0.2696
Epoch: 1/5... Training loss: 0.2580
Epoch: 1/5... Training loss: 0.2769
Epoch: 1/5... Training loss: 0.2620
Epoch: 1/5... Training loss: 0.2608
Epoch: 1/5... Training loss: 0.2689
Epoch: 1/5... Training loss: 0.2603
Epoch: 1/5... Training loss: 0.2594
Epoch: 1/5... Training loss: 0.2614
Epoch: 1/5... Training loss: 0.2669
Epoch: 1/5... Training loss: 0.2626
Epoch: 1/5... Training loss: 0.2635
Epoch: 1/5... Training loss: 0.2628
Epoch: 1/5... Training loss: 0.2657
Epoch: 1/5... Training loss: 0.2612
Epoch: 1/5... Training loss: 0.2700
Epoch: 1/5... Training loss: 0.2637
Epoch: 1/5... Training loss: 0.2622
Epoch: 1/5... Training loss: 0.2523
Epoch: 1/5... Training loss: 0.2629
Epoch: 1/5... Training loss: 0.2587
Epoch: 1/5... Training loss: 0.2641
Epoch: 1/5... Training loss: 0.2688
Epoch: 1/5... Training loss: 0.2617
Epoch: 1/5... Training loss: 0.2623
Epoch: 1/5... Training loss: 0.2607
Epoch: 1/5... Training loss: 0.2583
Epoch: 1/5... Training loss: 0.2549
Epoch: 1/5... Training loss: 0.2599
Epoch: 1/5... Training loss: 0.2551
Epoch: 1/5... Training loss: 0.2581
Epoch: 1/5... Training loss: 0.2525
Epoch: 1/5... Training loss: 0.2550
Epoch: 1/5... Training loss: 0.2567
Epoch: 1/5... Training loss: 0.2538
Epoch: 1/5... Training loss: 0.2548
Epoch: 1/5... Training loss: 0.2561
Epoch: 1/5... Training loss: 0.2625
Epoch: 1/5... Training loss: 0.2546
Epoch: 1/5... Training loss: 0.2580
Epoch: 1/5... Training loss: 0.2498
Epoch: 1/5... Training loss: 0.2521
Epoch: 1/5... Training loss: 0.2561
Epoch: 1/5... Training loss: 0.2542
Epoch: 1/5... Training loss: 0.2454
Epoch: 1/5... Training loss: 0.2500
Epoch: 1/5... Training loss: 0.2529
Epoch: 1/5... Training loss: 0.2563
Epoch: 1/5... Training loss: 0.2447
Epoch: 1/5... Training loss: 0.2442
Epoch: 1/5... Training loss: 0.2512
Epoch: 1/5... Training loss: 0.2525
Epoch: 1/5... Training loss: 0.2446
Epoch: 1/5... Training loss: 0.2469
Epoch: 1/5... Training loss: 0.2478
Epoch: 1/5... Training loss: 0.2412
Epoch: 1/5... Training loss: 0.2475
Epoch: 1/5... Training loss: 0.2390
Epoch: 1/5... Training loss: 0.2414
Epoch: 1/5... Training loss: 0.2392
Epoch: 1/5... Training loss: 0.2420
Epoch: 1/5... Training loss: 0.2395
Epoch: 1/5... Training loss: 0.2334
Epoch: 1/5... Training loss: 0.2433
Epoch: 1/5... Training loss: 0.2387
Epoch: 1/5... Training loss: 0.2419
Epoch: 1/5... Training loss: 0.2352
Epoch: 1/5... Training loss: 0.2376
Epoch: 1/5... Training loss: 0.2306
Epoch: 1/5... Training loss: 0.2369
Epoch: 1/5... Training loss: 0.2335
Epoch: 1/5... Training loss: 0.2340
Epoch: 1/5... Training loss: 0.2371
Epoch: 1/5... Training loss: 0.2388
Epoch: 1/5... Training loss: 0.2331
Epoch: 1/5... Training loss: 0.2305
Epoch: 1/5... Training loss: 0.2284
Epoch: 1/5... Training loss: 0.2264
Epoch: 1/5... Training loss: 0.2318
Epoch: 1/5... Training loss: 0.2238
Epoch: 1/5... Training loss: 0.2230
Epoch: 1/5... Training loss: 0.2278
Epoch: 1/5... Training loss: 0.2318
Epoch: 1/5... Training loss: 0.2242
Epoch: 1/5... Training loss: 0.2300
Epoch: 1/5... Training loss: 0.2263
Epoch: 1/5... Training loss: 0.2272
Epoch: 1/5... Training loss: 0.2224
Epoch: 1/5... Training loss: 0.2314
Epoch: 1/5... Training loss: 0.2275
Epoch: 1/5... Training loss: 0.2243
Epoch: 1/5... Training loss: 0.2203
Epoch: 1/5... Training loss: 0.2155
Epoch: 1/5... Training loss: 0.2205
Epoch: 1/5... Training loss: 0.2235
Epoch: 1/5... Training loss: 0.2239
Epoch: 1/5... Training loss: 0.2181
Epoch: 1/5... Training loss: 0.2187
Epoch: 1/5... Training loss: 0.2179
Epoch: 1/5... Training loss: 0.2174
Epoch: 1/5... Training loss: 0.2194
Epoch: 1/5... Training loss: 0.2190
Epoch: 1/5... Training loss: 0.2113
Epoch: 1/5... Training loss: 0.2197
Epoch: 1/5... Training loss: 0.2160
Epoch: 1/5... Training loss: 0.2185
Epoch: 1/5... Training loss: 0.2062
Epoch: 1/5... Training loss: 0.2134
Epoch: 1/5... Training loss: 0.2145
Epoch: 1/5... Training loss: 0.2118
Epoch: 1/5... Training loss: 0.2146
Epoch: 1/5... Training loss: 0.2109
Epoch: 1/5... Training loss: 0.2071
Epoch: 1/5... Training loss: 0.2080
Epoch: 1/5... Training loss: 0.2077
Epoch: 1/5... Training loss: 0.2052
Epoch: 1/5... Training loss: 0.2089
Epoch: 1/5... Training loss: 0.2052
Epoch: 1/5... Training loss: 0.2067
Epoch: 1/5... Training loss: 0.2075
Epoch: 1/5... Training loss: 0.2015
Epoch: 1/5... Training loss: 0.2046
Epoch: 1/5... Training loss: 0.2062
Epoch: 1/5... Training loss: 0.2025
Epoch: 1/5... Training loss: 0.2015
Epoch: 1/5... Training loss: 0.2122
Epoch: 1/5... Training loss: 0.2100
Epoch: 1/5... Training loss: 0.2048
Epoch: 1/5... Training loss: 0.2046
Epoch: 1/5... Training loss: 0.2064
Epoch: 1/5... Training loss: 0.2033
Epoch: 1/5... Training loss: 0.2031
Epoch: 1/5... Training loss: 0.2034
Epoch: 1/5... Training loss: 0.2015
Epoch: 1/5... Training loss: 0.2049
Epoch: 1/5... Training loss: 0.2030
Epoch: 1/5... Training loss: 0.1918
Epoch: 1/5... Training loss: 0.2068
Epoch: 1/5... Training loss: 0.1975
Epoch: 1/5... Training loss: 0.1950
Epoch: 1/5... Training loss: 0.2011
Epoch: 1/5... Training loss: 0.2069
Epoch: 1/5... Training loss: 0.2022
Epoch: 1/5... Training loss: 0.2092
Epoch: 1/5... Training loss: 0.1946
Epoch: 1/5... Training loss: 0.1996
Epoch: 1/5... Training loss: 0.1958
Epoch: 1/5... Training loss: 0.1963
Epoch: 1/5... Training loss: 0.1981
Epoch: 1/5... Training loss: 0.1993
Epoch: 1/5... Training loss: 0.1995
Epoch: 1/5... Training loss: 0.2045
Epoch: 1/5... Training loss: 0.1970
Epoch: 1/5... Training loss: 0.2050
Epoch: 1/5... Training loss: 0.1988
Epoch: 1/5... Training loss: 0.1957
Epoch: 1/5... Training loss: 0.2023
Epoch: 1/5... Training loss: 0.1962
Epoch: 1/5... Training loss: 0.1972
Epoch: 1/5... Training loss: 0.1962
Epoch: 1/5... Training loss: 0.1986
Epoch: 1/5... Training loss: 0.1927
Epoch: 1/5... Training loss: 0.1982
Epoch: 1/5... Training loss: 0.1951
Epoch: 1/5... Training loss: 0.1984
Epoch: 1/5... Training loss: 0.1959
Epoch: 1/5... Training loss: 0.1891
Epoch: 1/5... Training loss: 0.1958
Epoch: 1/5... Training loss: 0.1955
Epoch: 1/5... Training loss: 0.1961
Epoch: 1/5... Training loss: 0.1939
Epoch: 1/5... Training loss: 0.1933
Epoch: 1/5... Training loss: 0.1950
Epoch: 1/5... Training loss: 0.1884
Epoch: 1/5... Training loss: 0.1948
Epoch: 1/5... Training loss: 0.1908
Epoch: 1/5... Training loss: 0.1943
Epoch: 1/5... Training loss: 0.1924
Epoch: 1/5... Training loss: 0.1971
Epoch: 1/5... Training loss: 0.1877
Epoch: 1/5... Training loss: 0.1905
Epoch: 1/5... Training loss: 0.1865
Epoch: 1/5... Training loss: 0.1969
Epoch: 1/5... Training loss: 0.1913
Epoch: 1/5... Training loss: 0.1859
Epoch: 1/5... Training loss: 0.1892
###Markdown
Checking out the performanceHere I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
###Code
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
###Output
_____no_output_____
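###Markdown
The cells below rebuild the same networks from scratch and reuse tensor names such as 'inputs', 'targets', and 'decoded'. When running them in the same Python session as the cells above, it can help to clear the default graph first so the new ops don't accumulate next to the old ones. This housekeeping call is not in the original cells; it's a standard TF 1.x utility.
###Code
# Optional housekeeping (assumption: TF 1.x): start the following definitions
# from an empty default graph instead of piling onto the graph built above.
tf.reset_default_graph()
###Output
_____no_output_____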
###Markdown
Convolutional AutoencoderSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
###Code
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
###Output
_____no_output_____
###Markdown
Network ArchitectureThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoderOkay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers *aren't*. Usually, you'll see **transposed convolution** layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, [`tf.nn.conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose). However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with [`tf.image.resize_images`](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/image/resize_images), followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.> **Exercise:** Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena *et al* claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in `tf.image.resize_images` or use [`tf.image.resize_nearest_neighbor`]( `https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor). For convolutional layers, use [`tf.layers.conv2d`](https://www.tensorflow.org/api_docs/python/tf/layers/conv2d). 
For example, you would write `conv1 = tf.layers.conv2d(inputs, 32, (5,5), padding='same', activation=tf.nn.relu)` for a layer with a depth of 32, a 5x5 kernel, stride of (1,1), padding is 'same', and a ReLU activation. Similarly, for the max-pool layers, use [`tf.layers.max_pooling2d`](https://www.tensorflow.org/api_docs/python/tf/layers/max_pooling2d).
###Code
learning_rate = 0.001
# Input and target placeholders
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name = "inputs")
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name = "targets")
### Encoder
conv1 = tf.layers.conv2d(inputs_, 16, (3, 3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, (2, 2), (2, 2), padding='same')
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, 8, (3, 3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, (2, 2), (2, 2), padding='same')
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, (3, 3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, (2, 2), (2, 2), padding='same')
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7, 7))
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, (3, 3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14, 14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (3, 3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28, 28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, (3, 3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(conv6, 1, (3, 3), padding='same', activation=None)
# Now 28x28x1 (logits come from conv6 and get no activation; the sigmoid is applied below)
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits, name="output")
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
###Output
_____no_output_____
###Markdown
TrainingAs before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
###Code
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
###Output
_____no_output_____
###Markdown
DenoisingAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, with more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.> **Exercise:** Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
###Code
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 32, (3, 3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, (2, 2), (2, 2), padding='same')
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, (3, 3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, (2, 2), (2, 2), padding='same')
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, (3, 3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, (2, 2), (2, 2), padding='same')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7, 7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, (3, 3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14, 14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, (3, 3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28, 28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, (3, 3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (3, 3), padding='same', activation=None)
# Now 28x28x1 (no activation on the logits; the sigmoid is applied below)
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits, name='decoded')
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
###Output
Epoch: 1/100... Training loss: 0.6971
Epoch: 1/100... Training loss: 0.6934
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
Epoch: 1/100... Training loss: 0.6933
###Markdown
Checking out the performanceHere I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
###Code
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
###Output
_____no_output_____
###Markdown
Convolutional AutoencoderSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
###Code
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
###Output
_____no_output_____
###Markdown
Network ArchitectureThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoderOkay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers *aren't*. Usually, you'll see **transposed convolution** layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, [`tf.nn.conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose). However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with [`tf.image.resize_images`](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/image/resize_images), followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.> **Exercise:** Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by 2. Odena *et al* claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in `tf.image.resize_images` or use [`tf.image.resize_nearest_neighbor`]( `https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor). For convolutional layers, use [`tf.layers.conv2d`](https://www.tensorflow.org/api_docs/python/tf/layers/conv2d). 
For example, you would write `conv1 = tf.layers.conv2d(inputs, 32, (5,5), padding='same', activation=tf.nn.relu)` for a layer with a depth of 32, a 5x5 kernel, stride of (1,1), padding is 'same', and a ReLU activation. Similarly, for the max-pool layers, use [`tf.layers.max_pooling2d`](https://www.tensorflow.org/api_docs/python/tf/layers/max_pooling2d).
###Code
learning_rate = 0.001
# Input and target placeholders
inputs_ = tf.placeholder(dtype=tf.float32, shape=[None, 28, 28, 1], name='inputs')
targets_ = tf.placeholder(dtype=tf.float32, shape=[None, 28, 28, 1], name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs=inputs_, filters=16, kernel_size=[5,5], strides=[1,1], padding='same',
activation=tf.nn.relu)
# Now 28x28x16
print("Shape after conv1: ",conv1.shape)
maxpool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=[2,2], strides=[2,2], padding='same')
# Now 14x14x16
print("Shape after maxpool1: ",maxpool1.shape)
conv2 = tf.layers.conv2d(inputs=maxpool1, filters=8, kernel_size=[5,5], strides=[1,1], padding='same',
activation=tf.nn.relu)
# Now 14x14x8
print("Shape after conv2: ",conv2.shape)
maxpool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=[2,2], strides=[2,2], padding='same')
# Now 7x7x8
print("Shape after maxpool2: ",maxpool2.shape)
conv3 = tf.layers.conv2d(inputs=maxpool2, filters=8, kernel_size=[5,5], strides=[1,1], padding='same',
activation=tf.nn.relu)
# Now 7x7x8
print("Shape after conv3: ",conv3.shape)
encoded = tf.layers.max_pooling2d(inputs=conv3, pool_size=[2,2], strides=[2,2], padding='same')
# Now 4x4x8
print("Shape after encoded: ",encoded.shape)
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(images=encoded, size=[7,7])
# Now 7x7x8
print("Shape after unsample1: ",upsample1.shape)
conv4 = tf.layers.conv2d(inputs=upsample1, filters=8, kernel_size=[5,5], strides=[1,1], padding='same',
activation=tf.nn.relu)
# Now 7x7x8
print("Shape after conv4: ",conv4.shape)
upsample2 = tf.image.resize_nearest_neighbor(images=conv4, size=[14,14])
# Now 14x14x8
print("Shape after unsample2: ",upsample2.shape)
conv5 = tf.layers.conv2d(inputs=upsample2, filters=8, kernel_size=[5,5], strides=[1,1], padding='same',
activation=tf.nn.relu)
# Now 14x14x8
print("Shape after conv5: ",conv5.shape)
upsample3 = tf.image.resize_nearest_neighbor(images=conv5, size=[28,28])
# Now 28x28x8
print("Shape after unsample3: ",upsample3.shape)
conv6 = tf.layers.conv2d(inputs=upsample3, filters=16, kernel_size=[5,5], strides=[1,1], padding='same',
activation=tf.nn.relu)
# Now 28x28x16
print("Shape after conv6: ",conv6.shape)
logits = tf.layers.conv2d(inputs=conv6, filters=1, kernel_size=[5,5], strides=[1,1], padding='same',
activation=None)
#Now 28x28x1
print("Shape after logits: ",logits.shape)
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits, name='decoded')
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
###Output
Shape after conv1: (?, 28, 28, 16)
Shape after maxpool1: (?, 14, 14, 16)
Shape after conv2: (?, 14, 14, 8)
Shape after maxpool2: (?, 7, 7, 8)
Shape after conv3: (?, 7, 7, 8)
Shape after encoded: (?, 4, 4, 8)
Shape after upsample1: (?, 7, 7, 8)
Shape after conv4: (?, 7, 7, 8)
Shape after upsample2: (?, 14, 14, 8)
Shape after conv5: (?, 14, 14, 8)
Shape after upsample3: (?, 28, 28, 8)
Shape after conv6: (?, 28, 28, 16)
Shape after logits: (?, 28, 28, 1)
###Markdown
TrainingAs before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
###Code
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
###Output
_____no_output_____
###Markdown
DenoisingAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, with more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.> **Exercise:** Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
###Code
learning_rate = 0.001
# The input placeholder is named 'dumb' so it can be looked up by that name
# ('dumb:0') after the checkpoint is restored below
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='dumb')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs=inputs_, filters=32, kernel_size=[5,5], strides=[1,1], padding='same',
activation=tf.nn.relu)
# Now 28x28x32
print("Shape after conv1: ",conv1.shape)
maxpool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=[2,2], strides=[2,2], padding='same')
# Now 14x14x32
print("Shape after maxpool1: ",maxpool1.shape)
conv2 = tf.layers.conv2d(inputs=maxpool1, filters=32, kernel_size=[5,5], strides=[1,1], padding='same',
activation=tf.nn.relu)
# Now 14x14x32
print("Shape after conv2: ",conv2.shape)
maxpool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=[2,2], strides=[2,2], padding='same')
# Now 7x7x32
print("Shape after maxpool2: ",maxpool2.shape)
conv3 = tf.layers.conv2d(inputs=maxpool2, filters=16, kernel_size=[5,5], strides=[1,1], padding='same',
activation=tf.nn.relu)
# Now 7x7x16
print("Shape after conv3: ",conv3.shape)
encoded = tf.layers.max_pooling2d(inputs=conv3, pool_size=[2,2], strides=[2,2], padding='same')
# Now 4x4x16
print("Shape after encoded: ",encoded.shape)
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(images=encoded, size=[7,7])
# Now 7x7x16
print("Shape after upsample1: ",upsample1.shape)
conv4 = tf.layers.conv2d(inputs=upsample1, filters=16, kernel_size=[5,5], strides=[1,1], padding='same',
activation=tf.nn.relu)
# Now 7x7x16
print("Shape after conv4: ",conv4.shape)
upsample2 = tf.image.resize_nearest_neighbor(images=conv4, size=[14,14])
# Now 14x14x16
print("Shape after upsample2: ",upsample2.shape)
conv5 = tf.layers.conv2d(inputs=upsample2, filters=32, kernel_size=[5,5], strides=[1,1], padding='same',
activation=tf.nn.relu)
# Now 14x14x32
print("Shape after conv5: ",conv5.shape)
upsample3 = tf.image.resize_nearest_neighbor(images=conv5, size=[28,28])
# Now 28x28x32
print("Shape after upsample3: ",upsample3.shape)
conv6 = tf.layers.conv2d(inputs=upsample3, filters=32, kernel_size=[5,5], strides=[1,1], padding='same',
activation=tf.nn.relu)
# Now 28x28x32
print("Shape after conv6: ",conv6.shape)
logits = tf.layers.conv2d(inputs=conv6, filters=1, kernel_size=[5,5], strides=[1,1], padding='same',
activation=None)
#Now 28x28x1
print("Shape after logits: ",logits.shape)
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits, name='decoded')
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
inputs_.name
sess = tf.Session()
epochs = 1
batch_size = 8000
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
# Save Model
checkpoint = "best_model.ckpt"
saver = tf.train.Saver()
saver.save(sess, checkpoint)
print('Model Saved')
sess.close()
###Output
Model Saved
###Markdown
Checking out the performanceHere I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
###Code
checkpoint = "./best_model.ckpt"
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(checkpoint + '.meta')
loader.restore(sess, checkpoint)
list_names = [tensor.name for tensor in tf.get_default_graph().as_graph_def().node]
for name in list_names:
if name == 'dumb':
print(name)
checkpoint = "./best_model.ckpt"
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(checkpoint + '.meta')
loader.restore(sess, checkpoint)
inputs_ = loaded_graph.get_tensor_by_name('dumb:0')
print(inputs_.shape)
decoded = loaded_graph.get_tensor_by_name('decoded:0')
noise_factor = 0.5
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
###Output
INFO:tensorflow:Restoring parameters from ./best_model.ckpt
(?, 28, 28, 1)
###Markdown
Convolutional AutoencoderSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
###Code
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
###Output
_____no_output_____
###Markdown
Network ArchitectureThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoderOkay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers *aren't*. Usually, you'll see **transposed convolution** layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, [`tf.nn.conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose). However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with [`tf.image.resize_images`](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/image/resize_images), followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.> **Exercise:** Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena *et al* claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in `tf.image.resize_images` or use [`tf.image.resize_nearest_neighbor`]( `https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor). For convolutional layers, use [`tf.layers.conv2d`](https://www.tensorflow.org/api_docs/python/tf/layers/conv2d). 
For example, you would write `conv1 = tf.layers.conv2d(inputs, 32, (5,5), padding='same', activation=tf.nn.relu)` for a layer with a depth of 32, a 5x5 kernel, stride of (1,1), padding is 'same', and a ReLU activation. Similarly, for the max-pool layers, use [`tf.layers.max_pooling2d`](https://www.tensorflow.org/api_docs/python/tf/layers/max_pooling2d).
###Code
learning_rate = 0.001
# Input and target placeholders
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='valid')
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='valid')
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x8 ('same' padding is needed so the 7x7 maps pool to 4x4 rather than 3x3)
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits, name='decoded')
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
###Output
_____no_output_____
###Markdown
TrainingAs before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
###Code
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
###Output
_____no_output_____
###Markdown
DenoisingAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.> **Exercise:** Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
###Code
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='valid')
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='valid')
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='valid')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits, name='decoded')
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
###Output
Epoch: 1/100... Training loss: 0.1908
Epoch: 2/100... Training loss: 0.1723
Epoch: 3/100... Training loss: 0.1531
Epoch: 4/100... Training loss: 0.1412
Epoch: 5/100... Training loss: 0.1389
Epoch: 6/100... Training loss: 0.1292
Epoch: 7/100... Training loss: 0.1309
Epoch: 8/100... Training loss: 0.1277
Epoch: 9/100... Training loss: 0.1241
Epoch: 10/100... Training loss: 0.1248
Epoch: 11/100... Training loss: 0.1208
Epoch: 12/100... Training loss: 0.1225
Epoch: 13/100... Training loss: 0.1178
Epoch: 14/100... Training loss: 0.1224
Epoch: 15/100... Training loss: 0.1158
Epoch: 16/100... Training loss: 0.1172
Epoch: 17/100... Training loss: 0.1196
Epoch: 18/100... Training loss: 0.1156
Epoch: 19/100... Training loss: 0.1128
Epoch: 20/100... Training loss: 0.1166
Epoch: 21/100... Training loss: 0.1133
Epoch: 22/100... Training loss: 0.1147
Epoch: 23/100... Training loss: 0.1177
Epoch: 24/100... Training loss: 0.1136
Epoch: 25/100... Training loss: 0.1126
Epoch: 26/100... Training loss: 0.1124
Epoch: 27/100... Training loss: 0.1131
Epoch: 28/100... Training loss: 0.1084
Epoch: 29/100... Training loss: 0.1111
Epoch: 30/100... Training loss: 0.1097
Epoch: 31/100... Training loss: 0.1114
Epoch: 32/100... Training loss: 0.1082
Epoch: 33/100... Training loss: 0.1106
Epoch: 34/100... Training loss: 0.1090
Epoch: 35/100... Training loss: 0.1110
Epoch: 36/100... Training loss: 0.1082
Epoch: 37/100... Training loss: 0.1113
Epoch: 38/100... Training loss: 0.1072
Epoch: 39/100... Training loss: 0.1098
Epoch: 40/100... Training loss: 0.1054
Epoch: 41/100... Training loss: 0.1091
Epoch: 42/100... Training loss: 0.1116
Epoch: 43/100... Training loss: 0.1067
Epoch: 44/100... Training loss: 0.1077
Epoch: 45/100... Training loss: 0.1105
Epoch: 46/100... Training loss: 0.1067
Epoch: 47/100... Training loss: 0.1068
Epoch: 48/100... Training loss: 0.1106
Epoch: 49/100... Training loss: 0.1072
Epoch: 50/100... Training loss: 0.1056
Epoch: 51/100... Training loss: 0.1053
Epoch: 52/100... Training loss: 0.1093
Epoch: 53/100... Training loss: 0.1092
Epoch: 54/100... Training loss: 0.1085
Epoch: 55/100... Training loss: 0.1069
Epoch: 56/100... Training loss: 0.1079
Epoch: 57/100... Training loss: 0.1081
Epoch: 58/100... Training loss: 0.1096
Epoch: 59/100... Training loss: 0.1071
Epoch: 60/100... Training loss: 0.1076
Epoch: 61/100... Training loss: 0.1037
Epoch: 62/100... Training loss: 0.1076
Epoch: 63/100... Training loss: 0.1082
Epoch: 64/100... Training loss: 0.1063
Epoch: 65/100... Training loss: 0.1088
Epoch: 66/100... Training loss: 0.1099
Epoch: 67/100... Training loss: 0.1051
Epoch: 68/100... Training loss: 0.1056
Epoch: 69/100... Training loss: 0.1051
Epoch: 70/100... Training loss: 0.1093
Epoch: 71/100... Training loss: 0.1058
Epoch: 72/100... Training loss: 0.1049
Epoch: 73/100... Training loss: 0.1067
Epoch: 74/100... Training loss: 0.1054
Epoch: 75/100... Training loss: 0.1083
Epoch: 76/100... Training loss: 0.1045
Epoch: 77/100... Training loss: 0.1064
Epoch: 78/100... Training loss: 0.1096
Epoch: 79/100... Training loss: 0.1061
Epoch: 80/100... Training loss: 0.1056
Epoch: 81/100... Training loss: 0.1039
Epoch: 82/100... Training loss: 0.1090
Epoch: 83/100... Training loss: 0.1066
Epoch: 84/100... Training loss: 0.1053
Epoch: 85/100... Training loss: 0.1025
Epoch: 86/100... Training loss: 0.1074
Epoch: 87/100... Training loss: 0.1049
Epoch: 88/100... Training loss: 0.1045
Epoch: 89/100... Training loss: 0.1100
Epoch: 90/100... Training loss: 0.1028
Epoch: 91/100... Training loss: 0.1056
Epoch: 92/100... Training loss: 0.1025
Epoch: 93/100... Training loss: 0.1076
Epoch: 94/100... Training loss: 0.1059
Epoch: 95/100... Training loss: 0.1088
Epoch: 96/100... Training loss: 0.1042
Epoch: 97/100... Training loss: 0.1032
Epoch: 98/100... Training loss: 0.1082
Epoch: 99/100... Training loss: 0.1071
Epoch: 100/100... Training loss: 0.1053
###Markdown
Checking out the performanceHere I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
###Code
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
###Output
_____no_output_____
###Markdown
Convolutional AutoencoderSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
###Code
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
###Output
_____no_output_____
###Markdown
Network ArchitectureThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoderOkay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers *aren't*. Usually, you'll see **transposed convolution** layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, [`tf.nn.conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose). However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with [`tf.image.resize_images`](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/image/resize_images), followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.> **Exercise:** Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena *et al* claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in `tf.image.resize_images` or use [`tf.image.resize_nearest_neighbor`]( `https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor). For convolutional layers, use [`tf.layers.conv2d`](https://www.tensorflow.org/api_docs/python/tf/layers/conv2d). 
For example, you would write `conv1 = tf.layers.conv2d(inputs, 32, (5,5), padding='same', activation=tf.nn.relu)` for a layer with a depth of 32, a 5x5 kernel, stride of (1,1), padding is 'same', and a ReLU activation. Similarly, for the max-pool layers, use [`tf.layers.max_pooling2d`](https://www.tensorflow.org/api_docs/python/tf/layers/max_pooling2d).
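A minimal sketch of the resize-then-convolve upsampling step described above (illustrative only, not part of the exercise template; `prev_layer` is a hypothetical stand-in for a 7x7x8 tensor):
###Code
# Hedged sketch: nearest-neighbor upsampling followed by a convolution, as
# recommended in the Distill article referenced above. `prev_layer` is a
# hypothetical placeholder, not a layer from the exercise.
prev_layer = tf.placeholder(tf.float32, (None, 7, 7, 8))
upsampled = tf.image.resize_nearest_neighbor(prev_layer, (14, 14))   # Now 14x14x8
smoothed = tf.layers.conv2d(upsampled, 8, (3, 3), padding='same', activation=tf.nn.relu)  # Now 14x14x8
###Output
_____no_output_____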
###Code
learning_rate = 0.001
# Input and target placeholders
inputs_ =
targets_ =
### Encoder
conv1 =
# Now 28x28x16
maxpool1 =
# Now 14x14x16
conv2 =
# Now 14x14x8
maxpool2 =
# Now 7x7x8
conv3 =
# Now 7x7x8
encoded =
# Now 4x4x8
### Decoder
upsample1 =
# Now 7x7x8
conv4 =
# Now 7x7x8
upsample2 =
# Now 14x14x8
conv5 =
# Now 14x14x8
upsample3 =
# Now 28x28x8
conv6 =
# Now 28x28x16
logits =
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded =
# Pass logits through sigmoid and calculate the cross-entropy loss
loss =
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
###Output
_____no_output_____
###Markdown
TrainingAs before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
###Code
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
###Output
_____no_output_____
###Markdown
DenoisingAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.> **Exercise:** Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
###Code
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 =
# Now 28x28x32
maxpool1 =
# Now 14x14x32
conv2 =
# Now 14x14x32
maxpool2 =
# Now 7x7x32
conv3 =
# Now 7x7x16
encoded =
# Now 4x4x16
### Decoder
upsample1 =
# Now 7x7x16
conv4 =
# Now 7x7x16
upsample2 =
# Now 14x14x16
conv5 =
# Now 14x14x32
upsample3 =
# Now 28x28x32
conv6 =
# Now 28x28x32
logits =
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded =
# Pass logits through sigmoid and calculate the cross-entropy loss
loss =
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
###Output
_____no_output_____
###Markdown
Checking out the performanceHere I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
###Code
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
###Output
_____no_output_____
###Markdown
Convolutional AutoencoderSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
###Code
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
###Output
_____no_output_____
###Markdown
Network ArchitectureThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoderOkay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers *aren't*. Usually, you'll see **transposed convolution** layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, [`tf.nn.conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose). However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with [`tf.image.resize_images`](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/image/resize_images), followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.> **Exercise:** Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena *et al* claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in `tf.image.resize_images` or use [`tf.image.resize_nearest_neighbor`]( `https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor). For convolutional layers, use [`tf.layers.conv2d`](https://www.tensorflow.org/api_docs/python/tf/layers/conv2d). 
For example, you would write `conv1 = tf.layers.conv2d(inputs, 32, (5,5), padding='same', activation=tf.nn.relu)` for a layer with a depth of 32, a 5x5 kernel, stride of (1,1), padding is 'same', and a ReLU activation. Similarly, for the max-pool layers, use [`tf.layers.max_pooling2d`](https://www.tensorflow.org/api_docs/python/tf/layers/max_pooling2d).
###Code
learning_rate = 0.001
# Input and target placeholders
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name="inputs")
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name="targets")
### Encoder
conv1 = tf.layers.conv2d(inputs_, 16, (2,2), padding='same', activation=tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2))
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, 8, (2,2), padding='same', activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2))
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, (2,2), padding='same', activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2))
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_images(encoded, (7,7))
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, (2,2), padding='same', activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_images(conv4, (14, 14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (2,2), padding="same", activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_images(conv5, (28, 28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, (2,2), padding="same", activation=tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(conv6, 1, (2,2), padding="same", activation=None)  # No activation here: the sigmoid is applied to these logits below
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits, name="decoded")
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
###Output
_____no_output_____
###Markdown
TrainingAs before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
###Code
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
###Output
_____no_output_____
###Markdown
DenoisingAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.> **Exercise:** Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
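Before building the network, here is a minimal sketch of the noise-injection step described above (`clean_batch` is a hypothetical stand-in for a batch of MNIST images):
###Code
# Hedged sketch: add Gaussian noise to clean images, then clip back into the
# valid [0, 1] pixel range. The noisy images become the inputs and the clean
# images the targets. `clean_batch` is a hypothetical stand-in batch.
clean_batch = np.random.rand(4, 28, 28, 1).astype(np.float32)
noisy_batch = clean_batch + 0.5 * np.random.randn(*clean_batch.shape)
noisy_batch = np.clip(noisy_batch, 0., 1.)
###Output
_____no_output_____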
###Code
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 32, (5,5), padding="same", activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2))
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, (5,5), padding="same", activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2))
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, (5,5), padding="same", activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2))
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_images(encoded, (7,7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, (5,5), padding="same", activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_images(conv4, (14,14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, (5,5), padding="same", activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_images(conv5, (28,28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, (5,5), padding="same", activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (5,5), padding="same", activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits, name="decoded")
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
# print("Epoch: {}/{}...".format(e+1, epochs),
# "Training loss: {:.4f}".format(batch_cost))
###Output
_____no_output_____
###Markdown
Checking out the performanceHere I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
###Code
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
###Output
_____no_output_____
###Markdown
Convolutional AutoencoderSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
###Code
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
###Output
_____no_output_____
###Markdown
Network ArchitectureThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoderOkay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers *aren't*. Usually, you'll see **deconvolutional** layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the deconvolutional layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 patch in a deconvolutional layer. Deconvolution is often called "transpose convolution" which is what you'll find with the TensorFlow API, with [`tf.nn.conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose). However, deconvolutional layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with [`tf.image.resize_images`](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/image/resize_images), followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.> **Exercise:** Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used to reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena *et al* claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in `tf.image.resize_images` or use [`tf.image.resize_nearest_neighbor`]( `https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor).
###Code
learning_rate = 0.001
inputs_ =
targets_ =
### Encoder
conv1 =
# Now 28x28x16
maxpool1 =
# Now 14x14x16
conv2 =
# Now 14x14x8
maxpool2 =
# Now 7x7x8
conv3 =
# Now 7x7x8
encoded =
# Now 4x4x8
### Decoder
upsample1 =
# Now 7x7x8
conv4 =
# Now 7x7x8
upsample2 =
# Now 14x14x8
conv5 =
# Now 14x14x8
upsample3 =
# Now 28x28x8
conv6 =
# Now 28x28x16
logits =
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded =
# Pass logits through sigmoid and calculate the cross-entropy loss
loss =
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
###Output
_____no_output_____
###Markdown
TrainingAs before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
###Code
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
###Output
_____no_output_____
###Markdown
DenoisingAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.> **Exercise:** Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
###Code
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 =
# Now 28x28x32
maxpool1 =
# Now 14x14x32
conv2 =
# Now 14x14x32
maxpool2 =
# Now 7x7x32
conv3 =
# Now 7x7x16
encoded =
# Now 4x4x16
### Decoder
upsample1 =
# Now 7x7x16
conv4 =
# Now 7x7x16
upsample2 =
# Now 14x14x16
conv5 =
# Now 14x14x32
upsample3 =
# Now 28x28x32
conv6 =
# Now 28x28x32
logits =
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded =
# Pass logits through sigmoid and calculate the cross-entropy loss
loss =
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
###Output
_____no_output_____
###Markdown
Checking out the performanceHere I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
###Code
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
###Output
_____no_output_____
###Markdown
Convolutional AutoencoderSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
###Code
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
###Output
_____no_output_____
###Markdown
Network ArchitectureThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoderOkay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers *aren't*. Usually, you'll see **deconvolutional** layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the deconvolutional layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 patch in a deconvolutional layer. Deconvolution is often called "transpose convolution" which is what you'll find with the TensorFlow API, with [`tf.nn.conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose). However, deconvolutional layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with [`tf.image.resize_images`](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/image/resize_images), followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.> **Exercise:** Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used to reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena *et al* claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in `tf.image.resize_images` or use [`tf.image.resize_nearest_neighbor`]( `https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor).
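For comparison, a minimal sketch of the transposed-convolution alternative mentioned above (illustrative only; `small_layer` is a hypothetical 7x7x8 tensor, and the kernel size is set equal to the stride to avoid the checkerboard overlap):
###Code
# Hedged sketch: upsampling with a transposed convolution instead of
# resize + convolution. A 2x2 kernel with stride 2 doubles height and width.
small_layer = tf.placeholder(tf.float32, (None, 7, 7, 8))
upsampled_t = tf.layers.conv2d_transpose(small_layer, 8, (2, 2), strides=(2, 2),
                                         padding='same', activation=tf.nn.relu)   # Now 14x14x8
###Output
_____no_output_____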
###Code
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1))
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1))
### Encoder
conv1 = tf.layers.conv2d(inputs_, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2))
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2))
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2))
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_images(encoded, (7,7), method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_images(conv4, (14,14), method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_images(conv5, (28,28), method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=targets_)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
###Output
_____no_output_____
###Markdown
TrainingAs before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
###Code
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
###Output
_____no_output_____
###Markdown
DenoisingAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.> **Exercise:** Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
###Code
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, 2, 2)
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, 2, 2)
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, 2, 2)
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_images(encoded, (7,7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_images(conv4, (14,14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_images(conv5, (28,28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=targets_)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
###Output
_____no_output_____
###Markdown
Checking out the performanceHere I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
###Code
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
###Output
_____no_output_____ |
captioning/model/Model-Design.ipynb | ###Markdown

###Code
model.fit([X_train_photos,X_train_captions], to_categorical(y_train, VOCAB_SIZE), epochs = 1, verbose = 1)
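# Descriptive note: the merge architecture below projects the 4096-d photo
# feature vector down to 300 dimensions, repeats it as a single masked
# "timestep", and concatenates it with the embedded 15-word caption so one
# LSTM reads [photo, w1, ..., w15] before a dense softmax predicts the next word.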
inputs_photo = Input(shape = (4096,), name="Inputs-photo")
drop1 = Dropout(0.5)(inputs_photo)
dense1 = Dense(300, activation='relu')(drop1)
cnn_feats = Masking()(RepeatVector(1)(dense1))
inputs_caption = Input(shape=(15,), name = "Inputs-caption")
embedding = Embedding(VOCAB_SIZE, 300,
mask_zero = True, trainable = False,
weights=[embedding_matrix])(inputs_caption)
merged = concatenate([cnn_feats, embedding], axis=1)
lstm_layer = LSTM(units=300,
input_shape=(15 + 1, 300),
return_sequences=False,
dropout=.5)(merged)
outputs = Dense(units=VOCAB_SIZE,activation='softmax')(lstm_layer)
model = Model(inputs=[inputs_photo, inputs_caption], outputs=outputs)
sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='sparse_categorical_crossentropy', optimizer=sgd)
print(model.summary())
plot_model(model, to_file='images/model6.png', show_shapes=True,show_layer_names=False )
###Output
_____no_output_____
###Markdown

###Code
model.fit([X_train_photos,X_train_captions], y_train, epochs = 1, verbose = 1)
###Output
_____no_output_____ |
funka_alibi1.ipynb | ###Markdown
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image
import glob
import os
import shutil
from collections import Counter
import tensorflow as tf
from tensorflow.keras.layers import Conv2D, Conv2DTranspose, UpSampling2D, Dense, Layer, Reshape, InputLayer, Flatten, Input, MaxPooling2D
!git clone https://github.com/SeldonIO/alibi-detect.git
%cd /content/alibi-detect/alibi_detect/od
!pip install alibi-detect
from alibi_detect.od import OutlierAE
from alibi_detect.utils.visualize import plot_instance_score, plot_feature_outlier_image
from google.colab import drive
drive.mount('/content/drive')
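# Helper: load every image matched by the glob pattern `path`, convert it to
# RGB, optionally resize it to 64x64, and stack the results into a numpy array.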
def img_to_np(path, resize = True):
img_array = []
fpaths = glob.glob(path, recursive=True)
for fname in fpaths:
img = Image.open(fname).convert("RGB")
if(resize): img = img.resize((64,64))
img_array.append(np.asarray(img))
images = np.array(img_array)
return images
path_train = "D:\\img\\capsule\\train\\**\*.*"
path_test = "D:\\img\\capsule\\test\\**\*.*"
train = img_to_np(path_train)
test = img_to_np(path_test)
train = train.astype('float32') / 255.
test = test.astype('float32') / 255.
encoding_dim = 1024
dense_dim = [8, 8, 128]
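# Convolutional encoder: 64x64x3 input image -> `encoding_dim`-dimensional code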
encoder_net = tf.keras.Sequential(
[
InputLayer(input_shape=train[0].shape),
Conv2D(64, 4, strides=2, padding='same', activation=tf.nn.relu),
Conv2D(128, 4, strides=2, padding='same', activation=tf.nn.relu),
Conv2D(512, 4, strides=2, padding='same', activation=tf.nn.relu),
Flatten(),
Dense(encoding_dim,)
])
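# Transposed-convolution decoder: code -> 8x8x128 feature map -> 64x64x3 reconstruction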
decoder_net = tf.keras.Sequential(
[
InputLayer(input_shape=(encoding_dim,)),
Dense(np.prod(dense_dim)),
Reshape(target_shape=dense_dim),
Conv2DTranspose(256, 4, strides=2, padding='same', activation=tf.nn.relu),
Conv2DTranspose(64, 4, strides=2, padding='same', activation=tf.nn.relu),
Conv2DTranspose(3, 4, strides=2, padding='same', activation='sigmoid')
])
od = OutlierAE( threshold = 0.001,
encoder_net=encoder_net,
decoder_net=decoder_net)
adam = tf.keras.optimizers.Adam(lr=1e-4)
od.fit(train, epochs=100, verbose=True,
optimizer = adam)
od.infer_threshold(test, threshold_perc=95)
preds = od.predict(test, outlier_type='instance',
return_instance_score=True,
return_feature_score=True)
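# Copy every test image flagged as an outlier into a local 'img' folder for inspection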
for i, fpath in enumerate(glob.glob(path_test)):
if(preds['data']['is_outlier'][i] == 1):
source = fpath
shutil.copy(source, 'img\\')
filenames = [os.path.basename(x) for x in glob.glob(path_test, recursive=True)]
dict1 = {'Filename': filenames,
'instance_score': preds['data']['instance_score'],
'is_outlier': preds['data']['is_outlier']}
df = pd.DataFrame(dict1)
df_outliers = df[df['is_outlier'] == 1]
print(df_outliers)
recon = od.ae(test).numpy()
plot_feature_outlier_image(preds, test,
X_recon=recon,
max_instances=5,
outliers_only=False,
figsize=(15,15))
###Output
_____no_output_____ |
Notebooks/Session 2 Introduction_to_Pandas/S2-Introduction_to_Pandas.ipynb | ###Markdown
Session-1: An introduction to Pandas------------------------------------------------------*Introduction to Data Science & Machine Learning**Pablo M. Olmos [email protected]*------------------------------------------------------When dealing with numeric matrices and vectors in Python, Numerical Python ([Numpy](https://docs.scipy.org/doc/numpy-dev/user/quickstart.html NumPy)) makes life a lot easier. Doing data analysis directly with NumPy can be problematic, as many different data types have to be jointly managed.Fortunately, some nice folks have written the **[Python Data Analysis Library](https://pandas.pydata.org/)** (a.k.a. pandas). Pandas is an open source library providing high-performance, easy-to-use data structures and data analysis tools for the Python programming language. In this tutorial, we'll go through the basics of pandas using a database of house prices provided by [Kaggle](https://www.kaggle.com/). Pandas has a lot of functionality, so we'll only be able to cover a small fraction of what you can do. Check out the (very readable) [pandas docs](http://pandas.pydata.org/pandas-docs/stable/) if you want to learn more. Acknowledgment:I have compiled this tutorial by putting together a few very nice blogs and posts I found on the web. All credit goes to them:- [An introduction to Pandas](http://synesthesiam.com/posts/an-introduction-to-pandas.htmlhanding-missing-values)- [Using iloc, loc, & ix to select rows and columns in Pandas DataFrames](https://www.shanelynn.ie/select-pandas-dataframe-rows-and-columns-using-iloc-loc-and-ix/) Getting StartedLet's import the library and check the current installed version
###Code
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
#The following is required to print the plots inside the notebooks
%matplotlib inline
pd.__version__
###Output
_____no_output_____
###Markdown
If you are using Anaconda and you want to update pandas to the latest version, you can use either the [package manager](https://docs.anaconda.com/anaconda/navigator/tutorials/manage-packages) in Anaconda Navigator, or type ```> conda update pandas``` in a terminal window.
Next, let's read the housing price database, which is provided by [Kaggle in this link](https://www.kaggle.com/c/house-prices-advanced-regression-techniques/data). Because it's in a CSV file, we can use pandas' `read_csv` function to pull it directly into the basic data structure in pandas: a **DataFrame**.
###Code
data = pd.read_csv("house_prices_train.csv")
###Output
_____no_output_____
###Markdown
We can visualize the first rows of the Dataframe `data`
###Code
data.head()
###Output
_____no_output_____
###Markdown
You have a description of all fields in the [data description file](./data_description.txt). You can check the size of the Dataframe and get a list of the column labels as follows:
###Code
print("The dataframe has %d entries, and %d attributes (columns)\n" %(data.shape[0],data.shape[1]))
print("The labels associated to each of the %d attributes are:\n " %(data.shape[1]))
label_list = list(data.columns)
print(label_list)
###Output
The dataframe has 1460 entries, and 81 attributes (columns)
The labels associated to each of the 81 attributes are:
['Id', 'MSSubClass', 'MSZoning', 'LotFrontage', 'LotArea', 'Street', 'Alley', 'LotShape', 'LandContour', 'Utilities', 'LotConfig', 'LandSlope', 'Neighborhood', 'Condition1', 'Condition2', 'BldgType', 'HouseStyle', 'OverallQual', 'OverallCond', 'YearBuilt', 'YearRemodAdd', 'RoofStyle', 'RoofMatl', 'Exterior1st', 'Exterior2nd', 'MasVnrType', 'MasVnrArea', 'ExterQual', 'ExterCond', 'Foundation', 'BsmtQual', 'BsmtCond', 'BsmtExposure', 'BsmtFinType1', 'BsmtFinSF1', 'BsmtFinType2', 'BsmtFinSF2', 'BsmtUnfSF', 'TotalBsmtSF', 'Heating', 'HeatingQC', 'CentralAir', 'Electrical', '1stFlrSF', '2ndFlrSF', 'LowQualFinSF', 'GrLivArea', 'BsmtFullBath', 'BsmtHalfBath', 'FullBath', 'HalfBath', 'BedroomAbvGr', 'KitchenAbvGr', 'KitchenQual', 'TotRmsAbvGrd', 'Functional', 'Fireplaces', 'FireplaceQu', 'GarageType', 'GarageYrBlt', 'GarageFinish', 'GarageCars', 'GarageArea', 'GarageQual', 'GarageCond', 'PavedDrive', 'WoodDeckSF', 'OpenPorchSF', 'EnclosedPorch', '3SsnPorch', 'ScreenPorch', 'PoolArea', 'PoolQC', 'Fence', 'MiscFeature', 'MiscVal', 'MoSold', 'YrSold', 'SaleType', 'SaleCondition', 'SalePrice']
###Markdown
Columns can be accessed in two ways. The first is using the DataFrame like a dictionary with string keys:
###Code
data[['SalePrice']].head(10) #This shows the first 10 entries in the column 'SalePrice'
###Output
_____no_output_____
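For reference, the second way is attribute-style access, which works whenever the column name is a valid Python identifier and does not clash with a DataFrame method; a quick sketch (standard pandas behaviour):
```python
# Attribute access returns a Series, just like data['SalePrice']
data.SalePrice.head(10)

# Bracket access is safer for names with spaces or special characters
data['SalePrice'].head(10)
```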
###Markdown
You can get multiple columns out at the same time by passing in a list of strings.
###Code
simple_data = data[['LotArea','1stFlrSF','2ndFlrSF','SalePrice']]
#Subpart of the dataframe.
# Watch out! This is not a different copy!
simple_data.tail(10) #.tail() shows the last 10 entries
###Output
_____no_output_____
###Markdown
Operations with columns
We can easily [change the name](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.rename.html) of the columns
###Code
data.rename(index=str,columns={"LotArea":"Area"}, inplace=True)
###Output
_____no_output_____
###Markdown
Try to rename the column directly in `simple_data`: what do you get?
There are a lot of useful methods that can be applied over columns. Most of pandas' methods will happily ignore missing values like `NaN`. We will talk about **missing data** later.
First, since we renamed one column, let's recompute the short (referenced) data-frame `simple_data`.
###Code
simple_data = data[['Area','1stFlrSF','2ndFlrSF','SalePrice']]
print(simple_data.head(5))
print(simple_data['Area'].mean())
print(simple_data['Area'].std())
###Output
Area 1stFlrSF 2ndFlrSF SalePrice
0 8450 856 854 208500
1 9600 1262 0 181500
2 11250 920 866 223500
3 9550 961 756 140000
4 14260 1145 1053 250000
10516.828082191782
9981.264932379147
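Many of these per-column summaries are bundled together in `describe()`, which is part of the standard DataFrame API; a quick sketch:
```python
# Count, mean, std, min, quartiles and max for every numeric column
simple_data.describe()
```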
###Markdown
Some methods, like plot() and hist() produce plots using [matplotlib](https://matplotlib.org/). We'll go over plotting in more detail later.
###Code
simple_data[['Area']][:100].plot()
simple_data[['Area']].hist()
###Output
_____no_output_____
###Markdown
Operations with `apply()`
Methods like `sum()` and `std()` work on entire columns. We can run our own functions across all values in a column (or row) using `apply()`.
To get an idea about how this works, assume we want to convert the variable 'Area' into square meters instead of square feet. First, we create a conversion function.
###Code
def sfoot_to_smeter(x):
return (x * 0.092903)
sfoot_to_smeter(1) #just checking everything is correct
###Output
_____no_output_____
###Markdown
Using the `apply()` method, which takes an [anonymous function](https://docs.python.org/2/reference/expressions.htmllambda), we can apply `sfoot_to_smeter` to each value in the column. We can now either overwrite the data in the column 'Area' or create a new one. We'll do the latter in this case.
###Code
# Recall! data['Area'] is not a DataFrame, but a Pandas Series (another data object with different attributes). In order
# to index a DataFrame with a single column, you should use double [[]], i.e., data[['Area']]
data['Area_m2'] = data[['Area']].apply(lambda d: sfoot_to_smeter(d))
simple_data = data[['Area','Area_m2', '1stFlrSF','2ndFlrSF','SalePrice']]
simple_data.head()
###Output
_____no_output_____
###Markdown
What do you get if you try to apply the transformation directly over `simple_data`? What do you think the problem is? Now that we no longer need the column `Area` (in square feet), let's remove it.
###Code
data.drop('Area',axis=1,inplace=True)
data.head(5)
###Output
_____no_output_____
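By the way, the same `apply()` idea also works row-wise if you pass `axis=1`; a small sketch, purely for illustration:
```python
# Total indoor surface per house: first floor plus second floor, computed row by row
data[['1stFlrSF', '2ndFlrSF']].apply(lambda row: row['1stFlrSF'] + row['2ndFlrSF'], axis=1).head()
```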
###Markdown
Indexing, iloc, loc
There are [multiple ways](http://pandas.pydata.org/pandas-docs/stable/indexing.htmldifferent-choices-for-indexing) to select and index rows and columns from Pandas DataFrames. There are three main options to achieve the selection and indexing activities in Pandas, which can be confusing. The three selection cases and methods covered here are:
- Selecting data by row numbers (.iloc)
- Selecting data by label or by a conditional statement (.loc)
- Selecting in a hybrid approach (.ix) (now deprecated since Pandas 0.20.1)
We will cover the first two.
Selecting rows using `iloc()`
The [`iloc`](http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.DataFrame.iloc.html) indexer for a Pandas DataFrame is used for integer-location based indexing / selection by position.
The iloc indexer syntax is `data.iloc[<row selection>, <column selection>]`. "iloc" in pandas is used to select rows and columns by number, **in the order that they appear in the data frame**. You can imagine that each row has a row number from 0 to the total number of rows (data.shape[0]), and iloc[] allows selections based on these numbers. The same applies for columns (ranging from 0 to data.shape[1]).
###Code
simple_data.iloc[[3,4],0:3]
###Output
_____no_output_____
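A few more `iloc` patterns for reference (a quick sketch on the same DataFrame; all of these are plain positional selections):
```python
simple_data.iloc[0:5, :]      # first five rows, all columns
simple_data.iloc[[-1]]        # last row, returned as a DataFrame
simple_data.iloc[:, [0, 3]]   # first and fourth columns, all rows
```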
###Markdown
Note that `.iloc` returns a Pandas Series when one row is selected, and a Pandas DataFrame when multiple rows are selected, or if any column in full is selected. To counter this, pass a single-valued list if you require DataFrame output.
###Code
print(type(simple_data.iloc[:,0])) # Pandas Series
print(type(simple_data.iloc[:,[0]])) #DataFrame
# To avoid confusion, work always with DataFrames!
###Output
<class 'pandas.core.series.Series'>
<class 'pandas.core.frame.DataFrame'>
###Markdown
When selecting multiple columns or multiple rows in this manner, remember that a selection such as [1:5] runs from the first number up to one less than the second number: [1:5] selects 1, 2, 3, 4, and in general [x:y] goes from x to y-1.
In practice, `iloc()` is seldom used; `loc()` is way more handy.
Selecting rows using `loc()`
The Pandas `loc()` indexer can be used with DataFrames for two different use cases:
- Selecting rows by label/index
- Selecting rows with a boolean / conditional lookup
Selecting rows by label/index
*Important*: Selections using the `loc()` method are based on the index of the data frame (if any). Where the index is set on a DataFrame, using df.set_index(), the `loc()` method directly selects based on the index values of the rows. For example, setting the index of our test data frame to the column 'OverallQual' (which rates the overall material and finish of the house):
###Code
data.set_index('OverallQual',inplace=True)
data.head(5)
###Output
_____no_output_____
###Markdown
Using `.loc()` we can search for rows with a specific index value
###Code
good_houses = data.loc[[8,9,10]] # List all houses with an overall quality rating of 8 or above
good_houses.head(10)
###Output
_____no_output_____
###Markdown
We can sort the dataframe according to index
###Code
data.sort_index(inplace=True, ascending=False) # Again, what do you get if you sort the DataFrame good_houses directly?
good_houses.head(10)
###Output
_____no_output_____
###Markdown
Boolean / Logical indexing using .loc
[Conditional selections](http://pandas.pydata.org/pandas-docs/stable/indexing.htmlboolean-indexing) with boolean arrays using `data.loc[]` are a common method with Pandas DataFrames. With boolean indexing or logical selection, you pass an array or Series of `True/False` values to the `.loc` indexer to select the rows where your Series has True values.
For example, the statement `data['first_name'] == 'Antonio'` produces a Pandas Series with a True/False value for every row in the `data` DataFrame, with "True" for the rows where the first_name is "Antonio". These types of boolean arrays can be passed directly to the .loc indexer like so:
###Code
good_houses.loc[good_houses['PoolArea']>0] #How many houses with quality above or equal to 8 have a Pool
###Output
_____no_output_____
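To see what is actually being handed to `.loc`, it can help to look at the boolean Series on its own (a quick sketch):
```python
# A True/False value per row: True where the house has a pool
(good_houses['PoolArea'] > 0).head()
```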
###Markdown
As before, a second argument can be passed to .loc to select particular columns out of the data frame.
###Code
good_houses.loc[good_houses['PoolArea']>0,['GarageArea','GarageCars']] #Among those above, we focus on the area of the
# garage and how many cars can fit within
###Output
_____no_output_____
###Markdown
Even an anonymous function combined with the `.apply()` method can be used to generate the series of True/False indexes. For instance, let's select good houses that are less than 10 years old.
###Code
def check_date(current_year,year_built,threshold):
return (current_year-year_built) <= threshold
good_houses.loc[good_houses['YearBuilt'].apply(lambda d: check_date(2018, d,10))]
###Output
_____no_output_____
###Markdown
Using the above filtering, we can add our own column to the DataFrame to create an index that is 1 for houses that have a swimming pool and are less than 30 years old.
###Code
data['My_index'] = 0  # We create a new column with a default value
data.loc[(data['YearBuilt'].apply(lambda d: check_date(2018, d,30))) & (data['PoolArea']>0),'My_index'] = 1
data.loc[data['My_index'] == 1]
###Output
_____no_output_____
###Markdown
Handling Missing Data
Pandas considers values like `NaN` and `None` to represent missing data. The `pandas.isnull` function can be used to tell whether or not a value is missing.
Let's use `apply()` across all of the columns in our DataFrame to figure out which values are missing.
###Code
empty = data.apply(lambda col: pd.isnull(col))
empty.head(5) #We get back a boolean Dataframe with 'True' whenever we have a missing data (either Nan or None)
###Output
_____no_output_____
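A natural follow-up is to count the missing values per column and then either drop or fill them; `isnull().sum()`, `dropna()` and `fillna()` are all standard pandas methods. A sketch, not something the rest of the notebook relies on ('LotFrontage' is used here just as an example column):
```python
# Number of missing values per column, largest first
data.isnull().sum().sort_values(ascending=False).head(10)

# Option 1: drop every row that contains any missing value (returns a new DataFrame)
data_dropped = data.dropna()

# Option 2: fill the missing values of a numeric column with its median (returns a new Series)
lot_frontage_filled = data['LotFrontage'].fillna(data['LotFrontage'].median())
```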
###Markdown
There are multiple ways of handling missing data; we will talk about this during the course. Pandas provides handy functions to easily work with missing data, check [this post](https://chrisalbon.com/python/data_wrangling/pandas_missing_data/) for examples.
More about plotting with the `matplotlib` library
You should consult the [matplotlib documentation](https://matplotlib.org/index.html) for tons of examples and options.
###Code
plt.plot(data['Area_m2'],data['SalePrice'],'ro')
plt.plot(good_houses['Area_m2'],good_houses['SalePrice'],'*')
plt.legend(['SalePrice (all data)','SalePrince (good houses)'])
plt.xlabel('Area_m2')
plt.grid(True)
plt.xlim([0,7500])
data.sort_values(['SalePrice'],ascending=True,inplace=True) #We order the data according to SalePrice
# Create axes
fig, ax = plt.subplots()
ax2 = ax.twinx()
ax.loglog(data['SalePrice'], data['Area_m2'], color='blue',marker='o')
ax.set_xlabel('SalePrice (logscale)')
ax.set_ylabel('Area_m2 (logscale)')
ax2.semilogx(data['SalePrice'],data[['GarageArea']].apply(lambda d: sfoot_to_smeter(d)), color='red',marker='+',linewidth=0)
ax2.set_ylabel('Garage Area (logscale)')
ax.set_title('A plot with two scales')
###Output
_____no_output_____
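If you want to keep a figure, `savefig` writes it to disk (a quick sketch; the filename is just an example):
```python
# Save the two-scale figure created above as a PNG file
fig.savefig('two_scales.png', dpi=150, bbox_inches='tight')
```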
###Markdown
Getting data out
Writing data out in pandas is as easy as getting data in. To save our DataFrame out to a new csv file, we can just do this:
###Code
data.to_csv("modified_data.csv")
###Output
_____no_output_____
###Markdown
There's also support for reading and writing [Excel files](http://pandas.pydata.org/pandas-docs/stable/io.htmlexcel-files), if you need it (see the short sketch after the next cell). Also, creating a NumPy array is straightforward:
###Code
data_array = np.array(good_houses)
print(data_array.shape)
###Output
(229, 80)
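As a quick illustration of the Excel support mentioned above (a sketch; writing `.xlsx` files requires an extra engine such as `openpyxl` to be installed):
```python
# Write the DataFrame to an Excel workbook with a single sheet named 'houses'
data.to_excel('modified_data.xlsx', sheet_name='houses')
```
Reading it back is symmetric: `pd.read_excel('modified_data.xlsx')`.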
|
instructor/day_two.ipynb | ###Markdown
Day 2 - Getting Data with Python
Something about automation and scripts
Something about exceptions
Let's try a challenge!
Error handling - or having a computer program anticipate and respond to errors created by other functions - is a big part of programming. To give you a little more practice with this, we're going to have you team up with the person sitting next to you and try challenge B in the challenges directory.
Introduction to the interwebs
A vast amount of data exists on the web and is now publicly available. In this section, we give an overview of popular ways to retrieve data from the web, and walk through some important concerns and considerations.
The internet follows a client-server architecture, where clients (e.g. you) ask servers to do things. The most common way that you experience this is through a browser, where you enter a URL and a server sends your computer a page for your browser to render. Most of what you think about as the internet are stored documents (web pages) that are given out to anyone who asks.
You probably also have a program on your computer like Outlook or Thunderbird that sends emails to a server and asks it to forward them along to someone else. You may also have proprietary software that's protected by a license, and needs to connect to a license server to verify that you are an authenticated user.
Ultimately, the internet is just connecting to computers that you don't own and passing data back and forth. Because the data transfer protocol (`http`) and typical data formats (`html`) are not native to Python, we're going to leave Python just for a little bit.
Intro to HTTP requests
You can view the request sent by your browser by:
1) Opening a new tab in your browser
2) Enabling developer tools (__View -> Developer -> Developer Tools in Chrome__ and __Tools -> Web Developer -> Toggle Tools in Firefox__)
3) Loading or reloading a web page (e.g. www.google.com)
4) Navigating to the Network tab in the panel that appears at the bottom of the page.
These requests you send follow the HTTP protocol (Hypertext Transfer Protocol), part of which defines the information (along with the format) the server needs to receive to return the right resources. Your HTTP request contains __headers__, which contain information that the server needs to know in order to return the right information to you.
But we're not here to wander around the web (you probably do this a lot, all on your own). You're here because you want Python to do it for you.
In order to get web pages, we're going to use a python library called `requests`, which takes a lot of the fuss out of contacting servers (a short sketch of checking for and handling request errors follows the first request below).
###Code
import requests
r = requests.get("http://en.wikipedia.org/wiki/Main_Page")
###Output
_____no_output_____
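Before poking at the response, it's worth checking that the request actually succeeded; `status_code`, `ok` and `raise_for_status()` are all part of the requests API, and wrapping the call in `try`/`except` ties back to the error-handling warm-up above (a sketch):
```python
try:
    r = requests.get("http://en.wikipedia.org/wiki/Main_Page", timeout=10)
    r.raise_for_status()          # raises requests.HTTPError for 4xx/5xx responses
    print(r.status_code, r.ok)    # e.g. 200 True
except requests.exceptions.RequestException as err:
    print("Request failed:", err)
```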
###Markdown
This response object contains various information about the request you sent to the server, the resources returned, and information about the response the server returned to you, among other information. These are accessible through the __request__ attribute, the __content__ attribute and the __headers__ attribute respectively, which we'll each examine below.
###Code
type(r.request), type(r.content), type(r.headers)
###Output
_____no_output_____
###Markdown
Here, we can see that __request__ is an object with a custom type, __content__ is a bytes value, and __headers__ is an object with "dict" in its name, suggesting we can interact with it like we would with a dictionary.
The content is the actual resource returned to us - let's take a look at the content first before examining the request and response objects more carefully. (We select the first 1000 characters because of the display limits of the Jupyter/Python notebook.)
###Code
from pprint import pprint
pprint(r.content[0:1000])
###Output
(b'<!DOCTYPE html>\n<html lang="en" dir="ltr" class="client-nojs">\n<head>\n<m'
b'eta charset="UTF-8" />\n<title>Wikipedia, the free encyclopedia</title>\n<'
b'script>document.documentElement.className = document.documentElement.classNa'
b'me.replace( /(^|\\s)client-nojs(\\s|$)/, "$1client-js$2" );</script>\n<scri'
b'pt>(window.RLQ = window.RLQ || []).push(function () {\nmw.config.set({"wg'
b'CanonicalNamespace":"","wgCanonicalSpecialPageName":false,"wgNamespaceNumber'
b'":0,"wgPageName":"Main_Page","wgTitle":"Main Page","wgCurRevisionId":6968469'
b'20,"wgRevisionId":696846920,"wgArticleId":15580374,"wgIsArticle":true,"wgIsR'
b'edirect":false,"wgAction":"view","wgUserName":null,"wgUserGroups":["*"],"wgC'
b'ategories":[],"wgBreakFrames":false,"wgPageContentLanguage":"en","wgPageCont'
b'entModel":"wikitext","wgSeparatorTransformTable":["",""],"wgDigitTransformTa'
b'ble":["",""],"wgDefaultDateFormat":"dmy","wgMonthNames":["","January","Febru'
b'ary","March","April","May","June","July","August","September","October","Nov'
b'ember","December"],"wgMonthN')
###Markdown
The content returned is written in HTML (__H__yper__T__ext __M__arkup __L__anguage), which is the default format in which web pages are returned. The content looks like gibberish at first, with little to no spacing. The reason for this is that some of the formatting rules for the document, like its hierarchical structure, are saved in text along with the text in the document.
> note - this is called the __D__ocument __O__bject __M__odel (DOM) and is the same way that markdown and LaTeX documents are written
If you save a web page as a ".html" file, and open the file in a text editor like Notepad++ or Sublime Text, this is the same format you'll see. Opening the file in a browser (i.e. by double-clicking it) gives you the rendered page you are familiar with.
You can inspect the information you sent to Wikipedia along with your request:
###Code
r.request.headers
###Output
_____no_output_____
###Markdown
Along with the additional info that Wikipedia sent back:
###Code
r.headers
###Output
_____no_output_____
###Markdown
But you will probably not ever need this information.
Most of what you'll be doing is sending what are called `GET` requests (this is why we typed in `requests.get` above). This is an `HTTP` method for asking a server to send you some stuff. We asked Wikipedia to `GET` us their main page. Things like queries (searching Wikipedia) also fall under `GET`.
From time to time, you may also want to send information to a server (we'll do this later today). These are called `POST` requests, because you are posting something to the server (and not asking for data back); a small sketch of one appears just before the parsing section below.
> note - From the server's perspective, the request it receives from your browser is not so different from the request received from your console (though some servers use a range of methods to determine if the request comes from a "valid" person using a browser, versus an automated program.)
To have a look at the content of the web page, we can ask for the content:
###Code
r.content[:1000]
###Output
_____no_output_____
###Markdown
which gives us the response in bytes, or text:
###Code
r.text[:1000]
###Output
_____no_output_____
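Before moving on to parsing, here is the `POST` request mentioned above: the same library sends data with `requests.post`, shown here against httpbin.org, a public echo service (assumed to be reachable; any form-style payload works):
```python
payload = {'name': 'Ada', 'occupation': 'programmer'}
response = requests.post('https://httpbin.org/post', data=payload, timeout=10)
print(response.status_code)
print(response.json()['form'])   # httpbin echoes back the form data it received
```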
###Markdown
Parsing HTML in Python
Trying to parse this `str` by hand is basically a nightmare. Instead, we'll use a Python library called Beautiful Soup to turn it into something that is still confusing, but less of a nightmare.
###Code
from bs4 import BeautifulSoup
page = BeautifulSoup(r.content)
page
###Output
/Users/dillon/anaconda/lib/python3.5/site-packages/bs4/__init__.py:166: UserWarning: No parser was explicitly specified, so I'm using the best available HTML parser for this system ("lxml"). This usually isn't a problem, but if you run this code on another system, or in a different virtual environment, it may use a different parser and behave differently.
To get rid of this warning, change this:
BeautifulSoup([your markup])
to this:
BeautifulSoup([your markup], "lxml")
markup_type=markup_type))
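Following the warning's advice, we could name the parser explicitly (a small sketch; `"html.parser"` ships with Python, while `"lxml"` has to be installed separately):
```python
page = BeautifulSoup(r.content, "lxml")   # or BeautifulSoup(r.content, "html.parser")
```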
###Markdown
Beautiful Soup creates a linked tree, where the root of the tree is the whole HTML document. It has children, which are all the elements of the HTML document. Each of those has children, which are any elements they have. Each element of the tree is aware of its parent and children.
You probably don't want to iterate through each child of the whole HTML document - you want a specific thing or things in it. In some cases, you want to search for html tags. Common tags include:
| tag | function |
|------------|------------------------------------------------------------|
| `<title>` | The title of the web page (shows up in your browser header) |
| `<head>` | Information about the web page that is not shown to the user |
| `<a>` | Links to other web pages |
| `<p>` | Paragraph of text |
In other cases, you want to look for IDs. These are optional information added to a tag to help developers or other code on the web page know which tag is for which purpose. Unlike tags, these are not standardized, so they will change from site to site and author to author. They will look something like `<div id="mw-content-text">` (that particular id appears in the Wikipedia page we just downloaded); we'll try an ID lookup a little further below.
With the advent of CSS (__C__ascading __S__tyle __S__heets), it is also common for people to define their own HTML styling tags. So, while things like lists (`<ul>`) and tables (`<table>`, `<tr>`, and `<td>`) are in the HTML specification, it's not safe to assume they'll be used when you expect.
As a general strategy, when web scraping, you should have the page you want to scrape open in a browser with either the Developer Tools window open, or the HTML source displayed.
We can pull out elements by tag with:
###Code
page.p
###Output
_____no_output_____
###Markdown
This is grabbing the paragraph tag from the page. If we want the first link from the first paragraph, we can try:
###Code
page.p.a
###Output
_____no_output_____
###Markdown
But what if we want all the links? We are going to use a method of bs4's elements called `find_all` (the older camelCase alias `findAll` used below does the same thing).
###Code
page.p.findAll('a')
###Output
_____no_output_____
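Since the page also marks sections with IDs (as discussed above), we can look elements up by id as well; a quick sketch using an id that appears in this Wikipedia dump (`mp-tfa`, the "featured article" block):
```python
# find() returns the first element with the given id, or None if it's absent
featured = page.find(id="mp-tfa")
print(featured.get_text()[:200] if featured else "no element with id 'mp-tfa'")
```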
###Markdown
What if you want all the elements in that paragraph, and not just the links? bs4 has an iterator for children:
###Code
for element in page.p.children:
print(element)
###Output
<b><a href="/wiki/California_State_Route_78" title="California State Route 78">State Route 78</a></b>
is a
<a href="/wiki/State_highway" title="State highway">state highway</a>
in
<a href="/wiki/California" title="California">California</a>
that runs from
<a href="/wiki/Oceanside,_California" title="Oceanside, California">Oceanside</a>
east to
<a href="/wiki/Blythe,_California" title="Blythe, California">Blythe</a>
, a few miles from
<a href="/wiki/Arizona" title="Arizona">Arizona</a>
. Its western terminus is at
<a class="mw-redirect" href="/wiki/Interstate_5_(California)" title="Interstate 5 (California)">Interstate 5</a>
in
<a href="/wiki/San_Diego_County,_California" title="San Diego County, California">San Diego County</a>
and its eastern terminus is at
<a class="mw-redirect" href="/wiki/Interstate_10_(California)" title="Interstate 10 (California)">Interstate 10</a>
in
<a href="/wiki/Riverside_County,_California" title="Riverside County, California">Riverside County</a>
. The route is a freeway through the heavily populated cities of northern San Diego County and a two-lane highway running through the
<a href="/wiki/Cuyamaca_Mountains" title="Cuyamaca Mountains">Cuyamaca Mountains</a>
to
<a href="/wiki/Julian,_California" title="Julian, California">Julian</a>
. In
<a href="/wiki/Imperial_County,_California" title="Imperial County, California">Imperial County</a>
, it travels through the desert near the
<a href="/wiki/Salton_Sea" title="Salton Sea">Salton Sea</a>
and passes through the city of
<a href="/wiki/Brawley,_California" title="Brawley, California">Brawley</a>
before turning north into an area of sand dunes on the way to its terminus in Blythe. Portions of the route existed as early as 1900, and it was one of the original state highways designated in 1934. The freeway section in the
<a class="mw-redirect" href="/wiki/San_Diego_North_County,_California" title="San Diego North County, California">North County</a>
of
<a href="/wiki/San_Diego" title="San Diego">San Diego</a>
that connects Oceanside and
<a href="/wiki/Escondido,_California" title="Escondido, California">Escondido</a>
was built in the middle of the 20th century in several stages, including a transitory stage known as the Vista Way Freeway, and has been improved several times. An expressway bypass of the city of Brawley was completed in 2012. There are many projects slated to improve the freeway due to increasing congestion. (
<a href="/wiki/California_State_Route_78" title="California State Route 78"><b>Full article...</b></a>
)
###Markdown
HTML elements can be nested, but children only iterates at one level below the element. If you want everything, you can iterate with `descendants`
###Code
for element in page.p.descendants:
print(element)
###Output
<b><a href="/wiki/California_State_Route_78" title="California State Route 78">State Route 78</a></b>
<a href="/wiki/California_State_Route_78" title="California State Route 78">State Route 78</a>
State Route 78
is a
<a href="/wiki/State_highway" title="State highway">state highway</a>
state highway
in
<a href="/wiki/California" title="California">California</a>
California
that runs from
<a href="/wiki/Oceanside,_California" title="Oceanside, California">Oceanside</a>
Oceanside
east to
<a href="/wiki/Blythe,_California" title="Blythe, California">Blythe</a>
Blythe
, a few miles from
<a href="/wiki/Arizona" title="Arizona">Arizona</a>
Arizona
. Its western terminus is at
<a class="mw-redirect" href="/wiki/Interstate_5_(California)" title="Interstate 5 (California)">Interstate 5</a>
Interstate 5
in
<a href="/wiki/San_Diego_County,_California" title="San Diego County, California">San Diego County</a>
San Diego County
and its eastern terminus is at
<a class="mw-redirect" href="/wiki/Interstate_10_(California)" title="Interstate 10 (California)">Interstate 10</a>
Interstate 10
in
<a href="/wiki/Riverside_County,_California" title="Riverside County, California">Riverside County</a>
Riverside County
. The route is a freeway through the heavily populated cities of northern San Diego County and a two-lane highway running through the
<a href="/wiki/Cuyamaca_Mountains" title="Cuyamaca Mountains">Cuyamaca Mountains</a>
Cuyamaca Mountains
to
<a href="/wiki/Julian,_California" title="Julian, California">Julian</a>
Julian
. In
<a href="/wiki/Imperial_County,_California" title="Imperial County, California">Imperial County</a>
Imperial County
, it travels through the desert near the
<a href="/wiki/Salton_Sea" title="Salton Sea">Salton Sea</a>
Salton Sea
and passes through the city of
<a href="/wiki/Brawley,_California" title="Brawley, California">Brawley</a>
Brawley
before turning north into an area of sand dunes on the way to its terminus in Blythe. Portions of the route existed as early as 1900, and it was one of the original state highways designated in 1934. The freeway section in the
<a class="mw-redirect" href="/wiki/San_Diego_North_County,_California" title="San Diego North County, California">North County</a>
North County
of
<a href="/wiki/San_Diego" title="San Diego">San Diego</a>
San Diego
that connects Oceanside and
<a href="/wiki/Escondido,_California" title="Escondido, California">Escondido</a>
Escondido
was built in the middle of the 20th century in several stages, including a transitory stage known as the Vista Way Freeway, and has been improved several times. An expressway bypass of the city of Brawley was completed in 2012. There are many projects slated to improve the freeway due to increasing congestion. (
<a href="/wiki/California_State_Route_78" title="California State Route 78"><b>Full article...</b></a>
<b>Full article...</b>
Full article...
)
###Markdown
This splits out formatting tags that we *probably* don't care about, like bold-faced text, and so we probably won't use it again.
In reality, you won't be inspecting things yourself, so you'll want to get in the habit of using your knowledge from day 2 about looping and control structures to make decisions for you. For example, what if we wanted to look at every link in the page, then print its neighbor, but only if the link is not a redirect? We could do something like:
###Code
for link in page.find_all('a'):
if link.attrs.get('class') != 'mw-redirect':
print(link.find_next())
###Output
<div id="siteNotice"><!-- CentralNotice --></div>
<a href="#p-search">search</a>
<div class="mw-content-ltr" dir="ltr" id="mw-content-text" lang="en"><table id="mp-topbanner" style="width:100%; background:#f9f9f9; margin:1.2em 0 6px 0; border:1px solid #ddd;">
<tr>
<td style="width:61%; color:#000;">
<table style="width:280px; border:none; background:none;">
<tr>
<td style="width:280px; text-align:center; white-space:nowrap; color:#000;">
<div style="font-size:162%; border:none; margin:0; padding:.1em; color:#000;">Welcome to <a href="/wiki/Wikipedia" title="Wikipedia">Wikipedia</a>,</div>
<div style="top:+0.2em; font-size:95%;">the <a href="/wiki/Free_content" title="Free content">free</a> <a href="/wiki/Encyclopedia" title="Encyclopedia">encyclopedia</a> that <a href="/wiki/Wikipedia:Introduction" title="Wikipedia:Introduction">anyone can edit</a>.</div>
<div id="articlecount" style="font-size:85%;"><a href="/wiki/Special:Statistics" title="Special:Statistics">5,104,889</a> articles in <a href="/wiki/English_language" title="English language">English</a></div>
</td>
</tr>
</table>
</td>
<td style="width:13%; font-size:95%;">
<ul>
<li><a href="/wiki/Portal:Arts" title="Portal:Arts">Arts</a></li>
<li><a href="/wiki/Portal:Biography" title="Portal:Biography">Biography</a></li>
<li><a href="/wiki/Portal:Geography" title="Portal:Geography">Geography</a></li>
</ul>
</td>
<td style="width:13%; font-size:95%;">
<ul>
<li><a href="/wiki/Portal:History" title="Portal:History">History</a></li>
<li><a href="/wiki/Portal:Mathematics" title="Portal:Mathematics">Mathematics</a></li>
<li><a href="/wiki/Portal:Science" title="Portal:Science">Science</a></li>
</ul>
</td>
<td style="width:13%; font-size:95%;">
<ul>
<li><a href="/wiki/Portal:Society" title="Portal:Society">Society</a></li>
<li><a href="/wiki/Portal:Technology" title="Portal:Technology">Technology</a></li>
<li><b><a href="/wiki/Portal:Contents/Portals" title="Portal:Contents/Portals">All portals</a></b></li>
</ul>
</td>
</tr>
</table>
<table id="mp-upper" style="width: 100%; margin:4px 0 0 0; background:none; border-spacing: 0px;">
<tr>
<td class="MainPageBG" style="width:55%; border:1px solid #cef2e0; background:#f5fffa; vertical-align:top; color:#000;">
<table id="mp-left" style="width:100%; vertical-align:top; background:#f5fffa;">
<tr>
<td style="padding:2px;">
<h2 id="mp-tfa-h2" style="margin:3px; background:#cef2e0; font-family:inherit; font-size:120%; font-weight:bold; border:1px solid #a3bfb1; text-align:left; color:#000; padding:0.2em 0.4em;"><span class="mw-headline" id="From_today.27s_featured_article">From today's featured article</span></h2>
</td>
</tr>
<tr>
<td style="color:#000;">
<div id="mp-tfa" style="padding:2px 5px">
<div id="mp-tfa-img" style="float: left; margin: 0.5em 0.9em 0.4em 0em;">
<div class="thumbinner mp-thumb" style="background: transparent; border: none; padding: 0; max-width: 178px;"><a class="image" href="/wiki/File:CASR78atS11_(cropped).jpg" title="SR 78 in Oceanside at the El Camino Real overpass"><img alt="SR 78 in Oceanside at the El Camino Real overpass" data-file-height="1080" data-file-width="1920" height="100" src="//upload.wikimedia.org/wikipedia/commons/thumb/2/25/CASR78atS11_%28cropped%29.jpg/178px-CASR78atS11_%28cropped%29.jpg" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/2/25/CASR78atS11_%28cropped%29.jpg/267px-CASR78atS11_%28cropped%29.jpg 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/2/25/CASR78atS11_%28cropped%29.jpg/356px-CASR78atS11_%28cropped%29.jpg 2x" width="178"/></a></div>
</div>
<p><b><a href="/wiki/California_State_Route_78" title="California State Route 78">State Route 78</a></b> is a <a href="/wiki/State_highway" title="State highway">state highway</a> in <a href="/wiki/California" title="California">California</a> that runs from <a href="/wiki/Oceanside,_California" title="Oceanside, California">Oceanside</a> east to <a href="/wiki/Blythe,_California" title="Blythe, California">Blythe</a>, a few miles from <a href="/wiki/Arizona" title="Arizona">Arizona</a>. Its western terminus is at <a class="mw-redirect" href="/wiki/Interstate_5_(California)" title="Interstate 5 (California)">Interstate 5</a> in <a href="/wiki/San_Diego_County,_California" title="San Diego County, California">San Diego County</a> and its eastern terminus is at <a class="mw-redirect" href="/wiki/Interstate_10_(California)" title="Interstate 10 (California)">Interstate 10</a> in <a href="/wiki/Riverside_County,_California" title="Riverside County, California">Riverside County</a>. The route is a freeway through the heavily populated cities of northern San Diego County and a two-lane highway running through the <a href="/wiki/Cuyamaca_Mountains" title="Cuyamaca Mountains">Cuyamaca Mountains</a> to <a href="/wiki/Julian,_California" title="Julian, California">Julian</a>. In <a href="/wiki/Imperial_County,_California" title="Imperial County, California">Imperial County</a>, it travels through the desert near the <a href="/wiki/Salton_Sea" title="Salton Sea">Salton Sea</a> and passes through the city of <a href="/wiki/Brawley,_California" title="Brawley, California">Brawley</a> before turning north into an area of sand dunes on the way to its terminus in Blythe. Portions of the route existed as early as 1900, and it was one of the original state highways designated in 1934. The freeway section in the <a class="mw-redirect" href="/wiki/San_Diego_North_County,_California" title="San Diego North County, California">North County</a> of <a href="/wiki/San_Diego" title="San Diego">San Diego</a> that connects Oceanside and <a href="/wiki/Escondido,_California" title="Escondido, California">Escondido</a> was built in the middle of the 20th century in several stages, including a transitory stage known as the Vista Way Freeway, and has been improved several times. An expressway bypass of the city of Brawley was completed in 2012. There are many projects slated to improve the freeway due to increasing congestion. (<a href="/wiki/California_State_Route_78" title="California State Route 78"><b>Full article...</b></a>)</p>
<ul style="list-style:none; margin-left:0; text-align:right;">
<li>Recently featured:
<div class="hlist inline">
<ul>
<li><i><a href="/wiki/Sarcoscypha_coccinea" title="Sarcoscypha coccinea">Sarcoscypha coccinea</a></i></li>
<li><a href="/wiki/Japanese_battleship_Asahi" title="Japanese battleship Asahi">Japanese battleship <i>Asahi</i></a></li>
<li><a href="/wiki/Isabella_Beeton" title="Isabella Beeton">Isabella Beeton</a></li>
</ul>
</div>
</li>
</ul>
<div class="hlist noprint" id="mp-tfa-footer" style="text-align: right;">
<ul>
<li><b><a href="/wiki/Wikipedia:Today%27s_featured_article/March_2016" title="Wikipedia:Today's featured article/March 2016">Archive</a></b></li>
<li><b><a class="extiw" href="https://lists.wikimedia.org/mailman/listinfo/daily-article-l" title="mail:daily-article-l">By email</a></b></li>
<li><b><a href="/wiki/Wikipedia:Featured_articles" title="Wikipedia:Featured articles">More featured articles...</a></b></li>
</ul>
</div>
</div>
</td>
</tr>
<tr>
<td style="padding:2px;">
<h2 id="mp-dyk-h2" style="margin:3px; background:#cef2e0; font-family:inherit; font-size:120%; font-weight:bold; border:1px solid #a3bfb1; text-align:left; color:#000; padding:0.2em 0.4em;"><span class="mw-headline" id="Did_you_know...">Did you know...</span></h2>
</td>
</tr>
<tr>
<td style="color:#000; padding:2px 5px 5px;">
<div id="mp-dyk">
<div id="mp-dyk-img" style="float:right; margin-left:0.5em;">
<div class="thumbinner mp-thumb" style="background: transparent; border: none; padding: 0; max-width: 120px;"><a class="image" href="/wiki/File:Bilikiss_Adebiyi_CEO.jpg" title="Bilikiss Adebiyi"><img alt="Bilikiss Adebiyi" data-file-height="500" data-file-width="447" height="133" src="//upload.wikimedia.org/wikipedia/commons/thumb/a/aa/Bilikiss_Adebiyi_CEO.jpg/119px-Bilikiss_Adebiyi_CEO.jpg" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/a/aa/Bilikiss_Adebiyi_CEO.jpg/179px-Bilikiss_Adebiyi_CEO.jpg 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/a/aa/Bilikiss_Adebiyi_CEO.jpg/238px-Bilikiss_Adebiyi_CEO.jpg 2x" width="119"/></a>
<div class="thumbcaption" style="padding: 0.25em 0; word-wrap: break-word;">Bilikiss Adebiyi</div>
</div>
</div>
<ul>
<li>... that <b><a href="/wiki/Bilikiss_Adebiyi_Abiola" title="Bilikiss Adebiyi Abiola">Bilikiss Adebiyi</a></b> <i>(pictured)</i> planned to collect rubbish in the streets of Nigeria while taking her <a href="/wiki/Master_of_Business_Administration" title="Master of Business Administration">MBA</a> at <a href="/wiki/Massachusetts_Institute_of_Technology" title="Massachusetts Institute of Technology">MIT</a>?</li>
<li>... that the <a href="/wiki/BBC" title="BBC">BBC</a> re-launched its former television channel <a href="/wiki/BBC_Three_(former)" title="BBC Three (former)">BBC Three</a> as an <b><a href="/wiki/BBC_Three_(Internet_television)" title="BBC Three (Internet television)">Internet television service</a></b>?</li>
<li>... that the Sanskrit text <b><a href="/wiki/Manasollasa" title="Manasollasa">Manasollasa</a></b> is a 12th-century encyclopedia covering topics such as garden design, cuisine recipes, veterinary medicine, jewelry, painting, music, and dance?</li>
<li>... that the species name for <i><b><a href="/wiki/Burmaleon" title="Burmaleon">Burmaleon magnificus</a></b></i> was coined for the quality of preservation in the fossils?</li>
<li>... that the documentary film <i><b><a href="/wiki/No_Land%27s_Song" title="No Land's Song">No Land's Song</a></b></i> spotlights women's protests against an Iranian ban on public female solo singing before male audiences?</li>
<li>... that uninjured reporters commandeered a <a href="/wiki/Medical_evacuation" title="Medical evacuation">medical evacuation</a> helicopter during <b><a href="/wiki/Campaign_Z" title="Campaign Z">Campaign Z</a></b>?</li>
<li>... that of an estimated 100,000 <b><a href="/wiki/German_Jewish_military_personnel_of_World_War_I" title="German Jewish military personnel of World War I">German Jews</a></b> who served in the <a href="/wiki/German_Army_(German_Empire)" title="German Army (German Empire)">German Army</a> in <a href="/wiki/World_War_I" title="World War I">World War I</a>, 12,000 were killed in action?</li>
</ul>
<p><b>Correction</b>: we erroneously claimed here that in 1964 <a href="/wiki/Jim_Hazelton" title="Jim Hazelton">Jim Hazelton</a> was the first Australian to fly a single-engine aircraft across the Pacific, but <a href="/wiki/Charles_Kingsford_Smith" title="Charles Kingsford Smith">Charles Kingsford Smith</a> and copilot <a href="/wiki/Gordon_Taylor_(aviator)" title="Gordon Taylor (aviator)">Gordon Taylor</a> were actually the first to do so in 1934 in their <a href="/wiki/Lockheed_Altair" title="Lockheed Altair">Lockheed Altair</a> <i><a href="/wiki/Lady_Southern_Cross" title="Lady Southern Cross">Lady Southern Cross</a></i>.</p>
<div class="hlist noprint" id="mp-dyk-footer" style="text-align:right;">
<ul>
<li><b><a href="/wiki/Wikipedia:Recent_additions" title="Wikipedia:Recent additions">Recently improved articles</a></b></li>
<li><b><a href="/wiki/Wikipedia:Your_first_article" title="Wikipedia:Your first article">Start a new article</a></b></li>
<li><b><a href="/wiki/Template_talk:Did_you_know" title="Template talk:Did you know">Nominate an article</a></b></li>
</ul>
</div>
</div>
</td>
</tr>
</table>
</td>
<td style="border:1px solid transparent;"></td>
<td class="MainPageBG" style="width:45%; border:1px solid #cedff2; background:#f5faff; vertical-align:top;">
<table id="mp-right" style="width:100%; vertical-align:top; background:#f5faff;">
<tr>
<td style="padding:2px;">
<h2 id="mp-itn-h2" style="margin:3px; background:#cedff2; font-family:inherit; font-size:120%; font-weight:bold; border:1px solid #a3b0bf; text-align:left; color:#000; padding:0.2em 0.4em;"><span class="mw-headline" id="In_the_news">In the news</span></h2>
</td>
</tr>
<tr>
<td style="color:#000; padding:2px 5px;">
<div id="mp-itn">
<div id="mp-itn-img" style="float:right;margin-left:0.5em;">
<div class="thumbinner mp-thumb" style="background: transparent; border: none; padding: 0; max-width: 120px;"><a href="/wiki/File:Total_Solar_Eclipse,_9_March_2016,_from_Balikpapan,_East_Kalimantan,_Indonesia.JPG" title="Total solar eclipse, viewed from Balikpapan"><img alt="Total solar eclipse, viewed from Balikpapan" data-file-height="388" data-file-width="396" height="118" src="//upload.wikimedia.org/wikipedia/commons/thumb/a/a6/Total_Solar_Eclipse%2C_9_March_2016%2C_from_Balikpapan%2C_East_Kalimantan%2C_Indonesia_%28cropped%29.JPG/120px-Total_Solar_Eclipse%2C_9_March_2016%2C_from_Balikpapan%2C_East_Kalimantan%2C_Indonesia_%28cropped%29.JPG" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/a/a6/Total_Solar_Eclipse%2C_9_March_2016%2C_from_Balikpapan%2C_East_Kalimantan%2C_Indonesia_%28cropped%29.JPG/180px-Total_Solar_Eclipse%2C_9_March_2016%2C_from_Balikpapan%2C_East_Kalimantan%2C_Indonesia_%28cropped%29.JPG 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/a/a6/Total_Solar_Eclipse%2C_9_March_2016%2C_from_Balikpapan%2C_East_Kalimantan%2C_Indonesia_%28cropped%29.JPG/240px-Total_Solar_Eclipse%2C_9_March_2016%2C_from_Balikpapan%2C_East_Kalimantan%2C_Indonesia_%28cropped%29.JPG 2x" width="120"/></a>
<div class="thumbcaption" style="padding: 0.25em 0; word-wrap: break-word;">Total solar eclipse, viewed from <a href="/wiki/Balikpapan" title="Balikpapan">Balikpapan</a></div>
</div>
</div>
<ul>
<li><b><a href="/wiki/March_2016_Ankara_bombing" title="March 2016 Ankara bombing">An explosion</a></b> in <a href="/wiki/Ankara" title="Ankara">Ankara</a>, Turkey, kills 37 people and injures at least 125 others.</li>
<li>At least 18 people are killed in <b><a href="/wiki/2016_Grand-Bassam_shootings" title="2016 Grand-Bassam shootings">shootings</a></b> at a beach resort in <a href="/wiki/Grand-Bassam" title="Grand-Bassam">Grand-Bassam</a>, Ivory Coast.</li>
<li><a href="/wiki/Google_DeepMind" title="Google DeepMind">Google DeepMind</a>'s <a href="/wiki/AlphaGo" title="AlphaGo">AlphaGo</a> computer program <b><a href="/wiki/AlphaGo_versus_Lee_Sedol" title="AlphaGo versus Lee Sedol">wins a series</a></b> against <a href="/wiki/Lee_Sedol" title="Lee Sedol">Lee Sedol</a>, one of the world's best <a href="/wiki/Go_(game)" title="Go (game)">Go</a> players.</li>
<li>A total <a href="/wiki/Solar_eclipse" title="Solar eclipse">solar eclipse</a> <b><a href="/wiki/Solar_eclipse_of_March_9,_2016" title="Solar eclipse of March 9, 2016">occurs</a></b>, with totality <i>(pictured)</i> visible from Indonesia and the North Pacific.</li>
<li>In the <b><a href="/wiki/Slovak_parliamentary_election,_2016" title="Slovak parliamentary election, 2016">Slovak parliamentary election</a></b>, <a href="/wiki/Direction_%E2%80%93_Social_Democracy" title="Direction – Social Democracy">Direction – Social Democracy</a> remains the largest political party but loses its majority in the <a href="/wiki/National_Council_(Slovakia)" title="National Council (Slovakia)">National Council</a>.</li>
<li>The <a href="/wiki/Human_Rights_Protection_Party" title="Human Rights Protection Party">Human Rights Protection Party</a>, led by <a href="/wiki/Tuilaepa_Aiono_Sailele_Malielegaoi" title="Tuilaepa Aiono Sailele Malielegaoi">Tuilaepa Aiono Sailele Malielegaoi</a>, wins a landslide victory in the <b><a href="/wiki/Samoan_general_election,_2016" title="Samoan general election, 2016">Samoan general election</a></b>.</li>
</ul>
<ul style="list-style:none; margin-left:0;">
<li><b><a href="/wiki/Portal:Current_events" title="Portal:Current events">Ongoing events</a></b>:
<div class="hlist inline">
<ul>
<li><a href="/wiki/Zika_virus_outbreak_(2015%E2%80%93present)" title="Zika virus outbreak (2015–present)">Zika virus outbreak</a></li>
<li><a href="/wiki/European_migrant_crisis" title="European migrant crisis">European migrant crisis</a></li>
</ul>
</div>
</li>
<li><b><a href="/wiki/Deaths_in_2016" title="Deaths in 2016">Recent deaths</a></b>:
<div class="hlist inline">
<ul>
<li><a href="/wiki/Hilary_Putnam" title="Hilary Putnam">Hilary Putnam</a></li>
<li><a href="/wiki/Lloyd_Shapley" title="Lloyd Shapley">Lloyd Shapley</a></li>
<li><a href="/wiki/Iolanda_Bala%C8%99" title="Iolanda Balaș">Iolanda Balaș</a></li>
</ul>
</div>
</li>
</ul>
</div>
</td>
</tr>
<tr>
<td style="padding:2px;">
<h2 id="mp-otd-h2" style="margin:3px; background:#cedff2; font-family:inherit; font-size:120%; font-weight:bold; border:1px solid #a3b0bf; text-align:left; color:#000; padding:0.2em 0.4em;"><span class="mw-headline" id="On_this_day...">On this day...</span></h2>
</td>
</tr>
<tr>
<td style="color:#000; padding:2px 5px 5px;">
<div id="mp-otd">
<p><b><a href="/wiki/March_15" title="March 15">March 15</a></b>: <b><a href="/wiki/Ides_of_March" title="Ides of March">Ides of March</a></b>; <b><a href="/wiki/Hungarian_Revolution_of_1848" title="Hungarian Revolution of 1848">National Day</a></b> in Hungary (<a href="/wiki/1848" title="1848">1848</a>)</p>
<div id="mp-otd-img" style="float:right;margin-left:0.5em;">
<div class="thumbinner mp-thumb" style="background: transparent; border: none; padding: 0; max-width: 100px;"><a class="image" href="/wiki/File:Villa_close_up.jpg" title="Pancho Villa"><img alt="Pancho Villa" data-file-height="574" data-file-width="431" height="133" src="//upload.wikimedia.org/wikipedia/commons/thumb/1/10/Villa_close_up.jpg/100px-Villa_close_up.jpg" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/1/10/Villa_close_up.jpg/150px-Villa_close_up.jpg 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/1/10/Villa_close_up.jpg/200px-Villa_close_up.jpg 2x" width="100"/></a>
<div class="thumbcaption" style="padding: 0.25em 0; word-wrap: break-word;">Pancho Villa</div>
</div>
</div>
<ul>
<li><a href="/wiki/1783" title="1783">1783</a> – A <b><a href="/wiki/Newburgh_Conspiracy" title="Newburgh Conspiracy">potential uprising</a></b> in <a href="/wiki/Newburgh_(city),_New_York" title="Newburgh (city), New York">Newburgh, New York</a>, was defused when <a href="/wiki/George_Washington" title="George Washington">George Washington</a> asked <a href="/wiki/Continental_Army" title="Continental Army">Continental Army</a> officers to support the supremacy of <a href="/wiki/United_States_Congress" title="United States Congress">Congress</a>.</li>
<li><a href="/wiki/1892" title="1892">1892</a> – <b><a href="/wiki/Liverpool_F.C." title="Liverpool F.C.">Liverpool F.C.</a></b>, one of England's most successful <a href="/wiki/Association_football" title="Association football">football</a> clubs, was founded.</li>
<li><a href="/wiki/1916" title="1916">1916</a> – Six days after <a href="/wiki/Pancho_Villa" title="Pancho Villa">Pancho Villa</a> <i>(pictured)</i> and his cross-border raiders attacked <a href="/wiki/Columbus,_New_Mexico" title="Columbus, New Mexico">Columbus, New Mexico</a>, US General <a href="/wiki/John_J._Pershing" title="John J. Pershing">John J. Pershing</a> led a <b><a href="/wiki/Pancho_Villa_Expedition" title="Pancho Villa Expedition">punitive expedition into Mexico</a></b> to pursue Villa.</li>
<li><a href="/wiki/1941" title="1941">1941</a> – <b><a href="/wiki/Philippine_Airlines" title="Philippine Airlines">Philippine Airlines</a></b>, the <a href="/wiki/Flag_carrier" title="Flag carrier">flag carrier</a> of the Philippines took its first flight, making it the oldest commercial airline in Asia operating under its original name.</li>
<li><a href="/wiki/2011" title="2011">2011</a> – <a href="/wiki/Arab_Spring" title="Arab Spring">Arab Spring</a>: Protests erupted <b><a href="/wiki/Syrian_Civil_War" title="Syrian Civil War">across Syria</a></b> against the authoritarian government.</li>
</ul>
<ul style="list-style:none; margin-left:0;">
<li>More anniversaries:
<div class="hlist inline nowraplinks">
<ul>
<li><a href="/wiki/March_14" title="March 14">March 14</a></li>
<li><b><a href="/wiki/March_15" title="March 15">March 15</a></b></li>
<li><a href="/wiki/March_16" title="March 16">March 16</a></li>
</ul>
</div>
</li>
</ul>
<div class="hlist noprint" id="mp-otd-footer" style="text-align: right;">
<ul>
<li><b><a href="/wiki/Wikipedia:Selected_anniversaries/March" title="Wikipedia:Selected anniversaries/March">Archive</a></b></li>
<li><b><a class="extiw" href="https://lists.wikimedia.org/mailman/listinfo/daily-article-l" title="mail:daily-article-l">By email</a></b></li>
<li><b><a href="/wiki/List_of_historical_anniversaries" title="List of historical anniversaries">List of historical anniversaries</a></b></li>
</ul>
<div style="font-size:smaller;">
<ul>
<li>Current date: <span class="nowrap">March 15, 2016</span> (<a href="/wiki/Coordinated_Universal_Time" title="Coordinated Universal Time">UTC</a>)</li>
<li><span class="plainlinks" id="otd-purgelink"><span class="nowrap"><a class="external text" href="//en.wikipedia.org/w/index.php?title=Main_Page&action=purge">Reload this page</a></span></span></li>
</ul>
</div>
</div>
</div>
</td>
</tr>
</table>
</td>
</tr>
</table>
<table id="mp-lower" style="margin:4px 0 0 0; width:100%; background:none; border-spacing: 0px;">
<tr>
<td class="MainPageBG" style="width:100%; border:1px solid #ddcef2; background:#faf5ff; vertical-align:top; color:#000;">
<table id="mp-bottom" style="width:100%; vertical-align:top; background:#faf5ff; color:#000;">
<tr>
<td style="padding:2px;">
<h2 id="mp-tfp-h2" style="margin:3px; background:#ddcef2; font-family:inherit; font-size:120%; font-weight:bold; border:1px solid #afa3bf; text-align:left; color:#000; padding:0.2em 0.4em"><span class="mw-headline" id="Today.27s_featured_picture">Today's featured picture</span></h2>
</td>
</tr>
<tr>
<td style="color:#000; padding:2px;">
<div id="mp-tfp">
<table style="margin:0 3px 3px; width:100%; text-align:left; background-color:transparent; border-collapse: collapse;">
<tr>
<td style="padding:0 0.9em 0 0;"><a class="image" href="/wiki/File:Ash_in_Yogyakarta_during_the_2014_eruption_of_Kelud_01.jpg" title="Man sweeping volcanic ash"><img alt="Man sweeping volcanic ash" data-file-height="1524" data-file-width="2246" height="258" src="//upload.wikimedia.org/wikipedia/commons/thumb/9/9a/Ash_in_Yogyakarta_during_the_2014_eruption_of_Kelud_01.jpg/380px-Ash_in_Yogyakarta_during_the_2014_eruption_of_Kelud_01.jpg" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/9/9a/Ash_in_Yogyakarta_during_the_2014_eruption_of_Kelud_01.jpg/570px-Ash_in_Yogyakarta_during_the_2014_eruption_of_Kelud_01.jpg 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/9/9a/Ash_in_Yogyakarta_during_the_2014_eruption_of_Kelud_01.jpg/760px-Ash_in_Yogyakarta_during_the_2014_eruption_of_Kelud_01.jpg 2x" width="380"/></a></td>
<td style="padding:0 6px 0 0">
<p>A man sweeping <a href="/wiki/Volcanic_ash" title="Volcanic ash">volcanic ash</a> in <a href="/wiki/Yogyakarta" title="Yogyakarta">Yogyakarta</a> during the <b><a href="/wiki/Kelud#2014_eruption" title="Kelud">2014 eruption</a></b> of <a href="/wiki/Kelud" title="Kelud">Kelud</a>. The <a href="/wiki/East_Java" title="East Java">East Javan</a> volcano erupted on 13 February 2014 and sent volcanic ash covering an area of about 500 kilometres (310 mi) in diameter. Ashfall from the eruption "paralyzed Java", closing airports, tourist attractions, and businesses as far away as <a href="/wiki/Bandung" title="Bandung">Bandung</a> and causing millions of dollars in financial losses. Cleaning operations continued for more than a week.</p>
<p><small>Photograph: <a href="/wiki/User:Crisco_1492" title="User:Crisco 1492">Chris Woodrich</a></small></p>
<ul style="list-style:none; margin-left:0; text-align:right;">
<li>Recently featured:
<div class="hlist inline">
<ul>
<li><a href="/wiki/Template:POTD/2016-03-14" title="Template:POTD/2016-03-14"><i>Homme au bain</i></a></li>
<li><a href="/wiki/Template:POTD/2016-03-13" title="Template:POTD/2016-03-13">Wagner VI projection</a></li>
<li><a href="/wiki/Template:POTD/2016-03-12" title="Template:POTD/2016-03-12">Lynx (constellation)</a></li>
</ul>
</div>
</li>
</ul>
<div class="hlist noprint" style="text-align:right;">
<ul>
<li><b><a href="/wiki/Wikipedia:Picture_of_the_day/March_2016" title="Wikipedia:Picture of the day/March 2016">Archive</a></b></li>
<li><b><a href="/wiki/Wikipedia:Featured_pictures" title="Wikipedia:Featured pictures">More featured pictures...</a></b></li>
</ul>
</div>
</td>
</tr>
</table>
</div>
</td>
</tr>
</table>
</td>
</tr>
</table>
<div id="mp-other" style="padding-top:4px; padding-bottom:2px;">
<h2><span class="mw-headline" id="Other_areas_of_Wikipedia">Other areas of Wikipedia</span></h2>
<ul>
<li><b><a href="/wiki/Wikipedia:Community_portal" title="Wikipedia:Community portal">Community portal</a></b> – Bulletin board, projects, resources and activities covering a wide range of Wikipedia areas.</li>
<li><b><a href="/wiki/Wikipedia:Help_desk" title="Wikipedia:Help desk">Help desk</a></b> – Ask questions about using Wikipedia.</li>
<li><b><a href="/wiki/Wikipedia:Local_Embassy" title="Wikipedia:Local Embassy">Local embassy</a></b> – For Wikipedia-related communication in languages other than English.</li>
<li><b><a href="/wiki/Wikipedia:Reference_desk" title="Wikipedia:Reference desk">Reference desk</a></b> – Serving as virtual librarians, Wikipedia volunteers tackle your questions on a wide range of subjects.</li>
<li><b><a href="/wiki/Wikipedia:News" title="Wikipedia:News">Site news</a></b> – Announcements, updates, articles and press releases on Wikipedia and the Wikimedia Foundation.</li>
<li><b><a href="/wiki/Wikipedia:Village_pump" title="Wikipedia:Village pump">Village pump</a></b> – For discussions about Wikipedia itself, including areas for technical issues and policies.</li>
</ul>
</div>
<div id="mp-sister">
<h2><span class="mw-headline" id="Wikipedia.27s_sister_projects">Wikipedia's sister projects</span></h2>
<p>Wikipedia is hosted by the <a href="/wiki/Wikimedia_Foundation" title="Wikimedia Foundation">Wikimedia Foundation</a>, a non-profit organization that also hosts a range of other <a class="extiw" href="//wikimediafoundation.org/wiki/Our_projects" title="wmf:Our projects">projects</a>:</p>
<table class="layout plainlinks" style="width:100%; margin:auto; text-align:left; background:transparent;">
<tr>
<td style="text-align:center; padding:4px;"><a href="//commons.wikimedia.org/wiki/" title="Commons"><img alt="Commons" data-file-height="41" data-file-width="31" height="41" src="//upload.wikimedia.org/wikipedia/en/9/9d/Commons-logo-31px.png" width="31"/></a></td>
<td style="width:33%; padding:4px;"><b><a class="external text" href="//commons.wikimedia.org/">Commons</a></b><br/>
Free media repository</td>
<td style="text-align:center; padding:4px;"><a href="//www.mediawiki.org/wiki/" title="MediaWiki"><img alt="MediaWiki" data-file-height="102" data-file-width="135" height="26" src="//upload.wikimedia.org/wikipedia/commons/thumb/3/3d/Mediawiki-logo.png/35px-Mediawiki-logo.png" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/3/3d/Mediawiki-logo.png/53px-Mediawiki-logo.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/3/3d/Mediawiki-logo.png/70px-Mediawiki-logo.png 2x" width="35"/></a></td>
<td style="width:33%; padding:4px;"><b><a class="external text" href="//mediawiki.org/">MediaWiki</a></b><br/>
Wiki software development</td>
<td style="text-align:center; padding:4px;"><a href="//meta.wikimedia.org/wiki/" title="Meta-Wiki"><img alt="Meta-Wiki" data-file-height="35" data-file-width="35" height="35" src="//upload.wikimedia.org/wikipedia/en/b/bc/Meta-logo-35px.png" width="35"/></a></td>
<td style="width:33%; padding:4px;"><b><a class="external text" href="//meta.wikimedia.org/">Meta-Wiki</a></b><br/>
Wikimedia project coordination</td>
</tr>
<tr>
<td style="text-align:center; padding:4px;"><a href="//en.wikibooks.org/wiki/" title="Wikibooks"><img alt="Wikibooks" data-file-height="35" data-file-width="35" height="35" src="//upload.wikimedia.org/wikipedia/en/7/7f/Wikibooks-logo-35px.png" width="35"/></a></td>
<td style="padding:4px;"><b><a class="external text" href="//en.wikibooks.org/">Wikibooks</a></b><br/>
Free textbooks and manuals</td>
<td style="text-align:center; padding:3px;"><a href="//www.wikidata.org/wiki/" title="Wikidata"><img alt="Wikidata" data-file-height="590" data-file-width="1050" height="26" src="//upload.wikimedia.org/wikipedia/commons/thumb/f/ff/Wikidata-logo.svg/47px-Wikidata-logo.svg.png" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/f/ff/Wikidata-logo.svg/71px-Wikidata-logo.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/f/ff/Wikidata-logo.svg/94px-Wikidata-logo.svg.png 2x" width="47"/></a></td>
<td style="padding:4px;"><b><a class="external text" href="//www.wikidata.org/">Wikidata</a></b><br/>
Free knowledge base</td>
<td style="text-align:center; padding:4px;"><a href="//en.wikinews.org/wiki/" title="Wikinews"><img alt="Wikinews" data-file-height="30" data-file-width="51" height="30" src="//upload.wikimedia.org/wikipedia/en/6/60/Wikinews-logo-51px.png" width="51"/></a></td>
<td style="padding:4px;"><b><a class="external text" href="//en.wikinews.org/">Wikinews</a></b><br/>
Free-content news</td>
</tr>
<tr>
<td style="text-align:center; padding:4px;"><a href="//en.wikiquote.org/wiki/" title="Wikiquote"><img alt="Wikiquote" data-file-height="41" data-file-width="51" height="41" src="//upload.wikimedia.org/wikipedia/en/4/46/Wikiquote-logo-51px.png" width="51"/></a></td>
<td style="padding:4px;"><b><a class="external text" href="//en.wikiquote.org/">Wikiquote</a></b><br/>
Collection of quotations</td>
<td style="text-align:center; padding:4px;"><a href="//en.wikisource.org/wiki/" title="Wikisource"><img alt="Wikisource" data-file-height="37" data-file-width="35" height="37" src="//upload.wikimedia.org/wikipedia/en/b/b6/Wikisource-logo-35px.png" width="35"/></a></td>
<td style="padding:4px;"><b><a class="external text" href="//en.wikisource.org/">Wikisource</a></b><br/>
Free-content library</td>
<td style="text-align:center; padding:4px;"><a href="//species.wikimedia.org/wiki/" title="Wikispecies"><img alt="Wikispecies" data-file-height="41" data-file-width="35" height="41" src="//upload.wikimedia.org/wikipedia/en/b/bf/Wikispecies-logo-35px.png" width="35"/></a></td>
<td style="padding:4px;"><b><a class="external text" href="//species.wikimedia.org/">Wikispecies</a></b><br/>
Directory of species</td>
</tr>
<tr>
<td style="text-align:center; padding:4px;"><a href="//en.wikiversity.org/wiki/" title="Wikiversity"><img alt="Wikiversity" data-file-height="32" data-file-width="41" height="32" src="//upload.wikimedia.org/wikipedia/en/e/e3/Wikiversity-logo-41px.png" width="41"/></a></td>
<td style="padding:4px;"><b><a class="external text" href="//en.wikiversity.org/">Wikiversity</a></b><br/>
Free learning materials and activities</td>
<td style="text-align:center; padding:4px;"><a href="//en.wikivoyage.org/wiki/" title="Wikivoyage"><img alt="Wikivoyage" data-file-height="193" data-file-width="193" height="35" src="//upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Wikivoyage-Logo-v3-icon.svg/35px-Wikivoyage-Logo-v3-icon.svg.png" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Wikivoyage-Logo-v3-icon.svg/53px-Wikivoyage-Logo-v3-icon.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Wikivoyage-Logo-v3-icon.svg/70px-Wikivoyage-Logo-v3-icon.svg.png 2x" width="35"/></a></td>
<td style="padding:4px;"><b><a class="external text" href="//en.wikivoyage.org/">Wikivoyage</a></b><br/>
Free travel guide</td>
<td style="text-align:center; padding:4px;"><a href="//en.wiktionary.org/wiki/" title="Wiktionary"><img alt="Wiktionary" data-file-height="35" data-file-width="51" height="35" src="//upload.wikimedia.org/wikipedia/en/f/f2/Wiktionary-logo-51px.png" width="51"/></a></td>
<td style="padding:4px;"><b><a class="external text" href="//en.wiktionary.org/">Wiktionary</a></b><br/>
Dictionary and thesaurus</td>
</tr>
</table>
</div>
<div id="mp-lang">
<h2><span class="mw-headline" id="Wikipedia_languages">Wikipedia languages</span></h2>
<div class="nowraplinks nourlexpansion plainlinks" id="lang">
<p>This Wikipedia is written in <a href="/wiki/English_language" title="English language">English</a>. Started in 2001<span style="display:none"> (<span class="bday dtstart published updated">2001</span>)</span>, it currently contains <a href="/wiki/Special:Statistics" title="Special:Statistics">5,104,889</a> articles. Many other Wikipedias are available; some of the largest are listed below.</p>
<ul>
<li id="lang-3">More than 1,000,000 articles:
<div class="hlist inline">
<ul>
<li><a class="external text" href="//de.wikipedia.org/wiki/"><span class="autonym" lang="de" title="German (de:)" xml:lang="de">Deutsch</span></a></li>
<li><a class="external text" href="//es.wikipedia.org/wiki/"><span class="autonym" lang="es" title="Spanish (es:)" xml:lang="es">Español</span></a></li>
<li><a class="external text" href="//fr.wikipedia.org/wiki/"><span class="autonym" lang="fr" title="French (fr:)" xml:lang="fr">Français</span></a></li>
<li><a class="external text" href="//it.wikipedia.org/wiki/"><span class="autonym" lang="it" title="Italian (it:)" xml:lang="it">Italiano</span></a></li>
<li><a class="external text" href="//nl.wikipedia.org/wiki/"><span class="autonym" lang="nl" title="Dutch (nl:)" xml:lang="nl">Nederlands</span></a></li>
<li><a class="external text" href="//ja.wikipedia.org/wiki/"><span class="autonym" lang="ja" title="Japanese (ja:)" xml:lang="ja">日本語</span></a></li>
<li><a class="external text" href="//pl.wikipedia.org/wiki/"><span class="autonym" lang="pl" title="Polish (pl:)" xml:lang="pl">Polski</span></a></li>
<li><a class="external text" href="//ru.wikipedia.org/wiki/"><span class="autonym" lang="ru" title="Russian (ru:)" xml:lang="ru">Русский</span></a></li>
<li><a class="external text" href="//sv.wikipedia.org/wiki/"><span class="autonym" lang="sv" title="Swedish (sv:)" xml:lang="sv">Svenska</span></a></li>
<li><a class="external text" href="//vi.wikipedia.org/wiki/"><span class="autonym" lang="vi" title="Vietnamese (vi:)" xml:lang="vi">Tiếng Việt</span></a></li>
</ul>
</div>
</li>
<li id="lang-2">More than 250,000 articles:
<div class="hlist inline">
<ul>
<li><a class="external text" href="//ar.wikipedia.org/wiki/"><span class="autonym" lang="ar" title="Arabic (ar:)" xml:lang="ar">العربية</span></a></li>
<li><a class="external text" href="//id.wikipedia.org/wiki/"><span class="autonym" lang="id" title="Indonesian (id:)" xml:lang="id">Bahasa Indonesia</span></a></li>
<li><a class="external text" href="//ms.wikipedia.org/wiki/"><span class="autonym" lang="ms" title="Malay (ms:)" xml:lang="ms">Bahasa Melayu</span></a></li>
<li><a class="external text" href="//ca.wikipedia.org/wiki/"><span class="autonym" lang="ca" title="Catalan (ca:)" xml:lang="ca">Català</span></a></li>
<li><a class="external text" href="//cs.wikipedia.org/wiki/"><span class="autonym" lang="cs" title="Czech (cs:)" xml:lang="cs">Čeština</span></a></li>
<li><a class="external text" href="//fa.wikipedia.org/wiki/"><span class="autonym" lang="fa" title="Persian (fa:)" xml:lang="fa">فارسی</span></a></li>
<li><a class="external text" href="//ko.wikipedia.org/wiki/"><span class="autonym" lang="ko" title="Korean (ko:)" xml:lang="ko">한국어</span></a></li>
<li><a class="external text" href="//hu.wikipedia.org/wiki/"><span class="autonym" lang="hu" title="Hungarian (hu:)" xml:lang="hu">Magyar</span></a></li>
<li><a class="external text" href="//no.wikipedia.org/wiki/"><span class="autonym" lang="no" title="Norwegian (no:)" xml:lang="no">Norsk bokmål</span></a></li>
<li><a class="external text" href="//pt.wikipedia.org/wiki/"><span class="autonym" lang="pt" title="Portuguese (pt:)" xml:lang="pt">Português</span></a></li>
<li><a class="external text" href="//ro.wikipedia.org/wiki/"><span class="autonym" lang="ro" title="Romanian (ro:)" xml:lang="ro">Română</span></a></li>
<li><a class="external text" href="//sr.wikipedia.org/wiki/"><span class="autonym" lang="sr" title="Serbian (sr:)" xml:lang="sr">Srpski / српски</span></a></li>
<li><a class="external text" href="//sh.wikipedia.org/wiki/"><span class="autonym" lang="sh" title="Serbo-Croatian (sh:)" xml:lang="sh">Srpskohrvatski / српскохрватски</span></a></li>
<li><a class="external text" href="//fi.wikipedia.org/wiki/"><span class="autonym" lang="fi" title="Finnish (fi:)" xml:lang="fi">Suomi</span></a></li>
<li><a class="external text" href="//tr.wikipedia.org/wiki/"><span class="autonym" lang="tr" title="Turkish (tr:)" xml:lang="tr">Türkçe</span></a></li>
<li><a class="external text" href="//uk.wikipedia.org/wiki/"><span class="autonym" lang="uk" title="Ukrainian (uk:)" xml:lang="uk">Українська</span></a></li>
<li><a class="external text" href="//zh.wikipedia.org/wiki/"><span class="autonym" lang="zh" title="Chinese (zh:)" xml:lang="zh">中文</span></a></li>
</ul>
</div>
</li>
<li id="lang-1">More than 50,000 articles:
<div class="hlist inline">
<ul>
<li><a class="external text" href="//bs.wikipedia.org/wiki/"><span class="autonym" lang="bs" title="Bosnian (bs:)" xml:lang="bs">Bosanski</span></a></li>
<li><a class="external text" href="//bg.wikipedia.org/wiki/"><span class="autonym" lang="bg" title="Bulgarian (bg:)" xml:lang="bg">Български</span></a></li>
<li><a class="external text" href="//da.wikipedia.org/wiki/"><span class="autonym" lang="da" title="Danish (da:)" xml:lang="da">Dansk</span></a></li>
<li><a class="external text" href="//et.wikipedia.org/wiki/"><span class="autonym" lang="et" title="Estonian (et:)" xml:lang="et">Eesti</span></a></li>
<li><a class="external text" href="//el.wikipedia.org/wiki/"><span class="autonym" lang="el" title="Greek (el:)" xml:lang="el">Ελληνικά</span></a></li>
<li><a class="external text" href="//simple.wikipedia.org/wiki/"><span class="autonym" lang="simple" title="Simple English (simple:)" xml:lang="simple">English (simple)</span></a></li>
<li><a class="external text" href="//eo.wikipedia.org/wiki/"><span class="autonym" lang="eo" title="Esperanto (eo:)" xml:lang="eo">Esperanto</span></a></li>
<li><a class="external text" href="//eu.wikipedia.org/wiki/"><span class="autonym" lang="eu" title="Basque (eu:)" xml:lang="eu">Euskara</span></a></li>
<li><a class="external text" href="//gl.wikipedia.org/wiki/"><span class="autonym" lang="gl" title="Galician (gl:)" xml:lang="gl">Galego</span></a></li>
<li><a class="external text" href="//he.wikipedia.org/wiki/"><span class="autonym" lang="he" title="Hebrew (he:)" xml:lang="he">עברית</span></a></li>
<li><a class="external text" href="//hr.wikipedia.org/wiki/"><span class="autonym" lang="hr" title="Croatian (hr:)" xml:lang="hr">Hrvatski</span></a></li>
<li><a class="external text" href="//lv.wikipedia.org/wiki/"><span class="autonym" lang="lv" title="Latvian (lv:)" xml:lang="lv">Latviešu</span></a></li>
<li><a class="external text" href="//lt.wikipedia.org/wiki/"><span class="autonym" lang="lt" title="Lithuanian (lt:)" xml:lang="lt">Lietuvių</span></a></li>
<li><a class="external text" href="//nn.wikipedia.org/wiki/"><span class="autonym" lang="nn" title="Norwegian Nynorsk (nn:)" xml:lang="nn">Norsk nynorsk</span></a></li>
<li><a class="external text" href="//sk.wikipedia.org/wiki/"><span class="autonym" lang="sk" title="Slovak (sk:)" xml:lang="sk">Slovenčina</span></a></li>
<li><a class="external text" href="//sl.wikipedia.org/wiki/"><span class="autonym" lang="sl" title="Slovenian (sl:)" xml:lang="sl">Slovenščina</span></a></li>
<li><a class="external text" href="//th.wikipedia.org/wiki/"><span class="autonym" lang="th" title="Thai (th:)" xml:lang="th">ไทย</span></a></li>
</ul>
</div>
</li>
</ul>
</div>
<div class="plainlinks" id="metalink" style="text-align:center;"><b><a class="extiw" href="//meta.wikimedia.org/wiki/List_of_Wikipedias" title="meta:List of Wikipedias">Complete list of Wikipedias</a></b></div>
</div>
</div>
<a href="/wiki/Yogyakarta" title="Yogyakarta">Yogyakarta</a>
<b><a href="/wiki/Kelud#2014_eruption" title="Kelud">2014 eruption</a></b>
<a href="/wiki/Kelud" title="Kelud">Kelud</a>
<a href="/wiki/East_Java" title="East Java">East Javan</a>
<a href="/wiki/Bandung" title="Bandung">Bandung</a>
<p><small>Photograph: <a href="/wiki/User:Crisco_1492" title="User:Crisco 1492">Chris Woodrich</a></small></p>
<ul style="list-style:none; margin-left:0; text-align:right;">
<li>Recently featured:
<div class="hlist inline">
<ul>
<li><a href="/wiki/Template:POTD/2016-03-14" title="Template:POTD/2016-03-14"><i>Homme au bain</i></a></li>
<li><a href="/wiki/Template:POTD/2016-03-13" title="Template:POTD/2016-03-13">Wagner VI projection</a></li>
<li><a href="/wiki/Template:POTD/2016-03-12" title="Template:POTD/2016-03-12">Lynx (constellation)</a></li>
</ul>
</div>
</li>
</ul>
<i>Homme au bain</i>
<li><a href="/wiki/Template:POTD/2016-03-12" title="Template:POTD/2016-03-12">Lynx (constellation)</a></li>
<div class="hlist noprint" style="text-align:right;">
<ul>
<li><b><a href="/wiki/Wikipedia:Picture_of_the_day/March_2016" title="Wikipedia:Picture of the day/March 2016">Archive</a></b></li>
<li><b><a href="/wiki/Wikipedia:Featured_pictures" title="Wikipedia:Featured pictures">More featured pictures...</a></b></li>
</ul>
</div>
<li><b><a href="/wiki/Wikipedia:Featured_pictures" title="Wikipedia:Featured pictures">More featured pictures...</a></b></li>
<div id="mp-other" style="padding-top:4px; padding-bottom:2px;">
<h2><span class="mw-headline" id="Other_areas_of_Wikipedia">Other areas of Wikipedia</span></h2>
<ul>
<li><b><a href="/wiki/Wikipedia:Community_portal" title="Wikipedia:Community portal">Community portal</a></b> – Bulletin board, projects, resources and activities covering a wide range of Wikipedia areas.</li>
<li><b><a href="/wiki/Wikipedia:Help_desk" title="Wikipedia:Help desk">Help desk</a></b> – Ask questions about using Wikipedia.</li>
<li><b><a href="/wiki/Wikipedia:Local_Embassy" title="Wikipedia:Local Embassy">Local embassy</a></b> – For Wikipedia-related communication in languages other than English.</li>
<li><b><a href="/wiki/Wikipedia:Reference_desk" title="Wikipedia:Reference desk">Reference desk</a></b> – Serving as virtual librarians, Wikipedia volunteers tackle your questions on a wide range of subjects.</li>
<li><b><a href="/wiki/Wikipedia:News" title="Wikipedia:News">Site news</a></b> – Announcements, updates, articles and press releases on Wikipedia and the Wikimedia Foundation.</li>
<li><b><a href="/wiki/Wikipedia:Village_pump" title="Wikipedia:Village pump">Village pump</a></b> – For discussions about Wikipedia itself, including areas for technical issues and policies.</li>
</ul>
</div>
<li><b><a href="/wiki/Wikipedia:Help_desk" title="Wikipedia:Help desk">Help desk</a></b> – Ask questions about using Wikipedia.</li>
<li><b><a href="/wiki/Wikipedia:Local_Embassy" title="Wikipedia:Local Embassy">Local embassy</a></b> – For Wikipedia-related communication in languages other than English.</li>
<li><b><a href="/wiki/Wikipedia:Reference_desk" title="Wikipedia:Reference desk">Reference desk</a></b> – Serving as virtual librarians, Wikipedia volunteers tackle your questions on a wide range of subjects.</li>
<li><b><a href="/wiki/Wikipedia:News" title="Wikipedia:News">Site news</a></b> – Announcements, updates, articles and press releases on Wikipedia and the Wikimedia Foundation.</li>
<li><b><a href="/wiki/Wikipedia:Village_pump" title="Wikipedia:Village pump">Village pump</a></b> – For discussions about Wikipedia itself, including areas for technical issues and policies.</li>
<div id="mp-sister">
<h2><span class="mw-headline" id="Wikipedia.27s_sister_projects">Wikipedia's sister projects</span></h2>
<p>Wikipedia is hosted by the <a href="/wiki/Wikimedia_Foundation" title="Wikimedia Foundation">Wikimedia Foundation</a>, a non-profit organization that also hosts a range of other <a class="extiw" href="//wikimediafoundation.org/wiki/Our_projects" title="wmf:Our projects">projects</a>:</p>
<table class="layout plainlinks" style="width:100%; margin:auto; text-align:left; background:transparent;">
<tr>
<td style="text-align:center; padding:4px;"><a href="//commons.wikimedia.org/wiki/" title="Commons"><img alt="Commons" data-file-height="41" data-file-width="31" height="41" src="//upload.wikimedia.org/wikipedia/en/9/9d/Commons-logo-31px.png" width="31"/></a></td>
<td style="width:33%; padding:4px;"><b><a class="external text" href="//commons.wikimedia.org/">Commons</a></b><br/>
Free media repository</td>
<td style="text-align:center; padding:4px;"><a href="//www.mediawiki.org/wiki/" title="MediaWiki"><img alt="MediaWiki" data-file-height="102" data-file-width="135" height="26" src="//upload.wikimedia.org/wikipedia/commons/thumb/3/3d/Mediawiki-logo.png/35px-Mediawiki-logo.png" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/3/3d/Mediawiki-logo.png/53px-Mediawiki-logo.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/3/3d/Mediawiki-logo.png/70px-Mediawiki-logo.png 2x" width="35"/></a></td>
<td style="width:33%; padding:4px;"><b><a class="external text" href="//mediawiki.org/">MediaWiki</a></b><br/>
Wiki software development</td>
<td style="text-align:center; padding:4px;"><a href="//meta.wikimedia.org/wiki/" title="Meta-Wiki"><img alt="Meta-Wiki" data-file-height="35" data-file-width="35" height="35" src="//upload.wikimedia.org/wikipedia/en/b/bc/Meta-logo-35px.png" width="35"/></a></td>
<td style="width:33%; padding:4px;"><b><a class="external text" href="//meta.wikimedia.org/">Meta-Wiki</a></b><br/>
Wikimedia project coordination</td>
</tr>
<tr>
<td style="text-align:center; padding:4px;"><a href="//en.wikibooks.org/wiki/" title="Wikibooks"><img alt="Wikibooks" data-file-height="35" data-file-width="35" height="35" src="//upload.wikimedia.org/wikipedia/en/7/7f/Wikibooks-logo-35px.png" width="35"/></a></td>
<td style="padding:4px;"><b><a class="external text" href="//en.wikibooks.org/">Wikibooks</a></b><br/>
Free textbooks and manuals</td>
<td style="text-align:center; padding:3px;"><a href="//www.wikidata.org/wiki/" title="Wikidata"><img alt="Wikidata" data-file-height="590" data-file-width="1050" height="26" src="//upload.wikimedia.org/wikipedia/commons/thumb/f/ff/Wikidata-logo.svg/47px-Wikidata-logo.svg.png" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/f/ff/Wikidata-logo.svg/71px-Wikidata-logo.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/f/ff/Wikidata-logo.svg/94px-Wikidata-logo.svg.png 2x" width="47"/></a></td>
<td style="padding:4px;"><b><a class="external text" href="//www.wikidata.org/">Wikidata</a></b><br/>
Free knowledge base</td>
<td style="text-align:center; padding:4px;"><a href="//en.wikinews.org/wiki/" title="Wikinews"><img alt="Wikinews" data-file-height="30" data-file-width="51" height="30" src="//upload.wikimedia.org/wikipedia/en/6/60/Wikinews-logo-51px.png" width="51"/></a></td>
<td style="padding:4px;"><b><a class="external text" href="//en.wikinews.org/">Wikinews</a></b><br/>
Free-content news</td>
</tr>
<tr>
<td style="text-align:center; padding:4px;"><a href="//en.wikiquote.org/wiki/" title="Wikiquote"><img alt="Wikiquote" data-file-height="41" data-file-width="51" height="41" src="//upload.wikimedia.org/wikipedia/en/4/46/Wikiquote-logo-51px.png" width="51"/></a></td>
<td style="padding:4px;"><b><a class="external text" href="//en.wikiquote.org/">Wikiquote</a></b><br/>
Collection of quotations</td>
<td style="text-align:center; padding:4px;"><a href="//en.wikisource.org/wiki/" title="Wikisource"><img alt="Wikisource" data-file-height="37" data-file-width="35" height="37" src="//upload.wikimedia.org/wikipedia/en/b/b6/Wikisource-logo-35px.png" width="35"/></a></td>
<td style="padding:4px;"><b><a class="external text" href="//en.wikisource.org/">Wikisource</a></b><br/>
Free-content library</td>
<td style="text-align:center; padding:4px;"><a href="//species.wikimedia.org/wiki/" title="Wikispecies"><img alt="Wikispecies" data-file-height="41" data-file-width="35" height="41" src="//upload.wikimedia.org/wikipedia/en/b/bf/Wikispecies-logo-35px.png" width="35"/></a></td>
<td style="padding:4px;"><b><a class="external text" href="//species.wikimedia.org/">Wikispecies</a></b><br/>
Directory of species</td>
</tr>
<tr>
<td style="text-align:center; padding:4px;"><a href="//en.wikiversity.org/wiki/" title="Wikiversity"><img alt="Wikiversity" data-file-height="32" data-file-width="41" height="32" src="//upload.wikimedia.org/wikipedia/en/e/e3/Wikiversity-logo-41px.png" width="41"/></a></td>
<td style="padding:4px;"><b><a class="external text" href="//en.wikiversity.org/">Wikiversity</a></b><br/>
Free learning materials and activities</td>
<td style="text-align:center; padding:4px;"><a href="//en.wikivoyage.org/wiki/" title="Wikivoyage"><img alt="Wikivoyage" data-file-height="193" data-file-width="193" height="35" src="//upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Wikivoyage-Logo-v3-icon.svg/35px-Wikivoyage-Logo-v3-icon.svg.png" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Wikivoyage-Logo-v3-icon.svg/53px-Wikivoyage-Logo-v3-icon.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Wikivoyage-Logo-v3-icon.svg/70px-Wikivoyage-Logo-v3-icon.svg.png 2x" width="35"/></a></td>
<td style="padding:4px;"><b><a class="external text" href="//en.wikivoyage.org/">Wikivoyage</a></b><br/>
Free travel guide</td>
<td style="text-align:center; padding:4px;"><a href="//en.wiktionary.org/wiki/" title="Wiktionary"><img alt="Wiktionary" data-file-height="35" data-file-width="51" height="35" src="//upload.wikimedia.org/wikipedia/en/f/f2/Wiktionary-logo-51px.png" width="51"/></a></td>
<td style="padding:4px;"><b><a class="external text" href="//en.wiktionary.org/">Wiktionary</a></b><br/>
Dictionary and thesaurus</td>
</tr>
</table>
</div>
<a class="extiw" href="//wikimediafoundation.org/wiki/Our_projects" title="wmf:Our projects">projects</a>
<table class="layout plainlinks" style="width:100%; margin:auto; text-align:left; background:transparent;">
<tr>
<td style="text-align:center; padding:4px;"><a href="//commons.wikimedia.org/wiki/" title="Commons"><img alt="Commons" data-file-height="41" data-file-width="31" height="41" src="//upload.wikimedia.org/wikipedia/en/9/9d/Commons-logo-31px.png" width="31"/></a></td>
<td style="width:33%; padding:4px;"><b><a class="external text" href="//commons.wikimedia.org/">Commons</a></b><br/>
Free media repository</td>
<td style="text-align:center; padding:4px;"><a href="//www.mediawiki.org/wiki/" title="MediaWiki"><img alt="MediaWiki" data-file-height="102" data-file-width="135" height="26" src="//upload.wikimedia.org/wikipedia/commons/thumb/3/3d/Mediawiki-logo.png/35px-Mediawiki-logo.png" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/3/3d/Mediawiki-logo.png/53px-Mediawiki-logo.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/3/3d/Mediawiki-logo.png/70px-Mediawiki-logo.png 2x" width="35"/></a></td>
<td style="width:33%; padding:4px;"><b><a class="external text" href="//mediawiki.org/">MediaWiki</a></b><br/>
Wiki software development</td>
<td style="text-align:center; padding:4px;"><a href="//meta.wikimedia.org/wiki/" title="Meta-Wiki"><img alt="Meta-Wiki" data-file-height="35" data-file-width="35" height="35" src="//upload.wikimedia.org/wikipedia/en/b/bc/Meta-logo-35px.png" width="35"/></a></td>
<td style="width:33%; padding:4px;"><b><a class="external text" href="//meta.wikimedia.org/">Meta-Wiki</a></b><br/>
Wikimedia project coordination</td>
</tr>
<tr>
<td style="text-align:center; padding:4px;"><a href="//en.wikibooks.org/wiki/" title="Wikibooks"><img alt="Wikibooks" data-file-height="35" data-file-width="35" height="35" src="//upload.wikimedia.org/wikipedia/en/7/7f/Wikibooks-logo-35px.png" width="35"/></a></td>
<td style="padding:4px;"><b><a class="external text" href="//en.wikibooks.org/">Wikibooks</a></b><br/>
Free textbooks and manuals</td>
<td style="text-align:center; padding:3px;"><a href="//www.wikidata.org/wiki/" title="Wikidata"><img alt="Wikidata" data-file-height="590" data-file-width="1050" height="26" src="//upload.wikimedia.org/wikipedia/commons/thumb/f/ff/Wikidata-logo.svg/47px-Wikidata-logo.svg.png" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/f/ff/Wikidata-logo.svg/71px-Wikidata-logo.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/f/ff/Wikidata-logo.svg/94px-Wikidata-logo.svg.png 2x" width="47"/></a></td>
<td style="padding:4px;"><b><a class="external text" href="//www.wikidata.org/">Wikidata</a></b><br/>
Free knowledge base</td>
<td style="text-align:center; padding:4px;"><a href="//en.wikinews.org/wiki/" title="Wikinews"><img alt="Wikinews" data-file-height="30" data-file-width="51" height="30" src="//upload.wikimedia.org/wikipedia/en/6/60/Wikinews-logo-51px.png" width="51"/></a></td>
<td style="padding:4px;"><b><a class="external text" href="//en.wikinews.org/">Wikinews</a></b><br/>
Free-content news</td>
</tr>
<tr>
<td style="text-align:center; padding:4px;"><a href="//en.wikiquote.org/wiki/" title="Wikiquote"><img alt="Wikiquote" data-file-height="41" data-file-width="51" height="41" src="//upload.wikimedia.org/wikipedia/en/4/46/Wikiquote-logo-51px.png" width="51"/></a></td>
<td style="padding:4px;"><b><a class="external text" href="//en.wikiquote.org/">Wikiquote</a></b><br/>
Collection of quotations</td>
<td style="text-align:center; padding:4px;"><a href="//en.wikisource.org/wiki/" title="Wikisource"><img alt="Wikisource" data-file-height="37" data-file-width="35" height="37" src="//upload.wikimedia.org/wikipedia/en/b/b6/Wikisource-logo-35px.png" width="35"/></a></td>
<td style="padding:4px;"><b><a class="external text" href="//en.wikisource.org/">Wikisource</a></b><br/>
Free-content library</td>
<td style="text-align:center; padding:4px;"><a href="//species.wikimedia.org/wiki/" title="Wikispecies"><img alt="Wikispecies" data-file-height="41" data-file-width="35" height="41" src="//upload.wikimedia.org/wikipedia/en/b/bf/Wikispecies-logo-35px.png" width="35"/></a></td>
<td style="padding:4px;"><b><a class="external text" href="//species.wikimedia.org/">Wikispecies</a></b><br/>
Directory of species</td>
</tr>
<tr>
<td style="text-align:center; padding:4px;"><a href="//en.wikiversity.org/wiki/" title="Wikiversity"><img alt="Wikiversity" data-file-height="32" data-file-width="41" height="32" src="//upload.wikimedia.org/wikipedia/en/e/e3/Wikiversity-logo-41px.png" width="41"/></a></td>
<td style="padding:4px;"><b><a class="external text" href="//en.wikiversity.org/">Wikiversity</a></b><br/>
Free learning materials and activities</td>
<td style="text-align:center; padding:4px;"><a href="//en.wikivoyage.org/wiki/" title="Wikivoyage"><img alt="Wikivoyage" data-file-height="193" data-file-width="193" height="35" src="//upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Wikivoyage-Logo-v3-icon.svg/35px-Wikivoyage-Logo-v3-icon.svg.png" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Wikivoyage-Logo-v3-icon.svg/53px-Wikivoyage-Logo-v3-icon.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Wikivoyage-Logo-v3-icon.svg/70px-Wikivoyage-Logo-v3-icon.svg.png 2x" width="35"/></a></td>
<td style="padding:4px;"><b><a class="external text" href="//en.wikivoyage.org/">Wikivoyage</a></b><br/>
Free travel guide</td>
<td style="text-align:center; padding:4px;"><a href="//en.wiktionary.org/wiki/" title="Wiktionary"><img alt="Wiktionary" data-file-height="35" data-file-width="51" height="35" src="//upload.wikimedia.org/wikipedia/en/f/f2/Wiktionary-logo-51px.png" width="51"/></a></td>
<td style="padding:4px;"><b><a class="external text" href="//en.wiktionary.org/">Wiktionary</a></b><br/>
Dictionary and thesaurus</td>
</tr>
</table>
<img alt="Commons" data-file-height="41" data-file-width="31" height="41" src="//upload.wikimedia.org/wikipedia/en/9/9d/Commons-logo-31px.png" width="31"/>
<br/>
<img alt="MediaWiki" data-file-height="102" data-file-width="135" height="26" src="//upload.wikimedia.org/wikipedia/commons/thumb/3/3d/Mediawiki-logo.png/35px-Mediawiki-logo.png" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/3/3d/Mediawiki-logo.png/53px-Mediawiki-logo.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/3/3d/Mediawiki-logo.png/70px-Mediawiki-logo.png 2x" width="35"/>
<br/>
<img alt="Meta-Wiki" data-file-height="35" data-file-width="35" height="35" src="//upload.wikimedia.org/wikipedia/en/b/bc/Meta-logo-35px.png" width="35"/>
<br/>
<img alt="Wikibooks" data-file-height="35" data-file-width="35" height="35" src="//upload.wikimedia.org/wikipedia/en/7/7f/Wikibooks-logo-35px.png" width="35"/>
<br/>
<img alt="Wikidata" data-file-height="590" data-file-width="1050" height="26" src="//upload.wikimedia.org/wikipedia/commons/thumb/f/ff/Wikidata-logo.svg/47px-Wikidata-logo.svg.png" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/f/ff/Wikidata-logo.svg/71px-Wikidata-logo.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/f/ff/Wikidata-logo.svg/94px-Wikidata-logo.svg.png 2x" width="47"/>
<br/>
<img alt="Wikinews" data-file-height="30" data-file-width="51" height="30" src="//upload.wikimedia.org/wikipedia/en/6/60/Wikinews-logo-51px.png" width="51"/>
<br/>
<img alt="Wikiquote" data-file-height="41" data-file-width="51" height="41" src="//upload.wikimedia.org/wikipedia/en/4/46/Wikiquote-logo-51px.png" width="51"/>
<br/>
<img alt="Wikisource" data-file-height="37" data-file-width="35" height="37" src="//upload.wikimedia.org/wikipedia/en/b/b6/Wikisource-logo-35px.png" width="35"/>
<br/>
<img alt="Wikispecies" data-file-height="41" data-file-width="35" height="41" src="//upload.wikimedia.org/wikipedia/en/b/bf/Wikispecies-logo-35px.png" width="35"/>
<br/>
<img alt="Wikiversity" data-file-height="32" data-file-width="41" height="32" src="//upload.wikimedia.org/wikipedia/en/e/e3/Wikiversity-logo-41px.png" width="41"/>
<br/>
<img alt="Wikivoyage" data-file-height="193" data-file-width="193" height="35" src="//upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Wikivoyage-Logo-v3-icon.svg/35px-Wikivoyage-Logo-v3-icon.svg.png" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Wikivoyage-Logo-v3-icon.svg/53px-Wikivoyage-Logo-v3-icon.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Wikivoyage-Logo-v3-icon.svg/70px-Wikivoyage-Logo-v3-icon.svg.png 2x" width="35"/>
<br/>
<img alt="Wiktionary" data-file-height="35" data-file-width="51" height="35" src="//upload.wikimedia.org/wikipedia/en/f/f2/Wiktionary-logo-51px.png" width="51"/>
<br/>
<span style="display:none"> (<span class="bday dtstart published updated">2001</span>)</span>
<ul>
<li id="lang-3">More than 1,000,000 articles:
<div class="hlist inline">
<ul>
<li><a class="external text" href="//de.wikipedia.org/wiki/"><span class="autonym" lang="de" title="German (de:)" xml:lang="de">Deutsch</span></a></li>
<li><a class="external text" href="//es.wikipedia.org/wiki/"><span class="autonym" lang="es" title="Spanish (es:)" xml:lang="es">Español</span></a></li>
<li><a class="external text" href="//fr.wikipedia.org/wiki/"><span class="autonym" lang="fr" title="French (fr:)" xml:lang="fr">Français</span></a></li>
<li><a class="external text" href="//it.wikipedia.org/wiki/"><span class="autonym" lang="it" title="Italian (it:)" xml:lang="it">Italiano</span></a></li>
<li><a class="external text" href="//nl.wikipedia.org/wiki/"><span class="autonym" lang="nl" title="Dutch (nl:)" xml:lang="nl">Nederlands</span></a></li>
<li><a class="external text" href="//ja.wikipedia.org/wiki/"><span class="autonym" lang="ja" title="Japanese (ja:)" xml:lang="ja">日本語</span></a></li>
<li><a class="external text" href="//pl.wikipedia.org/wiki/"><span class="autonym" lang="pl" title="Polish (pl:)" xml:lang="pl">Polski</span></a></li>
<li><a class="external text" href="//ru.wikipedia.org/wiki/"><span class="autonym" lang="ru" title="Russian (ru:)" xml:lang="ru">Русский</span></a></li>
<li><a class="external text" href="//sv.wikipedia.org/wiki/"><span class="autonym" lang="sv" title="Swedish (sv:)" xml:lang="sv">Svenska</span></a></li>
<li><a class="external text" href="//vi.wikipedia.org/wiki/"><span class="autonym" lang="vi" title="Vietnamese (vi:)" xml:lang="vi">Tiếng Việt</span></a></li>
</ul>
</div>
</li>
<li id="lang-2">More than 250,000 articles:
<div class="hlist inline">
<ul>
<li><a class="external text" href="//ar.wikipedia.org/wiki/"><span class="autonym" lang="ar" title="Arabic (ar:)" xml:lang="ar">العربية</span></a></li>
<li><a class="external text" href="//id.wikipedia.org/wiki/"><span class="autonym" lang="id" title="Indonesian (id:)" xml:lang="id">Bahasa Indonesia</span></a></li>
<li><a class="external text" href="//ms.wikipedia.org/wiki/"><span class="autonym" lang="ms" title="Malay (ms:)" xml:lang="ms">Bahasa Melayu</span></a></li>
<li><a class="external text" href="//ca.wikipedia.org/wiki/"><span class="autonym" lang="ca" title="Catalan (ca:)" xml:lang="ca">Català</span></a></li>
<li><a class="external text" href="//cs.wikipedia.org/wiki/"><span class="autonym" lang="cs" title="Czech (cs:)" xml:lang="cs">Čeština</span></a></li>
<li><a class="external text" href="//fa.wikipedia.org/wiki/"><span class="autonym" lang="fa" title="Persian (fa:)" xml:lang="fa">فارسی</span></a></li>
<li><a class="external text" href="//ko.wikipedia.org/wiki/"><span class="autonym" lang="ko" title="Korean (ko:)" xml:lang="ko">한국어</span></a></li>
<li><a class="external text" href="//hu.wikipedia.org/wiki/"><span class="autonym" lang="hu" title="Hungarian (hu:)" xml:lang="hu">Magyar</span></a></li>
<li><a class="external text" href="//no.wikipedia.org/wiki/"><span class="autonym" lang="no" title="Norwegian (no:)" xml:lang="no">Norsk bokmål</span></a></li>
<li><a class="external text" href="//pt.wikipedia.org/wiki/"><span class="autonym" lang="pt" title="Portuguese (pt:)" xml:lang="pt">Português</span></a></li>
<li><a class="external text" href="//ro.wikipedia.org/wiki/"><span class="autonym" lang="ro" title="Romanian (ro:)" xml:lang="ro">Română</span></a></li>
<li><a class="external text" href="//sr.wikipedia.org/wiki/"><span class="autonym" lang="sr" title="Serbian (sr:)" xml:lang="sr">Srpski / српски</span></a></li>
<li><a class="external text" href="//sh.wikipedia.org/wiki/"><span class="autonym" lang="sh" title="Serbo-Croatian (sh:)" xml:lang="sh">Srpskohrvatski / српскохрватски</span></a></li>
<li><a class="external text" href="//fi.wikipedia.org/wiki/"><span class="autonym" lang="fi" title="Finnish (fi:)" xml:lang="fi">Suomi</span></a></li>
<li><a class="external text" href="//tr.wikipedia.org/wiki/"><span class="autonym" lang="tr" title="Turkish (tr:)" xml:lang="tr">Türkçe</span></a></li>
<li><a class="external text" href="//uk.wikipedia.org/wiki/"><span class="autonym" lang="uk" title="Ukrainian (uk:)" xml:lang="uk">Українська</span></a></li>
<li><a class="external text" href="//zh.wikipedia.org/wiki/"><span class="autonym" lang="zh" title="Chinese (zh:)" xml:lang="zh">中文</span></a></li>
</ul>
</div>
</li>
<li id="lang-1">More than 50,000 articles:
<div class="hlist inline">
<ul>
<li><a class="external text" href="//bs.wikipedia.org/wiki/"><span class="autonym" lang="bs" title="Bosnian (bs:)" xml:lang="bs">Bosanski</span></a></li>
<li><a class="external text" href="//bg.wikipedia.org/wiki/"><span class="autonym" lang="bg" title="Bulgarian (bg:)" xml:lang="bg">Български</span></a></li>
<li><a class="external text" href="//da.wikipedia.org/wiki/"><span class="autonym" lang="da" title="Danish (da:)" xml:lang="da">Dansk</span></a></li>
<li><a class="external text" href="//et.wikipedia.org/wiki/"><span class="autonym" lang="et" title="Estonian (et:)" xml:lang="et">Eesti</span></a></li>
<li><a class="external text" href="//el.wikipedia.org/wiki/"><span class="autonym" lang="el" title="Greek (el:)" xml:lang="el">Ελληνικά</span></a></li>
<li><a class="external text" href="//simple.wikipedia.org/wiki/"><span class="autonym" lang="simple" title="Simple English (simple:)" xml:lang="simple">English (simple)</span></a></li>
<li><a class="external text" href="//eo.wikipedia.org/wiki/"><span class="autonym" lang="eo" title="Esperanto (eo:)" xml:lang="eo">Esperanto</span></a></li>
<li><a class="external text" href="//eu.wikipedia.org/wiki/"><span class="autonym" lang="eu" title="Basque (eu:)" xml:lang="eu">Euskara</span></a></li>
<li><a class="external text" href="//gl.wikipedia.org/wiki/"><span class="autonym" lang="gl" title="Galician (gl:)" xml:lang="gl">Galego</span></a></li>
<li><a class="external text" href="//he.wikipedia.org/wiki/"><span class="autonym" lang="he" title="Hebrew (he:)" xml:lang="he">עברית</span></a></li>
<li><a class="external text" href="//hr.wikipedia.org/wiki/"><span class="autonym" lang="hr" title="Croatian (hr:)" xml:lang="hr">Hrvatski</span></a></li>
<li><a class="external text" href="//lv.wikipedia.org/wiki/"><span class="autonym" lang="lv" title="Latvian (lv:)" xml:lang="lv">Latviešu</span></a></li>
<li><a class="external text" href="//lt.wikipedia.org/wiki/"><span class="autonym" lang="lt" title="Lithuanian (lt:)" xml:lang="lt">Lietuvių</span></a></li>
<li><a class="external text" href="//nn.wikipedia.org/wiki/"><span class="autonym" lang="nn" title="Norwegian Nynorsk (nn:)" xml:lang="nn">Norsk nynorsk</span></a></li>
<li><a class="external text" href="//sk.wikipedia.org/wiki/"><span class="autonym" lang="sk" title="Slovak (sk:)" xml:lang="sk">Slovenčina</span></a></li>
<li><a class="external text" href="//sl.wikipedia.org/wiki/"><span class="autonym" lang="sl" title="Slovenian (sl:)" xml:lang="sl">Slovenščina</span></a></li>
<li><a class="external text" href="//th.wikipedia.org/wiki/"><span class="autonym" lang="th" title="Thai (th:)" xml:lang="th">ไทย</span></a></li>
</ul>
</div>
</li>
</ul>
<span class="autonym" lang="de" title="German (de:)" xml:lang="de">Deutsch</span>
<span class="autonym" lang="es" title="Spanish (es:)" xml:lang="es">Español</span>
<span class="autonym" lang="fr" title="French (fr:)" xml:lang="fr">Français</span>
<span class="autonym" lang="it" title="Italian (it:)" xml:lang="it">Italiano</span>
<span class="autonym" lang="nl" title="Dutch (nl:)" xml:lang="nl">Nederlands</span>
<span class="autonym" lang="ja" title="Japanese (ja:)" xml:lang="ja">日本語</span>
<span class="autonym" lang="pl" title="Polish (pl:)" xml:lang="pl">Polski</span>
<span class="autonym" lang="ru" title="Russian (ru:)" xml:lang="ru">Русский</span>
<span class="autonym" lang="sv" title="Swedish (sv:)" xml:lang="sv">Svenska</span>
<span class="autonym" lang="vi" title="Vietnamese (vi:)" xml:lang="vi">Tiếng Việt</span>
<span class="autonym" lang="ar" title="Arabic (ar:)" xml:lang="ar">العربية</span>
<span class="autonym" lang="id" title="Indonesian (id:)" xml:lang="id">Bahasa Indonesia</span>
<span class="autonym" lang="ms" title="Malay (ms:)" xml:lang="ms">Bahasa Melayu</span>
<span class="autonym" lang="ca" title="Catalan (ca:)" xml:lang="ca">Català</span>
<span class="autonym" lang="cs" title="Czech (cs:)" xml:lang="cs">Čeština</span>
<span class="autonym" lang="fa" title="Persian (fa:)" xml:lang="fa">فارسی</span>
<span class="autonym" lang="ko" title="Korean (ko:)" xml:lang="ko">한국어</span>
<span class="autonym" lang="hu" title="Hungarian (hu:)" xml:lang="hu">Magyar</span>
<span class="autonym" lang="no" title="Norwegian (no:)" xml:lang="no">Norsk bokmål</span>
<span class="autonym" lang="pt" title="Portuguese (pt:)" xml:lang="pt">Português</span>
<span class="autonym" lang="ro" title="Romanian (ro:)" xml:lang="ro">Română</span>
<span class="autonym" lang="sr" title="Serbian (sr:)" xml:lang="sr">Srpski / српски</span>
<span class="autonym" lang="sh" title="Serbo-Croatian (sh:)" xml:lang="sh">Srpskohrvatski / српскохрватски</span>
<span class="autonym" lang="fi" title="Finnish (fi:)" xml:lang="fi">Suomi</span>
<span class="autonym" lang="tr" title="Turkish (tr:)" xml:lang="tr">Türkçe</span>
<span class="autonym" lang="uk" title="Ukrainian (uk:)" xml:lang="uk">Українська</span>
<span class="autonym" lang="zh" title="Chinese (zh:)" xml:lang="zh">中文</span>
<span class="autonym" lang="bs" title="Bosnian (bs:)" xml:lang="bs">Bosanski</span>
<span class="autonym" lang="bg" title="Bulgarian (bg:)" xml:lang="bg">Български</span>
<span class="autonym" lang="da" title="Danish (da:)" xml:lang="da">Dansk</span>
<span class="autonym" lang="et" title="Estonian (et:)" xml:lang="et">Eesti</span>
<span class="autonym" lang="el" title="Greek (el:)" xml:lang="el">Ελληνικά</span>
<span class="autonym" lang="simple" title="Simple English (simple:)" xml:lang="simple">English (simple)</span>
<span class="autonym" lang="eo" title="Esperanto (eo:)" xml:lang="eo">Esperanto</span>
<span class="autonym" lang="eu" title="Basque (eu:)" xml:lang="eu">Euskara</span>
<span class="autonym" lang="gl" title="Galician (gl:)" xml:lang="gl">Galego</span>
<span class="autonym" lang="he" title="Hebrew (he:)" xml:lang="he">עברית</span>
<span class="autonym" lang="hr" title="Croatian (hr:)" xml:lang="hr">Hrvatski</span>
<span class="autonym" lang="lv" title="Latvian (lv:)" xml:lang="lv">Latviešu</span>
<span class="autonym" lang="lt" title="Lithuanian (lt:)" xml:lang="lt">Lietuvių</span>
<span class="autonym" lang="nn" title="Norwegian Nynorsk (nn:)" xml:lang="nn">Norsk nynorsk</span>
<span class="autonym" lang="sk" title="Slovak (sk:)" xml:lang="sk">Slovenčina</span>
<span class="autonym" lang="sl" title="Slovenian (sl:)" xml:lang="sl">Slovenščina</span>
<span class="autonym" lang="th" title="Thai (th:)" xml:lang="th">ไทย</span>
<noscript><img alt="" height="1" src="//en.wikipedia.org/wiki/Special:CentralAutoLogin/start?type=1x1" style="border: none; position: absolute;" title="" width="1"/></noscript>
<div class="catlinks catlinks-allhidden" data-mw="interface" id="catlinks"></div>
<li id="pt-anoncontribs"><a accesskey="y" href="/wiki/Special:MyContributions" title="A list of edits made from this IP address [y]">Contributions</a></li>
<li id="pt-createaccount"><a href="/w/index.php?title=Special:UserLogin&returnto=Main+Page&type=signup" title="You are encouraged to create an account and log in; however, it is not mandatory">Create account</a></li>
<li id="pt-login"><a accesskey="o" href="/w/index.php?title=Special:UserLogin&returnto=Main+Page" title="You're encouraged to log in; however, it's not mandatory. [o]">Log in</a></li>
<div id="left-navigation">
<div aria-labelledby="p-namespaces-label" class="vectorTabs" id="p-namespaces" role="navigation">
<h3 id="p-namespaces-label">Namespaces</h3>
<ul>
<li class="selected" id="ca-nstab-main"><span><a accesskey="c" href="/wiki/Main_Page" title="View the content page [c]">Main Page</a></span></li>
<li id="ca-talk"><span><a accesskey="t" href="/wiki/Talk:Main_Page" rel="discussion" title="Discussion about the content page [t]">Talk</a></span></li>
</ul>
</div>
<div aria-labelledby="p-variants-label" class="vectorMenu emptyPortlet" id="p-variants" role="navigation">
<h3 id="p-variants-label">
<span>Variants</span><a href="#"></a>
</h3>
<div class="menu">
<ul>
</ul>
</div>
</div>
</div>
<li id="ca-talk"><span><a accesskey="t" href="/wiki/Talk:Main_Page" rel="discussion" title="Discussion about the content page [t]">Talk</a></span></li>
<div aria-labelledby="p-variants-label" class="vectorMenu emptyPortlet" id="p-variants" role="navigation">
<h3 id="p-variants-label">
<span>Variants</span><a href="#"></a>
</h3>
<div class="menu">
<ul>
</ul>
</div>
</div>
<div class="menu">
<ul>
</ul>
</div>
<li id="ca-viewsource"><span><a accesskey="e" href="/w/index.php?title=Main_Page&action=edit" title="This page is protected.
You can view its source [e]">View source</a></span></li>
<li class="collapsible" id="ca-history"><span><a accesskey="h" href="/w/index.php?title=Main_Page&action=history" title="Past revisions of this page [h]">View history</a></span></li>
<div aria-labelledby="p-cactions-label" class="vectorMenu emptyPortlet" id="p-cactions" role="navigation">
<h3 id="p-cactions-label"><span>More</span><a href="#"></a></h3>
<div class="menu">
<ul>
</ul>
</div>
</div>
<div class="menu">
<ul>
</ul>
</div>
<div aria-labelledby="p-navigation-label" class="portal" id="p-navigation" role="navigation">
<h3 id="p-navigation-label">Navigation</h3>
<div class="body">
<ul>
<li id="n-mainpage-description"><a accesskey="z" href="/wiki/Main_Page" title="Visit the main page [z]">Main page</a></li><li id="n-contents"><a href="/wiki/Portal:Contents" title="Guides to browsing Wikipedia">Contents</a></li><li id="n-featuredcontent"><a href="/wiki/Portal:Featured_content" title="Featured content – the best of Wikipedia">Featured content</a></li><li id="n-currentevents"><a href="/wiki/Portal:Current_events" title="Find background information on current events">Current events</a></li><li id="n-randompage"><a accesskey="x" href="/wiki/Special:Random" title="Load a random article [x]">Random article</a></li><li id="n-sitesupport"><a href="https://donate.wikimedia.org/wiki/Special:FundraiserRedirector?utm_source=donate&utm_medium=sidebar&utm_campaign=C13_en.wikipedia.org&uselang=en" title="Support us">Donate to Wikipedia</a></li><li id="n-shoplink"><a href="//shop.wikimedia.org" title="Visit the Wikipedia store">Wikipedia store</a></li> </ul>
</div>
</div>
<li id="n-contents"><a href="/wiki/Portal:Contents" title="Guides to browsing Wikipedia">Contents</a></li>
<li id="n-featuredcontent"><a href="/wiki/Portal:Featured_content" title="Featured content – the best of Wikipedia">Featured content</a></li>
<li id="n-currentevents"><a href="/wiki/Portal:Current_events" title="Find background information on current events">Current events</a></li>
<li id="n-randompage"><a accesskey="x" href="/wiki/Special:Random" title="Load a random article [x]">Random article</a></li>
<li id="n-sitesupport"><a href="https://donate.wikimedia.org/wiki/Special:FundraiserRedirector?utm_source=donate&utm_medium=sidebar&utm_campaign=C13_en.wikipedia.org&uselang=en" title="Support us">Donate to Wikipedia</a></li>
<li id="n-shoplink"><a href="//shop.wikimedia.org" title="Visit the Wikipedia store">Wikipedia store</a></li>
<div aria-labelledby="p-interaction-label" class="portal" id="p-interaction" role="navigation">
<h3 id="p-interaction-label">Interaction</h3>
<div class="body">
<ul>
<li id="n-help"><a href="/wiki/Help:Contents" title="Guidance on how to use and edit Wikipedia">Help</a></li><li id="n-aboutsite"><a href="/wiki/Wikipedia:About" title="Find out about Wikipedia">About Wikipedia</a></li><li id="n-portal"><a href="/wiki/Wikipedia:Community_portal" title="About the project, what you can do, where to find things">Community portal</a></li><li id="n-recentchanges"><a accesskey="r" href="/wiki/Special:RecentChanges" title="A list of recent changes in the wiki [r]">Recent changes</a></li><li id="n-contactpage"><a href="//en.wikipedia.org/wiki/Wikipedia:Contact_us" title="How to contact Wikipedia">Contact page</a></li> </ul>
</div>
</div>
<li id="n-aboutsite"><a href="/wiki/Wikipedia:About" title="Find out about Wikipedia">About Wikipedia</a></li>
<li id="n-portal"><a href="/wiki/Wikipedia:Community_portal" title="About the project, what you can do, where to find things">Community portal</a></li>
<li id="n-recentchanges"><a accesskey="r" href="/wiki/Special:RecentChanges" title="A list of recent changes in the wiki [r]">Recent changes</a></li>
<li id="n-contactpage"><a href="//en.wikipedia.org/wiki/Wikipedia:Contact_us" title="How to contact Wikipedia">Contact page</a></li>
<div aria-labelledby="p-tb-label" class="portal" id="p-tb" role="navigation">
<h3 id="p-tb-label">Tools</h3>
<div class="body">
<ul>
<li id="t-whatlinkshere"><a accesskey="j" href="/wiki/Special:WhatLinksHere/Main_Page" title="List of all English Wikipedia pages containing links to this page [j]">What links here</a></li><li id="t-recentchangeslinked"><a accesskey="k" href="/wiki/Special:RecentChangesLinked/Main_Page" title="Recent changes in pages linked from this page [k]">Related changes</a></li><li id="t-upload"><a accesskey="u" href="/wiki/Wikipedia:File_Upload_Wizard" title="Upload files [u]">Upload file</a></li><li id="t-specialpages"><a accesskey="q" href="/wiki/Special:SpecialPages" title="A list of all special pages [q]">Special pages</a></li><li id="t-permalink"><a href="/w/index.php?title=Main_Page&oldid=696846920" title="Permanent link to this revision of the page">Permanent link</a></li><li id="t-info"><a href="/w/index.php?title=Main_Page&action=info" title="More information about this page">Page information</a></li><li id="t-wikibase"><a accesskey="g" href="//www.wikidata.org/wiki/Q5296" title="Link to connected data repository item [g]">Wikidata item</a></li><li id="t-cite"><a href="/w/index.php?title=Special:CiteThisPage&page=Main_Page&id=696846920" title="Information on how to cite this page">Cite this page</a></li> </ul>
</div>
</div>
<li id="t-recentchangeslinked"><a accesskey="k" href="/wiki/Special:RecentChangesLinked/Main_Page" title="Recent changes in pages linked from this page [k]">Related changes</a></li>
<li id="t-upload"><a accesskey="u" href="/wiki/Wikipedia:File_Upload_Wizard" title="Upload files [u]">Upload file</a></li>
<li id="t-specialpages"><a accesskey="q" href="/wiki/Special:SpecialPages" title="A list of all special pages [q]">Special pages</a></li>
<li id="t-permalink"><a href="/w/index.php?title=Main_Page&oldid=696846920" title="Permanent link to this revision of the page">Permanent link</a></li>
<li id="t-info"><a href="/w/index.php?title=Main_Page&action=info" title="More information about this page">Page information</a></li>
<li id="t-wikibase"><a accesskey="g" href="//www.wikidata.org/wiki/Q5296" title="Link to connected data repository item [g]">Wikidata item</a></li>
<li id="t-cite"><a href="/w/index.php?title=Special:CiteThisPage&page=Main_Page&id=696846920" title="Information on how to cite this page">Cite this page</a></li>
<div aria-labelledby="p-coll-print_export-label" class="portal" id="p-coll-print_export" role="navigation">
<h3 id="p-coll-print_export-label">Print/export</h3>
<div class="body">
<ul>
<li id="coll-create_a_book"><a href="/w/index.php?title=Special:Book&bookcmd=book_creator&referer=Main+Page">Create a book</a></li><li id="coll-download-as-rdf2latex"><a href="/w/index.php?title=Special:Book&bookcmd=render_article&arttitle=Main+Page&returnto=Main+Page&oldid=696846920&writer=rdf2latex">Download as PDF</a></li><li id="t-print"><a accesskey="p" href="/w/index.php?title=Main_Page&printable=yes" title="Printable version of this page [p]">Printable version</a></li> </ul>
</div>
</div>
<li id="coll-download-as-rdf2latex"><a href="/w/index.php?title=Special:Book&bookcmd=render_article&arttitle=Main+Page&returnto=Main+Page&oldid=696846920&writer=rdf2latex">Download as PDF</a></li>
<li id="t-print"><a accesskey="p" href="/w/index.php?title=Main_Page&printable=yes" title="Printable version of this page [p]">Printable version</a></li>
<div aria-labelledby="p-wikibase-otherprojects-label" class="portal" id="p-wikibase-otherprojects" role="navigation">
<h3 id="p-wikibase-otherprojects-label">In other projects</h3>
<div class="body">
<ul>
<li class="wb-otherproject-link wb-otherproject-commons"><a href="https://commons.wikimedia.org/wiki/Main_Page" hreflang="en">Wikimedia Commons</a></li><li class="wb-otherproject-link wb-otherproject-meta"><a href="https://meta.wikimedia.org/wiki/Main_Page" hreflang="en">Meta-Wiki</a></li><li class="wb-otherproject-link wb-otherproject-species"><a href="https://species.wikimedia.org/wiki/Main_Page" hreflang="en">Wikispecies</a></li><li class="wb-otherproject-link wb-otherproject-wikibooks"><a href="https://en.wikibooks.org/wiki/Main_Page" hreflang="en">Wikibooks</a></li><li class="wb-otherproject-link wb-otherproject-wikidata"><a href="https://www.wikidata.org/wiki/Wikidata:Main_Page" hreflang="en">Wikidata</a></li><li class="wb-otherproject-link wb-otherproject-wikinews"><a href="https://en.wikinews.org/wiki/Main_Page" hreflang="en">Wikinews</a></li><li class="wb-otherproject-link wb-otherproject-wikiquote"><a href="https://en.wikiquote.org/wiki/Main_Page" hreflang="en">Wikiquote</a></li><li class="wb-otherproject-link wb-otherproject-wikisource"><a href="https://en.wikisource.org/wiki/Main_Page" hreflang="en">Wikisource</a></li><li class="wb-otherproject-link wb-otherproject-wikiversity"><a href="https://en.wikiversity.org/wiki/Wikiversity:Main_Page" hreflang="en">Wikiversity</a></li><li class="wb-otherproject-link wb-otherproject-wikivoyage"><a href="https://en.wikivoyage.org/wiki/Main_Page" hreflang="en">Wikivoyage</a></li> </ul>
</div>
</div>
<li class="wb-otherproject-link wb-otherproject-meta"><a href="https://meta.wikimedia.org/wiki/Main_Page" hreflang="en">Meta-Wiki</a></li>
<li class="wb-otherproject-link wb-otherproject-species"><a href="https://species.wikimedia.org/wiki/Main_Page" hreflang="en">Wikispecies</a></li>
<li class="wb-otherproject-link wb-otherproject-wikibooks"><a href="https://en.wikibooks.org/wiki/Main_Page" hreflang="en">Wikibooks</a></li>
<li class="wb-otherproject-link wb-otherproject-wikidata"><a href="https://www.wikidata.org/wiki/Wikidata:Main_Page" hreflang="en">Wikidata</a></li>
<li class="wb-otherproject-link wb-otherproject-wikinews"><a href="https://en.wikinews.org/wiki/Main_Page" hreflang="en">Wikinews</a></li>
<li class="wb-otherproject-link wb-otherproject-wikiquote"><a href="https://en.wikiquote.org/wiki/Main_Page" hreflang="en">Wikiquote</a></li>
<li class="wb-otherproject-link wb-otherproject-wikisource"><a href="https://en.wikisource.org/wiki/Main_Page" hreflang="en">Wikisource</a></li>
<li class="wb-otherproject-link wb-otherproject-wikiversity"><a href="https://en.wikiversity.org/wiki/Wikiversity:Main_Page" hreflang="en">Wikiversity</a></li>
<li class="wb-otherproject-link wb-otherproject-wikivoyage"><a href="https://en.wikivoyage.org/wiki/Main_Page" hreflang="en">Wikivoyage</a></li>
<div aria-labelledby="p-lang-label" class="portal" id="p-lang" role="navigation">
<h3 id="p-lang-label">Languages</h3>
<div class="body">
<ul>
<li class="interlanguage-link interwiki-simple"><a href="//simple.wikipedia.org/wiki/" hreflang="simple" lang="simple" title="Simple English">Simple English</a></li><li class="interlanguage-link interwiki-ar"><a href="//ar.wikipedia.org/wiki/" hreflang="ar" lang="ar" title="Arabic">العربية</a></li><li class="interlanguage-link interwiki-id"><a href="//id.wikipedia.org/wiki/" hreflang="id" lang="id" title="Indonesian">Bahasa Indonesia</a></li><li class="interlanguage-link interwiki-ms"><a href="//ms.wikipedia.org/wiki/" hreflang="ms" lang="ms" title="Malay">Bahasa Melayu</a></li><li class="interlanguage-link interwiki-bs"><a href="//bs.wikipedia.org/wiki/" hreflang="bs" lang="bs" title="Bosnian">Bosanski</a></li><li class="interlanguage-link interwiki-bg"><a href="//bg.wikipedia.org/wiki/" hreflang="bg" lang="bg" title="Bulgarian">Български</a></li><li class="interlanguage-link interwiki-ca"><a href="//ca.wikipedia.org/wiki/" hreflang="ca" lang="ca" title="Catalan">Català</a></li><li class="interlanguage-link interwiki-cs"><a href="//cs.wikipedia.org/wiki/" hreflang="cs" lang="cs" title="Czech">Čeština</a></li><li class="interlanguage-link interwiki-da"><a href="//da.wikipedia.org/wiki/" hreflang="da" lang="da" title="Danish">Dansk</a></li><li class="interlanguage-link interwiki-de"><a href="//de.wikipedia.org/wiki/" hreflang="de" lang="de" title="German">Deutsch</a></li><li class="interlanguage-link interwiki-et"><a href="//et.wikipedia.org/wiki/" hreflang="et" lang="et" title="Estonian">Eesti</a></li><li class="interlanguage-link interwiki-el"><a href="//el.wikipedia.org/wiki/" hreflang="el" lang="el" title="Greek">Ελληνικά</a></li><li class="interlanguage-link interwiki-es"><a href="//es.wikipedia.org/wiki/" hreflang="es" lang="es" title="Spanish">Español</a></li><li class="interlanguage-link interwiki-eo"><a href="//eo.wikipedia.org/wiki/" hreflang="eo" lang="eo" title="Esperanto">Esperanto</a></li><li class="interlanguage-link interwiki-eu"><a href="//eu.wikipedia.org/wiki/" hreflang="eu" lang="eu" title="Basque">Euskara</a></li><li class="interlanguage-link interwiki-fa"><a href="//fa.wikipedia.org/wiki/" hreflang="fa" lang="fa" title="Persian">فارسی</a></li><li class="interlanguage-link interwiki-fr"><a href="//fr.wikipedia.org/wiki/" hreflang="fr" lang="fr" title="French">Français</a></li><li class="interlanguage-link interwiki-gl"><a href="//gl.wikipedia.org/wiki/" hreflang="gl" lang="gl" title="Galician">Galego</a></li><li class="interlanguage-link interwiki-ko"><a href="//ko.wikipedia.org/wiki/" hreflang="ko" lang="ko" title="Korean">한국어</a></li><li class="interlanguage-link interwiki-he"><a href="//he.wikipedia.org/wiki/" hreflang="he" lang="he" title="Hebrew">עברית</a></li><li class="interlanguage-link interwiki-hr"><a href="//hr.wikipedia.org/wiki/" hreflang="hr" lang="hr" title="Croatian">Hrvatski</a></li><li class="interlanguage-link interwiki-it"><a href="//it.wikipedia.org/wiki/" hreflang="it" lang="it" title="Italian">Italiano</a></li><li class="interlanguage-link interwiki-ka"><a href="//ka.wikipedia.org/wiki/" hreflang="ka" lang="ka" title="Georgian">ქართული</a></li><li class="interlanguage-link interwiki-lv"><a href="//lv.wikipedia.org/wiki/" hreflang="lv" lang="lv" title="Latvian">Latviešu</a></li><li class="interlanguage-link interwiki-lt"><a href="//lt.wikipedia.org/wiki/" hreflang="lt" lang="lt" title="Lithuanian">Lietuvių</a></li><li class="interlanguage-link interwiki-hu"><a href="//hu.wikipedia.org/wiki/" hreflang="hu" lang="hu" 
title="Hungarian">Magyar</a></li><li class="interlanguage-link interwiki-nl"><a href="//nl.wikipedia.org/wiki/" hreflang="nl" lang="nl" title="Dutch">Nederlands</a></li><li class="interlanguage-link interwiki-ja"><a href="//ja.wikipedia.org/wiki/" hreflang="ja" lang="ja" title="Japanese">日本語</a></li><li class="interlanguage-link interwiki-no"><a href="//no.wikipedia.org/wiki/" hreflang="no" lang="no" title="Norwegian">Norsk bokmål</a></li><li class="interlanguage-link interwiki-nn"><a href="//nn.wikipedia.org/wiki/" hreflang="nn" lang="nn" title="Norwegian Nynorsk">Norsk nynorsk</a></li><li class="interlanguage-link interwiki-pl"><a href="//pl.wikipedia.org/wiki/" hreflang="pl" lang="pl" title="Polish">Polski</a></li><li class="interlanguage-link interwiki-pt"><a href="//pt.wikipedia.org/wiki/" hreflang="pt" lang="pt" title="Portuguese">Português</a></li><li class="interlanguage-link interwiki-ro"><a href="//ro.wikipedia.org/wiki/" hreflang="ro" lang="ro" title="Romanian">Română</a></li><li class="interlanguage-link interwiki-ru"><a href="//ru.wikipedia.org/wiki/" hreflang="ru" lang="ru" title="Russian">Русский</a></li><li class="interlanguage-link interwiki-sk"><a href="//sk.wikipedia.org/wiki/" hreflang="sk" lang="sk" title="Slovak">Slovenčina</a></li><li class="interlanguage-link interwiki-sl"><a href="//sl.wikipedia.org/wiki/" hreflang="sl" lang="sl" title="Slovenian">Slovenščina</a></li><li class="interlanguage-link interwiki-sr"><a href="//sr.wikipedia.org/wiki/" hreflang="sr" lang="sr" title="Serbian">Српски / srpski</a></li><li class="interlanguage-link interwiki-sh"><a href="//sh.wikipedia.org/wiki/" hreflang="sh" lang="sh" title="Serbo-Croatian">Srpskohrvatski / српскохрватски</a></li><li class="interlanguage-link interwiki-fi"><a href="//fi.wikipedia.org/wiki/" hreflang="fi" lang="fi" title="Finnish">Suomi</a></li><li class="interlanguage-link interwiki-sv"><a href="//sv.wikipedia.org/wiki/" hreflang="sv" lang="sv" title="Swedish">Svenska</a></li><li class="interlanguage-link interwiki-th"><a href="//th.wikipedia.org/wiki/" hreflang="th" lang="th" title="Thai">ไทย</a></li><li class="interlanguage-link interwiki-vi"><a href="//vi.wikipedia.org/wiki/" hreflang="vi" lang="vi" title="Vietnamese">Tiếng Việt</a></li><li class="interlanguage-link interwiki-tr"><a href="//tr.wikipedia.org/wiki/" hreflang="tr" lang="tr" title="Turkish">Türkçe</a></li><li class="interlanguage-link interwiki-uk"><a href="//uk.wikipedia.org/wiki/" hreflang="uk" lang="uk" title="Ukrainian">Українська</a></li><li class="interlanguage-link interwiki-zh"><a href="//zh.wikipedia.org/wiki/" hreflang="zh" lang="zh" title="Chinese">中文</a></li><li class="uls-p-lang-dummy"><a href="#"></a></li> </ul>
</div>
</div>
<li class="interlanguage-link interwiki-ar"><a href="//ar.wikipedia.org/wiki/" hreflang="ar" lang="ar" title="Arabic">العربية</a></li>
<li class="interlanguage-link interwiki-id"><a href="//id.wikipedia.org/wiki/" hreflang="id" lang="id" title="Indonesian">Bahasa Indonesia</a></li>
<li class="interlanguage-link interwiki-ms"><a href="//ms.wikipedia.org/wiki/" hreflang="ms" lang="ms" title="Malay">Bahasa Melayu</a></li>
<li class="interlanguage-link interwiki-bs"><a href="//bs.wikipedia.org/wiki/" hreflang="bs" lang="bs" title="Bosnian">Bosanski</a></li>
<li class="interlanguage-link interwiki-bg"><a href="//bg.wikipedia.org/wiki/" hreflang="bg" lang="bg" title="Bulgarian">Български</a></li>
<li class="interlanguage-link interwiki-ca"><a href="//ca.wikipedia.org/wiki/" hreflang="ca" lang="ca" title="Catalan">Català</a></li>
<li class="interlanguage-link interwiki-cs"><a href="//cs.wikipedia.org/wiki/" hreflang="cs" lang="cs" title="Czech">Čeština</a></li>
<li class="interlanguage-link interwiki-da"><a href="//da.wikipedia.org/wiki/" hreflang="da" lang="da" title="Danish">Dansk</a></li>
<li class="interlanguage-link interwiki-de"><a href="//de.wikipedia.org/wiki/" hreflang="de" lang="de" title="German">Deutsch</a></li>
<li class="interlanguage-link interwiki-et"><a href="//et.wikipedia.org/wiki/" hreflang="et" lang="et" title="Estonian">Eesti</a></li>
<li class="interlanguage-link interwiki-el"><a href="//el.wikipedia.org/wiki/" hreflang="el" lang="el" title="Greek">Ελληνικά</a></li>
<li class="interlanguage-link interwiki-es"><a href="//es.wikipedia.org/wiki/" hreflang="es" lang="es" title="Spanish">Español</a></li>
<li class="interlanguage-link interwiki-eo"><a href="//eo.wikipedia.org/wiki/" hreflang="eo" lang="eo" title="Esperanto">Esperanto</a></li>
<li class="interlanguage-link interwiki-eu"><a href="//eu.wikipedia.org/wiki/" hreflang="eu" lang="eu" title="Basque">Euskara</a></li>
<li class="interlanguage-link interwiki-fa"><a href="//fa.wikipedia.org/wiki/" hreflang="fa" lang="fa" title="Persian">فارسی</a></li>
<li class="interlanguage-link interwiki-fr"><a href="//fr.wikipedia.org/wiki/" hreflang="fr" lang="fr" title="French">Français</a></li>
<li class="interlanguage-link interwiki-gl"><a href="//gl.wikipedia.org/wiki/" hreflang="gl" lang="gl" title="Galician">Galego</a></li>
<li class="interlanguage-link interwiki-ko"><a href="//ko.wikipedia.org/wiki/" hreflang="ko" lang="ko" title="Korean">한국어</a></li>
<li class="interlanguage-link interwiki-he"><a href="//he.wikipedia.org/wiki/" hreflang="he" lang="he" title="Hebrew">עברית</a></li>
<li class="interlanguage-link interwiki-hr"><a href="//hr.wikipedia.org/wiki/" hreflang="hr" lang="hr" title="Croatian">Hrvatski</a></li>
<li class="interlanguage-link interwiki-it"><a href="//it.wikipedia.org/wiki/" hreflang="it" lang="it" title="Italian">Italiano</a></li>
<li class="interlanguage-link interwiki-ka"><a href="//ka.wikipedia.org/wiki/" hreflang="ka" lang="ka" title="Georgian">ქართული</a></li>
<li class="interlanguage-link interwiki-lv"><a href="//lv.wikipedia.org/wiki/" hreflang="lv" lang="lv" title="Latvian">Latviešu</a></li>
<li class="interlanguage-link interwiki-lt"><a href="//lt.wikipedia.org/wiki/" hreflang="lt" lang="lt" title="Lithuanian">Lietuvių</a></li>
<li class="interlanguage-link interwiki-hu"><a href="//hu.wikipedia.org/wiki/" hreflang="hu" lang="hu" title="Hungarian">Magyar</a></li>
<li class="interlanguage-link interwiki-nl"><a href="//nl.wikipedia.org/wiki/" hreflang="nl" lang="nl" title="Dutch">Nederlands</a></li>
<li class="interlanguage-link interwiki-ja"><a href="//ja.wikipedia.org/wiki/" hreflang="ja" lang="ja" title="Japanese">日本語</a></li>
<li class="interlanguage-link interwiki-no"><a href="//no.wikipedia.org/wiki/" hreflang="no" lang="no" title="Norwegian">Norsk bokmål</a></li>
<li class="interlanguage-link interwiki-nn"><a href="//nn.wikipedia.org/wiki/" hreflang="nn" lang="nn" title="Norwegian Nynorsk">Norsk nynorsk</a></li>
<li class="interlanguage-link interwiki-pl"><a href="//pl.wikipedia.org/wiki/" hreflang="pl" lang="pl" title="Polish">Polski</a></li>
<li class="interlanguage-link interwiki-pt"><a href="//pt.wikipedia.org/wiki/" hreflang="pt" lang="pt" title="Portuguese">Português</a></li>
<li class="interlanguage-link interwiki-ro"><a href="//ro.wikipedia.org/wiki/" hreflang="ro" lang="ro" title="Romanian">Română</a></li>
<li class="interlanguage-link interwiki-ru"><a href="//ru.wikipedia.org/wiki/" hreflang="ru" lang="ru" title="Russian">Русский</a></li>
<li class="interlanguage-link interwiki-sk"><a href="//sk.wikipedia.org/wiki/" hreflang="sk" lang="sk" title="Slovak">Slovenčina</a></li>
<li class="interlanguage-link interwiki-sl"><a href="//sl.wikipedia.org/wiki/" hreflang="sl" lang="sl" title="Slovenian">Slovenščina</a></li>
<li class="interlanguage-link interwiki-sr"><a href="//sr.wikipedia.org/wiki/" hreflang="sr" lang="sr" title="Serbian">Српски / srpski</a></li>
<li class="interlanguage-link interwiki-sh"><a href="//sh.wikipedia.org/wiki/" hreflang="sh" lang="sh" title="Serbo-Croatian">Srpskohrvatski / српскохрватски</a></li>
<li class="interlanguage-link interwiki-fi"><a href="//fi.wikipedia.org/wiki/" hreflang="fi" lang="fi" title="Finnish">Suomi</a></li>
<li class="interlanguage-link interwiki-sv"><a href="//sv.wikipedia.org/wiki/" hreflang="sv" lang="sv" title="Swedish">Svenska</a></li>
<li class="interlanguage-link interwiki-th"><a href="//th.wikipedia.org/wiki/" hreflang="th" lang="th" title="Thai">ไทย</a></li>
<li class="interlanguage-link interwiki-vi"><a href="//vi.wikipedia.org/wiki/" hreflang="vi" lang="vi" title="Vietnamese">Tiếng Việt</a></li>
<li class="interlanguage-link interwiki-tr"><a href="//tr.wikipedia.org/wiki/" hreflang="tr" lang="tr" title="Turkish">Türkçe</a></li>
<li class="interlanguage-link interwiki-uk"><a href="//uk.wikipedia.org/wiki/" hreflang="uk" lang="uk" title="Ukrainian">Українська</a></li>
<li class="interlanguage-link interwiki-zh"><a href="//zh.wikipedia.org/wiki/" hreflang="zh" lang="zh" title="Chinese">中文</a></li>
<li class="uls-p-lang-dummy"><a href="#"></a></li>
<div id="footer" role="contentinfo">
<ul id="footer-info">
<li id="footer-info-lastmod"> This page was last modified on 26 December 2015, at 10:03.</li>
<li id="footer-info-copyright">Text is available under the <a href="//en.wikipedia.org/wiki/Wikipedia:Text_of_Creative_Commons_Attribution-ShareAlike_3.0_Unported_License" rel="license">Creative Commons Attribution-ShareAlike License</a><a href="//creativecommons.org/licenses/by-sa/3.0/" rel="license" style="display:none;"></a>;
additional terms may apply. By using this site, you agree to the <a href="//wikimediafoundation.org/wiki/Terms_of_Use">Terms of Use</a> and <a href="//wikimediafoundation.org/wiki/Privacy_policy">Privacy Policy</a>. Wikipedia® is a registered trademark of the <a href="//www.wikimediafoundation.org/">Wikimedia Foundation, Inc.</a>, a non-profit organization.</li>
</ul>
<ul id="footer-places">
<li id="footer-places-privacy"><a href="//wikimediafoundation.org/wiki/Privacy_policy" title="wmf:Privacy policy">Privacy policy</a></li>
<li id="footer-places-about"><a href="/wiki/Wikipedia:About" title="Wikipedia:About">About Wikipedia</a></li>
<li id="footer-places-disclaimer"><a href="/wiki/Wikipedia:General_disclaimer" title="Wikipedia:General disclaimer">Disclaimers</a></li>
<li id="footer-places-contact"><a href="//en.wikipedia.org/wiki/Wikipedia:Contact_us">Contact Wikipedia</a></li>
<li id="footer-places-developers"><a href="https://www.mediawiki.org/wiki/Special:MyLanguage/How_to_contribute">Developers</a></li>
<li id="footer-places-cookiestatement"><a href="//wikimediafoundation.org/wiki/Cookie_statement">Cookie statement</a></li>
<li id="footer-places-mobileview"><a class="noprint stopMobileRedirectToggle" href="//en.m.wikipedia.org/w/index.php?title=Main_Page&mobileaction=toggle_view_mobile">Mobile view</a></li>
</ul>
<ul class="noprint" id="footer-icons">
<li id="footer-copyrightico">
<a href="//wikimediafoundation.org/"><img alt="Wikimedia Foundation" height="31" src="/static/images/wikimedia-button.png" srcset="/static/images/wikimedia-button-1.5x.png 1.5x, /static/images/wikimedia-button-2x.png 2x" width="88"/></a> </li>
<li id="footer-poweredbyico">
<a href="//www.mediawiki.org/"><img alt="Powered by MediaWiki" height="31" src="/w/resources/assets/poweredby_mediawiki_88x31.png" srcset="/w/resources/assets/poweredby_mediawiki_132x47.png 1.5x, /w/resources/assets/poweredby_mediawiki_176x62.png 2x" width="88"/></a> </li>
</ul>
<div style="clear:both"></div>
</div>
<a href="//creativecommons.org/licenses/by-sa/3.0/" rel="license" style="display:none;"></a>
<a href="//wikimediafoundation.org/wiki/Terms_of_Use">Terms of Use</a>
<a href="//wikimediafoundation.org/wiki/Privacy_policy">Privacy Policy</a>
<a href="//www.wikimediafoundation.org/">Wikimedia Foundation, Inc.</a>
<ul id="footer-places">
<li id="footer-places-privacy"><a href="//wikimediafoundation.org/wiki/Privacy_policy" title="wmf:Privacy policy">Privacy policy</a></li>
<li id="footer-places-about"><a href="/wiki/Wikipedia:About" title="Wikipedia:About">About Wikipedia</a></li>
<li id="footer-places-disclaimer"><a href="/wiki/Wikipedia:General_disclaimer" title="Wikipedia:General disclaimer">Disclaimers</a></li>
<li id="footer-places-contact"><a href="//en.wikipedia.org/wiki/Wikipedia:Contact_us">Contact Wikipedia</a></li>
<li id="footer-places-developers"><a href="https://www.mediawiki.org/wiki/Special:MyLanguage/How_to_contribute">Developers</a></li>
<li id="footer-places-cookiestatement"><a href="//wikimediafoundation.org/wiki/Cookie_statement">Cookie statement</a></li>
<li id="footer-places-mobileview"><a class="noprint stopMobileRedirectToggle" href="//en.m.wikipedia.org/w/index.php?title=Main_Page&mobileaction=toggle_view_mobile">Mobile view</a></li>
</ul>
<li id="footer-places-about"><a href="/wiki/Wikipedia:About" title="Wikipedia:About">About Wikipedia</a></li>
<li id="footer-places-disclaimer"><a href="/wiki/Wikipedia:General_disclaimer" title="Wikipedia:General disclaimer">Disclaimers</a></li>
<li id="footer-places-contact"><a href="//en.wikipedia.org/wiki/Wikipedia:Contact_us">Contact Wikipedia</a></li>
<li id="footer-places-developers"><a href="https://www.mediawiki.org/wiki/Special:MyLanguage/How_to_contribute">Developers</a></li>
<li id="footer-places-cookiestatement"><a href="//wikimediafoundation.org/wiki/Cookie_statement">Cookie statement</a></li>
<li id="footer-places-mobileview"><a class="noprint stopMobileRedirectToggle" href="//en.m.wikipedia.org/w/index.php?title=Main_Page&mobileaction=toggle_view_mobile">Mobile view</a></li>
<ul class="noprint" id="footer-icons">
<li id="footer-copyrightico">
<a href="//wikimediafoundation.org/"><img alt="Wikimedia Foundation" height="31" src="/static/images/wikimedia-button.png" srcset="/static/images/wikimedia-button-1.5x.png 1.5x, /static/images/wikimedia-button-2x.png 2x" width="88"/></a> </li>
<li id="footer-poweredbyico">
<a href="//www.mediawiki.org/"><img alt="Powered by MediaWiki" height="31" src="/w/resources/assets/poweredby_mediawiki_88x31.png" srcset="/w/resources/assets/poweredby_mediawiki_132x47.png 1.5x, /w/resources/assets/poweredby_mediawiki_176x62.png 2x" width="88"/></a> </li>
</ul>
<img alt="Wikimedia Foundation" height="31" src="/static/images/wikimedia-button.png" srcset="/static/images/wikimedia-button-1.5x.png 1.5x, /static/images/wikimedia-button-2x.png 2x" width="88"/>
<img alt="Powered by MediaWiki" height="31" src="/w/resources/assets/poweredby_mediawiki_88x31.png" srcset="/w/resources/assets/poweredby_mediawiki_132x47.png 1.5x, /w/resources/assets/poweredby_mediawiki_176x62.png 2x" width="88"/>
###Markdown
Time for a challenge! To make sure that everyone is on the same page (and to give you a little more practice dealing with HTML), let's partner up with the person next to you and try challenge A, on using html, in the challenges directory. Creating data with web APIs: most people who think they want to do web scraping actually want to pull data down from site-supplied APIs. Using an API is better in almost every way, and really the only reason to scrape data is if: 1. the website was constructed in the 90s and does not have an API; or, 2. you are doing something illegal. If [LiveJournal has an API](http://dev.livejournal.com/), the website you are interested in probably does too. What is an API? **API** is shorthand for **A**pplication **P**rogramming **I**nterface, which is in turn computer-ese for a middleman. Think about it this way. You have a bunch of things on your computer that you want other people to be able to look at. Some of them are static documents, some of them call programs in real time, and some of them are programs themselves. Solution 1: you publish login credentials on the internet, and let anyone log into your computer. Problems: 1. people will need to know how each document and program works to be able to access their data; 2. you don't want the world looking at your browser history. Solution 2: you paste everything into HTML and publish it on the internet. Problems: 1. this can be information overload; 2. making things dynamic can be tricky. Solution 3: you create a set of methods to act as an intermediary between the people you want to help and the things you want them to have access to. Why this is the best solution: 1. people only access what you want them to have, in the way that you want them to have it; 2. people use one language to get the things they want. Why this is still not Panglossian: 1. you will have to explain to people how to use your middleman. Twitter's API: Twitter has an API - mostly written for third-party apps - that is comparatively straightforward and gives you access to _nearly_ all of the information that Twitter has about its users, including: 1. user histories; 2. user (and tweet) location; 3. user language; 4. tweet popularity; 5. tweet spread; 6. conversation chains. Also, Twitter returns data to you in json, or **J**ava**S**cript **O**bject **N**otation. This is a very common format for passing data around http connections for browsers and servers, so many APIs return it as a datatype as well (instead of using something like xml or plain text). Luckily, json converts into native Python data structures. Specifically, every json object you get from Twitter will be a combination of nested `dicts` and `lists`, which you learned about yesterday. This makes Twitter a lot easier to manipulate in Python than html objects, for example. Here's what a tweet looks like:
###Code
import json
with open('../data/02_tweet.json','r') as f:
a_tweet = json.loads(f.read())
###Output
_____no_output_____
###Markdown
We can take a quick look at the structure by pretty printing it:
###Code
from pprint import pprint
pprint(a_tweet)
###Output
{'contributors': None,
'coordinates': None,
'created_at': 'Thu Apr 02 06:09:39 +0000 2015',
'entities': {'hashtags': [], 'symbols': [], 'urls': [], 'user_mentions': []},
'favorite_count': 0,
'favorited': False,
'geo': None,
'id': 583511591334719488,
'id_str': '583511591334719488',
'in_reply_to_screen_name': None,
'in_reply_to_status_id': None,
'in_reply_to_status_id_str': None,
'in_reply_to_user_id': None,
'in_reply_to_user_id_str': None,
'lang': 'ht',
'place': None,
'retweet_count': 0,
'retweeted': False,
'source': '<a href="http://twitter.com" rel="nofollow">Twitter Web Client</a>',
'text': '.IPA rettiwT eht tuoba nraeL',
'truncated': False,
'user': {'contributors_enabled': False,
'created_at': 'Thu Apr 02 05:54:25 +0000 2015',
'default_profile': True,
'default_profile_image': False,
'description': '',
'entities': {'description': {'urls': []}},
'favourites_count': 0,
'follow_request_sent': False,
'followers_count': 0,
'following': False,
'friends_count': 0,
'geo_enabled': False,
'id': 3129088320,
'id_str': '3129088320',
'is_translation_enabled': False,
'is_translator': False,
'lang': 'en',
'listed_count': 0,
'location': '',
'name': 'Yelekreb Bald',
'notifications': False,
'profile_background_color': 'C0DEED',
'profile_background_image_url': 'http://abs.twimg.com/images/themes/theme1/bg.png',
'profile_background_image_url_https': 'https://abs.twimg.com/images/themes/theme1/bg.png',
'profile_background_tile': False,
'profile_image_url': 'http://pbs.twimg.com/profile_images/583509317476712449/mkd8KGeu_normal.jpg',
'profile_image_url_https': 'https://pbs.twimg.com/profile_images/583509317476712449/mkd8KGeu_normal.jpg',
'profile_link_color': '0084B4',
'profile_location': None,
'profile_sidebar_border_color': 'C0DEED',
'profile_sidebar_fill_color': 'DDEEF6',
'profile_text_color': '333333',
'profile_use_background_image': True,
'protected': False,
'screen_name': 'tob_pohskrow',
'statuses_count': 1,
'time_zone': None,
'url': None,
'utc_offset': None,
'verified': False}}
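###Markdown
Since the parsed tweet is now just nested `dicts` and `lists`, individual fields come out with ordinary indexing. For example, the tweet text and the author's screen name (both visible in the pretty-printed structure above):
###Code
# pull individual fields out of the nested structure
print(a_tweet['text'])
print(a_tweet['user']['screen_name'])
###Output
_____no_output_____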
###Markdown
Time for a challenge! Let's see how much you remember about lists and dicts from yesterday. Go into the challenges directory and try your hand at `02_scraping/C_json.py`. Authentication: Twitter controls access to its servers via a process of authentication and authorization. Authentication is how you let Twitter know who you are, in a way that is very hard to fake. Authorization is how the account owner (which will usually be yourself unless you are writing a Twitter app) controls what you are allowed to do in Twitter using their account. In Twitter, different levels of authorization require different levels of authentication. Because we want to be able to interact with everything, we'll need the highest level of authorization and the strictest level of authentication. In Twitter, this means that we need two sets of IDs (called keys or tokens) and passwords (called secrets): * consumer_key * consumer_secret * access_token_key * access_token_secret. We'll provide some for you to use, but if you want to get your own you need to create an account on Twitter with a verified phone number. Then, while signed in to your Twitter account, go to: https://apps.twitter.com/. Follow the prompts to generate your keys and access tokens. Note that getting the second ID/password pair requires that you manually set the authorization level of your app. We've stored our credentials in a separate file, which is smart. However, we have uploaded it to GitHub so that you have them too, which is not smart. **You should NEVER NEVER NEVER do this in real life.** We've stored it in YAML format, because it is more human-readable than JSON. However, once it's inside Python, these data structures behave the same way.
###Code
import yaml
with open('../etc/creds.yml', 'r') as f:
    creds = yaml.safe_load(f)  # safe_load avoids executing arbitrary YAML tags; newer PyYAML also requires an explicit loader for yaml.load
###Output
_____no_output_____
###Markdown
We're going to load these credentials into a requests module specifically designed for handling the flavor of authentication management that Twitter uses.
###Code
from requests_oauthlib import OAuth1Session
twitter = OAuth1Session(**creds)
###Output
_____no_output_____
###Markdown
That `**` syntax we just used is called a "double splat" and is a Python convenience for converting the key-value pairs of a dictionary into keyword-argument pairs to pass to a function. Accessing the API: access to Twitter's API is organized through URLs called "endpoints". An endpoint is the location at which you can submit a request for Twitter to do something for you. For example, the "endpoint" to search for specific kinds of tweets is at ```https://api.twitter.com/1.1/search/tweets.json```, whereas posting new tweets is at ```https://api.twitter.com/1.1/statuses/update.json```. For more information on the REST APIs, endpoints, and terms, check out: https://dev.twitter.com/rest/public. For the Streaming APIs: https://dev.twitter.com/streaming/overview. All APIs on Twitter are "rate-limited" - this means that you are only allowed to ask a set number of questions per unit time (to keep their servers from being overloaded). This rate varies by endpoint and authorization, so be sure to check their developer site for the action you are trying to take. For example, at the lowest level of authorization (Twitter calls this `application only`), you are allowed to make 450 search requests per 15-minute window, or about one every two seconds. At the highest level of authorization (Twitter calls this `user`) you can submit 180 requests every 15 minutes, or only about one every five seconds. > side note - Google search is the worst rate-limiting I've ever seen, with an allowance of one hundred requests per day, or about one every *nine hundred seconds*. Let's try a couple of simple API queries. We're going to specify query parameters with `params`.
###Code
search = "https://api.twitter.com/1.1/search/tweets.json"
r = twitter.get(search, params={'q' : 'technology'})
###Output
_____no_output_____
###Markdown
This has returned an http response object, which contains data like whether or not the request succeeded:
###Code
r.ok
###Output
_____no_output_____
###Markdown
You can also get the http response code, and the reason why Twitter sent you that code (these are all super important for controlling the flow of your program).
###Code
r.status_code, r.reason
###Output
_____no_output_____
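###Markdown
Twitter also reports how much of your quota is left in the response headers, which is handy for controlling the flow of a long-running program. The header names below are the ones the REST API documented at the time of writing, so treat this as a sketch rather than a guarantee:
###Code
# peek at the rate-limit headers on the last response (header names may change)
print(r.headers.get('x-rate-limit-remaining'))
print(r.headers.get('x-rate-limit-reset'))
###Output
_____no_output_____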
###Markdown
The data that we asked Twitter to send us is in r.content
###Code
r.content
###Output
_____no_output_____
###Markdown
But that's not helpful. We can extract it in python's representation of json with the `json` method:
###Code
r.json()
###Output
_____no_output_____
###Markdown
This has some helpful metadata about our request, like a url where we can get the next batch of results from Twitter for the same query:
###Code
data = r.json()
data['search_metadata']
###Output
_____no_output_____
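###Markdown
When a search has more results than one response can hold, `search_metadata` should also include a `next_results` query string; appending it to the search endpoint fetches the next page. This is a sketch - the key is only present when Twitter has more pages to give you:
###Code
# page through results using the next_results query string, if one was provided
if 'next_results' in data['search_metadata']:
    r_next = twitter.get(search + data['search_metadata']['next_results'])
    print(len(r_next.json()['statuses']))
###Output
_____no_output_____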
###Markdown
The tweets that we want are under the key "statuses"
###Code
statuses = data['statuses']
statuses[0]
###Output
_____no_output_____
###Markdown
This is one tweet. > Depending on which tweet this is, you may or may not see that Twitter automatically pulls out links and mentions and gives you their index location in the raw tweet string. Twitter gives you a whole lot of information about its users, including geographical coordinates, the device they are tweeting from, and links to their photographs. Twitter supports what it calls query operators, which modify the search behavior. For example, if you want to search for tweets where a particular user is mentioned, include the at-sign, `@`, followed by the username. To search for tweets sent to a particular user, use `to:username`. For tweets from a particular user, `from:username`. For hashtags, use `#hashtag`. For a complete set of options: https://dev.twitter.com/rest/public/search. Let's try a more complicated search:
###Code
r = twitter.get(search, params={
'q' : 'happy',
'geocode' : '37.8734855,-122.2597169,10mi'
})
r.ok
statuses = r.json()['statuses']
statuses[0]
###Output
_____no_output_____
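###Markdown
The query operators described earlier can be dropped straight into the `q` parameter. For example, tweets from one account that carry a particular hashtag (the hashtag here is just for illustration):
###Code
# combine query operators: tweets from a specific account containing a hashtag
r_op = twitter.get(search, params={'q': 'from:DLabAtBerkeley #python'})
print(r_op.ok, len(r_op.json()['statuses']))
###Output
_____no_output_____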
###Markdown
If we want to store this data somewhere, we can output it as json using the json library from above. However, if you're doing a lot of these, you'll probably want to use a database to handle everything (a minimal SQLite sketch follows the next cell).
###Code
with open('my_tweets.json', 'w') as f:
json.dump(statuses, f)
###Output
_____no_output_____
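###Markdown
As a minimal illustration of the database route, here is a SQLite sketch using only the standard library (the file and table names are made up for the example):
###Code
import sqlite3
# store each status as its id plus the raw json blob
conn = sqlite3.connect('tweets.db')
conn.execute('CREATE TABLE IF NOT EXISTS tweets (id TEXT PRIMARY KEY, raw_json TEXT)')
conn.executemany('INSERT OR REPLACE INTO tweets VALUES (?, ?)',
                 [(s['id_str'], json.dumps(s)) for s in statuses])
conn.commit()
conn.close()
###Output
_____no_output_____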
###Markdown
To post tweets, we need to use a different endpoint:
###Code
post = "https://api.twitter.com/1.1/statuses/update.json"
###Output
_____no_output_____
###Markdown
And now we can pass a new tweet (remember, Twitter calls these 'statuses') as a parameter to our post request.
###Code
r = twitter.post(post, params={
'status' : "I stole Juan's Twitter credentials"
})
r.ok
###Output
_____no_output_____
###Markdown
Other (optional) parameters include things like location and replies. Scheduling: the real beauty of bots is that they are designed to work without interaction or oversight. Imagine a situation where you want to automatically retweet everything coming out of the D-Lab's Twitter account, "@DLabAtBerkeley". You could: 1. spend the rest of your life glued to D-Lab's Twitter page and hitting refresh; or, 2. write a function. We're going to import a module called `time` that will pause our code, so that we don't hit Twitter's rate limit
###Code
import time
def retweet():
    r = twitter.get(search, params={'q': 'DLabAtBerkeley'})
    if r.ok:
        statuses = r.json()['statuses']
        for update in statuses:
            # use the loop variable to pull out the author's screen name
            username = update['user']['screen_name']
            parameters = {'status': 'HOORAY! @' + username}
            r = twitter.post(post, params=parameters)
            print(r.status_code, r.reason)
            time.sleep(5)
###Output
_____no_output_____
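###Markdown
One practical wrinkle: run repeatedly, the function above will greet the same tweets again and again. The search endpoint accepts a `since_id` parameter that restricts results to tweets newer than a given id, so a de-duplicating variant might look like this sketch, which remembers the newest id it has seen:
###Code
last_seen_id = None

def retweet_new_only():
    # only congratulate tweets we have not already seen in a previous run
    global last_seen_id
    query = {'q': 'DLabAtBerkeley'}
    if last_seen_id is not None:
        query['since_id'] = last_seen_id
    r = twitter.get(search, params=query)
    if r.ok:
        for update in r.json()['statuses']:
            last_seen_id = max(last_seen_id or 0, update['id'])
            reply = twitter.post(post, params={'status': 'HOORAY! @' + update['user']['screen_name']})
            print(reply.status_code, reply.reason)
            time.sleep(5)
###Output
_____no_output_____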
###Markdown
But you are a human that needs to eat, sleep, and be social with other humans. Luckily, Linux systems have a time-based daemon called `cron` that will run scripts like this *for you*. > People on Windows and Macs will not be able to run this. That's okay. The way that `cron` works is that it reads in files where each line has a time followed by a job (these are called cronjobs). You can edit your crontab by typing `crontab -e` into a terminal. They look like this:
###Code
with open('../etc/crontab_example', 'r') as f:
print(f.read())
###Output
# In a user's crontab, jobs run under that user
# Time is specified as <min> <hour> <day> <month> <wday>
# To specify any time, use `*`
# For unknown reasons, cronjobs fail unless the tab ends with a newline
00 08 * * 1 echo "It is 8am on Monday" >> /var/dumblog
###Markdown
This is telling `cron` to print that statement to a file called "dumblog" at 8am every Monday. It's generally frowned upon to enter jobs through crontabs because they are hard to modify without breaking them. The better solution is to put your timed command into a file and copy the file into `/etc/cron.d/`. These files look like this:
###Code
with open('../etc/crond_example', 'r') as f:
print(f.read())
###Output
#!/bin/bash
# First, make sure you specify all of the paths that you might need to run
# your task. If you aren't sure, copy the entire $PATH variable
PATH=/home/dillon/.conda/envs/py27/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
# Then, specify when you want the task to occur; the user account to run it;
# and the job
@hourly dillon cd ~/scripts; python simple.py
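###Markdown
Tying the two halves together, a `/etc/cron.d/` entry for the bot could look like the sketch below. The file contents are hypothetical - the user name and script path are made up, and the script would simply import and call the function defined above:
###Code
# a hypothetical /etc/cron.d/retweet-bot file, shown the same way as the example above
crond_retweet = """#!/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
# run the bot every 15 minutes as the (made-up) user 'dlab'
*/15 * * * * dlab cd ~/scripts; python retweet_bot.py
"""
print(crond_retweet)
###Output
_____no_output_____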
|
Analysis of opening a new shopping centre in Sydney.ipynb | ###Markdown
Final Capstone Project Analysis of opening a new shopping centre in Sydney Author: Hamid Doostmohammadi Date created: 15/03/2020 Description: Analysis of opening a new shopping centre in Sydney by web scraping and K-means Clustering _____________________________________________________________________________________________________________________________ 1.1 Importing necessary libraries
###Code
from bs4 import BeautifulSoup # Library for web scraping
import requests # Library to handle requests
import numpy as np # Library for numerical operations
import pandas as pd # Library for working with dataframes
from sklearn.cluster import KMeans # Library for machine learning
#!conda install -c conda-forge folium=0.5.0 --yes
import folium # Library for map rendering
!conda install -c conda-forge geopy --yes
from geopy.geocoders import Nominatim
#!conda install -c conda-forge geocoder --yes
import geocoder
# Matplotlib and associated plotting modules
import matplotlib.cm as cm
import matplotlib.colors as colors
print ("Libraries imported!")
###Output
Collecting package metadata (current_repodata.json): ...working... done
Solving environment: ...working... done
## Package Plan ##
environment location: C:\Users\doost\anaconda3
added / updated specs:
- geopy
The following packages will be downloaded:
package | build
---------------------------|-----------------
geographiclib-1.50 | py_0 34 KB conda-forge
geopy-1.21.0 | py_0 58 KB conda-forge
------------------------------------------------------------
Total: 92 KB
The following NEW packages will be INSTALLED:
geographiclib conda-forge/noarch::geographiclib-1.50-py_0
geopy conda-forge/noarch::geopy-1.21.0-py_0
Downloading and Extracting Packages
geopy-1.21.0 | 58 KB | | 0%
geopy-1.21.0 | 58 KB | ##7 | 27%
geopy-1.21.0 | 58 KB | ########## | 100%
geographiclib-1.50 | 34 KB | | 0%
geographiclib-1.50 | 34 KB | ########## | 100%
Preparing transaction: ...working... done
Verifying transaction: ...working... done
Executing transaction: ...working... done
Libraries imported!
###Markdown
1.2 Web scraping the list of Sydney's suburbs from Wikipedia
###Code
# Scraping the list of Sydney's suburbs from Wikipedia
List_url = "https://en.wikipedia.org/wiki/Category:Suburbs_of_Sydney"
source = requests.get(List_url).text
soup = BeautifulSoup(source, 'html.parser')
# create a list to store neighborhood data
neighborhoodList = []
# append the data into the list
for row in soup.find_all("div", class_="mw-category")[0].findAll("a"):
neighborhoodList.append(row.text)
# create a new DataFrame from the list
Syd_df = pd.DataFrame({"Neighborhood": neighborhoodList})
Syd_df.head()
Syd_df.shape
###Output
_____no_output_____
###Markdown
2. Get geographical data for Sydney's suburbs
###Code
# define a function to get coordinates
def get_latlng(neighborhood):
# initialize your variable to None
lat_lng_coords = None
# loop until you get the coordinates
while(lat_lng_coords is None):
g = geocoder.arcgis('{}, Sydney, Australia'.format(neighborhood))
lat_lng_coords = g.latlng
return lat_lng_coords
# call the function to get the coordinates, store in a new list using list comprehension
coords = [ get_latlng(neighborhood) for neighborhood in Syd_df["Neighborhood"].tolist() ]
coords
# create temporary dataframe to populate the coordinates into Latitude and Longitude
df_coords = pd.DataFrame(coords, columns=['Latitude', 'Longitude'])
# merge the coordinates into the original dataframe
Syd_df['Latitude'] = df_coords['Latitude']
Syd_df['Longitude'] = df_coords['Longitude']
Syd_df.head()
Syd_df.shape
# save the DataFrame as CSV file
Syd_df.to_csv("Syd_df.csv", index=False)
###Output
_____no_output_____
###Markdown
3. Creating a map of Sydney with labels from the dataframe
###Code
# get the coordinates of Sydney
address = 'Sydney, Australia'
geolocator = Nominatim(user_agent="my-application")
location = geolocator.geocode(address)
latitude = location.latitude
longitude = location.longitude
print('The geographical coordinates of Sydney, Australia are {}, {}.'.format(latitude, longitude))
# create map of Sydney using latitude and longitude values
map_Syd = folium.Map(location=[latitude, longitude], zoom_start=11)
# add markers to map
for lat, lng, neighborhood in zip(Syd_df['Latitude'], Syd_df['Longitude'], Syd_df['Neighborhood']):
label = '{}'.format(neighborhood)
label = folium.Popup(label, parse_html=True)
folium.CircleMarker(
[lat, lng],
radius=5,
popup=label,
color='blue',
fill=True,
fill_color='#3186cc',
fill_opacity=0.7).add_to(map_Syd)
map_Syd
###Output
_____no_output_____
###Markdown
4. Getting information from Foursquare for each neighbourhood. Foursquare API credentials:
###Code
CLIENT_ID = 'KCUQTOFTF4HZ0ROJNJTNXQJFTNFH32A1FKQOUF2QCYKLIA4X'
CLIENT_SECRET = 'JF3S4NHLZPERTTEOG4ATCPWRYTJIYKLBQ1YFEXEV2TZ3XYCW'
VERSION = '20200404'
###Output
_____no_output_____
###Markdown
Now, let's get the top 100 venues that are within a radius of 2000 meters.
###Code
radius = 2000
LIMIT = 100
venues = []
for lat, long, neighborhood in zip(Syd_df['Latitude'], Syd_df['Longitude'], Syd_df['Neighborhood']):
# create the API request URL
url = "https://api.foursquare.com/v2/venues/explore?client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}".format(
CLIENT_ID,
CLIENT_SECRET,
VERSION,
lat,
long,
radius,
LIMIT)
# make the GET request
results = requests.get(url).json()["response"]['groups'][0]['items']
# return only relevant information for each nearby venue
for venue in results:
venues.append((
neighborhood,
lat,
long,
venue['venue']['name'],
venue['venue']['location']['lat'],
venue['venue']['location']['lng'],
venue['venue']['categories'][0]['name']))
# convert the venues list into a new DataFrame
venues_df = pd.DataFrame(venues)
# define the column names
venues_df.columns = ['Neighborhood', 'Latitude', 'Longitude', 'VenueName', 'VenueLatitude', 'VenueLongitude', 'VenueCategory']
print(venues_df.shape)
venues_df.head()
# Number of venues returned for each neighborhood
venues_df.groupby(["Neighborhood"]).count()
# Let's find out how many unique categories can be curated from all the returned venues
print('There are {} uniques categories.'.format(len(venues_df['VenueCategory'].unique())))
# print out the list of categories
venues_df['VenueCategory'].unique()[:50]
# check if the results contain "Shopping Mall"
"Shopping Mall" in venues_df['VenueCategory'].unique()
###Output
_____no_output_____
###Markdown
5. Analysing neighbourhoods
###Code
# one hot encoding
Syd_onehot = pd.get_dummies(venues_df[['VenueCategory']], prefix="", prefix_sep="")
# add neighborhood column back to dataframe
Syd_onehot['Neighborhoods'] = venues_df['Neighborhood']
# move neighborhood column to the first column
fixed_columns = [Syd_onehot.columns[-1]] + list(Syd_onehot.columns[:-1])
Syd_onehot = Syd_onehot[fixed_columns]
print(Syd_onehot.shape)
Syd_onehot.head()
# Next, let's group rows by neighborhood, taking the mean of the frequency of occurrence of each category
Syd_grouped = Syd_onehot.groupby(["Neighborhoods"]).mean().reset_index()
print(Syd_grouped.shape)
Syd_grouped
# Create a new DataFrame for Shopping Mall data only
Syd_mall = Syd_grouped[["Neighborhoods","Shopping Mall"]]
Syd_mall.head()
###Output
_____no_output_____
###Markdown
6. Clustering neighbourhoods
###Code
# set number of clusters
kclusters = 3
Syd_clustering = Syd_mall.drop(["Neighborhoods"], axis=1)  # keep only the numeric feature for clustering
# run k-means clustering
kmeans = KMeans(n_clusters=kclusters, random_state=0).fit(Syd_clustering)
# check cluster labels generated for each row in the dataframe
kmeans.labels_
# create a new dataframe that adds the cluster label to the shopping mall frequency for each neighborhood
Syd_merged = Syd_mall.copy()
# add clustering labels
Syd_merged["Cluster Labels"] = kmeans.labels_
Syd_merged.rename(columns={"Neighborhoods": "Neighborhood"}, inplace=True)
Syd_merged.head()
Syd_merged = Syd_merged.join(Syd_df.set_index("Neighborhood"), on="Neighborhood")
print(Syd_merged.shape)
Syd_merged.head() # check the last columns!
# sort the results by Cluster Labels
print(Syd_merged.shape)
Syd_merged.sort_values(["Cluster Labels"], inplace=True)
Syd_merged
###Output
(200, 5)
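###Markdown
The choice of three clusters here is a judgment call. One quick way to sanity-check it (not part of the original analysis, just a hedged add-on) is an elbow plot of the k-means inertia over a range of cluster counts, reusing the Syd_clustering features prepared above:
###Code
import matplotlib.pyplot as plt
# optional elbow check for the number of clusters
inertias = []
k_range = range(1, 10)
for k in k_range:
    km = KMeans(n_clusters=k, random_state=0).fit(Syd_clustering)
    inertias.append(km.inertia_)
plt.plot(list(k_range), inertias, marker='o')
plt.xlabel('number of clusters k')
plt.ylabel('inertia (within-cluster sum of squares)')
plt.title('Elbow check for k')
plt.show()
###Output
_____no_output_____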
###Markdown
7. Visualising the resulting clusters
###Code
# create map
map_clusters = folium.Map(location=[latitude, longitude], zoom_start=11)
# set color scheme for the clusters
x = np.arange(kclusters)
ys = [i+x+(i*x)**2 for i in range(kclusters)]
colors_array = cm.rainbow(np.linspace(0, 1, len(ys)))
rainbow = [colors.rgb2hex(i) for i in colors_array]
# add markers to the map
markers_colors = []
for lat, lon, poi, cluster in zip(Syd_merged['Latitude'], Syd_merged['Longitude'], Syd_merged['Neighborhood'], Syd_merged['Cluster Labels']):
label = folium.Popup(str(poi) + ' - Cluster ' + str(cluster), parse_html=True)
folium.CircleMarker(
[lat, lon],
radius=5,
popup=label,
color=rainbow[cluster-1],
fill=True,
fill_color=rainbow[cluster-1],
fill_opacity=0.7).add_to(map_clusters)
map_clusters
# save the map as HTML file
map_clusters.save('map_clusters.html')
###Output
_____no_output_____
###Markdown
8. Examining clusters
###Code
# Cluster 0
Syd_merged.loc[Syd_merged['Cluster Labels'] == 0]
# Cluster 1
Syd_merged.loc[Syd_merged['Cluster Labels'] == 1]
# Cluster 2
Syd_merged.loc[Syd_merged['Cluster Labels'] == 2]
###Output
_____no_output_____ |
NHIS Data Cleaner.ipynb | ###Markdown
Data Cleaner for Falls Data from CDC's NHIS. Author: Vikas Enti, [email protected]. This script cleans the csv files from CDC's NHIS Dataset to create a single, easy to analyze and visualize dataset.
###Code
import pandas as pd
import sqlite3
import glob
# This is a quick and dirty approach. Rewrite if you need to ingest a lot more CSV files
# Create injury episode dataframes from csv files
#inj_df_2017 = pd.read_csv('NHIS/2017_injpoiep.csv')
#inj_df_2016 = pd.read_csv('NHIS/2016_injpoiep.csv')
#inj_df_2015 = pd.read_csv('NHIS/2015_injpoiep.csv')
# Create sample adult dataframes from csv files
#sam_df_2017 = pd.read_csv('NHIS/2017_samadult.csv')
#sam_df_2016 = pd.read_csv('NHIS/2016_samadult.csv')
#sam_df_2015 = pd.read_csv('NHIS/2015_samadult.csv')
# Elegant approach
# Injury Episodes
inj_epi_df = pd.concat([pd.read_csv(f, encoding='latin1') for f in glob.glob('NHIS/*inj*.csv')], ignore_index=True, sort=True)
# Sameple Adult
sam_adu_df = pd.concat([pd.read_csv(f, encoding='latin1') for f in glob.glob('NHIS/*sam*.csv')], ignore_index=True, sort=True)
inj_epi_df
sam_adu_df
# Dictionaries for different variable values
# Source: Injury Episode Frequency file.
# ftp://ftp.cdc.gov/pub/Health_Statistics/NCHS/Dataset_Documentation/NHIS/2016/Injpoiep_freq.pdf
#ICAUS
injury_cause = {
1:'In a motor vehicle',
2:'On a bike, scooter, skateboard, skates, skis, horse, etc',
3:'Pedestrian who was struck by a vehicle such as a car or bicycle',
4:'In a boat, train, or plane',
5:'Fall',
6:'Burned or scalded by substances such as hot objects or liquids, fire, or chemicals',
7:'Other',
97:'Refused',
98:'Not ascertained',
99:"Don't know"
}
#ijbody1, ijbody2, ijbody4, ijbody4
body_part = {
1:'Ankle',
2:'Back',
3:'Buttocks',
4:'Chest',
5:'Ear',
6:'Elbow',
7:'Eye',
8:'Face',
9:'Finger/thumb',
10:'Foot',
11:'Forearm',
12:'Groin',
13:'Hand',
14:'Head (not face)',
15:'Hip',
16:'Jaw',
17:'Knee',
18:'Lower leg',
19:'Mouth',
20:'Neck',
22:'Shoulder',
23:'Stomach',
24:'Teeth',
25:'Thigh',
26:'Toe',
27:'Upper arm',
28:'Wrist',
29:'Other',
97:'Refused',
98:'Not ascertained',
99:"Don't know"
}
#ifall1, ifall2
fall_loc = {
1:"Stairs, steps, or escalator",
2:"Floor or level ground",
3:"Curb (including sidewalk)",
4:"Ladder or scaffolding",
5:"Playground equipment",
6:"Sports field, court, or rink",
7:"Building or other structure",
8:"Chair, bed, sofa, or other furniture",
9:"Bathtub, shower, toilet, or commode",
10:"Hole or other opening",
11:"Other",
97:"Refused",
98:"Not ascertained",
99:"Don't know",
}
#ifallwhy
fall_reason = {
1:"Slipping or tripping",
2:"Jumping or diving",
3:"Bumping into an object or another person",
4:"Being shoved or pushed by another person",
5:"Losing balance or having dizziness (becoming faint or having a seizure)",
6:"Other",
7:"Refused",
8:"Not ascertained",
9:"Don't know",
}
#SEX
gender = {
1:"Male",
2:"Female"
}
# Merge both dataframes for easier analysis
nhis_falls = pd.merge(sam_adu_df, inj_epi_df, on = ['SRVY_YR','HHX','FMX','FPX'], how = 'inner')
nhis_falls = nhis_falls.fillna(999)
nhis_falls = nhis_falls.astype('int32')
# Embed dictionary values as new columns
nhis_falls['injury_cause'] = nhis_falls['ICAUS'].map(injury_cause)
nhis_falls['body_part1'] = nhis_falls['IJBODY1'].map(body_part)
nhis_falls['body_part2'] = nhis_falls['IJBODY2'].map(body_part)
nhis_falls['body_part3'] = nhis_falls['IJBODY3'].map(body_part)
nhis_falls['body_part4'] = nhis_falls['IJBODY4'].map(body_part)
nhis_falls['fall_loc1'] = nhis_falls['IFALL1'].map(fall_loc)
nhis_falls['fall_loc2'] = nhis_falls['IFALL2'].map(fall_loc)
nhis_falls['fall_reason'] = nhis_falls['IFALLWHY'].map(fall_reason)
nhis_falls['gender'] = nhis_falls['SEX'].map(gender)
nhis_falls['ICAUS']
# Output select variables from dataframe to csv file
header = ['SRVY_YR','HHX','FMX','FPX','AGE_P','gender','ICAUS','IJBODY1','IJBODY2','IJBODY3','IJBODY4',
'IFALL1','IFALL2','IFALLWHY','injury_cause','body_part1','body_part2','body_part3','body_part4',
'fall_loc1','fall_loc2','fall_reason']
nhis_falls.to_csv('NHIS/nhis_falls.csv', columns=header)
nhis_falls[header]
###Output
_____no_output_____ |
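###Markdown
The unused sqlite3 import at the top suggests a database was also on the roadmap; if so, a minimal sketch for persisting the cleaned table (assuming the nhis_falls dataframe and header list above, with a made-up database file name) would be:
###Code
# persist the cleaned falls table to a local SQLite database
conn = sqlite3.connect('nhis_falls.db')
nhis_falls[header].to_sql('falls', conn, if_exists='replace', index=False)
conn.close()
###Output
_____no_output_____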
data-512-a1/.ipynb_checkpoints/hcds-a1-data-curation-checkpoint.ipynb | ###Markdown
DATA 512 Human-Centered Data ScienceA1 : Data Curation The goal of this assignment is to construct, analyze, and publish a dataset of monthly traffic on English Wikipedia from January 1 2008 through August 30 2020.We will combine data about Wikipedia page traffic from two different [Wikimedia REST API](https://www.mediawiki.org/wiki/Wikimedia_REST_API) endpoints into a single dataset, perform some simple data processing steps on the data, and then analyze the data visually. Step 1: Gathering the data In order to measure Wikipedia traffic from 2008-2020, as a first step, we collect data from two different API endpoints, the Legacy Pagecounts API and the Pageviews API.* The Legacy Pagecounts API ([documentation](https://wikitech.wikimedia.org/wiki/Analytics/AQS/Legacy_Pagecounts), [endpoint](https://wikimedia.org/api/rest_v1//Pagecounts_data_(legacy)/get_metrics_legacy_pagecounts_aggregate_project_access_site_granularity_start_end)) provides access to desktop and mobile traffic data from December 2007 through July 2016.* The Pageviews API ([documentation](https://wikitech.wikimedia.org/wiki/Analytics/AQS/Pageviews), [endpoint](https://wikimedia.org/api/rest_v1//Pageviews_data/get_metrics_pageviews_aggregate_project_access_agent_granularity_start_end)) provides access to desktop, mobile web, and mobile app traffic data from July 2015 through last month. Importing the libraries required for data collection, preprocessing and visualization
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
%matplotlib inline
import json
import requests
import datetime as dt
###Output
_____no_output_____
###Markdown
Assigning the endpoint URLs
###Code
pagecounts = 'https://wikimedia.org/api/rest_v1/metrics/legacy/pagecounts/aggregate/{project}/{access-site}/{granularity}/{start}/{end}'
pageviews = 'https://wikimedia.org/api/rest_v1/metrics/pageviews/aggregate/{project}/{access}/{agent}/{granularity}/{start}/{end}'
###Output
_____no_output_____
###Markdown
Creating dictionaries of required parameters to be passed in the endpoints for the two APIs
###Code
# parameters for getting aggregated legacy view data - desktop-site
params_pagecounts_desktop = {"project" : "en.wikipedia.org",
"access-site" : "desktop-site",
"granularity" : "monthly",
"start" : "2007120100",
# for end use 1st day of month following final month of data
"end" : "2020090100"
}
# parameters for getting aggregated legacy view data - mobile-site
params_pagecounts_mobile = {"project" : "en.wikipedia.org",
"access-site" : "mobile-site",
"granularity" : "monthly",
"start" : "2007120100",
# for end use 1st day of month following final month of data
"end" : "2020090100"
}
# parameters for getting aggregated current standard pageview data - desktop
params_pageviews_desktop = {"project" : "en.wikipedia.org",
"access" : "desktop",
"agent" : "user",
"granularity" : "monthly",
"start" : "2007120100",
# for end use 1st day of month following final month of data
"end" : '2020090100'
}
# parameters for getting aggregated current standard pageview data - mobile-web
params_pageviews_mobile_web = {"project" : "en.wikipedia.org",
"access" : "mobile-web",
"agent" : "user",
"granularity" : "monthly",
"start" : "2007120100",
# for end use 1st day of month following final month of data
"end" : '2020090100'
}
# parameters for getting aggregated current standard pageview data - mobile-app
params_pageviews_mobile_app = {"project" : "en.wikipedia.org",
"access" : "mobile-app",
"agent" : "user",
"granularity" : "monthly",
"start" : "2007120100",
# for end use 1st day of month following final month of data
"end" : '2020090100'
}
###Output
_____no_output_____
###Markdown
Defining a function that makes the API call and saves the output in a json file. The function below calls the API with the given parameters and returns the output as a dictionary, as well as writing it to a json file.
###Code
def create_json(endpoint,parameters,apiname,accesstype,firstmonth,lastmonth):
"""
Function that passes the required parameters into the endpoint URLs for the two APIs and
a) saves the output of the request in a json file and
b) returns the output of the request as a dictionary
Input:
endpoint - pagecounts or pageviews
parameters - the dictionary of parameters for each API and access type
apiname - One of 'pageviews' or 'pagecounts'
accesstype - One of 'desktop-site', 'mobile-site', 'desktop', 'mobile-app', 'mobile-web'
firstmonth - First month of the analysis (Dec 2007)
lastmonth - last month of the analysis (Aug 2020)
Output:
* json file saved as apiname_accesstype_firstmonth-lastmonth.json
* a dictionary output
"""
call = requests.get(endpoint.format(**parameters), headers=headers)
response = call.json()
with open(f'{apiname}_{accesstype}_{firstmonth}-{lastmonth}.json', 'w') as file:
json.dump(response, file, indent=4)
return response
###Output
_____no_output_____
###Markdown
Creating the headers for the API calls
###Code
headers = {
'User-Agent': 'https://github.com/Pradeepprabhakar92',
'From': '[email protected]'
}
###Output
_____no_output_____
###Markdown
Calling the above function for each API and access type
###Code
monthly_pagecounts_desktop = create_json(pagecounts, params_pagecounts_desktop,'pagecounts','desktop-site','200712','202008')
monthly_pagecounts_mobile = create_json(pagecounts, params_pagecounts_mobile,'pagecounts','mobile-site','200712','202008')
monthly_pageviews_desktop = create_json(pageviews, params_pageviews_desktop,'pageviews','desktop','200712','202008')
monthly_pageviews_mobile_web = create_json(pageviews, params_pageviews_mobile_web,'pageviews','mobile-web','200712','202008')
monthly_pageviews_mobile_app = create_json(pageviews, params_pageviews_mobile_app,'pageviews','mobile-app','200712','202008')
###Output
_____no_output_____
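###Markdown
As a quick, optional sanity check, the files can be listed from disk - their names follow the apiname_accesstype_firstmonth-lastmonth pattern used inside create_json:
###Code
import glob
# list the JSON files produced by the five calls above
sorted(glob.glob('*_200712-202008.json'))
###Output
_____no_output_____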
###Markdown
The above function calls will create 5 json files in the folder: 2 files corresponding to the desktop site and mobile site for the Legacy Pagecounts API, and 3 files corresponding to desktop, mobile-web and mobile-app for the Pageviews API. Step 2: Processing the data. Given the API outputs stored as dictionaries, we perform the following in step 2: * For data collected from the Pageviews API, combine the monthly values for mobile-app and mobile-web to create a total mobile traffic count for each month. * For all data, separate the value of timestamp into four-digit year (YYYY) and two-digit month (MM) and discard values for day and hour (DDHH). Loading the json files as data frames using pandas:
###Code
monthly_pagecounts_desktop_pd = pd.json_normalize(monthly_pagecounts_desktop['items'])
monthly_pagecounts_mobile_pd = pd.json_normalize(monthly_pagecounts_mobile['items'])
monthly_pageviews_desktop_pd = pd.json_normalize(monthly_pageviews_desktop['items'])
monthly_pageviews_mobile_web_pd = pd.json_normalize(monthly_pageviews_mobile_web['items'])
monthly_pageviews_mobile_app_pd = pd.json_normalize(monthly_pageviews_mobile_app['items'])
###Output
_____no_output_____
###Markdown
Summing up the mobile web and mobile app page views and concatenating pageviews and pagecounts into a single dataframe
###Code
monthly_pageviews_mobile_pd = pd.concat([monthly_pageviews_mobile_web_pd,monthly_pageviews_mobile_app_pd],ignore_index=True) \
.groupby(['project','agent','granularity','timestamp']) \
.agg({'views': sum}).reset_index()
monthly_pageviews_mobile_pd['access'] = 'mobile'
monthly_pageviews_pd = pd.concat([monthly_pageviews_desktop_pd,monthly_pageviews_mobile_pd],ignore_index=True) \
.rename(columns={'access':'access-site','views':'count'}) \
.drop(columns='agent')
monthly_overall_pd = pd.concat([monthly_pagecounts_desktop_pd,monthly_pagecounts_mobile_pd,monthly_pageviews_pd],
ignore_index=True).drop(columns=['project','granularity'])
###Output
_____no_output_____
###Markdown
Pivoting the dataframe based on access type with missing value imputation and renaming the columns
###Code
monthly_overall_pivot = monthly_overall_pd.pivot(index='timestamp',columns='access-site',values='count').fillna(0). \
reset_index().rename_axis(None, axis=1)
monthly_overall_pivot.iloc[:,1:] = monthly_overall_pivot.iloc[:,1:].astype(np.int64)
monthly_overall_pivot = monthly_overall_pivot.rename(columns={'desktop-site':'pagecount_desktop_views',
'mobile-site':'pagecount_mobile_views',
'desktop':'pageview_desktop_views',
'mobile':'pageview_mobile_views'})
###Output
_____no_output_____
###Markdown
Summing up the desktop views and mobile views to create total page views for both the APIs
###Code
monthly_overall_pivot['pagecount_all_views'] = monthly_overall_pivot.pagecount_desktop_views + \
monthly_overall_pivot.pagecount_mobile_views
monthly_overall_pivot['pageview_all_views'] = monthly_overall_pivot.pageview_desktop_views + \
monthly_overall_pivot.pageview_mobile_views
###Output
_____no_output_____
###Markdown
Creating year (YYYY) and month(MM) columns from timestamp
###Code
monthly_overall_pivot['year'] = pd.to_datetime(monthly_overall_pivot['timestamp'],format="%Y%m%d%H").dt.year
monthly_overall_pivot['month'] = pd.to_datetime(monthly_overall_pivot['timestamp'],format="%Y%m%d%H").dt.month
###Output
_____no_output_____
###Markdown
Selecting only the required columns and checking the head of the final dataframe
###Code
en_wikipedia_traffic = monthly_overall_pivot[['year',
'month',
'pagecount_all_views',
'pagecount_desktop_views',
'pagecount_mobile_views',
'pageview_all_views',
'pageview_desktop_views',
'pageview_mobile_views']]
en_wikipedia_traffic.head()
###Output
_____no_output_____
###Markdown
Saving the final processed data frame as a CSV
###Code
en_wikipedia_traffic.to_csv('en-wikipedia_traffic_200712-202008.csv',index=False)
###Output
_____no_output_____
###Markdown
Step 3: Analyze the data In this step, we will create a visualization that will track three traffic metrics: mobile traffic, desktop traffic, and all traffic (mobile + desktop) using matplotlib. Concatenating year and month columns into a datetime datatype for visualization
###Code
data = en_wikipedia_traffic.copy()
data['date'] = pd.to_datetime(data['year'].map(str)+ '-' +data['month'].map(str), format='%Y-%m')
###Output
_____no_output_____
###Markdown
Using matplotlib to create a time series line chart for the three traffic metrics (desktop, mobile, and all) for the two APIs
###Code
import matplotlib.dates as mdates
years = mdates.YearLocator() # a ticker for the first day of every year
months = mdates.MonthLocator() #a ticker for the first day of every month
years_fmt = mdates.DateFormatter('%Y')
fig, ax = plt.subplots(figsize=(15,6))
ax.plot( 'date', 'pagecount_desktop_views', data=data, color='green', linewidth=2, linestyle='dashed',label='main site')
ax.plot( 'date', 'pagecount_mobile_views', data=data, color='blue', linewidth=2, linestyle='dashed',label='mobile site')
ax.plot( 'date', 'pagecount_all_views', data=data, color='black', linewidth=2, linestyle='dashed',label='total')
ax.plot( 'date', 'pageview_desktop_views', data=data, color='green', linewidth=2,alpha=0.9,label='')
ax.plot( 'date', 'pageview_mobile_views', data=data, color='blue', linewidth=2,alpha=0.9,label='')
ax.plot( 'date', 'pageview_all_views', data=data, color='black', linewidth=2,alpha=0.9,label='')
plt.title("Page views on English wikipedia (x 1,000,000)",fontsize=14)
ax.xaxis.set_major_locator(years)
ax.xaxis.set_major_formatter(years_fmt)
ax.xaxis.set_minor_locator(months)
scale_y = 1e6
ticks_y = ticker.FuncFormatter(lambda x, pos: '{0:g}'.format(x/scale_y))
ax.yaxis.set_major_formatter(ticks_y)
# ax.set_ylim(0, 12000000000)
plt.xticks(fontsize=12)
plt.yticks(fontsize=12)
caption="May 2015: A new pageview definition took effect, which eliminated crawler traffic. Solid lines mark new definition"
plt.figtext(0.5, 0.01, caption, wrap=True, horizontalalignment='center', fontsize=14,color='red')
ax.legend(fontsize=12)
plt.savefig("en_wikipedia_traffic_visualization_200712-202008.png",dpi=400)
plt.show();
###Output
_____no_output_____ |
content/LAB 06.02 - NMF face search.ipynb | ###Markdown
LAB 06.02 - NMF face search
###Code
!wget --no-cache -O init.py -q https://raw.githubusercontent.com/rramosp/ai4eng.v1.20211.udea/main/content/init.py
import init; init.init(force_download=False); init.get_weblink()
from local.lib.rlxmoocapi import submit, session
session.LoginSequence(endpoint=init.endpoint, course_id=init.course_id, lab_id="L06.02", varname="student");
###Output
_____no_output_____
###Markdown
Dataset: we will use the faces dataset
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from IPython.display import Image
%matplotlib inline
import numpy as np
faces = np.load("local/data/faces.npy")
faces.shape
def plot_faces(faces):
assert len(faces)<=30, "can only plot at most 30 faces"
plt.figure(figsize=(15,2))
for i in range(len(faces)):
plt.subplot(2,15,i+1)
plt.imshow(faces[i].reshape(19,19), cmap=plt.cm.Greys_r)
plt.xticks([]); plt.yticks([])
plot_faces(np.random.permutation(faces)[:30])
###Output
_____no_output_____
###Markdown
Task 1: Distance function for a vectorcomplete the following function so that given a vector $v \in \mathbb{R}^n$ and a `numpy` array $X \in \mathbb{R}^{m\times n}$ (whose rows are vectors of the same size as $v$) returns a new array $\in \mathbb{R}^m$ with the Euclidean distance between $v$ and each vector in $X$.Recall that the Euclidean distance between vectors $z=[z_0,...z_{n-1}]$ and $w=[w_0,...,w_{n-1}]$ is given by$$\text{distance}(z,w) = \sqrt{\sum_{i=0}^{n-1} (z_i-w_i)^2}$$**hint**: use [`np.linalg.norm`](https://numpy.org/doc/stable/reference/generated/numpy.linalg.norm.html) to compute a distance between two vectors**challenge**: solve it using one line of code.**note**: your function must return a 1D numpy array of dimension $m$, not a list.for instance, for the following values of $v$ and $X$ X = array([[9, 5, 1, 3, 8, 3, 3, 3, 9, 2], [9, 7, 0, 7, 9, 1, 4, 7, 3, 6], [8, 0, 0, 5, 0, 5, 5, 1, 1, 5], [8, 2, 9, 5, 6, 0, 8, 7, 2, 8], [0, 6, 3, 0, 6, 6, 1, 2, 8, 0]]) v = np.array([9, 7, 0, 7, 9, 1, 4, 7, 3, 6])you should get the following result array([ 9.74679434, 0. , 13.89244399, 11.91637529, 16.40121947])
###Code
def distances(v, X):
result = .... # your code here
return result
###Output
_____no_output_____
###Markdown
check manually your code
###Code
X = np.random.randint(10, size=(5,10))
v = X[1]
print ("X=\n", X)
print ("\nv=", v)
distances(v, X)
###Output
_____no_output_____
###Markdown
**submit your code**
###Code
student.submit_task(globals(), task_id="task_01");
###Output
_____no_output_____
###Markdown
Task 2: Positions of closest vectorscomplete the following function so that given $v$ and $X$ as previously, returns the positions of the $n$ closest vectors to $v$ in $X$.**hint**: use the [`np.argsort`](https://numpy.org/doc/stable/reference/generated/numpy.argsort.html) function**challenge**: solve it using one line of codefor the example $v$ and $X$ above you should get the following outputs >> closest(v, X, 2) array([1, 0]) >> closest(v, X, 3) array([1, 0, 3])
###Code
def closest(v, X, n):
    assert n<len(X), "n must be at most the number of vectors in X"
result = .... # your code here
return result
###Output
_____no_output_____
###Markdown
check manually your code
###Code
X = np.random.randint(10, size=(5,10))
v = X[1]
print ("X=\n", X)
print ("\nv=", v,"\n\n")
print (closest(v, X, 2))
print (closest(v, X, 3))
###Output
_____no_output_____
###Markdown
observe now how we can use your functions to search for faces similar to any other face
###Code
plt.figure(figsize=(1,1))
fi = 314 # np.random.randint(len(faces)) # 314
face = faces[fi]
plt.imshow(faces[fi].reshape(19,19), cmap=plt.cm.Greys_r)
print ("TARGET FACE")
plot_faces(faces[closest(face, faces, 30)])
print ("SIMILAR FACES")
###Output
SIMILAR FACES
###Markdown
But they do not look so similar; this is because we are doing the comparison **pixel by pixel**. We will fix this in the next task. **submit your code**
###Code
student.submit_task(globals(), task_id="task_02");
###Output
_____no_output_____
###Markdown
Task 3: Use NMF to find similar facesMake the comparison in the faces space resulting from transforming them using NMF. For this you have to:- create an instance of NMF with `n_components=30, init="random", random_state=0`- fit the instance with $X$- transform $X$- transform $v$- return the positions of closest $n$ vectors in the transformed $X$ to the transformed $v$For the target face above, you should get the following
###Code
from IPython.display import Image
Image(filename='local/imgs/similar-images2.png')
def find_similar(v,X,n):
from sklearn.decomposition import NMF
nmf = NMF(n_components=30, init="random", random_state=0)
nmf... ## your code here. call the 'fit' method
Xt = ... # use nmf to transform X
vt = ... # use nmf to transform v .. you will have to use reshape like this v.reshape(1,-1)
result = ... # your code here
return result
###Output
_____no_output_____
###Markdown
check manually your answer
###Code
plot_faces(faces[find_similar(face, faces, 30)])
###Output
_____no_output_____
###Markdown
**submit your code**
###Code
student.submit_task(globals(), task_id="task_03");
###Output
_____no_output_____
###Markdown
LAB 06.02 - NMF face search
###Code
!wget --no-cache -O init.py -q https://raw.githubusercontent.com/rramosp/ai4eng.v1.20211.udea/main/content/init.py
import init; init.init(force_download=False); init.get_weblink()
from local.lib.rlxmoocapi import submit, session
student = session.Session(init.endpoint).login( course_id=init.course_id,
lab_id="L06.02" )
###Output
_____no_output_____
###Markdown
Dataset: we will use the faces dataset
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from IPython.display import Image
%matplotlib inline
import numpy as np
faces = np.load("local/data/faces.npy")
faces.shape
def plot_faces(faces):
assert len(faces)<=30, "can only plot at most 30 faces"
plt.figure(figsize=(15,2))
for i in range(len(faces)):
plt.subplot(2,15,i+1)
plt.imshow(faces[i].reshape(19,19), cmap=plt.cm.Greys_r)
plt.xticks([]); plt.yticks([])
plot_faces(np.random.permutation(faces)[:30])
###Output
_____no_output_____
###Markdown
Task 1: Distance function for a vectorcomplete the following function so that given a vector $v \in \mathbb{R}^n$ and a `numpy` array $X \in \mathbb{R}^{m\times n}$ (whose rows are vectors of the same size as $v$) returns a new array $\in \mathbb{R}^m$ with the Euclidean distance between $v$ and each vector in $X$.Recall that the Euclidean distance between vectors $z=[z_0,...z_{n-1}]$ and $w=[w_0,...,w_{n-1}]$ is given by$$\text{distance}(z,w) = \sqrt{\sum_{i=0}^{n-1} (z_i-w_i)^2}$$**hint**: use [`np.linalg.norm`](https://numpy.org/doc/stable/reference/generated/numpy.linalg.norm.html) to compute a distance between two vectors**challenge**: solve it using one line of code.**note**: your function must return a 1D numpy array of dimension $m$, not a list.for instance, for the following values of $v$ and $X$ X = array([[9, 5, 1, 3, 8, 3, 3, 3, 9, 2], [9, 7, 0, 7, 9, 1, 4, 7, 3, 6], [8, 0, 0, 5, 0, 5, 5, 1, 1, 5], [8, 2, 9, 5, 6, 0, 8, 7, 2, 8], [0, 6, 3, 0, 6, 6, 1, 2, 8, 0]]) v = np.array([9, 7, 0, 7, 9, 1, 4, 7, 3, 6])you should get the following result array([ 9.74679434, 0. , 13.89244399, 11.91637529, 16.40121947])
###Code
def distances(v, X):
result = .... # your code here
return result
###Output
_____no_output_____
###Markdown
check manually your code
###Code
X = np.random.randint(10, size=(5,10))
v = X[1]
print ("X=\n", X)
print ("\nv=", v)
distances(v, X)
###Output
_____no_output_____
###Markdown
**submit your code**
###Code
student.submit_task(globals(), task_id="task_01");
###Output
_____no_output_____
###Markdown
Task 2: Positions of closest vectorscomplete the following function so that given $v$ and $X$ as previously, returns the positions of the $n$ closest vectors to $v$ in $X$.**hint**: use the [`np.argsort`](https://numpy.org/doc/stable/reference/generated/numpy.argsort.html) function**challenge**: solve it using one line of codefor the example $v$ and $X$ above you should get the following outputs >> closest(v, X, 2) array([1, 0]) >> closest(v, X, 3) array([1, 0, 3])
###Code
def closest(v, X, n):
    assert n<len(X), "n must be at most the number of vectors in X"
result = .... # your code here
return result
###Output
_____no_output_____
###Markdown
check manually your code
###Code
X = np.random.randint(10, size=(5,10))
v = X[1]
print ("X=\n", X)
print ("\nv=", v,"\n\n")
print (closest(v, X, 2))
print (closest(v, X, 3))
###Output
_____no_output_____
###Markdown
observe now how we can use your functions to search for faces similar to any other face
###Code
plt.figure(figsize=(1,1))
fi = 314 # np.random.randint(len(faces)) # 314
face = faces[fi]
plt.imshow(faces[fi].reshape(19,19), cmap=plt.cm.Greys_r)
print ("TARGET FACE")
plot_faces(faces[closest(face, faces, 30)])
print ("SIMILAR FACES")
###Output
SIMILAR FACES
###Markdown
But they do not look so similar; this is because we are doing the comparison **pixel by pixel**. We will fix this in the next task. **submit your code**
###Code
student.submit_task(globals(), task_id="task_02");
###Output
_____no_output_____
###Markdown
Task 3: Use NMF to find similar facesMake the comparison in the faces space resulting from transforming them using NMF. For this you have to:- create an instance of NMF with `n_components=30, init="random", random_state=0`- fit the instance with $X$- transform $X$- transform $v$- return the positions of closest $n$ vectors in the transformed $X$ to the transformed $v$For the target face above, you should get the following
###Code
from IPython.display import Image
Image(filename='local/imgs/similar-images2.png')
def find_similar(v,X,n):
from sklearn.decomposition import NMF
nmf = NMF(n_components=30, init="random", random_state=0)
nmf... ## your code here. call the 'fit' method
Xt = ... # use nmf to transform X
vt = ... # use nmf to transform v .. you will have to use reshape like this v.reshape(1,-1)
result = ... # your code here
return result
###Output
_____no_output_____
###Markdown
check your answer manually
###Code
plot_faces(faces[find_similar(face, faces, 30)])
###Output
_____no_output_____
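###Markdown
 For reference, a possible sketch of the NMF-based search (illustrative only, `find_similar_sketch` is a hypothetical name): fit NMF on $X$, transform both $X$ and $v$, and rank the transformed rows by their distance to the transformed $v$.
###Code
# possible solution sketch for Task 3 (illustrative; not the only valid answer)
from sklearn.decomposition import NMF

def find_similar_sketch(v, X, n):
    nmf = NMF(n_components=30, init="random", random_state=0)
    nmf.fit(X)
    Xt = nmf.transform(X)
    vt = nmf.transform(v.reshape(1, -1))
    return np.argsort(np.linalg.norm(Xt - vt, axis=1))[:n]
###Output
_____no_output_____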
###Markdown
**submit your code**
###Code
student.submit_task(globals(), task_id="task_03");
###Output
_____no_output_____ |
Segment_T5.ipynb | ###Markdown
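###Markdown
 A minimal sketch of the imports this notebook appears to rely on is collected below for reference; the `args_dict`, `T5Lightning`, `LoggingCallback` and `seed_all` objects used further down are assumed to be defined in a companion module or an omitted setup cell.
###Code
# assumed imports for this notebook (sketch; not part of the original file)
import argparse
import os
import pandas as pd
import torch
import pytorch_lightning as pl
from torch.utils.data import Dataset, DataLoader
from transformers import T5Tokenizer
###Output
_____no_output_____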
###Code
mkdir t5_sentiment
tokenizer = T5Tokenizer.from_pretrained('t5-base')
tokenizer.tokenize("</s> negative positive")
args_dict.update({'data_dir': 'data', 'output_dir': './t5_sentiment', 'num_train_epochs':1})
args = argparse.Namespace(**args_dict)
model = T5Lightning(args)
model.model
checkpoint_callback = pl.callbacks.ModelCheckpoint(
filepath=args.output_dir, prefix="checkpoint", monitor="val_loss", mode="min", save_top_k=5
)
train_params = dict(
accumulate_grad_batches=args.gradient_accumulation_steps,
gpus=args.n_gpu,
max_epochs=args.num_train_epochs,
early_stop_callback=False,
precision= 16 if args.fp_16 else 32,
amp_level=args.opt_level,
gradient_clip_val=args.max_grad_norm,
checkpoint_callback=checkpoint_callback,
callbacks=[LoggingCallback()],
)
trainer = pl.Trainer(**train_params)
from google.colab import files
files.upload()
!mkdir -p ~/.kaggle
!cp kaggle.json ~/.kaggle/
!pip install kaggle
!kaggle competitions download -c jigsaw-toxic-comment-classification-challenge
inp = ['train', 'test']
import zipfile
for i in inp:
with zipfile.ZipFile(i+'.csv.zip','r') as f:
f.extractall('')
!mkdir data
!mv *.csv data
train = pd.read_csv("data/train.csv")
test = pd.read_csv("data/test.csv")
train.head(10)
dic = train.columns[2:].tolist()
target_col = []
for i in range(len(train)):
val = train.loc[i][2:].values
target = [dic[k] for k in range(6) if val[k]>0]
if not target:
target_col.append('None')
else:
target_col.append(' '.join(target))
train['target'] = target_col
train.head(10)
from sklearn.model_selection import train_test_split
df = train[['comment_text', 'target']]
train ,valid = train_test_split(df, test_size=0.2, random_state = 42)
train.to_csv("data/train.csv",index = False)
valid.to_csv("data/val.csv", index = False)
train.shape, valid.shape
class SegmentDataset(Dataset):
def __init__(self, tokenizer, data_dir, type_path, max_len=512):
self.path = os.path.join(data_dir, type_path + '.csv')
self.data_column = ["comment_text"]
self.class_column = ['target']
self.data = pd.read_csv(self.path)
self.max_len = max_len
self.tokenizer = tokenizer
self.inputs = []
self.targets = []
self._build()
def __len__(self):
return len(self.inputs)
def __getitem__(self, index):
source_ids = self.inputs[index]["input_ids"].squeeze()
target_ids = self.targets[index]["input_ids"].squeeze()
src_mask = self.inputs[index]["attention_mask"].squeeze() # might need to squeeze
target_mask = self.targets[index]["attention_mask"].squeeze() # might need to squeeze
return {"source_ids": source_ids, "source_mask": src_mask, "target_ids": target_ids, "target_mask": target_mask}
def _build(self):
for idx in range(self.data.shape[0]):
input_ = self.data.loc[idx][self.data_column]
target = self.data.loc[idx, self.class_column]
input_ = str(input_) + ' </s>'
target = str(target) + ' </s>'
# tokenize inputs
tokenized_inputs = self.tokenizer.batch_encode_plus(
[input_], max_length=self.max_len, pad_to_max_length=True, return_tensors="pt"
)
tokenized_targets = self.tokenizer.batch_encode_plus(
[target], max_length=7, pad_to_max_length=True, return_tensors="pt"
)
self.inputs.append(tokenized_inputs)
self.targets.append(tokenized_targets)
dataset = SegmentDataset(tokenizer, 'data', 'train', 64)
len(dataset)
loader = DataLoader(dataset, batch_size=32, shuffle=True)
it = iter(loader)
batch = next(it)
batch["source_ids"].shape
import gc
gc.collect()
torch.cuda.reset_max_memory_cached()
def get_dataset(tokenizer, type_path, args):
return SegmentDataset(tokenizer=tokenizer, data_dir=args.data_dir, type_path=type_path, max_len=args.max_seq_length)
trainer.fit(model)
%load_ext tensorboard
%tensorboard --logdir lightning_logs/
outs = model.model.generate(input_ids=batch['source_ids'].cuda(),
attention_mask=batch['source_mask'].cuda(),
max_length=2)
dec = [tokenizer.decode(ids) for ids in outs]
texts = [tokenizer.decode(ids) for ids in batch['source_ids']]
targets = [tokenizer.decode(ids) for ids in batch['target_ids']]
import textwrap
for i in range(12):
c = texts[i]
lines = textwrap.wrap("text:\n%s\n" % c, width=100)
print("\n".join(lines))
print("\nActual sentiment: %s" % targets[i])
print("predicted sentiment: %s" % dec[i])
print("=====================================================================\n")
from tqdm import tqdm
seed_all(34)
dataset = SegmentDataset(tokenizer, 'data', 'val', 512)
loader = DataLoader(dataset, batch_size=32, num_workers=4)
model.model.eval()
outputs = []
targets = []
for batch in tqdm(loader):
outs = model.model.generate(input_ids=batch['source_ids'].cuda(),
attention_mask=batch['source_mask'].cuda(),
max_length=2)
dec = [tokenizer.decode(ids) for ids in outs]
target = [tokenizer.decode(ids) for ids in batch["target_ids"]]
outputs.extend(dec)
targets.extend(target)
from sklearn import metrics
metrics.accuracy_score(targets, outputs)
from tqdm import tqdm
dataset = SegmentDataset(tokenizer, 'data', 'test', 512)
loader = DataLoader(dataset, batch_size=32, num_workers=4)
model.model.eval()
outputs = []
targets = []
for batch in tqdm(loader):
outs = model.model.generate(input_ids=batch['source_ids'].cuda(),
attention_mask=batch['source_mask'].cuda(),
max_length=2)
dec = [tokenizer.decode(ids) for ids in outs]
target = [tokenizer.decode(ids) for ids in batch["target_ids"]]
outputs.extend(dec)
targets.extend(target)
###Output
_____no_output_____ |
DAY 201 ~ 300/DAY219_[leetCode] Long Pressed Name (Python).ipynb | ###Markdown
Friday, September 25, 2020 leetCode - Long Pressed Name (Python) Problem: https://leetcode.com/problems/long-pressed-name/ Blog: https://somjang.tistory.com/entry/leetCode-925-Long-Pressed-Name-Python First attempt
###Code
class Solution:
def isLongPressedName(self, name: str, typed: str) -> bool:
cnt = 0
answer = False
for i in range(len(typed)):
if cnt < len(name) and name[cnt] == typed[i]:
cnt = cnt + 1
elif i == 0 or typed[i] != typed[i-1]:
return answer
if cnt == len(name):
answer = True
return answer
###Output
_____no_output_____ |
deeplab_retrain_voc2012.ipynb | ###Markdown
XCEPTION INITIAL MODEL - OPTION 1
###Code
%cd models/research/deeplab/
!sh ./local_test.sh
model_dir = '/content/models/research/deeplab/datasets/pascal_voc_seg/exp/train_on_trainval_set/export/'
###Output
/content/models/research/deeplab
/usr/local/lib/python3.6/dist-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
testBuildDeepLabv2 (__main__.DeeplabModelTest) ... 2018-05-11 08:03:04.923044: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:898] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2018-05-11 08:03:04.923492: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1344] Found device 0 with properties:
name: Tesla K80 major: 3 minor: 7 memoryClockRate(GHz): 0.8235
pciBusID: 0000:00:04.0
totalMemory: 11.17GiB freeMemory: 11.10GiB
2018-05-11 08:03:04.923532: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1423] Adding visible gpu devices: 0
2018-05-11 08:03:05.310849: I tensorflow/core/common_runtime/gpu/gpu_device.cc:911] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-05-11 08:03:05.310917: I tensorflow/core/common_runtime/gpu/gpu_device.cc:917] 0
2018-05-11 08:03:05.310946: I tensorflow/core/common_runtime/gpu/gpu_device.cc:930] 0: N
2018-05-11 08:03:05.311323: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1041] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3431 MB memory) -> physical GPU (device: 0, name: Tesla K80, pci bus id: 0000:00:04.0, compute capability: 3.7)
2018-05-11 08:03:10.613237: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1423] Adding visible gpu devices: 0
2018-05-11 08:03:10.613326: I tensorflow/core/common_runtime/gpu/gpu_device.cc:911] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-05-11 08:03:10.613358: I tensorflow/core/common_runtime/gpu/gpu_device.cc:917] 0
2018-05-11 08:03:10.613383: I tensorflow/core/common_runtime/gpu/gpu_device.cc:930] 0: N
2018-05-11 08:03:10.613657: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1041] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3431 MB memory) -> physical GPU (device: 0, name: Tesla K80, pci bus id: 0000:00:04.0, compute capability: 3.7)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/util/tf_inspect.py:45: DeprecationWarning: inspect.getargspec() is deprecated, use inspect.signature() or inspect.getfullargspec()
if d.decorator_argspec is not None), _inspect.getargspec(target))
2018-05-11 08:03:20.025154: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1423] Adding visible gpu devices: 0
2018-05-11 08:03:20.025239: I tensorflow/core/common_runtime/gpu/gpu_device.cc:911] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-05-11 08:03:20.025271: I tensorflow/core/common_runtime/gpu/gpu_device.cc:917] 0
2018-05-11 08:03:20.025295: I tensorflow/core/common_runtime/gpu/gpu_device.cc:930] 0: N
2018-05-11 08:03:20.025544: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1041] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3431 MB memory) -> physical GPU (device: 0, name: Tesla K80, pci bus id: 0000:00:04.0, compute capability: 3.7)
2018-05-11 08:03:21.888148: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1423] Adding visible gpu devices: 0
2018-05-11 08:03:21.888232: I tensorflow/core/common_runtime/gpu/gpu_device.cc:911] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-05-11 08:03:21.888263: I tensorflow/core/common_runtime/gpu/gpu_device.cc:917] 0
2018-05-11 08:03:21.888290: I tensorflow/core/common_runtime/gpu/gpu_device.cc:930] 0: N
2018-05-11 08:03:21.888571: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1041] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3431 MB memory) -> physical GPU (device: 0, name: Tesla K80, pci bus id: 0000:00:04.0, compute capability: 3.7)
ok
testForwardpassDeepLabv3plus (__main__.DeeplabModelTest) ... 2018-05-11 08:03:25.445368: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1423] Adding visible gpu devices: 0
2018-05-11 08:03:25.445465: I tensorflow/core/common_runtime/gpu/gpu_device.cc:911] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-05-11 08:03:25.445496: I tensorflow/core/common_runtime/gpu/gpu_device.cc:917] 0
2018-05-11 08:03:25.445518: I tensorflow/core/common_runtime/gpu/gpu_device.cc:930] 0: N
2018-05-11 08:03:25.445812: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1041] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3431 MB memory) -> physical GPU (device: 0, name: Tesla K80, pci bus id: 0000:00:04.0, compute capability: 3.7)
/content/models/research/deeplab/model_test.py:114: DeprecationWarning: Please use assertEqual instead.
self.assertEquals(len(scales_to_logits), 1)
ok
testScaleDimensionOutput (__main__.DeeplabModelTest) ... ok
testWrongDeepLabVariant (__main__.DeeplabModelTest) ... ok
test_session (__main__.DeeplabModelTest)
Returns a TensorFlow Session for use in executing tests. ... ok
----------------------------------------------------------------------
Ran 5 tests in 23.256s
OK
Downloading VOCtrainval_11-May-2012.tar to ./pascal_voc_seg
--2018-05-11 08:03:28-- http://host.robots.ox.ac.uk/pascal/VOC/voc2012//VOCtrainval_11-May-2012.tar
Resolving host.robots.ox.ac.uk (host.robots.ox.ac.uk)... 129.67.94.152
Connecting to host.robots.ox.ac.uk (host.robots.ox.ac.uk)|129.67.94.152|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1999639040 (1.9G) [application/x-tar]
Saving to: ‘VOCtrainval_11-May-2012.tar’
VOCtrainval_11-May- 9%[> ] 181.22M 90.6MB/s
###Markdown
MOBILE INITIAL MODEL - OPTION 2
###Code
%cd models/research/deeplab/
!sh ./local_test_mobilenetv2.sh
model_dir = '/content/models/research/deeplab/datasets/pascal_voc_seg/exp/train_on_trainval_set_mobilenetv2/export/'
###Output
_____no_output_____
###Markdown
RUN INFERENCE
###Code
import numpy as np
import tensorflow as tf
from matplotlib import pyplot as plt
from matplotlib import gridspec
class DeepLabModel(object):
"""Class to load deeplab model and run inference."""
INPUT_TENSOR_NAME = 'ImageTensor:0'
OUTPUT_TENSOR_NAME = 'SemanticPredictions:0'
INPUT_SIZE = 513
FROZEN_GRAPH_NAME = 'frozen_inference_graph'
def __init__(self, tarball_path):
"""Creates and loads pretrained deeplab model."""
self.graph = tf.Graph()
graph_def = None
# Extract frozen graph from tar archive.
tar_file = tarfile.open(tarball_path)
for tar_info in tar_file.getmembers():
if self.FROZEN_GRAPH_NAME in os.path.basename(tar_info.name):
file_handle = tar_file.extractfile(tar_info)
graph_def = tf.GraphDef.FromString(file_handle.read())
break
tar_file.close()
if graph_def is None:
raise RuntimeError('Cannot find inference graph in tar archive.')
with self.graph.as_default():
tf.import_graph_def(graph_def, name='')
self.sess = tf.Session(graph=self.graph)
def run(self, image):
"""Runs inference on a single image.
Args:
image: A PIL.Image object, raw input image.
Returns:
resized_image: RGB image resized from original input image.
seg_map: Segmentation map of `resized_image`.
"""
width, height = image.size
resize_ratio = 1.0 * self.INPUT_SIZE / max(width, height)
target_size = (int(resize_ratio * width), int(resize_ratio * height))
resized_image = image.convert('RGB').resize(target_size, Image.ANTIALIAS)
batch_seg_map = self.sess.run(
self.OUTPUT_TENSOR_NAME,
feed_dict={self.INPUT_TENSOR_NAME: [np.asarray(resized_image)]})
seg_map = batch_seg_map[0]
return resized_image, seg_map
def create_pascal_label_colormap():
"""Creates a label colormap used in PASCAL VOC segmentation benchmark.
Returns:
A Colormap for visualizing segmentation results.
"""
colormap = np.zeros((256, 3), dtype=int)
ind = np.arange(256, dtype=int)
for shift in reversed(range(8)):
for channel in range(3):
colormap[:, channel] |= ((ind >> channel) & 1) << shift
ind >>= 3
return colormap
def label_to_color_image(label):
"""Adds color defined by the dataset colormap to the label.
Args:
label: A 2D array with integer type, storing the segmentation label.
Returns:
result: A 2D array with floating type. The element of the array
is the color indexed by the corresponding element in the input label
to the PASCAL color map.
Raises:
ValueError: If label is not of rank 2 or its value is larger than color
map maximum entry.
"""
if label.ndim != 2:
raise ValueError('Expect 2-D input label')
colormap = create_pascal_label_colormap()
if np.max(label) >= len(colormap):
raise ValueError('label value too large.')
return colormap[label]
def vis_segmentation(image, seg_map):
"""Visualizes input image, segmentation map and overlay view."""
plt.figure(figsize=(15, 5))
grid_spec = gridspec.GridSpec(1, 4, width_ratios=[6, 6, 6, 1])
plt.subplot(grid_spec[0])
plt.imshow(image)
plt.axis('off')
plt.title('input image')
plt.subplot(grid_spec[1])
seg_image = label_to_color_image(seg_map).astype(np.uint8)
plt.imshow(seg_image)
plt.axis('off')
plt.title('segmentation map')
plt.subplot(grid_spec[2])
plt.imshow(image)
plt.imshow(seg_image, alpha=0.7)
plt.axis('off')
plt.title('segmentation overlay')
unique_labels = np.unique(seg_map)
ax = plt.subplot(grid_spec[3])
plt.imshow(
FULL_COLOR_MAP[unique_labels].astype(np.uint8), interpolation='nearest')
ax.yaxis.tick_right()
plt.yticks(range(len(unique_labels)), LABEL_NAMES[unique_labels])
plt.xticks([], [])
ax.tick_params(width=0.0)
plt.grid('off')
plt.show()
LABEL_NAMES = np.asarray([
'background', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus',
'car', 'cat', 'chair', 'cow', 'diningtable', 'dog', 'horse', 'motorbike',
'person', 'pottedplant', 'sheep', 'sofa', 'train', 'tv'
])
FULL_LABEL_MAP = np.arange(len(LABEL_NAMES)).reshape(len(LABEL_NAMES), 1)
FULL_COLOR_MAP = label_to_color_image(FULL_LABEL_MAP)
import os
import tarfile
_MODEL_NAME = 'frozen_inference_graph.pb'
_TARBALL_NAME = 'deeplab_model.tar.gz'
model_path = os.path.join(model_dir, _MODEL_NAME)
download_path = os.path.join(model_dir, _TARBALL_NAME)
with tarfile.open(download_path, "w:gz") as tar:
tar.add(model_path)
MODEL = DeepLabModel(download_path)
print('model loaded successfully!')
from google.colab import files
from os import path
from PIL import Image
uploaded = files.upload()
for name, data in uploaded.items():
with open('img.jpg', 'wb') as f:
f.write(data)
f.close()
print('saved file ' + name)
im = Image.open(name)
resized_im, seg_map = MODEL.run(im)
vis_segmentation(resized_im, seg_map)
###Output
_____no_output_____ |
luhns_algorithm.ipynb | ###Markdown
Steps: 1. Multiply every 2nd digit by 2, starting from the 2nd-last digit, and then add those digits together. 2. Add that number to the sum of the digits that were not multiplied by 2. 3. Find the remainder when that total is divided by 10; if the remainder is 0, the number is valid!
###Code
def check_validity_number(card_number):
num_list = list(map(int, card_number))
    #print("list of numbers", num_list)
num_list_rev = num_list[::-1]
    #print("reversed list of numbers", num_list_rev)
multiplied_numbers = []
single_numbers = []
for index,number in enumerate(num_list_rev):
if index % 2 != 0:
m = str(2 * number).zfill(2)
multiplied_numbers.append(int(m[0])) if int(m[0]) != 0 else None
multiplied_numbers.append(int(m[1])) if int(m[1]) != 0 else None
else:
single_numbers.append(number)
sum_multiplied_numbers = sum(multiplied_numbers)
sum_single_numbers = sum(single_numbers)
#print(f'multiplied_numbers {multiplied_numbers} - sum {sum_multiplied_numbers}')
#print(f'single_numbers {single_numbers} - sum {sum_single_numbers}')
return True if (sum_multiplied_numbers + sum_single_numbers) % 10 == 0 else False
#Testing
d = {True: 'is a valid credit card number', False: 'is not a valid credit card number'}
card = "371449635398431"
print(f'Card {card} ',d[check_validity_number(card)])
card = "371449635398430"
print(f'Card {card} ',d[check_validity_number(card)])
###Output
Card 371449635398431 is a valid credit card number
Card 371449635398430 is not a valid credit card number
|
Hello TensorFlow2.ipynb | ###Markdown
Hello TensorFlow 2.0 - Your First Program 'Hello, World' is the program a beginner traditionally writes first. In the same spirit, I am writing a first TensorFlow 2.0 program to explain how TensorFlow 2.0 works; it is called 'Hello, TensorFlow 2.0'. When creating neural networks, the sample I make here is one that learns the relationship between two numbers. For example, if you were writing code for a function like this, you would already know the 'rules': ```float calc_function(float x){ float y = (2 * x) - 1; return y;}```So how would you train a neural network to do the equivalent task? Here is a hint: use data! By feeding it a set of Xs and a set of Ys, it should be able to figure out the relationship between them. So let's step through it step by step! InstallLet's start by installing TensorFlow 2.0. The leading '!' executes the command in the command environment (cmd) from the Jupyter Notebook. If you have no GPU on the local computer, run this command: !pip install tensorflow==2.0.0-alpha0. If you have a GPU on the local computer, run this command instead: !pip install tensorflow-gpu==2.0.0-alpha0. Note that the GPU variant is left commented out in the cell below.
###Code
!pip install tensorflow==2.0.0-alpha0 # if you have no GPU on the local computer
# !pip install tensorflow-gpu==2.0.0-alpha0 # if you have GPU on the local computer
###Output
Requirement already satisfied: tensorflow==2.0.0-alpha0 in /Users/synabreu/anaconda3/lib/python3.6/site-packages (2.0.0a0)
Requirement already satisfied: keras-applications>=1.0.6 in /Users/synabreu/anaconda3/lib/python3.6/site-packages (from tensorflow==2.0.0-alpha0) (1.0.7)
Requirement already satisfied: google-pasta>=0.1.2 in /Users/synabreu/anaconda3/lib/python3.6/site-packages (from tensorflow==2.0.0-alpha0) (0.1.4)
Requirement already satisfied: astor>=0.6.0 in /Users/synabreu/anaconda3/lib/python3.6/site-packages (from tensorflow==2.0.0-alpha0) (0.7.1)
Requirement already satisfied: wheel>=0.26 in /Users/synabreu/anaconda3/lib/python3.6/site-packages (from tensorflow==2.0.0-alpha0) (0.33.1)
Requirement already satisfied: keras-preprocessing>=1.0.5 in /Users/synabreu/anaconda3/lib/python3.6/site-packages (from tensorflow==2.0.0-alpha0) (1.0.9)
Requirement already satisfied: numpy<2.0,>=1.14.5 in /Users/synabreu/anaconda3/lib/python3.6/site-packages (from tensorflow==2.0.0-alpha0) (1.16.2)
Requirement already satisfied: six>=1.10.0 in /Users/synabreu/anaconda3/lib/python3.6/site-packages (from tensorflow==2.0.0-alpha0) (1.12.0)
Requirement already satisfied: tb-nightly<1.14.0a20190302,>=1.14.0a20190301 in /Users/synabreu/anaconda3/lib/python3.6/site-packages (from tensorflow==2.0.0-alpha0) (1.14.0a20190301)
Requirement already satisfied: absl-py>=0.7.0 in /Users/synabreu/anaconda3/lib/python3.6/site-packages (from tensorflow==2.0.0-alpha0) (0.7.0)
Requirement already satisfied: tf-estimator-nightly<1.14.0.dev2019030116,>=1.14.0.dev2019030115 in /Users/synabreu/anaconda3/lib/python3.6/site-packages (from tensorflow==2.0.0-alpha0) (1.14.0.dev2019030115)
Requirement already satisfied: protobuf>=3.6.1 in /Users/synabreu/anaconda3/lib/python3.6/site-packages (from tensorflow==2.0.0-alpha0) (3.7.0)
Requirement already satisfied: grpcio>=1.8.6 in /Users/synabreu/anaconda3/lib/python3.6/site-packages (from tensorflow==2.0.0-alpha0) (1.19.0)
Requirement already satisfied: gast>=0.2.0 in /Users/synabreu/anaconda3/lib/python3.6/site-packages (from tensorflow==2.0.0-alpha0) (0.2.2)
Requirement already satisfied: termcolor>=1.1.0 in /Users/synabreu/anaconda3/lib/python3.6/site-packages (from tensorflow==2.0.0-alpha0) (1.1.0)
Requirement already satisfied: h5py in /Users/synabreu/anaconda3/lib/python3.6/site-packages (from keras-applications>=1.0.6->tensorflow==2.0.0-alpha0) (2.9.0)
Requirement already satisfied: werkzeug>=0.11.15 in /Users/synabreu/anaconda3/lib/python3.6/site-packages (from tb-nightly<1.14.0a20190302,>=1.14.0a20190301->tensorflow==2.0.0-alpha0) (0.14.1)
Requirement already satisfied: markdown>=2.6.8 in /Users/synabreu/anaconda3/lib/python3.6/site-packages (from tb-nightly<1.14.0a20190302,>=1.14.0a20190301->tensorflow==2.0.0-alpha0) (3.0.1)
Requirement already satisfied: setuptools in /Users/synabreu/anaconda3/lib/python3.6/site-packages (from protobuf>=3.6.1->tensorflow==2.0.0-alpha0) (40.8.0)
###Markdown
ImportLet me import TensorFlow and call it tf for ease of use. We then import a library called numpy, which helps us to represent our data as lists easily and quickly. The framework for defining a neural network as a set of Sequential layers is called keras, so we import that too. In addition, we confirm the installed TensorFlow version.
###Code
import tensorflow as tf
import numpy as np
from tensorflow import keras
# check the TensorFlow version out
print(tf.__version__)
###Output
2.0.0-dev20190308
###Markdown
Define and Compile the Neural NetworkFirst, we will create the simplest possible neural network. It has 1 layer, and that layer has 1 neuron, and the input shape to it is just 1 value.
###Code
model = tf.keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])])
###Output
_____no_output_____
###Markdown
Now we compile our Neural Network. So we have to specify 2 functions, a loss and an optimizer. If you've seen lots of math for machine learning, here's where it's usually used, but in this case it's nicely encapsulated in functions for you. We already know that in our function, the relationship between the numbers is y=2x-1. When the computer is trying to 'learn' that, it makes a guess, maybe y=10x+10. The LOSS function measures the guessed answers against the known correct answers and measures how well or how badly it did. It then uses the OPTIMIZER function to make another guess. Based on how the loss function went, it will try to minimize the loss. At that point maybe it will come up with something like y=5x+5, which, while still pretty bad, is closer to the correct result (i.e. the loss is lower). It will repeat this for the number of EPOCHS which you will see shortly. But first, we tell it to use 'MEAN SQUARED ERROR' for the loss and 'STOCHASTIC GRADIENT DESCENT' for the optimizer. Over time you will learn the different and appropriate loss and optimizer functions for different scenarios.
###Code
model.compile(optimizer='sgd', loss='mean_squared_error')
###Output
_____no_output_____
###Markdown
Feeding the DataOkay, we'll feed in some data. In this case, we are taking 6 xs and 6 ys. You can see that the relationship between these is that y=2x-1, so where x = -1, y = -3, and so on. A python library called 'Numpy' provides lots of array-type data structures that are a de facto standard way of doing it. We declare that we want to use these by specifying the values as an np.array().
###Code
xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float)
###Output
_____no_output_____
###Markdown
Training the Neural Network The process of training the neural network, where it 'learns' the relationship between the Xs and Ys, is in the **model.fit** call. This is where it will go through the loop we spoke about above: making a guess, measuring how good or bad it is (aka the loss), using the optimizer to make another guess, etc. It will do this for the number of epochs you specify. When you run this code, you'll see the loss on the right hand side.
###Code
model.fit(xs, ys, epochs=500)
###Output
Epoch 1/500
6/6 [==============================] - 0s 14ms/sample - loss: 0.9182
Epoch 2/500
6/6 [==============================] - 0s 222us/sample - loss: 0.9108
Epoch 3/500
6/6 [==============================] - 0s 613us/sample - loss: 0.9036
Epoch 4/500
6/6 [==============================] - 0s 259us/sample - loss: 0.8964
Epoch 5/500
6/6 [==============================] - 0s 265us/sample - loss: 0.8895
Epoch 6/500
6/6 [==============================] - 0s 433us/sample - loss: 0.8826
Epoch 7/500
6/6 [==============================] - 0s 295us/sample - loss: 0.8759
Epoch 8/500
6/6 [==============================] - 0s 338us/sample - loss: 0.8693
Epoch 9/500
6/6 [==============================] - 0s 738us/sample - loss: 0.8628
Epoch 10/500
6/6 [==============================] - 0s 513us/sample - loss: 0.8564
Epoch 11/500
6/6 [==============================] - 0s 414us/sample - loss: 0.8502
Epoch 12/500
6/6 [==============================] - 0s 548us/sample - loss: 0.8440
Epoch 13/500
6/6 [==============================] - 0s 754us/sample - loss: 0.8380
Epoch 14/500
6/6 [==============================] - 0s 644us/sample - loss: 0.8321
Epoch 15/500
6/6 [==============================] - 0s 431us/sample - loss: 0.8263
Epoch 16/500
6/6 [==============================] - 0s 583us/sample - loss: 0.8206
Epoch 17/500
6/6 [==============================] - 0s 537us/sample - loss: 0.8150
Epoch 18/500
6/6 [==============================] - 0s 553us/sample - loss: 0.8095
Epoch 19/500
6/6 [==============================] - 0s 521us/sample - loss: 0.8041
Epoch 20/500
6/6 [==============================] - 0s 624us/sample - loss: 0.7988
Epoch 21/500
6/6 [==============================] - 0s 539us/sample - loss: 0.7935
Epoch 22/500
6/6 [==============================] - 0s 587us/sample - loss: 0.7884
Epoch 23/500
6/6 [==============================] - 0s 683us/sample - loss: 0.7834
Epoch 24/500
6/6 [==============================] - 0s 439us/sample - loss: 0.7784
Epoch 25/500
6/6 [==============================] - 0s 748us/sample - loss: 0.7736
Epoch 26/500
6/6 [==============================] - 0s 723us/sample - loss: 0.7688
Epoch 27/500
6/6 [==============================] - 0s 439us/sample - loss: 0.7641
Epoch 28/500
6/6 [==============================] - 0s 345us/sample - loss: 0.7595
Epoch 29/500
6/6 [==============================] - 0s 332us/sample - loss: 0.7549
Epoch 30/500
6/6 [==============================] - 0s 311us/sample - loss: 0.7504
Epoch 31/500
6/6 [==============================] - 0s 346us/sample - loss: 0.7461
Epoch 32/500
6/6 [==============================] - 0s 390us/sample - loss: 0.7417
Epoch 33/500
6/6 [==============================] - 0s 600us/sample - loss: 0.7375
Epoch 34/500
6/6 [==============================] - 0s 280us/sample - loss: 0.7333
Epoch 35/500
6/6 [==============================] - 0s 296us/sample - loss: 0.7292
Epoch 36/500
6/6 [==============================] - 0s 359us/sample - loss: 0.7252
Epoch 37/500
6/6 [==============================] - 0s 366us/sample - loss: 0.7212
Epoch 38/500
6/6 [==============================] - 0s 430us/sample - loss: 0.7173
Epoch 39/500
6/6 [==============================] - 0s 357us/sample - loss: 0.7134
Epoch 40/500
6/6 [==============================] - 0s 278us/sample - loss: 0.7096
Epoch 41/500
6/6 [==============================] - 0s 419us/sample - loss: 0.7059
Epoch 42/500
6/6 [==============================] - 0s 298us/sample - loss: 0.7022
Epoch 43/500
6/6 [==============================] - 0s 298us/sample - loss: 0.6986
Epoch 44/500
6/6 [==============================] - 0s 400us/sample - loss: 0.6950
Epoch 45/500
6/6 [==============================] - 0s 271us/sample - loss: 0.6915
Epoch 46/500
6/6 [==============================] - 0s 973us/sample - loss: 0.6881
Epoch 47/500
6/6 [==============================] - 0s 291us/sample - loss: 0.6847
Epoch 48/500
6/6 [==============================] - 0s 257us/sample - loss: 0.6814
Epoch 49/500
6/6 [==============================] - 0s 302us/sample - loss: 0.6781
Epoch 50/500
6/6 [==============================] - 0s 587us/sample - loss: 0.6748
Epoch 51/500
6/6 [==============================] - 0s 467us/sample - loss: 0.6716
Epoch 52/500
6/6 [==============================] - 0s 458us/sample - loss: 0.6685
Epoch 53/500
6/6 [==============================] - 0s 602us/sample - loss: 0.6654
Epoch 54/500
6/6 [==============================] - 0s 380us/sample - loss: 0.6623
Epoch 55/500
6/6 [==============================] - 0s 471us/sample - loss: 0.6593
Epoch 56/500
6/6 [==============================] - 0s 447us/sample - loss: 0.6563
Epoch 57/500
6/6 [==============================] - 0s 330us/sample - loss: 0.6534
Epoch 58/500
6/6 [==============================] - 0s 416us/sample - loss: 0.6505
Epoch 59/500
6/6 [==============================] - 0s 281us/sample - loss: 0.6476
Epoch 60/500
6/6 [==============================] - 0s 221us/sample - loss: 0.6448
Epoch 61/500
6/6 [==============================] - 0s 751us/sample - loss: 0.6421
Epoch 62/500
6/6 [==============================] - 0s 675us/sample - loss: 0.6393
Epoch 63/500
6/6 [==============================] - 0s 453us/sample - loss: 0.6366
Epoch 64/500
6/6 [==============================] - 0s 409us/sample - loss: 0.6340
Epoch 65/500
6/6 [==============================] - 0s 509us/sample - loss: 0.6314
Epoch 66/500
6/6 [==============================] - 0s 356us/sample - loss: 0.6288
Epoch 67/500
6/6 [==============================] - 0s 687us/sample - loss: 0.6262
Epoch 68/500
6/6 [==============================] - 0s 552us/sample - loss: 0.6237
Epoch 69/500
6/6 [==============================] - 0s 622us/sample - loss: 0.6212
Epoch 70/500
6/6 [==============================] - 0s 302us/sample - loss: 0.6188
Epoch 71/500
6/6 [==============================] - 0s 419us/sample - loss: 0.6163
Epoch 72/500
6/6 [==============================] - 0s 391us/sample - loss: 0.6140
Epoch 73/500
6/6 [==============================] - 0s 301us/sample - loss: 0.6116
Epoch 74/500
6/6 [==============================] - 0s 199us/sample - loss: 0.6093
Epoch 75/500
6/6 [==============================] - 0s 308us/sample - loss: 0.6070
Epoch 76/500
6/6 [==============================] - 0s 290us/sample - loss: 0.6047
Epoch 77/500
6/6 [==============================] - 0s 321us/sample - loss: 0.6024
Epoch 78/500
6/6 [==============================] - 0s 736us/sample - loss: 0.6002
Epoch 79/500
6/6 [==============================] - 0s 313us/sample - loss: 0.5980
Epoch 80/500
6/6 [==============================] - 0s 1ms/sample - loss: 0.5959
Epoch 81/500
6/6 [==============================] - 0s 317us/sample - loss: 0.5937
Epoch 82/500
6/6 [==============================] - 0s 434us/sample - loss: 0.5916
Epoch 83/500
6/6 [==============================] - 0s 407us/sample - loss: 0.5895
Epoch 84/500
6/6 [==============================] - 0s 302us/sample - loss: 0.5874
Epoch 85/500
6/6 [==============================] - 0s 571us/sample - loss: 0.5854
Epoch 86/500
6/6 [==============================] - 0s 449us/sample - loss: 0.5834
Epoch 87/500
6/6 [==============================] - 0s 586us/sample - loss: 0.5814
Epoch 88/500
6/6 [==============================] - 0s 359us/sample - loss: 0.5794
Epoch 89/500
6/6 [==============================] - 0s 380us/sample - loss: 0.5774
Epoch 90/500
6/6 [==============================] - 0s 249us/sample - loss: 0.5755
Epoch 91/500
6/6 [==============================] - 0s 239us/sample - loss: 0.5736
Epoch 92/500
6/6 [==============================] - 0s 266us/sample - loss: 0.5717
Epoch 93/500
6/6 [==============================] - 0s 264us/sample - loss: 0.5698
Epoch 94/500
6/6 [==============================] - 0s 262us/sample - loss: 0.5680
Epoch 95/500
6/6 [==============================] - 0s 300us/sample - loss: 0.5661
Epoch 96/500
6/6 [==============================] - 0s 444us/sample - loss: 0.5643
Epoch 97/500
6/6 [==============================] - 0s 211us/sample - loss: 0.5625
Epoch 98/500
6/6 [==============================] - 0s 234us/sample - loss: 0.5607
Epoch 99/500
6/6 [==============================] - 0s 733us/sample - loss: 0.5590
###Markdown
Finally, you have a model that has been trained to learn the relationship between X and Y. You can use the **model.predict** method to have it figure out the Y for a previously unknown X. For example, if X = 10, what do you think Y will be? (You might expect exactly 19, but the prediction will usually be close to, rather than exactly, that value, since the model was trained on only six points and works with learned weights rather than an exact rule.) Take a guess before you run this code:
###Code
print(model.predict([10.0]))
###Output
[[17.57591]]
|
EJERCICIOS_TALLER_1.ipynb | ###Markdown
SHIOBAM VALENTINA ESPITIA PRADA EXERCISE 1
###Code
def datos():
primern = (input("Escriba su primer nombre: "))
segundon = (input("Si tiene segundo nombre diga Si de lo contrario diga No: "))
if segundon == "Si":
segundo = (input("Escriba su segundo nombre: "))
PrimerA = input("Escriba su primer apellido: ")
seg = (input("Si tiene segundo apellido diga Si de lo contrario diga No: "))
if seg == "Si":
SegAp =str((input("Escriba su segundo apellido: ")))
Edad = int(input("Escriba su edad: "))
iden_via = input("Escriba la identificacion de la vida: ")
num_via = int(input("Escriba el numero que acompaña a la via: "))
marca_num1 = int(input("Escriba el numero de marca 1: "))
letra = input("Escriba la letra luego del numero: ")
marca_num2 = int(input("Escriba el numero de marca 2: "))
casa = (input("Escriba numero de casa: "))
if segundon == "No" and seg == "No":
print(f"Su hombre es {primern} {PrimerA} su edad es {Edad} y\nLa direccion es {iden_via} {num_via} # {marca_num1}{letra} - {marca_num2} y la casa es {casa}" )
if segundon == "No":
print(f"Su hombre es {primern} {PrimerA} {SegAp} su edad es {Edad} y\nLa direccion es {iden_via} {num_via} # {marca_num1}{letra} - {marca_num2} y la casa es {casa}" )
if seg == "No":
print(f"Su hombre es {primern} {segundo} {PrimerA} su edad es {Edad} y\nLa direccion es {iden_via} {num_via} # {marca_num1}{letra} - {marca_num2} y la casa es {casa}" )
else:
print(f"Su hombre es {primern} {segundo} {PrimerA} {SegAp} su edad es {Edad} y\nLa direccion es {iden_via} {num_via} # {marca_num1}{letra} - {marca_num2} y la casa es {casa}" )
datos()
###Output
Escriba su primer nombre: Shiobam
Si tiene segundo nombre diga Si de lo contrario diga No: Si
Escriba su segundo nombre: Valentina
Escriba su primer apellido: Espitia
Si tiene segundo apellido diga Si de lo contrario diga No: No
Escriba su edad: 18
Escriba la identificacion de la vida: Carrera
Escriba el numero que acompaña a la via: 10
Escriba el numero de marca 1: 15
Escriba la letra luego del numero: d
Escriba el numero de marca 2: 24
Escriba numero de casa: apto 303
Su hombre es Shiobam Valentina Espitia su edad es 18 y
La direccion es Carrera 10 # 15d - 24 y la casa es apto 303
###Markdown
EXERCISE 2
###Code
def Nombre():
n = (input("Escriba su nombre: "))
resul = n
return f"Hola {resul} "
Nombre()
###Output
Escriba su nombre: Valentina
###Markdown
EXERCISE 3
###Code
def Area():
CmLados = int(input("Escriba cuanto mide uno de los lados del cuadrado: "))
resul = CmLados**2
return f"El area del cuadrado es: {resul} centimetros cuadrados"
Area()
###Output
Escriba cuanto mide uno de los lados del cuadrado: 6
###Markdown
EXERCISE 4
###Code
def AreaRec():
Base = int(input("Escriba en cm la base del rectangulo: "))
Altura = int(input("Escriba en cm la altura del rectangulo: "))
resul = Base * Altura
return f"El area del rectangulo es: {resul} centimetros cuadrados"
AreaRec()
###Output
Escriba en cm la base del rectangulo: 15
Escriba en cm la altura del rectangulo: 7
###Markdown
EXERCISE 5
###Code
def AreaTria():
Base = int(input("Escriba en cm la base del triangulo: "))
Altura = int(input("Escriba en cm la altura del triangulo: "))
resul = int((Base * Altura)/2)
return f"El area del triangulo es: {resul} centimetros cuadrados"
AreaTria()
###Output
Escriba en cm la base del triangulo: 12
Escriba en cm la altura del triangulo: 15
###Markdown
EXERCISE 6
###Code
def Botellas():
Bot1L = int(input("Escriba cuantas botellas de 1 litro reciclo: "))
Bot1mL = int(input("Escriba cuantas botellas de 1.5 litros reciclo: "))
Bot2L = int(input("Escriba cuantas botellas de 2 litros reciclo: "))
resul = Bot1L * 1000 + Bot1mL * 2000 + Bot2L * 3000
return f"Lo que el usuario debe recibir es: {resul}"
Botellas()
###Output
Escriba cuantas botellas de 1 litro reciclo: 10
Escriba cuantas botellas de 1.5 litros reciclo: 10
Escriba cuantas botellas de 2 litros reciclo: 30
###Markdown
EXERCISE 7
###Code
def Comida():
Valor = int(input("Escriba el costo de su comida: "))
Propi = int(input("Escriba el valor de propina: "))
resul = ((Propi/100)* Valor)+(Valor*0.08)+Valor
return f"Su valor total es {resul}"
Comida()
###Output
Escriba el costo de su comida: 20000
Escriba el valor de propina: 10
###Markdown
EXERCISE 8
###Code
def producto():
A = int(input("Escriba cuantos productos del A compro: "))
B = int(input("Escriba cuantos productos del B compro: "))
Peso = A*123 + B*35
if Peso%2 == 0:
print("Su peso es par y es: " , Peso )
else:
print("No es par y no le podemos vender si no es par, su peso es:", Peso)
producto()
###Output
Escriba cuantos productos del A compro: 1
Escriba cuantos productos del B compro: 2
No es par y no le podemos vender si no es par, su peso es: 193
###Markdown
EXERCISE 9
###Code
def parqueadero():
vehiculo = (input("¿Que tipo de vehiculo tiene: "))
if vehiculo == "carro":
vcarro = int(input("¿Cuantos minutos lleva su carro estacionado?: "))
print("Su valor a pagar es:", vcarro*70)
pago = int(input("¿Con cuanto dinero va a pagar?: "))
print("Su cambio es: ", pago - (vcarro*70) )
elif vehiculo == "moto":
vmoto = int(input("¿Cuantos minutos lleva su moto estacionada?: "))
print("Su valor a pagar es: ", vmoto*42)
pago1 = int(input("¿Con cuanto dinero va a pagar?: "))
print("Su cambio es: ", pago1 - (vmoto*42) )
elif vehiculo == "bicicleta":
vbici = int(input("¿Cuantos minutos lleva su bicicleta estacionada?: "))
print("Su valor a pagar es: ", vbici*10)
pago2 = int(input("¿Con cuanto dinero va a pagar?: "))
print("Su cambio es: ", pago2 - (vbici*10) )
parqueadero()
###Output
¿Que tipo de vehiculo tiene: moto
¿Cuantos minutos lleva su moto estacionada?: 160
Su valor a pagar es: 6720
¿Con cuanto dinero va a pagar?: 20000
Su cambio es: 13280
###Markdown
EXERCISE 10
###Code
import numpy as np
def Circulo():
radio = int(input("Escriba el radio de el circulo: "))
perimetro = 2*np.pi*radio
area = np.pi*radio**2
return f"El perimetro del circulo es: {perimetro} centimetros y su area es: {area} centimetros cuadrados"
Circulo()
###Output
Escriba el radio de el circulo: 3
###Markdown
EXERCISE 11
###Code
from datetime import datetime
def Años():
fecha_nacimiento=(input("ingresa le fecha de tu nacimiento con el siguiente formato: DD/MM/YYYY"))
fecha_actual=datetime.today().strftime('%d/%m/%Y')
fecha_nacimiento=fecha_nacimiento.split('/')
fecha_actual=fecha_actual.split('/')
if int(fecha_actual[1]) > int(fecha_nacimiento[1]) or int(fecha_actual[1]) == int(fecha_nacimiento[1]) and int(fecha_actual[0]) >= int(fecha_nacimiento[0]):
years=int(fecha_actual[2])-int(fecha_nacimiento[2])
print('Tienes ' + str(years) + ' años')
else:
years=int(fecha_actual[2])-int(fecha_nacimiento[2])-1
print('Tienes ' + str(years) + ' años')
Años()
###Output
ingresa le fecha de tu nacimiento con el siguiente formato: DD/MM/YYYY10/05/2001
Tienes 20 años
###Markdown
EXERCISE 12
###Code
def Temperatura():
Cel = int(input("Escriba los grados en Celsius: "))
Fahrenheit = Cel * 1.8 + 32
Kelvin = Cel + 273.15
return f"Loa grados de Celsius a Fahrenheit son: {Fahrenheit} y de Celsius a Kelvin son: {Kelvin}"
Temperatura()
###Output
Escriba los grados en Celsius: 100
###Markdown
EXERCISE 13
###Code
lista= []
cantidad = int(input("Cuantos datos desea agregar: "))
while cantidad>0:
dato = input("Ingrese sus datos: ")
lista.append(dato)
cantidad-=1
print("Contenido lista",lista)
for i in range(len(lista)):
lista[i] = int(lista[i])
lista.sort()
Max = (max(lista))
Min = (min(lista))
Sum = (sum(lista))
print(f"El valor maximo es: {Max} el valor minimo es: {Min} y la suma de todos los elementos es {Sum}")
###Output
Cuantos datos desea agregar: 10
Ingrese sus datos: 2
Ingrese sus datos: 4
Ingrese sus datos: 6
Ingrese sus datos: 8
Ingrese sus datos: 10
Ingrese sus datos: 12
Ingrese sus datos: 14
Ingrese sus datos: 16
Ingrese sus datos: 18
Ingrese sus datos: 20
Contenido lista ['2', '4', '6', '8', '10', '12', '14', '16', '18', '20']
El valor maximo es: 20 el valor minimo es: 2 y la suma de todos los elementos es 110
###Markdown
EXERCISE 14
###Code
def dias(mes):
if mes.lower() in ("enero", "marzo","mayo","julio","agosto","octubre","diciembre"):
return "31"
elif mes.lower() == "febrero":
return "28/29"
else:
return "30"
meses = input("Ingrese el mes: ")
print(dias(meses))
###Output
Ingrese el mes: septiembre
30
###Markdown
EXERCISE 15
###Code
def edades():
n= int(input("Escriba su edad: "))
if n<18:
print("MENOR DE EDAD")
elif n > 18 and n< 45:
print("ADULTO JOVEN")
elif n > 45 and n< 60:
print("ADULTO")
elif n>60:
print("ADULTO MAYOR")
edades()
###Output
Escriba su edad: 46
ADULTO
###Markdown
EXERCISE 16
###Code
def Caras():
valor= int(input("Ingrese el valor de su billete: "))
if valor == 1000:
print("La cara es: Jorge Eliecer Gaitan")
elif valor == 2000:
print("La cara es: Francisco de Paula Santander")
elif valor == 5000:
print("La cara es: Jose Asuncion Silva")
elif valor == 10000:
print("La cara es: Policarpa Salavarrieta")
elif valor == 20000:
print("La cara es: Julio Garavito Armero")
elif valor == 50000:
print("La cara es: Jorge Isaacs")
elif valor == 100000:
print("La cara es: Carlos Lleras Restrepo")
Caras()
###Output
Ingrese el valor de su billete: 5000
La cara es: Jose Asuncion Silva
###Markdown
EXERCISE 17
###Code
New = [3,5,1,9,10,11,32,21,5,1,209,432,1,32,45]
#Agregar un elemento
New.append(10)
#Agregar un elemento
New.append(3)
#Agregar varios elementos
New.extend([5,6,7])
#Eliminar el ultimo elemento
New.pop()
#Ordenar lista ascendente
New.sort()
#Eliminar el ultimo elemento
New.pop()
#Ordenar lista descendente
New.sort(reverse=True)
#Eliminar posicion 10
New.pop(10)
#Agrege el 10
New.append(10)
#Agrege el 345
New.append(345)
#Agrege el 1
New.append(1)
#Elimine el 9
New.remove(9)
#Invierta el orden de la lista
New.reverse()
#Organice la lista
New.sort()
New
#Numeros pares de la lista al cuadrado
for num in New:
if num % 2 == 0:
print(num**2, end = " ")
#Numeros multiplos de 3 al cubo
for num in New:
if num % 3 == 0:
print(num**3, end = " ")
#Elimine ultimo elemento
New.pop()
#Elimine ultimo elemento
New.pop()
New
###Output
_____no_output_____ |
DNA_Machine_Learning.ipynb | ###Markdown
Methods to Use in Machine Learning on Seq Data 1. Encode the sequence information as an ordinal vector and work with that directly, 2. One-hot encode the sequence letters and use the resulting array, and 3. Treat the DNA sequence as a language (text) and use various "language processing" methods
###Code
# Function to convert a DNA sequence string to a numpy array
# converts to lower case and changes any non-"acgt" character to "z"
import numpy as np
import re
def string_to_array(my_string):
my_string = my_string.lower()
my_string = re.sub('[^acgt]',"z",my_string)
my_array = np.array(list(my_string))
return my_array
#testing Our Function
#string_to_array("actgmamnklhh")
###Output
_____no_output_____
###Markdown
Label Encoder
###Code
from sklearn.preprocessing import LabelEncoder
label_encoder = LabelEncoder()
label_encoder.fit(np.array(["a","c","g","t","z"]))
###Output
_____no_output_____
###Markdown
It returns a numpy array with a =0.25 , c =0.5 ,g =0.75 , t =1.0 , z =0
###Code
def ordinal_encoder(my_array):
integer_encoded = label_encoder.transform(my_array)
float_encoded = integer_encoded.astype(float)
float_encoded[float_encoded == 0] = 0.25 #A
float_encoded[float_encoded == 1] = 0.5 #C
    float_encoded[float_encoded == 2] = 0.75 #G
    float_encoded[float_encoded == 3] = 1.00 #T
float_encoded[float_encoded==4] = 0 # Other character zero
return float_encoded
# testing
test_seq = "zzACTACGMNCC"
ordinal_encoder(string_to_array(test_seq))
###Output
_____no_output_____
###Markdown
One Hot encoding DNA Sequence data Another approach is to use one hot encoding to represent the DNA sequence. This is widely used in deep learning methods and lends itself well to algorithms like convolutional neural nerworks. In this example, "ATCG" would become[0,0,0,1],[0,0,1,0],[0,1,0,0],[1,0,0,0]
###Code
# Function to one-hot encode a DNA sequence String
# non "acgt" bases (n) are 0000
# returnsa LX4 numpy array
from sklearn.preprocessing import OneHotEncoder
def one_hot_encoder(my_array):
integer_encoded = label_encoder.transform(my_array)
onehot_encoder = OneHotEncoder(sparse = False , dtype = int , n_values = 5)
integer_encoded = integer_encoded.reshape(len(integer_encoded),1)
onehot_encoded = onehot_encoder.fit_transform(integer_encoded)
onehot_encoded = np.delete(onehot_encoded,-1,1)
return onehot_encoded
# test the above function
test_sequence = "AACGCGGTTNM"
one_hot_encoder(string_to_array(test_sequence))
###Output
C:\ProgramData\Anaconda3\lib\site-packages\sklearn\preprocessing\_encoders.py:373: DeprecationWarning: Passing 'n_values' is deprecated in version 0.20 and will be removed in 0.22. You can use the 'categories' keyword instead. 'n_values=n' corresponds to 'categories=[range(n)] * n_features'.
warnings.warn(msg, DeprecationWarning)
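###Markdown
As the deprecation warning above suggests, `n_values` has been replaced by the `categories` keyword in newer scikit-learn releases ('n_values=n' corresponds to 'categories=[range(n)] * n_features'). A minimal sketch of the same helper written that way; the function name `one_hot_encoder_v2` is just for illustration, and note that `sparse` is renamed `sparse_output` in very recent scikit-learn versions:
###Code
from sklearn.preprocessing import OneHotEncoder

def one_hot_encoder_v2(my_array):
    # label-encode a, c, g, t, z to integers 0..4, as before
    integer_encoded = label_encoder.transform(my_array).reshape(-1, 1)
    # 'categories' replaces the deprecated 'n_values=5'
    onehot_encoder = OneHotEncoder(categories=[range(5)], sparse=False, dtype=int)
    onehot_encoded = onehot_encoder.fit_transform(integer_encoded)
    # drop the last column (the "z"/other class), as in the original helper
    return np.delete(onehot_encoded, -1, 1)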
###Markdown
Treating DNA Sequence as a "Language" otherwise known as k-mer counting
###Code
def getkmers(seq ,size):
return [seq[x:x+size].lower() for x in range(len(seq)-size +1)]
my_seq = "CATGGCCATCCCCCCCCGAGCGGGGGGGGGG"
#getkmers(my_seq, size=10)
###Output
_____no_output_____
###Markdown
It returns a list of k-mer "words". You can then join the "words" into a "sentence" and then apply your favorite natural language processing methods.
###Code
words = getkmers(my_seq, size = 6)
sentence = " ".join(words)
sentence[:30]
# --- two more example sequences for comparison ---
my_seq2 = 'GATGGCCATCCCCGCCCGAGCGGGGGGGG'
my_seq3 = 'CATGGCCATCCCCGCCCGAGCGGGCGGGG'
sentence2 = " ".join(getkmers(my_seq2,size =6))
sentence3 = " ".join(getkmers(my_seq3, size = 6))
## Creating the Bag of Words Model
from sklearn.feature_extraction.text import CountVectorizer
cv = CountVectorizer()
x = cv.fit_transform([sentence, sentence2 , sentence3]).toarray()
###Output
_____no_output_____
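###Markdown
To sanity-check what the bag-of-words step produced, the learned vocabulary and the shape of the count matrix can be inspected. A minimal sketch, assuming `cv` and `x` from the cell above (`get_feature_names` is the method name in the scikit-learn version used here; newer releases call it `get_feature_names_out`):
###Code
# each column of x corresponds to one k-mer "word" seen in the three sentences
print(len(cv.get_feature_names()))  # vocabulary size
print(cv.get_feature_names()[:5])   # a few example features
print(x.shape)                      # (number of sentences, vocabulary size)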
###Markdown
Classification of gene function
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Let's open the data for human and see what we have
###Code
human = pd.read_table("datas/human_data/human_data.txt")
human.head()
###Output
_____no_output_____
###Markdown
We have some data for human DNA sequence coding regions and a class label. We also have data for Chimpanzee and a more divergent species, the dog. Let's get that.
###Code
chimp = pd.read_table("datas/chimp_data/chimp_data.txt")
dog = pd.read_table("datas/dog_data/dog_data.txt")
chimp.head() , dog.head()
###Output
_____no_output_____
###Markdown
Let's define a function to collect all possible overlapping k-mers of a specified length from any sequence string.
###Code
# Function to convert sequence strings into k-mer words, default size =6 (hexamer words)
def getkmers(sequence, size = 6):
return [sequence [x:x+size].lower() for x in range(len(sequence)-size+1)]
###Output
_____no_output_____
###Markdown
Now we can convert our training data sequences into short overlapping k-mers of length 6. Let's do that for each species of data we have, using our getkmers function.
###Code
human["words"] = human.apply(lambda x : getkmers(x["sequence"]), axis =1)
human = human.drop("sequence", axis = 1)
chimp["words"] = chimp.apply(lambda x : getkmers(x["sequence"]),axis =1)
chimp = chimp.drop("sequence", axis=1)
dog["words"] = dog.apply(lambda x:getkmers(x["sequence"]),axis =1)
dog = dog.drop("sequence", axis = 1)
###Output
_____no_output_____
###Markdown
Now our coding sequence data is changed to lowercase, split up into all possible k-mer words of length 6 and ready for the next step. Let's take a look
###Code
human.head()
human.columns
len(human.words[1])
human.shape, len(human.words[44][5])
###Output
_____no_output_____
###Markdown
Since we are going to use scikit-learn's natural language processing tools to do the k-mer counting, we now need to convert the lists of k-mers for each gene into string sentences of words that the count vectorizer can use. We can also make a y variable to hold the class labels. Let's do that now.
###Code
human_texts = list(human["words"])
for item in range(len(human_texts)):
human_texts[item] = " ".join(human_texts[item])
y_h = human.iloc[:,0].values
y_h
#human_texts[1]
###Output
_____no_output_____
###Markdown
Now let's do the same for chimp and dog.
###Code
chimp_text = list(chimp["words"])
for item in range(len(chimp_text)):
chimp_text[item] = " ".join(chimp_text[item])
y_c = chimp.iloc[: , 0].values # y_c for chimp
dog_texts = list(dog["words"])
for item in range(len(dog_texts)):
dog_texts[item] = " ".join(dog_texts[item])
y_d = dog.iloc[: , 0].values # y_d for dog
#y_c , y_d
###Output
_____no_output_____
###Markdown
Now let's review how to use sklearn's "Natural Language Processing" tools to convert our k-mer words into uniform length numerical vectors that represent counts for every k-mer in the vocabulary.
###Code
# Creating the Bag of Words model using CountVectorizer()
# This is equivalent to k-mer counting
# The n-gram size of 4 was previously determined by testing
from sklearn.feature_extraction.text import CountVectorizer
cv = CountVectorizer(ngram_range = (4,4))
x= cv.fit_transform(human_texts)
x_chimp = cv.transform(chimp_text)
x_dog = cv.transform(dog_texts)
###Output
_____no_output_____
###Markdown
Let's see what we have. For human we have 4380 genes converted into uniform-length feature vectors of 4-gram k-mer (length 6) counts. For chimp and dog we have the expected same number of features, with 1682 and 820 genes respectively.
###Code
print(x.shape,x_chimp.shape , x_dog.shape)
human["class"].value_counts().sort_index().plot.bar()
chimp["class"].value_counts().sort_index().plot.bar()
dog["class"].value_counts().sort_index().plot.bar()
###Output
_____no_output_____ |
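###Markdown
The feature matrices and class labels built above are everything needed to train a classifier for gene family prediction. A minimal sketch of one possible next step, using a multinomial naive Bayes model; the estimator choice, the 0.2 test split and the alpha value are assumptions for illustration, not part of the original notebook:
###Code
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score

# hold out part of the human data for evaluation
X_train, X_test, y_train, y_test = train_test_split(x, y_h, test_size=0.2, random_state=42)

clf = MultinomialNB(alpha=0.1)  # alpha chosen arbitrarily for this sketch
clf.fit(X_train, y_train)

print("human test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
# the same fitted model can be scored on the other species' matrices
print("chimp accuracy:", accuracy_score(y_c, clf.predict(x_chimp)))
print("dog accuracy:", accuracy_score(y_d, clf.predict(x_dog)))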
materials/4_pandas.ipynb | ###Markdown
1D analysis: `pandas`!
###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import pandas as pd
pd.set_option('max_rows', 6) # max number of rows to show in this notebook — to save space!
import seaborn as sns # for better style in plots
###Output
_____no_output_____
###Markdown
Reading in data to a dataframe. For 1D analysis, we are generally thinking about data that varies in time, so time series analysis. The `pandas` package is particularly suited to deal with this type of data, having very convenient methods for interpreting, searching through, and using time representations. Let's start with the example we started the class with: taxi rides in New York City.
###Code
df = pd.read_csv('../data/yellow_tripdata_2016-05-01_decimated.csv', parse_dates=[0, 2], index_col=[0])
###Output
_____no_output_____
###Markdown
What do all these (and other) input keyword arguments do?

* header: tells which row of the data file is the header, from which it will extract column names
* parse_dates: try to interpret the values in `[col]` or `[[col1, col2]]` as dates, to convert them into `datetime` objects.
* index_col: if no index column is given, an index counting from 0 is given to the rows. By inputting `index_col=[column integer]`, that column will be used as the index instead. This is usually done with the time information for the dataset.
* skiprows: can skip specific rows, `skiprows=[list of rows to skip numbered from start of file with 0]`, or number of rows to skip, `skiprows=N`.

We can check to make sure the date/time information has been read in as the index, which allows us to reference the other columns using this time information really easily:
###Code
df.index
###Output
_____no_output_____
###Markdown
From this we see that the index is indeed using the timing information in the file, and we can see that the `dtype` is `datetime`. Selecting rows and columns of dataIn particular, we will select rows based on the index. Since in this example we are indexing by time, we can use human-readable notation to select based on date/times themselves instead of index. Columns can be selected by name. We can now access the columns of the file using dictionary-like keyword arguments, like so:
###Code
df['trip_distance']
###Output
_____no_output_____
###Markdown
We can equivalently access the columns of data as if they are methods. This means that we can use tab autocomplete to see methods and data available in a dataframe.
###Code
df.trip_distance
###Output
_____no_output_____
###Markdown
We can plot in this way, too:
###Code
df['trip_distance'].plot(figsize=(14,6))
###Output
_____no_output_____
###Markdown
Simple data selectionOne of the biggest benefits of using `pandas` is being able to easily reference the data in intuitive ways. For example, because we set up the index of the dataframe to be the date and time, we can pull out data using dates. In the following, we pull out all data from the first hour of the day:
###Code
df['2016-05-01 00']
###Output
_____no_output_____
###Markdown
Here we further subdivide to examine the passenger count during that time period:
###Code
df['passenger_count']['2016-05-01 00']
###Output
_____no_output_____
###Markdown
We can also access a range of data, for example any data rows from midnight until noon:
###Code
df['2016-05-01 00':'2016-05-01 11']
###Output
_____no_output_____
###Markdown
If you want more choice in your selectionThe following, adding on minutes, does not work:
###Code
df['2016-05-01 00:30']
###Output
_____no_output_____
###Markdown
However, we can use another approach to have more control, with `.loc` to access combinations of specific columns and/or rows, or subsets of columns and/or rows.
###Code
df.loc['2016-05-01 00:30']
###Output
_____no_output_____
###Markdown
You can also select data for more specific time periods.`df.loc[row_label, col_label]`
###Code
df.loc['2016-05-01 00:30', 'passenger_count']
###Output
_____no_output_____
###Markdown
You can select more than one column:
###Code
df.loc['2016-05-01 00:30', ['passenger_count','trip_distance']]
###Output
_____no_output_____
###Markdown
You can select a range of data:
###Code
df.loc['2016-05-01 00:30':'2016-05-01 01:30', ['passenger_count','trip_distance']]
###Output
_____no_output_____
###Markdown
You can alternatively select data by index instead of by label, using `iloc` instead of `loc`. Here we select the first 5 rows of data for all columns:
###Code
df.iloc[0:5, :]
###Output
_____no_output_____
###Markdown
---

*Exercise*

> Access the data from dataframe `df` for the last three hours of the day at once. Plot the tip amount (`tip_amount`) for this time period.

> After you can make a line plot, try making a histogram of the data. Play around with the data range and the number of bins. A number of `plot` types are available built-in to a `pandas` dataframe inside the `plot` method under the keyword argument `kind`.

---

---

*Exercise*

> Using `pandas`, read in the CTD data we've used in class several times. What variable would make sense to use for your index column?

---

Notes about datetimes

You can change the format of datetimes using `strftime()`. Compare the datetimes in our dataframe index in the first cell below with the second cell, in which we format the look of the datetimes differently. We can choose how it looks using formatting codes. You can find a comprehensive list of the formatting directives at [http://strftime.org/](http://strftime.org/). Note that inside the parentheses, you can write other characters that will be passed through (like the comma in the example below).
###Code
df = pd.read_csv('../data/yellow_tripdata_2016-05-01_decimated.csv', parse_dates=[0, 2], index_col=[0])
df.index
df.index.strftime('%b %d, %Y %H:%m')
###Output
_____no_output_____
###Markdown
You can create and use datetimes using `pandas`. It will interpret the information you put into a string as best it can. Year-month-day is a good way to put in dates instead of using either American or European-specific ordering. After defining a pandas Timestamp, you can also change time using Timedelta.
###Code
now = pd.Timestamp('October 22, 2019 1:19PM')
now
tomorrow = pd.Timedelta('1 day')
now + tomorrow
###Output
_____no_output_____
###Markdown
You can set up a range of datetimes to make your own data frame indices with the following. Codes for frequency [are available](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html).
###Code
pd.date_range(start='Jan 1 2019', end='May 1 2019', freq='15T')
###Output
_____no_output_____
###Markdown
Note that you can get many different measures of your time index.
###Code
df.index.minute
df.index.dayofweek
###Output
_____no_output_____
###Markdown
---

*Exercise*

> How would you change the call to `strftime` above to format all of the indices such that the first index, for example, would be "the 1st of May, 2016 at the hour of 00 and the minute of 00 and the seconds of 00, which is the following day of the week: Sunday." Use the format codes for as many of the values as possible.

---

Adding column to dataframe

We can add data to our dataframe very easily. Below we add a new column computed from an existing one (the squared tip amount); a short sketch after this cell shows how to add the minute of the hour for each ride as well.
###Code
df['tip squared'] = df.tip_amount**2 # making up some numbers to save to a new column
df['tip squared'].plot()
###Output
_____no_output_____
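###Markdown
The minute-of-the-hour column mentioned above can be added the same way, since the `DatetimeIndex` exposes its components directly. A minimal sketch (the column name `minute` is just an illustration):
###Code
df['minute'] = df.index.minute
df['minute'].plot()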
###Markdown
Another example: wind data. Let's read in the wind data file that we have used before to have another data set to use. Note the parameters used to read it in properly.
###Code
df2 = pd.read_table('../data/burl1h2010.txt', header=0, skiprows=[1], delim_whitespace=True,
parse_dates={'dates': ['#YY', 'MM', 'DD', 'hh']}, index_col=0)
df2
df2.index
###Output
_____no_output_____
###Markdown
Plotting with `pandas`. You can plot with `matplotlib` and control many things directly from `pandas`. Get more info about plotting from pandas dataframes directly from:
###Code
df.plot?
###Output
_____no_output_____
###Markdown
You can mix and match plotting with matplotlib by either setting up a figure and axes you want to use with calls to `plot` from your dataframe (which you input to the plot call), or you can start with a pandas plot and save an axes from that call. Each will be demonstrated next. Or, you can bring the pandas data to matplotlib fully. Start from `matplotlib`, then input axes to `pandas`. To demonstrate plotting starting from `matplotlib`, we will also demonstrate a note about column selection for plotting. You can select which data columns to plot either by selecting in the line before the `plot` call, or you can choose the columns within the plot call. The key part here is that you input to your pandas plot call the axes you wanted plotted into (here: `ax=axes[0]`).
###Code
import matplotlib.pyplot as plt
fig, axes = plt.subplots(1, 2, figsize=(14,4))
df2['WSPD']['2010-5'].plot(ax=axes[0])
df2.loc['2010-5'].plot(y='WSPD', ax=axes[1])
###Output
_____no_output_____
###Markdown
Start with `pandas`, then use `matplotlib` commands. The important part here is that the call to `pandas` dataframe plotting returns an axes handle which you can save; here, it is saved as "ax".
###Code
ax = df2['WSPD']['2010 11 1'].plot()
ax.set_ylabel('Wind speed')
###Output
_____no_output_____
###Markdown
Bring `pandas` dataframe data to `matplotlib` fully. You can also use `matplotlib` directly by pulling the data you want to plot out of your dataframe.
###Code
plt.plot(df2['WSPD'])
###Output
_____no_output_____
###Markdown
Plot all or multiple columns at once
###Code
# all
df2.plot()
###Output
_____no_output_____
###Markdown
To plot more than one but less than all columns, give a list of column names. Here are two ways to do the same thing:
###Code
# multiple
fig, axes = plt.subplots(1, 2, figsize=(14,4))
df2[['WSPD', 'GST']].plot(ax=axes[0])
df2.plot(y=['WSPD', 'GST'], ax=axes[1])
###Output
_____no_output_____
###Markdown
Formatting dates. You can control how datetimes look on the x axis in these plots, as demonstrated in this section. The formatting codes used in the call to `DateFormatter` are the same as those used above in this notebook for `strftime`. Note that you can also control all of this with minor ticks additionally.
###Code
ax = df2['WSPD'].plot(figsize=(14,4))
from matplotlib.dates import DateFormatter
ax = df2['WSPD'].plot(figsize=(14,4))
ax.set_xlabel('2010')
date_form = DateFormatter("%b %d")
ax.xaxis.set_major_formatter(date_form)
# import matplotlib.dates as mdates
# # You can also control where the ticks are located, by date with Locators
# ticklocations = mdates.MonthLocator()
# ax.xaxis.set_major_locator(ticklocations)
###Output
_____no_output_____
###Markdown
Plotting with a twin axis. You can very easily plot two variables with different y axis limits with the `secondary_y` keyword argument to `df.plot`.
###Code
axleft = df2['WSPD']['2010-10'].plot(figsize=(14,4))
axright = df2['WDIR']['2010-10'].plot(secondary_y=True, alpha=0.5)
axleft.set_ylabel('Speed [m/s]', color='blue');
axright.set_ylabel('Dir [degrees]', color='orange');
###Output
_____no_output_____
###Markdown
Resampling. Sometimes we want our data to be at a different sampling frequency than we have, that is, we want to change the time between rows or observations. Changing this is called resampling. We can upsample to increase the number of data points in a given dataset (or decrease the period between points) or we can downsample to decrease the number of data points. The wind data is given every hour. Here we downsample it to be once a day instead. After the `resample` function, a method needs to be used for how to combine the data over the downsampling period, since the existing data needs to be combined in some way. We could use the max value over the 1-day period to represent each day:
###Code
df2.resample('1d').max() #['DEWP'] # now the data is daily
###Output
_____no_output_____
###Markdown
It's always important to check our results to make sure they look reasonable. Let's plot our resampled data with the original data to make sure they align well. We'll choose one variable for this check. We can see that the daily max wind gust does indeed look like the max value for each day, though note that it is plotted at the start of the day.
###Code
df2['GST']['2010-4-1':'2010-4-5'].plot()
df2.resample('1d').max()['GST']['2010-4-1':'2010-4-5'].plot()
###Output
_____no_output_____
###Markdown
We can also upsample our data or add more rows of data. Note that like before, after we resample our data we still need a method on the end telling `pandas` how to process the data. However, since in this case we are not combining data (downsampling) but are adding more rows (upsampling), using a function like `max` doesn't change the existing observations (taking the max of a single row). For the new rows, we haven't said how to fill them so they are nan's by default. Here we are changing from having data every hour to having it every 30 minutes.
###Code
df2.resample('30min').max() # max doesn't say what to do with data in new rows
###Output
_____no_output_____
###Markdown
When upsampling, a reasonable option is to fill the new rows with data from the previous existing row:
###Code
df2.resample('30min').ffill()
###Output
_____no_output_____
###Markdown
Here we upsample to have data every 15 minutes, but we interpolate to fill in the data between. This is a very useful thing to be able to do.
###Code
df2.resample('15 T').interpolate()
###Output
_____no_output_____
###Markdown
The codes for time period/frequency are [available](http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases) and are presented here for convenience:

| Alias | Description |
|-------|-------------|
| B | business day frequency |
| C | custom business day frequency (experimental) |
| D | calendar day frequency |
| W | weekly frequency |
| M | month end frequency |
| SM | semi-month end frequency (15th and end of month) |
| BM | business month end frequency |
| CBM | custom business month end frequency |
| MS | month start frequency |
| SMS | semi-month start frequency (1st and 15th) |
| BMS | business month start frequency |
| CBMS | custom business month start frequency |
| Q | quarter end frequency |
| BQ | business quarter end frequency |
| QS | quarter start frequency |
| BQS | business quarter start frequency |
| A | year end frequency |
| BA | business year end frequency |
| AS | year start frequency |
| BAS | business year start frequency |
| BH | business hour frequency |
| H | hourly frequency |
| T, min | minutely frequency |
| S | secondly frequency |
| L, ms | milliseconds |
| U, us | microseconds |
| N | nanoseconds |

---

*Exercise*

> We looked at NYC taxi trip distance earlier, but it was hard to tell what was going on with so much data. Resample this high resolution data to be lower resolution so that any trends in the information are easier to see. By what method do you want to do this downsampling? Plot your results.

---

`groupby` and difference between `groupby` and resampling

`groupby` allows us to aggregate data across a category or value. We'll use the example of grouping across a measure of time. Let's examine this further using a dataset of some water properties near the Flower Garden Banks in Texas. We want to find the average salinity by month across the years of data available, that is, we want to know the average salinity value for each month of the year, calculated for each month from all of the years of data available. We will end up with 12 data points in this case. This is distinct from resampling, for which if you calculate the average salinity by month, you will get a data point for each month in the time series. If there are 5 years of data in your dataset, you will end up with 12*5=60 data points total. In the `groupby` example below, we first read the data into dataframe 'df3', then we group it by month (across years, since there are many years of data). From this grouping, we decide what function we want to apply to all of the numbers we've aggregated across the months of the year. We'll use mean for this example.
###Code
df3 = pd.read_table('http://pong.tamu.edu/tabswebsite/daily/tabs_V_salt_all', index_col=0, parse_dates=True)
df3
ax = df3.groupby(df3.index.month).aggregate(np.mean)['Salinity'].plot(color='k', grid=True, figsize=(14, 4), marker='o')
# the x axis is now showing month of the year, which is what we aggregated over
ax.set_xlabel('Month of year')
ax.set_ylabel('Average salinity')
###Output
_____no_output_____ |
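###Markdown
To see the distinction between `groupby` and resampling in code, the same salinity series can be aggregated both ways. A minimal sketch: the groupby gives one value per month of the year (12 points), while a monthly resample gives one value per calendar month in the record (roughly 12 points per year of data).
###Code
# one value per month of the year: group across years
by_month_of_year = df3['Salinity'].groupby(df3.index.month).mean()

# one value per calendar month in the record
by_calendar_month = df3['Salinity'].resample('M').mean()

print(by_month_of_year.shape, by_calendar_month.shape)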
analysis/notebooks/will_30-12-EDA.ipynb | ###Markdown
Exploring the data
###Code
bncc_db.info()
name1 = bncc_db['name.1'].nunique()
d = ('There are a total of %d Knowledge Areas' % (name1))
display(
d,
bncc_db.iloc[:, 5].agg(['value_counts']).head()
)
### The "code" column
bncc_db.iloc[:, 8].unique()
code = bncc_db['code'].nunique()
d = ('There are a total of %d BNCC codes present in the dataset' % (code))
display(
d,
bncc_db.iloc[:, 8].agg(['value_counts']).head()
)
description = bncc_db['description'].nunique()
d = ('There are a total of %d descriptions' % (description))
display(
d,
bncc_db['description'].agg(['value_counts']).head()
)
question = bncc_db['question'].nunique()
d = ('There are a total of %d questions' % (question))
display(
d,
bncc_db['question'].agg(['value_counts']).head()
)
###Output
_____no_output_____
###Markdown
Cleaning the questions
###Code
### Fixing the character-encoding issues present in the questions
import html
data_quest = bncc_db['question'].astype('str').apply(html.unescape)
### Removing leftover HTML tags
import regex as reg
CLEANR = reg.compile('<.*?>')
def cleanhtml(raw_html):
cleantext = reg.sub(CLEANR, '', raw_html)
return cleantext
text = data_quest.map(lambda x: cleanhtml(x))
bncc_db.insert(1, 'question_clean', text, allow_duplicates=False)
bncc_db.head()
###Output
_____no_output_____
###Markdown
Installing the libraries needed for NLP:
- `!pip install regex`
- `!pip install html`
- `!pip install lxml`
- `!pip install nltk`
- `!pip install gensim`
- `!pip install pyldavis`
- `!pip install wordcloud`
- `!pip install textblob`
- `!pip install spacy`
- `!pip install textstat`

Number of characters per sentence
###Code
max = bncc_db['question_clean'].str.len().max()
min = bncc_db['question_clean'].str.len().min()
median = bncc_db['question_clean'].str.len().median()
mean = bncc_db['question_clean'].str.len().mean()
print('Questions range from %d to %d characters per question' % (min, max))
print('The median and mean number of characters per question are %d and %d, respectively.' %
      (median, mean))
fig, ax = plt.subplots(figsize=(20, 10))
sns.histplot(bncc_db['question_clean'].str.len(), ax = ax)
###Output
Questions range from 0 to 8419 characters per question
The median and mean number of characters per question are 194 and 337, respectively.
###Markdown
- Checking for empty questions
###Code
np.where(bncc_db['question_clean'].str.len() == 0)
###Output
_____no_output_____
###Markdown
Number of words in each question:
###Code
text = bncc_db['question_clean']
max = text.str.split().map(lambda x: len(x)).max()
min = text.str.split().map(lambda x: len(x)).min()
median = text.str.split().map(lambda x: len(x)).median()
mean = text.str.split().map(lambda x: len(x)).mean()
print('The number of words per question ranges from %d to %d' % (min, max))
print('The median and mean number of words per question are %d and %d, respectively.' %
      (median, mean))
fig, ax = plt.subplots(figsize=(20, 10))
sns.histplot(text.str.split().map(lambda x: len(x)), ax = ax)
###Output
The number of words per question ranges from 0 to 1261
The median and mean number of words per question are 30 and 52, respectively.
###Markdown
- Average word length in each question
###Code
mean_words = bncc_db['question_clean'].str.split().apply(lambda x : [len(i) for i in x]).map(lambda x: np.mean(x))
mean_words
###Output
C:\Users\Danilo\AppData\Local\Programs\Python\Python39\lib\site-packages\numpy\core\fromnumeric.py:3440: RuntimeWarning: Mean of empty slice.
return _methods._mean(a, axis=axis, dtype=dtype,
C:\Users\Danilo\AppData\Local\Programs\Python\Python39\lib\site-packages\numpy\core\_methods.py:189: RuntimeWarning: invalid value encountered in double_scalars
ret = ret.dtype.type(ret / rcount)
###Markdown
- Maximum and mean value of the average word length per question:
###Code
max_len_words = bncc_db['question_clean'].str.split().apply(lambda x : [len(i) for i in x]).map(lambda x: np.mean(x)).max()
print('The maximum value of the average word length per question is %d (something is wrong)' % (max_len_words))
mean_len_words = bncc_db['question_clean'].str.split().apply(lambda x : [len(i) for i in x]).map(lambda x: np.mean(x)).mean()
print('The mean word length per question is %d' % (mean_len_words))
###Output
The maximum value of the average word length per question is 348 (something is wrong)
The mean word length per question is 5
###Markdown
Possible problems related to very long words:
- Distinct words not separated by a space
- Missing space after the end of a sentence, which joins the last word of one sentence to the first word of the next (Acabou.Começou != Acabou. Começou)

Even though we cleaned the HTML tags, the structure of the questions themselves still leaves some errors. For example, the enumerated items of a question end up glued together:
- It should be:
  - 1. alternativa
  - 2. alternativa
  - 3. alternativa
  - 4. alternativa
- As it is:
  - -1. alternativa-2. alternativa- 3. alternativa-4. alternativa
###Code
bncc_db['question_clean'][4]
## Checking the average word length
fig, ax = plt.subplots(figsize=(20, 10))
sns.histplot(bncc_db['question_clean'].str.split().apply(lambda x : [len(i) for i in x]).map(lambda x: np.mean(x)),
ax = ax)
## Note that there are some "words" with 40, 60, 80 ... letters, which is implausible
###Output
C:\Users\Danilo\AppData\Local\Programs\Python\Python39\lib\site-packages\numpy\core\fromnumeric.py:3440: RuntimeWarning: Mean of empty slice.
return _methods._mean(a, axis=axis, dtype=dtype,
C:\Users\Danilo\AppData\Local\Programs\Python\Python39\lib\site-packages\numpy\core\_methods.py:189: RuntimeWarning: invalid value encountered in double_scalars
ret = ret.dtype.type(ret / rcount)
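###Markdown
One way to track down those implausibly long tokens is to look at the questions whose longest "word" exceeds some threshold. A minimal sketch (the 40-character cutoff is an arbitrary choice):
###Code
# longest token in each cleaned question
longest_token = bncc_db['question_clean'].str.split().apply(
    lambda ws: max((len(w) for w in ws), default=0))

# inspect a few questions that contain suspiciously long tokens
suspicious = bncc_db.loc[longest_token > 40, 'question_clean']
print(len(suspicious))
suspicious.head()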
###Markdown
Checking for 'stopwords' in the questions
###Code
import nltk
nltk.download('stopwords')
stop = nltk.corpus.stopwords.words('portuguese')
###Output
[nltk_data] Downloading package stopwords to
[nltk_data] C:\Users\Danilo\AppData\Roaming\nltk_data...
[nltk_data] Package stopwords is already up-to-date!
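###Markdown
With the Portuguese stopword list loaded, the check itself could look something like the sketch below, which counts how many stopwords each cleaned question contains (the column name `n_stopwords` is just an illustration):
###Code
stop_set = set(stop)  # set membership is faster than a list lookup

# number of stopwords per cleaned question
bncc_db['n_stopwords'] = bncc_db['question_clean'].str.split().apply(
    lambda ws: sum(w in stop_set for w in ws))

bncc_db['n_stopwords'].describe()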
|
ps1.2_Amin.ipynb | ###Markdown
Importing Libraries
###Code
import nltk
from nltk.stem import PorterStemmer, WordNetLemmatizer
import string
from nltk.corpus import stopwords
from nltk import word_tokenize
import string
import numpy as np
import random
###Output
_____no_output_____
###Markdown
Variable Definition

To run this code, the user only needs to know about one code segment, titled "Variable Definition". In this code segment there are five variables that users can modify to see how these parameters influence the overall performance of the decision list classifier. The variables are:

- decision_list_rule_boundary: number of top-ranked rules to use (to select the top-10 rules, set the value to `10`; increase the value to add more rules to the decision list)
- target_file_name: target corpus name (in our case, "bass" or "sake")
- total_size: fraction of the actual training data to use (range: 0 to 1)
- mu: split between training and validation data (range: 0 to 1; `0.8` means 80% of the data is treated as the training set and 20% as the validation set)
- k: length of the context window, i.e. the number of words in a context sentence

For a given code segment, the output is shown below the code segment titled "Sense Disambiguation using Decision List". The result contains Accuracy, Precision and Recall. To use the model on an isolated sentence, call the function "predict_sense"; it takes a sentence as input and returns the sense as 1 or 2. You need to build the decision list first.
###Code
# --------------------------------------Value Alteration Allowed Start--------------------------
# Number of top ranked rule boundary
decision_list_rule_boundary = 10
# Define the target file name
target_file_name = "sake"
# define total training data size from the actual training data
total_size = 0.7
# percentage of data for training data
mu = 0.8
# length of context sentence or number of words in context sentence
k = 11
# --------------------------------------Value Alteration Allowed End--------------------------
#-------------------------Altering these values are not recommended start------------------
# How many rules need to be selected from each criterion when calculating the log likelihood
top_rules = 5
# List for context sentences
contexts = []
# List of decisions
decisionList=[]
default_sense = 1
# Default value for alpha(Because the size of corpora is small)
alpha = 0.1
#-------------------------Altering these values are not recommended end------------------
###Output
_____no_output_____
###Markdown
Text Preprocessing
###Code
target = target_file_name
lines = open(target_file_name+".trn","r").readlines()
testlines = open(target_file_name+".tst","r").readlines()
# set the size of the training data based on the value of total_size
lines = lines[:int(len(lines)*total_size)]
train_lines = lines[:int(len(lines)*mu)]
validation_lines = lines[int(len(lines)*mu):]
# Processing the text: extract the text and the corresponding sense from each line of the file
def process_text(line):
splitLine = line.split("\t")
splitLine[0] = splitLine[0].replace(":","")
splitLine[1] = splitLine[1].lower()
splitLine[1].translate(str.maketrans('', '', string.punctuation))
return splitLine
# Unpacking the training corpora into two arrays, each containing text from two senses
def unpack_corpora():
type1Text = []
type2Text = []
for line in train_lines:
splitLine = process_text(line)
if splitLine[0] == target:
type1Text.append(splitLine[1])
else:
type2Text.append(splitLine[1])
print("Length of Type 1 texts:",len(type1Text), "Length of Type 2 texts:", len(type2Text))
return type1Text, type2Text
###Output
_____no_output_____
###Markdown
Contextualization of the Text
###Code
def convert_lower_case(data):
return np.char.lower(data)
def remove_stop_words(data):
stop_words = stopwords.words('english')
words = word_tokenize(str(data))
new_text = ""
for w in words:
if w not in stop_words and len(w) > 1:
new_text = new_text + " " + w
return new_text
def remove_punctuation(data):
symbols = "!\"#$%&()*+-./:;<=>?@[\]^_`{|}~\n"
for i in range(len(symbols)):
data = np.char.replace(data, symbols[i], ' ')
data = np.char.replace(data, " ", " ")
data = np.char.replace(data, ',', '')
return data
def remove_apostrophe(data):
return np.char.replace(data, "'", "")
def stemming(data):
stemmer= PorterStemmer()
tokens = word_tokenize(str(data))
new_text = ""
for w in tokens:
new_text = new_text + " " + stemmer.stem(w)
return new_text
def lemmatizing(data):
lemmatizer = WordNetLemmatizer()
tokens = word_tokenize(str(data))
new_text = ""
for w in tokens:
new_text = new_text + " " + lemmatizer.lemmatize(w)
return new_text
def convert_numbers(data):
tokens = word_tokenize(str(data))
new_text = ""
for w in tokens:
        try:
            # relies on the num2words package (not imported above); non-numeric
            # tokens and a missing package are silently skipped by the bare except
            w = num2words(int(w))
        except:
            pass
new_text = new_text + " " + w
new_text = np.char.replace(new_text, "-", " ")
return new_text
def preprocess(data):
data = convert_lower_case(data)
data = remove_punctuation(data) #remove comma seperately
data = remove_apostrophe(data)
data = remove_stop_words(data)
data = convert_numbers(data)
data = lemmatizing(data)
data = remove_punctuation(data)
data = convert_numbers(data)
data = remove_punctuation(data)
data = remove_stop_words(data) #needed again as num2word is giving stop words 101 - one hundred and one
return data
# Make context of each sentences after removing punctuation, some extraneous quotation mark from the text.
def context_dictionary():
type1Text, type2Text = unpack_corpora()
# This for loop is for sense 1
for sentence in type1Text:
# preprocess the sentence
clean_sentence = preprocess(sentence)
# tokenizing the words from the sentence
words = word_tokenize(clean_sentence)
# Pre-process the words
words = [word for word in words]
for i in range(0,len(words)):
if target == words[i]:
left = max(i-int(k/2),0)
right = min(i+int(k/2),len(words))
context = words[left:right]
dict = {
"sentence" : context,
"sense" : 1,
"position": i
}
contexts.append(dict)
# This for loop is for sense 2
for sentence in type2Text:
# preprocess the sentence
clean_sentence = preprocess(sentence)
# tokenizing the words from the sentence
words = word_tokenize(clean_sentence)
# Pre-process the words
words = [word for word in words]
for i in range(0,len(words)):
if target == words[i]:
left = max(i-int(k/2),0)
right = min(i+int(k/2),len(words))
context = words[left:right]
dict = {
"sentence" : context,
"sense" : 2,
"position": i
}
contexts.append(dict)
return contexts
###Output
_____no_output_____
###Markdown
Check Collocation Distribution
###Code
# define rules
# if seed word is at K distance from the pattern word index
def k_closest(context, index_of_pattern, words):
for index, w in enumerate(context):
if w == words and (index < index_of_pattern - 1 or index > index_of_pattern + 1):
return True
return False
# if seed word is the next of the pattern word index
def right(context, index_of_pattern, words):
if len(context) <= index_of_pattern + 1:
return False
else:
return context[index_of_pattern + 1] == words
# if seed word is the prior of the pattern word index
def left(context, index_of_pattern, words):
if index_of_pattern == 0:
return False
else:
return context[index_of_pattern - 1] == words
# if seed words are the prior of the pattern word index
def two_left(context, index_of_pattern, words):
if index_of_pattern < 2:
return False
else:
return (context[index_of_pattern - 2], context[index_of_pattern - 1]) == words
# if seed words are around the pattern word index
def surround(context, index_of_pattern, words):
if index_of_pattern >= len(context) - 1 or index_of_pattern == 0:
return False
else:
return (context[index_of_pattern - 1], context[index_of_pattern + 1]) == words
# if seed words are the prior of the pattern word index
def two_right(context, index_of_pattern, words):
if index_of_pattern >= len(context) - 2:
return False
else:
return (context[index_of_pattern + 1], context[index_of_pattern + 2]) == words
RULES = {
0: right,
1: left,
2: k_closest,
3: two_left,
4: surround,
5: two_right
}
two_right(['stephan', 'weidner', 'composer', 'bass', 'player', 'boehse', 'onkelz'], 3, ('player','boehse'))
###Output
_____no_output_____
###Markdown
Freq Distribution in Sense 1 and Sense 2. We will count the frequency of each word to derive which words to expect within the range (+/- k) of the target word.
###Code
def unigram_count(contexts):
freqSense1 = {}
freqSense2 = {}
# Freq Distribution in Sense 1 and Sense 2
for context in contexts:
for word in context['sentence']:
if context['sense']==1 and word != target:
if freqSense1.get(word):
freqSense1[word]=freqSense1[word]+1;
else:
freqSense1[word]=1;
if context['sense']==2 and word != target:
if freqSense2.get(word):
freqSense2[word]=freqSense2[word]+1;
else:
freqSense2[word]=1;
freq_dist_type_1 = sorted(freqSense1.items(), key=lambda x: x[1], reverse=True)
freq_dist_type_2 = sorted(freqSense2.items(), key=lambda x: x[1], reverse=True)
return freq_dist_type_1, freq_dist_type_2
###Output
_____no_output_____
###Markdown
Count Next Word in Sense 1 and Sense 2
###Code
def forward_one_count(contexts):
# Count Next words in Sense 1 and Sense 2
seed_forward_1 = {}
seed_forward_2 = {}
for context in contexts:
if context['sense'] == 1:
try:
candidate = (target, context['sentence'][context['position']+1])
except:
continue
if not seed_forward_1.get(candidate):
seed_forward_1[candidate]=1
else:
seed_forward_1[candidate]=seed_forward_1[candidate]+1
else:
try:
candidate = (target, context['sentence'][context['position']+1])
except:
continue
if not seed_forward_2.get(candidate):
seed_forward_2[candidate]=1
else:
seed_forward_2[candidate]=seed_forward_2[candidate]+1
seed_forward_1 = sorted(seed_forward_1.items(), key=lambda x: x[1], reverse=True)
seed_forward_2 = sorted(seed_forward_2.items(), key=lambda x: x[1], reverse=True)
# print(seed_forward_1[:5],seed_forward_2[:5])
return seed_forward_1, seed_forward_2
###Output
_____no_output_____
###Markdown
Count Previous Word in Sense 1 and Sense 2
###Code
def backward_one_count(contexts):
# Count Previous Words in Sense 1 and Sense 2
seed_backward_1 = {}
seed_backward_2 = {}
for context in contexts:
if context['sense'] == 1:
try:
candidate = (context['sentence'][context['position']-1], target)
except:
continue
if not seed_backward_1.get(candidate):
seed_backward_1[candidate]=1
else:
seed_backward_1[candidate]=seed_backward_1[candidate]+1
else:
try:
candidate = (context['sentence'][context['position']-1], target)
except:
continue
if not seed_backward_2.get(candidate):
seed_backward_2[candidate]=1
else:
seed_backward_2[candidate]=seed_backward_2[candidate]+1
seed_backward_1 = sorted(seed_backward_1.items(), key=lambda x: x[1], reverse=True)
seed_backward_2 = sorted(seed_backward_2.items(), key=lambda x: x[1], reverse=True)
# print(seed_backward_1[:5],seed_backward_2[:5])
return seed_backward_1, seed_backward_2
###Output
_____no_output_____
###Markdown
Count Next two Words in Sense 1 and Sense 2
###Code
def forward_two_count(contexts):
# Count Next two Words in Sense 1 and Sense 2
seed_forward_2_1 = {}
seed_forward_2_2 = {}
for context in contexts:
if context['sense'] == 1:
try:
candidate = (target, context['sentence'][context['position']+1], context['sentence'][context['position']+2])
except:
continue
if not seed_forward_2_1.get(candidate):
seed_forward_2_1[candidate]=1
else:
seed_forward_2_1[candidate]=seed_forward_2_1[candidate]+1
else:
try:
candidate = (target, context['sentence'][context['position']+1], context['sentence'][context['position']+2])
except:
continue
if not seed_forward_2_2.get(candidate):
seed_forward_2_2[candidate]=1
else:
seed_forward_2_2[candidate]=seed_forward_2_2[candidate]+1
seed_forward_2_1 = sorted(seed_forward_2_1.items(), key=lambda x: x[1], reverse=True)
seed_forward_2_2 = sorted(seed_forward_2_2.items(), key=lambda x: x[1], reverse=True)
# print(seed_forward_2_1[:5],seed_forward_2_2[:5])
return seed_forward_2_1, seed_forward_2_2
###Output
_____no_output_____
###Markdown
Count Previous Two Words in Sense 1 and Sense 2
###Code
def backward_two_count(contexts):
# Count Previous two Words in Sense 1 and Sense 2
seed_backward_2_1 = {}
seed_backward_2_2 = {}
for context in contexts:
if context['sense'] == 1:
try:
candidate = (context['sentence'][context['position']-2], context['sentence'][context['position']-1], target)
except:
continue
if not seed_backward_2_1.get(candidate):
seed_backward_2_1[candidate]=1
else:
seed_backward_2_1[candidate]=seed_backward_2_1[candidate]+1
else:
try:
candidate = (context['sentence'][context['position']-2], context['sentence'][context['position']-1], target)
except:
continue
if not seed_backward_2_2.get(candidate):
seed_backward_2_2[candidate]=1
else:
seed_backward_2_2[candidate]=seed_backward_2_2[candidate]+1
seed_backward_2_1 = sorted(seed_backward_2_1.items(), key=lambda x: x[1], reverse=True)
seed_backward_2_2 = sorted(seed_backward_2_2.items(), key=lambda x: x[1], reverse=True)
return seed_backward_2_1, seed_backward_2_2
###Output
_____no_output_____
###Markdown
Count Surrounding Two Words in Sense 1 and Sense 2
###Code
def surrounding_count(contexts):
# Count Surrounding two Words in Sense 1 and Sense 2
seed_surround_1 = {}
seed_surround_2 = {}
for context in contexts:
if context['sense'] == 1:
try:
candidate = (context['sentence'][context['position']-1], target, context['sentence'][context['position']+1])
except:
continue
if not seed_surround_1.get(candidate):
seed_surround_1[candidate]=1
else:
seed_surround_1[candidate]=seed_surround_1[candidate]+1
else:
try:
candidate = (context['sentence'][context['position']-1], target, context['sentence'][context['position']+1])
except:
continue
if not seed_surround_2.get(candidate):
seed_surround_2[candidate]=1
else:
seed_surround_2[candidate]=seed_surround_2[candidate]+1
seed_surround_1 = sorted(seed_surround_1.items(), key=lambda x: x[1], reverse=True)
seed_surround_2 = sorted(seed_surround_2.items(), key=lambda x: x[1], reverse=True)
# print(seed_surround_1[:5],seed_surround_2[:5])
return seed_surround_1, seed_surround_2
# Count words within the range
###Output
_____no_output_____
###Markdown
Merging Rules from Each Collocation List
###Code
def construct_sense():
contexts = context_dictionary()
freq_dist_type_1, freq_dist_type_2 = unigram_count(contexts)
seed_forward_1, seed_forward_2 = forward_one_count(contexts)
seed_backward_1, seed_backward_2 = backward_one_count(contexts)
seed_forward_2_1, seed_forward_2_2 = forward_two_count(contexts)
seed_backward_2_1, seed_backward_2_2 = backward_two_count(contexts)
seed_surround_1, seed_surround_2 = surrounding_count(contexts)
# # Merging the rules into one list for sense 1
seed_sense_1= freq_dist_type_1[:top_rules] + seed_forward_1[:top_rules]+seed_backward_1[:top_rules]+seed_forward_2_1[:top_rules]+seed_backward_2_1[:top_rules]+seed_surround_1[:top_rules]
# # Merging the rules into one list for sense 2
seed_sense_2= freq_dist_type_2[:top_rules] + seed_forward_2[:top_rules]+seed_backward_2[:top_rules]+seed_forward_2_2[:top_rules]+seed_backward_2_2[:top_rules]+seed_surround_2[:top_rules]
# Merging the rules into one list for sense 1
# seed_sense_1= freq_dist_type_1 + seed_forward_1+seed_backward_1+seed_forward_2_1+seed_backward_2_1+seed_surround_1
# Merging the rules into one list for sense 2
# seed_sense_2= freq_dist_type_2+ seed_forward_2+seed_backward_2+seed_forward_2_2+seed_backward_2_2+seed_surround_2
# print(seed_sense_1)
# print(seed_sense_2)
return seed_sense_1, seed_sense_2
construct_sense()
###Output
Length of Type 1 texts: 484 Length of Type 2 texts: 20
###Markdown
Appending rules to Decision List by computing Collocation Frequency of Sense 1 and Sense 2
###Code
def populate_sense_in_decision_list():
seed_sense_1, seed_sense_2 = construct_sense()
for key1,value1 in seed_sense_1:
dicision_dict ={
'collocation': key1,
'sense1': value1,
'sense2': 0,
'sense' : 1
}
for key2, value2 in seed_sense_2:
if key2 == key1:
dicision_dict['sense2'] = value2
decisionList.append(dicision_dict)
for key1,value1 in seed_sense_2:
dicision_dict ={
'collocation': key1,
'sense1': 0,
'sense2': value1,
'sense' : 2
}
for key2, value2 in seed_sense_1:
if key2 == key1:
dicision_dict['sense1'] = value2
decisionList.append(dicision_dict)
print(decisionList)
# return decisionList
populate_sense_in_decision_list()
###Output
Length of Type 1 texts: 484 Length of Type 2 texts: 20
[{'collocation': 'said', 'sense1': 68, 'sense2': 0, 'sense': 1}, {'collocation': 'peace', 'sense1': 56, 'sense2': 0, 'sense': 1}, {'collocation': 'child', 'sense1': 42, 'sense2': 0, 'sense': 1}, {'collocation': 'country', 'sense1': 40, 'sense2': 0, 'sense': 1}, {'collocation': 'people', 'sense1': 30, 'sense2': 0, 'sense': 1}, {'collocation': ('sake', 'peace'), 'sense1': 42, 'sense2': 0, 'sense': 1}, {'collocation': ('sake', 'child'), 'sense1': 26, 'sense2': 0, 'sense': 1}, {'collocation': ('sake', 'nation'), 'sense1': 20, 'sense2': 0, 'sense': 1}, {'collocation': ('sake', 'country'), 'sense1': 14, 'sense2': 0, 'sense': 1}, {'collocation': ('sake', 'national'), 'sense1': 14, 'sense2': 0, 'sense': 1}, {'collocation': ('sake', 'sake'), 'sense1': 18, 'sense2': 0, 'sense': 1}, {'collocation': ('life', 'sake'), 'sense1': 12, 'sense2': 0, 'sense': 1}, {'collocation': ('country', 'sake'), 'sense1': 12, 'sense2': 0, 'sense': 1}, {'collocation': ('sacrifice', 'sake'), 'sense1': 10, 'sense2': 0, 'sense': 1}, {'collocation': ('said', 'sake'), 'sense1': 10, 'sense2': 0, 'sense': 1}, {'collocation': ('sake', 'national', 'interest'), 'sense1': 8, 'sense2': 0, 'sense': 1}, {'collocation': ('sake', 'peace', 'national'), 'sense1': 6, 'sense2': 0, 'sense': 1}, {'collocation': ('sake', 'national', 'unity'), 'sense1': 4, 'sense2': 0, 'sense': 1}, {'collocation': ('sake', 'peace', 'stability'), 'sense1': 4, 'sense2': 0, 'sense': 1}, {'collocation': ('sake', 'negotiation', 'help'), 'sense1': 4, 'sense2': 0, 'sense': 1}, {'collocation': ('country', 'law', 'sake'), 'sense1': 6, 'sense2': 0, 'sense': 1}, {'collocation': ('sake', 'sake', 'sake'), 'sense1': 6, 'sense2': 0, 'sense': 1}, {'collocation': ('art', 'art', 'sake'), 'sense1': 4, 'sense2': 0, 'sense': 1}, {'collocation': ('minded', 'take', 'sake'), 'sense1': 4, 'sense2': 0, 'sense': 1}, {'collocation': ('employment', 'society', 'sake'), 'sense1': 2, 'sense2': 0, 'sense': 1}, {'collocation': ('law', 'sake', 'enforcing'), 'sense1': 6, 'sense2': 0, 'sense': 1}, {'collocation': ('take', 'sake', 'survival'), 'sense1': 4, 'sense2': 0, 'sense': 1}, {'collocation': ('change', 'sake', 'change'), 'sense1': 4, 'sense2': 0, 'sense': 1}, {'collocation': ('negotiation', 'sake', 'negotiation'), 'sense1': 4, 'sense2': 0, 'sense': 1}, {'collocation': ('god', 'sake', 'dont'), 'sense1': 4, 'sense2': 0, 'sense': 1}, {'collocation': 'cup', 'sense1': 0, 'sense2': 6, 'sense': 2}, {'collocation': 'needed', 'sense1': 0, 'sense2': 4, 'sense': 2}, {'collocation': 'cold', 'sense1': 0, 'sense2': 4, 'sense': 2}, {'collocation': 'undated', 'sense1': 0, 'sense2': 4, 'sense': 2}, {'collocation': 'secret', 'sense1': 0, 'sense2': 4, 'sense': 2}, {'collocation': ('sake', 'cup'), 'sense1': 0, 'sense2': 4, 'sense': 2}, {'collocation': ('sake', 'undated'), 'sense1': 0, 'sense2': 2, 'sense': 2}, {'collocation': ('sake', 'secret'), 'sense1': 0, 'sense2': 2, 'sense': 2}, {'collocation': ('sake', 'ginger'), 'sense1': 0, 'sense2': 2, 'sense': 2}, {'collocation': ('sake', 'vodka'), 'sense1': 0, 'sense2': 2, 'sense': 2}, {'collocation': ('japanese', 'sake'), 'sense1': 0, 'sense2': 4, 'sense': 2}, {'collocation': ('cup', 'sake'), 'sense1': 0, 'sense2': 2, 'sense': 2}, {'collocation': ('cold', 'sake'), 'sense1': 0, 'sense2': 2, 'sense': 2}, {'collocation': ('undated', 'sake'), 'sense1': 0, 'sense2': 2, 'sense': 2}, {'collocation': ('plus', 'sake'), 'sense1': 0, 'sense2': 2, 'sense': 2}, {'collocation': ('sake', 'cup', 'chicken'), 'sense1': 0, 'sense2': 2, 'sense': 2}, {'collocation': ('sake', 'undated', 
'sake'), 'sense1': 0, 'sense2': 2, 'sense': 2}, {'collocation': ('sake', 'secret', 'doesnt'), 'sense1': 0, 'sense2': 2, 'sense': 2}, {'collocation': ('sake', 'ginger', 'root'), 'sense1': 0, 'sense2': 2, 'sense': 2}, {'collocation': ('sake', 'vodka', 'cocktail'), 'sense1': 0, 'sense2': 2, 'sense': 2}, {'collocation': ('chopped', 'cup', 'sake'), 'sense1': 0, 'sense2': 2, 'sense': 2}, {'collocation': ('needed', 'cold', 'sake'), 'sense1': 0, 'sense2': 2, 'sense': 2}, {'collocation': ('sake', 'undated', 'sake'), 'sense1': 0, 'sense2': 2, 'sense': 2}, {'collocation': ('sauce', 'plus', 'sake'), 'sense1': 0, 'sense2': 2, 'sense': 2}, {'collocation': ('unpeeled', 'combine', 'sake'), 'sense1': 0, 'sense2': 2, 'sense': 2}, {'collocation': ('cup', 'sake', 'cup'), 'sense1': 0, 'sense2': 2, 'sense': 2}, {'collocation': ('cold', 'sake', 'undated'), 'sense1': 0, 'sense2': 2, 'sense': 2}, {'collocation': ('undated', 'sake', 'secret'), 'sense1': 0, 'sense2': 2, 'sense': 2}, {'collocation': ('plus', 'sake', 'ginger'), 'sense1': 0, 'sense2': 2, 'sense': 2}, {'collocation': ('combine', 'sake', 'vodka'), 'sense1': 0, 'sense2': 2, 'sense': 2}]
###Markdown
Calculating and Sorting Log Likelihood. Laplace smoothing: for this data, a relatively small alpha (between 0.1 and 0.25) tended to be effective, while noisier training data warrants a larger alpha.
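In symbols, the score the cell below assigns to each collocation $f$ is the absolute smoothed log-likelihood ratio

$$\mathrm{score}(f) = \left|\log_{10}\frac{c_1(f) + \alpha}{c_2(f) + 2\alpha}\right|,$$

where $c_1(f)$ and $c_2(f)$ are the counts of $f$ observed with sense 1 and sense 2, and $\alpha$ is the smoothing constant defined in the Variable Definition cell.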
###Code
def calculate_log_decision():
logDecisionList_preliminary = {}
for rule in decisionList:
probability = abs(np.log10((rule['sense1']+alpha)/(rule['sense2']+2*alpha)))
logDecisionList_preliminary[rule['collocation']] = (probability, rule['sense'])
logDecisionList_preliminary = sorted(logDecisionList_preliminary.items(), key=lambda x: x[1], reverse=True)
return logDecisionList_preliminary
calculate_log_decision()
###Output
_____no_output_____
###Markdown
Defining Default Sense
###Code
# The default sense is derived from the number of senses present in the corpora. The default sense is mainly used in the baseline.
def get_default_sense():
countSense1 = 0
countSense2 = 0
logDecisionList_sample = calculate_log_decision()
for key,value in logDecisionList_sample[:decision_list_rule_boundary]:
# print(value)
if value[1] == 1:
countSense1 = countSense1+1
else:
countSense2 = countSense2+1
    default_sense = 1 if countSense1 > countSense2 else 2
    print(default_sense)
    # note: this assigns a local variable only; predict_sense() falls back on the
    # module-level default_sense defined in the Variable Definition cell
get_default_sense()
###Output
1
###Markdown
Predict Sense of a Sentence
###Code
def predict_sense(sentence):
# By default the sense will remain the default one(Like the baseline one.)
sense = default_sense
# preprocess the sentence
clean_sentence = preprocess(sentence)
# tokenizing the words from the sentence
words = word_tokenize(clean_sentence)
# Pre-process the words
words = [word for word in words]
logDecisionList = calculate_log_decision()
print(logDecisionList[:decision_list_rule_boundary])
pattern_index = words.index(target)
for decision in logDecisionList[:decision_list_rule_boundary]:
sanitizedDecision = decision[0]
if type(sanitizedDecision) != str:
sanitizedDecision = [ele for ele in decision[0]]
if target in sanitizedDecision: sanitizedDecision.remove(target)
sanitizedDecision = tuple(sanitizedDecision)
# Check whether the pattern index match with any of the rule defined in decision list
if (k_closest(words, pattern_index, sanitizedDecision) or right(words, pattern_index, sanitizedDecision) or
left(words, pattern_index,sanitizedDecision) or
two_right(words, pattern_index, sanitizedDecision) or
two_left(words, pattern_index, sanitizedDecision) or
surround(words, pattern_index, sanitizedDecision)):
sense = decision[1][1]
return sense
# predict_sense("I am a sake player")
###Output
_____no_output_____
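###Markdown
As described in the Variable Definition cell, once the decision list has been populated the model can be applied to an isolated sentence. A small usage sketch (the example sentence is the one from the commented-out line above, and the returned value depends on the rules learned from the training data):
###Code
sense = predict_sense("I am a sake player")
print("predicted sense:", sense)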
###Markdown
Computing Accuracy, Precision and Recall
###Code
def compute_metrics(lines, analysis_type=1):
FP = 0
FN = 0
TP = 0
TN = 0
text = []
count_true = 0
for line in lines:
splitLine = process_text(line)
actual_sense = 1 if splitLine[0] == target else 2
predicted_sense = predict_sense(splitLine[1])
# default_sense = defaultSense
# print(predicted_sense, splitLine[1])
if (analysis_type==1 and predicted_sense == actual_sense) or (analysis_type==2 and default_sense == actual_sense):
count_true = count_true + 1
#######################Precision and Recall Start#############################
if analysis_type==1:
# count true positive
if actual_sense ==1 and predicted_sense==1:
TP=TP+1
# count true negative
if actual_sense ==2 and predicted_sense==2:
TN=TN+1
            # count false negative (actual sense 1 predicted as sense 2)
            if actual_sense ==1 and predicted_sense==2:
                FN=FN+1
            # count false positive (actual sense 2 predicted as sense 1)
            if actual_sense ==2 and predicted_sense==1:
                FP=FP+1
# Count for baseline
else:
# count true positive
if actual_sense ==1 and default_sense==1:
TP=TP+1
# count true negative
if actual_sense ==2 and default_sense==2:
TN=TN+1
            # count false negative (actual sense 1, baseline predicts sense 2)
            if actual_sense ==1 and default_sense==2:
                FN=FN+1
            # count false positive (actual sense 2, baseline predicts sense 1)
            if actual_sense ==2 and default_sense==1:
                FP=FP+1
#######################Precision and Recall End#############################
print(TP, TN, FP, FN)
accuracy = count_true/len(lines)
# Precision
precision = TP/(TP+FP)
# Recall
recall = TP/(TP+FN)
return accuracy, precision, recall
# print("Accuracy:", accuracy)
###Output
_____no_output_____
###Markdown
Test on Validation Set
###Code
accuracy, precision, recall = compute_metrics(validation_lines)
print("Validation accuracy = %0.4f, precision= = %0.4f, recall= = %0.4f" %
(accuracy, precision, recall))
###Output
[('said', (2.532117116248804, 1)), ('peace', (2.4479328655921804, 1)), ('child', (2.323252100171687, 1)), (('sake', 'peace'), (2.323252100171687, 1)), ('country', (2.302114376956201, 1)), ('people', (2.1775364999298623, 1)), (('sake', 'child'), (2.1156105116742996, 1)), (('sake', 'nation'), (2.002166061756508, 1)), (('sake', 'sake'), (1.9566485792052033, 1)), (('sake', 'country'), (1.8481891169913987, 1))]
###Markdown
Sense Disambiguation using Decision List
###Code
accuracy, precision, recall = compute_metrics(testlines)
print("Decision list test accuracy = %0.4f, precision= = %0.4f, recall= = %0.4f" %
(accuracy, precision, recall))
###Output
[('said', (2.532117116248804, 1)), ('peace', (2.4479328655921804, 1)), ('child', (2.323252100171687, 1)), (('sake', 'peace'), (2.323252100171687, 1)), ('country', (2.302114376956201, 1)), ('people', (2.1775364999298623, 1)), (('sake', 'child'), (2.1156105116742996, 1)), (('sake', 'nation'), (2.002166061756508, 1)), (('sake', 'sake'), (1.9566485792052033, 1)), (('sake', 'country'), (1.8481891169913987, 1))]
###Markdown
Baseline Sense Disambiguation
###Code
accuracy, precision, recall = compute_metrics(testlines,2)
print("Baseline accuracy = %0.4f, precision= = %0.4f, recall= = %0.4f" %
(accuracy, precision, recall))
###Output
[('said', (2.532117116248804, 1)), ('peace', (2.4479328655921804, 1)), ('child', (2.323252100171687, 1)), (('sake', 'peace'), (2.323252100171687, 1)), ('country', (2.302114376956201, 1)), ('people', (2.1775364999298623, 1)), (('sake', 'child'), (2.1156105116742996, 1)), (('sake', 'nation'), (2.002166061756508, 1)), (('sake', 'sake'), (1.9566485792052033, 1)), (('sake', 'country'), (1.8481891169913987, 1))]
|
facedetection.ipynb | ###Markdown
Model 1 haarcascade_frontalface_default.xml
###Code
#importing libraries
import cv2
import os
import requests
import numpy as np
import pandas as pd
from IPython.display import display
#starting video
cap=cv2.VideoCapture(0)
#loading default cascade
face=cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
#variable to be used
skip=0
face_data=[]
dataset_path='./data/'
#getting required info from user
file_roll_person=input("enter the roll number:")
stud_phone = input("enter the Phone Number :")
#saving the info in the file
df = pd.read_csv('students.csv')
data = {
"Phone Number" : [str(stud_phone)],
"Roll Number" :[ str(file_roll_person)]
}
add_df = pd.DataFrame(data)
new_df = df.append(add_df)
new_df.to_csv('students.csv',index=False)
#setting file name to roll number of user
file_name = str(file_roll_person)
#recording the face through webcam
while True:
ret,frame=cap.read()
#converting into gray
gray=cv2.cvtColor(frame,cv2.COLOR_BGR2GRAY)
if ret==False:
continue
#detection of face
faces=face.detectMultiScale(frame,1.3,5)
#sort them in order to achieve highest face ratio
faces=sorted(faces,key=lambda f:f[2]*f[3])
    #looping over the faces and appending face data
for (x,y,w,h) in faces[-1:]:
cv2.rectangle(frame,(x,y),(x+w,y+h),(0,255,255),2)
offset=10
face_section=frame[y-offset:y+h+offset,x-offset:x+w+offset]
face_section=cv2.resize(face_section,(100,100))
skip+=1
if skip%10==0:
face_data.append(face_section)
print(face_data)
cv2.imshow("frame",frame)
#cv2.imshow("face_section",face_section)
key=cv2.waitKey(30) & 0xFF
if key==ord('q'):
break
#converting the collected face data into a numpy array
face_data=np.asarray(face_data)
face_data=face_data.reshape((face_data.shape[0],-1))
#save the data
np.save(dataset_path+file_name+".npy",face_data)
#turn off the webcam
cap.release()
cv2.destroyAllWindows()
#importing the libraries
import cv2
import requests
import os
import numpy as np
import pandas as pd
from IPython.display import display
def knn(X,Y,k=5):
"""
    Takes the training set, a flattened face section and the number of nearest
    neighbours, and returns the most frequent label among the k closest samples.
-Args: trainset,face section and nearest neighbour
-return: prediction
"""
val=[]
m=X.shape[0]
for i in range(m):
ix=X[i,:-1]
iy=X[i,-1]
d=dist(Y,ix)
val.append((d,iy))
vals=sorted(val,key=lambda x:x[0])[:k]
vals=np.array(vals)[:,-1]
new_val=np.unique(vals,return_counts=True)
index=np.argmax(new_val[1])
pred=new_val[0][index]
return pred
def dist(x1,x2):
"""
    Takes x1 and x2 and returns the Euclidean distance between them.
-Args: X1,X2
-return: distance between them
"""
return np.sqrt(sum(((x1-x2)**2)))
def mark_attendance(ids):
"""
    Takes the recognised ids, saves them in attendance.csv and sends each student
    a notification on their registered phone number.
-Args: ids
-return: None
"""
df = pd.DataFrame({
'Roll Number' : ids
})
df.to_csv('attendance.csv')
    #saving the roll numbers and dropping unnecessary columns
unique_phone_ = []
new_df = pd.read_csv('attendance.csv')
columns_list = np.array(new_df.columns)
drop_col = []
for col in columns_list:
if "Unnamed:" in col:
drop_col.append(col)
new_df.drop(drop_col,axis = 1,inplace=True)
new_df.fillna(0,inplace=True)
new_df.to_csv('attendance.csv',index=False)
    #sending them a notification using the Fast2SMS service
df = pd.read_csv('students.csv')
phone_numbers = []
for idi in ids:
if int(idi) in df['Roll Number'].unique():
phone_numbers.append((df[df['Roll Number']==idi]['Phone Number'].values[0]))
url = "https://www.fast2sms.com/dev/bulk"
headers = {'authorization': "AUTHORIZATION_KEY",
'Content-Type': "application/x-www-form-urlencoded",
'Cache-Control': "no-cache",
}
print("before sending messages")
print(phone_numbers)
for num in phone_numbers:
if num not in unique_phone_:
unique_phone_.append(num)
for numbers in unique_phone_:
print(numbers)
payload = "sender_id=FSTSMS&message= Your Attendance is marked &language=english&route=p&numbers="+str(numbers)
response = requests.request("POST", url, data=payload, headers=headers)
print(response.text)
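#recognition phase: load every saved .npy face array from ./data/, build a labelled
#training set, classify each webcam face with the knn() helper above, and finally
#mark attendance for the unique roll numbers that were recognised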
cap=cv2.VideoCapture(0)
face_cascade=cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
skip=0
face_data=[]
dataset_path='./data/'
label=[]
class_id=0
uniq_student_ids = []
names={}
students_ids = [ ]
stud_df = pd.read_csv('students.csv')
current_students = [ ]
student_id = ' '
for i in range(stud_df.shape[0]):
student_id = str(stud_df['Roll Number'].values[i])
current_students.append(student_id)
for fx in os.listdir(dataset_path):
if fx.endswith('.npy'):
names[class_id]=fx[:-4]
data_item=np.load(dataset_path+fx)
face_data.append(data_item)
#Create labels for class
target=class_id*np.ones((data_item.shape[0],))
class_id+=1
label.append(target)
face_dataset=np.concatenate(face_data,axis=0)
labels_dataset=np.concatenate(label,axis=0).reshape((-1,1))
trainset=np.concatenate((face_dataset,labels_dataset),axis=1)
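#live recognition loop: detect faces, crop and resize each one to 100x100,
#predict its roll number with knn() and overlay the prediction on the frame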
while True:
ret,frame=cap.read()
if ret==False:
continue
faces=face_cascade.detectMultiScale(frame,1.3,5)
for face in faces:
x,y,w,h=face
offset=10
face_section=frame[y-offset:y+h+offset,x-offset:x+offset+w]
face_section=cv2.resize(face_section,(100,100))
out=knn(trainset,face_section.flatten())
pred=names[int(out)]
students_ids.append(pred)
cv2.putText(frame,pred,(x,y-10),cv2.FONT_HERSHEY_SIMPLEX,0.5,(0,255,1),1,cv2.LINE_AA)
cv2.rectangle(frame,(x,y),(x+w,y+h),(0,255,255),2)
cv2.imshow("frame",frame)
key=cv2.waitKey(1) & 0xFF
if key==ord('q'):
break
for ids in students_ids:
if ids not in uniq_student_ids:
uniq_student_ids.append(int(ids))
print(uniq_student_ids )
mark_attendance(uniq_student_ids)
cap.release()
cv2.destroyAllWindows()
###Output
[8, 8, 8, 8, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12]
before sending messages
[9267953633, 9267953633, 9267953633, 9267953633, 9267953633, 9267953633, 9267953633, 9267953633, 9267953633, 9267953633, 9267953633, 9267953633, 9267953633, 9267953633, 9267953633]
9267953633
{"return":true,"request_id":"xhmvk3ibodyr8cn","message":["Message sent successfully to NonDND numbers"]}
###Markdown
Model 2 haarcascade_frontalface_alt.xml
###Code
#importing libraries
import cv2
import os
import requests
import numpy as np
import pandas as pd
from IPython.display import display
#starting video
cap=cv2.VideoCapture(0)
#loading default cascade
face=cv2.CascadeClassifier("haarcascade_frontalface_alt.xml")
#variable to be used
skip=0
face_data=[]
dataset_path='./data/'
#getting required info from user
file_roll_person=input("enter the roll number:")
stud_phone = input("enter the Phone Number :")
#saving the info in the file
df = pd.read_csv('students.csv')
data = {
"Phone Number" : [str(stud_phone)],
"Roll Number" :[ str(file_roll_person)]
}
add_df = pd.DataFrame(data)
new_df = df.append(add_df)
new_df.to_csv('students.csv',index=False)
#setting file name to roll number of user
file_name = str(file_roll_person)
#recording the face through webcam
while True:
ret,frame=cap.read()
#converting into gray
gray=cv2.cvtColor(frame,cv2.COLOR_BGR2GRAY)
if ret==False:
continue
#detection of face
faces=face.detectMultiScale(frame,1.3,5)
#sort them in order to achieve highest face ratio
faces=sorted(faces,key=lambda f:f[2]*f[3])
    #looping over the faces and appending face data
for (x,y,w,h) in faces[-1:]:
cv2.rectangle(frame,(x,y),(x+w,y+h),(0,255,255),2)
offset=10
face_section=frame[y-offset:y+h+offset,x-offset:x+w+offset]
face_section=cv2.resize(face_section,(100,100))
skip+=1
if skip%10==0:
face_data.append(face_section)
print(face_data)
cv2.imshow("frame",frame)
#cv2.imshow("face_section",face_section)
key=cv2.waitKey(30) & 0xFF
if key==ord('q'):
break
#converting the collected face data into a numpy array
face_data=np.asarray(face_data)
face_data=face_data.reshape((face_data.shape[0],-1))
#save the data
np.save(dataset_path+file_name+".npy",face_data)
#turn off the webcam
cap.release()
cv2.destroyAllWindows()
#importing the libraries
import cv2
import requests
import os
import numpy as np
import pandas as pd
from IPython.display import display
def knn(X,Y,k=5):
"""
    Takes the training set, a flattened face section and the number of nearest
    neighbours, and returns the most frequent label among the k closest samples.
-Args: trainset,face section and nearest neighbour
-return: prediction
"""
val=[]
m=X.shape[0]
for i in range(m):
ix=X[i,:-1]
iy=X[i,-1]
d=dist(Y,ix)
val.append((d,iy))
vals=sorted(val,key=lambda x:x[0])[:k]
vals=np.array(vals)[:,-1]
new_val=np.unique(vals,return_counts=True)
index=np.argmax(new_val[1])
pred=new_val[0][index]
return pred
def dist(x1,x2):
"""
    Takes x1 and x2 and returns the Euclidean distance between them.
-Args: X1,X2
-return: distance between them
"""
return np.sqrt(sum(((x1-x2)**2)))
def mark_attendance(ids):
"""
    Takes the recognised ids, saves them in attendance.csv and sends each student
    a notification on their registered phone number.
-Args: ids
-return: None
"""
df = pd.DataFrame({
'Roll Number' : ids
})
df.to_csv('attendance.csv')
    #saving the roll numbers and dropping unnecessary columns
unique_phone_ = []
new_df = pd.read_csv('attendance.csv')
columns_list = np.array(new_df.columns)
drop_col = []
for col in columns_list:
if "Unnamed:" in col:
drop_col.append(col)
new_df.drop(drop_col,axis = 1,inplace=True)
new_df.fillna(0,inplace=True)
new_df.to_csv('attendance.csv',index=False)
    #sending them a notification using the Fast2SMS service
df = pd.read_csv('students.csv')
phone_numbers = []
for idi in ids:
if int(idi) in df['Roll Number'].unique():
phone_numbers.append((df[df['Roll Number']==idi]['Phone Number'].values[0]))
url = "https://www.fast2sms.com/dev/bulk"
headers = {'authorization': "AUTHORIZATION_KEY",
'Content-Type': "application/x-www-form-urlencoded",
'Cache-Control': "no-cache",
}
print("before sending messages")
print(phone_numbers)
for num in phone_numbers:
if num not in unique_phone_:
unique_phone_.append(num)
for numbers in unique_phone_:
print(numbers)
payload = "sender_id=FSTSMS&message= Your Attendance is marked &language=english&route=p&numbers="+str(numbers)
response = requests.request("POST", url, data=payload, headers=headers)
print(response.text)
cap=cv2.VideoCapture(0)
face_cascade=cv2.CascadeClassifier("haarcascade_frontalface_alt.xml")
skip=0
face_data=[]
dataset_path='./data/'
label=[]
class_id=0
uniq_student_ids = []
names={}
students_ids = [ ]
stud_df = pd.read_csv('students.csv')
current_students = [ ]
student_id = ' '
for i in range(stud_df.shape[0]):
student_id = str(stud_df['Roll Number'].values[i])
current_students.append(student_id)
for fx in os.listdir(dataset_path):
if fx.endswith('.npy'):
names[class_id]=fx[:-4]
data_item=np.load(dataset_path+fx)
face_data.append(data_item)
#Create labels for class
target=class_id*np.ones((data_item.shape[0],))
class_id+=1
label.append(target)
face_dataset=np.concatenate(face_data,axis=0)
labels_dataset=np.concatenate(label,axis=0).reshape((-1,1))
trainset=np.concatenate((face_dataset,labels_dataset),axis=1)
while True:
ret,frame=cap.read()
if ret==False:
continue
faces=face_cascade.detectMultiScale(frame,1.3,5)
for face in faces:
x,y,w,h=face
offset=10
face_section=frame[y-offset:y+h+offset,x-offset:x+offset+w]
face_section=cv2.resize(face_section,(100,100))
out=knn(trainset,face_section.flatten())
pred=names[int(out)]
students_ids.append(pred)
cv2.putText(frame,pred,(x,y-10),cv2.FONT_HERSHEY_SIMPLEX,0.5,(0,255,1),1,cv2.LINE_AA)
cv2.rectangle(frame,(x,y),(x+w,y+h),(0,255,255),2)
cv2.imshow("frame",frame)
key=cv2.waitKey(1) & 0xFF
if key==ord('q'):
break
for ids in students_ids:
if ids not in uniq_student_ids:
uniq_student_ids.append(int(ids))
print(uniq_student_ids )
mark_attendance(uniq_student_ids)
cap.release()
cv2.destroyAllWindows()
###Output
_____no_output_____
###Markdown
Model 3 haarcascade_frontalface_alt2.xml
###Code
#importing libraries
import cv2
import os
import requests
import numpy as np
import pandas as pd
from IPython.display import display
#starting video
cap=cv2.VideoCapture(0)
#loading default cascade
face=cv2.CascadeClassifier("haarcascade_frontalface_alt2.xml")
#variable to be used
skip=0
face_data=[]
dataset_path='./data/'
#getting required info from user
file_roll_person=input("enter the roll number:")
stud_phone = input("enter the Phone Number :")
#saving the info in the file
df = pd.read_csv('students.csv')
data = {
"Phone Number" : [str(stud_phone)],
"Roll Number" :[ str(file_roll_person)]
}
add_df = pd.DataFrame(data)
new_df = df.append(add_df)
new_df.to_csv('students.csv',index=False)
#setting file name to roll number of user
file_name = str(file_roll_person)
#recording the face through webcam
while True:
ret,frame=cap.read()
#converting into gray
gray=cv2.cvtColor(frame,cv2.COLOR_BGR2GRAY)
if ret==False:
continue
#detection of face
faces=face.detectMultiScale(frame,1.3,5)
#sort them in order to achieve highest face ratio
faces=sorted(faces,key=lambda f:f[2]*f[3])
    #looping over the faces and appending face data
for (x,y,w,h) in faces[-1:]:
cv2.rectangle(frame,(x,y),(x+w,y+h),(0,255,255),2)
offset=10
face_section=frame[y-offset:y+h+offset,x-offset:x+w+offset]
face_section=cv2.resize(face_section,(100,100))
skip+=1
if skip%10==0:
face_data.append(face_section)
print(face_data)
cv2.imshow("frame",frame)
#cv2.imshow("face_section",face_section)
key=cv2.waitKey(30) & 0xFF
if key==ord('q'):
break
#converting the collected face data into a numpy array
face_data=np.asarray(face_data)
face_data=face_data.reshape((face_data.shape[0],-1))
#save the data
np.save(dataset_path+file_name+".npy",face_data)
#turn off the webcam
cap.release()
cv2.destroyAllWindows()
#importing the libraries
import cv2
import requests
import os
import numpy as np
import pandas as pd
from IPython.display import display
def knn(X,Y,k=5):
"""
    Takes the training set, a flattened face section and the number of nearest
    neighbours, and returns the most frequent label among the k closest samples.
-Args: trainset,face section and nearest neighbour
-return: prediction
"""
val=[]
m=X.shape[0]
for i in range(m):
ix=X[i,:-1]
iy=X[i,-1]
d=dist(Y,ix)
val.append((d,iy))
vals=sorted(val,key=lambda x:x[0])[:k]
vals=np.array(vals)[:,-1]
new_val=np.unique(vals,return_counts=True)
index=np.argmax(new_val[1])
pred=new_val[0][index]
return pred
def dist(x1,x2):
"""
    Takes x1 and x2 and returns the Euclidean distance between them.
-Args: X1,X2
-return: distance between them
"""
return np.sqrt(sum(((x1-x2)**2)))
def mark_attendance(ids):
"""
    Takes the recognised ids, saves them in attendance.csv and sends each student
    a notification on their registered phone number.
-Args: ids
-return: None
"""
df = pd.DataFrame({
'Roll Number' : ids
})
df.to_csv('attendance.csv')
    #saving the roll numbers and dropping unnecessary columns
unique_phone_ = []
new_df = pd.read_csv('attendance.csv')
columns_list = np.array(new_df.columns)
drop_col = []
for col in columns_list:
if "Unnamed:" in col:
drop_col.append(col)
new_df.drop(drop_col,axis = 1,inplace=True)
new_df.fillna(0,inplace=True)
new_df.to_csv('attendance.csv',index=False)
    #sending them a notification using the Fast2SMS service
df = pd.read_csv('students.csv')
phone_numbers = []
for idi in ids:
if int(idi) in df['Roll Number'].unique():
phone_numbers.append((df[df['Roll Number']==idi]['Phone Number'].values[0]))
url = "https://www.fast2sms.com/dev/bulk"
headers = {'authorization': "AUTHORIZATION_KEY",
'Content-Type': "application/x-www-form-urlencoded",
'Cache-Control': "no-cache",
}
print("before sending messages")
print(phone_numbers)
for num in phone_numbers:
if num not in unique_phone_:
unique_phone_.append(num)
for numbers in unique_phone_:
print(numbers)
payload = "sender_id=FSTSMS&message= Your Attendance is marked &language=english&route=p&numbers="+str(numbers)
response = requests.request("POST", url, data=payload, headers=headers)
print(response.text)
cap=cv2.VideoCapture(0)
face_cascade=cv2.CascadeClassifier("haarcascade_frontalface_alt2.xml")
skip=0
face_data=[]
dataset_path='./data/'
label=[]
class_id=0
uniq_student_ids = []
names={}
students_ids = [ ]
stud_df = pd.read_csv('students.csv')
current_students = [ ]
student_id = ' '
for i in range(stud_df.shape[0]):
student_id = str(stud_df['Roll Number'].values[i])
current_students.append(student_id)
for fx in os.listdir(dataset_path):
if fx.endswith('.npy'):
names[class_id]=fx[:-4]
data_item=np.load(dataset_path+fx)
face_data.append(data_item)
#Create labels for class
target=class_id*np.ones((data_item.shape[0],))
class_id+=1
label.append(target)
face_dataset=np.concatenate(face_data,axis=0)
labels_dataset=np.concatenate(label,axis=0).reshape((-1,1))
trainset=np.concatenate((face_dataset,labels_dataset),axis=1)
while True:
ret,frame=cap.read()
if ret==False:
continue
faces=face_cascade.detectMultiScale(frame,1.3,5)
for face in faces:
x,y,w,h=face
offset=10
face_section=frame[y-offset:y+h+offset,x-offset:x+offset+w]
face_section=cv2.resize(face_section,(100,100))
out=knn(trainset,face_section.flatten())
pred=names[int(out)]
students_ids.append(pred)
cv2.putText(frame,pred,(x,y-10),cv2.FONT_HERSHEY_SIMPLEX,0.5,(0,255,1),1,cv2.LINE_AA)
cv2.rectangle(frame,(x,y),(x+w,y+h),(0,255,255),2)
cv2.imshow("frame",frame)
key=cv2.waitKey(1) & 0xFF
if key==ord('q'):
break
for ids in students_ids:
if ids not in uniq_student_ids:
uniq_student_ids.append(int(ids))
print(uniq_student_ids )
mark_attendance(uniq_student_ids)
cap.release()
cv2.destroyAllWindows()
###Output
[89, 89, 89, 89, 89, 89, 89]
before sending messages
[789789, 789789, 789789, 789789, 789789, 789789, 789789]
789789
{"return":false,"status_code":411,"message":"Invalid Numbers"}
###Markdown
Model 1 haarcascade_frontalface_default.xml
###Code
import cv2
import numpy as np
import pandas as pd
cap=cv2.VideoCapture(0)
face=cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
skip=0
face_data=[]
dataset_path='./data/'
file_name_person=input("enter the name:")
file_roll_person=input("enter the roll number:")
df = pd.read_csv('attendace.csv')
print(df)
data = {
"Names" : [str(file_name_person)],
"Roll Number" :[ str(file_roll_person)]
}
add_df = pd.DataFrame(data)
new_df = df.append(add_df)
new_df.to_csv('attendace.csv',index=False)
file_name = str(file_name_person) + str(file_roll_person)
while True:
ret,frame=cap.read()
gray=cv2.cvtColor(frame,cv2.COLOR_BGR2GRAY)
if ret==False:
continue
faces=face.detectMultiScale(frame,1.3,5)
faces=sorted(faces,key=lambda f:f[2]*f[3])
for (x,y,w,h) in faces[-1:]:
cv2.rectangle(frame,(x,y),(x+w,y+h),(0,255,255),2)
offset=10
face_section=frame[y-offset:y+h+offset,x-offset:x+w+offset]
face_section=cv2.resize(face_section,(100,100))
skip+=1
if skip%10==0:
face_data.append(face_section)
print(face_data)
cv2.imshow("frame",frame)
#cv2.imshow("face_section",face_section)
key=cv2.waitKey(30) & 0xFF
if key==ord('q'):
break
face_data=np.asarray(face_data)
face_data=face_data.reshape((face_data.shape[0],-1))
#print(face_data.shape)
new_df = pd.read_csv('attendace.csv')
columns_list = np.array(new_df.columns)
drop_col = []
for col in columns_list:
if "Unnamed:" in col:
drop_col.append(col)
new_df.drop(drop_col,axis = 1,inplace=True)
new_df.fillna(0,inplace=True)
new_df.to_csv('attendace.csv',index=False)
new_df = pd.read_csv('attendace.csv')
print(new_df.head())
np.save(dataset_path+file_name+".npy",face_data)
cap.release()
cv2.destroyAllWindows()
import cv2
import numpy as np
import pandas as pd
import os
def dist(x1,x2):
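    #Euclidean distance between two flattened face vectors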
return np.sqrt(sum(((x1-x2)**2)))
new_df = pd.read_csv('attendace.csv')
current_students = [ ]
student_id = ' '
for i in range(new_df.shape[0]):
student_id = new_df['Names'].values[i] + str(new_df['Roll Number'].values[i])
current_students.append(student_id)
print(current_students)
def knn(X,Y,k=5):
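    #simple k-nearest-neighbour classifier: X is the training set with the label in
    #the last column, Y is a flattened query face; returns the majority label of the
    #k closest training samples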
val=[]
m=X.shape[0]
for i in range(m):
ix=X[i,:-1]
iy=X[i,-1]
d=dist(Y,ix)
val.append((d,iy))
vals=sorted(val,key=lambda x:x[0])[:k]
vals=np.array(vals)[:,-1]
new_val=np.unique(vals,return_counts=True)
#print(new_val)
index=np.argmax(new_val[1])
pred=new_val[0][index]
return pred
cap=cv2.VideoCapture(0)
face_cascade=cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
skip=0
face_data=[]
dataset_path='./data/'
label=[]
class_id=0
names={}
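#load each saved .npy face file; the file name (person name + roll number) becomes
#the class label that is shown on screen when that face is recognised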
for fx in os.listdir(dataset_path):
if fx.endswith('.npy'):
names[class_id]=fx[:-4]
print("loaded "+fx)
data_item=np.load(dataset_path+fx)
face_data.append(data_item)
#Create labels for class
target=class_id*np.ones((data_item.shape[0],))
class_id+=1
label.append(target)
face_dataset=np.concatenate(face_data,axis=0)
labels_dataset=np.concatenate(label,axis=0).reshape((-1,1))
trainset=np.concatenate((face_dataset,labels_dataset),axis=1)
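#live loop: classify each detected face with knn() and show 'Access Allowed' only if
#the predicted name + roll number pair exists in attendace.csv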
while True:
ret,frame=cap.read()
if ret==False:
continue
faces=face_cascade.detectMultiScale(frame,1.3,5)
for face in faces:
x,y,w,h=face
offset=10
face_section=frame[y-offset:y+h+offset,x-offset:x+offset+w]
face_section=cv2.resize(face_section,(100,100))
print(face_section)
out=knn(trainset,face_section.flatten())
pred=names[int(out)]
if pred in current_students:
print(pred)
pred = pred + 'Access Allowed'
else:
pred = pred + 'Access Denied'
cv2.putText(frame,pred,(x,y-10),cv2.FONT_HERSHEY_SIMPLEX,0.5,(0,255,1),1,cv2.LINE_AA)
cv2.rectangle(frame,(x,y),(x+w,y+h),(0,255,255),2)
cv2.imshow("frame",frame)
key=cv2.waitKey(1) & 0xFF
if key==ord('q'):
break
cap.release()
cv2.destroyAllWindows()
###Output
_____no_output_____
###Markdown
Model 2 haarcascade_frontalface_alt.xml
###Code
import cv2
import numpy as np
import pandas as pd
cap=cv2.VideoCapture(0)
face=cv2.CascadeClassifier("haarcascade_frontalface_alt.xml")
skip=0
face_data=[]
dataset_path='./data/'
file_name_person=input("enter the name:")
file_roll_person=input("enter the roll number:")
df = pd.read_csv('attendace.csv')
print(df)
data = {
"Names" : [str(file_name_person)],
"Roll Number" :[ str(file_roll_person)]
}
add_df = pd.DataFrame(data)
new_df = df.append(add_df)
new_df.to_csv('attendace.csv',index=False)
file_name = str(file_name_person) + str(file_roll_person)
while True:
ret,frame=cap.read()
gray=cv2.cvtColor(frame,cv2.COLOR_BGR2GRAY)
if ret==False:
continue
faces=face.detectMultiScale(frame,1.3,5)
faces=sorted(faces,key=lambda f:f[2]*f[3])
for (x,y,w,h) in faces[-1:]:
cv2.rectangle(frame,(x,y),(x+w,y+h),(0,255,255),2)
offset=10
face_section=frame[y-offset:y+h+offset,x-offset:x+w+offset]
face_section=cv2.resize(face_section,(100,100))
skip+=1
if skip%10==0:
face_data.append(face_section)
print(face_data)
cv2.imshow("frame",frame)
#cv2.imshow("face_section",face_section)
key=cv2.waitKey(30) & 0xFF
if key==ord('q'):
break
face_data=np.asarray(face_data)
face_data=face_data.reshape((face_data.shape[0],-1))
#print(face_data.shape)
new_df = pd.read_csv('attendace.csv')
columns_list = np.array(new_df.columns)
drop_col = []
for col in columns_list:
if "Unnamed:" in col:
drop_col.append(col)
new_df.drop(drop_col,axis = 1,inplace=True)
new_df.fillna(0,inplace=True)
new_df.to_csv('attendace.csv',index=False)
new_df = pd.read_csv('attendace.csv')
print(new_df.head())
np.save(dataset_path+file_name+".npy",face_data)
cap.release()
cv2.destroyAllWindows()
import cv2
import numpy as np
import pandas as pd
import os
def dist(x1,x2):
return np.sqrt(sum(((x1-x2)**2)))
new_df = pd.read_csv('attendace.csv')
current_students = [ ]
student_id = ' '
for i in range(new_df.shape[0]):
student_id = new_df['Names'].values[i] + str(new_df['Roll Number'].values[i])
current_students.append(student_id)
print(current_students)
def knn(X,Y,k=5):
val=[]
m=X.shape[0]
for i in range(m):
ix=X[i,:-1]
iy=X[i,-1]
d=dist(Y,ix)
val.append((d,iy))
vals=sorted(val,key=lambda x:x[0])[:k]
vals=np.array(vals)[:,-1]
new_val=np.unique(vals,return_counts=True)
#print(new_val)
index=np.argmax(new_val[1])
pred=new_val[0][index]
return pred
cap=cv2.VideoCapture(0)
face_cascade=cv2.CascadeClassifier("haarcascade_frontalface_alt.xml")
skip=0
face_data=[]
dataset_path='./data/'
label=[]
class_id=0
names={}
for fx in os.listdir(dataset_path):
if fx.endswith('.npy'):
names[class_id]=fx[:-4]
print("loaded "+fx)
data_item=np.load(dataset_path+fx)
face_data.append(data_item)
#Create labels for class
target=class_id*np.ones((data_item.shape[0],))
class_id+=1
label.append(target)
face_dataset=np.concatenate(face_data,axis=0)
labels_dataset=np.concatenate(label,axis=0).reshape((-1,1))
trainset=np.concatenate((face_dataset,labels_dataset),axis=1)
while True:
ret,frame=cap.read()
if ret==False:
continue
faces=face_cascade.detectMultiScale(frame,1.3,5)
for face in faces:
x,y,w,h=face
offset=10
face_section=frame[y-offset:y+h+offset,x-offset:x+offset+w]
face_section=cv2.resize(face_section,(100,100))
print(face_section)
out=knn(trainset,face_section.flatten())
pred=names[int(out)]
if pred in current_students:
print(pred)
pred = pred + 'Access Allowed'
else:
pred = pred + 'Access Denied'
cv2.putText(frame,pred,(x,y-10),cv2.FONT_HERSHEY_SIMPLEX,0.5,(0,255,1),1,cv2.LINE_AA)
cv2.rectangle(frame,(x,y),(x+w,y+h),(0,255,255),2)
cv2.imshow("frame",frame)
key=cv2.waitKey(1) & 0xFF
if key==ord('q'):
break
cap.release()
cv2.destroyAllWindows()
###Output
_____no_output_____
###Markdown
Model 3 haarcascade_frontalface_alt2.xml
###Code
import cv2
import numpy as np
import pandas as pd
cap=cv2.VideoCapture(0)
face=cv2.CascadeClassifier("haarcascade_frontalface_alt2.xml")
skip=0
face_data=[]
dataset_path='./data/'
file_name_person=input("enter the name:")
file_roll_person=input("enter the roll number:")
df = pd.read_csv('attendace.csv')
print(df)
data = {
"Names" : [str(file_name_person)],
"Roll Number" :[ str(file_roll_person)]
}
add_df = pd.DataFrame(data)
new_df = df.append(add_df)
new_df.to_csv('attendace.csv',index=False)
file_name = str(file_name_person) + str(file_roll_person)
while True:
ret,frame=cap.read()
gray=cv2.cvtColor(frame,cv2.COLOR_BGR2GRAY)
if ret==False:
continue
faces=face.detectMultiScale(frame,1.3,5)
faces=sorted(faces,key=lambda f:f[2]*f[3])
for (x,y,w,h) in faces[-1:]:
cv2.rectangle(frame,(x,y),(x+w,y+h),(0,255,255),2)
offset=10
face_section=frame[y-offset:y+h+offset,x-offset:x+w+offset]
face_section=cv2.resize(face_section,(100,100))
skip+=1
if skip%10==0:
face_data.append(face_section)
print(face_data)
cv2.imshow("frame",frame)
#cv2.imshow("face_section",face_section)
key=cv2.waitKey(30) & 0xFF
if key==ord('q'):
break
face_data=np.asarray(face_data)
face_data=face_data.reshape((face_data.shape[0],-1))
#print(face_data.shape)
new_df = pd.read_csv('attendace.csv')
columns_list = np.array(new_df.columns)
drop_col = []
for col in columns_list:
if "Unnamed:" in col:
drop_col.append(col)
new_df.drop(drop_col,axis = 1,inplace=True)
new_df.fillna(0,inplace=True)
new_df.to_csv('attendace.csv',index=False)
new_df = pd.read_csv('attendace.csv')
print(new_df.head())
np.save(dataset_path+file_name+".npy",face_data)
cap.release()
cv2.destroyAllWindows()
import cv2
import numpy as np
import pandas as pd
import os
def dist(x1,x2):
return np.sqrt(sum(((x1-x2)**2)))
new_df = pd.read_csv('attendace.csv')
current_students = [ ]
student_id = ' '
for i in range(new_df.shape[0]):
student_id = new_df['Names'].values[i] + str(new_df['Roll Number'].values[i])
current_students.append(student_id)
print(current_students)
def knn(X,Y,k=5):
val=[]
m=X.shape[0]
for i in range(m):
ix=X[i,:-1]
iy=X[i,-1]
d=dist(Y,ix)
val.append((d,iy))
vals=sorted(val,key=lambda x:x[0])[:k]
vals=np.array(vals)[:,-1]
new_val=np.unique(vals,return_counts=True)
#print(new_val)
index=np.argmax(new_val[1])
pred=new_val[0][index]
return pred
cap=cv2.VideoCapture(0)
face_cascade=cv2.CascadeClassifier("haarcascade_frontalface_alt2.xml")
skip=0
face_data=[]
dataset_path='./data/'
label=[]
class_id=0
names={}
for fx in os.listdir(dataset_path):
if fx.endswith('.npy'):
names[class_id]=fx[:-4]
print("loaded "+fx)
data_item=np.load(dataset_path+fx)
face_data.append(data_item)
#Create labels for class
target=class_id*np.ones((data_item.shape[0],))
class_id+=1
label.append(target)
face_dataset=np.concatenate(face_data,axis=0)
labels_dataset=np.concatenate(label,axis=0).reshape((-1,1))
trainset=np.concatenate((face_dataset,labels_dataset),axis=1)
while True:
ret,frame=cap.read()
if ret==False:
continue
faces=face_cascade.detectMultiScale(frame,1.3,5)
for face in faces:
x,y,w,h=face
offset=10
face_section=frame[y-offset:y+h+offset,x-offset:x+offset+w]
face_section=cv2.resize(face_section,(100,100))
print(face_section)
out=knn(trainset,face_section.flatten())
pred=names[int(out)]
if pred in current_students:
print(pred)
pred = pred + 'Access Allowed'
else:
pred = pred + 'Access Denied'
cv2.putText(frame,pred,(x,y-10),cv2.FONT_HERSHEY_SIMPLEX,0.5,(0,255,1),1,cv2.LINE_AA)
cv2.rectangle(frame,(x,y),(x+w,y+h),(0,255,255),2)
cv2.imshow("frame",frame)
key=cv2.waitKey(1) & 0xFF
if key==ord('q'):
break
cap.release()
cv2.destroyAllWindows()
###Output
_____no_output_____
###Markdown
Model 1 haarcascade_frontalface_default.xml
###Code
#importing libraries
import cv2
import os
import requests
import numpy as np
import pandas as pd
from IPython.display import display
#starting video
cap=cv2.VideoCapture(0)
#loading default cascade
face=cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
#variable to be used
skip=0
face_data=[]
dataset_path='./data/'
#getting required info from user
file_roll_person=input("enter the roll number:")
stud_phone = input("enter the Phone Number :")
#saving the info in the file
df = pd.read_csv('students.csv')
data = {
"Phone Number" : [str(stud_phone)],
"Roll Number" :[ str(file_roll_person)]
}
add_df = pd.DataFrame(data)
new_df = df.append(add_df)
new_df.to_csv('students.csv',index=False)
#setting file name to roll number of user
file_name = str(file_roll_person)
#recording the face through webcam
while True:
ret,frame=cap.read()
#converting into gray
gray=cv2.cvtColor(frame,cv2.COLOR_BGR2GRAY)
if ret==False:
continue
#detection of face
faces=face.detectMultiScale(frame,1.3,5)
#sort them in order to achieve highest face ratio
faces=sorted(faces,key=lambda f:f[2]*f[3])
    #looping over the faces and appending face data
for (x,y,w,h) in faces[-1:]:
cv2.rectangle(frame,(x,y),(x+w,y+h),(0,255,255),2)
offset=10
face_section=frame[y-offset:y+h+offset,x-offset:x+w+offset]
face_section=cv2.resize(face_section,(100,100))
skip+=1
if skip%10==0:
face_data.append(face_section)
print(face_data)
cv2.imshow("frame",frame)
#cv2.imshow("face_section",face_section)
key=cv2.waitKey(30) & 0xFF
if key==ord('q'):
break
#converting the collected face data into a numpy array
face_data=np.asarray(face_data)
face_data=face_data.reshape((face_data.shape[0],-1))
#save the data
np.save(dataset_path+file_name+".npy",face_data)
#turn off the webcam
cap.release()
cv2.destroyAllWindows()
#importing the libraries
import cv2
import requests
import os
import numpy as np
import pandas as pd
from IPython.display import display
def knn(X,Y,k=5):
"""
    Takes the training set, a flattened face section and the number of nearest
    neighbours, and returns the most frequent label among the k closest samples.
-Args: trainset,face section and nearest neighbour
-return: prediction
"""
val=[]
m=X.shape[0]
for i in range(m):
ix=X[i,:-1]
iy=X[i,-1]
d=dist(Y,ix)
val.append((d,iy))
vals=sorted(val,key=lambda x:x[0])[:k]
vals=np.array(vals)[:,-1]
new_val=np.unique(vals,return_counts=True)
index=np.argmax(new_val[1])
pred=new_val[0][index]
return pred
def dist(x1,x2):
"""
    Takes x1 and x2 and returns the Euclidean distance between them.
-Args: X1,X2
-return: distance between them
"""
return np.sqrt(sum(((x1-x2)**2)))
def mark_attendance(ids):
"""
    Takes the recognised ids, saves them in attendance.csv and sends each student
    a notification on their registered phone number.
-Args: ids
-return: None
"""
df = pd.DataFrame({
'Roll Number' : ids
})
df.to_csv('attendance.csv')
    #saving the roll numbers and dropping unnecessary columns
unique_phone_ = []
new_df = pd.read_csv('attendance.csv')
columns_list = np.array(new_df.columns)
drop_col = []
for col in columns_list:
if "Unnamed:" in col:
drop_col.append(col)
new_df.drop(drop_col,axis = 1,inplace=True)
new_df.fillna(0,inplace=True)
new_df.to_csv('attendance.csv',index=False)
    #sending them a notification using the Fast2SMS service
df = pd.read_csv('students.csv')
phone_numbers = []
for idi in ids:
if int(idi) in df['Roll Number'].unique():
phone_numbers.append((df[df['Roll Number']==idi]['Phone Number'].values[0]))
url = "https://www.fast2sms.com/dev/bulk"
headers = {'authorization': "9vUsQhlqu5DtGKMYyB4P6WJNdACoSFiaR3jLHbwczmf2VO8Ip7nzYXQxLFt4gIcdmWy29STeOl5EPjbB",
'Content-Type': "application/x-www-form-urlencoded",
'Cache-Control': "no-cache",
}
print("before sending messages")
print(phone_numbers)
for num in phone_numbers:
if num not in unique_phone_:
unique_phone_.append(num)
for numbers in unique_phone_:
print(numbers)
payload = "sender_id=FSTSMS&message= Your Attendance is marked &language=english&route=p&numbers="+str(numbers)
response = requests.request("POST", url, data=payload, headers=headers)
print(response.text)
cap=cv2.VideoCapture(0)
face_cascade=cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
skip=0
face_data=[]
dataset_path='./data/'
label=[]
class_id=0
uniq_student_ids = []
names={}
students_ids = [ ]
stud_df = pd.read_csv('students.csv')
current_students = [ ]
student_id = ' '
for i in range(stud_df.shape[0]):
student_id = str(stud_df['Roll Number'].values[i])
current_students.append(student_id)
for fx in os.listdir(dataset_path):
if fx.endswith('.npy'):
names[class_id]=fx[:-4]
data_item=np.load(dataset_path+fx)
face_data.append(data_item)
#Create labels for class
target=class_id*np.ones((data_item.shape[0],))
class_id+=1
label.append(target)
face_dataset=np.concatenate(face_data,axis=0)
labels_dataset=np.concatenate(label,axis=0).reshape((-1,1))
trainset=np.concatenate((face_dataset,labels_dataset),axis=1)
while True:
ret,frame=cap.read()
if ret==False:
continue
faces=face_cascade.detectMultiScale(frame,1.3,5)
for face in faces:
x,y,w,h=face
offset=10
face_section=frame[y-offset:y+h+offset,x-offset:x+offset+w]
face_section=cv2.resize(face_section,(100,100))
out=knn(trainset,face_section.flatten())
pred=names[int(out)]
students_ids.append(pred)
cv2.putText(frame,pred,(x,y-10),cv2.FONT_HERSHEY_SIMPLEX,0.5,(0,255,1),1,cv2.LINE_AA)
cv2.rectangle(frame,(x,y),(x+w,y+h),(0,255,255),2)
cv2.imshow("frame",frame)
key=cv2.waitKey(1) & 0xFF
if key==ord('q'):
break
#deduplicate the predicted roll numbers (compare as ints, since predictions come back as strings)
for ids in students_ids:
    if int(ids) not in uniq_student_ids:
        uniq_student_ids.append(int(ids))
print(uniq_student_ids )
mark_attendance(uniq_student_ids)
cap.release()
cv2.destroyAllWindows()
###Output
[8, 8, 8, 8, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12]
before sending messages
[9267953633, 9267953633, 9267953633, 9267953633, 9267953633, 9267953633, 9267953633, 9267953633, 9267953633, 9267953633, 9267953633, 9267953633, 9267953633, 9267953633, 9267953633]
9267953633
{"return":true,"request_id":"xhmvk3ibodyr8cn","message":["Message sent successfully to NonDND numbers"]}
###Markdown
Model 2 haarcascade_frontalface_alt.xml
###Code
#importing libraries
import cv2
import os
import requests
import numpy as np
import pandas as pd
from IPython.display import display
#starting video
cap=cv2.VideoCapture(0)
#loading default cascade
face=cv2.CascadeClassifier("haarcascade_frontalface_alt.xml")
#variable to be used
skip=0
face_data=[]
dataset_path='./data/'
#getting required info from user
file_roll_person=input("enter the roll number:")
stud_phone = input("enter the Phone Number :")
#saving the info in the file
df = pd.read_csv('students.csv')
data = {
"Phone Number" : [str(stud_phone)],
"Roll Number" :[ str(file_roll_person)]
}
add_df = pd.DataFrame(data)
new_df = df.append(add_df)
new_df.to_csv('students.csv',index=False)
#setting file name to roll number of user
file_name = str(file_roll_person)
#recording the face through webcam
while True:
ret,frame=cap.read()
    if ret==False:
        continue
    #converting into gray (note: the detector below still runs on the colour frame)
    gray=cv2.cvtColor(frame,cv2.COLOR_BGR2GRAY)
#detection of face
faces=face.detectMultiScale(frame,1.3,5)
    #sort the faces so that the largest face (by area) comes last
faces=sorted(faces,key=lambda f:f[2]*f[3])
    #take only the largest face and append its cropped section to the face data
for (x,y,w,h) in faces[-1:]:
cv2.rectangle(frame,(x,y),(x+w,y+h),(0,255,255),2)
offset=10
face_section=frame[y-offset:y+h+offset,x-offset:x+w+offset]
face_section=cv2.resize(face_section,(100,100))
skip+=1
if skip%10==0:
face_data.append(face_section)
print(face_data)
cv2.imshow("frame",frame)
#cv2.imshow("face_section",face_section)
key=cv2.waitKey(30) & 0xFF
if key==ord('q'):
break
#convert the collected face sections into a single numpy array
face_data=np.asarray(face_data)
face_data=face_data.reshape((face_data.shape[0],-1))
#save the data
np.save(dataset_path+file_name+".npy",face_data)
#turn off the webcam
cap.release()
cv2.destroyAllWindows()
#importing the libraries
import cv2
import requests
import os
import numpy as np
import pandas as pd
from IPython.display import display
def knn(X,Y,k=5):
"""
It takes trainset,face section and nearest neighbour and based on
data it has it return highest probability prediction.
-Args: trainset,face section and nearest neighbour
-return: prediction
"""
val=[]
m=X.shape[0]
for i in range(m):
ix=X[i,:-1]
iy=X[i,-1]
d=dist(Y,ix)
val.append((d,iy))
vals=sorted(val,key=lambda x:x[0])[:k]
vals=np.array(vals)[:,-1]
new_val=np.unique(vals,return_counts=True)
index=np.argmax(new_val[1])
pred=new_val[0][index]
return pred
def dist(x1,x2):
"""
It takes X1 and X2 and it return the square root distance between them.
-Args: X1,X2
-return: distance between them
"""
return np.sqrt(sum(((x1-x2)**2)))
def mark_attendance(ids):
"""
It takes id , save the ids in attendance.csv file and send them notification on their
phone number .
-Args: ids
-return: None
"""
df = pd.DataFrame({
'Roll Number' : ids
})
df.to_csv('attendance.csv')
    #saving the roll numbers and dropping unnecessary columns
unique_phone_ = []
new_df = pd.read_csv('attendance.csv')
columns_list = np.array(new_df.columns)
drop_col = []
for col in columns_list:
if "Unnamed:" in col:
drop_col.append(col)
new_df.drop(drop_col,axis = 1,inplace=True)
new_df.fillna(0,inplace=True)
new_df.to_csv('attendance.csv',index=False)
    #sending notifications using the Fast2SMS service
df = pd.read_csv('students.csv')
phone_numbers = []
for idi in ids:
if int(idi) in df['Roll Number'].unique():
phone_numbers.append((df[df['Roll Number']==idi]['Phone Number'].values[0]))
url = "https://www.fast2sms.com/dev/bulk"
headers = {'authorization': "9vUsQhlqu5DtGKMYyB4P6WJNdACoSFiaR3jLHbwczmf2VO8Ip7nzYXQxLFt4gIcdmWy29STeOl5EPjbB",
'Content-Type': "application/x-www-form-urlencoded",
'Cache-Control': "no-cache",
}
print("before sending messages")
print(phone_numbers)
for num in phone_numbers:
if num not in unique_phone_:
unique_phone_.append(num)
for numbers in unique_phone_:
print(numbers)
payload = "sender_id=FSTSMS&message= Your Attendance is marked &language=english&route=p&numbers="+str(numbers)
response = requests.request("POST", url, data=payload, headers=headers)
print(response.text)
cap=cv2.VideoCapture(0)
face_cascade=cv2.CascadeClassifier("haarcascade_frontalface_alt.xml")
skip=0
face_data=[]
dataset_path='./data/'
label=[]
class_id=0
uniq_student_ids = []
names={}
students_ids = [ ]
stud_df = pd.read_csv('students.csv')
current_students = [ ]
student_id = ' '
for i in range(stud_df.shape[0]):
student_id = str(stud_df['Roll Number'].values[i])
current_students.append(student_id)
for fx in os.listdir(dataset_path):
if fx.endswith('.npy'):
names[class_id]=fx[:-4]
data_item=np.load(dataset_path+fx)
face_data.append(data_item)
#Create labels for class
target=class_id*np.ones((data_item.shape[0],))
class_id+=1
label.append(target)
face_dataset=np.concatenate(face_data,axis=0)
labels_dataset=np.concatenate(label,axis=0).reshape((-1,1))
trainset=np.concatenate((face_dataset,labels_dataset),axis=1)
while True:
ret,frame=cap.read()
if ret==False:
continue
faces=face_cascade.detectMultiScale(frame,1.3,5)
for face in faces:
x,y,w,h=face
offset=10
face_section=frame[y-offset:y+h+offset,x-offset:x+offset+w]
face_section=cv2.resize(face_section,(100,100))
out=knn(trainset,face_section.flatten())
pred=names[int(out)]
students_ids.append(pred)
cv2.putText(frame,pred,(x,y-10),cv2.FONT_HERSHEY_SIMPLEX,0.5,(0,255,1),1,cv2.LINE_AA)
cv2.rectangle(frame,(x,y),(x+w,y+h),(0,255,255),2)
cv2.imshow("frame",frame)
key=cv2.waitKey(1) & 0xFF
if key==ord('q'):
break
#deduplicate the predicted roll numbers (compare as ints, since predictions come back as strings)
for ids in students_ids:
    if int(ids) not in uniq_student_ids:
        uniq_student_ids.append(int(ids))
print(uniq_student_ids )
mark_attendance(uniq_student_ids)
cap.release()
cv2.destroyAllWindows()
###Output
_____no_output_____
###Markdown
Model 3 haarcascade_frontalface_alt2.xml
###Code
#importing libraries
import cv2
import os
import requests
import numpy as np
import pandas as pd
from IPython.display import display
#starting video
cap=cv2.VideoCapture(0)
#loading default cascade
face=cv2.CascadeClassifier("haarcascade_frontalface_alt2.xml")
#variable to be used
skip=0
face_data=[]
dataset_path='./data/'
#getting required info from user
file_roll_person=input("enter the roll number:")
stud_phone = input("enter the Phone Number :")
#saving the info in the file
df = pd.read_csv('students.csv')
data = {
"Phone Number" : [str(stud_phone)],
"Roll Number" :[ str(file_roll_person)]
}
add_df = pd.DataFrame(data)
new_df = df.append(add_df)
new_df.to_csv('students.csv',index=False)
#setting file name to roll number of user
file_name = str(file_roll_person)
#recording the face through webcam
while True:
ret,frame=cap.read()
    if ret==False:
        continue
    #converting into gray (note: the detector below still runs on the colour frame)
    gray=cv2.cvtColor(frame,cv2.COLOR_BGR2GRAY)
#detection of face
faces=face.detectMultiScale(frame,1.3,5)
    #sort the faces so that the largest face (by area) comes last
faces=sorted(faces,key=lambda f:f[2]*f[3])
    #take only the largest face and append its cropped section to the face data
for (x,y,w,h) in faces[-1:]:
cv2.rectangle(frame,(x,y),(x+w,y+h),(0,255,255),2)
offset=10
face_section=frame[y-offset:y+h+offset,x-offset:x+w+offset]
face_section=cv2.resize(face_section,(100,100))
skip+=1
if skip%10==0:
face_data.append(face_section)
print(face_data)
cv2.imshow("frame",frame)
#cv2.imshow("face_section",face_section)
key=cv2.waitKey(30) & 0xFF
if key==ord('q'):
break
#convert the collected face sections into a single numpy array
face_data=np.asarray(face_data)
face_data=face_data.reshape((face_data.shape[0],-1))
#save the data
np.save(dataset_path+file_name+".npy",face_data)
#turn off the webcam
cap.release()
cv2.destroyAllWindows()
#importing the libraries
import cv2
import requests
import os
import numpy as np
import pandas as pd
from IPython.display import display
def knn(X,Y,k=5):
"""
It takes trainset,face section and nearest neighbour and based on
data it has it return highest probability prediction.
-Args: trainset,face section and nearest neighbour
-return: prediction
"""
val=[]
m=X.shape[0]
for i in range(m):
ix=X[i,:-1]
iy=X[i,-1]
d=dist(Y,ix)
val.append((d,iy))
vals=sorted(val,key=lambda x:x[0])[:k]
vals=np.array(vals)[:,-1]
new_val=np.unique(vals,return_counts=True)
index=np.argmax(new_val[1])
pred=new_val[0][index]
return pred
def dist(x1,x2):
"""
It takes X1 and X2 and it return the square root distance between them.
-Args: X1,X2
-return: distance between them
"""
return np.sqrt(sum(((x1-x2)**2)))
def mark_attendance(ids):
"""
It takes id , save the ids in attendance.csv file and send them notification on their
phone number .
-Args: ids
-return: None
"""
df = pd.DataFrame({
'Roll Number' : ids
})
df.to_csv('attendance.csv')
    #saving the roll numbers and dropping unnecessary columns
unique_phone_ = []
new_df = pd.read_csv('attendance.csv')
columns_list = np.array(new_df.columns)
drop_col = []
for col in columns_list:
if "Unnamed:" in col:
drop_col.append(col)
new_df.drop(drop_col,axis = 1,inplace=True)
new_df.fillna(0,inplace=True)
new_df.to_csv('attendance.csv',index=False)
    #sending notifications using the Fast2SMS service
df = pd.read_csv('students.csv')
phone_numbers = []
for idi in ids:
if int(idi) in df['Roll Number'].unique():
phone_numbers.append((df[df['Roll Number']==idi]['Phone Number'].values[0]))
url = "https://www.fast2sms.com/dev/bulk"
headers = {'authorization': "9vUsQhlqu5DtGKMYyB4P6WJNdACoSFiaR3jLHbwczmf2VO8Ip7nzYXQxLFt4gIcdmWy29STeOl5EPjbB",
'Content-Type': "application/x-www-form-urlencoded",
'Cache-Control': "no-cache",
}
print("before sending messages")
print(phone_numbers)
for num in phone_numbers:
if num not in unique_phone_:
unique_phone_.append(num)
for numbers in unique_phone_:
print(numbers)
payload = "sender_id=FSTSMS&message= Your Attendance is marked &language=english&route=p&numbers="+str(numbers)
response = requests.request("POST", url, data=payload, headers=headers)
print(response.text)
cap=cv2.VideoCapture(0)
face_cascade=cv2.CascadeClassifier("haarcascade_frontalface_alt2.xml")
skip=0
face_data=[]
dataset_path='./data/'
label=[]
class_id=0
uniq_student_ids = []
names={}
students_ids = [ ]
stud_df = pd.read_csv('students.csv')
current_students = [ ]
student_id = ' '
for i in range(stud_df.shape[0]):
student_id = str(stud_df['Roll Number'].values[i])
current_students.append(student_id)
for fx in os.listdir(dataset_path):
if fx.endswith('.npy'):
names[class_id]=fx[:-4]
data_item=np.load(dataset_path+fx)
face_data.append(data_item)
#Create labels for class
target=class_id*np.ones((data_item.shape[0],))
class_id+=1
label.append(target)
face_dataset=np.concatenate(face_data,axis=0)
labels_dataset=np.concatenate(label,axis=0).reshape((-1,1))
trainset=np.concatenate((face_dataset,labels_dataset),axis=1)
while True:
ret,frame=cap.read()
if ret==False:
continue
faces=face_cascade.detectMultiScale(frame,1.3,5)
for face in faces:
x,y,w,h=face
offset=10
face_section=frame[y-offset:y+h+offset,x-offset:x+offset+w]
face_section=cv2.resize(face_section,(100,100))
out=knn(trainset,face_section.flatten())
pred=names[int(out)]
students_ids.append(pred)
cv2.putText(frame,pred,(x,y-10),cv2.FONT_HERSHEY_SIMPLEX,0.5,(0,255,1),1,cv2.LINE_AA)
cv2.rectangle(frame,(x,y),(x+w,y+h),(0,255,255),2)
cv2.imshow("frame",frame)
key=cv2.waitKey(1) & 0xFF
if key==ord('q'):
break
#deduplicate the predicted roll numbers (compare as ints, since predictions come back as strings)
for ids in students_ids:
    if int(ids) not in uniq_student_ids:
        uniq_student_ids.append(int(ids))
print(uniq_student_ids )
mark_attendance(uniq_student_ids)
cap.release()
cv2.destroyAllWindows()
###Output
[89, 89, 89, 89, 89, 89, 89]
before sending messages
[789789, 789789, 789789, 789789, 789789, 789789, 789789]
789789
{"return":false,"status_code":411,"message":"Invalid Numbers"}
###Markdown
Bonus: extract every face from a single photograph with the MTCNN detector
###Code
# only pip install mtcnn the first time
# !pip install mtcnn
# Function to extract all faces from an image
from matplotlib import pyplot
from PIL import Image
from numpy import asarray
from mtcnn.mtcnn import MTCNN
# extract all faces from a given photograph and save each one as a numbered image
def extract_faces(filename):
# load image from file
pixels = pyplot.imread(filename)
# instantiate detector class, using default weights
detector = MTCNN()
# detect faces in the image
results = detector.detect_faces(pixels)
i=0
for result in results:
#insert face only if confidence is greater than 99%
if(result['confidence'] > 0.99):
face_x,face_y,width,height = result['box']
#check for negative index
if((face_x > 0) & (face_y >0)):
face = pixels[face_y:face_y+height,face_x:face_x+width]
face_image = Image.fromarray(face)
face_image.save(f'{i}.jpg')
pyplot.imshow(face_image)
pyplot.show()
print(i)
i +=1
return f'{i} faces have been detected in the given image'
# load the photo and extract the face
extract_faces('people.jpg')
###Output
_____no_output_____ |
quality_embeddings/tfidf_vectorization_large_corpus.ipynb | ###Markdown
TfIdf Vectorization of a large corpusUsually Tfidf vectors need to be trained on a domain-specific corpus. However, in many cases, a generic baseline of idf values can be good enough, and helpful for generic tasks like weighting sentence embeddings. Besides the obvious memory challenges with processing a large corpus, there are important questions that need to be resolved when organizing a collection of documents:* What is considered a document? * is one epistle one document? * is one section or chapter of one speech one document? * is one poem one document, ranging from epigram to a book of epic poetry? * is one chapter in a prose book one document? * Disagree with any of these? Then you'll want to train your own word idf mapping and compare results.* How can we compare TfIdf vectors, and what are some simple baselines?In this notebook we'll work towards creating a generic tfidf vector for a discrete but general purpose corpus.Of course, any time you can limit the scope of your documents to a particular domain and train on those, you will get better results, but to handle unseen data in a robust manner, a general idf mapping is better than assuming a uniform distribution!We'll look at the Tesserae corpus, and generate a word : idf mapping that we can use elsewhere for computing sentence embeddings.We'll generate and assess tfidf vectors of the Tesserae corpus broken into (by turns):* 762 files* 49,938 docs
###Code
import os
import pickle
import re
import sys
from collections import Counter, defaultdict
from glob import glob
from pathlib import Path
currentdir = Path.cwd()
parentdir = os.path.dirname(currentdir)
sys.path.insert(0,parentdir)
from tqdm import tqdm
from cltk.alphabet.lat import normalize_lat
from cltk.sentence.lat import LatinPunktSentenceTokenizer
from cltk.tokenizers.lat.lat import LatinWordTokenizer
from mlyoucanuse.text_cleaners import swallow
from scipy.spatial.distance import cosine
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import mean_squared_error as mse
import matplotlib.pyplot as plt
tesserae = glob(os.path.expanduser('~/cltk_data/latin/text/latin_text_tesserae/texts/*.tess'))
print(f"Tesserae corpus contains: {len(tesserae)} files")
###Output
Tesserae corpus contains: 762 files
###Markdown
Conversions and helper functions
###Code
ANY_ANGLE = re.compile("<.[^>]+>") # used to remove tesserae metadata
toker = LatinWordTokenizer()
sent_toker = LatinPunktSentenceTokenizer()
def toker_call(text):
# skip blank lines
if text.strip() is None:
return []
text = swallow(text, ANY_ANGLE)
# normalize effectively reduces our corpus diversity by 0.028%
text = normalize_lat(text, drop_accents=True,
drop_macrons=True,
jv_replacement=True,
ligature_replacement=True)
return toker.tokenize(text)
vectorizer = TfidfVectorizer(input='filename', tokenizer=toker_call)
vectorizer.fit(tesserae)
print(f"size of vocab: {len(vectorizer.vocabulary_):,}")
word_idf_files = {key: vectorizer.idf_[idx]
for key,idx in tqdm(vectorizer.vocabulary_.items(), total=len(vectorizer.idf_))}
del vectorizer
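
# Quick illustration on a toy corpus (not part of the Tesserae pipeline): with
# sklearn's default smooth_idf=True, idf(t) = ln((1 + n_docs) / (1 + df(t))) + 1,
# so a term that appears in every document still keeps a floor weight of 1.0.
from math import isclose, log
_toy = TfidfVectorizer().fit(["arma virumque cano", "arma arma"])
_toy_idf = {word: _toy.idf_[idx] for word, idx in _toy.vocabulary_.items()}
assert isclose(_toy_idf["arma"], log(3 / 3) + 1)  # in both docs -> 1.0
assert isclose(_toy_idf["cano"], log(3 / 2) + 1)  # in one doc   -> ~1.405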
###Output
/Users/todd/opt/anaconda3/envs/mlycu3.8/lib/python3.8/site-packages/sklearn/feature_extraction/text.py:489: UserWarning: The parameter 'token_pattern' will not be used since 'tokenizer' is not None'
warnings.warn("The parameter 'token_pattern' will not be used"
0%| | 65/299456 [00:00<07:44, 644.12it/s]
###Markdown
Corpus to Documents functions
###Code
def count_numbers(text):
"""
Count the numbers groups in a line of text
>>> count_numbers ('<caes. gal. 8.0.4>')
3
>>> count_numbers('<caes. gal. 1.10.1>')
3
>>> count_numbers("<ov. her. 1.116> Protinus")
2
>>> count_numbers("<cic. arch. 1> si quid est in me ingeni")
1
"""
if re.search(r'\d+\.\d+\.\d+', text):
return 3
if re.search(r'\d+\.\d+', text):
return 2
if re.search(r'\d+', text):
return 1
return 0
def make_file_docs(filename):
"""given a filename return a dictionary with a list of docs.
if two numbers found, join on the first one
<verg. aen. 9.10> Nec satis: extremas Corythi penetravit ad urbes
<verg. ecl. 1.2> silvestrem tenui Musam meditaris avena;
if 3 numbers found, create a doc for each cluster of the first two numbers
<livy. urbe. 31.1.3> tot enim sunt a primo Punico ad secundum bellum finitum—
if just one number split on that
"<cic. arch. 1> si quid est in me ingeni"
"""
file_docs =defaultdict(list)
file_stats = {}
file = os.path.basename(filename)
ibook = None
ichapter = None
with open(filename, 'rt') as fin:
prev_ch= None
lines =[]
all_text=""
for line in fin:
numbers_found = count_numbers(line)
if numbers_found == 0:
if line.strip():
text = swallow(line, ANY_ANGLE)
file_docs[f"{file}"].append(text)
continue
if numbers_found == 3:
match = re.search(r'\d+\.\d+\.\d+', line)
if not match:
continue
start, end = match.span()
num_section = line[start:end]
book, chapter, sent = num_section.split(".")
ibook = int(book)
ichapter = int(chapter)
text = swallow(line, ANY_ANGLE)
if prev_ch == None:
lines.append(text)
prev_ch = ichapter
continue
if prev_ch != ichapter:
file_docs[f"{file}.{ibook}.{prev_ch}"].extend(lines)
lines = []
lines.append(text)
prev_ch = ichapter
else:
lines.append(text)
if numbers_found ==2:
if line.strip():
match = re.search(r'\d+\.\d+', line)
if not match:
continue
start, end = match.span()
num_section = line[start:end]
book, chapter = num_section.split(".")
ibook = int(book)
ichapter = int(chapter)
text = swallow(line, ANY_ANGLE)
file_docs[f"{file}.{ibook}"].append(text)
continue
if numbers_found ==1:
if line.strip():
match = re.search(r'\d+', line)
start, end = match.span()
num_section = line[start:end]
ibook = int(num_section)
text = swallow(line, ANY_ANGLE)
file_docs[f"{file}.{ibook}"].append(text)
continue
if ibook and ichapter and lines:
all_text = ' '.join(lines)
file_docs[f"{file}.{ibook}.{ichapter}"].append(all_text)
prev_ch = None
return file_docs
def make_docs(files):
docs = []
for file in files:
try:
file_docs = make_file_docs( file )
for key in file_docs:
docs.append(' '.join(file_docs[key]))
except Exception as ex:
print("fail with", file)
raise(ex)
return docs
###Output
_____no_output_____
###Markdown
Tests of corpus processing
###Code
base = os.path.expanduser("~/cltk_data/latin/text/latin_text_tesserae/texts/")
file_docs = make_file_docs(f"{base}caesar.de_bello_gallico.part.1.tess")
assert(len(file_docs)==54)
file_docs = make_file_docs(f"{base}vergil.eclogues.tess")
assert(len(file_docs)==10)
file_docs = make_file_docs(f"{base}ovid.fasti.part.1.tess")
assert(len(file_docs)==1)
# print(len(file_docs))
# file_docs
test_files = [ f"{base}caesar.de_bello_gallico.part.1.tess" ,
f"{base}vergil.eclogues.tess",
f"{base}ovid.fasti.part.1.tess"]
docs = make_docs(test_files)
assert(len(docs)==65)
docs = make_docs(tesserae)
print(f"{len(tesserae)} corpus files broken up into {len(docs):,} documents")
vectorizer = TfidfVectorizer(tokenizer=toker_call)
vectorizer.fit(docs)
word_idf = {key: vectorizer.idf_[idx]
for key,idx in tqdm(vectorizer.vocabulary_.items(), total=len(vectorizer.idf_))}
del vectorizer
print(f"distinct words {len(word_idf):,}")
token_lengths = [len(tmp.split()) for tmp in docs]
counter = Counter(token_lengths)
indices_counts = list(counter.items())
indices_counts.sort(key=lambda x:x[0])
indices, counts = zip(*indices_counts )
fig = plt.figure()
ax = fig.add_subplot(2, 1, 1)
line, = ax.plot(counts, color='blue', lw=2)
ax.set_yscale('log')
plt.title("Document Token Counts")
plt.xlabel("# Tokens per Doc")
plt.ylabel("# of Docs")
plt.show()
###Output
_____no_output_____
###Markdown
We'll save this word : idf mapping so it can be reused later for sentence vectorization
###Code
latin_idf_dict_file = "word_idf.latin.pkl"
with open(latin_idf_dict_file, 'wb') as fout:
pickle.dump(word_idf, fout)
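
# Sketch of the intended downstream use. `word_vectors` below is a hypothetical
# token -> vector lookup (e.g. fastText or word2vec embeddings), not built in this
# notebook: each word embedding is weighted by its idf before averaging, so
# frequent, uninformative words contribute less to the sentence vector.
def idf_weighted_sentence_vector(tokens, word_vectors, idf=word_idf):
    default_idf = max(idf.values())  # treat unseen tokens as maximally informative
    pairs = [(word_vectors[tok], idf.get(tok, default_idf))
             for tok in tokens if tok in word_vectors]
    if not pairs:
        return None
    total_weight = sum(weight for _, weight in pairs)
    return sum(vec * weight for vec, weight in pairs) / total_weight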
###Output
_____no_output_____
###Markdown
Compare the idf values using Mean Square Error, CosineThese values become more meaningful as the ETL processes are changed; the measurements indicate how much the idf values have shifted.
###Code
words_idfs = list(word_idf.items())
words_idfs.sort(key=lambda x: x[0])
words_idf_files = list(word_idf_files.items())
words_idf_files.sort(key=lambda x: x[0])
print(f"Words Idfs vocab size: {len(words_idfs):,}, Words Idf from files {len(words_idf_files):,}")
words_idfs = [(key, word_idf.get(key)) for key,val in words_idfs
if key in word_idf_files]
words_idf_files = [(key, word_idf_files.get(key)) for key,val in words_idf_files
if key in word_idf]
assert( len(words_idfs) == len(words_idf_files))
print(f"Total # shared vocab: {len(words_idfs):,}")
_, idfs = zip(*words_idfs)
_, idfs2 = zip(*words_idf_files)
print(f"MSE: {mse(idfs, idfs2)}")
print(f"Cosine: {cosine(idfs, idfs2)}")
###Output
Words Idfs vocab size: 299,406, Words Idf from files 299,456
Total # shared vocab: 299,387
MSE: 16.972181321245785
Cosine: 0.0015073304069079807
|
tensorflow/lite/micro/examples/hello_world/create_sine_model.ipynb | ###Markdown
**Copyright 2019 The TensorFlow Authors.**
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Create and convert a TensorFlow modelThis notebook is designed to demonstrate the process of creating a TensorFlow model and converting it to use with TensorFlow Lite. The model created in this notebook is used in the [hello_world](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/micro/examples/hello_world) sample for [TensorFlow Lite for Microcontrollers](https://www.tensorflow.org/lite/microcontrollers/overview). Import dependenciesOur first task is to import the dependencies we need. Run the following cell to do so:
###Code
# TensorFlow is an open source machine learning library
# Note: The following line is temporary to use v2
!pip install tensorflow==2.0.0-beta0
import tensorflow as tf
# Numpy is a math library
import numpy as np
# Matplotlib is a graphing library
import matplotlib.pyplot as plt
# math is Python's math library
import math
###Output
_____no_output_____
###Markdown
Generate dataDeep learning networks learn to model patterns in underlying data. In this notebook, we're going to train a network to model data generated by a [sine](https://en.wikipedia.org/wiki/Sine) function. This will result in a model that can take a value, `x`, and predict its sine, `y`.In a real world application, if you needed the sine of `x`, you could just calculate it directly. However, by training a model to do this, we can demonstrate the basic principles of machine learning.In the [hello_world](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/micro/examples/hello_world) sample for [TensorFlow Lite for Microcontrollers](https://www.tensorflow.org/lite/microcontrollers/overview), we'll use this model to control LEDs that light up in a sequence.The code in the following cell will generate a set of random `x` values, calculate their sine values, and display them on a graph:
###Code
# We'll generate this many sample datapoints
SAMPLES = 1000
# Set a "seed" value, so we get the same random numbers each time we run this
# notebook
np.random.seed(1337)
# Generate a uniformly distributed set of random numbers in the range from
# 0 to 2π, which covers a complete sine wave oscillation
x_values = np.random.uniform(low=0, high=2*math.pi, size=SAMPLES)
# Shuffle the values to guarantee they're not in order
np.random.shuffle(x_values)
# Calculate the corresponding sine values
y_values = np.sin(x_values)
# Plot our data. The 'b.' argument tells the library to print blue dots.
plt.plot(x_values, y_values, 'b.')
plt.show()
###Output
_____no_output_____
###Markdown
Add some noiseSince it was generated directly by the sine function, our data fits a nice, smooth curve.However, machine learning models are good at extracting underlying meaning from messy, real world data. To demonstrate this, we can add some noise to our data to approximate something more life-like.In the following cell, we'll add some random noise to each value, then draw a new graph:
###Code
# Add a small random number to each y value
y_values += 0.1 * np.random.randn(*y_values.shape)
# Plot our data
plt.plot(x_values, y_values, 'b.')
plt.show()
###Output
_____no_output_____
###Markdown
Split our dataWe now have a noisy dataset that approximates real world data. We'll be using this to train our model.To evaluate the accuracy of the model we train, we'll need to compare its predictions to real data and check how well they match up. This evaluation happens during training (where it is referred to as validation) and after training (referred to as testing). It's important in both cases that we use fresh data that was not already used to train the model.To ensure we have data to use for evaluation, we'll set some aside before we begin training. We'll reserve 20% of our data for validation, and another 20% for testing. The remaining 60% will be used to train the model. This is a typical split used when training models.The following code will split our data and then plot each set as a different color:
###Code
# We'll use 60% of our data for training and 20% for testing. The remaining 20%
# will be used for validation. Calculate the indices of each section.
TRAIN_SPLIT = int(0.6 * SAMPLES)
TEST_SPLIT = int(0.2 * SAMPLES + TRAIN_SPLIT)
# Use np.split to chop our data into three parts.
# The second argument to np.split is an array of indices where the data will be
# split. We provide two indices, so the data will be divided into three chunks.
x_train, x_test, x_validate = np.split(x_values, [TRAIN_SPLIT, TEST_SPLIT])
y_train, y_test, y_validate = np.split(y_values, [TRAIN_SPLIT, TEST_SPLIT])
# Double check that our splits add up correctly
assert (x_train.size + x_validate.size + x_test.size) == SAMPLES
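
# Added sanity check on the individual split sizes (60% / 20% / 20%); note that
# np.split hands back the chunks in the order train, test, validate here
assert x_train.size == TRAIN_SPLIT               # 600 samples for training
assert x_test.size == TEST_SPLIT - TRAIN_SPLIT   # 200 samples for testing
assert x_validate.size == SAMPLES - TEST_SPLIT   # 200 samples for validation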
# Plot the data in each partition in different colors:
plt.plot(x_train, y_train, 'b.', label="Train")
plt.plot(x_test, y_test, 'r.', label="Test")
plt.plot(x_validate, y_validate, 'y.', label="Validate")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Design a modelWe're going to build a model that will take an input value (in this case, `x`) and use it to predict a numeric output value (the sine of `x`). This type of problem is called a _regression_.To achieve this, we're going to create a simple neural network. It will use _layers_ of _neurons_ to attempt to learn any patterns underlying the training data, so it can make predictions.To begin with, we'll define two layers. The first layer takes a single input (our `x` value) and runs it through 16 neurons. Based on this input, each neuron will become _activated_ to a certain degree based on its internal state (its _weight_ and _bias_ values). A neuron's degree of activation is expressed as a number.The activation numbers from our first layer will be fed as inputs to our second layer, which is a single neuron. It will apply its own weights and bias to these inputs and calculate its own activation, which will be output as our `y` value.**Note:** To learn more about how neural networks function, you can explore the [Learn TensorFlow](https://codelabs.developers.google.com/codelabs/tensorflow-lab1-helloworld) codelabs.The code in the following cell defines our model using [Keras](https://www.tensorflow.org/guide/keras), TensorFlow's high-level API for creating deep learning networks. Once the network is defined, we _compile_ it, specifying parameters that determine how it will be trained:
###Code
# We'll use Keras to create a simple model architecture
from tensorflow.keras import layers
model_1 = tf.keras.Sequential()
# First layer takes a scalar input and feeds it through 16 "neurons". The
# neurons decide whether to activate based on the 'relu' activation function.
model_1.add(layers.Dense(16, activation='relu', input_shape=(1,)))
# Final layer is a single neuron, since we want to output a single value
model_1.add(layers.Dense(1))
# Compile the model using a standard optimizer and loss function for regression
model_1.compile(optimizer='rmsprop', loss='mse', metrics=['mae'])
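
# Added sanity check: this network is tiny - the hidden layer has 1*16 + 16 = 32
# weights and biases, and the output layer has 16*1 + 1 = 17, so 49 parameters total
assert model_1.count_params() == 49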
###Output
_____no_output_____
###Markdown
Train the modelOnce we've defined the model, we can use our data to _train_ it. Training involves passing an `x` value into the neural network, checking how far the network's output deviates from the expected `y` value, and adjusting the neurons' weights and biases so that the output is more likely to be correct the next time.Training runs this process on the full dataset multiple times, and each full run-through is known as an _epoch_. The number of epochs to run during training is a parameter we can set.During each epoch, data is run through the network in multiple _batches_. Each batch, several pieces of data are passed into the network, producing output values. These outputs' correctness is measured in aggregate and the network's weights and biases are adjusted accordingly, once per batch. The _batch size_ is also a parameter we can set.The code in the following cell uses the `x` and `y` values from our training data to train the model. It runs for 1000 _epochs_, with 16 pieces of data in each _batch_. We also pass in some data to use for _validation_. As you will see when you run the cell, training can take a while to complete:
###Code
# Train the model on our training data while validating on our validation set
history_1 = model_1.fit(x_train, y_train, epochs=1000, batch_size=16,
validation_data=(x_validate, y_validate))
###Output
Train on 600 samples, validate on 200 samples
Epoch 1/1000
600/600 [==============================] - 0s 412us/sample - loss: 0.5016 - mae: 0.6297 - val_loss: 0.4922 - val_mae: 0.6235
Epoch 2/1000
600/600 [==============================] - 0s 105us/sample - loss: 0.3905 - mae: 0.5436 - val_loss: 0.4262 - val_mae: 0.5641
...
Epoch 998/1000
600/600 [==============================] - 0s 109us/sample - loss: 0.1535 - mae: 0.3068 - val_loss: 0.1507 - val_mae: 0.3113
Epoch 999/1000
600/600 [==============================] - 0s 100us/sample - loss: 0.1545 - mae: 0.3077 - val_loss: 0.1499 - val_mae: 0.3103
Epoch 1000/1000
600/600 [==============================] - 0s 132us/sample - loss: 0.1530 - mae: 0.3045 - val_loss: 0.1542 - val_mae: 0.3143
###Markdown
Check the training metricsDuring training, the model's performance is constantly being measured against both our training data and the validation data that we set aside earlier. Training produces a log of data that tells us how the model's performance changed over the course of the training process.The following cells will display some of that data in a graphical form:
###Code
# Draw a graph of the loss, which is the distance between
# the predicted and actual values during training and validation.
loss = history_1.history['loss']
val_loss = history_1.history['val_loss']
epochs = range(1, len(loss) + 1)
plt.plot(epochs, loss, 'g.', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Look closer at the dataThe graph shows the _loss_ (or the difference between the model's predictions and the actual data) for each epoch. There are several ways to calculate loss, and the method we have used is _mean squared error_. There is a distinct loss value given for the training and the validation data.As we can see, the amount of loss rapidly decreases over the first 25 epochs, before flattening out. This means that the model is improving and producing more accurate predictions!Our goal is to stop training when either the model is no longer improving, or when the _training loss_ is less than the _validation loss_, which would mean that the model has learned to predict the training data so well that it can no longer generalize to new data.To make the flatter part of the graph more readable, let's skip the first 50 epochs:
###Code
# Exclude the first few epochs so the graph is easier to read
SKIP = 50
plt.plot(epochs[SKIP:], loss[SKIP:], 'g.', label='Training loss')
plt.plot(epochs[SKIP:], val_loss[SKIP:], 'b.', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Further metricsFrom the plot, we can see that loss continues to reduce until around 600 epochs, at which point it is mostly stable. This means that there's no need to train our network beyond 600 epochs.However, we can also see that the lowest loss value is still around 0.155. This means that our network's predictions are off by an average of ~15%. In addition, the validation loss values jump around a lot, and is sometimes even higher.To gain more insight into our model's performance we can plot some more data. This time, we'll plot the _mean absolute error_, which is another way of measuring how far the network's predictions are from the actual numbers:
###Code
plt.clf()
# Draw a graph of mean absolute error, which is another way of
# measuring the amount of error in the prediction.
mae = history_1.history['mae']
val_mae = history_1.history['val_mae']
plt.plot(epochs[SKIP:], mae[SKIP:], 'g.', label='Training MAE')
plt.plot(epochs[SKIP:], val_mae[SKIP:], 'b.', label='Validation MAE')
plt.title('Training and validation mean absolute error')
plt.xlabel('Epochs')
plt.ylabel('MAE')
plt.legend()
plt.show()
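
# For reference, MAE can also be computed directly from predictions; this value
# should land close to the final val_mae logged above (roughly 0.31)
manual_val_mae = np.mean(np.abs(model_1.predict(x_validate).flatten() - y_validate))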
###Output
_____no_output_____
###Markdown
This graph of _mean absolute error_ tells another story. We can see that training data shows consistently lower error than validation data, which means that the network may have _overfit_, or learned the training data so rigidly that it can't make effective predictions about new data.In addition, the mean absolute error values are quite high, ~0.305 at best, which means some of the model's predictions are at least 30% off. A 30% error means we are very far from accurately modelling the sine wave function.To get more insight into what is happening, we can plot our network's predictions for the training data against the expected values:
###Code
# Use the model to make predictions from our validation data
predictions = model_1.predict(x_train)
# Plot the predictions along with to the test data
plt.clf()
plt.title('Training data predicted vs actual values')
plt.plot(x_test, y_test, 'b.', label='Actual')
plt.plot(x_train, predictions, 'r.', label='Predicted')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Oh dear! The graph makes it clear that our network has learned to approximate the sine function in a very limited way. From `0 <= x <= 1.1` the line mostly fits, but for the rest of our `x` values it is a rough approximation at best.The rigidity of this fit suggests that the model does not have enough capacity to learn the full complexity of the sine wave function, so it's only able to approximate it in an overly simplistic way. By making our model bigger, we should be able to improve its performance. Change our modelTo make our model bigger, let's add an additional layer of neurons. The following cell redefines our model in the same way as earlier, but with an additional layer of 16 neurons in the middle:
###Code
model_2 = tf.keras.Sequential()
# First layer takes a scalar input and feeds it through 16 "neurons". The
# neurons decide whether to activate based on the 'relu' activation function.
model_2.add(layers.Dense(16, activation='relu', input_shape=(1,)))
# The new second layer may help the network learn more complex representations
model_2.add(layers.Dense(16, activation='relu'))
# Final layer is a single neuron, since we want to output a single value
model_2.add(layers.Dense(1))
# Compile the model using a standard optimizer and loss function for regression
model_2.compile(optimizer='rmsprop', loss='mse', metrics=['mae'])
###Output
_____no_output_____
###Markdown
We'll now train the new model. To save time, we'll train for only 600 epochs:
###Code
history_2 = model_2.fit(x_train, y_train, epochs=600, batch_size=16,
validation_data=(x_validate, y_validate))
###Output
Train on 600 samples, validate on 200 samples
Epoch 1/600
600/600 [==============================] - 0s 422us/sample - loss: 0.5655 - mae: 0.6259 - val_loss: 0.4104 - val_mae: 0.5509
Epoch 2/600
600/600 [==============================] - 0s 111us/sample - loss: 0.3195 - mae: 0.4902 - val_loss: 0.3341 - val_mae: 0.4927
...
Epoch 598/600
600/600 [==============================] - 0s 116us/sample - loss: 0.0124 - mae: 0.0886 - val_loss: 0.0096 - val_mae: 0.0771
Epoch 599/600
600/600 [==============================] - 0s 130us/sample - loss: 0.0125 - mae: 0.0900 - val_loss: 0.0107 - val_mae: 0.0824
Epoch 600/600
600/600 [==============================] - 0s 109us/sample - loss: 0.0124 - mae: 0.0892 - val_loss: 0.0116 - val_mae: 0.0845
###Markdown
Evaluate our new modelEach training epoch, the model prints out its loss and mean absolute error for training and validation. You can read this in the output above (note that your exact numbers may differ): ```Epoch 600/600600/600 [==============================] - 0s 109us/sample - loss: 0.0124 - mae: 0.0892 - val_loss: 0.0116 - val_mae: 0.0845```You can see that we've already got a huge improvement - validation loss has dropped from 0.15 to 0.015, and validation MAE has dropped from 0.31 to 0.1.The following cell will print the same graphs we used to evaluate our original model, but showing our new training history:
###Code
# Draw a graph of the loss, which is the distance between
# the predicted and actual values during training and validation.
loss = history_2.history['loss']
val_loss = history_2.history['val_loss']
epochs = range(1, len(loss) + 1)
plt.plot(epochs, loss, 'g.', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
# Exclude the first few epochs so the graph is easier to read
SKIP = 100
plt.clf()
plt.plot(epochs[SKIP:], loss[SKIP:], 'g.', label='Training loss')
plt.plot(epochs[SKIP:], val_loss[SKIP:], 'b.', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf()
# Draw a graph of mean absolute error, which is another way of
# measuring the amount of error in the prediction.
mae = history_2.history['mae']
val_mae = history_2.history['val_mae']
plt.plot(epochs[SKIP:], mae[SKIP:], 'g.', label='Training MAE')
plt.plot(epochs[SKIP:], val_mae[SKIP:], 'b.', label='Validation MAE')
plt.title('Training and validation mean absolute error')
plt.xlabel('Epochs')
plt.ylabel('MAE')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Great results! From these graphs, we can see several exciting things:* Our network has reached its peak accuracy much more quickly (within 200 epochs instead of 600)* The overall loss and MAE are much better than our previous network* Metrics are better for validation than training, which means the network is not overfittingThe reason the metrics for validation are better than those for training is that validation metrics are calculated at the end of each epoch, while training metrics are calculated throughout the epoch, so validation happens on a model that has been trained slightly longer.This all means our network seems to be performing well! To confirm, let's check its predictions against the test dataset we set aside earlier:
###Code
# Calculate and print the loss on our test dataset
loss = model_2.evaluate(x_test, y_test)
# Make predictions based on our test dataset
predictions = model_2.predict(x_test)
# Graph the predictions against the actual values
plt.clf()
plt.title('Comparison of predictions and actual values')
plt.plot(x_test, y_test, 'b.', label='Actual')
plt.plot(x_test, predictions, 'r.', label='Predicted')
plt.legend()
plt.show()
###Output
200/200 [==============================] - 0s 146us/sample - loss: 0.0124 - mae: 0.0907
###Markdown
Much better! The evaluation metrics we printed show that the model has a low loss and MAE on the test data, and the predictions line up visually with our data fairly well.The model isn't perfect; its predictions don't form a smooth sine curve. For instance, the line is almost straight when `x` is between 4.2 and 5.2. If we wanted to go further, we could try further increasing the capacity of the model, perhaps using some techniques to defend from overfitting.However, an important part of machine learning is knowing when to quit, and this model is good enough for our use case - which is to make some LEDs blink in a pleasing pattern. Convert to TensorFlow LiteWe now have an acceptably accurate model in-memory. However, to use this with TensorFlow Lite for Microcontrollers, we'll need to convert it into the correct format and download it as a file. To do this, we'll use the [TensorFlow Lite Converter](https://www.tensorflow.org/lite/convert). The converter outputs a file in a special, space-efficient format for use on memory-constrained devices.Since this model is going to be deployed on a microcontroller, we want it to be as tiny as possible! One technique for reducing the size of models is called [quantization](https://www.tensorflow.org/lite/performance/post_training_quantization). It reduces the precision of the model's weights, which saves memory, often without much impact on accuracy. Quantized models also run faster, since the calculations required are simpler.The TensorFlow Lite Converter can apply quantization while it converts the model. In the following cell, we'll convert the model twice: once with quantization, once without:
###Code
# Convert the model to the TensorFlow Lite format without quantization
converter = tf.lite.TFLiteConverter.from_keras_model(model_2)
tflite_model = converter.convert()
# Save the model to disk
open("sine_model.tflite", "wb").write(tflite_model)
# Convert the model to the TensorFlow Lite format with quantization
converter = tf.lite.TFLiteConverter.from_keras_model(model_2)
converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
tflite_model = converter.convert()
# Save the model to disk
open("sine_model_quantized.tflite", "wb").write(tflite_model)
###Output
_____no_output_____
###Markdown
Test the converted modelsTo prove these models are still accurate after conversion and quantization, we'll use both of them to make predictions and compare these against our test results:
###Code
# Instantiate an interpreter for each model
sine_model = tf.lite.Interpreter('sine_model.tflite')
sine_model_quantized = tf.lite.Interpreter('sine_model_quantized.tflite')
# Allocate memory for each model
sine_model.allocate_tensors()
sine_model_quantized.allocate_tensors()
# Get the input and output tensors so we can feed in values and get the results
sine_model_input = sine_model.tensor(sine_model.get_input_details()[0]["index"])
sine_model_output = sine_model.tensor(sine_model.get_output_details()[0]["index"])
sine_model_quantized_input = sine_model_quantized.tensor(sine_model_quantized.get_input_details()[0]["index"])
sine_model_quantized_output = sine_model_quantized.tensor(sine_model_quantized.get_output_details()[0]["index"])
# Create arrays to store the results
sine_model_predictions = np.empty(x_test.size)
sine_model_quantized_predictions = np.empty(x_test.size)
# Run each model's interpreter for each value and store the results in arrays
for i in range(x_test.size):
sine_model_input().fill(x_test[i])
sine_model.invoke()
sine_model_predictions[i] = sine_model_output()[0]
sine_model_quantized_input().fill(x_test[i])
sine_model_quantized.invoke()
sine_model_quantized_predictions[i] = sine_model_quantized_output()[0]
# See how they line up with the data
plt.clf()
plt.title('Comparison of various models against actual values')
plt.plot(x_test, y_test, 'bo', label='Actual')
plt.plot(x_test, predictions, 'ro', label='Original predictions')
plt.plot(x_test, sine_model_predictions, 'bx', label='Lite predictions')
plt.plot(x_test, sine_model_quantized_predictions, 'gx', label='Lite quantized predictions')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
We can see from the graph that the predictions for the original model, the converted model, and the quantized model are all close enough to be indistinguishable. This means that our quantized model is ready to use!We can print the difference in file size:
###Code
import os
basic_model_size = os.path.getsize("sine_model.tflite")
print("Basic model is %d bytes" % basic_model_size)
quantized_model_size = os.path.getsize("sine_model_quantized.tflite")
print("Quantized model is %d bytes" % quantized_model_size)
difference = basic_model_size - quantized_model_size
print("Difference is %d bytes" % difference)
###Output
Basic model is 2656 bytes
Quantized model is 2640 bytes
Difference is 16 bytes
###Markdown
Our quantized model is only 16 bytes smaller than the original version, which is only a tiny reduction in size! At around 2.6 kilobytes, this model is already so small that the weights make up only a small fraction of the overall size, meaning quantization has little effect.More complex models have many more weights, meaning the space saving from quantization will be much higher, approaching 4x for most sophisticated models.Regardless, our quantized model will take less time to execute than the original version, which is important on a tiny microcontroller! Write to a C fileThe final step in preparing our model for use with TensorFlow Lite for Microcontrollers is to convert it into a C source file. You can see an example of this format in [`hello_world/sine_model_data.cc`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/micro/examples/hello_world/sine_model_data.cc).To do so, we can use a command line utility named [`xxd`](https://linux.die.net/man/1/xxd). The following cell runs `xxd` on our quantized model and prints the output:
###Code
# Install xxd if it is not available
!apt-get -qq install xxd
# Save the file as a C source file
!xxd -i sine_model_quantized.tflite > sine_model_quantized.cc
# Print the source file
!cat sine_model_quantized.cc
###Output
unsigned char sine_model_quantized_tflite[] = {
0x18, 0x00, 0x00, 0x00, 0x54, 0x46, 0x4c, 0x33, 0x00, 0x00, 0x0e, 0x00,
0x18, 0x00, 0x04, 0x00, 0x08, 0x00, 0x0c, 0x00, 0x10, 0x00, 0x14, 0x00,
0x0e, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x10, 0x0a, 0x00, 0x00,
0xb8, 0x05, 0x00, 0x00, 0xa0, 0x05, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00,
0x0b, 0x00, 0x00, 0x00, 0x90, 0x05, 0x00, 0x00, 0x7c, 0x05, 0x00, 0x00,
0x24, 0x05, 0x00, 0x00, 0xd4, 0x04, 0x00, 0x00, 0xc4, 0x00, 0x00, 0x00,
0x74, 0x00, 0x00, 0x00, 0x24, 0x00, 0x00, 0x00, 0x1c, 0x00, 0x00, 0x00,
0x14, 0x00, 0x00, 0x00, 0x0c, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00,
0x54, 0xf6, 0xff, 0xff, 0x58, 0xf6, 0xff, 0xff, 0x5c, 0xf6, 0xff, 0xff,
0x60, 0xf6, 0xff, 0xff, 0xc2, 0xfa, 0xff, 0xff, 0x04, 0x00, 0x00, 0x00,
0x40, 0x00, 0x00, 0x00, 0x7c, 0x19, 0xa7, 0x3e, 0x99, 0x81, 0xb9, 0x3e,
0x56, 0x8b, 0x9f, 0x3e, 0x88, 0xd8, 0x12, 0xbf, 0x74, 0x10, 0x56, 0x3e,
0xfe, 0xc6, 0xdf, 0xbe, 0xf2, 0x10, 0x5a, 0xbe, 0xf0, 0xe2, 0x0a, 0xbe,
0x10, 0x5a, 0x98, 0xbe, 0xb9, 0x36, 0xce, 0x3d, 0x8f, 0x7f, 0x87, 0x3e,
0x2c, 0xb1, 0xfd, 0xbd, 0xe6, 0xa6, 0x8a, 0xbe, 0xa5, 0x3e, 0xda, 0x3e,
0x50, 0x34, 0xed, 0xbd, 0x90, 0x91, 0x69, 0xbe, 0x0e, 0xfb, 0xff, 0xff,
0x04, 0x00, 0x00, 0x00, 0x40, 0x00, 0x00, 0x00, 0x67, 0x41, 0x48, 0xbf,
0x24, 0xcd, 0xa0, 0xbe, 0xb7, 0x92, 0x0c, 0xbf, 0x00, 0x00, 0x00, 0x00,
0x98, 0xfe, 0x3c, 0x3f, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x4a, 0x17, 0x9a, 0xbe,
0x41, 0xcb, 0xb6, 0xbe, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x13, 0xd6, 0x1e, 0x3e, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x5a, 0xfb, 0xff, 0xff, 0x04, 0x00, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00,
0x4b, 0x98, 0xdd, 0xbd, 0x40, 0x6b, 0xcb, 0xbe, 0x36, 0x0c, 0xd4, 0x3c,
0xbd, 0x44, 0xb5, 0x3e, 0x95, 0x70, 0xe3, 0x3e, 0xe7, 0xac, 0x86, 0x3e,
0x00, 0xc4, 0x4e, 0x3d, 0x7e, 0xa6, 0x1d, 0x3e, 0xbd, 0x87, 0xbb, 0x3e,
0xb4, 0xb8, 0x09, 0xbf, 0xa1, 0x1f, 0xf8, 0xbe, 0x8d, 0x90, 0xdd, 0x3e,
0xde, 0xfa, 0x6f, 0xbe, 0xb2, 0x75, 0xe4, 0x3d, 0x6e, 0xfe, 0x36, 0x3e,
0x20, 0x18, 0xc2, 0xbe, 0x39, 0xc7, 0xfb, 0xbe, 0xfe, 0xa4, 0x30, 0xbe,
0xf7, 0x91, 0xde, 0xbe, 0xde, 0xab, 0x24, 0x3e, 0xfb, 0xbb, 0xce, 0x3e,
0xeb, 0x23, 0x80, 0xbe, 0x7b, 0x58, 0x73, 0xbe, 0x9a, 0x2e, 0x03, 0x3e,
0x10, 0x42, 0xa9, 0xbc, 0x10, 0x12, 0x64, 0xbd, 0xe3, 0x8d, 0x0c, 0x3d,
0x9e, 0x48, 0x97, 0xbe, 0x34, 0x51, 0xd4, 0xbe, 0x02, 0x3b, 0x0d, 0x3e,
0x62, 0x67, 0x89, 0xbe, 0x74, 0xdf, 0xa2, 0x3d, 0xf3, 0x25, 0xb3, 0xbe,
0xef, 0x34, 0x7b, 0x3d, 0x61, 0x70, 0xe3, 0x3d, 0xba, 0x76, 0xc0, 0xbe,
0x7d, 0xe9, 0xa7, 0x3e, 0xc3, 0xab, 0xd0, 0xbe, 0xcf, 0x7c, 0xdb, 0xbe,
0x70, 0x27, 0x9a, 0xbe, 0x98, 0xf5, 0x3c, 0xbd, 0xff, 0x4b, 0x4b, 0x3e,
0x7e, 0xa0, 0xf8, 0xbd, 0xd4, 0x6e, 0x86, 0x3d, 0x00, 0x4a, 0x07, 0x3a,
0x4c, 0x24, 0x61, 0xbe, 0x54, 0x68, 0xf7, 0xbd, 0x02, 0x3f, 0x77, 0xbe,
0x23, 0x79, 0xb3, 0x3e, 0x1c, 0x83, 0xad, 0xbd, 0xc8, 0x92, 0x8d, 0x3e,
0xa8, 0xf3, 0x15, 0xbd, 0xe6, 0x4d, 0x6c, 0x3d, 0xac, 0xe7, 0x98, 0xbe,
0x81, 0xec, 0xbd, 0x3e, 0xe2, 0x55, 0x73, 0x3e, 0xc1, 0x77, 0xc7, 0x3e,
0x6e, 0x1b, 0x5e, 0x3d, 0x27, 0x78, 0x02, 0x3f, 0xd4, 0x21, 0x90, 0x3d,
0x52, 0xdc, 0x1f, 0x3e, 0xbf, 0xda, 0x88, 0x3e, 0x80, 0x79, 0xe3, 0xbd,
0x40, 0x6f, 0x10, 0xbe, 0x20, 0x43, 0x2e, 0xbd, 0xf0, 0x76, 0xc5, 0xbd,
0xcc, 0xa0, 0x04, 0xbe, 0xf0, 0x69, 0xd7, 0xbe, 0xb1, 0xfe, 0x64, 0xbe,
0x20, 0x41, 0x84, 0xbe, 0xb2, 0xc3, 0x26, 0xbe, 0xd8, 0xf4, 0x09, 0xbe,
0x64, 0x44, 0xd1, 0x3d, 0xd5, 0xe1, 0xc8, 0xbe, 0x35, 0xbc, 0x3f, 0xbe,
0xc0, 0x94, 0x82, 0x3d, 0xdc, 0x2b, 0xb1, 0xbd, 0x02, 0xdb, 0xbf, 0xbe,
0xa5, 0x7f, 0x8a, 0x3e, 0x21, 0xb4, 0xa2, 0x3e, 0xcd, 0x86, 0x56, 0xbf,
0x9c, 0x3b, 0x76, 0xbc, 0x85, 0x6d, 0x60, 0xbf, 0x86, 0x00, 0x3c, 0xbe,
0xc1, 0x23, 0x7e, 0x3e, 0x96, 0xcd, 0x3f, 0x3e, 0x86, 0x91, 0x2d, 0x3e,
0x55, 0xef, 0x87, 0x3e, 0x7e, 0x97, 0x03, 0xbe, 0x2a, 0xcd, 0x01, 0x3e,
0x32, 0xc9, 0x8e, 0xbe, 0x72, 0x77, 0x3b, 0xbe, 0xe0, 0xa1, 0xbc, 0xbe,
0x8d, 0xb7, 0xa7, 0x3e, 0x1c, 0x05, 0x95, 0xbe, 0xf7, 0x1f, 0xbb, 0x3e,
0xc9, 0x3e, 0xd6, 0x3e, 0x80, 0x42, 0xe9, 0xbd, 0x27, 0x0c, 0xd2, 0xbe,
0x5c, 0x32, 0x34, 0xbe, 0x14, 0xcb, 0xca, 0xbd, 0xdd, 0x3a, 0x67, 0xbe,
0x1c, 0xbb, 0x8d, 0xbe, 0x91, 0xac, 0x5c, 0xbe, 0x52, 0x40, 0x6f, 0xbe,
0xd7, 0x71, 0x94, 0x3e, 0x18, 0x71, 0x09, 0xbe, 0x9b, 0x29, 0xd9, 0xbe,
0x7d, 0x66, 0xd2, 0xbe, 0x98, 0xd6, 0xb2, 0xbe, 0x00, 0xc9, 0x84, 0x3a,
0xbc, 0xda, 0xc2, 0xbd, 0x1d, 0xc2, 0x1b, 0xbf, 0xd4, 0xdd, 0x92, 0x3e,
0x07, 0x87, 0x6c, 0xbe, 0x40, 0xc2, 0x3b, 0xbe, 0xbd, 0xe2, 0x9c, 0x3e,
0x0a, 0xb5, 0xa0, 0xbe, 0xe2, 0xd5, 0x9c, 0xbe, 0x3e, 0xbb, 0x7c, 0x3e,
0x17, 0xb4, 0xcf, 0x3e, 0xd5, 0x8e, 0xc8, 0xbe, 0x7c, 0xf9, 0x5c, 0x3e,
0x80, 0xfc, 0x0d, 0x3d, 0xc5, 0xd5, 0x8b, 0x3e, 0xf5, 0x17, 0xa2, 0x3e,
0xc7, 0x60, 0x89, 0xbe, 0xec, 0x95, 0x87, 0x3d, 0x7a, 0xc2, 0x5d, 0xbf,
0x77, 0x94, 0x98, 0x3e, 0x77, 0x39, 0x07, 0xbc, 0x42, 0x29, 0x00, 0x3e,
0xaf, 0xd0, 0xa9, 0x3e, 0x31, 0x23, 0xc4, 0xbe, 0x95, 0x36, 0x5b, 0xbe,
0xc7, 0xdc, 0x83, 0xbe, 0x1e, 0x6b, 0x47, 0x3e, 0x5b, 0x24, 0x99, 0x3e,
0x99, 0x27, 0x54, 0x3e, 0xc8, 0x20, 0xdd, 0xbd, 0x5a, 0x86, 0x2f, 0x3e,
0x80, 0xf0, 0x69, 0xbe, 0x44, 0xfc, 0x84, 0xbd, 0x82, 0xa0, 0x2a, 0xbe,
0x87, 0xe6, 0x2a, 0x3e, 0xd8, 0x34, 0xae, 0x3d, 0x50, 0xbd, 0xb5, 0x3e,
0xc4, 0x8c, 0x88, 0xbe, 0xe3, 0xbc, 0xa5, 0x3e, 0xa9, 0xda, 0x9e, 0x3e,
0x3e, 0xb8, 0x23, 0xbe, 0x80, 0x90, 0x15, 0x3d, 0x97, 0x3f, 0xc3, 0x3e,
0xca, 0x5c, 0x9d, 0x3e, 0x21, 0xe8, 0xe1, 0x3e, 0xc0, 0x49, 0x01, 0xbc,
0x00, 0x0b, 0x88, 0xbd, 0x3f, 0xf7, 0xca, 0x3c, 0xfb, 0x5a, 0xb1, 0x3e,
0x60, 0xd2, 0x0d, 0x3c, 0xce, 0x23, 0x78, 0xbf, 0x8f, 0x4f, 0xb9, 0xbe,
0x69, 0x6a, 0x34, 0xbf, 0x4b, 0x5e, 0xa9, 0x3e, 0x64, 0x8c, 0xd9, 0x3e,
0x52, 0x77, 0x36, 0x3e, 0xeb, 0xaf, 0xbe, 0x3e, 0x40, 0xbe, 0x36, 0x3c,
0x08, 0x65, 0x3b, 0xbd, 0x55, 0xe0, 0x66, 0xbd, 0xd2, 0xe8, 0x9b, 0xbe,
0x86, 0xe3, 0x09, 0xbe, 0x93, 0x3d, 0xdd, 0x3e, 0x0f, 0x66, 0x18, 0x3f,
0x18, 0x05, 0x33, 0xbd, 0xde, 0x15, 0xd7, 0xbe, 0xaa, 0xcf, 0x49, 0xbe,
0xa2, 0xa5, 0x64, 0x3e, 0xe6, 0x9c, 0x42, 0xbe, 0x54, 0x42, 0xcc, 0x3d,
0xa0, 0xbd, 0x9d, 0xbe, 0xc2, 0x69, 0x48, 0x3e, 0x5b, 0x8b, 0xa2, 0xbe,
0xc0, 0x13, 0x87, 0x3d, 0x36, 0xfd, 0x69, 0x3e, 0x05, 0x86, 0x40, 0xbe,
0x1e, 0x7a, 0xce, 0xbe, 0x46, 0x13, 0xa7, 0xbe, 0x68, 0x52, 0x86, 0xbe,
0x04, 0x9e, 0x86, 0xbd, 0x8c, 0x54, 0xc1, 0x3d, 0xe0, 0x3b, 0xad, 0x3c,
0x42, 0x67, 0x85, 0xbd, 0xea, 0x97, 0x42, 0x3e, 0x6e, 0x13, 0x3b, 0xbf,
0x56, 0x5b, 0x16, 0x3e, 0xaa, 0xab, 0xdf, 0x3e, 0xc8, 0x41, 0x36, 0x3d,
0x24, 0x2d, 0x47, 0xbe, 0x77, 0xa5, 0xae, 0x3e, 0xc0, 0xc2, 0x5b, 0x3c,
0xac, 0xac, 0x4e, 0x3e, 0x99, 0xec, 0x13, 0xbe, 0xf2, 0xab, 0x73, 0x3e,
0xaa, 0xa1, 0x48, 0xbe, 0xe8, 0xd3, 0x01, 0xbe, 0x60, 0xb7, 0xc7, 0xbd,
0x64, 0x72, 0xd3, 0x3d, 0x83, 0xd3, 0x99, 0x3e, 0x0c, 0x76, 0x34, 0xbe,
0x42, 0xda, 0x0d, 0x3e, 0xfb, 0x47, 0x9a, 0x3e, 0x8b, 0xdc, 0x92, 0xbe,
0x56, 0x7f, 0x6b, 0x3e, 0x04, 0xd4, 0x88, 0xbd, 0x11, 0x9e, 0x80, 0x3e,
0x3c, 0x89, 0xff, 0x3d, 0xb3, 0x3e, 0x88, 0x3e, 0xf7, 0xf0, 0x88, 0x3e,
0x28, 0xfb, 0xc9, 0xbe, 0x53, 0x3e, 0xcf, 0x3e, 0xac, 0x75, 0xdc, 0xbe,
0xdd, 0xca, 0xd7, 0x3e, 0x01, 0x58, 0xa7, 0x3e, 0x29, 0xb8, 0x13, 0xbf,
0x76, 0x81, 0x12, 0xbc, 0x28, 0x8b, 0x16, 0xbf, 0x0e, 0xec, 0x0e, 0x3e,
0x40, 0x0a, 0xdb, 0xbd, 0x98, 0xec, 0xbf, 0xbd, 0x32, 0x55, 0x0c, 0xbe,
0xfb, 0xf9, 0xc9, 0x3e, 0x83, 0x4a, 0x6d, 0xbe, 0x76, 0x59, 0xe2, 0xbe,
0x54, 0x7d, 0x9f, 0xbb, 0x9d, 0xe8, 0x95, 0x3e, 0x5c, 0xd3, 0xd0, 0x3d,
0x19, 0x8a, 0xb0, 0x3e, 0xde, 0x6f, 0x2e, 0xbe, 0xd0, 0x16, 0x83, 0x3d,
0x9c, 0x7d, 0x11, 0xbf, 0x2b, 0xcc, 0x25, 0x3c, 0x2a, 0xa5, 0x27, 0xbe,
0x22, 0x14, 0xc7, 0xbe, 0x5e, 0x7a, 0xac, 0x3e, 0x4e, 0x41, 0x94, 0xbe,
0x5a, 0x68, 0x7b, 0x3e, 0x86, 0xfd, 0x4e, 0x3e, 0xa2, 0x56, 0x6a, 0xbe,
0xca, 0xfe, 0x81, 0xbe, 0x43, 0xc3, 0xb1, 0xbd, 0xc5, 0xb8, 0xa7, 0x3e,
0x55, 0x23, 0xcd, 0x3e, 0xaf, 0x2e, 0x76, 0x3e, 0x69, 0xa8, 0x90, 0xbe,
0x0d, 0xba, 0xb9, 0x3e, 0x66, 0xff, 0xff, 0xff, 0x04, 0x00, 0x00, 0x00,
0x40, 0x00, 0x00, 0x00, 0x53, 0xd6, 0xe2, 0x3d, 0x66, 0xb6, 0xcc, 0x3e,
0x03, 0xe7, 0xf6, 0x3e, 0xe0, 0x28, 0x10, 0xbf, 0x00, 0x00, 0x00, 0x00,
0x3e, 0x3d, 0xb0, 0x3e, 0x00, 0x00, 0x00, 0x00, 0x62, 0xf0, 0x77, 0x3e,
0xa6, 0x9d, 0xa4, 0x3e, 0x3a, 0x4b, 0xf3, 0xbe, 0x71, 0x9e, 0xa7, 0x3e,
0x00, 0x00, 0x00, 0x00, 0x34, 0x39, 0xa2, 0x3e, 0x00, 0x00, 0x00, 0x00,
0xcc, 0x9c, 0x4a, 0x3e, 0xab, 0x40, 0xa3, 0x3e, 0xb2, 0xff, 0xff, 0xff,
0x04, 0x00, 0x00, 0x00, 0x40, 0x00, 0x00, 0x00, 0xb3, 0x71, 0x67, 0x3f,
0x9a, 0x7a, 0x95, 0xbf, 0xe1, 0x48, 0xe8, 0xbe, 0x8a, 0x72, 0x96, 0x3e,
0x00, 0xd2, 0xd3, 0xbb, 0x1a, 0xc5, 0xd7, 0x3f, 0xac, 0x7e, 0xc8, 0xbe,
0x90, 0xa7, 0x95, 0xbe, 0x3b, 0xd7, 0xdc, 0xbe, 0x41, 0xa8, 0x16, 0x3f,
0x50, 0x5b, 0xcb, 0x3f, 0x52, 0xb9, 0xed, 0xbe, 0x2e, 0xa7, 0xc6, 0xbe,
0xaf, 0x0f, 0x14, 0xbf, 0xb3, 0xda, 0x59, 0x3f, 0x02, 0xec, 0xd7, 0xbe,
0x00, 0x00, 0x06, 0x00, 0x08, 0x00, 0x04, 0x00, 0x06, 0x00, 0x00, 0x00,
0x04, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0x66, 0x11, 0x1f, 0xbf,
0xb8, 0xfb, 0xff, 0xff, 0x0f, 0x00, 0x00, 0x00, 0x54, 0x4f, 0x43, 0x4f,
0x20, 0x43, 0x6f, 0x6e, 0x76, 0x65, 0x72, 0x74, 0x65, 0x64, 0x2e, 0x00,
0x01, 0x00, 0x00, 0x00, 0x10, 0x00, 0x00, 0x00, 0x0c, 0x00, 0x14, 0x00,
0x04, 0x00, 0x08, 0x00, 0x0c, 0x00, 0x10, 0x00, 0x0c, 0x00, 0x00, 0x00,
0xf0, 0x00, 0x00, 0x00, 0xe4, 0x00, 0x00, 0x00, 0xd8, 0x00, 0x00, 0x00,
0x04, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x90, 0x00, 0x00, 0x00,
0x48, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0xce, 0xff, 0xff, 0xff,
0x00, 0x00, 0x00, 0x08, 0x18, 0x00, 0x00, 0x00, 0x0c, 0x00, 0x00, 0x00,
0x04, 0x00, 0x00, 0x00, 0x1c, 0xfc, 0xff, 0xff, 0x01, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00,
0x08, 0x00, 0x00, 0x00, 0x09, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0e, 0x00,
0x14, 0x00, 0x00, 0x00, 0x08, 0x00, 0x0c, 0x00, 0x07, 0x00, 0x10, 0x00,
0x0e, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x08, 0x1c, 0x00, 0x00, 0x00,
0x10, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0xba, 0xff, 0xff, 0xff,
0x00, 0x00, 0x00, 0x01, 0x01, 0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00,
0x03, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0x05, 0x00, 0x00, 0x00,
0x06, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0e, 0x00, 0x16, 0x00, 0x00, 0x00,
0x08, 0x00, 0x0c, 0x00, 0x07, 0x00, 0x10, 0x00, 0x0e, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x08, 0x24, 0x00, 0x00, 0x00, 0x18, 0x00, 0x00, 0x00,
0x0c, 0x00, 0x00, 0x00, 0x00, 0x00, 0x06, 0x00, 0x08, 0x00, 0x07, 0x00,
0x06, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x01, 0x00, 0x00, 0x00,
0x04, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00,
0x02, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00,
0x0a, 0x00, 0x00, 0x00, 0x10, 0x03, 0x00, 0x00, 0xa4, 0x02, 0x00, 0x00,
0x40, 0x02, 0x00, 0x00, 0xf4, 0x01, 0x00, 0x00, 0xac, 0x01, 0x00, 0x00,
0x48, 0x01, 0x00, 0x00, 0xfc, 0x00, 0x00, 0x00, 0xb4, 0x00, 0x00, 0x00,
0x50, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0x26, 0xfd, 0xff, 0xff,
0x3c, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x0c, 0x00, 0x00, 0x00,
0x04, 0x00, 0x00, 0x00, 0x18, 0xfd, 0xff, 0xff, 0x20, 0x00, 0x00, 0x00,
0x73, 0x65, 0x71, 0x75, 0x65, 0x6e, 0x74, 0x69, 0x61, 0x6c, 0x5f, 0x31,
0x2f, 0x64, 0x65, 0x6e, 0x73, 0x65, 0x5f, 0x34, 0x2f, 0x4d, 0x61, 0x74,
0x4d, 0x75, 0x6c, 0x5f, 0x62, 0x69, 0x61, 0x73, 0x00, 0x00, 0x00, 0x00,
0x01, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x6e, 0xfd, 0xff, 0xff,
0x50, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x0c, 0x00, 0x00, 0x00,
0x04, 0x00, 0x00, 0x00, 0x60, 0xfd, 0xff, 0xff, 0x34, 0x00, 0x00, 0x00,
0x73, 0x65, 0x71, 0x75, 0x65, 0x6e, 0x74, 0x69, 0x61, 0x6c, 0x5f, 0x31,
0x2f, 0x64, 0x65, 0x6e, 0x73, 0x65, 0x5f, 0x34, 0x2f, 0x4d, 0x61, 0x74,
0x4d, 0x75, 0x6c, 0x2f, 0x52, 0x65, 0x61, 0x64, 0x56, 0x61, 0x72, 0x69,
0x61, 0x62, 0x6c, 0x65, 0x4f, 0x70, 0x2f, 0x74, 0x72, 0x61, 0x6e, 0x73,
0x70, 0x6f, 0x73, 0x65, 0x00, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00,
0x01, 0x00, 0x00, 0x00, 0x10, 0x00, 0x00, 0x00, 0xce, 0xfd, 0xff, 0xff,
0x34, 0x00, 0x00, 0x00, 0x08, 0x00, 0x00, 0x00, 0x0c, 0x00, 0x00, 0x00,
0x04, 0x00, 0x00, 0x00, 0xc0, 0xfd, 0xff, 0xff, 0x19, 0x00, 0x00, 0x00,
0x73, 0x65, 0x71, 0x75, 0x65, 0x6e, 0x74, 0x69, 0x61, 0x6c, 0x5f, 0x31,
0x2f, 0x64, 0x65, 0x6e, 0x73, 0x65, 0x5f, 0x33, 0x2f, 0x52, 0x65, 0x6c,
0x75, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00,
0x10, 0x00, 0x00, 0x00, 0x12, 0xfe, 0xff, 0xff, 0x3c, 0x00, 0x00, 0x00,
0x03, 0x00, 0x00, 0x00, 0x0c, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00,
0x04, 0xfe, 0xff, 0xff, 0x20, 0x00, 0x00, 0x00, 0x73, 0x65, 0x71, 0x75,
0x65, 0x6e, 0x74, 0x69, 0x61, 0x6c, 0x5f, 0x31, 0x2f, 0x64, 0x65, 0x6e,
0x73, 0x65, 0x5f, 0x33, 0x2f, 0x4d, 0x61, 0x74, 0x4d, 0x75, 0x6c, 0x5f,
0x62, 0x69, 0x61, 0x73, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00,
0x10, 0x00, 0x00, 0x00, 0x5a, 0xfe, 0xff, 0xff, 0x50, 0x00, 0x00, 0x00,
0x04, 0x00, 0x00, 0x00, 0x0c, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00,
0x4c, 0xfe, 0xff, 0xff, 0x34, 0x00, 0x00, 0x00, 0x73, 0x65, 0x71, 0x75,
0x65, 0x6e, 0x74, 0x69, 0x61, 0x6c, 0x5f, 0x31, 0x2f, 0x64, 0x65, 0x6e,
0x73, 0x65, 0x5f, 0x33, 0x2f, 0x4d, 0x61, 0x74, 0x4d, 0x75, 0x6c, 0x2f,
0x52, 0x65, 0x61, 0x64, 0x56, 0x61, 0x72, 0x69, 0x61, 0x62, 0x6c, 0x65,
0x4f, 0x70, 0x2f, 0x74, 0x72, 0x61, 0x6e, 0x73, 0x70, 0x6f, 0x73, 0x65,
0x00, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x10, 0x00, 0x00, 0x00,
0x10, 0x00, 0x00, 0x00, 0xba, 0xfe, 0xff, 0xff, 0x34, 0x00, 0x00, 0x00,
0x0a, 0x00, 0x00, 0x00, 0x0c, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00,
0xac, 0xfe, 0xff, 0xff, 0x19, 0x00, 0x00, 0x00, 0x73, 0x65, 0x71, 0x75,
0x65, 0x6e, 0x74, 0x69, 0x61, 0x6c, 0x5f, 0x31, 0x2f, 0x64, 0x65, 0x6e,
0x73, 0x65, 0x5f, 0x32, 0x2f, 0x52, 0x65, 0x6c, 0x75, 0x00, 0x00, 0x00,
0x02, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x10, 0x00, 0x00, 0x00,
0xfe, 0xfe, 0xff, 0xff, 0x3c, 0x00, 0x00, 0x00, 0x05, 0x00, 0x00, 0x00,
0x0c, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0xf0, 0xfe, 0xff, 0xff,
0x20, 0x00, 0x00, 0x00, 0x73, 0x65, 0x71, 0x75, 0x65, 0x6e, 0x74, 0x69,
0x61, 0x6c, 0x5f, 0x31, 0x2f, 0x64, 0x65, 0x6e, 0x73, 0x65, 0x5f, 0x32,
0x2f, 0x4d, 0x61, 0x74, 0x4d, 0x75, 0x6c, 0x5f, 0x62, 0x69, 0x61, 0x73,
0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x10, 0x00, 0x00, 0x00,
0x46, 0xff, 0xff, 0xff, 0x50, 0x00, 0x00, 0x00, 0x06, 0x00, 0x00, 0x00,
0x0c, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0x38, 0xff, 0xff, 0xff,
0x34, 0x00, 0x00, 0x00, 0x73, 0x65, 0x71, 0x75, 0x65, 0x6e, 0x74, 0x69,
0x61, 0x6c, 0x5f, 0x31, 0x2f, 0x64, 0x65, 0x6e, 0x73, 0x65, 0x5f, 0x32,
0x2f, 0x4d, 0x61, 0x74, 0x4d, 0x75, 0x6c, 0x2f, 0x52, 0x65, 0x61, 0x64,
0x56, 0x61, 0x72, 0x69, 0x61, 0x62, 0x6c, 0x65, 0x4f, 0x70, 0x2f, 0x74,
0x72, 0x61, 0x6e, 0x73, 0x70, 0x6f, 0x73, 0x65, 0x00, 0x00, 0x00, 0x00,
0x02, 0x00, 0x00, 0x00, 0x10, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00,
0xa6, 0xff, 0xff, 0xff, 0x48, 0x00, 0x00, 0x00, 0x09, 0x00, 0x00, 0x00,
0x2c, 0x00, 0x00, 0x00, 0x0c, 0x00, 0x00, 0x00, 0x08, 0x00, 0x0c, 0x00,
0x04, 0x00, 0x08, 0x00, 0x08, 0x00, 0x00, 0x00, 0x10, 0x00, 0x00, 0x00,
0x04, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x7f, 0x43,
0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0d, 0x00, 0x00, 0x00,
0x64, 0x65, 0x6e, 0x73, 0x65, 0x5f, 0x32, 0x5f, 0x69, 0x6e, 0x70, 0x75,
0x74, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00,
0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0e, 0x00, 0x14, 0x00, 0x04, 0x00,
0x00, 0x00, 0x08, 0x00, 0x0c, 0x00, 0x10, 0x00, 0x0e, 0x00, 0x00, 0x00,
0x28, 0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x10, 0x00, 0x00, 0x00,
0x08, 0x00, 0x00, 0x00, 0x04, 0x00, 0x04, 0x00, 0x04, 0x00, 0x00, 0x00,
0x08, 0x00, 0x00, 0x00, 0x49, 0x64, 0x65, 0x6e, 0x74, 0x69, 0x74, 0x79,
0x00, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00,
0x01, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x10, 0x00, 0x00, 0x00,
0x00, 0x00, 0x0a, 0x00, 0x0c, 0x00, 0x07, 0x00, 0x00, 0x00, 0x08, 0x00,
0x0a, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x09, 0x03, 0x00, 0x00, 0x00
};
unsigned int sine_model_quantized_tflite_len = 2640;
###Markdown
**Copyright 2019 The TensorFlow Authors.**
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Create and convert a TensorFlow modelThis notebook is designed to demonstrate the process of creating a TensorFlow model and converting it to use with TensorFlow Lite. The model created in this notebook is used in the [hello_world](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/micro/examples/hello_world) sample for [TensorFlow Lite for Microcontrollers](https://www.tensorflow.org/lite/microcontrollers/overview). Run in Google Colab View source on GitHub Import dependenciesOur first task is to import the dependencies we need. Run the following cell to do so:
###Code
# TensorFlow is an open source machine learning library
# Note: The following line is temporary to use v2
!pip install tensorflow==2.0.0-beta0
import tensorflow as tf
# Numpy is a math library
import numpy as np
# Matplotlib is a graphing library
import matplotlib.pyplot as plt
# math is Python's math library
import math
###Output
_____no_output_____
###Markdown
Generate dataDeep learning networks learn to model patterns in underlying data. In this notebook, we're going to train a network to model data generated by a [sine](https://en.wikipedia.org/wiki/Sine) function. This will result in a model that can take a value, `x`, and predict its sine, `y`.In a real world application, if you needed the sine of `x`, you could just calculate it directly. However, by training a model to do this, we can demonstrate the basic principles of machine learning.In the [hello_world](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/micro/examples/hello_world) sample for [TensorFlow Lite for Microcontrollers](https://www.tensorflow.org/lite/microcontrollers/overview), we'll use this model to control LEDs that light up in a sequence.The code in the following cell will generate a set of random `x` values, calculate their sine values, and display them on a graph:
###Code
# We'll generate this many sample datapoints
SAMPLES = 1000
# Set a "seed" value, so we get the same random numbers each time we run this
# notebook
np.random.seed(1337)
# Generate a uniformly distributed set of random numbers in the range from
# 0 to 2π, which covers a complete sine wave oscillation
x_values = np.random.uniform(low=0, high=2*math.pi, size=SAMPLES)
# Shuffle the values to guarantee they're not in order
np.random.shuffle(x_values)
# Calculate the corresponding sine values
y_values = np.sin(x_values)
# Plot our data. The 'b.' argument tells the library to print blue dots.
plt.plot(x_values, y_values, 'b.')
plt.show()
###Output
_____no_output_____
###Markdown
Add some noiseSince it was generated directly by the sine function, our data fits a nice, smooth curve.However, machine learning models are good at extracting underlying meaning from messy, real world data. To demonstrate this, we can add some noise to our data to approximate something more life-like.In the following cell, we'll add some random noise to each value, then draw a new graph:
###Code
# Add a small random number to each y value
y_values += 0.1 * np.random.randn(*y_values.shape)
# Plot our data
plt.plot(x_values, y_values, 'b.')
plt.show()
###Output
_____no_output_____
###Markdown
Split our dataWe now have a noisy dataset that approximates real world data. We'll be using this to train our model.To evaluate the accuracy of the model we train, we'll need to compare its predictions to real data and check how well they match up. This evaluation happens during training (where it is referred to as validation) and after training (referred to as testing). It's important in both cases that we use fresh data that was not already used to train the model.To ensure we have data to use for evaluation, we'll set some aside before we begin training. We'll reserve 20% of our data for validation, and another 20% for testing. The remaining 60% will be used to train the model. This is a typical split used when training models.The following code will split our data and then plot each set as a different color:
###Code
# We'll use 60% of our data for training and 20% for testing. The remaining 20%
# will be used for validation. Calculate the indices of each section.
TRAIN_SPLIT = int(0.6 * SAMPLES)
TEST_SPLIT = int(0.2 * SAMPLES + TRAIN_SPLIT)
# Use np.split to chop our data into three parts.
# The second argument to np.split is an array of indices where the data will be
# split. We provide two indices, so the data will be divided into three chunks.
x_train, x_test, x_validate = np.split(x_values, [TRAIN_SPLIT, TEST_SPLIT])
y_train, y_test, y_validate = np.split(y_values, [TRAIN_SPLIT, TEST_SPLIT])
# Double check that our splits add up correctly
assert (x_train.size + x_validate.size + x_test.size) == SAMPLES
# Plot the data in each partition in different colors:
plt.plot(x_train, y_train, 'b.', label="Train")
plt.plot(x_test, y_test, 'r.', label="Test")
plt.plot(x_validate, y_validate, 'y.', label="Validate")
plt.legend()
plt.show()
###Output
_____no_output_____
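###Markdown
A quick check of the arithmetic: with `SAMPLES = 1000`, `TRAIN_SPLIT` works out to 600 and `TEST_SPLIT` to 800, so `np.split` returns chunks of 600 training, 200 test, and 200 validation samples, which is why the assertion above passes.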
###Markdown
Design a modelWe're going to build a model that will take an input value (in this case, `x`) and use it to predict a numeric output value (the sine of `x`). This type of problem is called a _regression_.To achieve this, we're going to create a simple neural network. It will use _layers_ of _neurons_ to attempt to learn any patterns underlying the training data, so it can make predictions.To begin with, we'll define two layers. The first layer takes a single input (our `x` value) and runs it through 16 neurons. Based on this input, each neuron will become _activated_ to a certain degree based on its internal state (its _weight_ and _bias_ values). A neuron's degree of activation is expressed as a number.The activation numbers from our first layer will be fed as inputs to our second layer, which is a single neuron. It will apply its own weights and bias to these inputs and calculate its own activation, which will be output as our `y` value.**Note:** To learn more about how neural networks function, you can explore the [Learn TensorFlow](https://codelabs.developers.google.com/codelabs/tensorflow-lab1-helloworld) codelabs.The code in the following cell defines our model using [Keras](https://www.tensorflow.org/guide/keras), TensorFlow's high-level API for creating deep learning networks. Once the network is defined, we _compile_ it, specifying parameters that determine how it will be trained:
###Code
# We'll use Keras to create a simple model architecture
from tensorflow.keras import layers
model_1 = tf.keras.Sequential()
# First layer takes a scalar input and feeds it through 16 "neurons". The
# neurons decide whether to activate based on the 'relu' activation function.
model_1.add(layers.Dense(16, activation='relu', input_shape=(1,)))
# Final layer is a single neuron, since we want to output a single value
model_1.add(layers.Dense(1))
# Compile the model using a standard optimizer and loss function for regression
model_1.compile(optimizer='rmsprop', loss='mse', metrics=['mae'])
###Output
_____no_output_____
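###Markdown
In equation form, the model just defined maps a scalar input $x$ to $y = \sum_{i=1}^{16} v_i \max(0,\, w_i x + b_i) + c$, a weighted sum of 16 ReLU "hinges" (a sketch for intuition only; the symbols $w_i$, $b_i$, $v_i$ and $c$ are illustrative names for the layer weights and biases, not identifiers from the code).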
###Markdown
Train the modelOnce we've defined the model, we can use our data to _train_ it. Training involves passing an `x` value into the neural network, checking how far the network's output deviates from the expected `y` value, and adjusting the neurons' weights and biases so that the output is more likely to be correct the next time.Training runs this process on the full dataset multiple times, and each full run-through is known as an _epoch_. The number of epochs to run during training is a parameter we can set.During each epoch, data is run through the network in multiple _batches_. Each batch, several pieces of data are passed into the network, producing output values. These outputs' correctness is measured in aggregate and the network's weights and biases are adjusted accordingly, once per batch. The _batch size_ is also a parameter we can set.The code in the following cell uses the `x` and `y` values from our training data to train the model. It runs for 1000 _epochs_, with 16 pieces of data in each _batch_. We also pass in some data to use for _validation_. As you will see when you run the cell, training can take a while to complete:
###Code
# Train the model on our training data while validating on our validation set
history_1 = model_1.fit(x_train, y_train, epochs=1000, batch_size=16,
validation_data=(x_validate, y_validate))
###Output
Train on 600 samples, validate on 200 samples
Epoch 1/1000
600/600 [==============================] - 0s 412us/sample - loss: 0.5016 - mae: 0.6297 - val_loss: 0.4922 - val_mae: 0.6235
Epoch 2/1000
600/600 [==============================] - 0s 105us/sample - loss: 0.3905 - mae: 0.5436 - val_loss: 0.4262 - val_mae: 0.5641
...
Epoch 998/1000
600/600 [==============================] - 0s 109us/sample - loss: 0.1535 - mae: 0.3068 - val_loss: 0.1507 - val_mae: 0.3113
Epoch 999/1000
600/600 [==============================] - 0s 100us/sample - loss: 0.1545 - mae: 0.3077 - val_loss: 0.1499 - val_mae: 0.3103
Epoch 1000/1000
600/600 [==============================] - 0s 132us/sample - loss: 0.1530 - mae: 0.3045 - val_loss: 0.1542 - val_mae: 0.3143
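###Markdown
The log confirms we train on 600 samples; with a batch size of 16 that is $\lceil 600/16 \rceil = 38$ weight updates per epoch, or roughly $38{,}000$ updates over the full 1000-epoch run (a back-of-the-envelope figure, not something reported in the log itself).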
###Markdown
Check the training metricsDuring training, the model's performance is constantly being measured against both our training data and the validation data that we set aside earlier. Training produces a log of data that tells us how the model's performance changed over the course of the training process.The following cells will display some of that data in a graphical form:
###Code
# Draw a graph of the loss, which is the distance between
# the predicted and actual values during training and validation.
loss = history_1.history['loss']
val_loss = history_1.history['val_loss']
epochs = range(1, len(loss) + 1)
plt.plot(epochs, loss, 'g.', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Look closer at the dataThe graph shows the _loss_ (or the difference between the model's predictions and the actual data) for each epoch. There are several ways to calculate loss, and the method we have used is _mean squared error_. There is a distinct loss value given for the training and the validation data.As we can see, the amount of loss rapidly decreases over the first 25 epochs, before flattening out. This means that the model is improving and producing more accurate predictions!Our goal is to stop training when either the model is no longer improving, or when the _training loss_ is less than the _validation loss_, which would mean that the model has learned to predict the training data so well that it can no longer generalize to new data.To make the flatter part of the graph more readable, let's skip the first 50 epochs:
###Code
# Exclude the first few epochs so the graph is easier to read
SKIP = 50
plt.plot(epochs[SKIP:], loss[SKIP:], 'g.', label='Training loss')
plt.plot(epochs[SKIP:], val_loss[SKIP:], 'b.', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
###Output
_____no_output_____
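###Markdown
For reference, the loss being plotted is mean squared error, $\mathrm{MSE} = \frac{1}{N}\sum_{i=1}^{N}\left(y_i - \hat{y}_i\right)^2$, where $\hat{y}_i$ is the model's prediction for sample $i$.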
###Markdown
Further metricsFrom the plot, we can see that loss continues to reduce until around 600 epochs, at which point it is mostly stable. This means that there's no need to train our network beyond 600 epochs.However, we can also see that the lowest loss value is still around 0.155. This means that our network's predictions are off by an average of ~15%. In addition, the validation loss values jump around a lot, and are sometimes even higher.To gain more insight into our model's performance, we can plot some more data. This time, we'll plot the _mean absolute error_, which is another way of measuring how far the network's predictions are from the actual numbers:
###Code
plt.clf()
# Draw a graph of mean absolute error, which is another way of
# measuring the amount of error in the prediction.
mae = history_1.history['mae']
val_mae = history_1.history['val_mae']
plt.plot(epochs[SKIP:], mae[SKIP:], 'g.', label='Training MAE')
plt.plot(epochs[SKIP:], val_mae[SKIP:], 'b.', label='Validation MAE')
plt.title('Training and validation mean absolute error')
plt.xlabel('Epochs')
plt.ylabel('MAE')
plt.legend()
plt.show()
###Output
_____no_output_____
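###Markdown
For reference, mean absolute error is $\mathrm{MAE} = \frac{1}{N}\sum_{i=1}^{N}\left|y_i - \hat{y}_i\right|$, i.e. the average magnitude of the prediction error, in the same units as $y$.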
###Markdown
This graph of _mean absolute error_ tells another story. We can see that training data shows consistently lower error than validation data, which means that the network may have _overfit_, or learned the training data so rigidly that it can't make effective predictions about new data.In addition, the mean absolute error values are quite high, ~0.305 at best, which means some of the model's predictions are at least 30% off. A 30% error means we are very far from accurately modelling the sine wave function.To get more insight into what is happening, we can plot our network's predictions for the training data against the expected values:
###Code
# Use the model to make predictions from our training data
predictions = model_1.predict(x_train)
# Plot the predictions along with the test data
plt.clf()
plt.title('Training data predicted vs actual values')
plt.plot(x_test, y_test, 'b.', label='Actual')
plt.plot(x_train, predictions, 'r.', label='Predicted')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Oh dear! The graph makes it clear that our network has learned to approximate the sine function in a very limited way. From `0 <= x <= 1.1` the line mostly fits, but for the rest of our `x` values it is a rough approximation at best.The rigidity of this fit suggests that the model does not have enough capacity to learn the full complexity of the sine wave function, so it's only able to approximate it in an overly simplistic way. By making our model bigger, we should be able to improve its performance. Change our modelTo make our model bigger, let's add an additional layer of neurons. The following cell redefines our model in the same way as earlier, but with an additional layer of 16 neurons in the middle:
###Code
model_2 = tf.keras.Sequential()
# First layer takes a scalar input and feeds it through 16 "neurons". The
# neurons decide whether to activate based on the 'relu' activation function.
model_2.add(layers.Dense(16, activation='relu', input_shape=(1,)))
# The new second layer may help the network learn more complex representations
model_2.add(layers.Dense(16, activation='relu'))
# Final layer is a single neuron, since we want to output a single value
model_2.add(layers.Dense(1))
# Compile the model using a standard optimizer and loss function for regression
model_2.compile(optimizer='rmsprop', loss='mse', metrics=['mae'])
###Output
_____no_output_____
###Markdown
We'll now train the new model. To save time, we'll train for only 600 epochs:
###Code
history_2 = model_2.fit(x_train, y_train, epochs=600, batch_size=16,
validation_data=(x_validate, y_validate))
###Output
Train on 600 samples, validate on 200 samples
Epoch 1/600
600/600 [==============================] - 0s 422us/sample - loss: 0.5655 - mae: 0.6259 - val_loss: 0.4104 - val_mae: 0.5509
Epoch 2/600
600/600 [==============================] - 0s 111us/sample - loss: 0.3195 - mae: 0.4902 - val_loss: 0.3341 - val_mae: 0.4927
...
Epoch 598/600
600/600 [==============================] - 0s 116us/sample - loss: 0.0124 - mae: 0.0886 - val_loss: 0.0096 - val_mae: 0.0771
Epoch 599/600
600/600 [==============================] - 0s 130us/sample - loss: 0.0125 - mae: 0.0900 - val_loss: 0.0107 - val_mae: 0.0824
Epoch 600/600
600/600 [==============================] - 0s 109us/sample - loss: 0.0124 - mae: 0.0892 - val_loss: 0.0116 - val_mae: 0.0845
###Markdown
Evaluate our new modelEach training epoch, the model prints out its loss and mean absolute error for training and validation. You can read this in the output above (note that your exact numbers may differ): ```Epoch 600/600600/600 [==============================] - 0s 109us/sample - loss: 0.0124 - mae: 0.0892 - val_loss: 0.0116 - val_mae: 0.0845```You can see that we've already got a huge improvement - validation loss has dropped from 0.15 to 0.015, and validation MAE has dropped from 0.31 to 0.1.The following cell will print the same graphs we used to evaluate our original model, but showing our new training history:
###Code
# Draw a graph of the loss, which is the distance between
# the predicted and actual values during training and validation.
loss = history_2.history['loss']
val_loss = history_2.history['val_loss']
epochs = range(1, len(loss) + 1)
plt.plot(epochs, loss, 'g.', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
# Exclude the first few epochs so the graph is easier to read
SKIP = 100
plt.clf()
plt.plot(epochs[SKIP:], loss[SKIP:], 'g.', label='Training loss')
plt.plot(epochs[SKIP:], val_loss[SKIP:], 'b.', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf()
# Draw a graph of mean absolute error, which is another way of
# measuring the amount of error in the prediction.
mae = history_2.history['mae']
val_mae = history_2.history['val_mae']
plt.plot(epochs[SKIP:], mae[SKIP:], 'g.', label='Training MAE')
plt.plot(epochs[SKIP:], val_mae[SKIP:], 'b.', label='Validation MAE')
plt.title('Training and validation mean absolute error')
plt.xlabel('Epochs')
plt.ylabel('MAE')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Great results! From these graphs, we can see several exciting things:* Our network has reached its peak accuracy much more quickly (within 200 epochs instead of 600)* The overall loss and MAE are much better than our previous network* Metrics are better for validation than training, which means the network is not overfittingThe reason the metrics for validation are better than those for training is that validation metrics are calculated at the end of each epoch, while training metrics are calculated throughout the epoch, so validation happens on a model that has been trained slightly longer.This all means our network seems to be performing well! To confirm, let's check its predictions against the test dataset we set aside earlier:
###Code
# Calculate and print the loss on our test dataset
loss = model_2.evaluate(x_test, y_test)
# Make predictions based on our test dataset
predictions = model_2.predict(x_test)
# Graph the predictions against the actual values
plt.clf()
plt.title('Comparison of predictions and actual values')
plt.plot(x_test, y_test, 'b.', label='Actual')
plt.plot(x_test, predictions, 'r.', label='Predicted')
plt.legend()
plt.show()
###Output
200/200 [==============================] - 0s 146us/sample - loss: 0.0124 - mae: 0.0907
###Markdown
Much better! The evaluation metrics we printed show that the model has a low loss and MAE on the test data, and the predictions line up visually with our data fairly well.The model isn't perfect; its predictions don't form a smooth sine curve. For instance, the line is almost straight when `x` is between 4.2 and 5.2. If we wanted to go further, we could try further increasing the capacity of the model, perhaps using some techniques to defend from overfitting.However, an important part of machine learning is knowing when to quit, and this model is good enough for our use case - which is to make some LEDs blink in a pleasing pattern. Convert to TensorFlow LiteWe now have an acceptably accurate model in-memory. However, to use this with TensorFlow Lite for Microcontrollers, we'll need to convert it into the correct format and download it as a file. To do this, we'll use the [TensorFlow Lite Converter](https://www.tensorflow.org/lite/convert). The converter outputs a file in a special, space-efficient format for use on memory-constrained devices.Since this model is going to be deployed on a microcontroller, we want it to be as tiny as possible! One technique for reducing the size of models is called [quantization](https://www.tensorflow.org/lite/performance/post_training_quantization). It reduces the precision of the model's weights, which saves memory, often without much impact on accuracy. Quantized models also run faster, since the calculations required are simpler.The TensorFlow Lite Converter can apply quantization while it converts the model. In the following cell, we'll convert the model twice: once with quantization, once without:
###Code
# Convert the model to the TensorFlow Lite format without quantization
converter = tf.lite.TFLiteConverter.from_keras_model(model_2)
tflite_model = converter.convert()
# Save the model to disk
open("sine_model.tflite", "wb").write(tflite_model)
# Convert the model to the TensorFlow Lite format with quantization
converter = tf.lite.TFLiteConverter.from_keras_model(model_2)
converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
tflite_model = converter.convert()
# Save the model to disk
open("sine_model_quantized.tflite", "wb").write(tflite_model)
###Output
_____no_output_____
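###Markdown
As an optional aside, recent TensorFlow versions also support post-training integer quantization, which calibrates tensor ranges from a representative dataset. The cell below is a minimal sketch under that assumption: the generator name is illustrative, and `tf.lite.Optimize.DEFAULT` plus `converter.representative_dataset` are TF 2.x converter features whose exact behaviour varies between versions, so treat it as a starting point rather than part of this workflow:
###Code
# Illustrative sketch of post-training integer quantization (optional, and not
# used elsewhere in this notebook). The converter runs the generator to
# calibrate the value ranges of the model's tensors.
def representative_dataset_gen():
  for value in x_test[:100]:
    # Yield one batch at a time, shaped like the model's input: (1, 1)
    yield [np.array([[value]], dtype=np.float32)]

int8_converter = tf.lite.TFLiteConverter.from_keras_model(model_2)
int8_converter.optimizations = [tf.lite.Optimize.DEFAULT]
int8_converter.representative_dataset = representative_dataset_gen
int8_tflite_model = int8_converter.convert()
open("sine_model_int8.tflite", "wb").write(int8_tflite_model)
###Output
_____no_output_____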
###Markdown
Test the converted modelsTo prove these models are still accurate after conversion and quantization, we'll use both of them to make predictions and compare these against our test results:
###Code
# Instantiate an interpreter for each model
sine_model = tf.lite.Interpreter('sine_model.tflite')
sine_model_quantized = tf.lite.Interpreter('sine_model_quantized.tflite')
# Allocate memory for each model
sine_model.allocate_tensors()
sine_model_quantized.allocate_tensors()
# Get the input and output tensors so we can feed in values and get the results
sine_model_input = sine_model.tensor(sine_model.get_input_details()[0]["index"])
sine_model_output = sine_model.tensor(sine_model.get_output_details()[0]["index"])
sine_model_quantized_input = sine_model_quantized.tensor(sine_model_quantized.get_input_details()[0]["index"])
sine_model_quantized_output = sine_model_quantized.tensor(sine_model_quantized.get_output_details()[0]["index"])
# Create arrays to store the results
sine_model_predictions = np.empty(x_test.size)
sine_model_quantized_predictions = np.empty(x_test.size)
# Run each model's interpreter for each value and store the results in arrays
for i in range(x_test.size):
sine_model_input().fill(x_test[i])
sine_model.invoke()
sine_model_predictions[i] = sine_model_output()[0]
sine_model_quantized_input().fill(x_test[i])
sine_model_quantized.invoke()
sine_model_quantized_predictions[i] = sine_model_quantized_output()[0]
# See how they line up with the data
plt.clf()
plt.title('Comparison of various models against actual values')
plt.plot(x_test, y_test, 'bo', label='Actual')
plt.plot(x_test, predictions, 'ro', label='Original predictions')
plt.plot(x_test, sine_model_predictions, 'bx', label='Lite predictions')
plt.plot(x_test, sine_model_quantized_predictions, 'gx', label='Lite quantized predictions')
plt.legend()
plt.show()
###Output
_____no_output_____
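###Markdown
The graphs above suggest the three sets of predictions nearly coincide; to put a number on that, we can compare the converted models to the original Keras predictions directly. This is a quick sketch reusing the arrays computed above (`predictions` still holds `model_2.predict(x_test)` from the earlier cell):
###Code
# Largest deviation of each converted model from the original Keras predictions
print("Max |Keras - Lite|           :",
      np.max(np.abs(predictions.flatten() - sine_model_predictions)))
print("Max |Keras - Lite quantized| :",
      np.max(np.abs(predictions.flatten() - sine_model_quantized_predictions)))
###Output
_____no_output_____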
###Markdown
We can see from the graph that the predictions for the original model, the converted model, and the quantized model are all close enough to be indistinguishable. This means that our quantized model is ready to use!We can print the difference in file size:
###Code
import os
basic_model_size = os.path.getsize("sine_model.tflite")
print("Basic model is %d bytes" % basic_model_size)
quantized_model_size = os.path.getsize("sine_model_quantized.tflite")
print("Quantized model is %d bytes" % quantized_model_size)
difference = basic_model_size - quantized_model_size
print("Difference is %d bytes" % difference)
###Output
Basic model is 2656 bytes
Quantized model is 2640 bytes
Difference is 16 bytes
###Markdown
Our quantized model is only 16 bytes smaller than the original version, which is only a tiny reduction in size! At around 2.6 kilobytes, this model is already so small that the weights make up only a small fraction of the overall size, meaning quantization has little effect.More complex models have many more weights, meaning the space saving from quantization will be much higher, approaching 4x for most sophisticated models.Regardless, our quantized model will take less time to execute than the original version, which is important on a tiny microcontroller! Write to a C fileThe final step in preparing our model for use with TensorFlow Lite for Microcontrollers is to convert it into a C source file. You can see an example of this format in [`hello_world/sine_model_data.cc`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/micro/examples/hello_world/sine_model_data.cc).To do so, we can use a command line utility named [`xxd`](https://linux.die.net/man/1/xxd). The following cell runs `xxd` on our quantized model and prints the output:
###Code
# Install xxd if it is not available
!apt-get -qq install xxd
# Save the file as a C source file
!xxd -i sine_model_quantized.tflite > sine_model_quantized.cc
# Print the source file
!cat sine_model_quantized.cc
###Output
unsigned char sine_model_quantized_tflite[] = {
0x18, 0x00, 0x00, 0x00, 0x54, 0x46, 0x4c, 0x33, 0x00, 0x00, 0x0e, 0x00,
0x18, 0x00, 0x04, 0x00, 0x08, 0x00, 0x0c, 0x00, 0x10, 0x00, 0x14, 0x00,
0x0e, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x10, 0x0a, 0x00, 0x00,
0xb8, 0x05, 0x00, 0x00, 0xa0, 0x05, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00,
0x0b, 0x00, 0x00, 0x00, 0x90, 0x05, 0x00, 0x00, 0x7c, 0x05, 0x00, 0x00,
0x24, 0x05, 0x00, 0x00, 0xd4, 0x04, 0x00, 0x00, 0xc4, 0x00, 0x00, 0x00,
0x74, 0x00, 0x00, 0x00, 0x24, 0x00, 0x00, 0x00, 0x1c, 0x00, 0x00, 0x00,
0x14, 0x00, 0x00, 0x00, 0x0c, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00,
0x54, 0xf6, 0xff, 0xff, 0x58, 0xf6, 0xff, 0xff, 0x5c, 0xf6, 0xff, 0xff,
0x60, 0xf6, 0xff, 0xff, 0xc2, 0xfa, 0xff, 0xff, 0x04, 0x00, 0x00, 0x00,
0x40, 0x00, 0x00, 0x00, 0x7c, 0x19, 0xa7, 0x3e, 0x99, 0x81, 0xb9, 0x3e,
0x56, 0x8b, 0x9f, 0x3e, 0x88, 0xd8, 0x12, 0xbf, 0x74, 0x10, 0x56, 0x3e,
0xfe, 0xc6, 0xdf, 0xbe, 0xf2, 0x10, 0x5a, 0xbe, 0xf0, 0xe2, 0x0a, 0xbe,
0x10, 0x5a, 0x98, 0xbe, 0xb9, 0x36, 0xce, 0x3d, 0x8f, 0x7f, 0x87, 0x3e,
0x2c, 0xb1, 0xfd, 0xbd, 0xe6, 0xa6, 0x8a, 0xbe, 0xa5, 0x3e, 0xda, 0x3e,
0x50, 0x34, 0xed, 0xbd, 0x90, 0x91, 0x69, 0xbe, 0x0e, 0xfb, 0xff, 0xff,
0x04, 0x00, 0x00, 0x00, 0x40, 0x00, 0x00, 0x00, 0x67, 0x41, 0x48, 0xbf,
0x24, 0xcd, 0xa0, 0xbe, 0xb7, 0x92, 0x0c, 0xbf, 0x00, 0x00, 0x00, 0x00,
0x98, 0xfe, 0x3c, 0x3f, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x4a, 0x17, 0x9a, 0xbe,
0x41, 0xcb, 0xb6, 0xbe, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x13, 0xd6, 0x1e, 0x3e, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x5a, 0xfb, 0xff, 0xff, 0x04, 0x00, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00,
0x4b, 0x98, 0xdd, 0xbd, 0x40, 0x6b, 0xcb, 0xbe, 0x36, 0x0c, 0xd4, 0x3c,
0xbd, 0x44, 0xb5, 0x3e, 0x95, 0x70, 0xe3, 0x3e, 0xe7, 0xac, 0x86, 0x3e,
0x00, 0xc4, 0x4e, 0x3d, 0x7e, 0xa6, 0x1d, 0x3e, 0xbd, 0x87, 0xbb, 0x3e,
0xb4, 0xb8, 0x09, 0xbf, 0xa1, 0x1f, 0xf8, 0xbe, 0x8d, 0x90, 0xdd, 0x3e,
0xde, 0xfa, 0x6f, 0xbe, 0xb2, 0x75, 0xe4, 0x3d, 0x6e, 0xfe, 0x36, 0x3e,
0x20, 0x18, 0xc2, 0xbe, 0x39, 0xc7, 0xfb, 0xbe, 0xfe, 0xa4, 0x30, 0xbe,
0xf7, 0x91, 0xde, 0xbe, 0xde, 0xab, 0x24, 0x3e, 0xfb, 0xbb, 0xce, 0x3e,
0xeb, 0x23, 0x80, 0xbe, 0x7b, 0x58, 0x73, 0xbe, 0x9a, 0x2e, 0x03, 0x3e,
0x10, 0x42, 0xa9, 0xbc, 0x10, 0x12, 0x64, 0xbd, 0xe3, 0x8d, 0x0c, 0x3d,
0x9e, 0x48, 0x97, 0xbe, 0x34, 0x51, 0xd4, 0xbe, 0x02, 0x3b, 0x0d, 0x3e,
0x62, 0x67, 0x89, 0xbe, 0x74, 0xdf, 0xa2, 0x3d, 0xf3, 0x25, 0xb3, 0xbe,
0xef, 0x34, 0x7b, 0x3d, 0x61, 0x70, 0xe3, 0x3d, 0xba, 0x76, 0xc0, 0xbe,
0x7d, 0xe9, 0xa7, 0x3e, 0xc3, 0xab, 0xd0, 0xbe, 0xcf, 0x7c, 0xdb, 0xbe,
0x70, 0x27, 0x9a, 0xbe, 0x98, 0xf5, 0x3c, 0xbd, 0xff, 0x4b, 0x4b, 0x3e,
0x7e, 0xa0, 0xf8, 0xbd, 0xd4, 0x6e, 0x86, 0x3d, 0x00, 0x4a, 0x07, 0x3a,
0x4c, 0x24, 0x61, 0xbe, 0x54, 0x68, 0xf7, 0xbd, 0x02, 0x3f, 0x77, 0xbe,
0x23, 0x79, 0xb3, 0x3e, 0x1c, 0x83, 0xad, 0xbd, 0xc8, 0x92, 0x8d, 0x3e,
0xa8, 0xf3, 0x15, 0xbd, 0xe6, 0x4d, 0x6c, 0x3d, 0xac, 0xe7, 0x98, 0xbe,
0x81, 0xec, 0xbd, 0x3e, 0xe2, 0x55, 0x73, 0x3e, 0xc1, 0x77, 0xc7, 0x3e,
0x6e, 0x1b, 0x5e, 0x3d, 0x27, 0x78, 0x02, 0x3f, 0xd4, 0x21, 0x90, 0x3d,
0x52, 0xdc, 0x1f, 0x3e, 0xbf, 0xda, 0x88, 0x3e, 0x80, 0x79, 0xe3, 0xbd,
0x40, 0x6f, 0x10, 0xbe, 0x20, 0x43, 0x2e, 0xbd, 0xf0, 0x76, 0xc5, 0xbd,
0xcc, 0xa0, 0x04, 0xbe, 0xf0, 0x69, 0xd7, 0xbe, 0xb1, 0xfe, 0x64, 0xbe,
0x20, 0x41, 0x84, 0xbe, 0xb2, 0xc3, 0x26, 0xbe, 0xd8, 0xf4, 0x09, 0xbe,
0x64, 0x44, 0xd1, 0x3d, 0xd5, 0xe1, 0xc8, 0xbe, 0x35, 0xbc, 0x3f, 0xbe,
0xc0, 0x94, 0x82, 0x3d, 0xdc, 0x2b, 0xb1, 0xbd, 0x02, 0xdb, 0xbf, 0xbe,
0xa5, 0x7f, 0x8a, 0x3e, 0x21, 0xb4, 0xa2, 0x3e, 0xcd, 0x86, 0x56, 0xbf,
0x9c, 0x3b, 0x76, 0xbc, 0x85, 0x6d, 0x60, 0xbf, 0x86, 0x00, 0x3c, 0xbe,
0xc1, 0x23, 0x7e, 0x3e, 0x96, 0xcd, 0x3f, 0x3e, 0x86, 0x91, 0x2d, 0x3e,
0x55, 0xef, 0x87, 0x3e, 0x7e, 0x97, 0x03, 0xbe, 0x2a, 0xcd, 0x01, 0x3e,
0x32, 0xc9, 0x8e, 0xbe, 0x72, 0x77, 0x3b, 0xbe, 0xe0, 0xa1, 0xbc, 0xbe,
0x8d, 0xb7, 0xa7, 0x3e, 0x1c, 0x05, 0x95, 0xbe, 0xf7, 0x1f, 0xbb, 0x3e,
0xc9, 0x3e, 0xd6, 0x3e, 0x80, 0x42, 0xe9, 0xbd, 0x27, 0x0c, 0xd2, 0xbe,
0x5c, 0x32, 0x34, 0xbe, 0x14, 0xcb, 0xca, 0xbd, 0xdd, 0x3a, 0x67, 0xbe,
0x1c, 0xbb, 0x8d, 0xbe, 0x91, 0xac, 0x5c, 0xbe, 0x52, 0x40, 0x6f, 0xbe,
0xd7, 0x71, 0x94, 0x3e, 0x18, 0x71, 0x09, 0xbe, 0x9b, 0x29, 0xd9, 0xbe,
0x7d, 0x66, 0xd2, 0xbe, 0x98, 0xd6, 0xb2, 0xbe, 0x00, 0xc9, 0x84, 0x3a,
0xbc, 0xda, 0xc2, 0xbd, 0x1d, 0xc2, 0x1b, 0xbf, 0xd4, 0xdd, 0x92, 0x3e,
0x07, 0x87, 0x6c, 0xbe, 0x40, 0xc2, 0x3b, 0xbe, 0xbd, 0xe2, 0x9c, 0x3e,
0x0a, 0xb5, 0xa0, 0xbe, 0xe2, 0xd5, 0x9c, 0xbe, 0x3e, 0xbb, 0x7c, 0x3e,
0x17, 0xb4, 0xcf, 0x3e, 0xd5, 0x8e, 0xc8, 0xbe, 0x7c, 0xf9, 0x5c, 0x3e,
0x80, 0xfc, 0x0d, 0x3d, 0xc5, 0xd5, 0x8b, 0x3e, 0xf5, 0x17, 0xa2, 0x3e,
0xc7, 0x60, 0x89, 0xbe, 0xec, 0x95, 0x87, 0x3d, 0x7a, 0xc2, 0x5d, 0xbf,
0x77, 0x94, 0x98, 0x3e, 0x77, 0x39, 0x07, 0xbc, 0x42, 0x29, 0x00, 0x3e,
0xaf, 0xd0, 0xa9, 0x3e, 0x31, 0x23, 0xc4, 0xbe, 0x95, 0x36, 0x5b, 0xbe,
0xc7, 0xdc, 0x83, 0xbe, 0x1e, 0x6b, 0x47, 0x3e, 0x5b, 0x24, 0x99, 0x3e,
0x99, 0x27, 0x54, 0x3e, 0xc8, 0x20, 0xdd, 0xbd, 0x5a, 0x86, 0x2f, 0x3e,
0x80, 0xf0, 0x69, 0xbe, 0x44, 0xfc, 0x84, 0xbd, 0x82, 0xa0, 0x2a, 0xbe,
0x87, 0xe6, 0x2a, 0x3e, 0xd8, 0x34, 0xae, 0x3d, 0x50, 0xbd, 0xb5, 0x3e,
0xc4, 0x8c, 0x88, 0xbe, 0xe3, 0xbc, 0xa5, 0x3e, 0xa9, 0xda, 0x9e, 0x3e,
0x3e, 0xb8, 0x23, 0xbe, 0x80, 0x90, 0x15, 0x3d, 0x97, 0x3f, 0xc3, 0x3e,
0xca, 0x5c, 0x9d, 0x3e, 0x21, 0xe8, 0xe1, 0x3e, 0xc0, 0x49, 0x01, 0xbc,
0x00, 0x0b, 0x88, 0xbd, 0x3f, 0xf7, 0xca, 0x3c, 0xfb, 0x5a, 0xb1, 0x3e,
0x60, 0xd2, 0x0d, 0x3c, 0xce, 0x23, 0x78, 0xbf, 0x8f, 0x4f, 0xb9, 0xbe,
0x69, 0x6a, 0x34, 0xbf, 0x4b, 0x5e, 0xa9, 0x3e, 0x64, 0x8c, 0xd9, 0x3e,
0x52, 0x77, 0x36, 0x3e, 0xeb, 0xaf, 0xbe, 0x3e, 0x40, 0xbe, 0x36, 0x3c,
0x08, 0x65, 0x3b, 0xbd, 0x55, 0xe0, 0x66, 0xbd, 0xd2, 0xe8, 0x9b, 0xbe,
0x86, 0xe3, 0x09, 0xbe, 0x93, 0x3d, 0xdd, 0x3e, 0x0f, 0x66, 0x18, 0x3f,
0x18, 0x05, 0x33, 0xbd, 0xde, 0x15, 0xd7, 0xbe, 0xaa, 0xcf, 0x49, 0xbe,
0xa2, 0xa5, 0x64, 0x3e, 0xe6, 0x9c, 0x42, 0xbe, 0x54, 0x42, 0xcc, 0x3d,
0xa0, 0xbd, 0x9d, 0xbe, 0xc2, 0x69, 0x48, 0x3e, 0x5b, 0x8b, 0xa2, 0xbe,
0xc0, 0x13, 0x87, 0x3d, 0x36, 0xfd, 0x69, 0x3e, 0x05, 0x86, 0x40, 0xbe,
0x1e, 0x7a, 0xce, 0xbe, 0x46, 0x13, 0xa7, 0xbe, 0x68, 0x52, 0x86, 0xbe,
0x04, 0x9e, 0x86, 0xbd, 0x8c, 0x54, 0xc1, 0x3d, 0xe0, 0x3b, 0xad, 0x3c,
0x42, 0x67, 0x85, 0xbd, 0xea, 0x97, 0x42, 0x3e, 0x6e, 0x13, 0x3b, 0xbf,
0x56, 0x5b, 0x16, 0x3e, 0xaa, 0xab, 0xdf, 0x3e, 0xc8, 0x41, 0x36, 0x3d,
0x24, 0x2d, 0x47, 0xbe, 0x77, 0xa5, 0xae, 0x3e, 0xc0, 0xc2, 0x5b, 0x3c,
0xac, 0xac, 0x4e, 0x3e, 0x99, 0xec, 0x13, 0xbe, 0xf2, 0xab, 0x73, 0x3e,
0xaa, 0xa1, 0x48, 0xbe, 0xe8, 0xd3, 0x01, 0xbe, 0x60, 0xb7, 0xc7, 0xbd,
0x64, 0x72, 0xd3, 0x3d, 0x83, 0xd3, 0x99, 0x3e, 0x0c, 0x76, 0x34, 0xbe,
0x42, 0xda, 0x0d, 0x3e, 0xfb, 0x47, 0x9a, 0x3e, 0x8b, 0xdc, 0x92, 0xbe,
0x56, 0x7f, 0x6b, 0x3e, 0x04, 0xd4, 0x88, 0xbd, 0x11, 0x9e, 0x80, 0x3e,
0x3c, 0x89, 0xff, 0x3d, 0xb3, 0x3e, 0x88, 0x3e, 0xf7, 0xf0, 0x88, 0x3e,
0x28, 0xfb, 0xc9, 0xbe, 0x53, 0x3e, 0xcf, 0x3e, 0xac, 0x75, 0xdc, 0xbe,
0xdd, 0xca, 0xd7, 0x3e, 0x01, 0x58, 0xa7, 0x3e, 0x29, 0xb8, 0x13, 0xbf,
0x76, 0x81, 0x12, 0xbc, 0x28, 0x8b, 0x16, 0xbf, 0x0e, 0xec, 0x0e, 0x3e,
0x40, 0x0a, 0xdb, 0xbd, 0x98, 0xec, 0xbf, 0xbd, 0x32, 0x55, 0x0c, 0xbe,
0xfb, 0xf9, 0xc9, 0x3e, 0x83, 0x4a, 0x6d, 0xbe, 0x76, 0x59, 0xe2, 0xbe,
0x54, 0x7d, 0x9f, 0xbb, 0x9d, 0xe8, 0x95, 0x3e, 0x5c, 0xd3, 0xd0, 0x3d,
0x19, 0x8a, 0xb0, 0x3e, 0xde, 0x6f, 0x2e, 0xbe, 0xd0, 0x16, 0x83, 0x3d,
0x9c, 0x7d, 0x11, 0xbf, 0x2b, 0xcc, 0x25, 0x3c, 0x2a, 0xa5, 0x27, 0xbe,
0x22, 0x14, 0xc7, 0xbe, 0x5e, 0x7a, 0xac, 0x3e, 0x4e, 0x41, 0x94, 0xbe,
0x5a, 0x68, 0x7b, 0x3e, 0x86, 0xfd, 0x4e, 0x3e, 0xa2, 0x56, 0x6a, 0xbe,
0xca, 0xfe, 0x81, 0xbe, 0x43, 0xc3, 0xb1, 0xbd, 0xc5, 0xb8, 0xa7, 0x3e,
0x55, 0x23, 0xcd, 0x3e, 0xaf, 0x2e, 0x76, 0x3e, 0x69, 0xa8, 0x90, 0xbe,
0x0d, 0xba, 0xb9, 0x3e, 0x66, 0xff, 0xff, 0xff, 0x04, 0x00, 0x00, 0x00,
0x40, 0x00, 0x00, 0x00, 0x53, 0xd6, 0xe2, 0x3d, 0x66, 0xb6, 0xcc, 0x3e,
0x03, 0xe7, 0xf6, 0x3e, 0xe0, 0x28, 0x10, 0xbf, 0x00, 0x00, 0x00, 0x00,
0x3e, 0x3d, 0xb0, 0x3e, 0x00, 0x00, 0x00, 0x00, 0x62, 0xf0, 0x77, 0x3e,
0xa6, 0x9d, 0xa4, 0x3e, 0x3a, 0x4b, 0xf3, 0xbe, 0x71, 0x9e, 0xa7, 0x3e,
0x00, 0x00, 0x00, 0x00, 0x34, 0x39, 0xa2, 0x3e, 0x00, 0x00, 0x00, 0x00,
0xcc, 0x9c, 0x4a, 0x3e, 0xab, 0x40, 0xa3, 0x3e, 0xb2, 0xff, 0xff, 0xff,
0x04, 0x00, 0x00, 0x00, 0x40, 0x00, 0x00, 0x00, 0xb3, 0x71, 0x67, 0x3f,
0x9a, 0x7a, 0x95, 0xbf, 0xe1, 0x48, 0xe8, 0xbe, 0x8a, 0x72, 0x96, 0x3e,
0x00, 0xd2, 0xd3, 0xbb, 0x1a, 0xc5, 0xd7, 0x3f, 0xac, 0x7e, 0xc8, 0xbe,
0x90, 0xa7, 0x95, 0xbe, 0x3b, 0xd7, 0xdc, 0xbe, 0x41, 0xa8, 0x16, 0x3f,
0x50, 0x5b, 0xcb, 0x3f, 0x52, 0xb9, 0xed, 0xbe, 0x2e, 0xa7, 0xc6, 0xbe,
0xaf, 0x0f, 0x14, 0xbf, 0xb3, 0xda, 0x59, 0x3f, 0x02, 0xec, 0xd7, 0xbe,
0x00, 0x00, 0x06, 0x00, 0x08, 0x00, 0x04, 0x00, 0x06, 0x00, 0x00, 0x00,
0x04, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0x66, 0x11, 0x1f, 0xbf,
0xb8, 0xfb, 0xff, 0xff, 0x0f, 0x00, 0x00, 0x00, 0x54, 0x4f, 0x43, 0x4f,
0x20, 0x43, 0x6f, 0x6e, 0x76, 0x65, 0x72, 0x74, 0x65, 0x64, 0x2e, 0x00,
0x01, 0x00, 0x00, 0x00, 0x10, 0x00, 0x00, 0x00, 0x0c, 0x00, 0x14, 0x00,
0x04, 0x00, 0x08, 0x00, 0x0c, 0x00, 0x10, 0x00, 0x0c, 0x00, 0x00, 0x00,
0xf0, 0x00, 0x00, 0x00, 0xe4, 0x00, 0x00, 0x00, 0xd8, 0x00, 0x00, 0x00,
0x04, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x90, 0x00, 0x00, 0x00,
0x48, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0xce, 0xff, 0xff, 0xff,
0x00, 0x00, 0x00, 0x08, 0x18, 0x00, 0x00, 0x00, 0x0c, 0x00, 0x00, 0x00,
0x04, 0x00, 0x00, 0x00, 0x1c, 0xfc, 0xff, 0xff, 0x01, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00,
0x08, 0x00, 0x00, 0x00, 0x09, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0e, 0x00,
0x14, 0x00, 0x00, 0x00, 0x08, 0x00, 0x0c, 0x00, 0x07, 0x00, 0x10, 0x00,
0x0e, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x08, 0x1c, 0x00, 0x00, 0x00,
0x10, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0xba, 0xff, 0xff, 0xff,
0x00, 0x00, 0x00, 0x01, 0x01, 0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00,
0x03, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0x05, 0x00, 0x00, 0x00,
0x06, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0e, 0x00, 0x16, 0x00, 0x00, 0x00,
0x08, 0x00, 0x0c, 0x00, 0x07, 0x00, 0x10, 0x00, 0x0e, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x08, 0x24, 0x00, 0x00, 0x00, 0x18, 0x00, 0x00, 0x00,
0x0c, 0x00, 0x00, 0x00, 0x00, 0x00, 0x06, 0x00, 0x08, 0x00, 0x07, 0x00,
0x06, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x01, 0x00, 0x00, 0x00,
0x04, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00,
0x02, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00,
0x0a, 0x00, 0x00, 0x00, 0x10, 0x03, 0x00, 0x00, 0xa4, 0x02, 0x00, 0x00,
0x40, 0x02, 0x00, 0x00, 0xf4, 0x01, 0x00, 0x00, 0xac, 0x01, 0x00, 0x00,
0x48, 0x01, 0x00, 0x00, 0xfc, 0x00, 0x00, 0x00, 0xb4, 0x00, 0x00, 0x00,
0x50, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0x26, 0xfd, 0xff, 0xff,
0x3c, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x0c, 0x00, 0x00, 0x00,
0x04, 0x00, 0x00, 0x00, 0x18, 0xfd, 0xff, 0xff, 0x20, 0x00, 0x00, 0x00,
0x73, 0x65, 0x71, 0x75, 0x65, 0x6e, 0x74, 0x69, 0x61, 0x6c, 0x5f, 0x31,
0x2f, 0x64, 0x65, 0x6e, 0x73, 0x65, 0x5f, 0x34, 0x2f, 0x4d, 0x61, 0x74,
0x4d, 0x75, 0x6c, 0x5f, 0x62, 0x69, 0x61, 0x73, 0x00, 0x00, 0x00, 0x00,
0x01, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x6e, 0xfd, 0xff, 0xff,
0x50, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x0c, 0x00, 0x00, 0x00,
0x04, 0x00, 0x00, 0x00, 0x60, 0xfd, 0xff, 0xff, 0x34, 0x00, 0x00, 0x00,
0x73, 0x65, 0x71, 0x75, 0x65, 0x6e, 0x74, 0x69, 0x61, 0x6c, 0x5f, 0x31,
0x2f, 0x64, 0x65, 0x6e, 0x73, 0x65, 0x5f, 0x34, 0x2f, 0x4d, 0x61, 0x74,
0x4d, 0x75, 0x6c, 0x2f, 0x52, 0x65, 0x61, 0x64, 0x56, 0x61, 0x72, 0x69,
0x61, 0x62, 0x6c, 0x65, 0x4f, 0x70, 0x2f, 0x74, 0x72, 0x61, 0x6e, 0x73,
0x70, 0x6f, 0x73, 0x65, 0x00, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00,
0x01, 0x00, 0x00, 0x00, 0x10, 0x00, 0x00, 0x00, 0xce, 0xfd, 0xff, 0xff,
0x34, 0x00, 0x00, 0x00, 0x08, 0x00, 0x00, 0x00, 0x0c, 0x00, 0x00, 0x00,
0x04, 0x00, 0x00, 0x00, 0xc0, 0xfd, 0xff, 0xff, 0x19, 0x00, 0x00, 0x00,
0x73, 0x65, 0x71, 0x75, 0x65, 0x6e, 0x74, 0x69, 0x61, 0x6c, 0x5f, 0x31,
0x2f, 0x64, 0x65, 0x6e, 0x73, 0x65, 0x5f, 0x33, 0x2f, 0x52, 0x65, 0x6c,
0x75, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00,
0x10, 0x00, 0x00, 0x00, 0x12, 0xfe, 0xff, 0xff, 0x3c, 0x00, 0x00, 0x00,
0x03, 0x00, 0x00, 0x00, 0x0c, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00,
0x04, 0xfe, 0xff, 0xff, 0x20, 0x00, 0x00, 0x00, 0x73, 0x65, 0x71, 0x75,
0x65, 0x6e, 0x74, 0x69, 0x61, 0x6c, 0x5f, 0x31, 0x2f, 0x64, 0x65, 0x6e,
0x73, 0x65, 0x5f, 0x33, 0x2f, 0x4d, 0x61, 0x74, 0x4d, 0x75, 0x6c, 0x5f,
0x62, 0x69, 0x61, 0x73, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00,
0x10, 0x00, 0x00, 0x00, 0x5a, 0xfe, 0xff, 0xff, 0x50, 0x00, 0x00, 0x00,
0x04, 0x00, 0x00, 0x00, 0x0c, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00,
0x4c, 0xfe, 0xff, 0xff, 0x34, 0x00, 0x00, 0x00, 0x73, 0x65, 0x71, 0x75,
0x65, 0x6e, 0x74, 0x69, 0x61, 0x6c, 0x5f, 0x31, 0x2f, 0x64, 0x65, 0x6e,
0x73, 0x65, 0x5f, 0x33, 0x2f, 0x4d, 0x61, 0x74, 0x4d, 0x75, 0x6c, 0x2f,
0x52, 0x65, 0x61, 0x64, 0x56, 0x61, 0x72, 0x69, 0x61, 0x62, 0x6c, 0x65,
0x4f, 0x70, 0x2f, 0x74, 0x72, 0x61, 0x6e, 0x73, 0x70, 0x6f, 0x73, 0x65,
0x00, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x10, 0x00, 0x00, 0x00,
0x10, 0x00, 0x00, 0x00, 0xba, 0xfe, 0xff, 0xff, 0x34, 0x00, 0x00, 0x00,
0x0a, 0x00, 0x00, 0x00, 0x0c, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00,
0xac, 0xfe, 0xff, 0xff, 0x19, 0x00, 0x00, 0x00, 0x73, 0x65, 0x71, 0x75,
0x65, 0x6e, 0x74, 0x69, 0x61, 0x6c, 0x5f, 0x31, 0x2f, 0x64, 0x65, 0x6e,
0x73, 0x65, 0x5f, 0x32, 0x2f, 0x52, 0x65, 0x6c, 0x75, 0x00, 0x00, 0x00,
0x02, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x10, 0x00, 0x00, 0x00,
0xfe, 0xfe, 0xff, 0xff, 0x3c, 0x00, 0x00, 0x00, 0x05, 0x00, 0x00, 0x00,
0x0c, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0xf0, 0xfe, 0xff, 0xff,
0x20, 0x00, 0x00, 0x00, 0x73, 0x65, 0x71, 0x75, 0x65, 0x6e, 0x74, 0x69,
0x61, 0x6c, 0x5f, 0x31, 0x2f, 0x64, 0x65, 0x6e, 0x73, 0x65, 0x5f, 0x32,
0x2f, 0x4d, 0x61, 0x74, 0x4d, 0x75, 0x6c, 0x5f, 0x62, 0x69, 0x61, 0x73,
0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x10, 0x00, 0x00, 0x00,
0x46, 0xff, 0xff, 0xff, 0x50, 0x00, 0x00, 0x00, 0x06, 0x00, 0x00, 0x00,
0x0c, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0x38, 0xff, 0xff, 0xff,
0x34, 0x00, 0x00, 0x00, 0x73, 0x65, 0x71, 0x75, 0x65, 0x6e, 0x74, 0x69,
0x61, 0x6c, 0x5f, 0x31, 0x2f, 0x64, 0x65, 0x6e, 0x73, 0x65, 0x5f, 0x32,
0x2f, 0x4d, 0x61, 0x74, 0x4d, 0x75, 0x6c, 0x2f, 0x52, 0x65, 0x61, 0x64,
0x56, 0x61, 0x72, 0x69, 0x61, 0x62, 0x6c, 0x65, 0x4f, 0x70, 0x2f, 0x74,
0x72, 0x61, 0x6e, 0x73, 0x70, 0x6f, 0x73, 0x65, 0x00, 0x00, 0x00, 0x00,
0x02, 0x00, 0x00, 0x00, 0x10, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00,
0xa6, 0xff, 0xff, 0xff, 0x48, 0x00, 0x00, 0x00, 0x09, 0x00, 0x00, 0x00,
0x2c, 0x00, 0x00, 0x00, 0x0c, 0x00, 0x00, 0x00, 0x08, 0x00, 0x0c, 0x00,
0x04, 0x00, 0x08, 0x00, 0x08, 0x00, 0x00, 0x00, 0x10, 0x00, 0x00, 0x00,
0x04, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x7f, 0x43,
0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0d, 0x00, 0x00, 0x00,
0x64, 0x65, 0x6e, 0x73, 0x65, 0x5f, 0x32, 0x5f, 0x69, 0x6e, 0x70, 0x75,
0x74, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00,
0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0e, 0x00, 0x14, 0x00, 0x04, 0x00,
0x00, 0x00, 0x08, 0x00, 0x0c, 0x00, 0x10, 0x00, 0x0e, 0x00, 0x00, 0x00,
0x28, 0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x10, 0x00, 0x00, 0x00,
0x08, 0x00, 0x00, 0x00, 0x04, 0x00, 0x04, 0x00, 0x04, 0x00, 0x00, 0x00,
0x08, 0x00, 0x00, 0x00, 0x49, 0x64, 0x65, 0x6e, 0x74, 0x69, 0x74, 0x79,
0x00, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00,
0x01, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x10, 0x00, 0x00, 0x00,
0x00, 0x00, 0x0a, 0x00, 0x0c, 0x00, 0x07, 0x00, 0x00, 0x00, 0x08, 0x00,
0x0a, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x09, 0x03, 0x00, 0x00, 0x00
};
unsigned int sine_model_quantized_tflite_len = 2640;
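###Markdown
As a quick sanity check, the `sine_model_quantized_tflite_len` value at the end of the generated C file should match the on-disk size of the quantized model we measured earlier (a small illustrative check, not required for deployment):
###Code
# The C array should contain exactly the bytes of the .tflite flatbuffer
tflite_bytes = open("sine_model_quantized.tflite", "rb").read()
print("File size on disk:", len(tflite_bytes))  # expected: 2640, same as the array length
assert len(tflite_bytes) == 2640
###Output
_____no_output_____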
###Markdown
**Copyright 2019 The TensorFlow Authors.**
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Create and convert a TensorFlow modelThis notebook is designed to demonstrate the process of creating a TensorFlow model and converting it to use with TensorFlow Lite. The model created in this notebook is used in the [hello_world](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/micro/examples/hello_world) sample for [TensorFlow Lite for Microcontrollers](https://www.tensorflow.org/lite/microcontrollers/overview). Run in Google Colab View source on GitHub Import dependenciesOur first task is to import the dependencies we need. Run the following cell to do so:
###Code
# TensorFlow is an open source machine learning library
import tensorflow as tf
# Numpy is a math library
import numpy as np
# Matplotlib is a graphing library
import matplotlib.pyplot as plt
# math is Python's math library
import math
###Output
_____no_output_____
###Markdown
Generate dataDeep learning networks learn to model patterns in underlying data. In this notebook, we're going to train a network to model data generated by a [sine](https://en.wikipedia.org/wiki/Sine) function. This will result in a model that can take a value, `x`, and predict its sine, `y`.In a real world application, if you needed the sine of `x`, you could just calculate it directly. However, by training a model to do this, we can demonstrate the basic principles of machine learning.In the [hello_world](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/micro/examples/hello_world) sample for [TensorFlow Lite for Microcontrollers](https://www.tensorflow.org/lite/microcontrollers/overview), we'll use this model to control LEDs that light up in a sequence.The code in the following cell will generate a set of random `x` values, calculate their sine values, and display them on a graph:
###Code
# We'll generate this many sample datapoints
SAMPLES = 1000
# Set a "seed" value, so we get the same random numbers each time we run this
# notebook
np.random.seed(1337)
# Generate a uniformly distributed set of random numbers in the range from
# 0 to 2π, which covers a complete sine wave oscillation
x_values = np.random.uniform(low=0, high=2*math.pi, size=SAMPLES)
# Shuffle the values to guarantee they're not in order
np.random.shuffle(x_values)
# Calculate the corresponding sine values
y_values = np.sin(x_values)
# Plot our data. The 'b.' argument tells the library to print blue dots.
plt.plot(x_values, y_values, 'b.')
plt.show()
###Output
_____no_output_____
###Markdown
Add some noiseSince it was generated directly by the sine function, our data fits a nice, smooth curve.However, machine learning models are good at extracting underlying meaning from messy, real world data. To demonstrate this, we can add some noise to our data to approximate something more life-like.In the following cell, we'll add some random noise to each value, then draw a new graph:
###Code
# Add a small random number to each y value
y_values += 0.1 * np.random.randn(*y_values.shape)
# Plot our data
plt.plot(x_values, y_values, 'b.')
plt.show()
###Output
_____no_output_____
###Markdown
Split our dataWe now have a noisy dataset that approximates real world data. We'll be using this to train our model.To evaluate the accuracy of the model we train, we'll need to compare its predictions to real data and check how well they match up. This evaluation happens during training (where it is referred to as validation) and after training (referred to as testing). It's important in both cases that we use fresh data that was not already used to train the model.To ensure we have data to use for evaluation, we'll set some aside before we begin training. We'll reserve 20% of our data for validation, and another 20% for testing. The remaining 60% will be used to train the model. This is a typical split used when training models.The following code will split our data and then plot each set as a different color:
###Code
# We'll use 60% of our data for training and 20% for testing. The remaining 20%
# will be used for validation. Calculate the indices of each section.
TRAIN_SPLIT = int(0.6 * SAMPLES)
TEST_SPLIT = int(0.2 * SAMPLES + TRAIN_SPLIT)
# Use np.split to chop our data into three parts.
# The second argument to np.split is an array of indices where the data will be
# split. We provide two indices, so the data will be divided into three chunks.
x_train, x_test, x_validate = np.split(x_values, [TRAIN_SPLIT, TEST_SPLIT])
y_train, y_test, y_validate = np.split(y_values, [TRAIN_SPLIT, TEST_SPLIT])
# Double check that our splits add up correctly
assert (x_train.size + x_validate.size + x_test.size) == SAMPLES
# Plot the data in each partition in different colors:
plt.plot(x_train, y_train, 'b.', label="Train")
plt.plot(x_test, y_test, 'r.', label="Test")
plt.plot(x_validate, y_validate, 'y.', label="Validate")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Design a modelWe're going to build a model that will take an input value (in this case, `x`) and use it to predict a numeric output value (the sine of `x`). This type of problem is called a _regression_.To achieve this, we're going to create a simple neural network. It will use _layers_ of _neurons_ to attempt to learn any patterns underlying the training data, so it can make predictions.To begin with, we'll define two layers. The first layer takes a single input (our `x` value) and runs it through 16 neurons. Based on this input, each neuron will become _activated_ to a certain degree based on its internal state (its _weight_ and _bias_ values). A neuron's degree of activation is expressed as a number.The activation numbers from our first layer will be fed as inputs to our second layer, which is a single neuron. It will apply its own weights and bias to these inputs and calculate its own activation, which will be output as our `y` value.**Note:** To learn more about how neural networks function, you can explore the [Learn TensorFlow](https://codelabs.developers.google.com/codelabs/tensorflow-lab1-helloworld) codelabs.The code in the following cell defines our model using [Keras](https://www.tensorflow.org/guide/keras), TensorFlow's high-level API for creating deep learning networks. Once the network is defined, we _compile_ it, specifying parameters that determine how it will be trained:
###Code
# We'll use Keras to create a simple model architecture
from tensorflow.keras import layers
model_1 = tf.keras.Sequential()
# First layer takes a scalar input and feeds it through 16 "neurons". The
# neurons decide whether to activate based on the 'relu' activation function.
model_1.add(layers.Dense(16, activation='relu', input_shape=(1,)))
# Final layer is a single neuron, since we want to output a single value
model_1.add(layers.Dense(1))
# Compile the model using a standard optimizer and loss function for regression
model_1.compile(optimizer='rmsprop', loss='mse', metrics=['mae'])
###Output
_____no_output_____
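###Markdown
Before training, it can help to see what these two layers actually compute. The following cell is a minimal numpy sketch of the forward pass, not part of the original workflow; the weights here are random placeholders purely for illustration, since the real values are learned during training.
###Code
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

# Illustrative (random) weights and biases; the real ones are learned during training
W1 = np.random.randn(1, 16)    # first layer: 1 input -> 16 neurons
b1 = np.zeros(16)
W2 = np.random.randn(16, 1)    # final layer: 16 activations -> 1 output
b2 = np.zeros(1)

x = np.array([[0.5]])          # a single scalar input
hidden = relu(x @ W1 + b1)     # activations of the 16 hidden neurons
y_pred = hidden @ W2 + b2      # the network's output (its sine estimate)
print(hidden.shape, y_pred.shape)
###Output
_____no_output_____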
###Markdown
Train the modelOnce we've defined the model, we can use our data to _train_ it. Training involves passing an `x` value into the neural network, checking how far the network's output deviates from the expected `y` value, and adjusting the neurons' weights and biases so that the output is more likely to be correct the next time.Training runs this process on the full dataset multiple times, and each full run-through is known as an _epoch_. The number of epochs to run during training is a parameter we can set.During each epoch, data is run through the network in multiple _batches_. Each batch, several pieces of data are passed into the network, producing output values. These outputs' correctness is measured in aggregate and the network's weights and biases are adjusted accordingly, once per batch. The _batch size_ is also a parameter we can set.The code in the following cell uses the `x` and `y` values from our training data to train the model. It runs for 1000 _epochs_, with 16 pieces of data in each _batch_. We also pass in some data to use for _validation_. As you will see when you run the cell, training can take a while to complete:
###Code
# Train the model on our training data while validating on our validation set
history_1 = model_1.fit(x_train, y_train, epochs=1000, batch_size=16,
validation_data=(x_validate, y_validate))
###Output
Train on 600 samples, validate on 200 samples
Epoch 1/1000
600/600 [==============================] - 0s 412us/sample - loss: 0.5016 - mae: 0.6297 - val_loss: 0.4922 - val_mae: 0.6235
Epoch 2/1000
600/600 [==============================] - 0s 105us/sample - loss: 0.3905 - mae: 0.5436 - val_loss: 0.4262 - val_mae: 0.5641
...
Epoch 998/1000
600/600 [==============================] - 0s 109us/sample - loss: 0.1535 - mae: 0.3068 - val_loss: 0.1507 - val_mae: 0.3113
Epoch 999/1000
600/600 [==============================] - 0s 100us/sample - loss: 0.1545 - mae: 0.3077 - val_loss: 0.1499 - val_mae: 0.3103
Epoch 1000/1000
600/600 [==============================] - 0s 132us/sample - loss: 0.1530 - mae: 0.3045 - val_loss: 0.1542 - val_mae: 0.3143
###Markdown
Check the training metricsDuring training, the model's performance is constantly being measured against both our training data and the validation data that we set aside earlier. Training produces a log of data that tells us how the model's performance changed over the course of the training process.The following cells will display some of that data in a graphical form:
###Code
# Draw a graph of the loss, which is the distance between
# the predicted and actual values during training and validation.
loss = history_1.history['loss']
val_loss = history_1.history['val_loss']
epochs = range(1, len(loss) + 1)
plt.plot(epochs, loss, 'g.', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Look closer at the dataThe graph shows the _loss_ (or the difference between the model's predictions and the actual data) for each epoch. There are several ways to calculate loss, and the method we have used is _mean squared error_. There is a distinct loss value given for the training and the validation data.As we can see, the amount of loss rapidly decreases over the first 25 epochs, before flattening out. This means that the model is improving and producing more accurate predictions!Our goal is to stop training when either the model is no longer improving, or when the _training loss_ becomes consistently lower than the _validation loss_, which would suggest that the model has learned the training data so well that it can no longer generalize to new data.To make the flatter part of the graph more readable, let's skip the first 50 epochs:
###Code
# Exclude the first few epochs so the graph is easier to read
SKIP = 50
plt.plot(epochs[SKIP:], loss[SKIP:], 'g.', label='Training loss')
plt.plot(epochs[SKIP:], val_loss[SKIP:], 'b.', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
###Output
_____no_output_____
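###Markdown
As a side note, rather than picking the number of epochs by eye, Keras can stop training automatically once the validation loss stops improving. The cell below is only a sketch of that approach using the `EarlyStopping` callback; it is not used in the rest of this notebook, and the `patience` value chosen here is arbitrary.
###Code
# Sketch only: stop training automatically when validation loss stops improving
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss',         # watch the validation loss
    patience=50,                # tolerate 50 epochs without improvement (arbitrary choice)
    restore_best_weights=True)  # roll back to the best weights seen

# history = model_1.fit(x_train, y_train, epochs=1000, batch_size=16,
#                       validation_data=(x_validate, y_validate),
#                       callbacks=[early_stop])
###Output
_____no_output_____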
###Markdown
Further metricsFrom the plot, we can see that loss continues to reduce until around 600 epochs, at which point it is mostly stable. This means that there's no need to train our network beyond 600 epochs.However, we can also see that the lowest loss value is still around 0.155. This means that our network's predictions are off by an average of ~15%. In addition, the validation loss values jump around a lot, and are sometimes even higher.To gain more insight into our model's performance we can plot some more data. This time, we'll plot the _mean absolute error_, which is another way of measuring how far the network's predictions are from the actual numbers:
###Code
plt.clf()
# Draw a graph of mean absolute error, which is another way of
# measuring the amount of error in the prediction.
mae = history_1.history['mae']
val_mae = history_1.history['val_mae']
plt.plot(epochs[SKIP:], mae[SKIP:], 'g.', label='Training MAE')
plt.plot(epochs[SKIP:], val_mae[SKIP:], 'b.', label='Validation MAE')
plt.title('Training and validation mean absolute error')
plt.xlabel('Epochs')
plt.ylabel('MAE')
plt.legend()
plt.show()
###Output
_____no_output_____
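###Markdown
To connect these curves back to their definitions, the metrics can also be computed by hand from raw predictions. The cell below is a small sketch (not part of the original notebook) that recomputes MSE and MAE for the validation set using the variables already defined above.
###Code
# Sketch: recompute the loss (MSE) and MAE directly from predictions
val_preds = model_1.predict(x_validate).flatten()
errors = val_preds - y_validate
manual_mse = np.mean(errors ** 2)      # mean squared error, the training loss
manual_mae = np.mean(np.abs(errors))   # mean absolute error
print("MSE: %.4f, MAE: %.4f" % (manual_mse, manual_mae))
###Output
_____no_output_____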
###Markdown
This graph of _mean absolute error_ tells another story. We can see that training data shows consistently lower error than validation data, which means that the network may have _overfit_, or learned the training data so rigidly that it can't make effective predictions about new data.In addition, the mean absolute error values are quite high, ~0.305 at best, which means some of the model's predictions are at least 30% off. A 30% error means we are very far from accurately modelling the sine wave function.To get more insight into what is happening, we can plot our network's predictions for the training data against the expected values:
###Code
# Use the model to make predictions from our training data
predictions = model_1.predict(x_train)
# Plot the predictions alongside the test data
plt.clf()
plt.title('Training data predicted vs actual values')
plt.plot(x_test, y_test, 'b.', label='Actual')
plt.plot(x_train, predictions, 'r.', label='Predicted')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Oh dear! The graph makes it clear that our network has learned to approximate the sine function in a very limited way. From `0 <= x <= 1.1` the line mostly fits, but for the rest of our `x` values it is a rough approximation at best.The rigidity of this fit suggests that the model does not have enough capacity to learn the full complexity of the sine wave function, so it's only able to approximate it in an overly simplistic way. By making our model bigger, we should be able to improve its performance. Change our modelTo make our model bigger, let's add an additional layer of neurons. The following cell redefines our model in the same way as earlier, but with an additional layer of 16 neurons in the middle:
###Code
model_2 = tf.keras.Sequential()
# First layer takes a scalar input and feeds it through 16 "neurons". The
# neurons decide whether to activate based on the 'relu' activation function.
model_2.add(layers.Dense(16, activation='relu', input_shape=(1,)))
# The new second layer may help the network learn more complex representations
model_2.add(layers.Dense(16, activation='relu'))
# Final layer is a single neuron, since we want to output a single value
model_2.add(layers.Dense(1))
# Compile the model using a standard optimizer and loss function for regression
model_2.compile(optimizer='rmsprop', loss='mse', metrics=['mae'])
###Output
_____no_output_____
###Markdown
We'll now train the new model. To save time, we'll train for only 600 epochs:
###Code
history_2 = model_2.fit(x_train, y_train, epochs=600, batch_size=16,
validation_data=(x_validate, y_validate))
###Output
Train on 600 samples, validate on 200 samples
Epoch 1/600
600/600 [==============================] - 0s 422us/sample - loss: 0.5655 - mae: 0.6259 - val_loss: 0.4104 - val_mae: 0.5509
Epoch 2/600
600/600 [==============================] - 0s 111us/sample - loss: 0.3195 - mae: 0.4902 - val_loss: 0.3341 - val_mae: 0.4927
...
Epoch 598/600
600/600 [==============================] - 0s 116us/sample - loss: 0.0124 - mae: 0.0886 - val_loss: 0.0096 - val_mae: 0.0771
Epoch 599/600
600/600 [==============================] - 0s 130us/sample - loss: 0.0125 - mae: 0.0900 - val_loss: 0.0107 - val_mae: 0.0824
Epoch 600/600
600/600 [==============================] - 0s 109us/sample - loss: 0.0124 - mae: 0.0892 - val_loss: 0.0116 - val_mae: 0.0845
###Markdown
Evaluate our new modelEach training epoch, the model prints out its loss and mean absolute error for training and validation. You can read this in the output above (note that your exact numbers may differ): ```Epoch 600/600600/600 [==============================] - 0s 109us/sample - loss: 0.0124 - mae: 0.0892 - val_loss: 0.0116 - val_mae: 0.0845```You can see that we've already got a huge improvement - validation loss has dropped from 0.15 to 0.015, and validation MAE has dropped from 0.31 to 0.1.The following cell will print the same graphs we used to evaluate our original model, but showing our new training history:
###Code
# Draw a graph of the loss, which is the distance between
# the predicted and actual values during training and validation.
loss = history_2.history['loss']
val_loss = history_2.history['val_loss']
epochs = range(1, len(loss) + 1)
plt.plot(epochs, loss, 'g.', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
# Exclude the first few epochs so the graph is easier to read
SKIP = 100
plt.clf()
plt.plot(epochs[SKIP:], loss[SKIP:], 'g.', label='Training loss')
plt.plot(epochs[SKIP:], val_loss[SKIP:], 'b.', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf()
# Draw a graph of mean absolute error, which is another way of
# measuring the amount of error in the prediction.
mae = history_2.history['mae']
val_mae = history_2.history['val_mae']
plt.plot(epochs[SKIP:], mae[SKIP:], 'g.', label='Training MAE')
plt.plot(epochs[SKIP:], val_mae[SKIP:], 'b.', label='Validation MAE')
plt.title('Training and validation mean absolute error')
plt.xlabel('Epochs')
plt.ylabel('MAE')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Great results! From these graphs, we can see several exciting things:* Our network has reached its peak accuracy much more quickly (within 200 epochs instead of 600)* The overall loss and MAE are much better than our previous network* Metrics are better for validation than training, which means the network is not overfittingThe reason the metrics for validation are better than those for training is that validation metrics are calculated at the end of each epoch, while training metrics are calculated throughout the epoch, so validation happens on a model that has been trained slightly longer.This all means our network seems to be performing well! To confirm, let's check its predictions against the test dataset we set aside earlier:
###Code
# Calculate and print the loss on our test dataset
loss = model_2.evaluate(x_test, y_test)
# Make predictions based on our test dataset
predictions = model_2.predict(x_test)
# Graph the predictions against the actual values
plt.clf()
plt.title('Comparison of predictions and actual values')
plt.plot(x_test, y_test, 'b.', label='Actual')
plt.plot(x_test, predictions, 'r.', label='Predicted')
plt.legend()
plt.show()
###Output
200/200 [==============================] - 0s 146us/sample - loss: 0.0124 - mae: 0.0907
###Markdown
Much better! The evaluation metrics we printed show that the model has a low loss and MAE on the test data, and the predictions line up visually with our data fairly well.The model isn't perfect; its predictions don't form a smooth sine curve. For instance, the line is almost straight when `x` is between 4.2 and 5.2. If we wanted to go further, we could try further increasing the capacity of the model, perhaps using some techniques to defend from overfitting.However, an important part of machine learning is knowing when to quit, and this model is good enough for our use case - which is to make some LEDs blink in a pleasing pattern. Convert to TensorFlow LiteWe now have an acceptably accurate model in-memory. However, to use this with TensorFlow Lite for Microcontrollers, we'll need to convert it into the correct format and download it as a file. To do this, we'll use the [TensorFlow Lite Converter](https://www.tensorflow.org/lite/convert). The converter outputs a file in a special, space-efficient format for use on memory-constrained devices.Since this model is going to be deployed on a microcontroller, we want it to be as tiny as possible! One technique for reducing the size of models is called [quantization](https://www.tensorflow.org/lite/performance/post_training_quantization). It reduces the precision of the model's weights, which saves memory, often without much impact on accuracy. Quantized models also run faster, since the calculations required are simpler.The TensorFlow Lite Converter can apply quantization while it converts the model. In the following cell, we'll convert the model twice: once with quantization, once without:
###Code
# Convert the model to the TensorFlow Lite format without quantization
converter = tf.lite.TFLiteConverter.from_keras_model(model_2)
tflite_model = converter.convert()
# Save the model to disk
open("sine_model.tflite", "wb").write(tflite_model)
# Convert the model to the TensorFlow Lite format with quantization
converter = tf.lite.TFLiteConverter.from_keras_model(model_2)
converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
tflite_model = converter.convert()
# Save the model to disk
open("sine_model_quantized.tflite", "wb").write(tflite_model)
###Output
_____no_output_____
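###Markdown
Note that `OPTIMIZE_FOR_SIZE` applies weight-only (dynamic range) quantization; newer TensorFlow releases expose this through `tf.lite.Optimize.DEFAULT`, and full integer quantization additionally requires a representative dataset. The cell below is only a sketch of that variant and is not used by the rest of this notebook.
###Code
# Sketch: full integer quantization with a representative dataset (not used below)
def representative_dataset():
    for value in x_train[:100]:
        # Each sample is a list of input tensors with a batch dimension
        yield [np.array([[value]], dtype=np.float32)]

int_converter = tf.lite.TFLiteConverter.from_keras_model(model_2)
int_converter.optimizations = [tf.lite.Optimize.DEFAULT]
int_converter.representative_dataset = representative_dataset
# int_tflite_model = int_converter.convert()
###Output
_____no_output_____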
###Markdown
Test the converted modelsTo prove these models are still accurate after conversion and quantization, we'll use both of them to make predictions and compare these against our test results:
###Code
# Instantiate an interpreter for each model
sine_model = tf.lite.Interpreter('sine_model.tflite')
sine_model_quantized = tf.lite.Interpreter('sine_model_quantized.tflite')
# Allocate memory for each model
sine_model.allocate_tensors()
sine_model_quantized.allocate_tensors()
# Get the input and output tensors so we can feed in values and get the results
sine_model_input = sine_model.tensor(sine_model.get_input_details()[0]["index"])
sine_model_output = sine_model.tensor(sine_model.get_output_details()[0]["index"])
sine_model_quantized_input = sine_model_quantized.tensor(sine_model_quantized.get_input_details()[0]["index"])
sine_model_quantized_output = sine_model_quantized.tensor(sine_model_quantized.get_output_details()[0]["index"])
# Create arrays to store the results
sine_model_predictions = np.empty(x_test.size)
sine_model_quantized_predictions = np.empty(x_test.size)
# Run each model's interpreter for each value and store the results in arrays
for i in range(x_test.size):
sine_model_input().fill(x_test[i])
sine_model.invoke()
sine_model_predictions[i] = sine_model_output()[0]
sine_model_quantized_input().fill(x_test[i])
sine_model_quantized.invoke()
sine_model_quantized_predictions[i] = sine_model_quantized_output()[0]
# See how they line up with the data
plt.clf()
plt.title('Comparison of various models against actual values')
plt.plot(x_test, y_test, 'bo', label='Actual')
plt.plot(x_test, predictions, 'ro', label='Original predictions')
plt.plot(x_test, sine_model_predictions, 'bx', label='Lite predictions')
plt.plot(x_test, sine_model_quantized_predictions, 'gx', label='Lite quantized predictions')
plt.legend()
plt.show()
###Output
_____no_output_____
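###Markdown
The cells above drive the interpreters through the lower-level `tensor()` accessors. For reference, the more commonly documented pattern uses `set_tensor` and `get_tensor`; the cell below is a small sketch of a single prediction done that way, reusing the quantized model file and `x_test` from above.
###Code
# Sketch: one prediction via the set_tensor / get_tensor pattern
interpreter = tf.lite.Interpreter('sine_model_quantized.tflite')
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

interpreter.set_tensor(input_details['index'],
                       np.array([[x_test[0]]], dtype=np.float32))
interpreter.invoke()
print(interpreter.get_tensor(output_details['index'])[0])
###Output
_____no_output_____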
###Markdown
We can see from the graph that the predictions for the original model, the converted model, and the quantized model are all close enough to be indistinguishable. This means that our quantized model is ready to use!We can print the difference in file size:
###Code
import os
basic_model_size = os.path.getsize("sine_model.tflite")
print("Basic model is %d bytes" % basic_model_size)
quantized_model_size = os.path.getsize("sine_model_quantized.tflite")
print("Quantized model is %d bytes" % quantized_model_size)
difference = basic_model_size - quantized_model_size
print("Difference is %d bytes" % difference)
###Output
Basic model is 2656 bytes
Quantized model is 2640 bytes
Difference is 16 bytes
###Markdown
Our quantized model is only 16 bytes smaller than the original version, which is only a tiny reduction in size! At around 2.6 kilobytes, this model is already so small that the weights make up only a small fraction of the overall size, meaning quantization has little effect.More complex models have many more weights, meaning the space saving from quantization will be much higher, approaching 4x for the most sophisticated models.Regardless, our quantized model will take less time to execute than the original version, which is important on a tiny microcontroller! Write to a C fileThe final step in preparing our model for use with TensorFlow Lite for Microcontrollers is to convert it into a C source file. You can see an example of this format in [`hello_world/sine_model_data.cc`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/micro/examples/hello_world/sine_model_data.cc).To do so, we can use a command-line utility named [`xxd`](https://linux.die.net/man/1/xxd). The following cell runs `xxd` on our quantized model and prints the output:
###Code
# Install xxd if it is not available
!apt-get -qq install xxd
# Save the file as a C source file
!xxd -i sine_model_quantized.tflite > sine_model_quantized.cc
# Print the source file
!cat sine_model_quantized.cc
###Output
unsigned char sine_model_quantized_tflite[] = {
0x18, 0x00, 0x00, 0x00, 0x54, 0x46, 0x4c, 0x33, 0x00, 0x00, 0x0e, 0x00,
0x18, 0x00, 0x04, 0x00, 0x08, 0x00, 0x0c, 0x00, 0x10, 0x00, 0x14, 0x00,
0x0e, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x10, 0x0a, 0x00, 0x00,
0xb8, 0x05, 0x00, 0x00, 0xa0, 0x05, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00,
0x0b, 0x00, 0x00, 0x00, 0x90, 0x05, 0x00, 0x00, 0x7c, 0x05, 0x00, 0x00,
0x24, 0x05, 0x00, 0x00, 0xd4, 0x04, 0x00, 0x00, 0xc4, 0x00, 0x00, 0x00,
0x74, 0x00, 0x00, 0x00, 0x24, 0x00, 0x00, 0x00, 0x1c, 0x00, 0x00, 0x00,
0x14, 0x00, 0x00, 0x00, 0x0c, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00,
0x54, 0xf6, 0xff, 0xff, 0x58, 0xf6, 0xff, 0xff, 0x5c, 0xf6, 0xff, 0xff,
0x60, 0xf6, 0xff, 0xff, 0xc2, 0xfa, 0xff, 0xff, 0x04, 0x00, 0x00, 0x00,
0x40, 0x00, 0x00, 0x00, 0x7c, 0x19, 0xa7, 0x3e, 0x99, 0x81, 0xb9, 0x3e,
0x56, 0x8b, 0x9f, 0x3e, 0x88, 0xd8, 0x12, 0xbf, 0x74, 0x10, 0x56, 0x3e,
0xfe, 0xc6, 0xdf, 0xbe, 0xf2, 0x10, 0x5a, 0xbe, 0xf0, 0xe2, 0x0a, 0xbe,
0x10, 0x5a, 0x98, 0xbe, 0xb9, 0x36, 0xce, 0x3d, 0x8f, 0x7f, 0x87, 0x3e,
0x2c, 0xb1, 0xfd, 0xbd, 0xe6, 0xa6, 0x8a, 0xbe, 0xa5, 0x3e, 0xda, 0x3e,
0x50, 0x34, 0xed, 0xbd, 0x90, 0x91, 0x69, 0xbe, 0x0e, 0xfb, 0xff, 0xff,
0x04, 0x00, 0x00, 0x00, 0x40, 0x00, 0x00, 0x00, 0x67, 0x41, 0x48, 0xbf,
0x24, 0xcd, 0xa0, 0xbe, 0xb7, 0x92, 0x0c, 0xbf, 0x00, 0x00, 0x00, 0x00,
0x98, 0xfe, 0x3c, 0x3f, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x4a, 0x17, 0x9a, 0xbe,
0x41, 0xcb, 0xb6, 0xbe, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x13, 0xd6, 0x1e, 0x3e, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x5a, 0xfb, 0xff, 0xff, 0x04, 0x00, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00,
0x4b, 0x98, 0xdd, 0xbd, 0x40, 0x6b, 0xcb, 0xbe, 0x36, 0x0c, 0xd4, 0x3c,
0xbd, 0x44, 0xb5, 0x3e, 0x95, 0x70, 0xe3, 0x3e, 0xe7, 0xac, 0x86, 0x3e,
0x00, 0xc4, 0x4e, 0x3d, 0x7e, 0xa6, 0x1d, 0x3e, 0xbd, 0x87, 0xbb, 0x3e,
0xb4, 0xb8, 0x09, 0xbf, 0xa1, 0x1f, 0xf8, 0xbe, 0x8d, 0x90, 0xdd, 0x3e,
0xde, 0xfa, 0x6f, 0xbe, 0xb2, 0x75, 0xe4, 0x3d, 0x6e, 0xfe, 0x36, 0x3e,
0x20, 0x18, 0xc2, 0xbe, 0x39, 0xc7, 0xfb, 0xbe, 0xfe, 0xa4, 0x30, 0xbe,
0xf7, 0x91, 0xde, 0xbe, 0xde, 0xab, 0x24, 0x3e, 0xfb, 0xbb, 0xce, 0x3e,
0xeb, 0x23, 0x80, 0xbe, 0x7b, 0x58, 0x73, 0xbe, 0x9a, 0x2e, 0x03, 0x3e,
0x10, 0x42, 0xa9, 0xbc, 0x10, 0x12, 0x64, 0xbd, 0xe3, 0x8d, 0x0c, 0x3d,
0x9e, 0x48, 0x97, 0xbe, 0x34, 0x51, 0xd4, 0xbe, 0x02, 0x3b, 0x0d, 0x3e,
0x62, 0x67, 0x89, 0xbe, 0x74, 0xdf, 0xa2, 0x3d, 0xf3, 0x25, 0xb3, 0xbe,
0xef, 0x34, 0x7b, 0x3d, 0x61, 0x70, 0xe3, 0x3d, 0xba, 0x76, 0xc0, 0xbe,
0x7d, 0xe9, 0xa7, 0x3e, 0xc3, 0xab, 0xd0, 0xbe, 0xcf, 0x7c, 0xdb, 0xbe,
0x70, 0x27, 0x9a, 0xbe, 0x98, 0xf5, 0x3c, 0xbd, 0xff, 0x4b, 0x4b, 0x3e,
0x7e, 0xa0, 0xf8, 0xbd, 0xd4, 0x6e, 0x86, 0x3d, 0x00, 0x4a, 0x07, 0x3a,
0x4c, 0x24, 0x61, 0xbe, 0x54, 0x68, 0xf7, 0xbd, 0x02, 0x3f, 0x77, 0xbe,
0x23, 0x79, 0xb3, 0x3e, 0x1c, 0x83, 0xad, 0xbd, 0xc8, 0x92, 0x8d, 0x3e,
0xa8, 0xf3, 0x15, 0xbd, 0xe6, 0x4d, 0x6c, 0x3d, 0xac, 0xe7, 0x98, 0xbe,
0x81, 0xec, 0xbd, 0x3e, 0xe2, 0x55, 0x73, 0x3e, 0xc1, 0x77, 0xc7, 0x3e,
0x6e, 0x1b, 0x5e, 0x3d, 0x27, 0x78, 0x02, 0x3f, 0xd4, 0x21, 0x90, 0x3d,
0x52, 0xdc, 0x1f, 0x3e, 0xbf, 0xda, 0x88, 0x3e, 0x80, 0x79, 0xe3, 0xbd,
0x40, 0x6f, 0x10, 0xbe, 0x20, 0x43, 0x2e, 0xbd, 0xf0, 0x76, 0xc5, 0xbd,
0xcc, 0xa0, 0x04, 0xbe, 0xf0, 0x69, 0xd7, 0xbe, 0xb1, 0xfe, 0x64, 0xbe,
0x20, 0x41, 0x84, 0xbe, 0xb2, 0xc3, 0x26, 0xbe, 0xd8, 0xf4, 0x09, 0xbe,
0x64, 0x44, 0xd1, 0x3d, 0xd5, 0xe1, 0xc8, 0xbe, 0x35, 0xbc, 0x3f, 0xbe,
0xc0, 0x94, 0x82, 0x3d, 0xdc, 0x2b, 0xb1, 0xbd, 0x02, 0xdb, 0xbf, 0xbe,
0xa5, 0x7f, 0x8a, 0x3e, 0x21, 0xb4, 0xa2, 0x3e, 0xcd, 0x86, 0x56, 0xbf,
0x9c, 0x3b, 0x76, 0xbc, 0x85, 0x6d, 0x60, 0xbf, 0x86, 0x00, 0x3c, 0xbe,
0xc1, 0x23, 0x7e, 0x3e, 0x96, 0xcd, 0x3f, 0x3e, 0x86, 0x91, 0x2d, 0x3e,
0x55, 0xef, 0x87, 0x3e, 0x7e, 0x97, 0x03, 0xbe, 0x2a, 0xcd, 0x01, 0x3e,
0x32, 0xc9, 0x8e, 0xbe, 0x72, 0x77, 0x3b, 0xbe, 0xe0, 0xa1, 0xbc, 0xbe,
0x8d, 0xb7, 0xa7, 0x3e, 0x1c, 0x05, 0x95, 0xbe, 0xf7, 0x1f, 0xbb, 0x3e,
0xc9, 0x3e, 0xd6, 0x3e, 0x80, 0x42, 0xe9, 0xbd, 0x27, 0x0c, 0xd2, 0xbe,
0x5c, 0x32, 0x34, 0xbe, 0x14, 0xcb, 0xca, 0xbd, 0xdd, 0x3a, 0x67, 0xbe,
0x1c, 0xbb, 0x8d, 0xbe, 0x91, 0xac, 0x5c, 0xbe, 0x52, 0x40, 0x6f, 0xbe,
0xd7, 0x71, 0x94, 0x3e, 0x18, 0x71, 0x09, 0xbe, 0x9b, 0x29, 0xd9, 0xbe,
0x7d, 0x66, 0xd2, 0xbe, 0x98, 0xd6, 0xb2, 0xbe, 0x00, 0xc9, 0x84, 0x3a,
0xbc, 0xda, 0xc2, 0xbd, 0x1d, 0xc2, 0x1b, 0xbf, 0xd4, 0xdd, 0x92, 0x3e,
0x07, 0x87, 0x6c, 0xbe, 0x40, 0xc2, 0x3b, 0xbe, 0xbd, 0xe2, 0x9c, 0x3e,
0x0a, 0xb5, 0xa0, 0xbe, 0xe2, 0xd5, 0x9c, 0xbe, 0x3e, 0xbb, 0x7c, 0x3e,
0x17, 0xb4, 0xcf, 0x3e, 0xd5, 0x8e, 0xc8, 0xbe, 0x7c, 0xf9, 0x5c, 0x3e,
0x80, 0xfc, 0x0d, 0x3d, 0xc5, 0xd5, 0x8b, 0x3e, 0xf5, 0x17, 0xa2, 0x3e,
0xc7, 0x60, 0x89, 0xbe, 0xec, 0x95, 0x87, 0x3d, 0x7a, 0xc2, 0x5d, 0xbf,
0x77, 0x94, 0x98, 0x3e, 0x77, 0x39, 0x07, 0xbc, 0x42, 0x29, 0x00, 0x3e,
0xaf, 0xd0, 0xa9, 0x3e, 0x31, 0x23, 0xc4, 0xbe, 0x95, 0x36, 0x5b, 0xbe,
0xc7, 0xdc, 0x83, 0xbe, 0x1e, 0x6b, 0x47, 0x3e, 0x5b, 0x24, 0x99, 0x3e,
0x99, 0x27, 0x54, 0x3e, 0xc8, 0x20, 0xdd, 0xbd, 0x5a, 0x86, 0x2f, 0x3e,
0x80, 0xf0, 0x69, 0xbe, 0x44, 0xfc, 0x84, 0xbd, 0x82, 0xa0, 0x2a, 0xbe,
0x87, 0xe6, 0x2a, 0x3e, 0xd8, 0x34, 0xae, 0x3d, 0x50, 0xbd, 0xb5, 0x3e,
0xc4, 0x8c, 0x88, 0xbe, 0xe3, 0xbc, 0xa5, 0x3e, 0xa9, 0xda, 0x9e, 0x3e,
0x3e, 0xb8, 0x23, 0xbe, 0x80, 0x90, 0x15, 0x3d, 0x97, 0x3f, 0xc3, 0x3e,
0xca, 0x5c, 0x9d, 0x3e, 0x21, 0xe8, 0xe1, 0x3e, 0xc0, 0x49, 0x01, 0xbc,
0x00, 0x0b, 0x88, 0xbd, 0x3f, 0xf7, 0xca, 0x3c, 0xfb, 0x5a, 0xb1, 0x3e,
0x60, 0xd2, 0x0d, 0x3c, 0xce, 0x23, 0x78, 0xbf, 0x8f, 0x4f, 0xb9, 0xbe,
0x69, 0x6a, 0x34, 0xbf, 0x4b, 0x5e, 0xa9, 0x3e, 0x64, 0x8c, 0xd9, 0x3e,
0x52, 0x77, 0x36, 0x3e, 0xeb, 0xaf, 0xbe, 0x3e, 0x40, 0xbe, 0x36, 0x3c,
0x08, 0x65, 0x3b, 0xbd, 0x55, 0xe0, 0x66, 0xbd, 0xd2, 0xe8, 0x9b, 0xbe,
0x86, 0xe3, 0x09, 0xbe, 0x93, 0x3d, 0xdd, 0x3e, 0x0f, 0x66, 0x18, 0x3f,
0x18, 0x05, 0x33, 0xbd, 0xde, 0x15, 0xd7, 0xbe, 0xaa, 0xcf, 0x49, 0xbe,
0xa2, 0xa5, 0x64, 0x3e, 0xe6, 0x9c, 0x42, 0xbe, 0x54, 0x42, 0xcc, 0x3d,
0xa0, 0xbd, 0x9d, 0xbe, 0xc2, 0x69, 0x48, 0x3e, 0x5b, 0x8b, 0xa2, 0xbe,
0xc0, 0x13, 0x87, 0x3d, 0x36, 0xfd, 0x69, 0x3e, 0x05, 0x86, 0x40, 0xbe,
0x1e, 0x7a, 0xce, 0xbe, 0x46, 0x13, 0xa7, 0xbe, 0x68, 0x52, 0x86, 0xbe,
0x04, 0x9e, 0x86, 0xbd, 0x8c, 0x54, 0xc1, 0x3d, 0xe0, 0x3b, 0xad, 0x3c,
0x42, 0x67, 0x85, 0xbd, 0xea, 0x97, 0x42, 0x3e, 0x6e, 0x13, 0x3b, 0xbf,
0x56, 0x5b, 0x16, 0x3e, 0xaa, 0xab, 0xdf, 0x3e, 0xc8, 0x41, 0x36, 0x3d,
0x24, 0x2d, 0x47, 0xbe, 0x77, 0xa5, 0xae, 0x3e, 0xc0, 0xc2, 0x5b, 0x3c,
0xac, 0xac, 0x4e, 0x3e, 0x99, 0xec, 0x13, 0xbe, 0xf2, 0xab, 0x73, 0x3e,
0xaa, 0xa1, 0x48, 0xbe, 0xe8, 0xd3, 0x01, 0xbe, 0x60, 0xb7, 0xc7, 0xbd,
0x64, 0x72, 0xd3, 0x3d, 0x83, 0xd3, 0x99, 0x3e, 0x0c, 0x76, 0x34, 0xbe,
0x42, 0xda, 0x0d, 0x3e, 0xfb, 0x47, 0x9a, 0x3e, 0x8b, 0xdc, 0x92, 0xbe,
0x56, 0x7f, 0x6b, 0x3e, 0x04, 0xd4, 0x88, 0xbd, 0x11, 0x9e, 0x80, 0x3e,
0x3c, 0x89, 0xff, 0x3d, 0xb3, 0x3e, 0x88, 0x3e, 0xf7, 0xf0, 0x88, 0x3e,
0x28, 0xfb, 0xc9, 0xbe, 0x53, 0x3e, 0xcf, 0x3e, 0xac, 0x75, 0xdc, 0xbe,
0xdd, 0xca, 0xd7, 0x3e, 0x01, 0x58, 0xa7, 0x3e, 0x29, 0xb8, 0x13, 0xbf,
0x76, 0x81, 0x12, 0xbc, 0x28, 0x8b, 0x16, 0xbf, 0x0e, 0xec, 0x0e, 0x3e,
0x40, 0x0a, 0xdb, 0xbd, 0x98, 0xec, 0xbf, 0xbd, 0x32, 0x55, 0x0c, 0xbe,
0xfb, 0xf9, 0xc9, 0x3e, 0x83, 0x4a, 0x6d, 0xbe, 0x76, 0x59, 0xe2, 0xbe,
0x54, 0x7d, 0x9f, 0xbb, 0x9d, 0xe8, 0x95, 0x3e, 0x5c, 0xd3, 0xd0, 0x3d,
0x19, 0x8a, 0xb0, 0x3e, 0xde, 0x6f, 0x2e, 0xbe, 0xd0, 0x16, 0x83, 0x3d,
0x9c, 0x7d, 0x11, 0xbf, 0x2b, 0xcc, 0x25, 0x3c, 0x2a, 0xa5, 0x27, 0xbe,
0x22, 0x14, 0xc7, 0xbe, 0x5e, 0x7a, 0xac, 0x3e, 0x4e, 0x41, 0x94, 0xbe,
0x5a, 0x68, 0x7b, 0x3e, 0x86, 0xfd, 0x4e, 0x3e, 0xa2, 0x56, 0x6a, 0xbe,
0xca, 0xfe, 0x81, 0xbe, 0x43, 0xc3, 0xb1, 0xbd, 0xc5, 0xb8, 0xa7, 0x3e,
0x55, 0x23, 0xcd, 0x3e, 0xaf, 0x2e, 0x76, 0x3e, 0x69, 0xa8, 0x90, 0xbe,
0x0d, 0xba, 0xb9, 0x3e, 0x66, 0xff, 0xff, 0xff, 0x04, 0x00, 0x00, 0x00,
0x40, 0x00, 0x00, 0x00, 0x53, 0xd6, 0xe2, 0x3d, 0x66, 0xb6, 0xcc, 0x3e,
0x03, 0xe7, 0xf6, 0x3e, 0xe0, 0x28, 0x10, 0xbf, 0x00, 0x00, 0x00, 0x00,
0x3e, 0x3d, 0xb0, 0x3e, 0x00, 0x00, 0x00, 0x00, 0x62, 0xf0, 0x77, 0x3e,
0xa6, 0x9d, 0xa4, 0x3e, 0x3a, 0x4b, 0xf3, 0xbe, 0x71, 0x9e, 0xa7, 0x3e,
0x00, 0x00, 0x00, 0x00, 0x34, 0x39, 0xa2, 0x3e, 0x00, 0x00, 0x00, 0x00,
0xcc, 0x9c, 0x4a, 0x3e, 0xab, 0x40, 0xa3, 0x3e, 0xb2, 0xff, 0xff, 0xff,
0x04, 0x00, 0x00, 0x00, 0x40, 0x00, 0x00, 0x00, 0xb3, 0x71, 0x67, 0x3f,
0x9a, 0x7a, 0x95, 0xbf, 0xe1, 0x48, 0xe8, 0xbe, 0x8a, 0x72, 0x96, 0x3e,
0x00, 0xd2, 0xd3, 0xbb, 0x1a, 0xc5, 0xd7, 0x3f, 0xac, 0x7e, 0xc8, 0xbe,
0x90, 0xa7, 0x95, 0xbe, 0x3b, 0xd7, 0xdc, 0xbe, 0x41, 0xa8, 0x16, 0x3f,
0x50, 0x5b, 0xcb, 0x3f, 0x52, 0xb9, 0xed, 0xbe, 0x2e, 0xa7, 0xc6, 0xbe,
0xaf, 0x0f, 0x14, 0xbf, 0xb3, 0xda, 0x59, 0x3f, 0x02, 0xec, 0xd7, 0xbe,
0x00, 0x00, 0x06, 0x00, 0x08, 0x00, 0x04, 0x00, 0x06, 0x00, 0x00, 0x00,
0x04, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0x66, 0x11, 0x1f, 0xbf,
0xb8, 0xfb, 0xff, 0xff, 0x0f, 0x00, 0x00, 0x00, 0x54, 0x4f, 0x43, 0x4f,
0x20, 0x43, 0x6f, 0x6e, 0x76, 0x65, 0x72, 0x74, 0x65, 0x64, 0x2e, 0x00,
0x01, 0x00, 0x00, 0x00, 0x10, 0x00, 0x00, 0x00, 0x0c, 0x00, 0x14, 0x00,
0x04, 0x00, 0x08, 0x00, 0x0c, 0x00, 0x10, 0x00, 0x0c, 0x00, 0x00, 0x00,
0xf0, 0x00, 0x00, 0x00, 0xe4, 0x00, 0x00, 0x00, 0xd8, 0x00, 0x00, 0x00,
0x04, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x90, 0x00, 0x00, 0x00,
0x48, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0xce, 0xff, 0xff, 0xff,
0x00, 0x00, 0x00, 0x08, 0x18, 0x00, 0x00, 0x00, 0x0c, 0x00, 0x00, 0x00,
0x04, 0x00, 0x00, 0x00, 0x1c, 0xfc, 0xff, 0xff, 0x01, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00,
0x08, 0x00, 0x00, 0x00, 0x09, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0e, 0x00,
0x14, 0x00, 0x00, 0x00, 0x08, 0x00, 0x0c, 0x00, 0x07, 0x00, 0x10, 0x00,
0x0e, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x08, 0x1c, 0x00, 0x00, 0x00,
0x10, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0xba, 0xff, 0xff, 0xff,
0x00, 0x00, 0x00, 0x01, 0x01, 0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00,
0x03, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0x05, 0x00, 0x00, 0x00,
0x06, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0e, 0x00, 0x16, 0x00, 0x00, 0x00,
0x08, 0x00, 0x0c, 0x00, 0x07, 0x00, 0x10, 0x00, 0x0e, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x08, 0x24, 0x00, 0x00, 0x00, 0x18, 0x00, 0x00, 0x00,
0x0c, 0x00, 0x00, 0x00, 0x00, 0x00, 0x06, 0x00, 0x08, 0x00, 0x07, 0x00,
0x06, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x01, 0x00, 0x00, 0x00,
0x04, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00,
0x02, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00,
0x0a, 0x00, 0x00, 0x00, 0x10, 0x03, 0x00, 0x00, 0xa4, 0x02, 0x00, 0x00,
0x40, 0x02, 0x00, 0x00, 0xf4, 0x01, 0x00, 0x00, 0xac, 0x01, 0x00, 0x00,
0x48, 0x01, 0x00, 0x00, 0xfc, 0x00, 0x00, 0x00, 0xb4, 0x00, 0x00, 0x00,
0x50, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0x26, 0xfd, 0xff, 0xff,
0x3c, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x0c, 0x00, 0x00, 0x00,
0x04, 0x00, 0x00, 0x00, 0x18, 0xfd, 0xff, 0xff, 0x20, 0x00, 0x00, 0x00,
0x73, 0x65, 0x71, 0x75, 0x65, 0x6e, 0x74, 0x69, 0x61, 0x6c, 0x5f, 0x31,
0x2f, 0x64, 0x65, 0x6e, 0x73, 0x65, 0x5f, 0x34, 0x2f, 0x4d, 0x61, 0x74,
0x4d, 0x75, 0x6c, 0x5f, 0x62, 0x69, 0x61, 0x73, 0x00, 0x00, 0x00, 0x00,
0x01, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x6e, 0xfd, 0xff, 0xff,
0x50, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x0c, 0x00, 0x00, 0x00,
0x04, 0x00, 0x00, 0x00, 0x60, 0xfd, 0xff, 0xff, 0x34, 0x00, 0x00, 0x00,
0x73, 0x65, 0x71, 0x75, 0x65, 0x6e, 0x74, 0x69, 0x61, 0x6c, 0x5f, 0x31,
0x2f, 0x64, 0x65, 0x6e, 0x73, 0x65, 0x5f, 0x34, 0x2f, 0x4d, 0x61, 0x74,
0x4d, 0x75, 0x6c, 0x2f, 0x52, 0x65, 0x61, 0x64, 0x56, 0x61, 0x72, 0x69,
0x61, 0x62, 0x6c, 0x65, 0x4f, 0x70, 0x2f, 0x74, 0x72, 0x61, 0x6e, 0x73,
0x70, 0x6f, 0x73, 0x65, 0x00, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00,
0x01, 0x00, 0x00, 0x00, 0x10, 0x00, 0x00, 0x00, 0xce, 0xfd, 0xff, 0xff,
0x34, 0x00, 0x00, 0x00, 0x08, 0x00, 0x00, 0x00, 0x0c, 0x00, 0x00, 0x00,
0x04, 0x00, 0x00, 0x00, 0xc0, 0xfd, 0xff, 0xff, 0x19, 0x00, 0x00, 0x00,
0x73, 0x65, 0x71, 0x75, 0x65, 0x6e, 0x74, 0x69, 0x61, 0x6c, 0x5f, 0x31,
0x2f, 0x64, 0x65, 0x6e, 0x73, 0x65, 0x5f, 0x33, 0x2f, 0x52, 0x65, 0x6c,
0x75, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00,
0x10, 0x00, 0x00, 0x00, 0x12, 0xfe, 0xff, 0xff, 0x3c, 0x00, 0x00, 0x00,
0x03, 0x00, 0x00, 0x00, 0x0c, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00,
0x04, 0xfe, 0xff, 0xff, 0x20, 0x00, 0x00, 0x00, 0x73, 0x65, 0x71, 0x75,
0x65, 0x6e, 0x74, 0x69, 0x61, 0x6c, 0x5f, 0x31, 0x2f, 0x64, 0x65, 0x6e,
0x73, 0x65, 0x5f, 0x33, 0x2f, 0x4d, 0x61, 0x74, 0x4d, 0x75, 0x6c, 0x5f,
0x62, 0x69, 0x61, 0x73, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00,
0x10, 0x00, 0x00, 0x00, 0x5a, 0xfe, 0xff, 0xff, 0x50, 0x00, 0x00, 0x00,
0x04, 0x00, 0x00, 0x00, 0x0c, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00,
0x4c, 0xfe, 0xff, 0xff, 0x34, 0x00, 0x00, 0x00, 0x73, 0x65, 0x71, 0x75,
0x65, 0x6e, 0x74, 0x69, 0x61, 0x6c, 0x5f, 0x31, 0x2f, 0x64, 0x65, 0x6e,
0x73, 0x65, 0x5f, 0x33, 0x2f, 0x4d, 0x61, 0x74, 0x4d, 0x75, 0x6c, 0x2f,
0x52, 0x65, 0x61, 0x64, 0x56, 0x61, 0x72, 0x69, 0x61, 0x62, 0x6c, 0x65,
0x4f, 0x70, 0x2f, 0x74, 0x72, 0x61, 0x6e, 0x73, 0x70, 0x6f, 0x73, 0x65,
0x00, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x10, 0x00, 0x00, 0x00,
0x10, 0x00, 0x00, 0x00, 0xba, 0xfe, 0xff, 0xff, 0x34, 0x00, 0x00, 0x00,
0x0a, 0x00, 0x00, 0x00, 0x0c, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00,
0xac, 0xfe, 0xff, 0xff, 0x19, 0x00, 0x00, 0x00, 0x73, 0x65, 0x71, 0x75,
0x65, 0x6e, 0x74, 0x69, 0x61, 0x6c, 0x5f, 0x31, 0x2f, 0x64, 0x65, 0x6e,
0x73, 0x65, 0x5f, 0x32, 0x2f, 0x52, 0x65, 0x6c, 0x75, 0x00, 0x00, 0x00,
0x02, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x10, 0x00, 0x00, 0x00,
0xfe, 0xfe, 0xff, 0xff, 0x3c, 0x00, 0x00, 0x00, 0x05, 0x00, 0x00, 0x00,
0x0c, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0xf0, 0xfe, 0xff, 0xff,
0x20, 0x00, 0x00, 0x00, 0x73, 0x65, 0x71, 0x75, 0x65, 0x6e, 0x74, 0x69,
0x61, 0x6c, 0x5f, 0x31, 0x2f, 0x64, 0x65, 0x6e, 0x73, 0x65, 0x5f, 0x32,
0x2f, 0x4d, 0x61, 0x74, 0x4d, 0x75, 0x6c, 0x5f, 0x62, 0x69, 0x61, 0x73,
0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x10, 0x00, 0x00, 0x00,
0x46, 0xff, 0xff, 0xff, 0x50, 0x00, 0x00, 0x00, 0x06, 0x00, 0x00, 0x00,
0x0c, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0x38, 0xff, 0xff, 0xff,
0x34, 0x00, 0x00, 0x00, 0x73, 0x65, 0x71, 0x75, 0x65, 0x6e, 0x74, 0x69,
0x61, 0x6c, 0x5f, 0x31, 0x2f, 0x64, 0x65, 0x6e, 0x73, 0x65, 0x5f, 0x32,
0x2f, 0x4d, 0x61, 0x74, 0x4d, 0x75, 0x6c, 0x2f, 0x52, 0x65, 0x61, 0x64,
0x56, 0x61, 0x72, 0x69, 0x61, 0x62, 0x6c, 0x65, 0x4f, 0x70, 0x2f, 0x74,
0x72, 0x61, 0x6e, 0x73, 0x70, 0x6f, 0x73, 0x65, 0x00, 0x00, 0x00, 0x00,
0x02, 0x00, 0x00, 0x00, 0x10, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00,
0xa6, 0xff, 0xff, 0xff, 0x48, 0x00, 0x00, 0x00, 0x09, 0x00, 0x00, 0x00,
0x2c, 0x00, 0x00, 0x00, 0x0c, 0x00, 0x00, 0x00, 0x08, 0x00, 0x0c, 0x00,
0x04, 0x00, 0x08, 0x00, 0x08, 0x00, 0x00, 0x00, 0x10, 0x00, 0x00, 0x00,
0x04, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x7f, 0x43,
0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0d, 0x00, 0x00, 0x00,
0x64, 0x65, 0x6e, 0x73, 0x65, 0x5f, 0x32, 0x5f, 0x69, 0x6e, 0x70, 0x75,
0x74, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00,
0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0e, 0x00, 0x14, 0x00, 0x04, 0x00,
0x00, 0x00, 0x08, 0x00, 0x0c, 0x00, 0x10, 0x00, 0x0e, 0x00, 0x00, 0x00,
0x28, 0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x10, 0x00, 0x00, 0x00,
0x08, 0x00, 0x00, 0x00, 0x04, 0x00, 0x04, 0x00, 0x04, 0x00, 0x00, 0x00,
0x08, 0x00, 0x00, 0x00, 0x49, 0x64, 0x65, 0x6e, 0x74, 0x69, 0x74, 0x79,
0x00, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00,
0x01, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x10, 0x00, 0x00, 0x00,
0x00, 0x00, 0x0a, 0x00, 0x0c, 0x00, 0x07, 0x00, 0x00, 0x00, 0x08, 0x00,
0x0a, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x09, 0x03, 0x00, 0x00, 0x00
};
unsigned int sine_model_quantized_tflite_len = 2640;
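###Markdown
If `xxd` is not available, roughly the same C array can be generated in plain Python. The cell below is a sketch of that approach; its formatting differs slightly from `xxd -i`, but the resulting array and length variable are equivalent.
###Code
# Sketch: generate the C source without xxd
with open("sine_model_quantized.tflite", "rb") as f:
    model_bytes = f.read()

hex_lines = []
for i in range(0, len(model_bytes), 12):
    chunk = ", ".join("0x%02x" % b for b in model_bytes[i:i + 12])
    hex_lines.append("  " + chunk + ",")

c_source = ("unsigned char sine_model_quantized_tflite[] = {\n" +
            "\n".join(hex_lines) +
            "\n};\nunsigned int sine_model_quantized_tflite_len = %d;\n" % len(model_bytes))

with open("sine_model_quantized.cc", "w") as f:
    f.write(c_source)
###Output
_____no_output_____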
|
course/preprocessing/03_lemmatization.ipynb | ###Markdown
LemmatizationLemmatization is very similar to stemming in that it reduces a set of inflected words down to a common word. The difference is that lemmatization reduces inflections down to their real root word, which is called a lemma. If we take the words *'amaze'*, *'amazing'*, *'amazingly'*, the lemma of all of these is *'amaze'*, whereas stemming would usually return *'amaz'*. Generally, lemmatization is seen as more advanced than stemming.
###Code
words = ['amaze', 'amazed', 'amazing']
###Output
_____no_output_____
###Markdown
We will use NLTK again for our lemmatization. We also need to ensure we have the *WordNet Database* downloaded, which will act as the lookup for our lemmatizer to ensure that it has produced a real lemma.
###Code
import nltk
nltk.download('wordnet')
from nltk.stem import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()
[lemmatizer.lemmatize(word) for word in words]
###Output
[nltk_data] Downloading package wordnet to
[nltk_data] /Users/adenevreze/nltk_data...
[nltk_data] Unzipping corpora/wordnet.zip.
###Markdown
Clearly nothing has happened, and that is because lemmatization requires that we also provide the *parts-of-speech* (POS) tag, which is the category of a word based on syntax, for example noun, adjective, or verb. In our case we could treat each word as a verb, which we can then implement like so:
###Code
from nltk.corpus import wordnet
[lemmatizer.lemmatize(word, wordnet.VERB) for word in words]
###Output
_____no_output_____
###Markdown
LemmatizationLemmatization is very similar to stemming in that it reduces a set of inflected words down to a common word. The difference is that lemmatization reduces inflections down to their real root word, which is called a lemma. If we take the words *'amaze'*, *'amazing'*, *'amazingly'*, the lemma of all of these is *'amaze'*, whereas stemming would usually return *'amaz'*. Generally, lemmatization is seen as more advanced than stemming.
###Code
words = ["amaze", "amazed", "amazing"]
###Output
_____no_output_____
###Markdown
We will use NLTK again for our lemmatization. We also need to ensure we have the *WordNet Database* downloaded, which will act as the lookup for our lemmatizer to ensure that it has produced a real lemma.
###Code
import nltk
nltk.download("wordnet")
from nltk.stem import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()
[lemmatizer.lemmatize(word) for word in words]
###Output
[nltk_data] Downloading package wordnet to
[nltk_data] C:\Users\James\AppData\Roaming\nltk_data...
[nltk_data] Package wordnet is already up-to-date!
###Markdown
Clearly nothing has happened, and that is because lemmatization requires that we also provide the *parts-of-speech* (POS) tag, which is the category of a word based on syntax, for example noun, adjective, or verb. In our case we could treat each word as a verb, which we can then implement like so:
###Code
from nltk.corpus import wordnet
[lemmatizer.lemmatize(word, wordnet.VERB) for word in words]
###Output
_____no_output_____
###Markdown
LemmatizationLemmatization is very similar to stemming in that it reduces a set of inflected words down to a common word. The difference is that lemmatization reduces inflections down to their real root word, which is called a lemma. If we take the words *'amaze'*, *'amazing'*, *'amazingly'*, the lemma of all of these is *'amaze'*, whereas stemming would usually return *'amaz'*. Generally, lemmatization is seen as more advanced than stemming.
###Code
words = ['amaze', 'amazed', 'amazing']
###Output
_____no_output_____
###Markdown
We will use NLTK again for our lemmatization. We also need to ensure we have the *WordNet Database* downloaded, which will act as the lookup for our lemmatizer to ensure that it has produced a real lemma.
###Code
import nltk
nltk.download('wordnet')
from nltk.stem import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()
[lemmatizer.lemmatize(word) for word in words]
###Output
[nltk_data] Downloading package wordnet to
[nltk_data] C:\Users\James\AppData\Roaming\nltk_data...
[nltk_data] Package wordnet is already up-to-date!
###Markdown
Clearly nothing has happened, and that is because lemmatization requires that we also provide the *parts-of-speech* (POS) tag, which is the category of a word based on syntax, for example noun, adjective, or verb. In our case we could treat each word as a verb, which we can then implement like so:
###Code
from nltk.corpus import wordnet
[lemmatizer.lemmatize(word, wordnet.VERB) for word in words]
###Output
_____no_output_____ |
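###Markdown
Hard-coding the verb tag works for this toy example, but in practice the POS tag is usually derived automatically. The cell below is a sketch of the common pattern that maps NLTK's Penn Treebank tags onto WordNet's categories; it downloads the `averaged_perceptron_tagger` resource it needs, and note that tagging isolated words without sentence context may not always yield the verb tag.
###Code
nltk.download('averaged_perceptron_tagger')

def wordnet_pos(treebank_tag):
    # Map Penn Treebank tags (JJ*, VB*, RB*, NN*, ...) onto WordNet's categories
    if treebank_tag.startswith('J'):
        return wordnet.ADJ
    if treebank_tag.startswith('V'):
        return wordnet.VERB
    if treebank_tag.startswith('R'):
        return wordnet.ADV
    return wordnet.NOUN

tagged = nltk.pos_tag(words)
[lemmatizer.lemmatize(word, wordnet_pos(tag)) for word, tag in tagged]
###Output
_____no_output_____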
examples/S3 example.ipynb | ###Markdown
Saving Profiles to S3 ---
###Code
from whylogs import get_or_create_session
import pandas as pd
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Create a mock S3 server For this example we will create a fake S3 server using the moto library. You should remove this section if you have your own bucket set up on AWS. Make sure you have your AWS configuration set. By default this mock server creates a server in region "us-east-1"
###Code
BUCKET="super_awesome_bucket"
from moto import mock_s3
from moto.s3.responses import DEFAULT_REGION_NAME
import boto3
mocks3 = mock_s3()
mocks3.start()
res = boto3.resource('s3', region_name=DEFAULT_REGION_NAME)
res.create_bucket(Bucket=BUCKET)
###Output
_____no_output_____
###Markdown
Load Data We can go about this in our usual way and load an example CSV dataset
###Code
df = pd.read_csv("data/lending_club_1000.csv")
###Output
_____no_output_____
###Markdown
Config File Example---Setting up whylogs to save your data to S3 can be done in several ways. The simplest is to create a config file where each data format is saved to a specific location, as shown below:
###Code
CONFIG = """
project: s3_example_project
pipeline: latest_results
verbose: false
writers:
- formats:
- protobuf
output_path: s3://super_awesome_bucket/
path_template: $name/dataset_summary
filename_template: dataset_summary
type: s3
- formats:
- flat
output_path: s3://super_awesome_bucket/
path_template: $name/dataset_summary
filename_template: dataset_summary
type: s3
- formats:
- json
output_path: s3://super_awesome_bucket/
path_template: $name/dataset_summary
filename_template: dataset_summary
type: s3
"""
config_path=".whylogs_s3.yaml"
with open(".whylogs_s3.yaml","w") as file:
file.write(CONFIG)
###Output
_____no_output_____
###Markdown
Checking the content:
###Code
%cat .whylogs_s3.yaml
###Output
project: s3_example_project
pipeline: latest_results
verbose: false
writers:
- formats:
- protobuf
output_path: s3://super_awesome_bucket/
path_template: $name/dataset_summary
filename_template: dataset_summary
type: s3
- formats:
- flat
output_path: s3://super_awesome_bucket/
path_template: $name/dataset_summary
filename_template: dataset_summary
type: s3
- formats:
- json
output_path: s3://super_awesome_bucket/
path_template: $name/dataset_summary
filename_template: dataset_summary
type: s3
###Markdown
If you have a custom name for your config file, or you place it in a special location, you can use the helper functions shown below:
###Code
from whylogs.app.session import load_config, session_from_config
config = load_config(".whylogs_s3.yaml")
session = session_from_config(config)
print(session.get_config().to_yaml())
###Output
metadata: null
pipeline: latest_results
project: s3_example_project
verbose: false
with_rotation_time: null
writers:
- filename_template: <string.Template object at 0x14e917fa0>
formats:
- OutputFormat.protobuf
output_path: s3://super_awesome_bucket/
path_template: <string.Template object at 0x14d5a1c70>
- filename_template: <string.Template object at 0x14e9683d0>
formats:
- OutputFormat.flat
output_path: s3://super_awesome_bucket/
path_template: <string.Template object at 0x14e968220>
- filename_template: <string.Template object at 0x14e968580>
formats:
- OutputFormat.json
output_path: s3://super_awesome_bucket/
path_template: <string.Template object at 0x14e968490>
###Markdown
Otherwise, if the file is located in your home directory or in the directory you are currently running from, you can simply run `get_or_create_session()`
###Code
session= get_or_create_session()
print(session.get_config().to_yaml())
###Output
metadata: null
pipeline: latest_results
project: s3_example_project
verbose: false
with_rotation_time: null
writers:
- filename_template: <string.Template object at 0x14e917d30>
formats:
- OutputFormat.protobuf
output_path: s3://super_awesome_bucket/
path_template: <string.Template object at 0x14e917c10>
- filename_template: <string.Template object at 0x14e968d00>
formats:
- OutputFormat.flat
output_path: s3://super_awesome_bucket/
path_template: <string.Template object at 0x14e968e20>
- filename_template: <string.Template object at 0x14e968e80>
formats:
- OutputFormat.json
output_path: s3://super_awesome_bucket/
path_template: <string.Template object at 0x14e968fa0>
###Markdown
Logging Data --- The data can be saved by simply closing a logger, or once a logger goes out of scope.
###Code
with session.logger("dataset_test_s3") as logger:
logger.log_dataframe(df)
client = boto3.client('s3')
objects = client.list_objects(Bucket=BUCKET)
[obj["Key"] for obj in objects.get("Contents",[])]
###Output
_____no_output_____
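###Markdown
The `with` block above closes the logger automatically when it exits. The same lifecycle can be managed explicitly; the cell below is only a sketch, assuming this whylogs version's logger object exposes a `close()` method, and the logger name used here is arbitrary.
###Code
# Sketch: explicit logger lifecycle instead of the `with` block
logger = session.logger("dataset_test_s3_explicit")
logger.log_dataframe(df)
logger.close()   # profiles are written by the configured writers on close
###Output
_____no_output_____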
###Markdown
You can configure where the data is saved through a configuration file or by creating a custom writer.
###Code
mocks3.stop()
###Output
_____no_output_____
###Markdown
Without Config File---
###Code
mocks3.start()
res = boto3.resource('s3', region_name=DEFAULT_REGION_NAME)
res.create_bucket(Bucket=BUCKET)
from whylogs.app.session import load_config, session_from_config
from whylogs.app.config import WriterConfig, SessionConfig
s3_writer_config= WriterConfig(type="s3",formats=["json","flat","protobuf"],
output_path="s3://super_awesome_bucket/",
path_template="$name/dataset_summary",
filename_template="dataset_profile")
# You can also add a local writer, so you have a local copy of the data (a sketch follows below).
session_config=SessionConfig(project="my_super_duper_project_name",
pipeline="latest_results",
writers=[s3_writer_config])
session = session_from_config(session_config)
print(session.get_config().to_yaml())
with session.logger("dataset_test_s3_config_as_code") as logger:
logger.log_dataframe(df)
client = boto3.client('s3')
objects = client.list_objects(Bucket=BUCKET)
[obj["Key"] for obj in objects.get("Contents",[])]
###Output
_____no_output_____
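###Markdown
As mentioned in the comment above, a local writer can be added alongside the S3 writer so the profiles are also kept on disk. The cell below is only a sketch; it assumes this version of whylogs accepts `type="local"`, and the `whylogs_output` directory name is arbitrary.
###Code
# Sketch: add a local writer next to the S3 writer (assumes type="local" is supported)
local_writer_config = WriterConfig(type="local", formats=["json"],
                                   output_path="whylogs_output",
                                   path_template="$name/dataset_summary",
                                   filename_template="dataset_profile")

session_config = SessionConfig(project="my_super_duper_project_name",
                               pipeline="latest_results",
                               writers=[s3_writer_config, local_writer_config])
session = session_from_config(session_config)
###Output
_____no_output_____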
###Markdown
Close mock s3 server
###Code
mocks3.stop()
###Output
_____no_output_____
###Markdown
Saving Profiles to S3 ---
###Code
from whylogs import get_or_create_session
import pandas as pd
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Create a mock S3 server For this example we will create a fake S3 server using the moto library. You should remove this section if you have your own bucket set up on AWS. Make sure you have your AWS configuration set. By default this mock server creates a server in region "us-east-1"
###Code
BUCKET="super_awesome_bucket"
from moto import mock_s3
from moto.s3.responses import DEFAULT_REGION_NAME
import boto3
mocks3 = mock_s3()
mocks3.start()
res = boto3.resource('s3', region_name=DEFAULT_REGION_NAME)
res.create_bucket(Bucket=BUCKET)
###Output
_____no_output_____
###Markdown
Load Data We can go about this in our usual way and load an example CSV dataset
###Code
df = pd.read_csv("data/lending_club_1000.csv")
###Output
_____no_output_____
###Markdown
Config File Example---Setting up whylogs to save your data to S3 can be done in several ways. The simplest is to create a config file where each data format is saved to a specific location, as shown below:
###Code
CONFIG = """
project: s3_example_project
pipeline: latest_results
verbose: false
writers:
- formats:
- protobuf
output_path: s3://super_awesome_bucket/
path_template: $name/dataset_summary
filename_template: dataset_summary
type: s3
- formats:
- flat
output_path: s3://super_awesome_bucket/
path_template: $name/dataset_summary
filename_template: dataset_summary
type: s3
- formats:
- json
output_path: s3://super_awesome_bucket/
path_template: $name/dataset_summary
filename_template: dataset_summary
type: s3
"""
config_path=".whylogs_s3.yaml"
with open(".whylogs_s3.yaml","w") as file:
file.write(CONFIG)
###Output
_____no_output_____
###Markdown
Checking the content:
###Code
%cat .whylogs_s3.yaml
###Output
project: s3_example_project
pipeline: latest_results
verbose: false
writers:
- formats:
- protobuf
output_path: s3://super_awesome_bucket/
path_template: $name/dataset_summary
filename_template: dataset_summary
type: s3
- formats:
- flat
output_path: s3://super_awesome_bucket/
path_template: $name/dataset_summary
filename_template: dataset_summary
type: s3
- formats:
- json
output_path: s3://super_awesome_bucket/
path_template: $name/dataset_summary
filename_template: dataset_summary
type: s3
###Markdown
If you have a custom name for your config file, or you place it in a special location, you can use the helper functions shown below:
###Code
from whylogs.app.session import load_config, session_from_config
config = load_config(".whylogs_s3.yaml")
session = session_from_config(config)
print(session.get_config().to_yaml())
###Output
metadata: null
pipeline: latest_results
project: s3_example_project
verbose: false
with_rotation_time: null
writers:
- filename_template: <string.Template object at 0x14e917fa0>
formats:
- OutputFormat.protobuf
output_path: s3://super_awesome_bucket/
path_template: <string.Template object at 0x14d5a1c70>
- filename_template: <string.Template object at 0x14e9683d0>
formats:
- OutputFormat.flat
output_path: s3://super_awesome_bucket/
path_template: <string.Template object at 0x14e968220>
- filename_template: <string.Template object at 0x14e968580>
formats:
- OutputFormat.json
output_path: s3://super_awesome_bucket/
path_template: <string.Template object at 0x14e968490>
###Markdown
Otherwise, if the file is located in your home directory or in the directory you are currently running from, you can simply run `get_or_create_session()`
###Code
session= get_or_create_session()
print(session.get_config().to_yaml())
###Output
metadata: null
pipeline: latest_results
project: s3_example_project
verbose: false
with_rotation_time: null
writers:
- filename_template: <string.Template object at 0x14e917d30>
formats:
- OutputFormat.protobuf
output_path: s3://super_awesome_bucket/
path_template: <string.Template object at 0x14e917c10>
- filename_template: <string.Template object at 0x14e968d00>
formats:
- OutputFormat.flat
output_path: s3://super_awesome_bucket/
path_template: <string.Template object at 0x14e968e20>
- filename_template: <string.Template object at 0x14e968e80>
formats:
- OutputFormat.json
output_path: s3://super_awesome_bucket/
path_template: <string.Template object at 0x14e968fa0>
###Markdown
Logging Data --- The data can be saved by simply closing a logger, or once a logger goes out of scope.
###Code
with session.logger("dataset_test_s3") as logger:
logger.log_dataframe(df)
client = boto3.client('s3')
objects = client.list_objects(Bucket=BUCKET)
[obj["Key"] for obj in objects.get("Contents",[])]
###Output
_____no_output_____
###Markdown
You can configure where the data is saved through a configuration file or by creating a custom writer.
###Code
mocks3.stop()
###Output
_____no_output_____
###Markdown
Without Config File---
###Code
mocks3.start()
res = boto3.resource('s3', region_name=DEFAULT_REGION_NAME)
res.create_bucket(Bucket=BUCKET)
from whylogs.app.session import load_config, session_from_config
from whylogs.app.config import WriterConfig, SessionConfig
s3_writer_config= WriterConfig(type="s3",formats=["json","flat","protobuf"],
output_path="s3://super_awesome_bucket/",
path_template="$name/dataset_summary",
filename_template="dataset_profile",
data_collection_consent=True)
# You can also add a local writer, so you have a local copy of the data.
session_config=SessionConfig(project="my_super_duper_project_name",
pipeline="latest_results",
writers=[s3_writer_config])
session = session_from_config(session_config)
print(session.get_config().to_yaml())
with session.logger("dataset_test_s3_config_as_code") as logger:
logger.log_dataframe(df)
client = boto3.client('s3')
objects = client.list_objects(Bucket=BUCKET)
[obj["Key"] for obj in objects.get("Contents",[])]
###Output
_____no_output_____
###Markdown
Close mock s3 server
###Code
mocks3.stop()
###Output
_____no_output_____ |
samples/humanoids_pouring/inspect_dataset.ipynb | ###Markdown
Mask R-CNN - Inspect DatasetsInspect and visualize data loading and pre-processing code.
###Code
import os
import sys
#import itertools
#import math
#import logging
#import json
#import re
import random
#from collections import OrderedDict
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import matplotlib.lines as lines
from matplotlib.patches import Polygon
# Root directory of the project
ROOT_DIR = os.path.abspath("../../")
# Import Mask RCNN
sys.path.append(ROOT_DIR) # To find local version of the library
from mrcnn import utils
from mrcnn import visualize
from mrcnn.visualize import display_images
import mrcnn.model as modellib
from mrcnn.model import log
from samples.humanoids_pouring import tabletop_bottles as tabletop
from samples.humanoids_pouring import datasets
from samples.humanoids_pouring import configurations
%matplotlib inline
###Output
_____no_output_____
###Markdown
Configuration
###Code
config = configurations.YCBVideoConfigTraining()
DATASET_ROOT_DIR = os.path.join(ROOT_DIR, "datasets/bottles")
###Output
_____no_output_____
###Markdown
Dataset
###Code
# Load dataset
# Get the dataset from the releases page
dataset = datasets.YCBVideoDataset()
dataset.load_dataset(DATASET_ROOT_DIR, "train")
# Actually load image paths
dataset.prepare()
#print("Image Count: {}".format(len(dataset.image_ids)))
#print("Class Count: {}".format(dataset.num_classes))
#for i, info in enumerate(dataset.class_info):
# print("{:3}. {:50}".format(i, info['name']))
###Output
Classes loaded: 10
ID 0: BG
ID 1: bottle_iit
ID 2: bottle_pinktea
ID 3: bottle_greentea
ID 4: bottle_orange
ID 5: bottle_mustard
ID 6: bottle_activia
ID 7: bottle_yogurt
ID 8: bottle_aloe
ID 9: za_hando
Loading train dataset...
1751/1751 [==============================] - 2s 1ms/step
Dataset loaded: 1751 images found.
###Markdown
Display SamplesLoad and display images and masks.
###Code
# Load and display random samples
image_ids = np.random.choice(dataset.image_ids, 10)
for image_id in image_ids:
image = dataset.load_image(image_id)
try:
mask, class_ids = dataset.load_mask(image_id)
except AssertionError:
print("No mask available for image {}".format(dataset.image_info[image_id]))
continue
visualize.display_top_masks(image, mask, class_ids, dataset.class_names)
###Output
_____no_output_____
###Markdown
Bounding BoxesRather than using bounding box coordinates provided by the source datasets, we compute the bounding boxes from masks instead. This allows us to handle bounding boxes consistently regardless of the source dataset, and it also makes it easier to resize, rotate, or crop images because we simply generate the bounding boxes from the updated masks rather than computing a bounding box transformation for each type of image transformation.
###Code
# Load random image and mask.
image_id = random.choice(dataset.image_ids)
image = dataset.load_image(image_id)
mask, class_ids = dataset.load_mask(image_id)
# Compute Bounding box
bbox = utils.extract_bboxes(mask)
# Display image and additional stats
print("image_id ", image_id, dataset.image_reference(image_id))
log("image", image)
log("mask", mask)
log("class_ids", class_ids)
log("bbox", bbox)
# Display image and instances
visualize.display_instances(image, bbox, mask, class_ids, dataset.class_names)
###Output
image_id 1438 /home/IIT.LOCAL/fbottarel/Mask_RCNN/datasets/bottles/data/0007/000125-color.png
image shape: (480, 640, 3) min: 0.00000 max: 223.00000 uint8
mask shape: (480, 640, 1) min: 0.00000 max: 1.00000 bool
class_ids shape: (1,) min: 5.00000 max: 5.00000 int64
bbox shape: (1, 4) min: 28.00000 max: 536.00000 int32
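###Markdown
For reference, `utils.extract_bboxes` does roughly the following for each instance: find the rows and columns the mask touches and take their extremes. The cell below is a simplified sketch for a single boolean mask, reusing the `mask` loaded above.
###Code
# Sketch: derive a (y1, x1, y2, x2) box from one boolean mask
m = mask[:, :, 0]
rows = np.any(m, axis=1)
cols = np.any(m, axis=0)
y1, y2 = np.where(rows)[0][[0, -1]]
x1, x2 = np.where(cols)[0][[0, -1]]
print("bbox:", y1, x1, y2 + 1, x2 + 1)   # +1 so y2/x2 are exclusive at the bottom/right edges
###Output
_____no_output_____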
|
Big-Data-Clusters/GDR1/public/content/sample/sam001-load-sample-data-into-bdc.ipynb | ###Markdown
SAM001 - Storage Pool - Load sample data========================================Description----------- Common functionsDefine helper functions used in this notebook.
###Code
%%local
# Define `run` function for transient fault handling, hyperlinked suggestions, and scrolling updates on Windows
import sys
import os
import re
import platform
import shlex
import shutil
import datetime
from subprocess import Popen, PIPE
from IPython.display import Markdown
def run(cmd, return_output=False, no_output=False, error_hints=[], retry_hints=[], retry_count=0):
"""
Run shell command, stream stdout, print stderr and optionally return output
"""
max_retries = 5
install_hint = None
output = ""
retry = False
# shlex.split is required on bash and for Windows paths with spaces
#
cmd_actual = shlex.split(cmd)
# When running python, use the python in the ADS sandbox ({sys.executable})
#
if cmd.startswith("python "):
cmd_actual[0] = cmd_actual[0].replace("python", sys.executable)
# On Mac, when ADS is not launched from terminal, LC_ALL may not be set, which causes pip installs to fail
# with:
#
# UnicodeDecodeError: 'ascii' codec can't decode byte 0xc5 in position 4969: ordinal not in range(128)
#
# Setting it to a default value of "en_US.UTF-8" enables pip install to complete
#
if platform.system() == "Darwin" and "LC_ALL" not in os.environ:
os.environ["LC_ALL"] = "en_US.UTF-8"
python_retry_hints, python_error_hints, install_hint = python_hints()
retry_hints += python_retry_hints
error_hints += python_error_hints
if (cmd.startswith("kubectl ")):
kubectl_retry_hints, kubectl_error_hints, install_hint = kubectl_hints()
retry_hints += kubectl_retry_hints
error_hints += kubectl_error_hints
if (cmd.startswith("azdata ")):
azdata_retry_hints, azdata_error_hints, install_hint = azdata_hints()
retry_hints += azdata_retry_hints
error_hints += azdata_error_hints
# Find the path based location (shutil.which) of the executable that will be run (and display it to aid supportability), this
# seems to be required for .msi installs of azdata.cmd/az.cmd. (otherwise Popen returns FileNotFound)
#
# NOTE: Bash needs cmd to be the list of the space separated values hence shlex.split.
#
which_binary = shutil.which(cmd_actual[0])
if which_binary == None:
if install_hint is not None:
display(Markdown(f'SUGGEST: Use {install_hint} to resolve this issue.'))
raise FileNotFoundError(f"Executable '{cmd_actual[0]}' not found in path (where/which)")
else:
cmd_actual[0] = which_binary
start_time = datetime.datetime.now().replace(microsecond=0)
print(f"START: {cmd} @ {start_time} ({datetime.datetime.utcnow().replace(microsecond=0)} UTC)")
print(f" using: {which_binary} ({platform.system()} {platform.release()} on {platform.machine()})")
print(f" cwd: {os.getcwd()}")
# Command-line tools such as CURL and AZDATA HDFS commands output
# scrolling progress bars, which causes Jupyter to hang forever, to
# workaround this, use no_output=True
#
try:
if no_output:
p = Popen(cmd_actual)
else:
p = Popen(cmd_actual, stdout=PIPE, stderr=PIPE, bufsize=1)
with p.stdout:
for line in iter(p.stdout.readline, b''):
line = line.decode()
if return_output:
output = output + line
else:
if cmd.startswith("azdata notebook run"): # Hyperlink the .ipynb file
regex = re.compile(' "(.*)"\: "(.*)"')
match = regex.match(line)
if match:
if match.group(1).find("HTML") != -1:
display(Markdown(f' - "{match.group(1)}": "{match.group(2)}"'))
else:
display(Markdown(f' - "{match.group(1)}": "[{match.group(2)}]({match.group(2)})"'))
else:
print(line, end='')
p.wait()
except FileNotFoundError as e:
if install_hint is not None:
display(Markdown(f'SUGGEST: Use {install_hint} to resolve this issue.'))
raise FileNotFoundError(f"Executable '{cmd_actual[0]}' not found in path (where/which)") from e
if not no_output:
for line in iter(p.stderr.readline, b''):
line_decoded = line.decode()
# azdata emits a single empty line to stderr when doing an hdfs cp, don't
# print this empty "ERR:" as it confuses.
#
if line_decoded == "":
continue
print(f"ERR: {line_decoded}", end='')
for error_hint in error_hints:
if line_decoded.find(error_hint[0]) != -1:
display(Markdown(f'SUGGEST: Use [{error_hint[2]}]({error_hint[1]}) to resolve this issue.'))
for retry_hint in retry_hints:
if line_decoded.find(retry_hint) != -1:
if retry_count < max_retries:
print(f"RETRY: {retry_count} (due to: {retry_hint})")
retry_count = retry_count + 1
output = run(cmd, return_output=return_output, error_hints=error_hints, retry_hints=retry_hints, retry_count=retry_count)
if return_output:
return output
else:
return
elapsed = datetime.datetime.now().replace(microsecond=0) - start_time
if p.returncode != 0:
raise SystemExit(f'Shell command:\n\n\t{cmd} ({elapsed}s elapsed)\n\nreturned non-zero exit code: {str(p.returncode)}.\n')
print(f'\nSUCCESS: {elapsed}s elapsed\n')
if return_output:
return output
def azdata_hints():
retry_hints = [
"Endpoint sql-server-master does not exist",
"Endpoint livy does not exist",
"Failed to get state for cluster",
"Endpoint webhdfs does not exist",
"Adaptive Server is unavailable or does not exist",
"Error: Address already in use",
"Timed out getting health status after 5000 milliseconds"
]
error_hints = [
["""azdata login""", """../common/sop028-azdata-login.ipynb""", """SOP028 - azdata login"""],
["""The token is expired""", """../common/sop028-azdata-login.ipynb""", """SOP028 - azdata login"""],
["""Reason: Unauthorized""", """../common/sop028-azdata-login.ipynb""", """SOP028 - azdata login"""],
["""Max retries exceeded with url: /api/v1/bdc/endpoints""", """../common/sop028-azdata-login.ipynb""", """SOP028 - azdata login"""],
["""Look at the controller logs for more details""", """../diagnose/tsg027-observe-bdc-create.ipynb""", """TSG027 - Observe cluster deployment"""],
["""provided port is already allocated""", """../log-files/tsg062-tail-bdc-previous-container-logs.ipynb""", """TSG062 - Get tail of all previous container logs for pods in BDC namespace"""],
["""Create cluster failed since the existing namespace""", """../install/sop061-delete-bdc.ipynb""", """SOP061 - Delete a big data cluster"""],
["""Failed to complete kube config setup""", """../repair/tsg067-failed-to-complete-kube-config-setup.ipynb""", """TSG067 - Failed to complete kube config setup"""],
["""Error processing command: "ApiError""", """../repair/tsg110-azdata-returns-apierror.ipynb""", """TSG110 - Azdata returns ApiError"""],
["""Error processing command: "ControllerError""", """../log-analyzers/tsg036-get-controller-logs.ipynb""", """TSG036 - Controller logs"""],
["""ERROR: 500""", """../log-analyzers/tsg046-get-knox-logs.ipynb""", """TSG046 - Knox gateway logs"""],
["""Data source name not found and no default driver specified""", """../install/sop069-install-odbc-driver-for-sql-server.ipynb""", """SOP069 - Install ODBC for SQL Server"""],
["""Can't open lib 'ODBC Driver 17 for SQL Server""", """../install/sop069-install-odbc-driver-for-sql-server.ipynb""", """SOP069 - Install ODBC for SQL Server"""]
]
install_hint = "[SOP055 - Install azdata command line interface](../install/sop055-install-azdata.ipynb)'"
return retry_hints, error_hints, install_hint
print('Common functions defined successfully.')
###Output
_____no_output_____
###Markdown
Instantiate Kubernetes client
###Code
%%local
# Instantiate the Python Kubernetes client into 'api' variable
import os
try:
from kubernetes import client, config
from kubernetes.stream import stream
if "KUBERNETES_SERVICE_PORT" in os.environ and "KUBERNETES_SERVICE_HOST" in os.environ:
config.load_incluster_config()
else:
config.load_kube_config()
api = client.CoreV1Api()
print('Kubernetes client instantiated')
except ImportError:
from IPython.display import Markdown
display(Markdown(f'SUGGEST: Use [SOP059 - Install Kubernetes Python module](../install/sop059-install-kubernetes-module.ipynb) to resolve this issue.'))
raise
###Output
_____no_output_____
###Markdown
Get the namespace for the big data cluster. Get the namespace of the big data cluster from the Kubernetes API. NOTE: If there is more than one big data cluster in the target Kubernetes cluster, then set [0] to the correct value for the big data cluster.
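If the label selector matches more than one namespace, a quick sketch (using the same `api` client) to see the candidates before picking the index is:
```python
# List all namespaces carrying the MSSQL_CLUSTER label so the right index can be chosen
for i, ns in enumerate(api.list_namespace(label_selector='MSSQL_CLUSTER').items):
    print(i, ns.metadata.name)
```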
###Code
%%local
# Place Kubernetes namespace name for BDC into 'namespace' variable
try:
namespace = api.list_namespace(label_selector='MSSQL_CLUSTER').items[0].metadata.name
except IndexError:
from IPython.display import Markdown
display(Markdown(f'SUGGEST: Use [TSG081 - Get namespaces (Kubernetes)](../monitor-k8s/tsg081-get-kubernetes-namespaces.ipynb) to resolve this issue.'))
display(Markdown(f'SUGGEST: Use [TSG010 - Get configuration contexts](../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb) to resolve this issue.'))
display(Markdown(f'SUGGEST: Use [SOP011 - Set kubernetes configuration context](../common/sop011-set-kubernetes-context.ipynb) to resolve this issue.'))
raise
print('The kubernetes namespace for your big data cluster is: ' + namespace)
###Output
_____no_output_____
###Markdown
Get required user credentials. Get the credentials from the Kubernetes secret store required to perform the tasks below.
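If a secret lookup fails, a small sketch like the following (reusing the same `api` client and `namespace`) lists the secret names that actually exist:
```python
# Illustrative: confirm the secret names used below exist in the BDC namespace
for secret in api.list_namespaced_secret(namespace).items:
    print(secret.metadata.name)
```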
###Code
%%local
import base64
controller_secret = api.read_namespaced_secret('controller-login-secret', namespace)
bdc_controller_username = base64.b64decode(controller_secret.data['username']).decode()
bdc_controller_password = base64.b64decode(controller_secret.data['password']).decode()
gateway_secret = api.read_namespaced_secret('gateway-secret', namespace)
bdc_knox_password = base64.b64decode(gateway_secret.data['knox-admin-password']).decode()
print ('Credentials retrieved')
###Output
_____no_output_____
###Markdown
Tutorial. 1. To be able to get the cluster endpoints, log in.
###Code
%%local
import os
os.environ["AZDATA_PASSWORD"] = bdc_controller_password
run(f'azdata login -n {namespace} --username {bdc_controller_username} --accept-eula yes')
os.environ["AZDATA_PASSWORD"] = ""
###Output
_____no_output_____
###Markdown
2. Now we will get the cluster endpoints and extract the HDFS address. This will be used in the next step when creating the .csv file and sending it to HDFS.
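As a side note, dropping the `--endpoint` filter should list every endpoint the cluster exposes (illustrative, reusing the `run` helper):
```python
# Illustrative: list all cluster endpoints, not just webhdfs
print(run('azdata bdc endpoint list', return_output=True))
```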
###Code
%%local
import json
cluster_res = run('azdata bdc endpoint list --endpoint="webhdfs"', return_output=True)
json = json.loads(cluster_res)
hdfs_addr = json['endpoint']
print(f'The HDFS address is: {hdfs_addr}')
###Output
_____no_output_____
###Markdown
3. This code uploads the sample data into HDFS.
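After the upload, the target directory can be checked over WebHDFS with a `LISTSTATUS` call; a sketch that reuses the `hdfs_addr` and `bdc_knox_password` variables from the cells above:
```python
# Illustrative: verify the upload by listing /clickstream_data over WebHDFS
import requests

listing = requests.get(hdfs_addr + '/clickstream_data?op=LISTSTATUS',
                       auth=('root', bdc_knox_password), verify=False)
print(listing.json())
```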
###Code
%%local
import os
import csv
import tempfile
items = [ [1,"Eldon Base for stackable storage shelf, platinum","Muhammed MacIntyre",3,-213.25,38.94,35,"Nunavut,Storage & Organization",0.8 ],
[2,"1.7 Cubic Foot Compact ""Cube"" Office Refrigerators","Barry French",293,457.81,208.16,68.02,"Nunavut,Appliances",0.58],
[3,"Cardinal Slant-D Ring Binder, Heavy Gauge Vinyl","Barry French",293,46.71,8.69,2.99,"Nunavut","Binders and Binder Accessories",0.39],
[4,"R380","Clay Rozendal",483,1198.97,195.99,3.99,"Nunavut","Telephones and Communication",0.58],
[5,"Holmes HEPA Air Purifier","Carlos Soltero",515,30.94,21.78,5.94,"Nunavut","Appliances",0.5],
[6,"G.E. Longer-Life Indoor Recessed Floodlight Bulbs","Carlos Soltero",515,4.43,6.64,4.95,"Nunavut","Office Furnishings",0.37],
[7,"Angle-D Binders with Locking Rings, Label Holders","Carl Jackson",613,-54.04,7.3,7.72,"Nunavut","Binders and Binder Accessories",0.38],
[8,"SAFCO Mobile Desk Side File, Wire Frame","Carl Jackson",613,127.7,42.76,6.22,"Nunavut","Storage & Organization",],
[9,"SAFCO Commercial Wire Shelving, Black","Monica Federle",643,-695.26,138.14,35,"Nunavut","Storage & Organization",],
[10,"Xerox 198","Dorothy Badders",678,-226.36,4.98,8.33,"Nunavut","Paper",0.38 ] ]
import requests
import io
url = hdfs_addr + '/clickstream_data/datasampleCS.csv?op=CREATE&overwrite=true'
output = io.StringIO()
csv.writer(output, quoting=csv.QUOTE_NONNUMERIC).writerows(items)
r = requests.put(url, allow_redirects=True, auth=('root', bdc_knox_password), data=output.getvalue().encode('utf-8'), verify=False, headers={'content-type':'application/octet-stream'})
print (f"CSV uploaded to: {url}")
print (f"CSV:\r\n{output.getvalue()}")
###Output
_____no_output_____
###Markdown
Convert CSV to Parquet (PySpark3). The following steps will allow you to convert your .csv file to Parquet.
###Code
%%local
import json
cluster_res = run('azdata bdc endpoint list --endpoint="livy"', return_output=True)
json = json.loads(cluster_res)
livy_adrss = json['endpoint']
print(f'The Livy address is: {livy_adrss}')
%_do_not_call_change_endpoint --username=root --password={bdc_knox_password} --server={livy_adrss} --auth=Basic_Access
###Output
_____no_output_____
###Markdown
1. First open the .csv file and convert it to a data frame object.
###Code
results = spark.read.option("inferSchema", "true").csv('/clickstream_data/datasampleCS.csv').toDF("NumberID", "Name", "Name2", "Price", "Discount", "Money", "Money2", "Type", "Space")
###Output
_____no_output_____
###Markdown
2. Verify the schema using the following command.
###Code
results.printSchema()
###Output
_____no_output_____
###Markdown
3. You can now see the first 20 rows of the data using the following command.
###Code
results.show()
###Output
_____no_output_____
###Markdown
4. Now let's convert the DataFrame to a Parquet file using the following commands.
###Code
sc._jsc.hadoopConfiguration().set("mapreduce.fileoutputcommitter.marksuccessfuljobs", "false")
results.write.mode("overwrite").parquet('/clickstream_data_parquet')
###Output
_____no_output_____
###Markdown
5. You can verify the Parquet file using the following commands.
###Code
result_parquet = spark.read.parquet('/clickstream_data_parquet')
result_parquet.show()
print('Notebook execution complete.')
###Output
_____no_output_____ |
examples/neurolib_brain_network.ipynb | ###Markdown
Brain network exploration with `neurolib` In this example, we will run a parameter exploration of a whole-brain model that we load using the brain simulation framework `neurolib`. Please visit the [Github repo](https://github.com/neurolib-dev/neurolib) to learn more about this library or read the [gentle introduction to `neurolib`](https://caglorithm.github.io/notebooks/neurolib-intro/) to learn more about the neuroscience background of neural mass models and whole-brain simulations.
###Code
# change into the root directory of the project
import os
if os.getcwd().split("/")[-1] == "examples":
os.chdir('..')
%load_ext autoreload
%autoreload 2
import logging
logger = logging.getLogger()
logger.setLevel(logging.INFO)
!pip install matplotlib
import matplotlib.pyplot as plt
import numpy as np
# a nice color map
plt.rcParams['image.cmap'] = 'plasma'
!pip install neurolib
from neurolib.models.aln import ALNModel
from neurolib.utils.loadData import Dataset
import neurolib.utils.functions as func
ds = Dataset("hcp")
import mopet
###Output
_____no_output_____
###Markdown
We load a model with parameters that generate interesting dynamics.
###Code
model = ALNModel(Cmat = ds.Cmat, Dmat = ds.Dmat)
model.params['duration'] = 0.2*60*1000
model.params['mue_ext_mean'] = 1.57
model.params['mui_ext_mean'] = 1.6
# We set an appropriate level of noise
model.params['sigma_ou'] = 0.09
# And turn on adaptation with a low value of spike-triggered adaptation currents.
model.params['b'] = 5.0
###Output
INFO:root:aln: Model initialized.
###Markdown
Let's run it to see what kind of output it produces!
###Code
model.run(bold=True, chunkwise=True)
plt.plot(model.output.T);
###Output
_____no_output_____
###Markdown
We simulated the model with BOLD output, so let's compute the functional connectivity (fc) matrix:
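Conceptually, the FC matrix is just the pairwise Pearson correlation of the regional BOLD time series; a minimal sketch is below (neurolib's `func.fc` may differ in implementation details).
```python
import numpy as np

bold = model.BOLD.BOLD[:, model.BOLD.t_BOLD > 5000]  # regions x time
fc_manual = np.corrcoef(bold)                        # regions x regions correlation matrix
print(fc_manual.shape)
```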
###Code
plt.imshow(func.fc(model.BOLD.BOLD[:, model.BOLD.t_BOLD > 5000]))
###Output
_____no_output_____
###Markdown
This is our multi-stage evaluation function.
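The scoring step in stage 3 compares the simulated FC against each empirical FC; a sketch of what such a comparison typically boils down to (the exact behaviour of `func.matrix_correlation` may differ):
```python
import numpy as np

def fc_similarity(fc_sim, fc_emp):
    # correlate the off-diagonal upper triangles of the two FC matrices
    iu = np.triu_indices_from(fc_sim, k=1)
    return np.corrcoef(fc_sim[iu], fc_emp[iu])[0, 1]
```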
###Code
def evaluateSimulation(params):
model.params.update(params)
defaultDuration = model.params['duration']
invalid_result = {"fc" : [0]* len(ds.BOLDs)}
logging.info("Running stage 1")
# -------- stage wise simulation --------
# Stage 1 : simulate for a few seconds to see if there is any activity
# ---------------------------------------
model.params['duration'] = 3*1000.
model.run()
# check if stage 1 was successful
amplitude = np.max(model.output[:, model.t > 500]) - np.min(model.output[:, model.t > 500])
if amplitude < 0.05:
invalid_result = {"fc" : 0}
return invalid_result
logging.info("Running stage 2")
# Stage 2: simulate BOLD for a few seconds to see if it moves
# ---------------------------------------
model.params['duration'] = 20*1000.
model.run(bold = True, chunkwise=True)
if np.std(model.BOLD.BOLD[:, 5:10]) < 0.0001:
invalid_result = {"fc" : -1}
return invalid_result
logging.info("Running stage 3")
# Stage 3: full and final simulation
# ---------------------------------------
model.params['duration'] = defaultDuration
model.run(bold = True, chunkwise=True)
# -------- evaluation here --------
scores = []
for i, fc in enumerate(ds.FCs):#range(len(ds.FCs)):
fc_score = func.matrix_correlation(func.fc(model.BOLD.BOLD[:, 5:]), fc)
scores.append(fc_score)
meanScore = np.mean(scores)
result_dict = {"fc" : meanScore}
return result_dict
###Output
_____no_output_____
###Markdown
We test run the evaluation function.
###Code
model.params['duration'] = 20*1000.
evaluateSimulation(model.params)
# NOTE: These values are low for testing
model.params['duration'] = 10*1000.
explore_params = {"a": np.linspace(0, 40.0, 2)
,"K_gl": np.linspace(100, 400, 2)
,"sigma_ou" : np.linspace(0.1, 0.5, 2)
}
# we need this random filename to avoid testing clashes
hdf_filename = f"exploration-{np.random.randint(99999)}.h5"
ex = mopet.Exploration(evaluateSimulation, explore_params, default_params=model.params, hdf_filename=hdf_filename)
ex.run()
ex.load_results(as_dict=True)
ex.results
ex.params
ex.df
sigma_selectors = np.unique(ex.df.sigma_ou)
for s in sigma_selectors:
df = ex.df[(ex.df.sigma_ou == s)]
pivotdf = df.pivot_table(values='fc', index = 'K_gl', columns='a')
plt.imshow(pivotdf, \
extent = [min(df.a), max(df.a),
min(df.K_gl), max(df.K_gl)], origin='lower', aspect='auto')
plt.colorbar(label='Mean correlation to empirical rs-FC')
plt.xlabel("a")
plt.ylabel("K_gl")
plt.title("$\sigma_{ou}$" + "={}".format(s))
plt.show()
###Output
_____no_output_____ |
code/Mstats2018.ipynb | ###Markdown
Men's Tourney Prediction Analysis. I feel the following are important in determining a team's success in the tourney: 1) Seeding, 2) Strength of conference, 3) Individual team statistics, 4) Experience, 5) Ability of the team to win on the road.
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
from math import pi
# import seaborn as sns
import time
from sklearn.utils import shuffle
from sklearn.model_selection import GridSearchCV, train_test_split, StratifiedKFold, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn import preprocessing, metrics,ensemble, model_selection
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
# from xgboost.sklearn import XGBClassifier
from sklearn.metrics import accuracy_score, roc_curve, auc, classification_report, confusion_matrix
pd.set_option('display.max_columns', 999)
pd.options.display.float_format = '{:.6f}'.format
start_time = time.time()
#standard files
#df_tourney = pd.read_csv('NCAATourneyCompactResults.csv')
#df_season = pd.read_csv('RegularSeasonDetailedResults.csv')
#df_teams = pd.read_csv('Teams.csv')
#df_seeds = pd.read_csv('NCAATourneySeeds.csv')
#df_conferences = pd.read_csv('Conferences.csv')
#df_rankings = pd.read_csv('MasseyOrdinals.csv')
#df_sample_sub = pd.read_csv('SampleSubmissionStage1.csv')
#my custom file
#df_tourney_experience = pd.read_csv('tourney_experience_senior_class.csv')
# Kaggle locations
df_tourney = pd.read_csv('../Minput2/NCAATourneyCompactResults.csv')
df_season = pd.read_csv('../Minput3/RegularSeasonDetailedResults.csv') # TODO update
df_teams = pd.read_csv('../Minput2/Teams.csv')
df_seeds = pd.read_csv('../Minput3/NCAATourneySeeds.csv') # TODO update
df_conferences = pd.read_csv('../Minput2/Conferences.csv')
df_rankings = pd.read_csv('../input/MasseyOrdinals_Prelim2018.csv') # TODO update
df_sample_sub = pd.read_csv('../Minput2/SampleSubmissionStage2.csv') # TODO update
df_team_conferences = pd.read_csv('../Minput2/Teamconferences.csv')
#private data file
df_tourney_experience = pd.read_csv('../additional/experience.csv')
df_season.head(5)
#Calculate Winning/Losing Team Possession Feature
#https://www.nbastuffer.com/analytics101/possession/
wPos = df_season.apply(lambda row: 0.96*(row.WFGA + row.WTO + 0.44*row.WFTA - row.WOR), axis=1)
lPos = df_season.apply(lambda row: 0.96*(row.LFGA + row.LTO + 0.44*row.LFTA - row.LOR), axis=1)
#two teams use almost the same number of possessions in a game
#(plus/minus one or two - depending on how quarters end)
#so let's just take the average
df_season['Possesions'] = (wPos+lPos)/2
df_season.head(5)
#Player Impact Estimate (PIE) measures a player's overall statistical contribution
#against the total statistics in games they play in. PIE yields results which are
#comparable to other advanced statistics (e.g. PER) using a simple formula.
#Formula (PTS + FGM + FTM - FGA - FTA + DREB + (.5 * OREB) + AST + STL + (.5 * BLK) - PF - TO)
# / (GmPTS + GmFGM + GmFTM - GmFGA - GmFTA + GmDREB + (.5 * GmOREB) + GmAST + GmSTL + (.5 * GmBLK) - GmPF - GmTO)
#We will use this to measure Team Skill
wtmp = df_season.apply(lambda row: row.WScore + row.WFGM + row.WFTM - row.WFGA - row.WFTA + row.WDR + 0.5*row.WOR + row.WAst +row.WStl + 0.5*row.WBlk - row.WPF - row.WTO, axis=1)
ltmp = df_season.apply(lambda row: row.LScore + row.LFGM + row.LFTM - row.LFGA - row.LFTA + row.LDR + 0.5*row.LOR + row.LAst +row.LStl + 0.5*row.LBlk - row.LPF - row.LTO, axis=1)
df_season['WPIE'] = wtmp/(wtmp + ltmp)
df_season['LPIE'] = ltmp/(wtmp + ltmp)
#Four factors statistic from the NBA
#https://www.nbastuffer.com/analytics101/four-factors/
#Effective Field Goal Percentage = (Field Goals Made + 0.5*3P Field Goals Made)/(Field Goal Attempts)
#you have to put the ball in the bucket eventually
df_season['WeFGP'] = df_season.apply(lambda row:(row.WFGM + 0.5 * row.WFGM3) / row.WFGA, axis=1)
df_season['LeFGP'] = df_season.apply(lambda row:(row.LFGM + 0.5 * row.LFGM3) / row.LFGA, axis=1)
#Turnover Rate= Turnovers/(Field Goal Attempts + 0.44*Free Throw Attempts + Turnovers)
#he who doesnt turn the ball over wins games
df_season['WTOR'] = df_season.apply(lambda row: row.WTO / (row.WFGA + 0.44*row.WFTA + row.WTO), axis=1)
df_season['LTOR'] = df_season.apply(lambda row: row.LTO / (row.LFGA + 0.44*row.LFTA + row.LTO), axis=1)
#Offensive Rebounding Percentage = (Offensive Rebounds)/[(Offensive Rebounds)+(Opponent’s Defensive Rebounds)]
#You can win games controlling the offensive glass
df_season['WORP'] = df_season.apply(lambda row: row.WOR / (row.WOR + row.LDR), axis=1)
df_season['LORP'] = df_season.apply(lambda row: row.LOR / (row.LOR + row.WDR), axis=1)
#Free Throw Rate=(Free Throws Made)/(Field Goals Attempted) or Free Throws Attempted/Field Goals Attempted
#You got to get to the line to win close games
df_season['WFTAR'] = df_season.apply(lambda row: row.WFTA / row.WFGA, axis=1)
df_season['LFTAR'] = df_season.apply(lambda row: row.LFTA / row.LFGA, axis=1)
#4 Factors is weighted as follows
#1. Shooting (40%)
#2. Turnovers (25%)
#3. Rebounding (20%)
#4. Free Throws (15%)
df_season['W4Factor'] = df_season.apply(lambda row: .40*row.WeFGP + .25*row.WTOR + .20*row.WORP + .15*row.WFTAR, axis=1)
df_season['L4Factor'] = df_season.apply(lambda row: .40*row.LeFGP + .25*row.LTOR + .20*row.LORP + .15*row.LFTAR, axis=1)
#Offensive efficiency (OffRtg) = (Points / Possessions)
#Every possession counts
df_season['WOffRtg'] = df_season.apply(lambda row: (row.WScore / row.Possesions), axis=1)
df_season['LOffRtg'] = df_season.apply(lambda row: (row.LScore / row.Possesions), axis=1)
#Defensive efficiency (DefRtg) = (Opponent points / Opponent possessions)
#defense wins championships
df_season['WDefRtg'] = df_season.LOffRtg
df_season['LDefRtg'] = df_season.WOffRtg
#Assist Ratio : Percentage of team possessions that end in assists
#distribute the rock - dont go isolation all the time
df_season['WAstR'] = df_season.apply(lambda row: row.WAst / (row.WFGA + 0.44*row.WFTA + row.WAst + row.WTO), axis=1)
df_season['LAstR'] = df_season.apply(lambda row: row.LAst / (row.LFGA + 0.44*row.LFTA + row.LAst + row.LTO), axis=1)
#DREB% : Percentage of team defensive rebounds
#control your own glass
df_season['WDRP'] = df_season.apply(lambda row: row.WDR / (row.WDR + row.LOR), axis=1)
df_season['LDRP'] = df_season.apply(lambda row: row.LDR / (row.LDR + row.WOR), axis=1)
#Free Throw Percentage
#Make your damn free throws
df_season['WFTPCT'] = df_season.apply(lambda row : 0 if row.WFTA < 1 else row.WFTM / row.WFTA, axis=1)
df_season['LFTPCT'] = df_season.apply(lambda row : 0 if row.LFTA < 1 else row.LFTM / row.LFTA, axis=1)
df_season.drop(['WFGM', 'WFGA', 'WFGM3', 'WFGA3', 'WFTM', 'WFTA', 'WOR', 'WDR', 'WAst', 'WTO', 'WStl', 'WBlk', 'WPF'], axis=1, inplace=True)
df_season.drop(['LFGM', 'LFGA', 'LFGM3', 'LFGA3', 'LFTM', 'LFTA', 'LOR', 'LDR', 'LAst', 'LTO', 'LStl', 'LBlk', 'LPF'], axis=1, inplace=True)
df_season.head()
df_season_composite = pd.DataFrame()
#This will aggregate individual games into season totals for a team
#calculates wins and losses to get winning percentage
df_season_composite['WINS'] = df_season['WTeamID'].groupby([df_season['Season'], df_season['WTeamID']]).count()
df_season_composite['LOSSES'] = df_season['LTeamID'].groupby([df_season['Season'], df_season['LTeamID']]).count()
df_season_composite['WINPCT'] = df_season_composite['WINS'] / (df_season_composite['WINS'] + df_season_composite['LOSSES'])
# calculates averages for games team won
df_season_composite['WPIE'] = df_season['WPIE'].groupby([df_season['Season'], df_season['WTeamID']]).mean()
df_season_composite['WeFGP'] = df_season['WeFGP'].groupby([df_season['Season'], df_season['WTeamID']]).mean()
df_season_composite['WTOR'] = df_season['WTOR'].groupby([df_season['Season'], df_season['WTeamID']]).mean()
df_season_composite['WORP'] = df_season['WORP'].groupby([df_season['Season'], df_season['WTeamID']]).mean()
df_season_composite['WFTAR'] = df_season['WFTAR'].groupby([df_season['Season'], df_season['WTeamID']]).mean()
df_season_composite['W4Factor'] = df_season['W4Factor'].groupby([df_season['Season'], df_season['WTeamID']]).mean()
df_season_composite['WOffRtg'] = df_season['WOffRtg'].groupby([df_season['Season'], df_season['WTeamID']]).mean()
df_season_composite['WDefRtg'] = df_season['WDefRtg'].groupby([df_season['Season'], df_season['WTeamID']]).mean()
df_season_composite['WAstR'] = df_season['WAstR'].groupby([df_season['Season'], df_season['WTeamID']]).mean()
df_season_composite['WDRP'] = df_season['WDRP'].groupby([df_season['Season'], df_season['WTeamID']]).mean()
df_season_composite['WFTPCT'] = df_season['WFTPCT'].groupby([df_season['Season'], df_season['WTeamID']]).mean()
# calculates averages for games team lost
df_season_composite['LPIE'] = df_season['LPIE'].groupby([df_season['Season'], df_season['LTeamID']]).mean()
df_season_composite['LeFGP'] = df_season['LeFGP'].groupby([df_season['Season'], df_season['LTeamID']]).mean()
df_season_composite['LTOR'] = df_season['LTOR'].groupby([df_season['Season'], df_season['LTeamID']]).mean()
df_season_composite['LORP'] = df_season['LORP'].groupby([df_season['Season'], df_season['LTeamID']]).mean()
df_season_composite['LFTAR'] = df_season['LFTAR'].groupby([df_season['Season'], df_season['LTeamID']]).mean()
df_season_composite['L4Factor'] = df_season['L4Factor'].groupby([df_season['Season'], df_season['LTeamID']]).mean()
df_season_composite['LOffRtg'] = df_season['LOffRtg'].groupby([df_season['Season'], df_season['LTeamID']]).mean()
df_season_composite['LDefRtg'] = df_season['LDefRtg'].groupby([df_season['Season'], df_season['LTeamID']]).mean()
df_season_composite['LAstR'] = df_season['LAstR'].groupby([df_season['Season'], df_season['LTeamID']]).mean()
df_season_composite['LDRP'] = df_season['LDRP'].groupby([df_season['Season'], df_season['LTeamID']]).mean()
df_season_composite['LFTPCT'] = df_season['LFTPCT'].groupby([df_season['Season'], df_season['LTeamID']]).mean()
# calculates weighted average using winning percent to weight the statistic
df_season_composite['PIE'] = df_season_composite['WPIE'] * df_season_composite['WINPCT'] + df_season_composite['LPIE'] * (1 - df_season_composite['WINPCT'])
df_season_composite['FG_PCT'] = df_season_composite['WeFGP'] * df_season_composite['WINPCT'] + df_season_composite['LeFGP'] * (1 - df_season_composite['WINPCT'])
df_season_composite['TURNOVER_RATE'] = df_season_composite['WTOR'] * df_season_composite['WINPCT'] + df_season_composite['LTOR'] * (1 - df_season_composite['WINPCT'])
df_season_composite['OFF_REB_PCT'] = df_season_composite['WORP'] * df_season_composite['WINPCT'] + df_season_composite['LORP'] * (1 - df_season_composite['WINPCT'])
df_season_composite['FT_RATE'] = df_season_composite['WFTAR'] * df_season_composite['WINPCT'] + df_season_composite['LFTAR'] * (1 - df_season_composite['WINPCT'])
df_season_composite['4FACTOR'] = df_season_composite['W4Factor'] * df_season_composite['WINPCT'] + df_season_composite['L4Factor'] * (1 - df_season_composite['WINPCT'])
df_season_composite['OFF_EFF'] = df_season_composite['WOffRtg'] * df_season_composite['WINPCT'] + df_season_composite['LOffRtg'] * (1 - df_season_composite['WINPCT'])
df_season_composite['DEF_EFF'] = df_season_composite['WDefRtg'] * df_season_composite['WINPCT'] + df_season_composite['LDefRtg'] * (1 - df_season_composite['WINPCT'])
df_season_composite['ASSIST_RATIO'] = df_season_composite['WAstR'] * df_season_composite['WINPCT'] + df_season_composite['LAstR'] * (1 - df_season_composite['WINPCT'])
df_season_composite['DEF_REB_PCT'] = df_season_composite['WDRP'] * df_season_composite['WINPCT'] + df_season_composite['LDRP'] * (1 - df_season_composite['WINPCT'])
df_season_composite['FT_PCT'] = df_season_composite['WFTPCT'] * df_season_composite['WINPCT'] + df_season_composite['LFTPCT'] * (1 - df_season_composite['WINPCT'])
df_season_composite.reset_index(inplace = True)
#Kentucky and Wichita State went undefeated, causing problems with the data since we can't calculate the weighted average stats without WINPCT
df_season_composite[df_season_composite['LOSSES'].isnull()]
#Complete hack to fix the data: patch the two undefeated teams by hand
undefeated_rows = [4064, 4211]
stat_pairs = {'PIE': 'WPIE', 'FG_PCT': 'WeFGP', 'TURNOVER_RATE': 'WTOR', 'OFF_REB_PCT': 'WORP',
              'FT_RATE': 'WFTAR', '4FACTOR': 'W4Factor', 'OFF_EFF': 'WOffRtg', 'DEF_EFF': 'WDefRtg',
              'ASSIST_RATIO': 'WAstR', 'DEF_REB_PCT': 'WDRP', 'FT_PCT': 'WFTPCT'}
for row in undefeated_rows:
    df_season_composite.loc[row, 'WINPCT'] = 1
    df_season_composite.loc[row, 'LOSSES'] = 0
    for final_col, win_col in stat_pairs.items():
        df_season_composite.loc[row, final_col] = df_season_composite.loc[row, win_col]
#we only need the final summary stats
df_season_composite.drop(['WINS','WPIE','WeFGP','WTOR','WORP','WFTAR','W4Factor','WOffRtg','WDefRtg','WAstR','WDRP','WFTPCT'], axis=1, inplace=True)
df_season_composite.drop(['LOSSES','LPIE','LeFGP','LTOR','LORP','LFTAR','L4Factor','LOffRtg','LDefRtg','LAstR','LDRP','LFTPCT'], axis=1, inplace=True)
df_season_composite.head()
#a little housekeeping to make easier to graph correlation matrix
columns = list(df_season_composite.columns.values)
columns.pop(columns.index('WINPCT'))
columns.append('WINPCT')
df_season_composite = df_season_composite[columns]
df_season_composite.rename(columns={'WTeamID':'TeamID'}, inplace=True)
df_season_composite.head()
#Strength of Schedule
#We will use the RPI ranking of the teams before entering the tourney to get a measure of strength of schedule.
#Rating Percentage Index (RPI) Formula=.25*(Team’s Winning Percentage)+
#.50*(Opponents’ Average Winning Percentage)+0.25*(Opponents’ Opponents’ Average Winning Percentage)
#The rating percentage index, commonly known as the RPI, is a quantity used to rank sports teams based upon
#a team's wins and losses and its strength of schedule. It is one of the sports rating systems by which NCAA basketball,
#baseball, softball, hockey, soccer, lacrosse, and volleyball teams are ranked.
#The final pre-tournament rankings each year have a RankingDayNum of 133.
#and can thus be used to make predictions of the games from the NCAA® tournament
df_RPI = df_rankings[df_rankings['SystemName'] == 'RPI']
df_RPI_final = df_RPI[df_RPI['RankingDayNum'] == 133]
df_RPI_final.drop(labels=['RankingDayNum', 'SystemName'], inplace=True, axis=1)
df_RPI_final.head()
#Get seeds of teams for all tourney games
df_seeds.head()
# Convert string to an integer
df_seeds['seed_int'] = df_seeds['Seed'].apply( lambda x : int(x[1:3]) )
df_seeds.drop(labels=['Seed'], inplace=True, axis=1)
df_seeds.rename(columns={'seed_int':'Seed'},inplace=True)
df_seeds.head()
#Create dataframe of team features for all seasons
#ranks only start since 2003
df_seeds_final = df_seeds[df_seeds['Season'] > 2002]
#2 step merge
df_tourney_stage = pd.merge(left=df_seeds_final, right=df_RPI_final, how='left', on=['Season', 'TeamID'])
df_tourney_final = pd.merge(left=df_tourney_stage, right=df_season_composite, how='left', on=['Season', 'TeamID'])
df_tourney_final.head()
#I couldn't figure out how to manipulate/calculate the way I wanted, so I exported to Excel and am reimporting it back in here.
#df_tourney_experience = pd.read_csv('tourney_experience_senior_class.csv')
#This indicates the number of tourney games that the senior class would have played in going into this
#year's tourney (basically games played in the prior 3 tourneys). Using it as a gauge of the team's tourney experience.
#All things being equal between two teams, the team with more experience in the tourney I feel would win the game.
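# (Illustrative sketch only, commented out) the same table could in principle be built
# in pandas by counting each team's tourney games over the prior three seasons, e.g.:
#   games = df_tourney.melt(id_vars='Season', value_vars=['WTeamID', 'LTeamID'],
#                           value_name='TeamID').groupby(['Season', 'TeamID']).size()
#   experience_2018 = games.loc[2015:2017].groupby(level='TeamID').sum()
# The precomputed CSV (df_tourney_experience) is used here instead.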
df_tourney_experience.tail()
#this function looks up the number of games for a year/team combination
def get_wins(year, teamid):
# print("year, teamid",year, teamid )
row_id = df_tourney_experience[df_tourney_experience['TeamID'] == teamid]
# print(row_id.shape)
if row_id.shape[0]==0:
# print("year, teamid",year, teamid )
games = 0
else:
row_id = row_id.index[0]
column_id = df_tourney_experience.columns.get_loc(str(year))
games = df_tourney_experience.iloc[row_id,column_id]
return games
#iterate through the dataframe to build a single-column dataframe by calling the function for every (Season, TeamID) pair
result = [get_wins(year, team)
          for year, team in zip(df_tourney_final['Season'], df_tourney_final['TeamID'])]
team_experience = pd.DataFrame(result, columns=['experience'])
team_experience.head()
#merges them together
df_tourney_final = pd.concat((df_tourney_final, team_experience), axis=1)
df_tourney_final.head()
#generate teams in the tourney
df_tourney.drop(labels=['DayNum', 'WScore', 'LScore', 'WLoc', 'NumOT'], inplace=True, axis=1)
df_tourney = pd.merge(left=df_tourney, right=df_seeds, how='left', left_on=['Season', 'WTeamID'], right_on=['Season', 'TeamID'])
df_tourney = pd.merge(left=df_tourney, right=df_seeds, how='left', left_on=['Season', 'LTeamID'], right_on=['Season', 'TeamID'])
df_tourney.drop(labels=['TeamID_x', 'TeamID_y'], inplace=True, axis=1)
df_tourney.rename(columns={'Seed_x':'WSeed', 'Seed_y':'LSeed'},inplace=True)
df_tourney.head()
df_tourney.head()
df_tourney_final.head()
df_tourney_final
df_tourney_final.to_csv("../additional/Mdf_tourney_final_2018.csv")
###Output
_____no_output_____ |
examples/multi-gpu-movielens/01-03-MultiGPU-Download-Convert-ETL-with-NVTabular-Training-with-TensorFlow.ipynb | ###Markdown
Multi-GPU with MovieLens: ETL and Training. Overview: NVIDIA Merlin is an open source framework to accelerate and scale end-to-end recommender system pipelines on GPU. In this notebook, we use NVTabular, Merlin’s ETL component, to scale feature engineering and pre-processing to multiple GPUs and then perform data-parallel distributed training of a neural network on multiple GPUs with TensorFlow, [Horovod](https://horovod.readthedocs.io/en/stable/), and [NCCL](https://developer.nvidia.com/nccl). The pre-requisites for this notebook are to be familiar with NVTabular and its API: - You can read more about NVTabular, its API and specialized dataloaders in [Getting Started with Movielens notebooks](../getting-started-movielens). - You can read more about scaling NVTabular ETL in [Scaling Criteo notebooks](../scaling-criteo). **In this notebook, we will focus only on the new information related to multi-GPU training, so please check out the other notebooks first (if you haven’t already).** Learning objectives: In this notebook, we learn how to scale ETL and deep learning training to multiple GPUs: - Learn to use larger than GPU/host memory datasets for ETL and training - Use multi-GPU or multi-node for ETL with NVTabular - Use the NVTabular dataloader to accelerate TensorFlow pipelines - Scale TensorFlow training with Horovod. Dataset: In this notebook, we use the [MovieLens25M](https://grouplens.org/datasets/movielens/25m/) dataset. It is popular for recommender systems and is used in academic publications. The dataset contains 25M movie ratings for 62,000 movies given by 162,000 users. Many projects use only the user/item/rating information of MovieLens, but the original dataset provides metadata for the movies, as well. Note: We are using the MovieLens 25M dataset in this example for simplicity, although the dataset is not large enough to require multi-GPU training. However, the functionality demonstrated in this notebook can be easily extended to scale recommender pipelines for larger datasets in the same way. Tools: - [Horovod](https://horovod.readthedocs.io/en/stable/) is a distributed deep learning framework that provides tools for multi-GPU optimization. - The [NVIDIA Collective Communication Library (NCCL)](https://developer.nvidia.com/nccl) provides the underlying GPU-based implementations of the [allgather](https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/usage/operations.html#allgather) and [allreduce](https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/usage/operations.html#allreduce) cross-GPU communication operations. Download and Convert: First, we will download and convert the dataset to Parquet. This section is based on [01-Download-Convert.ipynb](../getting-started-movielens/01-Download-Convert.ipynb). Download
###Code
# External dependencies
import os
import pathlib
import cudf # cuDF is an implementation of Pandas-like Dataframe on GPU
from merlin.core.utils import download_file
INPUT_DATA_DIR = os.environ.get(
"INPUT_DATA_DIR", "~/nvt-examples/multigpu-movielens/data/"
)
BASE_DIR = pathlib.Path(INPUT_DATA_DIR).expanduser()
zip_path = pathlib.Path(BASE_DIR, "ml-25m.zip")
download_file(
"http://files.grouplens.org/datasets/movielens/ml-25m.zip", zip_path, redownload=False
)
###Output
downloading ml-25m.zip: 262MB [00:06, 41.9MB/s]
unzipping files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 8/8 [00:04<00:00, 1.74files/s]
###Markdown
Convert
###Code
movies = cudf.read_csv(pathlib.Path(BASE_DIR, "ml-25m", "movies.csv"))
movies["genres"] = movies["genres"].str.split("|")
movies = movies.drop("title", axis=1)
movies.to_parquet(pathlib.Path(BASE_DIR, "ml-25m", "movies_converted.parquet"))
###Output
_____no_output_____
###Markdown
Split into train and validation datasets
###Code
ratings = cudf.read_csv(pathlib.Path(BASE_DIR, "ml-25m", "ratings.csv"))
ratings = ratings.drop("timestamp", axis=1)
# shuffle the dataset
ratings = ratings.sample(len(ratings), replace=False)
# split the train_df as training and validation data sets.
num_valid = int(len(ratings) * 0.2)
train = ratings[:-num_valid]
valid = ratings[-num_valid:]
train.to_parquet(pathlib.Path(BASE_DIR, "train.parquet"))
valid.to_parquet(pathlib.Path(BASE_DIR, "valid.parquet"))
###Output
_____no_output_____
###Markdown
ETL with NVTabular. We finished downloading and converting the dataset. We will preprocess and engineer features with NVTabular on multiple GPUs. You can read more about NVTabular's features and API in [getting-started-movielens/02-ETL-with-NVTabular.ipynb](../getting-started-movielens/02-ETL-with-NVTabular.ipynb), and about scaling NVTabular ETL to multiple GPUs in [scaling-criteo/02-ETL-with-NVTabular.ipynb](../scaling-criteo/02-ETL-with-NVTabular.ipynb). Deploy a Distributed-Dask Cluster. This section is based on [scaling-criteo/02-ETL-with-NVTabular.ipynb](../scaling-criteo/02-ETL-with-NVTabular.ipynb) and [multi-gpu-toy-example/multi-gpu_dask.ipynb](../multi-gpu-toy-example/multi-gpu_dask.ipynb).
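Once the client in the cells below is up, a quick sanity check (sketch) is to confirm how many GPU workers actually joined:
```python
# Illustrative: count the Dask-CUDA workers connected to the cluster
print(len(client.scheduler_info()["workers"]), "workers connected")
```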
###Code
# Standard Libraries
import shutil
# External Dependencies
import cupy as cp
import numpy as np
import cudf
import dask_cudf
from dask_cuda import LocalCUDACluster
from dask.distributed import Client
from dask.utils import parse_bytes
from dask.delayed import delayed
import rmm
# NVTabular
import nvtabular as nvt
import nvtabular.ops as ops
from merlin.io import Shuffle
from merlin.core.utils import device_mem_size
# define some information about where to get our data
input_path = pathlib.Path(BASE_DIR, "converted", "movielens")
dask_workdir = pathlib.Path(BASE_DIR, "test_dask", "workdir")
output_path = pathlib.Path(BASE_DIR, "test_dask", "output")
stats_path = pathlib.Path(BASE_DIR, "test_dask", "stats")
# Make sure we have a clean worker space for Dask
if pathlib.Path.is_dir(dask_workdir):
shutil.rmtree(dask_workdir)
dask_workdir.mkdir(parents=True)
# Make sure we have a clean stats space for Dask
if pathlib.Path.is_dir(stats_path):
shutil.rmtree(stats_path)
stats_path.mkdir(parents=True)
# Make sure we have a clean output path
if pathlib.Path.is_dir(output_path):
shutil.rmtree(output_path)
output_path.mkdir(parents=True)
# Get device memory capacity
capacity = device_mem_size(kind="total")
# Deploy a Single-Machine Multi-GPU Cluster
protocol = "tcp" # "tcp" or "ucx"
visible_devices = "0,1"  # Select devices to place workers
device_spill_frac = 0.5 # Spill GPU-Worker memory to host at this limit.
# Reduce if spilling fails to prevent
# device memory errors.
cluster = None # (Optional) Specify existing scheduler port
if cluster is None:
cluster = LocalCUDACluster(
protocol=protocol,
CUDA_VISIBLE_DEVICES=visible_devices,
local_directory=dask_workdir,
device_memory_limit=capacity * device_spill_frac,
)
# Create the distributed client
client = Client(cluster)
client
# Initialize RMM pool on ALL workers
def _rmm_pool():
rmm.reinitialize(
pool_allocator=True,
initial_pool_size=None, # Use default size
)
client.run(_rmm_pool)
###Output
_____no_output_____
###Markdown
Defining our Preprocessing Pipeline. This subsection is based on [getting-started-movielens/02-ETL-with-NVTabular.ipynb](../getting-started-movielens/02-ETL-with-NVTabular.ipynb).
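After `workflow.fit(train_iter)` runs below, the embedding cardinalities learned by `Categorify` can be inspected; the training script later calls the same function on the reloaded workflow. A sketch:
```python
# Illustrative: inspect single-hot and multi-hot embedding table shapes after fitting
single_hot, multi_hot = nvt.ops.get_embedding_sizes(workflow)
print(single_hot)
print(multi_hot)
```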
###Code
movies = cudf.read_parquet(pathlib.Path(BASE_DIR, "ml-25m", "movies_converted.parquet"))
joined = ["userId", "movieId"] >> nvt.ops.JoinExternal(movies, on=["movieId"])
cat_features = joined >> nvt.ops.Categorify()
ratings = nvt.ColumnSelector(["rating"]) >> nvt.ops.LambdaOp(lambda col: (col > 3).astype("int8"), dtype=np.int8)
output = cat_features + ratings
workflow = nvt.Workflow(output)
!rm -rf $BASE_DIR/train
!rm -rf $BASE_DIR/valid
train_iter = nvt.Dataset([str(pathlib.Path(BASE_DIR, "train.parquet"))], part_size="100MB")
valid_iter = nvt.Dataset([str(pathlib.Path(BASE_DIR, "valid.parquet"))], part_size="100MB")
workflow.fit(train_iter)
workflow.save(str(pathlib.Path(BASE_DIR, "workflow")))
shuffle = Shuffle.PER_WORKER # Shuffle algorithm
out_files_per_proc = 4 # Number of output files per worker
workflow.transform(train_iter).to_parquet(
output_path=pathlib.Path(BASE_DIR, "train"),
shuffle=shuffle,
out_files_per_proc=out_files_per_proc,
)
workflow.transform(valid_iter).to_parquet(
output_path=pathlib.Path(BASE_DIR, "valid"),
shuffle=shuffle,
out_files_per_proc=out_files_per_proc,
)
client.shutdown()
cluster.close()
###Output
/usr/local/lib/python3.8/dist-packages/distributed/worker.py:3560: UserWarning: Large object of size 1.90 MiB detected in task graph:
("('read-parquet-d36dd514a8adc53a9a91115c9be1d852' ... 1115c9be1d852')
Consider scattering large objects ahead of time
with client.scatter to reduce scheduler burden and
keep data on workers
future = client.submit(func, big_data) # bad
big_future = client.scatter(big_data) # good
future = client.submit(func, big_future) # good
warnings.warn(
###Markdown
Training with TensorFlow on multiGPUsIn this section, we will train a TensorFlow model with multi-GPU support. In the NVTabular v0.5 release, we added multi-GPU support for NVTabular dataloaders. We will modify the [getting-started-movielens/03-Training-with-TF.ipynb](../getting-started-movielens/03-Training-with-TF.ipynb) to use multiple GPUs. Please review that notebook, if you have questions about the general functionality of the NVTabular dataloaders or the neural network architecture. NVTabular dataloader for TensorFlowWe’ve identified that the dataloader is one bottleneck in deep learning recommender systems when training pipelines with TensorFlow. The normal TensorFlow dataloaders cannot prepare the next training batches fast enough and therefore, the GPU is not fully utilized. We developed a highly customized tabular dataloader for accelerating existing pipelines in TensorFlow. In our experiments, we see a speed-up by 9x of the same training workflow with NVTabular dataloader. NVTabular dataloader’s features are:- removing bottleneck of item-by-item dataloading- enabling larger than memory dataset by streaming from disk- reading data directly into GPU memory and remove CPU-GPU communication- preparing batch asynchronously in GPU to avoid CPU-GPU communication- supporting commonly used .parquet format- easy integration into existing TensorFlow pipelines by using similar API - works with tf.keras models- **supporting multi-GPU training with Horovod**You can find more information on the dataloaders in our [blogpost](https://medium.com/nvidia-merlin/training-deep-learning-based-recommender-systems-9x-faster-with-tensorflow-cc5a2572ea49). Using Horovod with Tensorflow and NVTabularThe training script below is based on [getting-started-movielens/03-Training-with-TF.ipynb](../getting-started-movielens/03-Training-with-TF.ipynb), with a few important changes:- We provide several additional parameters to the `KerasSequenceLoader` class, including the total number of workers `hvd.size()`, the current worker's id number `hvd.rank()`, and a function for generating random seeds `seed_fn()`. ```python train_dataset_tf = KerasSequenceLoader( ... global_size=hvd.size(), global_rank=hvd.rank(), seed_fn=seed_fn, )```- The seed function uses Horovod to collectively generate a random seed that's shared by all workers so that they can each shuffle the dataset in a consistent way and select partitions to work on without overlap. The seed function is called by the dataloader during the shuffling process at the beginning of each epoch:```python def seed_fn(): min_int, max_int = tf.int32.limits max_rand = max_int // hvd.size() Generate a seed fragment on each worker seed_fragment = cupy.random.randint(0, max_rand).get() Aggregate seed fragments from all Horovod workers seed_tensor = tf.constant(seed_fragment) reduced_seed = hvd.allreduce(seed_tensor, name="shuffle_seed", op=hvd.mpi_ops.Sum) return reduced_seed % max_rand```- We wrap the TensorFlow optimizer with Horovod's `DistributedOptimizer` class and scale the learning rate by the number of workers:```python opt = tf.keras.optimizers.SGD(0.01 * hvd.size()) opt = hvd.DistributedOptimizer(opt)```- We wrap the TensorFlow gradient tape with Horovod's `DistributedGradientTape` class:```python with tf.GradientTape() as tape: ... 
tape = hvd.DistributedGradientTape(tape, sparse_as_dense=True)```- After the first batch, we broadcast the model and optimizer parameters to all workers with Horovod:```python Note: broadcast should be done after the first gradient step to ensure optimizer initialization. if first_batch: hvd.broadcast_variables(model.variables, root_rank=0) hvd.broadcast_variables(opt.variables(), root_rank=0)```- We only save checkpoints from the first worker to avoid multiple workers trying to write to the same files:```python if hvd.rank() == 0: checkpoint.save(checkpoint_dir)```The rest of the script is the same as the MovieLens example in [getting-started-movielens/03-Training-with-TF.ipynb](../getting-started-movielens/03-Training-with-TF.ipynb). In order to run it with Horovod, we first need to write it to a file.
###Code
%%writefile './tf_trainer.py'
# External dependencies
import argparse
import glob
import os
import cupy
# we can control how much memory to give tensorflow with this environment variable
# IMPORTANT: make sure you do this before you initialize TF's runtime, otherwise
# TF will have claimed all free GPU memory
os.environ["TF_MEMORY_ALLOCATION"] = "0.3" # fraction of free memory
import nvtabular as nvt # noqa: E402 isort:skip
from nvtabular.framework_utils.tensorflow import layers # noqa: E402 isort:skip
from nvtabular.loader.tensorflow import KerasSequenceLoader # noqa: E402 isort:skip
import tensorflow as tf # noqa: E402 isort:skip
import horovod.tensorflow as hvd # noqa: E402 isort:skip
parser = argparse.ArgumentParser(description="Process some integers.")
parser.add_argument("--dir_in", default=None, help="Input directory")
parser.add_argument("--batch_size", default=None, help="batch size")
parser.add_argument("--cats", default=None, help="categorical columns")
parser.add_argument("--cats_mh", default=None, help="categorical multihot columns")
parser.add_argument("--conts", default=None, help="continuous columns")
parser.add_argument("--labels", default=None, help="continuous columns")
args = parser.parse_args()
BASE_DIR = args.dir_in or "./data/"
BATCH_SIZE = int(args.batch_size or 16384) # Batch Size
CATEGORICAL_COLUMNS = args.cats or ["movieId", "userId"] # Single-hot
CATEGORICAL_MH_COLUMNS = args.cats_mh or ["genres"] # Multi-hot
NUMERIC_COLUMNS = args.conts or []
TRAIN_PATHS = sorted(
glob.glob(os.path.join(BASE_DIR, "train/*.parquet"))
) # Output from ETL-with-NVTabular
hvd.init()
# Seed with system randomness (or a static seed)
cupy.random.seed(None)
def seed_fn():
"""
Generate consistent dataloader shuffle seeds across workers
Reseeds each worker's dataloader each epoch to get fresh a shuffle
that's consistent across workers.
"""
min_int, max_int = tf.int32.limits
max_rand = max_int // hvd.size()
# Generate a seed fragment on each worker
seed_fragment = cupy.random.randint(0, max_rand).get()
# Aggregate seed fragments from all Horovod workers
seed_tensor = tf.constant(seed_fragment)
reduced_seed = hvd.allreduce(seed_tensor, name="shuffle_seed", op=hvd.mpi_ops.Sum)
return reduced_seed % max_rand
proc = nvt.Workflow.load(os.path.join(BASE_DIR, "workflow/"))
EMBEDDING_TABLE_SHAPES, MH_EMBEDDING_TABLE_SHAPES = nvt.ops.get_embedding_sizes(proc)
EMBEDDING_TABLE_SHAPES.update(MH_EMBEDDING_TABLE_SHAPES)
train_dataset_tf = KerasSequenceLoader(
TRAIN_PATHS, # you could also use a glob pattern
batch_size=BATCH_SIZE,
label_names=["rating"],
cat_names=CATEGORICAL_COLUMNS + CATEGORICAL_MH_COLUMNS,
cont_names=NUMERIC_COLUMNS,
engine="parquet",
shuffle=True,
buffer_size=0.06, # how many batches to load at once
parts_per_chunk=1,
global_size=hvd.size(),
global_rank=hvd.rank(),
seed_fn=seed_fn,
)
inputs = {} # tf.keras.Input placeholders for each feature to be used
emb_layers = [] # output of all embedding layers, which will be concatenated
for col in CATEGORICAL_COLUMNS:
inputs[col] = tf.keras.Input(name=col, dtype=tf.int32, shape=(1,))
# Note that we need two input tensors for multi-hot categorical features
for col in CATEGORICAL_MH_COLUMNS:
inputs[col] = \
(tf.keras.Input(name=f"{col}__values", dtype=tf.int64, shape=(1,)),
tf.keras.Input(name=f"{col}__nnzs", dtype=tf.int64, shape=(1,)))
for col in CATEGORICAL_COLUMNS + CATEGORICAL_MH_COLUMNS:
emb_layers.append(
tf.feature_column.embedding_column(
tf.feature_column.categorical_column_with_identity(
col, EMBEDDING_TABLE_SHAPES[col][0]
), # Input dimension (vocab size)
EMBEDDING_TABLE_SHAPES[col][1], # Embedding output dimension
)
)
emb_layer = layers.DenseFeatures(emb_layers)
x_emb_output = emb_layer(inputs)
x = tf.keras.layers.Dense(128, activation="relu")(x_emb_output)
x = tf.keras.layers.Dense(128, activation="relu")(x)
x = tf.keras.layers.Dense(128, activation="relu")(x)
x = tf.keras.layers.Dense(1, activation="sigmoid")(x)
model = tf.keras.Model(inputs=inputs, outputs=x)
loss = tf.losses.BinaryCrossentropy()
opt = tf.keras.optimizers.SGD(0.01 * hvd.size())
opt = hvd.DistributedOptimizer(opt)
checkpoint_dir = "./checkpoints"
checkpoint = tf.train.Checkpoint(model=model, optimizer=opt)
@tf.function(experimental_relax_shapes=True)
def training_step(examples, labels, first_batch):
with tf.GradientTape() as tape:
probs = model(examples, training=True)
loss_value = loss(labels, probs)
# Horovod: add Horovod Distributed GradientTape.
tape = hvd.DistributedGradientTape(tape, sparse_as_dense=True)
grads = tape.gradient(loss_value, model.trainable_variables)
opt.apply_gradients(zip(grads, model.trainable_variables))
# Horovod: broadcast initial variable states from rank 0 to all other processes.
# This is necessary to ensure consistent initialization of all workers when
# training is started with random weights or restored from a checkpoint.
#
# Note: broadcast should be done after the first gradient step to ensure optimizer
# initialization.
if first_batch:
hvd.broadcast_variables(model.variables, root_rank=0)
hvd.broadcast_variables(opt.variables(), root_rank=0)
return loss_value
# Horovod: adjust number of steps based on number of GPUs.
for batch, (examples, labels) in enumerate(train_dataset_tf):
loss_value = training_step(examples, labels, batch == 0)
if batch % 100 == 0 and hvd.local_rank() == 0:
print("Step #%d\tLoss: %.6f" % (batch, loss_value))
hvd.join()
# Horovod: save checkpoints only on worker 0 to prevent other workers from
# corrupting it.
if hvd.rank() == 0:
checkpoint.save(checkpoint_dir)
###Output
Overwriting ./tf_trainer.py
###Markdown
We'll also need a small wrapper script to check environment variables set by the Horovod runner to see which rank we'll be assigned, in order to set CUDA_VISIBLE_DEVICES properly for each worker:
###Code
%%writefile './hvd_wrapper.sh'
#!/bin/bash
# Get local process ID from OpenMPI or alternatively from SLURM
if [ -z "${CUDA_VISIBLE_DEVICES:-}" ]; then
if [ -n "${OMPI_COMM_WORLD_LOCAL_RANK:-}" ]; then
LOCAL_RANK="${OMPI_COMM_WORLD_LOCAL_RANK}"
elif [ -n "${SLURM_LOCALID:-}" ]; then
LOCAL_RANK="${SLURM_LOCALID}"
fi
export CUDA_VISIBLE_DEVICES=${LOCAL_RANK}
fi
exec "$@"
###Output
Overwriting ./hvd_wrapper.sh
###Markdown
OpenMPI and Slurm are tools for running distributed compute jobs. In this example, we’re using OpenMPI, but depending on the environment you run distributed training jobs in, you may need to check slightly different environment variables to find the total number of workers (global size) and each process’s worker number (global rank). Why do we have to check environment variables instead of using `hvd.rank()` and `hvd.local_rank()`? NVTabular does some GPU configuration when imported and needs to be imported before Horovod to avoid conflicts. We need to set GPU visibility before NVTabular is imported (when Horovod isn’t yet available) so that multiple processes don’t each try to configure all the GPUs, so as a workaround, we “cheat” and peek at environment variables set by horovodrun to decide which GPU each process should use.
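A hypothetical Python-side equivalent of the wrapper script (the environment variable names are the same ones `hvd_wrapper.sh` checks; this sketch is not part of the training script):
```python
import os

# Pin one GPU per process from the launcher's variables *before* importing NVTabular or Horovod
local_rank = os.environ.get("OMPI_COMM_WORLD_LOCAL_RANK",
                            os.environ.get("SLURM_LOCALID", "0"))
os.environ.setdefault("CUDA_VISIBLE_DEVICES", local_rank)
```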
###Code
!horovodrun -np 2 sh hvd_wrapper.sh python tf_trainer.py --dir_in $BASE_DIR --batch_size 16384
###Output
2021-06-04 16:39:06.000313: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,0]<stderr>:2021-06-04 16:39:08.979997: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,1]<stderr>:2021-06-04 16:39:09.064191: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,0]<stderr>:2021-06-04 16:39:10.138200: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
[1,0]<stderr>:2021-06-04 16:39:10.138376: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcuda.so.1
[1,0]<stderr>:2021-06-04 16:39:10.139777: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Found device 0 with properties:
[1,0]<stderr>:pciBusID: 0000:0b:00.0 name: GeForce GTX 1080 Ti computeCapability: 6.1
[1,0]<stderr>:coreClock: 1.582GHz coreCount: 28 deviceMemorySize: 10.91GiB deviceMemoryBandwidth: 451.17GiB/s
[1,0]<stderr>:2021-06-04 16:39:10.139823: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,0]<stderr>:2021-06-04 16:39:10.139907: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
[1,0]<stderr>:2021-06-04 16:39:10.139949: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
[1,0]<stderr>:2021-06-04 16:39:10.139990: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
[1,0]<stderr>:2021-06-04 16:39:10.140029: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
[1,0]<stderr>:2021-06-04 16:39:10.140084: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.11
[1,0]<stderr>:2021-06-04 16:39:10.140123: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
[1,0]<stderr>:2021-06-04 16:39:10.140169: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
[1,0]<stderr>:2021-06-04 16:39:10.144021: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1888] Adding visible gpu devices: 0
[1,1]<stderr>:2021-06-04 16:39:10.367414: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
[1,1]<stderr>:2021-06-04 16:39:10.367496: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcuda.so.1
[1,1]<stderr>:2021-06-04 16:39:10.368324: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Found device 0 with properties:
[1,1]<stderr>:pciBusID: 0000:42:00.0 name: GeForce GTX 1080 Ti computeCapability: 6.1
[1,1]<stderr>:coreClock: 1.582GHz coreCount: 28 deviceMemorySize: 10.92GiB deviceMemoryBandwidth: 451.17GiB/s
[1,1]<stderr>:2021-06-04 16:39:10.368347: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,1]<stderr>:2021-06-04 16:39:10.368396: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
[1,1]<stderr>:2021-06-04 16:39:10.368424: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
[1,1]<stderr>:2021-06-04 16:39:10.368451: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
[1,1]<stderr>:2021-06-04 16:39:10.368475: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
[1,1]<stderr>:2021-06-04 16:39:10.368512: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.11
[1,1]<stderr>:2021-06-04 16:39:10.368537: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
[1,1]<stderr>:2021-06-04 16:39:10.368573: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
[1,1]<stderr>:2021-06-04 16:39:10.369841: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1888] Adding visible gpu devices: 0
[1,1]<stderr>:2021-06-04 16:39:11.730033: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
[1,1]<stderr>:2021-06-04 16:39:11.730907: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Found device 0 with properties:
[1,1]<stderr>:pciBusID: 0000:42:00.0 name: GeForce GTX 1080 Ti computeCapability: 6.1
[1,1]<stderr>:coreClock: 1.582GHz coreCount: 28 deviceMemorySize: 10.92GiB deviceMemoryBandwidth: 451.17GiB/s
[1,1]<stderr>:2021-06-04 16:39:11.730990: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,1]<stderr>:2021-06-04 16:39:11.731005: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
[1,1]<stderr>:2021-06-04 16:39:11.731018: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
[1,1]<stderr>:2021-06-04 16:39:11.731029: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
[1,1]<stderr>:2021-06-04 16:39:11.731038: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
[1,1]<stderr>:2021-06-04 16:39:11.731049: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.11
[1,1]<stderr>:2021-06-04 16:39:11.731059: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
[1,1]<stderr>:2021-06-04 16:39:11.731078: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
[1,1]<stderr>:2021-06-04 16:39:11.732312: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1888] Adding visible gpu devices: 0
[1,1]<stderr>:2021-06-04 16:39:11.732350: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,1]<stderr>:2021-06-04 16:39:11.732473: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1287] Device interconnect StreamExecutor with strength 1 edge matrix:
[1,1]<stderr>:2021-06-04 16:39:11.732487: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1293] 0
[1,1]<stderr>:2021-06-04 16:39:11.732493: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1306] 0: N
[1,1]<stderr>:2021-06-04 16:39:11.734431: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3352 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:42:00.0, compute capability: 6.1)
[1,0]<stderr>:2021-06-04 16:39:11.821346: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
[1,0]<stderr>:2021-06-04 16:39:11.822270: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Found device 0 with properties:
[1,0]<stderr>:pciBusID: 0000:0b:00.0 name: GeForce GTX 1080 Ti computeCapability: 6.1
[1,0]<stderr>:coreClock: 1.582GHz coreCount: 28 deviceMemorySize: 10.91GiB deviceMemoryBandwidth: 451.17GiB/s
[1,0]<stderr>:2021-06-04 16:39:11.822360: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,0]<stderr>:2021-06-04 16:39:11.822376: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
[1,0]<stderr>:2021-06-04 16:39:11.822389: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
[1,0]<stderr>:2021-06-04 16:39:11.822400: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
[1,0]<stderr>:2021-06-04 16:39:11.822411: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
[1,0]<stderr>:2021-06-04 16:39:11.822425: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.11
[1,0]<stderr>:2021-06-04 16:39:11.822434: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
[1,0]<stderr>:2021-06-04 16:39:11.822454: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
[1,0]<stderr>:2021-06-04 16:39:11.823684: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1888] Adding visible gpu devices: 0
[1,0]<stderr>:2021-06-04 16:39:11.823731: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,0]<stderr>:2021-06-04 16:39:11.823868: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1287] Device interconnect StreamExecutor with strength 1 edge matrix:
[1,0]<stderr>:2021-06-04 16:39:11.823881: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1293] 0
[1,0]<stderr>:2021-06-04 16:39:11.823888: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1306] 0: N
[1,0]<stderr>:2021-06-04 16:39:11.825784: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3352 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:0b:00.0, compute capability: 6.1)
[1,0]<stderr>:2021-06-04 16:39:17.634485: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:116] None of the MLIR optimization passes are enabled (registered 2)
[1,0]<stderr>:2021-06-04 16:39:17.668915: I tensorflow/core/platform/profile_utils/cpu_utils.cc:112] CPU Frequency: 2993950000 Hz
[1,1]<stderr>:2021-06-04 16:39:17.694128: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:116] None of the MLIR optimization passes are enabled (registered 2)
[1,1]<stderr>:2021-06-04 16:39:17.703326: I tensorflow/core/platform/profile_utils/cpu_utils.cc:112] CPU Frequency: 2993950000 Hz
[1,0]<stderr>:2021-06-04 16:39:17.780825: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
[1,1]<stderr>:2021-06-04 16:39:17.810644: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
[1,0]<stderr>:2021-06-04 16:39:17.984966: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
[1,1]<stderr>:2021-06-04 16:39:18.012113: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
[1,0]<stdout>:Step #0 Loss: 0.695094
[1,0]<stdout>:Step #100 Loss: 0.669580
[1,0]<stdout>:Step #200 Loss: 0.661098
[1,0]<stdout>:Step #300 Loss: 0.660680
[1,0]<stdout>:Step #400 Loss: 0.658633
[1,0]<stdout>:Step #500 Loss: 0.660251
[1,0]<stdout>:Step #600 Loss: 0.657047
###Markdown
Multi-GPU with MovieLens: ETL and Training OverviewNVIDIA Merlin is an open source framework to accelerate and scale end-to-end recommender system pipelines on GPUs. In this notebook, we use NVTabular, Merlin’s ETL component, to scale feature engineering and pre-processing to multiple GPUs and then perform data-parallel distributed training of a neural network on multiple GPUs with TensorFlow, [Horovod](https://horovod.readthedocs.io/en/stable/), and [NCCL](https://developer.nvidia.com/nccl).The pre-requisites for this notebook are to be familiar with NVTabular and its API:- You can read more about NVTabular, its API and specialized dataloaders in the [Getting Started with Movielens notebooks](../getting-started-movielens).- You can read more about scaling NVTabular ETL in the [Scaling Criteo notebooks](../scaling-criteo).**In this notebook, we will focus only on the new information related to multi-GPU training, so please check out the other notebooks first (if you haven’t already).** Learning objectivesIn this notebook, we learn how to scale ETL and deep learning training to multiple GPUs:- Learn to use larger-than-GPU/host-memory datasets for ETL and training- Use multi-GPU or multi-node ETL with NVTabular- Use the NVTabular dataloader to accelerate TensorFlow pipelines- Scale TensorFlow training with Horovod DatasetIn this notebook, we use the [MovieLens25M](https://grouplens.org/datasets/movielens/25m/) dataset. It is popular for recommender systems and is used in academic publications. The dataset contains 25M movie ratings for 62,000 movies given by 162,000 users. Many projects use only the user/item/rating information of MovieLens, but the original dataset provides metadata for the movies as well.Note: We are using the MovieLens 25M dataset in this example for simplicity, although the dataset is not large enough to require multi-GPU training. However, the functionality demonstrated in this notebook can be easily extended to scale recommender pipelines for larger datasets in the same way. Tools- [Horovod](https://horovod.readthedocs.io/en/stable/) is a distributed deep learning framework that provides tools for multi-GPU optimization.- The [NVIDIA Collective Communication Library (NCCL)](https://developer.nvidia.com/nccl) provides the underlying GPU-based implementations of the [allgather](https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/usage/operations.html#allgather) and [allreduce](https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/usage/operations.html#allreduce) cross-GPU communication operations. Download and ConvertFirst, we will download and convert the dataset to Parquet. This section is based on [01-Download-Convert.ipynb](../getting-started-movielens/01-Download-Convert.ipynb). Download
###Code
# External dependencies
import os
import pathlib
import cudf # cuDF is an implementation of Pandas-like Dataframe on GPU
from nvtabular.utils import download_file
INPUT_DATA_DIR = os.environ.get(
"INPUT_DATA_DIR", "~/nvt-examples/multigpu-movielens/data/"
)
BASE_DIR = pathlib.Path(INPUT_DATA_DIR).expanduser()
zip_path = pathlib.Path(BASE_DIR, "ml-25m.zip")
download_file(
"http://files.grouplens.org/datasets/movielens/ml-25m.zip", zip_path, redownload=False
)
###Output
downloading ml-25m.zip: 262MB [00:06, 41.9MB/s]
unzipping files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 8/8 [00:04<00:00, 1.74files/s]
###Markdown
Convert
###Code
movies = cudf.read_csv(pathlib.Path(BASE_DIR, "ml-25m", "movies.csv"))
movies["genres"] = movies["genres"].str.split("|")
movies = movies.drop("title", axis=1)
movies.to_parquet(pathlib.Path(BASE_DIR, "ml-25m", "movies_converted.parquet"))
###Output
_____no_output_____
###Markdown
Split into train and validation datasets
###Code
ratings = cudf.read_csv(pathlib.Path(BASE_DIR, "ml-25m", "ratings.csv"))
ratings = ratings.drop("timestamp", axis=1)
# shuffle the dataset
ratings = ratings.sample(len(ratings), replace=False)
# split the ratings into training and validation sets.
num_valid = int(len(ratings) * 0.2)
train = ratings[:-num_valid]
valid = ratings[-num_valid:]
train.to_parquet(pathlib.Path(BASE_DIR, "train.parquet"))
valid.to_parquet(pathlib.Path(BASE_DIR, "valid.parquet"))
###Output
_____no_output_____
###Markdown
ETL with NVTabularWe finished downloading and converting the dataset. We will preprocess and engineer features with NVTabular on multiple GPUs. You can read more:- about NVTabular's features and API in [getting-started-movielens/02-ETL-with-NVTabular.ipynb](../getting-started-movielens/02-ETL-with-NVTabular.ipynb).- about scaling NVTabular ETL to multiple GPUs in [scaling-criteo/02-ETL-with-NVTabular.ipynb](../scaling-criteo/02-ETL-with-NVTabular.ipynb). Deploy a Distributed-Dask ClusterThis section is based on [scaling-criteo/02-ETL-with-NVTabular.ipynb](../scaling-criteo/02-ETL-with-NVTabular.ipynb) and [multi-gpu-toy-example/multi-gpu_dask.ipynb](../multi-gpu-toy-example/multi-gpu_dask.ipynb).
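If a Dask-CUDA cluster is already running (for example, one started with the `dask-scheduler` and `dask-cuda-worker` command line tools), you can attach to it instead of creating a `LocalCUDACluster` in the next cell. A minimal sketch, with a placeholder scheduler address:
```python
from dask.distributed import Client

# Hypothetical address of an already-running Dask scheduler.
client = Client("tcp://scheduler-host:8786")
client.wait_for_workers(2)  # block until at least two workers have registered
```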
###Code
# Standard Libraries
import shutil
# External Dependencies
import cupy as cp
import numpy as np
import cudf
import dask_cudf
from dask_cuda import LocalCUDACluster
from dask.distributed import Client
from dask.utils import parse_bytes
from dask.delayed import delayed
import rmm
# NVTabular
import nvtabular as nvt
import nvtabular.ops as ops
from nvtabular.io import Shuffle
from nvtabular.utils import device_mem_size
# define some information about where to get our data
input_path = pathlib.Path(BASE_DIR, "converted", "movielens")
dask_workdir = pathlib.Path(BASE_DIR, "test_dask", "workdir")
output_path = pathlib.Path(BASE_DIR, "test_dask", "output")
stats_path = pathlib.Path(BASE_DIR, "test_dask", "stats")
# Make sure we have a clean worker space for Dask
if pathlib.Path.is_dir(dask_workdir):
shutil.rmtree(dask_workdir)
dask_workdir.mkdir(parents=True)
# Make sure we have a clean stats space for Dask
if pathlib.Path.is_dir(stats_path):
shutil.rmtree(stats_path)
stats_path.mkdir(parents=True)
# Make sure we have a clean output path
if pathlib.Path.is_dir(output_path):
shutil.rmtree(output_path)
output_path.mkdir(parents=True)
# Get device memory capacity
capacity = device_mem_size(kind="total")
# Deploy a Single-Machine Multi-GPU Cluster
protocol = "tcp" # "tcp" or "ucx"
visible_devices = "0,1"  # Select devices to place workers
device_spill_frac = 0.5 # Spill GPU-Worker memory to host at this limit.
# Reduce if spilling fails to prevent
# device memory errors.
cluster = None # (Optional) Specify existing scheduler port
if cluster is None:
cluster = LocalCUDACluster(
protocol=protocol,
CUDA_VISIBLE_DEVICES=visible_devices,
local_directory=dask_workdir,
device_memory_limit=capacity * device_spill_frac,
)
# Create the distributed client
client = Client(cluster)
client
# Initialize RMM pool on ALL workers
def _rmm_pool():
rmm.reinitialize(
pool_allocator=True,
initial_pool_size=None, # Use default size
)
client.run(_rmm_pool)
###Output
_____no_output_____
###Markdown
Defining our Preprocessing PipelineThis subsection is based on [getting-started-movielens/02-ETL-with-NVTabular.ipynb](../getting-started-movielens/02-ETL-with-NVTabular.ipynb).
###Code
movies = cudf.read_parquet(pathlib.Path(BASE_DIR, "ml-25m", "movies_converted.parquet"))
joined = ["userId", "movieId"] >> nvt.ops.JoinExternal(movies, on=["movieId"])
cat_features = joined >> nvt.ops.Categorify()
ratings = nvt.ColumnSelector(["rating"]) >> nvt.ops.LambdaOp(lambda col: (col > 3).astype("int8"), dtype=np.int8)
output = cat_features + ratings
workflow = nvt.Workflow(output)
!rm -rf $BASE_DIR/train
!rm -rf $BASE_DIR/valid
train_iter = nvt.Dataset([str(pathlib.Path(BASE_DIR, "train.parquet"))], part_size="100MB")
valid_iter = nvt.Dataset([str(pathlib.Path(BASE_DIR, "valid.parquet"))], part_size="100MB")
workflow.fit(train_iter)
workflow.save(str(pathlib.Path(BASE_DIR, "workflow")))
shuffle = Shuffle.PER_WORKER # Shuffle algorithm
out_files_per_proc = 4 # Number of output files per worker
workflow.transform(train_iter).to_parquet(
output_path=pathlib.Path(BASE_DIR, "train"),
shuffle=shuffle,
out_files_per_proc=out_files_per_proc,
)
workflow.transform(valid_iter).to_parquet(
output_path=pathlib.Path(BASE_DIR, "valid"),
shuffle=shuffle,
out_files_per_proc=out_files_per_proc,
)
client.shutdown()
cluster.close()
###Output
/usr/local/lib/python3.8/dist-packages/distributed/worker.py:3560: UserWarning: Large object of size 1.90 MiB detected in task graph:
("('read-parquet-d36dd514a8adc53a9a91115c9be1d852' ... 1115c9be1d852')
Consider scattering large objects ahead of time
with client.scatter to reduce scheduler burden and
keep data on workers
future = client.submit(func, big_data) # bad
big_future = client.scatter(big_data) # good
future = client.submit(func, big_future) # good
warnings.warn(
###Markdown
Training with TensorFlow on multiGPUsIn this section, we will train a TensorFlow model with multi-GPU support. In the NVTabular v0.5 release, we added multi-GPU support for NVTabular dataloaders. We will modify the [getting-started-movielens/03-Training-with-TF.ipynb](../getting-started-movielens/03-Training-with-TF.ipynb) notebook to use multiple GPUs. Please review that notebook if you have questions about the general functionality of the NVTabular dataloaders or the neural network architecture. NVTabular dataloader for TensorFlowWe’ve identified that the dataloader is one bottleneck in deep learning recommender systems when training pipelines with TensorFlow. The normal TensorFlow dataloaders cannot prepare the next training batches fast enough, and therefore the GPU is not fully utilized. We developed a highly customized tabular dataloader for accelerating existing pipelines in TensorFlow. In our experiments, we see a 9x speed-up of the same training workflow with the NVTabular dataloader. The NVTabular dataloader’s features are:- removing the bottleneck of item-by-item dataloading- enabling larger-than-memory datasets by streaming from disk- reading data directly into GPU memory and removing CPU-GPU communication- preparing batches asynchronously on the GPU to avoid CPU-GPU communication- supporting the commonly used .parquet format- easy integration into existing TensorFlow pipelines by using a similar API - works with tf.keras models- **supporting multi-GPU training with Horovod**You can find more information on the dataloaders in our [blogpost](https://medium.com/nvidia-merlin/training-deep-learning-based-recommender-systems-9x-faster-with-tensorflow-cc5a2572ea49). Using Horovod with TensorFlow and NVTabularThe training script below is based on [getting-started-movielens/03-Training-with-TF.ipynb](../getting-started-movielens/03-Training-with-TF.ipynb), with a few important changes:- We provide several additional parameters to the `KerasSequenceLoader` class, including the total number of workers `hvd.size()`, the current worker's id number `hvd.rank()`, and a function for generating random seeds `seed_fn()`. ```python train_dataset_tf = KerasSequenceLoader( ... global_size=hvd.size(), global_rank=hvd.rank(), seed_fn=seed_fn, )```- The seed function uses Horovod to collectively generate a random seed that's shared by all workers so that they can each shuffle the dataset in a consistent way and select partitions to work on without overlap. The seed function is called by the dataloader during the shuffling process at the beginning of each epoch:```python def seed_fn(): min_int, max_int = tf.int32.limits max_rand = max_int // hvd.size() Generate a seed fragment on each worker seed_fragment = cupy.random.randint(0, max_rand).get() Aggregate seed fragments from all Horovod workers seed_tensor = tf.constant(seed_fragment) reduced_seed = hvd.allreduce(seed_tensor, name="shuffle_seed", op=hvd.mpi_ops.Sum) return reduced_seed % max_rand```- We wrap the TensorFlow optimizer with Horovod's `DistributedOptimizer` class and scale the learning rate by the number of workers:```python opt = tf.keras.optimizers.SGD(0.01 * hvd.size()) opt = hvd.DistributedOptimizer(opt)```- We wrap the TensorFlow gradient tape with Horovod's `DistributedGradientTape` class:```python with tf.GradientTape() as tape: ...
tape = hvd.DistributedGradientTape(tape, sparse_as_dense=True)```- After the first batch, we broadcast the model and optimizer parameters to all workers with Horovod:```python Note: broadcast should be done after the first gradient step to ensure optimizer initialization. if first_batch: hvd.broadcast_variables(model.variables, root_rank=0) hvd.broadcast_variables(opt.variables(), root_rank=0)```- We only save checkpoints from the first worker to avoid multiple workers trying to write to the same files:```python if hvd.rank() == 0: checkpoint.save(checkpoint_dir)```The rest of the script is the same as the MovieLens example in [getting-started-movielens/03-Training-with-TF.ipynb](../getting-started-movielens/03-Training-with-TF.ipynb). In order to run it with Horovod, we first need to write it to a file.
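One thing the script does not show is how to aggregate a per-worker metric (for example, a validation loss computed on each worker's shard of the data) into a single number. Here is a small illustrative sketch using only the Horovod ops already used in the script; it assumes `hvd.init()` has been called and `tf`/`hvd` are imported as above, and it is not part of `tf_trainer.py`:
```python
# Average a scalar metric across all Horovod workers (illustrative only).
local_loss = tf.constant(0.65)  # stand-in for this worker's validation loss
summed = hvd.allreduce(local_loss, name="val_loss_sum", op=hvd.mpi_ops.Sum)
mean_loss = summed / hvd.size()
print("mean validation loss across workers:", float(mean_loss))
```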
###Code
%%writefile './tf_trainer.py'
# External dependencies
import argparse
import glob
import os
import cupy
# we can control how much memory to give tensorflow with this environment variable
# IMPORTANT: make sure you do this before you initialize TF's runtime, otherwise
# TF will have claimed all free GPU memory
os.environ["TF_MEMORY_ALLOCATION"] = "0.3" # fraction of free memory
import nvtabular as nvt # noqa: E402 isort:skip
from nvtabular.framework_utils.tensorflow import layers # noqa: E402 isort:skip
from nvtabular.loader.tensorflow import KerasSequenceLoader # noqa: E402 isort:skip
import tensorflow as tf # noqa: E402 isort:skip
import horovod.tensorflow as hvd # noqa: E402 isort:skip
parser = argparse.ArgumentParser(description="Process some integers.")
parser.add_argument("--dir_in", default=None, help="Input directory")
parser.add_argument("--batch_size", default=None, help="batch size")
parser.add_argument("--cats", default=None, help="categorical columns")
parser.add_argument("--cats_mh", default=None, help="categorical multihot columns")
parser.add_argument("--conts", default=None, help="continuous columns")
parser.add_argument("--labels", default=None, help="continuous columns")
args = parser.parse_args()
BASE_DIR = args.dir_in or "./data/"
BATCH_SIZE = int(args.batch_size or 16384) # Batch Size
CATEGORICAL_COLUMNS = args.cats or ["movieId", "userId"] # Single-hot
CATEGORICAL_MH_COLUMNS = args.cats_mh or ["genres"] # Multi-hot
NUMERIC_COLUMNS = args.conts or []
TRAIN_PATHS = sorted(
glob.glob(os.path.join(BASE_DIR, "train/*.parquet"))
) # Output from ETL-with-NVTabular
hvd.init()
# Seed with system randomness (or a static seed)
cupy.random.seed(None)
def seed_fn():
"""
Generate consistent dataloader shuffle seeds across workers
Reseeds each worker's dataloader each epoch to get a fresh shuffle
that's consistent across workers.
"""
min_int, max_int = tf.int32.limits
max_rand = max_int // hvd.size()
# Generate a seed fragment on each worker
seed_fragment = cupy.random.randint(0, max_rand).get()
# Aggregate seed fragments from all Horovod workers
seed_tensor = tf.constant(seed_fragment)
reduced_seed = hvd.allreduce(seed_tensor, name="shuffle_seed", op=hvd.mpi_ops.Sum)
return reduced_seed % max_rand
proc = nvt.Workflow.load(os.path.join(BASE_DIR, "workflow/"))
EMBEDDING_TABLE_SHAPES, MH_EMBEDDING_TABLE_SHAPES = nvt.ops.get_embedding_sizes(proc)
EMBEDDING_TABLE_SHAPES.update(MH_EMBEDDING_TABLE_SHAPES)
train_dataset_tf = KerasSequenceLoader(
TRAIN_PATHS, # you could also use a glob pattern
batch_size=BATCH_SIZE,
label_names=["rating"],
cat_names=CATEGORICAL_COLUMNS + CATEGORICAL_MH_COLUMNS,
cont_names=NUMERIC_COLUMNS,
engine="parquet",
shuffle=True,
buffer_size=0.06, # how many batches to load at once
parts_per_chunk=1,
global_size=hvd.size(),
global_rank=hvd.rank(),
seed_fn=seed_fn,
)
inputs = {} # tf.keras.Input placeholders for each feature to be used
emb_layers = [] # output of all embedding layers, which will be concatenated
for col in CATEGORICAL_COLUMNS:
inputs[col] = tf.keras.Input(name=col, dtype=tf.int32, shape=(1,))
# Note that we need two input tensors for multi-hot categorical features
for col in CATEGORICAL_MH_COLUMNS:
inputs[col] = \
(tf.keras.Input(name=f"{col}__values", dtype=tf.int64, shape=(1,)),
tf.keras.Input(name=f"{col}__nnzs", dtype=tf.int64, shape=(1,)))
for col in CATEGORICAL_COLUMNS + CATEGORICAL_MH_COLUMNS:
emb_layers.append(
tf.feature_column.embedding_column(
tf.feature_column.categorical_column_with_identity(
col, EMBEDDING_TABLE_SHAPES[col][0]
), # Input dimension (vocab size)
EMBEDDING_TABLE_SHAPES[col][1], # Embedding output dimension
)
)
emb_layer = layers.DenseFeatures(emb_layers)
x_emb_output = emb_layer(inputs)
x = tf.keras.layers.Dense(128, activation="relu")(x_emb_output)
x = tf.keras.layers.Dense(128, activation="relu")(x)
x = tf.keras.layers.Dense(128, activation="relu")(x)
x = tf.keras.layers.Dense(1, activation="sigmoid")(x)
model = tf.keras.Model(inputs=inputs, outputs=x)
loss = tf.losses.BinaryCrossentropy()
opt = tf.keras.optimizers.SGD(0.01 * hvd.size())
opt = hvd.DistributedOptimizer(opt)
checkpoint_dir = "./checkpoints"
checkpoint = tf.train.Checkpoint(model=model, optimizer=opt)
@tf.function(experimental_relax_shapes=True)
def training_step(examples, labels, first_batch):
with tf.GradientTape() as tape:
probs = model(examples, training=True)
loss_value = loss(labels, probs)
# Horovod: add Horovod Distributed GradientTape.
tape = hvd.DistributedGradientTape(tape, sparse_as_dense=True)
grads = tape.gradient(loss_value, model.trainable_variables)
opt.apply_gradients(zip(grads, model.trainable_variables))
# Horovod: broadcast initial variable states from rank 0 to all other processes.
# This is necessary to ensure consistent initialization of all workers when
# training is started with random weights or restored from a checkpoint.
#
# Note: broadcast should be done after the first gradient step to ensure optimizer
# initialization.
if first_batch:
hvd.broadcast_variables(model.variables, root_rank=0)
hvd.broadcast_variables(opt.variables(), root_rank=0)
return loss_value
# Horovod: adjust number of steps based on number of GPUs.
for batch, (examples, labels) in enumerate(train_dataset_tf):
loss_value = training_step(examples, labels, batch == 0)
if batch % 100 == 0 and hvd.local_rank() == 0:
print("Step #%d\tLoss: %.6f" % (batch, loss_value))
hvd.join()
# Horovod: save checkpoints only on worker 0 to prevent other workers from
# corrupting it.
if hvd.rank() == 0:
checkpoint.save(checkpoint_dir)
###Output
Overwriting ./tf_trainer.py
###Markdown
We'll also need a small wrapper script to check environment variables set by the Horovod runner to see which rank we'll be assigned, in order to set CUDA_VISIBLE_DEVICES properly for each worker:
###Code
%%writefile './hvd_wrapper.sh'
#!/bin/bash
# Get local process ID from OpenMPI or alternatively from SLURM
if [ -z "${CUDA_VISIBLE_DEVICES:-}" ]; then
if [ -n "${OMPI_COMM_WORLD_LOCAL_RANK:-}" ]; then
LOCAL_RANK="${OMPI_COMM_WORLD_LOCAL_RANK}"
elif [ -n "${SLURM_LOCALID:-}" ]; then
LOCAL_RANK="${SLURM_LOCALID}"
fi
export CUDA_VISIBLE_DEVICES=${LOCAL_RANK}
fi
exec "$@"
###Output
Overwriting ./hvd_wrapper.sh
###Markdown
OpenMPI and Slurm are tools for running distributed computing jobs. In this example, we’re using OpenMPI, but depending on the environment you run distributed training jobs in, you may need to check slightly different environment variables to find the total number of workers (global size) and each process’s worker number (global rank).Why do we have to check environment variables instead of using `hvd.rank()` and `hvd.local_rank()`? NVTabular does some GPU configuration when imported and needs to be imported before Horovod to avoid conflicts. We need to set GPU visibility before NVTabular is imported (when Horovod isn’t yet available) so that multiple processes don’t each try to configure all the GPUs. As a workaround, we “cheat” and peek at the environment variables set by horovodrun to decide which GPU each process should use.
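A quick way to confirm that the wrapper did its job is to have each process report the GPU it was assigned once Horovod is initialized, for example by temporarily adding a line like the following after `hvd.init()` in the training script (illustrative only):
```python
# Each rank prints which GPU it ended up with.
print(f"rank={hvd.rank()} local_rank={hvd.local_rank()} "
      f"CUDA_VISIBLE_DEVICES={os.environ.get('CUDA_VISIBLE_DEVICES')}")
```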
###Code
!horovodrun -np 2 sh hvd_wrapper.sh python tf_trainer.py --dir_in $BASE_DIR --batch_size 16384
###Output
2021-06-04 16:39:06.000313: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,0]<stderr>:2021-06-04 16:39:08.979997: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,1]<stderr>:2021-06-04 16:39:09.064191: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,0]<stderr>:2021-06-04 16:39:10.138200: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
[1,0]<stderr>:2021-06-04 16:39:10.138376: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcuda.so.1
[1,0]<stderr>:2021-06-04 16:39:10.139777: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Found device 0 with properties:
[1,0]<stderr>:pciBusID: 0000:0b:00.0 name: GeForce GTX 1080 Ti computeCapability: 6.1
[1,0]<stderr>:coreClock: 1.582GHz coreCount: 28 deviceMemorySize: 10.91GiB deviceMemoryBandwidth: 451.17GiB/s
[1,0]<stderr>:2021-06-04 16:39:10.139823: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,0]<stderr>:2021-06-04 16:39:10.139907: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
[1,0]<stderr>:2021-06-04 16:39:10.139949: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
[1,0]<stderr>:2021-06-04 16:39:10.139990: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
[1,0]<stderr>:2021-06-04 16:39:10.140029: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
[1,0]<stderr>:2021-06-04 16:39:10.140084: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.11
[1,0]<stderr>:2021-06-04 16:39:10.140123: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
[1,0]<stderr>:2021-06-04 16:39:10.140169: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
[1,0]<stderr>:2021-06-04 16:39:10.144021: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1888] Adding visible gpu devices: 0
[1,1]<stderr>:2021-06-04 16:39:10.367414: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
[1,1]<stderr>:2021-06-04 16:39:10.367496: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcuda.so.1
[1,1]<stderr>:2021-06-04 16:39:10.368324: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Found device 0 with properties:
[1,1]<stderr>:pciBusID: 0000:42:00.0 name: GeForce GTX 1080 Ti computeCapability: 6.1
[1,1]<stderr>:coreClock: 1.582GHz coreCount: 28 deviceMemorySize: 10.92GiB deviceMemoryBandwidth: 451.17GiB/s
[1,1]<stderr>:2021-06-04 16:39:10.368347: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,1]<stderr>:2021-06-04 16:39:10.368396: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
[1,1]<stderr>:2021-06-04 16:39:10.368424: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
[1,1]<stderr>:2021-06-04 16:39:10.368451: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
[1,1]<stderr>:2021-06-04 16:39:10.368475: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
[1,1]<stderr>:2021-06-04 16:39:10.368512: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.11
[1,1]<stderr>:2021-06-04 16:39:10.368537: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
[1,1]<stderr>:2021-06-04 16:39:10.368573: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
[1,1]<stderr>:2021-06-04 16:39:10.369841: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1888] Adding visible gpu devices: 0
[1,1]<stderr>:2021-06-04 16:39:11.730033: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
[1,1]<stderr>:2021-06-04 16:39:11.730907: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Found device 0 with properties:
[1,1]<stderr>:pciBusID: 0000:42:00.0 name: GeForce GTX 1080 Ti computeCapability: 6.1
[1,1]<stderr>:coreClock: 1.582GHz coreCount: 28 deviceMemorySize: 10.92GiB deviceMemoryBandwidth: 451.17GiB/s
[1,1]<stderr>:2021-06-04 16:39:11.730990: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,1]<stderr>:2021-06-04 16:39:11.731005: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
[1,1]<stderr>:2021-06-04 16:39:11.731018: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
[1,1]<stderr>:2021-06-04 16:39:11.731029: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
[1,1]<stderr>:2021-06-04 16:39:11.731038: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
[1,1]<stderr>:2021-06-04 16:39:11.731049: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.11
[1,1]<stderr>:2021-06-04 16:39:11.731059: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
[1,1]<stderr>:2021-06-04 16:39:11.731078: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
[1,1]<stderr>:2021-06-04 16:39:11.732312: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1888] Adding visible gpu devices: 0
[1,1]<stderr>:2021-06-04 16:39:11.732350: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,1]<stderr>:2021-06-04 16:39:11.732473: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1287] Device interconnect StreamExecutor with strength 1 edge matrix:
[1,1]<stderr>:2021-06-04 16:39:11.732487: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1293] 0
[1,1]<stderr>:2021-06-04 16:39:11.732493: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1306] 0: N
[1,1]<stderr>:2021-06-04 16:39:11.734431: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3352 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:42:00.0, compute capability: 6.1)
[1,0]<stderr>:2021-06-04 16:39:11.821346: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
[1,0]<stderr>:2021-06-04 16:39:11.822270: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Found device 0 with properties:
[1,0]<stderr>:pciBusID: 0000:0b:00.0 name: GeForce GTX 1080 Ti computeCapability: 6.1
[1,0]<stderr>:coreClock: 1.582GHz coreCount: 28 deviceMemorySize: 10.91GiB deviceMemoryBandwidth: 451.17GiB/s
[1,0]<stderr>:2021-06-04 16:39:11.822360: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,0]<stderr>:2021-06-04 16:39:11.822376: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
[1,0]<stderr>:2021-06-04 16:39:11.822389: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
[1,0]<stderr>:2021-06-04 16:39:11.822400: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
[1,0]<stderr>:2021-06-04 16:39:11.822411: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
[1,0]<stderr>:2021-06-04 16:39:11.822425: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.11
[1,0]<stderr>:2021-06-04 16:39:11.822434: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
[1,0]<stderr>:2021-06-04 16:39:11.822454: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
[1,0]<stderr>:2021-06-04 16:39:11.823684: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1888] Adding visible gpu devices: 0
[1,0]<stderr>:2021-06-04 16:39:11.823731: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,0]<stderr>:2021-06-04 16:39:11.823868: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1287] Device interconnect StreamExecutor with strength 1 edge matrix:
[1,0]<stderr>:2021-06-04 16:39:11.823881: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1293] 0
[1,0]<stderr>:2021-06-04 16:39:11.823888: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1306] 0: N
[1,0]<stderr>:2021-06-04 16:39:11.825784: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3352 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:0b:00.0, compute capability: 6.1)
[1,0]<stderr>:2021-06-04 16:39:17.634485: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:116] None of the MLIR optimization passes are enabled (registered 2)
[1,0]<stderr>:2021-06-04 16:39:17.668915: I tensorflow/core/platform/profile_utils/cpu_utils.cc:112] CPU Frequency: 2993950000 Hz
[1,1]<stderr>:2021-06-04 16:39:17.694128: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:116] None of the MLIR optimization passes are enabled (registered 2)
[1,1]<stderr>:2021-06-04 16:39:17.703326: I tensorflow/core/platform/profile_utils/cpu_utils.cc:112] CPU Frequency: 2993950000 Hz
[1,0]<stderr>:2021-06-04 16:39:17.780825: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
[1,1]<stderr>:2021-06-04 16:39:17.810644: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
[1,0]<stderr>:2021-06-04 16:39:17.984966: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
[1,1]<stderr>:2021-06-04 16:39:18.012113: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
[1,0]<stdout>:Step #0 Loss: 0.695094
[1,0]<stdout>:Step #100 Loss: 0.669580
[1,0]<stdout>:Step #200 Loss: 0.661098
[1,0]<stdout>:Step #300 Loss: 0.660680
[1,0]<stdout>:Step #400 Loss: 0.658633
[1,0]<stdout>:Step #500 Loss: 0.660251
[1,0]<stdout>:Step #600 Loss: 0.657047
###Markdown
Multi-GPU with MovieLens: ETL and Training OverviewNVIDIA Merlin is an open source framework to accelerate and scale end-to-end recommender system pipelines on GPUs. In this notebook, we use NVTabular, Merlin’s ETL component, to scale feature engineering and pre-processing to multiple GPUs and then perform data-parallel distributed training of a neural network on multiple GPUs with TensorFlow, [Horovod](https://horovod.readthedocs.io/en/stable/), and [NCCL](https://developer.nvidia.com/nccl).The pre-requisites for this notebook are to be familiar with NVTabular and its API:- You can read more about NVTabular, its API and specialized dataloaders in the [Getting Started with Movielens notebooks](../getting-started-movielens).- You can read more about scaling NVTabular ETL in the [Scaling Criteo notebooks](../scaling-criteo).**In this notebook, we will focus only on the new information related to multi-GPU training, so please check out the other notebooks first (if you haven’t already).** Learning objectivesIn this notebook, we learn how to scale ETL and deep learning training to multiple GPUs:- Learn to use larger-than-GPU/host-memory datasets for ETL and training- Use multi-GPU or multi-node ETL with NVTabular- Use the NVTabular dataloader to accelerate TensorFlow pipelines- Scale TensorFlow training with Horovod DatasetIn this notebook, we use the [MovieLens25M](https://grouplens.org/datasets/movielens/25m/) dataset. It is popular for recommender systems and is used in academic publications. The dataset contains 25M movie ratings for 62,000 movies given by 162,000 users. Many projects use only the user/item/rating information of MovieLens, but the original dataset provides metadata for the movies as well.Note: We are using the MovieLens 25M dataset in this example for simplicity, although the dataset is not large enough to require multi-GPU training. However, the functionality demonstrated in this notebook can be easily extended to scale recommender pipelines for larger datasets in the same way. Tools- [Horovod](https://horovod.readthedocs.io/en/stable/) is a distributed deep learning framework that provides tools for multi-GPU optimization.- The [NVIDIA Collective Communication Library (NCCL)](https://developer.nvidia.com/nccl) provides the underlying GPU-based implementations of the [allgather](https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/usage/operations.html#allgather) and [allreduce](https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/usage/operations.html#allreduce) cross-GPU communication operations. Download and ConvertFirst, we will download and convert the dataset to Parquet. This section is based on [01-Download-Convert.ipynb](../getting-started-movielens/01-Download-Convert.ipynb). Download
###Code
# External dependencies
import os
import pathlib
import cudf # cuDF is an implementation of Pandas-like Dataframe on GPU
from nvtabular.utils import download_file
INPUT_DATA_DIR = os.environ.get(
"INPUT_DATA_DIR", "~/nvt-examples/multigpu-movielens/data/"
)
BASE_DIR = pathlib.Path(INPUT_DATA_DIR).expanduser()
zip_path = pathlib.Path(BASE_DIR, "ml-25m.zip")
download_file(
"http://files.grouplens.org/datasets/movielens/ml-25m.zip", zip_path, redownload=False
)
###Output
downloading ml-25m.zip: 262MB [00:06, 41.9MB/s]
unzipping files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 8/8 [00:04<00:00, 1.74files/s]
###Markdown
Convert
###Code
movies = cudf.read_csv(pathlib.Path(BASE_DIR, "ml-25m", "movies.csv"))
movies["genres"] = movies["genres"].str.split("|")
movies = movies.drop("title", axis=1)
movies.to_parquet(pathlib.Path(BASE_DIR, "ml-25m", "movies_converted.parquet"))
###Output
_____no_output_____
###Markdown
Split into train and validation datasets
###Code
ratings = cudf.read_csv(pathlib.Path(BASE_DIR, "ml-25m", "ratings.csv"))
ratings = ratings.drop("timestamp", axis=1)
# shuffle the dataset
ratings = ratings.sample(len(ratings), replace=False)
# split the ratings into training and validation sets.
num_valid = int(len(ratings) * 0.2)
train = ratings[:-num_valid]
valid = ratings[-num_valid:]
train.to_parquet(pathlib.Path(BASE_DIR, "train.parquet"))
valid.to_parquet(pathlib.Path(BASE_DIR, "valid.parquet"))
###Output
_____no_output_____
###Markdown
ETL with NVTabularWe finished downloading and converting the dataset. We will preprocess and engineer features with NVTabular on multiple GPUs. You can read more:- about NVTabular's features and API in [getting-started-movielens/02-ETL-with-NVTabular.ipynb](../getting-started-movielens/02-ETL-with-NVTabular.ipynb).- about scaling NVTabular ETL to multiple GPUs in [scaling-criteo/02-ETL-with-NVTabular.ipynb](../scaling-criteo/02-ETL-with-NVTabular.ipynb). Deploy a Distributed-Dask ClusterThis section is based on [scaling-criteo/02-ETL-with-NVTabular.ipynb](../scaling-criteo/02-ETL-with-NVTabular.ipynb) and [multi-gpu-toy-example/multi-gpu_dask.ipynb](../multi-gpu-toy-example/multi-gpu_dask.ipynb).
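The cell below initializes an RMM memory pool with a default size on every worker. If you want to pin the pool size explicitly, `rmm.reinitialize` also accepts a byte count; a small sketch follows (the 8 GB figure is only an illustration and may need to be lowered on smaller GPUs):
```python
from dask.utils import parse_bytes
import rmm

def _rmm_pool_fixed():
    # Reserve an explicit 8 GB RMM pool on this worker (illustrative size).
    rmm.reinitialize(pool_allocator=True, initial_pool_size=parse_bytes("8GB"))

# client.run(_rmm_pool_fixed)  # run on every Dask worker, as in the cell below
```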
###Code
# Standard Libraries
import shutil
# External Dependencies
import cupy as cp
import cudf
import dask_cudf
from dask_cuda import LocalCUDACluster
from dask.distributed import Client
from dask.utils import parse_bytes
from dask.delayed import delayed
import rmm
# NVTabular
import nvtabular as nvt
import nvtabular.ops as ops
from nvtabular.io import Shuffle
from nvtabular.utils import device_mem_size
# define some information about where to get our data
input_path = pathlib.Path(BASE_DIR, "converted", "movielens")
dask_workdir = pathlib.Path(BASE_DIR, "test_dask", "workdir")
output_path = pathlib.Path(BASE_DIR, "test_dask", "output")
stats_path = pathlib.Path(BASE_DIR, "test_dask", "stats")
# Make sure we have a clean worker space for Dask
if pathlib.Path.is_dir(dask_workdir):
shutil.rmtree(dask_workdir)
dask_workdir.mkdir(parents=True)
# Make sure we have a clean stats space for Dask
if pathlib.Path.is_dir(stats_path):
shutil.rmtree(stats_path)
stats_path.mkdir(parents=True)
# Make sure we have a clean output path
if pathlib.Path.is_dir(output_path):
shutil.rmtree(output_path)
output_path.mkdir(parents=True)
# Get device memory capacity
capacity = device_mem_size(kind="total")
# Deploy a Single-Machine Multi-GPU Cluster
protocol = "tcp" # "tcp" or "ucx"
visible_devices = "0,1"  # Select devices to place workers
device_spill_frac = 0.5 # Spill GPU-Worker memory to host at this limit.
# Reduce if spilling fails to prevent
# device memory errors.
cluster = None # (Optional) Specify existing scheduler port
if cluster is None:
cluster = LocalCUDACluster(
protocol=protocol,
CUDA_VISIBLE_DEVICES=visible_devices,
local_directory=dask_workdir,
device_memory_limit=capacity * device_spill_frac,
)
# Create the distributed client
client = Client(cluster)
client
# Initialize RMM pool on ALL workers
def _rmm_pool():
rmm.reinitialize(
pool_allocator=True,
initial_pool_size=None, # Use default size
)
client.run(_rmm_pool)
###Output
_____no_output_____
###Markdown
Defining our Preprocessing PipelineThis subsection is based on [getting-started-movielens/02-ETL-with-NVTabular.ipynb](../getting-started-movielens/02-ETL-with-NVTabular.ipynb). The only difference is that we initialize the NVTabular workflow using the LocalCUDACluster client with `nvt.Workflow(output, client=client)`.
###Code
movies = cudf.read_parquet(pathlib.Path(BASE_DIR, "ml-25m", "movies_converted.parquet"))
joined = ["userId", "movieId"] >> nvt.ops.JoinExternal(movies, on=["movieId"])
cat_features = joined >> nvt.ops.Categorify()
ratings = nvt.ColumnSelector(["rating"]) >> nvt.ops.LambdaOp(lambda col: (col > 3).astype("int8"))
output = cat_features + ratings
# USE client in NVTabular workflow
workflow = nvt.Workflow(output, client=client)
!rm -rf $BASE_DIR/train
!rm -rf $BASE_DIR/valid
train_iter = nvt.Dataset([str(pathlib.Path(BASE_DIR, "train.parquet"))], part_size="100MB")
valid_iter = nvt.Dataset([str(pathlib.Path(BASE_DIR, "valid.parquet"))], part_size="100MB")
workflow.fit(train_iter)
workflow.save(pathlib.Path(BASE_DIR, "workflow"))
shuffle = Shuffle.PER_WORKER # Shuffle algorithm
out_files_per_proc = 4 # Number of output files per worker
workflow.transform(train_iter).to_parquet(
output_path=pathlib.Path(BASE_DIR, "train"),
shuffle=shuffle,
out_files_per_proc=out_files_per_proc,
)
workflow.transform(valid_iter).to_parquet(
output_path=pathlib.Path(BASE_DIR, "valid"),
shuffle=shuffle,
out_files_per_proc=out_files_per_proc,
)
client.shutdown()
cluster.close()
###Output
/usr/local/lib/python3.8/dist-packages/distributed/worker.py:3560: UserWarning: Large object of size 1.90 MiB detected in task graph:
("('read-parquet-d36dd514a8adc53a9a91115c9be1d852' ... 1115c9be1d852')
Consider scattering large objects ahead of time
with client.scatter to reduce scheduler burden and
keep data on workers
future = client.submit(func, big_data) # bad
big_future = client.scatter(big_data) # good
future = client.submit(func, big_future) # good
warnings.warn(
###Markdown
Training with TensorFlow on multiGPUsIn this section, we will train a TensorFlow model with multi-GPU support. In the NVTabular v0.5 release, we added multi-GPU support for NVTabular dataloaders. We will modify the [getting-started-movielens/03-Training-with-TF.ipynb](../getting-started-movielens/03-Training-with-TF.ipynb) notebook to use multiple GPUs. Please review that notebook if you have questions about the general functionality of the NVTabular dataloaders or the neural network architecture. NVTabular dataloader for TensorFlowWe’ve identified that the dataloader is one bottleneck in deep learning recommender systems when training pipelines with TensorFlow. The normal TensorFlow dataloaders cannot prepare the next training batches fast enough, and therefore the GPU is not fully utilized. We developed a highly customized tabular dataloader for accelerating existing pipelines in TensorFlow. In our experiments, we see a 9x speed-up of the same training workflow with the NVTabular dataloader. The NVTabular dataloader’s features are:- removing the bottleneck of item-by-item dataloading- enabling larger-than-memory datasets by streaming from disk- reading data directly into GPU memory and removing CPU-GPU communication- preparing batches asynchronously on the GPU to avoid CPU-GPU communication- supporting the commonly used .parquet format- easy integration into existing TensorFlow pipelines by using a similar API - works with tf.keras models- **supporting multi-GPU training with Horovod**You can find more information on the dataloaders in our [blogpost](https://medium.com/nvidia-merlin/training-deep-learning-based-recommender-systems-9x-faster-with-tensorflow-cc5a2572ea49). Using Horovod with TensorFlow and NVTabularThe training script below is based on [getting-started-movielens/03-Training-with-TF.ipynb](../getting-started-movielens/03-Training-with-TF.ipynb), with a few important changes:- We provide several additional parameters to the `KerasSequenceLoader` class, including the total number of workers `hvd.size()`, the current worker's id number `hvd.rank()`, and a function for generating random seeds `seed_fn()`. ```python train_dataset_tf = KerasSequenceLoader( ... global_size=hvd.size(), global_rank=hvd.rank(), seed_fn=seed_fn, )```- The seed function uses Horovod to collectively generate a random seed that's shared by all workers so that they can each shuffle the dataset in a consistent way and select partitions to work on without overlap. The seed function is called by the dataloader during the shuffling process at the beginning of each epoch:```python def seed_fn(): min_int, max_int = tf.int32.limits max_rand = max_int // hvd.size() Generate a seed fragment on each worker seed_fragment = cupy.random.randint(0, max_rand).get() Aggregate seed fragments from all Horovod workers seed_tensor = tf.constant(seed_fragment) reduced_seed = hvd.allreduce(seed_tensor, name="shuffle_seed", op=hvd.mpi_ops.Sum) return reduced_seed % max_rand```- We wrap the TensorFlow optimizer with Horovod's `DistributedOptimizer` class and scale the learning rate by the number of workers:```python opt = tf.keras.optimizers.SGD(0.01 * hvd.size()) opt = hvd.DistributedOptimizer(opt)```- We wrap the TensorFlow gradient tape with Horovod's `DistributedGradientTape` class:```python with tf.GradientTape() as tape: ...
tape = hvd.DistributedGradientTape(tape, sparse_as_dense=True)```- After the first batch, we broadcast the model and optimizer parameters to all workers with Horovod:```python Note: broadcast should be done after the first gradient step to ensure optimizer initialization. if first_batch: hvd.broadcast_variables(model.variables, root_rank=0) hvd.broadcast_variables(opt.variables(), root_rank=0)```- We only save checkpoints from the first worker to avoid multiple workers trying to write to the same files:```python if hvd.rank() == 0: checkpoint.save(checkpoint_dir)```The rest of the script is the same as the MovieLens example in [getting-started-movielens/03-Training-with-TF.ipynb](../getting-started-movielens/03-Training-with-TF.ipynb). In order to run it with Horovod, we first need to write it to a file.
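After training, only rank 0 will have written a checkpoint (with the `./checkpoints` prefix used below). As a rough sketch of how it could be restored later for inspection or further training (assuming the same `model` and `opt` objects are rebuilt first, exactly as in the script):
```python
import tensorflow as tf

# Rebuild `model` and `opt` as in tf_trainer.py before restoring.
checkpoint = tf.train.Checkpoint(model=model, optimizer=opt)
latest = tf.train.latest_checkpoint(".")  # checkpoint.save("./checkpoints") writes e.g. ./checkpoints-1
if latest is not None:
    checkpoint.restore(latest).expect_partial()
```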
###Code
%%writefile './tf_trainer.py'
# External dependencies
import argparse
import glob
import os
import cupy
# we can control how much memory to give tensorflow with this environment variable
# IMPORTANT: make sure you do this before you initialize TF's runtime, otherwise
# TF will have claimed all free GPU memory
os.environ["TF_MEMORY_ALLOCATION"] = "0.3" # fraction of free memory
import nvtabular as nvt # noqa: E402 isort:skip
from nvtabular.framework_utils.tensorflow import layers # noqa: E402 isort:skip
from nvtabular.loader.tensorflow import KerasSequenceLoader # noqa: E402 isort:skip
import tensorflow as tf # noqa: E402 isort:skip
import horovod.tensorflow as hvd # noqa: E402 isort:skip
parser = argparse.ArgumentParser(description="Process some integers.")
parser.add_argument("--dir_in", default=None, help="Input directory")
parser.add_argument("--batch_size", default=None, help="batch size")
parser.add_argument("--cats", default=None, help="categorical columns")
parser.add_argument("--cats_mh", default=None, help="categorical multihot columns")
parser.add_argument("--conts", default=None, help="continuous columns")
parser.add_argument("--labels", default=None, help="continuous columns")
args = parser.parse_args()
BASE_DIR = args.dir_in or "./data/"
BATCH_SIZE = int(args.batch_size or 16384) # Batch Size
CATEGORICAL_COLUMNS = args.cats or ["movieId", "userId"] # Single-hot
CATEGORICAL_MH_COLUMNS = args.cats_mh or ["genres"] # Multi-hot
NUMERIC_COLUMNS = args.conts or []
TRAIN_PATHS = sorted(
glob.glob(os.path.join(BASE_DIR, "train/*.parquet"))
) # Output from ETL-with-NVTabular
hvd.init()
# Seed with system randomness (or a static seed)
cupy.random.seed(None)
def seed_fn():
"""
Generate consistent dataloader shuffle seeds across workers
Reseeds each worker's dataloader each epoch to get a fresh shuffle
that's consistent across workers.
"""
min_int, max_int = tf.int32.limits
max_rand = max_int // hvd.size()
# Generate a seed fragment on each worker
seed_fragment = cupy.random.randint(0, max_rand).get()
# Aggregate seed fragments from all Horovod workers
seed_tensor = tf.constant(seed_fragment)
reduced_seed = hvd.allreduce(seed_tensor, name="shuffle_seed", op=hvd.mpi_ops.Sum)
return reduced_seed % max_rand
proc = nvt.Workflow.load(os.path.join(BASE_DIR, "workflow/"))
EMBEDDING_TABLE_SHAPES, MH_EMBEDDING_TABLE_SHAPES = nvt.ops.get_embedding_sizes(proc)
EMBEDDING_TABLE_SHAPES.update(MH_EMBEDDING_TABLE_SHAPES)
train_dataset_tf = KerasSequenceLoader(
TRAIN_PATHS, # you could also use a glob pattern
batch_size=BATCH_SIZE,
label_names=["rating"],
cat_names=CATEGORICAL_COLUMNS + CATEGORICAL_MH_COLUMNS,
cont_names=NUMERIC_COLUMNS,
engine="parquet",
shuffle=True,
buffer_size=0.06, # how many batches to load at once
parts_per_chunk=1,
global_size=hvd.size(),
global_rank=hvd.rank(),
seed_fn=seed_fn,
)
inputs = {} # tf.keras.Input placeholders for each feature to be used
emb_layers = [] # output of all embedding layers, which will be concatenated
for col in CATEGORICAL_COLUMNS:
inputs[col] = tf.keras.Input(name=col, dtype=tf.int32, shape=(1,))
# Note that we need two input tensors for multi-hot categorical features
for col in CATEGORICAL_MH_COLUMNS:
inputs[col] = \
(tf.keras.Input(name=f"{col}__values", dtype=tf.int64, shape=(1,)),
tf.keras.Input(name=f"{col}__nnzs", dtype=tf.int64, shape=(1,)))
for col in CATEGORICAL_COLUMNS + CATEGORICAL_MH_COLUMNS:
emb_layers.append(
tf.feature_column.embedding_column(
tf.feature_column.categorical_column_with_identity(
col, EMBEDDING_TABLE_SHAPES[col][0]
), # Input dimension (vocab size)
EMBEDDING_TABLE_SHAPES[col][1], # Embedding output dimension
)
)
emb_layer = layers.DenseFeatures(emb_layers)
x_emb_output = emb_layer(inputs)
x = tf.keras.layers.Dense(128, activation="relu")(x_emb_output)
x = tf.keras.layers.Dense(128, activation="relu")(x)
x = tf.keras.layers.Dense(128, activation="relu")(x)
x = tf.keras.layers.Dense(1, activation="sigmoid")(x)
model = tf.keras.Model(inputs=inputs, outputs=x)
loss = tf.losses.BinaryCrossentropy()
opt = tf.keras.optimizers.SGD(0.01 * hvd.size())
opt = hvd.DistributedOptimizer(opt)
checkpoint_dir = "./checkpoints"
checkpoint = tf.train.Checkpoint(model=model, optimizer=opt)
@tf.function(experimental_relax_shapes=True)
def training_step(examples, labels, first_batch):
with tf.GradientTape() as tape:
probs = model(examples, training=True)
loss_value = loss(labels, probs)
# Horovod: add Horovod Distributed GradientTape.
tape = hvd.DistributedGradientTape(tape, sparse_as_dense=True)
grads = tape.gradient(loss_value, model.trainable_variables)
opt.apply_gradients(zip(grads, model.trainable_variables))
# Horovod: broadcast initial variable states from rank 0 to all other processes.
# This is necessary to ensure consistent initialization of all workers when
# training is started with random weights or restored from a checkpoint.
#
# Note: broadcast should be done after the first gradient step to ensure optimizer
# initialization.
if first_batch:
hvd.broadcast_variables(model.variables, root_rank=0)
hvd.broadcast_variables(opt.variables(), root_rank=0)
return loss_value
# Horovod: adjust number of steps based on number of GPUs.
for batch, (examples, labels) in enumerate(train_dataset_tf):
loss_value = training_step(examples, labels, batch == 0)
if batch % 100 == 0 and hvd.local_rank() == 0:
print("Step #%d\tLoss: %.6f" % (batch, loss_value))
hvd.join()
# Horovod: save checkpoints only on worker 0 to prevent other workers from
# corrupting it.
if hvd.rank() == 0:
checkpoint.save(checkpoint_dir)
###Output
Overwriting ./tf_trainer.py
###Markdown
We'll also need a small wrapper script to check environment variables set by the Horovod runner to see which rank we'll be assigned, in order to set CUDA_VISIBLE_DEVICES properly for each worker:
###Code
%%writefile './hvd_wrapper.sh'
#!/bin/bash
# Get local process ID from OpenMPI or alternatively from SLURM
if [ -z "${CUDA_VISIBLE_DEVICES:-}" ]; then
if [ -n "${OMPI_COMM_WORLD_LOCAL_RANK:-}" ]; then
LOCAL_RANK="${OMPI_COMM_WORLD_LOCAL_RANK}"
elif [ -n "${SLURM_LOCALID:-}" ]; then
LOCAL_RANK="${SLURM_LOCALID}"
fi
export CUDA_VISIBLE_DEVICES=${LOCAL_RANK}
fi
exec "$@"
###Output
Overwriting ./hvd_wrapper.sh
###Markdown
OpenMPI and Slurm are tools for running distributed computing jobs. In this example, we’re using OpenMPI, but depending on the environment you run distributed training jobs in, you may need to check slightly different environment variables to find the total number of workers (global size) and each process’s worker number (global rank). Why do we have to check environment variables instead of using `hvd.rank()` and `hvd.local_rank()`? NVTabular does some GPU configuration when imported and needs to be imported before Horovod to avoid conflicts. We need to set GPU visibility before NVTabular is imported (when Horovod isn’t yet available) so that multiple processes don’t each try to configure all the GPUs. As a workaround, we “cheat” and peek at the environment variables set by horovodrun to decide which GPU each process should use.
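If a shell wrapper is not convenient, the same workaround could also live at the very top of the Python entry point, before NVTabular or TensorFlow are imported. A minimal sketch, assuming OpenMPI or Slurm set the same local-rank variables that `hvd_wrapper.sh` checks (this is an alternative illustration, not part of the scripts above):

```python
# Hypothetical Python-only equivalent of hvd_wrapper.sh: pin this process to a
# single GPU before nvtabular/tensorflow/horovod are imported.
import os

if not os.environ.get("CUDA_VISIBLE_DEVICES"):
    local_rank = os.environ.get(
        "OMPI_COMM_WORLD_LOCAL_RANK",             # set by OpenMPI / horovodrun
        os.environ.get("SLURM_LOCALID", "0"),     # set by Slurm; default to GPU 0
    )
    os.environ["CUDA_VISIBLE_DEVICES"] = local_rank

# only now import nvtabular, tensorflow and horovod ...
```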
###Code
!horovodrun -np 2 sh hvd_wrapper.sh python tf_trainer.py --dir_in $BASE_DIR --batch_size 16384
###Output
2021-06-04 16:39:06.000313: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,0]<stderr>:2021-06-04 16:39:08.979997: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,1]<stderr>:2021-06-04 16:39:09.064191: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,0]<stderr>:2021-06-04 16:39:10.138200: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
[1,0]<stderr>:2021-06-04 16:39:10.138376: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcuda.so.1
[1,0]<stderr>:2021-06-04 16:39:10.139777: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Found device 0 with properties:
[1,0]<stderr>:pciBusID: 0000:0b:00.0 name: GeForce GTX 1080 Ti computeCapability: 6.1
[1,0]<stderr>:coreClock: 1.582GHz coreCount: 28 deviceMemorySize: 10.91GiB deviceMemoryBandwidth: 451.17GiB/s
[1,0]<stderr>:2021-06-04 16:39:10.139823: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,0]<stderr>:2021-06-04 16:39:10.139907: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
[1,0]<stderr>:2021-06-04 16:39:10.139949: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
[1,0]<stderr>:2021-06-04 16:39:10.139990: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
[1,0]<stderr>:2021-06-04 16:39:10.140029: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
[1,0]<stderr>:2021-06-04 16:39:10.140084: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.11
[1,0]<stderr>:2021-06-04 16:39:10.140123: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
[1,0]<stderr>:2021-06-04 16:39:10.140169: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
[1,0]<stderr>:2021-06-04 16:39:10.144021: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1888] Adding visible gpu devices: 0
[1,1]<stderr>:2021-06-04 16:39:10.367414: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
[1,1]<stderr>:2021-06-04 16:39:10.367496: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcuda.so.1
[1,1]<stderr>:2021-06-04 16:39:10.368324: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Found device 0 with properties:
[1,1]<stderr>:pciBusID: 0000:42:00.0 name: GeForce GTX 1080 Ti computeCapability: 6.1
[1,1]<stderr>:coreClock: 1.582GHz coreCount: 28 deviceMemorySize: 10.92GiB deviceMemoryBandwidth: 451.17GiB/s
[1,1]<stderr>:2021-06-04 16:39:10.368347: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,1]<stderr>:2021-06-04 16:39:10.368396: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
[1,1]<stderr>:2021-06-04 16:39:10.368424: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
[1,1]<stderr>:2021-06-04 16:39:10.368451: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
[1,1]<stderr>:2021-06-04 16:39:10.368475: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
[1,1]<stderr>:2021-06-04 16:39:10.368512: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.11
[1,1]<stderr>:2021-06-04 16:39:10.368537: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
[1,1]<stderr>:2021-06-04 16:39:10.368573: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
[1,1]<stderr>:2021-06-04 16:39:10.369841: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1888] Adding visible gpu devices: 0
[1,1]<stderr>:2021-06-04 16:39:11.730033: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
[1,1]<stderr>:2021-06-04 16:39:11.730907: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Found device 0 with properties:
[1,1]<stderr>:pciBusID: 0000:42:00.0 name: GeForce GTX 1080 Ti computeCapability: 6.1
[1,1]<stderr>:coreClock: 1.582GHz coreCount: 28 deviceMemorySize: 10.92GiB deviceMemoryBandwidth: 451.17GiB/s
[1,1]<stderr>:2021-06-04 16:39:11.730990: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,1]<stderr>:2021-06-04 16:39:11.731005: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
[1,1]<stderr>:2021-06-04 16:39:11.731018: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
[1,1]<stderr>:2021-06-04 16:39:11.731029: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
[1,1]<stderr>:2021-06-04 16:39:11.731038: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
[1,1]<stderr>:2021-06-04 16:39:11.731049: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.11
[1,1]<stderr>:2021-06-04 16:39:11.731059: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
[1,1]<stderr>:2021-06-04 16:39:11.731078: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
[1,1]<stderr>:2021-06-04 16:39:11.732312: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1888] Adding visible gpu devices: 0
[1,1]<stderr>:2021-06-04 16:39:11.732350: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,1]<stderr>:2021-06-04 16:39:11.732473: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1287] Device interconnect StreamExecutor with strength 1 edge matrix:
[1,1]<stderr>:2021-06-04 16:39:11.732487: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1293] 0
[1,1]<stderr>:2021-06-04 16:39:11.732493: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1306] 0: N
[1,1]<stderr>:2021-06-04 16:39:11.734431: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3352 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:42:00.0, compute capability: 6.1)
[1,0]<stderr>:2021-06-04 16:39:11.821346: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
[1,0]<stderr>:2021-06-04 16:39:11.822270: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Found device 0 with properties:
[1,0]<stderr>:pciBusID: 0000:0b:00.0 name: GeForce GTX 1080 Ti computeCapability: 6.1
[1,0]<stderr>:coreClock: 1.582GHz coreCount: 28 deviceMemorySize: 10.91GiB deviceMemoryBandwidth: 451.17GiB/s
[1,0]<stderr>:2021-06-04 16:39:11.822360: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,0]<stderr>:2021-06-04 16:39:11.822376: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
[1,0]<stderr>:2021-06-04 16:39:11.822389: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
[1,0]<stderr>:2021-06-04 16:39:11.822400: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
[1,0]<stderr>:2021-06-04 16:39:11.822411: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
[1,0]<stderr>:2021-06-04 16:39:11.822425: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.11
[1,0]<stderr>:2021-06-04 16:39:11.822434: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
[1,0]<stderr>:2021-06-04 16:39:11.822454: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
[1,0]<stderr>:2021-06-04 16:39:11.823684: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1888] Adding visible gpu devices: 0
[1,0]<stderr>:2021-06-04 16:39:11.823731: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,0]<stderr>:2021-06-04 16:39:11.823868: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1287] Device interconnect StreamExecutor with strength 1 edge matrix:
[1,0]<stderr>:2021-06-04 16:39:11.823881: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1293] 0
[1,0]<stderr>:2021-06-04 16:39:11.823888: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1306] 0: N
[1,0]<stderr>:2021-06-04 16:39:11.825784: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3352 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:0b:00.0, compute capability: 6.1)
[1,0]<stderr>:2021-06-04 16:39:17.634485: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:116] None of the MLIR optimization passes are enabled (registered 2)
[1,0]<stderr>:2021-06-04 16:39:17.668915: I tensorflow/core/platform/profile_utils/cpu_utils.cc:112] CPU Frequency: 2993950000 Hz
[1,1]<stderr>:2021-06-04 16:39:17.694128: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:116] None of the MLIR optimization passes are enabled (registered 2)
[1,1]<stderr>:2021-06-04 16:39:17.703326: I tensorflow/core/platform/profile_utils/cpu_utils.cc:112] CPU Frequency: 2993950000 Hz
[1,0]<stderr>:2021-06-04 16:39:17.780825: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
[1,1]<stderr>:2021-06-04 16:39:17.810644: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
[1,0]<stderr>:2021-06-04 16:39:17.984966: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
[1,1]<stderr>:2021-06-04 16:39:18.012113: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
[1,0]<stdout>:Step #0 Loss: 0.695094
[1,0]<stdout>:Step #100 Loss: 0.669580
[1,0]<stdout>:Step #200 Loss: 0.661098
[1,0]<stdout>:Step #300 Loss: 0.660680
[1,0]<stdout>:Step #400 Loss: 0.658633
[1,0]<stdout>:Step #500 Loss: 0.660251
[1,0]<stdout>:Step #600 Loss: 0.657047
###Markdown
Multi-GPU Training with TensorFlow on MovieLens OverviewNVIDIA Merlin is an open source framework to accelerate and scale end-to-end recommender system pipelines on GPU. In this notebook, we use NVTabular, Merlin’s ETL component, to scale feature engineering and pre-processing to multiple GPUs and then perform data-parallel distributed training of a neural network on multiple GPUs with TensorFlow, [Horovod](https://horovod.readthedocs.io/en/stable/), and [NCCL](https://developer.nvidia.com/nccl).The prerequisite for this notebook is familiarity with NVTabular and its API:- You can read more about NVTabular, its API and specialized dataloaders in the [Getting Started with Movielens notebooks](https://nvidia-merlin.github.io/NVTabular/main/examples/getting-started-movielens/index.html).- You can read more about scaling NVTabular ETL in the [Scaling Criteo notebooks](https://nvidia-merlin.github.io/NVTabular/main/examples/scaling-criteo/index.html).**In this notebook, we will focus only on the new information related to multi-GPU training, so please check out the other notebooks first (if you haven’t already).** Learning objectivesIn this notebook, we learn how to scale ETL and deep learning training to multiple GPUs:- Learn to use datasets that are larger than GPU/host memory for ETL and training- Use multiple GPUs or multiple nodes for ETL with NVTabular- Use the NVTabular dataloader to accelerate TensorFlow pipelines- Scale TensorFlow training with Horovod DatasetIn this notebook, we use the [MovieLens25M](https://grouplens.org/datasets/movielens/25m/) dataset. It is popular for recommender systems and is used in academic publications. The dataset contains 25M movie ratings for 62,000 movies given by 162,000 users. Many projects use only the user/item/rating information of MovieLens, but the original dataset provides metadata for the movies, as well.Note: We are using the MovieLens 25M dataset in this example for simplicity, although the dataset is not large enough to require multi-GPU training. However, the functionality demonstrated in this notebook can be easily extended to scale recommender pipelines for larger datasets in the same way. Tools- [Horovod](https://horovod.readthedocs.io/en/stable/) is a distributed deep learning framework that provides tools for multi-GPU optimization.- The [NVIDIA Collective Communication Library (NCCL)](https://developer.nvidia.com/nccl) provides the underlying GPU-based implementations of the [allgather](https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/usage/operations.html#allgather) and [allreduce](https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/usage/operations.html#allreduce) cross-GPU communication operations. Download and ConvertFirst, we will download and convert the dataset to Parquet. This section is based on [01-Download-Convert.ipynb](../getting-started-movielens/01-Download-Convert.ipynb). Download
###Code
# External dependencies
import os
import pathlib
import cudf # cuDF is an implementation of Pandas-like Dataframe on GPU
from merlin.core.utils import download_file
INPUT_DATA_DIR = os.environ.get(
"INPUT_DATA_DIR", "~/nvt-examples/multigpu-movielens/data/"
)
BASE_DIR = pathlib.Path(INPUT_DATA_DIR).expanduser()
zip_path = pathlib.Path(BASE_DIR, "ml-25m.zip")
download_file(
"http://files.grouplens.org/datasets/movielens/ml-25m.zip", zip_path, redownload=False
)
###Output
downloading ml-25m.zip: 262MB [00:06, 41.9MB/s]
unzipping files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 8/8 [00:04<00:00, 1.74files/s]
###Markdown
Convert
###Code
movies = cudf.read_csv(pathlib.Path(BASE_DIR, "ml-25m", "movies.csv"))
movies["genres"] = movies["genres"].str.split("|")
movies = movies.drop("title", axis=1)
movies.to_parquet(pathlib.Path(BASE_DIR, "ml-25m", "movies_converted.parquet"))
###Output
_____no_output_____
###Markdown
Split into train and validation datasets
###Code
ratings = cudf.read_csv(pathlib.Path(BASE_DIR, "ml-25m", "ratings.csv"))
ratings = ratings.drop("timestamp", axis=1)
# shuffle the dataset
ratings = ratings.sample(len(ratings), replace=False)
# split the ratings into training and validation datasets
num_valid = int(len(ratings) * 0.2)
train = ratings[:-num_valid]
valid = ratings[-num_valid:]
train.to_parquet(pathlib.Path(BASE_DIR, "train.parquet"))
valid.to_parquet(pathlib.Path(BASE_DIR, "valid.parquet"))
###Output
_____no_output_____
###Markdown
ETL with NVTabularWe finished downloading and converting the dataset. We will preprocess and engineer features with NVTabular on multiple GPUs. You can read more:- about NVTabular's features and API in [getting-started-movielens/02-ETL-with-NVTabular.ipynb](../getting-started-movielens/02-ETL-with-NVTabular.ipynb).- about scaling NVTabular ETL to multiple GPUs in [scaling-criteo/02-ETL-with-NVTabular.ipynb](../scaling-criteo/02-ETL-with-NVTabular.ipynb). Deploy a Distributed-Dask ClusterThis section is based on [scaling-criteo/02-ETL-with-NVTabular.ipynb](../scaling-criteo/02-ETL-with-NVTabular.ipynb) and [multi-gpu-toy-example/multi-gpu_dask.ipynb](../multi-gpu-toy-example/multi-gpu_dask.ipynb).
###Code
# Standard Libraries
import shutil
# External Dependencies
import cupy as cp
import numpy as np
import cudf
import dask_cudf
from dask_cuda import LocalCUDACluster
from dask.distributed import Client
from dask.utils import parse_bytes
from dask.delayed import delayed
import rmm
# NVTabular
import nvtabular as nvt
import nvtabular.ops as ops
from merlin.io import Shuffle
from merlin.core.utils import device_mem_size
# define some information about where to get our data
input_path = pathlib.Path(BASE_DIR, "converted", "movielens")
dask_workdir = pathlib.Path(BASE_DIR, "test_dask", "workdir")
output_path = pathlib.Path(BASE_DIR, "test_dask", "output")
stats_path = pathlib.Path(BASE_DIR, "test_dask", "stats")
# Make sure we have a clean worker space for Dask
if pathlib.Path.is_dir(dask_workdir):
shutil.rmtree(dask_workdir)
dask_workdir.mkdir(parents=True)
# Make sure we have a clean stats space for Dask
if pathlib.Path.is_dir(stats_path):
shutil.rmtree(stats_path)
stats_path.mkdir(parents=True)
# Make sure we have a clean output path
if pathlib.Path.is_dir(output_path):
shutil.rmtree(output_path)
output_path.mkdir(parents=True)
# Get device memory capacity
capacity = device_mem_size(kind="total")
# Deploy a Single-Machine Multi-GPU Cluster
protocol = "tcp" # "tcp" or "ucx"
visible_devices = "0,1"  # Select devices to place workers
device_spill_frac = 0.5 # Spill GPU-Worker memory to host at this limit.
# Reduce if spilling fails to prevent
# device memory errors.
cluster = None # (Optional) Specify existing scheduler port
if cluster is None:
cluster = LocalCUDACluster(
protocol=protocol,
CUDA_VISIBLE_DEVICES=visible_devices,
local_directory=dask_workdir,
device_memory_limit=capacity * device_spill_frac,
)
# Create the distributed client
client = Client(cluster)
client
# Initialize RMM pool on ALL workers
def _rmm_pool():
rmm.reinitialize(
pool_allocator=True,
initial_pool_size=None, # Use default size
)
client.run(_rmm_pool)
###Output
_____no_output_____
###Markdown
Defining our Preprocessing PipelineThis subsection is based on [getting-started-movielens/02-ETL-with-NVTabular.ipynb](../getting-started-movielens/02-ETL-with-NVTabular.ipynb).
###Code
movies = cudf.read_parquet(pathlib.Path(BASE_DIR, "ml-25m", "movies_converted.parquet"))
joined = ["userId", "movieId"] >> nvt.ops.JoinExternal(movies, on=["movieId"])
cat_features = joined >> nvt.ops.Categorify()
ratings = nvt.ColumnSelector(["rating"]) >> nvt.ops.LambdaOp(lambda col: (col > 3).astype("int8"), dtype=np.int8)
output = cat_features + ratings
workflow = nvt.Workflow(output)
!rm -rf $BASE_DIR/train
!rm -rf $BASE_DIR/valid
train_iter = nvt.Dataset([str(pathlib.Path(BASE_DIR, "train.parquet"))], part_size="100MB")
valid_iter = nvt.Dataset([str(pathlib.Path(BASE_DIR, "valid.parquet"))], part_size="100MB")
workflow.fit(train_iter)
workflow.save(str(pathlib.Path(BASE_DIR, "workflow")))
shuffle = Shuffle.PER_WORKER # Shuffle algorithm
out_files_per_proc = 4 # Number of output files per worker
workflow.transform(train_iter).to_parquet(
output_path=pathlib.Path(BASE_DIR, "train"),
shuffle=shuffle,
out_files_per_proc=out_files_per_proc,
)
workflow.transform(valid_iter).to_parquet(
output_path=pathlib.Path(BASE_DIR, "valid"),
shuffle=shuffle,
out_files_per_proc=out_files_per_proc,
)
client.shutdown()
cluster.close()
###Output
/usr/local/lib/python3.8/dist-packages/distributed/worker.py:3560: UserWarning: Large object of size 1.90 MiB detected in task graph:
("('read-parquet-d36dd514a8adc53a9a91115c9be1d852' ... 1115c9be1d852')
Consider scattering large objects ahead of time
with client.scatter to reduce scheduler burden and
keep data on workers
future = client.submit(func, big_data) # bad
big_future = client.scatter(big_data) # good
future = client.submit(func, big_future) # good
warnings.warn(
###Markdown
Training with TensorFlow on multiple GPUsIn this section, we will train a TensorFlow model with multi-GPU support. In the NVTabular v0.5 release, we added multi-GPU support for NVTabular dataloaders. We will modify the [getting-started-movielens/03-Training-with-TF.ipynb](../getting-started-movielens/03-Training-with-TF.ipynb) to use multiple GPUs. Please review that notebook if you have questions about the general functionality of the NVTabular dataloaders or the neural network architecture. NVTabular dataloader for TensorFlowWe’ve identified that the dataloader is one bottleneck in deep learning recommender systems when training pipelines with TensorFlow. The normal TensorFlow dataloaders cannot prepare the next training batches fast enough and therefore the GPU is not fully utilized. We developed a highly customized tabular dataloader for accelerating existing pipelines in TensorFlow. In our experiments, we see a 9x speed-up for the same training workflow with the NVTabular dataloader. The NVTabular dataloader’s features are:- removing the bottleneck of item-by-item dataloading- enabling larger-than-memory datasets by streaming from disk- reading data directly into GPU memory and removing CPU-GPU communication- preparing batches asynchronously on the GPU to avoid CPU-GPU communication- supporting the commonly used .parquet format- easy integration into existing TensorFlow pipelines by using a similar API - works with tf.keras models- **supporting multi-GPU training with Horovod**You can find more information on the dataloaders in our [blogpost](https://medium.com/nvidia-merlin/training-deep-learning-based-recommender-systems-9x-faster-with-tensorflow-cc5a2572ea49). Using Horovod with TensorFlow and NVTabularThe training script below is based on [getting-started-movielens/03-Training-with-TF.ipynb](../getting-started-movielens/03-Training-with-TF.ipynb), with a few important changes:- We provide several additional parameters to the `KerasSequenceLoader` class, including the total number of workers `hvd.size()`, the current worker's id number `hvd.rank()`, and a function for generating random seeds `seed_fn()`. ```python train_dataset_tf = KerasSequenceLoader( ... global_size=hvd.size(), global_rank=hvd.rank(), seed_fn=seed_fn, )```- The seed function uses Horovod to collectively generate a random seed that's shared by all workers so that they can each shuffle the dataset in a consistent way and select partitions to work on without overlap. The seed function is called by the dataloader during the shuffling process at the beginning of each epoch:```python def seed_fn(): min_int, max_int = tf.int32.limits max_rand = max_int // hvd.size() Generate a seed fragment on each worker seed_fragment = cupy.random.randint(0, max_rand).get() Aggregate seed fragments from all Horovod workers seed_tensor = tf.constant(seed_fragment) reduced_seed = hvd.allreduce(seed_tensor, name="shuffle_seed", op=hvd.mpi_ops.Sum) return reduced_seed % max_rand```- We wrap the TensorFlow optimizer with Horovod's `DistributedOptimizer` class and scale the learning rate by the number of workers:```python opt = tf.keras.optimizers.SGD(0.01 * hvd.size()) opt = hvd.DistributedOptimizer(opt)```- We wrap the TensorFlow gradient tape with Horovod's `DistributedGradientTape` class:```python with tf.GradientTape() as tape: ... 
tape = hvd.DistributedGradientTape(tape, sparse_as_dense=True)```- After the first batch, we broadcast the model and optimizer parameters to all workers with Horovod:```python Note: broadcast should be done after the first gradient step to ensure optimizer initialization. if first_batch: hvd.broadcast_variables(model.variables, root_rank=0) hvd.broadcast_variables(opt.variables(), root_rank=0)```- We only save checkpoints from the first worker to avoid multiple workers trying to write to the same files:```python if hvd.rank() == 0: checkpoint.save(checkpoint_dir)```The rest of the script is the same as the MovieLens example in [getting-started-movielens/03-Training-with-TF.ipynb](../getting-started-movielens/03-Training-with-TF.ipynb). In order to run it with Horovod, we first need to write it to a file.
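As an aside on the optimizer changes above: the gradient allreduce averages the per-worker gradients, so scaling the learning rate by `hvd.size()` keeps the effective step comparable to summing the per-worker updates at the base rate. A tiny NumPy illustration of that equivalence (purely conceptual, not part of the training script):

```python
# Conceptual check: (lr * N) * mean(grads) == lr * sum(grads)
import numpy as np

num_workers = 2
base_lr = 0.01
per_worker_grads = [np.array([0.2, -0.1]), np.array([0.4, 0.3])]

avg_grad = sum(per_worker_grads) / num_workers       # what an allreduce-average yields
scaled_update = (base_lr * num_workers) * avg_grad   # scaled learning rate * averaged gradient
summed_update = base_lr * sum(per_worker_grads)      # summing per-worker SGD updates instead

assert np.allclose(scaled_update, summed_update)
print(scaled_update)
```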
###Code
%%writefile './tf_trainer.py'
# External dependencies
import argparse
import glob
import os
import cupy
# we can control how much memory to give tensorflow with this environment variable
# IMPORTANT: make sure you do this before you initialize TF's runtime, otherwise
# TF will have claimed all free GPU memory
os.environ["TF_MEMORY_ALLOCATION"] = "0.3" # fraction of free memory
import nvtabular as nvt # noqa: E402 isort:skip
from nvtabular.framework_utils.tensorflow import layers # noqa: E402 isort:skip
from nvtabular.loader.tensorflow import KerasSequenceLoader # noqa: E402 isort:skip
import tensorflow as tf # noqa: E402 isort:skip
import horovod.tensorflow as hvd # noqa: E402 isort:skip
parser = argparse.ArgumentParser(description="Process some integers.")
parser.add_argument("--dir_in", default=None, help="Input directory")
parser.add_argument("--batch_size", default=None, help="batch size")
parser.add_argument("--cats", default=None, help="categorical columns")
parser.add_argument("--cats_mh", default=None, help="categorical multihot columns")
parser.add_argument("--conts", default=None, help="continuous columns")
parser.add_argument("--labels", default=None, help="continuous columns")
args = parser.parse_args()
BASE_DIR = args.dir_in or "./data/"
BATCH_SIZE = int(args.batch_size or 16384) # Batch Size
CATEGORICAL_COLUMNS = args.cats or ["movieId", "userId"] # Single-hot
CATEGORICAL_MH_COLUMNS = args.cats_mh or ["genres"] # Multi-hot
NUMERIC_COLUMNS = args.conts or []
TRAIN_PATHS = sorted(
glob.glob(os.path.join(BASE_DIR, "train/*.parquet"))
) # Output from ETL-with-NVTabular
hvd.init()
# Seed with system randomness (or a static seed)
cupy.random.seed(None)
def seed_fn():
"""
Generate consistent dataloader shuffle seeds across workers
    Reseeds each worker's dataloader each epoch to get a fresh shuffle
that's consistent across workers.
"""
min_int, max_int = tf.int32.limits
max_rand = max_int // hvd.size()
# Generate a seed fragment on each worker
seed_fragment = cupy.random.randint(0, max_rand).get()
# Aggregate seed fragments from all Horovod workers
seed_tensor = tf.constant(seed_fragment)
reduced_seed = hvd.allreduce(seed_tensor, name="shuffle_seed", op=hvd.mpi_ops.Sum)
return reduced_seed % max_rand
proc = nvt.Workflow.load(os.path.join(BASE_DIR, "workflow/"))
EMBEDDING_TABLE_SHAPES, MH_EMBEDDING_TABLE_SHAPES = nvt.ops.get_embedding_sizes(proc)
EMBEDDING_TABLE_SHAPES.update(MH_EMBEDDING_TABLE_SHAPES)
train_dataset_tf = KerasSequenceLoader(
TRAIN_PATHS, # you could also use a glob pattern
batch_size=BATCH_SIZE,
label_names=["rating"],
cat_names=CATEGORICAL_COLUMNS + CATEGORICAL_MH_COLUMNS,
cont_names=NUMERIC_COLUMNS,
engine="parquet",
shuffle=True,
buffer_size=0.06, # how many batches to load at once
parts_per_chunk=1,
global_size=hvd.size(),
global_rank=hvd.rank(),
seed_fn=seed_fn,
)
inputs = {} # tf.keras.Input placeholders for each feature to be used
emb_layers = [] # output of all embedding layers, which will be concatenated
for col in CATEGORICAL_COLUMNS:
inputs[col] = tf.keras.Input(name=col, dtype=tf.int32, shape=(1,))
# Note that we need two input tensors for multi-hot categorical features
for col in CATEGORICAL_MH_COLUMNS:
inputs[col] = \
(tf.keras.Input(name=f"{col}__values", dtype=tf.int64, shape=(1,)),
tf.keras.Input(name=f"{col}__nnzs", dtype=tf.int64, shape=(1,)))
for col in CATEGORICAL_COLUMNS + CATEGORICAL_MH_COLUMNS:
emb_layers.append(
tf.feature_column.embedding_column(
tf.feature_column.categorical_column_with_identity(
col, EMBEDDING_TABLE_SHAPES[col][0]
), # Input dimension (vocab size)
EMBEDDING_TABLE_SHAPES[col][1], # Embedding output dimension
)
)
emb_layer = layers.DenseFeatures(emb_layers)
x_emb_output = emb_layer(inputs)
x = tf.keras.layers.Dense(128, activation="relu")(x_emb_output)
x = tf.keras.layers.Dense(128, activation="relu")(x)
x = tf.keras.layers.Dense(128, activation="relu")(x)
x = tf.keras.layers.Dense(1, activation="sigmoid")(x)
model = tf.keras.Model(inputs=inputs, outputs=x)
loss = tf.losses.BinaryCrossentropy()
opt = tf.keras.optimizers.SGD(0.01 * hvd.size())
opt = hvd.DistributedOptimizer(opt)
checkpoint_dir = "./checkpoints"
checkpoint = tf.train.Checkpoint(model=model, optimizer=opt)
@tf.function(experimental_relax_shapes=True)
def training_step(examples, labels, first_batch):
with tf.GradientTape() as tape:
probs = model(examples, training=True)
loss_value = loss(labels, probs)
# Horovod: add Horovod Distributed GradientTape.
tape = hvd.DistributedGradientTape(tape, sparse_as_dense=True)
grads = tape.gradient(loss_value, model.trainable_variables)
opt.apply_gradients(zip(grads, model.trainable_variables))
# Horovod: broadcast initial variable states from rank 0 to all other processes.
# This is necessary to ensure consistent initialization of all workers when
# training is started with random weights or restored from a checkpoint.
#
# Note: broadcast should be done after the first gradient step to ensure optimizer
# initialization.
if first_batch:
hvd.broadcast_variables(model.variables, root_rank=0)
hvd.broadcast_variables(opt.variables(), root_rank=0)
return loss_value
# Horovod: adjust number of steps based on number of GPUs.
for batch, (examples, labels) in enumerate(train_dataset_tf):
loss_value = training_step(examples, labels, batch == 0)
if batch % 100 == 0 and hvd.local_rank() == 0:
print("Step #%d\tLoss: %.6f" % (batch, loss_value))
hvd.join()
# Horovod: save checkpoints only on worker 0 to prevent other workers from
# corrupting it.
if hvd.rank() == 0:
checkpoint.save(checkpoint_dir)
###Output
Overwriting ./tf_trainer.py
###Markdown
We'll also need a small wrapper script to check environment variables set by the Horovod runner to see which rank we'll be assigned, in order to set CUDA_VISIBLE_DEVICES properly for each worker:
###Code
%%writefile './hvd_wrapper.sh'
#!/bin/bash
# Get local process ID from OpenMPI or alternatively from SLURM
if [ -z "${CUDA_VISIBLE_DEVICES:-}" ]; then
if [ -n "${OMPI_COMM_WORLD_LOCAL_RANK:-}" ]; then
LOCAL_RANK="${OMPI_COMM_WORLD_LOCAL_RANK}"
elif [ -n "${SLURM_LOCALID:-}" ]; then
LOCAL_RANK="${SLURM_LOCALID}"
fi
export CUDA_VISIBLE_DEVICES=${LOCAL_RANK}
fi
exec "$@"
###Output
Overwriting ./hvd_wrapper.sh
###Markdown
OpenMPI and Slurm are tools for running distributed computing jobs. In this example, we’re using OpenMPI, but depending on the environment you run distributed training jobs in, you may need to check slightly different environment variables to find the total number of workers (global size) and each process’s worker number (global rank). Why do we have to check environment variables instead of using `hvd.rank()` and `hvd.local_rank()`? NVTabular does some GPU configuration when imported and needs to be imported before Horovod to avoid conflicts. We need to set GPU visibility before NVTabular is imported (when Horovod isn’t yet available) so that multiple processes don’t each try to configure all the GPUs. As a workaround, we “cheat” and peek at the environment variables set by horovodrun to decide which GPU each process should use.
###Code
!horovodrun -np 2 sh hvd_wrapper.sh python tf_trainer.py --dir_in $BASE_DIR --batch_size 16384
###Output
2021-06-04 16:39:06.000313: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,0]<stderr>:2021-06-04 16:39:08.979997: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,1]<stderr>:2021-06-04 16:39:09.064191: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,0]<stderr>:2021-06-04 16:39:10.138200: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
[1,0]<stderr>:2021-06-04 16:39:10.138376: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcuda.so.1
[1,0]<stderr>:2021-06-04 16:39:10.139777: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Found device 0 with properties:
[1,0]<stderr>:pciBusID: 0000:0b:00.0 name: GeForce GTX 1080 Ti computeCapability: 6.1
[1,0]<stderr>:coreClock: 1.582GHz coreCount: 28 deviceMemorySize: 10.91GiB deviceMemoryBandwidth: 451.17GiB/s
[1,0]<stderr>:2021-06-04 16:39:10.139823: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,0]<stderr>:2021-06-04 16:39:10.139907: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
[1,0]<stderr>:2021-06-04 16:39:10.139949: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
[1,0]<stderr>:2021-06-04 16:39:10.139990: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
[1,0]<stderr>:2021-06-04 16:39:10.140029: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
[1,0]<stderr>:2021-06-04 16:39:10.140084: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.11
[1,0]<stderr>:2021-06-04 16:39:10.140123: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
[1,0]<stderr>:2021-06-04 16:39:10.140169: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
[1,0]<stderr>:2021-06-04 16:39:10.144021: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1888] Adding visible gpu devices: 0
[1,1]<stderr>:2021-06-04 16:39:10.367414: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
[1,1]<stderr>:2021-06-04 16:39:10.367496: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcuda.so.1
[1,1]<stderr>:2021-06-04 16:39:10.368324: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Found device 0 with properties:
[1,1]<stderr>:pciBusID: 0000:42:00.0 name: GeForce GTX 1080 Ti computeCapability: 6.1
[1,1]<stderr>:coreClock: 1.582GHz coreCount: 28 deviceMemorySize: 10.92GiB deviceMemoryBandwidth: 451.17GiB/s
[1,1]<stderr>:2021-06-04 16:39:10.368347: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,1]<stderr>:2021-06-04 16:39:10.368396: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
[1,1]<stderr>:2021-06-04 16:39:10.368424: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
[1,1]<stderr>:2021-06-04 16:39:10.368451: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
[1,1]<stderr>:2021-06-04 16:39:10.368475: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
[1,1]<stderr>:2021-06-04 16:39:10.368512: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.11
[1,1]<stderr>:2021-06-04 16:39:10.368537: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
[1,1]<stderr>:2021-06-04 16:39:10.368573: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
[1,1]<stderr>:2021-06-04 16:39:10.369841: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1888] Adding visible gpu devices: 0
[1,1]<stderr>:2021-06-04 16:39:11.730033: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
[1,1]<stderr>:2021-06-04 16:39:11.730907: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Found device 0 with properties:
[1,1]<stderr>:pciBusID: 0000:42:00.0 name: GeForce GTX 1080 Ti computeCapability: 6.1
[1,1]<stderr>:coreClock: 1.582GHz coreCount: 28 deviceMemorySize: 10.92GiB deviceMemoryBandwidth: 451.17GiB/s
[1,1]<stderr>:2021-06-04 16:39:11.730990: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,1]<stderr>:2021-06-04 16:39:11.731005: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
[1,1]<stderr>:2021-06-04 16:39:11.731018: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
[1,1]<stderr>:2021-06-04 16:39:11.731029: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
[1,1]<stderr>:2021-06-04 16:39:11.731038: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
[1,1]<stderr>:2021-06-04 16:39:11.731049: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.11
[1,1]<stderr>:2021-06-04 16:39:11.731059: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
[1,1]<stderr>:2021-06-04 16:39:11.731078: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
[1,1]<stderr>:2021-06-04 16:39:11.732312: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1888] Adding visible gpu devices: 0
[1,1]<stderr>:2021-06-04 16:39:11.732350: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,1]<stderr>:2021-06-04 16:39:11.732473: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1287] Device interconnect StreamExecutor with strength 1 edge matrix:
[1,1]<stderr>:2021-06-04 16:39:11.732487: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1293] 0
[1,1]<stderr>:2021-06-04 16:39:11.732493: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1306] 0: N
[1,1]<stderr>:2021-06-04 16:39:11.734431: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3352 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:42:00.0, compute capability: 6.1)
[1,0]<stderr>:2021-06-04 16:39:11.821346: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
[1,0]<stderr>:2021-06-04 16:39:11.822270: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Found device 0 with properties:
[1,0]<stderr>:pciBusID: 0000:0b:00.0 name: GeForce GTX 1080 Ti computeCapability: 6.1
[1,0]<stderr>:coreClock: 1.582GHz coreCount: 28 deviceMemorySize: 10.91GiB deviceMemoryBandwidth: 451.17GiB/s
[1,0]<stderr>:2021-06-04 16:39:11.822360: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,0]<stderr>:2021-06-04 16:39:11.822376: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
[1,0]<stderr>:2021-06-04 16:39:11.822389: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
[1,0]<stderr>:2021-06-04 16:39:11.822400: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
[1,0]<stderr>:2021-06-04 16:39:11.822411: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
[1,0]<stderr>:2021-06-04 16:39:11.822425: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.11
[1,0]<stderr>:2021-06-04 16:39:11.822434: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
[1,0]<stderr>:2021-06-04 16:39:11.822454: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
[1,0]<stderr>:2021-06-04 16:39:11.823684: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1888] Adding visible gpu devices: 0
[1,0]<stderr>:2021-06-04 16:39:11.823731: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,0]<stderr>:2021-06-04 16:39:11.823868: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1287] Device interconnect StreamExecutor with strength 1 edge matrix:
[1,0]<stderr>:2021-06-04 16:39:11.823881: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1293] 0
[1,0]<stderr>:2021-06-04 16:39:11.823888: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1306] 0: N
[1,0]<stderr>:2021-06-04 16:39:11.825784: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3352 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:0b:00.0, compute capability: 6.1)
[1,0]<stderr>:2021-06-04 16:39:17.634485: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:116] None of the MLIR optimization passes are enabled (registered 2)
[1,0]<stderr>:2021-06-04 16:39:17.668915: I tensorflow/core/platform/profile_utils/cpu_utils.cc:112] CPU Frequency: 2993950000 Hz
[1,1]<stderr>:2021-06-04 16:39:17.694128: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:116] None of the MLIR optimization passes are enabled (registered 2)
[1,1]<stderr>:2021-06-04 16:39:17.703326: I tensorflow/core/platform/profile_utils/cpu_utils.cc:112] CPU Frequency: 2993950000 Hz
[1,0]<stderr>:2021-06-04 16:39:17.780825: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
[1,1]<stderr>:2021-06-04 16:39:17.810644: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
[1,0]<stderr>:2021-06-04 16:39:17.984966: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
[1,1]<stderr>:2021-06-04 16:39:18.012113: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
[1,0]<stdout>:Step #0 Loss: 0.695094
[1,0]<stdout>:Step #100 Loss: 0.669580
[1,0]<stdout>:Step #200 Loss: 0.661098
[1,0]<stdout>:Step #300 Loss: 0.660680
[1,0]<stdout>:Step #400 Loss: 0.658633
[1,0]<stdout>:Step #500 Loss: 0.660251
[1,0]<stdout>:Step #600 Loss: 0.657047
###Markdown
Multi-GPU with MovieLens: ETL and Training OverviewNVIDIA Merlin is an open source framework to accelerate and scale end-to-end recommender system pipelines on GPU. In this notebook, we use NVTabular, Merlin’s ETL component, to scale feature engineering and pre-processing to multiple GPUs and then perform data-parallel distributed training of a neural network on multiple GPUs with TensorFlow, [Horovod](https://horovod.readthedocs.io/en/stable/), and [NCCL](https://developer.nvidia.com/nccl).The prerequisite for this notebook is familiarity with NVTabular and its API:- You can read more about NVTabular, its API and specialized dataloaders in the [Getting Started with Movielens notebooks](../getting-started-movielens).- You can read more about scaling NVTabular ETL in the [Scaling Criteo notebooks](../scaling-criteo).**In this notebook, we will focus only on the new information related to multi-GPU training, so please check out the other notebooks first (if you haven’t already).** Learning objectivesIn this notebook, we learn how to scale ETL and deep learning training to multiple GPUs:- Learn to use datasets that are larger than GPU/host memory for ETL and training- Use multiple GPUs or multiple nodes for ETL with NVTabular- Use the NVTabular dataloader to accelerate TensorFlow pipelines- Scale TensorFlow training with Horovod DatasetIn this notebook, we use the [MovieLens25M](https://grouplens.org/datasets/movielens/25m/) dataset. It is popular for recommender systems and is used in academic publications. The dataset contains 25M movie ratings for 62,000 movies given by 162,000 users. Many projects use only the user/item/rating information of MovieLens, but the original dataset provides metadata for the movies, as well.Note: We are using the MovieLens 25M dataset in this example for simplicity, although the dataset is not large enough to require multi-GPU training. However, the functionality demonstrated in this notebook can be easily extended to scale recommender pipelines for larger datasets in the same way. Tools- [Horovod](https://horovod.readthedocs.io/en/stable/) is a distributed deep learning framework that provides tools for multi-GPU optimization.- The [NVIDIA Collective Communication Library (NCCL)](https://developer.nvidia.com/nccl) provides the underlying GPU-based implementations of the [allgather](https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/usage/operations.html#allgather) and [allreduce](https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/usage/operations.html#allreduce) cross-GPU communication operations. Download and ConvertFirst, we will download and convert the dataset to Parquet. This section is based on [01-Download-Convert.ipynb](../getting-started-movielens/01-Download-Convert.ipynb). Download
###Code
# External dependencies
import os
import pathlib
import cudf # cuDF is an implementation of Pandas-like Dataframe on GPU
from nvtabular.utils import download_file
from sklearn.model_selection import train_test_split
INPUT_DATA_DIR = os.environ.get(
"INPUT_DATA_DIR", "~/nvt-examples/multigpu-movielens/data/"
)
BASE_DIR = pathlib.Path(INPUT_DATA_DIR).expanduser()
zip_path = pathlib.Path(BASE_DIR, "ml-25m.zip")
download_file(
"http://files.grouplens.org/datasets/movielens/ml-25m.zip", zip_path, redownload=False
)
###Output
unzipping files: 100%|██████████| 8/8 [00:04<00:00, 1.66files/s]
###Markdown
Convert
###Code
movies = cudf.read_csv(pathlib.Path(BASE_DIR, "ml-25m", "movies.csv"))
movies["genres"] = movies["genres"].str.split("|")
movies = movies.drop("title", axis=1)
movies.to_parquet(pathlib.Path(BASE_DIR, "ml-25m", "movies_converted.parquet"))
###Output
_____no_output_____
###Markdown
Split into train and validation datasets
###Code
ratings = cudf.read_csv(pathlib.Path(BASE_DIR, "ml-25m", "ratings.csv"))
ratings = ratings.drop("timestamp", axis=1)
train, valid = train_test_split(ratings, test_size=0.2, random_state=42)
train.to_parquet(pathlib.Path(BASE_DIR, "train.parquet"))
valid.to_parquet(pathlib.Path(BASE_DIR, "valid.parquet"))
###Output
_____no_output_____
###Markdown
ETL with NVTabularWe finished downloading and converting the dataset. We will preprocess and engineer features with NVTabular on multiple GPUs. You can read more:- about NVTabular's features and API in [getting-started-movielens/02-ETL-with-NVTabular.ipynb](../getting-started-movielens/02-ETL-with-NVTabular.ipynb).- about scaling NVTabular ETL to multiple GPUs in [scaling-criteo/02-ETL-with-NVTabular.ipynb](../scaling-criteo/02-ETL-with-NVTabular.ipynb). Deploy a Distributed-Dask ClusterThis section is based on [scaling-criteo/02-ETL-with-NVTabular.ipynb](../scaling-criteo/02-ETL-with-NVTabular.ipynb) and [multi-gpu-toy-example/multi-gpu_dask.ipynb](../multi-gpu-toy-example/multi-gpu_dask.ipynb).
###Code
# Standard Libraries
import shutil
# External Dependencies
import cupy as cp
import cudf
import dask_cudf
from dask_cuda import LocalCUDACluster
from dask.distributed import Client
from dask.utils import parse_bytes
from dask.delayed import delayed
import rmm
# NVTabular
import nvtabular as nvt
import nvtabular.ops as ops
from nvtabular.io import Shuffle
from nvtabular.utils import device_mem_size
# define some information about where to get our data
input_path = pathlib.Path(BASE_DIR, "converted", "movielens")
dask_workdir = pathlib.Path(BASE_DIR, "test_dask", "workdir")
output_path = pathlib.Path(BASE_DIR, "test_dask", "output")
stats_path = pathlib.Path(BASE_DIR, "test_dask", "stats")
# Make sure we have a clean worker space for Dask
if pathlib.Path.is_dir(dask_workdir):
shutil.rmtree(dask_workdir)
dask_workdir.mkdir(parents=True)
# Make sure we have a clean stats space for Dask
if pathlib.Path.is_dir(stats_path):
shutil.rmtree(stats_path)
stats_path.mkdir(parents=True)
# Make sure we have a clean output path
if pathlib.Path.is_dir(output_path):
shutil.rmtree(output_path)
output_path.mkdir(parents=True)
# Get device memory capacity
capacity = device_mem_size(kind="total")
# Deploy a Single-Machine Multi-GPU Cluster
protocol = "tcp" # "tcp" or "ucx"
visible_devices = "0,1"  # Select devices to place workers
device_spill_frac = 0.5 # Spill GPU-Worker memory to host at this limit.
# Reduce if spilling fails to prevent
# device memory errors.
cluster = None # (Optional) Specify existing scheduler port
if cluster is None:
cluster = LocalCUDACluster(
protocol=protocol,
CUDA_VISIBLE_DEVICES=visible_devices,
local_directory=dask_workdir,
device_memory_limit=capacity * device_spill_frac,
)
# Create the distributed client
client = Client(cluster)
client
# Initialize RMM pool on ALL workers
def _rmm_pool():
rmm.reinitialize(
pool_allocator=True,
initial_pool_size=None, # Use default size
)
client.run(_rmm_pool)
###Output
_____no_output_____
###Markdown
Defining our Preprocessing PipelineThis subsection is based on [getting-started-movielens/02-ETL-with-NVTabular.ipynb](../getting-started-movielens/02-ETL-with-NVTabular.ipynb). The only difference is that we initialize the NVTabular workflow using the LocalCUDACluster client with `nvt.Workflow(output, client=client)`.
###Code
movies = cudf.read_parquet(pathlib.Path(BASE_DIR, "ml-25m", "movies_converted.parquet"))
joined = ["userId", "movieId"] >> nvt.ops.JoinExternal(movies, on=["movieId"])
cat_features = joined >> nvt.ops.Categorify()
ratings = nvt.ColumnGroup(["rating"]) >> (lambda col: (col > 3).astype("int8"))
output = cat_features + ratings
# USE client in NVTabular workfow
workflow = nvt.Workflow(output, client=client)
!rm -rf $BASE_DIR/train
!rm -rf $BASE_DIR/valid
train_iter = nvt.Dataset([str(pathlib.Path(BASE_DIR, "train.parquet"))], part_size="100MB")
valid_iter = nvt.Dataset([str(pathlib.Path(BASE_DIR, "valid.parquet"))], part_size="100MB")
workflow.fit(train_iter)
workflow.save(pathlib.Path(BASE_DIR, "workflow"))
shuffle = Shuffle.PER_WORKER # Shuffle algorithm
out_files_per_proc = 4 # Number of output files per worker
workflow.transform(train_iter).to_parquet(
output_path=pathlib.Path(BASE_DIR, "train"),
shuffle=shuffle,
out_files_per_proc=out_files_per_proc,
)
workflow.transform(valid_iter).to_parquet(
output_path=pathlib.Path(BASE_DIR, "valid"),
shuffle=shuffle,
out_files_per_proc=out_files_per_proc,
)
client.shutdown()
cluster.close()
###Output
/usr/local/lib/python3.8/dist-packages/distributed/worker.py:3560: UserWarning: Large object of size 1.90 MiB detected in task graph:
("('read-parquet-d282e60016f67eeed62ccc707e5a7466' ... ccc707e5a7466')
Consider scattering large objects ahead of time
with client.scatter to reduce scheduler burden and
keep data on workers
future = client.submit(func, big_data) # bad
big_future = client.scatter(big_data) # good
future = client.submit(func, big_future) # good
warnings.warn(
###Markdown
Training with TensorFlow on multiple GPUsIn this section, we will train a TensorFlow model with multi-GPU support. In the NVTabular v0.5 release, we added multi-GPU support for NVTabular dataloaders. We will modify the [getting-started-movielens/03-Training-with-TF.ipynb](../getting-started-movielens/03-Training-with-TF.ipynb) to use multiple GPUs. Please review that notebook if you have questions about the general functionality of the NVTabular dataloaders or the neural network architecture. NVTabular dataloader for TensorFlowWe’ve identified that the dataloader is one bottleneck in deep learning recommender systems when training pipelines with TensorFlow. The normal TensorFlow dataloaders cannot prepare the next training batches fast enough and therefore the GPU is not fully utilized. We developed a highly customized tabular dataloader for accelerating existing pipelines in TensorFlow. In our experiments, we see a 9x speed-up for the same training workflow with the NVTabular dataloader. The NVTabular dataloader’s features are:- removing the bottleneck of item-by-item dataloading- enabling larger-than-memory datasets by streaming from disk- reading data directly into GPU memory and removing CPU-GPU communication- preparing batches asynchronously on the GPU to avoid CPU-GPU communication- supporting the commonly used .parquet format- easy integration into existing TensorFlow pipelines by using a similar API - works with tf.keras models- **supporting multi-GPU training with Horovod**You can find more information on the dataloaders in our [blogpost](https://medium.com/nvidia-merlin/training-deep-learning-based-recommender-systems-9x-faster-with-tensorflow-cc5a2572ea49). Using Horovod with TensorFlow and NVTabularThe training script below is based on [getting-started-movielens/03-Training-with-TF.ipynb](../getting-started-movielens/03-Training-with-TF.ipynb), with a few important changes:- We provide several additional parameters to the `KerasSequenceLoader` class, including the total number of workers `hvd.size()`, the current worker's id number `hvd.rank()`, and a function for generating random seeds `seed_fn()`. ```python train_dataset_tf = KerasSequenceLoader( ... global_size=hvd.size(), global_rank=hvd.rank(), seed_fn=seed_fn, )```- The seed function uses Horovod to collectively generate a random seed that's shared by all workers so that they can each shuffle the dataset in a consistent way and select partitions to work on without overlap. The seed function is called by the dataloader during the shuffling process at the beginning of each epoch:```python def seed_fn(): min_int, max_int = tf.int32.limits max_rand = max_int // hvd.size() Generate a seed fragment on each worker seed_fragment = cupy.random.randint(0, max_rand).get() Aggregate seed fragments from all Horovod workers seed_tensor = tf.constant(seed_fragment) reduced_seed = hvd.allreduce(seed_tensor, name="shuffle_seed", op=hvd.mpi_ops.Sum) return reduced_seed % max_rand```- We wrap the TensorFlow optimizer with Horovod's `DistributedOptimizer` class and scale the learning rate by the number of workers:```python opt = tf.keras.optimizers.SGD(0.01 * hvd.size()) opt = hvd.DistributedOptimizer(opt)```- We wrap the TensorFlow gradient tape with Horovod's `DistributedGradientTape` class:```python with tf.GradientTape() as tape: ... 
tape = hvd.DistributedGradientTape(tape, sparse_as_dense=True)```- After the first batch, we broadcast the model and optimizer parameters to all workers with Horovod:```python Note: broadcast should be done after the first gradient step to ensure optimizer initialization. if first_batch: hvd.broadcast_variables(model.variables, root_rank=0) hvd.broadcast_variables(opt.variables(), root_rank=0)```- We only save checkpoints from the first worker to avoid multiple workers trying to write to the same files:```python if hvd.rank() == 0: checkpoint.save(checkpoint_dir)```The rest of the script is the same as the MovieLens example in [getting-started-movielens/03-Training-with-TF.ipynb](../getting-started-movielens/03-Training-with-TF.ipynb). In order to run it with Horovod, we first need to write it to a file.
###Code
%%writefile './tf_trainer.py'
# External dependencies
import argparse
import glob
import os
import cupy
# we can control how much memory to give tensorflow with this environment variable
# IMPORTANT: make sure you do this before you initialize TF's runtime, otherwise
# TF will have claimed all free GPU memory
os.environ["TF_MEMORY_ALLOCATION"] = "0.3" # fraction of free memory
import nvtabular as nvt # noqa: E402 isort:skip
from nvtabular.framework_utils.tensorflow import layers # noqa: E402 isort:skip
from nvtabular.loader.tensorflow import KerasSequenceLoader # noqa: E402 isort:skip
import tensorflow as tf # noqa: E402 isort:skip
import horovod.tensorflow as hvd # noqa: E402 isort:skip
parser = argparse.ArgumentParser(description="Process some integers.")
parser.add_argument("--dir_in", default=None, help="Input directory")
parser.add_argument("--batch_size", default=None, help="batch size")
parser.add_argument("--cats", default=None, help="categorical columns")
parser.add_argument("--cats_mh", default=None, help="categorical multihot columns")
parser.add_argument("--conts", default=None, help="continuous columns")
parser.add_argument("--labels", default=None, help="continuous columns")
args = parser.parse_args()
BASE_DIR = args.dir_in or "./data/"
BATCH_SIZE = int(args.batch_size or 16384) # Batch Size
CATEGORICAL_COLUMNS = args.cats or ["movieId", "userId"] # Single-hot
CATEGORICAL_MH_COLUMNS = args.cats_mh or ["genres"] # Multi-hot
NUMERIC_COLUMNS = args.conts or []
TRAIN_PATHS = sorted(
glob.glob(os.path.join(BASE_DIR, "train/*.parquet"))
) # Output from ETL-with-NVTabular
hvd.init()
# Seed with system randomness (or a static seed)
cupy.random.seed(None)
def seed_fn():
"""
Generate consistent dataloader shuffle seeds across workers
    Reseeds each worker's dataloader each epoch to get a fresh shuffle
that's consistent across workers.
"""
min_int, max_int = tf.int32.limits
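    # Dividing the max by hvd.size() keeps the sum of all workers' seed fragments within int32 range.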
max_rand = max_int // hvd.size()
# Generate a seed fragment on each worker
seed_fragment = cupy.random.randint(0, max_rand).get()
# Aggregate seed fragments from all Horovod workers
seed_tensor = tf.constant(seed_fragment)
reduced_seed = hvd.allreduce(seed_tensor, name="shuffle_seed", op=hvd.mpi_ops.Sum)
return reduced_seed % max_rand
proc = nvt.Workflow.load(os.path.join(BASE_DIR, "workflow/"))
EMBEDDING_TABLE_SHAPES = nvt.ops.get_embedding_sizes(proc)
train_dataset_tf = KerasSequenceLoader(
TRAIN_PATHS, # you could also use a glob pattern
batch_size=BATCH_SIZE,
label_names=["rating"],
cat_names=CATEGORICAL_COLUMNS + CATEGORICAL_MH_COLUMNS,
cont_names=NUMERIC_COLUMNS,
engine="parquet",
shuffle=True,
buffer_size=0.06, # how many batches to load at once
parts_per_chunk=1,
global_size=hvd.size(),
global_rank=hvd.rank(),
seed_fn=seed_fn,
)
inputs = {} # tf.keras.Input placeholders for each feature to be used
emb_layers = [] # output of all embedding layers, which will be concatenated
for col in CATEGORICAL_COLUMNS:
inputs[col] = tf.keras.Input(name=col, dtype=tf.int32, shape=(1,))
# Note that we need two input tensors for multi-hot categorical features
for col in CATEGORICAL_MH_COLUMNS:
inputs[col + "__values"] = tf.keras.Input(name=f"{col}__values", dtype=tf.int64, shape=(1,))
inputs[col + "__nnzs"] = tf.keras.Input(name=f"{col}__nnzs", dtype=tf.int64, shape=(1,))
for col in CATEGORICAL_COLUMNS + CATEGORICAL_MH_COLUMNS:
emb_layers.append(
tf.feature_column.embedding_column(
tf.feature_column.categorical_column_with_identity(
col, EMBEDDING_TABLE_SHAPES[col][0]
), # Input dimension (vocab size)
EMBEDDING_TABLE_SHAPES[col][1], # Embedding output dimension
)
)
emb_layer = layers.DenseFeatures(emb_layers)
x_emb_output = emb_layer(inputs)
x = tf.keras.layers.Dense(128, activation="relu")(x_emb_output)
x = tf.keras.layers.Dense(128, activation="relu")(x)
x = tf.keras.layers.Dense(128, activation="relu")(x)
x = tf.keras.layers.Dense(1, activation="sigmoid")(x)
model = tf.keras.Model(inputs=inputs, outputs=x)
loss = tf.losses.BinaryCrossentropy()
opt = tf.keras.optimizers.SGD(0.01 * hvd.size())
opt = hvd.DistributedOptimizer(opt)
checkpoint_dir = "./checkpoints"
checkpoint = tf.train.Checkpoint(model=model, optimizer=opt)
@tf.function(experimental_relax_shapes=True)
def training_step(examples, labels, first_batch):
with tf.GradientTape() as tape:
probs = model(examples, training=True)
loss_value = loss(labels, probs)
# Horovod: add Horovod Distributed GradientTape.
tape = hvd.DistributedGradientTape(tape, sparse_as_dense=True)
grads = tape.gradient(loss_value, model.trainable_variables)
opt.apply_gradients(zip(grads, model.trainable_variables))
# Horovod: broadcast initial variable states from rank 0 to all other processes.
# This is necessary to ensure consistent initialization of all workers when
# training is started with random weights or restored from a checkpoint.
#
# Note: broadcast should be done after the first gradient step to ensure optimizer
# initialization.
if first_batch:
hvd.broadcast_variables(model.variables, root_rank=0)
hvd.broadcast_variables(opt.variables(), root_rank=0)
return loss_value
# Horovod: adjust number of steps based on number of GPUs.
for batch, (examples, labels) in enumerate(train_dataset_tf):
loss_value = training_step(examples, labels, batch == 0)
if batch % 100 == 0 and hvd.local_rank() == 0:
print("Step #%d\tLoss: %.6f" % (batch, loss_value))
hvd.join()
# Horovod: save checkpoints only on worker 0 to prevent other workers from
# corrupting it.
if hvd.rank() == 0:
checkpoint.save(checkpoint_dir)
###Output
Overwriting ./tf_trainer.py
###Markdown
We'll also need a small wrapper script to check environment variables set by the Horovod runner to see which rank we'll be assigned, in order to set CUDA_VISIBLE_DEVICES properly for each worker:
###Code
%%writefile './hvd_wrapper.sh'
#!/bin/bash
# Get local process ID from OpenMPI or alternatively from SLURM
if [ -z "${CUDA_VISIBLE_DEVICES:-}" ]; then
if [ -n "${OMPI_COMM_WORLD_LOCAL_RANK:-}" ]; then
LOCAL_RANK="${OMPI_COMM_WORLD_LOCAL_RANK}"
elif [ -n "${SLURM_LOCALID:-}" ]; then
LOCAL_RANK="${SLURM_LOCALID}"
fi
export CUDA_VISIBLE_DEVICES=${LOCAL_RANK}
fi
exec "$@"
###Output
Overwriting ./hvd_wrapper.sh
###Markdown
OpenMPI and Slurm are tools for running distributed computing jobs. In this example, we’re using OpenMPI, but depending on the environment you run distributed training jobs in, you may need to check slightly different environment variables to find the total number of workers (global size) and each process’s worker number (global rank).Why do we have to check environment variables instead of using `hvd.rank()` and `hvd.local_rank()`? NVTabular does some GPU configuration when imported and needs to be imported before Horovod to avoid conflicts. We need to set GPU visibility before NVTabular is imported (when Horovod isn’t yet available) so that multiple processes don’t each try to configure all the GPUs. As a workaround, we “cheat” and peek at environment variables set by horovodrun to decide which GPU each process should use.
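For illustration, here is a minimal Python sketch of the same idea applied directly inside a training script: read the local rank from the launcher-provided environment variables and pin `CUDA_VISIBLE_DEVICES` before any GPU-configuring import. This is only a sketch; the environment variable names mirror the wrapper script above, and the fallback value `"0"` is an assumption for single-process runs.

```python
# Sketch: pin one GPU per process from launcher-provided env vars.
# Must run before importing nvtabular / tensorflow / horovod.
import os

local_rank = os.environ.get(
    "OMPI_COMM_WORLD_LOCAL_RANK",          # set by OpenMPI / horovodrun
    os.environ.get("SLURM_LOCALID", "0"),  # set by Slurm; "0" is an assumed fallback
)
os.environ.setdefault("CUDA_VISIBLE_DEVICES", local_rank)
```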
###Code
!horovodrun -np 2 sh hvd_wrapper.sh python tf_trainer.py --dir_in $BASE_DIR --batch_size 16384
###Output
2021-05-10 16:25:54.167339: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,0]<stderr>:2021-05-10 16:25:57.853400: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,1]<stderr>:2021-05-10 16:25:57.853413: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,0]<stderr>:2021-05-10 16:25:59.322516: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
[1,0]<stderr>:2021-05-10 16:25:59.322879: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcuda.so.1
[1,0]<stderr>:2021-05-10 16:25:59.325075: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Found device 0 with properties:
[1,0]<stderr>:pciBusID: 0000:09:00.0 name: NVIDIA GeForce GTX 1080 Ti computeCapability: 6.1
[1,0]<stderr>:coreClock: 1.582GHz coreCount: 28 deviceMemorySize: 10.91GiB deviceMemoryBandwidth: 451.17GiB/s
[1,0]<stderr>:2021-05-10 16:25:59.325104: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,0]<stderr>:2021-05-10 16:25:59.325161: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
[1,0]<stderr>:2021-05-10 16:25:59.325189: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
[1,0]<stderr>:2021-05-10 16:25:59.325221: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
[1,0]<stderr>:2021-05-10 16:25:59.325247: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
[1,0]<stderr>:2021-05-10 16:25:59.325283: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.11
[1,0]<stderr>:2021-05-10 16:25:59.325308: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
[1,0]<stderr>:2021-05-10 16:25:59.325319: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
[1,0]<stderr>:2021-05-10 16:25:59.331299: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1888] Adding visible gpu devices: 0
[1,1]<stderr>:2021-05-10 16:25:59.339389: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
[1,1]<stderr>:2021-05-10 16:25:59.339534: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcuda.so.1
[1,1]<stderr>:2021-05-10 16:25:59.340672: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Found device 0 with properties:
[1,1]<stderr>:pciBusID: 0000:41:00.0 name: NVIDIA GeForce GTX 1080 Ti computeCapability: 6.1
[1,1]<stderr>:coreClock: 1.582GHz coreCount: 28 deviceMemorySize: 10.92GiB deviceMemoryBandwidth: 451.17GiB/s
[1,1]<stderr>:2021-05-10 16:25:59.340721: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,1]<stderr>:2021-05-10 16:25:59.340788: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
[1,1]<stderr>:2021-05-10 16:25:59.340825: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
[1,1]<stderr>:2021-05-10 16:25:59.340861: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
[1,1]<stderr>:2021-05-10 16:25:59.340894: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
[1,1]<stderr>:2021-05-10 16:25:59.340942: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.11
[1,1]<stderr>:2021-05-10 16:25:59.340975: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
[1,1]<stderr>:2021-05-10 16:25:59.340989: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
[1,1]<stderr>:2021-05-10 16:25:59.342780: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1888] Adding visible gpu devices: 0
[1,0]<stderr>:2021-05-10 16:26:00.974712: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
[1,0]<stderr>:2021-05-10 16:26:00.975672: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Found device 0 with properties:
[1,0]<stderr>:pciBusID: 0000:09:00.0 name: NVIDIA GeForce GTX 1080 Ti computeCapability: 6.1
[1,0]<stderr>:coreClock: 1.582GHz coreCount: 28 deviceMemorySize: 10.91GiB deviceMemoryBandwidth: 451.17GiB/s
[1,0]<stderr>:2021-05-10 16:26:00.975764: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,0]<stderr>:2021-05-10 16:26:00.975779: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
[1,0]<stderr>:2021-05-10 16:26:00.975793: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
[1,0]<stderr>:2021-05-10 16:26:00.975803: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
[1,0]<stderr>:2021-05-10 16:26:00.975813: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
[1,0]<stderr>:2021-05-10 16:26:00.975824: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.11
[1,0]<stderr>:2021-05-10 16:26:00.975835: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
[1,0]<stderr>:2021-05-10 16:26:00.975844: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
[1,0]<stderr>:2021-05-10 16:26:00.977100: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1888] Adding visible gpu devices: 0
[1,0]<stderr>:2021-05-10 16:26:00.977329: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,0]<stderr>:2021-05-10 16:26:00.977852: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1287] Device interconnect StreamExecutor with strength 1 edge matrix:
[1,0]<stderr>:2021-05-10 16:26:00.977869: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1293] 0
[1,0]<stderr>:2021-05-10 16:26:00.977876: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1306] 0: N
[1,0]<stderr>:2021-05-10 16:26:00.979981: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3352 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce GTX 1080 Ti, pci bus id: 0000:09:00.0, compute capability: 6.1)
[1,1]<stderr>:2021-05-10 16:26:01.017026: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
[1,1]<stderr>:2021-05-10 16:26:01.017947: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Found device 0 with properties:
[1,1]<stderr>:pciBusID: 0000:41:00.0 name: NVIDIA GeForce GTX 1080 Ti computeCapability: 6.1
[1,1]<stderr>:coreClock: 1.582GHz coreCount: 28 deviceMemorySize: 10.92GiB deviceMemoryBandwidth: 451.17GiB/s
[1,1]<stderr>:2021-05-10 16:26:01.018014: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,1]<stderr>:2021-05-10 16:26:01.018029: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
[1,1]<stderr>:2021-05-10 16:26:01.018041: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
[1,1]<stderr>:2021-05-10 16:26:01.018050: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
[1,1]<stderr>:2021-05-10 16:26:01.018059: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
[1,1]<stderr>:2021-05-10 16:26:01.018069: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.11
[1,1]<stderr>:2021-05-10 16:26:01.018077: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
[1,1]<stderr>:2021-05-10 16:26:01.018088: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
[1,1]<stderr>:2021-05-10 16:26:01.019405: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1888] Adding visible gpu devices: 0
[1,1]<stderr>:2021-05-10 16:26:01.019444: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,1]<stderr>:2021-05-10 16:26:01.019556: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1287] Device interconnect StreamExecutor with strength 1 edge matrix:
[1,1]<stderr>:2021-05-10 16:26:01.019571: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1293] 0
[1,1]<stderr>:2021-05-10 16:26:01.019577: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1306] 0: N
[1,1]<stderr>:2021-05-10 16:26:01.021620: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3352 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce GTX 1080 Ti, pci bus id: 0000:41:00.0, compute capability: 6.1)
###Markdown
Multi-GPU with MovieLens: ETL and Training OverviewNVIDIA Merlin is an open source framework to accelerate and scale end-to-end recommender system pipelines on GPU. In this notebook, we use NVTabular, Merlin’s ETL component, to scale feature engineering and pre-processing to multiple GPUs and then perform data-parallel distributed training of a neural network on multiple GPUs with TensorFlow, [Horovod](https://horovod.readthedocs.io/en/stable/), and [NCCL](https://developer.nvidia.com/nccl).The prerequisites for this notebook are familiarity with NVTabular and its API:- You can read more about NVTabular, its API and specialized dataloaders in the [Getting Started with Movielens notebooks](../getting-started-movielens).- You can read more about scaling NVTabular ETL in the [Scaling Criteo notebooks](../scaling-criteo).**In this notebook, we will focus only on the new information related to multi-GPU training, so please check out the other notebooks first (if you haven’t already).** Learning objectivesIn this notebook, we learn how to scale ETL and deep learning training to multiple GPUs:- Learn to use datasets larger than GPU/host memory for ETL and training- Use multiple GPUs or multiple nodes for ETL with NVTabular- Use the NVTabular dataloader to accelerate TensorFlow pipelines- Scale TensorFlow training with Horovod DatasetIn this notebook, we use the [MovieLens25M](https://grouplens.org/datasets/movielens/25m/) dataset. It is popular for recommender systems and is used in academic publications. The dataset contains 25M movie ratings for 62,000 movies given by 162,000 users. Many projects use only the user/item/rating information of MovieLens, but the original dataset provides metadata for the movies as well.Note: We are using the MovieLens 25M dataset in this example for simplicity, although the dataset is not large enough to require multi-GPU training. However, the functionality demonstrated in this notebook can be easily extended to scale recommender pipelines for larger datasets in the same way. Tools- [Horovod](https://horovod.readthedocs.io/en/stable/) is a distributed deep learning framework that provides tools for multi-GPU optimization.- The [NVIDIA Collective Communication Library (NCCL)](https://developer.nvidia.com/nccl) provides the underlying GPU-based implementations of the [allgather](https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/usage/operations.html#allgather) and [allreduce](https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/usage/operations.html#allreduce) cross-GPU communication operations. Download and ConvertFirst, we will download and convert the dataset to Parquet. This section is based on [01-Download-Convert.ipynb](../getting-started-movielens/01-Download-Convert.ipynb). Download
###Code
# External dependencies
import os
import pathlib
import cudf # cuDF is an implementation of Pandas-like Dataframe on GPU
from nvtabular.utils import download_file
from sklearn.model_selection import train_test_split
INPUT_DATA_DIR = os.environ.get(
"INPUT_DATA_DIR", "~/nvt-examples/multigpu-movielens/data/"
)
BASE_DIR = pathlib.Path(INPUT_DATA_DIR).expanduser()
zip_path = pathlib.Path(BASE_DIR, "ml-25m.zip")
download_file(
"http://files.grouplens.org/datasets/movielens/ml-25m.zip", zip_path, redownload=False
)
###Output
downloading ml-25m.zip: 262MB [00:13, 19.4MB/s]
unzipping files: 100%|██████████| 8/8 [00:04<00:00, 1.92files/s]
###Markdown
Convert
###Code
movies = cudf.read_csv(pathlib.Path(BASE_DIR, "ml-25m", "movies.csv"))
movies["genres"] = movies["genres"].str.split("|")
movies = movies.drop("title", axis=1)
movies.to_parquet(pathlib.Path(BASE_DIR, "ml-25m", "movies_converted.parquet"))
###Output
_____no_output_____
###Markdown
Split into train and validation datasets
###Code
ratings = cudf.read_csv(pathlib.Path(BASE_DIR, "ml-25m", "ratings.csv"))
ratings = ratings.drop("timestamp", axis=1)
train, valid = train_test_split(ratings, test_size=0.2, random_state=42)
train.to_parquet(pathlib.Path(BASE_DIR, "train.parquet"))
valid.to_parquet(pathlib.Path(BASE_DIR, "valid.parquet"))
###Output
_____no_output_____
###Markdown
ETL with NVTabularWe finished downloading and converting the dataset. We will preprocess and engineer features with NVTabular on multiple GPUs. You can read more:- about NVTabular's features and API in [getting-started-movielens/02-ETL-with-NVTabular.ipynb](../getting-started-movielens/02-ETL-with-NVTabular.ipynb).- about scaling NVTabular ETL to multiple GPUs in [scaling-criteo/02-ETL-with-NVTabular.ipynb](../scaling-criteo/02-ETL-with-NVTabular.ipynb). Deploy a Distributed-Dask ClusterThis section is based on [scaling-criteo/02-ETL-with-NVTabular.ipynb](../scaling-criteo/02-ETL-with-NVTabular.ipynb) and [multi-gpu-toy-example/multi-gpu_dask.ipynb](../multi-gpu-toy-example/multi-gpu_dask.ipynb).
###Code
# Standard Libraries
import shutil
# External Dependencies
import cupy as cp
import cudf
import dask_cudf
from dask_cuda import LocalCUDACluster
from dask.distributed import Client
from dask.utils import parse_bytes
from dask.delayed import delayed
import rmm
# NVTabular
import nvtabular as nvt
import nvtabular.ops as ops
from nvtabular.io import Shuffle
from nvtabular.utils import device_mem_size
# define some information about where to get our data
input_path = pathlib.Path(BASE_DIR, "converted", "movielens")
dask_workdir = pathlib.Path(BASE_DIR, "test_dask", "workdir")
output_path = pathlib.Path(BASE_DIR, "test_dask", "output")
stats_path = pathlib.Path(BASE_DIR, "test_dask", "stats")
# Make sure we have a clean worker space for Dask
if pathlib.Path.is_dir(dask_workdir):
shutil.rmtree(dask_workdir)
dask_workdir.mkdir(parents=True)
# Make sure we have a clean stats space for Dask
if pathlib.Path.is_dir(stats_path):
shutil.rmtree(stats_path)
stats_path.mkdir(parents=True)
# Make sure we have a clean output path
if pathlib.Path.is_dir(output_path):
shutil.rmtree(output_path)
output_path.mkdir(parents=True)
# Get device memory capacity
capacity = device_mem_size(kind="total")
# Deploy a Single-Machine Multi-GPU Cluster
protocol = "tcp" # "tcp" or "ucx"
visible_devices = "0,1" # Delect devices to place workers
device_spill_frac = 0.5 # Spill GPU-Worker memory to host at this limit.
# Reduce if spilling fails to prevent
# device memory errors.
cluster = None # (Optional) Specify existing scheduler port
if cluster is None:
cluster = LocalCUDACluster(
protocol=protocol,
CUDA_VISIBLE_DEVICES=visible_devices,
local_directory=dask_workdir,
device_memory_limit=capacity * device_spill_frac,
)
# Create the distributed client
client = Client(cluster)
client
# Initialize RMM pool on ALL workers
def _rmm_pool():
rmm.reinitialize(
pool_allocator=True,
initial_pool_size=None, # Use default size
)
client.run(_rmm_pool)
###Output
_____no_output_____
###Markdown
Defining our Preprocessing PipelineThis subsection is based on [getting-started-movielens/02-ETL-with-NVTabular.ipynb](../getting-started-movielens/02-ETL-with-NVTabular.ipynb). The only difference is that we initialize the NVTabular workflow using the LocalCUDACluster client with `nvt.Workflow(output, client=client)`.
###Code
movies = cudf.read_parquet(pathlib.Path(BASE_DIR, "ml-25m", "movies_converted.parquet"))
joined = ["userId", "movieId"] >> nvt.ops.JoinExternal(movies, on=["movieId"])
cat_features = joined >> nvt.ops.Categorify()
ratings = nvt.ColumnGroup(["rating"]) >> (lambda col: (col > 3).astype("int8"))
output = cat_features + ratings
# USE client in NVTabular workflow
workflow = nvt.Workflow(output, client=client)
!rm -rf $BASE_DIR/train
!rm -rf $BASE_DIR/valid
train_iter = nvt.Dataset([str(pathlib.Path(BASE_DIR, "train.parquet"))], part_size="100MB")
valid_iter = nvt.Dataset([str(pathlib.Path(BASE_DIR, "valid.parquet"))], part_size="100MB")
workflow.fit(train_iter)
workflow.save(pathlib.Path(BASE_DIR, "workflow"))
shuffle = Shuffle.PER_WORKER # Shuffle algorithm
out_files_per_proc = 4 # Number of output files per worker
workflow.transform(train_iter).to_parquet(
output_path=pathlib.Path(BASE_DIR, "train"),
shuffle=shuffle,
out_files_per_proc=out_files_per_proc,
)
workflow.transform(valid_iter).to_parquet(
output_path=pathlib.Path(BASE_DIR, "valid"),
shuffle=shuffle,
out_files_per_proc=out_files_per_proc,
)
client.shutdown()
cluster.close()
###Output
/usr/local/lib/python3.8/dist-packages/distributed/worker.py:3560: UserWarning: Large object of size 1.90 MiB detected in task graph:
("('read-parquet-d36dd514a8adc53a9a91115c9be1d852' ... 1115c9be1d852')
Consider scattering large objects ahead of time
with client.scatter to reduce scheduler burden and
keep data on workers
future = client.submit(func, big_data) # bad
big_future = client.scatter(big_data) # good
future = client.submit(func, big_future) # good
warnings.warn(
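The Dask `UserWarning` above is harmless at this dataset size: it appears because the small `movies` frame is embedded directly in the task graph. If it mattered, the generic Dask remedy is the one the warning itself suggests — scatter the object to the workers once and pass the resulting future to tasks. A minimal, hypothetical sketch of that pattern is shown below (it is generic Dask usage, not specific to NVTabular's `JoinExternal`, and it would have to run earlier in the cell, before `client.shutdown()` is called).

```python
# Generic Dask pattern from the warning (sketch; `client` and `movies` are the
# objects defined earlier in this notebook).
big_future = client.scatter(movies)                      # ship the frame to the workers once
n_rows = client.submit(lambda df: len(df), big_future)   # tasks receive the future, not the data
print(n_rows.result())
```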
###Markdown
Training with TensorFlow on multiple GPUsIn this section, we will train a TensorFlow model with multi-GPU support. In the NVTabular v0.5 release, we added multi-GPU support for NVTabular dataloaders. We will modify the [getting-started-movielens/03-Training-with-TF.ipynb](../getting-started-movielens/03-Training-with-TF.ipynb) to use multiple GPUs. Please review that notebook if you have questions about the general functionality of the NVTabular dataloaders or the neural network architecture. NVTabular dataloader for TensorFlowWe’ve identified that the dataloader is one bottleneck in deep learning recommender systems when training pipelines with TensorFlow. The normal TensorFlow dataloaders cannot prepare the next training batches fast enough and therefore the GPU is not fully utilized. We developed a highly customized tabular dataloader for accelerating existing pipelines in TensorFlow. In our experiments, we see a 9x speed-up of the same training workflow with the NVTabular dataloader. The NVTabular dataloader’s features are:- removing the bottleneck of item-by-item dataloading- enabling larger-than-memory datasets by streaming from disk- reading data directly into GPU memory and removing CPU-GPU communication- preparing batches asynchronously on the GPU to avoid CPU-GPU communication- supporting the commonly used .parquet format- easy integration into existing TensorFlow pipelines by using a similar API - works with tf.keras models- **supporting multi-GPU training with Horovod**You can find more information on the dataloaders in our [blogpost](https://medium.com/nvidia-merlin/training-deep-learning-based-recommender-systems-9x-faster-with-tensorflow-cc5a2572ea49). Using Horovod with TensorFlow and NVTabularThe training script below is based on [getting-started-movielens/03-Training-with-TF.ipynb](../getting-started-movielens/03-Training-with-TF.ipynb), with a few important changes:- We provide several additional parameters to the `KerasSequenceLoader` class, including the total number of workers `hvd.size()`, the current worker's id number `hvd.rank()`, and a function for generating random seeds `seed_fn()`. ```python train_dataset_tf = KerasSequenceLoader( ... global_size=hvd.size(), global_rank=hvd.rank(), seed_fn=seed_fn, )```- The seed function uses Horovod to collectively generate a random seed that's shared by all workers so that they can each shuffle the dataset in a consistent way and select partitions to work on without overlap. The seed function is called by the dataloader during the shuffling process at the beginning of each epoch:```python def seed_fn(): min_int, max_int = tf.int32.limits max_rand = max_int // hvd.size() Generate a seed fragment on each worker seed_fragment = cupy.random.randint(0, max_rand).get() Aggregate seed fragments from all Horovod workers seed_tensor = tf.constant(seed_fragment) reduced_seed = hvd.allreduce(seed_tensor, name="shuffle_seed", op=hvd.mpi_ops.Sum) return reduced_seed % max_rand```- We wrap the TensorFlow optimizer with Horovod's `DistributedOptimizer` class and scale the learning rate by the number of workers:```python opt = tf.keras.optimizers.SGD(0.01 * hvd.size()) opt = hvd.DistributedOptimizer(opt)```- We wrap the TensorFlow gradient tape with Horovod's `DistributedGradientTape` class:```python with tf.GradientTape() as tape: ... 
tape = hvd.DistributedGradientTape(tape, sparse_as_dense=True)```- After the first batch, we broadcast the model and optimizer parameters to all workers with Horovod:```python Note: broadcast should be done after the first gradient step to ensure optimizer initialization. if first_batch: hvd.broadcast_variables(model.variables, root_rank=0) hvd.broadcast_variables(opt.variables(), root_rank=0)```- We only save checkpoints from the first worker to avoid multiple workers trying to write to the same files:```python if hvd.rank() == 0: checkpoint.save(checkpoint_dir)```The rest of the script is the same as the MovieLens example in [getting-started-movielens/03-Training-with-TF.ipynb](../getting-started-movielens/03-Training-with-TF.ipynb). In order to run it with Horovod, we first need to write it to a file.
###Code
%%writefile './tf_trainer.py'
# External dependencies
import argparse
import glob
import os
import cupy
# we can control how much memory to give tensorflow with this environment variable
# IMPORTANT: make sure you do this before you initialize TF's runtime, otherwise
# TF will have claimed all free GPU memory
os.environ["TF_MEMORY_ALLOCATION"] = "0.3" # fraction of free memory
import nvtabular as nvt # noqa: E402 isort:skip
from nvtabular.framework_utils.tensorflow import layers # noqa: E402 isort:skip
from nvtabular.loader.tensorflow import KerasSequenceLoader # noqa: E402 isort:skip
import tensorflow as tf # noqa: E402 isort:skip
import horovod.tensorflow as hvd # noqa: E402 isort:skip
parser = argparse.ArgumentParser(description="Process some integers.")
parser.add_argument("--dir_in", default=None, help="Input directory")
parser.add_argument("--batch_size", default=None, help="batch size")
parser.add_argument("--cats", default=None, help="categorical columns")
parser.add_argument("--cats_mh", default=None, help="categorical multihot columns")
parser.add_argument("--conts", default=None, help="continuous columns")
parser.add_argument("--labels", default=None, help="continuous columns")
args = parser.parse_args()
BASE_DIR = args.dir_in or "./data/"
BATCH_SIZE = int(args.batch_size or 16384) # Batch Size
CATEGORICAL_COLUMNS = args.cats or ["movieId", "userId"] # Single-hot
CATEGORICAL_MH_COLUMNS = args.cats_mh or ["genres"] # Multi-hot
NUMERIC_COLUMNS = args.conts or []
TRAIN_PATHS = sorted(
glob.glob(os.path.join(BASE_DIR, "train/*.parquet"))
) # Output from ETL-with-NVTabular
hvd.init()
# Seed with system randomness (or a static seed)
cupy.random.seed(None)
def seed_fn():
"""
Generate consistent dataloader shuffle seeds across workers
    Reseeds each worker's dataloader each epoch to get a fresh shuffle
that's consistent across workers.
"""
min_int, max_int = tf.int32.limits
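    # Dividing the max by hvd.size() keeps the sum of all workers' seed fragments within int32 range.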
max_rand = max_int // hvd.size()
# Generate a seed fragment on each worker
seed_fragment = cupy.random.randint(0, max_rand).get()
# Aggregate seed fragments from all Horovod workers
seed_tensor = tf.constant(seed_fragment)
reduced_seed = hvd.allreduce(seed_tensor, name="shuffle_seed", op=hvd.mpi_ops.Sum)
return reduced_seed % max_rand
proc = nvt.Workflow.load(os.path.join(BASE_DIR, "workflow/"))
EMBEDDING_TABLE_SHAPES, MH_EMBEDDING_TABLE_SHAPES = nvt.ops.get_embedding_sizes(proc)
EMBEDDING_TABLE_SHAPES.update(MH_EMBEDDING_TABLE_SHAPES)
train_dataset_tf = KerasSequenceLoader(
TRAIN_PATHS, # you could also use a glob pattern
batch_size=BATCH_SIZE,
label_names=["rating"],
cat_names=CATEGORICAL_COLUMNS + CATEGORICAL_MH_COLUMNS,
cont_names=NUMERIC_COLUMNS,
engine="parquet",
shuffle=True,
buffer_size=0.06, # how many batches to load at once
parts_per_chunk=1,
global_size=hvd.size(),
global_rank=hvd.rank(),
seed_fn=seed_fn,
)
inputs = {} # tf.keras.Input placeholders for each feature to be used
emb_layers = [] # output of all embedding layers, which will be concatenated
for col in CATEGORICAL_COLUMNS:
inputs[col] = tf.keras.Input(name=col, dtype=tf.int32, shape=(1,))
# Note that we need two input tensors for multi-hot categorical features
for col in CATEGORICAL_MH_COLUMNS:
inputs[col] = \
(tf.keras.Input(name=f"{col}__values", dtype=tf.int64, shape=(1,)),
tf.keras.Input(name=f"{col}__nnzs", dtype=tf.int64, shape=(1,)))
for col in CATEGORICAL_COLUMNS + CATEGORICAL_MH_COLUMNS:
emb_layers.append(
tf.feature_column.embedding_column(
tf.feature_column.categorical_column_with_identity(
col, EMBEDDING_TABLE_SHAPES[col][0]
), # Input dimension (vocab size)
EMBEDDING_TABLE_SHAPES[col][1], # Embedding output dimension
)
)
emb_layer = layers.DenseFeatures(emb_layers)
x_emb_output = emb_layer(inputs)
x = tf.keras.layers.Dense(128, activation="relu")(x_emb_output)
x = tf.keras.layers.Dense(128, activation="relu")(x)
x = tf.keras.layers.Dense(128, activation="relu")(x)
x = tf.keras.layers.Dense(1, activation="sigmoid")(x)
model = tf.keras.Model(inputs=inputs, outputs=x)
loss = tf.losses.BinaryCrossentropy()
opt = tf.keras.optimizers.SGD(0.01 * hvd.size())
opt = hvd.DistributedOptimizer(opt)
checkpoint_dir = "./checkpoints"
checkpoint = tf.train.Checkpoint(model=model, optimizer=opt)
@tf.function(experimental_relax_shapes=True)
def training_step(examples, labels, first_batch):
with tf.GradientTape() as tape:
probs = model(examples, training=True)
loss_value = loss(labels, probs)
# Horovod: add Horovod Distributed GradientTape.
tape = hvd.DistributedGradientTape(tape, sparse_as_dense=True)
grads = tape.gradient(loss_value, model.trainable_variables)
opt.apply_gradients(zip(grads, model.trainable_variables))
# Horovod: broadcast initial variable states from rank 0 to all other processes.
# This is necessary to ensure consistent initialization of all workers when
# training is started with random weights or restored from a checkpoint.
#
# Note: broadcast should be done after the first gradient step to ensure optimizer
# initialization.
if first_batch:
hvd.broadcast_variables(model.variables, root_rank=0)
hvd.broadcast_variables(opt.variables(), root_rank=0)
return loss_value
# Horovod: adjust number of steps based on number of GPUs.
for batch, (examples, labels) in enumerate(train_dataset_tf):
loss_value = training_step(examples, labels, batch == 0)
if batch % 100 == 0 and hvd.local_rank() == 0:
print("Step #%d\tLoss: %.6f" % (batch, loss_value))
hvd.join()
# Horovod: save checkpoints only on worker 0 to prevent other workers from
# corrupting it.
if hvd.rank() == 0:
checkpoint.save(checkpoint_dir)
###Output
Overwriting ./tf_trainer.py
###Markdown
We'll also need a small wrapper script to check environment variables set by the Horovod runner to see which rank we'll be assigned, in order to set CUDA_VISIBLE_DEVICES properly for each worker:
###Code
%%writefile './hvd_wrapper.sh'
#!/bin/bash
# Get local process ID from OpenMPI or alternatively from SLURM
if [ -z "${CUDA_VISIBLE_DEVICES:-}" ]; then
if [ -n "${OMPI_COMM_WORLD_LOCAL_RANK:-}" ]; then
LOCAL_RANK="${OMPI_COMM_WORLD_LOCAL_RANK}"
elif [ -n "${SLURM_LOCALID:-}" ]; then
LOCAL_RANK="${SLURM_LOCALID}"
fi
export CUDA_VISIBLE_DEVICES=${LOCAL_RANK}
fi
exec "$@"
###Output
Overwriting ./hvd_wrapper.sh
###Markdown
OpenMPI and Slurm are tools for running distributed computing jobs. In this example, we’re using OpenMPI, but depending on the environment you run distributed training jobs in, you may need to check slightly different environment variables to find the total number of workers (global size) and each process’s worker number (global rank).Why do we have to check environment variables instead of using `hvd.rank()` and `hvd.local_rank()`? NVTabular does some GPU configuration when imported and needs to be imported before Horovod to avoid conflicts. We need to set GPU visibility before NVTabular is imported (when Horovod isn’t yet available) so that multiple processes don’t each try to configure all the GPUs. As a workaround, we “cheat” and peek at environment variables set by horovodrun to decide which GPU each process should use.
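For contrast, the standard Horovod/TensorFlow pattern pins a device with `hvd.local_rank()` after `hvd.init()`. The sketch below is shown only to illustrate why the environment-variable workaround is needed here; it cannot be used in this notebook because, as explained above, NVTabular must be imported first and configures the GPUs at import time.

```python
# Standard Horovod + TensorFlow GPU pinning (for contrast only; not usable in
# this notebook because NVTabular configures GPUs when it is imported).
import horovod.tensorflow as hvd
import tensorflow as tf

hvd.init()
gpus = tf.config.list_physical_devices("GPU")
if gpus:
    # Make only this worker's GPU visible to TensorFlow.
    tf.config.set_visible_devices(gpus[hvd.local_rank()], "GPU")
```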
###Code
!horovodrun -np 2 sh hvd_wrapper.sh python tf_trainer.py --dir_in $BASE_DIR --batch_size 16384
###Output
2021-06-04 16:39:06.000313: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,0]<stderr>:2021-06-04 16:39:08.979997: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,1]<stderr>:2021-06-04 16:39:09.064191: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,0]<stderr>:2021-06-04 16:39:10.138200: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
[1,0]<stderr>:2021-06-04 16:39:10.138376: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcuda.so.1
[1,0]<stderr>:2021-06-04 16:39:10.139777: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Found device 0 with properties:
[1,0]<stderr>:pciBusID: 0000:0b:00.0 name: GeForce GTX 1080 Ti computeCapability: 6.1
[1,0]<stderr>:coreClock: 1.582GHz coreCount: 28 deviceMemorySize: 10.91GiB deviceMemoryBandwidth: 451.17GiB/s
[1,0]<stderr>:2021-06-04 16:39:10.139823: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,0]<stderr>:2021-06-04 16:39:10.139907: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
[1,0]<stderr>:2021-06-04 16:39:10.139949: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
[1,0]<stderr>:2021-06-04 16:39:10.139990: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
[1,0]<stderr>:2021-06-04 16:39:10.140029: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
[1,0]<stderr>:2021-06-04 16:39:10.140084: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.11
[1,0]<stderr>:2021-06-04 16:39:10.140123: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
[1,0]<stderr>:2021-06-04 16:39:10.140169: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
[1,0]<stderr>:2021-06-04 16:39:10.144021: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1888] Adding visible gpu devices: 0
[1,1]<stderr>:2021-06-04 16:39:10.367414: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
[1,1]<stderr>:2021-06-04 16:39:10.367496: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcuda.so.1
[1,1]<stderr>:2021-06-04 16:39:10.368324: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Found device 0 with properties:
[1,1]<stderr>:pciBusID: 0000:42:00.0 name: GeForce GTX 1080 Ti computeCapability: 6.1
[1,1]<stderr>:coreClock: 1.582GHz coreCount: 28 deviceMemorySize: 10.92GiB deviceMemoryBandwidth: 451.17GiB/s
[1,1]<stderr>:2021-06-04 16:39:10.368347: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,1]<stderr>:2021-06-04 16:39:10.368396: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
[1,1]<stderr>:2021-06-04 16:39:10.368424: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
[1,1]<stderr>:2021-06-04 16:39:10.368451: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
[1,1]<stderr>:2021-06-04 16:39:10.368475: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
[1,1]<stderr>:2021-06-04 16:39:10.368512: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.11
[1,1]<stderr>:2021-06-04 16:39:10.368537: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
[1,1]<stderr>:2021-06-04 16:39:10.368573: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
[1,1]<stderr>:2021-06-04 16:39:10.369841: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1888] Adding visible gpu devices: 0
[1,1]<stderr>:2021-06-04 16:39:11.730033: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
[1,1]<stderr>:2021-06-04 16:39:11.730907: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Found device 0 with properties:
[1,1]<stderr>:pciBusID: 0000:42:00.0 name: GeForce GTX 1080 Ti computeCapability: 6.1
[1,1]<stderr>:coreClock: 1.582GHz coreCount: 28 deviceMemorySize: 10.92GiB deviceMemoryBandwidth: 451.17GiB/s
[1,1]<stderr>:2021-06-04 16:39:11.730990: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,1]<stderr>:2021-06-04 16:39:11.731005: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
[1,1]<stderr>:2021-06-04 16:39:11.731018: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
[1,1]<stderr>:2021-06-04 16:39:11.731029: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
[1,1]<stderr>:2021-06-04 16:39:11.731038: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
[1,1]<stderr>:2021-06-04 16:39:11.731049: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.11
[1,1]<stderr>:2021-06-04 16:39:11.731059: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
[1,1]<stderr>:2021-06-04 16:39:11.731078: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
[1,1]<stderr>:2021-06-04 16:39:11.732312: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1888] Adding visible gpu devices: 0
[1,1]<stderr>:2021-06-04 16:39:11.732350: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,1]<stderr>:2021-06-04 16:39:11.732473: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1287] Device interconnect StreamExecutor with strength 1 edge matrix:
[1,1]<stderr>:2021-06-04 16:39:11.732487: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1293] 0
[1,1]<stderr>:2021-06-04 16:39:11.732493: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1306] 0: N
[1,1]<stderr>:2021-06-04 16:39:11.734431: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3352 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:42:00.0, compute capability: 6.1)
[1,0]<stderr>:2021-06-04 16:39:11.821346: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
[1,0]<stderr>:2021-06-04 16:39:11.822270: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Found device 0 with properties:
[1,0]<stderr>:pciBusID: 0000:0b:00.0 name: GeForce GTX 1080 Ti computeCapability: 6.1
[1,0]<stderr>:coreClock: 1.582GHz coreCount: 28 deviceMemorySize: 10.91GiB deviceMemoryBandwidth: 451.17GiB/s
[1,0]<stderr>:2021-06-04 16:39:11.822360: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,0]<stderr>:2021-06-04 16:39:11.822376: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
[1,0]<stderr>:2021-06-04 16:39:11.822389: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
[1,0]<stderr>:2021-06-04 16:39:11.822400: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
[1,0]<stderr>:2021-06-04 16:39:11.822411: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
[1,0]<stderr>:2021-06-04 16:39:11.822425: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.11
[1,0]<stderr>:2021-06-04 16:39:11.822434: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
[1,0]<stderr>:2021-06-04 16:39:11.822454: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
[1,0]<stderr>:2021-06-04 16:39:11.823684: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1888] Adding visible gpu devices: 0
[1,0]<stderr>:2021-06-04 16:39:11.823731: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,0]<stderr>:2021-06-04 16:39:11.823868: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1287] Device interconnect StreamExecutor with strength 1 edge matrix:
[1,0]<stderr>:2021-06-04 16:39:11.823881: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1293] 0
[1,0]<stderr>:2021-06-04 16:39:11.823888: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1306] 0: N
[1,0]<stderr>:2021-06-04 16:39:11.825784: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3352 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:0b:00.0, compute capability: 6.1)
[1,0]<stderr>:2021-06-04 16:39:17.634485: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:116] None of the MLIR optimization passes are enabled (registered 2)
[1,0]<stderr>:2021-06-04 16:39:17.668915: I tensorflow/core/platform/profile_utils/cpu_utils.cc:112] CPU Frequency: 2993950000 Hz
[1,1]<stderr>:2021-06-04 16:39:17.694128: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:116] None of the MLIR optimization passes are enabled (registered 2)
[1,1]<stderr>:2021-06-04 16:39:17.703326: I tensorflow/core/platform/profile_utils/cpu_utils.cc:112] CPU Frequency: 2993950000 Hz
[1,0]<stderr>:2021-06-04 16:39:17.780825: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
[1,1]<stderr>:2021-06-04 16:39:17.810644: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
[1,0]<stderr>:2021-06-04 16:39:17.984966: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
[1,1]<stderr>:2021-06-04 16:39:18.012113: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
[1,0]<stdout>:Step #0 Loss: 0.695094
[1,0]<stdout>:Step #100 Loss: 0.669580
[1,0]<stdout>:Step #200 Loss: 0.661098
[1,0]<stdout>:Step #300 Loss: 0.660680
[1,0]<stdout>:Step #400 Loss: 0.658633
[1,0]<stdout>:Step #500 Loss: 0.660251
[1,0]<stdout>:Step #600 Loss: 0.657047
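As a quick sanity check after training, one could inspect the checkpoint written by rank 0. This is a hypothetical follow-up, not part of the original notebook; it assumes the run above completed and left its files under the `./checkpoints` prefix in the current working directory.

```python
# Sketch: list a few variables from the checkpoint saved by worker 0.
import tensorflow as tf

latest = tf.train.latest_checkpoint(".")   # prefix used above was "./checkpoints"
if latest:
    reader = tf.train.load_checkpoint(latest)
    print(sorted(reader.get_variable_to_shape_map())[:5])
```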
###Markdown
Multi-GPU with MovieLens: ETL and Training OverviewNVIDIA Merlin is an open source framework to accelerate and scale end-to-end recommender system pipelines on GPU. In this notebook, we use NVTabular, Merlin’s ETL component, to scale feature engineering and pre-processing to multiple GPUs and then perform data-parallel distributed training of a neural network on multiple GPUs with TensorFlow, [Horovod](https://horovod.readthedocs.io/en/stable/), and [NCCL](https://developer.nvidia.com/nccl).The prerequisites for this notebook are familiarity with NVTabular and its API:- You can read more about NVTabular, its API and specialized dataloaders in the [Getting Started with Movielens notebooks](../getting-started-movielens).- You can read more about scaling NVTabular ETL in the [Scaling Criteo notebooks](../scaling-criteo).**In this notebook, we will focus only on the new information related to multi-GPU training, so please check out the other notebooks first (if you haven’t already).** Learning objectivesIn this notebook, we learn how to scale ETL and deep learning training to multiple GPUs:- Learn to use datasets larger than GPU/host memory for ETL and training- Use multiple GPUs or multiple nodes for ETL with NVTabular- Use the NVTabular dataloader to accelerate TensorFlow pipelines- Scale TensorFlow training with Horovod DatasetIn this notebook, we use the [MovieLens25M](https://grouplens.org/datasets/movielens/25m/) dataset. It is popular for recommender systems and is used in academic publications. The dataset contains 25M movie ratings for 62,000 movies given by 162,000 users. Many projects use only the user/item/rating information of MovieLens, but the original dataset provides metadata for the movies as well.Note: We are using the MovieLens 25M dataset in this example for simplicity, although the dataset is not large enough to require multi-GPU training. However, the functionality demonstrated in this notebook can be easily extended to scale recommender pipelines for larger datasets in the same way. Tools- [Horovod](https://horovod.readthedocs.io/en/stable/) is a distributed deep learning framework that provides tools for multi-GPU optimization.- The [NVIDIA Collective Communication Library (NCCL)](https://developer.nvidia.com/nccl) provides the underlying GPU-based implementations of the [allgather](https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/usage/operations.html#allgather) and [allreduce](https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/usage/operations.html#allreduce) cross-GPU communication operations. Download and ConvertFirst, we will download and convert the dataset to Parquet. This section is based on [01-Download-Convert.ipynb](../getting-started-movielens/01-Download-Convert.ipynb). Download
###Code
# External dependencies
import os
import pathlib
import cudf # cuDF is an implementation of Pandas-like Dataframe on GPU
from nvtabular.utils import download_file
from sklearn.model_selection import train_test_split
INPUT_DATA_DIR = os.environ.get(
"INPUT_DATA_DIR", "~/nvt-examples/multigpu-movielens/data/"
)
BASE_DIR = pathlib.Path(INPUT_DATA_DIR).expanduser()
zip_path = pathlib.Path(BASE_DIR, "ml-25m.zip")
download_file(
"http://files.grouplens.org/datasets/movielens/ml-25m.zip", zip_path, redownload=False
)
###Output
downloading ml-25m.zip: 262MB [00:13, 19.4MB/s]
unzipping files: 100%|██████████| 8/8 [00:04<00:00, 1.92files/s]
###Markdown
Convert
###Code
movies = cudf.read_csv(pathlib.Path(BASE_DIR, "ml-25m", "movies.csv"))
movies["genres"] = movies["genres"].str.split("|")
movies = movies.drop("title", axis=1)
movies.to_parquet(pathlib.Path(BASE_DIR, "ml-25m", "movies_converted.parquet"))
###Output
_____no_output_____
###Markdown
Split into train and validation datasets
###Code
ratings = cudf.read_csv(pathlib.Path(BASE_DIR, "ml-25m", "ratings.csv"))
ratings = ratings.drop("timestamp", axis=1)
train, valid = train_test_split(ratings, test_size=0.2, random_state=42)
train.to_parquet(pathlib.Path(BASE_DIR, "train.parquet"))
valid.to_parquet(pathlib.Path(BASE_DIR, "valid.parquet"))
###Output
_____no_output_____
###Markdown
ETL with NVTabularWe finished downloading and converting the dataset. We will preprocess and engineer features with NVTabular on multiple GPUs. You can read more:- about NVTabular's features and API in [getting-started-movielens/02-ETL-with-NVTabular.ipynb](../getting-started-movielens/02-ETL-with-NVTabular.ipynb).- about scaling NVTabular ETL to multiple GPUs in [scaling-criteo/02-ETL-with-NVTabular.ipynb](../scaling-criteo/02-ETL-with-NVTabular.ipynb). Deploy a Distributed-Dask ClusterThis section is based on [scaling-criteo/02-ETL-with-NVTabular.ipynb](../scaling-criteo/02-ETL-with-NVTabular.ipynb) and [multi-gpu-toy-example/multi-gpu_dask.ipynb](../multi-gpu-toy-example/multi-gpu_dask.ipynb).
###Code
# Standard Libraries
import shutil
# External Dependencies
import cupy as cp
import cudf
import dask_cudf
from dask_cuda import LocalCUDACluster
from dask.distributed import Client
from dask.utils import parse_bytes
from dask.delayed import delayed
import rmm
# NVTabular
import nvtabular as nvt
import nvtabular.ops as ops
from nvtabular.io import Shuffle
from nvtabular.utils import device_mem_size
# define some information about where to get our data
input_path = pathlib.Path(BASE_DIR, "converted", "movielens")
dask_workdir = pathlib.Path(BASE_DIR, "test_dask", "workdir")
output_path = pathlib.Path(BASE_DIR, "test_dask", "output")
stats_path = pathlib.Path(BASE_DIR, "test_dask", "stats")
# Make sure we have a clean worker space for Dask
if pathlib.Path.is_dir(dask_workdir):
shutil.rmtree(dask_workdir)
dask_workdir.mkdir(parents=True)
# Make sure we have a clean stats space for Dask
if pathlib.Path.is_dir(stats_path):
shutil.rmtree(stats_path)
stats_path.mkdir(parents=True)
# Make sure we have a clean output path
if pathlib.Path.is_dir(output_path):
shutil.rmtree(output_path)
output_path.mkdir(parents=True)
# Get device memory capacity
capacity = device_mem_size(kind="total")
# Deploy a Single-Machine Multi-GPU Cluster
protocol = "tcp" # "tcp" or "ucx"
visible_devices = "0,1" # Select devices to place workers
device_spill_frac = 0.5 # Spill GPU-Worker memory to host at this limit.
# Reduce if spilling fails to prevent
# device memory errors.
cluster = None # (Optional) Specify existing scheduler port
if cluster is None:
cluster = LocalCUDACluster(
protocol=protocol,
CUDA_VISIBLE_DEVICES=visible_devices,
local_directory=dask_workdir,
device_memory_limit=capacity * device_spill_frac,
)
# Create the distributed client
client = Client(cluster)
client
# Initialize RMM pool on ALL workers
def _rmm_pool():
rmm.reinitialize(
pool_allocator=True,
initial_pool_size=None, # Use default size
)
client.run(_rmm_pool)
###Output
_____no_output_____
###Markdown
Defining our Preprocessing PipelineThis subsection is based on [getting-started-movielens/02-ETL-with-NVTabular.ipynb](../getting-started-movielens/02-ETL-with-NVTabular.ipynb). The only difference is that we initialize the NVTabular workflow using the LocalCUDACluster client with `nvt.Workflow(output, client=client)`.
###Code
movies = cudf.read_parquet(pathlib.Path(BASE_DIR, "ml-25m", "movies_converted.parquet"))
joined = ["userId", "movieId"] >> nvt.ops.JoinExternal(movies, on=["movieId"])
cat_features = joined >> nvt.ops.Categorify()
ratings = nvt.ColumnGroup(["rating"]) >> (lambda col: (col > 3).astype("int8"))
output = cat_features + ratings
# USE client in NVTabular workflow
workflow = nvt.Workflow(output, client=client)
!rm -rf $BASE_DIR/train
!rm -rf $BASE_DIR/valid
train_iter = nvt.Dataset([str(pathlib.Path(BASE_DIR, "train.parquet"))], part_size="100MB")
valid_iter = nvt.Dataset([str(pathlib.Path(BASE_DIR, "valid.parquet"))], part_size="100MB")
workflow.fit(train_iter)
workflow.save(pathlib.Path(BASE_DIR, "workflow"))
shuffle = Shuffle.PER_WORKER # Shuffle algorithm
out_files_per_proc = 4 # Number of output files per worker
workflow.transform(train_iter).to_parquet(
output_path=pathlib.Path(BASE_DIR, "train"),
shuffle=shuffle,
out_files_per_proc=out_files_per_proc,
)
workflow.transform(valid_iter).to_parquet(
output_path=pathlib.Path(BASE_DIR, "valid"),
shuffle=shuffle,
out_files_per_proc=out_files_per_proc,
)
client.shutdown()
cluster.close()
###Output
/usr/local/lib/python3.8/dist-packages/distributed/worker.py:3560: UserWarning: Large object of size 1.90 MiB detected in task graph:
("('read-parquet-d36dd514a8adc53a9a91115c9be1d852' ... 1115c9be1d852')
Consider scattering large objects ahead of time
with client.scatter to reduce scheduler burden and
keep data on workers
future = client.submit(func, big_data) # bad
big_future = client.scatter(big_data) # good
future = client.submit(func, big_future) # good
warnings.warn(
###Markdown
Training with TensorFlow on multiGPUsIn this section, we will train a TensorFlow model with multi-GPU support. In the NVTabular v0.5 release, we added multi-GPU support for NVTabular dataloaders. We will modify the [getting-started-movielens/03-Training-with-TF.ipynb](../getting-started-movielens/03-Training-with-TF.ipynb) to use multiple GPUs. Please review that notebook, if you have questions about the general functionality of the NVTabular dataloaders or the neural network architecture. NVTabular dataloader for TensorFlowWe’ve identified that the dataloader is one bottleneck in deep learning recommender systems when training pipelines with TensorFlow. The normal TensorFlow dataloaders cannot prepare the next training batches fast enough and therefore, the GPU is not fully utilized. We developed a highly customized tabular dataloader for accelerating existing pipelines in TensorFlow. In our experiments, we see a speed-up by 9x of the same training workflow with NVTabular dataloader. NVTabular dataloader’s features are:- removing bottleneck of item-by-item dataloading- enabling larger than memory dataset by streaming from disk- reading data directly into GPU memory and remove CPU-GPU communication- preparing batch asynchronously in GPU to avoid CPU-GPU communication- supporting commonly used .parquet format- easy integration into existing TensorFlow pipelines by using similar API - works with tf.keras models- **supporting multi-GPU training with Horovod**You can find more information on the dataloaders in our [blogpost](https://medium.com/nvidia-merlin/training-deep-learning-based-recommender-systems-9x-faster-with-tensorflow-cc5a2572ea49). Using Horovod with Tensorflow and NVTabularThe training script below is based on [getting-started-movielens/03-Training-with-TF.ipynb](../getting-started-movielens/03-Training-with-TF.ipynb), with a few important changes:- We provide several additional parameters to the `KerasSequenceLoader` class, including the total number of workers `hvd.size()`, the current worker's id number `hvd.rank()`, and a function for generating random seeds `seed_fn()`. ```python train_dataset_tf = KerasSequenceLoader( ... global_size=hvd.size(), global_rank=hvd.rank(), seed_fn=seed_fn, )```- The seed function uses Horovod to collectively generate a random seed that's shared by all workers so that they can each shuffle the dataset in a consistent way and select partitions to work on without overlap. The seed function is called by the dataloader during the shuffling process at the beginning of each epoch:```python def seed_fn(): min_int, max_int = tf.int32.limits max_rand = max_int // hvd.size() Generate a seed fragment on each worker seed_fragment = cupy.random.randint(0, max_rand).get() Aggregate seed fragments from all Horovod workers seed_tensor = tf.constant(seed_fragment) reduced_seed = hvd.allreduce(seed_tensor, name="shuffle_seed", op=hvd.mpi_ops.Sum) return reduced_seed % max_rand```- We wrap the TensorFlow optimizer with Horovod's `DistributedOptimizer` class and scale the learning rate by the number of workers:```python opt = tf.keras.optimizers.SGD(0.01 * hvd.size()) opt = hvd.DistributedOptimizer(opt)```- We wrap the TensorFlow gradient tape with Horovod's `DistributedGradientTape` class:```python with tf.GradientTape() as tape: ... 
tape = hvd.DistributedGradientTape(tape, sparse_as_dense=True)```- After the first batch, we broadcast the model and optimizer parameters to all workers with Horovod:```python Note: broadcast should be done after the first gradient step to ensure optimizer initialization. if first_batch: hvd.broadcast_variables(model.variables, root_rank=0) hvd.broadcast_variables(opt.variables(), root_rank=0)```- We only save checkpoints from the first worker to avoid multiple workers trying to write to the same files:```python if hvd.rank() == 0: checkpoint.save(checkpoint_dir)```The rest of the script is the same as the MovieLens example in [getting-started-movielens/03-Training-with-TF.ipynb](../getting-started-movielens/03-Training-with-TF.ipynb). In order to run it with Horovod, we first need to write it to a file.
###Code
%%writefile './tf_trainer.py'
# External dependencies
import argparse
import glob
import os
import cupy
# we can control how much memory to give tensorflow with this environment variable
# IMPORTANT: make sure you do this before you initialize TF's runtime, otherwise
# TF will have claimed all free GPU memory
os.environ["TF_MEMORY_ALLOCATION"] = "0.3" # fraction of free memory
import nvtabular as nvt # noqa: E402 isort:skip
from nvtabular.framework_utils.tensorflow import layers # noqa: E402 isort:skip
from nvtabular.loader.tensorflow import KerasSequenceLoader # noqa: E402 isort:skip
import tensorflow as tf # noqa: E402 isort:skip
import horovod.tensorflow as hvd # noqa: E402 isort:skip
parser = argparse.ArgumentParser(description="Train a TensorFlow model with the NVTabular dataloader and Horovod.")
parser.add_argument("--dir_in", default=None, help="Input directory")
parser.add_argument("--batch_size", default=None, help="batch size")
parser.add_argument("--cats", default=None, help="categorical columns")
parser.add_argument("--cats_mh", default=None, help="categorical multihot columns")
parser.add_argument("--conts", default=None, help="continuous columns")
parser.add_argument("--labels", default=None, help="label columns")
args = parser.parse_args()
BASE_DIR = args.dir_in or "./data/"
BATCH_SIZE = int(args.batch_size or 16384) # Batch Size
CATEGORICAL_COLUMNS = args.cats or ["movieId", "userId"] # Single-hot
CATEGORICAL_MH_COLUMNS = args.cats_mh or ["genres"] # Multi-hot
NUMERIC_COLUMNS = args.conts or []
TRAIN_PATHS = sorted(
glob.glob(os.path.join(BASE_DIR, "train/*.parquet"))
) # Output from ETL-with-NVTabular
hvd.init()
# Seed with system randomness (or a static seed)
cupy.random.seed(None)
def seed_fn():
"""
Generate consistent dataloader shuffle seeds across workers
Reseeds each worker's dataloader each epoch to get a fresh shuffle
that's consistent across workers.
"""
min_int, max_int = tf.int32.limits
max_rand = max_int // hvd.size()
# Generate a seed fragment on each worker
seed_fragment = cupy.random.randint(0, max_rand).get()
# Aggregate seed fragments from all Horovod workers
seed_tensor = tf.constant(seed_fragment)
reduced_seed = hvd.allreduce(seed_tensor, name="shuffle_seed", op=hvd.mpi_ops.Sum)
return reduced_seed % max_rand
proc = nvt.Workflow.load(os.path.join(BASE_DIR, "workflow/"))
EMBEDDING_TABLE_SHAPES, MH_EMBEDDING_TABLE_SHAPES = nvt.ops.get_embedding_sizes(proc)
EMBEDDING_TABLE_SHAPES.update(MH_EMBEDDING_TABLE_SHAPES)
train_dataset_tf = KerasSequenceLoader(
TRAIN_PATHS, # you could also use a glob pattern
batch_size=BATCH_SIZE,
label_names=["rating"],
cat_names=CATEGORICAL_COLUMNS + CATEGORICAL_MH_COLUMNS,
cont_names=NUMERIC_COLUMNS,
engine="parquet",
shuffle=True,
buffer_size=0.06, # how many batches to load at once
parts_per_chunk=1,
global_size=hvd.size(),
global_rank=hvd.rank(),
seed_fn=seed_fn,
)
inputs = {} # tf.keras.Input placeholders for each feature to be used
emb_layers = [] # output of all embedding layers, which will be concatenated
for col in CATEGORICAL_COLUMNS:
inputs[col] = tf.keras.Input(name=col, dtype=tf.int32, shape=(1,))
# Note that we need two input tensors for multi-hot categorical features
for col in CATEGORICAL_MH_COLUMNS:
inputs[col] = \
(tf.keras.Input(name=f"{col}__values", dtype=tf.int64, shape=(1,)),
tf.keras.Input(name=f"{col}__nnzs", dtype=tf.int64, shape=(1,)))
for col in CATEGORICAL_COLUMNS + CATEGORICAL_MH_COLUMNS:
emb_layers.append(
tf.feature_column.embedding_column(
tf.feature_column.categorical_column_with_identity(
col, EMBEDDING_TABLE_SHAPES[col][0]
), # Input dimension (vocab size)
EMBEDDING_TABLE_SHAPES[col][1], # Embedding output dimension
)
)
emb_layer = layers.DenseFeatures(emb_layers)
x_emb_output = emb_layer(inputs)
x = tf.keras.layers.Dense(128, activation="relu")(x_emb_output)
x = tf.keras.layers.Dense(128, activation="relu")(x)
x = tf.keras.layers.Dense(128, activation="relu")(x)
x = tf.keras.layers.Dense(1, activation="sigmoid")(x)
model = tf.keras.Model(inputs=inputs, outputs=x)
loss = tf.losses.BinaryCrossentropy()
opt = tf.keras.optimizers.SGD(0.01 * hvd.size())
opt = hvd.DistributedOptimizer(opt)
checkpoint_dir = "./checkpoints"
checkpoint = tf.train.Checkpoint(model=model, optimizer=opt)
@tf.function(experimental_relax_shapes=True)
def training_step(examples, labels, first_batch):
with tf.GradientTape() as tape:
probs = model(examples, training=True)
loss_value = loss(labels, probs)
# Horovod: add Horovod Distributed GradientTape.
tape = hvd.DistributedGradientTape(tape, sparse_as_dense=True)
grads = tape.gradient(loss_value, model.trainable_variables)
opt.apply_gradients(zip(grads, model.trainable_variables))
# Horovod: broadcast initial variable states from rank 0 to all other processes.
# This is necessary to ensure consistent initialization of all workers when
# training is started with random weights or restored from a checkpoint.
#
# Note: broadcast should be done after the first gradient step to ensure optimizer
# initialization.
if first_batch:
hvd.broadcast_variables(model.variables, root_rank=0)
hvd.broadcast_variables(opt.variables(), root_rank=0)
return loss_value
# Horovod: adjust number of steps based on number of GPUs.
for batch, (examples, labels) in enumerate(train_dataset_tf):
loss_value = training_step(examples, labels, batch == 0)
if batch % 100 == 0 and hvd.local_rank() == 0:
print("Step #%d\tLoss: %.6f" % (batch, loss_value))
hvd.join()
# Horovod: save checkpoints only on worker 0 to prevent other workers from
# corrupting it.
if hvd.rank() == 0:
checkpoint.save(checkpoint_dir)
###Output
Overwriting ./tf_trainer.py
###Markdown
We'll also need a small wrapper script to check environment variables set by the Horovod runner to see which rank we'll be assigned, in order to set CUDA_VISIBLE_DEVICES properly for each worker:
###Code
%%writefile './hvd_wrapper.sh'
#!/bin/bash
# Get local process ID from OpenMPI or alternatively from SLURM
if [ -z "${CUDA_VISIBLE_DEVICES:-}" ]; then
if [ -n "${OMPI_COMM_WORLD_LOCAL_RANK:-}" ]; then
LOCAL_RANK="${OMPI_COMM_WORLD_LOCAL_RANK}"
elif [ -n "${SLURM_LOCALID:-}" ]; then
LOCAL_RANK="${SLURM_LOCALID}"
fi
export CUDA_VISIBLE_DEVICES=${LOCAL_RANK}
fi
exec "$@"
###Output
Overwriting ./hvd_wrapper.sh
###Markdown
OpenMPI and Slurm are tools for running distributed compute jobs. In this example, we’re using OpenMPI, but depending on the environment you run distributed training jobs in, you may need to check slightly different environment variables to find the total number of workers (global size) and each process’s worker number (global rank). Why do we have to check environment variables instead of using `hvd.rank()` and `hvd.local_rank()`? NVTabular does some GPU configuration when imported and needs to be imported before Horovod to avoid conflicts. We need to set GPU visibility before NVTabular is imported (when Horovod isn’t yet available) so that multiple processes don’t each try to configure all the GPUs. As a workaround, we “cheat” and peek at the environment variables set by horovodrun to decide which GPU each process should use.
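For reference, the same environment-variable peek can be sketched directly in Python; this is a minimal sketch, not part of the original example, and it assumes the launcher exposes the same variables that `hvd_wrapper.sh` above checks:
```python
# Minimal sketch: derive the local rank from launcher-provided environment
# variables and pin this process to one GPU *before* importing NVTabular.
import os

local_rank = (
    os.environ.get("OMPI_COMM_WORLD_LOCAL_RANK")  # set by OpenMPI / horovodrun
    or os.environ.get("SLURM_LOCALID")            # set by Slurm
    or "0"                                        # fallback for single-process runs
)
os.environ.setdefault("CUDA_VISIBLE_DEVICES", local_rank)

import nvtabular as nvt  # noqa: E402  (import only after the GPU is pinned)
```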
###Code
!horovodrun -np 2 sh hvd_wrapper.sh python tf_trainer.py --dir_in $BASE_DIR --batch_size 16384
###Output
2021-06-04 16:39:06.000313: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,0]<stderr>:2021-06-04 16:39:08.979997: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,1]<stderr>:2021-06-04 16:39:09.064191: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,0]<stderr>:2021-06-04 16:39:10.138200: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
[1,0]<stderr>:2021-06-04 16:39:10.138376: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcuda.so.1
[1,0]<stderr>:2021-06-04 16:39:10.139777: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Found device 0 with properties:
[1,0]<stderr>:pciBusID: 0000:0b:00.0 name: GeForce GTX 1080 Ti computeCapability: 6.1
[1,0]<stderr>:coreClock: 1.582GHz coreCount: 28 deviceMemorySize: 10.91GiB deviceMemoryBandwidth: 451.17GiB/s
[1,0]<stderr>:2021-06-04 16:39:10.139823: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,0]<stderr>:2021-06-04 16:39:10.139907: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
[1,0]<stderr>:2021-06-04 16:39:10.139949: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
[1,0]<stderr>:2021-06-04 16:39:10.139990: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
[1,0]<stderr>:2021-06-04 16:39:10.140029: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
[1,0]<stderr>:2021-06-04 16:39:10.140084: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.11
[1,0]<stderr>:2021-06-04 16:39:10.140123: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
[1,0]<stderr>:2021-06-04 16:39:10.140169: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
[1,0]<stderr>:2021-06-04 16:39:10.144021: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1888] Adding visible gpu devices: 0
[1,1]<stderr>:2021-06-04 16:39:10.367414: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
[1,1]<stderr>:2021-06-04 16:39:10.367496: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcuda.so.1
[1,1]<stderr>:2021-06-04 16:39:10.368324: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Found device 0 with properties:
[1,1]<stderr>:pciBusID: 0000:42:00.0 name: GeForce GTX 1080 Ti computeCapability: 6.1
[1,1]<stderr>:coreClock: 1.582GHz coreCount: 28 deviceMemorySize: 10.92GiB deviceMemoryBandwidth: 451.17GiB/s
[1,1]<stderr>:2021-06-04 16:39:10.368347: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,1]<stderr>:2021-06-04 16:39:10.368396: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
[1,1]<stderr>:2021-06-04 16:39:10.368424: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
[1,1]<stderr>:2021-06-04 16:39:10.368451: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
[1,1]<stderr>:2021-06-04 16:39:10.368475: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
[1,1]<stderr>:2021-06-04 16:39:10.368512: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.11
[1,1]<stderr>:2021-06-04 16:39:10.368537: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
[1,1]<stderr>:2021-06-04 16:39:10.368573: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
[1,1]<stderr>:2021-06-04 16:39:10.369841: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1888] Adding visible gpu devices: 0
[1,1]<stderr>:2021-06-04 16:39:11.730033: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
[1,1]<stderr>:2021-06-04 16:39:11.730907: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Found device 0 with properties:
[1,1]<stderr>:pciBusID: 0000:42:00.0 name: GeForce GTX 1080 Ti computeCapability: 6.1
[1,1]<stderr>:coreClock: 1.582GHz coreCount: 28 deviceMemorySize: 10.92GiB deviceMemoryBandwidth: 451.17GiB/s
[1,1]<stderr>:2021-06-04 16:39:11.730990: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,1]<stderr>:2021-06-04 16:39:11.731005: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
[1,1]<stderr>:2021-06-04 16:39:11.731018: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
[1,1]<stderr>:2021-06-04 16:39:11.731029: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
[1,1]<stderr>:2021-06-04 16:39:11.731038: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
[1,1]<stderr>:2021-06-04 16:39:11.731049: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.11
[1,1]<stderr>:2021-06-04 16:39:11.731059: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
[1,1]<stderr>:2021-06-04 16:39:11.731078: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
[1,1]<stderr>:2021-06-04 16:39:11.732312: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1888] Adding visible gpu devices: 0
[1,1]<stderr>:2021-06-04 16:39:11.732350: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,1]<stderr>:2021-06-04 16:39:11.732473: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1287] Device interconnect StreamExecutor with strength 1 edge matrix:
[1,1]<stderr>:2021-06-04 16:39:11.732487: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1293] 0
[1,1]<stderr>:2021-06-04 16:39:11.732493: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1306] 0: N
[1,1]<stderr>:2021-06-04 16:39:11.734431: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3352 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:42:00.0, compute capability: 6.1)
[1,0]<stderr>:2021-06-04 16:39:11.821346: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
[1,0]<stderr>:2021-06-04 16:39:11.822270: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Found device 0 with properties:
[1,0]<stderr>:pciBusID: 0000:0b:00.0 name: GeForce GTX 1080 Ti computeCapability: 6.1
[1,0]<stderr>:coreClock: 1.582GHz coreCount: 28 deviceMemorySize: 10.91GiB deviceMemoryBandwidth: 451.17GiB/s
[1,0]<stderr>:2021-06-04 16:39:11.822360: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,0]<stderr>:2021-06-04 16:39:11.822376: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
[1,0]<stderr>:2021-06-04 16:39:11.822389: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
[1,0]<stderr>:2021-06-04 16:39:11.822400: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
[1,0]<stderr>:2021-06-04 16:39:11.822411: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
[1,0]<stderr>:2021-06-04 16:39:11.822425: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.11
[1,0]<stderr>:2021-06-04 16:39:11.822434: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
[1,0]<stderr>:2021-06-04 16:39:11.822454: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
[1,0]<stderr>:2021-06-04 16:39:11.823684: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1888] Adding visible gpu devices: 0
[1,0]<stderr>:2021-06-04 16:39:11.823731: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[1,0]<stderr>:2021-06-04 16:39:11.823868: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1287] Device interconnect StreamExecutor with strength 1 edge matrix:
[1,0]<stderr>:2021-06-04 16:39:11.823881: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1293] 0
[1,0]<stderr>:2021-06-04 16:39:11.823888: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1306] 0: N
[1,0]<stderr>:2021-06-04 16:39:11.825784: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3352 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:0b:00.0, compute capability: 6.1)
[1,0]<stderr>:2021-06-04 16:39:17.634485: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:116] None of the MLIR optimization passes are enabled (registered 2)
[1,0]<stderr>:2021-06-04 16:39:17.668915: I tensorflow/core/platform/profile_utils/cpu_utils.cc:112] CPU Frequency: 2993950000 Hz
[1,1]<stderr>:2021-06-04 16:39:17.694128: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:116] None of the MLIR optimization passes are enabled (registered 2)
[1,1]<stderr>:2021-06-04 16:39:17.703326: I tensorflow/core/platform/profile_utils/cpu_utils.cc:112] CPU Frequency: 2993950000 Hz
[1,0]<stderr>:2021-06-04 16:39:17.780825: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
[1,1]<stderr>:2021-06-04 16:39:17.810644: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
[1,0]<stderr>:2021-06-04 16:39:17.984966: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
[1,1]<stderr>:2021-06-04 16:39:18.012113: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
[1,0]<stdout>:Step #0 Loss: 0.695094
[1,0]<stdout>:Step #100 Loss: 0.669580
[1,0]<stdout>:Step #200 Loss: 0.661098
[1,0]<stdout>:Step #300 Loss: 0.660680
[1,0]<stdout>:Step #400 Loss: 0.658633
[1,0]<stdout>:Step #500 Loss: 0.660251
[1,0]<stdout>:Step #600 Loss: 0.657047
|
run_detection_rnn.ipynb | ###Markdown
This script applies the DCU data to an LSTM model.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import os
import argparse
import copy
import numpy as np
import pandas as pd
import keras
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
from data_handler import DataHandler
from keras.layers import Activation, Dense, LSTM
from keras.models import Sequential
from keras.callbacks import CSVLogger
def embeding(df):
df_copy = copy.deepcopy(df)
for header, values in df_copy.items():
df_copy[header] = pd.Categorical(df_copy[header])
df_copy[header] = df_copy[header].cat.codes
return df_copy
def DA_Jitter(X, sigma=0.05):
myNoise = np.random.normal(loc=0, scale=sigma, size=X.shape)
return X+myNoise
def data_augmentation(data_arr,sigma):
newData_arr = data_arr[:,1:8]
newData_arr = DA_Jitter(newData_arr, sigma)
newData_arr = np.column_stack((data_arr[:,0],newData_arr,data_arr[:,8]))
newData_arr = newData_arr[newData_arr[:,-1] != 1]
return newData_arr
# parse arguments
## general
arg_parser = argparse.ArgumentParser()
arg_parser.add_argument('--working_path', default='.')
## data
arg_parser.add_argument('dataset_name', default='mimic3',
help='The data files should be saved in [working_path]/data/[dataset_name] directory.')
arg_parser.add_argument('label_name', default='mortality')
arg_parser.add_argument('--max_timesteps', type=int, default=200,
help='Time series of at most # time steps are used. Default: 200.')
arg_parser.add_argument('--max_timestamp', type=int, default=48*60*60,
help='Time series of at most # seconds are used. Default: 48 (hours).')
## model
arg_parser.add_argument('--recurrent_dim', type=lambda x: x and [int(xx) for xx in x.split(',')] or [], default='64')
arg_parser.add_argument('--hidden_dim', type=lambda x: x and [int(xx) for xx in x.split(',')] or [], default='64')
arg_parser.add_argument('--model', default='GRUD', choices=['GRUD', 'GRUforward', 'GRU0', 'GRUsimple'])
arg_parser.add_argument('--use_bidirectional_rnn', default=False)
## training
arg_parser.add_argument('--pretrained_model_file', default=None,
help='If pre-trained model is provided, training will be skipped.') # e.g., [model_name]_[i_fold].h5
arg_parser.add_argument('--epochs', type=int, default=100)
arg_parser.add_argument('--early_stopping_patience', type=int, default=10)
arg_parser.add_argument('--batch_size', type=int, default=2)
## set the actual arguments if running in notebook
if not (__name__ == '__main__' and '__file__' in globals()):
# '''ARGS = arg_parser.parse_args([
# 'mimic3',
# 'mortality',
# '--model', 'GRUD',
# '--hidden_dim', '',
# '--epochs', '100'
# ])'''
ARGS = arg_parser.parse_args([
'detection',
'risk_situation',
'--model', 'GRUD',
'--hidden_dim', '',
'--max_timestamp', '5807537',
'--epochs', '100'
])
else:
ARGS = arg_parser.parse_args()
#print('Arguments:', ARGS)
# get dataset
dataset = DataHandler(
data_path=os.path.join(ARGS.working_path, 'data', ARGS.dataset_name),
label_name=ARGS.label_name,
max_steps=ARGS.max_timesteps,
max_timestamp=ARGS.max_timestamp
)
###Output
_____no_output_____
###Markdown
Embedding
###Code
sigma = 0.05
data = pd.DataFrame(dataset._data['input'])
data = embeding(data)
## drop fall and timestamp, and merge classes
df = pd.DataFrame(data)
df.columns = ["timestamp","name", "latitude", "longitude", "step","gsr","heart_rate","skin_temp","calories","risk_situation"]
df.pop("timestamp")
df = df[df.risk_situation != -1]
df = df[df.risk_situation != 0]
df = df[df.risk_situation != 3]
df.loc[df.risk_situation == 4 , 'risk_situation'] = 0
df.loc[df.risk_situation == 2 , 'risk_situation'] = 0
# df = df[pd.notnull(df['risk_situation'])]
to_remove = np.random.choice(df[df['risk_situation']==1].index,size=15000,replace=False)
df=df.drop(to_remove)
stat = df['risk_situation'].value_counts(dropna=False)
print(stat)
df.head(30)
targets = df.pop('risk_situation')
targets.shape
from keras.utils import to_categorical
# targets = to_categorical(targets,2)
print(targets.shape)
X_train, X_val, y_train, y_val = train_test_split(df.values,
targets, test_size=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X_train,
y_train, test_size=0.125, random_state=0)
print(X_train.shape, y_train.shape)
print(X_val.shape, y_val.shape)
print(X_test.shape, y_test.shape)
###Output
(6678,)
(4674, 8) (4674,)
(1336, 8) (1336,)
(668, 8) (668,)
###Markdown
Data augmentation
###Code
train = np.column_stack((X_train,y_train))
stat = pd.DataFrame(train)[8].value_counts(dropna=False)
print(stat)
newData_arr = data_augmentation(train, 0.05)
X_train = np.concatenate((X_train,newData_arr[:,:8]))
y_train = np.concatenate((y_train,newData_arr[:,8]))
newData_arr = data_augmentation(train, 0.04)
X_train = np.concatenate((X_train,newData_arr[:,:8]))
y_train = np.concatenate((y_train,newData_arr[:,8]))
newData_arr = data_augmentation(train, 0.06)
X_train = np.concatenate((X_train,newData_arr[:,:8]))
y_train = np.concatenate((y_train,newData_arr[:,8]))
# newData_arr = data_augmentation(train, 0.055)
# X_train = np.concatenate((X_train,newData_arr[:,:8]))
# y_train = np.concatenate((y_train,newData_arr[:,8]))
#
# newData_arr = data_augmentation(train, 0.045)
# X_train = np.concatenate((X_train,newData_arr[:,:8]))
# y_train = np.concatenate((y_train,newData_arr[:,8]))
#
# newData_arr = data_augmentation(train, 0.0555)
# X_train = np.concatenate((X_train,newData_arr[:,:8]))
# y_train = np.concatenate((y_train,newData_arr[:,8]))
#
# newData_arr = data_augmentation(train, 0.0455)
# X_train = np.concatenate((X_train,newData_arr[:,:8]))
# y_train = np.concatenate((y_train,newData_arr[:,8]))
train = np.column_stack((X_train,y_train))
stat = pd.DataFrame(train)[8].value_counts(dropna=False)
print(stat)
print(len(X_train), 'train sequences')
print(len(X_test), 'test sequences')
print('Pad sequences (samples x time)')
#X_train = sequence.pad_sequences(X_train[:200], maxlen=maxlen)
#X_test = sequence.pad_sequences(X_test[:200], maxlen=maxlen)
print('x_train shape:', X_train.shape)
print('x_test shape:', X_test.shape)
#X = X_train.reshape(len(X_train),3,3)
#y = y_train.values.reshape(len(y_train), 1)
# reshape input to be [samples, time steps, features]
trainX = np.reshape(X_train, (X_train.shape[0], 1, X_train.shape[1]))
testX = np.reshape(X_test, (X_test.shape[0], 1, X_test.shape[1]))
X_val = np.reshape(X_val, (X_val.shape[0], 1, X_val.shape[1]))
# trainY = np.reshape(y_train, (y_train.shape[0], 1, 1))
# testY = np.reshape(y_test, (y_test.shape[0], 1, 1))
print(trainX.shape, y_train.shape, testX.shape, y_test.shape)
class LossHistory(keras.callbacks.Callback):
def on_train_begin(self, logs={}):
self.losses = []
def on_batch_end(self, batch, logs={}):
self.losses.append(logs.get('loss'))
from keras.optimizers import SGD
opt = SGD(lr=0.001)
# create and fit the LSTM network
print("Building model...")
model = Sequential()
model.add(LSTM(8, input_shape=(1, 8)))
model.add(Dense(1, activation='sigmoid')) # sigmoid for a single binary output; softmax over one unit would always output 1
# model.compile(loss='mean_squared_error', optimizer='adam',metrics=['accuracy'])
# model.compile(loss='mean_squared_error', optimizer=opt,metrics=['accuracy'])
model.compile(loss='binary_crossentropy', optimizer='adam',metrics=['accuracy'])
model.summary()
###Output
Building model...
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
lstm_1 (LSTM) (None, 8) 544
_________________________________________________________________
dense_1 (Dense) (None, 1) 9
=================================================================
Total params: 553
Trainable params: 553
Non-trainable params: 0
_________________________________________________________________
###Markdown
Training
###Code
print("Training...")
history = LossHistory()
csv_logger = CSVLogger('log.csv', append=False, separator=';')
h = model.fit(trainX, y_train, epochs=50, batch_size=200, verbose=2,callbacks=[csv_logger],validation_data=(X_val, y_val))
print(max(h.history['val_acc']))
print(h.history['val_acc'].index(max(h.history['val_acc'])))
log = pd.read_csv('log.csv',sep=';')
ax = plt.gca()
ax.set_ylim([0,1])
ax.set_xlim([0,50])
log.plot(kind='line',x='epoch',y='acc',ax=ax)
log.plot(kind='line',x='epoch',y='loss', color='red', ax=ax)
log.plot(kind='line',x='epoch',y='val_acc',color='purple',ax=ax)
log.plot(kind='line',x='epoch',y='val_loss', color='green', ax=ax)
plt.show()
from sklearn.metrics import classification_report
y_pred = model.predict_classes(testX)
y_test_rounded = np.argmax(y_test,axis=1)
# print(y_test)
# print(y_test)
#
print(classification_report(y_test, y_pred))
from sklearn.metrics import confusion_matrix
confusion_matrix(y_test, y_pred)
###Output
precision recall f1-score support
0 0.00 0.00 0.00 55
1 0.92 1.00 0.96 613
avg / total 0.84 0.92 0.88 668
###Markdown
This script applies the DCU data to an LSTM model.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import os
import argparse
import copy
import numpy as np
import pandas as pd
import keras
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
from data_handler import DataHandler
from keras.layers import Activation, Dense, LSTM
from keras.models import Sequential
from keras.callbacks import CSVLogger
def embeding(df):
df_copy = copy.deepcopy(df)
for header, values in df_copy.items():
df_copy[header] = pd.Categorical(df_copy[header])
df_copy[header] = df_copy[header].cat.codes
return df_copy
def DA_Jitter(X, sigma):
myNoise = np.random.normal(loc=0, scale=sigma, size=X.shape)
return X+myNoise
def data_augmentation(data_arr,sigma):
newData_arr = data_arr[:,1:8]
newData_arr = DA_Jitter(newData_arr, sigma)
newData_arr = np.column_stack((data_arr[:,0],newData_arr,data_arr[:,8]))
newData_arr = newData_arr[newData_arr[:,-1] != 1]
return newData_arr
# parse arguments
## general
arg_parser = argparse.ArgumentParser()
arg_parser.add_argument('--working_path', default='.')
## data
arg_parser.add_argument('dataset_name', default='mimic3',
help='The data files should be saved in [working_path]/data/[dataset_name] directory.')
arg_parser.add_argument('label_name', default='mortality')
arg_parser.add_argument('--max_timesteps', type=int, default=200,
help='Time series of at most # time steps are used. Default: 200.')
arg_parser.add_argument('--max_timestamp', type=int, default=48*60*60,
help='Time series of at most # seconds are used. Default: 48 (hours).')
## model
arg_parser.add_argument('--recurrent_dim', type=lambda x: x and [int(xx) for xx in x.split(',')] or [], default='64')
arg_parser.add_argument('--hidden_dim', type=lambda x: x and [int(xx) for xx in x.split(',')] or [], default='64')
arg_parser.add_argument('--model', default='GRUD', choices=['GRUD', 'GRUforward', 'GRU0', 'GRUsimple'])
arg_parser.add_argument('--use_bidirectional_rnn', default=False)
## training
arg_parser.add_argument('--pretrained_model_file', default=None,
help='If pre-trained model is provided, training will be skipped.') # e.g., [model_name]_[i_fold].h5
arg_parser.add_argument('--epochs', type=int, default=100)
arg_parser.add_argument('--early_stopping_patience', type=int, default=10)
arg_parser.add_argument('--batch_size', type=int, default=2)
## set the actual arguments if running in notebook
if not (__name__ == '__main__' and '__file__' in globals()):
# '''ARGS = arg_parser.parse_args([
# 'mimic3',
# 'mortality',
# '--model', 'GRUD',
# '--hidden_dim', '',
# '--epochs', '100'
# ])'''
ARGS = arg_parser.parse_args([
'detection',
'risk_situation',
'--model', 'GRUD',
'--hidden_dim', '',
'--max_timestamp', '5807537',
'--epochs', '100'
])
else:
ARGS = arg_parser.parse_args()
#print('Arguments:', ARGS)
# get dataset
dataset = DataHandler(
data_path=os.path.join(ARGS.working_path, 'data', ARGS.dataset_name),
label_name=ARGS.label_name,
max_steps=ARGS.max_timesteps,
max_timestamp=ARGS.max_timestamp
)
###Output
_____no_output_____
###Markdown
Embedding
###Code
sigma = 0.05
data = pd.DataFrame(dataset._data['input'])
data = embeding(data)
## drop fall and timestamp, and merge classes
df = pd.DataFrame(data)
df.columns = ["timestamp","name", "latitude", "longitude", "step","gsr","heart_rate","skin_temp","calories","risk_situation"]
df.pop("timestamp")
df = df[df.risk_situation != -1]
df = df[df.risk_situation != 0]
df = df[df.risk_situation != 3]
df.loc[df.risk_situation == 4 , 'risk_situation'] = 0
df.loc[df.risk_situation == 2 , 'risk_situation'] = 0
# df = df[pd.notnull(df['risk_situation'])]
to_remove = np.random.choice(df[df['risk_situation']==1].index,size=15000,replace=False)
df=df.drop(to_remove)
stat = df['risk_situation'].value_counts(dropna=False)
print(stat)
df.head(30)
targets = df.pop('risk_situation')
targets.shape
from keras.utils import to_categorical
# targets = to_categorical(targets,2)
print(targets.shape)
X_train, X_val, y_train, y_val = train_test_split(df.values,
targets, test_size=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X_train,
y_train, test_size=0.125, random_state=0)
print(X_train.shape, y_train.shape)
print(X_val.shape, y_val.shape)
print(X_test.shape, y_test.shape)
###Output
(6678,)
(4674, 8) (4674,)
(1336, 8) (1336,)
(668, 8) (668,)
###Markdown
Data augmentation
###Code
train = np.column_stack((X_train,y_train))
stat = pd.DataFrame(train)[8].value_counts(dropna=False)
print(stat)
newData_arr = data_augmentation(train, 0.05)
X_train = np.concatenate((X_train,newData_arr[:,:8]))
y_train = np.concatenate((y_train,newData_arr[:,8]))
pd.DataFrame(train)[7].plot()
pd.DataFrame(newData_arr)[7].plot()
newData_arr = data_augmentation(train, 0.04)
X_train = np.concatenate((X_train,newData_arr[:,:8]))
y_train = np.concatenate((y_train,newData_arr[:,8]))
newData_arr = data_augmentation(train, 0.06)
X_train = np.concatenate((X_train,newData_arr[:,:8]))
y_train = np.concatenate((y_train,newData_arr[:,8]))
newData_arr = data_augmentation(train, 0.055)
X_train = np.concatenate((X_train,newData_arr[:,:8]))
y_train = np.concatenate((y_train,newData_arr[:,8]))
newData_arr = data_augmentation(train, 0.045)
X_train = np.concatenate((X_train,newData_arr[:,:8]))
y_train = np.concatenate((y_train,newData_arr[:,8]))
newData_arr = data_augmentation(train, 0.03)
X_train = np.concatenate((X_train,newData_arr[:,:8]))
y_train = np.concatenate((y_train,newData_arr[:,8]))
newData_arr = data_augmentation(train, 0.07)
X_train = np.concatenate((X_train,newData_arr[:,:8]))
y_train = np.concatenate((y_train,newData_arr[:,8]))
train = np.column_stack((X_train,y_train))
stat = pd.DataFrame(train)[8].value_counts(dropna=False)
print(stat)
print(len(X_train), 'train sequences')
print(len(X_test), 'test sequences')
print('Pad sequences (samples x time)')
#X_train = sequence.pad_sequences(X_train[:200], maxlen=maxlen)
#X_test = sequence.pad_sequences(X_test[:200], maxlen=maxlen)
print('x_train shape:', X_train.shape)
print('x_test shape:', X_test.shape)
#X = X_train.reshape(len(X_train),3,3)
#y = y_train.values.reshape(len(y_train), 1)
# reshape input to be [samples, time steps, features]
trainX = np.reshape(X_train, (X_train.shape[0], 1, X_train.shape[1]))
testX = np.reshape(X_test, (X_test.shape[0], 1, X_test.shape[1]))
X_val = np.reshape(X_val, (X_val.shape[0], 1, X_val.shape[1]))
# trainY = np.reshape(y_train, (y_train.shape[0], 1, 1))
# testY = np.reshape(y_test, (y_test.shape[0], 1, 1))
print(trainX.shape, y_train.shape, testX.shape, y_test.shape)
class LossHistory(keras.callbacks.Callback):
def on_train_begin(self, logs={}):
self.losses = []
def on_batch_end(self, batch, logs={}):
self.losses.append(logs.get('loss'))
from keras.optimizers import SGD
opt = SGD(lr=0.001)
# create and fit the LSTM network
print("Building model...")
model = Sequential()
model.add(LSTM(8, input_shape=(1, 8)))
model.add(Dense(1, activation='sigmoid')) # sigmoid for a single binary output; softmax over one unit would always output 1
# model.compile(loss='mean_squared_error', optimizer='adam',metrics=['accuracy'])
# model.compile(loss='mean_squared_error', optimizer=opt,metrics=['accuracy'])
model.compile(loss='binary_crossentropy', optimizer='adam',metrics=['accuracy'])
model.summary()
###Output
Building model...
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
lstm_6 (LSTM) (None, 8) 544
_________________________________________________________________
dense_6 (Dense) (None, 1) 9
=================================================================
Total params: 553
Trainable params: 553
Non-trainable params: 0
_________________________________________________________________
###Markdown
Training
###Code
print("Training...")
history = LossHistory()
csv_logger = CSVLogger('log_epoch.csv', append=False, separator=';')
h = model.fit(trainX, y_train, epochs=50, batch_size=200, verbose=2,callbacks=[csv_logger],validation_data=(X_val, y_val))
print(max(h.history['val_acc']))
print(h.history['val_acc'].index(max(h.history['val_acc'])))
log = pd.read_csv('log_epoch.csv',sep=';')
ax = plt.gca()
ax.set_ylim([0,1])
ax.set_xlim([0,50])
log.plot(kind='line',x='epoch',y='acc',ax=ax)
log.plot(kind='line',x='epoch',y='loss', color='red', ax=ax)
log.plot(kind='line',x='epoch',y='val_acc',color='purple',ax=ax)
log.plot(kind='line',x='epoch',y='val_loss', color='green', ax=ax)
plt.show()
from sklearn.metrics import classification_report
y_pred = model.predict_classes(testX)
y_test_rounded = np.argmax(y_test,axis=1)
# print(y_test)
# print(y_test)
#
print(classification_report(y_test, y_pred))
from sklearn.metrics import confusion_matrix
confusion_matrix(y_test, y_pred)
###Output
precision recall f1-score support
0 0.00 0.00 0.00 57
1 0.91 1.00 0.96 611
avg / total 0.84 0.91 0.87 668
|
homework06-pandas/homework06-fbrumen.ipynb | ###Markdown
This will import the data; you have to run it to be able to solve the homework.
###Code
def read_single_csv_entso_e(file):
return pd.read_csv(file, sep='\t', encoding='utf-16', parse_dates=["DateTime"])
def load_complete_entso_e_data(directory):
pattern = Path(directory) / '*.csv'
files = glob.glob(str(pattern))
if not files:
raise ValueError(f"No files found when searching in {pattern}, wrong directory?")
print(f'Concatenating {len(files)} csv files...')
each_csv_file = [read_single_csv_entso_e(file) for file in files]
data = pd.concat(each_csv_file, ignore_index=True)
data = data.sort_values(by=["AreaName", "DateTime"])
data = data.set_index("DateTime")
print("Loading done.")
return data
power_demand = load_complete_entso_e_data(DOWNLOAD_DIR)
###Output
Concatenating 68 csv files...
Loading done.
###Markdown
Exercise 1 - Calculate the relation of Wednesday average consumption to Sunday average consumption for selected countriesIn this exercise, calculate the relation of Wednesday average consumption to Sunday average consumption for the following countries: Austria, Germany, United Kingdom, Spain, Sweden, Italy, Croatia.(1) First create a variable that contains only power consumption data for these countries. The pandas command ```isin()``` may be very helpful here. Reduce the data to only consider the period 2015-01-01 until 2019-12-31. The lecture slides may contain relevant code here.(2) Then, group the data by weekday and country (i.e. AreaName). Use ```groupby``` and ```mean``` for that purpose. (3) Calculate for all countries the proportion of Wednesday (day 2) and Sunday (day 6) by dividing the two values.(4) For which country is this relative value highest? What could this indicate?
###Code
power_demand.columns
countries = power_demand['AreaName'].isin(['Austria', 'Germany', 'United Kingdom', 'Spain', 'Sweden', 'Italy', 'Croatia'])
power_demand_countries = power_demand[countries]
power_demand_selected = power_demand_countries['2015-01-01':'2019-12-31']
power_demand_selected
power_demand_weekday = power_demand_selected.groupby([power_demand_selected.index.weekday, 'AreaName']).mean()
power_demand_wednesday = power_demand_weekday.loc[2, 'TotalLoadValue']
power_demand_sunday = power_demand_weekday.loc[6, 'TotalLoadValue']
power_demand_wednesday
power_demand_sunday
relation_wed_sun = power_demand_wednesday / power_demand_sunday
relation_wed_sun
highest_relation = relation_wed_sun.idxmax()
highest_relation
###Output
_____no_output_____
###Markdown
Italy has the highest relative value -> The power consumption in Italy is much higher on Wednesday than on Sunday, probably because most of the shops and companies are closed on Sunday. Italy is a religious country, which might be another reason for the lower power consumption on Sundays. Because of the warmer seasons during the year, people tend to spend more time outside, especially on Sunday when they don't have to work - households consume less energy. Exercise 2 - Calculate the monthly average consumption as deviation from mean consumptionFor the same countries as in the above dataset, calculate the monthly mean consumption as deviation from the mean of consumption over the whole time. Plot the curves for all countries.(1) First create a variable that contains only power consumption data for the selected countries. The pandas command ```isin()``` may be very helpful here. If you did Exercise 1, you can use the same dataset.(2) Then, aggregate the data by country (i.e. AreaName) and month. Use ```groupby``` and ```mean``` for that purpose. Select the column ```TotalLoadValue``` from the result.(3) Aggregate the data by country (i.e. AreaName) only, i.e. calculate the average consumption by country using ```groupby``` and ```mean```. Select the column ```TotalLoadValue``` from the result.(4) Divide the result of (2) by (3) and observe how well broadcasting works here.(5) Use the command ```unstack``` on the result. How does the table look now? Plot the result. If your resulting unstacked dataframe is called ```result```, you may use ```result.plot()``` to get a nice plot.(6) How would you explain the difference in the curve between Croatia and Sweden?
###Code
power_demand_monthly = power_demand_countries.groupby([power_demand_countries.index.month, 'AreaName']).mean()
power_demand_monthly = power_demand_monthly['TotalLoadValue']
power_demand_monthly
power_demand_average = power_demand_countries.groupby(['AreaName']).mean()
power_demand_average = power_demand_average['TotalLoadValue']
power_demand_average
power_demand_monthly_average = power_demand_monthly/power_demand_average
result = power_demand_monthly_average.unstack()
result.plot()
plt.xlabel('months')
plt.ylabel('monthly average consumption')
plt.show()
plt.plot(result)
plt.xlabel('months')
plt.ylabel('monthly average consumption')
###Output
_____no_output_____
###Markdown
Difference between Croatia and Sweden: Sweden has the highest consumption during the winter months of all the countries probably because of the short days and less sunlight. The consumption during summer in Sweden is the lowest probably because of the long days. Croatia has a high consumption during the summer months because of the tourists and the air conditioning. Exercise 3 - calculate the hourly average consumption as deviation from mean consumptionDo the same as in exercise 2, but now for the hourly average consumption. I.e. how much is consumed on each of the 24 hours of a day?Which country has the lowest, which the highest variability? What may be the reason for it?
###Code
power_demand_hourly = power_demand_countries.groupby([power_demand_countries.index.hour, 'AreaName']).mean()
power_demand_hourly = power_demand_hourly['TotalLoadValue']
power_demand_average = power_demand_countries.groupby(['AreaName']).mean()
power_demand_average = power_demand_average['TotalLoadValue']
power_demand_hourly_average = power_demand_hourly/power_demand_average
result_hours = power_demand_hourly_average.unstack()
result_hours.plot()
plt.xlabel('hours')
plt.ylabel('hourly average consumption')
plt.show()
hours_country = result_hours.std()
print(hours_country)
print('\nhighest deviation:\n', hours_country[hours_country == hours_country.max()])
print('\nlowest deviation:\n', hours_country[hours_country == hours_country.min()])
###Output
highest deviation:
AreaName
United Kingdom 0.177716
dtype: float64
lowest deviation:
AreaName
Sweden 0.09139
dtype: float64
###Markdown
Sweden has the lowest deviation of the countries -> there is not a big difference between the energy consumption during the day and at night, probably because of the long or very short days. The UK has the highest deviation -> a big difference between the energy consumption during day and night, with the highest consumption around 7, maybe because people spend more time inside (colder weather, TV...). Exercise 4 - Calculate the average load per capitaBelow you find a table with population data for our selected countries. You should use it to calculate per capita consumption.(1) Calculate the average load in all countries using ```groupby``` and ```mean``` and select the column ```TotalLoadValue``` from the result.(2) Divide the result by the ```Population``` column of the dataframe ```population```. Observe how nicely broadcasting helps here.(3) Plot the result. Which country has the highest load, which the lowest? What may be the reason? In which unit is this value? How could we convert it to MWh per year?
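On the unit question, a minimal sketch (assuming `average_load_capita` is the per-person mean load in MW computed in the solution cell below): average power multiplied by the hours in a year gives annual energy in MWh per person.
```python
# Sketch, assuming `average_load_capita` holds the mean load per person in MW
# (as computed in the solution cell below). Average power times hours = energy.
HOURS_PER_YEAR = 365 * 24  # 8760, ignoring leap years

energy_per_capita_mwh_per_year = average_load_capita * HOURS_PER_YEAR
print(energy_per_capita_mwh_per_year)
```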
###Code
population = pd.DataFrame({'Country': ["Austria", "Croatia", "Germany", "Italy", "Spain", "Sweden", "United Kingdom"],
'Population': [8840521, 4087843, 82905782, 60421760, 46796540, 10175214, 66460344]})
population.index = population["Country"]
population
average_load = power_demand_countries.groupby(['AreaName']).mean()
average_load = average_load['TotalLoadValue']
average_load_capita = average_load / population['Population']
fig,ax = plt.subplots(figsize=(10,5))
ax.plot(average_load_capita)
ax.set_xlabel('COUNTRIES')
ax.set_ylabel('MW')
plt.show()
print(average_load_capita)
print('\nhighest load:\n', average_load_capita[average_load_capita == average_load_capita.max()])
print('\nlowest load:\n', average_load_capita[average_load_capita == average_load_capita.min()])
###Output
_____no_output_____ |
Second-Minimum-Node-In-a-Binary-Tree.ipynb | ###Markdown
Second Minimum Node In a Binary TreeGiven a non-empty special binary tree consisting of nodes with the non-negative value, where each node in this tree has exactly two or zero sub-node. If the node has two sub-nodes, then this node's value is the smaller value among its two sub-nodes. More formally, the property root.val = min(root.left.val, root.right.val) always holds. 解析题目来源:[LeetCode - Second Minimum Node In a Binary Tree - 671](https://leetcode.com/problems/second-minimum-node-in-a-binary-tree/)题目非常简单,遍历树的方法非常多
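As one illustration of the many possible traversals, here is a hedged recursive sketch that avoids collecting and sorting every value; it assumes the usual LeetCode `TreeNode` with `val`, `left`, and `right` attributes:
```python
def find_second_minimum_recursive(root):
    # root.val is the global minimum, because every parent stores the smaller
    # of its two children; we search for the smallest value strictly above it.
    def helper(node):
        if node is None:
            return -1
        if node.val > root.val:
            # Everything below this node is >= node.val, so node.val is the
            # best candidate from this subtree.
            return node.val
        left, right = helper(node.left), helper(node.right)
        if left == -1:
            return right
        if right == -1:
            return left
        return min(left, right)

    return helper(root)
```
The solution below, which collects distinct values and sorts them, returns the same answer; this sketch just trades the sort for a single O(n) traversal.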
###Code
def findSecondMinimumValue(root):
queue = [root]
result = []
while(len(queue) != 0):
node = queue.pop()
if (node.val not in result):
result.append(node.val)
if (node.left is not None):
queue.append(node.left)
if (node.right is not None):
queue.append(node.right)
result.sort()
if (len(result) <= 1):
return -1
return result[1]
###Output
_____no_output_____ |
Model backlog/Train/62-melanoma-5fold-inceptionresnetv2.ipynb | ###Markdown
Dependencies
###Code
# !pip install --quiet efficientnet
!pip install --quiet image-classifiers
import warnings, json, re, glob, math
from scripts_step_lr_schedulers import *
from melanoma_utility_scripts import *
from kaggle_datasets import KaggleDatasets
from sklearn.model_selection import KFold
import tensorflow.keras.layers as L
import tensorflow.keras.backend as K
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
from tensorflow.keras import optimizers, layers, metrics, losses, Model
# import efficientnet.tfkeras as efn
from classification_models.tfkeras import Classifiers
SEED = 0
seed_everything(SEED)
warnings.filterwarnings("ignore")
###Output
_____no_output_____
###Markdown
TPU configuration
###Code
strategy, tpu = set_up_strategy()
print("REPLICAS: ", strategy.num_replicas_in_sync)
AUTO = tf.data.experimental.AUTOTUNE
###Output
Running on TPU grpc://10.0.0.2:8470
REPLICAS: 8
###Markdown
Model parameters
###Code
config = {
"HEIGHT": 256,
"WIDTH": 256,
"CHANNELS": 3,
"BATCH_SIZE": 128,
"EPOCHS": 12,
"LEARNING_RATE": 3e-4,
"ES_PATIENCE": 10,
"N_FOLDS": 5,
"N_USED_FOLDS": 5,
"TTA_STEPS": 25,
"BASE_MODEL": 'inceptionresnetv2',
"BASE_MODEL_WEIGHTS": 'imagenet',
"DATASET_PATH": 'melanoma-256x256'
}
with open('config.json', 'w') as json_file:
json.dump(json.loads(json.dumps(config)), json_file)
config
###Output
_____no_output_____
###Markdown
Load data
###Code
database_base_path = '/kaggle/input/siim-isic-melanoma-classification/'
k_fold = pd.read_csv(database_base_path + 'train.csv')
test = pd.read_csv(database_base_path + 'test.csv')
print('Train samples: %d' % len(k_fold))
display(k_fold.head())
print(f'Test samples: {len(test)}')
display(test.head())
GCS_PATH = 'gs://kds-65548a4c87d02212371fce6e9bd762100c34bf9b9ebbd04b0dd4b65b'# KaggleDatasets().get_gcs_path(config['DATASET_PATH'])
TRAINING_FILENAMES = tf.io.gfile.glob(GCS_PATH + '/train*.tfrec')
TEST_FILENAMES = tf.io.gfile.glob(GCS_PATH + '/test*.tfrec')
###Output
Train samples: 33126
###Markdown
Augmentations
###Code
def data_augment(image, label):
p_spatial = tf.random.uniform([1], minval=0, maxval=1, dtype='float32')
p_spatial2 = tf.random.uniform([1], minval=0, maxval=1, dtype='float32')
p_rotate = tf.random.uniform([1], minval=0, maxval=1, dtype='float32')
p_crop = tf.random.uniform([1], minval=0, maxval=1, dtype='float32')
p_pixel = tf.random.uniform([1], minval=0, maxval=1, dtype='float32')
### Spatial-level transforms
if p_spatial >= .2: # flips
image['input_image'] = tf.image.random_flip_left_right(image['input_image'])
image['input_image'] = tf.image.random_flip_up_down(image['input_image'])
if p_spatial >= .7:
image['input_image'] = tf.image.transpose(image['input_image'])
if p_rotate >= .8: # rotate 270º
image['input_image'] = tf.image.rot90(image['input_image'], k=3)
elif p_rotate >= .6: # rotate 180º
image['input_image'] = tf.image.rot90(image['input_image'], k=2)
elif p_rotate >= .4: # rotate 90º
image['input_image'] = tf.image.rot90(image['input_image'], k=1)
if p_spatial2 >= .6:
if p_spatial2 >= .9:
image['input_image'] = transform_rotation(image['input_image'], config['HEIGHT'], 180.)
elif p_spatial2 >= .8:
image['input_image'] = transform_zoom(image['input_image'], config['HEIGHT'], 8., 8.)
elif p_spatial2 >= .7:
image['input_image'] = transform_shift(image['input_image'], config['HEIGHT'], 8., 8.)
else:
image['input_image'] = transform_shear(image['input_image'], config['HEIGHT'], 2.)
if p_crop >= .6: # crops
if p_crop >= .8:
image['input_image'] = tf.image.random_crop(image['input_image'], size=[int(config['HEIGHT']*.8), int(config['WIDTH']*.8), config['CHANNELS']])
elif p_crop >= .7:
image['input_image'] = tf.image.random_crop(image['input_image'], size=[int(config['HEIGHT']*.9), int(config['WIDTH']*.9), config['CHANNELS']])
else:
image['input_image'] = tf.image.central_crop(image['input_image'], central_fraction=.8)
image['input_image'] = tf.image.resize(image['input_image'], size=[config['HEIGHT'], config['WIDTH']])
if p_pixel >= .6: # Pixel-level transforms
if p_pixel >= .9:
image['input_image'] = tf.image.random_hue(image['input_image'], 0.01)
elif p_pixel >= .8:
image['input_image'] = tf.image.random_saturation(image['input_image'], 0.7, 1.3)
elif p_pixel >= .7:
image['input_image'] = tf.image.random_contrast(image['input_image'], 0.8, 1.2)
else:
image['input_image'] = tf.image.random_brightness(image['input_image'], 0.1)
return image, label
###Output
_____no_output_____
###Markdown
Auxiliary functions
###Code
# Datasets utility functions
def read_labeled_tfrecord(example, height=config['HEIGHT'], width=config['WIDTH'], channels=config['CHANNELS']):
example = tf.io.parse_single_example(example, LABELED_TFREC_FORMAT)
image = decode_image(example['image'], height, width, channels)
label = tf.cast(example['target'], tf.float32)
# meta features
data = {}
data['patient_id'] = tf.cast(example['patient_id'], tf.int32)
data['sex'] = tf.cast(example['sex'], tf.int32)
data['age_approx'] = tf.cast(example['age_approx'], tf.int32)
data['anatom_site_general_challenge'] = tf.cast(tf.one_hot(example['anatom_site_general_challenge'], 7), tf.int32)
return {'input_image': image, 'input_meta': data}, label # returns a dataset of (image, data, label)
def read_labeled_tfrecord_eval(example, height=config['HEIGHT'], width=config['WIDTH'], channels=config['CHANNELS']):
example = tf.io.parse_single_example(example, LABELED_TFREC_FORMAT)
image = decode_image(example['image'], height, width, channels)
label = tf.cast(example['target'], tf.float32)
image_name = example['image_name']
# meta features
data = {}
data['patient_id'] = tf.cast(example['patient_id'], tf.int32)
data['sex'] = tf.cast(example['sex'], tf.int32)
data['age_approx'] = tf.cast(example['age_approx'], tf.int32)
data['anatom_site_general_challenge'] = tf.cast(tf.one_hot(example['anatom_site_general_challenge'], 7), tf.int32)
return {'input_image': image, 'input_meta': data}, label, image_name # returns a dataset of (image, data, label, image_name)
def load_dataset(filenames, ordered=False, buffer_size=-1):
ignore_order = tf.data.Options()
if not ordered:
ignore_order.experimental_deterministic = False # disable order, increase speed
dataset = tf.data.TFRecordDataset(filenames, num_parallel_reads=buffer_size) # automatically interleaves reads from multiple files
dataset = dataset.with_options(ignore_order) # uses data as soon as it streams in, rather than in its original order
dataset = dataset.map(read_labeled_tfrecord, num_parallel_calls=buffer_size)
return dataset # returns a dataset of (image, data, label)
def load_dataset_eval(filenames, buffer_size=-1):
dataset = tf.data.TFRecordDataset(filenames, num_parallel_reads=buffer_size) # automatically interleaves reads from multiple files
dataset = dataset.map(read_labeled_tfrecord_eval, num_parallel_calls=buffer_size)
return dataset # returns a dataset of (image, data, label, image_name)
def get_training_dataset(filenames, batch_size, buffer_size=-1):
dataset = load_dataset(filenames, ordered=False, buffer_size=buffer_size)
dataset = dataset.map(data_augment, num_parallel_calls=AUTO)
dataset = dataset.repeat() # the training dataset must repeat for several epochs
dataset = dataset.shuffle(2048)
dataset = dataset.batch(batch_size, drop_remainder=True) # slightly faster with fixed tensor sizes
dataset = dataset.prefetch(buffer_size) # prefetch next batch while training (autotune prefetch buffer size)
return dataset
def get_validation_dataset(filenames, ordered=True, repeated=False, batch_size=32, buffer_size=-1):
dataset = load_dataset(filenames, ordered=ordered, buffer_size=buffer_size)
if repeated:
dataset = dataset.repeat()
dataset = dataset.shuffle(2048)
dataset = dataset.batch(batch_size, drop_remainder=repeated)
dataset = dataset.prefetch(buffer_size)
return dataset
def get_eval_dataset(filenames, batch_size=32, buffer_size=-1):
dataset = load_dataset_eval(filenames, buffer_size=buffer_size)
dataset = dataset.batch(batch_size, drop_remainder=False)
dataset = dataset.prefetch(buffer_size)
return dataset
# Test function
def read_unlabeled_tfrecord(example, height=config['HEIGHT'], width=config['WIDTH'], channels=config['CHANNELS']):
example = tf.io.parse_single_example(example, UNLABELED_TFREC_FORMAT)
image = decode_image(example['image'], height, width, channels)
image_name = example['image_name']
# meta features
data = {}
data['patient_id'] = tf.cast(example['patient_id'], tf.int32)
data['sex'] = tf.cast(example['sex'], tf.int32)
data['age_approx'] = tf.cast(example['age_approx'], tf.int32)
data['anatom_site_general_challenge'] = tf.cast(tf.one_hot(example['anatom_site_general_challenge'], 7), tf.int32)
return {'input_image': image, 'input_tabular': data}, image_name # returns a dataset of (image, data, image_name)
def load_dataset_test(filenames, buffer_size=-1):
dataset = tf.data.TFRecordDataset(filenames, num_parallel_reads=buffer_size) # automatically interleaves reads from multiple files
dataset = dataset.map(read_unlabeled_tfrecord, num_parallel_calls=buffer_size)
# returns a dataset of (image, data, image_name) tuples
return dataset
def get_test_dataset(filenames, batch_size=32, buffer_size=-1, tta=False):
dataset = load_dataset_test(filenames, buffer_size=buffer_size)
if tta:
dataset = dataset.map(data_augment, num_parallel_calls=AUTO)
dataset = dataset.batch(batch_size, drop_remainder=False)
dataset = dataset.prefetch(buffer_size)
return dataset
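# Note: with tta=True the test dataset goes through the same data_augment pipeline
# used for training, so test-time augmentation mirrors the training-time augmentations.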
# Advanced augmentations
def transform_rotation(image, height, rotation):
# input image - is one image of size [dim,dim,3] not a batch of [b,dim,dim,3]
# output - image randomly rotated
DIM = height
XDIM = DIM%2 #fix for size 331
rotation = rotation * tf.random.normal([1],dtype='float32')
# CONVERT DEGREES TO RADIANS
rotation = math.pi * rotation / 180.
# ROTATION MATRIX
c1 = tf.math.cos(rotation)
s1 = tf.math.sin(rotation)
one = tf.constant([1],dtype='float32')
zero = tf.constant([0],dtype='float32')
rotation_matrix = tf.reshape( tf.concat([c1,s1,zero, -s1,c1,zero, zero,zero,one],axis=0),[3,3] )
# LIST DESTINATION PIXEL INDICES
x = tf.repeat( tf.range(DIM//2,-DIM//2,-1), DIM )
y = tf.tile( tf.range(-DIM//2,DIM//2),[DIM] )
z = tf.ones([DIM*DIM],dtype='int32')
idx = tf.stack( [x,y,z] )
# ROTATE DESTINATION PIXELS ONTO ORIGIN PIXELS
idx2 = K.dot(rotation_matrix,tf.cast(idx,dtype='float32'))
idx2 = K.cast(idx2,dtype='int32')
idx2 = K.clip(idx2,-DIM//2+XDIM+1,DIM//2)
# FIND ORIGIN PIXEL VALUES
idx3 = tf.stack( [DIM//2-idx2[0,], DIM//2-1+idx2[1,]] )
d = tf.gather_nd(image, tf.transpose(idx3))
return tf.reshape(d,[DIM,DIM,3])
def transform_shear(image, height, shear):
# input image - is one image of size [dim,dim,3] not a batch of [b,dim,dim,3]
# output - image randomly sheared
DIM = height
XDIM = DIM%2 #fix for size 331
shear = shear * tf.random.normal([1],dtype='float32')
shear = math.pi * shear / 180.
# SHEAR MATRIX
one = tf.constant([1],dtype='float32')
zero = tf.constant([0],dtype='float32')
c2 = tf.math.cos(shear)
s2 = tf.math.sin(shear)
shear_matrix = tf.reshape( tf.concat([one,s2,zero, zero,c2,zero, zero,zero,one],axis=0),[3,3] )
# LIST DESTINATION PIXEL INDICES
x = tf.repeat( tf.range(DIM//2,-DIM//2,-1), DIM )
y = tf.tile( tf.range(-DIM//2,DIM//2),[DIM] )
z = tf.ones([DIM*DIM],dtype='int32')
idx = tf.stack( [x,y,z] )
# ROTATE DESTINATION PIXELS ONTO ORIGIN PIXELS
idx2 = K.dot(shear_matrix,tf.cast(idx,dtype='float32'))
idx2 = K.cast(idx2,dtype='int32')
idx2 = K.clip(idx2,-DIM//2+XDIM+1,DIM//2)
# FIND ORIGIN PIXEL VALUES
idx3 = tf.stack( [DIM//2-idx2[0,], DIM//2-1+idx2[1,]] )
d = tf.gather_nd(image, tf.transpose(idx3))
return tf.reshape(d,[DIM,DIM,3])
def transform_shift(image, height, h_shift, w_shift):
# input image - is one image of size [dim,dim,3] not a batch of [b,dim,dim,3]
# output - image randomly shifted
DIM = height
XDIM = DIM%2 #fix for size 331
height_shift = h_shift * tf.random.normal([1],dtype='float32')
width_shift = w_shift * tf.random.normal([1],dtype='float32')
one = tf.constant([1],dtype='float32')
zero = tf.constant([0],dtype='float32')
# SHIFT MATRIX
shift_matrix = tf.reshape( tf.concat([one,zero,height_shift, zero,one,width_shift, zero,zero,one],axis=0),[3,3] )
# LIST DESTINATION PIXEL INDICES
x = tf.repeat( tf.range(DIM//2,-DIM//2,-1), DIM )
y = tf.tile( tf.range(-DIM//2,DIM//2),[DIM] )
z = tf.ones([DIM*DIM],dtype='int32')
idx = tf.stack( [x,y,z] )
# ROTATE DESTINATION PIXELS ONTO ORIGIN PIXELS
idx2 = K.dot(shift_matrix,tf.cast(idx,dtype='float32'))
idx2 = K.cast(idx2,dtype='int32')
idx2 = K.clip(idx2,-DIM//2+XDIM+1,DIM//2)
# FIND ORIGIN PIXEL VALUES
idx3 = tf.stack( [DIM//2-idx2[0,], DIM//2-1+idx2[1,]] )
d = tf.gather_nd(image, tf.transpose(idx3))
return tf.reshape(d,[DIM,DIM,3])
def transform_zoom(image, height, h_zoom, w_zoom):
# input image - is one image of size [dim,dim,3] not a batch of [b,dim,dim,3]
# output - image randomly zoomed
DIM = height
XDIM = DIM%2 #fix for size 331
height_zoom = 1.0 + tf.random.normal([1],dtype='float32')/h_zoom
width_zoom = 1.0 + tf.random.normal([1],dtype='float32')/w_zoom
one = tf.constant([1],dtype='float32')
zero = tf.constant([0],dtype='float32')
# ZOOM MATRIX
zoom_matrix = tf.reshape( tf.concat([one/height_zoom,zero,zero, zero,one/width_zoom,zero, zero,zero,one],axis=0),[3,3] )
# LIST DESTINATION PIXEL INDICES
x = tf.repeat( tf.range(DIM//2,-DIM//2,-1), DIM )
y = tf.tile( tf.range(-DIM//2,DIM//2),[DIM] )
z = tf.ones([DIM*DIM],dtype='int32')
idx = tf.stack( [x,y,z] )
# ROTATE DESTINATION PIXELS ONTO ORIGIN PIXELS
idx2 = K.dot(zoom_matrix,tf.cast(idx,dtype='float32'))
idx2 = K.cast(idx2,dtype='int32')
idx2 = K.clip(idx2,-DIM//2+XDIM+1,DIM//2)
# FIND ORIGIN PIXEL VALUES
idx3 = tf.stack( [DIM//2-idx2[0,], DIM//2-1+idx2[1,]] )
d = tf.gather_nd(image, tf.transpose(idx3))
return tf.reshape(d,[DIM,DIM,3])
###Output
_____no_output_____
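###Markdown
The transform_* helpers above all build a 3x3 affine matrix, apply it to a grid of destination pixel coordinates, and then gather the corresponding source pixels (inverse mapping with clipping at the border). Below is a minimal NumPy sketch of that destination-to-source idea on a toy 4x4 array; it is not part of the original notebook, and the names and sizes are illustrative only.
###Code
import numpy as np

DIM = 4
angle = np.pi / 2  # 90 degrees, just for illustration
c, s = np.cos(angle), np.sin(angle)
rot = np.array([[c, s], [-s, c]])
# destination pixel coordinates, centred on the image middle
ys, xs = np.meshgrid(np.arange(DIM), np.arange(DIM), indexing='ij')
dst = np.stack([ys.ravel() - DIM // 2, xs.ravel() - DIM // 2])
# map destination -> source, round to nearest pixel, clip to the valid range
src = np.clip(np.rint(rot @ dst).astype(int) + DIM // 2, 0, DIM - 1)
# gather the source pixels, exactly like the tf.gather_nd call above
image = np.arange(DIM * DIM).reshape(DIM, DIM)
print(image[src[0], src[1]].reshape(DIM, DIM))
###Output
_____no_output_____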
###Markdown
Learning rate scheduler
###Code
lr_min = 1e-6
lr_start = 5e-6
lr_max = config['LEARNING_RATE']
steps_per_epoch = 24844 // config['BATCH_SIZE']
total_steps = config['EPOCHS'] * steps_per_epoch
warmup_steps = steps_per_epoch * 5
hold_max_steps = 0
step_decay = .8
step_size = steps_per_epoch * 1
rng = [i for i in range(0, total_steps, 32)]
y = [step_schedule_with_warmup(tf.cast(x, tf.float32), step_size=step_size,
warmup_steps=warmup_steps, hold_max_steps=hold_max_steps,
lr_start=lr_start, lr_max=lr_max, step_decay=step_decay) for x in rng]
sns.set(style="whitegrid")
fig, ax = plt.subplots(figsize=(20, 6))
plt.plot(rng, y)
print("Learning rate schedule: {:.3g} to {:.3g} to {:.3g}".format(y[0], max(y), y[-1]))
###Output
Learning rate schedule: 5e-06 to 0.0003 to 7.86e-05
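###Markdown
step_schedule_with_warmup itself is defined earlier in the notebook; the parameters above imply a linear warm-up from lr_start to lr_max over warmup_steps, an optional hold phase, and then a decay by step_decay every step_size steps. The cell below is only a rough, assumed sketch of that shape for illustration, not the notebook's actual implementation.
###Code
def sketch_schedule(step, step_size, warmup_steps, hold_max_steps,
                    lr_start, lr_max, step_decay):
    # assumed shape: linear warm-up, optional hold at lr_max, then step decay
    if step < warmup_steps:
        return lr_start + (lr_max - lr_start) * step / max(warmup_steps, 1)
    if step < warmup_steps + hold_max_steps:
        return lr_max
    n_decays = (step - warmup_steps - hold_max_steps) // step_size
    return lr_max * (step_decay ** n_decays)

print([sketch_schedule(s, step_size, warmup_steps, hold_max_steps,
                       lr_start, lr_max, step_decay)
       for s in (0, warmup_steps, total_steps - 1)])
###Output
_____no_output_____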
###Markdown
Model
###Code
# Initial bias
pos = len(k_fold[k_fold['target'] == 1])
neg = len(k_fold[k_fold['target'] == 0])
initial_bias = np.log([pos/neg])
print('Bias')
print(pos)
print(neg)
print(initial_bias)
# class weights
total = len(k_fold)
weight_for_0 = (1 / neg)*(total)/2.0
weight_for_1 = (1 / pos)*(total)/2.0
class_weight = {0: weight_for_0, 1: weight_for_1}
print('Class weight')
print(class_weight)
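# Sanity-check sketch (added for clarity): with initial_bias = log(pos / neg), the
# untrained sigmoid output equals the positive base rate pos / (pos + neg), so the
# first epochs are not spent just learning the class imbalance.
print('sigmoid(initial_bias):', 1 / (1 + np.exp(-initial_bias)))
print('positive base rate   :', pos / (pos + neg))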
def model_fn(input_shape):
input_image = L.Input(shape=input_shape, name='input_image')
BaseModel, preprocess_input = Classifiers.get(config['BASE_MODEL'])
base_model = BaseModel(input_shape=input_shape,
weights=config['BASE_MODEL_WEIGHTS'],
include_top=False)
x = base_model(input_image)
x = L.GlobalAveragePooling2D()(x)
output = L.Dense(1, activation='sigmoid', name='output',
bias_initializer=tf.keras.initializers.Constant(initial_bias))(x)
model = Model(inputs=input_image, outputs=output)
return model
###Output
_____no_output_____
###Markdown
Training
###Code
# Evaluation
eval_dataset = get_eval_dataset(TRAINING_FILENAMES, batch_size=config['BATCH_SIZE'], buffer_size=AUTO)
image_names = next(iter(eval_dataset.unbatch().map(lambda data, label, image_name: image_name).batch(count_data_items(TRAINING_FILENAMES)))).numpy().astype('U')
image_data = eval_dataset.map(lambda data, label, image_name: data)
# Test
NUM_TEST_IMAGES = len(test)
test_preds = np.zeros((NUM_TEST_IMAGES, 1))
test_preds_tta = np.zeros((NUM_TEST_IMAGES, 1))
test_preds_last = np.zeros((NUM_TEST_IMAGES, 1))
test_preds_tta_last = np.zeros((NUM_TEST_IMAGES, 1))
test_dataset = get_test_dataset(TEST_FILENAMES, batch_size=config['BATCH_SIZE'], buffer_size=AUTO)
test_dataset_tta = get_test_dataset(TEST_FILENAMES, batch_size=config['BATCH_SIZE'], buffer_size=AUTO, tta=True)
image_names_test = next(iter(test_dataset.unbatch().map(lambda data, image_name: image_name).batch(NUM_TEST_IMAGES))).numpy().astype('U')
test_image_data = test_dataset.map(lambda data, image_name: data)
test_tta_image_data = test_dataset_tta.map(lambda data, image_name: data)
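# The unbatch().map(...).batch(N) pattern above materialises all image names in a
# single tensor so they can later be zipped with the per-fold predictions, while the
# *_image_data datasets keep only the model inputs expected by model.predict().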
history_list = []
k_fold_best = k_fold.copy()
kfold = KFold(config['N_FOLDS'], shuffle=True, random_state=SEED)
for n_fold, (trn_idx, val_idx) in enumerate(kfold.split(TRAINING_FILENAMES)):
if n_fold < config['N_USED_FOLDS']:
n_fold +=1
print('\nFOLD: %d' % (n_fold))
tf.tpu.experimental.initialize_tpu_system(tpu)
K.clear_session()
### Data
train_filenames = np.array(TRAINING_FILENAMES)[trn_idx]
valid_filenames = np.array(TRAINING_FILENAMES)[val_idx]
steps_per_epoch = count_data_items(train_filenames) // config['BATCH_SIZE']
# Train model
model_path = f'model_fold_{n_fold}.h5'
es = EarlyStopping(monitor='val_auc', mode='max', patience=config['ES_PATIENCE'],
restore_best_weights=False, verbose=1)
checkpoint = ModelCheckpoint(model_path, monitor='val_auc', mode='max',
save_best_only=True, save_weights_only=True)
with strategy.scope():
model = model_fn((config['HEIGHT'], config['WIDTH'], config['CHANNELS']))
lr = lambda: step_schedule_with_warmup(tf.cast(optimizer.iterations, tf.float32),
step_size=step_size, warmup_steps=warmup_steps,
hold_max_steps=hold_max_steps, lr_start=lr_start,
lr_max=lr_max, step_decay=step_decay)
optimizer = optimizers.Adam(learning_rate=lr)
model.compile(optimizer, loss=losses.BinaryCrossentropy(label_smoothing=0.05),
metrics=[metrics.AUC()])
history = model.fit(get_training_dataset(train_filenames, batch_size=config['BATCH_SIZE'], buffer_size=AUTO),
validation_data=get_validation_dataset(valid_filenames, ordered=True, repeated=False,
batch_size=config['BATCH_SIZE'], buffer_size=AUTO),
epochs=config['EPOCHS'],
steps_per_epoch=steps_per_epoch,
callbacks=[checkpoint, es],
class_weight=class_weight,
verbose=2).history
# save last epoch weights
model.save_weights('last_' + model_path)
history_list.append(history)
# Get validation IDs
valid_dataset = get_eval_dataset(valid_filenames, batch_size=config['BATCH_SIZE'], buffer_size=AUTO)
valid_image_names = next(iter(valid_dataset.unbatch().map(lambda data, label, image_name: image_name).batch(count_data_items(valid_filenames)))).numpy().astype('U')
k_fold[f'fold_{n_fold}'] = k_fold.apply(lambda x: 'validation' if x['image_name'] in valid_image_names else 'train', axis=1)
k_fold_best[f'fold_{n_fold}'] = k_fold_best.apply(lambda x: 'validation' if x['image_name'] in valid_image_names else 'train', axis=1)
##### Last model #####
print('Last model evaluation...')
preds = model.predict(image_data)
name_preds_eval = dict(zip(image_names, preds.reshape(len(preds))))
k_fold[f'pred_fold_{n_fold}'] = k_fold.apply(lambda x: name_preds_eval[x['image_name']], axis=1)
print('Last model inference...')
test_preds_last += model.predict(test_image_data)
# TTA preds
print(f'Running TTA (last) {config["TTA_STEPS"]} steps...')
for step in range(config['TTA_STEPS']):
test_preds_tta_last += model.predict(test_tta_image_data)
##### Best model #####
print('Best model evaluation...')
model.load_weights(model_path)
preds = model.predict(image_data)
name_preds_eval = dict(zip(image_names, preds.reshape(len(preds))))
k_fold_best[f'pred_fold_{n_fold}'] = k_fold_best.apply(lambda x: name_preds_eval[x['image_name']], axis=1)
print('Best model inference...')
test_preds += model.predict(test_image_data)
# TTA preds
print(f'Running TTA (best) {config["TTA_STEPS"]} steps...')
for step in range(config['TTA_STEPS']):
test_preds_tta += model.predict(test_tta_image_data)
# normalize preds
test_preds /= config['N_USED_FOLDS']
test_preds_tta /= (config['N_USED_FOLDS'] * config['TTA_STEPS'])
test_preds_last /= config['N_USED_FOLDS']
test_preds_tta_last /= (config['N_USED_FOLDS'] * config['TTA_STEPS'])
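# Each fold contributed one plain predict() pass and TTA_STEPS augmented passes,
# hence the two different divisors used to average the test predictions above.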
name_preds = dict(zip(image_names_test, test_preds.reshape(NUM_TEST_IMAGES)))
name_preds_tta = dict(zip(image_names_test, test_preds_tta.reshape(NUM_TEST_IMAGES)))
name_preds_last = dict(zip(image_names_test, test_preds_last.reshape(NUM_TEST_IMAGES)))
name_preds_tta_last = dict(zip(image_names_test, test_preds_tta_last.reshape(NUM_TEST_IMAGES)))
test['target'] = test.apply(lambda x: name_preds[x['image_name']], axis=1)
test['target_tta'] = test.apply(lambda x: name_preds_tta[x['image_name']], axis=1)
test['target_last'] = test.apply(lambda x: name_preds_last[x['image_name']], axis=1)
test['target_tta_last'] = test.apply(lambda x: name_preds_tta_last[x['image_name']], axis=1)
###Output
FOLD: 1
Downloading data from https://github.com/fchollet/deep-learning-models/releases/download/v0.7/inception_resnet_v2_weights_tf_dim_ordering_tf_kernels_notop.h5
219062272/219055592 [==============================] - 3s 0us/step
Epoch 1/12
194/194 - 87s - auc: 0.7220 - loss: 1.0140 - val_auc: 0.7791 - val_loss: 0.2776
Epoch 2/12
194/194 - 52s - auc: 0.8470 - loss: 0.5199 - val_auc: 0.8088 - val_loss: 0.7455
Epoch 3/12
194/194 - 51s - auc: 0.8532 - loss: 0.5175 - val_auc: 0.8112 - val_loss: 0.4484
Epoch 4/12
194/194 - 46s - auc: 0.8778 - loss: 0.4778 - val_auc: 0.7881 - val_loss: 1.5193
Epoch 5/12
194/194 - 46s - auc: 0.8862 - loss: 0.4630 - val_auc: 0.7417 - val_loss: 11.1022
Epoch 6/12
194/194 - 46s - auc: 0.8877 - loss: 0.4564 - val_auc: 0.7576 - val_loss: 1.5265
Epoch 7/12
194/194 - 52s - auc: 0.9074 - loss: 0.4263 - val_auc: 0.8591 - val_loss: 0.2855
Epoch 8/12
194/194 - 51s - auc: 0.9261 - loss: 0.3895 - val_auc: 0.8604 - val_loss: 0.3378
Epoch 9/12
194/194 - 46s - auc: 0.9394 - loss: 0.3583 - val_auc: 0.8368 - val_loss: 0.4446
Epoch 10/12
194/194 - 51s - auc: 0.9498 - loss: 0.3450 - val_auc: 0.8646 - val_loss: 0.3962
Epoch 11/12
194/194 - 51s - auc: 0.9668 - loss: 0.2952 - val_auc: 0.8731 - val_loss: 0.4087
Epoch 12/12
194/194 - 51s - auc: 0.9663 - loss: 0.2928 - val_auc: 0.8828 - val_loss: 0.3841
Last model evaluation...
Last model inference...
Running TTA (last) 25 steps...
Best model evaluation...
Best model inference...
Running TTA (best) 25 steps...
FOLD: 2
Epoch 1/12
210/210 - 90s - auc: 0.7299 - loss: 0.9335 - val_auc: 0.7839 - val_loss: 0.3459
Epoch 2/12
210/210 - 54s - auc: 0.8483 - loss: 0.5096 - val_auc: 0.8126 - val_loss: 0.4927
Epoch 3/12
210/210 - 53s - auc: 0.8433 - loss: 0.5325 - val_auc: 0.8578 - val_loss: 0.5397
Epoch 4/12
210/210 - 49s - auc: 0.8644 - loss: 0.4828 - val_auc: 0.8489 - val_loss: 0.4756
Epoch 5/12
210/210 - 49s - auc: 0.8810 - loss: 0.4677 - val_auc: 0.8529 - val_loss: 0.5346
Epoch 6/12
210/210 - 49s - auc: 0.9017 - loss: 0.4282 - val_auc: 0.8299 - val_loss: 0.6978
Epoch 7/12
210/210 - 54s - auc: 0.9178 - loss: 0.4047 - val_auc: 0.8596 - val_loss: 0.5974
Epoch 8/12
210/210 - 54s - auc: 0.9408 - loss: 0.3552 - val_auc: 0.8873 - val_loss: 0.6261
Epoch 9/12
210/210 - 49s - auc: 0.9498 - loss: 0.3374 - val_auc: 0.8811 - val_loss: 0.6856
Epoch 10/12
210/210 - 54s - auc: 0.9597 - loss: 0.3113 - val_auc: 0.8886 - val_loss: 0.4850
Epoch 11/12
210/210 - 54s - auc: 0.9687 - loss: 0.2895 - val_auc: 0.8903 - val_loss: 0.4380
Epoch 12/12
210/210 - 49s - auc: 0.9706 - loss: 0.2852 - val_auc: 0.8816 - val_loss: 0.5968
Last model evaluation...
Last model inference...
Running TTA (last) 25 steps...
Best model evaluation...
Best model inference...
Running TTA (best) 25 steps...
FOLD: 3
Epoch 1/12
210/210 - 89s - auc: 0.7226 - loss: 1.0050 - val_auc: 0.7529 - val_loss: 0.4733
Epoch 2/12
210/210 - 55s - auc: 0.8360 - loss: 0.5363 - val_auc: 0.8193 - val_loss: 0.4422
Epoch 3/12
210/210 - 54s - auc: 0.8480 - loss: 0.5098 - val_auc: 0.8832 - val_loss: 0.4287
Epoch 4/12
210/210 - 49s - auc: 0.8748 - loss: 0.4711 - val_auc: 0.8789 - val_loss: 0.8874
Epoch 5/12
210/210 - 54s - auc: 0.8728 - loss: 0.4742 - val_auc: 0.8863 - val_loss: 0.4278
Epoch 6/12
210/210 - 49s - auc: 0.8975 - loss: 0.4351 - val_auc: 0.8621 - val_loss: 7.1251
Epoch 7/12
210/210 - 49s - auc: 0.9201 - loss: 0.3960 - val_auc: 0.8848 - val_loss: 0.3328
Epoch 8/12
210/210 - 48s - auc: 0.9337 - loss: 0.3655 - val_auc: 0.8853 - val_loss: 0.9047
Epoch 9/12
210/210 - 54s - auc: 0.9503 - loss: 0.3352 - val_auc: 0.8928 - val_loss: 0.3281
Epoch 10/12
210/210 - 48s - auc: 0.9606 - loss: 0.3094 - val_auc: 0.8834 - val_loss: 0.6941
Epoch 11/12
210/210 - 54s - auc: 0.9685 - loss: 0.2842 - val_auc: 0.9086 - val_loss: 0.5022
Epoch 12/12
210/210 - 49s - auc: 0.9711 - loss: 0.2778 - val_auc: 0.8977 - val_loss: 0.3475
Last model evaluation...
Last model inference...
Running TTA (last) 25 steps...
Best model evaluation...
Best model inference...
Running TTA (best) 25 steps...
FOLD: 4
Epoch 1/12
210/210 - 88s - auc: 0.7225 - loss: 0.9806 - val_auc: 0.7967 - val_loss: 0.3519
Epoch 2/12
210/210 - 50s - auc: 0.8446 - loss: 0.5219 - val_auc: 0.7647 - val_loss: 1.6226
Epoch 3/12
210/210 - 51s - auc: 0.8611 - loss: 0.5002 - val_auc: 0.7725 - val_loss: 0.6871
Epoch 4/12
210/210 - 56s - auc: 0.8668 - loss: 0.4911 - val_auc: 0.7984 - val_loss: 1.1666
Epoch 5/12
210/210 - 56s - auc: 0.8817 - loss: 0.4622 - val_auc: 0.8336 - val_loss: 0.5131
Epoch 6/12
210/210 - 51s - auc: 0.8930 - loss: 0.4461 - val_auc: 0.8097 - val_loss: 1.6809
Epoch 7/12
210/210 - 57s - auc: 0.9200 - loss: 0.4018 - val_auc: 0.8862 - val_loss: 0.7302
Epoch 8/12
210/210 - 51s - auc: 0.9417 - loss: 0.3544 - val_auc: 0.8719 - val_loss: 0.4141
Epoch 9/12
210/210 - 50s - auc: 0.9442 - loss: 0.3434 - val_auc: 0.8632 - val_loss: 0.7358
Epoch 10/12
210/210 - 50s - auc: 0.9612 - loss: 0.3105 - val_auc: 0.8744 - val_loss: 0.6342
Epoch 11/12
210/210 - 50s - auc: 0.9696 - loss: 0.2850 - val_auc: 0.8578 - val_loss: 0.7206
Epoch 12/12
210/210 - 50s - auc: 0.9717 - loss: 0.2780 - val_auc: 0.8812 - val_loss: 0.7234
Last model evaluation...
Last model inference...
Running TTA (last) 25 steps...
Best model evaluation...
Best model inference...
Running TTA (best) 25 steps...
FOLD: 5
Epoch 1/12
210/210 - 88s - auc: 0.7208 - loss: 0.9846 - val_auc: 0.8341 - val_loss: 0.2067
Epoch 2/12
210/210 - 49s - auc: 0.8321 - loss: 0.5526 - val_auc: 0.7444 - val_loss: 1.1122
Epoch 3/12
210/210 - 48s - auc: 0.8555 - loss: 0.5004 - val_auc: 0.8187 - val_loss: 0.9502
Epoch 4/12
210/210 - 54s - auc: 0.8562 - loss: 0.5106 - val_auc: 0.8646 - val_loss: 0.5096
Epoch 5/12
210/210 - 54s - auc: 0.8762 - loss: 0.4738 - val_auc: 0.8799 - val_loss: 0.4489
Epoch 6/12
210/210 - 54s - auc: 0.8876 - loss: 0.4545 - val_auc: 0.8920 - val_loss: 0.4163
Epoch 7/12
210/210 - 53s - auc: 0.9239 - loss: 0.3842 - val_auc: 0.9002 - val_loss: 0.3863
Epoch 8/12
210/210 - 49s - auc: 0.9327 - loss: 0.3764 - val_auc: 0.8889 - val_loss: 0.3529
Epoch 9/12
210/210 - 50s - auc: 0.9409 - loss: 0.3565 - val_auc: 0.8878 - val_loss: 0.3237
Epoch 10/12
210/210 - 49s - auc: 0.9535 - loss: 0.3229 - val_auc: 0.8871 - val_loss: 0.2923
Epoch 11/12
210/210 - 48s - auc: 0.9611 - loss: 0.3070 - val_auc: 0.8911 - val_loss: 0.3593
Epoch 12/12
210/210 - 48s - auc: 0.9678 - loss: 0.2892 - val_auc: 0.8760 - val_loss: 0.3472
Last model evaluation...
Last model inference...
Running TTA (last) 25 steps...
Best model evaluation...
Best model inference...
Running TTA (best) 25 steps...
###Markdown
Model loss graph
###Code
for n_fold in range(config['N_USED_FOLDS']):
print(f'Fold: {n_fold + 1}')
plot_metrics(history_list[n_fold])
###Output
Fold: 1
###Markdown
Model loss graph aggregated
###Code
plot_metrics_agg(history_list, config['N_USED_FOLDS'])
###Output
_____no_output_____
###Markdown
Model evaluation (last)
###Code
display(evaluate_model(k_fold, config['N_USED_FOLDS']).style.applymap(color_map))
display(evaluate_model_Subset(k_fold, config['N_USED_FOLDS']).style.applymap(color_map))
###Output
_____no_output_____
###Markdown
Model evaluation (best)
###Code
display(evaluate_model(k_fold_best, config['N_USED_FOLDS']).style.applymap(color_map))
display(evaluate_model_Subset(k_fold_best, config['N_USED_FOLDS']).style.applymap(color_map))
###Output
_____no_output_____
###Markdown
Confusion matrix
###Code
for n_fold in range(config['N_USED_FOLDS']):
n_fold += 1
pred_col = f'pred_fold_{n_fold}'
train_set = k_fold_best[k_fold_best[f'fold_{n_fold}'] == 'train']
valid_set = k_fold_best[k_fold_best[f'fold_{n_fold}'] == 'validation']
print(f'Fold: {n_fold}')
plot_confusion_matrix(train_set['target'], np.round(train_set[pred_col]),
valid_set['target'], np.round(valid_set[pred_col]))
###Output
Fold: 1
###Markdown
Visualize predictions
###Code
k_fold['pred'] = 0
for n_fold in range(config['N_USED_FOLDS']):
k_fold['pred'] += k_fold[f'pred_fold_{n_fold+1}'] / config['N_FOLDS']
print('Label/prediction distribution')
print(f"Train positive labels: {len(k_fold[k_fold['target'] > .5])}")
print(f"Train positive predictions: {len(k_fold[k_fold['pred'] > .5])}")
print(f"Train positive correct predictions: {len(k_fold[(k_fold['target'] > .5) & (k_fold['pred'] > .5)])}")
print('Top 10 samples')
display(k_fold[['image_name', 'sex', 'age_approx','anatom_site_general_challenge', 'diagnosis',
'target', 'pred'] + [c for c in k_fold.columns if (c.startswith('pred_fold'))]].head(10))
print('Top 10 positive samples')
display(k_fold[['image_name', 'sex', 'age_approx','anatom_site_general_challenge', 'diagnosis',
'target', 'pred'] + [c for c in k_fold.columns if (c.startswith('pred_fold'))]].query('target == 1').head(10))
print('Top 10 predicted positive samples')
display(k_fold[['image_name', 'sex', 'age_approx','anatom_site_general_challenge', 'diagnosis',
'target', 'pred'] + [c for c in k_fold.columns if (c.startswith('pred_fold'))]].query('pred > .5').head(10))
###Output
Label/prediction distribution
Train positive labels: 584
Train positive predictions: 2996
Train positive correct predictions: 584
Top 10 samples
###Markdown
Visualize test predictions
###Code
print(f"Test predictions {len(test[test['target'] > .5])}|{len(test[test['target'] <= .5])}")
print(f"Test predictions (last) {len(test[test['target_last'] > .5])}|{len(test[test['target_last'] <= .5])}")
print(f"Test predictions (tta) {len(test[test['target_tta'] > .5])}|{len(test[test['target_tta'] <= .5])}")
print(f"Test predictions (last tta) {len(test[test['target_tta_last'] > .5])}|{len(test[test['target_tta_last'] <= .5])}")
print('Top 10 samples')
display(test[['image_name', 'sex', 'age_approx','anatom_site_general_challenge', 'target', 'target_last',
'target_tta', 'target_tta_last'] + [c for c in test.columns if (c.startswith('pred_fold'))]].head(10))
print('Top 10 positive samples')
display(test[['image_name', 'sex', 'age_approx','anatom_site_general_challenge', 'target', 'target_last',
'target_tta', 'target_tta_last'] + [c for c in test.columns if (c.startswith('pred_fold'))]].query('target > .5').head(10))
print('Top 10 positive samples (last)')
display(test[['image_name', 'sex', 'age_approx','anatom_site_general_challenge', 'target', 'target_last',
'target_tta', 'target_tta_last'] + [c for c in test.columns if (c.startswith('pred_fold'))]].query('target_last > .5').head(10))
###Output
Test predictions 1244|9738
Test predictions (last) 1112|9870
Test predictions (tta) 1293|9689
Test predictions (last tta) 1157|9825
Top 10 samples
###Markdown
Test set predictions
###Code
submission = pd.read_csv(database_base_path + 'sample_submission.csv')
submission['target'] = test['target']
submission['target_last'] = test['target_last']
submission['target_blend'] = (test['target'] * .5) + (test['target_last'] * .5)
submission['target_tta'] = test['target_tta']
submission['target_tta_last'] = test['target_tta_last']
submission['target_tta_blend'] = (test['target_tta'] * .5) + (test['target_tta_last'] * .5)
display(submission.head(10))
display(submission.describe())
### BEST ###
submission[['image_name', 'target']].to_csv('submission.csv', index=False)
### LAST ###
submission_last = submission[['image_name', 'target_last']]
submission_last.columns = ['image_name', 'target']
submission_last.to_csv('submission_last.csv', index=False)
### BLEND ###
submission_blend = submission[['image_name', 'target_blend']]
submission_blend.columns = ['image_name', 'target']
submission_blend.to_csv('submission_blend.csv', index=False)
### TTA ###
submission_tta = submission[['image_name', 'target_tta']]
submission_tta.columns = ['image_name', 'target']
submission_tta.to_csv('submission_tta.csv', index=False)
### TTA LAST ###
submission_tta_last = submission[['image_name', 'target_tta_last']]
submission_tta_last.columns = ['image_name', 'target']
submission_tta_last.to_csv('submission_tta_last.csv', index=False)
### TTA BLEND ###
submission_blend_tta = submission[['image_name', 'target_tta_blend']]
submission_blend_tta.columns = ['image_name', 'target']
submission_blend_tta.to_csv('submission_blend_tta.csv', index=False)
###Output
_____no_output_____ |
Plotting/showMoods.ipynb | ###Markdown
MoodCube: plot Moods. Take some data and display it as a 2D surface plot.
###Code
# Library Imports and Python parameter settings
%matplotlib inline
from __future__ import division
#import nds2
import numpy as np
import matplotlib.pyplot as plt
#import matplotlib.mlab as mlab
import scipy.signal as sig
#import scipy.io.wavfile as wave
debugme = 1
# Update the matplotlib configuration parameters:
plt.rcParams.update({'font.size': 20,
'font.family': 'serif',
'figure.figsize': (10, 8),
'axes.grid': True,
'grid.color': '#555555'})
# these are the dimensions of the jellyfish
z = np.random.randint(low=0, high=255, size=(8, 64, 3), dtype='uint8')
print(z.shape)
print(z.dtype)
fig = plt.figure(figsize=(16, 8))
#plt.loglog(aligo[:,0], sqrt(aligo[:,1]), color='Indigo', ls='--', alpha=0.65, lw=4)
plt.imshow(z)
#leg = plt.legend(loc='best', fancybox=True, fontsize=14)
#leg.get_frame().set_alpha(0.5)
#plt.savefig("TRY.pdf", bbox_inches='tight')
#plt.axis('tight')
plt.show()
dat = np.load('../Data/test.npz')
v = dat['arr_0']
plt.figure()
#plt.plot(v[:,0])
#plt.plot(v[:,1])
plt.plot(v[:,2])
plt.show()
b = np.zeros((1000, 6))
b.shape
b[0] = [1,2,3,4,5,6]
b[0]
###Output
_____no_output_____ |
module1/s3_api.ipynb | ###Markdown
APIs: HTTP requests Imports
###Code
import json
import requests
###Output
_____no_output_____
###Markdown
Use Nominatim to find the geographic coordinates of an address https://nominatim.org/
###Code
address = "Avenue Franklin Roosevelt 50, 1050 Bruxelles"
"""Retrieve coordinates from Open Street Map"""
url = "https://nominatim.openstreetmap.org/search"
data = {'q': address, 'format': 'json'}
resp = requests.get(url, data)
json_list = json.loads(resp.text)
for item in json_list:
display_name = item['display_name']
short_name = display_name.split(", ")[0]
lat = item['lat']
lon = item['lon']
print(f"{short_name} ({lat} - {lon})")
###Output
Bibliothèque de droit et de criminologie (50.8126596 - 4.3798235)
OPERA - Wireless Communications Group (50.811783 - 4.3830304)
CReA-Patrimoine (50.811503 - 4.3821658)
###Markdown
Use REST Countries to retrieve information about a country https://restcountries.com/
###Code
country_name = "Belgium"
base_url = "http://restcountries.com/v3.1/"
name_url = base_url + "name/"
code_url = base_url + "alpha/"
resp = requests.get(name_url + country_name)
country = resp.json()[0]
try:
languages = country['languages']
print(f"Languages: {', '.join([lang for lang in languages.values()])}")
border_codes = country['borders']
border_names = []
for code in border_codes:
resp = requests.get(code_url + code)
border_country = resp.json()[0]
border_name = border_country["name"]["common"]
border_names.append(border_name)
print(f"Borders: {', '.join(border_names)}")
except KeyError:
print("Unknown country, please use English or native name")
###Output
Languages: German, French, Dutch
Borders: France, Germany, Luxembourg, Netherlands
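###Markdown
The calls above use requests.get with its default settings. The cell below is a small added sketch (not part of the original exercise) showing a slightly more defensive variant of the same REST Countries request, with a timeout and an explicit check for HTTP errors before parsing the JSON.
###Code
resp = requests.get("https://restcountries.com/v3.1/name/Belgium", timeout=10)
resp.raise_for_status()  # raise an exception on 4xx/5xx answers instead of parsing them
country = resp.json()[0]
print(country["name"]["common"])
###Output
_____no_output_____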
###Markdown
APIs: HTTP requests Imports
###Code
import json
import requests
###Output
_____no_output_____
###Markdown
Use Nominatim to find the geographic coordinates of an address https://nominatim.org/
###Code
address = "Avenue Franklin Roosevelt 50, 1050 Bruxelles"
"""Retrieve coordinates from Open Street Map"""
url = "https://nominatim.openstreetmap.org/search"
data = {'q': address, 'format': 'json'}
resp = requests.get(url, data)
json_list = json.loads(resp.text)
for item in json_list:
display_name = item['display_name']
short_name = display_name.split(", ")[0]
lat = item['lat']
lon = item['lon']
print(f"{short_name} ({lat} - {lon})")
###Output
_____no_output_____
###Markdown
Use REST Countries to retrieve information about a country https://restcountries.com/
###Code
country_name = "Belgium"
base_url = "http://restcountries.com/v3.1/"
name_url = base_url + "name/"
code_url = base_url + "alpha/"
resp = requests.get(name_url + country_name)
country = resp.json()[0]
try:
languages = country['languages']
print(f"Languages: {', '.join(languages.values())}")
border_codes = country['borders']
border_names = []
for code in border_codes:
resp = requests.get(code_url + code)
border_country = resp.json()[0]
border_name = border_country["name"]["common"]
border_names.append(border_name)
print(f"Borders: {', '.join(border_names)}")
except KeyError:
print("Unknown country, please use English or native name")
###Output
Languages: German, French, Dutch
Borders: France, Germany, Luxembourg, Netherlands
###Markdown
APIs: HTTP requests Imports
###Code
import json
import requests
###Output
_____no_output_____
###Markdown
Use Nominatim to find the geographic coordinates of an address https://nominatim.org/
###Code
address = "Avenue Franklin Roosevelt 50, 1050 Bruxelles"
"""Retrieve coordinates from Open Street Map"""
url = "https://nominatim.openstreetmap.org/search"
data = {'q': address, 'format': 'json'}
resp = requests.get(url, data)
json_list = json.loads(resp.text)
for item in json_list:
display_name = item['display_name']
short_name = display_name.split(", ")[0]
lat = item['lat']
lon = item['lon']
print(f"{short_name} ({lat} - {lon})")
###Output
Bibliothèque de droit et de criminologie (50.8126596 - 4.3798235)
OPERA - Wireless Communications Group (50.811783 - 4.3830304)
CReA-Patrimoine (50.811503 - 4.3821658)
###Markdown
Use REST Countries to retrieve information about a country https://restcountries.com/
###Code
country_name = "Brazil"
base_url = "http://restcountries.com/v3.1/"
name_url = base_url + "name/"
code_url = base_url + "alpha/"
resp = requests.get(name_url + country_name)
country = resp.json()[0]
try:
languages = country['languages']
print(f"Languages: {', '.join([lang for lang in languages.values()])}")
border_codes = country['borders']
border_names = []
for code in border_codes:
resp = requests.get(code_url + code)
border_country = resp.json()[0]
border_name = border_country["name"]["common"]
border_names.append(border_name)
print(f"Borders: {', '.join(border_names)}")
except KeyError:
print("Unknown country, please use English or native name")
###Output
Languages: Portuguese
Borders: Argentina, Bolivia, Colombia, French Guiana, Guyana, Paraguay, Peru, Suriname, Uruguay, Venezuela
###Markdown
APIs: HTTP requests Imports
###Code
import json
import requests
###Output
_____no_output_____
###Markdown
Use Nominatim to find the geographic coordinates of an address https://nominatim.org/
###Code
address = "Avenue Franklin Roosevelt 50, 1050 Bruxelles"
"""Retrieve coordinates from Open Street Map"""
url = "https://nominatim.openstreetmap.org/search"
data = {'q': address, 'format': 'json'}
resp = requests.get(url, data)
json_list = json.loads(resp.text)
for item in json_list:
display_name = item['display_name']
short_name = display_name.split(", ")[0]
lat = item['lat']
lon = item['lon']
print(f"{short_name} ({lat} - {lon})")
###Output
Bibliothèque de droit et de criminologie (50.8126596 - 4.3798235)
OPERA - Wireless Communications Group (50.811783 - 4.3830304)
CReA-Patrimoine (50.811503 - 4.3821658)
###Markdown
Use REST Countries to retrieve information about a country https://restcountries.com/
###Code
country_name = "Belgium"
base_url = "http://restcountries.com/v3.1/"
name_url = base_url + "name/"
code_url = base_url + "alpha/"
resp = requests.get(name_url + country_name)
country = resp.json()[0]
try:
languages = country['languages']
print(f"Languages: {', '.join(languages.values())}")
border_codes = country['borders']
border_names = []
for code in border_codes:
resp = requests.get(code_url + code)
border_country = resp.json()[0]
border_name = border_country["name"]["common"]
border_names.append(border_name)
print(f"Borders: {', '.join(border_names)}")
except KeyError:
print("Unknown country, please use English or native name")
###Output
Languages: German, French, Dutch
Borders: France, Germany, Luxembourg, Netherlands
###Markdown
APIs: HTTP requests Imports
###Code
import json
import requests
###Output
_____no_output_____
###Markdown
Use Nominatim to find the geographic coordinates of an address https://nominatim.org/
###Code
address = "Avenue Franklin Roosevelt 50, 1050 Bruxelles"
"""Retrieve coordinates from Open Street Map"""
# Interesting that the API response is also readable by an ordinary user
url = "https://nominatim.openstreetmap.org/search"
data = {'q': address, 'format': 'json'}
resp = requests.get(url, data)
json_list = json.loads(resp.text)
for item in json_list:
display_name = item['display_name']
short_name = display_name.split(", ")[0]
lat = item['lat']
lon = item['lon']
print(f"{short_name} ({lat} - {lon})")
###Output
Bibliothèque de droit et de criminologie (50.8126596 - 4.3798235)
OPERA - Wireless Communications Group (50.811783 - 4.3830304)
CReA-Patrimoine (50.811503 - 4.3821658)
###Markdown
Use REST Countries to retrieve information about a country https://restcountries.com/
###Code
country_name = "Brasil"
base_url = "http://restcountries.com/v3.1/"
name_url = base_url + "name/"
code_url = base_url + "alpha/"
resp = requests.get(name_url + country_name)
country = resp.json()[0]
try:
languages = country['languages']
print(f"Languages: {', '.join([lang for lang in languages.values()])}")
border_codes = country['borders']
border_names = []
for code in border_codes:
resp = requests.get(code_url + code)
border_country = resp.json()[0]
border_name = border_country["name"]["common"]
border_names.append(border_name)
print(f"Borders: {', '.join(border_names)}")
except KeyError:
print("Unknown country, please use English or native name")
###Output
Languages: Portuguese
Borders: Argentina, Bolivia, Colombia, French Guiana, Guyana, Paraguay, Peru, Suriname, Uruguay, Venezuela
###Markdown
Testing web APIs with HTTP GET method
###Code
import json
import requests
###Output
_____no_output_____
###Markdown
Functions
###Code
def print_coord(address):
"""Retrieve coordinates from Open Street Map"""
osm = "https://nominatim.openstreetmap.org/search"
data = {'q': address, 'format': 'json'}
resp = requests.get(osm, data)
json_list = json.loads(resp.text)
for item in json_list:
display_name = item['display_name']
short_name = display_name.split(", ")[0]
lat = item['lat']
lon = item['lon']
print(f"{short_name} ({lat} - {lon})")
def print_info(country_name):
"""Retrieve country info from REST API"""
base_url = "https://restcountries.eu/rest/v2/"
name_url = base_url + "name/"
code_url = base_url + "alpha/"
resp = requests.get(name_url + country_name)
try:
country = json.loads(resp.text)[0]
languages = country['languages']
print(f"Languages: {', '.join([lang['name'] for lang in languages])}")
border_codes = country['borders']
border_names = []
for code in border_codes:
resp = requests.get(code_url + code)
border_country = json.loads(resp.text)
border_name = border_country["name"]
border_names.append(border_name)
print(f"Borders: {', '.join(border_names)}")
except KeyError:
print("Unknown country, please use English or native name")
###Output
_____no_output_____
###Markdown
Example 1: Get the longitude and latitude of the Université libre de Bruxelles
###Code
print_coord("Avenue Franklin Roosevelt 50, 1050 Bruxelles")
###Output
Bibliothèque de droit et de criminologie (50.8126596 - 4.3798235)
CReA-Patrimoine (50.811503 - 4.3821658)
###Markdown
Example 2: Retrieve information about France
###Code
print_info('Belgique')
###Output
Languages: Dutch, French, German
Borders: France, Germany, Luxembourg, Netherlands
###Markdown
Testing web APIs with HTTP GET method
###Code
import json
import requests
###Output
_____no_output_____
###Markdown
Functions
###Code
def print_coord(address):
"""Retrieve coordinates from Open Street Map"""
osm = "https://nominatim.openstreetmap.org/search"
data = {'q': address, 'format': 'json'}
resp = requests.get(osm, data)
json_list = json.loads(resp.text)
for item in json_list:
display_name = item['display_name']
short_name = display_name.split(", ")[0]
lat = item['lat']
lon = item['lon']
print(f"{short_name} ({lat} - {lon})")
def print_info(country_name):
"""Retrieve country info from REST API"""
base_url = "https://restcountries.eu/rest/v2/"
name_url = base_url + "name/"
code_url = base_url + "alpha/"
resp = requests.get(name_url + country_name)
try:
country = json.loads(resp.text)[0]
languages = country['languages']
print(f"Languages: {', '.join([lang['name'] for lang in languages])}")
border_codes = country['borders']
border_names = []
for code in border_codes:
resp = requests.get(code_url + code)
border_country = json.loads(resp.text)
border_name = border_country["name"]
border_names.append(border_name)
print(f"Borders: {', '.join(border_names)}")
except KeyError:
print("Unknown country, please use English or native name")
###Output
_____no_output_____
###Markdown
Example 1: Get the longitude and latitude of the Université libre de Bruxelles
###Code
print_coord("Avenue Franklin Roosevelt 50, 1050 Bruxelles")
###Output
Bibliothèque de droit et de criminologie (50.8126596 - 4.3798235)
CReA-Patrimoine (50.811503 - 4.3821658)
###Markdown
Example 2: Retrieve information about France
###Code
print_info('Belgique')
##Exercise: rapidapi
def locations(locations):
url = "https://hotels4.p.rapidapi.com/locations/search"
querystring = {"query":"bruxelles","locale":"en_US"}
headers = {
'x-rapidapi-key': "f4fa486957msh657dcc064d10cb8p17b721jsn5897eafcd9a6",
'x-rapidapi-host': "hotels4.p.rapidapi.com"
}
response = requests.request("GET", url, headers=headers, params=querystring)
print( f" {locations} locations in Belgium {response.text} " )
###Output
_____no_output_____
###Markdown
Retrieve information about places in Belgium.
###Code
locations(10)
###Output
10 locations in Belgium {"term":"bruxelles","moresuggestions":941,"autoSuggestInstance":null,"trackingID":"2c36426b-1cbc-424a-9362-a3ff93e1795a","misspellingfallback":false,"suggestions":[{"group":"CITY_GROUP","entities":[{"geoId":"1000000000000000690","destinationId":"59474","landmarkCityDestinationId":null,"type":"CITY","caption":"Brussels, Belgium (<span class='highlighted'>Bruxelles</span>)","redirectPage":"DEFAULT_PAGE","latitude":50.8465,"longitude":4.35331,"name":"Brussels"},{"geoId":"1000000000006051229","destinationId":"10234047","landmarkCityDestinationId":null,"type":"REGION","caption":"Brussels-Capital Region, Belgium (<span class='highlighted'>Bruxelles</span>-Hovedstadsregionen)","redirectPage":"DEFAULT_PAGE","latitude":50.836026,"longitude":4.370634,"name":"Brussels-Capital Region"},{"geoId":"1000000000006139368","destinationId":"1705514","landmarkCityDestinationId":null,"type":"REGION","caption":"Brussels West, Belgium (<span class='highlighted'>Bruxelles</span> Vest)","redirectPage":"DEFAULT_PAGE","latitude":50.874304,"longitude":4.31419,"name":"Brussels West"},{"geoId":"1000000000006139363","destinationId":"1705510","landmarkCityDestinationId":null,"type":"REGION","caption":"Brussels East, Belgium (<span class='highlighted'>Bruxelles</span> Est)","redirectPage":"DEFAULT_PAGE","latitude":50.871937,"longitude":4.427227,"name":"Brussels East"},{"geoId":"1000000000006225243","destinationId":"1749350","landmarkCityDestinationId":null,"type":"CITY","caption":"Anderlecht, Belgium (<span class='highlighted'>Bruxelles</span>)","redirectPage":"DEFAULT_PAGE","latitude":50.829719,"longitude":4.290954,"name":"Anderlecht"},{"geoId":"1000000000006052156","destinationId":"1706926","landmarkCityDestinationId":null,"type":"CITY","caption":"Ixelles, Belgium (<span class='highlighted'>Bruxelles</span>)","redirectPage":"DEFAULT_PAGE","latitude":50.824824,"longitude":4.36733,"name":"Ixelles"}]},{"group":"LANDMARK_GROUP","entities":[{"geoId":"1000000000006099542","destinationId":"1675613","landmarkCityDestinationId":"63984","type":"LANDMARK","caption":"Brussels Gate, Mechelen, Belgium (<span class='highlighted'>Bruxelles</span>-porten)","redirectPage":"DEFAULT_PAGE","latitude":51.021919,"longitude":4.473797,"name":"Brussels Gate"},{"geoId":"1000000000006132050","destinationId":"1690418","landmarkCityDestinationId":"11113763","type":"LANDMARK","caption":"Brussels Expo, Laken, Belgium (<span class='highlighted'>Bruxelles</span> Expo)","redirectPage":"DEFAULT_PAGE","latitude":50.898929,"longitude":4.337912,"name":"Brussels Expo"},{"geoId":"1000000000006070945","destinationId":"1659646","landmarkCityDestinationId":"59474","type":"LANDMARK","caption":"Universite Libre de <span class='highlighted'>Bruxelles</span> Solbosch Campus, Brussels, Belgium","redirectPage":"DEFAULT_PAGE","latitude":50.811697,"longitude":4.38082,"name":"Universite Libre de Bruxelles Solbosch Campus"}]},{"group":"TRANSPORT_GROUP","entities":[{"geoId":"1000000000006021136","destinationId":"1696918","landmarkCityDestinationId":null,"type":"TRAIN_STATION","caption":"<span class='highlighted'>Bruxelles</span>-Midi Station, Brussels, Belgium","redirectPage":"DEFAULT_PAGE","latitude":50.837282,"longitude":4.335196,"name":"Bruxelles-Midi Station"},{"geoId":"1000000000005591618","destinationId":"51277","landmarkCityDestinationId":null,"type":"AIRPORT","caption":"Brussels Airport (BRU), Belgium (Zračna luka <span class='highlighted'>Bruxelles</span> 
(BRU))","redirectPage":"DEFAULT_PAGE","latitude":50.89654,"longitude":4.48405,"name":"Brussels Airport (BRU)"},{"geoId":"1000000000006021138","destinationId":"1696919","landmarkCityDestinationId":null,"type":"TRAIN_STATION","caption":"<span class='highlighted'>Bruxelles</span>-Nord Station, Schaerbeek, Belgium","redirectPage":"DEFAULT_PAGE","latitude":50.860187,"longitude":4.362422,"name":"Bruxelles-Nord Station"}]},{"group":"HOTEL_GROUP","entities":[{"geoId":"1100000001216946368","destinationId":"1216946368","landmarkCityDestinationId":null,"type":"HOTEL","caption":"MEININGER Hotel <span class='highlighted'>Bruxelles</span> Gare du Midi, Brussels, Belgium","redirectPage":"DEFAULT_PAGE","latitude":50.835768,"longitude":4.33119,"name":"MEININGER Hotel Bruxelles Gare du Midi"},{"geoId":"1100000000000421076","destinationId":"421076","landmarkCityDestinationId":null,"type":"HOTEL","caption":"MEININGER Hotels <span class='highlighted'>Bruxelles</span> City Center, Brussels, Belgium","redirectPage":"DEFAULT_PAGE","latitude":50.851563,"longitude":4.339202,"name":"MEININGER Hotels Bruxelles City Center"},{"geoId":"1100000000000225154","destinationId":"225154","landmarkCityDestinationId":null,"type":"HOTEL","caption":"Campanile Hotel Brussel / <span class='highlighted'>Bruxelles</span> - Vilvoorde, Vilvoorde, Belgium","redirectPage":"DEFAULT_PAGE","latitude":50.92606,"longitude":4.43429,"name":"Campanile Hotel Brussel / Bruxelles - Vilvoorde"}]}]}
###Markdown
APIs: HTTP requests Imports
###Code
import json
import requests
###Output
_____no_output_____
###Markdown
Use Nominatim to find the geographic coordinates of an address https://nominatim.org/
###Code
address = "Avenue Franklin Roosevelt 50, 1050 Bruxelles"
"""Retrieve coordinates from Open Street Map"""
url = "https://nominatim.openstreetmap.org/search"
data = {'q': address, 'format': 'json'}
resp = requests.get(url, data)
json_list = json.loads(resp.text)
# print(json_list)
for item in json_list:
display_name = item['display_name']
short_name = display_name.split(", ")[0]
lat = item['lat']
lon = item['lon']
print(f"{short_name} ({lat} - {lon})")
rue = "royale"
#https://opendata.brussels.be/api/records/1.0/search/?dataset=bruxelles_rues_par_secteur_pour_les_cartes_de_riverain&q=&facet=secteur
url = "https://opendata.brussels.be/api/records/1.0/search/"
dataset_ = "bruxelles_rues_par_secteur_pour_les_cartes_de_riverain"
facet_ = "secteur"
data = {'dataset' : dataset_ , 'q' : rue, 'facet' : facet_ }
r = requests.get(url, data)
json_list = json.loads(r.text)
sub = json_list['records']
print(len(json_list))
print(json_list)
print(json_list['records'][0]['fields'])
print(len(sub))
print(sub)
record = []
for item in sub:
record = item['fields']
print(f"Secteur, lieu : {', '.join(record.values())}")
# source : https://opendata.brussels.be/api/records/1.0/search/?dataset=bruxelles_rues_par_secteur_pour_les_cartes_de_riverain&q=&facet=secteur
terme = "porc"
url = "https://opendata.brussels.be/api/records/1.0/search/"
dataset_ = "bruxelles_rues_par_secteur_pour_les_cartes_de_riverain"
facet_ = "secteur"
data = {'dataset' : dataset_ , 'q' : terme, 'facet' : facet_ }
resp = requests.get(url, data)
print(resp.headers['content-type'])
json_list = json.loads(resp.text)
subset = json_list['records']
print(f"Le dataset a détecté {len(subset)} enregistrement(s) pour ce lieu :\n")
for item in subset:
secteurs = item['fields']['secteur']
rues = item['fields']['rue']
print(f"Lieu : {rues} ({secteurs})")
# source : https://opendata.brussels.be/api/records/1.0/search/?dataset=bxl_fontaines&q=
url = "https://opendata.brussels.be/api/records/1.0/search/"
dataset_ = "bxl_fontaines"
data = {'dataset' : dataset_}
resp = requests.get(url, data)
json_list = json.loads(resp.text)
subset = json_list['records']
print(f"Bruxelles recense {len(subset)} fontaines d'eau potable :\n")
for item in subset:
adresses = item['fields']['adrvoisfr']
specifs = item['fields']['speclocfr']
print(f"Au {adresses} --> {specifs}")
# source : https://opendata.brussels.be/api/records/1.0/search/?dataset=bruxelles_theatres&q=
url = "https://opendata.brussels.be/api/records/1.0/search/"
dataset_ = "bruxelles_theatres"
data = {'dataset' : dataset_}
resp = requests.get(url, data)
json_list = json.loads(resp.text)
subset = json_list['records']
print(f"Bruxelles recense {len(subset)} théâtres :\n")
for item in subset:
noms = item['fields']['nom']
rues = item['fields']['rue']
print(f"{noms} ({rues})")
dataset_ = "bruxelles_arbres_remarquables"
# source : https://opendata.brussels.be/api/records/1.0/search/?dataset=bruxelles_theatres&q=
url = "https://opendata.brussels.be/api/records/1.0/search/"
dataset_ = "bruxelles_cinemas"
data = {'dataset' : dataset_}
resp = requests.get(url, data)
json_list = json.loads(resp.text)
subset = json_list['records']
print(f"Bruxelles recense {len(subset)} cinémas :\n")
for item in subset:
noms = item['fields']['cinema']
rues = item['fields']['adresse']
print(f"{noms} ({rues})")
# source : https://opendata.brussels.be/api/records/1.0/search/?dataset=bruxelles_theatres&q=
url = "https://opendata.brussels.be/api/records/1.0/search/"
dataset_ = "bxl_bourgmestres"
data = {'dataset' : dataset_}
resp = requests.get(url, data)
json_list = json.loads(resp.text)
subset = json_list['records']
print(f"Liste des {len(subset)} dates concernant l'arrêté royal de nomination des bourgmestres de Bruxelles :\n")
for item in subset:
bourgmestres = item['fields']['bourgmestres']
#arretes = item['fields']['arrete_royal_de_nomination']
print(f"{bourgmestres}")
###Output
Liste des 10 dates concernant l'arrêté royal de nomination des bourgmestres de Bruxelles :
Michel Demaret
Yvan Mayeur
Adolphe Max
Marion Lemesre
Charles Lemonnier
Baron Joseph
Van de Meulebroeck
Lucien Cooremans
Nicolas Rouppe
Nicolas Verhulst - Van Hoegaarden
André Fontainas
###Markdown
Use REST Countries to retrieve information about a country https://restcountries.com/
###Code
country_name = "Belgium"
base_url = "http://restcountries.com/v3.1/"
name_url = base_url + "name/"
code_url = base_url + "alpha/"
resp = requests.get(name_url + country_name)
print(resp.headers['content-type'])
country = resp.json()[0]
# print(country)
try:
languages = country['languages']
print(f"Languages: {', '.join(languages.values())}")
border_codes = country['borders']
border_names = []
for code in border_codes:
resp = requests.get(code_url + code)
border_country = resp.json()[0] # conversion du résulat de la requête (json) en dictionnaire
border_name = border_country["name"]["common"]
border_names.append(border_name)
print(f"Borders: {', '.join(border_names)}")
except KeyError:
print("Unknown country, please use English or native name")
###Output
application/json
Languages: German, French, Dutch
Borders: France, Germany, Luxembourg, Netherlands
###Markdown
Testing web APIs with HTTP GET method
###Code
import json
import requests
###Output
_____no_output_____
###Markdown
Functions
###Code
def print_coord(lieu):
##Function to get information about cultural venues in Brussels
url = "https://opendata.bruxelles.be/api/records/1.0/search/"
data = {'dataset': 'bruxelles_lieux_culturels', 'q' : lieu, 'format':'json'}
response = requests.get(url, data)
json_list = response.json()
for item in json_list['records']:
codePostal=item['fields']['code_postal']
adresse=item['fields']['adresse']
description=item['fields']['description']
lieu=item['fields']['lieu']
print(f"{description} ({adresse} - {lieu} - {codePostal})")
def print_info(theatre):
##Function to get information about theatres in Brussels
url = "https://opendata.bruxelles.be/api/records/1.0/search/"
data = {'dataset': 'theatres', 'q' : theatre, 'format':'json'}
response = requests.get(url, data)
json_list = response.json()
for item in json_list['records']:
theatreName=item['fields']['nom']
rue=item['fields']['rue']
telephone=item['fields']['telephone_telefoon']
code_postal=item['fields']['code_postal_postcode']
siteWeb=item['fields']['site_web_website']
try:
facebook=item['fields']['facebook']
print(f"{theatreName} ({rue} - {telephone} - {code_postal} - {siteWeb} - {facebook})")
break
except KeyError:
pass
print(f"{theatreName} ({rue} - {telephone} - {code_postal} - {siteWeb}")
print_coord('Palais du Coudenberg')
print_info('(Le) Jardin de ma Sœur')
###Output
(Le) Jardin de ma Soeur (Rue du Grand Hospice - 02 217 65 82 - 1000 - www.lejardindemasoeur.be)
###Markdown
Testing web APIs with HTTP GET method
###Code
import json
import requests
###Output
_____no_output_____
###Markdown
Functions
###Code
def print_coord(address):
"""Retrieve coordinates from Open Street Map"""
osm = "https://nominatim.openstreetmap.org/search"
data = {'q': address, 'format': 'json'}
resp = requests.get(osm, data)
json_list = json.loads(resp.text)
for item in json_list:
display_name = item['display_name']
short_name = display_name.split(", ")[0]
lat = item['lat']
lon = item['lon']
print(f"{short_name} ({lat} - {lon})")
def print_info(country_name):
"""Retrieve country info from REST API"""
base_url = "https://restcountries.eu/rest/v2/"
name_url = base_url + "name/"
code_url = base_url + "alpha/"
resp = requests.get(name_url + country_name)
try:
country = json.loads(resp.text)[0]
languages = country['languages']
print(f"Languages: {', '.join([lang['name'] for lang in languages])}")
border_codes = country['borders']
border_names = []
for code in border_codes:
resp = requests.get(code_url + code)
border_country = json.loads(resp.text)
border_name = border_country["name"]
border_names.append(border_name)
print(f"Borders: {', '.join(border_names)}")
except KeyError:
print("Unknown country, please use English or native name")
###Output
_____no_output_____
###Markdown
Example 1: Get the longitude and latitude of the Université libre de Bruxelles
###Code
print_coord("Avenue Franklin Roosevelt 50, 1050 Bruxelles")
###Output
Bibliothèque de droit et de criminologie (50.8126596 - 4.3798235)
CReA-Patrimoine (50.811503 - 4.3821658)
###Markdown
Example 2: Retrieve information about France
###Code
print_info('Belgique')
###Output
Languages: Dutch, French, German
Borders: France, Germany, Luxembourg, Netherlands
###Markdown
APIs: HTTP requests Imports
###Code
import json
import requests
###Output
_____no_output_____
###Markdown
Use Nominatim to find the geographic coordinates of an address https://nominatim.org/
###Code
address = "Avenue Franklin Roosevelt 50, 1050 Bruxelles"
"""Retrieve coordinates from Open Street Map"""
url = "https://nominatim.openstreetmap.org/search"
data = {'q': address, 'format': 'json'}
resp = requests.get(url, data)
json_list = json.loads(resp.text)
for item in json_list:
display_name = item['display_name']
short_name = display_name.split(", ")[0]
lat = item['lat']
lon = item['lon']
print(f"{short_name} ({lat} - {lon})")
###Output
_____no_output_____
###Markdown
Use REST Countries to retrieve information about a country https://restcountries.com/
###Code
country_name = "Belgium"
base_url = "http://restcountries.com/v3.1/"
name_url = base_url + "name/"
code_url = base_url + "alpha/"
resp = requests.get(name_url + country_name)
country = resp.json()[0]
try:
languages = country['languages']
print(f"Languages: {', '.join(languages.values())}")
border_codes = country['borders']
border_names = []
for code in border_codes:
resp = requests.get(code_url + code)
border_country = resp.json()[0]
border_name = border_country["name"]["common"]
border_names.append(border_name)
print(f"Borders: {', '.join(border_names)}")
except KeyError:
print("Unknown country, please use English or native name")
###Output
_____no_output_____
###Markdown
APIs: HTTP requests Imports
###Code
import json
import requests
###Output
_____no_output_____
###Markdown
Use Nominatim to find the geographic coordinates of an address https://nominatim.org/
###Code
address = "Avenue Franklin Roosevelt 50, 1050 Bruxelles"
"""Retrieve coordinates from Open Street Map"""
url = "https://nominatim.openstreetmap.org/search"
data = {'q': address, 'format': 'json'}
resp = requests.get(url, data)
json_list = json.loads(resp.text)
for item in json_list:
display_name = item['display_name']
short_name = display_name.split(", ")[0]
lat = item['lat']
lon = item['lon']
print(f"{short_name} ({lat} - {lon})")
###Output
_____no_output_____
###Markdown
Use REST Countries to retrieve information about a country https://restcountries.com/
###Code
country_name = "Belgium"
base_url = "http://restcountries.com/v3.1/"
name_url = base_url + "name/"
code_url = base_url + "alpha/"
resp = requests.get(name_url + country_name)
country = resp.json()[0]
try:
languages = country['languages']
print(f"Languages: {', '.join([lang for lang in languages.values()])}")
border_codes = country['borders']
border_names = []
for code in border_codes:
resp = requests.get(code_url + code)
border_country = resp.json()[0]
border_name = border_country["name"]["common"]
border_names.append(border_name)
print(f"Borders: {', '.join(border_names)}")
except KeyError:
print("Unknown country, please use English or native name")
###Output
_____no_output_____
###Markdown
APIs: HTTP requests Imports
###Code
import json
import requests
###Output
_____no_output_____
###Markdown
Use Nominatim to find the geographic coordinates of an address https://nominatim.org/
###Code
address = "Avenue Franklin Roosevelt 50, 1050 Bruxelles"
"""Retrieve coordinates from Open Street Map"""
url = "https://nominatim.openstreetmap.org/search"
data = {'q': address, 'format': 'json'}
resp = requests.get(url, data)
json_list = json.loads(resp.text)
for item in json_list:
display_name = item['display_name']
short_name = display_name.split(", ")[0]
lat = item['lat']
lon = item['lon']
print(f"{short_name} ({lat} - {lon})")
###Output
Bibliothèque de droit et de criminologie (50.8126596 - 4.3798235)
OPERA - Wireless Communications Group (50.811783 - 4.3830304)
CReA-Patrimoine (50.811503 - 4.3821658)
###Markdown
Use REST Countries to retrieve information about a country https://restcountries.com/
###Code
country_name = "Belgium"
base_url = "http://restcountries.com/v3.1/"
name_url = base_url + "name/"
code_url = base_url + "alpha/"
resp = requests.get(name_url + country_name)
country = resp.json()[0]
try:
languages = country['languages']
print(f"Languages: {', '.join(languages.values())}")
border_codes = country['borders']
border_names = []
for code in border_codes:
resp = requests.get(code_url + code)
border_country = resp.json()[0]
border_name = border_country["name"]["common"]
border_names.append(border_name)
print(f"Borders: {', '.join(border_names)}")
except KeyError:
print("Unknown country, please use English or native name")
###Output
Languages: German, French, Dutch
Borders: France, Germany, Luxembourg, Netherlands
###Markdown
Testing web APIs with HTTP GET method
###Code
import json
import sys
import requests
###Output
_____no_output_____
###Markdown
Functions
###Code
def print_coord(address):
"""Retrieve coordinates from Open Street Map"""
osm = "https://nominatim.openstreetmap.org/search"
data = {'q': address, 'format': 'json'}
resp = requests.get(osm, data)
json_list = json.loads(resp.text)
for item in json_list:
display_name = item['display_name']
short_name = display_name.split(", ")[0]
lat = item['lat']
lon = item['lon']
print(f"{short_name} ({lat} - {lon})")
def print_info(country_name):
"""Retrieve country info from REST API"""
base_url = "https://restcountries.eu/rest/v2/"
name_url = base_url + "name/"
code_url = base_url + "alpha/"
resp = requests.get(name_url + country_name)
try:
country = json.loads(resp.text)[0]
languages = country['languages']
print(f"Languages: {', '.join([lang['name'] for lang in languages])}")
border_codes = country['borders']
border_names = []
for code in border_codes:
resp = requests.get(code_url + code)
border_country = json.loads(resp.text)
border_name = border_country["name"]
border_names.append(border_name)
print(f"Borders: {', '.join(border_names)}")
except KeyError:
print("Unknown country, please use English or native name")
###Output
_____no_output_____
###Markdown
Example 1: Get the longitude and latitude of the Université libre de Bruxelles
###Code
print_coord("Avenue Franklin Roosevelt 50, 1050 Bruxelles")
###Output
Bibliothèque de droit et de criminologie (50.8126596 - 4.3798235)
CReA-Patrimoine (50.811503 - 4.3821658)
###Markdown
Example 2: Retrieve information about France
###Code
print_info('France')
###Output
Languages: French
Borders: Andorra, Belgium, Germany, Italy, Luxembourg, Monaco, Spain, Switzerland
###Markdown
Testing web APIs with HTTP GET method
###Code
import json
import requests
###Output
_____no_output_____
###Markdown
Functions
###Code
def print_coord(address):
"""Retrieve coordinates from Open Street Map"""
osm = "https://nominatim.openstreetmap.org/search"
data = {'q': address, 'format': 'json'}
resp = requests.get(osm, data)
json_list = json.loads(resp.text)
for item in json_list:
display_name = item['display_name']
short_name = display_name.split(", ")[0]
lat = item['lat']
lon = item['lon']
print(f"{short_name} ({lat} - {lon})")
def print_info(country_name):
"""Retrieve country info from REST API"""
base_url = "https://restcountries.eu/rest/v2/"
name_url = base_url + "name/"
code_url = base_url + "alpha/"
resp = requests.get(name_url + country_name)
try:
country = json.loads(resp.text)[0]
languages = country['languages']
print(f"Languages: {', '.join([lang['name'] for lang in languages])}")
border_codes = country['borders']
border_names = []
for code in border_codes:
resp = requests.get(code_url + code)
border_country = json.loads(resp.text)
border_name = border_country["name"]
border_names.append(border_name)
print(f"Borders: {', '.join(border_names)}")
except KeyError:
print("Unknown country, please use English or native name")
def print_crime():
"""Is there any crime during this perido at this location ?"""
base_url = "https://jgentes-crime-data-v1.p.rapidapi.com/crime"
querystring = {"enddate":"1/1/1950","startdate":"9/19/2015","long":"50.8126596","lat":"4.3798235"}
headers = {
'x-rapidapi-key': "{x-rapidapi-key}",
'x-rapidapi-host': "jgentes-Crime-Data-v1.p.rapidapi.com"
}
print(requests.request("GET", base_url, headers=headers, params=querystring).text)
###Output
_____no_output_____
###Markdown
Example 1: Get the longitude and latitude of the Université libre de Bruxelles
###Code
print_coord("Avenue Franklin Roosevelt 50, 1050 Bruxelles")
###Output
Bibliothèque de droit et de criminologie (50.8126596 - 4.3798235)
OPERA - Wireless Communications Group (50.811783 - 4.3830304)
CReA-Patrimoine (50.811503 - 4.3821658)
###Markdown
Example 2: Retrieve information about Belgium and France
###Code
print_info('Belgique')
print_info("France")
print_crime()
###Output
_____no_output_____ |
atmosphere/AtmosphereStats.ipynb | ###Markdown
Atmospheric Phase Statistics First we import the `ceo` module.
###Code
import sys
import numpy as np
import math
import ceo
%pylab inline
###Output
Populating the interactive namespace from numpy and matplotlib
###Markdown
Then a `Source` object is created. You must specify the photometric bandwidth. The zenith and azimuth angles and the source height are optional parameters, set by default to 0, 0 and $\infty$, respectively. The wavefront shape is also optional, set to (0,0) by default, meaning that the source won't have a wavefront.
###Code
n = 64
src = ceo.Source("K",resolution=(n,n))
###Output
_____no_output_____
###Markdown
An `Atmosphere` object is created by specifying first the $r_0$ and $L_0$, then the optional number of layers, layer altitudes, fractional powers, wind speeds and directions. Here a single atmospheric layer at the ground is created.
###Code
atm = ceo.Atmosphere(0.15,25)
#atm = ceo.GmtAtmosphere(0.15,30)
###Output
_____no_output_____
###Markdown
A phase screen is computed by passing the source object, the number of sources in the source object, the sampling step and number in the X and y directions and the time delay.
###Code
atm.get_phase_screen(src,0.1,n,0.1,n,0.0)
###Output
_____no_output_____
###Markdown
The phase screen is written in the phase attribute of the source object. The phase attribute is a `cuFloatArray` object that contains a pointer to the phase screen on the device. To copy the data to the host, simply call the `host` method of the `cuFloatArray` object.
###Code
imshow(src.phase.host(units='micron'))
###Output
_____no_output_____
###Markdown
Variance The atmospheric turbulence phase can also be computed for arbitrary coordinates $[x,y]$. The coordinates are defined as `cuFloatArray` objects by setting the `host_data` attribute to a numpy array. A copy of `host_data` is immediately made on the device.
###Code
x = ceo.cuFloatArray(host_data=np.array([0]))
y = ceo.cuFloatArray(host_data=np.array([0]))
###Output
_____no_output_____
###Markdown
Now let's define a function to compute single phase values. By calling the `reset` method of the `Atmosphere` object, we ensure we have a set of independent variates. `reset` re-draws the random numbers used to compute the phase values.
###Code
def var_eval(x,y,atm,src):
atm.reset()
ps = atm.get_phase_values(x,y,src,0.0)
return ps.host()
###Output
_____no_output_____
###Markdown
The function is called a number of times and the phase values are saved in `ps`
###Code
ps_fun = lambda z: [var_eval(x,y,atm,src) for k in range(z) ]
###Output
_____no_output_____
###Markdown
The variance of the phase is computed next. The phase is given in meters, so it is converted to radians.
###Code
ps = ps_fun(1000)
wavenumber = 2*math.pi/0.55e-6
num_var = np.var(ps)*(wavenumber**2)
###Output
_____no_output_____
###Markdown
The numerical variance is compared to the theoretical variance.
###Code
the_var = ceo.phaseStats.variance(atmosphere=atm)
print("Theoretical variance: %6.2frd^2" % the_var)
print("Numerical variance : %6.2frd^2" % num_var)
print("Variance ratio : %8.5f" % (num_var/the_var))
the_var = ceo.phaseStats.variance(atmosphere=atm)
print("Theoretical variance: %6.2fnm" % (np.sqrt(the_var)*715/2/np.pi))
print("Numerical variance : %6.2fnm" % (np.sqrt(num_var)*715/2/np.pi))
print("Variance ratio : %8.5f" % (num_var/the_var))
###Output
Theoretical variance: 2375.24nm
Numerical variance : 2375.65nm
Variance ratio : 1.00034
###Markdown
Structure function The $[x,y]$ plane is randomly sampled in the range $[-2\mathcal L_0,+2\mathcal L_0]$.
###Code
n_points = 1000
x = np.random.uniform(-1,1,n_points)*atm.L0*2
y = np.random.uniform(-1,1,n_points)*atm.L0*2
z_xy = x + 1j*y
###Output
_____no_output_____
###Markdown
Data are copied to the device:
###Code
cu_x = ceo.cuFloatArray(host_data=z_xy.real)
cu_y = ceo.cuFloatArray(host_data=z_xy.imag)
###Output
_____no_output_____
###Markdown
The structure function is computed for baseline $\rho$ randomly distributed on a circle.
###Code
phi = np.random.uniform(0,2*math.pi,n_points)
rho,rho_step = np.linspace(0,atm.L0*4,21,retstep=True)
rho[0] = 0.1
###Output
_____no_output_____
###Markdown
The differential phase is computed for `n_sample` independent variates. The structure function of the independent variates is computed first. The structure function `sf` is distributed on a circle of radius $\rho$ where it should be constant. The mean and standard deviation of the structure function `sf` on the circle are evaluated in `mean_sf` and `std_sf`.
###Code
n_sample = 1000
d_ps = np.zeros( (n_points, n_sample) , dtype=np.float32)
mean_sf = np.zeros( rho.size)
std_sf = np.zeros( rho.size)
print("rho sample: %d" % (rho.size))
for k_rho in range(rho.size):
sys.stdout.write("\r[%d]" % k_rho)
z_rho = rho[k_rho]*np.exp(1j*phi)
z_xy_rho = z_xy + z_rho
cu_x_rho = ceo.cuFloatArray(host_data=z_xy_rho.real)
cu_y_rho = ceo.cuFloatArray(host_data=z_xy_rho.imag)
for k in range(n_sample):
atm.reset()
ps = atm.get_phase_values(cu_x,cu_y,src,0.0).host()
ps_rho = atm.get_phase_values(cu_x_rho,cu_y_rho,src,0.0).host()
d_ps[:,k] = ps - ps_rho
sf = d_ps.var(axis=1)*(wavenumber**2)
mean_sf[k_rho] = sf.mean()
std_sf[k_rho] = sf.std()
sf_th = ceo.phaseStats.structure_function(rho,atmosphere=atm)
plot(rho,sf_th,label='Theory')
errorbar(rho,mean_sf,yerr=std_sf,marker='.',linestyle='none',label='Numerical')
grid()
xlabel('Baseline [m]')
ylabel('Phase S.F. (rd^2)')
legend()
###Output
_____no_output_____
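###Markdown
A quick quantitative comparison of the numerical and theoretical structure functions (a small sketch using the `mean_sf` and `sf_th` arrays computed above):
###Code
# Relative difference between the numerical and theoretical structure functions
rel_err = np.abs(mean_sf - sf_th) / np.abs(sf_th)
print(rel_err)
###Output
_____no_output_____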
###Markdown
Atmospheric Phase Statistics First we import the `ceo` module.
###Code
import sys
import numpy as np
import math
import ceo
%pylab inline
###Output
Populating the interactive namespace from numpy and matplotlib
###Markdown
Then a `Source` object is created. You must specify the photometric bandwidth. The zenith and azimuth angles and the source height are optional parameters, set by default to 0, 0 and $\infty$, respectively. The wavefront shape is also optional, set to (0,0) by default, meaning that the source won't have a wavefront.
###Code
n = 64
src = ceo.Source("K",resolution=(n,n))
###Output
_____no_output_____
###Markdown
An `Atmosphere` object is created by specifying first the $r_0$ and $L_0$, then the optional number of layers, layer altitudes, fractional powers, wind speeds and directions. Here a single atmospheric layer at the ground is created.
###Code
atm = ceo.Atmosphere(0.15,30)
#atm = ceo.GmtAtmosphere(0.15,30)
###Output
_____no_output_____
###Markdown
A phase screen is computed by passing the source object, the number of sources in the source object, the sampling step and number in the X and y directions and the time delay.
###Code
atm.get_phase_screen(src,0.1,n,0.1,n,0.0)
###Output
_____no_output_____
###Markdown
The phase screen is written in the phase attribute of the source object. The phase attribute is a `cuFloatArray` object that contains a pointer to the phase screen on the device. To copy the data to the host, simply call the `host` method of the `cuFloatArray` object.
###Code
imshow(src.phase.host(units='micron'))
###Output
_____no_output_____
###Markdown
Variance The atmospheric turbulence phase can also be computed for arbitrary coordinates $[x,y]$. The coordinates are defined as `cuFloatArray` objects by setting the `host_data` attribute to a numpy array. A copy of `host_data` is immediately made on the device.
###Code
x = ceo.cuFloatArray(host_data=np.array([0]))
y = ceo.cuFloatArray(host_data=np.array([0]))
###Output
_____no_output_____
###Markdown
Now let's define a function to compute single phase values. By calling the `reset` method of the `Atmosphere` object, we ensure we have a set of independent variates. `reset` re-draws the random numbers used to compute the phase values.
###Code
def var_eval(x,y,atm,src):
atm.reset()
ps = atm.get_phase_values(x,y,src,0.0)
return ps.host()
###Output
_____no_output_____
###Markdown
The function is called a number of times and the phase values are saved in `ps`
###Code
ps_fun = lambda z: [var_eval(x,y,atm,src) for k in range(z) ]
###Output
_____no_output_____
###Markdown
The variance of the phase is computed next. The phase is given in meters, so it is converted to radians.
###Code
ps = ps_fun(1000)
wavenumber = 2*math.pi/0.55e-6
num_var = np.var(ps)*(wavenumber**2)
###Output
_____no_output_____
###Markdown
The numerical variance is compared to the theoretical variance.
###Code
the_var = ceo.phaseStats.variance(atmosphere=atm)
print "Theoretical variance: %6.2frd^2" % the_var
print "Numerical variance : %6.2frd^2" % num_var
print "Variance ratio : %8.5f" % (num_var/the_var)
###Output
Theoretical variance: 590.38rd^2
Numerical variance : 586.61rd^2
Variance ratio : 0.99362
###Markdown
Structure function The $[x,y]$ plane is randomly sampled in the range $[-2\mathcal L_0,+2\mathcal L_0]$.
###Code
n_points = 1000
x = np.random.uniform(-1,1,n_points)*atm.L0*2
y = np.random.uniform(-1,1,n_points)*atm.L0*2
z_xy = x + 1j*y
###Output
_____no_output_____
###Markdown
Data are copied to the device:
###Code
cu_x = ceo.cuFloatArray(host_data=z_xy.real)
cu_y = ceo.cuFloatArray(host_data=z_xy.imag)
###Output
_____no_output_____
###Markdown
The structure function is computed for baseline $\rho$ randomly distributed on a circle.
###Code
phi = np.random.uniform(0,2*math.pi,n_points)
rho,rho_step = np.linspace(0,atm.L0*4,21,retstep=True)
rho[0] = 0.1
###Output
_____no_output_____
###Markdown
The differential phase is computed for `n_sample` independent variates. The structure function of the independent variates is computed first. The structure function `sf` is distributed on a circle of radius $\rho$ where it should be constant. The mean and standard deviation of the structure function `sf` on the circle are evaluated in `mean_sf` and `std_sf`.
###Code
n_sample = 1000
d_ps = np.zeros( (n_points, n_sample) , dtype=np.float32)
mean_sf = np.zeros( rho.size)
std_sf = np.zeros( rho.size)
print "rho sample: %d" % (rho.size)
for k_rho in range(rho.size):
sys.stdout.write("\r[%d]" % k_rho)
z_rho = rho[k_rho]*np.exp(1j*phi)
z_xy_rho = z_xy + z_rho
cu_x_rho = ceo.cuFloatArray(host_data=z_xy_rho.real)
cu_y_rho = ceo.cuFloatArray(host_data=z_xy_rho.imag)
for k in range(n_sample):
atm.reset()
ps = atm.get_phase_values(cu_x,cu_y,src,0.0).host()
ps_rho = atm.get_phase_values(cu_x_rho,cu_y_rho,src,0.0).host()
d_ps[:,k] = ps - ps_rho
sf = d_ps.var(axis=1)*(wavenumber**2)
mean_sf[k_rho] = sf.mean()
std_sf[k_rho] = sf.std()
sf_th = ceo.phaseStats.structure_function(rho,atmosphere=atm)
plot(rho,sf_th,label='Theory')
errorbar(rho,mean_sf,yerr=std_sf,marker='.',linestyle='none',label='Numerical')
grid()
xlabel('Baseline [m]')
ylabel('Phase S.F. (rd^2)')
legend()
###Output
_____no_output_____ |
notebooks/run.ipynb | ###Markdown
Get Source Code
###Code
import os
# clone repo
repo = f'https://github.com/awsaf49/deep-chimpact-1st-place-solution.git'
branch ='main'
directory ='deep-chimpact'
os.makedirs('deep-chimpact',exist_ok=True)
!git clone -b $branch $repo $directory
%cd {directory}
ls .
###Output
_____no_output_____
###Markdown
Installation
###Code
!pip install -qr requirements.txt
###Output
_____no_output_____
###Markdown
Prepare DataFirst, the training and testing data should be downloaded from the competition website. ideally, the data can be placed in the `data/raw` folder in the repo directory. The repo tree would then look like below:```../deep-chimpact/├── LICENSE.md├── README.md├── configs│ ├── checkpoints.json│ └── deep-chimpact.yaml├── data│ └── raw│ ├── submission_format.csv│ ├── test_metadata.csv│ ├── test_videos│ ├── train_labels.csv│ ├── train_metadata.csv│ ├── train_videos│ ├── video_access_metadata.csv│ └── video_download_instructions.txt...```
###Code
!python prepare_data.py --data-dir data/raw
!tree -L 1 data/processed
###Output
_____no_output_____
###Markdown
Train
###Code
!python3 train.py --model-name 'ECA_NFNetL2' --img-size 360 640 --batch-size 32 --scheduler 'cosine' --loss 'Huber'
!python3 train.py --model-name 'ECA_NFNetL2' --img-size 450 800 --batch-size 24 --scheduler 'cosine' --loss 'Huber'
!python3 train.py --model-name 'ECA_NFNetL2' --img-size 576 1024 --batch-size 12 --scheduler 'cosine' --loss 'Huber'
!python3 train.py --model-name 'ECA_NFNetL2' --img-size 720 1280 --batch-size 8 --scheduler 'cosine' --loss 'Huber'
!python3 train.py --model-name 'ECA_NFNetL2' --img-size 900 1600 --batch-size 4 --scheduler 'cosine' --loss 'Huber'
!python3 train.py --model-name 'ResNest200' --img-size 360 640 --batch-size 16 --scheduler 'step' --loss 'MAE'
!python3 train.py --model-name 'ResNest200' --img-size 576 1024 --batch-size 8 --scheduler 'step' --loss 'MAE'
!python3 train.py --model-name 'EfficientNetB7' --img-size 360 640 --batch-size 32 --scheduler 'cosine' --loss 'MAE'
!python3 train.py --model-name 'EfficientNetB7' --img-size 450 800 --batch-size 24 --scheduler 'cosine' --loss 'MAE'
!python3 train.py --model-name 'EfficientNetV2M' --img-size 450 800 --batch-size 24 --scheduler 'exp' --loss 'Huber'
!python3 train.py --model-name 'EfficientNetV2M' --img-size 576 1024 --batch-size 12 --scheduler 'exp' --loss 'Huber'
###Output
_____no_output_____
###Markdown
Infer> Before prediction, file tree would look like this:```../deep-chimpact/...├── data│ └── processed│ ├── sample_submission.csv│ ├── test.csv│ ├── test_images│ ├── train.csv│ └── train_images...├── output│ ├── ECA_NFNetL2-360x640│ ├── ECA_NFNetL2-450x800│ ├── ECA_NFNetL2-576x1024│ ├── ECA_NFNetL2-720x1280│ ├── ECA_NFNetL2-900x1600│ ├── EfficientNetB7-360x640│ ├── EfficientNetB7-450x800│ ├── EfficientNetV2M-450x800│ ├── EfficientNetV2M-576x1024│ ├── ResNest200-360x640│ └── ResNest200-576x1024... ```> Final submission will be saved at `submission/ensemble_submission.csv`
###Code
## RUN THIS IF DOING ONLY INFER
#!python prepare_data.py --data-dir data/raw --infer-only
!tree -L 1 output
!python3 predict_soln.py
###Output
_____no_output_____
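###Markdown
A quick sanity check on the generated file (a sketch, assuming the final submission was written to `submission/ensemble_submission.csv` as noted above):
###Code
import pandas as pd
# Peek at the ensemble submission produced by predict_soln.py
sub = pd.read_csv('submission/ensemble_submission.csv')
print(sub.shape)
sub.head()
###Output
_____no_output_____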
###Markdown
1. Get source code
###Code
# Clone repo from Github
!git clone https://github.com/max-schaefer-dev/on-cloud-n-19th-place-solution.git
###Output
^C
###Markdown
2. Install requirements & restart kernel
###Code
# Change working dir. to repo dir. and install
%cd on-cloud-n-19th-place-solution
!pip install -r requirements.txt
# restarting kernel
!condacolab KERNEL RESTART
print("Restarting of kernel...")
get_ipython().kernel.do_shutdown(True)
###Output
/bin/bash: condacolab: command not found
Restarting of kernel...
###Markdown
3. Get dataThe competition data is freely available at Radiant MLHub. Follow these 3 steps: **3.1.** Run "Helper functions" cells **3.2.1.** Sign up for free at https://mlhub.earth/data/ref_cloud_cover_detection_challenge_v1 **3.2.2.** Generate an API key from the "Settings & API keys" menu **3.3.** Run !mlhub configure and paste your key into the prompt **3.4.** Run download script and prepare_data script 3.1 Helper functions
###Code
def download_competition_data():
'''Downloads competition data from radiant mlhub'''
collection_names = ['ref_cloud_cover_detection_challenge_v1_train_source',
'ref_cloud_cover_detection_challenge_v1_train_labels']
ds = Dataset.fetch('ref_cloud_cover_detection_challenge_v1')
for c in ds.collections:
if c.id not in collection_names:
continue
print('Downloading', c.id)
c.download('/content/on-cloud-n-19th-place-solution/data')
def prepare_competition_data():
'''Unzips the downloaded competition data and prepares features & labels for training'''
print('Unzipping competition data...')
# Unzip tar.gz data
train_features_gz_path = 'ref_cloud_cover_detection_challenge_v1_train_source.tar.gz'
train_labels_gz_path = 'ref_cloud_cover_detection_challenge_v1_train_labels.tar.gz'
shutil.unpack_archive(
filename=train_features_gz_path
)
shutil.unpack_archive(
filename=train_labels_gz_path
)
print('Renaming folder names...')
# Rename folder names
train_feature_gz_name = 'ref_cloud_cover_detection_challenge_v1_train_source'
train_labels_gz_name = 'ref_cloud_cover_detection_challenge_v1_train_labels'
os.rename(train_feature_gz_name, 'train_features')
os.rename(train_labels_gz_name, 'train_labels')
print('Renaming train_feature folders...')
# Rename train_feature folders
f_names = glob.glob('train_features/*/')
for f_name in sorted(f_names):
suffix = os.path.split(f_name[:-1])[1]
chip_id = suffix[-4:]
os.rename(f_name, f'train_features/{chip_id}')
# Rename train label folders
f_names = glob.glob('train_labels/*/')
for f_name in sorted(f_names):
suffix = os.path.split(f_name[:-1])[1]
chip_id = suffix[-4:]
os.rename(f_name, f'train_labels/{chip_id}')
print('Renaming & moving label files...')
# Renaming & moving label files. Delete old label folders
label_file_paths = sorted(glob.glob('train_labels/*/*.tif'))
for label_p in label_file_paths:
plitted = label_p.split('/')
chip_id = plitted[1]
# Move file to label_dir and rename it
shutil.move(label_p, f'train_labels/{chip_id}.tif')
# Delete label folder
shutil.rmtree(f'train_labels/{chip_id}')
print('Preparations done!')
###Output
_____no_output_____
###Markdown
Download the pseudo labeled data. Pseudo labeled data should be placed in data/pseudo_labels```../on-cloud-n-19th-place-solution/├── LICENSE.md├── ...├── configs│ ├── efficientnet-b1-unet-512.yaml│ ├── resnet34-unet-512.*yaml*│ └── resnext50_32x4d-unet-512.yaml├── data│ ├── train_features│ │ ├── train_chip_id_1│ │ │ ├── B02.tif│ │ │ ├── B03.tif│ │ │ ├── B04.tif│ │ │ └── B08.tif│ │ └── ...│ ├── train_labels│ ├── train_chip_id_1.tif│ ├── ...│ ...│ ├── metadata_updated.csv│ └── pseudo_labels.zip├── train_metadata.csv...``` 3.2 Sign up and generate API keySign up for free at https://mlhub.earth/data/ref_cloud_cover_detection_challenge_v1 3.3 Run mlhub configure and enter API key
###Code
# Setup radiant_mlhub
!pip install radiant_mlhub
from radiant_mlhub import Dataset
import glob
import shutil
import os
# Run cell and enter API key
!mlhub configure
###Output
_____no_output_____
###Markdown
3.4 Run download script and prepare_data script Download competition data script
###Code
# Runtime: about 2 mins.
# change working directory
%cd on-cloud-n-19th-place-solution/data/
download_competition_data()
###Output
_____no_output_____
###Markdown
Prepare competition data script
###Code
# Runtime: about 6 mins.
prepare_competition_data()
###Output
_____no_output_____
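###Markdown
The pseudo labeled data mentioned earlier is not unpacked by the helper above; a minimal sketch for doing so, assuming the archive was downloaded as `pseudo_labels.zip` inside the current `data` directory as shown in the folder tree:
###Code
import os
import shutil
# Unpack the pseudo labels next to the other data folders (path assumed from the tree above)
if os.path.exists('pseudo_labels.zip'):
    shutil.unpack_archive('pseudo_labels.zip', 'pseudo_labels')
###Output
_____no_output_____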
###Markdown
4. Training
###Code
# change working dir
import os
if os.getcwd() != '/content/on-cloud-n-19th-place-solution':
%cd on-cloud-n-19th-place-solution
print('> Changed working directory to', os.getcwd())
# Train all models
!python train.py --fast-dev-run 1 --cfg './configs/resnet34-unet-512.yaml'
# !python train.py --fast-dev-run 1 --cfg './configs/efficientnet-b1-unet-512.yaml'
# !python train.py --fast-dev-run 1 --cfg './configs/resnext50_32x4d-unet-512.yaml'
# Display logs. Only works in google chrome, since firefox blocks necessary cookies
model_name = 'resnet34-unet-512x512'
lightning_logs_p = f'/content/on-cloud-n-19th-place-solution/output/{model_name}/lightning_logs/'
%reload_ext tensorboard
%tensorboard --logdir={lightning_logs_p}
###Output
_____no_output_____
###Markdown
5. Prepare data for Inference Grab some dummy data
###Code
# grab random n samples from training set
!mkdir /content/on-cloud-n-19th-place-solution/data/test_features
import glob, random
n = 1000
train_f_paths = glob.glob('/content/on-cloud-n-19th-place-solution/data/train_features/*')
train_f_batch = random.choices(train_f_paths, k=n)
for p in train_f_batch:
!cp -r {p} /content/on-cloud-n-19th-place-solution/data/test_features
###Output
_____no_output_____
###Markdown
5.1 Inference after training
###Code
# create .tif prediction-files and save them in data/predictions
!python predict.py --model-dir './output/resnet34-unet-512x512' --ensemble 1 --tta 1 --batch-size 8
# plot batch of predictions
from utils.visualize import save_prediction_as_jpg
from pathlib import Path
pred_dir = Path('data/predictions')
# saves and plots 6 images with corresponding predictions
save_prediction_as_jpg(pred_dir)
###Output
_____no_output_____
###Markdown
5.2 Inference without training 1. Download model weights> Before predict, file tree would look like this:```../on-cloud-n-19th-place-solution/...├── output│ ├── Resnet34-Unet-512x512│ │ ├── resnet34-unet-512.yaml│ │ └── resnet34-unet.pt│ ├── EfficientNetB1-Unet-512x512│ └── Resnext50-Unet-512x512...```
###Code
### Create folder structure
!mkdir /content/on-cloud-n-19th-place-solution/output
# Model 1: Resnet34-Unet-512x512
!mkdir /content/on-cloud-n-19th-place-solution/output/Resnet34-Unet-512x512
!cp /content/on-cloud-n-19th-place-solution/configs/resnet34-unet-512.yaml /content/on-cloud-n-19th-place-solution/output/Resnet34-Unet-512x512
# Model 2: EfficientNetB1-Unet-512x512
!mkdir /content/on-cloud-n-19th-place-solution/output/EfficientNetB1-Unet-512x512
!cp /content/on-cloud-n-19th-place-solution/configs/efficientnet-b1-unet-512.yaml /content/on-cloud-n-19th-place-solution/output/EfficientNetB1-Unet-512x512
# Model 3: Resnext50-Unet-512x512
!mkdir /content/on-cloud-n-19th-place-solution/output/Resnext50-Unet-512x512
!cp /content/on-cloud-n-19th-place-solution/configs/resnext50_32x4d-unet-512.yaml /content/on-cloud-n-19th-place-solution/output/Resnext50-Unet-512x512
### Download weights and place them into created folder structure
!gdown --id 15mL8c9OBPk2JIcPb0k6t_NMtH-mKeVWE -O /content/on-cloud-n-19th-place-solution/output/Resnext50-Unet-512x512/Resnext50-Unet-512x512.pt
!gdown --id 1uXuxV0j_9cI5oXcSw1mH1mSoPU6SWrYA -O /content/on-cloud-n-19th-place-solution/output/Resnet34-Unet-512x512/Resnet34-Unet-512x512.pt
!gdown --id 1OBesw6cZOZcop-p1X0LHKqdEQy2sYc5n -O /content/on-cloud-n-19th-place-solution/output/EfficientNetB1-Unet-512x512/EfficientNetB1-Unet-512x512.pt
###Output
_____no_output_____
###Markdown
2. Predict binary masks
###Code
!python predict.py --model-dir 'output/resnet34-unet-512x512' --ensemble 1 --tta 3 --batch-size 8
from utils.visualize import save_prediction_as_jpg
from pathlib import Path
pred_dir = Path('data/predictions')
# saves and plots 6 images with corresponding predictions
save_prediction_as_jpg(pred_dir)
###Output
_____no_output_____
###Markdown
apprss sources:https://tw.stock.yahoo.com/rss_index.html
###Code
%cd /workspace/twint/app
# !pip install -e .
# !python -m pytest tests/test_scrapers.py::test_cnbc_page_tags -v
# setup elasticsearch (one-time only)
# %cd /workspace/twint/app
# !python ./app/store/es.py
%cd /workspace/twint/app
# !chmod +x ./start.sh
# !./start.sh
# !python -m app.main run.scraper=cnbc run.n_workers=1
# !python -m app.main run.scraper=cnyes_api run.n_workers=1 run.max_startpoints=1
# !python -m app.main run.scraper=cnyes_page run.n_workers=1 run.max_startpoints=10
# !python -m app.main run.scraper=rss run.n_workers=1 run.loop_every=86400 scraper.rss.entry=./resource/rss_yahoo_us_stock.csv
# !python -m app.main run.scraper=rss run.n_workers=1 run.loop_every=43200 scraper.rss.entry=./resource/rss_yahoo_us_indicies.csv
# !python -m app.main run.scraper=rss run.n_workers=1 run.loop_every=43200 scraper.rss.entry=./resource/rss_yahoo_tw.csv
# !python -m app.main run.scraper=rss run.n_workers=1 run.loop_every=7200 scraper.rss.entry=./resource/rss_news_us.csv
# !python -m app.main run.scraper=moneydj_index run.n_workers=1 scraper.moneydj_index.until=3500 run.startpoints_csv='./outputs/2020-08-09/17-13-53/error_urls.csv'
# !python -m app.main run.scraper=moneydj_index run.n_workers=1
# !python -m app.main run.scraper=moneydj_page run.n_workers=1
!python -m app.main run.scraper=cnbc run.n_workers=1 run.max_startpoints=1000 run.loop_every=3600 run.startpoints_csv=./error_urls.csv
# run single scraper (for testing)
%cd /workspace/twint/app
import nest_asyncio
nest_asyncio.apply()
import asyncio
from hydra.experimental import compose, initialize
from app.scrapers import moneydj
from app.store import es
# initialize(config_dir="./app/app")
cfg = compose("config.yaml")
print(cfg)
es.connect()
scp = moneydj.MoneydjPageScraper(cfg)
asyncio.run(scp.run())
###Output
_____no_output_____
###Markdown
twint Twitter accounts: CNBC, CNNBusiness, businessinsider
###Code
# %cd /workspace/twint
# !pip install e .
# !twint -u CNBC
# !pip install -U fake-useragent
import nest_asyncio
nest_asyncio.apply()
import twint
c = twint.Config()
c.Username = "CNBC"
c.Elasticsearch = "http://es:9200"
c.Until='2015-01-01 00:00:00'
# c.Search = "fruit"
twint.run.Search(c)
###Output
_____no_output_____
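###Markdown
To index all three accounts listed above, the same configuration can simply be run in a loop (a sketch reusing only the twint options already shown in this notebook):
###Code
# Index tweets from each account into the same Elasticsearch instance
for username in ["CNBC", "CNNBusiness", "businessinsider"]:
    c = twint.Config()
    c.Username = username
    c.Elasticsearch = "http://es:9200"
    c.Until = '2015-01-01 00:00:00'
    twint.run.Search(c)
###Output
_____no_output_____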
###Markdown
elasticsearchquery twint```json{ "_source": [ "date", "username" ], "query": { "bool": { "must": [ { "match": { "username": "business" } }, { "range": { "date": { "gt": "2004-01-01 00:00:00", "lt": "2023-01-01 00:00:00" } } } ] } }, "from": 0, "size": 1000, "sort": [ { "date": "asc" } ]}```query cnyeshttp://localhost:9200/news_page/_search```json{ "query": { "bool": { "filter": [ { "wildcard": { "from_url": "*cnyes.com*" } }, { "range": { "entry_published_at": { "gte": "2020-05-01T00:00:00", "lt": "2021-01-01T00:00:00" } } } ] } }, "from": 0, "size": 1000, "sort": [ { "entry_published_at": "desc" } ]}``````json{ "query": { "bool": { "filter": [ { "wildcard": { "resolved_url": "*cnbc*" } } ] } }, "from": 0, "size": 1000, "sort": [ { "entry_published_at": "desc" } ]}``` Elasticsearch DumpInstall nodejs & elasticdump first https://github.com/nodesource/distributions/blob/master/README.md https://github.com/taskrabbit/elasticsearch-dump ```bashcurl -sL https://deb.nodesource.com/setup_14.x | sudo -E bash -sudo apt-get install -y nodejsnpm install elasticdump -g```Dump & load ```bash dumpmultielasticdump \ --direction=dump \ --match='^.*$' \ --fsCompress \ --input=http://es:9200 \ --output=./dump_2020xxxx loadmultielasticdump \ --direction=load \ --match='^.*$' \ --input=./dump_2020xxxx \ --output=http://es01:9200 \ --fsCompress singleelasticdump \ --input=http://es:9200/twinttweets \ --output=./twinttweets_mapping_20200503.json \ --type=mappingelasticdump \ --input=http://es:9200/twinttweets \ --output=./twinttweets_index_20200503.json \ --type=dataelasticdump \ --input=http://es:9200/twinttweets \ --output=$ \ | gzip > ./twinttweets_index_20200504.json.gz elasticdump \ --input=http://es:9200/news_page \ --output=$ \ | gzip > ./news_page_index_20200615.json.gz elasticdump \ --input=./twinttweets_index_20200602.json.gz \ --output=http://es:9200/twinttweets \ --fsCompress ``` Stockhttps://twstock.readthedocs.io/zh_TW/latest/index.html
###Code
!pip install twstock
###Output
Collecting twstock
Downloading twstock-1.3.1-py3-none-any.whl (1.9 MB)
[K |████████████████████████████████| 1.9 MB 853 kB/s eta 0:00:01 |█████████████████████▋ | 1.3 MB 853 kB/s eta 0:00:01
[?25hRequirement already satisfied: requests in /usr/local/lib/python3.7/site-packages (from twstock) (2.23.0)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/site-packages (from requests->twstock) (3.0.4)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/site-packages (from requests->twstock) (2.9)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/site-packages (from requests->twstock) (1.25.9)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/site-packages (from requests->twstock) (2020.4.5.1)
Installing collected packages: twstock
Successfully installed twstock-1.3.1
[33mWARNING: You are using pip version 20.0.2; however, version 20.1.1 is available.
You should consider upgrading via the '/usr/local/bin/python -m pip install --upgrade pip' command.[0m
###Markdown
app.tool
###Code
%cd /workspace/twint/app
from app import tools
tools.generate_rss_yahoo_csv(
save_to="./resource/rss_yahoo_us_indicies.csv",
symbol_path="./resource/symbol_indicies.csv")
###Output
/workspace/twint/app
###Markdown
SingGlow Experiment
###Code
from common_definitions import *
from pipeline import *
import tensorflow as tf
import matplotlib.pyplot as plt
import librosa
import librosa.display
import soundfile as sf
import IPython.display as ipd
from tqdm.notebook import tqdm
import pickle
from data_loarder import *
os.chdir(r'D:\PlayGround\research\SinGlow\runs')
BATCH_SIZE = 64
tfrecord_dir = r'D:\PlayGround\research\SinGlow\runs'
data_loader = SongDataLoader('real.tfrecords',tfrecord_dir=tfrecord_dir)
data_loader.make(r'D:\PlayGround\research\SongDatabase\RealSinger\vocal collection\wav files')
real_dataset = data_loader.load(sampling_num=200)
del data_loader
# data_loader = SongDataLoader('virtual.tfrecords',tfrecord_dir=tfrecord_dir)
# data_loader.make(r'D:\PlayGround\research\SongDatabase\VirtualSinger')
# virtual_dataset = data_loader.load()
# del data_loader
# Step ?. the brain
brain = Brain(SQUEEZE_FACTOR, K_GLOW, L_GLOW, WINDOW_LENGTH, CHANNEL_SIZE, LEARNING_RATE)
# load weight if available
brain.model(tf.random.uniform((2, WINDOW_LENGTH, 1, CHANNEL_SIZE), 0.05, 1), training=True)
CHECKPOINT_PATH = r'D:\PlayGround\research\SinGlow\checkpoints\weights'
print(brain.load_weights(CHECKPOINT_PATH))
import pickle
# real z
real_z_path = r'D:\PlayGround\research\SinGlow\runs\real_z.pickle'
if os.path.exists(real_z_path):
with open(real_z_path,mode='rb') as f:
real_z = pickle.load(f)
else:
real_z_results = []
for i in tqdm(real_dataset):
real_z_results.append(brain.forward(i).numpy()[0])
real_z = np.apply_along_axis(np.mean,0,np.array(real_z_results))
with open(real_z_path,mode='wb') as f:
pickle.dump(real_z, f)
# # virtual z
# if os.path.exists('virtual_z.pickle'):
# with open('virtual_z.pickle',mode='rb') as f:
# virtual_z = pickle.load(f)
# else:
# virtual_z_results = []
# for i in tqdm(virtual_dataset):
# virtual_z_results.append(brain.forward(i).numpy()[0])
# virtual_z = np.apply_along_axis(np.mean,0,np.array(virtual_z_results))
# with open('virtual_z.pickle',mode='wb') as f:
# pickle.dump(virtual_z, f)
# # delta z
# delta_z = real_z-virtual_z
# figure, ax = plt.subplots(3)
# figure.set_size_inches(12,9)
# plt.subplots_adjust(hspace=1)
# ax[0] = plt.plot(np.array(real_z))
# # ax[1] = plt.plot(np.array(delta_z))
# # ax[2] = plt.plot(np.array(virtual_z))
# # librosa.display.waveplot(np.array(real_z), sr=SAMPLING_RATE, ax=ax[0])
# # librosa.display.waveplot(np.array(delta_z), sr=SAMPLING_RATE, ax=ax[1])
# # librosa.display.waveplot(np.array(virtual_z), sr=SAMPLING_RATE, ax=ax[2])
# ax[0].set_title("real_z")
# # ax[1].set_title("delta_z")
# # ax[2].set_title("virtual_z")
# ax[0].set_ylim([-1,1])
# # ax[1].set_ylim([-1,1])
# # ax[2].set_ylim([-1,1])
# plt.show()
os.chdir(r'D:\PlayGround\research\SinGlow\runs')
virtual_file_dir = r'D:\PlayGround\research\SongDatabase\TestSongs'
name = 'virtual_align_short'
pickle_file = f'result_{name}.pickle'
if os.path.exists(pickle_file):
with open(pickle_file, mode='rb') as f:
y, sr = pickle.load(f)
else:
y, sr = librosa.load(os.path.join(virtual_file_dir, name + '.mp3'))
with open(pickle_file, mode='wb') as f:
pickle.dump((y, sr), f)
ys = np.array([y[i*sr*WINDOW_SIZE:(i+1)*sr*WINDOW_SIZE] for i in range(len(y)//(sr*WINDOW_SIZE))] + [y[-sr*WINDOW_SIZE:]]).reshape((-1,sr*WINDOW_SIZE,1,1))
ys = tf.image.resize(ys,[WINDOW_LENGTH,1]).numpy().reshape((-1,1,WINDOW_LENGTH,1,1))
ys_dataset = tf.data.Dataset.from_tensor_slices(ys)
result_ys = []
for i in tqdm(ys_dataset):
result_z = (brain.forward(i)+real_z)/2 # midpoint of the vector addition (average of the two latent vectors)
result_ys+=list(tf.squeeze(brain.backward(result_z).numpy()))
sf.write(f'result_{name}.wav', np.array(result_ys), SAMPLING_RATE, subtype='PCM_24')
###Output
0%| | 0/47 [00:00<?, ?it/s]C:\Users\hobar\anaconda3\envs\DeepLearningTF2\lib\site-packages\keras\legacy_tf_layers\core.py:513: UserWarning: `tf.layers.flatten` is deprecated and will be removed in a future version. Please use `tf.keras.layers.Flatten` instead.
warnings.warn('`tf.layers.flatten` is deprecated and '
C:\Users\hobar\anaconda3\envs\DeepLearningTF2\lib\site-packages\keras\engine\base_layer.py:2215: UserWarning: `layer.apply` is deprecated and will be removed in a future version. Please use `layer.__call__` method instead.
warnings.warn('`layer.apply` is deprecated and '
100%|██████████| 47/47 [02:11<00:00, 2.81s/it]
###Markdown
Test song merge
###Code
# test_file_dir = r'D:\PlayGround\research\SongDatabase\TestSongs'
# name='virtual_align_short'
# virtual_file_path = os.path.join(test_file_dir,name+'.pickle')
# if os.path.exists(virtual_file_path):
# with open(virtual_file_path,mode='rb') as f:
# y = pickle.load(f)
# else:
# y, sr = librosa.load(os.path.join(test_file_dir,name+'.mp3'))
# with open(virtual_file_path,mode='wb') as f:
# pickle.dump(y, f)
# virtual_data = y
# name='real_short_pure_reference'
# real_file_path = os.path.join(test_file_dir,name+'.pickle')
# if os.path.exists(real_file_path):
# with open(real_file_path,mode='rb') as f:
# y = pickle.load(f)
# else:
# y, sr = librosa.load(os.path.join(test_file_dir,name+'.mp3'))
# with open(real_file_path,mode='wb') as f:
# pickle.dump(y, f)
# real_data = y
# real = np.array([real_data[i*22050*WINDOW_SIZE:(i+1)*22050*WINDOW_SIZE] for i in range(len(real_data)//(22050*WINDOW_SIZE))] + [real_data[-22050*WINDOW_SIZE:]]).reshape((-1,22050*WINDOW_SIZE,1,1))
# real = tf.image.resize(real,[WINDOW_LENGTH,1]).numpy().reshape((-1,1,WINDOW_LENGTH,1,1))
# virtual = np.array([virtual_data[i*22050*WINDOW_SIZE:(i+1)*22050*WINDOW_SIZE] for i in range(len(virtual_data)//(22050*WINDOW_SIZE))] + [real_data[-22050*WINDOW_SIZE:]]).reshape((-1,22050*WINDOW_SIZE,1,1))
# virtual = tf.image.resize(virtual,[WINDOW_LENGTH,1]).numpy().reshape((-1,1,WINDOW_LENGTH,1,1))
# ys_dataset = tf.data.Dataset.from_tensor_slices((virtual,real[:virtual.shape[0]]))
# result_ys = []
# for virtual,real in tqdm(ys_dataset):
# virtual_forward = brain.forward(virtual)
# real_forward = brain.forward(real)
# result_z = virtual_forward/2 + real_forward/2
# result_ys+=list(tf.clip_by_value(tf.squeeze(brain.backward(result_z)),-1,1).numpy())
# virtual_file_path = os.path.join(test_file_dir,'result.wav')
# sf.write(virtual_file_path, np.array(result_ys), SAMPLING_RATE, subtype='PCM_24')
###Output
_____no_output_____
###Markdown
Generating names dataset Here we will generate a names dataset. The names dataset is simply a list of names.
###Code
%load_ext autoreload
%autoreload 2
import re
from pytorch_lightning.callbacks.early_stopping import EarlyStopping
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
file_lists=['/notebooks/nlp_deeplearning/charmodel/data/first_names.all.txt']
names_list = []
with open(file_lists[0],'r') as file:
for name in file.read().splitlines()[1:]:
filtered_name = re.sub(r'\W+', '', name)
names_list.append(filtered_name.upper())
names_list[:5]
###Output
_____no_output_____
###Markdown
Load data
###Code
import sys
sys.path.insert(0,'/notebooks/Projects/Seq2Seq')
sys.path.insert(0, '../')
sys.path.insert(0,'../runs')
from mllib.seq2seq.namegen import *
from dotmap import DotMap
from mllib.seq2seq.model import *
from pytorch_lightning.loggers import TensorBoardLogger
from pytorch_lightning.callbacks import ModelCheckpoint
from pytorch_lightning.loggers.neptune import NeptuneLogger
import pytorch_lightning as pl
dsrc = get_dataset(names_list)
###Output
_____no_output_____
###Markdown
Modelling
###Code
hparams = DotMap({'vocab_size': len(dsrc.vocab),
'embedding_size': 30,
'hidden_size': 300,
'max_len': 15,
'num_layers':2,
'lr': 0.02})
###Output
_____no_output_____
###Markdown
Training
###Code
neptune_logger = NeptuneLogger(
api_key="eyJhcGlfYWRkcmVzcyI6Imh0dHBzOi8vYXBwLm5lcHR1bmUuYWkiLCJhcGlfdXJsIjoiaHR0cHM6Ly9hcHAubmVwdHVuZS5haSIsImFwaV9rZXkiOiIwYWY0OTQ4MS03MGY4LTRhNjUtOTFlZC0zZjVjMjlmZGQxNjQifQ==",
project_name="puneetgirdhar.in/charnn")
tensorboard_logger = TensorBoardLogger("tb_logs", name="my_model")
dls = dsrc.dataloaders(after_item=after_item, before_batch=pad_input_chunk_new, bs=32, n_inp=2)
# make sure that we use serializing option to instantiate the model
model = RNN(hparams, char2tensor = str(dict(dls.numericalize.o2i)), vocab=str(dls.numericalize.vocab))
checkpoint_callback = ModelCheckpoint(
dirpath = './checkpoints',
filename='{epoch}',
save_top_k=3,
monitor='val_loss',
mode='min'
)
trainer = pl.Trainer(fast_dev_run=False, logger=neptune_logger, auto_lr_find='learning_rate',gpus=1,
callbacks=[EarlyStopping(monitor='val_loss',patience=5), checkpoint_callback],
)
trainer.fit(model, dls.train, dls.valid)
###Output
_____no_output_____
###Markdown
Evaluation Now, we can generate some names randomly
###Code
md = get_first_name_model()
md.cuda()
md.generate("CHRIS")
###Output
_____no_output_____
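###Markdown
To sample more broadly, the same call can be looped over different seed prefixes (a sketch; the prefixes below are arbitrary examples, assuming `generate` accepts any uppercase prefix like the call above):
###Code
# Generate a name from each of a few example seed prefixes (hypothetical prefixes)
for prefix in ["A", "JO", "MAR", "SA"]:
    print(prefix, "->", md.generate(prefix))
###Output
_____no_output_____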
###Markdown
Bert Transformer Here is an example of using a custom BERT-style transformer for a seq2seq task. I trained the model for German-to-English translation.
###Code
import torch
import spacy
from mllib.bert import *
from runs.run_bert import *
import spacy
device = torch.device('cpu')
model = LITTransformer.load_from_checkpoint("~/trainer.ckpt")
dm = MyDataModule(batch_size=1)
dm.prepare_data()
dm.setup()
src = dm.train_data.data[0][0]
trg = dm.train_data.data[0][1]
nlp = spacy.load('de_core_news_sm')
src = [token.text.lower() for token in nlp(src)]
nlp = spacy.load("en_core_web_sm")
trg = [token.text.lower() for token in nlp(trg)]
def translate_sentence(sentence, src_vocab, trg_vocab, model, device, max_len=50):
model.eval()
BOS_IDX = src_vocab['<bos>']
EOS_IDX = trg_vocab['<pad>']
if isinstance(sentence, str):
nlp = spacy.load('de_core_news_sm')
tokens = [token.text.lower() for token in nlp(sentence)]
else:
tokens = [token.lower() for token in sentence]
src_indices = [BOS_IDX] + [src_vocab.stoi[token] for token in tokens] + [EOS_IDX]
src_tensor = torch.LongTensor(src_indices).unsqueeze(0).to(device)
src_mask = model.make_src_mask(src_tensor)
with torch.no_grad():
enc_src = model.encoder(src_tensor, src_mask)
trg_indices = [BOS_IDX]
for i in range(max_len):
trg_tensor = torch.LongTensor(trg_indices).unsqueeze(0).to(device)
trg_mask = model.make_trg_mask(trg_tensor)
with torch.no_grad():
output, attention = model.decoder(trg_tensor, enc_src, src_mask, trg_mask)
pred_token = output.argmax(2)[:,-1].item()
trg_indices.append(pred_token)
if pred_token == EOS_IDX:
break
trg_tokens = [trg_vocab.itos[i] for i in trg_indices]
return trg_tokens[1:], attention
translation, attention = translate_sentence(src, dm.src_vocab, dm.trg_vocab, model.model, device)
#translation
def display_attention(sentence, translation, attention, n_heads= 8, n_rows= 4, n_cols=2):
assert n_rows * n_cols == n_heads
fig = plt.figure(figsize=(15, 25))
for i in range(n_heads):
ax = fig.add_subplot(n_rows, n_cols, i+1)
_attention = attention.squeeze(0)[i].cpu().detach().numpy()
cax = ax.matshow(_attention, cmap='bone')
ax.tick_params(labelsize=12)
ax.set_xticklabels([''] + ['<bos>'] + [t.lower() for t in sentence] + ['<eos>'], rotation=45)
ax.set_yticklabels([''] + translation)
ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
ax.yaxis.set_major_locator(ticker.MultipleLocator(1))
plt.show()
plt.close()
display_attention(src, translation, attention)
###Output
<ipython-input-81-3b3c3a1182ac>:12: UserWarning: FixedFormatter should only be used together with FixedLocator
ax.set_xticklabels([''] + ['<bos>'] + [t.lower() for t in sentence] + ['<eos>'], rotation=45)
<ipython-input-81-3b3c3a1182ac>:13: UserWarning: FixedFormatter should only be used together with FixedLocator
ax.set_yticklabels([''] + translation)
###Markdown
Run all notebooksRun all notebooks in correct order.
###Code
%run ./00_setup_shapes.ipynb
%run ./0_process_bathymetry.ipynb
%run ./1a_gather_data_stations.ipynb
%run ./1b_gather_data_rain.ipynb
%run ./2_data_description.ipynb
%run ./3_calculation.ipynb
###Output
_____no_output_____ |
site/public/courses/DS-2.1/Notebooks/simple_PCA.ipynb | ###Markdown
Principal Component Analysis (PCA)- PCA is one of the best-known algorithms for Dimensionality Reduction- PCA: - Reduces the number of features - While keeping the feature information - Removes correlations among features - PCA emphasizes variation of strong features, making the data easier to visualize - Let's watch: https://www.youtube.com/watch?v=HMOI_lkzW08 (What is PCA?)- Let's watch: https://www.youtube.com/watch?v=0GzMcUy7ZI0 (What is a covariance matrix?)- Let's watch: https://www.youtube.com/watch?v=Awcj447pYuk (How to multiply a matrix by a vector?) Review matrix multiplication- Matrix `A = np.array([[2, 0], [1, 5]])` and vector `v = np.array([3, 4])` are given.- What is the product of `A` and `v`?- Compute it by hand- Write Python code to compute it (Hint: use `np.dot(A, v)`)
###Code
import numpy as np
A = np.array([[2, 0], [1, 5]])
v = np.array([3, 4])
print(np.dot(A, v))
###Output
[ 6 23]
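###Markdown
Worked by hand with the row-times-column rule: $Av = \begin{bmatrix}2 & 0\\ 1 & 5\end{bmatrix}\begin{bmatrix}3\\ 4\end{bmatrix} = \begin{bmatrix}2\cdot 3 + 0\cdot 4\\ 1\cdot 3 + 5\cdot 4\end{bmatrix} = \begin{bmatrix}6\\ 23\end{bmatrix}$, which matches the `np.dot(A, v)` output above.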
###Markdown
Eigenvalues and Eigenvectors of a matrix For a given matrix `A`, we are interested in obtaining a vector `v` and a scalar value `a` such that: `Av = av` Write Python code to obtain vector `v` and scalar `a` for the given matrix `A`
###Code
eig_value, eig_vector = np.linalg.eig(A)
print(eig_value)
print(eig_vector)
np.dot(A, eig_vector[:, 0])
eig_value[0]*eig_vector[:, 0]
###Output
_____no_output_____
###Markdown
Check that Av = av
###Code
np.dot(A, eig_vector[:, 1])
eig_value[1]*eig_vector[:, 1]
###Output
_____no_output_____
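###Markdown
For this particular `A` the eigenvalues can also be found by hand from $\det(A - aI) = (2-a)(5-a) = 0$, giving $a = 2$ and $a = 5$. A quick programmatic check of $Av = av$ for every eigenpair (a small sketch reusing `A`, `eig_value` and `eig_vector` from the cells above):
###Code
# Confirm A v_i = a_i v_i for each eigenpair returned by np.linalg.eig
for i in range(len(eig_value)):
    lhs = np.dot(A, eig_vector[:, i])
    rhs = eig_value[i] * eig_vector[:, i]
    print(i, np.allclose(lhs, rhs))
###Output
_____no_output_____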
###Markdown
Activity: Are the countries in the UK different in terms of food?- The table gives the average consumption of 17 types of food in grams per person per week for every country in the UK- It would be great if we could visually represent the differences among UK countries based on the food they eat - Let's read: http://setosa.io/ev/principal-component-analysis/ Activity: Write code that obtains the two principal components from the 17 types of food in the UK
###Code
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn import preprocessing
import matplotlib.pyplot as plt
import scipy.stats  # needed for the pearsonr correlation checks below
df = pd.read_excel('pca_uk.xlsx')
X = np.array([df[i].values for i in df.columns if i != 'Features'])
print(X)
pca = PCA(n_components=2)
X_r = pca.fit_transform(X)
# Principal components of the 17 features:
print(X_r)
# Let's visualize the principal components
for k, (i,j) in enumerate(zip(X_r[:, 0], X_r[:, 1])):
plt.scatter(i, j)
plt.text(i+0.3, j+0.3, df.columns[:-1][k])
plt.show()
###Output
_____no_output_____
###Markdown
Answer: Ireland is different from the other three countries in the UK How much of the dataset's information is preserved in the components? Hint: use `pca.explained_variance_ratio_`
###Code
# PCA computation by sklearn
pca = PCA(n_components=2)
X_r = pca.fit_transform(X)
print(X_r)
print(pca.explained_variance_)
print(pca.explained_variance_ratio_)
print(pca.explained_variance_ratio_.cumsum())
###Output
[[-144.99315218 -2.53299944]
[ 477.39163882 -58.90186182]
[ -91.869339 286.08178613]
[-240.52914764 -224.64692488]]
[105073.34576714 45261.62487597]
[0.67444346 0.29052475]
[0.67444346 0.96496821]
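###Markdown
A scree-style bar chart makes the same point visually; a minimal sketch using the fitted `pca` object from the cell above:
###Code
# Bar chart of the explained variance ratio per principal component
plt.bar(range(1, len(pca.explained_variance_ratio_) + 1), pca.explained_variance_ratio_)
plt.xlabel('Principal component')
plt.ylabel('Explained variance ratio')
plt.show()
###Output
_____no_output_____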
###Markdown
Calculate the correlation of the principal components
###Code
print('Correlation of PCA Component:')
print(scipy.stats.pearsonr(X_r[:, 0], X_r[:, 1]))
###Output
Correlation of PCA Component:
(0.0, 1.0)
###Markdown
Let's write our own function to obtain the principal components Activity: PCA Steps Follow the steps here and write a function that computes the principal components for the dataset from the YouTube video. https://www.youtube.com/watch?v=0GzMcUy7ZI0 Steps: 1- Subtract the column means from the feature matrix 2- Calculate the covariance of the centered matrix 3- Calculate the eigenvalues and eigenvectors of the covariance matrix. Arrange the eigenvalues in decreasing order 4- Return the first K (two, for example) columns of the matrix multiplication of the centered matrix with the eigenvector matrix Compare the result of the custom function with PCA in `sklearn`
###Code
# PCA computation by sklearn
X = np.array([[1, 1, 1], [1, 2, 1], [1, 3, 2], [1, 4, 3]])
# print(X)
pca = PCA(n_components=2)
X_r = pca.fit_transform(X)
print(X_r)
print(pca.explained_variance_)
print(pca.explained_variance_ratio_)
print(pca.explained_variance_ratio_.cumsum())
print('Correlation of PCA Component:')
print(scipy.stats.pearsonr(X_r[:, 0], X_r[:, 1]))
# Our function to comapre
def PCA_calculation(data, n_comp=2):
M = np.mean(data, axis=0)
# center columns by subtracting column means
C = X - M
# calculate covariance matrix of centered matrix
V = np.cov(C.T)
print(V)
# eigendecomposition of covariance matrix
eig_value, eig_vector = np.linalg.eig(V)
# sort eigenvalue in decreasing order
idx = np.argsort(eig_value)[::-1]
eig_value = eig_value[idx]
# sort eigenvectors according to same index
eig_vector = eig_vector[:, idx]
P = np.dot(C, eig_vector)[:, :n_comp]
return P
PCA_calculation(X, 2)
def PCA_custom(data, dims_rescaled_data=2):
"""
returns: data transformed into `dims_rescaled_data` dims/columns
pass in: data as 2D NumPy array
"""
# mean center the data
data = data - np.mean(data, axis=0)
# calculate the covariance matrix
R = np.cov(data, rowvar=False)
# calculate eigenvectors & eigenvalues of the covariance matrix
# (np.linalg.eigh would also work here, and be a bit faster, since R is symmetric)
evals, evecs = np.linalg.eig(R)
# sort eigenvalue in decreasing order
idx = np.argsort(evals)[::-1]
evecs = evecs[:, idx]
# sort eigenvectors according to same index
evals = evals[idx]
# select the first n eigenvectors (n is desired dimension
# of rescaled data array, or dims_rescaled_data)
evecs = evecs[:, :dims_rescaled_data]
# carry out the transformation on the data using eigenvectors
# and return the projected (re-scaled) data
return np.dot(evecs.T, data.T).T
print(PCA_custom(X, 2))
###Output
[[ 1.65392786 -0.2775295 ]
[ 0.84584087 0.31153366]
[-0.55130929 0.09250983]
[-1.94845944 -0.126514 ]]
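###Markdown
Principal components are only defined up to a sign flip per column, so the custom implementation should agree with sklearn's `X_r` once signs are ignored; a quick check (a sketch, assuming the cells above have been run):
###Code
# Compare magnitudes only, since each PCA column is unique up to its sign
print(np.allclose(np.abs(PCA_custom(X, 2)), np.abs(X_r), atol=1e-6))
###Output
_____no_output_____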
|
docs/quick_start/demo/op2_demo_numpy2.ipynb | ###Markdown
OP2: Numpy Demo 2 (Composite Plate Stress) The Jupyter notebook for this demo can be found in: - docs/quick_start/demo/op2_demo_numpy2.ipynb - https://github.com/SteveDoyle2/pyNastran/tree/master/docs/quick_start/demo/op2_demo_numpy2.ipynb It's recommended that you first go through: - https://github.com/SteveDoyle2/pyNastran/tree/master/docs/quick_start/demo/op2_intro.ipynb - https://github.com/SteveDoyle2/pyNastran/tree/master/docs/quick_start/demo/op2_demo.ipynb - https://github.com/SteveDoyle2/pyNastran/tree/master/docs/quick_start/demo/op2_demo_numpy1.ipynb In this tutorial, composite plate stresses will be covered. Load the model If the BWB example OP2 doesn't exist, we'll run Nastran to create it.
###Code
import os
import copy
import numpy as np
np.set_printoptions(precision=2, threshold=20, linewidth=100, suppress=True)
import pyNastran
from pyNastran.op2.op2 import read_op2
from pyNastran.utils.nastran_utils import run_nastran
pkg_path = pyNastran.__path__[0]
model_path = os.path.join(pkg_path, '..', 'models')
bdf_filename = os.path.join(model_path, 'bwb', 'bwb_saero.bdf')
op2_filename = os.path.join(model_path, 'bwb', 'bwb_saero.op2')
if not os.path.exists(op2_filename):
keywords = ['scr=yes', 'bat=no', 'old=no']
run_nastran(bdf_filename, nastran_cmd='nastran', keywords=keywords, run=True)
import shutil
op2_filename2 = os.path.join('bwb_saero.op2')
shutil.move(op2_filename2, op2_filename)
assert os.path.exists(op2_filename), print_bad_path(op2_filename)
model = read_op2(op2_filename, build_dataframe=False, debug=False)
print(model.get_op2_stats(short=True))
###Output
_____no_output_____
###Markdown
Accessing the Composite Stress
###Code
isubcase = 1
stress = model.cquad4_composite_stress[isubcase]
print(stress)
headers = stress.get_headers()
imax = headers.index('major')
###Output
type=RealCompositePlateStressArray nelements=9236 ntotal=92360
data: [1, ntotal, 9] where 9=[o11, o22, t12, t1z, t2z, angle, major, minor, max_shear]
element_layer.shape = (92360, 2)
data.shape = (1, 92360, 9)
element type: QUAD4LC-composite
sort1
lsdvmns = [1]
###Markdown
Composite Stress/Strain data is tricky to access as there is not a good way to index the data. Let's cheat a bit using the element ids and layers to make a pivot table. - **table** is (ntimes, nelements, nlayers, ndata) - **max_principal_stress_table** is (nelements, nlayers)
###Code
from pyNastran.femutils.utils import pivot_table
eids = stress.element_layer[:, 0]
layers = stress.element_layer[:, 1]
## now pivot the stress
table, rows_new = pivot_table(stress.data, eids, layers)
# now access the max principal stress for the static result
# table is (itime, nelements, nlayers, data)
itime = 0
max_principal_stress_table = table[itime,:,:,imax]
ueids = np.unique(eids)
print('max_principal_stress_table:\n%s' % max_principal_stress_table)
###Output
max_principal_stress_table:
[[ 239.3 163.91 98.41 ... -35.77 -34.6 -19.86]
[ 18.61 78.52 25.52 ... -63.92 -62.48 -12.99]
[ 2.99 105.48 49.37 ... -137.74 -127.07 -41.14]
...
[ 157. 170.3 112.79 ... 44.56 47.13 38.9 ]
[ 123.96 143.01 97.41 ... 40.99 44.06 42.47]
[ 90.04 109.97 79.86 ... 33.18 36.12 24.04]]
###Markdown
More realistic pivot tableAll the elements have 10 layers. Let's remove the last 5 layers.By having empty layers, the pivot table now has nan data in it.
###Code
# drop out 5 layers
eids2 = stress.element_layer[:-5, 0]
layers2 = stress.element_layer[:-5, 1]
data2 = stress.data[:, :-5, :]
# now pivot the stress
table, rows_new = pivot_table(data2, eids2, layers2)
# access the table data
# table is (itime, nelements, nlayers, data)
itime = 0
max_principal_stress_table2 = table[itime,:,:,imax]
print('max_principal_stress_table2:\n%s' % max_principal_stress_table2)
###Output
max_principal_stress_table2:
[[ 239.3 163.91 98.41 ... -35.77 -34.6 -19.86]
[ 18.61 78.52 25.52 ... -63.92 -62.48 -12.99]
[ 2.99 105.48 49.37 ... -137.74 -127.07 -41.14]
...
[ 157. 170.3 112.79 ... 44.56 47.13 38.9 ]
[ 123.96 143.01 97.41 ... 40.99 44.06 42.47]
[ 90.04 109.97 79.86 ... nan nan nan]]
###Markdown
OP2: Numpy Demo 2 (Composite Plate Stress)The Jupyter notebook for this demo can be found in: - docs/quick_start/demo/op2_demo_numpy1.ipynb - https://github.com/SteveDoyle2/pyNastran/tree/master/docs/quick_start/demo/op2_demo_numpy1.ipynbIt's recommended that you first go through: - https://github.com/SteveDoyle2/pyNastran/tree/master/docs/quick_start/demo/op2_demo.ipynb - https://github.com/SteveDoyle2/pyNastran/tree/master/docs/quick_start/demo/op2_demo_numpy1.ipynbIn this tutorial, composite plate stresses will be covered. Load the modelIf the BWB example OP2 doesn't exist, we'll run Nastran to create it.
###Code
import os
import copy
import numpy as np
np.set_printoptions(precision=2, threshold=20, linewidth=100, suppress=True)
import pyNastran
from pyNastran.op2.op2 import read_op2
from pyNastran.utils.nastran_utils import run_nastran
pkg_path = pyNastran.__path__[0]
model_path = os.path.join(pkg_path, '..', 'models')
bdf_filename = os.path.join(model_path, 'bwb', 'bwb_saero.bdf')
op2_filename = os.path.join(model_path, 'bwb', 'bwb_saero.op2')
if not os.path.exists(op2_filename):
keywords = ['scr=yes', 'bat=no', 'old=no']
run_nastran(bdf_filename, nastran_cmd='nastran', keywords=keywords, run=True)
import shutil
op2_filename2 = os.path.join('bwb_saero.op2')
shutil.move(op2_filename2, op2_filename)
assert os.path.exists(op2_filename), print_bad_path(op2_filename)
model = read_op2(op2_filename, build_dataframe=False, debug=False)
print(model.get_op2_stats(short=True))
###Output
_____no_output_____
###Markdown
Accessing the Composite StressLet's get the max principal stress.
###Code
isubcase = 1
stress = model.cquad4_composite_stress[isubcase]
print(stress)
headers = stress.get_headers()
imax = headers.index('major')
###Output
type=RealCompositePlateStressArray nelements=9236 ntotal=92360
data: [1, ntotal, 9] where 9=[o11, o22, t12, t1z, t2z, angle, major, minor, max_shear]
element_layer.shape = (92360, 2)
data.shape = (1, 92360, 9)
element type: QUAD4LC-composite-95
sort1
lsdvmns = [1]
###Markdown
Composite Stress/Strain data is tricky to access as there is not a good way to index the dataLet's cheat a bit using the element ids and layers to make a pivot table. - **table** is (ntimes, nelements, nlayers, ndata) - **max_principal_stress_table** is (nelements, nlayers) 
###Code
print('Element, Layer')
print(stress.element_layer)
from pyNastran.femutils.utils import pivot_table
## now pivot the stress
eids = stress.element_layer[:, 0]
layers = stress.element_layer[:, 1]
table, rows_new = pivot_table(stress.data, eids, layers)
# now access the max principal stress for the static result
# table is (itime, nelements, nlayers, data)
itime = 0
max_principal_stress_table = table[itime, :, :, imax]
ueids = np.unique(eids)
print('max_principal_stress_table:\n%s' % max_principal_stress_table)
###Output
max_principal_stress_table:
[[ 239.3 163.91 98.41 ... -35.77 -34.6 -19.86]
[ 18.61 78.52 25.52 ... -63.92 -62.48 -12.99]
[ 2.99 105.48 49.37 ... -137.74 -127.07 -41.14]
...
[ 157. 170.3 112.79 ... 44.56 47.13 38.9 ]
[ 123.96 143.01 97.41 ... 40.99 44.06 42.47]
[ 90.04 109.97 79.86 ... 33.18 36.12 24.04]]
###Markdown
More realistic pivot tableAll the elements have 10 layers. Let's remove the last 5 layers of the last element.By having empty layers, the pivot table now has nan data in it.
###Code
# drop out 5 layers
eids2 = stress.element_layer[:-5, 0]
layers2 = stress.element_layer[:-5, 1]
data2 = stress.data[:, :-5, :]
# now pivot the stress
table, rows_new = pivot_table(data2, eids2, layers2)
# access the table data
# table is (itime, nelements, nlayers, data)
itime = 0
max_principal_stress_table2 = table[itime,:,:,imax]
print('max_principal_stress_table2:\n%s' % max_principal_stress_table2)
###Output
max_principal_stress_table2:
[[ 239.3 163.91 98.41 ... -35.77 -34.6 -19.86]
[ 18.61 78.52 25.52 ... -63.92 -62.48 -12.99]
[ 2.99 105.48 49.37 ... -137.74 -127.07 -41.14]
...
[ 157. 170.3 112.79 ... 44.56 47.13 38.9 ]
[ 123.96 143.01 97.41 ... 40.99 44.06 42.47]
[ 90.04 109.97 79.86 ... nan nan nan]]
###Markdown
Grid Point Forces - Interface LoadsWe need some more data from the geometry
###Code
import pyNastran
from pyNastran.bdf.bdf import read_bdf
bdf_model = read_bdf(bdf_filename)
out = bdf_model.get_displacement_index_xyz_cp_cd()
icd_transform, icp_transform, xyz_cp, nid_cp_cd = out
nids = nid_cp_cd[:, 0]
nid_cd = nid_cp_cd[:, [0, 2]]
xyz_cid0 = bdf_model.transform_xyzcp_to_xyz_cid(
xyz_cp, nids, icp_transform,
cid=0)
del nids, out
from pyNastran.bdf.utils import parse_patran_syntax_dict
elems_nids = (
'Elem 1396 1397 1398 1399 1418 1419 1749 1750 1751 1752 2010 2011 2012 2620 2621 2639 2640 2641 1247:1251 1344:1363 1372:1380 1526:1536 1766:1774 1842:1851 2141:2152 2310:2321 2342:2365 2569:2577 2801:2956 3081:3246 3683:3742 3855:3920 4506:4603 4968:5047 5070:5175 5298:5469 5494:5565 5837:5954 '
'Node 2795 2796 2797 2798 3104 3106 3107 3132 3133 3135 3136 3137 3746 3747 3748 3749 3751 3752 3753 3754 3756 3757 3758 3759 3761 3762 3763 3764 3766 3767 3768 3769 3771 3772 3773 3774 3776 3777 3778 3779 3781 3782 3783 3784 3791 3792 3793 3796 3797 3798 3801 3802 3803 3806 3807 3808 3811 3812 3813 3816 3817 3818 3821 3822 3823 3826 3827 3828 4334 4335 4336 4338 4339 4340 4343 4344 4347 4348 4350 4351 4352 4354 4355 4356 4359 4360 4363 4364 4367 4368 4371 4372 4374 4375 4376 4378 4379 4382 4383 4385 4386 4387 4389 4390 4391 4394 4395 4398 4399 4401 4402 4403 4405 4406 4407 4409 4410 4411 4413 4414 4415 4418 4419 4593 4594 4596 4597 4599 4600 4602 4603 4605 4606 4608 4609 4611 4612 4614 4615 4617 4618 4620 4621 5818 5819 5820 5822 5823 5824 5826 5827 5828 5830 5831 5832 5834 5835 5836 5838 5839 5840 5842 5843 5844 5846 5847 5848 5850 5851 5852 5854 5855 5856 5872 5873 5874 5876 5877 5878 5880 5881 5882 5884 5885 5886 5888 5889 5890 5892 5893 5894 5896 5897 5898 5900 5901 5902 5904 5905 5906 6203 6204 6205 6206 6208 6209 6210 6211 6213 6214 6215 6216 6218 6219 6220 6221 6223 6224 6225 6226 6228 6229 6230 6231 6233 6234 6235 6236 6238 6239 6240 6241 6243 6244 6245 6246 6255 6256 6257 6263 6264 6265 6266 6268 6269 6270 6271 6273 6274 6275 6276 6278 6279 6280 6281 6283 6284 6285 6286 6288 6289 6290 6291 6293 6294 6295 6296 6298 6299 6300 6301 6303 6304 6305 6306 6355 6356 6357 6359 6360 6361 6363 6364 6365 6367 6368 6369 6371 6372 6373 6375 6376 6377 6379 6380 6381 6383 6384 6385 6411 6412 6414 6415 6417 6418 6420 6421 6423 6424 6426 6427 6429 6430 6432 6433 6435 6436 6438 6439 6441 6442 6459 6460 6462 6463 6465 6466 6468 6469 6471 6472 6474 6475 6477 6478 6480 6481 6483 6484 6486 6487 6489 6490 1201506 1201531 1202016 1202039 1202764 1202767 1202768 1202770 1202771 1202773 1202774 1202776 1202779 1202780 1202782 1202783 1202785 1202786 1202788 1203040 1316:1327 1444:1473 1490:1507 1531:1538 1563:1567 1710:1729 2008:2016 2039:2054 2136:2153 2351:2356 2507:2528 2720:2729 2731:2735 2764:2793 3040:3055 3339:3346 3348:3355 3357:3364 3366:3373 3375:3382 3384:3391 3396:3406:2 3407:3414 3424:3431 3433:3440 3442:3449 3451:3458 3460:3467 3469:3476 3481:3491:2 3492:3499 3658:3668 3670:3680 3682:3692 3705:3715 3717:3727 3729:3739 4560:4589 5290:5298 5300:5308 5310:5318 5320:5328 5339:5347 5349:5357 5359:5367 5369:5377 5858:5870 5947:5994 6001:6005 6007:6011 6013:6017 6019:6023 6025:6029 6031:6035 6037:6041 6043:6047 6309:6314 6319:6350 6445:6455 6811:6819 6821:6829 6831:6839 6841:6849 6851:6859 6870:6878 6880:6888 6890:6898 6900:6908 6910:6918 1201316:1201326:2 1201464:1201473 1201533:1201537:2 1202041:1202053:2 1202136:1202152:2 1202351:1202355:2 1202507:1202527:2 1202731:1202735 1203042:1203052:2 1203424:1203431 1203433:1203440 1203442:1203449 1203451:1203458 1203460:1203467 1203469:1203476 1203481:1203491:2 1203492:1203499 1203705:1203715 1203717:1203727 1203729:1203739 1205339:1205347 1205349:1205357 1205359:1205367 1205369:1205377 1206870:1206878 1206880:1206888 1206890:1206898 1206900:1206908 1206910:1206918 '
)
# print(elems_nids)
data = parse_patran_syntax_dict(elems_nids)
eids = data['Elem']
nids = data['Node']
#print(data, type(data))
isubcase = 1
grid_point_forces = model.grid_point_forces[isubcase]
print(''.join(grid_point_forces.get_stats()))
#print(grid_point_forces.object_methods())
# global xyz
coords = bdf_model.coords
# some more data
coord_out = bdf_model.coords[0]
summation_point = [0., 0., 0.]
#summation_point = [1197.97, 704.153, 94.9258] # ~center of interface line
log = bdf_model.log
forcei, momenti, force_sumi, moment_sumi = grid_point_forces.extract_interface_loads(
nids, eids,
coord_out, coords,
nid_cd,
icd_transform,
xyz_cid0,
summation_point=summation_point,
consider_rxf=True,
itime=0, debug=False, log=log)
# print(forcei, force_sumi)
# print(momenti, moment_sumi)
np.set_printoptions(precision=8, threshold=20, linewidth=100, suppress=True)
print(f'force = {force_sumi}; total={np.linalg.norm(force_sumi):.2f}')
print(f'moment = {moment_sumi}; total={np.linalg.norm(moment_sumi):.2f}')
np.set_printoptions(precision=2, threshold=20, linewidth=100, suppress=True)
###Output
type=RealGridPointForcesArray nelements=2 total=56033
data: [1, ntotal, 6] where 6=[f1, f2, f3, m1, m2, m3]
data.shape=(1, 56033, 6)
element type: *TOTALS*, APP-LOAD, BAR, F-OF-MPC, F-OF-SPC, QUAD4, TRIA3
sort1
lsdvmns = [0]
force = [ -0.05078125 -0.08984375 126271.086 ]; total=126271.09
moment = [ 1.1500996e+08 -1.5267941e+08 2.0000000e+01]; total=191149920.00
###Markdown
OP2: Numpy Demo 2 (Composite Plate Stress)The Jupyter notebook for this demo can be found in: - docs/quick_start/demo/op2_demo_numpy1.ipynb - https://github.com/SteveDoyle2/pyNastran/tree/master/docs/quick_start/demo/op2_demo_numpy1.ipynbIt's recommended that you first go through: - https://github.com/SteveDoyle2/pyNastran/tree/master/docs/quick_start/demo/op2_demo.ipynb - https://github.com/SteveDoyle2/pyNastran/tree/master/docs/quick_start/demo/op2_demo_numpy1.ipynbIn this tutorial, composite plate stresses will be covered. Load the modelIf the BWB example OP2 doesn't exist, we'll run Nastran to create it.
###Code
import os
import copy
import numpy as np
np.set_printoptions(precision=2, threshold=20, linewidth=100, suppress=True)
import pyNastran
from pyNastran.op2.op2 import read_op2
from pyNastran.utils.nastran_utils import run_nastran
pkg_path = pyNastran.__path__[0]
model_path = os.path.join(pkg_path, '..', 'models')
bdf_filename = os.path.join(model_path, 'bwb', 'bwb_saero.bdf')
op2_filename = os.path.join(model_path, 'bwb', 'bwb_saero.op2')
if not os.path.exists(op2_filename):
keywords = ['scr=yes', 'bat=no', 'old=no']
run_nastran(bdf_filename, nastran_cmd='nastran', keywords=keywords, run=True)
import shutil
op2_filename2 = os.path.join('bwb_saero.op2')
shutil.move(op2_filename2, op2_filename)
assert os.path.exists(op2_filename), print_bad_path(op2_filename)
model = read_op2(op2_filename, build_dataframe=False, debug=False)
print(model.get_op2_stats(short=True))
###Output
_____no_output_____
###Markdown
Accessing the Composite StressLet's get the max principal stress.
###Code
isubcase = 1
stress = model.cquad4_composite_stress[isubcase]
print(stress)
headers = stress.get_headers()
imax = headers.index('major')
###Output
type=RealCompositePlateStressArray nelements=9236 ntotal=92360
data: [1, ntotal, 9] where 9=[o11, o22, t12, t1z, t2z, angle, major, minor, max_shear]
element_layer.shape = (92360, 2)
data.shape = (1, 92360, 9)
element type: QUAD4LC-composite-95
sort1
lsdvmns = [1]
###Markdown
Composite Stress/Strain data is tricky to access as there is not a good way to index the dataLet's cheat a bit using the element ids and layers to make a pivot table. - **table** is (ntimes, nelements, nlayers, ndata) - **max_principal_stress_table** is (nelements, nlayers) 
###Code
print('Element, Layer')
print(stress.element_layer)
from pyNastran.femutils.utils import pivot_table
## now pivot the stress
eids = stress.element_layer[:, 0]
layers = stress.element_layer[:, 1]
table, rows_new = pivot_table(stress.data, eids, layers)
# now access the max principal stress for the static result
# table is (itime, nelements, nlayers, data)
itime = 0
max_principal_stress_table = table[itime, :, :, imax]
ueids = np.unique(eids)
print('max_principal_stress_table:\n%s' % max_principal_stress_table)
###Output
max_principal_stress_table:
[[ 235.29 161.24 95.75 ... -24.66 -24.58 -10.78]
[ 16.77 75.67 22.95 ... -56.91 -56.5 -3.56]
[ 2.87 103.14 46.78 ... -129.44 -120.5 -34.82]
...
[ 156.04 169.48 112.44 ... 44.56 47.04 39.24]
[ 123.26 142.38 97.18 ... 41.02 43.96 42.92]
[ 89.85 109.55 79.73 ... 33.28 36.07 24.61]]
###Markdown
More realistic pivot tableAll the elements have 10 layers. Let's remove the last 5 layers of the last element.By having empty layers, the pivot table now has nan data in it.
###Code
# drop out 5 layers
eids2 = stress.element_layer[:-5, 0]
layers2 = stress.element_layer[:-5, 1]
data2 = stress.data[:, :-5, :]
# now pivot the stress
table, rows_new = pivot_table(data2, eids2, layers2)
# access the table data
# table is (itime, nelements, nlayers, data)
itime = 0
max_principal_stress_table2 = table[itime,:,:,imax]
print('max_principal_stress_table2:\n%s' % max_principal_stress_table2)
###Output
max_principal_stress_table2:
[[ 235.29 161.24 95.75 ... -24.66 -24.58 -10.78]
[ 16.77 75.67 22.95 ... -56.91 -56.5 -3.56]
[ 2.87 103.14 46.78 ... -129.44 -120.5 -34.82]
...
[ 156.04 169.48 112.44 ... 44.56 47.04 39.24]
[ 123.26 142.38 97.18 ... 41.02 43.96 42.92]
[ 89.85 109.55 79.73 ... nan nan nan]]
###Markdown
Grid Point Forces - Interface LoadsWe need some more data from the geometry
###Code
import pyNastran
from pyNastran.bdf.bdf import read_bdf
bdf_model = read_bdf(bdf_filename)
out = bdf_model.get_displacement_index_xyz_cp_cd()
icd_transform, icp_transform, xyz_cp, nid_cp_cd = out
nids = nid_cp_cd[:, 0]
nid_cd = nid_cp_cd[:, [0, 2]]
xyz_cid0 = bdf_model.transform_xyzcp_to_xyz_cid(
xyz_cp, nids, icp_transform,
cid=0)
del nids, out
from pyNastran.bdf.utils import parse_patran_syntax_dict
elems_nids = (
'Elem 1396 1397 1398 1399 1418 1419 1749 1750 1751 1752 2010 2011 2012 2620 2621 2639 2640 2641 1247:1251 1344:1363 1372:1380 1526:1536 1766:1774 1842:1851 2141:2152 2310:2321 2342:2365 2569:2577 2801:2956 3081:3246 3683:3742 3855:3920 4506:4603 4968:5047 5070:5175 5298:5469 5494:5565 5837:5954 '
'Node 2795 2796 2797 2798 3104 3106 3107 3132 3133 3135 3136 3137 3746 3747 3748 3749 3751 3752 3753 3754 3756 3757 3758 3759 3761 3762 3763 3764 3766 3767 3768 3769 3771 3772 3773 3774 3776 3777 3778 3779 3781 3782 3783 3784 3791 3792 3793 3796 3797 3798 3801 3802 3803 3806 3807 3808 3811 3812 3813 3816 3817 3818 3821 3822 3823 3826 3827 3828 4334 4335 4336 4338 4339 4340 4343 4344 4347 4348 4350 4351 4352 4354 4355 4356 4359 4360 4363 4364 4367 4368 4371 4372 4374 4375 4376 4378 4379 4382 4383 4385 4386 4387 4389 4390 4391 4394 4395 4398 4399 4401 4402 4403 4405 4406 4407 4409 4410 4411 4413 4414 4415 4418 4419 4593 4594 4596 4597 4599 4600 4602 4603 4605 4606 4608 4609 4611 4612 4614 4615 4617 4618 4620 4621 5818 5819 5820 5822 5823 5824 5826 5827 5828 5830 5831 5832 5834 5835 5836 5838 5839 5840 5842 5843 5844 5846 5847 5848 5850 5851 5852 5854 5855 5856 5872 5873 5874 5876 5877 5878 5880 5881 5882 5884 5885 5886 5888 5889 5890 5892 5893 5894 5896 5897 5898 5900 5901 5902 5904 5905 5906 6203 6204 6205 6206 6208 6209 6210 6211 6213 6214 6215 6216 6218 6219 6220 6221 6223 6224 6225 6226 6228 6229 6230 6231 6233 6234 6235 6236 6238 6239 6240 6241 6243 6244 6245 6246 6255 6256 6257 6263 6264 6265 6266 6268 6269 6270 6271 6273 6274 6275 6276 6278 6279 6280 6281 6283 6284 6285 6286 6288 6289 6290 6291 6293 6294 6295 6296 6298 6299 6300 6301 6303 6304 6305 6306 6355 6356 6357 6359 6360 6361 6363 6364 6365 6367 6368 6369 6371 6372 6373 6375 6376 6377 6379 6380 6381 6383 6384 6385 6411 6412 6414 6415 6417 6418 6420 6421 6423 6424 6426 6427 6429 6430 6432 6433 6435 6436 6438 6439 6441 6442 6459 6460 6462 6463 6465 6466 6468 6469 6471 6472 6474 6475 6477 6478 6480 6481 6483 6484 6486 6487 6489 6490 1201506 1201531 1202016 1202039 1202764 1202767 1202768 1202770 1202771 1202773 1202774 1202776 1202779 1202780 1202782 1202783 1202785 1202786 1202788 1203040 1316:1327 1444:1473 1490:1507 1531:1538 1563:1567 1710:1729 2008:2016 2039:2054 2136:2153 2351:2356 2507:2528 2720:2729 2731:2735 2764:2793 3040:3055 3339:3346 3348:3355 3357:3364 3366:3373 3375:3382 3384:3391 3396:3406:2 3407:3414 3424:3431 3433:3440 3442:3449 3451:3458 3460:3467 3469:3476 3481:3491:2 3492:3499 3658:3668 3670:3680 3682:3692 3705:3715 3717:3727 3729:3739 4560:4589 5290:5298 5300:5308 5310:5318 5320:5328 5339:5347 5349:5357 5359:5367 5369:5377 5858:5870 5947:5994 6001:6005 6007:6011 6013:6017 6019:6023 6025:6029 6031:6035 6037:6041 6043:6047 6309:6314 6319:6350 6445:6455 6811:6819 6821:6829 6831:6839 6841:6849 6851:6859 6870:6878 6880:6888 6890:6898 6900:6908 6910:6918 1201316:1201326:2 1201464:1201473 1201533:1201537:2 1202041:1202053:2 1202136:1202152:2 1202351:1202355:2 1202507:1202527:2 1202731:1202735 1203042:1203052:2 1203424:1203431 1203433:1203440 1203442:1203449 1203451:1203458 1203460:1203467 1203469:1203476 1203481:1203491:2 1203492:1203499 1203705:1203715 1203717:1203727 1203729:1203739 1205339:1205347 1205349:1205357 1205359:1205367 1205369:1205377 1206870:1206878 1206880:1206888 1206890:1206898 1206900:1206908 1206910:1206918 '
)
# print(elems_nids)
data = parse_patran_syntax_dict(elems_nids)
eids = data['Elem']
nids = data['Node']
#print(data, type(data))
isubcase = 1
grid_point_forces = model.grid_point_forces[isubcase]
print(''.join(grid_point_forces.get_stats()))
#print(grid_point_forces.object_methods())
# global xyz
coords = bdf_model.coords
# some more data
coord_out = bdf_model.coords[0]
summation_point = [0., 0., 0.]
#summation_point = [1197.97, 704.153, 94.9258] # ~center of interface line
log = bdf_model.log
force_sumi, moment_sumi = grid_point_forces.extract_interface_loads(
nids, eids,
coord_out, coords,
nid_cd,
icd_transform,
xyz_cid0,
summation_point=summation_point,
consider_rxf=True,
itime=0, debug=False, log=log)
# print(forcei, force_sumi)
# print(momenti, moment_sumi)
np.set_printoptions(precision=8, threshold=20, linewidth=100, suppress=True)
print(f'force = {force_sumi}; total={np.linalg.norm(force_sumi):.2f}')
print(f'moment = {moment_sumi}; total={np.linalg.norm(moment_sumi):.2f}')
np.set_printoptions(precision=2, threshold=20, linewidth=100, suppress=True)
###Output
type=RealGridPointForcesArray nelements=2 total=56033
data: [1, ntotal, 6] where 6=[f1, f2, f3, m1, m2, m3]
data.shape=(1, 56033, 6)
element type: *TOTALS*, APP-LOAD, BAR, F-OF-MPC, F-OF-SPC, QUAD4, TRIA3
sort1
lsdvmns = [0]
force = [ -0.01953125 -0.0234375 126282.305 ]; total=126282.30
moment = [ 1.15019896e+08 -1.52693184e+08 -3.20000000e+01]; total=191166912.00
|
Lesson2.ipynb | ###Markdown
The Zen Of Python
###Code
import numpy
###Output
_____no_output_____
###Markdown
Variables A name that is used to denote something or a value is called a variable. In Python, variables can be declared and values assigned to them as follows:
###Code
x = 2
y = 5
xy = 'Hey'
xy.replace('y','r')
print(x+y, xy)
###Output
(7, 'Hey')
###Markdown
Multiple variables can be assigned with the same value.
###Code
x = y = 1
print(x,y)
###Output
(1, 1)
###Markdown
Operators Arithmetic Operators | Symbol | Task Performed ||----|---|| + | Addition || - | Subtraction || / | division || % | mod || * | multiplication || // | floor division || ** | to the power of |
###Code
1+2
2-1
1*2
1/2
###Output
_____no_output_____
###Markdown
0? This is because both the numerator and denominator are integers, so Python 2 performs integer division and discards the fractional part. By changing either the numerator or the denominator to a float, the correct answer can be obtained.
###Code
1/2.0
15%10
###Output
_____no_output_____
###Markdown
Floor division rounds the result down (toward negative infinity) to an integer, rather than to the nearest integer.
###Code
2.8//2.0
###Output
_____no_output_____
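###Markdown
For example (a quick sketch), a negative value shows that `//` rounds down rather than to the closest integer:
###Code
print 7 // 2    # 3
print -7 // 2   # -4, not -3
###Output
_____no_output_____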
###Markdown
Relational Operators | Symbol | Task Performed ||----|---|| == | True, if it is equal || != | True, if not equal to || < | less than || > | greater than || <= | less than or equal to || >= | greater than or equal to |
###Code
z = 1
z == 1
z > 1
###Output
_____no_output_____
###Markdown
Built-in Functions Simplifying Arithmetic Operations **round( )** function rounds the input value to a specified number of places or to the nearest integer.
###Code
print round(5.6231)
print round(4.55892, 2)
###Output
6.0
4.56
###Markdown
**complex( )** is used to define a complex number and **abs( )** outputs the absolute value of the same.
###Code
c =complex('5+2j')
print abs(c)
###Output
5.38516480713
###Markdown
**divmod(x,y)** outputs the quotient and the remainder as a tuple (you will learn about tuples in a later chapter) in the format (quotient, remainder).
###Code
divmod(10,2)
###Output
_____no_output_____
###Markdown
**isinstance( )** returns True if the first argument is an instance of the class given as the second argument. Multiple classes can also be checked at once.
###Code
print isinstance(1, int)
print isinstance(1.0,int)
print isinstance(1.0,(int,float))
###Output
True
False
True
###Markdown
**cmp(x,y)**|x ? y|Output||---|---|| x < y | -1 || x == y | 0 || x > y | 1 |
###Code
print cmp(1,5)
print cmp(2,1)
print cmp(2,2)
###Output
-1
1
0
###Markdown
**pow(x,y,z)** can be used to find the power $x^y$; if the optional third argument is given, the result is taken modulo that number, i.e. ($x^y$ % z).
###Code
print pow(3,3)
print pow(3,3,5)
###Output
27
2
###Markdown
The **range( )** function outputs the integers of the specified range. It can also be used to generate a series by specifying the step between numbers within a particular range. The elements are returned in a list (we will discuss lists in detail later).
###Code
print range(3)
print range(2,9)
print range(2,27,8)
###Output
[0, 1, 2]
[2, 3, 4, 5, 6, 7, 8]
[2, 10, 18, 26]
###Markdown
Accepting User Inputs **raw_input( )** accepts input and stores it as a string. Hence, if the user inputs an integer, the code should convert the string to an integer and then proceed.
###Code
abc = raw_input("Type something here and it will be stored in variable abc \t")
abc
type(abc)
###Output
_____no_output_____
###Markdown
**input( )** is used for accepting numeric inputs directly; in Python 2 it evaluates whatever is typed, so an integer entry is stored as an int without any conversion.
###Code
abc1 = input("Only integer can be stored in variable abc \t")
type(abc1)
###Output
_____no_output_____
###Markdown
Note that **type( )** returns the type of a variable or a value. Conversion from hexadecimal to decimal is done by adding the prefix **0x** to the hexadecimal value, and the reverse with the built-in **hex( )**; octal to decimal is done by adding the prefix **0** to the octal value, and the reverse with the built-in **oct( )**. Bitwise Operators | Symbol | Task Performed ||----|---|| & | Bitwise AND || \| | Bitwise OR || ^ | XOR || ~ | Negate (bitwise NOT) || >> | Right shift || << | Left shift |
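For example (a small sketch of these conversions in Python 2):
###Code
print 0x1f        # hex literal -> 31
print 017         # octal literal (leading 0) -> 15
print hex(31)     # '0x1f'
print oct(15)     # '017'
###Output
_____no_output_____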
###Code
a = 2 #10
b = 3 #11
bin(2),bin(3)
print a ^ b
print bin(a^b)
5 >> 1
###Output
_____no_output_____
###Markdown
0000 0101 -> 5. Shifting the bits one place to the right with zero padding gives 0000 0010 -> 2.
###Code
5 << 1
###Output
_____no_output_____
###Markdown
Six roulette wheel spins
###Code
from random import *
from statistics import *
from collections import *
population = ['red'] * 18 + ['black'] * 18 + ['green'] * 2
choice(population)
[choice(population) for i in range(6)]
Counter([choice(population) for i in range(6)])
Counter(choices(population, k = 6))
Counter(choices(['red', 'black', 'green'], [18, 18, 2], k = 6))
###Output
_____no_output_____
###Markdown
Playing cards
###Code
deck = Counter(tens = 16, low = 36)
deck = list(deck.elements())
deal = sample(deck, 52)
remainder = deal[20:]
Counter(remainder)
###Output
_____no_output_____
###Markdown
5 or more heads from 7 spins of a biased coin
###Code
# empirical result
trial = lambda : choices(['heads', 'tails'], cum_weights=[0.60, 1.00], k = 7).count('heads') >= 5
n = 100000
sum(trial() for i in range(n)) / n
# Compare to the analytic approach
# theoretical result
from math import factorial as fact
def comb(n, r):
return fact(n) // fact(r) // fact(n - r)
comb(10, 3)
ph = 0.6
# 5 heads out of 7 spins
ph ** 5 * (1 - ph) ** 2 * comb(7, 5) + \
ph ** 6 * (1 - ph) ** 1 * comb(7, 6) + \
ph ** 7 * (1 - ph) ** 0 * comb(7, 7)
###Output
_____no_output_____
###Markdown
Probability that the median of 5 samples falls within the middle two quartiles
###Code
trial = lambda : n // 4 <= median(sample(range(n), 5)) <= 3 * n // 4
sum(trial() for i in range(n)) / n
###Output
_____no_output_____
###Markdown
Confidence intervals
###Code
timings = [7.8, 8.9, 9.1, 6.9, 10.1, 15.6, 12.6, 9.1, 8.6, 6.8, 7.9, 8.1, 9.6]
def bootstrap(data):
return choices(data, k=len(data))
n = 10000
means = sorted(mean(bootstrap(timings)) for i in range(n))
print(f'The observed mean of {mean(timings)}')
print(f'Falls in 90% confidence interval from {means[500] : .1f} to {means[-500] : .1f}')
###Output
The observed mean of 9.315384615384616
Falls in 90% confidence interval from 8.4 to 10.4
###Markdown
Statistical difference
###Code
drug = [7.8, 8.9, 9.1, 6.9, 10.1, 15.6, 12.6, 9.1, 8.6, 6.8]
placedo = [7.8, 8.1, 9.1, 6.9, 3.2, 10.6, 10.6, 8.1, 8.6, 6.8]
obs_diff = mean(drug) - mean(placedo)
print(obs_diff)
###Output
1.5700000000000012
###Markdown
Null hypothesis assumes 2 groups are equivalent
###Code
n = len(drug)
comb = drug + placedo
newdiffs = []
def trail():
shuffle(comb)
drug = comb[:n]
placedo = comb[n:]
new_diff = mean(drug) - mean(placedo)
return new_diff >= obs_diff
count = 100000
sum(trail() for i in range(count)) / count #p-value. If p-value is <= 0.05, then it is statistical different.
###Output
_____no_output_____
###Markdown
Toss coins
###Code
# Toss a coin 30 times and see 22 heads. Is it a fair coin?
# Assume the Skeptic is correct: even a fair coin could show 22 heads in 30 tosses. It might be just chance.
# Test the Null Hypothesis: what's the probability of a fair coin showing at least 22 heads simply by chance?
# The code below runs the simulation.
m = 0
n = 10000
for i in range(n):
if sum(randint(0, 1) for j in range(30)) >= 22:
m += 1
pvalue = m / n
print(pvalue)
# pvalue is around 0.008, reject fair coin hypothesis at p < 0.05. So it is not a fair coin. The coin is biased.
###Output
0.0081
###Markdown
- In Lesson 1, we played with Strings. A String is a *Datatype*- In this lesson, we will talk about another *Datatype* called _Boolean_- A *DataType* says what kind of values a *variable* may have- *Boolean* variables may only be _True_ or _False_- *False* is another way of saying not true- Try running the following statement to check
###Code
not True
###Output
_____no_output_____
###Markdown
- Also not False is True- Try running this to check
###Code
not False
###Output
_____no_output_____
###Markdown
So we can see that the `not` operator changes a Boolean value to its opposite - Here are some examples of boolean variables
###Code
big = True
fast = False
###Output
_____no_output_____
###Markdown
- *Boolean* variables are very handy in programming because they let the program do different things each time we run them- We use the _if_ statement to make a program do different things- Here is an example
###Code
if (big):
print("It is big")
###Output
_____no_output_____
###Markdown
- Execute the above statement and you will see `It is big` printed.- This is because the value of *big* is *True*- For an *if* statement, you put a boolean between the brackets- Then you put a `:` character - called a colon- And then you *indent* the statement you want to run if the boolean is True. Indenting is when we push the statement to the right.- Now try this
###Code
if (fast):
print("It is fast")
###Output
_____no_output_____
###Markdown
- This time nothing was printed. That is because *fast* is *False*.- So the *if* statement lets the program decide whether to run another statement or not- We can have more than one statement if we like
###Code
if (big):
print("It is big")
print("I like big things")
###Output
_____no_output_____
###Markdown
- Both print statements ran this time- This is because both of the prints were indented after the line with the *if*- What happens if we don't indent the statement?- Try this
###Code
if (fast):
print("It is fast")
print("I like fast")
###Output
_____no_output_____
###Markdown
- So `I like fast` was printed even though *fast* was *False*- Because we didn't indent it, it became a new statement. - Our program therefore had 2 statements. Up to now, we have only had one statement at a time. Real programs have lots of statements. - We can use the `not` operator if we want to check the opposite of what a variable means- Try the following two examples
###Code
if not fast:
print("It is slow")
if not big:
print("It is small")
###Output
_____no_output_____
###Markdown
- What if we want to print something regardless if it is fast or not.- We can do this by adding an *else* to the *if* statement
###Code
if (fast):
print("It is fast")
else:
print("It is slow")
###Output
_____no_output_____
###Markdown
- This time `It is slow` was printed because *fast* was *false* and therefore the print after *else* were run instead of the one after *if* - Let do a small game- The game is to guess the right name like in the Story- First we tell the computer what the right name is by creating a variable
###Code
name = "Rumpelstilsken"
###Output
_____no_output_____
###Markdown
- Next we tell the computer to ask the user to guess the name- Execute the following statement and enter a guess
###Code
guess = input("Guess my name: ")
###Output
_____no_output_____
###Markdown
- In order to check if the guess was right, we need to compare it with the right answer - We can use the `==` operator to check for us. This compares two Strings to see if they are the same. - (this is different to `=` which assigns a value to a variable)
###Code
guess == name
###Output
_____no_output_____
###Markdown
- So when we execute the above statement `False` is output- `False` is a boolean, so we can use it for an *if* statement. Cool, let's try
###Code
if (guess == name):
print("Ahh, who told you my name!")
else:
print("No, that's not my name")
###Output
_____no_output_____
###Markdown
- You can play the game a few times by running the *input* statement and then the *if* statement - So the `==` operator can work with two Strings and you get a Boolean- *String* and *Boolean* are examples of different _DataTypes_- Later we will see some other *DataTypes* that `==` can be used with but you will always get a Boolean out. - Therefore `==` is very useful to use with `if` statements.- We can also check if two Strings are _not_ the same by using `!=`. Exclamations `!` are used in python to mean *not* so `!=` means not equal and `==` means equal- Here is another way of playing the game
###Code
if (guess != name):
print("You will never guess my name")
else:
print("Noooooooooo")
###Output
_____no_output_____
###Markdown
- So we use `!=` to check two Strings are not the same - So far, we have played a round of the game by executing the two statements separately- Next we will try to create a program that can keep playing the game until the end- First we need to learn a new statement, `while`. Like `if`, it takes a boolean and executes the indented statements. However, when it has run the indented statements, it checks the boolean again and repeats until the boolean becomes False
###Code
guess = ""
while (guess != name):
guess = input("You will never guess my name. ")
print("Ahh, who told you my name")
###Output
_____no_output_____
###Markdown
Variables
###Code
# assign a number
var = 1
print(var)
# assign a string
var = 'hello'
print(var)
###Output
_____no_output_____
###Markdown
Table of Contents
###Code
%%javascript
$.getScript('https://kmahelona.github.io/ipython_notebook_goodies/ipython_notebook_toc.js')
from IPython.display import Image
###Output
_____no_output_____
###Markdown
Problem Solving Extract digits from a number- get familiar with the basic arithmetic operators %, //, - (hands on)- basic problem-solving skills- find different ways to solve the same problem
###Code
# How to get last digit of a number ?
# for example 123, need print last digit 3
n = 123
n // 10
n - (n//10)*10
print(n % 10)
# comment str(...) convert a int to str
s = str(123)
s
#012
'123'
print(s[2])
print(s[1])
# how to get the middle digit of 123 ?
# how to get the 1st digt of 123 ?
# how to get the first two digits?
n // 10
# how to get the last two digits ?
n
n % 100
n - n // 100 * 100
s[1:]
# how to swap the two digits, for example 45 ? i Need 54
a = 45
x = a // 10
y = a % 10
print(x, y)
10 * y + x
n = int(input())
n // 10 + (n % 10) * 10
# how to print 2nd digit after decimal point of a number ?
# 123.456, print 5
int(123.456) # convert a float type to int
int(123.456 * 100) # * 100, then convert float to int
int(123.456 * 100) % 10
# extract 6 from 123.45678
x = 123.45678
int(x * 1000) % 10
###Output
_____no_output_____
###Markdown
Even or Odd numbers
###Code
# input a number, print even if is even, otherwise print odd
n = 4
if n % 2 == 0:
print("even")
else:
print("odd")
###Output
even
###Markdown
Factor & Prime testing
###Code
21 % 7 == 0
91 % 13 == 0 # 91 = 13 * 7
# 7 = 1* 7 prime number only has factors of 1 or itself.
# 6 = 1 * 2 * 3 not a prime number
n = int(input("please input a number >= 2:"))
# initial condition
is_prime = True
# loop i from 2 to n-1,
# each time, i take 2, 3, 4, 5 .... n-1
for i in range(2, n):
print("i=", i)
if n % i == 0:
print(f"{n} is not a prime, it is divisible by {i}")
is_prime = False
break
if is_prime:
print(f"{n} is a prime")
###Output
please input a number >= 2:21
i= 2
i= 3
21 is not a prime, it is divisible by 3
###Markdown
str - preview- str concatenation + - print(arg1, arg2, ... )- print(f".....{var}") if you don't understand, fine.
###Code
name = input("please enter your name: ")
# f string supported since Python 3.6
print(f"Welcome {name}, do you like Python?")
print("welcome", name, "do you like python?")
# str can be concatenated
print("welcome " + name + " do you like python?")
# this is supported earlier than 3.5
print("Welcome {}, do you like python?".format(name))
age = int(input("please enter your age"))
# print( arg1, arg2, ...) arg can be of any type
print("You start learning Python at", age, "?!", "what a genius!")
###Output
please enter your age5
You start learning Python at 5 ?! what a genius!
###Markdown
Math Fun
###Code
93 - 39
54 - 45
43 - 34
76 - 67
22 - 22
50 - 5
54 - 45
75 - 57
81- 18
63- 36
72 - 27
54 - 45
n = int(input("please enter a two different digits number:"))
while n > 9:
# a is 10th, b 1s
a = n // 10
b = n % 10
print(f"abs({n} - {10*b+a}) = {abs(n - 10*b - a)}")
# swap n and substract from n, then take abs()
n = abs(n - 10*b - a)
print(n)
# for, while loop, totally fine.
###Output
_____no_output_____
###Markdown
Practical assignment for Lesson 2. Topic “Sets”. Task 2: Complete Task 1 in Python (three sets a, b, and c are given; perform all the studied kinds of binary operations on all combinations of the sets).
###Code
from math import lgamma
import numpy as np
a = set([1,2,3,4])
b = set([3,4,5,6])
c = set([])
###Output
_____no_output_____
###Markdown
Union
###Code
a.union(b)
b.union(a)
c.union(b)
c.union(a)
a.union(c)
b.union(c)
###Output
_____no_output_____
###Markdown
Intersection
###Code
a.intersection(b)
b.intersection(a)
c.intersection(b)
c.intersection(a)
a.intersection(c)
b.intersection(c)
###Output
_____no_output_____
###Markdown
Difference
###Code
a.difference(b)
b.difference(a)
c.difference(b)
c.difference(a)
a.difference(c)
b.difference(c)
###Output
_____no_output_____
###Markdown
Symmetric Difference
###Code
a.symmetric_difference(b)
b.symmetric_difference(a)
c.symmetric_difference(b)
c.symmetric_difference(a)
a.symmetric_difference(c)
b.symmetric_difference(c)
###Output
_____no_output_____
###Markdown
Topic 3 “Sequences”. Task 3*: In Python, propose an algorithm that numerically computes the limit with precision ε = 10⁻⁷. We compute it via factorials. Unfortunately, with a large number of iterations a stack overflow occurs, so the original precision requirement cannot be reached.
###Code
import math
def f(n):
return n / math.factorial(n)**(1/n)
i = 1
while abs(f(i + 1) - f(i)) > 0.001:
i += 1
print(f'i = {i}, a = {f(i)}')
###Output
i = 83, a = 2.617701998673183
###Markdown
Task 4*: Propose an optimization of the algorithm obtained in Task 3 that speeds up its convergence. We express the next term of the original formula through the previous one recursively. The accuracy improved, but unfortunately there is a limit on the recursion depth.
###Code
def f(n):
k = n / (n + 1)
return 1 if n == 1 else (f(n - 1) / k)**(k)
i = 1
while abs(f(i + 1) - f(i)) > 0.00001:
i += 1
print(f'i = {i}, a = {f(i)}')
###Output
i = 1117, a = 2.705857251767045
###Markdown
We convert the recursion into an iterative loop by keeping the previous term from the previous iteration. The required precision is reached fairly quickly.
###Code
def f(n, fn):
k = n / (n + 1)
return 1 if n == 1 else (fn / k)**(k)
i = 1
fn = f(i, 1)
i += 1
fn1 = f(i, fn)
while abs(fn1 - fn) > 0.0000001:
i += 1
fn = fn1
fn1 = f(i, fn)
print(f'i = {i}, a = {fn}')
print(f'e: {np.e}')
print(f'fn: {fn}')
print(f'fn1: {fn1}')
print(f'Δ: {fn1-fn}')
###Output
i = 12588, a = 2.7169147517726997
e: 2.718281828459045
fn: 2.7169147517726997
fn1: 2.7169148517664836
Δ: 9.999378391967184e-08
###Markdown
Let's do the same using lgamma.
###Code
def f(n):
return n/np.e**(lgamma(n)/n)
i = 1
fn = f(i)
i += 1
fn1 = f(i)
while abs(fn1 - fn) > 0.0000001:
i += 1
fn = fn1
fn1 = f(i)
print(f'i = {i}, a = {fn}')
print(f'e: {np.e}')
print(f'fn: {fn}')
print(f'fn1: {fn1}')
print(f'Δ: {fn1-fn}')
###Output
i = 9252, a = 2.719353748566365
e: 2.718281828459045
fn: 2.719353748566365
fn1: 2.7193536485706393
Δ: -9.99957254776973e-08
###Markdown
Lesson 2: `if / else` and Functions---Sarah Middleton (http://sarahmid.github.io/)This tutorial series is intended as a basic introduction to Python for complete beginners, with a special focus on genomics applications. The series was originally designed for use in GCB535 at Penn, and thus the material has been highly condensed to fit into just four class periods. The full set of notebooks and exercises can be found at http://github.com/sarahmid/python-tutorialsFor a slightly more in-depth (but non-interactive) introduction to Python, see my Programming Bootcamp materials here: http://github.com/sarahmid/programming-bootcampNote that if you are viewing this notebook online from the github/nbviewer links, you will not be able to use the interactive features of the notebook. You must download the notebook files and run them locally with Jupyter/IPython (http://jupyter.org/). --- Table of Contents1. Conditionals I: The "`if / else`" statement2. Built-in functions3. Modules4. Test your understanding: practice set 2 1. Conditionals I: The "`if / else`" statement---Programming is a lot like giving someone instructions or directions. For example, if I wanted to give you directions to my house, I might say...> Turn right onto Main Street> Turn left onto Maple Ave> **If** there is construction, continue straight on Maple Ave, turn right on Cat Lane, and left on Fake Street; **else**, cut through the empty lot to Fake Street> Go straight on Fake Street until house 123The same directions, but in code:
###Code
construction = False
print "Turn right onto Main Street"
print "Turn left onto Maple Ave"
if construction:
print "Continue straight on Maple Ave"
print "Turn right onto Cat Lane"
print "Turn left onto Fake Street"
else:
print "Cut through the empty lot to Fake Street"
print "Go straight on Fake Street until house 123"
###Output
Turn right onto Main Street
Turn left onto Maple Ave
Cut through the empty lot to Fake Street
Go straight on Fake Street until house 123
###Markdown
This is called an "`if / else`" statement. It basically allows you to create a "fork" in the flow of your program based on a condition that you define. If the condition is `True`, the "`if`"-block of code is executed. If the condition is `False`, the `else`-block is executed. Here, our condition is simply the value of the variable `construction`. Since we defined this variable to quite literally hold the value `False` (this is a special data type called a Boolean, more on that in a minute), this means that we skip over the `if`-block and only execute the `else`-block. If instead we had set `construction` to `True`, we would have executed only the `if`-block.Let's define Booleans and `if / else` statements more formally now. --- [ Definition ] Booleans - A Boolean ("bool") is a type of variable, like a string, int, or float. - However, a Boolean is much more restricted than these other data types because it is only allowed to take two values: `True` or `False`. - In Python, `True` and `False` are always capitalized and never in quotes. - Don't think of `True` and `False` as words! You can't treat them like you would strings. To Python, they're actually interpreted as the numbers 1 and 0, respectively. - Booleans are most often used to create the "conditional statements" used in if / else statements and loops. --- [ Definition ] The `if / else` statement**Purpose:** creates a fork in the flow of the program based on whether a conditional statement is `True` or `False`. **Syntax:** if (conditional statement): this code is executed else: this code is executed**Notes:** - Based on the Boolean (`True` / `False`) value of a conditional statement, either executes the `if`-block or the `else`-block - The "blocks" are indicated by indentation. - The `else`-block is optional. - Colons are required after the `if` condition and after the `else`. - All code that is part of the `if` or `else` blocks must be indented. **Example:**
###Code
x = 5
if (x > 0):
print "x is positive"
else:
print "x is negative"
###Output
x is positive
###Markdown
---So what types of conditionals are we allowed to use in an `if / else` statement? Anything that can be evaluated as `True` or `False`! For example, in natural language we might ask the following true/false questions:> is `a` True?> is `a` less than `b`?> is `a` equal to `b`?> is `a` equal to "ATGCTG"?> is (`a` greater than `b`) and (`b` greater than `c`)?To ask these questions in our code, we need to use a special set of symbols/words. These are called the **logical operators**, because they allow us to form logical (true/false) statements. Below is a chart that lists the most common logical operators:Most of these are pretty intuitive. The big one people tend to mess up on in the beginning is `==`. Just remember: a single equals sign means *assignment*, and a double equals means *is the same as/is equal to*. You will NEVER use a single equals sign in a conditional statement because assignment is not allowed in a conditional! Only `True` / `False` questions are allowed! `if / else` statements in actionBelow are several examples of code using `if / else` statements. For each code block, first try to guess what the output will be, and then run the block to see the answer.
###Code
a = True
if a:
print "Hooray, a was true!"
a = True
if a:
print "Hooray, a was true!"
print "Goodbye now!"
a = False
if a:
print "Hooray, a was true!"
print "Goodbye now!"
###Output
Goodbye now!
###Markdown
> Since the line `print "Goodbye now!"` is not indented, it is NOT considered part of the `if`-statement. Therefore, it is always printed regardless of whether the `if`-statement was `True` or `False`.
###Code
a = True
b = False
if a and b:
print "Apple"
else:
print "Banana"
###Output
Banana
###Markdown
> Since `a` and `b` are not both `True`, the conditional statement "`a and b`" as a whole is `False`. Therefore, we execute the `else`-block.
###Code
a = True
b = False
if a and not b:
print "Apple"
else:
print "Banana"
###Output
Apple
###Markdown
> By using "`not`" before `b`, we negate its current value (`False`), making `b` `True`. Thus the entire conditional as a whole becomes `True`, and we execute the `if`-block.
###Code
a = True
b = False
if not a and b:
print "Apple"
else:
print "Banana"
###Output
Banana
###Markdown
>"`not`" only applies to the variable directly in front of it (in this case, `a`). So here, `a` becomes `False`, so the conditional as a whole becomes `False`.
###Code
a = True
b = False
if not (a and b):
print "Apple"
else:
print "Banana"
###Output
Apple
###Markdown
> When we use parentheses in a conditional, whatever is within the parentheses is evaluated first. So here, the evaluation proceeds like this: > First Python decides how to evaluate `(a and b)`. As we saw above, this must be `False` because `a` and `b` are not both `True`. > Then Python applies the "`not`", which flips that `False` into a `True`. So then the final answer is `True`!
###Code
a = True
b = False
if a or b:
print "Apple"
else:
print "Banana"
###Output
Apple
###Markdown
> As you would probably expect, when we use "`or`", we only need `a` *or* `b` to be `True` in order for the whole conditional to be `True`.
###Code
cat = "Mittens"
if cat == "Mittens":
print "Awwww"
else:
print "Get lost, cat"
a = 5
b = 10
if (a == 5) and (b > 0):
print "Apple"
else:
print "Banana"
a = 5
b = 10
if ((a == 1) and (b > 0)) or (b == (2 * a)):
print "Apple"
else:
print "Banana"
###Output
Apple
###Markdown
>Ok, this one is a little bit much! Try to avoid complex conditionals like this if possible, since it can be difficult to tell if they're actually testing what you think they're testing. If you do need to use a complex conditional, use parentheses to make it more obvious which terms will be evaluated first! Note on indentation - Indentation is very important in Python; it’s how Python tells what code belongs to which control statements - Consecutive lines of code with the same indenting are sometimes called "blocks" - Indenting should only be done in specific circumstances (if statements are one example, and we'll see a few more soon). Indent anywhere else and you'll get an error. - You can indent by however much you want, but you must be consistent. Pick one indentation scheme (e.g. 1 tab per indent level, or 4 spaces) and stick to it. [ Check yourself! ] `if/else` practiceThink you got it? In the code block below, write an `if/else` statement to print a different message depending on whether `x` is positive or negative.
###Code
x = 6 * -5 - 4 * 2 + -7 * -8 + 3
# ******add your code here!*********
###Output
_____no_output_____
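###Markdown
One possible solution (a sketch; any equivalent `if / else` works, and `x` is the variable defined in the block above):
###Code
if x > 0:
    print "x is positive"
else:
    print "x is negative"
###Output
_____no_output_____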
###Markdown
2. Built-in functions---Python provides some useful built-in functions that perform specific tasks. What makes them "built-in"? Simply that you don’t have to "import" anything in order to use them -- they're always available. This is in contrast to the *non*-built-in functions, which are packaged into modules of similar functions (e.g. "math") that you must import before using. More on this in a minute! We've already seen some examples of built-in functions, such as `print`, `int()`, `float()`, and `str()`. Now we'll look at a few more that are particularly useful: `raw_input()`, `len()`, `abs()`, and `round()`.
###Code
name = raw_input("Your name: ")
print "Hi there", name, "!"
age = int(raw_input("Your age: ")) #convert input to an int
print "Wow, I can't believe you're only", age
###Output
Your age: 5
Wow, I can't believe you're only 5
###Markdown
--- [ Definition ] `len()`**Description:** Returns the length of a string (also works on certain data structures). Doesn’t work on numerical types.**Syntax:** len(string)**Examples:**
###Code
print len("cat")
print len("hi there")
seqLength = len("ATGGTCGCAT")
print seqLength
###Output
10
###Markdown
--- [ Definition ] `abs()`**Description:** Returns the absolute value of a numerical value. Doesn't accept strings.**Syntax:** abs(number)**Examples:**
###Code
print abs(-10)
print abs(int("-10"))
positiveNum = abs(-23423)
print positiveNum
###Output
23423
###Markdown
--- [ Definition ] `round()`**Description:** Rounds a float to the indicated number of decimal places. If no number of decimal places is indicated, rounds to zero decimal places.**Synatx:** round(someNumber, numDecimalPlaces)**Examples:**
###Code
print round(10.12345)
print round(10.12345, 2)
print round(10.9999, 2)
###Output
11.0
###Markdown
---If you want to learn more built-in functions, go here: https://docs.python.org/2/library/functions.html 3. Modules---Modules are groups of additional functions that come with Python, but unlike the built-in functions we just saw, these functions aren't accessible until you **import** them. Why aren’t all functions just built-in? Basically, it improves speed and memory usage to only import what is needed (there are some other considerations, too, but we won't get into it here). The functions in a module are usually all related to a certain kind of task or subject area. For example, there are modules for doing advanced math, generating random numbers, running code in parallel, accessing your computer's file system, and so on. We’ll go over just two modules today: `math` and `random`. See the full list here: https://docs.python.org/2.7/py-modindex.html How to use a module: Using a module is very simple. First you import the module. Add this to the top of your script: `import moduleName` Then, to use a function of the module, you prefix the function name with the name of the module (using a period between them): `moduleName.functionName` (Replace `moduleName` with the name of the module you want, and `functionName` with the name of a function in the module.) The `moduleName.functionName` syntax is needed so that Python knows where the function comes from. Sometimes, especially when using user-created modules, there can be a function with the same name as a function that's already part of Python. Using this syntax prevents functions from overwriting each other or causing ambiguity.
###Code
import math
print math.sqrt(4)
print math.log10(1000)
print math.sin(1)
print math.cos(0)
###Output
2.0
3.0
0.841470984808
1.0
###Markdown
--- [ Definition ] The `random` module**Description:** contains functions for generating random numbers.See full list of functions here: https://docs.python.org/2/library/random.html**Examples:**
###Code
import random
print random.random() # Return a random floating point number in the range [0.0, 1.0)
print random.randint(0, 10) # Return a random integer between the specified range (inclusive)
print random.gauss(5, 2) # Draw from the normal distribution given a mean and standard deviation
# this code will output something different every time you run it!
###Output
0.694106858352
8
5.59568094264
###Markdown
4. Test your understanding: practice set 2---For the following blocks of code, **first try to guess what the output will be**, and then run the code yourself. These examples may introduce some ideas and common pitfalls that were not explicitly covered in the text above, ***so be sure to complete this section***.The first block below holds the variables that will be used in the problems. Since variables are shared across blocks in Jupyter notebooks, you just need to run this block once and then those variables can be used in any other code block.
###Code
# RUN THIS BLOCK FIRST TO SET UP VARIABLES!
a = True
b = False
x = 2
y = -2
cat = "Mittens"
print a
print (not a)
print (a == b)
print (a != b)
print (x == y)
print (x > y)
print (x = 2)
print (a and b)
print (a and not b)
print (a or b)
print (not b or a)
print not (b or a)
print (not b) or a
print (not b and a)
print not (b and a)
print (not b) and a
print (x == abs(y))
print len(cat)
print cat + x
print cat + str(x)
print float(x)
print ("i" in cat)
print ("g" in cat)
print ("Mit" in cat)
if (x % 2) == 0:
print "x is even"
else:
print "x is odd"
if (x - 4*y) < 0:
print "Invalid!"
else:
print "Banana"
if "Mit" in cat:
print "Hey Mits!"
else:
print "Where's Mits?"
x = "C"
if x == "A" or "B":
print "yes"
else:
print "no"
x = "C"
if (x == "A") or (x == "B"):
print "yes"
else:
print "no"
###Output
no
###Markdown
Lesson 2: Multiclass Data Classification
###Code
!pip install --process-dependency-links pytorch-sconce==0.10.3
!pip install --no-cache-dir -I Pillow==5.0.0
# You may need to restart the notebook (Menubar: Runtime -> Restart runtime...)
from sconce.datasets.csv_image_folder import CsvImageFolder
from torch.utils import data
from torchvision import transforms
import numpy as np
import sconce
import torch
print(f"Run with pytorch-sconce version: {sconce.__version__}")
###Output
Run with pytorch-sconce version: 0.10.3
###Markdown
Get Kaggle Data (Setup)
###Code
from google.colab import files
uploaded = files.upload() # Choose your local kaggle.json file
# move the file into place and update its permissions
!mkdir ~/.kaggle
!cp kaggle.json ~/.kaggle
!chmod 600 ~/.kaggle/kaggle.json
###Output
_____no_output_____
###Markdown
Part 1 - Control Flow & Conditionals
Control Flow
Usually, code in Python runs from the top down.
###Code
print("this will print first")
print("this will print second")
###Output
_____no_output_____
###Markdown
But it doesn't always have to be like that. We've seen this already, with functions.
###Code
def myFunction():
print("this will print second")
print("this will print first")
myFunction()
###Output
_____no_output_____
###Markdown
Here, we're manipulating the flow of the code. There are other ways we can do this, alongside our functions.
What happens if we have some code that we want to run in a certain situation, and some different code that we want to run in another situation? For example - if it's raining we want to print one message, and if it's not, we want to print another. How would we convert this code to only print one of the statements, instead of both?
###Code
isRaining = True
print("Don't forget to bring an umbrella!")
print("It's not raining at the moment!")
###Output
_____no_output_____
###Markdown
Recall our discussion of Booleans. We said that sooner or later they were going to prove very useful. if-statements are a great example of that.
A conditional, or "if-statement" uses Boolean values to determine which path in the code to take.
###Code
isRaining = True
if isRaining == True:
print("Don't forget to bring an umbrella!")
else:
print("It's not raining at the moment!")
###Output
_____no_output_____
###Markdown
The `"if isRaining == True"` line here is actually doing:
`if True == True: `
... and since "`True == True`" is True, the whole line just ends up being
`"if True"`
So instead, we could replace it with just:
`if isRaining:`
... without the "`== True`" bit. This is equivalent to:
`if True:`
Bear in mind that it's not just Boolean True or False that we can use in conditionals. If we use other variable types in a Boolean context (i.e. as the condition in a conditional), they will evaluate to True or False. For example:
```
isRaining = 0 Hint: This one evaluates to False
isRaining = ""
isRaining = 1
isRaining = 3249
isRaining = "is this sentence truthy or falsey?!"
isRaining = (1 == 1)
isRaining = (3 > 2)
```
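One quick way to check how any of these values behaves in a Boolean context is to pass it to `bool()`; this is just a small illustrative sketch using the values listed above:

```python
# bool() shows how each value would behave as the condition of an if-statement
for value in [0, "", 1, 3249, "is this sentence truthy or falsey?!", (1 == 1), (3 > 2)]:
    print(repr(value), "->", bool(value))
```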
###Code
isRaining = 0 # Value here
if isRaining:
print("Don't forget to bring an umbrella!")
else:
print("It's not raining at the moment")
###Output
_____no_output_____
###Markdown
**Exercise 1**) Take the following code and swap the value of isRaining to the different values above. Which values evaluate to True, and which evaluate to False?
###Code
isRaining = 0
if isRaining == True:
print("Don't forget to bring an umbrella!")
else:
print("It's not raining at the moment")
###Output
_____no_output_____
###Markdown
More Conditionals, More Operators!
As well as the else, we also have elif which stands for "else if", which will run if the first "if" is not matched.
Rather than just acting like a catch-all like else, it evaluates a different condition.
You can add as many elif statements as you like to your conditional:
###Code
a = 3
if a == 1:
print("if a is 1, do this")
elif a == 2:
print("or if a is 2, do this")
elif a == 3:
print("and if a is 3, do this")
else:
print("if a is anything else, print this")
###Output
_____no_output_____
###Markdown
Further, we can apply some logical operators to Booleans.
"and" means that both the variable to the left and the right have to evaluate to True (remember these don't have to explicitly be Booleans)
"or" means that either the variable/value to the left or right has to be True, but not necessarily both.
###Code
a = True
b = False
if a and b:
print("A and b are both true!")
if a or b:
print("a or b is true but it doesn't matter which!")
if not (b == True):
print("b is not true")
###Output
_____no_output_____
###Markdown
Part 2 - Iteration
Think about what you would do, given your current knowledge of Python, if you were asked to print every integer from 1 to 10?
You could, of course, write out 10 print statements. But this is the perfect use case for a looping structure. Loops, such as for-loops, simply execute a series of statements for every item in a list. That list can just be a list of numbers, say from 0 to 10.
Here's an example:
###Code
for x in range(0, 10):
print(x)
###Output
_____no_output_____
###Markdown
Notice that the upper limit, 10, doesn't get printed, but the lower limit, 0, does. This is because the range() function is inclusive of the start number and exclusive of the end number.
###Code
for x in range(10):
print(x)
###Output
_____no_output_____
###Markdown
Note that the code above, with just one argument, will return the same thing, as Python will start iterating from 0 by default if no start argument is given.
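`range()` can also take an optional third argument, the step size; a small sketch:

```python
for x in range(0, 10, 2):   # start at 0, stop before 10, count in steps of 2
    print(x)                # prints 0, 2, 4, 6, 8
```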
You can also use for loops in many other different situations, for example:
###Code
for x in "hello":
print(x)
###Output
_____no_output_____
###Markdown
**Exercise 2**) Write a function that takes a number from the user, and prints every second-tetration (a number to the power of itself e.g. 10^10) from 1 up to and including the number. For example, for the number 10, your function should print:
* 1
* 4
* 27
* 256
* 3125
* 46656
* 823543
* 16777216
* 387420489
* 1000000000
###Code
# write your function here.
# hint: You'll need to use input(), then use a for-loop)
###Output
_____no_output_____
###Markdown
**Exercise 3**) We need to write a function that populates our map with some food.
Create a function `add_food(num_food, verbose)` that takes two arguments:
1. `num_food` count of food to add, and
2. a boolean called `verbose` which tells us whether we want to print details or not
Call the function `create_food()` the number of times specified by num_food, passing it a random value between 1-20 for x and y. Make sure you pass verbose to the `create_food()` function too.
###Code
import random
def add_food(num_food, verbose):
    # code here
    pass  # placeholder so this cell runs before you add your solution
def create_food(x, y, verbose):
if verbose:
print("Adding piece of food at (" + str(x) + "," + str(y) + ")")
add_food(10, True)
###Output
_____no_output_____
###Markdown
While-loops Have a think about the following scenario - we want to make a program that allows the user to guess a number, and if they get it correct it outputs "Correct!". If they get it wrong, they keep trying until they get it correct.
Given what you know about for-loops, have a think about what this code might look like.
You might come up with something like this...
###Code
import random
# generate a random number between 0 and 10 (inclusive)
randomNum = random.randint(0, 10)
print("Guess the number to win! ")
# loop 10 times, and each time ask the user for an input and then compare it to the random number
for i in range(0, 10):
guess = int(input())
# if they are equal, exit the loop. Otherwise, say "Incorrect!"
if guess == randomNum:
print("Correct!")
break
else:
print("Incorrect! Try again...")
###Output
_____no_output_____
###Markdown
However, what if we don't guess the number within 10 tries? We could make the loop iterate 10000000000 times instead, but there is still a chance that we may never guess the correct number.
This is what a `while` loop is for. A while-loop will keep looping until the condition is satisfied, instead of just a certain number of times like a for-loop.
###Code
import random
# generate a random number between 0 and 10 (inclusive)
randomNum = random.randint(0, 10)
print("Guess the number to win!")
# this will keep asking the user for a number, until it is equal to the random number
while randomNum != int(input()):
print("Incorrect, try again!")
print("Correct!")
###Output
_____no_output_____
###Markdown
**Exercise 4**) Create a menu that loops indefinitely. When the user inputs "0", print "Exiting program" and then exit.
* If the user inputs 1, call `function1()`.
* If the user inputs 2, call `function2`.
* If the user inputs 3, call `function3`.
If the user inputs anything other than 0, 1, 2 or 3, then do nothing.
Hint: You'll need to use a while-loop with an `if-else` statement. Use the code above for some inspiration.
###Code
def function1():
print("The user has input the number 1!")
def function2():
print("The user has input the number 2!")
def function3():
print("The user has input the number 3!")
# your code goes here...
###Output
_____no_output_____
###Markdown
Курс "Линейная алгебра" Урок 2. Матрицы и матричные операции Домашняя работа к уроку 2
###Code
import numpy as np
import math
###Output
_____no_output_____
###Markdown
Task 1. Determine which of the matrix products $AB$ and $BA$ are defined, and find the dimensions of the resulting matrices: a) $A$ is a $4\times 2$ matrix, $B$ is a $4\times 2$ matrix; b) $A$ is a $2\times 5$ matrix, $B$ is a $5\times 3$ matrix; c) $A$ is an $8\times 3$ matrix, $B$ is a $3\times 8$ matrix; d) $A$ is a square $4\times 4$ matrix, $B$ is a square $4\times 4$ matrix. Based on the definition that a __matrix__ of size $m\times n$ is a rectangular table consisting of $m$ rows and $n$ columns, we can conclude: a)
###Code
A = np.array([[1, 2], [3, 4], [5, 6], [7, 8]])
B = np.array([[1, 2], [3, 4], [5, 6], [7, 8]])
# np.dot(A, B)
# np.dot(B, A)
###Output
_____no_output_____
###Markdown
b)
###Code
A = np.array([[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]])
B = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12], [13, 14, 15]])
np.dot(A, B)
# np.dot(B, A)
###Output
_____no_output_____
###Markdown
c)
###Code
A = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12], [13, 14, 15], [16, 17, 18], [19, 20, 21], [22, 23, 24]])
B = np.array([[1, 2, 3, 4, 5, 6, 7, 8], [9, 10, 11, 12, 13, 14, 15, 16], [17, 18, 19, 20, 21, 22, 23, 24]])
np.dot(A, B)
np.dot(B, A)
###Output
_____no_output_____
###Markdown
d)
###Code
A = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]])
B = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]])
np.dot(A, B)
np.dot(B, A)
###Output
_____no_output_____
###Markdown
Task 2. Find the sum and the product of the matrices $A=\begin{pmatrix}1 & -2\\ 3 & 0\end{pmatrix}$ and $B=\begin{pmatrix}4 & -1\\ 0 & 5\end{pmatrix}.$
###Code
A = np.array([[1, -2], [3, 0]])
B = np.array([[4, -1], [0, 5]])
A + B
np.dot(A, B)
###Output
_____no_output_____
###Markdown
Task 3. From the rules of matrix addition and of multiplying a matrix by a number, one can conclude that matrices of the same size form a linear space. Compute the linear combination $3A-2B+4C$ for the matrices $A=\begin{pmatrix}1 & 7\\ 3 & -6\end{pmatrix}$, $B=\begin{pmatrix}0 & 5\\ 2 & -1\end{pmatrix}$, $C=\begin{pmatrix}2 & -4\\ 1 & 1\end{pmatrix}.$
###Code
A = np.array([[1, 7], [3, -6]])
B = np.array([[0, 5], [2, -1]])
C = np.array([[2, -4], [1, 1]])
3 * A - 2 * B + 4 * C
###Output
_____no_output_____
###Markdown
Task 4. Given the matrix $A=\begin{pmatrix}4 & 1\\ 5 & -2\\ 2 & 3\end{pmatrix}$, compute $AA^{T}$ and $A^{T}A$.
###Code
A = np.array([[4, 1], [5, -2], [2, 3]])
np.dot(A, A.T)
np.dot(A.T, A)
###Output
_____no_output_____
###Markdown
Task 5*. Write a Python function that multiplies two arbitrary matrices without using NumPy.
###Code
def dot(A, B):
if (len(A[0]) != len(B)):
        raise ValueError('The dimensions of the matrices do not satisfy the matrix multiplication rule.')
count_i = len(A)
count_j = len(A[0])
count_k = len(B[0])
result = [[0 for k in range(count_k)] for l in range(count_i)]
for i in range(count_i):
for k in range(count_k):
for j in range(count_j):
result[i][k] += A[i][j] * B[j][k]
return result
A = [[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]]
B = [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12], [13, 14, 15]]
dot(A, B)
# dot(B, A)
print(np.dot(np.array(A), np.array(B)))
###Output
[[135 150 165]
[310 350 390]]
###Markdown
Task 1. Compute the determinant: a)$$\begin{vmatrix}\sin x & -\cos x\\ \cos x & \sin x\end{vmatrix};$$ b) $$\begin{vmatrix}4 & 2 & 3\\ 0 & 5 & 1\\ 0 & 0 & 9\end{vmatrix};$$ c)$$\begin{vmatrix}1 & 2 & 3\\ 4 & 5 & 6\\ 7 & 8 & 9\end{vmatrix}.$$
###Code
x = math.pi / 2
A = np.array([[math.sin(x), -math.cos(x)], [math.cos(x), math.sin(x)]])
np.linalg.det(A)
A = np.array([[4, 2, 3], [0, 5, 1], [0, 0, 9]])
np.linalg.det(A)
A = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
np.linalg.det(A)
###Output
_____no_output_____
###Markdown
Task 2. The determinant of a matrix $A$ equals $4$. Find: a) $det(A^{2})$; b) $det(A^{T})$; c) $det(2A)$.
###Code
A = np.array([[2, 1], [2, 3]])
np.linalg.det(A)
np.linalg.det(np.dot(A, A))
np.linalg.det(A.T)
2 * A
np.linalg.det(2 * A)
4 * 6 - 4 * 2
###Output
_____no_output_____
###Markdown
According to property 2: __2.__ Multiplying a row or a column of a matrix by a number $\lambda$ multiplies the determinant of the matrix by the same number. - The proof of this property is elementary: by the determinant formula, the factor from that row appears in every term when the determinant is expanded along that row/column, which is equivalent to multiplying the determinant by that number. By this property alone the answer would have been $8$. But that is wrong, because when all elements of the matrix are multiplied, we factor out not just the number $\lambda$ but $\lambda^n$, where $n$ is the order of the matrix. The correct answer: $(2 \cdot 2)\cdot(2 \cdot 3) - (2 \cdot 1)\cdot(2 \cdot 2) = 2^2\cdot(2\cdot3-1\cdot2) = 4\cdot(6 - 2) = 4\cdot4 = 16$ Task 3. Prove that the matrix$$\begin{pmatrix}-2 & 7 & -3\\ 4 & -14 & 6\\ -3 & 7 & 13\end{pmatrix}$$ is singular.
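(Before moving on to Task 3, here is a quick numerical check of the $det(\lambda A)=\lambda^{n}\,det(A)$ identity discussed above; a small sketch using NumPy:)

```python
import numpy as np

A = np.array([[2, 1], [2, 3]])
lam = 2
print(np.linalg.det(lam * A))                # ~16.0
print(lam ** A.shape[0] * np.linalg.det(A))  # 2**2 * 4 = ~16.0
```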
###Code
A = np.array([[-2, 7, -3], [4, -14, 6], [-3, 7, 13]])
np.linalg.det(A)
###Output
_____no_output_____
###Markdown
Task 4. Find the rank of the matrix: a) $\begin{pmatrix}1 & 2 & 3\\ 1 & 1 & 1\\ 2 & 3 & 4\end{pmatrix};$ b) $\begin{pmatrix}0 & 0 & 2 & 1\\ 0 & 0 & 2 & 2\\ 0 & 0 & 4 & 3\\ 2 & 3 & 5 & 6\end{pmatrix}.$
###Code
A = np.array([[1, 2, 3], [1, 1, 1], [2, 3, 4]])
np.linalg.matrix_rank(A)
A = np.array([[0, 0, 2, 1], [0, 0, 2, 2], [0, 0, 4, 3], [2, 3, 5, 6]])
np.linalg.matrix_rank(A)
###Output
_____no_output_____
###Markdown
Lesson 2: Comparison Operators
###Code
1>2
1==1
1!=2
'string'=='string'
'bell'=='boy'
(1==2)and(2==2)
(1==2) or (2==2)
(1==1) and not (1==2)
###Output
_____no_output_____
###Markdown
Control Flow of Python If Statement
###Code
if True:
print('yes')
if False:
print('no')
if (1==5):
print('true')
elif (2!=2):
print ('yes')
else:
print('hi')
if (1==3):
print('true')
elif (2!=3):
print('yes')
###Output
yes
###Markdown
for loops
###Code
seq=[10,202,30,40,50]
for item in seq:
print('hi')
for num in seq:
print(num)
for num in seq:
print(num**2)
###Output
100
40804
900
1600
2500
###Markdown
While loops
###Code
i=1
while i<5:
    print('i is currently {}'.format(i))
i=i+1
###Output
i is currently 1
i is currently 2
i is currently 3
i is currently 4
###Markdown
Range Function
###Code
range(5)
for item in range(5):
print('item currently is {}'.format(item))
list(range(1,11))
###Output
_____no_output_____
###Markdown
List comprehension
###Code
x=[1,2,3,4]
out =[]
for num in x:
out.append(num**2)
out
# 1,4,9,16
###Output
_____no_output_____
###Markdown
The same code above can be written more concisely as a list comprehension
###Code
x=[10,20,30,40]
[num**2 for num in x]
#100,400,900,1600
###Output
_____no_output_____
###Markdown
Lesson 3: Functions 1. Functions 2. Lambda Expressions 3. Various useful methods
###Code
def my_func():
print('hello')
my_func()
###Output
hello
###Markdown
functions with parameter
###Code
def myfunc(param,param2='class'):
print(param,param2)
myfunc('this is my class ApDev')
###Output
this is my class ApDev class
###Markdown
functions with default parameter
###Code
def myfunc1(param=5):
"""
docstring goes here!
"""
print(param)
#return param
myfunc1() # here we are not passing any parameter to the function
#since it is already declared as default in function definition
def myfunc1(argument):
"""
docstring goes here!
"""
return (argument *5)
x=myfunc1(6)
x
#30
def times_two(var):
return var*2
result = times_two(4)
result
###Output
_____no_output_____
###Markdown
instead of the code mentioned above we can lambda function
###Code
lambda var: var*2
###Output
_____no_output_____
###Markdown
showing the usage of lambda in map function
###Code
seq=[1,2,3,4,5]
list(map(times_two,seq))
###Output
_____no_output_____
###Markdown
lambda with only one argument
###Code
#x=15
list(map(lambda num:num*2,seq))
###Output
_____no_output_____
###Markdown
lambda functions can accept zero or more arguments but only one expression
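A very common use of this is passing a lambda as an argument, for example as a sort key; a small sketch with made-up data:

```python
words = ['banana', 'fig', 'cherry']
print(sorted(words, key=lambda w: len(w)))  # ['fig', 'banana', 'cherry'] (shortest first)
```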
###Code
f=lambda x, y: x*y
f(5,2)
###Output
_____no_output_____
###Markdown
Methods String Upper and String Lower
###Code
st="hello i'm JEFF"
st.lower()
###Output
_____no_output_____
###Markdown
Split method
###Code
tweet="Go sports ! #cool"
#splits with white space. This is the default one
tweet.split()
###Output
_____no_output_____
###Markdown
Excellent Tutorials Series (ETS) Author: Thomas K Torku Topic: Overview Data Structures- `Lists`: ex. [1,2,3,4] A list is an object that contains data items. Lists are mutable - they can change during program execution. Items can be added to or removed from them. They are dynamic, one-dimensional data structures.- `Arrays`: They can have from 0 to n dimensions.- `Dictionary`: They store data as keys and values.- `Tuples`: They are not mutable.
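A minimal sketch contrasting these structures (the sample values are made up for illustration):

```python
my_list = [1, 2, 3, 4]                 # mutable: items can be changed, added, removed
my_list[0] = 99
my_list.append(5)

my_dict = {'name': 'Ada', 'age': 36}   # stores key-value pairs

my_tuple = (1, 2, 3)                   # looks like a list but cannot be changed

print(my_list, my_dict['name'], my_tuple[0])
```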
###Code
## Lists
myList =[1,2,3,4]
print(myList)
###Output
[1, 2, 3, 4]
###Markdown
Slicing
###Code
num =[5,10,15,20]
num[0:2]
num1 =[1]*10
num1
# num1
print(num[-1])
print(num[2])
print(num[-2])
###Output
20
15
15
###Markdown
Iterating over List
###Code
#Example 1
for i in num:
print(i)
#Example 2
i =0
while i<len(num):
print(num[i])
i+=1
# Iterate over slist=[3,4,5,10,9,3] using for loop or while loop
###Output
_____no_output_____
###Markdown
Using in or not operator in List
###Code
prod_num =['V475', 'F987', 'Q143', 'R688']
search =input('Enter product number')
if search in prod_num:
print('{} was found in list'.format(search))
else:
print('{} was not found in list'.format(search))
###Output
Enter product number V475
###Markdown
Lists are mutable
###Code
mynum =[9,4,57,90, 34, 56]
print(mynum)
#change the value of the first index
mynum[0] =20
print(mynum)
# mynum[6]  # this would raise an IndexError: valid indexes for the 6-item list run from 0 to 5
mynum1 =range(1,10,1)
# mynum1 =range(10)
#how to access the value in the list
for i in mynum1:
print(i)
# mynum1
##change the values of the list
nn =[0]*6 #zero value of the list
#Now fill with new values
i =0
while i <len(nn):
nn[i]=9
#update the iteration
i+=1
print(nn)
for i in range(len(nn)):
nn[i]=2
print(nn)
##Design a program that accepts the sales values from the user on each day
no =5
# sales =[]
sales =[0]*5 #list with zeros
i=0 #base index
while i <no:
sales[i] =float(input('Day #'+ str(i+1)+ ''))
#update the iterate
i+=1
print('Here are the values you entered:')
for i in sales:
print('{:.2f}'.format(i))
sales1 =[0]*5
for i in range(5):
sales1[i]=float(input('Day #'+ str(i+1)+ ''))
print('Here are the values you have entered:')
for i in sales1:
print('%.2f'%(i))
###Output
Here are the values you have entered:
89.76
76.45
90.56
34.67
75.67
###Markdown
List Methods
###Code
# s =[]#empty list
# s =list() #empty list
s =[]
for i in range(5):
s.append(sales1[i])
s
# s.insert(1,67.89)
# del s[1:5]
s.remove(75.67)
s
###Output
_____no_output_____
###Markdown
- `append()`: adds an item to the end of the list- `index()`: returns the index of the first element equal to the item- `insert()`: inserts an item at the specified index- `sort()`: sorts the list in ascending order- `remove()`: removes the first occurrence of the item from the list- `reverse()`: reverses the order.
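A compact sketch running through these methods in one place (the cells below then walk through them individually):

```python
nums = [3, 1, 4, 1, 5]
nums.append(9)        # [3, 1, 4, 1, 5, 9]
print(nums.index(4))  # 2 -> position of the first 4
nums.insert(0, 7)     # [7, 3, 1, 4, 1, 5, 9]
nums.sort()           # [1, 1, 3, 4, 5, 7, 9]
nums.remove(1)        # removes only the first 1
nums.reverse()        # [9, 7, 5, 4, 3, 1]
print(nums)
```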
###Code
#example on append
nList =[] #empty list
# nlist =list()
old =[1]*4
i =0
for i in range(len(old)):
old[i] =90
#append that to a new list
nList.append(old[i])
print(old)
print(nList)
old
nList
##The index method
food =['pizza', 'burgers', 'chips', 'bread']
#Which item should I change
item =input('Which item should I change?')
try:
#get the item's index in the list
item_i =food.index(item)
new_i =input('Enter the new value:')
#replace the old item with new item
food[item_i]=new_i
#here is the revised list
print('Here is the revised list:')
print(food)
except ValueError:
print('Item not found in the list')
##insert method
nam1 =['James', 'Kafui', 'Thomas']
print(nam1)
nam1.insert(2, 'Priscy')
print(nam1)
##sort method
mnlist =[1,6,3,9,4,7]
print('Original list: ',mnlist)
mnlist.sort()
print('sorted list: ',mnlist)
## The reverse method
mnlist.reverse()
print('Reverse order:', mnlist)
del mnlist[2]
print('Remove item:',mnlist )
print('Minimum value is:', min(mnlist))
print('Maximum value is:', max(mnlist))
##Concatenating two lists together
list1 =[0,2,3,1,4,5,6,7]
list2 =[1,5,9,3,5,0,7]
list3 =list1 +list2
print(list3)
from copy import deepcopy, copy
l1 =[]
#Any changes made do not affect the original copy
l1 =deepcopy(list2)
l1[0]=3
# list2
l1
###Output
_____no_output_____
###Markdown
List comprehension
###Code
x =[]
for i in range(6):
x.append(1)
x
x =[1]*6
#list comprehension
xx =[x[i] for i in x]
xx
###Output
_____no_output_____
###Markdown
Arrays
###Code
v1 =[1]*4
v2 =[3]*4
#create a 2 x 4 nested list (two rows of four elements each)
v =[v1, v2]
v
###Output
_____no_output_____
###Markdown
Dictionary
###Code
##key-value store. Dictionaries are also widely used.
dic ={'Name': 'Thomas Torku', 'Country': 'Ghana', 'Profession':'Instructor'}
dic.values()
dic.keys()
dic['Name']
dic['Country']
#Loop through dictionary
for i in dic.items():
print(i)
for i in dic.values():
print(i)
###Output
Thomas Torku
Ghana
Instructor
###Markdown
Tuples- Items are defined within parentheses- Limited in application
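A small illustration of that immutability (a sketch): trying to assign to an element raises a TypeError.

```python
point = (1, 2)
try:
    point[0] = 99          # tuples do not support item assignment
except TypeError as err:
    print('TypeError:', err)
```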
###Code
tt =(1,2,3,4)
type(tt)
tt[1]
###Output
_____no_output_____
###Markdown
Data Type Conversion
###Code
name = input('What is your name? ')
print(f'Hello {name}!')
birth_year = input('What is your birth year? ')
print(f'You were born in {birth_year}')
name = input('What is your name? ')
print(f'Hello {name}!')
birth_year = input('What is your birth year? ')
print(f'You were born in {birth_year}')
print(f'You are {2021 - int(birth_year)} years old') #type conversion line
#input takes a string format - read the documentation, so you need to convert it to int/ float
###Output
_____no_output_____
###Markdown
Project Multiply two numbers
###Code
num_1 = float(input('Enter a number to multiply: '))
num_2 = float(input('Enter another number to multiply: '))
result = num_1 * num_2
print(result)
###Output
Enter a number to multiply: 2
Enter another number to multiply: 3
6.0
###Markdown
Days since you were born
###Code
import time
epoch = time.time()
print(round(epoch))
year_now = 2021
born_year = int(input('Hey! What year were you born in? '))
age = year_now - born_year
days = age * 365
print(f'Hey, based on that, you must be {days} days old! You are wise!')
###Output
Hey! What year were you born in? 4
Hey, based on that, you must be 736205 days old! You are wise!
###Markdown
Tip Calculator
###Code
cost = float(input('What is your total cost? '))
tip_percentage = float(input('How much % do you want to tip? Enter 0-100'))
tip_amount = cost * tip_percentage/100
print(f'For {cost} pounds the tip at {tip_percentage} % is {round(tip_amount,3)} pounds. Total amount to give to restaurant is {cost + tip_amount} pounds')
###Output
_____no_output_____
###Markdown
1. Split a number into its digits
###Code
def splitter(a):
return([int(x) for x in str(a)])
some_num = 4056
print(splitter(some_num))
###Output
[4, 0, 5, 6]
###Markdown
2. How many even and odd digits are in a number
###Code
def odds_events_cnt(num):
odds = [x for x in str(num) if int(x)%2]
evens = [x for x in str(num) if not int(x)%2]
return(len(odds), len(evens))
some_num = 4156
print(odds_events_cnt(some_num))
###Output
(2, 2)
###Markdown
3. Reverse a list
###Code
def reverse(my_list):
list_len = len(my_list)
reversed_list = []
while list_len > 0:
list_len -= 1
reversed_list.append(my_list[list_len])
return reversed_list
my_list = [0, 1, 2, 3, 4, 7, 2]
print(reverse(my_list))
###Output
[2, 7, 4, 3, 2, 1, 0]
###Markdown
4. Left join where b.key is NULL — the elements of the first list that are not in the second
###Code
def left_join(list_1, list_2):
list_3 = set(list_1).difference(set(list_2))
return list_3
list_1 = [0, 1, 2, 3, 4, 7, 2]
list_2 = [1, 4, 7]
print(left_join(list_1, list_2))
###Output
{0, 2, 3}
###Markdown
5. Remove duplicates from a list
###Code
import numpy as np
def no_duplicates(list_dup):
return(np.unique(list_dup).tolist())
def no_duplicates(list_dup):
tuple_dup = set(list_dup)
return(list(tuple_dup))
my_list = [0, 1, 2, 2, 3, 3, 4, 5, 2]
print(no_duplicates(my_list))
###Output
[0, 1, 2, 3, 4, 5]
###Markdown
6. Count the number of non-unique elements in a list/tuple
###Code
def nonunique_cnt(my_list):
list_len = len(my_list)
nodup_len = len(no_duplicates(my_list))
return list_len - nodup_len
my_list = [0, 1, 2, 2, 3, 3, 4, 5, 2]
print(nonunique_cnt(my_list))
my_list = (0, 1, 2, 2, 3, 3, 4, 5, 2)
print(nonunique_cnt(my_list))
###Output
3
###Markdown
7. Remove from a list the elements that do not satisfy a condition
###Code
def list_filter(my_list):
return [x for x in my_list if x % 2 == 0]
my_list = [0, 1, 2, 2, 3, 3, 4, 5, 2]
print(list_filter(my_list))
###Output
[0, 2, 2, 4, 2]
###Markdown
8. Split a string into words and count the elements
###Code
def splitter(text):
text_list = text.split(' ')
text_dict = {}
for t in text_list:
if t in text_dict:
text_dict[t] += 1
else:
text_dict[t] = 1
return text_dict
foo = {'bar': 0}
text = 'привет привет привет меня зовут зовут Вася'
splitter(text)
###Output
_____no_output_____
###Markdown
9. Replace multiple consecutive spaces in a string with a single one
###Code
import re

def space_killer(text):
    # a minimal sketch: collapse every run of consecutive spaces into one space
    return re.sub(' +', ' ', text)

my_text = 'a  string   with    extra spaces'
print(space_killer(my_text))
###Output
a string with extra spaces
###Markdown
10. Given a list of strings, keep only those that contain a given substring 11. Given a list of coordinate pairs, print the ones that are specified incorrectly (latitude must be between -90.0 and 90.0, longitude between -180.0 and 180.0) 12. Find the incorrectly closed bracket in the expression () ((([]))}
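Exercises 10-12 are left without solutions in this notebook; below is a minimal sketch of one possible approach for each (the sample data is made up for illustration):

```python
# 10. keep only the strings that contain a given substring
strings = ['apple pie', 'banana', 'pineapple']
print([s for s in strings if 'apple' in s])      # ['apple pie', 'pineapple']

# 11. report coordinate pairs that are specified incorrectly
coords = [(45.0, 30.0), (95.0, 10.0), (-20.0, 200.0)]
bad = [(lat, lon) for lat, lon in coords
       if not (-90.0 <= lat <= 90.0) or not (-180.0 <= lon <= 180.0)]
print(bad)                                       # [(95.0, 10.0), (-20.0, 200.0)]

# 12. find the first incorrectly closed bracket using a stack
def first_bad_bracket(expr):
    pairs = {')': '(', ']': '[', '}': '{'}
    stack = []
    for i, ch in enumerate(expr):
        if ch in '([{':
            stack.append(ch)
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:
                return i, ch                     # position and offending character
    return None

print(first_bad_bracket('() ((([]))}'))          # (10, '}')
```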
###Code
expr = "() ((([]))}"  # the expression from exercise 12, stored as a string so this cell runs
###Output
_____no_output_____ |
docs/probability/docs/source_zh_cn/using_bnn.ipynb | ###Markdown
Implementing an Image Classification Application with a Bayesian Neural Network[](https://authoring-modelarts-cnnorth4.huaweicloud.com/console/lab?share-url-b64=aHR0cHM6Ly9taW5kc3BvcmUtd2Vic2l0ZS5vYnMuY24tbm9ydGgtNC5teWh1YXdlaWNsb3VkLmNvbS9ub3RlYm9vay9tYXN0ZXIvcHJvYmFiaWxpdHkvemhfY24vbWluZHNwb3JlX3VzaW5nX2Jubi5pcHluYg==&imageid=65f636a0-56cf-49df-b941-7d2a07ba8c8c) [](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/master/probability/zh_cn/mindspore_using_bnn.ipynb) [](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/master/probability/zh_cn/mindspore_using_bnn.py) [](https://gitee.com/mindspore/docs/blob/master/docs/probability/docs/source_zh_cn/using_bnn.ipynb) Deep learning models have strong fitting capacity, while Bayesian theory offers good interpretability. MindSpore Deep Probabilistic Programming (MindSpore Probability) combines deep learning with Bayesian learning: by treating network weights as distributions and introducing latent-space distributions, it can sample from those distributions during forward propagation, which introduces uncertainty and thereby strengthens the robustness and interpretability of the model. This chapter describes in detail how the Bayesian neural network of deep probabilistic programming is applied in MindSpore. Before starting the hands-on part, make sure you have correctly installed MindSpore 0.7.0-beta or later.> This example targets the GPU or Ascend 910 AI processor platforms. You can download the complete sample code here: .>> Bayesian neural networks currently only support graph mode; you need to set `context.set_context(mode=context.GRAPH_MODE)` in the code. Using a Bayesian Neural Network A Bayesian neural network is a basic model composed of a probabilistic model and a neural network, and its weights are no longer fixed values but distributions. This example shows how to implement a Bayesian neural network with the `bnn_layers` module of MDP and use it for a simple image classification task. The overall workflow is as follows:1. Process the MNIST dataset;2. Define the Bayesian LeNet network;3. Define the loss function and optimizer;4. Load the dataset and train. Environment Preparation Set the training mode to graph mode and the computing platform to GPU.
###Code
from mindspore import context
context.set_context(mode=context.GRAPH_MODE, save_graphs=False, device_target="GPU")
###Output
_____no_output_____
###Markdown
Data Preparation Download the dataset The following sample code downloads the MNIST dataset and extracts it to the specified location.
###Code
import os
import requests
requests.packages.urllib3.disable_warnings()
def download_dataset(dataset_url, path):
filename = dataset_url.split("/")[-1]
save_path = os.path.join(path, filename)
if os.path.exists(save_path):
return
if not os.path.exists(path):
os.makedirs(path)
res = requests.get(dataset_url, stream=True, verify=False)
with open(save_path, "wb") as f:
for chunk in res.iter_content(chunk_size=512):
if chunk:
f.write(chunk)
print("The {} file is downloaded and saved in the path {} after processing".format(os.path.basename(dataset_url), path))
train_path = "datasets/MNIST_Data/train"
test_path = "datasets/MNIST_Data/test"
download_dataset("https://mindspore-website.obs.myhuaweicloud.com/notebook/datasets/mnist/train-labels-idx1-ubyte", train_path)
download_dataset("https://mindspore-website.obs.myhuaweicloud.com/notebook/datasets/mnist/train-images-idx3-ubyte", train_path)
download_dataset("https://mindspore-website.obs.myhuaweicloud.com/notebook/datasets/mnist/t10k-labels-idx1-ubyte", test_path)
download_dataset("https://mindspore-website.obs.myhuaweicloud.com/notebook/datasets/mnist/t10k-images-idx3-ubyte", test_path)
###Output
_____no_output_____
###Markdown
The directory structure of the downloaded dataset files is as follows:```text./datasets/MNIST_Data├── test│ ├── t10k-images-idx3-ubyte│ └── t10k-labels-idx1-ubyte└── train ├── train-images-idx3-ubyte └── train-labels-idx1-ubyte``` Define the dataset augmentation method The original MNIST training set consists of 60000 single-channel digit images of $28\times28$ pixels. The LeNet5 network with Bayesian layers used in this training expects training-data tensors of shape `(32,1,32,32)`, so the custom create_dataset function augments the original dataset into data that meets the training requirements. For an explanation of the specific augmentation operations, see the [Quick Start](https://www.mindspore.cn/tutorials/zh-CN/master/quick_start.html).
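As a quick sanity check (a sketch), one batch from the `create_dataset` helper defined in the next cell can be inspected to confirm the `(32,1,32,32)` shape:

```python
ds_check = create_dataset('./datasets/MNIST_Data/train', batch_size=32)
batch = next(iter(ds_check.create_dict_iterator()))
print(batch['image'].shape, batch['label'].shape)  # expected: (32, 1, 32, 32) and (32,)
```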
###Code
import mindspore.dataset.vision.c_transforms as CV
import mindspore.dataset.transforms.c_transforms as C
from mindspore.dataset.vision import Inter
from mindspore import dataset as ds
def create_dataset(data_path, batch_size=32, repeat_size=1,
num_parallel_workers=1):
# define dataset
mnist_ds = ds.MnistDataset(data_path)
# define some parameters needed for data enhancement and rough justification
resize_height, resize_width = 32, 32
rescale = 1.0 / 255.0
shift = 0.0
rescale_nml = 1 / 0.3081
shift_nml = -1 * 0.1307 / 0.3081
# according to the parameters, generate the corresponding data enhancement method
c_trans = [
CV.Resize((resize_height, resize_width), interpolation=Inter.LINEAR),
CV.Rescale(rescale_nml, shift_nml),
CV.Rescale(rescale, shift),
CV.HWC2CHW()
]
type_cast_op = C.TypeCast(mstype.int32)
# using map to apply operations to a dataset
mnist_ds = mnist_ds.map(operations=type_cast_op, input_columns="label", num_parallel_workers=num_parallel_workers)
mnist_ds = mnist_ds.map(operations=c_trans, input_columns="image", num_parallel_workers=num_parallel_workers)
# process the generated dataset
buffer_size = 10000
mnist_ds = mnist_ds.shuffle(buffer_size=buffer_size)
mnist_ds = mnist_ds.batch(batch_size, drop_remainder=True)
mnist_ds = mnist_ds.repeat(repeat_size)
return mnist_ds
###Output
_____no_output_____
###Markdown
Define the Bayesian neural network In the classic LeNet5 network, data flows through the following computation: conv1 -> activation -> pooling -> conv2 -> activation -> pooling -> flatten -> fully connected 1 -> fully connected 2 -> fully connected 3. In this example we bring in the probabilistic programming approach and use the `bnn_layers` module to turn the convolutional and fully connected layers into Bayesian layers.
###Code
import mindspore.nn as nn
from mindspore.nn.probability import bnn_layers
import mindspore.ops as ops
from mindspore import dtype as mstype
class BNNLeNet5(nn.Cell):
def __init__(self, num_class=10):
super(BNNLeNet5, self).__init__()
self.num_class = num_class
self.conv1 = bnn_layers.ConvReparam(1, 6, 5, stride=1, padding=0, has_bias=False, pad_mode="valid")
self.conv2 = bnn_layers.ConvReparam(6, 16, 5, stride=1, padding=0, has_bias=False, pad_mode="valid")
self.fc1 = bnn_layers.DenseReparam(16 * 5 * 5, 120)
self.fc2 = bnn_layers.DenseReparam(120, 84)
self.fc3 = bnn_layers.DenseReparam(84, self.num_class)
self.relu = nn.ReLU()
self.max_pool2d = nn.MaxPool2d(kernel_size=2, stride=2)
self.flatten = nn.Flatten()
def construct(self, x):
x = self.max_pool2d(self.relu(self.conv1(x)))
x = self.max_pool2d(self.relu(self.conv2(x)))
x = self.flatten(x)
x = self.relu(self.fc1(x))
x = self.relu(self.fc2(x))
x = self.fc3(x)
return x
network = BNNLeNet5(num_class=10)
for layer in network.trainable_params():
print(layer.name)
###Output
conv1.weight_posterior.mean
conv1.weight_posterior.untransformed_std
conv2.weight_posterior.mean
conv2.weight_posterior.untransformed_std
fc1.weight_posterior.mean
fc1.weight_posterior.untransformed_std
fc1.bias_posterior.mean
fc1.bias_posterior.untransformed_std
fc2.weight_posterior.mean
fc2.weight_posterior.untransformed_std
fc2.bias_posterior.mean
fc2.bias_posterior.untransformed_std
fc3.weight_posterior.mean
fc3.weight_posterior.untransformed_std
fc3.bias_posterior.mean
fc3.bias_posterior.untransformed_std
###Markdown
The printed information shows that in the LeNet network built with the `bnn_layers` module, both the convolutional layers and the fully connected layers are Bayesian layers. Define the loss function and optimizer Next we need to define the loss function (Loss) and the optimizer (Optimizer). The loss function is the training objective of deep learning, also called the objective function; it can be understood as the distance between the neural network's output (Logits) and the labels (Labels), and it is a scalar. Common loss functions include mean squared error, L2 loss, Hinge loss, cross entropy, and so on. Image classification applications usually use the cross-entropy loss (CrossEntropy). The optimizer is used to solve (train) the neural network. Because the number of network parameters is huge, they cannot be solved for directly, so deep learning uses stochastic gradient descent (SGD) and its improved variants. MindSpore wraps common optimizers such as `SGD`, `Adam`, `Momentum`, and so on. This example uses the `Adam` optimizer, which usually requires setting two parameters: the learning rate (`learning_rate`) and the weight decay term (`weight_decay`). Sample code for defining the loss function and the optimizer in MindSpore is shown in the cell below.
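Note that the cell below only passes `learning_rate`; if you also want the weight-decay term mentioned above, `nn.AdamWeightDecay` accepts it as an additional argument. A minimal sketch (the value chosen here is arbitrary, for illustration only):

```python
optimizer = nn.AdamWeightDecay(params=network.trainable_params(),
                               learning_rate=0.0001,
                               weight_decay=1e-5)  # illustrative value, not from the tutorial
```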
###Code
import mindspore.nn as nn
# loss function definition
criterion = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
# optimization definition
optimizer = nn.AdamWeightDecay(params=network.trainable_params(), learning_rate=0.0001)
###Output
_____no_output_____
###Markdown
Train the network Training a Bayesian neural network is basically the same as training a DNN; the only difference is that `WithLossCell` is replaced with `WithBNNLossCell`, which is suited to BNNs. Besides the `backbone` and `loss_fn` parameters, `WithBNNLossCell` adds two parameters, `dnn_factor` and `bnn_factor`. These two parameters balance the overall network loss against the KL divergence of the Bayesian layers, preventing an overly large KL divergence from masking the overall network loss.- `dnn_factor` is the coefficient of the overall network loss computed by the loss function.- `bnn_factor` is the coefficient of the KL divergence of each Bayesian layer. Build the model training function `train_model` and the model validation function `validate_model`.
###Code
def train_model(train_net, net, dataset):
accs = []
loss_sum = 0
for _, data in enumerate(dataset.create_dict_iterator()):
train_x = Tensor(data['image'].asnumpy().astype(np.float32))
label = Tensor(data['label'].asnumpy().astype(np.int32))
loss = train_net(train_x, label)
output = net(train_x)
log_output = ops.LogSoftmax(axis=1)(output)
acc = np.mean(log_output.asnumpy().argmax(axis=1) == label.asnumpy())
accs.append(acc)
loss_sum += loss.asnumpy()
loss_sum = loss_sum / len(accs)
acc_mean = np.mean(accs)
return loss_sum, acc_mean
def validate_model(net, dataset):
accs = []
for _, data in enumerate(dataset.create_dict_iterator()):
train_x = Tensor(data['image'].asnumpy().astype(np.float32))
label = Tensor(data['label'].asnumpy().astype(np.int32))
output = net(train_x)
log_output = ops.LogSoftmax(axis=1)(output)
acc = np.mean(log_output.asnumpy().argmax(axis=1) == label.asnumpy())
accs.append(acc)
acc_mean = np.mean(accs)
return acc_mean
###Output
_____no_output_____
###Markdown
Run the training.
###Code
from mindspore.nn import TrainOneStepCell
from mindspore import Tensor
import numpy as np
net_with_loss = bnn_layers.WithBNNLossCell(network, criterion, dnn_factor=60000, bnn_factor=0.000001)
train_bnn_network = TrainOneStepCell(net_with_loss, optimizer)
train_bnn_network.set_train()
train_set = create_dataset('./datasets/MNIST_Data/train', 64, 1)
test_set = create_dataset('./datasets/MNIST_Data/test', 64, 1)
epoch = 10
for i in range(epoch):
train_loss, train_acc = train_model(train_bnn_network, network, train_set)
valid_acc = validate_model(network, test_set)
print('Epoch: {} \tTraining Loss: {:.4f} \tTraining Accuracy: {:.4f} \tvalidation Accuracy: {:.4f}'.
format(i+1, train_loss, train_acc, valid_acc))
###Output
Epoch: 1 Training Loss: 21444.8605 Training Accuracy: 0.8928 validation Accuracy: 0.9513
Epoch: 2 Training Loss: 9396.3887 Training Accuracy: 0.9536 validation Accuracy: 0.9635
Epoch: 3 Training Loss: 7320.2412 Training Accuracy: 0.9641 validation Accuracy: 0.9674
Epoch: 4 Training Loss: 6221.6970 Training Accuracy: 0.9685 validation Accuracy: 0.9731
Epoch: 5 Training Loss: 5450.9543 Training Accuracy: 0.9725 validation Accuracy: 0.9733
Epoch: 6 Training Loss: 4898.9741 Training Accuracy: 0.9754 validation Accuracy: 0.9767
Epoch: 7 Training Loss: 4505.7502 Training Accuracy: 0.9775 validation Accuracy: 0.9784
Epoch: 8 Training Loss: 4099.8783 Training Accuracy: 0.9797 validation Accuracy: 0.9791
Epoch: 9 Training Loss: 3795.2288 Training Accuracy: 0.9810 validation Accuracy: 0.9796
Epoch: 10 Training Loss: 3581.4254 Training Accuracy: 0.9823 validation Accuracy: 0.9773
###Markdown
Implementing an Image Classification Application with a Bayesian Neural Network[](https://gitee.com/mindspore/docs/blob/master/docs/probability/docs/source_zh_cn/using_bnn.ipynb) [](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/master/probability/zh_cn/mindspore_using_bnn.ipynb) [](https://authoring-modelarts-cnnorth4.huaweicloud.com/console/lab?share-url-b64=aHR0cHM6Ly9taW5kc3BvcmUtd2Vic2l0ZS5vYnMuY24tbm9ydGgtNC5teWh1YXdlaWNsb3VkLmNvbS9ub3RlYm9vay9tYXN0ZXIvcHJvYmFiaWxpdHkvemhfY24vbWluZHNwb3JlX3VzaW5nX2Jubi5pcHluYg==&imageid=65f636a0-56cf-49df-b941-7d2a07ba8c8c) Deep learning models have strong fitting capacity, while Bayesian theory offers good interpretability. MindSpore Deep Probabilistic Programming (MindSpore Probability) combines deep learning with Bayesian learning: by treating network weights as distributions and introducing latent-space distributions, it can sample from those distributions during forward propagation, which introduces uncertainty and thereby strengthens the robustness and interpretability of the model. This chapter describes in detail how the Bayesian neural network of deep probabilistic programming is applied in MindSpore. Before starting the hands-on part, make sure you have correctly installed MindSpore 0.7.0-beta or later.> This example targets the GPU or Ascend 910 AI processor platforms. You can download the complete sample code here: .> > Bayesian neural networks currently only support graph mode; you need to set `context.set_context(mode=context.GRAPH_MODE)` in the code. Using a Bayesian Neural Network A Bayesian neural network is a basic model composed of a probabilistic model and a neural network, and its weights are no longer fixed values but distributions. This example shows how to implement a Bayesian neural network with the `bnn_layers` module of MDP and use it for a simple image classification task. The overall workflow is as follows:1. Process the MNIST dataset;2. Define the Bayesian LeNet network;3. Define the loss function and optimizer;4. Load the dataset and train. Environment Preparation Set the training mode to graph mode and the computing platform to GPU.
###Code
from mindspore import context
context.set_context(mode=context.GRAPH_MODE, save_graphs=False, device_target="GPU")
###Output
_____no_output_____
###Markdown
Data Preparation Download the dataset Download the MNIST dataset and extract it to the specified location by running the following commands in a Jupyter Notebook:
###Code
!mkdir -p ./datasets/MNIST_Data/train ./datasets/MNIST_Data/test
!wget -NP ./datasets/MNIST_Data/train https://mindspore-website.obs.myhuaweicloud.com/notebook/datasets/mnist/train-labels-idx1-ubyte --no-check-certificate
!wget -NP ./datasets/MNIST_Data/train https://mindspore-website.obs.myhuaweicloud.com/notebook/datasets/mnist/train-images-idx3-ubyte --no-check-certificate
!wget -NP ./datasets/MNIST_Data/test https://mindspore-website.obs.myhuaweicloud.com/notebook/datasets/mnist/t10k-labels-idx1-ubyte --no-check-certificate
!wget -NP ./datasets/MNIST_Data/test https://mindspore-website.obs.myhuaweicloud.com/notebook/datasets/mnist/t10k-images-idx3-ubyte --no-check-certificate
!tree ./datasets/MNIST_Data
###Output
./datasets/MNIST_Data
├── test
│ ├── t10k-images-idx3-ubyte
│ └── t10k-labels-idx1-ubyte
└── train
├── train-images-idx3-ubyte
└── train-labels-idx1-ubyte
2 directories, 4 files
###Markdown
Define the dataset augmentation method The original MNIST training set consists of 60000 single-channel digit images of $28\times28$ pixels. The LeNet5 network with Bayesian layers used in this training expects training-data tensors of shape `(32,1,32,32)`, so the custom create_dataset function augments the original dataset into data that meets the training requirements. For an explanation of the specific augmentation operations, see the Quick Start on the official site, [Implementing an Image Classification Application](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/quick_start/quick_start.html).
###Code
import mindspore.dataset.vision.c_transforms as CV
import mindspore.dataset.transforms.c_transforms as C
from mindspore.dataset.vision import Inter
from mindspore import dataset as ds
def create_dataset(data_path, batch_size=32, repeat_size=1,
num_parallel_workers=1):
# define dataset
mnist_ds = ds.MnistDataset(data_path)
# define some parameters needed for data enhancement and rough justification
resize_height, resize_width = 32, 32
rescale = 1.0 / 255.0
shift = 0.0
rescale_nml = 1 / 0.3081
shift_nml = -1 * 0.1307 / 0.3081
# according to the parameters, generate the corresponding data enhancement method
c_trans = [
CV.Resize((resize_height, resize_width), interpolation=Inter.LINEAR),
CV.Rescale(rescale_nml, shift_nml),
CV.Rescale(rescale, shift),
CV.HWC2CHW()
]
type_cast_op = C.TypeCast(mstype.int32)
# using map to apply operations to a dataset
mnist_ds = mnist_ds.map(operations=type_cast_op, input_columns="label", num_parallel_workers=num_parallel_workers)
mnist_ds = mnist_ds.map(operations=c_trans, input_columns="image", num_parallel_workers=num_parallel_workers)
# process the generated dataset
buffer_size = 10000
mnist_ds = mnist_ds.shuffle(buffer_size=buffer_size)
mnist_ds = mnist_ds.batch(batch_size, drop_remainder=True)
mnist_ds = mnist_ds.repeat(repeat_size)
return mnist_ds
###Output
_____no_output_____
###Markdown
Define the Bayesian neural network In the classic LeNet5 network, data flows through the following computation: conv1 -> activation -> pooling -> conv2 -> activation -> pooling -> flatten -> fully connected 1 -> fully connected 2 -> fully connected 3. In this example we bring in the probabilistic programming approach and use the `bnn_layers` module to turn the convolutional and fully connected layers into Bayesian layers.
###Code
from mindspore.common.initializer import Normal
import mindspore.nn as nn
from mindspore.nn.probability import bnn_layers
import mindspore.ops as ops
from mindspore import dtype as mstype
class BNNLeNet5(nn.Cell):
def __init__(self, num_class=10):
super(BNNLeNet5, self).__init__()
self.num_class = num_class
self.conv1 = bnn_layers.ConvReparam(1, 6, 5, stride=1, padding=0, has_bias=False, pad_mode="valid")
self.conv2 = bnn_layers.ConvReparam(6, 16, 5, stride=1, padding=0, has_bias=False, pad_mode="valid")
self.fc1 = bnn_layers.DenseReparam(16 * 5 * 5, 120)
self.fc2 = bnn_layers.DenseReparam(120, 84)
self.fc3 = bnn_layers.DenseReparam(84, self.num_class)
self.relu = nn.ReLU()
self.max_pool2d = nn.MaxPool2d(kernel_size=2, stride=2)
self.flatten = nn.Flatten()
def construct(self, x):
x = self.max_pool2d(self.relu(self.conv1(x)))
x = self.max_pool2d(self.relu(self.conv2(x)))
x = self.flatten(x)
x = self.relu(self.fc1(x))
x = self.relu(self.fc2(x))
x = self.fc3(x)
return x
network = BNNLeNet5(num_class=10)
for layer in network.trainable_params():
print(layer.name)
###Output
conv1.weight_posterior.mean
conv1.weight_posterior.untransformed_std
conv2.weight_posterior.mean
conv2.weight_posterior.untransformed_std
fc1.weight_posterior.mean
fc1.weight_posterior.untransformed_std
fc1.bias_posterior.mean
fc1.bias_posterior.untransformed_std
fc2.weight_posterior.mean
fc2.weight_posterior.untransformed_std
fc2.bias_posterior.mean
fc2.bias_posterior.untransformed_std
fc3.weight_posterior.mean
fc3.weight_posterior.untransformed_std
fc3.bias_posterior.mean
fc3.bias_posterior.untransformed_std
###Markdown
The printed information shows that in the LeNet network built with the `bnn_layers` module, both the convolutional layers and the fully connected layers are Bayesian layers. Define the loss function and optimizer Next we need to define the loss function (Loss) and the optimizer (Optimizer). The loss function is the training objective of deep learning, also called the objective function; it can be understood as the distance between the neural network's output (Logits) and the labels (Labels), and it is a scalar. Common loss functions include mean squared error, L2 loss, Hinge loss, cross entropy, and so on. Image classification applications usually use the cross-entropy loss (CrossEntropy). The optimizer is used to solve (train) the neural network. Because the number of network parameters is huge, they cannot be solved for directly, so deep learning uses stochastic gradient descent (SGD) and its improved variants. MindSpore wraps common optimizers such as `SGD`, `Adam`, `Momentum`, and so on. This example uses the `Adam` optimizer, which usually requires setting two parameters: the learning rate (`learning_rate`) and the weight decay term (`weight_decay`). Sample code for defining the loss function and the optimizer in MindSpore is shown in the cell below.
###Code
import mindspore.nn as nn
# loss function definition
criterion = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
# optimization definition
optimizer = nn.AdamWeightDecay(params=network.trainable_params(), learning_rate=0.0001)
###Output
_____no_output_____
###Markdown
Train the network Training a Bayesian neural network is basically the same as training a DNN; the only difference is that `WithLossCell` is replaced with `WithBNNLossCell`, which is suited to BNNs. Besides the `backbone` and `loss_fn` parameters, `WithBNNLossCell` adds two parameters, `dnn_factor` and `bnn_factor`. These two parameters balance the overall network loss against the KL divergence of the Bayesian layers, preventing an overly large KL divergence from masking the overall network loss.- `dnn_factor` is the coefficient of the overall network loss computed by the loss function.- `bnn_factor` is the coefficient of the KL divergence of each Bayesian layer. Build the model training function `train_model` and the model validation function `validate_model`.
###Code
def train_model(train_net, net, dataset):
accs = []
loss_sum = 0
for _, data in enumerate(dataset.create_dict_iterator()):
train_x = Tensor(data['image'].asnumpy().astype(np.float32))
label = Tensor(data['label'].asnumpy().astype(np.int32))
loss = train_net(train_x, label)
output = net(train_x)
log_output = ops.LogSoftmax(axis=1)(output)
acc = np.mean(log_output.asnumpy().argmax(axis=1) == label.asnumpy())
accs.append(acc)
loss_sum += loss.asnumpy()
loss_sum = loss_sum / len(accs)
acc_mean = np.mean(accs)
return loss_sum, acc_mean
def validate_model(net, dataset):
accs = []
for _, data in enumerate(dataset.create_dict_iterator()):
train_x = Tensor(data['image'].asnumpy().astype(np.float32))
label = Tensor(data['label'].asnumpy().astype(np.int32))
output = net(train_x)
log_output = ops.LogSoftmax(axis=1)(output)
acc = np.mean(log_output.asnumpy().argmax(axis=1) == label.asnumpy())
accs.append(acc)
acc_mean = np.mean(accs)
return acc_mean
###Output
_____no_output_____
###Markdown
Run the training.
###Code
from mindspore.nn import TrainOneStepCell
from mindspore import Tensor
import numpy as np
net_with_loss = bnn_layers.WithBNNLossCell(network, criterion, dnn_factor=60000, bnn_factor=0.000001)
train_bnn_network = TrainOneStepCell(net_with_loss, optimizer)
train_bnn_network.set_train()
train_set = create_dataset('./datasets/MNIST_Data/train', 64, 1)
test_set = create_dataset('./datasets/MNIST_Data/test', 64, 1)
epoch = 10
for i in range(epoch):
train_loss, train_acc = train_model(train_bnn_network, network, train_set)
valid_acc = validate_model(network, test_set)
print('Epoch: {} \tTraining Loss: {:.4f} \tTraining Accuracy: {:.4f} \tvalidation Accuracy: {:.4f}'.
format(i+1, train_loss, train_acc, valid_acc))
###Output
Epoch: 1 Training Loss: 21444.8605 Training Accuracy: 0.8928 validation Accuracy: 0.9513
Epoch: 2 Training Loss: 9396.3887 Training Accuracy: 0.9536 validation Accuracy: 0.9635
Epoch: 3 Training Loss: 7320.2412 Training Accuracy: 0.9641 validation Accuracy: 0.9674
Epoch: 4 Training Loss: 6221.6970 Training Accuracy: 0.9685 validation Accuracy: 0.9731
Epoch: 5 Training Loss: 5450.9543 Training Accuracy: 0.9725 validation Accuracy: 0.9733
Epoch: 6 Training Loss: 4898.9741 Training Accuracy: 0.9754 validation Accuracy: 0.9767
Epoch: 7 Training Loss: 4505.7502 Training Accuracy: 0.9775 validation Accuracy: 0.9784
Epoch: 8 Training Loss: 4099.8783 Training Accuracy: 0.9797 validation Accuracy: 0.9791
Epoch: 9 Training Loss: 3795.2288 Training Accuracy: 0.9810 validation Accuracy: 0.9796
Epoch: 10 Training Loss: 3581.4254 Training Accuracy: 0.9823 validation Accuracy: 0.9773
|
lecture_08/assignment/g4/G4_benchmark.ipynb | ###Markdown
Setup
###Code
!pip install biopython
import urllib.request
from pathlib import Path
from Bio import SeqIO
import numpy as np
import gzip
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers.experimental.preprocessing import TextVectorization
###Output
_____no_output_____
###Markdown
Reshaping data from fasta to txt
###Code
classes = ['notpresent', 'present']
sets = ['train', 'valid']
for c in classes:
for s in sets:
urllib.request.urlretrieve(f"https://github.com/simecek/dspracticum2020/raw/master/lecture_08/assignment/g4/g4_{c}_{s}.fa.gz", f"g4_{c}_{s}.fa.gz")
for c in classes:
for s in sets:
Path(f"data/{s}/{c}").mkdir(parents=True, exist_ok=True)
for c in classes:
for s in sets:
with gzip.open(f"g4_{c}_{s}.fa.gz", "rt") as handle:
for record in SeqIO.parse(handle, "fasta"):
id = record.id
with open(f"data/{s}/{c}/{id}.txt", "w") as fw:
fw.writelines([" ".join(str(record.seq))])
###Output
_____no_output_____
###Markdown
Reading data
###Code
batch_size = 128
raw_train_ds = tf.keras.preprocessing.text_dataset_from_directory(
'data/train/',
batch_size=batch_size,
class_names=classes)
raw_valid_ds = tf.keras.preprocessing.text_dataset_from_directory(
'data/valid/',
batch_size=batch_size,
class_names=classes)
vectorize_layer = TextVectorization(output_mode='int')
train_text = raw_train_ds.map(lambda x, y: x)
vectorize_layer.adapt(train_text)
vectorize_layer.set_vocabulary(vocab=np.asarray(['a', 'c', 't', 'g', 'n']))
def vectorize_text(text, label):
text = tf.expand_dims(text, -1)
return vectorize_layer(text)-2, label
train_ds = raw_train_ds.map(vectorize_text)
valid_ds = raw_valid_ds.map(vectorize_text)
###Output
_____no_output_____
###Markdown
Model training
###Code
# one-hot encoding
onehot_layer = keras.layers.Lambda(lambda x: tf.one_hot(tf.cast(x,'int64'), 4))
model_lstm = tf.keras.Sequential([
onehot_layer,
keras.layers.LSTM(32, return_sequences=True),
keras.layers.LSTM(32, return_sequences=False),
keras.layers.Dense(1, activation="sigmoid")])
model_lstm.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
epochs = 1
history = model_lstm.fit(
train_ds,
epochs=epochs, # unstable
validation_data = valid_ds)
model_cnn = tf.keras.Sequential([
onehot_layer,
keras.layers.Conv1D(32, kernel_size=6, data_format='channels_last', activation='relu'),
keras.layers.BatchNormalization(),
keras.layers.MaxPooling1D(),
keras.layers.Conv1D(16, kernel_size=6, data_format='channels_last', activation='relu'),
keras.layers.BatchNormalization(),
keras.layers.MaxPooling1D(),
keras.layers.Conv1D(4, kernel_size=6, data_format='channels_last', activation='relu'),
keras.layers.BatchNormalization(),
keras.layers.MaxPooling1D(),
keras.layers.Dropout(0.3),
keras.layers.GlobalAveragePooling1D(),
keras.layers.Dense(1, activation="sigmoid")
])
model_cnn.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model_cnn.fit(
train_ds,
epochs=epochs,
validation_data = valid_ds)
###Output
_____no_output_____ |
tutorials/applications/Control-Z Gate Sequence.ipynb | ###Markdown
Control-Z Gate Sequence IntroductionIn this tutorial we show how to prepare the pulse sequence that generates a *Controlled - Z* gate. We will prepare our state with atoms in any of the "digital" states that we shall call $|g\rangle$ and $|h \rangle$ ( for "ground" and "hyperfine", respectively). Then we will use the *Rydberg blockade* effect to create the logic gate. The levels that each atom can take are the following: We will be using *NumPy* and *Matplotlib* for calculations and plots. Many additional details about the CZ gate construction can be found in [1111.6083v2](https://arxiv.org/abs/1111.6083)
###Code
import numpy as np
import matplotlib.pyplot as plt
import qutip
from itertools import product
###Output
_____no_output_____
###Markdown
We import the following Classes from Pulser:
###Code
from pulser import Pulse, Sequence, Register
from pulser.devices import Chadoq2
from pulser.simulation import Simulation
from pulser.waveforms import BlackmanWaveform,ConstantWaveform
###Output
_____no_output_____
###Markdown
1. Loading the Register on a Device Defining an atom register can simply be done by choosing one of the predetermined shapes included in the `Register` class. We can also construct a dictionary with specific labels for each atom. The atoms must lie inside the *Rydberg blockade radius* $R_b$, which we will characterize by $$\hbar \Omega^{\text{Max}}_{\text{Rabi}} \sim U_{ij} = \frac{C_6}{R_{b}^6},$$where the coefficient $C_6$ determines the strength of the interaction ($C_6/\hbar \approx 5008$ GHz$\cdot\mu m^6$). We can obtain the corresponding Rydberg blockade radius from a given $\Omega_{\text{Rabi}}^{\text{max}}$ using the `rydberg_blockade_radius()` method from `Chadoq2`. For the pulses in this tutorial, $\Omega^{\text{Max}}_{\text{Rabi}}$ is below $2\pi \times 10$ MHz so:
###Code
Rabi = np.linspace(1, 10, 10)
R_blockade = [Chadoq2.rydberg_blockade_radius(2.*np.pi*rabi) for rabi in Rabi]
plt.figure()
plt.plot(Rabi, R_blockade,'--o')
plt.xlabel(r"$\Omega/(2\pi)$ [MHz]", fontsize=14)
plt.ylabel(r"$R_b$ [$\mu\.m$]", fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
Thus, we place our atoms at relative distances below $5$ µm, therefore ensuring we are inside the Rydberg blockade volume.
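As a quick check (a sketch), the blockade radius at the maximal Rabi frequency used here can be compared against the 4 µm separation chosen for the register below:

```python
import numpy as np
from pulser.devices import Chadoq2

omega_max = 2 * np.pi * 10  # rad/µs, the upper bound on the Rabi frequency in this tutorial
r_b = Chadoq2.rydberg_blockade_radius(omega_max)
print(f"R_b = {r_b:.2f} µm, atom separation = 4 µm, inside blockade: {r_b > 4.0}")
```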
###Code
# Atom Register and Device
q_dict = {"control":np.array([-2,0.]),
"target": np.array([2,0.]),
}
reg = Register(q_dict)
reg.draw()
###Output
_____no_output_____
###Markdown
2. State Preparation The first part of our sequence will correspond to preparing the different states on which the CZ gate will act. For this, we define the following `Pulse` instances that correspond to $\pi$ and $2\pi$ pulses (notice that the area can be easily fixed using the predefined `BlackmanWaveform`): Let us construct a function that takes the label string (or "id") of a state and turns it into a ket state. This ket can be in any of the "digital" (ground-hyperfine levels), "ground-rydberg" or "all" levels. We also include a three-atom system case, which will be useful in the CCZ gate in the last section.
###Code
def build_state_from_id(s_id, basis_name):
if len(s_id) not in {2,3}:
raise ValueError("Not a valid state ID string")
ids = {'digital': 'gh', 'ground-rydberg': 'rg', 'all': 'rgh'}
if basis_name not in ids:
raise ValueError('Not a valid basis')
pool = {''.join(x) for x in product(ids[basis_name], repeat=len(s_id))}
if s_id not in pool:
raise ValueError('Not a valid state id for the given basis.')
ket = {op: qutip.basis(len(ids[basis_name]), i)
for i, op in enumerate(ids[basis_name])}
if len(s_id) == 3:
#Recall that s_id = 'C1'+'C2'+'T' while in the register reg_id = 'C1'+'T'+'C2'.
reg_id = s_id[0]+s_id[2]+s_id[1]
return qutip.tensor([ket[x] for x in reg_id])
else:
return qutip.tensor([ket[x] for x in s_id])
###Output
_____no_output_____
###Markdown
We try this out:
###Code
build_state_from_id('hg','digital')
###Output
_____no_output_____
###Markdown
Let's now write the state preparation sequence. We will also create the prepared state to be able to calculate its overlap during the simulation. First, let us define a π-pulse along the Y axis that will excite the atoms to the hyperfine state if requested:
###Code
duration = 300
pi_Y = Pulse.ConstantDetuning(BlackmanWaveform(duration, np.pi), 0., -np.pi/2)
pi_Y.draw()
###Output
_____no_output_____
###Markdown
The sequence preparation itself acts with the Raman channel if the desired initial state has atoms in the hyperfine level. We have also expanded it for the case of a CCZ in order to use it below:
###Code
def preparation_sequence(state_id, reg):
global seq
if not set(state_id) <= {'g','h'} or len(state_id) != len(reg.qubits):
raise ValueError('Not a valid state ID')
if len(reg.qubits) == 2:
seq_dict = {'1':'target', '0':'control'}
elif len(reg.qubits) == 3:
seq_dict = {'2':'target', '1':'control2', '0':'control1'}
seq = Sequence(reg, Chadoq2)
if set(state_id) == {'g'}:
basis = 'ground-rydberg'
print(f'Warning: {state_id} state does not require a preparation sequence.')
else:
basis = 'all'
for k in range(len(reg.qubits)):
if state_id[k] == 'h':
if 'raman' not in seq.declared_channels:
seq.declare_channel('raman','raman_local', seq_dict[str(k)])
else:
seq.target(seq_dict[str(k)],'raman')
seq.add(pi_Y,'raman')
prep_state = build_state_from_id(state_id, basis) # Raises error if not a valid `state_id` for the register
return prep_state
###Output
_____no_output_____
###Markdown
Let's test this sequence. Notice that the state "gg" (both atoms in the ground state) is automatically fed to the Register so a pulse sequence is not needed to prepare it.
###Code
# Define sequence and Set channels
prep_state = preparation_sequence('hh', reg)
seq.draw(draw_phase_area=True)
###Output
_____no_output_____
###Markdown
3. Constructing the Gate Sequence We apply the common $\pi-2\pi-\pi$ sequence for the CZ gate
###Code
pi_pulse = Pulse.ConstantDetuning(BlackmanWaveform(duration, np.pi), 0., 0)
twopi_pulse = Pulse.ConstantDetuning(BlackmanWaveform(duration, 2*np.pi), 0., 0)
def CZ_sequence(initial_id):
# Prepare State
prep_state = preparation_sequence(initial_id, reg)
prep_time = max((seq._last(ch).tf for ch in seq.declared_channels), default=0)
# Declare Rydberg channel
seq.declare_channel('ryd', 'rydberg_local', 'control')
# Write CZ sequence:
seq.add(pi_pulse, 'ryd', 'wait-for-all') # Wait for state preparation to finish.
seq.target('target', 'ryd') # Changes to target qubit
seq.add(twopi_pulse, 'ryd')
seq.target('control', 'ryd') # Changes back to control qubit
seq.add(pi_pulse, 'ryd')
return prep_state, prep_time
prep_state, prep_time = CZ_sequence('gh') # constructs seq, prep_state and prep_time
seq.draw(draw_phase_area=True)
print(f'Prepared state: {prep_state}')
print(f'Preparation time: {prep_time}ns')
###Output
_____no_output_____
###Markdown
4. Simulating the CZ sequence
###Code
CZ = {}
for state_id in {'gg','hg','gh','hh'}:
# Get CZ sequence
prep_state, prep_time = CZ_sequence(state_id) # constructs seq, prep_state and prep_time
# Construct Simulation instance
simul = Simulation(seq)
res = simul.run()
data=[st.overlap(prep_state) for st in res.states]
final_st = res.states[-1]
CZ[state_id] = final_st.overlap(prep_state)
plt.figure()
plt.plot(np.real(data))
plt.xlabel(r"Time [ns]")
plt.ylabel(fr'$ \langle\,{state_id} |\, \psi(t)\rangle$')
plt.axvspan(0, prep_time, alpha=0.06, color='royalblue')
plt.title(fr"Action of gate on state $|${state_id}$\rangle$")
CZ
###Output
_____no_output_____
###Markdown
5. CCZ Gate The same principle can be applied for composite gates. As an application, let us construct the *CCZ* gate, which determines the phase depending on the level of *two* control atoms. We begin by reconstructing the Register:
###Code
# Atom Register and Device
q_dict = {"control1":np.array([-2.0, 0.]),
"target": np.array([0., 2*np.sqrt(3.001)]),
"control2": np.array([2.0, 0.])}
reg = Register(q_dict)
reg.draw()
preparation_sequence('hhh', reg)
seq.draw(draw_phase_area=True)
def CCZ_sequence(initial_id):
# Prepare State
prep_state = preparation_sequence(initial_id, reg)
prep_time = max((seq._last(ch).tf for ch in seq.declared_channels), default=0)
# Declare Rydberg channel
seq.declare_channel('ryd', 'rydberg_local', 'control1')
# Write CCZ sequence:
seq.add(pi_pulse, 'ryd', protocol='wait-for-all') # Wait for state preparation to finish.
seq.target('control2', 'ryd')
seq.add(pi_pulse, 'ryd')
seq.target('target','ryd')
seq.add(twopi_pulse, 'ryd')
seq.target('control2','ryd')
seq.add(pi_pulse, 'ryd')
seq.target('control1','ryd')
seq.add(pi_pulse,'ryd')
return prep_state, prep_time
CCZ_sequence('hhh')
seq.draw(draw_phase_area=True)
CCZ = {}
for state_id in {''.join(x) for x in product('gh', repeat=3)}:
# Get CCZ sequence
prep_state, prep_time = CCZ_sequence(state_id)
# Construct Simulation instance
simul = Simulation(seq)
res = simul.run()
data=[st.overlap(prep_state) for st in res.states]
final_st = res.states[-1]
CCZ[state_id] = final_st.overlap(prep_state)
plt.figure()
plt.plot(np.real(data))
plt.xlabel(r"Time [ns]")
plt.ylabel(fr'$ \langle\,{state_id} | \psi(t)\rangle$')
plt.axvspan(0, prep_time, alpha=0.06, color='royalblue')
plt.title(fr"Action of gate on state $|${state_id}$\rangle$")
CCZ
###Output
_____no_output_____
###Markdown
Control-Z Gate Sequence IntroductionIn this tutorial we show how to prepare the pulse sequence that generates a *Controlled - Z* gate. We will prepare our state with atoms in any of the "digital" states that we shall call $|g\rangle$ and $|h \rangle$ ( for "ground" and "hyperfine", respectively). Then we will use the *Rydberg blockade* effect to create the logic gate. The levels that each atom can take are the following: We will be using *NumPy* and *Matplotlib* for calculations and plots. Many additional details about the CZ gate construction can be found in [1111.6083v2](https://arxiv.org/abs/1111.6083)
###Code
import numpy as np
import matplotlib.pyplot as plt
import qutip
from itertools import product
###Output
_____no_output_____
###Markdown
We import the following Classes from Pulser:
###Code
from pulser import Pulse, Sequence, Register
from pulser.devices import Chadoq2
from pulser.simulation import Simulation
from pulser.waveforms import BlackmanWaveform,ConstantWaveform
###Output
_____no_output_____
###Markdown
1. Loading the Register on a Device Defining an atom register can simply be done by choosing one of the predetermined shapes included in the `Register` class. We can also construct a dictionary with specific labels for each atom. The atoms must lie inside the *Rydberg blockade radius* $R_b$, which we will characterize by $$\hbar \Omega^{\text{Max}}_{\text{Rabi}} \sim U_{ij} = \frac{C_6}{R_{b}^6},$$ where the coefficient $C_6$ determines the strength of the interaction ($C_6/\hbar \approx 5008$ GHz.$\mu m^6$). We can obtain the corresponding Rydberg blockade radius from a given $\Omega_{\text{Rabi}}^{\text{max}}$ using the `rydberg_blockade_radius()` method from `Chadoq2`. For the pulses in this tutorial, $\Omega^{\text{Max}}_{\text{Rabi}}$ is below $2\pi \times 10$ MHz, so:
###Code
Rabi = np.linspace(1, 10, 10)
R_blockade = [Chadoq2.rydberg_blockade_radius(2.*np.pi*rabi) for rabi in Rabi]
plt.figure()
plt.plot(Rabi, R_blockade,'--o')
plt.xlabel(r"$\Omega/(2\pi)$ [MHz]", fontsize=14)
plt.ylabel(r"$R_b$ [$\mu\.m$]", fontsize=14)
plt.show()
###Output
_____no_output_____
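###Markdown
To make the blockade condition explicit, a short check with the same `rydberg_blockade_radius()` method confirms that the 4 µm spacing chosen below lies well inside the blockade radius at the largest Rabi frequency of this tutorial:
###Code
# The spacing used below (4 um between 'control' and 'target') should sit
# inside the blockade radius at the maximal Rabi frequency, 2*pi x 10 MHz.
max_rabi = 2 * np.pi * 10  # rad/us
R_b = Chadoq2.rydberg_blockade_radius(max_rabi)
spacing = 4.0  # um
print(f"R_b = {R_b:.2f} um at Omega = 2*pi x 10 MHz; spacing = {spacing} um")
print("Inside the blockade volume:", spacing < R_b)
###Output
_____no_output_____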
###Markdown
Thus, we place our atoms at relative distances below $5$ µm, therefore ensuring we are inside the Rydberg blockade volume.
###Code
# Atom Register and Device
q_dict = {"control":np.array([-2,0.]),
"target": np.array([2,0.]),
}
reg = Register(q_dict)
reg.draw()
###Output
_____no_output_____
###Markdown
2. State Preparation The first part of our sequence will correspond to preparing the different states on which the CZ gate will act. For this, we define the following `Pulse` instances that correspond to $\pi$ and $2\pi$ pulses (notice that the area can be easily fixed using the predefined `BlackmanWaveform`): Let us construct a function that takes the label string (or "id") of a state and turns it into a ket state. This ket can be in any of the "digital" (ground-hyperfine levels), "ground-rydberg" or "all" levels. We also include a three-atom system case, which will be useful in the CCZ gate in the last section.
###Code
def build_state_from_id(s_id, basis_name):
if len(s_id) not in {2,3}:
raise ValueError("Not a valid state ID string")
ids = {'digital': 'gh', 'ground-rydberg': 'rg', 'all': 'rgh'}
if basis_name not in ids:
raise ValueError('Not a valid basis')
pool = {''.join(x) for x in product(ids[basis_name], repeat=len(s_id))}
if s_id not in pool:
raise ValueError('Not a valid state id for the given basis.')
ket = {op: qutip.basis(len(ids[basis_name]), i)
for i, op in enumerate(ids[basis_name])}
if len(s_id) == 3:
#Recall that s_id = 'C1'+'C2'+'T' while in the register reg_id = 'C1'+'T'+'C2'.
reg_id = s_id[0]+s_id[2]+s_id[1]
return qutip.tensor([ket[x] for x in reg_id])
else:
return qutip.tensor([ket[x] for x in s_id])
###Output
_____no_output_____
###Markdown
We try this out:
###Code
build_state_from_id('hg','digital')
###Output
_____no_output_____
###Markdown
Let's now write the state preparation sequence. We will also create the prepared state to be able to calculate its overlap during the simulation. First, let us define a π-pulse along the Y axis that will excite the atoms to the hyperfine state if requested:
###Code
duration = 300
pi_Y = Pulse.ConstantDetuning(BlackmanWaveform(duration, np.pi), 0., -np.pi/2)
pi_Y.draw()
###Output
_____no_output_____
###Markdown
The sequence preparation itself acts with the Raman channel if the desired initial state has atoms in the hyperfine level. We have also expanded it for the case of a CCZ in order to use it below:
###Code
def preparation_sequence(state_id, reg):
global seq
if not set(state_id) <= {'g','h'} or len(state_id) != len(reg.qubits):
raise ValueError('Not a valid state ID')
if len(reg.qubits) == 2:
seq_dict = {'1':'target', '0':'control'}
elif len(reg.qubits) == 3:
seq_dict = {'2':'target', '1':'control2', '0':'control1'}
seq = Sequence(reg, Chadoq2)
if set(state_id) == {'g'}:
basis = 'ground-rydberg'
print(f'Warning: {state_id} state does not require a preparation sequence.')
else:
basis = 'all'
for k in range(len(reg.qubits)):
if state_id[k] == 'h':
if 'raman' not in seq.declared_channels:
seq.declare_channel('raman','raman_local', seq_dict[str(k)])
else:
seq.target(seq_dict[str(k)],'raman')
seq.add(pi_Y,'raman')
prep_state = build_state_from_id(state_id, basis) # Raises error if not a valid `state_id` for the register
return prep_state
###Output
_____no_output_____
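###Markdown
The function validates its input before touching the sequence, so an ID with the wrong length or with characters outside $\{g, h\}$ is rejected. A small illustration:
###Code
# Invalid state IDs are rejected with a ValueError before any channel is used.
for bad_id in ['g', 'gr', 'ghh']:  # wrong length, invalid character, wrong length
    try:
        preparation_sequence(bad_id, reg)
    except ValueError as err:
        print(f"{bad_id!r:6} -> ValueError: {err}")
###Output
_____no_output_____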
###Markdown
Let's test this sequence. Notice that the state "gg" (both atoms in the ground state) is automatically fed to the Register so a pulse sequence is not needed to prepare it.
###Code
# Define sequence and Set channels
prep_state = preparation_sequence('hh', reg)
seq.draw()
###Output
_____no_output_____
###Markdown
3. Constructing the Gate Sequence We apply the common $\pi-2\pi-\pi$ sequence for the CZ gate
###Code
pi_pulse = Pulse.ConstantDetuning(BlackmanWaveform(duration, np.pi), 0., 0)
twopi_pulse = Pulse.ConstantDetuning(BlackmanWaveform(duration, 2*np.pi), 0., 0)
def CZ_sequence(initial_id):
# Prepare State
prep_state = preparation_sequence(initial_id, reg)
prep_time = max((seq._last(ch).tf for ch in seq.declared_channels), default=0)
# Declare Rydberg channel
seq.declare_channel('ryd', 'rydberg_local', 'control')
# Write CZ sequence:
seq.add(pi_pulse, 'ryd', 'wait-for-all') # Wait for state preparation to finish.
seq.target('target', 'ryd') # Changes to target qubit
seq.add(twopi_pulse, 'ryd')
seq.target('control', 'ryd') # Changes back to control qubit
seq.add(pi_pulse, 'ryd')
return prep_state, prep_time
prep_state, prep_time = CZ_sequence('gh') # constructs seq, prep_state and prep_time
seq.draw()
print(f'Prepared state: {prep_state}')
print(f'Preparation time: {prep_time}ns')
###Output
_____no_output_____
###Markdown
4. Simulating the CZ sequence
###Code
CZ = {}
for state_id in {'gg','hg','gh','hh'}:
# Get CZ sequence
prep_state, prep_time = CZ_sequence(state_id) # constructs seq, prep_state and prep_time
# Construct Simulation instance
simul = Simulation(seq)
res = simul.run()
data=[st.overlap(prep_state) for st in res.states]
final_st = res.states[-1]
CZ[state_id] = final_st.overlap(prep_state)
plt.figure()
plt.plot(np.real(data))
plt.xlabel(r"Time [ns]")
plt.ylabel(fr'$ \langle\,{state_id} |\, \psi(t)\rangle$')
plt.axvspan(0, prep_time, alpha=0.06, color='royalblue')
plt.title(fr"Action of gate on state $|${state_id}$\rangle$")
CZ
###Output
_____no_output_____
###Markdown
5. CCZ Gate The same principle can be applied for composite gates. As an application, let us construct the *CCZ* gate, which determines the phase depending on the level of *two* control atoms. We begin by reconstructing the Register:
###Code
# Atom Register and Device
q_dict = {"control1":np.array([-2.0, 0.]),
"target": np.array([0., 2*np.sqrt(3.001)]),
"control2": np.array([2.0, 0.])}
reg = Register(q_dict)
reg.draw()
preparation_sequence('hhh', reg)
seq.draw()
def CCZ_sequence(initial_id):
# Prepare State
prep_state = preparation_sequence(initial_id, reg)
prep_time = max((seq._last(ch).tf for ch in seq.declared_channels), default=0)
# Declare Rydberg channel
seq.declare_channel('ryd', 'rydberg_local', 'control1')
    # Write CCZ sequence:
seq.add(pi_pulse, 'ryd', protocol='wait-for-all') # Wait for state preparation to finish.
seq.target('control2', 'ryd')
seq.add(pi_pulse, 'ryd')
seq.target('target','ryd')
seq.add(twopi_pulse, 'ryd')
seq.target('control2','ryd')
seq.add(pi_pulse, 'ryd')
seq.target('control1','ryd')
seq.add(pi_pulse,'ryd')
return prep_state, prep_time
CCZ_sequence('hhh')
seq.draw()
CCZ = {}
for state_id in {''.join(x) for x in product('gh', repeat=3)}:
    # Get CCZ sequence
prep_state, prep_time = CCZ_sequence(state_id)
# Construct Simulation instance
simul = Simulation(seq)
res = simul.run()
data=[st.overlap(prep_state) for st in res.states]
final_st = res.states[-1]
CCZ[state_id] = final_st.overlap(prep_state)
plt.figure()
plt.plot(np.real(data))
plt.xlabel(r"Time [ns]")
plt.ylabel(fr'$ \langle\,{state_id} | \psi(t)\rangle$')
plt.axvspan(0, prep_time, alpha=0.06, color='royalblue')
plt.title(fr"Action of gate on state $|${state_id}$\rangle$")
CCZ
###Output
_____no_output_____
###Markdown
Control-Z Gate Sequence IntroductionIn this tutorial we show how to prepare the pulse sequence that generates a *Controlled - Z* gate. We will prepare our state with atoms in any of the "digital" states that we shall call $|g\rangle$ and $|h \rangle$ ( for "ground" and "hyperfine", respectively). Then we will use the *Rydberg blockade* effect to create the logic gate. The levels that each atom can take are the following: We will be using *NumPy* and *Matplotlib* for calculations and plots. Many additional details about the CZ gate construction can be found in [1111.6083v2](https://arxiv.org/abs/1111.6083)
###Code
import numpy as np
import matplotlib.pyplot as plt
import qutip
from itertools import product
###Output
_____no_output_____
###Markdown
We import the following Classes from Pulser:
###Code
from pulser import Pulse, Sequence, Register
from pulser.devices import Chadoq2
from pulser.simulation import Simulation
from pulser.waveforms import BlackmanWaveform,ConstantWaveform
###Output
_____no_output_____
###Markdown
1. Loading the Register on a Pasqal Device Defining an atom register can simply be done by choosing one of the predetermined shapes included in the `Register` class. We can also construct a dictionary with specific labels for each atom. The atoms must lie inside the *Rydberg blockade radius* $R_b$, which we will characterize by $$\hbar \Omega^{\text{Max}}_{\text{Rabi}} \sim U_{ij} = \frac{C_6}{R_{b}^6},$$ where the coefficient $C_6$ determines the strength of the interaction ($C_6/\hbar \approx 5008$ GHz.$\mu m^6$). We can obtain the corresponding Rydberg blockade radius from a given $\Omega_{\text{Rabi}}^{\text{max}}$ using the `rydberg_blockade_radius()` method from `Chadoq2`. For the pulses in this tutorial, $\Omega^{\text{Max}}_{\text{Rabi}}$ is below $2\pi \times 10$ MHz, so:
###Code
Rabi = np.linspace(1, 10, 10)
R_blockade = [Chadoq2.rydberg_blockade_radius(2.*np.pi*rabi) for rabi in Rabi]
plt.figure()
plt.plot(Rabi, R_blockade,'--o')
plt.xlabel(r"$\Omega/(2\pi)$ [MHz]", fontsize=14)
plt.ylabel(r"$R_b$ [$\mu\.m$]", fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
Thus, we place our atoms at relative distances below $5$ µm, therefore ensuring we are inside the Rydberg blockade volume.
###Code
# Atom Register and Device
q_dict = {"control":np.array([-2,0.]),
"target": np.array([2,0.]),
}
reg = Register(q_dict)
reg.draw()
###Output
_____no_output_____
###Markdown
2. State Preparation The first part of our sequence will correspond to preparing the different states on which the CZ gate will act. For this, we define the following `Pulse` instances that correspond to $\pi$ and $2\pi$ pulses (notice that the area can be easily fixed using the predefined `BlackmanWaveform`): Let us construct a function that takes the label string (or "id") of a state and turns it into a ket state. This ket can be in any of the "digital" (ground-hyperfine levels), "ground-rydberg" or "all" levels. We also include a three-atom system case, which will be useful in the CCZ gate in the last section.
###Code
def build_state_from_id(s_id, basis_name):
if len(s_id) not in {2,3}:
raise ValueError("Not a valid state ID string")
ids = {'digital': 'gh', 'ground-rydberg': 'rg', 'all': 'rgh'}
if basis_name not in ids:
raise ValueError('Not a valid basis')
pool = {''.join(x) for x in product(ids[basis_name], repeat=len(s_id))}
if s_id not in pool:
raise ValueError('Not a valid state id for the given basis.')
ket = {op: qutip.basis(len(ids[basis_name]), i)
for i, op in enumerate(ids[basis_name])}
if len(s_id) == 3:
#Recall that s_id = 'C1'+'C2'+'T' while in the register reg_id = 'C1'+'T'+'C2'.
reg_id = s_id[0]+s_id[2]+s_id[1]
return qutip.tensor([ket[x] for x in reg_id])
else:
return qutip.tensor([ket[x] for x in s_id])
###Output
_____no_output_____
###Markdown
We try this out:
###Code
build_state_from_id('hg','digital')
###Output
_____no_output_____
###Markdown
Let's now write the state preparation sequence. We will also create the prepared state to be able to calculate its overlap during the simulation. First, let us define a π-pulse along the Y axis that will excite the atoms to the hyperfine state if requested:
###Code
duration = 300
pi_Y = Pulse.ConstantDetuning(BlackmanWaveform(duration, np.pi), 0., -np.pi/2)
pi_Y.draw()
###Output
_____no_output_____
###Markdown
The sequence preparation itself acts with the Raman channel if the desired initial state has atoms in the hyperfine level. We have also expanded it for the case of a CCZ in order to use it below:
###Code
def preparation_sequence(state_id, reg):
global seq
if not set(state_id) <= {'g','h'} or len(state_id) != len(reg.qubits):
raise ValueError('Not a valid state ID')
if len(reg.qubits) == 2:
seq_dict = {'1':'target', '0':'control'}
elif len(reg.qubits) == 3:
seq_dict = {'2':'target', '1':'control2', '0':'control1'}
seq = Sequence(reg, Chadoq2)
if set(state_id) == {'g'}:
basis = 'ground-rydberg'
print(f'Warning: {state_id} state does not require a preparation sequence.')
else:
basis = 'all'
for k in range(len(reg.qubits)):
if state_id[k] == 'h':
if 'raman' not in seq.declared_channels:
seq.declare_channel('raman','raman_local', seq_dict[str(k)])
else:
seq.target(seq_dict[str(k)],'raman')
seq.add(pi_Y,'raman')
prep_state = build_state_from_id(state_id, basis) # Raises error if not a valid `state_id` for the register
return prep_state
###Output
_____no_output_____
###Markdown
Let's test this sequence. Notice that the state "gg" (both atoms in the ground state) is automatically fed to the Register so a pulse sequence is not needed to prepare it.
###Code
# Define sequence and Set channels
prep_state = preparation_sequence('hh', reg)
seq.draw()
###Output
_____no_output_____
###Markdown
3. Constructing the Gate Sequence We apply the common $\pi-2\pi-\pi$ sequence for the CZ gate
###Code
pi_pulse = Pulse.ConstantDetuning(BlackmanWaveform(duration, np.pi), 0., 0)
twopi_pulse = Pulse.ConstantDetuning(BlackmanWaveform(duration, 2*np.pi), 0., 0)
def CZ_sequence(initial_id):
# Prepare State
prep_state = preparation_sequence(initial_id, reg)
prep_time = max((seq._last(ch).tf for ch in seq.declared_channels), default=0)
# Declare Rydberg channel
seq.declare_channel('ryd', 'rydberg_local', 'control')
# Write CZ sequence:
seq.add(pi_pulse, 'ryd', 'wait-for-all') # Wait for state preparation to finish.
seq.target('target', 'ryd') # Changes to target qubit
seq.add(twopi_pulse, 'ryd')
seq.target('control', 'ryd') # Changes back to control qubit
seq.add(pi_pulse, 'ryd')
return prep_state, prep_time
prep_state, prep_time = CZ_sequence('gh') # constructs seq, prep_state and prep_time
seq.draw()
print(f'Prepared state: {prep_state}')
print(f'Preparation time: {prep_time}ns')
###Output
_____no_output_____
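###Markdown
The same `seq._last(ch).tf` pattern used inside `CZ_sequence` also gives the total duration of the sequence, so we can separate the preparation time from the gate itself. With three 300 ns Blackman pulses the gate part should take roughly 900 ns, plus whatever retargeting delays the device inserts between pulses:
###Code
# Total sequence duration, reusing the (private) seq._last(ch).tf pattern from
# CZ_sequence. The gate alone corresponds to three 300 ns pulses plus any
# retargeting delays added by the channel.
total_time = max((seq._last(ch).tf for ch in seq.declared_channels), default=0)
print(f"Preparation time: {prep_time} ns")
print(f"Total duration  : {total_time} ns")
print(f"Gate portion    : {total_time - prep_time} ns")
###Output
_____no_output_____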
###Markdown
4. Simulating the CZ sequence
###Code
CZ = {}
for state_id in {'gg','hg','gh','hh'}:
# Get CZ sequence
prep_state, prep_time = CZ_sequence(state_id) # constructs seq, prep_state and prep_time
# Construct Simulation instance
simul = Simulation(seq)
res = simul.run()
data=[st.overlap(prep_state) for st in res.states]
final_st = res.states[-1]
CZ[state_id] = final_st.overlap(prep_state)
plt.figure()
plt.plot(np.real(data))
plt.xlabel(r"Time [ns]")
plt.ylabel(fr'$ \langle\,{state_id} |\, \psi(t)\rangle$')
plt.axvspan(0, prep_time, alpha=0.06, color='royalblue')
plt.title(fr"Action of gate on state $|${state_id}$\rangle$")
CZ
###Output
_____no_output_____
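###Markdown
To read off the gate action we can extract the phase of each overlap in units of $\pi$. Up to a global phase, an ideal CZ in this encoding is expected to imprint a $\pi$ phase on every input with at least one atom in $|g\rangle$ and to leave $|hh\rangle$ untouched (small deviations come from the finite blockade and pulse imperfections):
###Code
# Phase of each final overlap, in units of pi.
phase_table = {s: np.round(np.angle(ov) / np.pi, 3) for s, ov in CZ.items()}
print(phase_table)
###Output
_____no_output_____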
###Markdown
5. CCZ Gate The same principle can be applied for composite gates. As an application, let us construct the *CCZ* gate, which determines the phase depending on the level of *two* control atoms. We begin by reconstructing the Register:
###Code
# Atom Register and Device
q_dict = {"control1":np.array([-2.0, 0.]),
"target": np.array([0., 2*np.sqrt(3.001)]),
"control2": np.array([2.0, 0.])}
reg = Register(q_dict)
reg.draw()
preparation_sequence('hhh', reg)
seq.draw()
def CCZ_sequence(initial_id):
# Prepare State
prep_state = preparation_sequence(initial_id, reg)
prep_time = max((seq._last(ch).tf for ch in seq.declared_channels), default=0)
# Declare Rydberg channel
seq.declare_channel('ryd', 'rydberg_local', 'control1')
    # Write CCZ sequence:
seq.add(pi_pulse, 'ryd', protocol='wait-for-all') # Wait for state preparation to finish.
seq.target('control2', 'ryd')
seq.add(pi_pulse, 'ryd')
seq.target('target','ryd')
seq.add(twopi_pulse, 'ryd')
seq.target('control2','ryd')
seq.add(pi_pulse, 'ryd')
seq.target('control1','ryd')
seq.add(pi_pulse,'ryd')
return prep_state, prep_time
CCZ_sequence('hhh')
seq.draw()
CCZ = {}
for state_id in {''.join(x) for x in product('gh', repeat=3)}:
    # Get CCZ sequence
prep_state, prep_time = CCZ_sequence(state_id)
# Construct Simulation instance
simul = Simulation(seq)
res = simul.run()
data=[st.overlap(prep_state) for st in res.states]
final_st = res.states[-1]
CCZ[state_id] = final_st.overlap(prep_state)
plt.figure()
plt.plot(np.real(data))
plt.xlabel(r"Time [ns]")
plt.ylabel(fr'$ \langle\,{state_id} | \psi(t)\rangle$')
plt.axvspan(0, prep_time, alpha=0.06, color='royalblue')
plt.title(fr"Action of gate on state $|${state_id}$\rangle$")
CCZ
###Output
_____no_output_____
###Markdown
Control-Z Gate Sequence IntroductionIn this tutorial we show how to prepare the pulse sequence that generates a *Controlled - Z* gate. We will prepare our state with atoms in any of the "digital" states that we shall call $|g\rangle$ and $|h \rangle$ ( for "ground" and "hyperfine", respectively). Then we will use the *Rydberg blockade* effect to create the logic gate. The levels that each atom can take are the following: We will be using *NumPy* and *Matplotlib* for calculations and plots. Many additional details about the CZ gate construction can be found in [1111.6083v2](https://arxiv.org/abs/1111.6083)
###Code
import numpy as np
import matplotlib.pyplot as plt
import qutip
from itertools import product
###Output
_____no_output_____
###Markdown
We import the following Classes from Pulser:
###Code
from pulser import Pulse, Sequence, Register
from pulser.devices import Chadoq2
from pulser.simulation import Simulation
from pulser.waveforms import BlackmanWaveform, ConstantWaveform
###Output
_____no_output_____
###Markdown
1. Loading the Register on a Device Defining an atom register can simply be done by choosing one of the predetermined shapes included in the `Register` class. We can also construct a dictionary with specific labels for each atom. The atoms must lie inside the *Rydberg blockade radius* $R_b$, which we will characterize by $$\hbar \Omega^{\text{Max}}_{\text{Rabi}} \sim U_{ij} = \frac{C_6}{R_{b}^6},$$ where the coefficient $C_6$ determines the strength of the interaction ($C_6/\hbar \approx 5008$ GHz.$\mu m^6$). We can obtain the corresponding Rydberg blockade radius from a given $\Omega_{\text{Rabi}}^{\text{max}}$ using the `rydberg_blockade_radius()` method from `Chadoq2`. For the pulses in this tutorial, $\Omega^{\text{Max}}_{\text{Rabi}}$ is below $2\pi \times 10$ MHz, so:
###Code
Rabi = np.linspace(1, 10, 10)
R_blockade = [
Chadoq2.rydberg_blockade_radius(2.0 * np.pi * rabi) for rabi in Rabi
]
plt.figure()
plt.plot(Rabi, R_blockade, "--o")
plt.xlabel(r"$\Omega/(2\pi)$ [MHz]", fontsize=14)
plt.ylabel(r"$R_b$ [$\mu\.m$]", fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
Thus, we place our atoms at relative distances below $5$ µm, therefore ensuring we are inside the Rydberg blockade volume.
###Code
# Atom Register and Device
q_dict = {
"control": np.array([-2, 0.0]),
"target": np.array([2, 0.0]),
}
reg = Register(q_dict)
reg.draw()
###Output
_____no_output_____
###Markdown
2. State Preparation The first part of our sequence will correspond to preparing the different states on which the CZ gate will act. For this, we define the following `Pulse` instances that correspond to $\pi$ and $2\pi$ pulses (notice that the area can be easily fixed using the predefined `BlackmanWaveform`): Let us construct a function that takes the label string (or "id") of a state and turns it into a ket state. This ket can be in any of the "digital" (ground-hyperfine levels), "ground-rydberg" or "all" levels. We also include a three-atom system case, which will be useful in the CCZ gate in the last section.
###Code
def build_state_from_id(s_id, basis_name):
if len(s_id) not in {2, 3}:
raise ValueError("Not a valid state ID string")
ids = {"digital": "gh", "ground-rydberg": "rg", "all": "rgh"}
if basis_name not in ids:
raise ValueError("Not a valid basis")
pool = {"".join(x) for x in product(ids[basis_name], repeat=len(s_id))}
if s_id not in pool:
raise ValueError("Not a valid state id for the given basis.")
ket = {
op: qutip.basis(len(ids[basis_name]), i)
for i, op in enumerate(ids[basis_name])
}
if len(s_id) == 3:
# Recall that s_id = 'C1'+'C2'+'T' while in the register reg_id = 'C1'+'T'+'C2'.
reg_id = s_id[0] + s_id[2] + s_id[1]
return qutip.tensor([ket[x] for x in reg_id])
else:
return qutip.tensor([ket[x] for x in s_id])
###Output
_____no_output_____
###Markdown
We try this out:
###Code
build_state_from_id("hg", "digital")
###Output
_____no_output_____
###Markdown
Let's now write the state preparation sequence. We will also create the prepared state to be able to calculate its overlap during the simulation. First, let us define a π-pulse along the Y axis that will excite the atoms to the hyperfine state if requested:
###Code
duration = 300
pi_Y = Pulse.ConstantDetuning(
BlackmanWaveform(duration, np.pi), 0.0, -np.pi / 2
)
pi_Y.draw()
###Output
_____no_output_____
###Markdown
The sequence preparation itself acts with the Raman channel if the desired initial state has atoms in the hyperfine level. We have also expanded it for the case of a CCZ in order to use it below:
###Code
def preparation_sequence(state_id, reg):
global seq
if not set(state_id) <= {"g", "h"} or len(state_id) != len(reg.qubits):
raise ValueError("Not a valid state ID")
if len(reg.qubits) == 2:
seq_dict = {"1": "target", "0": "control"}
elif len(reg.qubits) == 3:
seq_dict = {"2": "target", "1": "control2", "0": "control1"}
seq = Sequence(reg, Chadoq2)
if set(state_id) == {"g"}:
basis = "ground-rydberg"
print(
f"Warning: {state_id} state does not require a preparation sequence."
)
else:
basis = "all"
for k in range(len(reg.qubits)):
if state_id[k] == "h":
if "raman" not in seq.declared_channels:
seq.declare_channel(
"raman", "raman_local", seq_dict[str(k)]
)
else:
seq.target(seq_dict[str(k)], "raman")
seq.add(pi_Y, "raman")
prep_state = build_state_from_id(
state_id, basis
) # Raises error if not a valid `state_id` for the register
return prep_state
###Output
_____no_output_____
###Markdown
Let's test this sequence. Notice that the state "gg" (both atoms in the ground state) is automatically fed to the Register so a pulse sequence is not needed to prepare it.
###Code
# Define sequence and Set channels
prep_state = preparation_sequence("hh", reg)
seq.draw(draw_phase_area=True)
###Output
_____no_output_____
###Markdown
3. Constructing the Gate Sequence We apply the common $\pi-2\pi-\pi$ sequence for the CZ gate
###Code
pi_pulse = Pulse.ConstantDetuning(BlackmanWaveform(duration, np.pi), 0.0, 0)
twopi_pulse = Pulse.ConstantDetuning(
BlackmanWaveform(duration, 2 * np.pi), 0.0, 0
)
def CZ_sequence(initial_id):
# Prepare State
prep_state = preparation_sequence(initial_id, reg)
prep_time = max(
(seq._last(ch).tf for ch in seq.declared_channels), default=0
)
# Declare Rydberg channel
seq.declare_channel("ryd", "rydberg_local", "control")
# Write CZ sequence:
seq.add(
pi_pulse, "ryd", "wait-for-all"
) # Wait for state preparation to finish.
seq.target("target", "ryd") # Changes to target qubit
seq.add(twopi_pulse, "ryd")
seq.target("control", "ryd") # Changes back to control qubit
seq.add(pi_pulse, "ryd")
return prep_state, prep_time
prep_state, prep_time = CZ_sequence(
"gh"
) # constructs seq, prep_state and prep_time
seq.draw(draw_phase_area=True)
print(f"Prepared state: {prep_state}")
print(f"Preparation time: {prep_time}ns")
###Output
_____no_output_____
###Markdown
4. Simulating the CZ sequence
###Code
CZ = {}
for state_id in {"gg", "hg", "gh", "hh"}:
# Get CZ sequence
prep_state, prep_time = CZ_sequence(
state_id
) # constructs seq, prep_state and prep_time
# Construct Simulation instance
simul = Simulation(seq)
res = simul.run()
data = [st.overlap(prep_state) for st in res.states]
final_st = res.states[-1]
CZ[state_id] = final_st.overlap(prep_state)
plt.figure()
plt.plot(np.real(data))
plt.xlabel(r"Time [ns]")
plt.ylabel(rf"$ \langle\,{state_id} |\, \psi(t)\rangle$")
plt.axvspan(0, prep_time, alpha=0.06, color="royalblue")
plt.title(rf"Action of gate on state $|${state_id}$\rangle$")
CZ
###Output
_____no_output_____
###Markdown
5. CCZ Gate The same principle can be applied for composite gates. As an application, let us construct the *CCZ* gate, which determines the phase depending on the level of *two* control atoms. We begin by reconstructing the Register:
###Code
# Atom Register and Device
q_dict = {
"control1": np.array([-2.0, 0.0]),
"target": np.array([0.0, 2 * np.sqrt(3.001)]),
"control2": np.array([2.0, 0.0]),
}
reg = Register(q_dict)
reg.draw()
preparation_sequence("hhh", reg)
seq.draw(draw_phase_area=True)
def CCZ_sequence(initial_id):
# Prepare State
prep_state = preparation_sequence(initial_id, reg)
prep_time = max(
(seq._last(ch).tf for ch in seq.declared_channels), default=0
)
# Declare Rydberg channel
seq.declare_channel("ryd", "rydberg_local", "control1")
# Write CCZ sequence:
seq.add(
pi_pulse, "ryd", protocol="wait-for-all"
) # Wait for state preparation to finish.
seq.target("control2", "ryd")
seq.add(pi_pulse, "ryd")
seq.target("target", "ryd")
seq.add(twopi_pulse, "ryd")
seq.target("control2", "ryd")
seq.add(pi_pulse, "ryd")
seq.target("control1", "ryd")
seq.add(pi_pulse, "ryd")
return prep_state, prep_time
CCZ_sequence("hhh")
seq.draw(draw_phase_area=True)
CCZ = {}
for state_id in {"".join(x) for x in product("gh", repeat=3)}:
# Get CCZ sequence
prep_state, prep_time = CCZ_sequence(state_id)
# Construct Simulation instance
simul = Simulation(seq)
res = simul.run()
data = [st.overlap(prep_state) for st in res.states]
final_st = res.states[-1]
CCZ[state_id] = final_st.overlap(prep_state)
plt.figure()
plt.plot(np.real(data))
plt.xlabel(r"Time [ns]")
plt.ylabel(rf"$ \langle\,{state_id} | \psi(t)\rangle$")
plt.axvspan(0, prep_time, alpha=0.06, color="royalblue")
plt.title(rf"Action of gate on state $|${state_id}$\rangle$")
CCZ
###Output
_____no_output_____
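###Markdown
Since all three atoms must sit inside each other's blockade radius, a quick geometric check lists the pairwise distances of the triangular register defined above:
###Code
# Pairwise distances (in um) of the triangular CCZ register. All of them
# should stay below the blockade radius computed earlier.
from itertools import combinations

for (n1, p1), (n2, p2) in combinations(q_dict.items(), 2):
    print(f"{n1} - {n2}: {np.linalg.norm(p1 - p2):.3f} um")
###Output
_____no_output_____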
###Markdown
Control-Z Gate Sequence IntroductionIn this tutorial we show how to prepare the pulse sequence that generates a *Controlled - Z* gate. We will prepare our state with atoms in any of the "digital" states that we shall call $|g\rangle$ and $|h \rangle$ ( for "ground" and "hyperfine", respectively). Then we will use the *Rydberg blockade* effect to create the logic gate. The levels that each atom can take are the following: We will be using *NumPy* and *Matplotlib* for calculations and plots. Many additional details about the CZ gate construction can be found in [1111.6083v2](https://arxiv.org/abs/1111.6083)
###Code
import numpy as np
import matplotlib.pyplot as plt
import qutip
from itertools import product
###Output
_____no_output_____
###Markdown
We import the following Classes from Pulser:
###Code
from pulser import Pulse, Sequence, Register
from pulser.devices import Chadoq2
from pulser_simulation import Simulation
from pulser.waveforms import BlackmanWaveform, ConstantWaveform
###Output
_____no_output_____
###Markdown
1. Loading the Register on a Device Defining an atom register can simply be done by choosing one of the predetermined shapes included in the `Register` class. We can also construct a dictionary with specific labels for each atom. The atoms must lie inside the *Rydberg blockade radius* $R_b$, which we will characterize by $$\hbar \Omega^{\text{Max}}_{\text{Rabi}} \sim U_{ij} = \frac{C_6}{R_{b}^6},$$ where the coefficient $C_6$ determines the strength of the interaction ($C_6/\hbar \approx 5008$ GHz.$\mu m^6$). We can obtain the corresponding Rydberg blockade radius from a given $\Omega_{\text{Rabi}}^{\text{max}}$ using the `rydberg_blockade_radius()` method from `Chadoq2`. For the pulses in this tutorial, $\Omega^{\text{Max}}_{\text{Rabi}}$ is below $2\pi \times 10$ MHz, so:
###Code
Rabi = np.linspace(1, 10, 10)
R_blockade = [
Chadoq2.rydberg_blockade_radius(2.0 * np.pi * rabi) for rabi in Rabi
]
plt.figure()
plt.plot(Rabi, R_blockade, "--o")
plt.xlabel(r"$\Omega/(2\pi)$ [MHz]", fontsize=14)
plt.ylabel(r"$R_b$ [$\mu\.m$]", fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
Thus, we place our atoms at relative distances below $5$ µm, therefore ensuring we are inside the Rydberg blockade volume.
###Code
# Atom Register and Device
q_dict = {
"control": np.array([-2, 0.0]),
"target": np.array([2, 0.0]),
}
reg = Register(q_dict)
reg.draw()
###Output
_____no_output_____
###Markdown
2. State Preparation The first part of our sequence will correspond to preparing the different states on which the CZ gate will act. For this, we define the following `Pulse` instances that correspond to $\pi$ and $2\pi$ pulses (notice that the area can be easily fixed using the predefined `BlackmanWaveform`): Let us construct a function that takes the label string (or "id") of a state and turns it into a ket state. This ket can be in any of the "digital" (ground-hyperfine levels), "ground-rydberg" or "all" levels. We also include a three-atom system case, which will be useful in the CCZ gate in the last section.
###Code
def build_state_from_id(s_id, basis_name):
if len(s_id) not in {2, 3}:
raise ValueError("Not a valid state ID string")
ids = {"digital": "gh", "ground-rydberg": "rg", "all": "rgh"}
if basis_name not in ids:
raise ValueError("Not a valid basis")
pool = {"".join(x) for x in product(ids[basis_name], repeat=len(s_id))}
if s_id not in pool:
raise ValueError("Not a valid state id for the given basis.")
ket = {
op: qutip.basis(len(ids[basis_name]), i)
for i, op in enumerate(ids[basis_name])
}
if len(s_id) == 3:
# Recall that s_id = 'C1'+'C2'+'T' while in the register reg_id = 'C1'+'T'+'C2'.
reg_id = s_id[0] + s_id[2] + s_id[1]
return qutip.tensor([ket[x] for x in reg_id])
else:
return qutip.tensor([ket[x] for x in s_id])
###Output
_____no_output_____
###Markdown
We try this out:
###Code
build_state_from_id("hg", "digital")
###Output
_____no_output_____
###Markdown
Let's now write the state preparation sequence. We will also create the prepared state to be able to calculate its overlap during the simulation. First, let us define a π-pulse along the Y axis that will excite the atoms to the hyperfine state if requested:
###Code
duration = 300
pi_Y = Pulse.ConstantDetuning(
BlackmanWaveform(duration, np.pi), 0.0, -np.pi / 2
)
pi_Y.draw()
###Output
_____no_output_____
###Markdown
The sequence preparation itself acts with the Raman channel if the desired initial state has atoms in the hyperfine level. We have also expanded it for the case of a CCZ in order to use it below:
###Code
def preparation_sequence(state_id, reg):
global seq
if not set(state_id) <= {"g", "h"} or len(state_id) != len(reg.qubits):
raise ValueError("Not a valid state ID")
if len(reg.qubits) == 2:
seq_dict = {"1": "target", "0": "control"}
elif len(reg.qubits) == 3:
seq_dict = {"2": "target", "1": "control2", "0": "control1"}
seq = Sequence(reg, Chadoq2)
if set(state_id) == {"g"}:
basis = "ground-rydberg"
print(
f"Warning: {state_id} state does not require a preparation sequence."
)
else:
basis = "all"
for k in range(len(reg.qubits)):
if state_id[k] == "h":
if "raman" not in seq.declared_channels:
seq.declare_channel(
"raman", "raman_local", seq_dict[str(k)]
)
else:
seq.target(seq_dict[str(k)], "raman")
seq.add(pi_Y, "raman")
prep_state = build_state_from_id(
state_id, basis
) # Raises error if not a valid `state_id` for the register
return prep_state
###Output
_____no_output_____
###Markdown
Let's test this sequence. Notice that the state "gg" (both atoms in the ground state) is automatically fed to the Register so a pulse sequence is not needed to prepare it.
###Code
# Define sequence and Set channels
prep_state = preparation_sequence("hh", reg)
seq.draw(draw_phase_area=True)
###Output
_____no_output_____
###Markdown
3. Constructing the Gate Sequence We apply the common $\pi-2\pi-\pi$ sequence for the CZ gate
###Code
pi_pulse = Pulse.ConstantDetuning(BlackmanWaveform(duration, np.pi), 0.0, 0)
twopi_pulse = Pulse.ConstantDetuning(
BlackmanWaveform(duration, 2 * np.pi), 0.0, 0
)
def CZ_sequence(initial_id):
# Prepare State
prep_state = preparation_sequence(initial_id, reg)
prep_time = max(
(seq._last(ch).tf for ch in seq.declared_channels), default=0
)
# Declare Rydberg channel
seq.declare_channel("ryd", "rydberg_local", "control")
# Write CZ sequence:
seq.add(
pi_pulse, "ryd", "wait-for-all"
) # Wait for state preparation to finish.
seq.target("target", "ryd") # Changes to target qubit
seq.add(twopi_pulse, "ryd")
seq.target("control", "ryd") # Changes back to control qubit
seq.add(pi_pulse, "ryd")
return prep_state, prep_time
prep_state, prep_time = CZ_sequence(
"gh"
) # constructs seq, prep_state and prep_time
seq.draw(draw_phase_area=True)
print(f"Prepared state: {prep_state}")
print(f"Preparation time: {prep_time}ns")
###Output
_____no_output_____
###Markdown
4. Simulating the CZ sequence
###Code
CZ = {}
for state_id in {"gg", "hg", "gh", "hh"}:
# Get CZ sequence
prep_state, prep_time = CZ_sequence(
state_id
) # constructs seq, prep_state and prep_time
# Construct Simulation instance
simul = Simulation(seq)
res = simul.run()
data = [st.overlap(prep_state) for st in res.states]
final_st = res.states[-1]
CZ[state_id] = final_st.overlap(prep_state)
plt.figure()
plt.plot(np.real(data))
plt.xlabel(r"Time [ns]")
plt.ylabel(rf"$ \langle\,{state_id} |\, \psi(t)\rangle$")
plt.axvspan(0, prep_time, alpha=0.06, color="royalblue")
plt.title(rf"Action of gate on state $|${state_id}$\rangle$")
CZ
###Output
_____no_output_____
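###Markdown
Because the ideal gate only imprints phases, the magnitude of each overlap should remain close to 1; the short check below makes any leakage out of the prepared state visible:
###Code
# |<prep|final>| for each input state: values close to 1 indicate that the
# gate acted essentially as a pure phase, with little leakage.
for s, ov in sorted(CZ.items()):
    print(f"|{s}> : |overlap| = {abs(ov):.4f}")
###Output
_____no_output_____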
###Markdown
5. CCZ Gate The same principle can be applied for composite gates. As an application, let us construct the *CCZ* gate, which determines the phase depending on the level of *two* control atoms. We begin by reconstructing the Register:
###Code
# Atom Register and Device
q_dict = {
"control1": np.array([-2.0, 0.0]),
"target": np.array([0.0, 2 * np.sqrt(3.001)]),
"control2": np.array([2.0, 0.0]),
}
reg = Register(q_dict)
reg.draw()
preparation_sequence("hhh", reg)
seq.draw(draw_phase_area=True)
def CCZ_sequence(initial_id):
# Prepare State
prep_state = preparation_sequence(initial_id, reg)
prep_time = max(
(seq._last(ch).tf for ch in seq.declared_channels), default=0
)
# Declare Rydberg channel
seq.declare_channel("ryd", "rydberg_local", "control1")
# Write CCZ sequence:
seq.add(
pi_pulse, "ryd", protocol="wait-for-all"
) # Wait for state preparation to finish.
seq.target("control2", "ryd")
seq.add(pi_pulse, "ryd")
seq.target("target", "ryd")
seq.add(twopi_pulse, "ryd")
seq.target("control2", "ryd")
seq.add(pi_pulse, "ryd")
seq.target("control1", "ryd")
seq.add(pi_pulse, "ryd")
return prep_state, prep_time
CCZ_sequence("hhh")
seq.draw(draw_phase_area=True)
CCZ = {}
for state_id in {"".join(x) for x in product("gh", repeat=3)}:
# Get CCZ sequence
prep_state, prep_time = CCZ_sequence(state_id)
# Construct Simulation instance
simul = Simulation(seq)
res = simul.run()
data = [st.overlap(prep_state) for st in res.states]
final_st = res.states[-1]
CCZ[state_id] = final_st.overlap(prep_state)
plt.figure()
plt.plot(np.real(data))
plt.xlabel(r"Time [ns]")
plt.ylabel(rf"$ \langle\,{state_id} | \psi(t)\rangle$")
plt.axvspan(0, prep_time, alpha=0.06, color="royalblue")
plt.title(rf"Action of gate on state $|${state_id}$\rangle$")
CCZ
###Output
_____no_output_____
###Markdown
Control-Z Gate Sequence IntroductionIn this tutorial we show how to prepare the pulse sequence that generates a *Controlled - Z* gate. We will prepare our state with atoms in any of the "digital" states that we shall call $|g\rangle$ and $|h \rangle$ ( for "ground" and "hyperfine", respectively). Then we will use the *Rydberg blockade* effect to create the logic gate. The levels that each atom can take are the following: We will be using *NumPy* and *Matplotlib* for calculations and plots. Many additional details about the CZ gate construction can be found in [1111.6083v2](https://arxiv.org/abs/1111.6083)
###Code
import numpy as np
import matplotlib.pyplot as plt
import qutip
from itertools import product
###Output
_____no_output_____
###Markdown
We import the following Classes from Pulser:
###Code
from pulser import Pulse, Sequence, Register
from pulser.devices import Chadoq2
from pulser.simulation import Simulation
from pulser.waveforms import BlackmanWaveform,ConstantWaveform
###Output
_____no_output_____
###Markdown
1. Loading the Register on a Device Defining an atom register can simply be done by choosing one of the predetermined shapes included in the `Register` class. We can also construct a dictionary with specific labels for each atom. The atoms must lie inside the *Rydberg blockade radius* $R_b$, which we will characterize by $$\hbar \Omega^{\text{Max}}_{\text{Rabi}} \sim U_{ij} = \frac{C_6}{R_{b}^6},$$ where the coefficient $C_6$ determines the strength of the interaction ($C_6/\hbar \approx 5008$ GHz.$\mu m^6$). We can obtain the corresponding Rydberg blockade radius from a given $\Omega_{\text{Rabi}}^{\text{max}}$ using the `rydberg_blockade_radius()` method from `Chadoq2`. For the pulses in this tutorial, $\Omega^{\text{Max}}_{\text{Rabi}}$ is below $2\pi \times 10$ MHz, so:
###Code
Rabi = np.linspace(1, 10, 10)
R_blockade = [Chadoq2.rydberg_blockade_radius(2.*np.pi*rabi) for rabi in Rabi]
plt.figure()
plt.plot(Rabi, R_blockade,'--o')
plt.xlabel(r"$\Omega/(2\pi)$ [MHz]", fontsize=14)
plt.ylabel(r"$R_b$ [$\mu\.m$]", fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
Thus, we place our atoms at relative distances below $5$ µm, therefore ensuring we are inside the Rydberg blockade volume.
###Code
# Atom Register and Device
q_dict = {"control":np.array([-2,0.]),
"target": np.array([2,0.]),
}
reg = Register(q_dict)
reg.draw()
###Output
_____no_output_____
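###Markdown
As an order-of-magnitude check (taking the quoted $C_6/\hbar \approx 5008$ GHz.$\mu m^6$ at face value and ignoring factors of $2\pi$, which do not change the conclusion), the van der Waals interaction at the chosen 4 µm spacing is far larger than the Rabi frequencies used below, which is precisely the blockade condition:
###Code
# Order-of-magnitude estimate of the interaction strength at R = 4 um, using
# the C6/hbar value quoted above (2*pi conventions are ignored here; the
# ratio is large either way).
C6_over_hbar = 5008e3  # MHz . um^6
R = 4.0                # um
U = C6_over_hbar / R**6
print(f"U/hbar ~ {U:.1f} MHz at R = {R} um, versus Omega_max ~ 10 MHz")
###Output
_____no_output_____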
###Markdown
2. State Preparation The first part of our sequence will correspond to preparing the different states on which the CZ gate will act. For this, we define the following `Pulse` instances that correspond to $\pi$ and $2\pi$ pulses (notice that the area can be easily fixed using the predefined `BlackmanWaveform`): Let us construct a function that takes the label string (or "id") of a state and turns it into a ket state. This ket can be in any of the "digital" (ground-hyperfine levels), "ground-rydberg" or "all" levels. We also include a three-atom system case, which will be useful in the CCZ gate in the last section.
###Code
def build_state_from_id(s_id, basis_name):
if len(s_id) not in {2,3}:
raise ValueError("Not a valid state ID string")
ids = {'digital': 'gh', 'ground-rydberg': 'rg', 'all': 'rgh'}
if basis_name not in ids:
raise ValueError('Not a valid basis')
pool = {''.join(x) for x in product(ids[basis_name], repeat=len(s_id))}
if s_id not in pool:
raise ValueError('Not a valid state id for the given basis.')
ket = {op: qutip.basis(len(ids[basis_name]), i)
for i, op in enumerate(ids[basis_name])}
if len(s_id) == 3:
#Recall that s_id = 'C1'+'C2'+'T' while in the register reg_id = 'C1'+'T'+'C2'.
reg_id = s_id[0]+s_id[2]+s_id[1]
return qutip.tensor([ket[x] for x in reg_id])
else:
return qutip.tensor([ket[x] for x in s_id])
###Output
_____no_output_____
###Markdown
We try this out:
###Code
build_state_from_id('hg','digital')
###Output
_____no_output_____
###Markdown
Let's now write the state preparation sequence. We will also create the prepared state to be able to calculate its overlap during the simulation. First, let us define a π-pulse along the Y axis that will excite the atoms to the hyperfine state if requested:
###Code
duration = 300
pi_Y = Pulse.ConstantDetuning(BlackmanWaveform(duration, np.pi), 0., -np.pi/2)
pi_Y.draw()
###Output
_____no_output_____
###Markdown
The sequence preparation itself acts with the Raman channel if the desired initial state has atoms in the hyperfine level. We have also expanded it for the case of a CCZ in order to use it below:
###Code
def preparation_sequence(state_id, reg):
global seq
if not set(state_id) <= {'g','h'} or len(state_id) != len(reg.qubits):
raise ValueError('Not a valid state ID')
if len(reg.qubits) == 2:
seq_dict = {'1':'target', '0':'control'}
elif len(reg.qubits) == 3:
seq_dict = {'2':'target', '1':'control2', '0':'control1'}
seq = Sequence(reg, Chadoq2)
if set(state_id) == {'g'}:
basis = 'ground-rydberg'
print(f'Warning: {state_id} state does not require a preparation sequence.')
else:
basis = 'all'
for k in range(len(reg.qubits)):
if state_id[k] == 'h':
if 'raman' not in seq.declared_channels:
seq.declare_channel('raman','raman_local', seq_dict[str(k)])
else:
seq.target(seq_dict[str(k)],'raman')
seq.add(pi_Y,'raman')
prep_state = build_state_from_id(state_id, basis) # Raises error if not a valid `state_id` for the register
return prep_state
###Output
_____no_output_____
###Markdown
Let's test this sequence. Notice that the state "gg" (both atoms in the ground state) is automatically fed to the Register so a pulse sequence is not needed to prepare it.
###Code
# Define sequence and Set channels
prep_state = preparation_sequence('hh', reg)
seq.draw()
###Output
_____no_output_____
###Markdown
3. Constructing the Gate Sequence We apply the common $\pi-2\pi-\pi$ sequence for the CZ gate
###Code
pi_pulse = Pulse.ConstantDetuning(BlackmanWaveform(duration, np.pi), 0., 0)
twopi_pulse = Pulse.ConstantDetuning(BlackmanWaveform(duration, 2*np.pi), 0., 0)
def CZ_sequence(initial_id):
# Prepare State
prep_state = preparation_sequence(initial_id, reg)
prep_time = max((seq._last(ch).tf for ch in seq.declared_channels), default=0)
# Declare Rydberg channel
seq.declare_channel('ryd', 'rydberg_local', 'control')
# Write CZ sequence:
seq.add(pi_pulse, 'ryd', 'wait-for-all') # Wait for state preparation to finish.
seq.target('target', 'ryd') # Changes to target qubit
seq.add(twopi_pulse, 'ryd')
seq.target('control', 'ryd') # Changes back to control qubit
seq.add(pi_pulse, 'ryd')
return prep_state, prep_time
prep_state, prep_time = CZ_sequence('gh') # constructs seq, prep_state and prep_time
seq.draw()
print(f'Prepared state: {prep_state}')
print(f'Preparation time: {prep_time}ns')
###Output
_____no_output_____
###Markdown
4. Simulating the CZ sequence
###Code
CZ = {}
for state_id in {'gg','hg','gh','hh'}:
# Get CZ sequence
prep_state, prep_time = CZ_sequence(state_id) # constructs seq, prep_state and prep_time
# Construct Simulation instance
simul = Simulation(seq)
res = simul.run()
data=[st.overlap(prep_state) for st in res.states]
final_st = res.states[-1]
CZ[state_id] = final_st.overlap(prep_state)
plt.figure()
plt.plot(np.real(data))
plt.xlabel(r"Time [ns]")
plt.ylabel(fr'$ \langle\,{state_id} |\, \psi(t)\rangle$')
plt.axvspan(0, prep_time, alpha=0.06, color='royalblue')
plt.title(fr"Action of gate on state $|${state_id}$\rangle$")
CZ
###Output
_____no_output_____
###Markdown
5. CCZ Gate The same principle can be applied for composite gates. As an application, let us construct the *CCZ* gate, which determines the phase depending on the level of *two* control atoms. We begin by reconstructing the Register:
###Code
# Atom Register and Device
q_dict = {"control1":np.array([-2.0, 0.]),
"target": np.array([0., 2*np.sqrt(3.001)]),
"control2": np.array([2.0, 0.])}
reg = Register(q_dict)
reg.draw()
preparation_sequence('hhh', reg)
seq.draw()
def CCZ_sequence(initial_id):
# Prepare State
prep_state = preparation_sequence(initial_id, reg)
prep_time = max((seq._last(ch).tf for ch in seq.declared_channels), default=0)
# Declare Rydberg channel
seq.declare_channel('ryd', 'rydberg_local', 'control1')
# Write CCZ sequence:
seq.add(pi_pulse, 'ryd', protocol='wait-for-all') # Wait for state preparation to finish.
seq.target('control2', 'ryd')
seq.add(pi_pulse, 'ryd')
seq.target('target','ryd')
seq.add(twopi_pulse, 'ryd')
seq.target('control2','ryd')
seq.add(pi_pulse, 'ryd')
seq.target('control1','ryd')
seq.add(pi_pulse,'ryd')
return prep_state, prep_time
CCZ_sequence('hhh')
seq.draw()
CCZ = {}
for state_id in {''.join(x) for x in product('gh', repeat=3)}:
# Get CCZ sequence
prep_state, prep_time = CCZ_sequence(state_id)
# Construct Simulation instance
simul = Simulation(seq)
res = simul.run()
data=[st.overlap(prep_state) for st in res.states]
final_st = res.states[-1]
CCZ[state_id] = final_st.overlap(prep_state)
plt.figure()
plt.plot(np.real(data))
plt.xlabel(r"Time [ns]")
plt.ylabel(fr'$ \langle\,{state_id} | \psi(t)\rangle$')
plt.axvspan(0, prep_time, alpha=0.06, color='royalblue')
plt.title(fr"Action of gate on state $|${state_id}$\rangle$")
CCZ
###Output
_____no_output_____ |
Day_013_HW.ipynb | ###Markdown
Practice Time Referring to the Day 12 example code, discretize a column you find interesting and try to uncover some interesting insights.
###Code
# Import the required packages
import os
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Preprocessing steps done previously
###Code
# Set data_path
dir_data = './data/'
f_app_train = os.path.join(dir_data, 'application_train.csv')
f_app_test = os.path.join(dir_data, 'application_test.csv')
app_train = pd.read_csv(f_app_train)
app_test = pd.read_csv(f_app_test)
from sklearn.preprocessing import LabelEncoder
# Create a label encoder object
le = LabelEncoder()
le_count = 0
# Iterate through the columns
for col in app_train:
if app_train[col].dtype == 'object':
# If 2 or fewer unique categories
if len(list(app_train[col].unique())) <= 2:
# Train on the training data
le.fit(app_train[col])
# Transform both training and testing data
app_train[col] = le.transform(app_train[col])
app_test[col] = le.transform(app_test[col])
# Keep track of how many columns were label encoded
le_count += 1
app_train = pd.get_dummies(app_train)
app_test = pd.get_dummies(app_test)
# Create an anomalous flag column
app_train['DAYS_EMPLOYED_ANOM'] = app_train["DAYS_EMPLOYED"] == 365243
app_train['DAYS_EMPLOYED'].replace({365243: np.nan}, inplace = True)
# also apply to testing dataset
app_test['DAYS_EMPLOYED_ANOM'] = app_test["DAYS_EMPLOYED"] == 365243
app_test["DAYS_EMPLOYED"].replace({365243: np.nan}, inplace = True)
# absolute the value of DAYS_BIRTH
app_train['DAYS_BIRTH'] = abs(app_train['DAYS_BIRTH'])
app_test['DAYS_BIRTH'] = abs(app_test['DAYS_BIRTH'])
app_train['YEAR_BIRTH'] = app_train['DAYS_BIRTH']/365
lower_bound = np.floor(app_train['YEAR_BIRTH'].min()).astype(int)
higher_bound = np.ceil(app_train['YEAR_BIRTH'].max()).astype(int)
step_list = [5, 10]
for step in step_list:
app_train['equal_width_age'] = pd.cut(x=app_train['YEAR_BIRTH'], bins=range(lower_bound, higher_bound + step, step))
age_group = app_train.groupby(by=['equal_width_age']).mean()
temp_x = list(range(1, len(age_group.index) + 1))
x = temp_x
y = age_group['AMT_CREDIT']
plt.bar(x, y)
plt.xticks(temp_x, age_group.index, rotation=50, fontsize=8)
plt.title('AMT_CREDIT by AGE Group')
plt.show()
x = age_group.index
y = age_group['AMT_CREDIT']
sns.barplot(x ,y)
plt.xticks(rotation=50, fontsize=8)
plt.title('AMT_CREDIT by AGE Group')
plt.show()
###Output
_____no_output_____
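###Markdown
For reference, a small sketch on synthetic data contrasts `pd.cut` (equal-width value ranges, as used above) with `pd.qcut` (equal-frequency quantile bins); on skewed data the two give very different group sizes:
###Code
# Equal-width vs equal-frequency binning on a small skewed sample.
rng = np.random.RandomState(0)
sample = pd.Series(rng.exponential(scale=10, size=1000))
print("pd.cut (equal width):")
print(pd.cut(sample, 4).value_counts().sort_index())
print("\npd.qcut (equal frequency):")
print(pd.qcut(sample, 4).value_counts().sort_index())
###Output
_____no_output_____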
###Markdown
Common DataFrame operations* merge / transform* subset* groupby [Homework objective]- Practice filling in the corresponding columns or formulas to complete the requirements of the exercise [Key points]- Fill in the appropriate input data so that the following code displays the results the exercise asks for (Hint: fill in the corresponding intervals or columns, In[4]~In[6], Out[4]~In[6])- Fill in the z-transform calculation to obtain the transformed values (Hint: refer to the standardization formula, In[7])
###Code
# Import the required packages
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
app_train = pd.read_csv('drive/My Drive/Colab Notebooks/ML100Days/data/application_train.csv')
app_train.head()
###Output
_____no_output_____
###Markdown
Homework 1. Split CNT_CHILDREN in app_train into four groups according to the rules below, and store the result in the original dataframe under the name CNT_CHILDREN_GROUP * 0 children * 1 - 2 children * 3 - 5 children * more than 5 children 2. Based on CNT_CHILDREN_GROUP and TARGET, list the mean AMT_INCOME_TOTAL of each group and draw a boxplot 3. Based on CNT_CHILDREN_GROUP and TARGET, compute the [Z-score](https://en.wikipedia.org/wiki/Standard_score) of AMT_INCOME_TOTAL
###Code
df_CNT_CHILDREN = app_train['CNT_CHILDREN']
print(df_CNT_CHILDREN.max())
df_CNT_CHILDREN.value_counts()
#1
cut_rule = [0, 0.9, 2.9, 5.9, app_train['CNT_CHILDREN'].max() ]
app_train['CNT_CHILDREN_GROUP'] = pd.cut(app_train['CNT_CHILDREN'].values, cut_rule, include_lowest=True)
app_train['CNT_CHILDREN_GROUP'].value_counts()
#2-1
grp = ['CNT_CHILDREN_GROUP', 'TARGET']
grouped_df = app_train.groupby(grp)['AMT_INCOME_TOTAL']
grouped_df.mean()
#2-2
plt_column = 'AMT_INCOME_TOTAL'
plt_by = grp
app_train.boxplot(column=plt_column, by = plt_by, showfliers = False, figsize=(12,12), grid=False)
plt.suptitle('')
plt.show()
#3
app_train['AMT_INCOME_TOTAL_Z_BY_CHILDREN_GRP-TARGET'] = grouped_df.apply(lambda x: (x-np.mean(x))/np.std(x))
app_train[['AMT_INCOME_TOTAL','AMT_INCOME_TOTAL_Z_BY_CHILDREN_GRP-TARGET']].head()
###Output
_____no_output_____
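###Markdown
As a quick numeric illustration of the standard-score formula used above, applying (x - mean) / std to any sample yields values with mean 0 and standard deviation 1:
###Code
# The z-transform maps any sample to mean 0 and standard deviation 1.
toy = np.array([100., 150., 200., 1000.])
z = (toy - np.mean(toy)) / np.std(toy)
print("z-scores:", np.round(z, 3))
print("mean:", round(z.mean(), 10), " std:", round(z.std(), 10))
###Output
_____no_output_____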
###Markdown
Common DataFrame operations* merge / transform* subset* groupby [Homework objective]- Practice filling in the corresponding columns or formulas to complete the requirements of the exercise [Key points]- Fill in the appropriate input data so that the following code displays the results the exercise asks for (Hint: fill in the corresponding intervals or columns, In[4]~In[6], Out[4]~In[6])- Fill in the z-transform calculation to obtain the transformed values (Hint: refer to the standardization formula, In[7])
###Code
# Import the required packages
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
# Set data_path
# dir_data = './data/'
# f_app = os.path.join(dir_data, 'application_train.csv')
# print('Path of read in data: %s' % (f_app))
app_train = pd.read_csv('application_train.csv')
app_train.head()
###Output
_____no_output_____
###Markdown
Homework 1. Split CNT_CHILDREN in app_train into four groups according to the rules below, and store the result in the original dataframe under the name CNT_CHILDREN_GROUP * 0 children * 1 - 2 children * 3 - 5 children * more than 5 children 2. Based on CNT_CHILDREN_GROUP and TARGET, list the mean AMT_INCOME_TOTAL of each group and draw a boxplot 3. Based on CNT_CHILDREN_GROUP and TARGET, compute the [Z-score](https://en.wikipedia.org/wiki/Standard_score) of AMT_INCOME_TOTAL
###Code
#1
"""
Your code here
"""
cut_rule = [-np.inf, 0, 2, 5, app_train['CNT_CHILDREN'].max()]
app_train['CNT_CHILDREN_GROUP'] = pd.cut(app_train['CNT_CHILDREN'].values, cut_rule, include_lowest=True)
app_train['CNT_CHILDREN_GROUP'].value_counts()
#2-1
"""
Your code here
"""
grp = ['CNT_CHILDREN_GROUP', 'TARGET']
grouped_df = app_train.groupby(grp)['AMT_INCOME_TOTAL']
grouped_df.mean()
#2-2
"""
Your code here
"""
plt_column = 'AMT_INCOME_TOTAL'
plt_by = ['CNT_CHILDREN_GROUP', 'TARGET']
app_train.boxplot(column=plt_column, by = plt_by, showfliers = False, figsize=(12,12))
plt.suptitle('')
plt.show()
#3
"""
Your code here
"""
app_train['AMT_INCOME_TOTAL_Z_BY_CHILDREN_GRP-TARGET'] = grouped_df.apply(lambda x:(x-np.mean(x))/np.std(x) )
app_train[['AMT_INCOME_TOTAL','AMT_INCOME_TOTAL_Z_BY_CHILDREN_GRP-TARGET']].head()
###Output
_____no_output_____
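###Markdown
The same group-wise standardization can also be written with `groupby(...).transform`, which returns a result aligned with the original index. A small sketch on synthetic data (note that pandas' `std` uses ddof=1, unlike the `np.std` used above):
###Code
# Group-wise z-score via transform (an index-aligned alternative to apply).
demo = pd.DataFrame({
    "grp": ["a", "a", "a", "b", "b", "b"],
    "val": [1.0, 2.0, 3.0, 10.0, 20.0, 30.0],
})
g = demo.groupby("grp")["val"]
# pandas' std() defaults to ddof=1 (sample std), whereas np.std uses ddof=0.
demo["val_z"] = (demo["val"] - g.transform("mean")) / g.transform("std")
print(demo)
###Output
_____no_output_____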
###Markdown
Practice Time Referring to the Day 12 example code, discretize a column you find interesting and try to uncover some interesting insights.
###Code
# Import the required packages
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns  # another plotting / styling package
plt.style.use('ggplot')
###Output
_____no_output_____
###Markdown
Preprocessing steps done previously
###Code
# Set data_path
dir_data = './data/'
f_app_train = os.path.join(dir_data, 'application_train.csv')
f_app_test = os.path.join(dir_data, 'application_test.csv')
app_train = pd.read_csv(f_app_train)
app_test = pd.read_csv(f_app_test)
from sklearn.preprocessing import LabelEncoder
# Create a label encoder object
le = LabelEncoder()
le_count = 0
# Iterate through the columns
for col in app_train:
if app_train[col].dtype == 'object':
# If 2 or fewer unique categories
if len(list(app_train[col].unique())) <= 2:
# Train on the training data
le.fit(app_train[col])
# Transform both training and testing data
app_train[col] = le.transform(app_train[col])
app_test[col] = le.transform(app_test[col])
# Keep track of how many columns were label encoded
le_count += 1
app_train = pd.get_dummies(app_train)
app_test = pd.get_dummies(app_test)
# Create an anomalous flag column
app_train['DAYS_EMPLOYED_ANOM'] = app_train["DAYS_EMPLOYED"] == 365243
app_train['DAYS_EMPLOYED'].replace({365243: np.nan}, inplace = True)
# also apply to testing dataset
app_test['DAYS_EMPLOYED_ANOM'] = app_test["DAYS_EMPLOYED"] == 365243
app_test["DAYS_EMPLOYED"].replace({365243: np.nan}, inplace = True)
# absolute the value of DAYS_BIRTH
app_train['DAYS_BIRTH'] = abs(app_train['DAYS_BIRTH'])
app_test['DAYS_BIRTH'] = abs(app_test['DAYS_BIRTH'])
set(app_train.dtypes)
# Find continuous variables to plot KDE
app_train.nunique()[app_train.nunique()>50]
app_train.nunique()[app_train.nunique()==2]
###Output
_____no_output_____
###Markdown
YEARS_EMPLOYED vs. FLAG_OWN_REALTY
###Code
app_train['YEARS_EMPLOYED'] = abs(app_train['DAYS_EMPLOYED']) / 365
print(app_train['YEARS_EMPLOYED'].describe())
print('\r\n')
app_train['YEARS_EMPLOYED'].hist()
app_train['YEARS_BINNED'] = pd.qcut(app_train[~app_train.DAYS_EMPLOYED.isna()]['YEARS_EMPLOYED'], 5)
print(app_train['YEARS_BINNED'].value_counts())
year_group_sorted = app_train['YEARS_BINNED'].unique()
plt.figure(figsize=(8,6))
for i in range(len(year_group_sorted)):
sns.distplot(app_train.loc[(app_train['YEARS_BINNED'] == year_group_sorted[i]) & \
(app_train['FLAG_OWN_REALTY'] == 0), 'YEARS_EMPLOYED'],
label = 'FLAG_OWN_REALTY = 0 (w/o), YEARS_EMPLOYED =' + str(year_group_sorted[i]))
sns.distplot(app_train.loc[(app_train['YEARS_BINNED'] == year_group_sorted[i]) & \
(app_train['FLAG_OWN_REALTY'] == 1), 'YEARS_EMPLOYED'],
label = 'FLAG_OWN_REALTY = 1 (w/ ), YEARS_EMPLOYED =' + str(year_group_sorted[i]))
plt.legend(loc=(1.02, 0))
plt.title('KDE with Age groups')
plt.show()
year_group_sorted = app_train['YEARS_BINNED'].unique()
plt.figure(figsize=(8,6))
for i in range(len(year_group_sorted)):
sns.distplot(app_train.loc[(app_train['YEARS_BINNED'] == year_group_sorted[i]) & \
(app_train['FLAG_OWN_REALTY'] == 0), 'YEARS_EMPLOYED'],
label = 'FLAG_OWN_REALTY = 0 (w/o), YEARS_EMPLOYED =' + str(year_group_sorted[i]))
plt.legend(loc=(1.02, 0))
plt.title('KDE with Age groups')
plt.show()
year_group_sorted = app_train['YEARS_BINNED'].unique()
plt.figure(figsize=(8,6))
for i in range(len(year_group_sorted)):
sns.distplot(app_train.loc[(app_train['YEARS_BINNED'] == year_group_sorted[i]) & \
(app_train['FLAG_OWN_REALTY'] == 1), 'YEARS_EMPLOYED'],
label = 'FLAG_OWN_REALTY = 1 (w/ ), YEARS_EMPLOYED =' + str(year_group_sorted[i]))
plt.legend(loc=(1.02, 0))
plt.title('KDE with Age groups')
plt.show()
###Output
C:\Users\user\Anaconda3\lib\site-packages\seaborn\distributions.py:195: RuntimeWarning: Mean of empty slice.
line, = ax.plot(a.mean(), 0)
C:\Users\user\Anaconda3\lib\site-packages\numpy\core\_methods.py:80: RuntimeWarning: invalid value encountered in double_scalars
ret = ret.dtype.type(ret / rcount)
C:\Users\user\Anaconda3\lib\site-packages\numpy\lib\function_base.py:838: RuntimeWarning: invalid value encountered in true_divide
return n/db/n.sum(), bin_edges
###Markdown
YEARS_EMPLOYED vs. ORGANIZATION_TYPE_University
###Code
year_group_sorted = app_train['YEARS_BINNED'].unique()
plt.figure(figsize=(8,6))
for i in range(len(year_group_sorted)):
sns.distplot(app_train.loc[(app_train['YEARS_BINNED'] == year_group_sorted[i]) & \
(app_train['ORGANIZATION_TYPE_University'] == 0), 'YEARS_EMPLOYED'],
label = 'ORGANIZATION_TYPE_University = 0 (w/o), YEARS_EMPLOYED =' + str(year_group_sorted[i]))
sns.distplot(app_train.loc[(app_train['YEARS_BINNED'] == year_group_sorted[i]) & \
                 (app_train['ORGANIZATION_TYPE_University'] == 1), 'YEARS_EMPLOYED'],
label = 'ORGANIZATION_TYPE_University = 1 (w/ ), YEARS_EMPLOYED =' + str(year_group_sorted[i]))
plt.legend(loc=(1.02, 0))
plt.title('KDE with Age groups')
plt.show()
year_group_sorted = app_train['YEARS_BINNED'].unique()
plt.figure(figsize=(8,6))
for i in range(len(year_group_sorted)):
sns.distplot(app_train.loc[(app_train['YEARS_BINNED'] == year_group_sorted[i]) & \
(app_train['ORGANIZATION_TYPE_University'] == 0), 'YEARS_EMPLOYED'],
label = 'ORGANIZATION_TYPE_University = 0 (w/o), YEARS_EMPLOYED =' + str(year_group_sorted[i]))
plt.legend(loc=(1.02, 0))
plt.title('KDE with Age groups')
plt.show()
year_group_sorted = app_train['YEARS_BINNED'].unique()
plt.figure(figsize=(8,6))
for i in range(len(year_group_sorted)):
sns.distplot(app_train.loc[(app_train['YEARS_BINNED'] == year_group_sorted[i]) & \
                 (app_train['ORGANIZATION_TYPE_University'] == 1), 'YEARS_EMPLOYED'],
label = 'ORGANIZATION_TYPE_University = 1 (w/ ), YEARS_EMPLOYED =' + str(year_group_sorted[i]))
plt.legend(loc=(1.02, 0))
plt.title('KDE with Age groups')
plt.show()
###Output
C:\Users\user\Anaconda3\lib\site-packages\seaborn\distributions.py:195: RuntimeWarning: Mean of empty slice.
line, = ax.plot(a.mean(), 0)
C:\Users\user\Anaconda3\lib\site-packages\numpy\core\_methods.py:80: RuntimeWarning: invalid value encountered in double_scalars
ret = ret.dtype.type(ret / rcount)
C:\Users\user\Anaconda3\lib\site-packages\numpy\lib\function_base.py:838: RuntimeWarning: invalid value encountered in true_divide
return n/db/n.sum(), bin_edges
|
Lead Score Case Study model building.ipynb | ###Markdown
Replacing cells containing 'Select' with null values, since 'Select' is effectively a null entry (as mentioned in the problem statement)
###Code
df_leads=df_leads.replace('Select',np.nan)
(df_leads=='Select').sum()
#Inspecting master data frame dimensions and size
print(df_leads.shape)
print(df_leads.info())
###Output
(9240, 37)
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 9240 entries, 0 to 9239
Data columns (total 37 columns):
Prospect ID 9240 non-null object
Lead Number 9240 non-null int64
Lead Origin 9240 non-null object
Lead Source 9204 non-null object
Do Not Email 9240 non-null object
Do Not Call 9240 non-null object
Converted 9240 non-null int64
TotalVisits 9103 non-null float64
Total Time Spent on Website 9240 non-null int64
Page Views Per Visit 9103 non-null float64
Last Activity 9137 non-null object
Country 6779 non-null object
Specialization 5860 non-null object
How did you hear about X Education 1990 non-null object
What is your current occupation 6550 non-null object
What matters most to you in choosing a course 6531 non-null object
Search 9240 non-null object
Magazine 9240 non-null object
Newspaper Article 9240 non-null object
X Education Forums 9240 non-null object
Newspaper 9240 non-null object
Digital Advertisement 9240 non-null object
Through Recommendations 9240 non-null object
Receive More Updates About Our Courses 9240 non-null object
Tags 5887 non-null object
Lead Quality 4473 non-null object
Update me on Supply Chain Content 9240 non-null object
Get updates on DM Content 9240 non-null object
Lead Profile 2385 non-null object
City 5571 non-null object
Asymmetrique Activity Index 5022 non-null object
Asymmetrique Profile Index 5022 non-null object
Asymmetrique Activity Score 5022 non-null float64
Asymmetrique Profile Score 5022 non-null float64
I agree to pay the amount through cheque 9240 non-null object
A free copy of Mastering The Interview 9240 non-null object
Last Notable Activity 9240 non-null object
dtypes: float64(4), int64(3), object(30)
memory usage: 2.6+ MB
None
###Markdown
Data Cleaning
###Code
#Inspecting column wise total null values
df_leads.isnull().sum()
#Inspecting column wise percentage of null values
round(100*(((df_leads.isnull()).sum())/len(df_leads.index)),2)
###Output
_____no_output_____
###Markdown
It is impossible to either delete or impute the rows for columns with such a large share of missing values (>30%) without losing a lot of data or introducing heavy bias, so those columns are dropped instead.
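For reference, a minimal sketch of how such columns could be listed programmatically, using the same 30% threshold quoted above:

```python
# Percentage of nulls per column, as computed earlier in the notebook
null_pct = 100 * df_leads.isnull().sum() / len(df_leads.index)
high_null_cols = null_pct[null_pct > 30].index.tolist()
print(high_null_cols)   # candidates for dropping
```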
###Code
#Dropping irrelevant columns from master data frame
df_leads.drop(['City','Lead Profile','Specialization','How did you hear about X Education','Lead Quality','Asymmetrique Activity Index','Asymmetrique Profile Index','Asymmetrique Activity Score','Asymmetrique Profile Score','Tags'],axis=1,inplace=True)
#Again inspecting column wise null value percentage after dropping irrelevant columns
round(100*((df_leads.isnull().sum())/len(df_leads.index)),2)
#Inspecting master data frame entries after dropping unimportant columns
df_leads.head()
###Output
_____no_output_____
###Markdown
Treating missing values in rows
###Code
#Inspecting the number of rows with 5 or more missing values
len(df_leads[df_leads.isnull().sum(axis=1)>4].index)
#Inspecting null values percentage of rows
round(100*(len(df_leads[df_leads.isnull().sum(axis=1)>4].index)/len(df_leads.index)),2)
# Removing all the rows with null values greater than 5
df_leads=df_leads[df_leads.isnull().sum(axis=1)<=5]
#Inspecting master data frame after cleaning all null value rows.
round(100*((df_leads.isnull().sum())/len(df_leads.index)),2)
#Checking Country column stats as it contains a large percentage of null values
df_leads['Country'].describe()
#Removing the Country column due to redundancy and a large percentage of null values
df_leads.drop(['Country'],axis=1,inplace=True)
#Dropping two more variables to eliminate columns with a large percentage of null values
df_leads.drop(['What matters most to you in choosing a course','What is your current occupation'],axis=1,inplace=True)
#Inspecting final master data frame after removal of columns with large %age of null values
round(100*((df_leads.isnull().sum())/len(df_leads.index)),2)
#Inspecting 'Total Visits' stats as it contains 1.48% of null values
print(df_leads['TotalVisits'].describe())
df_leads['TotalVisits'].isnull().sum()
#Removing Nans in 'TotalVisits' columns
df_leads=df_leads[~np.isnan(df_leads['TotalVisits'])]
# Removing Nulls in 'Lead Source' as it contains 0.39% of null values
df_leads=df_leads[~df_leads['Lead Source'].isnull()]
#Inspecting final master data frame after removing null values
round(100*((df_leads.isnull().sum())/len(df_leads.index)),2)
#Inspecting total no of available rows without any null values.
df_leads.shape
###Output
_____no_output_____
###Markdown
We are left with over 9000 rows and 24 columns and no null values
###Code
# Inspecting the cleaned master dataframe
print(df_leads.info())
print(df_leads.describe())
#From the result below we can see that we have 5 numeric features and the rest are categorical variables.
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 9074 entries, 0 to 9239
Data columns (total 24 columns):
Prospect ID 9074 non-null object
Lead Number 9074 non-null int64
Lead Origin 9074 non-null object
Lead Source 9074 non-null object
Do Not Email 9074 non-null object
Do Not Call 9074 non-null object
Converted 9074 non-null int64
TotalVisits 9074 non-null float64
Total Time Spent on Website 9074 non-null int64
Page Views Per Visit 9074 non-null float64
Last Activity 9074 non-null object
Search 9074 non-null object
Magazine 9074 non-null object
Newspaper Article 9074 non-null object
X Education Forums 9074 non-null object
Newspaper 9074 non-null object
Digital Advertisement 9074 non-null object
Through Recommendations 9074 non-null object
Receive More Updates About Our Courses 9074 non-null object
Update me on Supply Chain Content 9074 non-null object
Get updates on DM Content 9074 non-null object
I agree to pay the amount through cheque 9074 non-null object
A free copy of Mastering The Interview 9074 non-null object
Last Notable Activity 9074 non-null object
dtypes: float64(2), int64(3), object(19)
memory usage: 1.7+ MB
None
Lead Number Converted TotalVisits Total Time Spent on Website \
count 9074.000000 9074.000000 9074.000000 9074.000000
mean 617032.619352 0.378554 3.456028 482.887481
std 23348.029512 0.485053 4.858802 545.256560
min 579533.000000 0.000000 0.000000 0.000000
25% 596406.000000 0.000000 1.000000 11.000000
50% 615278.500000 0.000000 3.000000 246.000000
75% 637176.500000 1.000000 5.000000 922.750000
max 660737.000000 1.000000 251.000000 2272.000000
Page Views Per Visit
count 9074.000000
mean 2.370151
std 2.160871
min 0.000000
25% 1.000000
50% 2.000000
75% 3.200000
max 55.000000
###Markdown
Data Preparation
###Code
#Mapping all categorical features with [Yes/No] to corresponding numerical output (Yes:1 and No:0)
yes_no_columns = ['Do Not Email', 'Do Not Call', 'Search', 'Magazine', 'Newspaper Article',
                  'X Education Forums', 'Newspaper', 'Digital Advertisement', 'Through Recommendations',
                  'Receive More Updates About Our Courses', 'Update me on Supply Chain Content',
                  'Get updates on DM Content', 'I agree to pay the amount through cheque',
                  'A free copy of Mastering The Interview']
for col in yes_no_columns:
    df_leads[col] = df_leads[col].map({'Yes': 1, 'No': 0})
# Creating Dummy Variables for categorical variables with more than two levels
# Lead Origin
LO=pd.get_dummies(df_leads['Lead Origin'],prefix='Lead Origin',drop_first=True)
df_leads=pd.concat([df_leads,LO],axis=1)
# Lead Source
LS=pd.get_dummies(df_leads['Lead Source'],prefix='Lead Source',drop_first=True)
df_leads=pd.concat([df_leads,LS],axis=1)
# Last Notable Activity
LNA=pd.get_dummies(df_leads['Last Notable Activity'],prefix='Last Notable Activity',drop_first=True)
df_leads=pd.concat([df_leads,LNA],axis=1)
# Last Activity
LA=pd.get_dummies(df_leads['Last Activity'],prefix='Last Activity',drop_first=True)
df_leads=pd.concat([df_leads,LA],axis=1)
# Dropping the repeated columns for which we created dummy variables above.
df_leads=df_leads.drop(['Lead Origin','Lead Source','Last Notable Activity','Last Activity'], axis=1)
#Final master data frame has all numerical columns including measures and categorical variables
df_leads.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 9074 entries, 0 to 9239
Data columns (total 74 columns):
Prospect ID 9074 non-null object
Lead Number 9074 non-null int64
Do Not Email 9074 non-null int64
Do Not Call 9074 non-null int64
Converted 9074 non-null int64
TotalVisits 9074 non-null float64
Total Time Spent on Website 9074 non-null int64
Page Views Per Visit 9074 non-null float64
Search 9074 non-null int64
Magazine 9074 non-null int64
Newspaper Article 9074 non-null int64
X Education Forums 9074 non-null int64
Newspaper 9074 non-null int64
Digital Advertisement 9074 non-null int64
Through Recommendations 9074 non-null int64
Receive More Updates About Our Courses 9074 non-null int64
Update me on Supply Chain Content 9074 non-null int64
Get updates on DM Content 9074 non-null int64
I agree to pay the amount through cheque 9074 non-null int64
A free copy of Mastering The Interview 9074 non-null int64
Lead Origin_Landing Page Submission 9074 non-null uint8
Lead Origin_Lead Add Form 9074 non-null uint8
Lead Origin_Lead Import 9074 non-null uint8
Lead Source_Direct Traffic 9074 non-null uint8
Lead Source_Facebook 9074 non-null uint8
Lead Source_Google 9074 non-null uint8
Lead Source_Live Chat 9074 non-null uint8
Lead Source_NC_EDM 9074 non-null uint8
Lead Source_Olark Chat 9074 non-null uint8
Lead Source_Organic Search 9074 non-null uint8
Lead Source_Pay per Click Ads 9074 non-null uint8
Lead Source_Press_Release 9074 non-null uint8
Lead Source_Reference 9074 non-null uint8
Lead Source_Referral Sites 9074 non-null uint8
Lead Source_Social Media 9074 non-null uint8
Lead Source_WeLearn 9074 non-null uint8
Lead Source_Welingak Website 9074 non-null uint8
Lead Source_bing 9074 non-null uint8
Lead Source_blog 9074 non-null uint8
Lead Source_google 9074 non-null uint8
Lead Source_testone 9074 non-null uint8
Lead Source_welearnblog_Home 9074 non-null uint8
Lead Source_youtubechannel 9074 non-null uint8
Last Notable Activity_Email Bounced 9074 non-null uint8
Last Notable Activity_Email Link Clicked 9074 non-null uint8
Last Notable Activity_Email Marked Spam 9074 non-null uint8
Last Notable Activity_Email Opened 9074 non-null uint8
Last Notable Activity_Email Received 9074 non-null uint8
Last Notable Activity_Form Submitted on Website 9074 non-null uint8
Last Notable Activity_Had a Phone Conversation 9074 non-null uint8
Last Notable Activity_Modified 9074 non-null uint8
Last Notable Activity_Olark Chat Conversation 9074 non-null uint8
Last Notable Activity_Page Visited on Website 9074 non-null uint8
Last Notable Activity_Resubscribed to emails 9074 non-null uint8
Last Notable Activity_SMS Sent 9074 non-null uint8
Last Notable Activity_Unreachable 9074 non-null uint8
Last Notable Activity_Unsubscribed 9074 non-null uint8
Last Notable Activity_View in browser link Clicked 9074 non-null uint8
Last Activity_Converted to Lead 9074 non-null uint8
Last Activity_Email Bounced 9074 non-null uint8
Last Activity_Email Link Clicked 9074 non-null uint8
Last Activity_Email Marked Spam 9074 non-null uint8
Last Activity_Email Opened 9074 non-null uint8
Last Activity_Email Received 9074 non-null uint8
Last Activity_Form Submitted on Website 9074 non-null uint8
Last Activity_Had a Phone Conversation 9074 non-null uint8
Last Activity_Olark Chat Conversation 9074 non-null uint8
Last Activity_Page Visited on Website 9074 non-null uint8
Last Activity_Resubscribed to emails 9074 non-null uint8
Last Activity_SMS Sent 9074 non-null uint8
Last Activity_Unreachable 9074 non-null uint8
Last Activity_Unsubscribed 9074 non-null uint8
Last Activity_View in browser link Clicked 9074 non-null uint8
Last Activity_Visited Booth in Tradeshow 9074 non-null uint8
dtypes: float64(2), int64(17), object(1), uint8(54)
memory usage: 1.9+ MB
###Markdown
All variables are numeric
###Code
#Inspecting data variance of numerical columns
df_num=df_leads[['TotalVisits','Total Time Spent on Website','Page Views Per Visit']]
df_num.describe(percentiles=[.25,.5,.75,.90,.99])
#Plotting all three above numerical columns to look for any outliers
plt.figure(figsize=(16,10))
plt.subplot(2,2,1)
plt.boxplot(df_leads['TotalVisits'])
plt.subplot(2,2,2)
plt.boxplot(df_leads['Total Time Spent on Website'])
plt.subplot(2,2,3)
plt.boxplot(df_leads['Page Views Per Visit'])
###Output
_____no_output_____
###Markdown
Outliers exist but can be dealt with after creating the principal components. Standardise the data
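For intuition, StandardScaler is equivalent to the column-wise z-score below (a sketch only; the actual scaling is done with sklearn in the next cell, and ddof=0 matches sklearn's population standard deviation):

```python
num_cols = ['TotalVisits', 'Total Time Spent on Website', 'Page Views Per Visit']
z_scored = (df_leads[num_cols] - df_leads[num_cols].mean()) / df_leads[num_cols].std(ddof=0)
```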
###Code
#Importing sklearn package for reducing all numerical variables to same scale.
from sklearn.preprocessing import StandardScaler
scaler=StandardScaler()
df_leads[['TotalVisits','Total Time Spent on Website','Page Views Per Visit']]=scaler.fit_transform(df_leads[['TotalVisits','Total Time Spent on Website','Page Views Per Visit']])
#Inspecting variance of numerical features after standardization
df_leads[['TotalVisits','Total Time Spent on Website','Page Views Per Visit']].describe()
#Checking for outliers after standardization
plt.figure(figsize=(16,10))
plt.subplot(2,2,1)
plt.boxplot(df_leads['TotalVisits'])
plt.subplot(2,2,2)
plt.boxplot(df_leads['Total Time Spent on Website'])
plt.subplot(2,2,3)
plt.boxplot(df_leads['Page Views Per Visit'])
# Checking the churn rate to check the overall balance in master leads data
churn = (sum(df_leads['Converted'])/len(df_leads['Converted'].index))*100
churn
###Output
_____no_output_____
###Markdown
We have an almost 38% churn (i.e. conversion) rate, so the classes are moderately imbalanced.
###Code
#Dropping unique index columns before performing PCA as it can be done only on numerical data
df_Id=df_leads[['Prospect ID','Lead Number']]
#Dropping 'Prospect ID','Lead Number' as they are just identification number.
df_leads.drop(['Prospect ID','Lead Number'],axis=1,inplace=True)
###Output
_____no_output_____
###Markdown
PCA
###Code
#Splitting master data into test and train for model building and evaluation
from sklearn.model_selection import train_test_split
y=df_leads['Converted']
X=df_leads.drop(['Converted'],axis=1)
# split the data set into train and test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
#Checking train set size
X_train.shape
#Importing the PCA module to identify principal components using train data
from sklearn.decomposition import PCA
pca = PCA(svd_solver='randomized', random_state=42)
pca.fit(X_train)
#Checking variance of all principal components identified
pca.explained_variance_ratio_
#Making the screeplot - plotting the cumulative variance against the number of components to identify optimum number of principal components
%matplotlib inline
fig = plt.figure(figsize = (12,6))
plt.plot(np.cumsum(pca.explained_variance_ratio_))
plt.xlabel('number of components')
plt.ylabel('cumulative explained variance')
plt.show()
###Output
_____no_output_____
###Markdown
We see that the first 15 principal components explain more than 90% of the variance.
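A small sketch of picking the component count programmatically instead of reading it off the scree plot (assumes the fitted `pca` object from the cell above):

```python
cum_var = np.cumsum(pca.explained_variance_ratio_)
n_components_90 = int(np.argmax(cum_var >= 0.90)) + 1   # first count reaching 90% variance
print(n_components_90)
```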
###Code
#Doing PCA with 15 components as they capture over 90% of the variance in the train data set
pca_final = PCA(svd_solver='randomized', random_state=42,n_components=15)
pca_final.fit(X_train)
#Fitting train data set of principal components
train_pca = pca_final.fit_transform(X_train)
train_pca.shape
#Creating correlation matrix for the principal components to see correaltion between all of them
corrmat = np.corrcoef(train_pca.transpose())
#plotting the correlation matrix
%matplotlib inline
plt.figure(figsize = (20,10))
sns.heatmap(corrmat,annot = True)
###Output
_____no_output_____
###Markdown
We see that there are no correlations among the principal components, as expected since PCA produces orthogonal components.
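A quick numerical check of this (assumes `corrmat` from the cell above):

```python
off_diag = corrmat - np.eye(corrmat.shape[0])
print(np.abs(off_diag).max())   # should be close to 0
```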
###Code
#Converting trained pca data into data frame and inspecting size of train pca data frame
df_train_pca=pd.DataFrame(train_pca)
print(df_train_pca.shape)
df_train_pca.head()
#Predicting the output of lead score on train data set.(Converted)
columnList=df_train_pca.columns
y_train=y_train.reset_index()
df_train_pca['output']=y_train['Converted']
# Removing outliers from final pca data frame
# Using a 2.5*IQR fence since 1.5*IQR removes too much data
for col in columnList:
Q1 = df_train_pca[col].quantile(0.25)
Q3 = df_train_pca[col].quantile(0.75)
IQR = Q3 - Q1
df_train_pca=df_train_pca[(df_train_pca[col] >= Q1 - 2.5*IQR) & (df_train_pca[col] <= Q3 + 2.5*IQR)]
#Inpsecting final pca data set for total no of rows available.
y_train=df_train_pca['output']
df_train_pca.drop(['output'],axis=1,inplace=True)
print(df_train_pca.shape)
#Applying selected components to the test data - 15 components
df_test_pca = pca_final.transform(X_test)
df_test_pca.shape
#Plotting two principal commponents to check the variance of data
%matplotlib inline
fig = plt.figure(figsize = (8,8))
plt.scatter(df_train_pca[:][0], df_train_pca[:][1], c = y_train.map({0:'green',1:'red'}))
plt.xlabel('Principal Component 1')
plt.ylabel('Principal Component 2')
plt.tight_layout()
plt.show()
#From the plot below we can see that the data points are fairly clearly segregated.
###Output
_____no_output_____
###Markdown
Checking 3d plot for better separation
###Code
#Inspecting 3d view of principal components plot
%matplotlib inline
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure(figsize=(8,8))
ax = Axes3D(fig)
# ax = plt.axes(projection='3d')
ax.scatter(df_train_pca.iloc[:,0], df_train_pca.iloc[:,1], df_train_pca.iloc[:,2], c=y_train.map({0:'green',1:'red'}))
plt.show()
###Output
_____no_output_____
###Markdown
Continue with model building
###Code
#Applying Logistic Regression
#Training the model on the train data
from sklearn.linear_model import LogisticRegression
from sklearn import metrics
learner_pca = LogisticRegression()
model_pca = learner_pca.fit(df_train_pca,y_train)
#Making prediction on the test data
pred_probs_test = model_pca.predict_proba(df_test_pca)[:,1]
"{:2.2}".format(metrics.roc_auc_score(y_test, pred_probs_test))
#With the 15 selected principal components and a logistic regression model we achieve an ROC AUC of about 0.85 on the test data
###Output
_____no_output_____
###Markdown
Predictions and Evaluation
###Code
# Predicted probabilities of conversion on test data set
y_pred=model_pca.predict_proba(df_test_pca)
# Converting it into dataframe
y_pred_df=pd.DataFrame(y_pred)
# Converting to column dataframe
y_pred_1=y_pred_df.iloc[:,[1]]
y_pred_1.head()
# Converting y_test into a dataframe
y_test_df = pd.DataFrame(y_test)
y_test_df.head()
# Putting CustID to index to final output variable data set
y_test_df['CustID'] = y_test_df.index
# Removing index for both dataframes to append them side by side
y_pred_1.reset_index(drop=True, inplace=True)
y_test_df.reset_index(drop=True, inplace=True)
# Appending y_test_df and y_pred_1
y_pred_final = pd.concat([y_test_df,y_pred_1],axis=1)
# Renaming the column
y_pred_final= y_pred_final.rename(columns={ 1 : 'Conv_Prob'})
# Rearranging the columns
y_pred_final = y_pred_final.reindex(columns=['CustID','Converted','Conv_Prob'])
# Let's see the head of y_pred_final
y_pred_final.head()
# Creating new column 'predicted' with 1 if Conv_Prob > 0.5 else 0
y_pred_final['predicted'] = y_pred_final.Conv_Prob.map( lambda x: 1 if x > 0.5 else 0)
# Let's see the head
y_pred_final.head()
#We have chosen a probability cutoff of 0.5, i.e. if the predicted probability is > 0.5 the lead is predicted to convert, otherwise not
#Importing metrics module to calculate accuracy of the final predicted values
from sklearn import metrics
metrics.accuracy_score(y_pred_final.Converted, y_pred_final.predicted)
#We have achieved 80% accuracy using the selected model.
###Output
_____no_output_____
###Markdown
Our model accuracy is 80%
###Code
#Plotting ROC curve
def draw_roc_curve( y_test, pred_proba_test ):
fpr_test, tpr_test, thresholds = metrics.roc_curve(y_test, pred_proba_test)
#fpr_tr, tpr_tr, thresholds = metrics.roc_curve(y_tr, pred_proba_tr[:,1])
auc_test=metrics.roc_auc_score(y_test,pred_proba_test)
#auc_tr=roc_auc_score(y_tr,pred_proba_tr[:,1])
#plt.plot(fpr_tr, tpr_tr, 'b-', label='Train_ROC= %.2f' %(auc_tr))
plt.plot(fpr_test, tpr_test, 'r-', label='Test_ROC= %.2f' %(auc_test))
plt.plot(fpr_test, fpr_test, 'g-', label='x=y')
plt.xlabel('False Positive Rate or [1 - True Negative Rate]')
plt.grid(True)
plt.title('Receiver operating characteristic example')
plt.ylabel('True Positive Rate')
plt.legend(loc='best')
plt.show()
return auc_test
auc_test=draw_roc_curve( y_pred_final.Converted, y_pred_final.Conv_Prob )
print(auc_test)
###Output
_____no_output_____
###Markdown
Calculating the precision (Lead Score)
###Code
#Plotting the confusion matrix to check the positive prediction rate
def plotconfusionmatrix(y_test,pred_test):
df=metrics.confusion_matrix(y_test, pred_test);
labels = ['Negative', 'Positive']
ax= plt.subplot()
sns.heatmap(df, annot=True, ax = ax,fmt='g');
ax.set_xlabel('Predicted labels');ax.set_ylabel('True labels');
ax.xaxis.set_ticklabels(labels); ax.yaxis.set_ticklabels(labels);
ax.set_title('Confusion Matrix');
plt.show();
return df
# Confusion matrix
confusion=plotconfusionmatrix( y_pred_final.Converted, y_pred_final.predicted )
TP = confusion[1,1] # true positive
TN = confusion[0,0] # true negatives
FP = confusion[0,1] # false positives
FN = confusion[1,0] # false negatives
Precision=TP/(TP+FP)
Precision
###Output
_____no_output_____
###Markdown
Providing a score between 0 and 100 to each customer. We can use the probability score of the logistic regression model to provide a lead score for each customer.
###Code
df_pca = pca_final.fit_transform(X)
# Predicted probabilities
y_pred=model_pca.predict_proba(df_pca)
# Converting it into dataframe
y_pred_df=pd.DataFrame(y_pred)
# Converting to column dataframe
y_pred_1=y_pred_df.iloc[:,[1]]
y_df = pd.DataFrame(y)
y_df.head()
# Putting CustID to index
y_df['CustID'] = y_df.index
# Removing index for both dataframes to append them side by side
y_pred_1.reset_index(drop=True, inplace=True)
y_df.reset_index(drop=True, inplace=True)
# Appending y_df and y_pred_1
y_pred_final = pd.concat([y_df,y_pred_1],axis=1)
# Renaming the column
y_pred_final= y_pred_final.rename(columns={ 1 : 'Conv_Prob'})
# Rearranging the columns
y_pred_final = y_pred_final.reindex(columns=['CustID','Converted','Conv_Prob'])
# Let's see the head of y_pred_final
y_pred_final.head()
# Creating new column 'predicted' with 1 if Conv_Prob > 0.5 else 0
y_pred_final['predicted'] = y_pred_final.Conv_Prob.map( lambda x: 1 if x > 0.5 else 0)
# Let's see the head
y_pred_final.head()
#Inspecting final data set of predicted lead score
y_pred_final.shape
#Setting unique index
df_Id=df_Id.reset_index()
#Converting lead score into percentage
df_Id['Lead Score']=round(y_pred_final['Conv_Prob']*100)
df_Id['Converted']=y_pred_final['Converted']
#Final data frame containing the lead score associated with each lead number, along with the Converted output variable
#which shows whether the lead actually converted or not.
df_Id
###Output
_____no_output_____ |
hyperparameter_tuning (1).ipynb | ###Markdown
Hyperparameter Tuning using HyperDrive. TODO: Import Dependencies. In the cell below, import all the dependencies that you will need to complete the project.
###Code
import logging
import os
import csv
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from sklearn import datasets
import pkg_resources
from azureml.train.hyperdrive import RandomParameterSampling
from azureml.train.hyperdrive import normal, uniform, choice
from azureml.core import Workspace, Experiment
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
from azureml.core.dataset import Dataset
from azureml.data.dataset_factory import TabularDatasetFactory
from azureml.widgets import RunDetails
from azureml.train.sklearn import SKLearn
from azureml.train.hyperdrive.run import PrimaryMetricGoal
from azureml.train.hyperdrive.policy import BanditPolicy
from azureml.train.hyperdrive.sampling import RandomParameterSampling
from azureml.train.hyperdrive.runconfig import HyperDriveConfig
from azureml.train.hyperdrive.parameter_expressions import uniform
###Output
_____no_output_____
###Markdown
Dataset. TODO: Get data. In the cell below, write code to access the data you will be using in this project. Remember that the dataset needs to be external.
###Code
ws = Workspace.from_config()
print('Workspace name: ' + ws.name,
'Azure region: ' + ws.location,
'Subscription id: ' + ws.subscription_id,
'Resource group: ' + ws.resource_group, sep = '\n')
experiment_name = 'ChurnPrediction'
experiment=Experiment(ws, experiment_name)
run = experiment.start_logging()
# TODO: Create compute cluster
# max_nodes should be no greater than 4.
# choose a name for your cluster
cluster_name = "notebook143048"
try:
compute_target = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing compute target')
except ComputeTargetException:
print('Creating a new compute target...')
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS3_V2',
max_nodes=4)
# create the cluster
compute_target = ComputeTarget.create(ws, cluster_name, compute_config)
# can poll for a minimum number of nodes and for a specific timeout.
# if no min node count is provided it uses the scale settings for the cluster
#compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=30)
# use get_status() to get a detailed status for the current cluster.
#print(compute_target.get_status().serialize())
found = False
key = "Churn Prediction Dataset"
description_text = "Churn Prediction for Capstone Project"
if key in ws.datasets.keys():
found = True
ds = ws.datasets[key]
if not found:
# Create Dataset and register it into Workspace
dataset_link = 'https://raw.githubusercontent.com/tejasbangera/Udacity-Captstone-Project/main/WA_Fn-UseC_-Telco-Customer-Churn.csv'
ds = TabularDatasetFactory.from_delimited_files(path = dataset_link)
#Register Dataset in Workspace
ds = ds.register(workspace=ws,name=key,description=description_text)
###Output
_____no_output_____
###Markdown
Hyperdrive Configuration. TODO: Explain the model you are using and the reason for choosing the different hyperparameters, termination policy and config settings.
###Code
from azureml.widgets import RunDetails
from azureml.train.sklearn import SKLearn
from azureml.train.hyperdrive.run import PrimaryMetricGoal
from azureml.train.hyperdrive.policy import BanditPolicy
from azureml.train.hyperdrive.sampling import RandomParameterSampling
from azureml.train.hyperdrive.runconfig import HyperDriveConfig
from azureml.train.hyperdrive.parameter_expressions import uniform, choice , normal
import os
# Specify parameter sampler
parameter_sampler = RandomParameterSampling( {
"--C": uniform(0.05, 0.1),
"--max_iter": choice(16, 32, 64, 128)}) ### YOUR CODE HERE ###
# Specify a Policy
policy = BanditPolicy(slack_factor = 0.1, evaluation_interval=2, delay_evaluation=5) ### YOUR CODE HERE ###
"""Bandit terminates runs where the primary metric is not within
the specified slack factor/slack amount compared to the best performing run."""
if "training" not in os.listdir():
os.mkdir("./training")
# Create a SKLearn estimator for use with train.py
est = SKLearn(source_directory="./",
compute_target=compute_target, entry_script="train.py")### YOUR CODE HERE ###
# Create a HyperDriveConfig using the estimator, hyperparameter sampler, and policy.
hyperdrive_config = HyperDriveConfig(estimator = est,
hyperparameter_sampling = parameter_sampler,
policy = policy,
primary_metric_name="Accuracy",
primary_metric_goal=PrimaryMetricGoal.MAXIMIZE,
max_total_runs=50,
max_concurrent_runs = 5
)### YOUR CODE HERE ###
###Output
'SKLearn' estimator is deprecated. Please use 'ScriptRunConfig' from 'azureml.core.script_run_config' with your own defined environment or the AzureML-Tutorial curated environment.
'enabled' is deprecated. Please use the azureml.core.runconfig.DockerConfiguration object with the 'use_docker' param instead.
###Markdown
Run Details
###Code
# Submit your hyperdrive run to the experiment and show run details with the widget.
hyperdrive_run = experiment.submit(hyperdrive_config)
RunDetails(hyperdrive_run).show()
###Output
WARNING:root:If 'script' has been provided here and a script file name has been specified in 'run_config', 'script' provided in ScriptRunConfig initialization will take precedence.
###Markdown
Best Run
###Code
import joblib
# Get your best run and save the model from that run.
best_run = hyperdrive_run.get_best_run_by_primary_metric()
best_run_metrics = best_run.get_metrics()
print('Best Run Id: ', best_run.id)
print('Accuracy: ', best_run_metrics['Accuracy'])
best_run.get_file_names() #To get the actual model file
best_run.download_file(name="outputs/model.joblib", output_file_path="./outputs/")
best_run
print(best_run.get_file_names())
###Output
['azureml-logs/55_azureml-execution-tvmps_5d3e12c72d52fca97ab1343c91f2869c67751cb6f69fd97ae068a74367385df6_d.txt', 'azureml-logs/65_job_prep-tvmps_5d3e12c72d52fca97ab1343c91f2869c67751cb6f69fd97ae068a74367385df6_d.txt', 'azureml-logs/70_driver_log.txt', 'azureml-logs/75_job_post-tvmps_5d3e12c72d52fca97ab1343c91f2869c67751cb6f69fd97ae068a74367385df6_d.txt', 'logs/azureml/102_azureml.log', 'logs/azureml/dataprep/backgroundProcess.log', 'logs/azureml/dataprep/backgroundProcess_Telemetry.log', 'logs/azureml/job_prep_azureml.log', 'logs/azureml/job_release_azureml.log', 'outputs/model.joblib']
###Markdown
Hyperparameter Tuning using HyperDrive. TODO: Import Dependencies. In the cell below, import all the dependencies that you will need to complete the project.
###Code
from azureml.core import Workspace, Experiment
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
from azureml.data.dataset_factory import TabularDatasetFactory
import pandas as pd
from azureml.train.automl import AutoMLConfig
from azureml.widgets import RunDetails
from azureml.core.model import Model
from azureml.core.model import InferenceConfig
from azureml.core import Workspace, Environment
from azureml.core import Model
from azureml.core.webservice import AciWebservice, Webservice
import json
import joblib
import os
from azureml.core import Workspace, Experiment
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
from azureml.widgets import RunDetails
from azureml.train.sklearn import SKLearn
from azureml.train.hyperdrive.run import PrimaryMetricGoal
from azureml.train.hyperdrive.policy import BanditPolicy
from azureml.train.hyperdrive.sampling import RandomParameterSampling
from azureml.train.hyperdrive.runconfig import HyperDriveConfig
from azureml.train.hyperdrive.parameter_expressions import choice
import joblib
from sklearn.linear_model import LogisticRegression
import argparse
import os
import numpy as np
from sklearn.metrics import mean_squared_error
import joblib
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder
import pandas as pd
from azureml.core.run import Run
from azureml.data.dataset_factory import TabularDatasetFactory
ws = Workspace.from_config()
experiment_name = 'creditcard_fraud_prediction'
experiment=Experiment(workspace=ws, name=experiment_name)
print('Workspace name: ' + ws.name,
'Azure region: ' + ws.location,
'Subscription id: ' + ws.subscription_id,
'Resource group: ' + ws.resource_group, sep = '\n')
run = experiment.start_logging()
# compute cluster
amlcompute_cluster_name = "cpu-clusters"
try:
remote_run_compute = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS12_V2',
max_nodes=4)
remote_run_compute = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)
remote_run_compute.wait_for_completion(show_output=True , min_node_count = 1, timeout_in_minutes = 2)
###Output
Creating
Succeeded...................
AmlCompute wait for completion finished
Wait timeout has been reached
Current provisioning state of AmlCompute is "Succeeded" and current node count is "0"
###Markdown
Dataset. TODO: Get data. In the cell below, write code to access the data you will be using in this project. Remember that the dataset needs to be external.
###Code
# Create TabularDataset using TabularDatasetFactory
# Data is located at:
data_path = "https://media.githubusercontent.com/media/Tekhunt/Creditcard-fraud-detection/master/fraud-data.csv"
data = TabularDatasetFactory.from_delimited_files(path= data_path)
data.to_pandas_dataframe().head()
from train import *
x_data, y_data = my_dataset(data)
# TODO: Split data into train and test sets.
### YOUR CODE HERE ###
x_train, x_test, y_train, y_test = train_test_split(x_data, y_data, test_size = 0.3, random_state = 6)
###Output
_____no_output_____
###Markdown
Hyperdrive Configuration. LogisticRegression is the algorithm used in this classification task: a two-class classifier that predicts between two categories (fraudulent or not fraudulent). To improve the model we optimized the hyperparameters using Azure Machine Learning's HyperDrive. The hyperparameter space tunes the C and max_iter parameters. Random sampling, which supports discrete and continuous hyperparameters, was used; the primary metric to optimize was accuracy and the goal was to maximize it. The early termination policy was BanditPolicy, with the parameters slack_factor and evaluation_interval. A slack factor of 0.1 was used as the evaluation criterion to conserve resources by terminating runs where the primary metric is not within the specified slack factor/slack amount compared to the best performing run. Once this is set up we create the SKLearn estimator.
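As a worked illustration of the slack_factor rule (the numbers are illustrative, not taken from this run): with slack_factor = 0.1, a run is terminated at an evaluation interval when its metric falls below the best metric so far divided by (1 + slack_factor).

```python
best_so_far = 0.90                      # hypothetical best accuracy at an interval
slack_factor = 0.1
cutoff = best_so_far / (1 + slack_factor)
print(round(cutoff, 3))                 # ~0.818 -> runs below this are stopped early
```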
###Code
# TODO: Create an early termination policy. This is not required if you are using Bayesian sampling.
policy = BanditPolicy(slack_factor = 0.1, evaluation_interval=1)
#TODO: Create the different params that you will be using during training
param_sampling = RandomParameterSampling( {
"--C": choice(0.001, 0.01, 0.1, 1, 10, 100, 1000),
"--max_iter": choice(100, 150, 200, 250,400, 500)
}
)
#experiment_folder = 'train_file'
#TODO: Create your estimator and hyperdrive config
estimator = SKLearn(source_directory = './',
entry_script = 'train.py',
compute_target = remote_run_compute)
hyperdrive_run_config = HyperDriveConfig(estimator=estimator,
hyperparameter_sampling=param_sampling,
policy = policy,
primary_metric_name='Accuracy',
primary_metric_goal=PrimaryMetricGoal.MAXIMIZE,
max_total_runs=4,
max_concurrent_runs=4)
#TODO: Submit your experiment
hyperdrive_run = experiment.submit(hyperdrive_run_config)
###Output
WARNING:root:If 'script' has been provided here and a script file name has been specified in 'run_config', 'script' provided in ScriptRunConfig initialization will take precedence.
###Markdown
Run Details. OPTIONAL: Write about the different models trained and their performance. Why do you think some models did better than others? TODO: In the cell below, use the `RunDetails` widget to show the different experiments.
###Code
# Visualize hyperparameter tuning runs
RunDetails(hyperdrive_run).show()
hyperdrive_run.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
Best Model. TODO: In the cell below, get the best model from the hyperdrive experiments and display all the properties of the model.
###Code
# Get your best run and save the model from that run.
best_run = hyperdrive_run.get_best_run_by_primary_metric()
best_run_metrics = best_run.get_metrics()
parameter_values = best_run.get_details()['runDefinition']['arguments']
run_file_names = best_run.get_file_names()
print(parameter_values)
print('/n')
print(run_file_names)
print('/n')
print(best_run_metrics)
best_run.get_details()
print(best_run.get_file_names())
# Save the best model
best_run.download_file('/outputs/model.joblib', 'hyperdrive_model.joblib')
# Register the best model
model = best_run.register_model(model_name='hyperdrive_loan-detection_model',
model_path='outputs/model.joblib',
model_framework=Model.Framework.SCIKITLEARN)
print(model)
print('Best Run Id: ', best_run.id)
print('\n Accuracy:', best_run_metrics['Accuracy'])
print('\n learning rate:',parameter_values[3])
model = best_run.register_model(model_name = 'best_hyperdrive_model', model_path = 'outputs/model.joblib')
#TODO: Save the best model
#Save and register the best model
###Output
_____no_output_____ |
exercise-categorical-encodings.ipynb | ###Markdown
**This notebook is an exercise in the [Feature Engineering](https://www.kaggle.com/learn/feature-engineering) course. You can reference the tutorial at [this link](https://www.kaggle.com/matleonard/categorical-encodings).** Introduction: In this exercise you'll apply more advanced encodings to the categorical variables to improve your classifier model. The encodings you will implement are Count Encoding, Target Encoding, and CatBoost Encoding. You'll refit the classifier after each encoding to check its performance on hold-out data. Begin by running the next code cell to set up the notebook.
###Code
# Set up code checking
# This can take a few seconds
from learntools.core import binder
binder.bind(globals())
from learntools.feature_engineering.ex2 import *
###Output
/opt/conda/lib/python3.7/site-packages/category_encoders/utils.py:21: FutureWarning: is_categorical is deprecated and will be removed in a future version. Use is_categorical_dtype instead
elif pd.api.types.is_categorical(cols):
/opt/conda/lib/python3.7/site-packages/category_encoders/utils.py:21: FutureWarning: is_categorical is deprecated and will be removed in a future version. Use is_categorical_dtype instead
elif pd.api.types.is_categorical(cols):
###Markdown
The next code cell repeats the work that you did in the previous exercise.
###Code
import numpy as np
import pandas as pd
from sklearn import preprocessing, metrics
import lightgbm as lgb
clicks = pd.read_parquet('../input/feature-engineering-data/baseline_data.pqt')
###Output
_____no_output_____
###Markdown
Next, we define a couple functions that you'll use to test the encodings that you implement in this exercise.
###Code
def get_data_splits(dataframe, valid_fraction=0.1):
"""Splits a dataframe into train, validation, and test sets.
First, orders by the column 'click_time'. Set the size of the
validation and test sets with the valid_fraction keyword argument.
"""
dataframe = dataframe.sort_values('click_time')
valid_rows = int(len(dataframe) * valid_fraction)
train = dataframe[:-valid_rows * 2]
# valid size == test size, last two sections of the data
valid = dataframe[-valid_rows * 2:-valid_rows]
test = dataframe[-valid_rows:]
return train, valid, test
def train_model(train, valid, test=None, feature_cols=None):
if feature_cols is None:
feature_cols = train.columns.drop(['click_time', 'attributed_time',
'is_attributed'])
dtrain = lgb.Dataset(train[feature_cols], label=train['is_attributed'])
dvalid = lgb.Dataset(valid[feature_cols], label=valid['is_attributed'])
param = {'num_leaves': 64, 'objective': 'binary',
'metric': 'auc', 'seed': 7}
num_round = 1000
bst = lgb.train(param, dtrain, num_round, valid_sets=[dvalid],
early_stopping_rounds=20, verbose_eval=False)
valid_pred = bst.predict(valid[feature_cols])
valid_score = metrics.roc_auc_score(valid['is_attributed'], valid_pred)
print(f"Validation AUC score: {valid_score}")
if test is not None:
test_pred = bst.predict(test[feature_cols])
test_score = metrics.roc_auc_score(test['is_attributed'], test_pred)
return bst, valid_score, test_score
else:
return bst, valid_score
###Output
_____no_output_____
###Markdown
Run this cell to get a baseline score.
###Code
print("Baseline model")
train, valid, test = get_data_splits(clicks)
_ = train_model(train, valid)
###Output
Baseline model
Validation AUC score: 0.9622743228943659
###Markdown
1) Categorical encodings and leakage. These encodings are all based on statistics calculated from the dataset, like counts and means. Considering this, what data should you be using to calculate the encodings? Specifically, can you use the validation data? Can you use the test data? Run the following line after you've decided your answer.
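For reference, a minimal sketch of the leakage-safe pattern used throughout this exercise: learn the statistics from the training split only, then apply them to the other splits (illustrative only; the graded cells below do this with `category_encoders`):

```python
train, valid, test = get_data_splits(clicks)
counts = train['app'].value_counts()          # statistics learned on the training split only
valid_app_count = valid['app'].map(counts)    # applied to validation (unseen values become NaN)
test_app_count = test['app'].map(counts)      # applied to test
```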
###Code
# Check your answer (Run this code cell to receive credit!)
q_1.solution()
###Output
_____no_output_____
###Markdown
2) Count encodingsBegin by running the next code cell to get started.
###Code
import category_encoders as ce
cat_features = ['ip', 'app', 'device', 'os', 'channel']
train, valid, test = get_data_splits(clicks)
###Output
_____no_output_____
###Markdown
Next, encode the categorical features `['ip', 'app', 'device', 'os', 'channel']` using the count of each value in the data set. - Using `CountEncoder` from the `category_encoders` library, fit the encoding using the categorical feature columns defined in `cat_features`. - Then apply the encodings to the train and validation sets, adding them as new columns with names suffixed `"_count"`.
###Code
# Create the count encoder
count_enc = ce.CountEncoder(cols=cat_features)
# Learn encoding from the training set
count_enc.fit(train[cat_features])
# Apply encoding to the train and validation sets
train_encoded = train.join(count_enc.transform(train[cat_features]).add_suffix('_count'))
valid_encoded = valid.join(count_enc.transform(valid[cat_features]).add_suffix('_count'))
# Check your answer
q_2.check()
# Uncomment if you need some guidance
# q_2.hint()
q_2.solution()
###Output
_____no_output_____
###Markdown
Run the next code cell to see how count encoding changes the results.
###Code
# Train the model on the encoded datasets
# This can take around 30 seconds to complete
_ = train_model(train_encoded, valid_encoded)
###Output
Validation AUC score: 0.9653051135205329
###Markdown
Count encoding improved our model's score! 3) Why is count encoding effective? At first glance, it could be surprising that count encoding helps make accurate models. Why do you think count encoding is a good idea, or how does it improve the model score? Run the following line after you've decided your answer.
###Code
# Check your answer (Run this code cell to receive credit!)
q_3.solution()
###Output
_____no_output_____
###Markdown
4) Target encoding. Here you'll try some supervised encodings that use the labels (the targets) to transform categorical features. The first one is target encoding. Create the target encoder from the `category_encoders` library; then learn the encodings from the training dataset, apply the encodings to all the datasets, and retrain the model.
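A toy illustration of the basic idea (not the graded answer below, and note that the library additionally smooths each category mean toward the global mean): each category is replaced by the mean of the target for that category.

```python
import pandas as pd
toy = pd.DataFrame({'device': ['a', 'a', 'b', 'b', 'b'],
                    'is_attributed': [1, 0, 1, 1, 0]})
print(toy.groupby('device')['is_attributed'].mean())   # a -> 0.5, b -> ~0.667
```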
###Code
# Create the target encoder. You can find this easily by using tab completion.
# Start typing ce. the press Tab to bring up a list of classes and functions.
target_enc = ce.TargetEncoder(cols=cat_features)
# Learn encoding from the training set
target_enc.fit(train[cat_features], train['is_attributed'])
# Apply encoding to the train and validation sets
train_encoded = train.join(target_enc.transform(train[cat_features]).add_suffix('_target'))
valid_encoded = valid.join(target_enc.transform(valid[cat_features]).add_suffix('_target'))
# Check your answer
q_4.check()
# Uncomment these if you need some guidance
#q_4.hint()
q_4.solution()
###Output
_____no_output_____
###Markdown
Run the next cell to see how target encoding affects your results.
###Code
_ = train_model(train_encoded, valid_encoded)
###Output
Validation AUC score: 0.9540530347873288
###Markdown
5) Try removing IP encoding. If you leave `ip` out of the encoded features and retrain the model with target encoding, you should find that the score increases and is above the baseline score! Why do you think the score is below baseline when we encode the IP address but above baseline when we don't? Run the following line after you've decided your answer.
###Code
# Check your answer (Run this code cell to receive credit!)
q_5.solution()
###Output
_____no_output_____
###Markdown
6) CatBoost Encoding. The CatBoost encoder is supposed to work well with the LightGBM model. Encode the categorical features with `CatBoostEncoder` and train the model on the encoded data again.
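A toy sketch of the "ordered" idea behind CatBoost encoding (an approximation, not the library's exact formula): each row is encoded using target statistics from earlier rows only, plus a prior, so a row's own label cannot leak into its encoding.

```python
import pandas as pd
toy = pd.DataFrame({'channel': ['a', 'a', 'b', 'a'], 'is_attributed': [1, 0, 1, 1]})
prior = toy['is_attributed'].mean()
encoded = []
for i in range(len(toy)):
    prev = toy.iloc[:i]                                    # rows seen before row i
    prev = prev[prev['channel'] == toy.loc[i, 'channel']]  # same category only
    encoded.append((prev['is_attributed'].sum() + prior) / (len(prev) + 1))
toy['channel_cb'] = encoded
print(toy)
```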
###Code
# Remove IP from the encoded features
cat_features = ['app', 'device', 'os', 'channel']
train, valid, test = get_data_splits(clicks)
# Have to tell it which features are categorical when they aren't strings
cb_enc = ce.CatBoostEncoder(cols=cat_features, random_state=7)
# Learn encoding from the training set
cb_enc.fit(train[cat_features], train['is_attributed'])
# Apply encoding to the train and validation sets
train_encoded = train.join(cb_enc.transform(train[cat_features]).add_suffix('_cb'))
valid_encoded = valid.join(cb_enc.transform(valid[cat_features]).add_suffix('_cb'))
# Check your answer
q_6.check()
# Uncomment these if you need some guidance
#q_6.hint()
q_6.solution()
###Output
_____no_output_____
###Markdown
Run the next code cell to see how the CatBoost encoder changes your results.
###Code
_ = train_model(train_encoded, valid_encoded)
###Output
_____no_output_____ |
experimental_design_figure.ipynb | ###Markdown
Experimental design figure
###Code
import numpy as np
from numpy import array
import pandas as pd
import seaborn as sns
import matplotlib as mpl
import matplotlib.pyplot as plt
sns.set(style="white", context="paper")
%matplotlib inline
mpl.rc("savefig", dpi=150)
def savefig(fig, name):
fig.savefig("figures/{}.pdf".format(name), dpi=120)
fig.savefig("figures/{}.png".format(name), dpi=120)
fig.savefig("tiffs/{}.tiff".format(name), dpi=300)
def fixation_point(ax, color="white"):
gray = ".33"
ax.add_artist(plt.Rectangle((0, 0), 1, 1, fill=True, facecolor=gray,
linewidth=1, edgecolor="white"))
ax.add_artist(plt.Circle((.5, .5), .012, color=color, zorder=5))
ax.set(xlim=(0, 1), ylim=(0, 1))
def cue_frame(ax, which):
# Parameters of the cue frame
gray = ".33"
colors = ".85", ".15"
pos, size = .05, .9
width = .06
# Long frame
if which == 0:
for i in range(3):
color = colors[i % 2]
ax.add_artist(plt.Rectangle((pos, pos), size, size,
fill=True, facecolor=color, linewidth=0))
pos += width / 3
size -= (width / 3) * 2
# Short frame
else:
white, black = colors
# Draw a white rectangle
ax.add_artist(plt.Rectangle((pos, pos), size, size,
fill=True, facecolor=white, linewidth=0))
# Draw black dashes over it
lw = 3.75
dash = 1.4
# Vertical sides of the stimulus
l, r = pos + width / 2, pos + size - width / 2
b, t = pos + width, pos + size - width
ax.plot((l, l), (b, t), ls=":", lw=lw, dashes=[dash, dash], color=black)
ax.plot((r, r), (b, t), ls=":", lw=lw, dashes=[dash, dash], color=black)
# Horizontal sides of the stimulus
l, r = pos + width, pos + size - width
b, t = pos + width / 2, pos + size - width / 2
ax.plot((l, r), (b, b), ls=":", lw=lw, dashes=[dash, dash], color=black)
ax.plot((l, r), (t, t), ls=":", lw=lw, dashes=[dash, dash], color=black)
# Update the position variables so the
# center rectangle gets drawn correctly
pos += .02 * 3
size -= .04 * 3
# Center gray rectangle
ax.add_artist(plt.Rectangle((pos, pos), size, size,
fill=True, facecolor=gray, linewidth=0))
def dot_stimulus(ax, which):
# Parameters of the dots -------------------------------------------
# x positions of the two possible stimuli
xs = array([[0.19, 0.31, 0.53, 0.68, 0.81,
0.18, 0.34, 0.5, 0.66, 0.82,
0.195, 0.305, 0.44, 0.69, 0.79,
0.175, 0.345, 0.515, 0.63, 0.825,
0.2, 0.335, 0.48, 0.6275, 0.8],
[0.19, 0.31, 0.53, 0.68, 0.82,
0.14, 0.34, 0.51, 0.645, 0.81,
0.215, 0.305, 0.44, 0.71, 0.79,
0.185, 0.345, 0.52, 0.64, 0.83,
0.21, 0.335, 0.48, 0.6275, 0.81]])[which]
# y positions of the two possible stimuli
ys = array([[0.17, 0.15, 0.175, 0.19, 0.165,
0.34, 0.31, 0.33, 0.34, 0.36,
0.485, 0.5, 0.53, 0.53, 0.525,
0.635, 0.68, 0.66, 0.6725, 0.64,
0.82 , 0.79, 0.81, 0.80 , 0.78],
[0.19, 0.15, 0.175, 0.19, 0.165,
0.34, 0.31, 0.33, 0.35, 0.36,
0.495, 0.5, 0.53, 0.53, 0.525,
0.655, 0.68, 0.66, 0.6725, 0.64,
0.81 , 0.79, 0.81, 0.82 , 0.78]])[which]
# Colors of the two possible stimuli
hues = dict(r=(0.93226, 0.53991, 0.26735),
g=(0., 0.74055, 0.22775))
cs = [['g', 'g', 'g', 'r', 'r',
'g', 'r', 'g', 'r', 'g',
'g', 'g', 'g', 'r', 'r',
'r', 'r', 'r', 'g', 'g',
'g', 'g', 'r', 'g', 'g'],
['g', 'r', 'g', 'r', 'r',
'r', 'g', 'r', 'r', 'g',
'r', 'r', 'g', 'r', 'g',
'g', 'r', 'r', 'g', 'r',
'r', 'r', 'g', 'r', 'r']][which]
cs = [hues[c] for c in cs]
# Angles of motion of the two possibly stimuli
thetas = [[90, 184, 123, 186, 205,
128, 90, 202, 131, 68,
37, 296, 90, 358, 90,
90, 166, 49, 146, 291,
193, 90, 341, 90, 234],
[220, 80, 5, 65, 162,
10, 176, 42, 270, 43,
270, 8, 270, 140, 213,
270, 212, 163, 270, 244,
220, 161, 141, 6, 74]][which]
# Size of the dots
dot_size = .022
# Draw the dots -----------------------------------------------------
for x, y, c in zip(xs, ys, cs):
x, y = x - (dot_size / 2), y - (dot_size / 2)
ax.add_artist(plt.Rectangle((x, y), dot_size, dot_size, color=c))
# Parameters for motion representation
arrow_length = .04
arrow_start = .022
arrow_width = .02
# Draw the arrows to indicate direction of motion
for x, y, c, theta in zip(xs, ys, cs, thetas):
theta = np.deg2rad(theta)
x += arrow_start * np.cos(theta)
y += arrow_start * np.sin(theta)
dx = arrow_length * np.cos(theta)
dy = arrow_length * np.sin(theta)
color = sns.desaturate(c, .75)
ax.add_artist(plt.Arrow(x, y, dx, dy, arrow_width, color=color))
def screen(x, y, size=.25, ratio=2, fixcolor="white",
frame=None, dots=None, text=None):
# Add the axes for the current screen to the figure
fig = plt.gcf()
width, height = size, size / ratio
x -= width / 2
y -= height / 2
ax = fig.add_axes([x, y, width, height], frameon=False)
ax.set_axis_off()
# Draw the stimulus
fixation_point(ax, fixcolor)
if frame is not None or dots is not None:
cue_frame(ax, frame)
if dots is not None:
dot_stimulus(ax, dots)
# Add text information (used for timing)
if text is not None:
fig.text(x + width, y + height, text,
size=8, ha="right", va="bottom")
return ax
def frequency_manipulation(ax):
# Load the design
design = pd.read_csv("data/scan_design.csv")
# Plot the line of generating color frequency
# (note reversed due to bug in design code)
ax.plot(1 - design.color_freq, ls=":", color=".3", lw=1, dashes=[.75, 1])
# Set up the positions of the indiviudal trial scatter points
trial_colors = design.context.map({1: "#9666BD", 0: "#404040"})
jitterer = np.random.RandomState(99)
spreader = (np.arange(len(design)) % 4) / 30. - .04
trial_height = design.context.map({0: .05, 1: .95}) + spreader
trial_height += jitterer.uniform(-.015, .015, len(design))
# Draw the trial context scatter
ax.scatter(design.index, trial_height, 5, trial_colors,
alpha=.9, linewidth=.2, edgecolor="white")
# Add semantic labels to the plot
ax.set_xlabel("Trial", labelpad=.8)
ax.set_ylabel("p(color trial)", labelpad=2.5)
ax.set(xlim=(-7, 907), ylim=(-.05, 1.05),
#yticks=[.1, .3, .5, .7, .9],
#yticklabels=[".1", ".3", ".5", ".7", ".9"]
yticks=[.2, .4, .6, .8],
yticklabels=[".2", ".4", ".6", ".8"]
)
ax.set_xticks([0, 300, 600, 900])
ax.set_xticklabels([0, 300, 600, 900], ha="right")
sns.despine(ax=ax, bottom=True, trim=True)
###Output
_____no_output_____
###Markdown
--- Draw the figure
###Code
# Size and shape variables
figwidth = 3.5
ratio = .95
size = .275
fig = plt.figure(figsize=(figwidth, figwidth * ratio))
# Positioning variables
top_start = .78
top_end = .62
top = np.linspace(top_start, top_end, 4)
left_start = .18
left_end = .84
left = np.linspace(left_start, left_end, 4)
# Example sequence of an early-cue trial
screen(left[0], top[0], size, ratio, text=".5 s")
screen(left[1], top[1], size, ratio, frame=0, text="0 or 1 s")
screen(left[2], top[2], size, ratio, frame=0, dots=0, text="0 or 2 s")
screen(left[3], top[3], size, ratio, fixcolor="black", text="2 - 10 s")
# Diagram of the context frequency manipulation
f_ax = fig.add_axes([.11, .11, .87, .32])
frequency_manipulation(f_ax)
# Panel labels
fig.text(.02, .95, "A", size=12)
fig.text(.02, .43, "B", size=12)
savefig(fig, "experimental_design")
###Output
_____no_output_____ |
Google IT Automation with Python/Google - Crash Course on Python/Week 4/Module 4 Graded Assessment.ipynb | ###Markdown
Module 4 Graded Assessment
###Code
"""
1.Question 1
The format_address function separates out parts of the address string into new strings: house_number and street_name, and returns: "house number X on street named Y". The format of the input string is: numeric house number, followed by the street name which may contain numbers, but never by themselves, and could be several words long. For example, "123 Main Street", "1001 1st Ave", or "55 North Center Drive".
Fill in the gaps to complete this function.
"""
def format_address(address_string):
    # Declare variables
    house_no = ""
    street_name = ""
    # Separate the address string into parts
    sep_addr = address_string.split()
    # Traverse through the address parts and decide whether each one is
    # the house number or part of the street name
    for addr in sep_addr:
        if addr.isdigit():
            house_no = addr
        else:
            street_name += addr + " "
    # Strip the trailing space so the output matches the expected format
    street_name = street_name.strip()
    # Return the formatted string
    return "house number {} on street named {}".format(house_no, street_name)
print(format_address("123 Main Street"))
# Should print: "house number 123 on street named Main Street"
print(format_address("1001 1st Ave"))
# Should print: "house number 1001 on street named 1st Ave"
print(format_address("55 North Center Drive"))
# Should print "house number 55 on street named North Center Drive"
"""
2.Question 2
The highlight_word function changes the given word in a sentence to its upper-case version. For example, highlight_word("Have a nice day", "nice") returns "Have a NICE day".
Can you write this function in just one line?
"""
def highlight_word(sentence, word):
return(sentence.replace(word,word.upper()))
print(highlight_word("Have a nice day", "nice"))
print(highlight_word("Shhh, don't be so loud!", "loud"))
print(highlight_word("Automating with Python is fun", "fun"))
"""
3.Question 3
A professor with two assistants, Jamie and Drew, wants an attendance list of the students,
in the order that they arrived in the classroom. Drew was the first one to note which students
arrived, and then Jamie took over. After the class, they each entered their lists into the computer
and emailed them to the professor, who needs to combine them into one, in the order of each student's
arrival. Jamie emailed a follow-up, saying that her list is in reverse order. Complete the steps to
combine them into one list as follows: the contents of Drew's list, followed by Jamie's list in reverse order,
to get an accurate list of the students as they arrived.
"""
def combine_lists(list1, list2):
    # Generate a new list containing the elements of list2,
    # followed by the elements of list1 in reverse order.
    # Copy list2 so the caller's list is not modified in place.
    new_list = list(list2)
    for i in reversed(range(len(list1))):
        new_list.append(list1[i])
    return new_list
Jamies_list = ["Alice", "Cindy", "Bobby", "Jan", "Peter"]
Drews_list = ["Mike", "Carol", "Greg", "Marcia"]
"""
4.Question 4
Use a list comprehension to create a list of squared numbers (n*n).
The function receives the variables start and end, and returns a list of squares of consecutive numbers
between start and end inclusively.
For example, squares(2, 3) should return [4, 9].
"""
def squares(start, end):
return [(x*x) for x in range(start,end+1)]
print(squares(2, 3)) # Should be [4, 9]
print(squares(1, 5)) # Should be [1, 4, 9, 16, 25]
print(squares(0, 10)) # Should be [0, 1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
"""
5.Question 5
Complete the code to iterate through the keys and values of the car_prices dictionary,
printing out some information about each one.
"""
def car_listing(car_prices):
result = ""
for key,value in car_prices.items():
result += "{} costs {} dollars".format(key,value) + "\n"
return result
print(car_listing({"Kia Soul":19000, "Lamborghini Diablo":55000, "Ford Fiesta":13000, "Toyota Prius":24000}))
"""
6.Question 6
Taylor and Rory are hosting a party. They sent out invitations, and each one collected responses
into dictionaries, with names of their friends and how many guests each friend is bringing.
Each dictionary is a partial list, but Rory's list has more current information about the number of guests.
Fill in the blanks to combine both dictionaries into one, with each friend listed only once, and the number
of guests from Rory's dictionary taking precedence, if a name is included in both dictionaries.
Then print the resulting dictionary.
"""
from copy import deepcopy
def combine_guests(guests1, guests2):
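    # Strategy (comment added for clarity): back up guests1, merge in guests2,
    # then restore guests1's original counts so that, for any guest present in
    # both dictionaries, the counts from guests1 (Rory's list) take precedence.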
backup = deepcopy(guests1)
guests1.update(guests2)
for guest in guests1:
if guest in backup:
guests1[guest] = backup[guest]
return guests1
Rorys_guests = { "Adam":2, "Brenda":3, "David":1, "Jose":3, "Charlotte":2, "Terry":1, "Robert":4}
Taylors_guests = { "David":4, "Nancy":1, "Robert":2, "Adam":1, "Samantha":3, "Chris":5}
print(combine_guests(Rorys_guests, Taylors_guests))
"""
7.Question 7
Use a dictionary to count the frequency of letters in the input string. Only letters should be counted, not blank spaces, numbers, or punctuation. Upper case should be considered the same as lower case. For example, count_letters("This is a sentence.") should return {'t': 2, 'h': 1, 'i': 2, 's': 3, 'a': 1, 'e': 3, 'n': 2, 'c': 1}.
"""
def count_letters(text):
elements = text.replace(" ","").lower()
result = {}
for letter in elements:
if letter.isalpha():
if letter not in result:
result[letter] = 1
else:
result[letter] +=1
return result
print(count_letters("AaBbCc"))
# Should be {'a': 2, 'b': 2, 'c': 2}
print(count_letters("Math is fun! 2+2=4"))
# Should be {'m': 1, 'a': 1, 't': 1, 'h': 1, 'i': 1, 's': 1, 'f': 1, 'u': 1, 'n': 1}
print(count_letters("This is a sentence."))
# Should be {'t': 2, 'h': 1, 'i': 2, 's': 3, 'a': 1, 'e': 3, 'n': 2, 'c': 1}
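
# A compact alternative for comparison only (not the graded answer above),
# using collections.Counter from the standard library to do the tallying.
from collections import Counter

def count_letters_alt(text):
    return dict(Counter(c for c in text.lower() if c.isalpha()))

print(count_letters_alt("This is a sentence."))
# Should also be {'t': 2, 'h': 1, 'i': 2, 's': 3, 'a': 1, 'e': 3, 'n': 2, 'c': 1}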
###Output
_____no_output_____ |
Aviacao_Stocks_COVID/Stock_Airlines_x_COVID.ipynb | ###Markdown
**Analysis of the Stock Prices of Brazilian Airlines during the COVID-19 Pandemic** The goal of this notebook is to analyze the impact that the three largest Brazilian airlines (Azul, Gol and Latam) suffered in the initial period of the COVID-19 pandemic, with flight cancellations and the closing of air borders. To do this, we will analyze the variation in the value of the shares traded on the *New York* Stock Exchange (NYSE) using the **yfinance** library, observing the financial movement shortly before the pandemic containment measures came into force in that country (mid-March 2020). --- This notebook is part of my early studies in data science. I do not intend to determine whether or not this is the right moment to buy shares of these airlines. In fact, I am not even an investor! Nor do I intend to influence anyone in that direction. Since I work in aviation, I just wanted to do something related to this field.
###Code
# Installing yfinance
!pip install yfinance
# Importing the yfinance library after installation. Very simple to use, just like its documentation.
# The stock tickers refer to the US exchange (NYSE), so all values are in USD.
import yfinance as yf
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('darkgrid')
###Output
_____no_output_____
###Markdown
1. Importing the stocks studied here through yfinance (only 2 lines of code) and defining the desired period. More information about the yfinance library [here](https://pypi.org/project/yfinance/). You can also check this article by Ritvik Kharkar [here](https://towardsdatascience.com/how-to-get-stock-data-using-python-c0de1df17e75).
2. I chose the stocks of the airlines Azul, Gol and Latam (LTM).
3. Day-by-day quotes (period = 1d, but other options are available) from January 1 to August 12, 2020 (US date format yyyy-mm-dd).
###Code
tickers = yf.Tickers('azul gol ltm')
tickerdf = tickers.history(period='1d', start='2020-1-1', end='2020-8-13')
tickerdf
###Output
[*********************100%***********************] 3 of 3 completed
###Markdown
After importing the data, we see that we have 155 rows and 21 columns. The dataframe has 6 (six) columns without any values (3 under _Dividends_ and 3 under _Stock Splits_) that are of no use for the analysis and will therefore be deleted from the dataframe. Only the values of the _**Volume**_ and _**Close**_ columns will be used; the _Open_, _High_, _Low_, _Dividends_ and _Stock Splits_ columns will also be dropped since they are not needed for the purpose of this analysis.
###Code
# Dropping the Open, High, Low, Dividends and Stock Splits columns
tickerdf.drop(['Open', 'High', 'Low','Dividends', 'Stock Splits'], axis=1, inplace = True)
tickerdf.head()
###Output
_____no_output_____
###Markdown
Meaning of each column: * **`Close:`** the stock price at the end of the day. * **`High:`** the highest price the stock reached during the day. * **`Low:`** the lowest price the stock reached during the day. * **`Open:`** the stock price at the start of the day. * **`Volume:`** how many shares were traded that day.
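To compare the impact across the three airlines on a common scale, a short sketch like the one below (an illustrative addition, assuming `tickerdf` was built as above) normalizes each closing price to its first value of 2020:

```python
# Normalize each airline's closing price to its first trading day of 2020 so the
# relative drop during the pandemic can be compared directly across tickers.
close = tickerdf['Close']              # one column per ticker: AZUL, GOL, LTM
normalized = close / close.iloc[0]     # every series starts at 1.0

fig, ax = plt.subplots(figsize=(16, 6))
normalized.plot(ax=ax)
ax.set_title('Closing price relative to the first trading day of 2020')
ax.set_ylabel('Price / first close')
plt.show()
```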
###Code
# Plotting the closing price of the stocks
fig,ax = plt.subplots(figsize=(16,6))
plt.plot(tickerdf.index, tickerdf.Close);
plt.title('Preço de Fechamento das Ações das Cias Aérea AZUL, GOL e LATAM', fontsize=20)
plt.ylabel('Preço do Fechamento (USD)')
plt.xlabel('Período Estudado')
plt.legend(tickerdf.Close)
plt.show()
# Plotting the trading volume of the stocks
fig,ax = plt.subplots(figsize=(16,6))
plt.plot(tickerdf.index, tickerdf.Volume);
plt.title('Volume de Negociação das Ações das Cias Aérea AZUL, GOL e LATAM', fontsize = 20)
plt.ylabel('Volume de Negociação')
plt.xlabel('Período Estudado')
plt.legend(tickerdf.Volume)
plt.show()
###Output
_____no_output_____ |
prediction/table_lr-nn.ipynb | ###Markdown
Forward inference
###Code
for metric in metrics:
df = pd.DataFrame()
for framework in frameworks:
df["LR"] = pd.read_csv("logistic_regression/data/{}_obs_{}_forward.csv".format(metric, framework), header=None, index_col=0)[1]
df["NN"] = pd.read_csv("neural_network/data/{}_obs_{}_forward.csv".format(metric, framework), header=None, index_col=0)[1]
ci = pd.read_csv("data/{}_lr-nn_{}_forward.csv".format(metric, framework), header=0, index_col=None).round(decimals=2)
df["CI (99.9%)"] = ["{:4.2f} to {:4.2f}".format(ci["CI_LOWER"][i], ci["CI_UPPER"][i]) for i in range(len(ci))]
df["LR"] = ["{:4.2f}".format(v) for v in df["LR"]]
df["NN"] = ["{:4.2f}".format(v) for v in df["NN"]]
df.to_csv("data/{}_lr-nn_ci_{}_forward.csv".format(metric, framework), columns=["LR", "NN", "CI (99.9%)"])
###Output
_____no_output_____
###Markdown
Reverse inference
###Code
for metric in metrics:
for framework in frameworks:
df = pd.DataFrame()
df["LR"] = pd.read_csv("logistic_regression/data/{}_obs_{}_reverse.csv".format(metric, framework), header=None, index_col=0)[1]
df_nn = pd.read_csv("neural_network/data/{}_obs_{}_reverse.csv".format(metric, framework), header=None, index_col=0)[1]
df_nn.index = df.index
df["NN"] = df_nn
df = df.round(decimals=2)
ci = pd.read_csv("data/{}_lr-nn_{}_reverse.csv".format(metric, framework), header=0, index_col=None).round(decimals=2)
df["CI (99.9%)"] = ["{:4.2f} to {:4.2f}".format(ci["CI_LOWER"][i], ci["CI_UPPER"][i]) for i in range(len(ci))]
df["LR"] = ["{:4.2f}".format(v) for v in df["LR"]]
df["NN"] = ["{:4.2f}".format(v) for v in df["NN"]]
df.to_csv("data/{}_lr-nn_ci_{}_reverse.csv".format(metric, framework), columns=["LR", "NN", "CI (99.9%)"])
###Output
_____no_output_____ |
notebooks/RNN-Morse-features.ipynb | ###Markdown
Train model with noisy envelope - using dataset and data loader. Same flow as in `RNN-Morse-feature` but uses a data loader.
###Code
!pip install sounddevice torchinfo
!sudo apt-get install libportaudio2
###Output
_____no_output_____
###Markdown
Generate annotated raw signal. Generates the envelope after audio preprocessing. The resulting decimation factor is 128, so we take 1 out of every 128 samples from the original signal modulated at an 8 kHz sample rate. This uses a modified version of `encode_df` (`encode_df_decim`) in `MorseGen`, so the original ratio in samples per dit is respected. This effectively yields a floating-point ratio (shown in the printed output) for the decimated samples per dit (about 5.77 for the nominal values of 8 kHz sampling rate and 13 WPM Morse code speed). The SNR must be calculated in the FFT bin bandwidth. In the original `RNN-Morse-pytorch` notebook the bandwidth is 4 kHz / 256 = 15.625 Hz and the SNR is 3 dB. Theoretically you would apply the FFT ratio to the original SNR, but this does not work in practice: you have to take a much lower SNR to obtain a similar envelope. Base functions
###Code
import random
import string
import numpy as np
def random_partition(k, iterable):
results = [[] for i in range(k)]
for value in iterable:
x = random.randrange(k)
results[x].append(value)
return results
def random_strings(k, rawchars):
results = ["" for i in range(k)]
for c in rawchars:
x = random.randrange(k)
results[x] += c
return results
def get_morse_str(nchars=132, nwords=27):
np.random.seed(0)
rawchars = ''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(nchars))
words = random_strings(nwords, rawchars)
morsestr = ' '.join(words)
return morsestr
###Output
_____no_output_____
###Markdown
Try it ...
###Code
morsestr = get_morse_str()
print(len(morsestr), morsestr)
###Output
_____no_output_____
###Markdown
Signal and labels
###Code
import MorseGen
import matplotlib.pyplot as plt
import numpy as np
def get_new_data(SNR_dB=-23, nchars=132, nwords=27, phrase=None):
if not phrase:
phrase = MorseGen.get_morse_str(nchars=nchars, nwords=nwords)
print(len(phrase), phrase)
Fs = 8000
morse_gen = MorseGen.Morse()
samples_per_dit = morse_gen.nb_samples_per_dit(Fs, 13)
n_prev = int((samples_per_dit/128)*12) + 1 # number of samples to look back is slightly more than a dit-dah and a word space (2+3+7=12)
print(f'Samples per dit at {Fs} Hz is {samples_per_dit}. Decimation is {samples_per_dit/128:.2f}. Look back is {n_prev}.')
label_df = morse_gen.encode_df_decim(phrase, samples_per_dit, 128)
# keep the envelope
label_df_env = label_df.drop(columns=['dit','dah', 'ele', 'chr', 'wrd'])
# remove the envelope
label_df.drop(columns=['env'], inplace=True)
SNR_linear = 10.0**(SNR_dB/10.0)
SNR_linear *= 256 # Apply original FFT
print(f'Resulting SNR for original {SNR_dB} dB is {(10.0 * np.log10(SNR_linear)):.2f} dB')
t = np.linspace(0, len(label_df_env)-1, len(label_df_env))
morsecode = label_df_env.env
power = np.sum(morsecode**2)/len(morsecode)
noise_power = power/SNR_linear
noise = np.sqrt(noise_power)*np.random.normal(0, 1, len(morsecode))
# noise = butter_lowpass_filter(raw_noise, 0.9, 3) # Noise is also filtered in the original setup from audio. This empirically simulates it
signal = morsecode + noise
return signal, label_df, n_prev
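
# Quick sanity check of the figures quoted above (a rough sketch, assuming the
# standard PARIS timing of 1.2 s / WPM per dit, which MorseGen appears to use):
#   dit duration      = 1.2 / 13 s           ~ 92.3 ms
#   samples per dit   = 8000 * 1.2 / 13      ~ 738.5 audio samples
#   after decimation  = 738.5 / 128          ~ 5.77 samples per dit
#   look-back n_prev  = int(5.77 * 12) + 1   = 70 samples
print('approx decimated samples per dit:', (8000 * 1.2 / 13) / 128)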
###Output
_____no_output_____
###Markdown
Try it ...
###Code
signal, label_df, n_prev = get_new_data(-17)
# Show
print(n_prev)
print(type(signal), signal.shape)
print(type(label_df), label_df.shape)
x0 = 0
x1 = 1500
plt.figure(figsize=(50,6))
plt.plot(signal[x0:x1]*0.5, label="sig")
plt.plot(label_df[x0:x1].dit*0.9 + 1.0, label='dit')
plt.plot(label_df[x0:x1].dah*0.9 + 2.0, label='dah')
plt.plot(label_df[x0:x1].ele*0.9 + 3.0, label='ele')
plt.plot(label_df[x0:x1].chr*0.9 + 4.0, label='chr')
plt.plot(label_df[x0:x1].wrd*0.9 + 5.0, label='wrd')
plt.title("signal and labels")
plt.legend()
plt.grid()
###Output
_____no_output_____
###Markdown
Create data loader - Define dataset
###Code
import torch
class MorsekeyingDataset(torch.utils.data.Dataset):
def __init__(self, device, SNR_dB=-23, nchars=132, nwords=27, phrase=None):
self.signal, self.label_df, self.seq_len = get_new_data(SNR_dB, nchars, nwords, phrase)
self.X = torch.FloatTensor(self.signal.values).to(device)
self.y = torch.FloatTensor(self.label_df.values).to(device)
def __len__(self):
return self.X.__len__() - self.seq_len
def __getitem__(self, index):
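        # Each item is a window of seq_len consecutive input samples and, as the
        # target, the label vector of the sample that immediately follows the window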
return (self.X[index:index+self.seq_len], self.y[index+self.seq_len])
def get_signal(self):
return self.signal
def get_labels(self):
return self.label_df
def get_seq_len(self):
        return self.seq_len  # seq_len is an int attribute, not a method
###Output
_____no_output_____
###Markdown
Define data loader
###Code
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
train_dataset = MorsekeyingDataset(device, -25, 132*2, 27*2)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=1, shuffle=False) # Batch size must be 1
signal = train_dataset.get_signal()
label_df = train_dataset.get_labels()
print(type(signal), signal.shape)
print(type(label_df), label_df.shape)
x0 = 0
x1 = 1500
plt.figure(figsize=(50,6))
plt.plot(signal[x0:x1]*0.5, label="sig")
plt.plot(label_df[x0:x1].dit*0.9 + 1.0, label='dit')
plt.plot(label_df[x0:x1].dah*0.9 + 2.0, label='dah')
plt.plot(label_df[x0:x1].ele*0.9 + 3.0, label='ele')
plt.plot(label_df[x0:x1].chr*0.9 + 4.0, label='chr')
plt.plot(label_df[x0:x1].wrd*0.9 + 5.0, label='wrd')
plt.title("signal and labels")
plt.legend()
plt.grid()
###Output
_____no_output_____
###Markdown
Create model. Let's create the model now so we have an idea of its inputs and outputs.
###Code
import torch
import torch.nn as nn
class MorseEnvLSTM(nn.Module):
"""
Initial implementation
"""
def __init__(self, device, input_size=1, hidden_layer_size=8, output_size=6):
super().__init__()
self.device = device # This is the only way to get things work properly with device
self.hidden_layer_size = hidden_layer_size
self.lstm = nn.LSTM(input_size=input_size, hidden_size=hidden_layer_size)
self.linear = nn.Linear(hidden_layer_size, output_size)
self.hidden_cell = (torch.zeros(1, 1, self.hidden_layer_size).to(self.device),
torch.zeros(1, 1, self.hidden_layer_size).to(self.device))
def forward(self, input_seq):
lstm_out, self.hidden_cell = self.lstm(input_seq.view(len(input_seq), 1, -1), self.hidden_cell)
predictions = self.linear(lstm_out.view(len(input_seq), -1))
return predictions[-1]
def zero_hidden_cell(self):
self.hidden_cell = (
torch.zeros(1, 1, self.hidden_layer_size).to(device),
torch.zeros(1, 1, self.hidden_layer_size).to(device)
)
class MorseEnvBatchedLSTM(nn.Module):
"""
    Variant of the initial implementation with batched input reshaping and a softmax output layer
"""
def __init__(self, device, input_size=1, hidden_layer_size=8, output_size=6):
super().__init__()
self.device = device # This is the only way to get things work properly with device
self.hidden_layer_size = hidden_layer_size
self.lstm = nn.LSTM(input_size=input_size, hidden_size=hidden_layer_size)
self.linear = nn.Linear(hidden_layer_size, output_size)
self.hidden_cell = (torch.zeros(1, 1, self.hidden_layer_size).to(self.device),
torch.zeros(1, 1, self.hidden_layer_size).to(self.device))
self.m = nn.Softmax(dim=-1)
def forward(self, input_seq):
#print(len(input_seq), input_seq.shape, input_seq.view(-1, 1, 1).shape)
lstm_out, self.hidden_cell = self.lstm(input_seq.view(-1, 1, 1), self.hidden_cell)
predictions = self.linear(lstm_out.view(len(input_seq), -1))
return self.m(predictions[-1])
def zero_hidden_cell(self):
self.hidden_cell = (
torch.zeros(1, 1, self.hidden_layer_size).to(device),
torch.zeros(1, 1, self.hidden_layer_size).to(device)
)
class MorseEnvLSTM2(nn.Module):
"""
LSTM stack
"""
def __init__(self, device, input_size=1, hidden_layer_size=8, output_size=6, dropout=0.2):
super().__init__()
self.device = device # This is the only way to get things work properly with device
self.hidden_layer_size = hidden_layer_size
self.lstm = nn.LSTM(input_size, hidden_layer_size, num_layers=2, dropout=dropout)
self.linear = nn.Linear(hidden_layer_size, output_size)
self.hidden_cell = (torch.zeros(2, 1, self.hidden_layer_size).to(self.device),
torch.zeros(2, 1, self.hidden_layer_size).to(self.device))
def forward(self, input_seq):
lstm_out, self.hidden_cell = self.lstm(input_seq.view(len(input_seq), 1, -1), self.hidden_cell)
predictions = self.linear(lstm_out.view(len(input_seq), -1))
return predictions[-1]
def zero_hidden_cell(self):
self.hidden_cell = (
torch.zeros(2, 1, self.hidden_layer_size).to(device),
torch.zeros(2, 1, self.hidden_layer_size).to(device)
)
class MorseEnvNoHLSTM(nn.Module):
"""
Do not keep hidden cell
"""
def __init__(self, device, input_size=1, hidden_layer_size=8, output_size=6):
super().__init__()
self.device = device # This is the only way to get things work properly with device
self.hidden_layer_size = hidden_layer_size
self.lstm = nn.LSTM(input_size, hidden_layer_size)
self.linear = nn.Linear(hidden_layer_size, output_size)
def forward(self, input_seq):
h0 = torch.zeros(1, 1, self.hidden_layer_size).to(self.device)
c0 = torch.zeros(1, 1, self.hidden_layer_size).to(self.device)
lstm_out, _ = self.lstm(input_seq.view(len(input_seq), 1, -1), (h0, c0))
predictions = self.linear(lstm_out.view(len(input_seq), -1))
return predictions[-1]
class MorseEnvBiLSTM(nn.Module):
"""
Attempt Bidirectional LSTM: does not work
"""
def __init__(self, device, input_size=1, hidden_size=12, num_layers=1, num_classes=6):
super(MorseEnvBiLSTM, self).__init__()
self.device = device # This is the only way to get things work properly with device
self.hidden_size = hidden_size
self.num_layers = num_layers
self.lstm = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True, bidirectional=True)
self.fc = nn.Linear(hidden_size*2, num_classes) # 2 for bidirection
def forward(self, x):
# Set initial states
h0 = torch.zeros(self.num_layers*2, x.size(0), self.hidden_size).to(device) # 2 for bidirection
c0 = torch.zeros(self.num_layers*2, x.size(0), self.hidden_size).to(device)
# Forward propagate LSTM
out, _ = self.lstm(x.view(len(x), 1, -1), (h0, c0)) # out: tensor of shape (batch_size, seq_length, hidden_size*2)
# Decode the hidden state of the last time step
out = self.fc(out[:, -1, :])
return out[-1]
###Output
_____no_output_____
###Markdown
Create the model instance and print the details
###Code
# Hidden layers:
# 4: good at reconstructing signal, some post-processing necessary for dit/dah, word silence is weak and undistinguishable from character silence
# 5: fairly good at reconstructing signal, but word space sense is lost
# 6: more contrast on all signals and word space sense is good but a spike appears in the silence in predicted envelope
morse_env_model = MorseEnvBatchedLSTM(device, hidden_layer_size=7, output_size=5).to(device) # This is the only way to get things work properly with device
morse_env_loss_function = nn.MSELoss()
morse_env_optimizer = torch.optim.Adam(morse_env_model.parameters(), lr=0.001)
print(morse_env_model)
print(morse_env_model.device)
# Input and hidden tensors are not at the same device, found input tensor at cuda:0 and hidden tensor at cpu
for m in morse_env_model.parameters():
print(m.shape, m.device)
X_t = torch.rand(n_prev)
#X_t = torch.tensor([-0.9648, -0.9385, -0.8769, -0.8901, -0.9253, -0.8637, -0.8066, -0.8066, -0.8593, -0.9341, -1.0000, -0.9385])
X_t = X_t.to(device)  # use the selected device so this also runs on CPU-only machines
print(X_t)
morse_env_model(X_t)
import torchinfo
channels=10
H=n_prev
W=1
torchinfo.summary(morse_env_model, input_size=(channels, H, W))
###Output
_____no_output_____
###Markdown
Train model
###Code
it = iter(train_loader)
X, y = next(it)
print(X.reshape(70,1).shape, X[0].shape, y[0].shape)
print(X[0], y[0])
X, y = next(it)
print(X[0], y[0])
%%time
epochs = 30
morse_env_model.train()
for i in range(epochs):
train_losses = []
for j, train in enumerate(train_loader):
X_train = train[0][0]
y_train = train[1][0]
morse_env_optimizer.zero_grad()
if morse_env_model.__class__.__name__ in ["MorseEnvLSTM", "MorseEnvLSTM2", "MorseEnvBatchedLSTM"]:
morse_env_model.zero_hidden_cell() # this model needs to reset the hidden cell
y_pred = morse_env_model(X_train)
single_loss = morse_env_loss_function(y_pred, y_train)
single_loss.backward()
morse_env_optimizer.step()
train_losses.append(single_loss.item())
if j % 1000 == 0:
train_loss = np.mean(train_losses)
train_std = np.std(train_losses)
print(f' train {j}/{len(train_loader)} loss: {train_loss:6.4f} std: {train_std:6.4f}')
train_loss = np.mean(train_losses)
print(f'epoch: {i+1:3} loss: {train_loss:6.4f} std: {train_std:6.4f}')
print(f'final: {i+1:3} epochs loss: {train_loss:6.4f} std: {train_std:6.4f}')
torch.save(morse_env_model.state_dict(), 'models/morse_env_model')
###Output
_____no_output_____
###Markdown
Predict (test)
###Code
new_phrase = "VVV DE F4EXB VVV DE F4EXB VVV DE F4EXB VVV DE F4EXB VVV DE F4EXB VVV DE F4EXB VVV DE F4EXB VVV DE F4EXB VVV DE F4EXB VVV DE F4EXB VVV DE F4EXB VVV DE F4EXB"
test_dataset = MorsekeyingDataset(device, -24, 132, 27, new_phrase)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=1, shuffle=False) # Batch size must be 1
signal = test_dataset.get_signal()
label_df = test_dataset.get_labels()
print(type(signal), signal.shape)
print(type(label_df), label_df.shape)
x0 = 0
x1 = 3000
plt.figure(figsize=(50,6))
plt.plot(signal[x0:x1]*0.5, label="sig")
plt.plot(label_df[x0:x1].dit*0.9 + 1.0, label='dit')
plt.plot(label_df[x0:x1].dah*0.9 + 2.0, label='dah')
plt.plot(label_df[x0:x1].ele*0.9 + 3.0, label='ele')
plt.plot(label_df[x0:x1].chr*0.9 + 4.0, label='chr')
plt.plot(label_df[x0:x1].wrd*0.9 + 5.0, label='wrd')
plt.title("signal and labels")
plt.legend()
plt.grid()
%%time
p_dit_l = []
p_dah_l = []
p_ele_l = []
p_chr_l = []
p_wrd_l = []
y_test_a = []
morse_env_model.eval()
for X_test0, y_test0 in test_loader:
X_test = X_test0[0]
pred_val = morse_env_model(X_test).cpu()
p_dit_l.append(pred_val[0].item())
p_dah_l.append(pred_val[1].item())
p_ele_l.append(pred_val[2].item())
p_chr_l.append(pred_val[3].item())
p_wrd_l.append(pred_val[4].item())
y_test_a.append(y_test0[0,0] + y_test0[0,1])
p_dit = np.array(p_dit_l)
p_dah = np.array(p_dah_l)
p_ele = np.array(p_ele_l)
p_chr = np.array(p_chr_l)
p_wrd = np.array(p_wrd_l)
y_test_v = np.array(y_test_a)
# trim negative values
p_dit[p_dit < 0] = 0
p_dah[p_dah < 0] = 0
p_ele[p_ele < 0] = 0
p_chr[p_chr < 0] = 0
p_wrd[p_wrd < 0] = 0
plt.figure(figsize=(50,6))
plt.plot(y_test_v[:x1]*0.9, label="y")
plt.plot(p_dit[:x1]*0.9 + 1.0, label="dit")
plt.plot(p_dah[:x1]*0.9 + 2.0, label="dah")
plt.plot(p_ele[:x1]*0.9 + 3.0, label="ele")
plt.plot(p_chr[:x1]*0.9 + 4.0, label="chr")
plt.plot(p_wrd[:x1]*0.9 + 5.0, label="wrd")
plt.title("Predictions")
plt.legend()
plt.grid()
plt.savefig('img/pred.png')
l_test = signal[n_prev:].to_numpy()
sig = p_dit[:x1] + p_dah[:x1]
sig = (sig - min(sig)) / (max(sig) - min(sig))
mor = y_test_v[:x1]
plt.figure(figsize=(30,3))
plt.plot(sig, label="mod")
plt.plot(l_test[:x1] + 1.0, label="sig")
plt.plot(mor*2.2, label="mor", linestyle='--')
plt.title("reconstructed signal modulation with 'dah' and 'dit'")
plt.legend()
plt.grid()
plt.figure(figsize=(25,4))
plt.plot(p_dit[:x1], label='dit')
plt.plot(p_dah[:x1], label='dah')
plt.plot(mor*0.5 + 1.0, label='mor')
plt.title("'dit' and 'dah' symbols prediction vs modulation")
plt.legend()
plt.grid()
plt.figure(figsize=(25,3))
plt.plot(p_ele[:x1], label='ele')
plt.plot(mor, label='mor')
plt.title("Element space prediction vs modulation")
plt.legend()
plt.figure(figsize=(25,3))
plt.plot(p_chr[:x1] ,label='chr')
plt.plot(mor, label='mor')
plt.title("Character space prediction vs modulation")
plt.legend()
plt.figure(figsize=(25,3))
plt.plot(p_wrd[:x1], label='wrd')
plt.plot(mor, label='mor')
plt.title("Word space prediction vs modulation")
plt.legend()
#p_sig = 1.0 - (p_ele + p_chr + p_wrd)
p_sig = p_dit + p_dah
p_ditd = p_dit - p_dah
p_dahd = p_dah - p_dit
plt.figure(figsize=(50,8))
plt.plot(l_test[:x1]*0.9, label="inp")
plt.plot(p_sig[:x1]*0.9 + 1.0, label="sig")
plt.plot(p_dit[:x1]*0.9 + 2.0, label="dit")
plt.plot(p_dah[:x1]*0.9 + 3.0, label="dah")
plt.plot(p_ele[:x1]*0.9 + 4.0, label="ele")
plt.plot(p_chr[:x1]*0.9 + 5.0, label="chr")
plt.plot(p_wrd[:x1]*0.9 + 6.0, label="wrd")
plt.plot(mor*7.2, label="mor")
plt.title("Altogether vs signal and modulation")
plt.legend()
plt.grid()
plt.figure(figsize=(50,4))
plt.plot(p_dit[:x1]*0.9 + 0.0, label="dit")
plt.plot(p_dahd[:x1]*0.9 + 1.0, label="dahd")
plt.plot(p_ele[:x1]*0.9 + 2.0, label="ele")
plt.plot(mor*3.2, label="mor")
plt.title("Differential dah")
plt.legend()
plt.grid()
import scipy as sp
import scipy.special
from scipy.io import wavfile
Fcode = 600
Fs = 8000
noverlap = 128
decim = 128
emod = np.array([sp.special.expit(8*(0.9*x-0.5)) for x in sig])
#emod = sig
emod /= max(emod)
remod = np.array([[x]*noverlap for x in emod]).flatten()
wt = (Fcode / Fs)*2*np.pi
tone = np.sin(np.arange(len(remod))*wt)
wavfile.write('audio/re.wav', Fs, tone*remod)
ref_mod = np.array([[x]*decim for x in mor]).flatten()
plt.figure(figsize=(50,5))
plt.plot(tone*remod)
plt.plot(ref_mod*1.2, label='mor')
plt.title("reconstructed signal")
plt.grid()
# .4QTV4PB EZ1 JBGJ TT1W4M...
# 7U7K 0DC55B H ZN0J Q9 H2X0 LZ16A ECA2DE 6A2 NUPU 67IL6EIH YVZA 5OTGC3U C3R PGW RS0 84QTV4PB EZ1 JBGJ TT1W4M5PBJ GZVLWXQG 7POU6 FMTXA N3CZ Y1Q9VZ6 9TVL CWP8KSB'
omod = l_test[:x1]
orig_mod = np.array([[x]*decim for x in omod]).flatten()
orig_mod /= max(orig_mod)
orig_mod *= 1.5
wavfile.write('audio/or.wav', Fs, tone*orig_mod)
plt.figure(figsize=(25,5))
plt.plot(tone*orig_mod)
plt.plot(ref_mod*1.2, label='mor')
plt.title("original filtered signal")
plt.grid()
import scipy as sp
sx = np.linspace(0, 1, 121)
sy = sp.special.expit(8*(0.8*sx-0.5))
plt.plot(sx, sy)
plt.grid()
plt.xlabel('x')
plt.title('expit(x)')
plt.show()
###Output
_____no_output_____ |
docs/_downloads/cad5020cab595c3bf83a518b7e4d4125/neural_style_tutorial.ipynb | ###Markdown
PyTorch를 이용한 신경망-변환(Neural-Transfer)======================================================**저자**: `Alexis Jacq `_ **번역**: `김봉모 `_소개------------------환영합니다!. 이 문서는 Leon A. Gatys와 Alexander S. Ecker, Matthias Bethge 가 개발한알고리즘인 `Neural-Style `__ 를 구현하는 방법에 대해설명하는 튜토리얼입니다.신경망 뭐라고?~~~~~~~~~~~~~~~~~~~신경망 스타일(Neural-Style), 혹은 신경망 변화(Neural-Transfer)는 콘텐츠 이미지(예, 거북이)와 스타일 이미지(예, 파도를 그린 예술 작품) 을 입력으로 받아 콘텐츠 이미지의 모양대로 스타일 이미지의'그리는 방식'을 이용해 그린 것처럼 결과를 내는 알고리즘입니다:.. figure:: /_static/img/neural-style/neuralstyle.png :alt: content1어떻게 동작합니까?~~~~~~~~~~~~~~~~~~~~~~~원리는 간단합니다. 2개의 거리(distance)를 정의합니다. 하나는 콘텐츠( $D_C$ )를 위한 것이고 다른 하나는 스타일( $D_S$ )을 위한 것입니다.$D_C$ 는 콘텐츠 이미지와 스타일 이미지 간의 콘텐츠가 얼마나 차이가 있는지 측정을 합니다. 반면에, $D_S$ 는 콘텐츠 이미지와 스타일 이미지 간의 스타일에서 얼마나 차이가 있는지를 측정합니다.그런 다음, 세 번째 이미지를 입력(예, 노이즈로 구성된 이미지)으로부터 콘텐츠 이미지와의 콘텐츠 거리 및 스타일 이미지와의 스타일 거리를 최소화하는 방향으로 세 번째 이미지를 변환합니다.그래서. 어떻게 동작하냐고요?^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^자, 더 나아가려면 수학이 필요합니다. $C_{nn}$ 를 사전 훈련된 깊은 합성곱 신경망 네트워크(pre-trained deep convolutional neural network)라고 하고, $X$ 를 어떤 이미지라고 해보겠습니다.$C_{nn}(X)$ 은 입력 이미지 X를 입력으로 해서 CNN 을 통과한 네트워크(모든 레이어들의 특징 맵(feature map)을 포함하는)를 의미합니다.$F_{XL} \in C_{nn}(X)$ 는 깊이 레벨 L에서의 특징 맵(feature map)을 의미하고, 모두 벡터화(vectorized)되고 연결된(concatenated) 하나의 단일 벡터입니다.그리고, $Y$ 를 이미지 $X$ 와 크기가 같은 이미지라고 하면, 레이어 $L$ 에 해당하는 콘텐츠의 거리를 정의할 수 있습니다:\begin{align}D_C^L(X,Y) = \|F_{XL} - F_{YL}\|^2 = \sum_i (F_{XL}(i) - F_{YL}(i))^2\end{align}$F_{XL}(i)$ 는 $F_{XL}$ 의 $i^{번째}$ 요소(element) 입니다.스타일에 해당하는 내용은 위 내용보다 조금 더 신경 쓸 부분이 있습니다.$F_{XL}^k$ 를 레이어 $L$ 에서 특징 맵(feature map) $K$ 의 $k^{번째}$ 에 해당하는벡터화된 $k \leq K$ 라고 해 보겠습니다.스타일 $G_{XL}$ 의 $X$ 레이어에서 $L$ 은 모든 벡터화된 특징 맵(feature map) $F_{XL}^k$ 에서 $k \leq K$ 그람(Gram)으로 정의 됩니다.다시 말하면, $G_{XL}$ 는 $K$\ x\ $K$ 행렬과 요소 $G_{XL}(k,l)$ 의 $k^{번째}$ 줄과$l^{번째}$ 행의 $G_{XL}$ 는 $F_{XL}^k$ 와 $F_{XL}^l$ 간의벡터화 곱을 의미합니다:\begin{align}G_{XL}(k,l) = \langle F_{XL}^k, F_{XL}^l\rangle = \sum_i F_{XL}^k(i) . F_{XL}^l(i)\end{align}$F_{XL}^k(i)$ 는 $F_{XL}^k$ 의 $i^{번째}$ 요소 입니다.우리는 $G_{XL}(k,l)$ 를 특징 맵(feature map) $k$ 와 $l$ 간의 상관 관계(correlation)에 대한 척도로 볼 수 있습니다.그런 의미에서, $G_{XL}$ 는 특징 맵(feature map) $X$ 의 레이어 $L$ 에서의 상관 관계 행렬을 나타냅니다.$G_{XL}$ 의 크기는 단지 특징 맵(feature map)의 숫자에만 의존성이 있고,$X$ 의 크기에는 의존성이 없다는 것을 유의 해야 합니다.그러면, 만약 $Y$ 가 다른 *어떤 크기의* 이미지라면,우리는 다음과 같이 레이어 $L$ 에서 스타일의 거리를 정의 합니다.\begin{align}D_S^L(X,Y) = \|G_{XL} - G_{YL}\|^2 = \sum_{k,l} (G_{XL}(k,l) - G_{YL}(k,l))^2\end{align}$D_C(X,C)$ 의 한 번의 최소화를 위해서, 이미지 변수 $X$ 와 대상 콘텐츠-이미지 $C$ 와$D_S(X,S)$ 와 $X$ 와 대상 스타일-이미지 $S$ , 둘 다 여러 레이어들에 대해서 계산되야 하고,우리는 원하는 레이어 각각에서의 거리의 그라디언트를 계산하고 더합니다( $X$ 와 관련된 도함수):\begin{align}\nabla_{ extit{total}}(X,S,C) = \sum_{L_C} w_{CL_C}.\nabla_{ extit{content}}^{L_C}(X,C) + \sum_{L_S} w_{SL_S}.\nabla_{ extit{style}}^{L_S}(X,S)\end{align}$L_C$ 와 $L_S$ 는 각각 콘텐츠와 스타일의 원하는 (임의 상태의) 레이어들을 의미하고,$w_{CL_C}$ 와 $w_{SL_S}$ 는 원하는 레이어에서의스타일 또는 콘텐츠의 가중치를 (임의 상태의) 의미합니다.그리고 나서, 우리는 $X$ 에 대해 경사 하강법을 실행합니다.\begin{align}X \leftarrow X - \alpha \nabla_{ extit{total}}(X,S,C)\end{align}네, 수학은 이정도면 충분합니다. 만약 더 깊이 알고 싶다면 (그레이언트를 어떻게 계산하는지),Leon A. Gatys and AL가 작성한 **원래의 논문을 읽어 볼 것을 권장합니다** 논문에는 앞서 설명한 내용들 모두에 대해 보다 자세하고 명확하게 얘기합니다.구현을 위해서 PyTorch에서는 이미 우리가 필요로하는 모든 것을 갖추고 있습니다. 실제로 PyTorch를 사용하면 라이브러리의 함수를 사용하는 동안 모든 그라디언트(Gradient)가 자동,동적으로 계산됩니다.(라이브러리에서 함수를 사용하는 동안)이런 점이 PyTorch에서 알고리즘 구현을 매우 편리하게 합니다.PyTorch 구현----------------------위의 모든 수학을 이해할 수 없다면, 구현함으로써 이해도를 높여 갈 수 있을 것 입니다. 
PyTorch를 이용할 예정이라면, 먼저 이 문서 :doc:`Introduction to PyTorch ` 를 읽어볼 것을 추천 합니다.패키지들~~~~~~~~우리는 다음 패키지들을 활용 할 것입니다:- ``torch`` , ``torch.nn``, ``numpy`` (PyTorch로 신경망 처리를 위한 필수 패키지)- ``torch.optim`` (효율적인 그라디언트 디센트)- ``PIL`` , ``PIL.Image`` , ``matplotlib.pyplot`` (이미지를 읽고 보여주는 패키지)- ``torchvision.transforms`` (PIL타입의 이미지들을 토치 텐서 형태로 변형해주는 패키지)- ``torchvision.models`` (사전 훈련된 모델들의 학습 또는 읽기 패키지)- ``copy`` (모델들의 깊은 복사를 위한 시스템 패키지)
###Code
from __future__ import print_function
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from PIL import Image
import matplotlib.pyplot as plt
import torchvision.transforms as transforms
import torchvision.models as models
import copy
###Output
_____no_output_____
###Markdown
쿠다(CUDA)~~~~~~~~~~~~~~컴퓨터에 GPU가 있는 경우, 특히 VGG와 같이 깊은 네트워크를 사용하려는 경우 알고리즘을 CUDA 환경에서 실행하는 것이 좋습니다. CUDA를 쓰기 위해서 Pytorch에서는 ``torch.cuda.is_available()`` 를 제공하는데, 작업하는 컴퓨터에서 GPU 사용이 가능하면 ``True`` 를 리턴 합니다.이후로, 우리는 ``.cuda()`` 라는 메소드를 사용하여 모듈과 관련된 할당된 프로세스를 CPU에서 GPU로 수 있습니다.이 모듈을 CPU로 되돌리고 싶을 때에는 (예 : numpy에서 사용), 우리는 ``.cpu ()`` 메소드를 사용하면 됩니다.마지막으로, ``.type(dtype)`` 메소드는 ``torch.FloatTensor`` 타입을 GPU에서 사용 할 수 있도록 ``torch.cuda.FloatTensor`` 로 변환하는데 사용할 수 있습니다.
###Code
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
###Output
_____no_output_____
###Markdown
이미지 읽기~~~~~~~~~~~~~구현을 간단하게 하기 위해서, 스타일 이미지와 콘텐츠 이미지의 크기를 동일하게 맞추어서 시작합니다.그런 다음 원하는 출력 이미지 크기로 확장 시킵니다.(본 예제에서는 128이나 512로 하는데 GPU가 가능한 상황에 맞게 선택해서 하세요.)그리고 영상 데이터를 토치 텐서로 변환하고, 신경망 네트워크에 사용할 수 있도록 준비합니다... Note:: 튜토리얼을 실행하는 데 필요한 이미지를 다운로드하는 링크는 다음과 같습니다.: `picasso.jpg `__ 와 `dancing.jpg `__. 위 두개의 이미지를 다운로드 받아 디렉토리 이름 ``images`` 에 추가하세요.
###Code
# 출력 이미지의 원하는 크기를 정하세요.
imsize = 512 if torch.cuda.is_available() else 128 # gpu가 없다면 작은 크기로
loader = transforms.Compose([
transforms.Resize(imsize), # 입력 영상 크기를 맞춤
transforms.ToTensor()]) # 토치 텐서로 변환
def image_loader(image_name):
image = Image.open(image_name)
# 네트워크의 입력 차원을 맞추기 위해 필요한 가짜 배치 차원
image = loader(image).unsqueeze(0)
return image.to(device, torch.float)
style_img = image_loader("./data/images/neural-style/picasso.jpg")
content_img = image_loader("./data/images/neural-style/dancing.jpg")
assert style_img.size() == content_img.size(), \
"we need to import style and content images of the same size"
###Output
_____no_output_____
###Markdown
가져온 PIL 이미지는 0에서 255 사이의 이미지 픽셀값을 가집니다. 토치 텐서로 변환하면 0에서 1의 값으로 변환됩니다. 이는 중요한 디테일로: 토치 라이브러리의 신경망은 0에서 1의 텐서 이미지로 학습하게 됩니다.0-255 텐서 이미지를 네트워크에 공급 하려고 하면 활성화된(activated) 특징 맵(feature map)은 의미가 없습니다.(역자주, 입력 값에 따라 RELU와 같은 활성화 레이어에서 입력으로 되는 값의 범위가 완전히 다르기 때문)Caffe 라이브러리의 사전 훈련된 네트워크의 경우는 그렇지 않습니다: 해당 모델들은 0에서 255 사이 값의 텐서 이미지로 학습 되었습니다.이미지 표시하기~~~~~~~~~~~~~~~~~~~~우리는 이미지를 표시하기 위해 ``plt.imshow`` 를 이용합니다. 그러기 위해 우선 텐서를 PIL 이미지로 변환해 주겠습니다:
###Code
unloader = transforms.ToPILImage() # PIL 이미지로 재변환 합니다
plt.ion()
def imshow(tensor, title=None):
image = tensor.cpu().clone() # 텐서의 값에 변화가 적용되지 않도록 텐서를 복제합니다
image = image.squeeze(0) # 페이크 배치 차원을 제거 합니다
image = unloader(image)
plt.imshow(image)
if title is not None:
plt.title(title)
plt.pause(0.001) # 그리는 부분이 업데이트 될 수 있게 잠시 정지합니다
plt.figure()
imshow(style_img, title='Style Image')
plt.figure()
imshow(content_img, title='Content Image')
###Output
_____no_output_____
###Markdown
콘텐츠 로스~~~~~~~~~~~~콘텐츠 로스는 네트워크에서 $X$ 로 입력을 받았을 때 레이어 $L$ 에서 특징 맵(feature map) $F_{XL}$ 을 입력으로 가져 와서 이 이미지와 콘텐츠 이미지 사이의 가중치 콘텐츠 거리 $w_{CL}.D_C^L(X,C)$ 를 반환하는 기능입니다. 따라서, 가중치 $w_{CL}$ 및 목표 콘텐츠 $F_{CL}$ 은 함수의 파라미터 입니다.우리는 이 매개 변수를 입력으로 사용하는 생성자(constructor)가 있는 토치 모듈로 함수를 구현합니다. 거리 $\|F_{XL} - F_{YL}\|^2$ 는 세 번째 매개 변수로 명시된 기준 ``nn.MSELoss`` 를 사용하여계산할 수 있는 두 세트의 특징 맵(feature map) 사이의 평균 제곱 오차(MSE, Mean Square Error)입니다.우리는 신경망의 추가 모듈로서 각 레이어에 컨텐츠 로스를 추가 할 것 입니다. 이렇게 하면 입력 영상 $X$ 를 네트워크에 보낼 때마다 원하는 모든 레이어에서 모든 컨텐츠 로스가 계산되고 자동 그라디언트로 인해 모든 그라디언트가 계산됩니다. 이를 위해 우리는 입력을 리턴하는 ``forward`` 메소드를 만들기만 하면 됩니다: 모듈은 신경망의 ''투명 레이어'' 가 됩니다. 계산된 로스는 모듈의 매개 변수로 저장됩니다.마지막으로 그라디언트를 재구성하기 위해 nn.MSELoss의 ``backward`` 메서드를 호출하는 가짜 backward 메서드를 정의 합니다. 이 메서드는 계산된 로스를 반환 합니다. 이는 스타일 및 콘텐츠 로스의 진화를 표시하기 위해 그라디언트 디센트를 실행할 때 유용합니다.
###Code
class ContentLoss(nn.Module):
def __init__(self, target,):
super(ContentLoss, self).__init__()
# 그라디언트를 동적으로 계산하는 데 사용되는 트리에서 대상 콘텐츠를 '분리' 합니다.
# :이 값은 변수(variable)가 아니라 명시된 값입니다.
# 그렇지 않으면 기준의 전달 메소드가 오류를 발생 시킵니다.
self.target = target.detach()
def forward(self, input):
self.loss = F.mse_loss(input, self.target)
return input
###Output
_____no_output_____
###Markdown
.. Note:: **중요한 디테일**: 이 모듈은 ``ContentLoss`` 라고 이름 지어졌지만 진정한 PyTorch Loss 함수는 아닙니다. 컨텐츠 손실을 PyTorch Loss로 정의 하려면 PyTorch autograd Function을 생성 하고 ``backward`` 메소드에서 직접 그라디언트를 재계산/구현 해야 합니다.스타일 로스~~~~~~~~~~~~~~~~~~스타일 손실을 위해 우리는 레이어 $L$ 에서 $X$ 로 공급된(입력으로 하는) 신경망의 특징 맵(feature map) $F_{XL}$ 이 주어진 경우그램 생성 $G_{XL}$ 을 계산하는 모듈을 먼저 정의 해야 합니다. $\hat{F}_{XL}$ 을 KxN 행렬에 대한 $F_{XL}$의 모양을 변경한 버전이라고 하겠습니다.여기서, $K$는 레이어 $L$에서의 특징 맵(feature map)들의 수이고, $N$ 은 임의의 벡터화 된 특징 맵(feature map) $F_{XL}^k$ 의 길이가 됩니다. $F_{XL}^k$ 의 $k^{번째}$ 번째 줄은 $F_{XL}^k$ 입니다. math:`\hat{F}_{XL} \cdot \hat{F}_{XL}^T = G_{XL}` 인지 확인 해보길 바랍니다. 이를 확인해보면 모듈을 구현하는 것이 쉬워 집니다:
###Code
def gram_matrix(input):
a, b, c, d = input.size() # a=배치 크기(=1)
# b=특징 맵의 크기
# (c,d)=특징 맵(N=c*d)의 차원
features = input.view(a * b, c * d) # F_XL을 \hat F_XL로 크기 조정합니다
G = torch.mm(features, features.t()) # 그램 곱을 수행합니다
# 그램 행렬의 값을 각 특징 맵의 요소 숫자로 나누는 방식으로 '정규화'를 수행합니다.
return G.div(a * b * c * d)
###Output
_____no_output_____
###Markdown
특징 맵(feature map) 차원 $N$이 클수록, 그램(Gram) 행렬의 값이 커집니다. 따라서 $N$으로 정규화하지 않으면 첫번째 레이어에서 계산된 로스 (풀링 레이어 전에)는경사 하강법 동안 훨씬 더 중요하게 됩니다. (역자주 : 정규화를 하지 않으면 첫번째 레이어에서 계산된 값들의 가중치가 높아져 상대적으로 다른 레이어에서 계산한 값들의 반영이 적게 되버리기 때문에 정규화가 필요해집니다.)스타일 특징의 흥미로운 부분들은 가장 깊은 레이어에 있기 때문에 그렇게 동작하지 않도록 해야 합니다!그런 다음 스타일 로스 모듈은 콘텐츠 로스 모듈과 완전히 동일한 방식으로 구현되지만대상과 입력 간의 그램 매트릭스의 차이를 비교하게 됩니다
###Code
class StyleLoss(nn.Module):
def __init__(self, target_feature):
super(StyleLoss, self).__init__()
self.target = gram_matrix(target_feature).detach()
def forward(self, input):
G = gram_matrix(input)
self.loss = F.mse_loss(G, self.target)
return input
###Output
_____no_output_____
###Markdown
뉴럴 네트워크 읽기~~~~~~~~~~~~~~~~~~~~~~~자, 우리는 사전 훈련된 신경망을 가져와야 합니다. 이 논문에서와 같이, 우리는 19 레이어 층을 가지는 VGG(VGG19) 네트워크를 사전 훈련된 네트워크로 사용할 것입니다.PyTorch의 VGG 구현은 두 개의 하위 순차 모듈로 나뉜 모듈 입니다. ``특징(features)`` 모듈 : 합성곱과 풀링 레이어들을 포함 합니다.``분류(classifier)`` 모듈 : fully connected 레이어들을 포함 합니다.우리는 여기서 ``특징`` 모듈에 관심이 있습니다.일부 레이어는 학습 및 평가에 있어서 상황에 따라 다른 동작을 합니다. 이후 우리는 그것을 특징 추출자로 사용하고 있습니다. 우리는 .eval() 을 사용하여 네트워크를 평가 모드로 설정 할 수 있습니다.
###Code
cnn = models.vgg19(pretrained=True).features.to(device).eval()
###Output
_____no_output_____
###Markdown
또한 VGG 네트워크는 평균 = [0.485, 0.456, 0.406] 및 표준편차 = [0.229, 0.224, 0.225]로 정규화 된 각 채널의 이미지에 대해 학습된 모델입니다.(역자, 일반적으로 네트워크는 이미지넷으로 학습이 되고 이미지넷 데이터의 평균과 표준편차가 위의 값과 같습니다.)우리는 입력 이미지를 네트워크로 보내기 전에 정규화 하는데 위 평균과 표준편차 값을 사용합니다.
###Code
cnn_normalization_mean = torch.tensor([0.485, 0.456, 0.406]).to(device)
cnn_normalization_std = torch.tensor([0.229, 0.224, 0.225]).to(device)
# 입력 이미지를 정규화하는 모듈을 만들어 nn.Sequential에 쉽게 입력 할 수 있게 하세요.
class Normalization(nn.Module):
def __init__(self, mean, std):
super(Normalization, self).__init__()
# .view(텐서의 모양을 바꾸는 함수)로 평균과 표준 편차 텐서를 [C x 1 x 1] 형태로 만들어
# 바로 입력 이미지 텐서의 모양인 [B x C x H x W] 에 연산할 수 있도록 만들어 주세요.
# B는 배치 크기, C는 채널 값, H는 높이, W는 넓이 입니다.
self.mean = torch.tensor(mean).view(-1, 1, 1)
self.std = torch.tensor(std).view(-1, 1, 1)
def forward(self, img):
# img 값 정규화(normalize)
return (img - self.mean) / self.std
###Output
_____no_output_____
###Markdown
``순차(Sequential)`` 모듈에는 하위 모듈의 정렬된 목록이 있습니다. 예를 들어 ``vgg19.features`` 은 vgg19 구조의 올바른 순서로 정렬된 순서 정보(Conv2d, ReLU, MaxPool2d, Conv2d, ReLU ...)를 포함합니다. 콘텐츠 로스 섹션에서 말했듯이 우리는 네트워크의 원하는 레이어에 추가 레이어 '투명(transparent)'레이어로 스타일 및 콘텐츠 손실 모듈을 추가하려고 합니다. 이를 위해 새로운 순차 모듈을 구성합니다.이 모듈에서는 vgg19의 모듈과 손실 모듈을 올바른 순서로 추가합니다.
###Code
# 스타일/콘텐츠 로스로 계산하길 원하는 깊이의 레이어들:
content_layers_default = ['conv_4']
style_layers_default = ['conv_1', 'conv_2', 'conv_3', 'conv_4', 'conv_5']
def get_style_model_and_losses(cnn, normalization_mean, normalization_std,
style_img, content_img,
content_layers=content_layers_default,
style_layers=style_layers_default):
cnn = copy.deepcopy(cnn)
# 표준화(normalization) 모듈
normalization = Normalization(normalization_mean, normalization_std).to(device)
# 단지 반복 가능한 접근을 갖거나 콘텐츠/스타일의 리스트를 갖기 위함
# 로스값
content_losses = []
style_losses = []
# cnn은 nn.Sequential 하다고 가정하므로, 새로운 nn.Sequential을 만들어
# 우리가 순차적으로 활성화 하고자하는 모듈들을 넣겠습니다.
model = nn.Sequential(normalization)
i = 0 # conv레이어를 찾을때마다 값을 증가 시킵니다
for layer in cnn.children():
if isinstance(layer, nn.Conv2d):
i += 1
name = 'conv_{}'.format(i)
elif isinstance(layer, nn.ReLU):
name = 'relu_{}'.format(i)
# in-place(입력 값을 직접 업데이트) 버전은 콘텐츠로스와 스타일로스에
# 좋은 결과를 보여주지 못합니다.
# 그래서 여기선 out-of-place로 대체 하겠습니다.
layer = nn.ReLU(inplace=False)
elif isinstance(layer, nn.MaxPool2d):
name = 'pool_{}'.format(i)
elif isinstance(layer, nn.BatchNorm2d):
name = 'bn_{}'.format(i)
else:
raise RuntimeError('Unrecognized layer: {}'.format(layer.__class__.__name__))
model.add_module(name, layer)
if name in content_layers:
# 콘텐츠 로스 추가:
target = model(content_img).detach()
content_loss = ContentLoss(target)
model.add_module("content_loss_{}".format(i), content_loss)
content_losses.append(content_loss)
if name in style_layers:
# 스타일 로스 추가:
target_feature = model(style_img).detach()
style_loss = StyleLoss(target_feature)
model.add_module("style_loss_{}".format(i), style_loss)
style_losses.append(style_loss)
# 이제 우리는 마지막 콘텐츠 및 스타일 로스 이후의 레이어들을 잘라냅니다.
for i in range(len(model) - 1, -1, -1):
if isinstance(model[i], ContentLoss) or isinstance(model[i], StyleLoss):
break
model = model[:(i + 1)]
return model, style_losses, content_losses
###Output
_____no_output_____
###Markdown
.. Note:: 논문에서는 맥스 풀링(Max Pooling) 레이어를 에버리지 풀링(Average Pooling) 레이어로 바꾸는 것을 추천합니다. AlexNet에서는 논문에서 사용된 VGG19 네트워크보다 상대적으로 작은 네트워크라 결과 품질에서 큰 차이를 확인하기 어려울 수 있습니다. 그러나, 만약 당신이 대체해 보기를 원한다면 아래 코드들을 사용할 수 있습니다: :: avgpool = nn.AvgPool2d(kernel_size=layer.kernel_size, stride=layer.stride, padding = layer.padding) model.add_module(name,avgpool) 입력 이미지~~~~~~~~~~~~~~~~~~~다시, 코드를 간단하게 하기 위해, 콘텐츠와 스타일 이미지들의 같은 차원의 이미지를 가져옵니다.해당 이미지는 백색 노이즈일 수 있거나 콘텐츠-이미지의 값들을 복사해도 좋습니다.
###Code
input_img = content_img.clone()
# 대신에 백색 노이즈를 이용하길 원한다면 아래 줄의 주석처리를 제거하세요:
# input_img = torch.randn(content_img.data.size(), device=device)
# 원본 입력 이미지를 창에 추가합니다:
plt.figure()
imshow(input_img, title='Input Image')
###Output
_____no_output_____
###Markdown
경사 하강법~~~~~~~~~~~~~~~~알고리즘의 저자인 Len Gatys 가 `여기서 `__ 제안한 방식대로경사 하강법을 실행하는데 L-BFGS 알고리즘을 사용 하겠습니다.일반적인 네트워크 학습과는 다르게, 우리는 콘텐츠/스타일 로스를 최소화 하는 방향으로 입력 영상을 학습 시키려고 합니다.우리는 간단히 PyTorch L-BFGS 옵티마이저 ``optim.LBFGS`` 를 생성하려고 하며, 최적화를 위해 입력 이미지를 텐서 타입으로 전달합니다. 우리는 ``.requires_grad_()`` 를 사용하여 해당 이미지가 그라디언트가 필요함을 확실하게 합니다.
###Code
def get_input_optimizer(input_img):
# 이 줄은 입력은 그레이던트가 필요한 파라미터라는 것을 보여주기 위해 있습니다.
optimizer = optim.LBFGS([input_img.requires_grad_()])
return optimizer
###Output
_____no_output_____
###Markdown
**마지막 단계**: 경사 하강의 반복. 각 단계에서 우리는 네트워크의 새로운 로스를 계산하기 위해업데이트 된 입력을 네트워크에 공급해야 합니다. 우리는 그라디언트를 동적으로 계산하고 그라디언트 디센트의 단계를 수행하기 위해 각 손실의 ``역방향(backward)`` 메소드를 실행해야 합니다.옵티마이저는 인수로서 "클로저(closure)"를 필요로 합니다: 즉, 모델을 재평가하고 로스를 반환 하는 함수입니다.그러나, 여기에 작은 함정이 있습니다. 최적화 된 이미지는 0 과 1 사이에 머물지 않고 $-\infty$과 $+\infty$ 사이의 값을 가질 수 있습니다. 다르게 말하면, 이미지는 잘 최적화될 수 있고(0-1 사이의 정해진 값 범위내의 값을 가질 수 있고) 이상한 값을 가질 수도 있습니다. 사실 우리는 입력 이미지가 올바른 범위의 값을 유지할 수 있도록 제약 조건 하에서 최적화를 수행해야 합니다. 각 단계마다 0-1 간격으로 값을 유지하기 위해 이미지를 수정하는 간단한 해결책이 있습니다.
###Code
def run_style_transfer(cnn, normalization_mean, normalization_std,
content_img, style_img, input_img, num_steps=300,
style_weight=1000000, content_weight=1):
"""스타일 변환을 실행합니다."""
print('Building the style transfer model..')
model, style_losses, content_losses = get_style_model_and_losses(cnn,
normalization_mean, normalization_std, style_img, content_img)
optimizer = get_input_optimizer(input_img)
print('Optimizing..')
run = [0]
while run[0] <= num_steps:
def closure():
# 입력 이미지의 업데이트된 값들을 보정합니다
input_img.data.clamp_(0, 1)
optimizer.zero_grad()
model(input_img)
style_score = 0
content_score = 0
for sl in style_losses:
style_score += sl.loss
for cl in content_losses:
content_score += cl.loss
style_score *= style_weight
content_score *= content_weight
loss = style_score + content_score
loss.backward()
run[0] += 1
if run[0] % 50 == 0:
print("run {}:".format(run))
print('Style Loss : {:4f} Content Loss: {:4f}'.format(
style_score.item(), content_score.item()))
print()
return style_score + content_score
optimizer.step(closure)
# 마지막 보정...
input_img.data.clamp_(0, 1)
return input_img
###Output
_____no_output_____
###Markdown
마지막으로, 알고리즘을 실행 시킵니다.
###Code
output = run_style_transfer(cnn, cnn_normalization_mean, cnn_normalization_std,
content_img, style_img, input_img)
plt.figure()
imshow(output, title='Output Image')
# sphinx_gallery_thumbnail_number = 4
plt.ioff()
plt.show()
###Output
_____no_output_____
###Markdown
Neural Transfer Using PyTorch
=============================
**Author**: `Alexis Jacq `_

**Edited by**: `Winston Herring `_

Introduction
------------
This tutorial explains how to implement the `Neural-Style algorithm `__ developed by Leon A. Gatys, Alexander S. Ecker and Matthias Bethge. Neural-Style, or Neural-Transfer, allows you to take an image and reproduce it with a new artistic style. The algorithm takes three images, an input image, a content-image, and a style-image, and changes the input to resemble the content of the content-image and the artistic style of the style-image.

.. figure:: /_static/img/neural-style/neuralstyle.png
   :alt: content1

Underlying Principle
--------------------
The principle is simple: we define two distances, one for the content ($D_C$) and one for the style ($D_S$). $D_C$ measures how different the content is between two images while $D_S$ measures how different the style is between two images. Then, we take a third image, the input, and transform it to minimize both its content-distance with the content-image and its style-distance with the style-image. Now we can import the necessary packages and begin the neural transfer.

Importing Packages and Selecting a Device
-----------------------------------------
Below is a list of the packages needed to implement the neural transfer.

- ``torch``, ``torch.nn``, ``numpy`` (indispensable packages for neural networks with PyTorch)
- ``torch.optim`` (efficient gradient descents)
- ``PIL``, ``PIL.Image``, ``matplotlib.pyplot`` (load and display images)
- ``torchvision.transforms`` (transform PIL images into tensors)
- ``torchvision.models`` (train or load pre-trained models)
- ``copy`` (to deep copy the models; system package)
###Code
from __future__ import print_function
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from PIL import Image
import matplotlib.pyplot as plt
import torchvision.transforms as transforms
import torchvision.models as models
import copy
###Output
_____no_output_____
###Markdown
Next, we need to choose which device to run the network on and import the content and style images. Running the neural transfer algorithm on large images takes longer and will go much faster when running on a GPU. We can use ``torch.cuda.is_available()`` to detect if there is a GPU available. Next, we set the ``torch.device`` for use throughout the tutorial. Also the ``.to(device)`` method is used to move tensors or modules to a desired device.
###Code
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
###Output
_____no_output_____
###Markdown
Loading the Images
------------------
Now we will import the style and content images. The original PIL images have values between 0 and 255, but when transformed into torch tensors, their values are converted to be between 0 and 1. The images also need to be resized to have the same dimensions. An important detail to note is that neural networks from the torch library are trained with tensor values ranging from 0 to 1. If you try to feed the networks with 0 to 255 tensor images, then the activated feature maps will be unable to sense the intended content and style. However, pre-trained networks from the Caffe library are trained with 0 to 255 tensor images.

.. Note::
    Here are links to download the images required to run the tutorial: `picasso.jpg `__ and `dancing.jpg `__. Download these two images and add them to a directory with name ``images`` in your current working directory.
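A quick illustrative check of this scaling (a sketch, not part of the original tutorial): ``transforms.ToTensor()`` maps a uint8 PIL image with values in [0, 255] to a float tensor with values in [0, 1] and channel-first layout.

```python
from PIL import Image
import torchvision.transforms as transforms

demo = Image.new("RGB", (4, 4), color=(255, 128, 0))  # synthetic 4x4 test image
t = transforms.ToTensor()(demo)
print(t.min().item(), t.max().item())  # 0.0 1.0
print(t.shape)                         # torch.Size([3, 4, 4])
```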
###Code
# desired size of the output image
imsize = 512 if torch.cuda.is_available() else 128 # use small size if no gpu
loader = transforms.Compose([
transforms.Resize(imsize), # scale imported image
transforms.ToTensor()]) # transform it into a torch tensor
def image_loader(image_name):
image = Image.open(image_name)
# fake batch dimension required to fit network's input dimensions
image = loader(image).unsqueeze(0)
return image.to(device, torch.float)
style_img = image_loader("./data/images/neural-style/picasso.jpg")
content_img = image_loader("./data/images/neural-style/dancing.jpg")
assert style_img.size() == content_img.size(), \
"we need to import style and content images of the same size"
###Output
_____no_output_____
###Markdown
Now, let's create a function that displays an image by reconverting a copy of it to PIL format and displaying the copy using ``plt.imshow``. We will try displaying the content and style images to ensure they were imported correctly.
###Code
unloader = transforms.ToPILImage() # reconvert into PIL image
plt.ion()
def imshow(tensor, title=None):
image = tensor.cpu().clone() # we clone the tensor to not do changes on it
image = image.squeeze(0) # remove the fake batch dimension
image = unloader(image)
plt.imshow(image)
if title is not None:
plt.title(title)
plt.pause(0.001) # pause a bit so that plots are updated
plt.figure()
imshow(style_img, title='Style Image')
plt.figure()
imshow(content_img, title='Content Image')
###Output
_____no_output_____
###Markdown
Loss Functions
--------------
Content Loss
~~~~~~~~~~~~
The content loss is a function that represents a weighted version of the content distance for an individual layer. The function takes the feature maps $F_{XL}$ of a layer $L$ in a network processing input $X$ and returns the weighted content distance $w_{CL}.D_C^L(X,C)$ between the image $X$ and the content image $C$. The feature maps of the content image ($F_{CL}$) must be known by the function in order to calculate the content distance. We implement this function as a torch module with a constructor that takes $F_{CL}$ as an input. The distance $\|F_{XL} - F_{CL}\|^2$ is the mean square error between the two sets of feature maps, and can be computed using ``nn.MSELoss``.

We will add this content loss module directly after the convolution layer(s) that are being used to compute the content distance. This way each time the network is fed an input image the content losses will be computed at the desired layers and because of auto grad, all the gradients will be computed. Now, in order to make the content loss layer transparent we must define a ``forward`` method that computes the content loss and then returns the layer's input. The computed loss is saved as a parameter of the module.
###Code
class ContentLoss(nn.Module):
def __init__(self, target,):
super(ContentLoss, self).__init__()
# we 'detach' the target content from the tree used
# to dynamically compute the gradient: this is a stated value,
# not a variable. Otherwise the forward method of the criterion
# will throw an error.
self.target = target.detach()
def forward(self, input):
self.loss = F.mse_loss(input, self.target)
return input
###Output
_____no_output_____
###Markdown
.. Note::
    **Important detail**: although this module is named ``ContentLoss``, it is not a true PyTorch Loss function. If you want to define your content loss as a PyTorch Loss function, you have to create a PyTorch autograd function to recompute/implement the gradient manually in the ``backward`` method.

Style Loss
~~~~~~~~~~
The style loss module is implemented similarly to the content loss module. It will act as a transparent layer in a network that computes the style loss of that layer. In order to calculate the style loss, we need to compute the gram matrix $G_{XL}$. A gram matrix is the result of multiplying a given matrix by its transposed matrix. In this application the given matrix is a reshaped version of the feature maps $F_{XL}$ of a layer $L$. $F_{XL}$ is reshaped to form $\hat{F}_{XL}$, a $K$\ x\ $N$ matrix, where $K$ is the number of feature maps at layer $L$ and $N$ is the length of any vectorized feature map $F_{XL}^k$. For example, the first line of $\hat{F}_{XL}$ corresponds to the first vectorized feature map $F_{XL}^1$.

Finally, the gram matrix must be normalized by dividing each element by the total number of elements in the matrix. This normalization is to counteract the fact that $\hat{F}_{XL}$ matrices with a large $N$ dimension yield larger values in the Gram matrix. These larger values will cause the first layers (before pooling layers) to have a larger impact during the gradient descent. Style features tend to be in the deeper layers of the network so this normalization step is crucial.
###Code
def gram_matrix(input):
a, b, c, d = input.size() # a=batch size(=1)
# b=number of feature maps
# (c,d)=dimensions of a f. map (N=c*d)
    features = input.view(a * b, c * d)  # resize F_XL into \hat F_XL
G = torch.mm(features, features.t()) # compute the gram product
# we 'normalize' the values of the gram matrix
# by dividing by the number of element in each feature maps.
return G.div(a * b * c * d)
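
# Illustrative shape check (added as a sketch): a fake feature map with batch size 1,
# 8 channels and an 11x13 spatial grid yields an 8x8 Gram matrix -- its size depends
# only on the number of feature maps, not on the image dimensions.
print(gram_matrix(torch.randn(1, 8, 11, 13)).shape)  # torch.Size([8, 8])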
###Output
_____no_output_____
###Markdown
Now the style loss module looks almost exactly like the content loss module. The style distance is also computed using the mean square error between $G_{XL}$ and $G_{SL}$.
###Code
class StyleLoss(nn.Module):
def __init__(self, target_feature):
super(StyleLoss, self).__init__()
self.target = gram_matrix(target_feature).detach()
def forward(self, input):
G = gram_matrix(input)
self.loss = F.mse_loss(G, self.target)
return input
###Output
_____no_output_____
###Markdown
Importing the Model
-------------------
Now we need to import a pre-trained neural network. We will use a 19 layer VGG network like the one used in the paper.

PyTorch's implementation of VGG is a module divided into two child ``Sequential`` modules: ``features`` (containing convolution and pooling layers), and ``classifier`` (containing fully connected layers). We will use the ``features`` module because we need the output of the individual convolution layers to measure content and style loss. Some layers have different behavior during training than evaluation, so we must set the network to evaluation mode using ``.eval()``.
###Code
cnn = models.vgg19(pretrained=True).features.to(device).eval()
###Output
_____no_output_____
###Markdown
Additionally, VGG networks are trained on images with each channel normalized by mean=[0.485, 0.456, 0.406] and std=[0.229, 0.224, 0.225]. We will use them to normalize the image before sending it into the network.
###Code
cnn_normalization_mean = torch.tensor([0.485, 0.456, 0.406]).to(device)
cnn_normalization_std = torch.tensor([0.229, 0.224, 0.225]).to(device)
# create a module to normalize input image so we can easily put it in a
# nn.Sequential
class Normalization(nn.Module):
def __init__(self, mean, std):
super(Normalization, self).__init__()
# .view the mean and std to make them [C x 1 x 1] so that they can
# directly work with image Tensor of shape [B x C x H x W].
# B is batch size. C is number of channels. H is height and W is width.
self.mean = torch.tensor(mean).view(-1, 1, 1)
self.std = torch.tensor(std).view(-1, 1, 1)
def forward(self, img):
# normalize img
return (img - self.mean) / self.std
###Output
_____no_output_____
###Markdown
A ``Sequential`` module contains an ordered list of child modules. For instance, ``vgg19.features`` contains a sequence (Conv2d, ReLU, MaxPool2d, Conv2d, ReLU…) aligned in the right order of depth. We need to add our content loss and style loss layers immediately after the convolution layer they are detecting. To do this we must create a new ``Sequential`` module that has content loss and style loss modules correctly inserted.
###Code
# desired depth layers to compute style/content losses :
content_layers_default = ['conv_4']
style_layers_default = ['conv_1', 'conv_2', 'conv_3', 'conv_4', 'conv_5']
def get_style_model_and_losses(cnn, normalization_mean, normalization_std,
style_img, content_img,
content_layers=content_layers_default,
style_layers=style_layers_default):
cnn = copy.deepcopy(cnn)
# normalization module
normalization = Normalization(normalization_mean, normalization_std).to(device)
# just in order to have an iterable access to or list of content/syle
# losses
content_losses = []
style_losses = []
# assuming that cnn is a nn.Sequential, so we make a new nn.Sequential
# to put in modules that are supposed to be activated sequentially
model = nn.Sequential(normalization)
i = 0 # increment every time we see a conv
for layer in cnn.children():
if isinstance(layer, nn.Conv2d):
i += 1
name = 'conv_{}'.format(i)
elif isinstance(layer, nn.ReLU):
name = 'relu_{}'.format(i)
# The in-place version doesn't play very nicely with the ContentLoss
# and StyleLoss we insert below. So we replace with out-of-place
# ones here.
layer = nn.ReLU(inplace=False)
elif isinstance(layer, nn.MaxPool2d):
name = 'pool_{}'.format(i)
elif isinstance(layer, nn.BatchNorm2d):
name = 'bn_{}'.format(i)
else:
raise RuntimeError('Unrecognized layer: {}'.format(layer.__class__.__name__))
model.add_module(name, layer)
if name in content_layers:
# add content loss:
target = model(content_img).detach()
content_loss = ContentLoss(target)
model.add_module("content_loss_{}".format(i), content_loss)
content_losses.append(content_loss)
if name in style_layers:
# add style loss:
target_feature = model(style_img).detach()
style_loss = StyleLoss(target_feature)
model.add_module("style_loss_{}".format(i), style_loss)
style_losses.append(style_loss)
# now we trim off the layers after the last content and style losses
for i in range(len(model) - 1, -1, -1):
if isinstance(model[i], ContentLoss) or isinstance(model[i], StyleLoss):
break
model = model[:(i + 1)]
return model, style_losses, content_losses
###Output
_____no_output_____
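###Markdown
If you want to see where the loss modules end up before running the full transfer, you can build the model once and list its child names (an optional, illustrative sketch that assumes ``style_img`` and ``content_img`` from the image-loading cells earlier in the notebook):
###Code
# optional, illustrative: build the model once and list the inserted layers
_model, _style_losses, _content_losses = get_style_model_and_losses(
    cnn, cnn_normalization_mean, cnn_normalization_std, style_img, content_img)
print([name for name, _ in _model.named_children()])
###Output
_____no_output_____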
###Markdown
Next, we select the input image. You can use a copy of the content image or white noise.
###Code
input_img = content_img.clone()
# if you want to use white noise instead uncomment the below line:
# input_img = torch.randn(content_img.data.size(), device=device)
# add the original input image to the figure:
plt.figure()
imshow(input_img, title='Input Image')
###Output
_____no_output_____
###Markdown
Gradient Descent----------------As Leon Gatys, the author of the algorithm, suggested `here `__, we will use the L-BFGS algorithm to run our gradient descent. Unlike training a network, we want to train the input image in order to minimise the content/style losses. We will create a PyTorch L-BFGS optimizer ``optim.LBFGS`` and pass our image to it as the tensor to optimize.
###Code
def get_input_optimizer(input_img):
# this line to show that input is a parameter that requires a gradient
optimizer = optim.LBFGS([input_img.requires_grad_()])
return optimizer
###Output
_____no_output_____
###Markdown
Finally, we must define a function that performs the neural transfer. For each iteration of the network, it is fed an updated input and computes new losses. We will run the ``backward`` methods of each loss module to dynamically compute their gradients. The optimizer requires a “closure” function, which reevaluates the module and returns the loss. We still have one final constraint to address. The network may try to optimize the input with values that exceed the 0 to 1 tensor range for the image. We can address this by correcting the input values to be between 0 and 1 each time the network is run.
###Code
def run_style_transfer(cnn, normalization_mean, normalization_std,
content_img, style_img, input_img, num_steps=300,
style_weight=1000000, content_weight=1):
"""Run the style transfer."""
print('Building the style transfer model..')
model, style_losses, content_losses = get_style_model_and_losses(cnn,
normalization_mean, normalization_std, style_img, content_img)
optimizer = get_input_optimizer(input_img)
print('Optimizing..')
run = [0]
while run[0] <= num_steps:
def closure():
# correct the values of updated input image
input_img.data.clamp_(0, 1)
optimizer.zero_grad()
model(input_img)
style_score = 0
content_score = 0
for sl in style_losses:
style_score += sl.loss
for cl in content_losses:
content_score += cl.loss
style_score *= style_weight
content_score *= content_weight
loss = style_score + content_score
loss.backward()
run[0] += 1
if run[0] % 50 == 0:
print("run {}:".format(run))
print('Style Loss : {:4f} Content Loss: {:4f}'.format(
style_score.item(), content_score.item()))
print()
return style_score + content_score
optimizer.step(closure)
# a last correction...
input_img.data.clamp_(0, 1)
return input_img
###Output
_____no_output_____
###Markdown
Finally, we can run the algorithm.
###Code
output = run_style_transfer(cnn, cnn_normalization_mean, cnn_normalization_std,
content_img, style_img, input_img)
plt.figure()
imshow(output, title='Output Image')
# sphinx_gallery_thumbnail_number = 4
plt.ioff()
plt.show()
###Output
_____no_output_____
###Markdown
Neural-Transfer Using PyTorch======================================================**Author**: `Alexis Jacq `_ **Translation**: `김봉모 `_ Introduction------------------Welcome! This tutorial explains how to implement the `Neural-Style `__ algorithm developed by Leon A. Gatys, Alexander S. Ecker and Matthias Bethge. Neural what?~~~~~~~~~~~~~~~~~~~Neural-Style, or Neural-Transfer, is an algorithm that takes a content image (e.g. a turtle) and a style image (e.g. an artwork of waves) as input and produces a result that looks as if the shape of the content image had been painted with the 'way of painting' of the style image:.. figure:: /_static/img/neural-style/neuralstyle.png :alt: content1 How does it work?~~~~~~~~~~~~~~~~~~~~~~~The principle is simple. We define two distances, one for the content ( $D_C$ ) and one for the style ( $D_S$ ). $D_C$ measures how much the content differs between two images, while $D_S$ measures how much the style differs between them. Then we take a third image as input (e.g. an image made of noise) and transform it so as to minimise both its content distance from the content image and its style distance from the style image. OK. So how does it work?^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^Well, going further requires some mathematics. Let $C_{nn}$ be a pre-trained deep convolutional neural network and let $X$ be any image. $C_{nn}(X)$ is the network fed by $X$ (containing the feature maps of all layers). $F_{XL} \in C_{nn}(X)$ is the feature map at depth level $L$, vectorized and concatenated into one single vector. If $Y$ is an image of the same size as $X$, we can define the content distance at layer $L$:\begin{align}D_C^L(X,Y) = \|F_{XL} - F_{YL}\|^2 = \sum_i (F_{XL}(i) - F_{YL}(i))^2\end{align}where $F_{XL}(i)$ is the $i^{th}$ element of $F_{XL}$. The style part needs a bit more care. Let $F_{XL}^k$ with $k \leq K$ be the vectorized $k^{th}$ of the $K$ feature maps at layer $L$. The style $G_{XL}$ of $X$ at layer $L$ is defined by the Gram product of all the vectorized feature maps $F_{XL}^k$ with $k \leq K$. In other words, $G_{XL}$ is a $K$\ x\ $K$ matrix whose element $G_{XL}(k,l)$ at the $k^{th}$ line and $l^{th}$ column is the vectorial product between $F_{XL}^k$ and $F_{XL}^l$:\begin{align}G_{XL}(k,l) = \langle F_{XL}^k, F_{XL}^l\rangle = \sum_i F_{XL}^k(i) . F_{XL}^l(i)\end{align}where $F_{XL}^k(i)$ is the $i^{th}$ element of $F_{XL}^k$. We can see $G_{XL}(k,l)$ as a measure of the correlation between feature maps $k$ and $l$; in that sense, $G_{XL}$ represents the correlation matrix of the feature maps of $X$ at layer $L$. Note that the size of $G_{XL}$ only depends on the number of feature maps, not on the size of $X$. Then, if $Y$ is another image *of any size*, we define the style distance at layer $L$ as follows:\begin{align}D_S^L(X,Y) = \|G_{XL} - G_{YL}\|^2 = \sum_{k,l} (G_{XL}(k,l) - G_{YL}(k,l))^2\end{align}In order to minimise, in one shot, $D_C(X,C)$ between a variable image $X$ and a target content image $C$, and $D_S(X,S)$ between $X$ and a target style image $S$, both computed at several layers, we compute and sum the gradients (derivatives with respect to $X$) of each distance at each desired layer:\begin{align}\nabla_{\textit{total}}(X,S,C) = \sum_{L_C} w_{CL_C}.\nabla_{\textit{content}}^{L_C}(X,C) + \sum_{L_S} w_{SL_S}.\nabla_{\textit{style}}^{L_S}(X,S)\end{align}where $L_C$ and $L_S$ are respectively the desired (arbitrary) layers of content and style, and $w_{CL_C}$ and $w_{SL_S}$ are the (arbitrary) weights of the content or style at the desired layers. We then run a gradient descent over $X$:\begin{align}X \leftarrow X - \alpha \nabla_{\textit{total}}(X,S,C)\end{align}OK, that is enough maths. If you want to go deeper (how to compute the gradients), **we encourage you to read the original paper** by Leon A. Gatys et al., where everything is explained in much more detail and clarity. For the implementation, PyTorch already provides everything we need: with PyTorch, all the gradients are automatically and dynamically computed for you (while you use functions from the library), which makes implementing this algorithm very convenient. PyTorch Implementation----------------------If you cannot understand all of the maths above, you will get a better grasp of it by implementing it.
If you are planning to use PyTorch, we recommend reading :doc:`Introduction to PyTorch ` first. Packages~~~~~~~~We will make use of the following packages:- ``torch`` , ``torch.nn``, ``numpy`` (essential packages for neural networks with PyTorch)- ``torch.optim`` (efficient gradient descent)- ``PIL`` , ``PIL.Image`` , ``matplotlib.pyplot`` (packages to load and display images)- ``torchvision.transforms`` (package to transform PIL images into torch tensors)- ``torchvision.models`` (package to train or load pre-trained models)- ``copy`` (system package to deep-copy the models)
###Code
from __future__ import print_function
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from PIL import Image
import matplotlib.pyplot as plt
import torchvision.transforms as transforms
import torchvision.models as models
import copy
###Output
_____no_output_____
###Markdown
CUDA~~~~~~~~~~~~~~If your computer has a GPU, it is preferable to run the algorithm on CUDA, especially if you want to use a deep network such as VGG. To use CUDA, PyTorch provides ``torch.cuda.is_available()``, which returns ``True`` if a GPU is available on the machine you are working on. Afterwards, we can use the ``.cuda()`` method to move modules, and the processes allocated to them, from the CPU to the GPU. When we want to move a module back to the CPU (e.g. to use numpy), we can use the ``.cpu()`` method. Finally, the ``.type(dtype)`` method can be used to convert a ``torch.FloatTensor`` into a ``torch.cuda.FloatTensor`` so that it can be used on the GPU.
###Code
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
###Output
_____no_output_____
###Markdown
Loading the Images~~~~~~~~~~~~~To keep the implementation simple, we start with style and content images of the same size. We then scale them to the desired output image size (128 or 512 in this example; pick whichever suits your GPU availability), convert the image data into torch tensors, and get them ready to be fed into the neural network... Note:: Here are the links to download the images required to run the tutorial: `picasso.jpg `__ and `dancing.jpg `__. Download these two images and add them to a directory named ``images``.
###Code
# desired size of the output image
imsize = 512 if torch.cuda.is_available() else 128  # use a small size if no gpu
loader = transforms.Compose([
transforms.Resize(imsize), # scale the imported image
transforms.ToTensor()]) # transform it into a torch tensor
def image_loader(image_name):
image = Image.open(image_name)
# fake batch dimension required to fit the network's input dimensions
image = loader(image).unsqueeze(0)
return image.to(device, torch.float)
style_img = image_loader("./data/images/neural-style/picasso.jpg")
content_img = image_loader("./data/images/neural-style/dancing.jpg")
assert style_img.size() == content_img.size(), \
"we need to import style and content images of the same size"
###Output
_____no_output_____
###Markdown
The imported PIL images have pixel values between 0 and 255. Converted into torch tensors, their values become 0 to 1. This is an important detail: neural networks from the torch library are trained with tensor images in the 0-1 range. If you try to feed the network a 0-255 tensor image, the activated feature maps will be meaningless. (Translator's note: this is because the range of values reaching activation layers such as ReLU becomes completely different depending on the input values.) This is not the case for pre-trained networks from the Caffe library: those models are trained with 0-255 tensor images. Displaying the Images~~~~~~~~~~~~~~~~~~~~We will use ``plt.imshow`` to display the images, so we first convert the tensors back into PIL images:
###Code
unloader = transforms.ToPILImage() # reconvert into a PIL image
plt.ion()
def imshow(tensor, title=None):
image = tensor.cpu().clone() # we clone the tensor so we don't modify it
image = image.squeeze(0) # remove the fake batch dimension
image = unloader(image)
plt.imshow(image)
if title is not None:
plt.title(title)
plt.pause(0.001) # pause a bit so that the plots are updated
plt.figure()
imshow(style_img, title='Style Image')
plt.figure()
imshow(content_img, title='Content Image')
###Output
_____no_output_____
###Markdown
Content Loss~~~~~~~~~~~~The content loss is a function that takes as input the feature maps $F_{XL}$ at a layer $L$ of the network fed by $X$, and returns the weighted content distance $w_{CL}.D_C^L(X,C)$ between this image and the content image. Hence, the weight $w_{CL}$ and the target content $F_{CL}$ are parameters of the function. We implement this function as a torch module with a constructor that takes these parameters as input. The distance $\|F_{XL} - F_{YL}\|^2$ is the Mean Square Error (MSE) between the two sets of feature maps, which can be computed using the criterion ``nn.MSELoss`` stated as a third parameter. We will add our content losses at each desired layer as additional modules of the neural network. That way, each time we feed the network an input image $X$, all the content losses will be computed at the desired layers and, thanks to autograd, all the gradients will be computed. For that, we just need to make the ``forward`` method of our module return the input: the module becomes a ''transparent layer'' of the neural network. The computed loss is saved as a parameter of the module. Finally, we define a fake ``backward`` method that calls the ``backward`` method of nn.MSELoss in order to reconstruct the gradient. This method returns the computed loss: this will be useful when running gradient descent, in order to display the evolution of the style and content losses.
###Code
class ContentLoss(nn.Module):
def __init__(self, target,):
super(ContentLoss, self).__init__()
# we 'detach' the target content from the tree used to dynamically compute the gradient:
# this is a stated value, not a variable.
# Otherwise the forward method of the criterion would throw an error.
self.target = target.detach()
def forward(self, input):
self.loss = F.mse_loss(input, self.target)
return input
###Output
_____no_output_____
###Markdown
.. Note:: **Important detail**: although this module is named ``ContentLoss``, it is not a true PyTorch Loss function. If you want to define your content loss as a PyTorch Loss, you have to create a PyTorch autograd Function and recompute/implement the gradient manually in the ``backward`` method. Style Loss~~~~~~~~~~~~~~~~~~For the style loss, we first need to define a module that computes the Gram matrix $G_{XL}$ given the feature maps $F_{XL}$ of the neural network fed by $X$ at layer $L$. Let $\hat{F}_{XL}$ be the reshaped version of $F_{XL}$ into a KxN matrix, where $K$ is the number of feature maps at layer $L$ and $N$ the length of any vectorized feature map $F_{XL}^k$. The $k^{th}$ line of $\hat{F}_{XL}$ is $F_{XL}^k$. Please check that $\hat{F}_{XL} \cdot \hat{F}_{XL}^T = G_{XL}$; once you see this, implementing the module becomes easy:
###Code
def gram_matrix(input):
a, b, c, d = input.size() # a=batch size(=1)
# b=number of feature maps
# (c,d)=dimensions of a feature map (N=c*d)
features = input.view(a * b, c * d) # resize F_XL into \hat F_XL
G = torch.mm(features, features.t()) # compute the gram product
# we 'normalize' the values of the gram matrix by dividing by the number of elements in each feature map.
return G.div(a * b * c * d)
###Output
_____no_output_____
###Markdown
The larger the feature-map dimension $N$, the larger the values of the Gram matrix. Therefore, if we do not normalize by $N$, the loss computed at the first layers (before the pooling layers) will carry much more weight during the gradient descent. (Translator's note: without normalization, values computed at the first layers are weighted more heavily, so the contributions of the other layers are reflected relatively less; hence the need for normalization.) Since the interesting parts of the style features live in the deepest layers, we do not want that to happen! The style loss module is then implemented in exactly the same way as the content loss module, but it compares the difference between the Gram matrices of the target and the input.
###Code
class StyleLoss(nn.Module):
def __init__(self, target_feature):
super(StyleLoss, self).__init__()
self.target = gram_matrix(target_feature).detach()
def forward(self, input):
G = gram_matrix(input)
self.loss = F.mse_loss(G, self.target)
return input
###Output
_____no_output_____
###Markdown
Loading the Neural Network~~~~~~~~~~~~~~~~~~~~~~~Now, we have to import a pre-trained neural network. As in the paper, we will use a pre-trained VGG network with 19 layers (VGG19). PyTorch's implementation of VGG is a module divided into two child sequential modules: the ``features`` module, containing the convolution and pooling layers, and the ``classifier`` module, containing the fully connected layers. Here we are interested in the ``features`` module. Some layers behave differently during training and evaluation; since we use the network as a feature extractor, we set it to evaluation mode with .eval().
###Code
cnn = models.vgg19(pretrained=True).features.to(device).eval()
###Output
_____no_output_____
###Markdown
Additionally, VGG networks are trained on images with each channel normalized by mean = [0.485, 0.456, 0.406] and std = [0.229, 0.224, 0.225]. (Translator's note: such networks are typically trained on ImageNet, and the mean and standard deviation of the ImageNet data are the values above.) We will use these values to normalize the input image before sending it into the network.
###Code
cnn_normalization_mean = torch.tensor([0.485, 0.456, 0.406]).to(device)
cnn_normalization_std = torch.tensor([0.229, 0.224, 0.225]).to(device)
# create a module to normalize the input image so we can easily put it in a nn.Sequential
class Normalization(nn.Module):
def __init__(self, mean, std):
super(Normalization, self).__init__()
# .view the mean and std tensors to make them [C x 1 x 1] so that they can
# directly work with an image Tensor of shape [B x C x H x W].
# B is batch size, C is number of channels, H is height and W is width.
self.mean = torch.tensor(mean).view(-1, 1, 1)
self.std = torch.tensor(std).view(-1, 1, 1)
def forward(self, img):
# normalize img
return (img - self.mean) / self.std
###Output
_____no_output_____
###Markdown
A ``Sequential`` module contains an ordered list of child modules. For instance, ``vgg19.features`` contains sequence information (Conv2d, ReLU, MaxPool2d, Conv2d, ReLU...) aligned in the right order of the vgg19 architecture. As we said in the content-loss section, we want to add our style and content loss modules as additional 'transparent' layers at the desired layers of the network. To do so, we construct a new Sequential module, in which we add the modules from vgg19 and the loss modules in the right order.
###Code
# desired depth layers to compute style/content losses:
content_layers_default = ['conv_4']
style_layers_default = ['conv_1', 'conv_2', 'conv_3', 'conv_4', 'conv_5']
def get_style_model_and_losses(cnn, normalization_mean, normalization_std,
style_img, content_img,
content_layers=content_layers_default,
style_layers=style_layers_default):
cnn = copy.deepcopy(cnn)
# normalization module
normalization = Normalization(normalization_mean, normalization_std).to(device)
# just in order to have iterable access to, or a list of, the content/style
# losses
content_losses = []
style_losses = []
# assuming that cnn is a nn.Sequential, we make a new nn.Sequential
# to put in the modules that are supposed to be activated sequentially
model = nn.Sequential(normalization)
i = 0 # increment every time we see a conv layer
for layer in cnn.children():
if isinstance(layer, nn.Conv2d):
i += 1
name = 'conv_{}'.format(i)
elif isinstance(layer, nn.ReLU):
name = 'relu_{}'.format(i)
# the in-place version doesn't play very nicely with the ContentLoss
# and StyleLoss we insert below,
# so we replace it with an out-of-place one here.
layer = nn.ReLU(inplace=False)
elif isinstance(layer, nn.MaxPool2d):
name = 'pool_{}'.format(i)
elif isinstance(layer, nn.BatchNorm2d):
name = 'bn_{}'.format(i)
else:
raise RuntimeError('Unrecognized layer: {}'.format(layer.__class__.__name__))
model.add_module(name, layer)
if name in content_layers:
# add content loss:
target = model(content_img).detach()
content_loss = ContentLoss(target)
model.add_module("content_loss_{}".format(i), content_loss)
content_losses.append(content_loss)
if name in style_layers:
# add style loss:
target_feature = model(style_img).detach()
style_loss = StyleLoss(target_feature)
model.add_module("style_loss_{}".format(i), style_loss)
style_losses.append(style_loss)
# now we trim off the layers after the last content and style losses
for i in range(len(model) - 1, -1, -1):
if isinstance(model[i], ContentLoss) or isinstance(model[i], StyleLoss):
break
model = model[:(i + 1)]
return model, style_losses, content_losses
###Output
_____no_output_____
###Markdown
.. Note:: In the paper it is recommended to change the Max Pooling layers into Average Pooling layers. With AlexNet, which is a relatively small network compared with the VGG19 used in the paper, it may be hard to see a big difference in the quality of the results. However, if you want to try this substitution, you can use the following code: :: avgpool = nn.AvgPool2d(kernel_size=layer.kernel_size, stride=layer.stride, padding = layer.padding) model.add_module(name,avgpool) Input Image~~~~~~~~~~~~~~~~~~~Again, in order to keep the code simple, we take an image with the same dimensions as the content and style images. This image can be white noise, or it can be a copy of the content image.
###Code
input_img = content_img.clone()
# if you want to use white noise instead, uncomment the line below:
# input_img = torch.randn(content_img.data.size(), device=device)
# add the original input image to the figure:
plt.figure()
imshow(input_img, title='Input Image')
###Output
_____no_output_____
###Markdown
Gradient Descent~~~~~~~~~~~~~~~~As Leon Gatys, the author of the algorithm, suggested `here `__, we will use the L-BFGS algorithm to run our gradient descent. Unlike ordinary network training, we want to train the input image so as to minimise the content/style losses. We simply create a PyTorch L-BFGS optimizer ``optim.LBFGS`` and pass the input image to it as the tensor to optimize. We use ``.requires_grad_()`` to make sure this image requires gradients.
###Code
def get_input_optimizer(input_img):
# this line is to show that the input is a parameter that requires a gradient
optimizer = optim.LBFGS([input_img.requires_grad_()])
return optimizer
###Output
_____no_output_____
###Markdown
**Last step**: the loop of gradient descent. At each step, we must feed the network with the updated input in order to compute the new losses; we must run the ``backward`` methods of each loss to dynamically compute their gradients and perform the gradient-descent step. The optimizer requires a "closure" as argument: a function that re-evaluates the model and returns the loss. There is, however, a small catch. The optimized image may take values between $-\infty$ and $+\infty$ instead of staying between 0 and 1. In other words, the image might be well optimized and still have absurd values. In fact, we must perform the optimization under constraints so that the input image keeps its values in the right range. There is a simple solution: at each step, clamp the image to keep its values in the 0-1 interval.
###Code
def run_style_transfer(cnn, normalization_mean, normalization_std,
content_img, style_img, input_img, num_steps=300,
style_weight=1000000, content_weight=1):
"""Run the style transfer."""
print('Building the style transfer model..')
model, style_losses, content_losses = get_style_model_and_losses(cnn,
normalization_mean, normalization_std, style_img, content_img)
optimizer = get_input_optimizer(input_img)
print('Optimizing..')
run = [0]
while run[0] <= num_steps:
def closure():
# correct the values of the updated input image
input_img.data.clamp_(0, 1)
optimizer.zero_grad()
model(input_img)
style_score = 0
content_score = 0
for sl in style_losses:
style_score += sl.loss
for cl in content_losses:
content_score += cl.loss
style_score *= style_weight
content_score *= content_weight
loss = style_score + content_score
loss.backward()
run[0] += 1
if run[0] % 50 == 0:
print("run {}:".format(run))
print('Style Loss : {:4f} Content Loss: {:4f}'.format(
style_score.item(), content_score.item()))
print()
return style_score + content_score
optimizer.step(closure)
# a last correction...
input_img.data.clamp_(0, 1)
return input_img
###Output
_____no_output_____
###Markdown
Finally, we run the algorithm.
###Code
output = run_style_transfer(cnn, cnn_normalization_mean, cnn_normalization_std,
content_img, style_img, input_img)
plt.figure()
imshow(output, title='Output Image')
# sphinx_gallery_thumbnail_number = 4
plt.ioff()
plt.show()
###Output
_____no_output_____ |
analysis_notebooks/BSA_sensor_R8_cext_wave.ipynb | ###Markdown
Case d=1 nm at +/-z 2 proteins EF -0.0037, R8 nm
###Code
s_file_0 = '../data/wave_cext_d_prot_sensor/test_join_sort/BSA_sensorR80_d=1_2pz/BSA_sensorR80_d=infty_ef0.0037_total.txt'
p_file_0 = '../data/wave_cext_d_prot_sensor/test_join_sort/BSA_sensorR80_d=1_2pz/BSA_sensorR80_2pz_d=1_00_ef0.0037_total.txt'
fig_name_0 = '2pz_00_ef-0.0037_R8nm'
report(s_file_0, p_file_0, fig_name_0)
###Output
Cext max at d=infty is 4010.09400027 and it occurs at a wavelength of 3840.0
Cext max at d=1 nm is 3901.06202839 and it occurs at a wavelength of 3845.0
###Markdown
Case d=1 nm at +/-z 2 proteins EF -0.0037, R8 nm dipole tilt 30 deg (RH rule z axis)
###Code
s_file_30 = '../data/wave_cext_d_prot_sensor/test_join_sort/BSA_sensorR80_2pz_d=1_tilt_30/BSA_sensorR80_d=infty_ef0.0037_total.txt'
p_file_30 = '../data/wave_cext_d_prot_sensor/test_join_sort/BSA_sensorR80_2pz_d=1_tilt_30/BSA_sensorR80_2pz_d=1_tilt_30_total.txt'
fig_name_30 = '2pz_30_ef-0.0037_R8nm'
report(s_file_30, p_file_30, fig_name_30)
###Output
Cext max at d=infty is 4010.09400027 and it occurs at a wavelength of 3840.0
Cext max at d=1 nm is 3901.29154274 and it occurs at a wavelength of 3845.0
###Markdown
Case d=1 nm at +/-z 2 proteins EF -0.0037, R8 nm tilt 45 deg
###Code
s_file_45 = '../data/wave_cext_d_prot_sensor/test_join_sort/BSA_sensorR80_2pz_d=1_tilt_45/BSA_sensorR80_d=infty_ef0.0037_total.txt'
p_file_45 = '../data/wave_cext_d_prot_sensor/test_join_sort/BSA_sensorR80_2pz_d=1_tilt_45/BSA_sensorR80_2pz_d=1_tilt_45_total.txt'
fig_name_45 = '2pz_45_ef-0.0037_R8nm'
report(s_file_45, p_file_45, fig_name_45)
###Output
Cext max at d=infty is 4010.09400027 and it occurs at a wavelength of 3840.0
Cext max at d=1 nm is 3901.12117915 and it occurs at a wavelength of 3845.0
###Markdown
Case d=1 nm at +/-z 2 proteins EF -0.0037, R8 nm dipole tilt 30 deg (RH rule x axis)
###Code
s_file_30_x = '../data/wave_cext_d_prot_sensor/test_join_sort/BSA_sensorR80_2pz_d=1_tilt_30_x/BSA_sensorR80_d=infty_ef0.0037_total.txt'
p_file_30_x = '../data/wave_cext_d_prot_sensor/test_join_sort/BSA_sensorR80_2pz_d=1_tilt_30_x/BSA_sensorR80_2pz_d=1_tilt_30_x_total.txt'
fig_name_30_x = '2pz_30_x_ef-0.0037_R8nm'
report(s_file_30_x, p_file_30_x, fig_name_30_x)
###Output
Cext max at d=infty is 4010.09400027 and it occurs at a wavelength of 3840.0
Cext max at d=1 nm is 3903.53850794 and it occurs at a wavelength of 3845.0
###Markdown
Case d=1 nm at +/-z 2 proteins EF -0.0037, R8 nm rot 45 deg
###Code
s_file_rot_45 = '../data/wave_cext_d_prot_sensor/test_join_sort/BSA_sensorR80_2pz_d=1_rot_45/BSA_sensorR80_d=infty_ef0.0037_total.txt'
p_file_rot_45 = '../data/wave_cext_d_prot_sensor/test_join_sort/BSA_sensorR80_2pz_d=1_rot_45/BSA_sensorR80_2pz_d=1_rot_45_total.txt'
fig_name_rot_45 = '2pz_rot_45_ef-0.0037_R8nm'
report(s_file_rot_45, p_file_rot_45, fig_name_rot_45)
###Output
Cext max at d=infty is 4010.09400027 and it occurs at a wavelength of 3840.0
Cext max at d=1 nm is 3937.91661669 and it occurs at a wavelength of 3842.5
###Markdown
Case d=1 nm at +/-z 2 proteins EF -0.0037, R8 nm rot 90 deg
###Code
s_file_rot_90 = '../data/wave_cext_d_prot_sensor/test_join_sort/BSA_sensorR80_2pz_d=1_rot_90/BSA_sensorR80_d=infty_ef0.0037_total.txt'
p_file_rot_90 = '../data/wave_cext_d_prot_sensor/test_join_sort/BSA_sensorR80_2pz_d=1_rot_90/BSA_sensorR80_2pz_d=1_rot_90_total.txt'
fig_name_rot_90 = '2pz_rot_90_ef-0.0037_R8nm'
report(s_file_rot_90, p_file_rot_90, fig_name_rot_90)
###Output
Cext max at d=infty is 4010.09400027 and it occurs at a wavelength of 3840.0
Cext max at d=1 nm is 3943.76489942 and it occurs at a wavelength of 3842.5
###Markdown
Case d=1 nm at +/-x 2 proteins EF -0.0037, R8 nm
###Code
s_file_x = '../data/wave_cext_d_prot_sensor/test_join_sort/BSA_sensorR80_d=1_2px/BSA_sensorR80_d=infty_ef0.0037_total.txt'
p_file_x = '../data/wave_cext_d_prot_sensor/test_join_sort/BSA_sensorR80_d=1_2px/BSA_sensorR80_2px_d=1_00_total.txt'
fig_name_x = '2px_ef-0.0037_R8nm'
report(s_file_x, p_file_x, fig_name_x)
###Output
Cext max at d=infty is 4010.09400027 and it occurs at a wavelength of 3840.0
Cext max at d=1 nm is 3979.0490072 and it occurs at a wavelength of 3840.0
###Markdown
Case d=1 nm at +/-y 2 proteins EF -0.0037, R8 nm
###Code
s_file_y = '../data/wave_cext_d_prot_sensor/test_join_sort/BSA_sensorR80_d=1_2py/BSA_sensorR80_d=infty_ef0.0037_total.txt'
p_file_y = '../data/wave_cext_d_prot_sensor/test_join_sort/BSA_sensorR80_d=1_2py/BSA_sensorR80_2py_d=1_00_total.txt'
fig_name_y = '2py_ef-0.0037_R8nm'
report(s_file_y, p_file_y, fig_name_y)
###Output
Cext max at d=infty is 4010.09400027 and it occurs at a wavelength of 3840.0
Cext max at d=1 nm is 3985.95109048 and it occurs at a wavelength of 3840.0
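###Markdown
For convenience, the peak values printed by the cells above can be gathered into one small comparison (a hedged summary sketch; the numbers below are transcribed from the outputs above, and the shift is simply the peak wavelength at d=1 nm minus the peak wavelength at d=infty):
###Code
# values transcribed from the report() outputs above: case -> (wavelength of Cext max at d=1 nm, Cext max at d=1 nm)
peaks_d1 = {'2pz_00': (3845.0, 3901.06), '2pz_tilt_30': (3845.0, 3901.29),
            '2pz_tilt_45': (3845.0, 3901.12), '2pz_tilt_30_x': (3845.0, 3903.54),
            '2pz_rot_45': (3842.5, 3937.92), '2pz_rot_90': (3842.5, 3943.76),
            '2px_00': (3840.0, 3979.05), '2py_00': (3840.0, 3985.95)}
wave_infty = 3840.0  # wavelength of the Cext max at d=infty, identical for every case above
for case, (wave, cext) in peaks_d1.items():
    print('{:>14s}: peak shift = {:+.1f}, Cext max = {:.2f}'.format(case, wave - wave_infty, cext))
###Output
_____no_output_____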
|
KMeans_Hierarchical_Clusteringv2.ipynb | ###Markdown
**Clustering Methods in Python**--- Version: 1.0 Prepared by: Updated and Maintained by: [QuantUniversity](https://www.quantuniversity.com) Author: Matthew Dixon For support or additional information, email us at : *Copyright 2020 CFA Institute* NOTE: This section to be appended after getting info from CFA Institute--- How to run this notebook?This notebook is *view only* and uses Google Colab to run. To **run this Colab notebook**, either:- **Make a copy to your Google Drive so you can make local changes:** File > Save a copy in Drive...- **Run in playground mode:** File > Open in playground mode- **Download the Jupyter notebook, so you can run it on your computer configured with Jupyter:** File > Download .ipynb  The purpose of this python notebook is to generate the unsupervised learning mini-case study results in the CFA Machine Learning reading 7: Machine Learning, for the case study: **"CLUSTERING STOCKS BASED ON CO-MOVEMENT SIMILARITY"** Import Packages needed to run
###Code
# restart run time
!pip install plotly -U
import pandas as pd
import plotly
import plotly.figure_factory as ff
import plotly.express as px
import copy
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial import distance
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.collections import LineCollection
###Output
_____no_output_____
###Markdown
Step 1: Collect panel data on adjusted closing prices for the stocks under investigation.
###Code
# The 8 S&P 500 member stocks
names=['JPM', 'UBS', 'GS', 'FB', 'AAPL', 'GOOG', 'GM', 'F']
# Load data
SP500=pd.DataFrame()
# Use a for loop to load different files into single dataframe
for name in names:
df=pd.read_csv('https://cfa-dataset.s3-us-west-2.amazonaws.com/kmeans-hierarchical-clustering/' + name + '.csv', index_col='Date')
SP500[name]=df['Adj Close']
# Round the number value to keep two decimals
pd.set_option('display.float_format', lambda x: '%.2f' % x)
# Log dataframe information
SP500.head()
SP500.tail()
stock = 'AAPL' #@param ['JPM', 'UBS', 'GS', 'FB', 'AAPL', 'GOOG', 'GM', 'F'] {allow-input: true}
# Using graph_objects
import plotly.graph_objects as go
SP500R = SP500.reset_index()
fig = go.Figure([go.Scatter(x=SP500R['Date'], y=SP500R[stock])])
fig.update_xaxes(title_text="Date")
fig.update_yaxes(title_text= stock)
fig.show()
###Output
_____no_output_____
###Markdown
 Step 2: Calculate daily returns for each stock
###Code
# Transform prices into percentage changes (daily returns)
SP500_pct_change = SP500.pct_change().dropna()
# Round the number value to keep two decimals
pd.set_option('display.float_format', lambda x: '%.3f' % x)
SP500_pct_change.head()
# Using graph_objects
fig = px.line(SP500_pct_change, x=SP500_pct_change.index, y=SP500_pct_change.columns, title="Stock Daily Return")
fig.update_xaxes(title_text="Date")
fig.update_yaxes(title_text="Daily Return")
fig.show()
import plotly.express as px
SP500_pct_change = SP500_pct_change.rename_axis(index='date', columns = 'company')
fig = px.area(SP500_pct_change, facet_col="company", facet_col_wrap=2)
fig.update_yaxes(title_text="Daily Return")
fig.show()
###Output
_____no_output_____
###Markdown
Step 3: Distance matrix computation How does cluster analysis recognise "similar" assets? It does so by calculating the relative distances of price-series vectors in $n$-dimensional space, where $n$ is the number of observations. We know from foundational Linear Algebra and Geometry that the distance between two vectors can be calculated in a number of ways. In this tutorial we will use the Euclidean or $L^2$ norm to calculate the relative distances between price vectors. Formally, given two Cartesian coordinates $P = (p_1,p_2, \ldots, p_n)$ and $Q = (q_1,q_2, \ldots, q_n)$, the Euclidean norm $d(P,Q)$ can be computed as follows:$$ d(P,Q) = d(Q,P) = \sqrt{(q_1 - p_1)^2+(q_2 - p_2)^2+ \ldots + (q_n - p_n)^2} $$ To start performing cluster analysis we compute a distance matrix $D$ whose entry $(i,j)$ represents the $L^2$ norm between the $i$th and $j$th vectors. After the initial computation our matrix $D$ can be represented as follows: $$D = \begin{bmatrix}d_{11} & d_{12} & \ldots & d_{1k} \\d_{21} & d_{22} & \ldots & d_{2k}\\\vdots & \vdots & \ddots & \vdots\\d_{k1} & d_{k2} &\ldots & d_{kk}\end{bmatrix}$$ where $k$ is the number of assets. It may become evident that the matrix $D$ has some nice properties (it is symmetric with a zero diagonal). We proceed to find the closest vectors. For example, if the distance between vectors 1 and 2 were smaller than the distance between any other two vectors, we would form the first cluster out of vectors 1 and 2. The next step requires us to link the newly created cluster with the rest of matrix $D$, i.e. we need to find the distance of the new cluster relative to the other vectors. This process is called **linkage**. There are several approaches that can be used for linkage: minimum (single), average, and maximum (complete). Whichever linkage method we choose, we proceed in the same fashion until we collapse our matrix $D$ to a single cluster. To calculate the $L^2$ norms we will use `scipy`'s `distance` module. The result will be a distance matrix as described above. Note that the distance matrix will be calculated using percentage changes, not raw prices. Also note that calculating such a matrix has $O(n^2)$ complexity.
###Code
from scipy.spatial import distance
# Init empty dataframe as a two dimension array
SP500_distances = pd.DataFrame(index=names, columns = names, dtype=float)
# Use two for loop to calculate the distance
for sym1 in names:
for sym2 in names:
SP500_distances[sym1][sym2] = distance.euclidean(SP500_pct_change[sym1].values,
SP500_pct_change[sym2].values)
# Explore the result
import seaborn as sns
fig = plt.figure(figsize=(14, 10))
sns.heatmap(SP500_distances, annot = True, fmt='.3f', vmin=0, vmax=0.5, cmap= 'coolwarm', xticklabels=True, yticklabels=True)
fig.show()
###Output
_____no_output_____
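###Markdown
The same matrix can be obtained without the explicit double loop (a hedged aside, not part of the original case study): `scipy.spatial.distance.pdist` computes all pairwise Euclidean distances at once and `squareform` expands them into the symmetric matrix, which should match `SP500_distances` up to floating-point noise.
###Code
from scipy.spatial.distance import pdist, squareform
# all pairwise Euclidean distances between the daily-return series, one row per stock
_vec = pd.DataFrame(squareform(pdist(SP500_pct_change[names].T.values)),
                    index=names, columns=names)
print(np.allclose(_vec.values, SP500_distances.values))
###Output
_____no_output_____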
###Markdown
In the next three parts, we use three different algorithms to cluster the dataset and store the results in the same dataframe. Agglomerative clustering The **Dendrogram** is a convenient way of visualizing hierarchical clusters. Below we define and create a dendrogram using the `scipy` library. The vertical distance connecting two clusters represents the Euclidean distance between them. Linkage is performed by averaging the distances. The `color_threshold` parameter colors all the descendent links below a cluster node the same color if the node is the first node below the cut threshold value. The default value is 0.7*max(Z[:,2]) (as in scipy and MATLAB).
###Code
color_threshold = 0.36#@param {type:"number"}
# Draw figure using scipy and get data in function return as dendro
plt.figure(figsize=(16, 6))
dendro = dendrogram(linkage(SP500_pct_change.T.values, method = 'average', metric = 'euclidean'), labels=names, color_threshold=color_threshold)
# Explore data
for i in dendro:
print(i,dendro[i])
# Generate clustering result by color using code
color_map = {}
leaves_cluster = [None] * len(dendro['leaves'])
for link_x, link_y, link_color in zip(dendro['icoord'],dendro['dcoord'],dendro['color_list']):
for (xi, yi) in zip(link_x, link_y):
if yi == 0.0: # if yi is 0.0, the point is a leaf
# xi of leaves are 5, 15, 25, 35, ... (see `iv_ticks`)
# index of leaves are 0, 1, 2, 3, ... as below
leaf_index = (int(xi) - 5) // 10
# each leaf has a same color of its link.
if link_color not in color_map:
color_map[link_color] = len(color_map)
leaves_cluster[leaf_index] = color_map[link_color]
leaves_cluster
# Or by observation directly
# leaves_cluster = [2, 0, 0, 1, 1, 1, 1, 1]
# Store labeld result in dataframe
df_cluster = pd.DataFrame(leaves_cluster, columns=['Agglomerative'])
df_cluster.index = dendro['ivl']
df_cluster.sort_index(inplace=True)
df_cluster
def decode_clusters(labels, clusters):
result = {}
for i in range(len(clusters)):
if clusters[i] not in result:
result[clusters[i]] = []
result[clusters[i]].append(labels[i])
return list(result.values())
result_comparison = {}
result_comparison['Agglomerative'] = decode_clusters(dendro['ivl'], leaves_cluster)
result_comparison
###Output
_____no_output_____
###Markdown
 K-means++ clustering
###Code
import numpy as np
from sklearn import cluster
cl = cluster.KMeans(init='k-means++', n_clusters=3, max_iter=10000, n_init=1000, tol=0.000001)
cl.fit(np.transpose(SP500_pct_change))
cl.labels_
df_cluster['K-means']=df_cluster['Agglomerative']
df_cluster['K-means'][SP500_pct_change.columns]=cl.labels_
df_cluster.sort_index(inplace=True)
df_cluster
result_comparison['K-means'] = decode_clusters(SP500_pct_change.columns, cl.labels_)
result_comparison
###Output
_____no_output_____
###Markdown
Divisive Clustering Use the slider to change the number of clusters in the result. If the data cannot be split into exactly this number of clusters, the nearest larger count is used.
###Code
num_clusters = 3 #@param {type:"slider", min:1, max:8, step:1}
import numpy as np;
import pandas as pd
all_elements = copy.copy(names)
dissimilarity_matrix = pd.DataFrame(SP500_distances,index=SP500_distances.columns, columns=SP500_distances.columns)
def avg_dissim_within_group_element(ele, element_list):
max_diameter = -np.inf
sum_dissm = 0
for i in element_list:
sum_dissm += dissimilarity_matrix[ele][i]
if( dissimilarity_matrix[ele][i] > max_diameter):
max_diameter = dissimilarity_matrix[ele][i]
if(len(element_list)>1):
avg = sum_dissm/(len(element_list)-1)
else:
avg = 0
return avg
def avg_dissim_across_group_element(ele, main_list, splinter_list):
if len(splinter_list) == 0:
return 0
sum_dissm = 0
for j in splinter_list:
sum_dissm = sum_dissm + dissimilarity_matrix[ele][j]
avg = sum_dissm/(len(splinter_list))
return avg
def splinter(main_list, splinter_group):
most_dissm_object_value = -np.inf
most_dissm_object_index = None
for ele in main_list:
x = avg_dissim_within_group_element(ele, main_list)
y = avg_dissim_across_group_element(ele, main_list, splinter_group)
diff= x -y
if diff > most_dissm_object_value:
most_dissm_object_value = diff
most_dissm_object_index = ele
if(most_dissm_object_value>0):
return (most_dissm_object_index, 1)
else:
return (-1, -1)
def split(element_list):
main_list = element_list
splinter_group = []
(most_dissm_object_index,flag) = splinter(main_list, splinter_group)
while(flag > 0):
main_list.remove(most_dissm_object_index)
splinter_group.append(most_dissm_object_index)
(most_dissm_object_index,flag) = splinter(element_list, splinter_group)
return (main_list, splinter_group)
def max_diameter(cluster_list):
max_diameter_cluster_index = None
max_diameter_cluster_value = -np.inf
index = 0
for element_list in cluster_list:
for i in element_list:
for j in element_list:
if dissimilarity_matrix[i][j] > max_diameter_cluster_value:
max_diameter_cluster_value = dissimilarity_matrix[i][j]
max_diameter_cluster_index = index
index +=1
if(max_diameter_cluster_value <= 0):
return -1
return max_diameter_cluster_index
current_clusters = ([all_elements])
level = 1
index = 0
result = None
while(index!=-1):
if (result is None) and (len(current_clusters) >= num_clusters):
result = copy.deepcopy(current_clusters)
print(level, '*', current_clusters)
else:
print(level, current_clusters)
(a_clstr, b_clstr) = split(current_clusters[index])
del current_clusters[index]
current_clusters.append(a_clstr)
current_clusters.append(b_clstr)
index = max_diameter(current_clusters)
level +=1
if result is None:
result = current_clusters
print(level, '*', current_clusters)
else:
print(level, current_clusters)
# Generate the result by code
df_cluster['Divisive'] = df_cluster['Agglomerative']
for i in range(len(result)):
for col in result[i]:
df_cluster['Divisive'][col]=i
# Or by observation directly
# df_cluster['Divisive'] = [2, 0, 0, 0, 0, 0, 1, 1]
df_cluster.sort_index(inplace=True)
df_cluster
result_comparison['Divisive'] = result
result_comparison
###Output
_____no_output_____ |
sorting-algorithms.ipynb | ###Markdown
Sorting algorithms Here are some of the algorithms used almost everywhere (databases, operating systems, servers, and client software like browsers, Android, iOS, Windows Phone, etc.), wherever data and its representation are manipulated, from something as simple as a sequence of *key-value* pairs to structured data with nested structures, methods, functions and properties. Selection sort Suppose for example a vector of nine elements:
###Code
# SOME IMPORTS FIRST
import random, matplotlib.pyplot as plt, numpy as np
def swap(s1 , s2):
return s2, s1
vector = random.sample(range(1,10), 9)
print(vector)
###Output
[8, 1, 4, 2, 3, 6, 7, 5, 9]
###Markdown
Let's write a function in Python that sorts the vector's elements in place into increasing order like *[1,2,3,4,5,...]*, allowing repeated elements
###Code
def selection_sort(v):
for i in range(0, len(v) - 1):
pmin = i
for j in range(i+1, len(v)):
if v[j] < v[pmin]:
pmin = j
if pmin != i:
v[i], v[pmin] = swap(v[i], v[pmin])
###Output
_____no_output_____
###Markdown
It modifies the input vector into a sorted vector as follows:
###Code
selection_sort(vector)
print("New Vector : " + str(vector))
###Output
New Vector : [1, 2, 3, 4, 5, 6, 7, 8, 9]
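###Markdown
A quick way to convince ourselves the function behaves correctly (a small illustrative check) is to compare it against Python's built-in `sorted`:
###Code
# illustrative check: selection_sort should agree with Python's built-in sorted()
_test = random.sample(range(1, 100), 20)
_expected = sorted(_test)
selection_sort(_test)
print(_test == _expected)
###Output
_____no_output_____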
###Markdown
This algorithm is quite inefficient because for any vector of $n$ elements we always have an $O(n^2)$ complexity, i.e. a quadratic complexity, for every $n \in \mathbb{N}$. Here is the computational cost: * the first *for* loop scans $(n-1)$ elements* the second *for* loop, nested in the previous one, scans the remaining $(n-i)$ elements at step $i$* the main instructions are therefore executed roughly $\sum_{i=1}^{n-1} (n-i) = \frac{n(n-1)}{2}$ times* the function $f(n)=\frac{n(n-1)}{2}$ behaves like $n^2$ for $n \rightarrow +\infty$* so it makes sense to say the algorithm has a complexity of $O(n^2)$ Bubble Sort This algorithm is based on the idea of sorting the vector by repeatedly comparing adjacent pairs (bubbles) of elements: at each iteration elements with smaller values tend to bubble towards the beginning of the vector and elements with larger values tend to sink towards the end. Here is a new vector:
###Code
vector = random.sample(range(1,10), 9)
print("Vector : " + str(vector) )
def bubble_sort(v):
n = len(v)
while True:
swaps = 0
for j in range(0, n -1):
if v[j] > v[j+1]:
v[j], v[j+1] = swap(v[j], v[j+1])
swaps += 1
if swaps == 0:
break
n -= 1
bubble_sort(vector)
print(vector)
###Output
[1, 2, 3, 4, 5, 6, 7, 8, 9]
###Markdown
Note that it is optimized at the last line: we assume that after the first pass the element with the maximum value has settled at the end of the vector, and recursively this happens on every pass, so the portion of the vector still to be scanned gradually becomes smaller and smaller. At this point we can redefine the function to count how many simple instructions the algorithm performs for a single vector, but first let's formalize the two extreme cases of the computational cost for a vector $V$ of $n$ elements:- $O(n)$ : the vector is already sorted, i.e. $\forall i \in [1, n-1] : v_i \le v_{i+1}$ (a single pass with no swaps)- $O(n^2)$ : the vector is sorted in reverse order, i.e. $\forall i \in [1, n-1] : v_i \ge v_{i+1}$, so every element has to travel across the whole vector
###Code
# Algorithm for counting number of iterations
def bubble_sort_counting(v):
counting = 0
n = len(v)
while True:
swaps = 0
for j in range(0, n -1):
if v[j] > v[j+1]:
v[j], v[j+1] = swap(v[j], v[j+1])
swaps += 1
counting += 1
if swaps == 0:
break
n -= 1
return counting
n_max = 100
n_list = list()
i_list = list()
for j in range(1, n_max + 1):
n = random.randint(1,n_max+1)
vector = random.sample(range(1,n+1), n)
i = bubble_sort_counting(vector)
n_list.append(n)
i_list.append(i)
###Output
_____no_output_____
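###Markdown
As an aside, we can also check the two extreme cases formalized above directly (an illustrative check; the exact counts depend on what is counted, but the already-sorted vector should need far fewer iterations than the reverse-sorted one):
###Code
# illustrative check of the two extreme cases: already sorted vs reverse sorted
best = list(range(1, 101))
worst = list(range(100, 0, -1))
print("already sorted :", bubble_sort_counting(best), "iterations")
print("reverse sorted :", bubble_sort_counting(worst), "iterations")
###Output
_____no_output_____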
###Markdown
Here is a graph that shows how the bubble_sort complexity evolves over a range of vectors. In this example I have generated on my machine a hundred pseudo-random vectors of random length and random values, as follows: random length up to a maximum of one hundred elements and random values taken from the range $[1,100]$ in a random sequence. **Each point represents a single vector**, where the *x-coordinate* represents the vector length (number of elements) and the *y-coordinate* represents the number of iterations bubble sort needed to get it sorted
###Code
plt.scatter(n_list, i_list)
plt.show()
###Output
_____no_output_____
###Markdown
The set of points that represents the vectors shows one thing for sure: on average bubble sort behaves as an $O(n^2)$ sorting method, because the points do not follow a straight line as an $O(n)$ function would. The following graph summarizes the whole set of points with a fitted curve: using a polynomial of second degree we obtain a curve that represents: > how much the computational cost rises compared to the number of elements to be sorted
###Code
p = np.poly1d(np.polyfit(n_list, i_list, 2))
xp = np.linspace(0, max(n_list))
plt.plot(xp, p(xp))
plt.show()
###Output
_____no_output_____
###Markdown
Note how big the number of iteration becomes bigger aroung sorted vectors of 100 elements: more than 8000 iterations Quick sortThe idea is to pick an element $v_p \in V$ as a *pivot* and recursively apply the algorithm as follow :$$ {P_1, P_2} \quad \textit{partitions of} \quad V \quad \| \quad \forall v_i \in P_1 : v_i < v_p, \forall v_j \in P_2 : v_p < v_j$$
###Code
# DEFINITION
def quick_sort(V, i, j):
if i < j:
p = partition(V, i, j)
quick_sort(V, i, p - 1)
quick_sort(V, p + 1, j)
def partition(V, i, j):
p = V[j]
small = i - 1
for k in range(i, j):
if V[k] <= p:
small += 1
V[k], V[small] = swap(V[k], V[small])
V[j], V[small+1] = swap(V[j], V[small+1])
return small + 1
###Output
_____no_output_____
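###Markdown
As a quick illustration of how it is called (the function sorts the list in place between the two indices passed in):
###Code
# illustrative usage: sort a small random vector in place
_v = random.sample(range(1, 10), 9)
quick_sort(_v, 0, len(_v) - 1)
print(_v)
###Output
_____no_output_____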
###Markdown
Quick sort has an $O(n \log(n))$ average complexity; let's verify this by collecting data on my machine and plotting the graph
###Code
# QUICK SORT WITH COUNTING
def quick_sort_counting(V, i, j):
count = 0
if i < j:
p, count = partition(V, i, j)
count += quick_sort_counting(V, i, p - 1)
count += quick_sort_counting(V, p + 1, j)
return count
def partition(V, i, j):
count = 0
p = V[j]
small = i - 1
for k in range(i, j):
if V[k] <= p:
small += 1
V[k], V[small] = swap(V[k], V[small])
count += 1
V[j], V[small+1] = swap(V[j], V[small+1])
return small + 1, count
n_max = 100
n_list = list()
i_list = list()
for j in range(1, n_max + 1):
n = random.randint(1,n_max+1)
V = random.sample(range(1,n+1), n)
i = quick_sort_counting(V, 0, len(V)-1 )
n_list.append(n)
i_list.append(i)
#p = np.poly1d(np.polyfit(n_list, i_list, 2))
xp = np.linspace(0.01, max(n_list))
#plt.plot(xp, p(xp), '-r')
plt.plot(xp, xp * np.log2(xp) , '-r')
plt.scatter(n_list, i_list)
plt.show()
###Output
_____no_output_____
###Markdown
As before the dots represents random-generated vectors sorted by quick sort algorithm where has coordinates : (*vector length*, *number of iterations done to be sorted*). The **red line** is the logarithmic function : $ p(n) = n \log_2 (n)$. As you can see $p(n)$ does fit pretty well all the dots, so we can say pretty sure that: > quick sort algorithm follows on average a $O(n \log(n) )$ complessity. Merge sortIt applys recursively a merging between 2 sorted vectors from the original one. Recursively it divide the vector in 2 parts until it cannot be divide any further when the vector has length 1. For each function call it digs into the very last smaller vector of one element where for definition it is already sorted, than on the first return callback merge 2 vectors of only one elements into a sorted vector of two elements and so on.
###Code
def merge_sort(V):
if len(V) == 1:
return V
result = []
mid = int(len(V) / 2)
y = merge_sort(V[:mid])
z = merge_sort(V[mid:])
i = 0
j = 0
while i < len(y) and j < len(z):
if y[i] > z[j]:
result.append(z[j])
j += 1
else:
result.append(y[i])
i += 1
result += y[i:]
result += z[j:]
return result
###Output
_____no_output_____
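###Markdown
A quick illustrative call (note that, unlike the previous algorithms, `merge_sort` returns a new sorted list instead of sorting in place):
###Code
# illustrative usage: merge_sort returns a new sorted list
_v = random.sample(range(1, 10), 9)
print(merge_sort(_v))
###Output
_____no_output_____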
###Markdown
Let's generate 100 vectors to be merge-sorted while counting iterations. What kind of complexity is going to show up, and which function best fits all the vectors?
###Code
# merge sort counting
def merge_sort_counting(V):
count = 0
if len(V) == 1:
count += 1
return V, count
result = []
mid = int(len(V) / 2)
y, c1 = merge_sort_counting(V[:mid])
z, c2 = merge_sort_counting(V[mid:])
count += c1 + c2
i = 0
j = 0
while i < len(y) and j < len(z):
if y[i] > z[j]:
result.append(z[j])
j += 1
else:
result.append(y[i])
i += 1
count += 1
result += y[i:]
result += z[j:]
count += (len(y) - i) + (len(z) - j)
return result, count
n_max = 100
n_list = list()
i_list = list()
for j in range(1, n_max + 1):
n = random.randint(1,n_max+1)
V = random.sample(range(1,n+1), n)
V, i = merge_sort_counting(V)
n_list.append(n)
i_list.append(i)
#p = np.poly1d(np.polyfit(n_list, i_list, 2))
xp = np.linspace(0.1, max(n_list))
#plt.plot(xp, p(xp), '-r')
plt.plot(xp, xp * ( np.log(xp)/np.log(1.8) ), 'r-')
plt.scatter(n_list, i_list)
plt.show()
###Output
_____no_output_____ |
02_Weather.ipynb | ###Markdown
Weather Data> This notebook did not go as planned. The format of [NOAA Integrated Surface Database (ISD)](https://www.ncdc.noaa.gov/isd) data proved too challenging for me to understand. I did find [Jasper Slingsby's](http://www.ecologi.st/post/weather/) blog insightful but it's for ```R``` - if you happen to know how to transform it with ```python``` please let me know.> I reverted to the recommended [Reliable Prognosis](https://rp5.ru/Weather_in_the_world), where another problem arose.> Only one weather station, ```cape town airport METAR```, provides hourly data; the other stations have 2-to-3-hour gaps. > We are thus presented with a choice: 1. select the one with consistent hourly data and apply it everywhere;2. select the other five and interpolate the data; then create a Voronoi diagram, dividing the area into regions, and assign each road segment its own weather based on an ```intersects``` and ```within``` ```spatial join```, i.e. from the weather station closest to it.> We choose the second. Here [NOAA](https://www.ncdc.noaa.gov/isd) did however prove useful. Its ```isd-history.csv``` supplies WGS84 coordinates for most weather stations. These were harvested and used. A preliminary Voronoi diagram was viewed in Colab, but the polygons were created and some spatial manipulation was conducted with [QGIS](https://www.qgis.org/en/site/). > This ```notebook``` is mostly more data wrangling.
###Code
#because we're on google colab
!pip install --upgrade pandas
!pip install --upgrade geopandas
!pip install --upgrade seaborn
#import the models that make the magic possible
import pandas as pd
import geopandas as gpd
import numpy as np
from datetime import datetime, timedelta
from pathlib import Path
import matplotlib.pyplot as plt
from scipy.spatial import Voronoi,voronoi_plot_2d
#import seaborn as sns
# mount google drive as a file system
from google.colab import drive
drive.mount('/content/gdrive', force_remount=True)
#set path
path = Path('/content/gdrive/My Drive/Zindi_Accident')
###Output
_____no_output_____
###Markdown
Let's look at the [NOAA](https://www.ncdc.noaa.gov/isd) ```isd-history.csv```
###Code
stations = pd.read_csv(path/'data/isd-history.csv',parse_dates=['BEGIN','END'])
# Weather records are queried by a concatenation of USAF and WBAN.
stations['station_id'] = stations.apply(lambda x: str(x['USAF'])+str(x['WBAN']), axis=1)
stations = stations.rename({'STATION NAME':'STATION_NAME'}, axis=1)
stations = stations.set_index('station_id')
stations.head()
cpt_stations = stations.loc[stations['STATION_NAME'].isin(['PAARL', 'STRAND', 'YSTERPLANT(SAAFB)', 'MOLTENO RESERVIOR', 'CAPE TOWN INTL'])]
cpt_stations.head(5)
# Let's have a look at a preliminary voronoi
start = pd.Timestamp(2017,1,1)
end = pd.Timestamp(2018,12,31)
valid_stations = cpt_stations[(cpt_stations.BEGIN < start) & (cpt_stations.END > start)]
plt.figure()
lons = valid_stations.LON.values
lats = valid_stations.LAT.values
plt.plot(lons, lats,'ko')
vor = Voronoi(np.vstack((lons,lats)).T)
voronoi_plot_2d(vor,ax=plt.gca())
plt.gca().set_aspect(1)
plt.show()
#save it
cpt_stations.to_csv(path/'data/cpt_weather_stns.csv', index = False)
###Output
_____no_output_____
###Markdown
Here I took the ```csv``` into QGIS to create proper Voronoi polygons and conduct an ```intersects``` and ```within``` ```spatial join``` with the SANRAL road segments. This meant every road segment was associated with its own weather station. The results are below.
###Code
#load the voronoi and new road_segments
voronoi = gpd.read_file(path/'data/voronoi.shp')
road_voronoi = gpd.read_file(path/'data/roads_voronoi.shp')
#rename a column
road_voronoi = road_voronoi.rename({'STATION NA':'STATION_NA'}, axis=1)
# plot
#the voronoi polygons
ax = voronoi.plot(cmap='inferno', linewidth=0.5, alpha=0.6,edgecolor='white', figsize=(20,8))
#the weather stations
ax.scatter(cpt_stations.LON, cpt_stations.LAT, zorder=1, c='b', s=10)
#the new road_segments
road_voronoi.plot(cmap='viridis', alpha=0.5, ax=ax)
#plt.plot(ax=ax, lons, lats,)
ax.set_title('Roads and Voronoi with Weather Stations')
plt.show()
###Output
_____no_output_____
###Markdown
Now let's have a look at the weather from [Reliable Prognosis](https://rp5.ru/Weather_in_the_world). We start with one station, ```resample``` and then ```interpolate```, see how it works, and then do the other four.
###Code
#read the cape town airport weather station data
cpt_air = pd.read_csv(path/'data/weather/cpt_air_weather.csv', sep = ';', skiprows=6, usecols=range(29),
parse_dates = ['Local time in Cape Town (airport)'])
#rename some columns
cpt_air.rename(columns={'Local time in Cape Town (airport)': 'dt', 'T': 'Air_temp','Po': 'Atmos_press', 'P': 'Atmos_press_MeanSea',
'U': 'Humidity', 'Pa': 'PressureTendency', 'Ff': 'MeanWindSpeed', 'VV': 'Visibility','Td':'DewPoint',
'RRR': 'Rainfall'}, inplace=True)
#delete some columns
cpt_air.drop(['DD', 'ff10', 'ff3', 'N', 'WW','W1', 'W2', 'Tn', 'Tx', 'Cl', 'Nh', 'H', 'Cm', 'Ch',
'tR', 'E', 'Tg', 'E_' ,'sss',], axis=1, inplace=True)
cpt_air.head(3)
cpt_air.tail(3)
###Output
_____no_output_____
###Markdown
You can immediately see the 3-hour gaps. Furthermore, when you check the ```NaN``` counts, the data has "*holes*".
###Code
#check NaN
cpt_air.isnull().sum(axis = 0)
cpt_air.info()
#check some values
cpt_air.Rainfall.unique()
###Output
_____no_output_____
###Markdown
Right here you can see why I did not automate this process. Some columns contain unique ```text``` along with ```values```. These need to be transformed as required.
###Code
#change some text
cpt_air.loc[cpt_air['Rainfall'] == 'Trace of precipitation', 'Rainfall'] = 0.1
cpt_air.loc[cpt_air['Rainfall'] == 'No precipitation', 'Rainfall'] = 0
#transform to numeric
cpt_air["Rainfall"] = pd.to_numeric(cpt_air["Rainfall"])
#set as datetime index
cpt_air = cpt_air.set_index(pd.DatetimeIndex(cpt_air['dt']))
###Output
_____no_output_____
###Markdown
```resample``` to 1-hour periods and ```interpolate``` - we cannot ```interpolate``` over the entire time period because our results would be false. We can however limit the ```interpolation``` to fill one ```NaN``` on either side of a value, if it exists. This means that if values need to be ```interpolated```, they will follow the trend for one hour but leave the other ```NaN``` in place.
###Code
columns = ['Air_temp', 'Atmos_press', 'Atmos_press_MeanSea', 'PressureTendency', 'Humidity', 'MeanWindSpeed',
'Visibility', 'DewPoint', 'Rainfall']
#resample to every hour
cpt_air_h = cpt_air.resample('H', on='dt').mean()
# linear interpolation in both directions and fill only one consecutive NaN
cpt_air_inter = cpt_air_h[columns].interpolate(limit_direction = 'both', method='linear', limit = 1)
cpt_air_inter.head(4)
cpt_air_inter.tail(4)
#check some values
print(cpt_air_inter.Rainfall.unique())
cpt_air_inter.tail(4)
###Output
_____no_output_____
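###Markdown
To make the `limit=1` behaviour concrete, here is a tiny toy series (not the station data): an isolated gap is filled, while only the two edges of a longer run of `NaN` are filled and its middle is left untouched.
###Code
# toy example of interpolate with limit_direction='both' and limit=1
_toy = pd.Series([1.0, np.nan, 3.0, np.nan, np.nan, np.nan, 7.0])
print(_toy.interpolate(limit_direction='both', method='linear', limit=1).tolist())
###Output
_____no_output_____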
###Markdown
Let's create some graphs to understand the data a bit better
###Code
fig, axes = plt.subplots(ncols = 1, nrows = 8, figsize=(17, 11))
cols_plot = ['Air_temp', 'Atmos_press', 'Atmos_press_MeanSea', 'Humidity', 'MeanWindSpeed', 'Visibility',
'DewPoint', 'Rainfall']
cpt_air_inter[cols_plot].plot(ax=axes, marker='.', alpha=0.5, linestyle='-', subplots=True)
plt.subplots_adjust(hspace = 0.8, wspace= 0.3)
plt.show()
###Output
_____no_output_____
###Markdown
It's a bit noisy. Let's look at two slices of time, a few days in summer (Feb.) and a few in winter (Jul.).
###Code
sum_start, sum_end = '2017-02-08', '2017-02-12'
win_start, win_end = '2017-07-19', '2017-07-23'
# Plot daily and weekly resampled time series together
fig, ((ax1,ax2),(ax3,ax4),(ax5,ax6)) = plt.subplots(nrows=3, ncols=2, figsize=(17,7))
ax1.plot(cpt_air_inter.loc[sum_start:sum_end, 'Air_temp'], color= 'red', linestyle='-',marker='.')
ax1.set_title('Summer Air Temp')
ax2.plot(cpt_air_inter.loc[win_start:win_end, 'Air_temp'],marker='.', color = 'orange')
ax2.set_title('Winter Air Temp')
ax3.plot(cpt_air_inter.loc[sum_start:sum_end, 'Rainfall'], marker='.', linestyle='-', color = 'blue')
ax3.set_title('Summer Rainfall')
ax4.plot(cpt_air_inter.loc[win_start:win_end, 'Rainfall'], marker='.', linestyle='-', color = 'navy')
ax4.set_title('Winter Rainfall')
ax5.plot(cpt_air_inter.loc[sum_start:sum_end, 'Visibility'], marker='.', linestyle='-', color = 'brown')
ax5.set_title('Summer Visibility')
ax6.plot(cpt_air_inter.loc[win_start:win_end, 'Visibility'], marker='.', linestyle='-', color = 'brown')
ax6.set_title('Winter Visibility')
plt.subplots_adjust(hspace = 0.8, wspace= 0.3)
plt.show()
###Output
_____no_output_____
###Markdown
We can see the effect of restricting the interpolation to fill one ```NaN``` in either direction. The gaps are *'narrower'* but still represent the general trend. I feel this is better.
###Code
#reset index
cpt_air_inter.reset_index(inplace=True)
# add a column to identify the weather station
cpt_air_inter['Weather_Stn'] = 'Cape_Town_International'
#have a look
cpt_air_inter.head(3)
cpt_air_inter.shape
###Output
_____no_output_____
###Markdown
Now the other weather stations
###Code
#read the molteno weather station data
mol_weat = pd.read_csv(path/'data/weather/molteno_weather.csv', sep = ';', skiprows=6, usecols=range(29),
parse_dates = ['Local time in Cape Town / Molteno Reservoir'])
#rename some columns
mol_weat.rename(columns={'Local time in Cape Town / Molteno Reservoir': 'dt', 'T': 'Air_temp','Po': 'Atmos_press',
'P': 'Atmos_press_MeanSea', 'U': 'Humidity', 'Pa': 'PressureTendency', 'Ff': 'MeanWindSpeed',
'VV': 'Visibility','Td':'DewPoint', 'RRR': 'Rainfall'}, inplace=True)
#delete some columns
mol_weat.drop(['DD', 'ff10', 'ff3', 'N', 'WW','W1', 'W2', 'Tn', 'Tx', 'Cl', 'Nh', 'H', 'Cm', 'Ch',
'tR', 'E', 'Tg', 'E_' ,'sss',], axis=1, inplace=True)
mol_weat.head(3)
mol_weat.info()
mol_weat.isnull().sum(axis = 0)
#check some values
mol_weat.Rainfall.unique()
#set as datetime index
mol_weat = mol_weat.set_index(pd.DatetimeIndex(mol_weat['dt']))
#resample to every hour
mol_weat_h = mol_weat.resample('H', on='dt').mean()
# linear interpolation in both directions and fill only one consecutive NaN
mol_weat_inter = mol_weat_h[columns].interpolate(limit_direction = 'both', method='linear', limit = 1)
#reset index
mol_weat_inter.reset_index(inplace=True)
# add a column to identify the weather station and create a join field
mol_weat_inter['Weather_Stn'] = 'Molteno'
#have a look
mol_weat_inter.head(3)
mol_weat_inter.tail(3)
###Output
_____no_output_____
###Markdown
Let's create a ```weather``` df that contains all the weather data
###Code
weather = cpt_air_inter.append(mol_weat_inter)
#check some values
print(weather.Weather_Stn.unique())
print('')
print(weather.shape)
#weather.head(2)
###Output
['Cape_Town_International' 'Molteno']
(35036, 11)
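As an aside, the same stacking can be done with pd.concat, which is the preferred spelling in newer pandas versions where DataFrame.append is deprecated (a sketch):
# equivalent to the .append() call above; pass ignore_index=True if a fresh
# RangeIndex is preferred over the repeated per-station indices
weather = pd.concat([cpt_air_inter, mol_weat_inter])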
###Markdown
Now the next weather station, Ysterplaat
###Code
#read the ysterplaat weather station data
yster_weat = pd.read_csv(path/'data/weather/yster_weath.csv', sep = ';', skiprows=6, usecols=range(29),
parse_dates = ['Local time in Ysterplaat (airbase)'])
#rename some columns
yster_weat.rename(columns={'Local time in Ysterplaat (airbase)': 'dt', 'T': 'Air_temp','Po': 'Atmos_press',
'P': 'Atmos_press_MeanSea', 'U': 'Humidity', 'Pa': 'PressureTendency',
'Ff': 'MeanWindSpeed', 'VV': 'Visibility','Td':'DewPoint', 'RRR': 'Rainfall'},
inplace=True)
#delete some columns
yster_weat.drop(['DD', 'ff10', 'ff3', 'N', 'WW','W1', 'W2', 'Tn', 'Tx', 'Cl', 'Nh', 'H', 'Cm', 'Ch',
'tR', 'E', 'Tg', 'E_' ,'sss',], axis=1, inplace=True)
yster_weat.head(3)
yster_weat.info()
yster_weat.isnull().sum(axis = 0)
#check some values
yster_weat.Visibility.unique()
#change some text
yster_weat.loc[yster_weat['Visibility'] == 'less than 0.1', 'Visibility'] = 0.1
#transform to numeric
yster_weat["Visibility"] = pd.to_numeric(yster_weat["Visibility"])
#set as datetime index
yster_weat = yster_weat.set_index(pd.DatetimeIndex(yster_weat['dt']))
#resample to every hour
yster_weat_h = yster_weat.resample('H', on='dt').mean()
# linear interpolation in both directions and fill only one consecutive NaN
yster_weat_inter = yster_weat_h[columns].interpolate(limit_direction = 'both', method='linear', limit = 1)
#reset index
yster_weat_inter.reset_index(inplace=True)
# add a column to identify the weather station and create a join field
yster_weat_inter['Weather_Stn'] = 'Ysterplaat'
#have a look
yster_weat_inter.head(3)
weather = weather.append(yster_weat_inter)
#check some values
print(weather.Weather_Stn.unique())
print('')
print(weather.shape)
weather.tail(2)
###Output
_____no_output_____
###Markdown
One more: the Paarl weather station
###Code
#read the paarl weather station data
paarl_weat = pd.read_csv(path/'data/weather/paarl_weather.csv', sep = ';', skiprows=6, usecols=range(29),
parse_dates = ['Local time in Paarl'])
#rename some columns
paarl_weat.rename(columns={'Local time in Paarl': 'dt', 'T': 'Air_temp','Po': 'Atmos_press',
'P': 'Atmos_press_MeanSea', 'U': 'Humidity', 'Pa': 'PressureTendency', 'Ff': 'MeanWindSpeed',
'VV': 'Visibility','Td':'DewPoint', 'RRR': 'Rainfall'}, inplace=True)
#delete some columns
paarl_weat.drop(['DD', 'ff10', 'ff3', 'N', 'WW','W1', 'W2', 'Tn', 'Tx', 'Cl', 'Nh', 'H', 'Cm', 'Ch',
'tR', 'E', 'Tg', 'E_' ,'sss'], axis=1, inplace=True)
paarl_weat.head(3)
paarl_weat.info()
paarl_weat.isnull().sum(axis = 0)
#check some values
paarl_weat.Rainfall.unique()
#set as datetime index
paarl_weat = paarl_weat.set_index(pd.DatetimeIndex(paarl_weat['dt']))
#resample to every hour
paarl_weat_h = paarl_weat.resample('H', on='dt').mean()
# linear interpolation in both directions and fill only one consecutive NaN
paarl_weat_inter = paarl_weat_h[columns].interpolate(limit_direction = 'both', method='linear', limit = 1)
#reset index
paarl_weat_inter.reset_index(inplace=True)
# add a column to identify the weather station and create a join field
paarl_weat_inter['Weather_Stn'] = 'Paarl'
#have a look
paarl_weat_inter.head(3)
#append
weather = weather.append(paarl_weat_inter)
#check some values
print(weather.Weather_Stn.unique())
print('')
print(weather.shape)
#weather.tail(2)
###Output
['Cape_Town_International' 'Molteno' 'Ysterplaat' 'Paarl']
(70072, 11)
###Markdown
And the last one: the Strand weather station.
###Code
#read the strand weather station data
stra_weat = pd.read_csv(path/'data/weather/strand_weather.csv', sep = ';', skiprows=6, usecols=range(29),
parse_dates = ['Local time in Strand'])
#rename some columns
stra_weat.rename(columns={'Local time in Strand': 'dt', 'T': 'Air_temp','Po': 'Atmos_press',
'P': 'Atmos_press_MeanSea', 'U': 'Humidity', 'Pa': 'PressureTendency', 'Ff': 'MeanWindSpeed',
'VV': 'Visibility','Td':'DewPoint', 'RRR': 'Rainfall'}, inplace=True)
#delete some columns
stra_weat.drop(['DD', 'ff10', 'ff3', 'N', 'WW','W1', 'W2', 'Tn', 'Tx', 'Cl', 'Nh', 'H', 'Cm', 'Ch',
'tR', 'E', 'Tg', 'E_' ,'sss',], axis=1, inplace=True)
stra_weat.head(3)
stra_weat.info()
stra_weat.isnull().sum(axis = 0)
#check some values
stra_weat.Rainfall.unique()
#set as datetime index
stra_weat = stra_weat.set_index(pd.DatetimeIndex(stra_weat['dt']))
#resample to every hour
stra_weat_h = stra_weat.resample('H', on='dt').mean()
# linear interpolation in both directions and fill only one consecutive NaN
stra_weat_inter = stra_weat_h[columns].interpolate(limit_direction = 'both', method='linear', limit = 1)
#reset index
stra_weat_inter.reset_index(inplace=True)
# add a column to identify the weather station and create a join field
stra_weat_inter['Weather_Stn'] = 'Strand'
#have a look
stra_weat_inter.head(3)
#append
weather = weather.append(stra_weat_inter)
print(weather.Weather_Stn.unique())
print('')
print(weather.shape)
#weather.tail(2)
weather.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 87590 entries, 0 to 17517
Data columns (total 11 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 dt 87590 non-null datetime64[ns]
1 Air_temp 81632 non-null float64
2 Atmos_press 81704 non-null float64
3 Atmos_press_MeanSea 33398 non-null float64
4 PressureTendency 78218 non-null float64
5 Humidity 81629 non-null float64
6 MeanWindSpeed 81560 non-null float64
7 Visibility 21751 non-null float64
8 DewPoint 81635 non-null float64
9 Rainfall 8322 non-null float64
10 Weather_Stn 87590 non-null object
dtypes: datetime64[ns](1), float64(9), object(1)
memory usage: 8.0+ MB
###Markdown
Then we update the ```train``` and ```test``` sets with the new columns.
###Code
#load the train and from the previous notebook
train = pd.read_csv(path/'data/train_basic.csv', parse_dates = ['datetime'])
test = pd.read_csv(path/'data/test_basic.csv', parse_dates = ['datetime'])
print(train.shape)
print('')
print(test.shape)
#merge the [STATION NA] from the roads_voronoi
train = pd.merge(train, road_voronoi[['segment_id', 'STATION_NA']], on='segment_id', how='left')
test = pd.merge(test, road_voronoi[['segment_id', 'STATION_NA']], on='segment_id', how='left')
#check some values
print(train.STATION_NA.unique())
print('')
print(test.STATION_NA.unique())
#train.head(3)
train.head(3)
train.tail(3)
###Output
_____no_output_____
###Markdown
Now we add the weather data, merging on both the timestamp and the assigned weather station
###Code
# update train
cols = ['dt', 'Air_temp', 'Atmos_press', 'Atmos_press_MeanSea', 'Humidity', 'MeanWindSpeed',
'Visibility', 'DewPoint', 'Rainfall', 'Weather_Stn']
# we merge on two columns: time and weather station
train = pd.merge(train, weather[cols], left_on=['datetime', 'STATION_NA'],
right_on=['dt', 'Weather_Stn'], how='left')
test = pd.merge(test, weather[cols], left_on=['datetime', 'STATION_NA'],
right_on=['dt', 'Weather_Stn'], how='left')
train.tail(3)
print(train.shape)
print('')
print(test.shape)
train.info()
#delete
train.drop(['dt', 'Weather_Stn'], axis=1, inplace=True)
test.drop(['dt', 'Weather_Stn'], axis=1, inplace=True)
###Output
_____no_output_____
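An optional sanity check (a sketch) of how many rows actually picked up weather values from the two-key merge:
# share of rows with a matching (hourly timestamp, station) weather record
print(train['Air_temp'].notna().mean())
print(test['Air_temp'].notna().mean())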
###Markdown
Save it to add car-count and travel time data later.
###Code
#save it
train.to_csv(path/'data/train_with_weather.csv', index = False)
test.to_csv(path/'data/test_with_weather.csv', index = False)
#save the weather as well
weather.to_csv(path/'data/weather/weather_all.csv', index = False)
#clean up
stra_weat_inter, paarl_weat_inter, cpt_air_inter, yster_weat_inter, mol_weat_inter = 0, 0, 0, 0, 0
stra_weat_h, paarl_weat_h, cpt_air_h, yster_weat_h, mol_weat_h = 0, 0, 0, 0, 0
stra_weat, paarl_weat, cpt_air, yster_weat, mol_weat = 0, 0, 0, 0, 0
inter_columns, weather, stations, cpt_stations = 0, 0, 0, 0
###Output
_____no_output_____ |
.ipynb_checkpoints/02 - Emotion Recognition-checkpoint.ipynb | ###Markdown
Based on: https://github.com/Amol2709/EMOTION-RECOGITION-USING-KERAS/tree/master/emotion_recognition and https://medium.com/@ee18m003/emotion-recognition-using-keras-ad7881e2c3c6
###Code
from keras.preprocessing.image import img_to_array
from keras.models import load_model
import numpy as np
import argparse
import imutils
import cv2
# load the face detector cascade, emotion detection CNN, then define
# the list of emotion labels
detector = cv2.CascadeClassifier('models/haarcascade_frontalface_default.xml')
model = load_model('models/epoch_60.hdf5')
EMOTIONS = ["angry", "scared", "happy", "sad", "surprised","neutral"]
###Output
WARNING:tensorflow:From /home/benitez/.local/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:517: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.
WARNING:tensorflow:From /home/benitez/.local/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:4185: The name tf.truncated_normal is deprecated. Please use tf.random.truncated_normal instead.
WARNING:tensorflow:From /home/benitez/.local/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:245: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.
WARNING:tensorflow:From /home/benitez/.local/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:174: The name tf.get_default_session is deprecated. Please use tf.compat.v1.get_default_session instead.
WARNING:tensorflow:From /home/benitez/.local/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:181: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.
WARNING:tensorflow:From /home/benitez/.local/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:1834: The name tf.nn.fused_batch_norm is deprecated. Please use tf.compat.v1.nn.fused_batch_norm instead.
WARNING:tensorflow:From /home/benitez/.local/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:3976: The name tf.nn.max_pool is deprecated. Please use tf.nn.max_pool2d instead.
WARNING:tensorflow:From /home/benitez/.local/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:3445: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.
Instructions for updating:
Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.
WARNING:tensorflow:From /home/benitez/.local/lib/python3.6/site-packages/keras/optimizers.py:790: The name tf.train.Optimizer is deprecated. Please use tf.compat.v1.train.Optimizer instead.
WARNING:tensorflow:From /home/benitez/anaconda3/envs/devCPU/lib/python3.6/site-packages/tensorflow/python/ops/math_grad.py:1250: add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
###Markdown
Image analysis
###Code
import matplotlib.pyplot as plt
frame= cv2.imread('data_samples/sample_img.jpg')
# resize the frame and convert it to grayscale
frame = imutils.resize(frame, width=300)
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
# initialize the canvas for the visualization, then clone
# the frame so we can draw on it
canvas = np.zeros((220, 300, 3), dtype="uint8")
frameClone = frame.copy()
# detect faces in the input frame, then clone the frame so that
# we can draw on it
rects = detector.detectMultiScale(gray, scaleFactor=1.1,minNeighbors=5, minSize=(30, 30),flags=cv2.CASCADE_SCALE_IMAGE)
# ensure at least one face was found before continuing
for i in range(0,len(rects)):
# determine the largest face area
#rect = sorted(rects, reverse=True,key=lambda x: (x[2] - x[0]) * (x[3] - x[1]))[0]
(fX, fY, fW, fH) = rects[i]
# extract the face ROI from the image, then pre-process
# it for the network
roi = gray[fY:fY + fH, fX:fX + fW]
roi = cv2.resize(roi, (48, 48))
roi = roi.astype("float") / 255.0
roi = img_to_array(roi)
roi = np.expand_dims(roi, axis=0)
# make a prediction on the ROI, then lookup the class# label
preds = model.predict(roi)[0]
label = EMOTIONS[preds.argmax()]
# loop over the labels + probabilities and draw them
for (i, (emotion, prob)) in enumerate(zip(EMOTIONS, preds)):
# construct the label text
text = "{}: {:.2f}%".format(emotion, prob * 100)
# draw the label + probability bar on the canvas
w = int(prob * 300)
cv2.rectangle(canvas, (5, (i * 35) + 5),(w, (i * 35) + 35), (40, 50, 155), -1)
cv2.putText(canvas, text, (10, (i * 35) + 23),cv2.FONT_HERSHEY_SIMPLEX, 0.45,(55, 25, 5), 2)
cv2.putText(frameClone, label, (fX, fY - 10),cv2.FONT_HERSHEY_SIMPLEX, 0.45, (40, 50, 155), 2)
cv2.rectangle(frameClone, (fX, fY), (fX + fW, fY + fH),(140, 50, 155), 2)
# show our classifications + probabilities
cv2.imshow('image', frameClone)
cv2.imshow('emotions', canvas)
# cleanup the camera and close any open windows
cv2.waitKey(0) # PRESS ANY KEY TO EXIT
cv2.destroyAllWindows()
###Output
_____no_output_____
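The face-ROI preprocessing is repeated verbatim in the video loop below; a small helper (a sketch, assuming the same 48x48 grayscale input the model above expects) would keep the two paths consistent:
def preprocess_roi(gray, box, size=(48, 48)):
    """Crop one detected face from a grayscale frame and shape it for the emotion CNN."""
    (fX, fY, fW, fH) = box
    roi = gray[fY:fY + fH, fX:fX + fW]
    roi = cv2.resize(roi, size)
    roi = roi.astype("float") / 255.0
    roi = img_to_array(roi)
    return np.expand_dims(roi, axis=0)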
###Markdown
Video analysis
###Code
camera = cv2.VideoCapture('data_samples/sample_video.mp4')
#writer = cv2.VideoWriter("output.avi", cv2.VideoWriter_fourcc(*"MJPG"), 30,(640,480))
while True:
(grabbed, frame) = camera.read()
if not grabbed: break # end of video
# resize the frame and convert it to grayscale
frame = imutils.resize(frame, width=300)
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
# canvas to draw on it
canvas = np.zeros((220, 300, 3), dtype="uint8")
frameClone = frame.copy()
rects = detector.detectMultiScale(gray, scaleFactor=1.1,minNeighbors=5, minSize=(30, 30),flags=cv2.CASCADE_SCALE_IMAGE)
# ensure at least one face was found before continuing
if len(rects) > 0:
# determine the largest face area
rect = sorted(rects, reverse=True,key=lambda x: (x[2] - x[0]) * (x[3] - x[1]))[0]
(fX, fY, fW, fH) = rect
# extract the face ROI from the image, then pre-process
# it for the network
roi = gray[fY:fY + fH, fX:fX + fW]
roi = cv2.resize(roi, (48, 48))
roi = roi.astype("float") / 255.0
roi = img_to_array(roi)
roi = np.expand_dims(roi, axis=0)
# make a prediction on the ROI, then lookup the class# label
preds = model.predict(roi)[0]
label = EMOTIONS[preds.argmax()]
# loop over the labels + probabilities and draw them
for (i, (emotion, prob)) in enumerate(zip(EMOTIONS, preds)):
# construct the label text
text = "{}: {:.2f}%".format(emotion, prob * 100)
# draw the label + probability bar on the canvas
w = int(prob * 300)
cv2.rectangle(canvas, (5, (i * 35) + 5),(w, (i * 35) + 35), (0, 0, 255), -1)
cv2.putText(canvas, text, (10, (i * 35) + 23),cv2.FONT_HERSHEY_SIMPLEX, 0.45,(255, 255, 255), 2)
# draw the label on the frame
cv2.putText(frameClone, label, (fX, fY - 10),cv2.FONT_HERSHEY_SIMPLEX, 0.45, (0, 0, 255), 2)
cv2.rectangle(frameClone, (fX, fY), (fX + fW, fY + fH),(0, 0, 255), 2)
# show our classifications + probabilities
cv2.imshow("Face", frameClone)
#cv2.imshow("Probabilities", canvas)
#out.write(frameClone)
# if the ’q’ key is pressed, stop the loop
if cv2.waitKey(1) & 0xFF == ord("q"):
break
# cleanup the camera and close any open windows
camera.release()
#out.release()
cv2.destroyAllWindows()
###Output
_____no_output_____ |
notebooks/lyrizz/download_lyrics.ipynb | ###Markdown
Download lyrics from GENIUS. From the list of songs (represented by artist/title) in df_tracks.csv, this notebook searches for the lyrics on Genius and downloads them.
###Code
import requests
import pandas as pd
import numpy as np
import unidecode
import urllib.parse
from bs4 import BeautifulSoup
import os
import re
import os.path
from requests.utils import requote_uri
import pickle
import pandas as pd
df_tracks = pd.read_csv('lyrizz/csv/df_tracks.csv', sep=';')
# GENIUS API
TOKEN_GENIUS = 'YOUR***GENIUS***TOKEN'
HEADERS = {'Authorization': f'Bearer {TOKEN_GENIUS}'}
###Output
_____no_output_____
###Markdown
Function definitions
###Code
def filter_title(name):
# Try de remove "- Remastered ..."
name = name.split(' - ')[0]
# Try de remove " (Remastered ...)"
name = name.split('(')[0]
# Remove space at begin/end
name = name.strip()
return name
def filter_artist(name):
# Try de remove others artists
name = name.split(',')[0]
# Try de remove " (Feat ...)"
name = name.split('(')[0]
# Remove space at begin/end
name = name.strip()
return name
def search_song(artist, title):
"""
Search on Genius from artist and title
"""
url = requote_uri(f"https://api.genius.com/search?q={artist} - {title}")
r = requests.get(url, headers=HEADERS)
hits = r.json()['response']['hits']
# No response in search
if len(hits) == 0:
return None,None,None,None
search = hits[0]['result']
img = search['header_image_url']
url2 = search['url']
id_song = search['api_path'].split('/')[-1]
if 'media' in search:
spotify_url = [e['url'] for e in search['media'] if e['provider']=='spotify']
if len(spotify_url)==1:
spotify_url = spotify_url[0]
else:
spotify_url = None
else:
spotify_url = None
url3 = requote_uri(f"https://api.genius.com/songs/{id_song}")
r3 = requests.get(url3, headers=HEADERS)
search3 = r3.json()['response']
apple_id = search3['song']['apple_music_id']
return url2, img, spotify_url, apple_id
def process_text(s):
s = s.replace('genius', '')
s = s.replace('lyrics', '')
s = unidecode.unidecode(s.lower())
s = re.sub('[\W_]', '', s)
return s
def get_raw_lyrics(url, artist, title):
"""
From Genius lyric page url, get lyrics and check (True if lyrics seem to be correct)
"""
page = requests.get(url)
html = BeautifulSoup(page.text, "html.parser")
for br in html.find_all("br"):
br.replace_with("\n")
div = html.find("div", id="lyrics-root")
if div == None:
div = html.find("div", class_="lyrics")
if div == None:
div = html.find("div", class_="Lyrics__Container-sc-1ynbvzw-2 jgQsqn")
if div == None:
return None, None
text = div.get_text()
parts = text.split("\n\n")#.find_all("span")
lyrics = [p.split("\n") for p in parts]
lyrics[-1][-1] = re.sub(r'\d*EmbedShare URLCopyEmbedCopy','', lyrics[-1][-1])
### Check
infos = html.find("title").get_text().lower().replace(u'\xa0', u' ')
check=False
if process_text(artist) in process_text(infos) and process_text(title) in process_text(infos):
check=True
return lyrics, check
def write_txt_file(lyrics, track_id):
s=""
for parts in lyrics:
for p in parts:
s+=p+"\n"
s+="\n"
with open(f'lyrizz/txt/{track_id}.txt', 'w') as f:
f.write(s)
def is_available(lyrics, check, spotify_url, apple_id):
res = True
if lyrics == None:
res = False
if not check:
res = False
return res
def save_image(img_url, track_id):
img_data = requests.get(img_url).content
file_name = img_url.split('/')[-1]
if '.' not in file_name:
ext='jpg'
else:
ext = file_name.split('.')[-1]
with open(f'lyrizz/images/{track_id}.{ext}', 'wb') as handler:
handler.write(img_data)
###Output
_____no_output_____
###Markdown
Process: clean the artist and title; search for the song on the Genius API; if the song exists and its lyrics are available on Genius, download the lyrics and the cover image.
###Code
LIST_BUG=[]
for i in range(len(df_tracks)):
track = df_tracks.iloc[i]
track_id = track['track_id']
artist, title = track['artists'], track['name']
artist = filter_artist(artist)
title = filter_title(title)
if os.path.isfile(f'lyrizz/txt/{track_id}.txt'):
print('[ALREADY]', artist, title)
elif track_id in LIST_BUG:
print('[BUG]', artist, title)
pass
else:
# print(artist, title)
url, img, spotify_url, apple_id = search_song(artist, title)
if url == None:
available = False
else:
lyrics, check = get_raw_lyrics(url, artist, title)
available = is_available(lyrics, check, spotify_url, apple_id)
if available:
print(artist, title, track_id)
save_image(img, track_id)
write_txt_file(lyrics, track_id)
else:
LIST_BUG.append(track_id)
print('[BUG]', artist, title, url)
###Output
_____no_output_____ |
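pickle is imported above but never used; if the intent was to persist the failed-track list between runs, a possible sketch (the file path is an assumption):
# save the ids that failed so a later run can skip them
with open('lyrizz/list_bug.pkl', 'wb') as f:
    pickle.dump(LIST_BUG, f)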
PostgreSQL_AWScloud_metabase_dashboard/db_final.ipynb | ###Markdown
with selection as (select * from customers as c inner join orders as o on o."customerID" = c."customerID" inner join order_details as od on od."orderID" = o."orderID" inner join products as p on p."productID" = od."productID") select c."customerID", c."companyName", c.country from selection where p."productID" in (select distinct p."productID" from selection where c.country = 'Brazil') and c.country != 'Brazil';
###Code
# 13. Display the names of customers who ordered the same set of products as customers from Brazil
engine.execute("""CREATE TABLE q_13 AS (WITH selection AS (select c."customerID", c."companyName", c.country, p."productID" from customers as c inner join orders as o on o."customerID" = c."customerID" inner join order_details as od on od."orderID"=o."orderID" inner join products as p on p."productID"=od."productID") select * from selection where "productID" in (select distinct "productID" from selection where country = 'Brazil' ) and country != 'Brazil' );""")
df_13 = pd.read_sql('q_13', engine, index_col='customerID')
df_13.head()
# Metabase dashboard link:
# http://ec2-18-196-157-106.eu-central-1.compute.amazonaws.com/public/dashboard/9174d18a-8ffa-4a0f-9dd3-1bc16b396ec0
###Output
_____no_output_____ |
20190325_Ulfs_Toelich_KAUST_python.ipynb | ###Markdown
Import Modules and read in data
###Code
import pandas as pd
import datetime
import matplotlib.pyplot as plt
df = pd.read_csv('https://dataverse.harvard.edu/api/access/datafile/3005330')
df.head()
###Output
_____no_output_____
###Markdown
Add age column
###Code
df['age'] = datetime.datetime.now().year - df['year_born'].astype(int)
###Output
_____no_output_____
###Markdown
Filter outliers (person younger than 0 or older than 100)
###Code
df = df[(df['age'] >= 0) & (df['age'] <= 100)]
###Output
_____no_output_____
###Markdown
Select columns of interest
###Code
df_subset = df[["Sex","age"]]
df_subset.head()
###Output
_____no_output_____
###Markdown
Compute statistics
###Code
mn = df_subset.groupby('Sex')['age'].mean()
sd = df_subset.groupby('Sex')['age'].std()
sem = df_subset.groupby('Sex')['age'].sem()
Stats = pd.concat([mn, sd,sem], axis=1)
Stats.columns = ['MEAN','SD','SEM']
Stats
###Output
_____no_output_____
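The same summary can be produced in a single call (a sketch; the columns come out named 'mean', 'std' and 'sem'):
Stats_alt = df_subset.groupby('Sex')['age'].agg(['mean', 'std', 'sem'])
Stats_alt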
###Markdown
Two df for women and men
###Code
##DF for women and men
Male = df_subset[df_subset['Sex']=='Male']
Female = df_subset[df_subset['Sex']=='Female']
###Output
_____no_output_____
###Markdown
Boxplot: Displayed are median, 1st and 3rd quartile, range and outliers.
###Code
plt.boxplot([Male['age'] , Female['age']],0,'g.')
plt.xlabel('Gender')
plt.ylabel('Mean Age')
plt.xticks([1, 2], ['Male', 'Female'])
plt.show()
###Output
_____no_output_____ |
examples/1_iris/notebooks/predict.ipynb | ###Markdown
Framework imports
###Code
from noronha.tools.serving import OnlinePredict
from noronha.tools.shortcuts import model_path
###Output
_____no_output_____
###Markdown
Application imports
###Code
import json
import numpy as np
import joblib
###Output
_____no_output_____
###Markdown
Loading the model
###Code
clf_path = model_path('clf.pkl')
clf = joblib.load(clf_path)
###Output
_____no_output_____
###Markdown
Defining the prediction function
###Code
def predict(x):
features = json.loads(x)
features = np.array(features).reshape(1, -1)
return clf.predict(features)[0]
###Output
_____no_output_____
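A quick local check of the prediction function before handing it to the service (a sketch; the four feature values are arbitrary iris-like numbers):
sample = json.dumps([5.1, 3.5, 1.4, 0.2])  # sepal/petal measurements as a JSON list
print(predict(sample))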
###Markdown
Creating the prediction service
###Code
OnlinePredict(predict_func=predict)()
###Output
_____no_output_____
###Markdown
Framework imports
###Code
from noronha.tools.serving import OnlinePredict, LazyModelServer
from noronha.tools.shortcuts import model_path
###Output
_____no_output_____
###Markdown
Application imports
###Code
import json
import numpy as np
import joblib
import pickle
###Output
_____no_output_____
###Markdown
Loading the model
###Code
#clf_path = model_path('clf.pkl')
#clf = joblib.load(clf_path)
###Output
_____no_output_____
###Markdown
Defining the prediction function
###Code
def predict(x):
features = json.loads(x)
features = np.array(features).reshape(1, -1)
return clf.predict(features)[0]
###Output
_____no_output_____
###Markdown
Creating the prediction service
###Code
def load(path, meta):
return joblib.load(path + '/clf.pkl')
def pred(x, clf, meta):
data = json.loads(x)
return clf.predict(np.array(data).reshape(1,-1))[0]
server = LazyModelServer(
predict_func=pred,
load_model_func=load,
model_name='iris-clf3',
#server_type='gunicorn',
#webapp='fastapi',
server_conf={'timeout':300, 'loglevel':'debug'}#, 'worker_class': 'uvicorn.workers.UvicornWorker'}
#server_conf={'timeout':300, 'threads':12, 'worker_class': 'uvicorn.workers.UvicornWorker'}
)
server()
OnlinePredict(
predict_func=predict,
# webapp='fastapi',
# server_type = 'gunicorn',
# server_conf = {'worker_class': 'uvicorn.workers.UvicornWorker'}
)()
###Output
_____no_output_____ |
numpyro/_downloads/6a91c95220c1db02b557a1eccd2b2942/neutra.ipynb | ###Markdown
Neural Transport: This example illustrates how to use a trained AutoBNAFNormal autoguide to transform a posterior into a Gaussian-like one. The transform is then used to get a better mixing rate for the NUTS sampler.**References:** 1. Hoffman, M. et al. (2019), "NeuTra-lizing Bad Geometry in Hamiltonian Monte Carlo Using Neural Transport", (https://arxiv.org/abs/1903.03704)
###Code
import argparse
import os
from matplotlib.gridspec import GridSpec
import matplotlib.pyplot as plt
import seaborn as sns
from jax import lax, random
import jax.numpy as jnp
from jax.scipy.special import logsumexp
import numpyro
from numpyro import optim
from numpyro.diagnostics import print_summary
import numpyro.distributions as dist
from numpyro.distributions import constraints
from numpyro.infer import MCMC, NUTS, SVI, Trace_ELBO
from numpyro.infer.autoguide import AutoBNAFNormal
from numpyro.infer.reparam import NeuTraReparam
class DualMoonDistribution(dist.Distribution):
support = constraints.real_vector
def __init__(self):
super(DualMoonDistribution, self).__init__(event_shape=(2,))
def sample(self, key, sample_shape=()):
# it is enough to return an arbitrary sample with correct shape
return jnp.zeros(sample_shape + self.event_shape)
def log_prob(self, x):
term1 = 0.5 * ((jnp.linalg.norm(x, axis=-1) - 2) / 0.4) ** 2
term2 = -0.5 * ((x[..., :1] + jnp.array([-2., 2.])) / 0.6) ** 2
pe = term1 - logsumexp(term2, axis=-1)
return -pe
def dual_moon_model():
numpyro.sample('x', DualMoonDistribution())
def main(args):
print("Start vanilla HMC...")
nuts_kernel = NUTS(dual_moon_model)
mcmc = MCMC(nuts_kernel, args.num_warmup, args.num_samples, num_chains=args.num_chains,
progress_bar=False if "NUMPYRO_SPHINXBUILD" in os.environ else True)
mcmc.run(random.PRNGKey(0))
mcmc.print_summary()
vanilla_samples = mcmc.get_samples()['x'].copy()
guide = AutoBNAFNormal(dual_moon_model, hidden_factors=[args.hidden_factor, args.hidden_factor])
svi = SVI(dual_moon_model, guide, optim.Adam(0.003), Trace_ELBO())
svi_state = svi.init(random.PRNGKey(1))
print("Start training guide...")
last_state, losses = lax.scan(lambda state, i: svi.update(state), svi_state, jnp.zeros(args.num_iters))
params = svi.get_params(last_state)
print("Finish training guide. Extract samples...")
guide_samples = guide.sample_posterior(random.PRNGKey(2), params,
sample_shape=(args.num_samples,))['x'].copy()
print("\nStart NeuTra HMC...")
neutra = NeuTraReparam(guide, params)
neutra_model = neutra.reparam(dual_moon_model)
nuts_kernel = NUTS(neutra_model)
mcmc = MCMC(nuts_kernel, args.num_warmup, args.num_samples, num_chains=args.num_chains,
progress_bar=False if "NUMPYRO_SPHINXBUILD" in os.environ else True)
mcmc.run(random.PRNGKey(3))
mcmc.print_summary()
zs = mcmc.get_samples(group_by_chain=True)["auto_shared_latent"]
print("Transform samples into unwarped space...")
samples = neutra.transform_sample(zs)
print_summary(samples)
zs = zs.reshape(-1, 2)
samples = samples['x'].reshape(-1, 2).copy()
# make plots
# guide samples (for plotting)
guide_base_samples = dist.Normal(jnp.zeros(2), 1.).sample(random.PRNGKey(4), (1000,))
guide_trans_samples = neutra.transform_sample(guide_base_samples)['x']
x1 = jnp.linspace(-3, 3, 100)
x2 = jnp.linspace(-3, 3, 100)
X1, X2 = jnp.meshgrid(x1, x2)
P = jnp.exp(DualMoonDistribution().log_prob(jnp.stack([X1, X2], axis=-1)))
fig = plt.figure(figsize=(12, 8), constrained_layout=True)
gs = GridSpec(2, 3, figure=fig)
ax1 = fig.add_subplot(gs[0, 0])
ax2 = fig.add_subplot(gs[1, 0])
ax3 = fig.add_subplot(gs[0, 1])
ax4 = fig.add_subplot(gs[1, 1])
ax5 = fig.add_subplot(gs[0, 2])
ax6 = fig.add_subplot(gs[1, 2])
ax1.plot(losses[1000:])
ax1.set_title('Autoguide training loss\n(after 1000 steps)')
ax2.contourf(X1, X2, P, cmap='OrRd')
sns.kdeplot(guide_samples[:, 0], guide_samples[:, 1], n_levels=30, ax=ax2)
ax2.set(xlim=[-3, 3], ylim=[-3, 3],
xlabel='x0', ylabel='x1', title='Posterior using\nAutoBNAFNormal guide')
sns.scatterplot(guide_base_samples[:, 0], guide_base_samples[:, 1], ax=ax3,
hue=guide_trans_samples[:, 0] < 0.)
ax3.set(xlim=[-3, 3], ylim=[-3, 3],
xlabel='x0', ylabel='x1', title='AutoBNAFNormal base samples\n(True=left moon; False=right moon)')
ax4.contourf(X1, X2, P, cmap='OrRd')
sns.kdeplot(vanilla_samples[:, 0], vanilla_samples[:, 1], n_levels=30, ax=ax4)
ax4.plot(vanilla_samples[-50:, 0], vanilla_samples[-50:, 1], 'bo-', alpha=0.5)
ax4.set(xlim=[-3, 3], ylim=[-3, 3],
xlabel='x0', ylabel='x1', title='Posterior using\nvanilla HMC sampler')
sns.scatterplot(zs[:, 0], zs[:, 1], ax=ax5, hue=samples[:, 0] < 0.,
s=30, alpha=0.5, edgecolor="none")
ax5.set(xlim=[-5, 5], ylim=[-5, 5],
xlabel='x0', ylabel='x1', title='Samples from the\nwarped posterior - p(z)')
ax6.contourf(X1, X2, P, cmap='OrRd')
sns.kdeplot(samples[:, 0], samples[:, 1], n_levels=30, ax=ax6)
ax6.plot(samples[-50:, 0], samples[-50:, 1], 'bo-', alpha=0.2)
ax6.set(xlim=[-3, 3], ylim=[-3, 3],
xlabel='x0', ylabel='x1', title='Posterior using\nNeuTra HMC sampler')
plt.savefig("neutra.pdf")
if __name__ == "__main__":
assert numpyro.__version__.startswith('0.4.0')
parser = argparse.ArgumentParser(description="NeuTra HMC")
parser.add_argument('-n', '--num-samples', nargs='?', default=4000, type=int)
parser.add_argument('--num-warmup', nargs='?', default=1000, type=int)
parser.add_argument("--num-chains", nargs='?', default=1, type=int)
parser.add_argument('--hidden-factor', nargs='?', default=8, type=int)
parser.add_argument('--num-iters', nargs='?', default=10000, type=int)
parser.add_argument('--device', default='cpu', type=str, help='use "cpu" or "gpu".')
args = parser.parse_args()
numpyro.set_platform(args.device)
numpyro.set_host_device_count(args.num_chains)
main(args)
###Output
_____no_output_____
###Markdown
Neural Transport: This example illustrates how to use a trained AutoBNAFNormal autoguide to transform a posterior into a Gaussian-like one. The transform is then used to get a better mixing rate for the NUTS sampler.**References:** 1. Hoffman, M. et al. (2019), "NeuTra-lizing Bad Geometry in Hamiltonian Monte Carlo Using Neural Transport", (https://arxiv.org/abs/1903.03704)
###Code
import argparse
from functools import partial
import os
from matplotlib.gridspec import GridSpec
import matplotlib.pyplot as plt
import seaborn as sns
from jax import lax, random, vmap
import jax.numpy as np
from jax.tree_util import tree_map
import numpyro
from numpyro import optim
from numpyro.contrib.autoguide import AutoContinuousELBO, AutoBNAFNormal
from numpyro.diagnostics import print_summary
import numpyro.distributions as dist
from numpyro.distributions import constraints
from numpyro.infer import MCMC, NUTS, SVI
from numpyro.infer.util import initialize_model, transformed_potential_energy
# XXX: upstream logsumexp throws NaN under fast-math mode + MCMC's progress_bar=True
def logsumexp(x, axis=0):
return np.log(np.sum(np.exp(x), axis=axis))
class DualMoonDistribution(dist.Distribution):
support = constraints.real_vector
def __init__(self):
super(DualMoonDistribution, self).__init__(event_shape=(2,))
def sample(self, key, sample_shape=()):
# it is enough to return an arbitrary sample with correct shape
return np.zeros(sample_shape + self.event_shape)
def log_prob(self, x):
term1 = 0.5 * ((np.linalg.norm(x, axis=-1) - 2) / 0.4) ** 2
term2 = -0.5 * ((x[..., :1] + np.array([-2., 2.])) / 0.6) ** 2
pe = term1 - logsumexp(term2, axis=-1)
return -pe
def dual_moon_model():
numpyro.sample('x', DualMoonDistribution())
def main(args):
print("Start vanilla HMC...")
nuts_kernel = NUTS(dual_moon_model)
mcmc = MCMC(nuts_kernel, args.num_warmup, args.num_samples,
progress_bar=False if "NUMPYRO_SPHINXBUILD" in os.environ else True)
mcmc.run(random.PRNGKey(0))
mcmc.print_summary()
vanilla_samples = mcmc.get_samples()['x'].copy()
guide = AutoBNAFNormal(dual_moon_model, hidden_factors=[args.hidden_factor, args.hidden_factor])
svi = SVI(dual_moon_model, guide, optim.Adam(0.003), AutoContinuousELBO())
svi_state = svi.init(random.PRNGKey(1))
print("Start training guide...")
last_state, losses = lax.scan(lambda state, i: svi.update(state), svi_state, np.zeros(args.num_iters))
params = svi.get_params(last_state)
print("Finish training guide. Extract samples...")
guide_samples = guide.sample_posterior(random.PRNGKey(0), params,
sample_shape=(args.num_samples,))['x'].copy()
transform = guide.get_transform(params)
_, potential_fn, constrain_fn = initialize_model(random.PRNGKey(2), dual_moon_model)
transformed_potential_fn = partial(transformed_potential_energy, potential_fn, transform)
transformed_constrain_fn = lambda x: constrain_fn(transform(x)) # noqa: E731
print("\nStart NeuTra HMC...")
nuts_kernel = NUTS(potential_fn=transformed_potential_fn)
mcmc = MCMC(nuts_kernel, args.num_warmup, args.num_samples,
progress_bar=False if "NUMPYRO_SPHINXBUILD" in os.environ else True)
init_params = np.zeros(guide.latent_size)
mcmc.run(random.PRNGKey(3), init_params=init_params)
mcmc.print_summary()
zs = mcmc.get_samples()
print("Transform samples into unwarped space...")
samples = vmap(transformed_constrain_fn)(zs)
print_summary(tree_map(lambda x: x[None, ...], samples))
samples = samples['x'].copy()
# make plots
# guide samples (for plotting)
guide_base_samples = dist.Normal(np.zeros(2), 1.).sample(random.PRNGKey(4), (1000,))
guide_trans_samples = vmap(transformed_constrain_fn)(guide_base_samples)['x']
x1 = np.linspace(-3, 3, 100)
x2 = np.linspace(-3, 3, 100)
X1, X2 = np.meshgrid(x1, x2)
P = np.exp(DualMoonDistribution().log_prob(np.stack([X1, X2], axis=-1)))
fig = plt.figure(figsize=(12, 8), constrained_layout=True)
gs = GridSpec(2, 3, figure=fig)
ax1 = fig.add_subplot(gs[0, 0])
ax2 = fig.add_subplot(gs[1, 0])
ax3 = fig.add_subplot(gs[0, 1])
ax4 = fig.add_subplot(gs[1, 1])
ax5 = fig.add_subplot(gs[0, 2])
ax6 = fig.add_subplot(gs[1, 2])
ax1.plot(losses[1000:])
ax1.set_title('Autoguide training loss\n(after 1000 steps)')
ax2.contourf(X1, X2, P, cmap='OrRd')
sns.kdeplot(guide_samples[:, 0], guide_samples[:, 1], n_levels=30, ax=ax2)
ax2.set(xlim=[-3, 3], ylim=[-3, 3],
xlabel='x0', ylabel='x1', title='Posterior using\nAutoBNAFNormal guide')
sns.scatterplot(guide_base_samples[:, 0], guide_base_samples[:, 1], ax=ax3,
hue=guide_trans_samples[:, 0] < 0.)
ax3.set(xlim=[-3, 3], ylim=[-3, 3],
xlabel='x0', ylabel='x1', title='AutoBNAFNormal base samples\n(True=left moon; False=right moon)')
ax4.contourf(X1, X2, P, cmap='OrRd')
sns.kdeplot(vanilla_samples[:, 0], vanilla_samples[:, 1], n_levels=30, ax=ax4)
ax4.plot(vanilla_samples[-50:, 0], vanilla_samples[-50:, 1], 'bo-', alpha=0.5)
ax4.set(xlim=[-3, 3], ylim=[-3, 3],
xlabel='x0', ylabel='x1', title='Posterior using\nvanilla HMC sampler')
sns.scatterplot(zs[:, 0], zs[:, 1], ax=ax5, hue=samples[:, 0] < 0.,
s=30, alpha=0.5, edgecolor="none")
ax5.set(xlim=[-5, 5], ylim=[-5, 5],
xlabel='x0', ylabel='x1', title='Samples from the\nwarped posterior - p(z)')
ax6.contourf(X1, X2, P, cmap='OrRd')
sns.kdeplot(samples[:, 0], samples[:, 1], n_levels=30, ax=ax6)
ax6.plot(samples[-50:, 0], samples[-50:, 1], 'bo-', alpha=0.2)
ax6.set(xlim=[-3, 3], ylim=[-3, 3],
xlabel='x0', ylabel='x1', title='Posterior using\nNeuTra HMC sampler')
plt.savefig("neutra.pdf")
if __name__ == "__main__":
assert numpyro.__version__.startswith('0.2.4')
parser = argparse.ArgumentParser(description="NeuTra HMC")
parser.add_argument('-n', '--num-samples', nargs='?', default=4000, type=int)
parser.add_argument('--num-warmup', nargs='?', default=1000, type=int)
parser.add_argument('--hidden-factor', nargs='?', default=8, type=int)
parser.add_argument('--num-iters', nargs='?', default=10000, type=int)
parser.add_argument('--device', default='cpu', type=str, help='use "cpu" or "gpu".')
args = parser.parse_args()
numpyro.set_platform(args.device)
main(args)
###Output
_____no_output_____ |
assignments/hw6-trees/CART-GBM-skeleton-code.ipynb | ###Markdown
Load Data
###Code
data_train = np.loadtxt('svm-train.txt')
data_test = np.loadtxt('svm-test.txt')
x_train, y_train = data_train[:, 0: 2], data_train[:, 2].reshape(-1, 1)
x_test, y_test = data_test[:, 0: 2], data_test[:, 2].reshape(-1, 1)
# Change target to 0-1 label
y_train_label = np.array(list(map(lambda x: 1 if x > 0 else 0, y_train))).reshape(-1, 1)
###Output
_____no_output_____
###Markdown
Decision Tree Class
###Code
class Decision_Tree(BaseEstimator):
def __init__(self, split_loss_function, leaf_value_estimator,
depth=0, min_sample=5, max_depth=10):
'''
Initialize the decision tree classifier
:param split_loss_function: method for splitting node
:param leaf_value_estimator: method for estimating leaf value
:param depth: depth indicator, default value is 0, representing root node
:param min_sample: an internal node can be split only if it contains more points than min_sample
:param max_depth: restriction of tree depth.
'''
self.split_loss_function = split_loss_function
self.leaf_value_estimator = leaf_value_estimator
self.depth = depth
self.min_sample = min_sample
self.max_depth = max_depth
def fit(self, X, y=None):
'''
This should fit the tree classifier by setting the values self.is_leaf,
self.split_id (the index of the feature we want ot split on, if we're splitting),
self.split_value (the corresponding value of that feature where the split is),
and self.value, which is the prediction value if the tree is a leaf node. If we are
splitting the node, we should also init self.left and self.right to be Decision_Tree
objects corresponding to the left and right subtrees. These subtrees should be fit on
the data that fall to the left and right,respectively, of self.split_value.
This is a recursive tree building procedure.
:param X: a numpy array of training data, shape = (n, m)
:param y: a numpy array of labels, shape = (n, 1)
:return self
'''
# Your code goes here
return self
def predict_instance(self, instance):
'''
Predict label by decision tree
:param instance: a numpy array with new data, shape (1, m)
:return whatever is returned by leaf_value_estimator for leaf containing instance
'''
if self.is_leaf:
return self.value
if instance[self.split_id] <= self.split_value:
return self.left.predict_instance(instance)
else:
return self.right.predict_instance(instance)
###Output
_____no_output_____
###Markdown
Decision Tree Classifier
###Code
def compute_entropy(label_array):
'''
Calculate the entropy of a given label array
:param label_array: a numpy array of labels shape = (n, 1)
:return entropy: entropy value
'''
# Your code goes here
return entropy
def compute_gini(label_array):
'''
Calculate the gini index of a given label array
:param label_array: a numpy array of labels shape = (n, 1)
:return gini: gini index value
'''
# Your code goes here
return gini
def most_common_label(y):
'''
Find most common label
'''
label_cnt = Counter(y.reshape(len(y)))
label = label_cnt.most_common(1)[0][0]
return label
class Classification_Tree(BaseEstimator, ClassifierMixin):
loss_function_dict = {
'entropy': compute_entropy,
'gini': compute_gini
}
def __init__(self, loss_function='entropy', min_sample=5, max_depth=10):
'''
:param loss_function(str): loss function for splitting internal node
'''
self.tree = Decision_Tree(self.loss_function_dict[loss_function],
most_common_label,
0, min_sample, max_depth)
def fit(self, X, y=None):
self.tree.fit(X,y)
return self
def predict_instance(self, instance):
value = self.tree.predict_instance(instance)
return value
###Output
_____no_output_____
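The impurity bodies are left as an exercise; one possible formulation is sketched below (an illustration, not necessarily the assignment's reference solution):
def compute_entropy_sketch(label_array):
    # entropy = -sum(p * log2(p)) over the class proportions
    _, counts = np.unique(label_array, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def compute_gini_sketch(label_array):
    # gini = 1 - sum(p^2) over the class proportions
    _, counts = np.unique(label_array, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)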
###Markdown
Decision Tree Boundary
###Code
# Training classifiers with different depth
clf1 = Classification_Tree(max_depth=1)
clf1.fit(x_train, y_train_label)
clf2 = Classification_Tree(max_depth=2)
clf2.fit(x_train, y_train_label)
clf3 = Classification_Tree(max_depth=3)
clf3.fit(x_train, y_train_label)
clf4 = Classification_Tree(max_depth=4)
clf4.fit(x_train, y_train_label)
clf5 = Classification_Tree(max_depth=5)
clf5.fit(x_train, y_train_label)
clf6 = Classification_Tree(max_depth=6)
clf6.fit(x_train, y_train_label)
# Plotting decision regions
x_min, x_max = x_train[:, 0].min() - 1, x_train[:, 0].max() + 1
y_min, y_max = x_train[:, 1].min() - 1, x_train[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.1),
np.arange(y_min, y_max, 0.1))
f, axarr = plt.subplots(2, 3, sharex='col', sharey='row', figsize=(10, 8))
for idx, clf, tt in zip(product([0, 1], [0, 1, 2]),
[clf1, clf2, clf3, clf4, clf5, clf6],
['Depth = {}'.format(n) for n in range(1, 7)]):
Z = np.array([clf.predict_instance(x) for x in np.c_[xx.ravel(), yy.ravel()]])
Z = Z.reshape(xx.shape)
axarr[idx[0], idx[1]].contourf(xx, yy, Z, alpha=0.4)
axarr[idx[0], idx[1]].scatter(x_train[:, 0], x_train[:, 1], c=y_train_label, alpha=0.8)
axarr[idx[0], idx[1]].set_title(tt)
plt.show()
###Output
_____no_output_____
###Markdown
Compare decision tree with tree model in sklearn
###Code
clf = DecisionTreeClassifier(criterion='entropy', max_depth=10, min_samples_split=5)
clf.fit(x_train, y_train_label)
export_graphviz(clf, out_file='tree_classifier.dot')
# Visualize decision tree
!dot -Tpng tree_classifier.dot -o tree_classifier.png
Image(filename='tree_classifier.png')
###Output
_____no_output_____
###Markdown
Decision Tree Regressor
###Code
# Regression Tree Specific Code
def mean_absolute_deviation_around_median(y):
'''
Calculate the mean absolute deviation around the median of a given target array
:param y: a numpy array of targets shape = (n, 1)
:return mae
'''
# Your code goes here
return mae
class Regression_Tree():
'''
:attribute loss_function_dict: dictionary containing the loss functions used for splitting
:attribute estimator_dict: dictionary containing the estimation functions used in leaf nodes
'''
loss_function_dict = {
'mse': np.var,
'mae': mean_absolute_deviation_around_median
}
estimator_dict = {
'mean': np.mean,
'median': np.median
}
def __init__(self, loss_function='mse', estimator='mean', min_sample=5, max_depth=10):
'''
Initialize Regression_Tree
:param loss_function(str): loss function used for splitting internal nodes
:param estimator(str): value estimator of internal node
'''
self.tree = Decision_Tree(self.loss_function_dict[loss_function],
self.estimator_dict[estimator],
0, min_sample, max_depth)
def fit(self, X, y=None):
self.tree.fit(X,y)
return self
def predict_instance(self, instance):
value = self.tree.predict_instance(instance)
return value
###Output
_____no_output_____
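A possible body for the mean-absolute-deviation-around-median criterion above (a sketch):
def mad_around_median_sketch(y):
    # mean absolute deviation of the targets around their median
    return np.mean(np.abs(y - np.median(y)))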
###Markdown
Fit regression tree to one-dimensional regression data
###Code
data_krr_train = np.loadtxt('krr-train.txt')
data_krr_test = np.loadtxt('krr-test.txt')
x_krr_train, y_krr_train = data_krr_train[:,0].reshape(-1,1),data_krr_train[:,1].reshape(-1,1)
x_krr_test, y_krr_test = data_krr_test[:,0].reshape(-1,1),data_krr_test[:,1].reshape(-1,1)
# Training regression trees with different depth
clf1 = Regression_Tree(max_depth=1, min_sample=1, loss_function='mae', estimator='median')
clf1.fit(x_krr_train, y_krr_train)
clf2 = Regression_Tree(max_depth=2, min_sample=1, loss_function='mae', estimator='median')
clf2.fit(x_krr_train, y_krr_train)
clf3 = Regression_Tree(max_depth=3, min_sample=1, loss_function='mae', estimator='median')
clf3.fit(x_krr_train, y_krr_train)
clf4 = Regression_Tree(max_depth=4, min_sample=1, loss_function='mae', estimator='median')
clf4.fit(x_krr_train, y_krr_train)
clf5 = Regression_Tree(max_depth=5, min_sample=1, loss_function='mae', estimator='median')
clf5.fit(x_krr_train, y_krr_train)
clf6 = Regression_Tree(max_depth=6, min_sample=1, loss_function='mae', estimator='median')
clf6.fit(x_krr_train, y_krr_train)
plot_size = 0.001
x_range = np.arange(0., 1., plot_size).reshape(-1, 1)
f2, axarr2 = plt.subplots(2, 3, sharex='col', sharey='row', figsize=(15, 10))
for idx, clf, tt in zip(product([0, 1], [0, 1, 2]),
[clf1, clf2, clf3, clf4, clf5, clf6],
['Depth = {}'.format(n) for n in range(1, 7)]):
y_range_predict = np.array([clf.predict_instance(x) for x in x_range]).reshape(-1, 1)
axarr2[idx[0], idx[1]].plot(x_range, y_range_predict, color='r')
axarr2[idx[0], idx[1]].scatter(x_krr_train, y_krr_train, alpha=0.8)
axarr2[idx[0], idx[1]].set_title(tt)
axarr2[idx[0], idx[1]].set_xlim(0, 1)
plt.show()
###Output
_____no_output_____
###Markdown
Gradient Boosting Method
###Code
#Pseudo-residual function.
#Here you can assume that we are using L2 loss
def pseudo_residual_L2(train_target, train_predict):
'''
Compute the pseudo-residual based on current predicted value.
'''
return train_target - train_predict
class gradient_boosting():
'''
Gradient Boosting regressor class
:method fit: fitting model
'''
def __init__(self, n_estimator, pseudo_residual_func, learning_rate=0.1, min_sample=5, max_depth=3):
'''
Initialize gradient boosting class
:param n_estimator: number of estimators (i.e. number of rounds of gradient boosting)
:param pseudo_residual_func: function used for computing the pseudo-residuals
:param learning_rate: step size of gradient descent
'''
self.n_estimator = n_estimator
self.pseudo_residual_func = pseudo_residual_func
self.learning_rate = learning_rate
self.min_sample = min_sample
self.max_depth = max_depth
def fit(self, train_data, train_target):
'''
Fit gradient boosting model
'''
# Your code goes here
def predict(self, test_data):
'''
Predict value
'''
# Your code goes here
###Output
_____no_output_____
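One way the missing fit/predict bodies could look, reusing the Regression_Tree defined above as the base learner (a sketch; it relies on Decision_Tree.fit being implemented, and the assignment may expect a different structure):
def gbm_fit_sketch(model, train_data, train_target):
    # boost: fit each tree to the pseudo-residuals of the current prediction
    model.estimators = []
    y_pred = np.zeros_like(train_target, dtype=float)
    for _ in range(model.n_estimator):
        residual = model.pseudo_residual_func(train_target, y_pred)
        tree = Regression_Tree(min_sample=model.min_sample, max_depth=model.max_depth)
        tree.fit(train_data, residual)
        update = np.array([tree.predict_instance(x) for x in train_data]).reshape(-1, 1)
        y_pred = y_pred + model.learning_rate * update
        model.estimators.append(tree)
    return model

def gbm_predict_sketch(model, test_data):
    # sum of shrunken tree predictions, starting from a zero prediction
    pred = np.zeros((len(test_data), 1))
    for tree in model.estimators:
        pred += model.learning_rate * np.array(
            [tree.predict_instance(x) for x in test_data]).reshape(-1, 1)
    return pred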
###Markdown
2-D GBM visualization - SVM data
###Code
# Plotting decision regions
x_min, x_max = x_train[:, 0].min() - 1, x_train[:, 0].max() + 1
y_min, y_max = x_train[:, 1].min() - 1, x_train[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.1),
np.arange(y_min, y_max, 0.1))
f, axarr = plt.subplots(2, 3, sharex='col', sharey='row', figsize=(10, 8))
for idx, i, tt in zip(product([0, 1], [0, 1, 2]),
[1, 5, 10, 20, 50, 100],
['n_estimator = {}'.format(n) for n in [1, 5, 10, 20, 50, 100]]):
gbt = gradient_boosting(n_estimator=i, pseudo_residual_func=pseudo_residual_L2, max_depth=2)
gbt.fit(x_train, y_train)
Z = np.sign(gbt.predict(np.c_[xx.ravel(), yy.ravel()]))
Z = Z.reshape(xx.shape)
axarr[idx[0], idx[1]].contourf(xx, yy, Z, alpha=0.4)
axarr[idx[0], idx[1]].scatter(x_train[:, 0], x_train[:, 1], c=y_train_label, alpha=0.8)
axarr[idx[0], idx[1]].set_title(tt)
###Output
_____no_output_____
###Markdown
1-D GBM visualization - KRR data
###Code
plot_size = 0.001
x_range = np.arange(0., 1., plot_size).reshape(-1, 1)
f2, axarr2 = plt.subplots(2, 3, sharex='col', sharey='row', figsize=(15, 10))
for idx, i, tt in zip(product([0, 1], [0, 1, 2]),
[1, 5, 10, 20, 50, 100],
['n_estimator = {}'.format(n) for n in [1, 5, 10, 20, 50, 100]]):
gbm_1d = gradient_boosting(n_estimator=i, pseudo_residual_func=pseudo_residual_L2, max_depth=2)
gbm_1d.fit(x_krr_train, y_krr_train)
y_range_predict = gbm_1d.predict(x_range)
axarr2[idx[0], idx[1]].plot(x_range, y_range_predict, color='r')
axarr2[idx[0], idx[1]].scatter(x_krr_train, y_krr_train, alpha=0.8)
axarr2[idx[0], idx[1]].set_title(tt)
axarr2[idx[0], idx[1]].set_xlim(0, 1)
###Output
_____no_output_____ |
docs/gallery/plot_BIAS.ipynb | ###Markdown
BIAS histogram examples
###Code
import pandas as pd
import toto
import matplotlib.pyplot as plt
from toto.inputs.txt import TXTfile
import os
# read the file
hindcast='https://raw.githubusercontent.com/calypso-science/Toto/master/_tests/txt_file/tahuna_hindcast.txt'
measured='https://raw.githubusercontent.com/calypso-science/Toto/master/_tests/txt_file/tahuna_measured.txt'
os.system('wget %s ' % hindcast)
os.system('wget %s ' % measured)
me=TXTfile(['tahuna_measured.txt'],colNamesLine=1,skiprows=1,unitNamesLine=0,time_col_name={'Year':'year','Month':'month','Day':'day','Hour':'hour','Min':'Minute'})
me.reads()
me.read_time()
me=me._toDataFrame()
hd=TXTfile(['tahuna_hindcast.txt'],colNamesLine=1,skiprows=1,unitNamesLine=0,time_col_name={'Year':'year','Month':'month','Day':'day','Hour':'hour','Min':'Minute'})
hd.reads()
hd.read_time()
hd=hd._toDataFrame()
tmp=me[0].reindex(hd[0].index,method='nearest')
hd[0]['hs_measured']=tmp['Sig. Wave']
hd[0].filename='Tahuna'
# # Processing
hd[0].StatPlots.BIAS_histogramm(measured='hs_measured',modelled='hs',
args={'Nb of bins':30,
'Xlabel':'Significant wave height',
'units':'m',
'display':'On',
})
###Output
_____no_output_____ |
notebooks/WK_3-Assignment_1_SQL.ipynb | ###Markdown
Assignment 1: NYC Taxi Data
###Code
from pyspark.sql import SparkSession
# Create a local spark session
spark = SparkSession.builder \
.appName('nyc-taxi-sql') \
.getOrCreate()
# Read parquet file
df = spark.read.load("./output")
df.createOrReplaceTempView("nyc_taxi_data_2017_18")
###Output
_____no_output_____
###Markdown
Business Questions Q1.a. For each year and month: What was the total number of trips?
###Code
spark.sql("""
SELECT year
,month
,COUNT(*) AS number_of_trips
FROM nyc_taxi_data_2017_18
GROUP BY year
,month
ORDER BY year, month
""").show(24)
###Output
+----+-----+---------------+
|year|month|number_of_trips|
+----+-----+---------------+
|2017| 1| 10759055|
|2017| 2| 10170592|
|2017| 3| 11429334|
|2017| 4| 11104411|
|2017| 5| 11139331|
|2017| 6| 10612182|
|2017| 7| 9483901|
|2017| 8| 9271000|
|2017| 9| 9808837|
|2017| 10| 10673291|
|2017| 11| 10137773|
|2017| 12| 10393990|
|2018| 1| 9535011|
|2018| 2| 9244198|
|2018| 3| 10246590|
|2018| 4| 10086530|
|2018| 5| 10002146|
|2018| 6| 9433947|
|2018| 7| 8516297|
|2018| 8| 8497622|
|2018| 9| 8688822|
|2018| 10| 9508435|
|2018| 11| 8781808|
|2018| 12| 8837417|
+----+-----+---------------+
###Markdown
Q1.b. For each year and month: Which weekday had the most trips?
###Code
spark.sql("""
SELECT year
,month
,pickup_weekday
,total_trips
FROM (SELECT year
,month
,DATE_FORMAT(pickup_datetime, "EEEE") AS pickup_weekday
,COUNT(*) AS total_trips
,ROW_NUMBER() OVER (PARTITION BY year,month ORDER BY COUNT(*) DESC) AS row_num
FROM nyc_taxi_data_2017_18
GROUP BY year
,month
,pickup_weekday
)
WHERE row_num = 1
ORDER BY year
,month
""").show(24)
###Output
+----+-----+--------------+-----------+
|year|month|pickup_weekday|total_trips|
+----+-----+--------------+-----------+
|2017| 1| Tuesday| 1698667|
|2017| 2| Saturday| 1613115|
|2017| 3| Friday| 2030231|
|2017| 4| Saturday| 1965173|
|2017| 5| Wednesday| 1857762|
|2017| 6| Thursday| 1852070|
|2017| 7| Saturday| 1526780|
|2017| 8| Thursday| 1603485|
|2017| 9| Friday| 1721426|
|2017| 10| Tuesday| 1673294|
|2017| 11| Wednesday| 1740282|
|2017| 12| Friday| 1827482|
|2018| 1| Wednesday| 1624943|
|2018| 2| Friday| 1462063|
|2018| 3| Friday| 1808358|
|2018| 4| Monday| 1520937|
|2018| 5| Thursday| 1741622|
|2018| 6| Friday| 1641972|
|2018| 7| Tuesday| 1453861|
|2018| 8| Wednesday| 1485514|
|2018| 9| Saturday| 1469617|
|2018| 10| Wednesday| 1572695|
|2018| 11| Friday| 1520943|
|2018| 12| Saturday| 1505080|
+----+-----+--------------+-----------+
###Markdown
Q1.c. For each year and month: What was the average number of passengers?
###Code
spark.sql("""
SELECT year
,month
,AVG(passenger_count) AS avg_passengers_per_trip
FROM nyc_taxi_data_2017_18
GROUP BY year
,month
ORDER BY year
,month
""").show(24)
###Output
+----+-----+-----------------------+
|year|month|avg_passengers_per_trip|
+----+-----+-----------------------+
|2017| 1| 1.6035315369240142|
|2017| 2| 1.5991538152351408|
|2017| 3| 1.5928098697614401|
|2017| 4| 1.6020269782881775|
|2017| 5| 1.5956274214313229|
|2017| 6| 1.5996936351072757|
|2017| 7| 1.6155018910467327|
|2017| 8| 1.6097582785028584|
|2017| 9| 1.6050604164387685|
|2017| 10| 1.5993137449358403|
|2017| 11| 1.5957080514625845|
|2017| 12| 1.6152579519510795|
|2018| 1| 1.5930920268471636|
|2018| 2| 1.5828088061289902|
|2018| 3| 1.5889266575514391|
|2018| 4| 1.5892638003356951|
|2018| 5| 1.5853707794307341|
|2018| 6| 1.586684025254753|
|2018| 7| 1.5937331683007299|
|2018| 8| 1.5902972619869418|
|2018| 9| 1.5784291587513244|
|2018| 10| 1.5640660108629865|
|2018| 11| 1.5717419465331057|
|2018| 12| 1.5884660642357376|
+----+-----+-----------------------+
###Markdown
Q1.d. For each year and month: What was the average amount paid per trip (total_amount)?
###Code
spark.sql("""
SELECT year
,month
,AVG(total_amount) AS avg_total_amount_per_trip
FROM nyc_taxi_data_2017_18
GROUP BY year
,month
ORDER BY year
,month
""").show(24)
###Output
+----+-----+-------------------------+
|year|month|avg_total_amount_per_trip|
+----+-----+-------------------------+
|2017| 1| 15.30173950320955|
|2017| 2| 15.470210002462021|
|2017| 3| 16.00380195564449|
|2017| 4| 16.10720418329613|
|2017| 5| 16.560673703888867|
|2017| 6| 16.47228243213232|
|2017| 7| 16.2144495542399|
|2017| 8| 16.30985790548921|
|2017| 9| 16.515653416610583|
|2017| 10| 16.576218103559412|
|2017| 11| 16.32591823573312|
|2017| 12| 16.032728606820424|
|2018| 1| 15.38746873304433|
|2018| 2| 15.387113757366647|
|2018| 3| 15.901031332181095|
|2018| 4| 16.261353141312924|
|2018| 5| 16.755073060361443|
|2018| 6| 16.653265007935776|
|2018| 7| 16.569750328297385|
|2018| 8| 16.601714209171014|
|2018| 9| 16.834233158417458|
|2018| 10| 16.933848321180925|
|2018| 11| 16.818571270466034|
|2018| 12| 16.470735408327634|
+----+-----+-------------------------+
###Markdown
Q1.e. For each year and month: What was the average amount paid per passenger (total_amount)?
###Code
spark.sql("""
SELECT year
,month
,AVG(total_amount / passenger_count) AS avg_total_amount_per_passenger
FROM nyc_taxi_data_2017_18
GROUP BY year
,month
ORDER BY year
,month
""").show(24)
###Output
+----+-----+------------------------------+
|year|month|avg_total_amount_per_passenger|
+----+-----+------------------------------+
|2017| 1| 12.646149725201287|
|2017| 2| 12.764816082660923|
|2017| 3| 13.245625458581475|
|2017| 4| 13.265101183416846|
|2017| 5| 13.651144429622907|
|2017| 6| 13.604502486696209|
|2017| 7| 13.287119010142096|
|2017| 8| 13.39625704158913|
|2017| 9| 13.590143123322886|
|2017| 10| 13.68152108161739|
|2017| 11| 13.480742929428729|
|2017| 12| 13.117433840563063|
|2018| 1| 12.735796628332748|
|2018| 2| 12.776375687262949|
|2018| 3| 13.154252048935492|
|2018| 4| 13.445791517382709|
|2018| 5| 13.87456607832239|
|2018| 6| 13.777108741419482|
|2018| 7| 13.682049332549617|
|2018| 8| 13.717627626053176|
|2018| 9| 13.958107445881346|
|2018| 10| 14.096704885521154|
|2018| 11| 13.980369209989615|
|2018| 12| 13.577776159503674|
+----+-----+------------------------------+
###Markdown
Q2.a. For each taxi colour (yellow and green): What was the average, median, minimum and maximum trip duration in seconds?
###Code
spark.sql("""
SELECT taxi_type
,AVG(trip_duration_seconds) AS avg_trip_duration_seconds
,PERCENTILE(trip_duration_seconds, 0.5) AS median_trip_duration_seconds
,MIN(trip_duration_seconds) AS min_trip_duration_seconds
,MAX(trip_duration_seconds) AS max_trip_duration_seconds
FROM nyc_taxi_data_2017_18
GROUP BY taxi_type
""").show(2)
###Output
+---------+-------------------------+----------------------------+-------------------------+-------------------------+
|taxi_type|avg_trip_duration_seconds|median_trip_duration_seconds|min_trip_duration_seconds|max_trip_duration_seconds|
+---------+-------------------------+----------------------------+-------------------------+-------------------------+
| green| 1266.2004888441165| 627.0| 1| 202989|
| yellow| 1022.0828914491414| 670.0| 1| 45466304|
+---------+-------------------------+----------------------------+-------------------------+-------------------------+
###Markdown
Q2.b. For each taxi colour (yellow and green): What was the average, median, minimum and maximum trip distance in km?
###Code
spark.sql("""
SELECT taxi_type
,AVG(trip_distance_km) AS avg_trip_distance_km
,PERCENTILE(trip_distance_km, 0.5) AS median_trip_distance_km
,MIN(trip_distance_km) AS min_trip_distance_km
,MAX(trip_distance_km) AS max_trip_distance_km
FROM nyc_taxi_data_2017_18
GROUP BY taxi_type
""").show(2)
###Output
+---------+--------------------+-----------------------+--------------------+--------------------+
|taxi_type|avg_trip_distance_km|median_trip_distance_km|min_trip_distance_km|max_trip_distance_km|
+---------+--------------------+-----------------------+--------------------+--------------------+
| yellow| 4.728245869247112| 2.6232241999999997| 0.0| 4059.157815|
+---------+--------------------+-----------------------+--------------------+--------------------+
###Markdown
Q2.c. For each taxi colour (yellow and green): What was the average, median, minimum and maximum speed in km per hour?
###Code
spark.sql("""
SELECT taxi_type
,AVG(trip_distance_km/(trip_duration_seconds / 3600)) AS avg_km_per_hour
,PERCENTILE(trip_distance_km/(trip_duration_seconds / 3600), 0.5) AS median_km_per_hour
,MIN(trip_distance_km/(trip_duration_seconds / 3600)) AS min_km_per_hour
,MAX(trip_distance_km/(trip_duration_seconds / 3600)) AS max_km_per_hour
FROM nyc_taxi_data_2017_18
GROUP BY taxi_type
""").show(2)
###Output
+---------+-----------------+------------------+---------------+---------------+
|taxi_type| avg_km_per_hour|median_km_per_hour|min_km_per_hour|max_km_per_hour|
+---------+-----------------+------------------+---------------+---------------+
| green|22.64211146324986| 17.79052218181818| 0.0| 194955.4476|
+---------+-----------------+------------------+---------------+---------------+
###Markdown
Q2.d. For each taxi colour (yellow and green): What was the percentage of trips where the driver received tips?
###Code
spark.sql("""
SELECT ((SELECT COUNT(*) FROM nyc_taxi_data_2017_18 WHERE tip_amount > 0) / COUNT(*)) * 100 AS pct_trips_with_tip
FROM nyc_taxi_data_2017_18
""").show(1)
###Output
+------------------+
|pct_trips_with_tip|
+------------------+
| 63.05336311357655|
+------------------+
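###Markdown
The query above returns a single overall percentage, while Q2.d asks for the figure per taxi colour. A grouped variant along the following lines (a sketch against the same table, not executed here) would give one row per colour:
###Code
spark.sql("""
    SELECT taxi_type
          ,AVG(CASE WHEN tip_amount > 0 THEN 1.0 ELSE 0.0 END) * 100 AS pct_trips_with_tip
    FROM nyc_taxi_data_2017_18
    GROUP BY taxi_type
""").show(2)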
###Markdown
Q3. For trips where the driver received tips, what was the percentage where the driver received a tip of at least $10?
###Code
spark.sql("""
SELECT ((SELECT COUNT(*) FROM nyc_taxi_data_2017_18 WHERE tip_amount >= 10) / COUNT(*)) * 100 AS pct_trips_top_gt_10
FROM nyc_taxi_data_2017_18
""").show(1)
###Output
+-------------------+
|pct_trips_top_gt_10|
+-------------------+
| 2.1053562129901136|
+-------------------+
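###Markdown
Note that the query above divides by the count of all trips. If the percentage is meant to be taken only over trips where a tip was received, a variant such as this sketch (not executed here) restricts the denominator to tipped trips:
###Code
spark.sql("""
    SELECT (SUM(CASE WHEN tip_amount >= 10 THEN 1 ELSE 0 END) / COUNT(*)) * 100 AS pct_tipped_trips_tip_gte_10
    FROM nyc_taxi_data_2017_18
    WHERE tip_amount > 0
""").show(1)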
###Markdown
Q4.a. For each duration bin, calculate the average speed (km per hour). Bins are: Under 5 mins, From 5 mins to 10 mins, From 10 mins to 20 mins, From 20 mins to 30 mins, At least 30 mins.
###Code
spark.sql("""
SELECT trip_duration_category
,AVG(trip_distance_km / (trip_duration_seconds / 3600)) AS avg_km_per_hour
FROM nyc_taxi_data_2017_18
GROUP BY trip_duration_category
""").show(5)
###Output
+----------------------+------------------+
|trip_duration_category| avg_km_per_hour|
+----------------------+------------------+
| Above 30 mins|21.521682982544082|
| 10-20 mins| 20.07051347804941|
| 5-10 mins|17.981705341505787|
| 20-30 mins| 21.78188930509953|
| Under 5 mins| 37.06728243111635|
+----------------------+------------------+
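###Markdown
The trip_duration_category column is assumed to have been derived earlier in the pipeline. If it had to be rebuilt from trip_duration_seconds, a CASE expression along these lines would produce the five bins (a sketch only; the exact boundary handling and labels are assumptions chosen to match the output above):
###Code
spark.sql("""
    SELECT trip_duration_category
          ,COUNT(*) AS trips
    FROM (
        SELECT CASE
                 WHEN trip_duration_seconds < 5 * 60  THEN 'Under 5 mins'
                 WHEN trip_duration_seconds < 10 * 60 THEN '5-10 mins'
                 WHEN trip_duration_seconds < 20 * 60 THEN '10-20 mins'
                 WHEN trip_duration_seconds < 30 * 60 THEN '20-30 mins'
                 ELSE 'Above 30 mins'
               END AS trip_duration_category
        FROM nyc_taxi_data_2017_18
    ) binned
    GROUP BY trip_duration_category
""").show(5)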
###Markdown
Q4.b. For each duration bin, calculate the average distance per dollar (km per $). Bins are: Under 5 mins, From 5 mins to 10 mins, From 10 mins to 20 mins, From 20 mins to 30 mins, At least 30 mins. Assuming the dollar amount is the total US dollars received for the journey (total_amount), which includes tips, special fees and taxes.
###Code
spark.sql("""
SELECT trip_duration_category
,AVG(trip_distance_km / total_amount) AS avg_distance_per_dollar
FROM nyc_taxi_data_2017_18
GROUP BY trip_duration_category
""").show(5)
###Output
+----------------------+-----------------------+
|trip_duration_category|avg_distance_per_dollar|
+----------------------+-----------------------+
| Above 30 mins| 0.40398652813664593|
| 10-20 mins| 0.3094178375755084|
| 5-10 mins| 0.24283247313513143|
| 20-30 mins| 0.3585894731795521|
| Under 5 mins| 0.17535589774042296|
+----------------------+-----------------------+
Machine_Learning_intro_scikit-learn/Irises Data Analysis Workflow_classwork_2019_12.ipynb | ###Markdown
Introductory Data Analysis Workflow https://xkcd.com/2054 An example machine learning notebook* Original Notebook by [Randal S. Olson](http://www.randalolson.com/)* Supported by [Jason H. Moore](http://www.epistasis.org/)* [University of Pennsylvania Institute for Bioinformatics](http://upibi.org/)* Adapted for LU Py-Sem 2018 by [Valdis Saulespurens]([email protected]) **You can also [execute the code in this notebook on Binder](https://mybinder.org/v2/gh/ValRCS/RigaComm_DataAnalysis/master) - no local installation required.**
###Code
# text 17.04.2019
import datetime
print(datetime.datetime.now())
print('hello')
###Output
2020-02-08 00:17:36.973224
hello
###Markdown
Table of contents1. [Introduction](Introduction)2. [License](License)3. [Required libraries](Required-libraries)4. [The problem domain](The-problem-domain)5. [Step 1: Answering the question](Step-1:-Answering-the-question)6. [Step 2: Checking the data](Step-2:-Checking-the-data)7. [Step 3: Tidying the data](Step-3:-Tidying-the-data) - [Bonus: Testing our data](Bonus:-Testing-our-data)8. [Step 4: Exploratory analysis](Step-4:-Exploratory-analysis)9. [Step 5: Classification](Step-5:-Classification) - [Cross-validation](Cross-validation) - [Parameter tuning](Parameter-tuning)10. [Step 6: Reproducibility](Step-6:-Reproducibility)11. [Conclusions](Conclusions)12. [Further reading](Further-reading)13. [Acknowledgements](Acknowledgements) Introduction[[ go back to the top ]](Table-of-contents)In the time it took you to read this sentence, terabytes of data have been collectively generated across the world — more data than any of us could ever hope to process, much less make sense of, on the machines we're using to read this notebook.In response to this massive influx of data, the field of Data Science has come to the forefront in the past decade. Cobbled together by people from a diverse array of fields — statistics, physics, computer science, design, and many more — the field of Data Science represents our collective desire to understand and harness the abundance of data around us to build a better world.In this notebook, I'm going to go over a basic Python data analysis pipeline from start to finish to show you what a typical data science workflow looks like.In addition to providing code examples, I also hope to imbue in you a sense of good practices so you can be a more effective — and more collaborative — data scientist.I will be following along with the data analysis checklist from [The Elements of Data Analytic Style](https://leanpub.com/datastyle), which I strongly recommend reading as a free and quick guidebook to performing outstanding data analysis.**This notebook is intended to be a public resource. As such, if you see any glaring inaccuracies or if a critical topic is missing, please feel free to point it out or (preferably) submit a pull request to improve the notebook.** License[[ go back to the top ]](Table-of-contents)Please see the [repository README file](https://github.com/rhiever/Data-Analysis-and-Machine-Learning-Projectslicense) for the licenses and usage terms for the instructional material and code in this notebook. In general, I have licensed this material so that it is as widely usable and shareable as possible. Required libraries[[ go back to the top ]](Table-of-contents)If you don't have Python on your computer, you can use the [Anaconda Python distribution](http://continuum.io/downloads) to install most of the Python packages you need. Anaconda provides a simple double-click installer for your convenience.This notebook uses several Python packages that come standard with the Anaconda Python distribution. 
The primary libraries that we'll be using are:* **NumPy**: Provides a fast numerical array structure and helper functions.* **pandas**: Provides a DataFrame structure to store data in memory and work with it easily and efficiently.* **scikit-learn**: The essential Machine Learning package in Python.* **matplotlib**: Basic plotting library in Python; most other Python plotting libraries are built on top of it.* **Seaborn**: Advanced statistical plotting library.* **watermark**: A Jupyter Notebook extension for printing timestamps, version numbers, and hardware information.**Note:** I will not be providing support for people trying to run this notebook outside of the Anaconda Python distribution. The problem domain[[ go back to the top ]](Table-of-contents)For the purposes of this exercise, let's pretend we're working for a startup that just got funded to create a smartphone app that automatically identifies species of flowers from pictures taken on the smartphone. We're working with a moderately-sized team of data scientists and will be building part of the data analysis pipeline for this app.We've been tasked by our company's Head of Data Science to create a demo machine learning model that takes four measurements from the flowers (sepal length, sepal width, petal length, and petal width) and identifies the species based on those measurements alone.We've been given a [data set](https://github.com/ValRCS/RCS_Data_Analysis_Python/blob/master/data/iris-data.csv) from our field researchers to develop the demo, which only includes measurements for three types of *Iris* flowers: *Iris setosa* *Iris versicolor* *Iris virginica*The four measurements we're using currently come from hand-measurements by the field researchers, but they will be automatically measured by an image processing model in the future.**Note:** The data set we're working with is the famous [*Iris* data set](https://archive.ics.uci.edu/ml/datasets/Iris) — included with this notebook — which I have modified slightly for demonstration purposes. Step 1: Answering the question[[ go back to the top ]](Table-of-contents)The first step to any data analysis project is to define the question or problem we're looking to solve, and to define a measure (or set of measures) for our success at solving that task. The data analysis checklist has us answer a handful of questions to accomplish that, so let's work through those questions.>Did you specify the type of data analytic question (e.g. exploration, association causality) before touching the data?We're trying to classify the species (i.e., class) of the flower based on four measurements that we're provided: sepal length, sepal width, petal length, and petal width.Petal - ziedlapiņa, sepal - arī ziedlapiņa>Did you define the metric for success before beginning?Let's do that now. Since we're performing classification, we can use [accuracy](https://en.wikipedia.org/wiki/Accuracy_and_precision) — the fraction of correctly classified flowers — to quantify how well our model is performing. Our company's Head of Data has told us that we should achieve at least 90% accuracy.>Did you understand the context for the question and the scientific or business application?We're building part of a data analysis pipeline for a smartphone app that will be able to classify the species of flowers from pictures taken on the smartphone. 
In the future, this pipeline will be connected to another pipeline that automatically measures from pictures the traits we're using to perform this classification.>Did you record the experimental design?Our company's Head of Data has told us that the field researchers are hand-measuring 50 randomly-sampled flowers of each species using a standardized methodology. The field researchers take pictures of each flower they sample from pre-defined angles so the measurements and species can be confirmed by the other field researchers at a later point. At the end of each day, the data is compiled and stored on a private company GitHub repository.>Did you consider whether the question could be answered with the available data?The data set we currently have is only for three types of *Iris* flowers. The model built off of this data set will only work for those *Iris* flowers, so we will need more data to create a general flower classifier.Notice that we've spent a fair amount of time working on the problem without writing a line of code or even looking at the data.**Thinking about and documenting the problem we're working on is an important step to performing effective data analysis that often goes overlooked.** Don't skip it. Step 2: Checking the data[[ go back to the top ]](Table-of-contents)The next step is to look at the data we're working with. Even curated data sets from the government can have errors in them, and it's vital that we spot these errors before investing too much time in our analysis.Generally, we're looking to answer the following questions:* Is there anything wrong with the data?* Are there any quirks with the data?* Do I need to fix or remove any of the data?Let's start by reading the data into a pandas DataFrame.
###Code
import pandas as pd
iris_data = pd.read_csv('../data/iris-data.csv')
# let's take a look at the first 5 rows
iris_data.head()
iris_data.tail()
# Resources for loading data from nonlocal sources
# Pandas Can generally handle most common formats
# https://pandas.pydata.org/pandas-docs/stable/io.html
# SQL https://stackoverflow.com/questions/39149243/how-do-i-connect-to-a-sql-server-database-with-python
# NoSQL MongoDB https://realpython.com/introduction-to-mongodb-and-python/
# Apache Hadoop: https://dzone.com/articles/how-to-get-hadoop-data-into-a-python-model
# Apache Spark: https://www.datacamp.com/community/tutorials/apache-spark-python
# Data Scraping / Crawling libraries : https://elitedatascience.com/python-web-scraping-libraries Big Topic in itself
# Most data resources have some form of Python API / Library
iris_data.head()
###Output
_____no_output_____
###Markdown
We're in luck! The data seems to be in a usable format.The first row in the data file defines the column headers, and the headers are descriptive enough for us to understand what each column represents. The headers even give us the units that the measurements were recorded in, just in case we needed to know at a later point in the project.Each row following the first row represents an entry for a flower: four measurements and one class, which tells us the species of the flower.**One of the first things we should look for is missing data.** Thankfully, the field researchers already told us that they put a 'NA' into the spreadsheet when they were missing a measurement.We can tell pandas to automatically identify missing values if it knows our missing value marker.
###Code
iris_data.shape
iris_data.info()
iris_data.describe()
# with na_values we can pass what cells to mark as na
iris_data = pd.read_csv('../data/iris-data.csv', na_values=['NA', 'N/A'])
###Output
_____no_output_____
###Markdown
Voilà! Now pandas knows to treat rows with 'NA' as missing values. Next, it's always a good idea to look at the distribution of our data — especially the outliers.Let's start by printing out some summary statistics about the data set.
###Code
iris_data.describe()
###Output
_____no_output_____
###Markdown
We can see several useful values from this table. For example, we see that five `petal_width_cm` entries are missing.If you ask me, though, tables like this are rarely useful unless we know that our data should fall in a particular range. It's usually better to visualize the data in some way. Visualization makes outliers and errors immediately stand out, whereas they might go unnoticed in a large table of numbers.Since we know we're going to be plotting in this section, let's set up the notebook so we can plot inside of it.
###Code
# This line tells the notebook to show plots inside of the notebook
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sb
###Output
_____no_output_____
###Markdown
Next, let's create a **scatterplot matrix**. Scatterplot matrices plot the distribution of each column along the diagonal, and then plot a scatterplot matrix for the combination of each variable. They make for an efficient tool to look for errors in our data.We can even have the plotting package color each entry by its class to look for trends within the classes.
###Code
sb.pairplot(iris_data, hue='class')
# We have to temporarily drop the rows with 'NA' values
# because the Seaborn plotting function does not know
# what to do with them
sb.pairplot(iris_data.dropna(), hue='class')
###Output
_____no_output_____
###Markdown
From the scatterplot matrix, we can already see some issues with the data set:1. There are five classes when there should only be three, meaning there were some coding errors.2. There are some clear outliers in the measurements that may be erroneous: one `sepal_width_cm` entry for `Iris-setosa` falls well outside its normal range, and several `sepal_length_cm` entries for `Iris-versicolor` are near-zero for some reason.3. We had to drop those rows with missing values.In all of these cases, we need to figure out what to do with the erroneous data. Which takes us to the next step... Step 3: Tidying the data GIGO principle[[ go back to the top ]](Table-of-contents)Now that we've identified several errors in the data set, we need to fix them before we proceed with the analysis.Let's walk through the issues one-by-one.>There are five classes when there should only be three, meaning there were some coding errors.After talking with the field researchers, it sounds like one of them forgot to add `Iris-` before their `Iris-versicolor` entries. The other extraneous class, `Iris-setossa`, was simply a typo that they forgot to fix.Let's use the DataFrame to fix these errors.
###Code
iris_data['class'].unique()
len(iris_data['class'].unique())
# Copy and Replace
# in df.loc[rows, then columns]
iris_data.loc[iris_data['class'] == 'versicolor', 'class'] = 'Iris-versicolor'
iris_data['class'].unique()
# So we take a row where a specific column('class' here) matches our bad values
# and change them to good values
iris_data.loc[iris_data['class'] == 'Iris-setossa', 'class'] = 'Iris-setosa'
iris_data['class'].unique()
iris_data.tail()
iris_data[98:103]
iris_data['class'].unique()
###Output
_____no_output_____
###Markdown
Much better! Now we only have three class types. Imagine how embarrassing it would've been to create a model that used the wrong classes.>There are some clear outliers in the measurements that may be erroneous: one `sepal_width_cm` entry for `Iris-setosa` falls well outside its normal range, and several `sepal_length_cm` entries for `Iris-versicolor` are near-zero for some reason.Fixing outliers can be tricky business. It's rarely clear whether the outlier was caused by measurement error, recording the data in improper units, or if the outlier is a real anomaly. For that reason, we should be judicious when working with outliers: if we decide to exclude any data, we need to make sure to document what data we excluded and provide solid reasoning for excluding that data. (i.e., "This data didn't fit my hypothesis" will not stand peer review.)In the case of the one anomalous entry for `Iris-setosa`, let's say our field researchers know that it's impossible for `Iris-setosa` to have a sepal width below 2.5 cm. Clearly this entry was made in error, and we're better off just scrapping the entry than spending hours finding out what happened.
###Code
# here we see all flowers with sepal_width_cm under 2.5 cm
iris_data.loc[(iris_data['sepal_width_cm'] < 2.5)]
# for multiple filters we use & for AND, and | for OR
smallpetals = iris_data.loc[(iris_data['sepal_width_cm'] < 2.5) & (iris_data['class'] == 'Iris-setosa')]
smallpetals
iris_data.loc[iris_data['class'] == 'Iris-setosa', 'sepal_width_cm'].hist()
len(iris_data)
# This line drops any 'Iris-setosa' rows with a sepal width less than 2.5 cm
# Let's go over this command in class
iris_data = iris_data.loc[(iris_data['class'] != 'Iris-setosa') | (iris_data['sepal_width_cm'] >= 2.5)]
iris_data.loc[iris_data['class'] == 'Iris-setosa', 'sepal_width_cm'].hist()
len(iris_data)
###Output
_____no_output_____
###Markdown
Excellent! Now all of our `Iris-setosa` rows have a sepal width greater than 2.5.The next data issue to address is the several near-zero sepal lengths for the `Iris-versicolor` rows. Let's take a look at those rows.
###Code
iris_data.loc[(iris_data['class'] == 'Iris-versicolor') &
(iris_data['sepal_length_cm'] < 1.0)]
###Output
_____no_output_____
###Markdown
How about that? All of these near-zero `sepal_length_cm` entries seem to be off by two orders of magnitude, as if they had been recorded in meters instead of centimeters.After some brief correspondence with the field researchers, we find that one of them forgot to convert those measurements to centimeters. Let's do that for them.
###Code
iris_data.loc[iris_data['class'] == 'Iris-versicolor', 'sepal_length_cm'].hist()
iris_data['sepal_length_cm'].hist()
# we double-check that our filter works before changing anything
iris_data.loc[(iris_data['class'] == 'Iris-versicolor') &
(iris_data['sepal_length_cm'] < 1.0)].head()
iris_data.loc[(iris_data['class'] == 'Iris-versicolor') &
(iris_data['sepal_length_cm'] < 1.0)]
# Here we fix the wrong units
iris_data.loc[(iris_data['class'] == 'Iris-versicolor') &
(iris_data['sepal_length_cm'] < 1.0),
'sepal_length_cm'] *= 100.0
iris_data.loc[iris_data['class'] == 'Iris-versicolor', 'sepal_length_cm'].hist()
;
iris_data['sepal_length_cm'].hist()
###Output
_____no_output_____
###Markdown
Phew! Good thing we fixed those outliers. They could've really thrown our analysis off.>We had to drop those rows with missing values.Let's take a look at the rows with missing values:
###Code
iris_data.notnull()
iris_data.loc[(iris_data['sepal_length_cm'].isnull()) |
(iris_data['sepal_width_cm'].isnull()) |
(iris_data['petal_length_cm'].isnull()) |
(iris_data['petal_width_cm'].isnull())]
###Output
_____no_output_____
###Markdown
It's not ideal that we had to drop those rows, especially considering they're all `Iris-setosa` entries. Since it seems like the missing data is systematic — all of the missing values are in the same column for the same *Iris* type — this error could potentially bias our analysis.One way to deal with missing data is **mean imputation**: If we know that the values for a measurement fall in a certain range, we can fill in empty values with the average of that measurement.Let's see if we can do that here.
###Code
iris_data.loc[iris_data['class'] == 'Iris-setosa', 'petal_width_cm'].hist()
###Output
_____no_output_____
###Markdown
Most of the petal widths for `Iris-setosa` fall within the 0.2-0.3 range, so let's fill in these entries with the average measured petal width.
###Code
iris_setosa_avg = iris_data.loc[iris_data['class'] == 'Iris-setosa', 'petal_width_cm'].mean()
iris_setosa_avg
type(iris_setosa_avg)
round(iris_setosa_avg, 2)
# for our purposes 4 decimal places of accuracy is sufficient, add why here :)
iris_setosa_avg = round(iris_setosa_avg, 4)
average_petal_width = iris_data.loc[iris_data['class'] == 'Iris-setosa', 'petal_width_cm'].mean()
print(average_petal_width)
average_petal_width = iris_setosa_avg
# we find iris-setosa rows where petal_width_cm is missing
iris_data.loc[(iris_data['class'] == 'Iris-setosa') &
(iris_data['petal_width_cm'].isnull()),
'petal_width_cm'] = average_petal_width
# we find all iris-setosa with the average
iris_data.loc[(iris_data['class'] == 'Iris-setosa') &
(iris_data['petal_width_cm'] == average_petal_width)]
iris_data.loc[(iris_data['sepal_length_cm'].isnull()) |
(iris_data['sepal_width_cm'].isnull()) |
(iris_data['petal_length_cm'].isnull()) |
(iris_data['petal_width_cm'].isnull())]
# if we want to drop rows with missing data
# and keep the remaining rows in a new dataframe
dfwithoutmissingvalues = iris_data.dropna()
len(dfwithoutmissingvalues)
###Output
_____no_output_____
###Markdown
Great! Now we've recovered those rows and no longer have missing data in our data set.**Note:** If you don't feel comfortable imputing your data, you can drop all rows with missing data with the `dropna()` call: iris_data.dropna(inplace=True)After all this hard work, we don't want to repeat this process every time we work with the data set. Let's save the tidied data file *as a separate file* and work directly with that data file from now on.
###Code
import json
iris_data.to_json('../data/iris-clean.json')
# pandas' to_json does not pretty-print, so we format the JSON ourselves
df_json_pretty = json.dumps(json.loads(iris_data.to_json()), indent=4)
type(df_json_pretty)
df_json_pretty[:100]
with open('data.json', 'w', encoding='utf-8') as f:
f.write(df_json_pretty)
iris_data.to_csv('../data/iris-data-clean.csv', index=False)
# for saving in the same folder
iris_data.to_csv('iris-data-clean.csv', index=False)
iris_data_clean = pd.read_csv('../data/iris-data-clean.csv')
###Output
_____no_output_____
###Markdown
Now, let's take a look at the scatterplot matrix now that we've tidied the data.
###Code
myplot = sb.pairplot(iris_data_clean, hue='class')
myplot.savefig('irises.png')
import scipy.stats as stats
iris_data = pd.read_csv('../data/iris-data.csv')
iris_data.columns.unique()
stats.entropy(iris_data_clean['sepal_length_cm'])
iris_data.columns[:-1]
# we go through list of column names except last one and get entropy
# for data (without missing values) in each column
for col in iris_data.columns[:-1]:
print("Entropy for: ", col, stats.entropy(iris_data[col].dropna()))
###Output
Entropy for: sepal_length_cm 4.96909746125432
Entropy for: sepal_width_cm 5.000701325982732
Entropy for: petal_length_cm 4.888113822938816
Entropy for: petal_width_cm 4.754264731532864
###Markdown
Of course, I purposely inserted numerous errors into this data set to demonstrate some of the many possible scenarios you may face while tidying your data.The general takeaways here should be:* Make sure your data is encoded properly* Make sure your data falls within the expected range, and use domain knowledge whenever possible to define that expected range* Deal with missing data in one way or another: replace it if you can or drop it* Never tidy your data manually because that is not easily reproducible* Use code as a record of how you tidied your data* Plot everything you can about the data at this stage of the analysis so you can *visually* confirm everything looks correct Bonus: Testing our data[[ go back to the top ]](Table-of-contents)At SciPy 2015, I was exposed to a great idea: We should test our data. Just how we use unit tests to verify our expectations from code, we can similarly set up unit tests to verify our expectations about a data set.We can quickly test our data using `assert` statements: We assert that something must be true, and if it is, then nothing happens and the notebook continues running. However, if our assertion is wrong, then the notebook stops running and brings it to our attention. For example,```Pythonassert 1 == 2```will raise an `AssertionError` and stop execution of the notebook because the assertion failed.Let's test a few things that we know about our data set now.
###Code
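# this assert fails on purpose - it demonstrates how a failing assertion stops the notebook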
assert 1 == 3
# We know that we should only have three classes
assert len(iris_data_clean['class'].unique()) == 3
assert len(iris_data['class'].unique()) == 3
# We know that sepal lengths for 'Iris-versicolor' should never be below 2.5 cm
assert iris_data_clean.loc[iris_data_clean['class'] == 'Iris-versicolor', 'sepal_length_cm'].min() >= 2.5
# We know that our data set should have no missing measurements
assert len(iris_data_clean.loc[(iris_data_clean['sepal_length_cm'].isnull()) |
(iris_data_clean['sepal_width_cm'].isnull()) |
(iris_data_clean['petal_length_cm'].isnull()) |
(iris_data_clean['petal_width_cm'].isnull())]) == 0
# We know that our data set should have no missing measurements
assert len(iris_data.loc[(iris_data['sepal_length_cm'].isnull()) |
(iris_data['sepal_width_cm'].isnull()) |
(iris_data['petal_length_cm'].isnull()) |
(iris_data['petal_width_cm'].isnull())]) == 0
###Output
_____no_output_____
###Markdown
And so on. If any of these expectations are violated, then our analysis immediately stops and we have to return to the tidying stage. Data Cleanup & Wrangling > 80% time spent in Data Science Step 4: Exploratory analysis[[ go back to the top ]](Table-of-contents)Now after spending entirely too much time tidying our data, we can start analyzing it!Exploratory analysis is the step where we start delving deeper into the data set beyond the outliers and errors. We'll be looking to answer questions such as:* How is my data distributed?* Are there any correlations in my data?* Are there any confounding factors that explain these correlations?This is the stage where we plot all the data in as many ways as possible. Create many charts, but don't bother making them pretty — these charts are for internal use.Let's return to that scatterplot matrix that we used earlier.
###Code
sb.pairplot(iris_data_clean)
;
###Output
_____no_output_____
###Markdown
Our data is normally distributed for the most part, which is great news if we plan on using any modeling methods that assume the data is normally distributed.There's something strange going on with the petal measurements. Maybe it's something to do with the different `Iris` types. Let's color code the data by the class again to see if that clears things up.
###Code
sb.pairplot(iris_data_clean, hue='class')
;
###Output
_____no_output_____
###Markdown
Sure enough, the strange distribution of the petal measurements exist because of the different species. This is actually great news for our classification task since it means that the petal measurements will make it easy to distinguish between `Iris-setosa` and the other `Iris` types.Distinguishing `Iris-versicolor` and `Iris-virginica` will prove more difficult given how much their measurements overlap.There are also correlations between petal length and petal width, as well as sepal length and sepal width. The field biologists assure us that this is to be expected: Longer flower petals also tend to be wider, and the same applies for sepals.We can also make [**violin plots**](https://en.wikipedia.org/wiki/Violin_plot) of the data to compare the measurement distributions of the classes. Violin plots contain the same information as [box plots](https://en.wikipedia.org/wiki/Box_plot), but also scales the box according to the density of the data.
###Code
plt.figure(figsize=(10, 10))
for column_index, column in enumerate(iris_data_clean.columns):
if column == 'class':
continue
plt.subplot(2, 2, column_index + 1)
sb.violinplot(x='class', y=column, data=iris_data_clean)
###Output
_____no_output_____
###Markdown
Enough flirting with the data. Let's get to modeling. Step 5: Classification[[ go back to the top ]](Table-of-contents)Wow, all this work and we *still* haven't modeled the data!As tiresome as it can be, tidying and exploring our data is a vital component to any data analysis. If we had jumped straight to the modeling step, we would have created a faulty classification model.Remember: **Bad data leads to bad models.** Always check your data first.Assured that our data is now as clean as we can make it — and armed with some cursory knowledge of the distributions and relationships in our data set — it's time to make the next big step in our analysis: Splitting the data into training and testing sets.A **training set** is a random subset of the data that we use to train our models.A **testing set** is a random subset of the data (mutually exclusive from the training set) that we use to validate our models on unforseen data.Especially in sparse data sets like ours, it's easy for models to **overfit** the data: The model will learn the training set so well that it won't be able to handle most of the cases it's never seen before. This is why it's important for us to build the model with the training set, but score it with the testing set.Note that once we split the data into a training and testing set, we should treat the testing set like it no longer exists: We cannot use any information from the testing set to build our model or else we're cheating.Let's set up our data first.
###Code
# iris_data_clean = pd.read_csv('../data/iris-data-clean.csv')
# We're using all four measurements as inputs
# Note that scikit-learn expects each entry to be a list of values, e.g.,
# [ [val1, val2, val3],
# [val1, val2, val3],
# ... ]
# such that our input data set is represented as a list of lists
# We can extract the data in this format from pandas like this:
# usually called X
all_inputs = iris_data_clean[['sepal_length_cm', 'sepal_width_cm',
'petal_length_cm', 'petal_width_cm']].values
# Similarly, we can extract the class labels
# answers/label often called little y
all_labels = iris_data_clean['class'].values
# Make sure that you don't mix up the order of the entries
# all_inputs[5] inputs should correspond to the class in all_labels[5]
# Here's what a subset of our inputs looks like:
all_inputs[:5]
type(all_inputs)
all_labels[:5]
type(all_labels)
###Output
_____no_output_____
###Markdown
Now our data is ready to be split.
###Code
all_inputs[:3]
iris_data_clean.head(3)
all_labels[:3]
from sklearn.model_selection import train_test_split
# Here we split our data into training and testing data
# you can read more on split function at
# https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html
(training_inputs,
testing_inputs,
training_classes,
testing_classes) = train_test_split(all_inputs, all_labels, test_size=0.25, random_state=1)
len(all_inputs)
len(training_inputs)
0.75*149
149*0.25
len(testing_inputs)
training_inputs[:5]
testing_inputs[:5]
testing_classes[:5]
training_classes[:5]
###Output
_____no_output_____
###Markdown
With our data split, we can start fitting models to our data. Our company's Head of Data is all about decision tree classifiers, so let's start with one of those.Decision tree classifiers are incredibly simple in theory. In their simplest form, decision tree classifiers ask a series of Yes/No questions about the data — each time getting closer to finding out the class of each entry — until they either classify the data set perfectly or simply can't differentiate a set of entries. Think of it like a game of [Twenty Questions](https://en.wikipedia.org/wiki/Twenty_Questions), except the computer is *much*, *much* better at it.Here's an example decision tree classifier:Notice how the classifier asks Yes/No questions about the data — whether a certain feature is <= 1.75, for example — so it can differentiate the records. This is the essence of every decision tree.The nice part about decision tree classifiers is that they are **scale-invariant**, i.e., the scale of the features does not affect their performance, unlike many Machine Learning models. In other words, it doesn't matter if our features range from 0 to 1 or 0 to 1,000; decision tree classifiers will work with them just the same.There are several [parameters](http://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html) that we can tune for decision tree classifiers, but for now let's use a basic decision tree classifier.
###Code
from sklearn.tree import DecisionTreeClassifier
# Create the classifier
decision_tree_classifier = DecisionTreeClassifier()
# Train the classifier on the training set
decision_tree_classifier.fit(training_inputs, training_classes)
# Validate the classifier on the testing set using classification accuracy
decision_tree_classifier.score(testing_inputs, testing_classes)
1-1/38
decision_tree_classifier.score(training_inputs, training_classes)
150*0.25
len(testing_inputs)
# how the accuracy score came about: 37 out of 38 correct
37/38
# let's try a cooler model: SVM - Support Vector Machines
from sklearn import svm
svm_classifier = svm.SVC(gamma = 'scale')
svm_classifier.fit(training_inputs, training_classes)
svm_classifier.score(testing_inputs, testing_classes)
svm_classifier = svm.SVC(gamma = 'scale')
svm_classifier.fit(training_inputs, training_classes)
svm_classifier.score(testing_inputs, testing_classes)
###Output
_____no_output_____
###Markdown
Heck yeah! Our model achieves 97% classification accuracy without much effort.However, there's a catch: Depending on how our training and testing set was sampled, our model can achieve anywhere from 80% to 100% accuracy:
###Code
import matplotlib.pyplot as plt
# here we randomly split the data 1000 times into different training and test sets
model_accuracies = []
for repetition in range(1000):
(training_inputs,
testing_inputs,
training_classes,
testing_classes) = train_test_split(all_inputs, all_labels, test_size=0.25)
    # notice that we do not specify a seed, so all 1000 splits are random and different
decision_tree_classifier = DecisionTreeClassifier()
decision_tree_classifier.fit(training_inputs, training_classes)
classifier_accuracy = decision_tree_classifier.score(testing_inputs, testing_classes)
model_accuracies.append(classifier_accuracy)
plt.hist(model_accuracies)
;
max(model_accuracies)
min(model_accuracies)
1-7/38
from collections import Counter
acc_count = Counter(model_accuracies)
acc_count
1/38
100/38
###Output
_____no_output_____
###Markdown
It's obviously a problem that our model performs quite differently depending on the subset of the data it's trained on. This phenomenon is known as **overfitting**: The model is learning to classify the training set so well that it doesn't generalize and perform well on data it hasn't seen before. Cross-validation[[ go back to the top ]](Table-of-contents)This problem is the main reason that most data scientists perform ***k*-fold cross-validation** on their models: Split the original data set into *k* subsets, use one of the subsets as the testing set, and the rest of the subsets are used as the training set. This process is then repeated *k* times such that each subset is used as the testing set exactly once.10-fold cross-validation is the most common choice, so let's use that here. Performing 10-fold cross-validation on our data set looks something like this:(each square is an entry in our data set)
###Code
iris_data_clean.head(15)
iris_data_clean.tail()
# new text
import numpy as np
from sklearn.model_selection import StratifiedKFold
def plot_cv(cv, features, labels):
masks = []
for train, test in cv.split(features, labels):
mask = np.zeros(len(labels), dtype=bool)
mask[test] = 1
masks.append(mask)
plt.figure(figsize=(15, 15))
plt.imshow(masks, interpolation='none', cmap='gray_r')
plt.ylabel('Fold')
plt.xlabel('Row #')
plot_cv(StratifiedKFold(n_splits=10), all_inputs, all_labels)
###Output
_____no_output_____
###Markdown
You'll notice that we used **Stratified *k*-fold cross-validation** in the code above. Stratified *k*-fold keeps the class proportions the same across all of the folds, which is vital for maintaining a representative subset of our data set. (e.g., so we don't have 100% `Iris setosa` entries in one of the folds.)We can perform 10-fold cross-validation on our model with the following code:
###Code
from sklearn.model_selection import cross_val_score
decision_tree_classifier = DecisionTreeClassifier()
# cross_val_score returns a list of the scores, which we can visualize
# to get a reasonable estimate of our classifier's performance
cv_scores = cross_val_score(decision_tree_classifier, all_inputs, all_labels, cv=10)
plt.hist(cv_scores)
plt.title('Average score: {}'.format(np.mean(cv_scores)))
;
cv_scores
1-1/15
len(all_inputs.T[1])
import scipy.stats as stats
# https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.entropy.html
# https://en.wikipedia.org/wiki/Entropy_(information_theory)
print("Entropy for: ", stats.entropy(all_inputs.T[1]))
# we go through each feature column of the NumPy input array
# and print the entropy of its values
def printEntropy(npdata):
for i, col in enumerate(npdata.T):
print("Entropy for column:", i, stats.entropy(col))
printEntropy(all_inputs)
###Output
Entropy for column: 0 4.9947332367061925
Entropy for column: 1 4.994187360273029
Entropy for column: 2 4.88306851089088
Entropy for column: 3 4.76945055275522
###Markdown
Now we have a much more consistent rating of our classifier's general classification accuracy. Parameter tuning[[ go back to the top ]](Table-of-contents)Every Machine Learning model comes with a variety of parameters to tune, and these parameters can be vitally important to the performance of our classifier. For example, if we severely limit the depth of our decision tree classifier:
###Code
decision_tree_classifier = DecisionTreeClassifier(max_depth=1)
cv_scores = cross_val_score(decision_tree_classifier, all_inputs, all_labels, cv=10)
plt.hist(cv_scores)
plt.title('Average score: {}'.format(np.mean(cv_scores)))
;
###Output
_____no_output_____
###Markdown
the classification accuracy falls tremendously.Therefore, we need to find a systematic method to discover the best parameters for our model and data set.The most common method for model parameter tuning is **Grid Search**. The idea behind Grid Search is simple: explore a range of parameters and find the best-performing parameter combination. Focus your search on the best range of parameters, then repeat this process several times until the best parameters are discovered.Let's tune our decision tree classifier. We'll stick to only two parameters for now, but it's possible to simultaneously explore dozens of parameters if we want.
###Code
# prepare the parameter grid and cross-validation splitter, then fit via grid search
from sklearn.model_selection import GridSearchCV
decision_tree_classifier = DecisionTreeClassifier()
# the parameters will depend on the model we use above
parameter_grid = {'max_depth': [1, 2, 3, 4, 5],
'max_features': [1, 2, 3, 4]}
cross_validation = StratifiedKFold(n_splits=10)
grid_search = GridSearchCV(decision_tree_classifier,
param_grid=parameter_grid,
cv=cross_validation)
# here the grid search will loop through all parameter combinations and fit the model to cross validated splits
grid_search.fit(all_inputs, all_labels)
print('Best score: {}'.format(grid_search.best_score_))
print('Best parameters: {}'.format(grid_search.best_params_))
###Output
Best score: 0.959731543624161
Best parameters: {'max_depth': 3, 'max_features': 3}
###Markdown
Now let's visualize the grid search to see how the parameters interact.
###Code
type(grid_search)
grid_search.estimator
grid_search.param_grid
type(grid_search.param_grid)
grid_search.cv
grid_search.cv_results_['mean_test_score']
cv_res = grid_search.cv_results_['mean_test_score']
cv_res.shape
import seaborn as sb
grid_visualization = grid_search.cv_results_['mean_test_score']
grid_visualization.shape = (5, 4)
sb.heatmap(grid_visualization, cmap='Oranges', annot=True)
plt.xticks(np.arange(4) + 0.5, grid_search.param_grid['max_features'])
plt.yticks(np.arange(5) + 0.5, grid_search.param_grid['max_depth'])
plt.xlabel('max_features')
plt.ylabel('max_depth')
plt.savefig("grid_heatmap.png")
;
plt.savefig("empty.jpg")
###Output
_____no_output_____
###Markdown
Now we have a better sense of the parameter space: We know that we need a `max_depth` of at least 2 to allow the decision tree to make more than a one-off decision.`max_features` doesn't really seem to make a big difference here as long as we have 2 of them, which makes sense since our data set has only 4 features and is relatively easy to classify. (Remember, one of our data set's classes was easily separable from the rest based on a single feature.)Let's go ahead and use a broad grid search to find the best settings for a handful of parameters.
###Code
decision_tree_classifier = DecisionTreeClassifier()
parameter_grid = {'criterion': ['gini', 'entropy'],
'splitter': ['best', 'random'],
'max_depth': [1, 2, 3, 4, 5],
'max_features': [1, 2, 3, 4]}
cross_validation = StratifiedKFold(n_splits=10)
grid_search = GridSearchCV(decision_tree_classifier,
param_grid=parameter_grid,
cv=cross_validation)
grid_search.fit(all_inputs, all_labels)
print('Best score: {}'.format(grid_search.best_score_))
print('Best parameters: {}'.format(grid_search.best_params_))
149*grid_search.best_score_
143/149
145/149
###Output
_____no_output_____
###Markdown
Now we can take the best classifier from the Grid Search and use that:
###Code
# we pick the best one and save for now in a different variable
decision_tree_classifier = grid_search.best_estimator_
decision_tree_classifier
###Output
_____no_output_____
###Markdown
We can even visualize the decision tree with [GraphViz](http://www.graphviz.org/) to see how it's making the classifications:
###Code
import sklearn.tree as tree
from sklearn.externals.six import StringIO
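# note: StringIO is not actually used below (export_graphviz writes straight to the opened file),
# and sklearn.externals.six has been removed in newer scikit-learn versions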
with open('iris_dtc.dot', 'w') as out_file:
out_file = tree.export_graphviz(decision_tree_classifier, out_file=out_file)
###Output
_____no_output_____
###Markdown
(This classifier may look familiar from earlier in the notebook.)Alright! We finally have our demo classifier. Let's create some visuals of its performance so we have something to show our company's Head of Data.
###Code
decision_tree_classifier
dt_scores = cross_val_score(decision_tree_classifier, all_inputs, all_labels, cv=10)
sb.boxplot(dt_scores)
sb.stripplot(dt_scores, jitter=True, color='black')
;
###Output
_____no_output_____
###Markdown
Hmmm... that's a little boring by itself though. How about we compare another classifier to see how they perform?We already know from previous projects that Random Forest classifiers usually work better than individual decision trees. A common problem that decision trees face is that they're prone to overfitting: They complexify to the point that they classify the training set near-perfectly, but fail to generalize to data they have not seen before.**Random Forest classifiers** work around that limitation by creating a whole bunch of decision trees (hence "forest") — each trained on random subsets of training samples (drawn with replacement) and features (drawn without replacement) — and have the decision trees work together to make a more accurate classification.Let that be a lesson for us: **Even in Machine Learning, we get better results when we work together!**Let's see if a Random Forest classifier works better here.The great part about scikit-learn is that the training, testing, parameter tuning, etc. process is the same for all models, so we only need to plug in the new classifier.
###Code
from sklearn.ensemble import RandomForestClassifier
random_forest_classifier = RandomForestClassifier()
parameter_grid = {'n_estimators': [10, 25, 50, 100],
'criterion': ['gini', 'entropy'],
'max_features': [1, 2, 3, 4]}
cross_validation = StratifiedKFold(n_splits=10)
grid_search = GridSearchCV(random_forest_classifier,
param_grid=parameter_grid,
cv=cross_validation)
grid_search.fit(all_inputs, all_labels)
print('Best score: {}'.format(grid_search.best_score_))
print('Best parameters: {}'.format(grid_search.best_params_))
grid_search.best_estimator_
###Output
Best score: 0.9664429530201343
Best parameters: {'criterion': 'gini', 'max_features': 1, 'n_estimators': 50}
###Markdown
Now we can compare their performance:
###Code
random_forest_classifier = grid_search.best_estimator_
rf_df = pd.DataFrame({'accuracy': cross_val_score(random_forest_classifier, all_inputs, all_labels, cv=10),
'classifier': ['Random Forest'] * 10})
dt_df = pd.DataFrame({'accuracy': cross_val_score(decision_tree_classifier, all_inputs, all_labels, cv=10),
'classifier': ['Decision Tree'] * 10})
both_df = rf_df.append(dt_df)
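# note: DataFrame.append was removed in pandas 2.0; pd.concat([rf_df, dt_df]) is the equivalent in newer versions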
both_df.head()
both_df
sb.boxplot(x='classifier', y='accuracy', data=both_df)
sb.stripplot(x='classifier', y='accuracy', data=both_df, jitter=True, color='black')
;
###Output
_____no_output_____
###Markdown
How about that? They both seem to perform about the same on this data set. This is probably because of the limitations of our data set: We have only 4 features to make the classification, and Random Forest classifiers excel when there's hundreds of possible features to look at. In other words, there wasn't much room for improvement with this data set. Step 6: Reproducibility[[ go back to the top ]](Table-of-contents)Ensuring that our work is reproducible is the last and — arguably — most important step in any analysis. **As a rule, we shouldn't place much weight on a discovery that can't be reproduced**. As such, if our analysis isn't reproducible, we might as well not have done it.Notebooks like this one go a long way toward making our work reproducible. Since we documented every step as we moved along, we have a written record of what we did and why we did it — both in text and code.Beyond recording what we did, we should also document what software and hardware we used to perform our analysis. This typically goes at the top of our notebooks so our readers know what tools to use.[Sebastian Raschka](http://sebastianraschka.com/) created a handy [notebook tool](https://github.com/rasbt/watermark) for this:
###Code
!pip install watermark
%load_ext watermark
myversions = pd.show_versions()
myversions
%watermark -a 'RCS_12' -nmv --packages numpy,pandas,sklearn,matplotlib,seaborn
###Output
RCS_12 Sat Dec 14 2019
CPython 3.7.3
IPython 7.4.0
numpy 1.16.2
pandas 0.24.2
sklearn 0.20.3
matplotlib 3.0.3
seaborn 0.9.0
compiler : MSC v.1915 64 bit (AMD64)
system : Windows
release : 10
machine : AMD64
processor : Intel64 Family 6 Model 158 Stepping 10, GenuineIntel
CPU cores : 12
interpreter: 64bit
###Markdown
Finally, let's extract the core of our work from Steps 1-5 and turn it into a single pipeline.
###Code
%matplotlib inline
import pandas as pd
import seaborn as sb
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, cross_val_score
# We can jump directly to working with the clean data because we saved our cleaned data set
iris_data_clean = pd.read_csv('../data/iris-data-clean.csv')
# Testing our data: Our analysis will stop here if any of these assertions are wrong
# We know that we should only have three classes
assert len(iris_data_clean['class'].unique()) == 3
# We know that sepal lengths for 'Iris-versicolor' should never be below 2.5 cm
assert iris_data_clean.loc[iris_data_clean['class'] == 'Iris-versicolor', 'sepal_length_cm'].min() >= 2.5
# We know that our data set should have no missing measurements
assert len(iris_data_clean.loc[(iris_data_clean['sepal_length_cm'].isnull()) |
(iris_data_clean['sepal_width_cm'].isnull()) |
(iris_data_clean['petal_length_cm'].isnull()) |
(iris_data_clean['petal_width_cm'].isnull())]) == 0
# get inputs and labels as NumPy arrays (out of the pandas DataFrame)
all_inputs = iris_data_clean[['sepal_length_cm', 'sepal_width_cm',
'petal_length_cm', 'petal_width_cm']].values
all_labels = iris_data_clean['class'].values
# This is the classifier that came out of Grid Search
random_forest_classifier = RandomForestClassifier(criterion='gini', max_features=3, n_estimators=50)
# All that's left to do now is plot the cross-validation scores
rf_classifier_scores = cross_val_score(random_forest_classifier, all_inputs, all_labels, cv=10)
sb.boxplot(rf_classifier_scores)
sb.stripplot(rf_classifier_scores, jitter=True, color='black')
# ...and show some of the predictions from the classifier
(training_inputs,
testing_inputs,
training_classes,
testing_classes) = train_test_split(all_inputs, all_labels, test_size=0.25)
random_forest_classifier.fit(training_inputs, training_classes)
for input_features, prediction, actual in zip(testing_inputs[:10],
random_forest_classifier.predict(testing_inputs[:10]),
testing_classes[:10]):
print('{}\t-->\t{}\t(Actual: {})'.format(input_features, prediction, actual))
len(testing_inputs)
for input_features, prediction, actual in zip(testing_inputs,
random_forest_classifier.predict(testing_inputs),
testing_classes):
if (prediction == actual):
print('{}\t-->\t{}\t(Actual: {})'.format(input_features, prediction, actual))
else:
print('!!!!!MISMATCH***{}\t-->\t{}\t(Actual: {})'.format(input_features, prediction, actual))
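# findMismatches is defined in a cell a few lines below; in the notebook session it was presumably executed before this call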
mismatches = findMismatches(all_inputs, all_labels, random_forest_classifier)
mismatches
random_forest_classifier.score(all_inputs, all_labels)
def findMismatches(inputs, answers, classifier):
mismatches = []
predictions = classifier.predict(inputs)
for X, answer, prediction in zip(inputs, answers, predictions):
if answer != prediction:
mismatches.append([X,answer, prediction])
return mismatches
numbers = [1,2,5,6,6,6]
for number in numbers:
print(number)
146/149
%matplotlib inline
import pandas as pd
import seaborn as sb
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, cross_val_score
def processData(filename):
# We can jump directly to working with the clean data because we saved our cleaned data set
iris_data_clean = pd.read_csv(filename)
# Testing our data: Our analysis will stop here if any of these assertions are wrong
# We know that we should only have three classes
assert len(iris_data_clean['class'].unique()) == 3
# We know that sepal lengths for 'Iris-versicolor' should never be below 2.5 cm
assert iris_data_clean.loc[iris_data_clean['class'] == 'Iris-versicolor', 'sepal_length_cm'].min() >= 2.5
# We know that our data set should have no missing measurements
assert len(iris_data_clean.loc[(iris_data_clean['sepal_length_cm'].isnull()) |
(iris_data_clean['sepal_width_cm'].isnull()) |
(iris_data_clean['petal_length_cm'].isnull()) |
(iris_data_clean['petal_width_cm'].isnull())]) == 0
all_inputs = iris_data_clean[['sepal_length_cm', 'sepal_width_cm',
'petal_length_cm', 'petal_width_cm']].values
all_labels = iris_data_clean['class'].values
# This is the classifier that came out of Grid Search
random_forest_classifier = RandomForestClassifier(criterion='gini', max_features=3, n_estimators=50)
# All that's left to do now is plot the cross-validation scores
rf_classifier_scores = cross_val_score(random_forest_classifier, all_inputs, all_labels, cv=10)
sb.boxplot(rf_classifier_scores)
sb.stripplot(rf_classifier_scores, jitter=True, color='black')
# ...and show some of the predictions from the classifier
(training_inputs,
testing_inputs,
training_classes,
testing_classes) = train_test_split(all_inputs, all_labels, test_size=0.25)
random_forest_classifier.fit(training_inputs, training_classes)
for input_features, prediction, actual in zip(testing_inputs[:10],
random_forest_classifier.predict(testing_inputs[:10]),
testing_classes[:10]):
print('{}\t-->\t{}\t(Actual: {})'.format(input_features, prediction, actual))
return rf_classifier_scores
myscores = processData('../data/iris-data-clean.csv')
myscores
###Output
_____no_output_____
###Markdown
Introductory Data Analysis Workflow https://xkcd.com/2054 An example machine learning notebook* Original Notebook by [Randal S. Olson](http://www.randalolson.com/)* Supported by [Jason H. Moore](http://www.epistasis.org/)* [University of Pennsylvania Institute for Bioinformatics](http://upibi.org/)* Adapted for LU Py-Sem 2018 by [Valdis Saulespurens]([email protected]) **You can also [execute the code in this notebook on Binder](https://mybinder.org/v2/gh/ValRCS/RigaComm_DataAnalysis/master) - no local installation required.**
###Code
# text 17.04.2019
import datetime
print(datetime.datetime.now())
print('hello')
###Output
2019-12-14 10:17:38.473839
hello
###Markdown
Table of contents1. [Introduction](Introduction)2. [License](License)3. [Required libraries](Required-libraries)4. [The problem domain](The-problem-domain)5. [Step 1: Answering the question](Step-1:-Answering-the-question)6. [Step 2: Checking the data](Step-2:-Checking-the-data)7. [Step 3: Tidying the data](Step-3:-Tidying-the-data) - [Bonus: Testing our data](Bonus:-Testing-our-data)8. [Step 4: Exploratory analysis](Step-4:-Exploratory-analysis)9. [Step 5: Classification](Step-5:-Classification) - [Cross-validation](Cross-validation) - [Parameter tuning](Parameter-tuning)10. [Step 6: Reproducibility](Step-6:-Reproducibility)11. [Conclusions](Conclusions)12. [Further reading](Further-reading)13. [Acknowledgements](Acknowledgements) Introduction[[ go back to the top ]](Table-of-contents)In the time it took you to read this sentence, terabytes of data have been collectively generated across the world — more data than any of us could ever hope to process, much less make sense of, on the machines we're using to read this notebook.In response to this massive influx of data, the field of Data Science has come to the forefront in the past decade. Cobbled together by people from a diverse array of fields — statistics, physics, computer science, design, and many more — the field of Data Science represents our collective desire to understand and harness the abundance of data around us to build a better world.In this notebook, I'm going to go over a basic Python data analysis pipeline from start to finish to show you what a typical data science workflow looks like.In addition to providing code examples, I also hope to imbue in you a sense of good practices so you can be a more effective — and more collaborative — data scientist.I will be following along with the data analysis checklist from [The Elements of Data Analytic Style](https://leanpub.com/datastyle), which I strongly recommend reading as a free and quick guidebook to performing outstanding data analysis.**This notebook is intended to be a public resource. As such, if you see any glaring inaccuracies or if a critical topic is missing, please feel free to point it out or (preferably) submit a pull request to improve the notebook.** License[[ go back to the top ]](Table-of-contents)Please see the [repository README file](https://github.com/rhiever/Data-Analysis-and-Machine-Learning-Projectslicense) for the licenses and usage terms for the instructional material and code in this notebook. In general, I have licensed this material so that it is as widely usable and shareable as possible. Required libraries[[ go back to the top ]](Table-of-contents)If you don't have Python on your computer, you can use the [Anaconda Python distribution](http://continuum.io/downloads) to install most of the Python packages you need. Anaconda provides a simple double-click installer for your convenience.This notebook uses several Python packages that come standard with the Anaconda Python distribution. 
The primary libraries that we'll be using are:* **NumPy**: Provides a fast numerical array structure and helper functions.* **pandas**: Provides a DataFrame structure to store data in memory and work with it easily and efficiently.* **scikit-learn**: The essential Machine Learning package in Python.* **matplotlib**: Basic plotting library in Python; most other Python plotting libraries are built on top of it.* **Seaborn**: Advanced statistical plotting library.* **watermark**: A Jupyter Notebook extension for printing timestamps, version numbers, and hardware information.**Note:** I will not be providing support for people trying to run this notebook outside of the Anaconda Python distribution. The problem domain[[ go back to the top ]](Table-of-contents)For the purposes of this exercise, let's pretend we're working for a startup that just got funded to create a smartphone app that automatically identifies species of flowers from pictures taken on the smartphone. We're working with a moderately-sized team of data scientists and will be building part of the data analysis pipeline for this app.We've been tasked by our company's Head of Data Science to create a demo machine learning model that takes four measurements from the flowers (sepal length, sepal width, petal length, and petal width) and identifies the species based on those measurements alone.We've been given a [data set](https://github.com/ValRCS/RCS_Data_Analysis_Python/blob/master/data/iris-data.csv) from our field researchers to develop the demo, which only includes measurements for three types of *Iris* flowers: *Iris setosa* *Iris versicolor* *Iris virginica*The four measurements we're using currently come from hand-measurements by the field researchers, but they will be automatically measured by an image processing model in the future.**Note:** The data set we're working with is the famous [*Iris* data set](https://archive.ics.uci.edu/ml/datasets/Iris) — included with this notebook — which I have modified slightly for demonstration purposes. Step 1: Answering the question[[ go back to the top ]](Table-of-contents)The first step to any data analysis project is to define the question or problem we're looking to solve, and to define a measure (or set of measures) for our success at solving that task. The data analysis checklist has us answer a handful of questions to accomplish that, so let's work through those questions.>Did you specify the type of data analytic question (e.g. exploration, association causality) before touching the data?We're trying to classify the species (i.e., class) of the flower based on four measurements that we're provided: sepal length, sepal width, petal length, and petal width.Petal - ziedlapiņa, sepal - arī ziedlapiņa>Did you define the metric for success before beginning?Let's do that now. Since we're performing classification, we can use [accuracy](https://en.wikipedia.org/wiki/Accuracy_and_precision) — the fraction of correctly classified flowers — to quantify how well our model is performing. Our company's Head of Data has told us that we should achieve at least 90% accuracy.>Did you understand the context for the question and the scientific or business application?We're building part of a data analysis pipeline for a smartphone app that will be able to classify the species of flowers from pictures taken on the smartphone. 
In the future, this pipeline will be connected to another pipeline that automatically measures from pictures the traits we're using to perform this classification.>Did you record the experimental design?Our company's Head of Data has told us that the field researchers are hand-measuring 50 randomly-sampled flowers of each species using a standardized methodology. The field researchers take pictures of each flower they sample from pre-defined angles so the measurements and species can be confirmed by the other field researchers at a later point. At the end of each day, the data is compiled and stored on a private company GitHub repository.>Did you consider whether the question could be answered with the available data?The data set we currently have is only for three types of *Iris* flowers. The model built off of this data set will only work for those *Iris* flowers, so we will need more data to create a general flower classifier.Notice that we've spent a fair amount of time working on the problem without writing a line of code or even looking at the data.**Thinking about and documenting the problem we're working on is an important step to performing effective data analysis that often goes overlooked.** Don't skip it. Step 2: Checking the data[[ go back to the top ]](Table-of-contents)The next step is to look at the data we're working with. Even curated data sets from the government can have errors in them, and it's vital that we spot these errors before investing too much time in our analysis.Generally, we're looking to answer the following questions:* Is there anything wrong with the data?* Are there any quirks with the data?* Do I need to fix or remove any of the data?Let's start by reading the data into a pandas DataFrame.
###Code
import pandas as pd
iris_data = pd.read_csv('../data/iris-data.csv')
# let's take a look at the first 5 rows
iris_data.head()
iris_data.tail()
# Resources for loading data from nonlocal sources
# Pandas Can generally handle most common formats
# https://pandas.pydata.org/pandas-docs/stable/io.html
# SQL https://stackoverflow.com/questions/39149243/how-do-i-connect-to-a-sql-server-database-with-python
# NoSQL MongoDB https://realpython.com/introduction-to-mongodb-and-python/
# Apache Hadoop: https://dzone.com/articles/how-to-get-hadoop-data-into-a-python-model
# Apache Spark: https://www.datacamp.com/community/tutorials/apache-spark-python
# Data Scraping / Crawling libraries : https://elitedatascience.com/python-web-scraping-libraries Big Topic in itself
# Most data resources have some form of Python API / Library
iris_data.head()
###Output
_____no_output_____
###Markdown
We're in luck! The data seems to be in a usable format.The first row in the data file defines the column headers, and the headers are descriptive enough for us to understand what each column represents. The headers even give us the units that the measurements were recorded in, just in case we needed to know at a later point in the project.Each row following the first row represents an entry for a flower: four measurements and one class, which tells us the species of the flower.**One of the first things we should look for is missing data.** Thankfully, the field researchers already told us that they put a 'NA' into the spreadsheet when they were missing a measurement.We can tell pandas to automatically identify missing values if it knows our missing value marker.
###Code
iris_data.shape
iris_data.info()
iris_data.describe()
# with na_values we can pass what cells to mark as na
iris_data = pd.read_csv('../data/iris-data.csv', na_values=['NA', 'N/A'])
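# quick, optional check after re-reading with na_values: count missing values per column
# (plain pandas, shown only to confirm that the 'NA' cells were picked up)
iris_data.isnull().sum()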
###Output
_____no_output_____
###Markdown
Voilà! Now pandas knows to treat rows with 'NA' as missing values. Next, it's always a good idea to look at the distribution of our data — especially the outliers.Let's start by printing out some summary statistics about the data set.
###Code
iris_data.describe()
###Output
_____no_output_____
###Markdown
We can see several useful values from this table. For example, we see that five `petal_width_cm` entries are missing.If you ask me, though, tables like this are rarely useful unless we know that our data should fall in a particular range. It's usually better to visualize the data in some way. Visualization makes outliers and errors immediately stand out, whereas they might go unnoticed in a large table of numbers.Since we know we're going to be plotting in this section, let's set up the notebook so we can plot inside of it.
###Code
# This line tells the notebook to show plots inside of the notebook
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sb
###Output
_____no_output_____
###Markdown
Next, let's create a **scatterplot matrix**. Scatterplot matrices plot the distribution of each column along the diagonal, and then plot a scatterplot matrix for the combination of each variable. They make for an efficient tool to look for errors in our data.We can even have the plotting package color each entry by its class to look for trends within the classes.
###Code
sb.pairplot(iris_data, hue='class')
# We have to temporarily drop the rows with 'NA' values
# because the Seaborn plotting function does not know
# what to do with them
sb.pairplot(iris_data.dropna(), hue='class')
###Output
_____no_output_____
###Markdown
From the scatterplot matrix, we can already see some issues with the data set:1. There are five classes when there should only be three, meaning there were some coding errors.2. There are some clear outliers in the measurements that may be erroneous: one `sepal_width_cm` entry for `Iris-setosa` falls well outside its normal range, and several `sepal_length_cm` entries for `Iris-versicolor` are near-zero for some reason.3. We had to drop those rows with missing values.In all of these cases, we need to figure out what to do with the erroneous data. Which takes us to the next step... Step 3: Tidying the data GIGO principle[[ go back to the top ]](Table-of-contents)Now that we've identified several errors in the data set, we need to fix them before we proceed with the analysis.Let's walk through the issues one-by-one.>There are five classes when there should only be three, meaning there were some coding errors.After talking with the field researchers, it sounds like one of them forgot to add `Iris-` before their `Iris-versicolor` entries. The other extraneous class, `Iris-setossa`, was simply a typo that they forgot to fix.Let's use the DataFrame to fix these errors.
###Code
iris_data['class'].unique()
len(iris_data['class'].unique())
# Copy and Replace
# in df.loc[rows, then columns]
iris_data.loc[iris_data['class'] == 'versicolor', 'class'] = 'Iris-versicolor'
iris_data['class'].unique()
# So we take the rows where a specific column ('class' here) matches our bad value
# and change it to the good value
iris_data.loc[iris_data['class'] == 'Iris-setossa', 'class'] = 'Iris-setosa'
iris_data['class'].unique()
iris_data.tail()
iris_data[98:103]
iris_data['class'].unique()
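# equivalent sketch: both label fixes could also be done with a single replace() call
# (shown for reference only -- the .loc assignments above already fixed the data, so this is a no-op)
iris_data['class'] = iris_data['class'].replace({'versicolor': 'Iris-versicolor',
                                                 'Iris-setossa': 'Iris-setosa'})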
###Output
_____no_output_____
###Markdown
Much better! Now we only have three class types. Imagine how embarrassing it would've been to create a model that used the wrong classes.>There are some clear outliers in the measurements that may be erroneous: one `sepal_width_cm` entry for `Iris-setosa` falls well outside its normal range, and several `sepal_length_cm` entries for `Iris-versicolor` are near-zero for some reason.Fixing outliers can be tricky business. It's rarely clear whether the outlier was caused by measurement error, recording the data in improper units, or if the outlier is a real anomaly. For that reason, we should be judicious when working with outliers: if we decide to exclude any data, we need to make sure to document what data we excluded and provide solid reasoning for excluding that data. (i.e., "This data didn't fit my hypothesis" will not stand peer review.)In the case of the one anomalous entry for `Iris-setosa`, let's say our field researchers know that it's impossible for `Iris-setosa` to have a sepal width below 2.5 cm. Clearly this entry was made in error, and we're better off just scrapping the entry than spending hours finding out what happened.
###Code
# here we see all flowers with sepal_width_cm under 2.5 cm
iris_data.loc[(iris_data['sepal_width_cm'] < 2.5)]
# for multiple filters we use & for AND, and | for OR
smallpetals = iris_data.loc[(iris_data['sepal_width_cm'] < 2.5) & (iris_data['class'] == 'Iris-setosa')]
smallpetals
iris_data.loc[iris_data['class'] == 'Iris-setosa', 'sepal_width_cm'].hist()
len(iris_data)
# This line drops any 'Iris-setosa' rows with a sepal width less than 2.5 cm
# Let's go over this command in class
iris_data = iris_data.loc[(iris_data['class'] != 'Iris-setosa') | (iris_data['sepal_width_cm'] >= 2.5)]
iris_data.loc[iris_data['class'] == 'Iris-setosa', 'sepal_width_cm'].hist()
len(iris_data)
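# optional sketch: a generic IQR rule of thumb for flagging outliers in one column
# (illustrative only -- above we relied on domain knowledge, which is usually better)
q1 = iris_data['sepal_width_cm'].quantile(0.25)
q3 = iris_data['sepal_width_cm'].quantile(0.75)
iqr = q3 - q1
iris_data.loc[(iris_data['sepal_width_cm'] < q1 - 1.5 * iqr) |
              (iris_data['sepal_width_cm'] > q3 + 1.5 * iqr)]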
###Output
_____no_output_____
###Markdown
Excellent! Now all of our `Iris-setosa` rows have a sepal width greater than 2.5.The next data issue to address is the several near-zero sepal lengths for the `Iris-versicolor` rows. Let's take a look at those rows.
###Code
iris_data.loc[(iris_data['class'] == 'Iris-versicolor') &
(iris_data['sepal_length_cm'] < 1.0)]
###Output
_____no_output_____
###Markdown
How about that? All of these near-zero `sepal_length_cm` entries seem to be off by two orders of magnitude, as if they had been recorded in meters instead of centimeters.After some brief correspondence with the field researchers, we find that one of them forgot to convert those measurements to centimeters. Let's do that for them.
###Code
iris_data.loc[iris_data['class'] == 'Iris-versicolor', 'sepal_length_cm'].hist()
iris_data['sepal_length_cm'].hist()
# we double-check that our filter works before changing anything
iris_data.loc[(iris_data['class'] == 'Iris-versicolor') &
(iris_data['sepal_length_cm'] < 1.0)].head()
iris_data.loc[(iris_data['class'] == 'Iris-versicolor') &
(iris_data['sepal_length_cm'] < 1.0)]
# Here we fix the wrong units
iris_data.loc[(iris_data['class'] == 'Iris-versicolor') &
(iris_data['sepal_length_cm'] < 1.0),
'sepal_length_cm'] *= 100.0
iris_data.loc[iris_data['class'] == 'Iris-versicolor', 'sepal_length_cm'].hist()
;
iris_data['sepal_length_cm'].hist()
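# sanity check after the unit fix: no sepal length should remain below 1 cm
assert (iris_data['sepal_length_cm'].dropna() >= 1.0).all()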
###Output
_____no_output_____
###Markdown
Phew! Good thing we fixed those outliers. They could've really thrown our analysis off.>We had to drop those rows with missing values.Let's take a look at the rows with missing values:
###Code
iris_data.notnull()
iris_data.loc[(iris_data['sepal_length_cm'].isnull()) |
(iris_data['sepal_width_cm'].isnull()) |
(iris_data['petal_length_cm'].isnull()) |
(iris_data['petal_width_cm'].isnull())]
###Output
_____no_output_____
###Markdown
It's not ideal that we had to drop those rows, especially considering they're all `Iris-setosa` entries. Since it seems like the missing data is systematic — all of the missing values are in the same column for the same *Iris* type — this error could potentially bias our analysis.One way to deal with missing data is **mean imputation**: If we know that the values for a measurement fall in a certain range, we can fill in empty values with the average of that measurement.Let's see if we can do that here.
###Code
iris_data.loc[iris_data['class'] == 'Iris-setosa', 'petal_width_cm'].hist()
###Output
_____no_output_____
###Markdown
Most of the petal widths for `Iris-setosa` fall within the 0.2-0.3 range, so let's fill in these entries with the average measured petal width.
###Code
iris_setosa_avg = iris_data.loc[iris_data['class'] == 'Iris-setosa', 'petal_width_cm'].mean()
iris_setosa_avg
type(iris_setosa_avg)
round(iris_setosa_avg, 2)
# for our purposes 4-digit accuracy is more than sufficient: the raw measurements
# are only recorded to one decimal place (0.1 cm)
iris_setosa_avg = round(iris_setosa_avg, 4)
average_petal_width = iris_data.loc[iris_data['class'] == 'Iris-setosa', 'petal_width_cm'].mean()
print(average_petal_width)
average_petal_width = iris_setosa_avg
# we find iris-setosa rows where petal_width_cm is missing
iris_data.loc[(iris_data['class'] == 'Iris-setosa') &
(iris_data['petal_width_cm'].isnull()),
'petal_width_cm'] = average_petal_width
# we verify the imputation: find all Iris-setosa rows whose petal_width_cm equals the average
iris_data.loc[(iris_data['class'] == 'Iris-setosa') &
(iris_data['petal_width_cm'] == average_petal_width)]
iris_data.loc[(iris_data['sepal_length_cm'].isnull()) |
(iris_data['sepal_width_cm'].isnull()) |
(iris_data['petal_length_cm'].isnull()) |
(iris_data['petal_width_cm'].isnull())]
# if we preferred to drop the rows with missing data instead of imputing,
# we could save the result into a new dataframe
dfwithoutmissingvalues = iris_data.dropna()
len(dfwithoutmissingvalues)
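# equivalent sketch of the imputation above using fillna() (a no-op at this point,
# since the missing petal widths were already filled with average_petal_width)
setosa_rows = iris_data['class'] == 'Iris-setosa'
iris_data.loc[setosa_rows, 'petal_width_cm'] = \
    iris_data.loc[setosa_rows, 'petal_width_cm'].fillna(average_petal_width)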
###Output
_____no_output_____
###Markdown
Great! Now we've recovered those rows and no longer have missing data in our data set.**Note:** If you don't feel comfortable imputing your data, you can drop all rows with missing data with the `dropna()` call: iris_data.dropna(inplace=True)After all this hard work, we don't want to repeat this process every time we work with the data set. Let's save the tidied data file *as a separate file* and work directly with that data file from now on.
###Code
import json
iris_data.to_json('../data/iris-clean.json')
# pandas' to_json does not pretty-print, so we format (indent) the JSON ourselves
df_json_pretty = json.dumps(json.loads(iris_data.to_json()), indent=4)
type(df_json_pretty)
df_json_pretty[:100]
with open('data.json', 'w', encoding='utf-8') as f:
f.write(df_json_pretty)
iris_data.to_csv('../data/iris-data-clean.csv', index=False)
# for saving in the same folder
iris_data.to_csv('iris-data-clean.csv', index=False)
iris_data_clean = pd.read_csv('../data/iris-data-clean.csv')
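# quick round-trip check: the reloaded clean data should have the same shape as what we saved
assert iris_data_clean.shape == iris_data.shape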
###Output
_____no_output_____
###Markdown
Now, let's take a look at the scatterplot matrix now that we've tidied the data.
###Code
myplot = sb.pairplot(iris_data_clean, hue='class')
myplot.savefig('irises.png')
import scipy.stats as stats
# note: this re-reads the RAW csv, so iris_data again holds the uncleaned data
iris_data = pd.read_csv('../data/iris-data.csv')
iris_data.columns.unique()
stats.entropy(iris_data_clean['sepal_length_cm'])
iris_data.columns[:-1]
# we go through the list of column names except the last one (the class label)
# and compute the entropy of the data (without missing values) in each column
for col in iris_data.columns[:-1]:
print("Entropy for: ", col, stats.entropy(iris_data[col].dropna()))
###Output
Entropy for: sepal_length_cm 4.96909746125432
Entropy for: sepal_width_cm 5.000701325982732
Entropy for: petal_length_cm 4.888113822938816
Entropy for: petal_width_cm 4.754264731532864
###Markdown
Of course, I purposely inserted numerous errors into this data set to demonstrate some of the many possible scenarios you may face while tidying your data.The general takeaways here should be:* Make sure your data is encoded properly* Make sure your data falls within the expected range, and use domain knowledge whenever possible to define that expected range* Deal with missing data in one way or another: replace it if you can or drop it* Never tidy your data manually because that is not easily reproducible* Use code as a record of how you tidied your data* Plot everything you can about the data at this stage of the analysis so you can *visually* confirm everything looks correct Bonus: Testing our data[[ go back to the top ]](Table-of-contents)At SciPy 2015, I was exposed to a great idea: We should test our data. Just how we use unit tests to verify our expectations from code, we can similarly set up unit tests to verify our expectations about a data set.We can quickly test our data using `assert` statements: We assert that something must be true, and if it is, then nothing happens and the notebook continues running. However, if our assertion is wrong, then the notebook stops running and brings it to our attention. For example,```Pythonassert 1 == 2```will raise an `AssertionError` and stop execution of the notebook because the assertion failed.Let's test a few things that we know about our data set now.
###Code
# this assertion fails on purpose, to show how a failed assert stops the notebook
assert 1 == 3
# We know that we should only have three classes
assert len(iris_data_clean['class'].unique()) == 3
# the same assertion on the raw iris_data fails -- the raw file still contains five class labels
assert len(iris_data['class'].unique()) == 3
# We know that sepal lengths for 'Iris-versicolor' should never be below 2.5 cm
assert iris_data_clean.loc[iris_data_clean['class'] == 'Iris-versicolor', 'sepal_length_cm'].min() >= 2.5
# We know that our data set should have no missing measurements
assert len(iris_data_clean.loc[(iris_data_clean['sepal_length_cm'].isnull()) |
(iris_data_clean['sepal_width_cm'].isnull()) |
(iris_data_clean['petal_length_cm'].isnull()) |
(iris_data_clean['petal_width_cm'].isnull())]) == 0
# the same check on the raw iris_data, however, fails -- the raw file does have missing measurements
assert len(iris_data.loc[(iris_data['sepal_length_cm'].isnull()) |
(iris_data['sepal_width_cm'].isnull()) |
(iris_data['petal_length_cm'].isnull()) |
(iris_data['petal_width_cm'].isnull())]) == 0
###Output
_____no_output_____
###Markdown
And so on. If any of these expectations are violated, then our analysis immediately stops and we have to return to the tidying stage. Data Cleanup & Wrangling > 80% time spent in Data Science Step 4: Exploratory analysis[[ go back to the top ]](Table-of-contents)Now after spending entirely too much time tidying our data, we can start analyzing it!Exploratory analysis is the step where we start delving deeper into the data set beyond the outliers and errors. We'll be looking to answer questions such as:* How is my data distributed?* Are there any correlations in my data?* Are there any confounding factors that explain these correlations?This is the stage where we plot all the data in as many ways as possible. Create many charts, but don't bother making them pretty — these charts are for internal use.Let's return to that scatterplot matrix that we used earlier.
###Code
sb.pairplot(iris_data_clean)
;
###Output
_____no_output_____
###Markdown
Our data is normally distributed for the most part, which is great news if we plan on using any modeling methods that assume the data is normally distributed.There's something strange going on with the petal measurements. Maybe it's something to do with the different `Iris` types. Let's color code the data by the class again to see if that clears things up.
###Code
sb.pairplot(iris_data_clean, hue='class')
;
###Output
_____no_output_____
###Markdown
Sure enough, the strange distribution of the petal measurements exist because of the different species. This is actually great news for our classification task since it means that the petal measurements will make it easy to distinguish between `Iris-setosa` and the other `Iris` types.Distinguishing `Iris-versicolor` and `Iris-virginica` will prove more difficult given how much their measurements overlap.There are also correlations between petal length and petal width, as well as sepal length and sepal width. The field biologists assure us that this is to be expected: Longer flower petals also tend to be wider, and the same applies for sepals.We can also make [**violin plots**](https://en.wikipedia.org/wiki/Violin_plot) of the data to compare the measurement distributions of the classes. Violin plots contain the same information as [box plots](https://en.wikipedia.org/wiki/Box_plot), but also scales the box according to the density of the data.
###Code
plt.figure(figsize=(10, 10))
for column_index, column in enumerate(iris_data_clean.columns):
if column == 'class':
continue
plt.subplot(2, 2, column_index + 1)
sb.violinplot(x='class', y=column, data=iris_data_clean)
###Output
_____no_output_____
###Markdown
Enough flirting with the data. Let's get to modeling. Step 5: Classification[[ go back to the top ]](Table-of-contents)Wow, all this work and we *still* haven't modeled the data!As tiresome as it can be, tidying and exploring our data is a vital component to any data analysis. If we had jumped straight to the modeling step, we would have created a faulty classification model.Remember: **Bad data leads to bad models.** Always check your data first.Assured that our data is now as clean as we can make it — and armed with some cursory knowledge of the distributions and relationships in our data set — it's time to make the next big step in our analysis: Splitting the data into training and testing sets.A **training set** is a random subset of the data that we use to train our models.A **testing set** is a random subset of the data (mutually exclusive from the training set) that we use to validate our models on unforseen data.Especially in sparse data sets like ours, it's easy for models to **overfit** the data: The model will learn the training set so well that it won't be able to handle most of the cases it's never seen before. This is why it's important for us to build the model with the training set, but score it with the testing set.Note that once we split the data into a training and testing set, we should treat the testing set like it no longer exists: We cannot use any information from the testing set to build our model or else we're cheating.Let's set up our data first.
###Code
# iris_data_clean = pd.read_csv('../data/iris-data-clean.csv')
# We're using all four measurements as inputs
# Note that scikit-learn expects each entry to be a list of values, e.g.,
# [ [val1, val2, val3],
# [val1, val2, val3],
# ... ]
# such that our input data set is represented as a list of lists
# We can extract the data in this format from pandas like this:
# usually called X
all_inputs = iris_data_clean[['sepal_length_cm', 'sepal_width_cm',
'petal_length_cm', 'petal_width_cm']].values
# Similarly, we can extract the class labels
# answers/label often called little y
all_labels = iris_data_clean['class'].values
# Make sure that you don't mix up the order of the entries
# all_inputs[5] inputs should correspond to the class in all_labels[5]
# Here's what a subset of our inputs looks like:
all_inputs[:5]
type(all_inputs)
all_labels[:5]
type(all_labels)
###Output
_____no_output_____
###Markdown
Now our data is ready to be split.
###Code
all_inputs[:3]
iris_data_clean.head(3)
all_labels[:3]
from sklearn.model_selection import train_test_split
# Here we split our data into training and testing data
# you can read more on split function at
# https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html
(training_inputs,
testing_inputs,
training_classes,
testing_classes) = train_test_split(all_inputs, all_labels, test_size=0.25, random_state=1)
len(all_inputs)
len(training_inputs)
0.75*149
149*0.25
len(testing_inputs)
training_inputs[:5]
testing_inputs[:5]
testing_classes[:5]
training_classes[:5]
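# hedged variant: train_test_split also accepts a stratify argument, which keeps the
# class proportions identical in the training and testing sets (sketch only, stored
# in separate variables so it does not interfere with the split used above)
(training_inputs_s, testing_inputs_s,
 training_classes_s, testing_classes_s) = train_test_split(
    all_inputs, all_labels, test_size=0.25, random_state=1, stratify=all_labels)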
###Output
_____no_output_____
###Markdown
With our data split, we can start fitting models to our data. Our company's Head of Data is all about decision tree classifiers, so let's start with one of those.Decision tree classifiers are incredibly simple in theory. In their simplest form, decision tree classifiers ask a series of Yes/No questions about the data — each time getting closer to finding out the class of each entry — until they either classify the data set perfectly or simply can't differentiate a set of entries. Think of it like a game of [Twenty Questions](https://en.wikipedia.org/wiki/Twenty_Questions), except the computer is *much*, *much* better at it.Here's an example decision tree classifier:Notice how the classifier asks Yes/No questions about the data — whether a certain feature is <= 1.75, for example — so it can differentiate the records. This is the essence of every decision tree.The nice part about decision tree classifiers is that they are **scale-invariant**, i.e., the scale of the features does not affect their performance, unlike many Machine Learning models. In other words, it doesn't matter if our features range from 0 to 1 or 0 to 1,000; decision tree classifiers will work with them just the same.There are several [parameters](http://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html) that we can tune for decision tree classifiers, but for now let's use a basic decision tree classifier.
###Code
from sklearn.tree import DecisionTreeClassifier
# Create the classifier
decision_tree_classifier = DecisionTreeClassifier()
# Train the classifier on the training set
decision_tree_classifier.fit(training_inputs, training_classes)
# Validate the classifier on the testing set using classification accuracy
decision_tree_classifier.score(testing_inputs, testing_classes)
1-1/38
decision_tree_classifier.score(training_inputs, training_classes)
150*0.25
len(testing_inputs)
# How the accuracy score came about 37 out of 38 correct
37/38
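# a confusion matrix shows which classes get confused, not just the overall accuracy
from sklearn.metrics import confusion_matrix
print(confusion_matrix(testing_classes, decision_tree_classifier.predict(testing_inputs)))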
# let's try a cooler model: SVM - Support Vector Machines
from sklearn import svm
svm_classifier = svm.SVC(gamma = 'scale')
svm_classifier.fit(training_inputs, training_classes)
svm_classifier.score(testing_inputs, testing_classes)
svm_classifier = svm.SVC(gamma = 'scale')
svm_classifier.fit(training_inputs, training_classes)
svm_classifier.score(testing_inputs, testing_classes)
###Output
_____no_output_____
###Markdown
Heck yeah! Our model achieves 97% classification accuracy without much effort.However, there's a catch: Depending on how our training and testing set was sampled, our model can achieve anywhere from 80% to 100% accuracy:
###Code
import matplotlib.pyplot as plt
# here we randomly split the data 1000 times into different training and test sets
model_accuracies = []
for repetition in range(1000):
(training_inputs,
testing_inputs,
training_classes,
testing_classes) = train_test_split(all_inputs, all_labels, test_size=0.25)
# notice that we do not specify a random seed, so each of the 1000 splits is different
decision_tree_classifier = DecisionTreeClassifier()
decision_tree_classifier.fit(training_inputs, training_classes)
classifier_accuracy = decision_tree_classifier.score(testing_inputs, testing_classes)
model_accuracies.append(classifier_accuracy)
plt.hist(model_accuracies)
;
max(model_accuracies)
min(model_accuracies)
1-7/38
from collections import Counter
acc_count = Counter(model_accuracies)
acc_count
1/38
100/38
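# summarize the spread of accuracies across the 1000 random splits
import numpy as np
print('mean: {:.3f}, std: {:.3f}'.format(np.mean(model_accuracies), np.std(model_accuracies)))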
###Output
_____no_output_____
###Markdown
It's obviously a problem that our model performs quite differently depending on the subset of the data it's trained on. This phenomenon is known as **overfitting**: The model is learning to classify the training set so well that it doesn't generalize and perform well on data it hasn't seen before. Cross-validation[[ go back to the top ]](Table-of-contents)This problem is the main reason that most data scientists perform ***k*-fold cross-validation** on their models: Split the original data set into *k* subsets, use one of the subsets as the testing set, and the rest of the subsets are used as the training set. This process is then repeated *k* times such that each subset is used as the testing set exactly once.10-fold cross-validation is the most common choice, so let's use that here. Performing 10-fold cross-validation on our data set looks something like this:(each square is an entry in our data set)
###Code
iris_data_clean.head(15)
iris_data_clean.tail()
# new text
import numpy as np
from sklearn.model_selection import StratifiedKFold
def plot_cv(cv, features, labels):
masks = []
for train, test in cv.split(features, labels):
mask = np.zeros(len(labels), dtype=bool)
mask[test] = 1
masks.append(mask)
plt.figure(figsize=(15, 15))
plt.imshow(masks, interpolation='none', cmap='gray_r')
plt.ylabel('Fold')
plt.xlabel('Row #')
plot_cv(StratifiedKFold(n_splits=10), all_inputs, all_labels)
###Output
_____no_output_____
###Markdown
You'll notice that we used **Stratified *k*-fold cross-validation** in the code above. Stratified *k*-fold keeps the class proportions the same across all of the folds, which is vital for maintaining a representative subset of our data set. (e.g., so we don't have 100% `Iris setosa` entries in one of the folds.)We can perform 10-fold cross-validation on our model with the following code:
###Code
from sklearn.model_selection import cross_val_score
decision_tree_classifier = DecisionTreeClassifier()
# cross_val_score returns a list of the scores, which we can visualize
# to get a reasonable estimate of our classifier's performance
cv_scores = cross_val_score(decision_tree_classifier, all_inputs, all_labels, cv=10)
plt.hist(cv_scores)
plt.title('Average score: {}'.format(np.mean(cv_scores)))
;
cv_scores
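# a common way to report cross-validation results: mean score +/- standard deviation
print('{:.3f} +/- {:.3f}'.format(np.mean(cv_scores), np.std(cv_scores)))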
1-1/15
len(all_inputs.T[1])
import scipy.stats as stats
# https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.entropy.html
# https://en.wikipedia.org/wiki/Entropy_(information_theory)
print("Entropy for: ", stats.entropy(all_inputs.T[1]))
# we go through each feature column of the NumPy array and compute its entropy
# (all_inputs holds only the four measurement columns, so there is no label column to skip)
def printEntropy(npdata):
for i, col in enumerate(npdata.T):
print("Entropy for column:", i, stats.entropy(col))
printEntropy(all_inputs)
###Output
Entropy for column: 0 4.9947332367061925
Entropy for column: 1 4.994187360273029
Entropy for column: 2 4.88306851089088
Entropy for column: 3 4.76945055275522
###Markdown
Now we have a much more consistent rating of our classifier's general classification accuracy. Parameter tuning[[ go back to the top ]](Table-of-contents)Every Machine Learning model comes with a variety of parameters to tune, and these parameters can be vitally important to the performance of our classifier. For example, if we severely limit the depth of our decision tree classifier:
###Code
decision_tree_classifier = DecisionTreeClassifier(max_depth=1)
cv_scores = cross_val_score(decision_tree_classifier, all_inputs, all_labels, cv=10)
plt.hist(cv_scores)
plt.title('Average score: {}'.format(np.mean(cv_scores)))
;
###Output
_____no_output_____
###Markdown
the classification accuracy falls tremendously.Therefore, we need to find a systematic method to discover the best parameters for our model and data set.The most common method for model parameter tuning is **Grid Search**. The idea behind Grid Search is simple: explore a range of parameters and find the best-performing parameter combination. Focus your search on the best range of parameters, then repeat this process several times until the best parameters are discovered.Let's tune our decision tree classifier. We'll stick to only two parameters for now, but it's possible to simultaneously explore dozens of parameters if we want.
###Code
# prepare the parameter grid and the cross-validation splitter for the grid search
from sklearn.model_selection import GridSearchCV
decision_tree_classifier = DecisionTreeClassifier()
# the parameters will depend on the model we use above
parameter_grid = {'max_depth': [1, 2, 3, 4, 5],
'max_features': [1, 2, 3, 4]}
cross_validation = StratifiedKFold(n_splits=10)
grid_search = GridSearchCV(decision_tree_classifier,
param_grid=parameter_grid,
cv=cross_validation)
# here the grid search loops through all parameter combinations and fits the model on cross-validated splits
grid_search.fit(all_inputs, all_labels)
print('Best score: {}'.format(grid_search.best_score_))
print('Best parameters: {}'.format(grid_search.best_params_))
###Output
Best score: 0.959731543624161
Best parameters: {'max_depth': 3, 'max_features': 3}
###Markdown
Now let's visualize the grid search to see how the parameters interact.
###Code
type(grid_search)
grid_search.estimator
grid_search.param_grid
type(grid_search.param_grid)
grid_search.cv
grid_search.cv_results_['mean_test_score']
cv_res = grid_search.cv_results_['mean_test_score']
cv_res.shape
import seaborn as sb
grid_visualization = grid_search.cv_results_['mean_test_score']
grid_visualization.shape = (5, 4)
sb.heatmap(grid_visualization, cmap='Oranges', annot=True)
plt.xticks(np.arange(4) + 0.5, grid_search.param_grid['max_features'])
plt.yticks(np.arange(5) + 0.5, grid_search.param_grid['max_depth'])
plt.xlabel('max_features')
plt.ylabel('max_depth')
plt.savefig("grid_heatmap.png")
;
plt.savefig("empty.jpg")
###Output
_____no_output_____
###Markdown
Now we have a better sense of the parameter space: We know that we need a `max_depth` of at least 2 to allow the decision tree to make more than a one-off decision.`max_features` doesn't really seem to make a big difference here as long as we have 2 of them, which makes sense since our data set has only 4 features and is relatively easy to classify. (Remember, one of our data set's classes was easily separable from the rest based on a single feature.)Let's go ahead and use a broad grid search to find the best settings for a handful of parameters.
###Code
decision_tree_classifier = DecisionTreeClassifier()
parameter_grid = {'criterion': ['gini', 'entropy'],
'splitter': ['best', 'random'],
'max_depth': [1, 2, 3, 4, 5],
'max_features': [1, 2, 3, 4]}
cross_validation = StratifiedKFold(n_splits=10)
grid_search = GridSearchCV(decision_tree_classifier,
param_grid=parameter_grid,
cv=cross_validation)
grid_search.fit(all_inputs, all_labels)
print('Best score: {}'.format(grid_search.best_score_))
print('Best parameters: {}'.format(grid_search.best_params_))
149*grid_search.best_score_
143/149
145/149
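# hedged alternative to the exhaustive grid: RandomizedSearchCV samples a fixed number
# of parameter combinations, which scales better when the grid gets large (sketch only)
from sklearn.model_selection import RandomizedSearchCV
random_search = RandomizedSearchCV(DecisionTreeClassifier(),
                                   param_distributions=parameter_grid,
                                   n_iter=20, cv=cross_validation, random_state=1)
random_search.fit(all_inputs, all_labels)
print(random_search.best_score_, random_search.best_params_)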
###Output
_____no_output_____
###Markdown
Now we can take the best classifier from the Grid Search and use that:
###Code
# we pick the best estimator found by the grid search and save it in a separate variable for now
decision_tree_classifier = grid_search.best_estimator_
decision_tree_classifier
###Output
_____no_output_____
###Markdown
We can even visualize the decision tree with [GraphViz](http://www.graphviz.org/) to see how it's making the classifications:
###Code
import sklearn.tree as tree
# note: StringIO is not actually used below, and sklearn.externals.six
# has been removed from newer versions of scikit-learn
from sklearn.externals.six import StringIO
with open('iris_dtc.dot', 'w') as out_file:
out_file = tree.export_graphviz(decision_tree_classifier, out_file=out_file)
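# rendering the exported .dot file requires the GraphViz binaries to be installed;
# with the 'dot' executable on the PATH, a notebook shell command along these lines
# would produce a PNG (shown as a comment so the notebook still runs without GraphViz):
# !dot -Tpng iris_dtc.dot -o iris_dtc.png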
###Output
_____no_output_____
###Markdown
(This classifier may look familiar from earlier in the notebook.)Alright! We finally have our demo classifier. Let's create some visuals of its performance so we have something to show our company's Head of Data.
###Code
decision_tree_classifier
dt_scores = cross_val_score(decision_tree_classifier, all_inputs, all_labels, cv=10)
sb.boxplot(dt_scores)
sb.stripplot(dt_scores, jitter=True, color='black')
;
###Output
_____no_output_____
###Markdown
Hmmm... that's a little boring by itself though. How about we compare another classifier to see how they perform?We already know from previous projects that Random Forest classifiers usually work better than individual decision trees. A common problem that decision trees face is that they're prone to overfitting: They complexify to the point that they classify the training set near-perfectly, but fail to generalize to data they have not seen before.**Random Forest classifiers** work around that limitation by creating a whole bunch of decision trees (hence "forest") — each trained on random subsets of training samples (drawn with replacement) and features (drawn without replacement) — and have the decision trees work together to make a more accurate classification.Let that be a lesson for us: **Even in Machine Learning, we get better results when we work together!**Let's see if a Random Forest classifier works better here.The great part about scikit-learn is that the training, testing, parameter tuning, etc. process is the same for all models, so we only need to plug in the new classifier.
###Code
from sklearn.ensemble import RandomForestClassifier
random_forest_classifier = RandomForestClassifier()
parameter_grid = {'n_estimators': [10, 25, 50, 100],
'criterion': ['gini', 'entropy'],
'max_features': [1, 2, 3, 4]}
cross_validation = StratifiedKFold(n_splits=10)
grid_search = GridSearchCV(random_forest_classifier,
param_grid=parameter_grid,
cv=cross_validation)
grid_search.fit(all_inputs, all_labels)
print('Best score: {}'.format(grid_search.best_score_))
print('Best parameters: {}'.format(grid_search.best_params_))
grid_search.best_estimator_
###Output
Best score: 0.9664429530201343
Best parameters: {'criterion': 'gini', 'max_features': 1, 'n_estimators': 50}
###Markdown
Now we can compare their performance:
###Code
random_forest_classifier = grid_search.best_estimator_
rf_df = pd.DataFrame({'accuracy': cross_val_score(random_forest_classifier, all_inputs, all_labels, cv=10),
'classifier': ['Random Forest'] * 10})
dt_df = pd.DataFrame({'accuracy': cross_val_score(decision_tree_classifier, all_inputs, all_labels, cv=10),
'classifier': ['Decision Tree'] * 10})
# note: DataFrame.append works in this pandas version but is deprecated in newer ones;
# pd.concat([rf_df, dt_df]) is the forward-compatible equivalent
both_df = rf_df.append(dt_df)
both_df.head()
both_df
sb.boxplot(x='classifier', y='accuracy', data=both_df)
sb.stripplot(x='classifier', y='accuracy', data=both_df, jitter=True, color='black')
;
###Output
_____no_output_____
###Markdown
How about that? They both seem to perform about the same on this data set. This is probably because of the limitations of our data set: We have only 4 features to make the classification, and Random Forest classifiers excel when there's hundreds of possible features to look at. In other words, there wasn't much room for improvement with this data set. Step 6: Reproducibility[[ go back to the top ]](Table-of-contents)Ensuring that our work is reproducible is the last and — arguably — most important step in any analysis. **As a rule, we shouldn't place much weight on a discovery that can't be reproduced**. As such, if our analysis isn't reproducible, we might as well not have done it.Notebooks like this one go a long way toward making our work reproducible. Since we documented every step as we moved along, we have a written record of what we did and why we did it — both in text and code.Beyond recording what we did, we should also document what software and hardware we used to perform our analysis. This typically goes at the top of our notebooks so our readers know what tools to use.[Sebastian Raschka](http://sebastianraschka.com/) created a handy [notebook tool](https://github.com/rasbt/watermark) for this:
###Code
!pip install watermark
%load_ext watermark
# pd.show_versions() prints its report directly and returns None
myversions = pd.show_versions()
myversions
%watermark -a 'RCS_12' -nmv --packages numpy,pandas,sklearn,matplotlib,seaborn
###Output
RCS_12 Sat Dec 14 2019
CPython 3.7.3
IPython 7.4.0
numpy 1.16.2
pandas 0.24.2
sklearn 0.20.3
matplotlib 3.0.3
seaborn 0.9.0
compiler : MSC v.1915 64 bit (AMD64)
system : Windows
release : 10
machine : AMD64
processor : Intel64 Family 6 Model 158 Stepping 10, GenuineIntel
CPU cores : 12
interpreter: 64bit
###Markdown
Finally, let's extract the core of our work from Steps 1-5 and turn it into a single pipeline.
###Code
%matplotlib inline
import pandas as pd
import seaborn as sb
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, cross_val_score
# We can jump directly to working with the clean data because we saved our cleaned data set
iris_data_clean = pd.read_csv('../data/iris-data-clean.csv')
# Testing our data: Our analysis will stop here if any of these assertions are wrong
# We know that we should only have three classes
assert len(iris_data_clean['class'].unique()) == 3
# We know that sepal lengths for 'Iris-versicolor' should never be below 2.5 cm
assert iris_data_clean.loc[iris_data_clean['class'] == 'Iris-versicolor', 'sepal_length_cm'].min() >= 2.5
# We know that our data set should have no missing measurements
assert len(iris_data_clean.loc[(iris_data_clean['sepal_length_cm'].isnull()) |
(iris_data_clean['sepal_width_cm'].isnull()) |
(iris_data_clean['petal_length_cm'].isnull()) |
(iris_data_clean['petal_width_cm'].isnull())]) == 0
# get inputs and labels in NumPY (out of Pandas dataframe)
all_inputs = iris_data_clean[['sepal_length_cm', 'sepal_width_cm',
'petal_length_cm', 'petal_width_cm']].values
all_labels = iris_data_clean['class'].values
# This is the classifier that came out of Grid Search (grid search results can vary
# slightly between runs, so these parameters may differ from the output shown earlier)
random_forest_classifier = RandomForestClassifier(criterion='gini', max_features=3, n_estimators=50)
# All that's left to do now is plot the cross-validation scores
rf_classifier_scores = cross_val_score(random_forest_classifier, all_inputs, all_labels, cv=10)
sb.boxplot(rf_classifier_scores)
sb.stripplot(rf_classifier_scores, jitter=True, color='black')
# ...and show some of the predictions from the classifier
(training_inputs,
testing_inputs,
training_classes,
testing_classes) = train_test_split(all_inputs, all_labels, test_size=0.25)
random_forest_classifier.fit(training_inputs, training_classes)
for input_features, prediction, actual in zip(testing_inputs[:10],
random_forest_classifier.predict(testing_inputs[:10]),
testing_classes[:10]):
print('{}\t-->\t{}\t(Actual: {})'.format(input_features, prediction, actual))
len(testing_inputs)
for input_features, prediction, actual in zip(testing_inputs,
random_forest_classifier.predict(testing_inputs),
testing_classes):
if (prediction == actual):
print('{}\t-->\t{}\t(Actual: {})'.format(input_features, prediction, actual))
else:
print('!!!!!MISMATCH***{}\t-->\t{}\t(Actual: {})'.format(input_features, prediction, actual))
# helper: list the rows where the classifier's prediction disagrees with the actual label
# (defined before it is first used)
def findMismatches(inputs, answers, classifier):
    mismatches = []
    predictions = classifier.predict(inputs)
    for X, answer, prediction in zip(inputs, answers, predictions):
        if answer != prediction:
            mismatches.append([X, answer, prediction])
    return mismatches

mismatches = findMismatches(all_inputs, all_labels, random_forest_classifier)
mismatches

random_forest_classifier.score(all_inputs, all_labels)
numbers = [1,2,5,6,6,6]
for number in numbers:
print(number)
146/149
%matplotlib inline
import pandas as pd
import seaborn as sb
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, cross_val_score
def processData(filename):
# We can jump directly to working with the clean data because we saved our cleaned data set
iris_data_clean = pd.read_csv(filename)
# Testing our data: Our analysis will stop here if any of these assertions are wrong
# We know that we should only have three classes
assert len(iris_data_clean['class'].unique()) == 3
# We know that sepal lengths for 'Iris-versicolor' should never be below 2.5 cm
assert iris_data_clean.loc[iris_data_clean['class'] == 'Iris-versicolor', 'sepal_length_cm'].min() >= 2.5
# We know that our data set should have no missing measurements
assert len(iris_data_clean.loc[(iris_data_clean['sepal_length_cm'].isnull()) |
(iris_data_clean['sepal_width_cm'].isnull()) |
(iris_data_clean['petal_length_cm'].isnull()) |
(iris_data_clean['petal_width_cm'].isnull())]) == 0
all_inputs = iris_data_clean[['sepal_length_cm', 'sepal_width_cm',
'petal_length_cm', 'petal_width_cm']].values
all_labels = iris_data_clean['class'].values
# This is the classifier that came out of Grid Search
random_forest_classifier = RandomForestClassifier(criterion='gini', max_features=3, n_estimators=50)
# All that's left to do now is plot the cross-validation scores
rf_classifier_scores = cross_val_score(random_forest_classifier, all_inputs, all_labels, cv=10)
sb.boxplot(rf_classifier_scores)
sb.stripplot(rf_classifier_scores, jitter=True, color='black')
# ...and show some of the predictions from the classifier
(training_inputs,
testing_inputs,
training_classes,
testing_classes) = train_test_split(all_inputs, all_labels, test_size=0.25)
random_forest_classifier.fit(training_inputs, training_classes)
for input_features, prediction, actual in zip(testing_inputs[:10],
random_forest_classifier.predict(testing_inputs[:10]),
testing_classes[:10]):
print('{}\t-->\t{}\t(Actual: {})'.format(input_features, prediction, actual))
return rf_classifier_scores
myscores = processData('../data/iris-data-clean.csv')
myscores
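# a different sense of "pipeline": scikit-learn's Pipeline chains preprocessing and a
# model into one estimator (sketch only; scaling is not needed for random forests and
# is included purely to illustrate the chaining)
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
model_pipeline = Pipeline([('scaler', StandardScaler()),
                           ('forest', RandomForestClassifier(n_estimators=50))])
print(cross_val_score(model_pipeline, all_inputs, all_labels, cv=10).mean())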
###Output
_____no_output_____
###Markdown
###Output
_____no_output_____
###Markdown
Much better! Now we only have three class types. Imagine how embarrassing it would've been to create a model that used the wrong classes.>There are some clear outliers in the measurements that may be erroneous: one `sepal_width_cm` entry for `Iris-setosa` falls well outside its normal range, and several `sepal_length_cm` entries for `Iris-versicolor` are near-zero for some reason.Fixing outliers can be tricky business. It's rarely clear whether the outlier was caused by measurement error, recording the data in improper units, or if the outlier is a real anomaly. For that reason, we should be judicious when working with outliers: if we decide to exclude any data, we need to make sure to document what data we excluded and provide solid reasoning for excluding that data. (i.e., "This data didn't fit my hypothesis" will not stand peer review.)In the case of the one anomalous entry for `Iris-setosa`, let's say our field researchers know that it's impossible for `Iris-setosa` to have a sepal width below 2.5 cm. Clearly this entry was made in error, and we're better off just scrapping the entry than spending hours finding out what happened.
###Code
# Here we see all flowers with sepal_width_cm under 2.5 cm
iris_data.loc[(iris_data['sepal_width_cm'] < 2.5)]
# For multiple filters we use & for AND, and | for OR
smallpetals = iris_data.loc[(iris_data['sepal_width_cm'] < 2.5) & (iris_data['class'] == 'Iris-setosa') ]
smallpetals
iris_data.loc[iris_data['class'] == 'Iris-setosa', 'sepal_width_cm'].hist()
len(iris_data)
# This line drops any 'Iris-setosa' rows with a sepal width less than 2.5 cm
# Let's go over this command in class
iris_data = iris_data.loc[(iris_data['class'] != 'Iris-setosa') | (iris_data['sepal_width_cm'] >= 2.5)]
iris_data.loc[iris_data['class'] == 'Iris-setosa', 'sepal_width_cm'].hist()
len(iris_data)
###Output
_____no_output_____
###Markdown
Excellent! Now all of our `Iris-setosa` rows have a sepal width greater than 2.5.The next data issue to address is the several near-zero sepal lengths for the `Iris-versicolor` rows. Let's take a look at those rows.
###Code
iris_data.loc[(iris_data['class'] == 'Iris-versicolor') &
(iris_data['sepal_length_cm'] < 1.0)]
###Output
_____no_output_____
###Markdown
How about that? All of these near-zero `sepal_length_cm` entries seem to be off by two orders of magnitude, as if they had been recorded in meters instead of centimeters.After some brief correspondence with the field researchers, we find that one of them forgot to convert those measurements to centimeters. Let's do that for them.
###Code
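# The conversion step itself is missing from this copy of the notebook; a minimal
# sketch of the intended fix (assuming those rows were recorded in meters) is to
# rescale the affected entries by a factor of 100:
iris_data.loc[(iris_data['class'] == 'Iris-versicolor') &
              (iris_data['sepal_length_cm'] < 1.0),
              'sepal_length_cm'] *= 100.0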
iris_data.loc[iris_data['class'] == 'Iris-versicolor', 'sepal_length_cm'].hist()
iris_data.loc[(iris_data['class'] == 'Iris-versicolor') &
(iris_data['sepal_length_cm'] < 1.0)]
iris_data['sepal_length_cm'].hist()
###Output
_____no_output_____
###Markdown
Phew! Good thing we fixed those outliers. They could've really thrown our analysis off.>We had to drop those rows with missing values.Let's take a look at the rows with missing values:
###Code
iris_data.notnull()
iris_data.loc[(iris_data['sepal_length_cm'].isnull()) |
(iris_data['sepal_width_cm'].isnull()) |
(iris_data['petal_length_cm'].isnull()) |
(iris_data['petal_width_cm'].isnull())]
###Output
_____no_output_____
###Markdown
It's not ideal that we had to drop those rows, especially considering they're all `Iris-setosa` entries. Since it seems like the missing data is systematic — all of the missing values are in the same column for the same *Iris* type — this error could potentially bias our analysis.One way to deal with missing data is **mean imputation**: If we know that the values for a measurement fall in a certain range, we can fill in empty values with the average of that measurement.Let's see if we can do that here.
###Code
iris_data.loc[iris_data['class'] == 'Iris-setosa', 'petal_width_cm'].hist()
###Output
_____no_output_____
###Markdown
Most of the petal widths for `Iris-setosa` fall within the 0.2-0.3 range, so let's fill in these entries with the average measured petal width.
###Code
iris_setosa_avg = iris_data.loc[iris_data['class'] == 'Iris-setosa', 'petal_width_cm'].mean()
iris_setosa_avg
type(iris_setosa_avg)
round(iris_setosa_avg, 2)
# For our purposes, 4-digit precision is more than enough: the raw measurements are only recorded to 0.1 cm
iris_setosa_avg = round(iris_setosa_avg, 4)
average_petal_width = iris_data.loc[iris_data['class'] == 'Iris-setosa', 'petal_width_cm'].mean()
print(average_petal_width)
average_petal_width = iris_setosa_avg
# we find iris-setosa rows where petal_width_cm is missing
iris_data.loc[(iris_data['class'] == 'Iris-setosa') &
(iris_data['petal_width_cm'].isnull()),
'petal_width_cm'] = average_petal_width
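# (Sketch) An equivalent, more general approach imputes each class separately with
# groupby/transform; shown as a comment so it does not alter the result above:
# iris_data['petal_width_cm'] = (iris_data.groupby('class')['petal_width_cm']
#                                .transform(lambda s: s.fillna(s.mean())))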
# we find all iris-setosa with the average
iris_data.loc[(iris_data['class'] == 'Iris-setosa') &
(iris_data['petal_width_cm'] == average_petal_width)]
iris_data.loc[(iris_data['sepal_length_cm'].isnull()) |
(iris_data['sepal_width_cm'].isnull()) |
(iris_data['petal_length_cm'].isnull()) |
(iris_data['petal_width_cm'].isnull())]
# if we want to drop rows with missing data
# and save them into a new dataframe
dfwithoutmissingvalues = iris_data.dropna()
len(dfwithoutmissingvalues)
###Output
_____no_output_____
###Markdown
Great! Now we've recovered those rows and no longer have missing data in our data set.**Note:** If you don't feel comfortable imputing your data, you can drop all rows with missing data with the `dropna()` call: iris_data.dropna(inplace=True)After all this hard work, we don't want to repeat this process every time we work with the data set. Let's save the tidied data file *as a separate file* and work directly with that data file from now on.
###Code
import json
iris_data.to_json('../data/iris-clean.json')
# pandas' to_json() does not pretty-print, so we format the JSON ourselves
df_json_pretty = json.dumps(json.loads(iris_data.to_json()), indent=4)
type(df_json_pretty)
df_json_pretty[:100]
with open('data.json', 'w', encoding='utf-8') as f:
f.write(df_json_pretty)
# Saving a copy in the current folder (note that the next cell reads the cleaned copy from ../data/)
iris_data.to_csv('iris-data-clean.csv', index=False)
iris_data_clean = pd.read_csv('../data/iris-data-clean.csv')
iris_data_clean.head()
###Output
_____no_output_____
###Markdown
Now, let's take a look at the scatterplot matrix now that we've tidied the data.
###Code
myplot = sb.pairplot(iris_data_clean, hue='class')
myplot.savefig('irises.png')
import scipy.stats as stats
iris_data = pd.read_csv('../data/iris-data.csv')
iris_data.columns.unique()
stats.entropy(iris_data_clean['sepal_length_cm'])
iris_data.columns[:-1]
# we go through list of column names except last one and get entropy
# for data (without missing values) in each column
for col in iris_data.columns[:-1]:
print("Entropy for: ", col, stats.entropy(iris_data[col].dropna()))
###Output
Entropy for: sepal_length_cm 4.96909746125432
Entropy for: sepal_width_cm 5.000701325982732
Entropy for: petal_length_cm 4.888113822938816
Entropy for: petal_width_cm 4.754264731532864
###Markdown
Of course, I purposely inserted numerous errors into this data set to demonstrate some of the many possible scenarios you may face while tidying your data.The general takeaways here should be:* Make sure your data is encoded properly* Make sure your data falls within the expected range, and use domain knowledge whenever possible to define that expected range* Deal with missing data in one way or another: replace it if you can or drop it* Never tidy your data manually because that is not easily reproducible* Use code as a record of how you tidied your data* Plot everything you can about the data at this stage of the analysis so you can *visually* confirm everything looks correct Bonus: Testing our data[[ go back to the top ]](Table-of-contents)At SciPy 2015, I was exposed to a great idea: We should test our data. Just how we use unit tests to verify our expectations from code, we can similarly set up unit tests to verify our expectations about a data set.We can quickly test our data using `assert` statements: We assert that something must be true, and if it is, then nothing happens and the notebook continues running. However, if our assertion is wrong, then the notebook stops running and brings it to our attention. For example,```Pythonassert 1 == 2```will raise an `AssertionError` and stop execution of the notebook because the assertion failed.Let's test a few things that we know about our data set now.
###Code
assert 1 == 3
# We know that we should only have three classes
assert len(iris_data_clean['class'].unique()) == 3
assert len(iris_data['class'].unique()) == 3
# We know that sepal lengths for 'Iris-versicolor' should never be below 2.5 cm
assert iris_data_clean.loc[iris_data_clean['class'] == 'Iris-versicolor', 'sepal_length_cm'].min() >= 2.5
# We know that our data set should have no missing measurements
assert len(iris_data_clean.loc[(iris_data_clean['sepal_length_cm'].isnull()) |
(iris_data_clean['sepal_width_cm'].isnull()) |
(iris_data_clean['petal_length_cm'].isnull()) |
(iris_data_clean['petal_width_cm'].isnull())]) == 0
# We know that our data set should have no missing measurements
assert len(iris_data.loc[(iris_data['sepal_length_cm'].isnull()) |
(iris_data['sepal_width_cm'].isnull()) |
(iris_data['petal_length_cm'].isnull()) |
(iris_data['petal_width_cm'].isnull())]) == 0
###Output
_____no_output_____
###Markdown
And so on. If any of these expectations are violated, then our analysis immediately stops and we have to return to the tidying stage. Data Cleanup & Wrangling > 80% time spent in Data Science Step 4: Exploratory analysis[[ go back to the top ]](Table-of-contents)Now after spending entirely too much time tidying our data, we can start analyzing it!Exploratory analysis is the step where we start delving deeper into the data set beyond the outliers and errors. We'll be looking to answer questions such as:* How is my data distributed?* Are there any correlations in my data?* Are there any confounding factors that explain these correlations?This is the stage where we plot all the data in as many ways as possible. Create many charts, but don't bother making them pretty — these charts are for internal use.Let's return to that scatterplot matrix that we used earlier.
###Code
sb.pairplot(iris_data_clean)
;
###Output
_____no_output_____
###Markdown
Our data is normally distributed for the most part, which is great news if we plan on using any modeling methods that assume the data is normally distributed.There's something strange going on with the petal measurements. Maybe it's something to do with the different `Iris` types. Let's color code the data by the class again to see if that clears things up.
###Code
sb.pairplot(iris_data_clean, hue='class')
;
###Output
_____no_output_____
###Markdown
Sure enough, the strange distribution of the petal measurements exist because of the different species. This is actually great news for our classification task since it means that the petal measurements will make it easy to distinguish between `Iris-setosa` and the other `Iris` types.Distinguishing `Iris-versicolor` and `Iris-virginica` will prove more difficult given how much their measurements overlap.There are also correlations between petal length and petal width, as well as sepal length and sepal width. The field biologists assure us that this is to be expected: Longer flower petals also tend to be wider, and the same applies for sepals.We can also make [**violin plots**](https://en.wikipedia.org/wiki/Violin_plot) of the data to compare the measurement distributions of the classes. Violin plots contain the same information as [box plots](https://en.wikipedia.org/wiki/Box_plot), but also scales the box according to the density of the data.
###Code
plt.figure(figsize=(10, 10))
for column_index, column in enumerate(iris_data_clean.columns):
if column == 'class':
continue
plt.subplot(2, 2, column_index + 1)
sb.violinplot(x='class', y=column, data=iris_data_clean)
###Output
_____no_output_____
###Markdown
Enough flirting with the data. Let's get to modeling. Step 5: Classification[[ go back to the top ]](Table-of-contents)Wow, all this work and we *still* haven't modeled the data!As tiresome as it can be, tidying and exploring our data is a vital component to any data analysis. If we had jumped straight to the modeling step, we would have created a faulty classification model.Remember: **Bad data leads to bad models.** Always check your data first.Assured that our data is now as clean as we can make it — and armed with some cursory knowledge of the distributions and relationships in our data set — it's time to make the next big step in our analysis: Splitting the data into training and testing sets.A **training set** is a random subset of the data that we use to train our models.A **testing set** is a random subset of the data (mutually exclusive from the training set) that we use to validate our models on unforseen data.Especially in sparse data sets like ours, it's easy for models to **overfit** the data: The model will learn the training set so well that it won't be able to handle most of the cases it's never seen before. This is why it's important for us to build the model with the training set, but score it with the testing set.Note that once we split the data into a training and testing set, we should treat the testing set like it no longer exists: We cannot use any information from the testing set to build our model or else we're cheating.Let's set up our data first.
###Code
# iris_data_clean = pd.read_csv('../data/iris-data-clean.csv')
# We're using all four measurements as inputs
# Note that scikit-learn expects each entry to be a list of values, e.g.,
# [ [val1, val2, val3],
# [val1, val2, val3],
# ... ]
# such that our input data set is represented as a list of lists
# We can extract the data in this format from pandas like this:
# usually called X
all_inputs = iris_data_clean[['sepal_length_cm', 'sepal_width_cm',
'petal_length_cm', 'petal_width_cm']].values
# Similarly, we can extract the class labels
# answers/label often called little y
all_labels = iris_data_clean['class'].values
# Make sure that you don't mix up the order of the entries
# all_inputs[5] inputs should correspond to the class in all_labels[5]
# Here's what a subset of our inputs looks like:
all_inputs[:5]
type(all_inputs)
all_labels[:5]
type(all_labels)
###Output
_____no_output_____
###Markdown
Now our data is ready to be split.
###Code
all_inputs[:3]
iris_data_clean.head(3)
all_labels[:3]
from sklearn.model_selection import train_test_split
# Here we split our data into training and testing data
# you can read more on split function at
# https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html
(training_inputs,
testing_inputs,
training_classes,
testing_classes) = train_test_split(all_inputs, all_labels, test_size=0.25, random_state=1)
len(all_inputs)
len(training_inputs)
0.75*149
149*0.25
len(testing_inputs)
training_inputs[:5]
testing_inputs[:5]
testing_classes[:5]
training_classes[:5]
###Output
_____no_output_____
###Markdown
With our data split, we can start fitting models to our data. Our company's Head of Data is all about decision tree classifiers, so let's start with one of those.Decision tree classifiers are incredibly simple in theory. In their simplest form, decision tree classifiers ask a series of Yes/No questions about the data — each time getting closer to finding out the class of each entry — until they either classify the data set perfectly or simply can't differentiate a set of entries. Think of it like a game of [Twenty Questions](https://en.wikipedia.org/wiki/Twenty_Questions), except the computer is *much*, *much* better at it.Here's an example decision tree classifier:Notice how the classifier asks Yes/No questions about the data — whether a certain feature is <= 1.75, for example — so it can differentiate the records. This is the essence of every decision tree.The nice part about decision tree classifiers is that they are **scale-invariant**, i.e., the scale of the features does not affect their performance, unlike many Machine Learning models. In other words, it doesn't matter if our features range from 0 to 1 or 0 to 1,000; decision tree classifiers will work with them just the same.There are several [parameters](http://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html) that we can tune for decision tree classifiers, but for now let's use a basic decision tree classifier.
###Code
from sklearn.tree import DecisionTreeClassifier
# Create the classifier
decision_tree_classifier = DecisionTreeClassifier()
# Train the classifier on the training set
decision_tree_classifier.fit(training_inputs, training_classes)
# here we have a working classifier after the fit
# Validate the classifier on the testing set using classification accuracy
decision_tree_classifier.score(testing_inputs, testing_classes)
1-1/38
decision_tree_classifier.score(training_inputs, training_classes)
150*0.25
len(testing_inputs)
# How the accuracy score came about 37 out of 38 correct
37/38
# lets try a cooler model SVM - Support Vector Machines
from sklearn import svm
svm_classifier = svm.SVC(gamma = 'scale')
svm_classifier.fit(training_inputs, training_classes)
svm_classifier.score(testing_inputs, testing_classes)
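# Note: unlike decision trees, SVMs are not scale-invariant; in a real pipeline the
# inputs would typically be standardized first (e.g. with sklearn's StandardScaler).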
###Output
_____no_output_____
###Markdown
Heck yeah! Our model achieves 97% classification accuracy without much effort.However, there's a catch: Depending on how our training and testing set was sampled, our model can achieve anywhere from 80% to 100% accuracy:
###Code
import matplotlib.pyplot as plt
# Here we randomly split the data 1000 times into different training and test sets
model_accuracies = []
for repetition in range(1000):
(training_inputs,
testing_inputs,
training_classes,
testing_classes) = train_test_split(all_inputs, all_labels, test_size=0.25)
    # notice that we do not specify a seed, so each of the 1000 splits is different
decision_tree_classifier = DecisionTreeClassifier()
decision_tree_classifier.fit(training_inputs, training_classes)
classifier_accuracy = decision_tree_classifier.score(testing_inputs, testing_classes)
model_accuracies.append(classifier_accuracy)
plt.hist(model_accuracies)
;
plt.hist(model_accuracies, bins=10)
max(model_accuracies)
min(model_accuracies)
1-9/38
from collections import Counter
acc_count = Counter(model_accuracies)
acc_count
1/38
100/38
###Output
_____no_output_____
###Markdown
It's obviously a problem that our model performs quite differently depending on the subset of the data it's trained on. This phenomenon is known as **overfitting**: The model is learning to classify the training set so well that it doesn't generalize and perform well on data it hasn't seen before. Cross-validation[[ go back to the top ]](Table-of-contents)This problem is the main reason that most data scientists perform ***k*-fold cross-validation** on their models: Split the original data set into *k* subsets, use one of the subsets as the testing set, and the rest of the subsets are used as the training set. This process is then repeated *k* times such that each subset is used as the testing set exactly once.10-fold cross-validation is the most common choice, so let's use that here. Performing 10-fold cross-validation on our data set looks something like this:(each square is an entry in our data set)
###Code
iris_data_clean.head(15)
iris_data_clean.tail()
import numpy as np
from sklearn.model_selection import StratifiedKFold
def plot_cv(cv, features, labels):
masks = []
for train, test in cv.split(features, labels):
mask = np.zeros(len(labels), dtype=bool)
mask[test] = 1
masks.append(mask)
plt.figure(figsize=(15, 15))
plt.imshow(masks, interpolation='none', cmap='gray_r')
plt.ylabel('Fold')
plt.xlabel('Row #')
plot_cv(StratifiedKFold(n_splits=10), all_inputs, all_labels)
###Output
_____no_output_____
###Markdown
You'll notice that we used **Stratified *k*-fold cross-validation** in the code above. Stratified *k*-fold keeps the class proportions the same across all of the folds, which is vital for maintaining a representative subset of our data set. (e.g., so we don't have 100% `Iris setosa` entries in one of the folds.)We can perform 10-fold cross-validation on our model with the following code:
###Code
from sklearn.model_selection import cross_val_score
decision_tree_classifier = DecisionTreeClassifier()
# cross_val_score returns a list of the scores, which we can visualize
# to get a reasonable estimate of our classifier's performance
cv_scores = cross_val_score(decision_tree_classifier, all_inputs, all_labels, cv=10)
plt.hist(cv_scores)
plt.title('Average score: {}'.format(np.mean(cv_scores)))
;
cv_scores
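# A common one-line summary of the fold scores (sketch, shown as a comment):
# print(f"{cv_scores.mean():.3f} +/- {cv_scores.std():.3f}")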
1-1/15
len(all_inputs.T[1])
import scipy.stats as stats
# https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.entropy.html
# https://en.wikipedia.org/wiki/Entropy_(information_theory)
print("Entropy for: ", stats.entropy(all_inputs.T[1]))
# We loop over the feature columns of the NumPy array (all_inputs.T)
# and print the entropy of each
def printEntropy(npdata):
for i, col in enumerate(npdata.T):
print("Entropy for column:", i, stats.entropy(col))
printEntropy(all_inputs)
###Output
Entropy for column: 0 4.9947332367061925
Entropy for column: 1 4.994187360273029
Entropy for column: 2 4.88306851089088
Entropy for column: 3 4.76945055275522
###Markdown
Now we have a much more consistent rating of our classifier's general classification accuracy. Parameter tuning[[ go back to the top ]](Table-of-contents)Every Machine Learning model comes with a variety of parameters to tune, and these parameters can be vitally important to the performance of our classifier. For example, if we severely limit the depth of our decision tree classifier:
###Code
decision_tree_classifier = DecisionTreeClassifier(max_depth=1)
cv_scores = cross_val_score(decision_tree_classifier, all_inputs, all_labels, cv=10)
plt.hist(cv_scores)
plt.title('Average score: {}'.format(np.mean(cv_scores)))
;
###Output
_____no_output_____
###Markdown
the classification accuracy falls tremendously.Therefore, we need to find a systematic method to discover the best parameters for our model and data set.The most common method for model parameter tuning is **Grid Search**. The idea behind Grid Search is simple: explore a range of parameters and find the best-performing parameter combination. Focus your search on the best range of parameters, then repeat this process several times until the best parameters are discovered.Let's tune our decision tree classifier. We'll stick to only two parameters for now, but it's possible to simultaneously explore dozens of parameters if we want.
###Code
# prepare to grid and to fit
from sklearn.model_selection import GridSearchCV
decision_tree_classifier = DecisionTreeClassifier()
# the parameters will depend on the model we use above
parameter_grid = {'max_depth': [1, 2, 3, 4, 5, 6, 7],
'max_features': [1, 2, 3, 4]}
cross_validation = StratifiedKFold(n_splits=10)
grid_search = GridSearchCV(decision_tree_classifier,
param_grid=parameter_grid,
cv=cross_validation)
# here the grid search will loop through all parameter combinations and fit the model to cross validated splits
grid_search.fit(all_inputs, all_labels)
print('Best score: {}'.format(grid_search.best_score_))
print('Best parameters: {}'.format(grid_search.best_params_))
###Output
Best score: 0.959731543624161
Best parameters: {'max_depth': 3, 'max_features': 4}
###Markdown
Now let's visualize the grid search to see how the parameters interact.
###Code
type(grid_search)
grid_search.estimator
grid_search.param_grid
type(grid_search.param_grid)
grid_search.cv
grid_search.cv_results_['mean_test_score']
cv_res = grid_search.cv_results_['mean_test_score']
cv_res.shape
import seaborn as sb
grid_visualization = grid_search.cv_results_['mean_test_score']
grid_visualization.shape = (7, 4)
sb.heatmap(grid_visualization, cmap='Oranges', annot=True)
plt.xticks(np.arange(4) + 0.5, grid_search.param_grid['max_features'])
plt.yticks(np.arange(7) + 0.5, grid_search.param_grid['max_depth'])
plt.xlabel('max_features')
plt.ylabel('max_depth')
plt.savefig("grid_heatmap.png")
;
###Output
_____no_output_____
###Markdown
Now we have a better sense of the parameter space: We know that we need a `max_depth` of at least 2 to allow the decision tree to make more than a one-off decision.`max_features` doesn't really seem to make a big difference here as long as we have 2 of them, which makes sense since our data set has only 4 features and is relatively easy to classify. (Remember, one of our data set's classes was easily separable from the rest based on a single feature.)Let's go ahead and use a broad grid search to find the best settings for a handful of parameters.
###Code
decision_tree_classifier = DecisionTreeClassifier()
parameter_grid = {'criterion': ['gini', 'entropy'],
'splitter': ['best', 'random'],
'max_depth': [1, 2, 3, 4, 5],
'max_features': [1, 2, 3, 4]}
cross_validation = StratifiedKFold(n_splits=10)
grid_search = GridSearchCV(decision_tree_classifier,
param_grid=parameter_grid,
cv=cross_validation)
grid_search.fit(all_inputs, all_labels)
print('Best score: {}'.format(grid_search.best_score_))
print('Best parameters: {}'.format(grid_search.best_params_))
149*grid_search.best_score_
143/149
145/149
###Output
_____no_output_____
###Markdown
Now we can take the best classifier from the Grid Search and use that:
###Code
# we pick the best one and save for now in a different variable
decision_tree_classifier = grid_search.best_estimator_
decision_tree_classifier
###Output
_____no_output_____
###Markdown
We can even visualize the decision tree with [GraphViz](http://www.graphviz.org/) to see how it's making the classifications:
###Code
import sklearn.tree as tree
with open('iris_dtc.dot', 'w') as out_file:
out_file = tree.export_graphviz(decision_tree_classifier, out_file=out_file)
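# To render the exported tree (assuming the Graphviz command-line tools are installed):
#   dot -Tpng iris_dtc.dot -o iris_dtc.png
# or, with the `graphviz` Python package: graphviz.Source.from_file('iris_dtc.dot')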
###Output
_____no_output_____
###Markdown
(This classifier may look familiar from earlier in the notebook.)Alright! We finally have our demo classifier. Let's create some visuals of its performance so we have something to show our company's Head of Data.
###Code
decision_tree_classifier
dt_scores = cross_val_score(decision_tree_classifier, all_inputs, all_labels, cv=10)
sb.boxplot(dt_scores)
sb.stripplot(dt_scores, jitter=True, color='orange')
;
###Output
_____no_output_____
###Markdown
Hmmm... that's a little boring by itself though. How about we compare another classifier to see how they perform?We already know from previous projects that Random Forest classifiers usually work better than individual decision trees. A common problem that decision trees face is that they're prone to overfitting: They complexify to the point that they classify the training set near-perfectly, but fail to generalize to data they have not seen before.**Random Forest classifiers** work around that limitation by creating a whole bunch of decision trees (hence "forest") — each trained on random subsets of training samples (drawn with replacement) and features (drawn without replacement) — and have the decision trees work together to make a more accurate classification.Let that be a lesson for us: **Even in Machine Learning, we get better results when we work together!**Let's see if a Random Forest classifier works better here.The great part about scikit-learn is that the training, testing, parameter tuning, etc. process is the same for all models, so we only need to plug in the new classifier.
###Code
from sklearn.ensemble import RandomForestClassifier
random_forest_classifier = RandomForestClassifier()
parameter_grid = {'n_estimators': [10, 25, 50, 100],
'criterion': ['gini', 'entropy'],
'max_features': [1, 2, 3, 4]}
cross_validation = StratifiedKFold(n_splits=10)
grid_search = GridSearchCV(random_forest_classifier,
param_grid=parameter_grid,
cv=cross_validation)
grid_search.fit(all_inputs, all_labels)
print('Best score: {}'.format(grid_search.best_score_))
print('Best parameters: {}'.format(grid_search.best_params_))
grid_search.best_estimator_
###Output
Best score: 0.9664429530201343
Best parameters: {'criterion': 'gini', 'max_features': 2, 'n_estimators': 25}
###Markdown
Now we can compare their performance:
###Code
random_forest_classifier = grid_search.best_estimator_
rf_df = pd.DataFrame({'accuracy': cross_val_score(random_forest_classifier, all_inputs, all_labels, cv=10),
'classifier': ['Random Forest'] * 10})
dt_df = pd.DataFrame({'accuracy': cross_val_score(decision_tree_classifier, all_inputs, all_labels, cv=10),
'classifier': ['Decision Tree'] * 10})
both_df = rf_df.append(dt_df)
both_df.head()
both_df
sb.boxplot(x='classifier', y='accuracy', data=both_df)
sb.stripplot(x='classifier', y='accuracy', data=both_df, jitter=True, color='orange')
;
###Output
_____no_output_____
###Markdown
How about that? They both seem to perform about the same on this data set. This is probably because of the limitations of our data set: We have only 4 features to make the classification, and Random Forest classifiers excel when there's hundreds of possible features to look at. In other words, there wasn't much room for improvement with this data set. Step 6: Reproducibility[[ go back to the top ]](Table-of-contents)Ensuring that our work is reproducible is the last and — arguably — most important step in any analysis. **As a rule, we shouldn't place much weight on a discovery that can't be reproduced**. As such, if our analysis isn't reproducible, we might as well not have done it.Notebooks like this one go a long way toward making our work reproducible. Since we documented every step as we moved along, we have a written record of what we did and why we did it — both in text and code.Beyond recording what we did, we should also document what software and hardware we used to perform our analysis. This typically goes at the top of our notebooks so our readers know what tools to use.[Sebastian Raschka](http://sebastianraschka.com/) created a handy [notebook tool](https://github.com/rasbt/watermark) for this:
###Code
!pip install watermark
%load_ext watermark
myversions = pd.show_versions()
myversions
%watermark -a 'RCS_12' -nmv --packages numpy,pandas,sklearn,matplotlib,seaborn
###Output
RCS_12 Mon Dec 23 2019
CPython 3.7.3
IPython 7.4.0
numpy 1.16.2
pandas 0.24.2
sklearn 0.20.3
matplotlib 3.0.3
seaborn 0.9.0
compiler : MSC v.1915 64 bit (AMD64)
system : Windows
release : 10
machine : AMD64
processor : Intel64 Family 6 Model 158 Stepping 10, GenuineIntel
CPU cores : 12
interpreter: 64bit
###Markdown
Finally, let's extract the core of our work from Steps 1-5 and turn it into a single pipeline.
###Code
%matplotlib inline
import pandas as pd
import seaborn as sb
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, cross_val_score
# We can jump directly to working with the clean data because we saved our cleaned data set
iris_data_clean = pd.read_csv('../data/iris-data-clean.csv')
# Testing our data: Our analysis will stop here if any of these assertions are wrong
# We know that we should only have three classes
assert len(iris_data_clean['class'].unique()) == 3
# We know that sepal lengths for 'Iris-versicolor' should never be below 2.5 cm
assert iris_data_clean.loc[iris_data_clean['class'] == 'Iris-versicolor', 'sepal_length_cm'].min() >= 2.5
# We know that our data set should have no missing measurements
assert len(iris_data_clean.loc[(iris_data_clean['sepal_length_cm'].isnull()) |
(iris_data_clean['sepal_width_cm'].isnull()) |
(iris_data_clean['petal_length_cm'].isnull()) |
(iris_data_clean['petal_width_cm'].isnull())]) == 0
# get inputs and labels in NumPY (out of Pandas dataframe)
all_inputs = iris_data_clean[['sepal_length_cm', 'sepal_width_cm',
'petal_length_cm', 'petal_width_cm']].values
all_labels = iris_data_clean['class'].values
# This is the classifier that came out of Grid Search
random_forest_classifier = RandomForestClassifier(criterion='gini', max_features=3, n_estimators=50)
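# (Note: these hyperparameters were hard-coded from an earlier Grid Search run; the search
# above reported criterion='gini', max_features=2, n_estimators=25, so re-running it may
# suggest slightly different values.)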
# All that's left to do now is plot the cross-validation scores
rf_classifier_scores = cross_val_score(random_forest_classifier, all_inputs, all_labels, cv=10)
sb.boxplot(rf_classifier_scores)
sb.stripplot(rf_classifier_scores, jitter=True, color='black')
# ...and show some of the predictions from the classifier
(training_inputs,
testing_inputs,
training_classes,
testing_classes) = train_test_split(all_inputs, all_labels, test_size=0.25)
random_forest_classifier.fit(training_inputs, training_classes)
for input_features, prediction, actual in zip(testing_inputs[:10],
random_forest_classifier.predict(testing_inputs[:10]),
testing_classes[:10]):
print('{}\t-->\t{}\t(Actual: {})'.format(input_features, prediction, actual))
len(testing_inputs)
for input_features, prediction, actual in zip(testing_inputs,
random_forest_classifier.predict(testing_inputs),
testing_classes):
if (prediction == actual):
print('{}\t-->\t{}\t(Actual: {})'.format(input_features, prediction, actual))
else:
print('!!!!!MISMATCH***{}\t-->\t{}\t(Actual: {})'.format(input_features, prediction, actual))
def findMismatches(inputs, answers, classifier):
    mismatches = []
    predictions = classifier.predict(inputs)
    for X, answer, prediction in zip(inputs, answers, predictions):
        if answer != prediction:
            mismatches.append([X, answer, prediction])
    return mismatches
mismatches = findMismatches(all_inputs, all_labels, random_forest_classifier)
mismatches
random_forest_classifier.score(all_inputs, all_labels)
numbers = [1,2,5,6,6,6]
for number in numbers:
print(number)
146/149
%matplotlib inline
import pandas as pd
import seaborn as sb
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, cross_val_score
def processData(filename):
# We can jump directly to working with the clean data because we saved our cleaned data set
iris_data_clean = pd.read_csv(filename)
# Testing our data: Our analysis will stop here if any of these assertions are wrong
# We know that we should only have three classes
assert len(iris_data_clean['class'].unique()) == 3
# We know that sepal lengths for 'Iris-versicolor' should never be below 2.5 cm
assert iris_data_clean.loc[iris_data_clean['class'] == 'Iris-versicolor', 'sepal_length_cm'].min() >= 2.5
# We know that our data set should have no missing measurements
assert len(iris_data_clean.loc[(iris_data_clean['sepal_length_cm'].isnull()) |
(iris_data_clean['sepal_width_cm'].isnull()) |
(iris_data_clean['petal_length_cm'].isnull()) |
(iris_data_clean['petal_width_cm'].isnull())]) == 0
all_inputs = iris_data_clean[['sepal_length_cm', 'sepal_width_cm',
'petal_length_cm', 'petal_width_cm']].values
all_labels = iris_data_clean['class'].values
# This is the classifier that came out of Grid Search
random_forest_classifier = RandomForestClassifier(criterion='gini', max_features=3, n_estimators=50)
# All that's left to do now is plot the cross-validation scores
rf_classifier_scores = cross_val_score(random_forest_classifier, all_inputs, all_labels, cv=10)
sb.boxplot(rf_classifier_scores)
sb.stripplot(rf_classifier_scores, jitter=True, color='black')
# ...and show some of the predictions from the classifier
(training_inputs,
testing_inputs,
training_classes,
testing_classes) = train_test_split(all_inputs, all_labels, test_size=0.25)
random_forest_classifier.fit(training_inputs, training_classes)
for input_features, prediction, actual in zip(testing_inputs[:10],
random_forest_classifier.predict(testing_inputs[:10]),
testing_classes[:10]):
print('{}\t-->\t{}\t(Actual: {})'.format(input_features, prediction, actual))
return rf_classifier_scores
myscores = processData('../data/iris-data-clean.csv')
type(myscores)
myscores.max()
myscores[:5]
###Output
_____no_output_____ |
_site/software/hw2.3.ipynb | ###Markdown
Homework 2.3: Microtubule catastrophe and ECDFs [SOLO] (30 pts)[Data set download](https://s3.amazonaws.com/bebi103.caltech.edu/data/gardner_time_to_catastrophe_dic_tidy.csv) In a [future lesson](../../lessons/07/iqplot.iypnb), you will learn about **emprical cumulative distribution functions**, or ECDFs. These are useful ways to visualize how measured data are distributed. An ECDF evaluated at point _x_ is defined asECDF(_x_) = fraction of data points ≤ _x_.The ECDF is defined on the entire real number line, with $\mathrm{ECDF}(x\to-\infty) = 0$ and $\mathrm{ECDF}(x\to\infty) = 1$. However, the ECDF is often plotted as discrete points, $\{(x_i, y_i)\}$, where for point $i$, $x_i$ is the value of the measured quantity and $y_i$ is $\mathrm{ECDF}(x_i)$. For example, if I have a set of measured data with values (1.1, –6.7, 2.3, 9.8, 2.3), the points on the ECDF plot are| x | y ||:------:|:---:|| –6.7 | 0.2 || 1.1 | 0.4 || 2.3 | 0.6 || 2.3 | 0.8 || 9.8 | 1.0 |In this problem, you will use you newly acquired skills using Numpy and Bokeh to compute ECDFs from a real data set and plot them.[Gardner, Zanic, and coworkers](http://dx.doi.org/10.1016/j.cell.2011.10.037) investigated the dynamics of microtubule catastrophe, the switching of a microtubule from a growing to a shrinking state. In particular, they were interested in the time between the start of growth of a microtubule and the catastrophe event. They monitored microtubules by using tubulin (the monomer that comprises a microtubule) that was labeled with a fluorescent marker. As a control to make sure that fluorescent labels and exposure to laser light did not affect the microtubule dynamics, they performed a similar experiment using differential interference contrast (DIC) microscopy. They measured the time until catastrophe with labeled and unlabeled tubulin.We will look at the data used to generate Fig. 2a of their paper. In the end, you will generate a plot similar to that figure.**a)** Write a function with the call signature `ecdfvals(data)`, which takes a one-dimensional Numpy array (or Pandas `Series`; the same construction of your function will work for both) of data and returns the `x` and `y` values for plotting the ECDF in the "dots" style, as in Fig. 2a of the Gardner, Zanic, et al. paper. As a reminder, > ECDF(*x*) = fraction of data points ≤ x.When you write this function, you may only use base Python and the standard library, in addition to Numpy and Pandas.
###Code
# import statements
import numpy as np
import pandas as pd
# plot in bokeh
import bokeh.io
import bokeh.plotting
# function to take in 1D array and returns x and y for plotting ECDF in dots style
def ecdfvals(data):
# extract all unique timing values
data_vals = np.unique(data)
# initialize x and y
x = np.array(data_vals)
y = np.zeros(len(data_vals))
for i, val in enumerate(data_vals):
y[i]= len(np.where(data == val)[0])
    # normalize counts to fractions
    y = y/np.sum(y)
    # convert the fractions to a cumulative sum
    y = np.cumsum(y)
return x, y
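# Quick sanity check (sketch) against the worked example in the problem statement:
# ecdfvals(np.array([1.1, -6.7, 2.3, 9.8, 2.3])) returns
#   x = [-6.7, 1.1, 2.3, 9.8] and y = [0.2, 0.4, 0.8, 1.0]
# (duplicate x values collapse to the highest cumulative fraction in this implementation).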
###Output
_____no_output_____
###Markdown
**b)** Use the `ecdfvals()` function that you wrote to plot the ECDFs shown in Fig. 2a of the Gardner, Zanic, et al. paper. By looking this plot, do you think that the fluorescent labeling makes a difference in the onset of catastrophe? (We will do a more careful statistical inference later in the course, but for now, does it pass the eye test? Eye tests are an important part of EDA.) You can access the data set here: [https://s3.amazonaws.com/bebi103.caltech.edu/data/gardner_time_to_catastrophe_dic_tidy.csv](https://s3.amazonaws.com/bebi103.caltech.edu/data/gardner_time_to_catastrophe_dic_tidy.csv)
###Code
# read csv into dataframe, some tidying of data
# df = pd.read_csv("..\data\gardner_time_to_catastrophe_dic_tidy.csv",header=[0])
df = pd.read_csv("../data/gardner_time_to_catastrophe_dic_tidy.csv",header=[0])
df.drop(columns=df.columns[0], axis=1, inplace=True)
# separate false and true catastrophe data
df_false = df[df['labeled']==np.unique(df['labeled'])[0]].iloc[:,0]
df_true = df[df['labeled']==np.unique(df['labeled'])[1]].iloc[:,0]
# obtain values for plotting using ecdfvals function
x_false, y_false = ecdfvals(df_false)
x_true, y_true = ecdfvals(df_true)
df
###Output
_____no_output_____
###Markdown
Notebook did not run properly. I think you might have an issue with the line endings VS code uses? -1
###Code
# Enable viewing Bokeh plots in the notebook
bokeh.io.output_notebook()
p = bokeh.plotting.figure(
width=400,
height=300,
x_axis_label="time to catastrophe (s)",
y_axis_label="ECDF",
)
p.circle(
x=x_true,
y=y_true,
legend_label="Labeled",
)
p.circle(
x=x_false,
y=y_false,
legend_label="Unlabeled",
color="orange"
)
p.legend.location = "bottom_right"
bokeh.io.show(p)
###Output
_____no_output_____ |
module2-oop-code-style-and-reviews/Python_OOP_Cheat_Sheet.ipynb | ###Markdown
ClassesClasses are the object factories of many programming languages. The objects that classes create are typically called instances. Classes can also be used to organize code and/or data. Python Classes are similar to classes in other languages but in many ways they are quite different.[Python Class | python.org](https://docs.python.org/3/tutorial/classes.html?highlight=inheritanceclasses) Class Instantiation & The Instance ObjectWhen a class is called directly you get back an instance object.
###Code
class MyClass:
pass
instance_object = MyClass()
###Output
_____no_output_____
###Markdown
Magic methodsAlso known as Dunder Methods - these are invoked by Python and do not need to be called directly. For example, the `__call__()` method is automatically called when you call the object itself. See Callable Object below.[Python Magic Methods | python.org](https://docs.python.org/3/reference/datamodel.htmlspecial-method-names) Define Fields with `__init__()`This is the Init Method. It is used to populate fields on the instance object. The init method allows us to load the instance object with fields, this is the last step of the instantiation process. Fortunately the object already has all the class variables, instance methods, static methods and class methods pre-loaded. Inside any instance method the instance object has the name: self, this is an implict argument. You need to declare it in the method def but it is not expected to be passed in - that's the implicit part.Sometime this `__init__()` method is called the constructor, however it would be better to call it the initiallizer as the object has already been constructed at this point. There is another magic method `__new__()` - this is the proper constructor. The `__new__()` magic method will not be covered here as it is almost never used.[Python Init method | python.org](https://docs.python.org/3/reference/datamodel.htmlobject.__init__)
###Code
class Name:
def __init__(self, name):
self.name = name # instance variable
name_object = Name("Jim Bob Joe") # name passed to __init__
print(name_object.name)
###Output
Jim Bob Joe
###Markdown
Callable Object with `__call__()` In this example we'll see how we can add to the instance objects the ability to call them as if they where functions.
###Code
class Callable:
fourty_two = 42 # class variable
def __call__(self):
return self.fourty_two
callable_obj = Callable()
print(callable_obj) # not called
print(callable_obj()) # called
###Output
<__main__.Callable object at 0x7f8b43b030f0>
42
###Markdown
Printable Object with `__str__()` and/or `__repr__()``__str__()`: This magic method should return a string. This is used when the object is to be printed or any time the object is cast to a string.`__repr__()`: This magic method should also return a string. Typically this is a string of the class signature.So long as one of these methods are defined, the objects will be printable directly.
###Code
class Printable:
class_answer = 42
def __str__(self):
return f"The answer is {self.class_answer}"
def __repr__(self):
return "Printable()"
answer = Printable()
print(answer)
print(repr(answer))
###Output
The answer is 42
Printable()
###Markdown
InheritanceIt can be said that Wizard & Fighter both inherit from Character. All fields and methods from any base classes will automatically be present in all derived classes. This is one way to share behavior and data across many classes.
###Code
class Character:
""" Base Class """
health = 10
class Wizard(Character):
""" Derived Class """
mana = 20
class Fighter(Character):
""" Derived Class """
power = 15
wizard_object = Wizard()
print("Wizard Health:", wizard_object.health)
print("Wizard Mana:", wizard_object.mana)
print()
fighter_object = Fighter()
print("Fighter Health:", fighter_object.health)
print("Fighter Power:", fighter_object.power)
###Output
Wizard Health: 10
Wizard Mana: 20
Fighter Health: 10
Fighter Power: 15
###Markdown
Avoid Multiple InheritanceThe JunkYardShip below, only fires with the power of a StarFighter. This is due to the order that the base classes are inherited... `JunkYardShip(StarFighter, IonCanon)` should be `JunkYardShip(IonCanon, StarFighter)`, and this is weird. This seems backwards to anyone that knows how CSS works. Multiple Inheritance is not considered Pythonic and generally it's best avoided. Composition is a much better pattern, see the `StarDestroyer()` class.
###Code
class StarFighter:
def fire(self):
return 10
class IonCanon:
def fire(self):
return 100
class JunkYardShip(StarFighter, IonCanon): # Don't do this
""" I have a bad feeling about this. """
pass
class StarDestroyer(StarFighter): # Do this instead
""" This class uses composition to gain
the full fire power of the IonCanon. """
primary_weapon = IonCanon()
def fire(self):
return self.primary_weapon.fire()
fighter = StarFighter()
print(f"StarFighter: {fighter.fire()}")
junk_ship = JunkYardShip()
print(f"JunkYardShip: {junk_ship.fire()}")
destroyer = StarDestroyer()
print(f"StarDestroyer: {destroyer.fire()}")
###Output
StarFighter: 10
JunkYardShip: 10
StarDestroyer: 100
###Markdown
PolymorphismThe example below uses inheritance to achieve full polymorphism between Monsters and Bosses. All fields and methods match in name and logical behavior. They do not need to hold the same data. This allows the two types of objects to be used interchangeably - and yet leverage their logical differences. Inheritance is not the only way to achieve polymorphism.
###Code
import random
def dice(rolls, sides):
return sum(random.randint(1, sides) for _ in range(rolls))
class Monster:
creature_type = "Monster"
hit_dice = 8
damage_dice = 6
names = ("Goblin", "Troll", "Giant", "Zombie", "Ghoul", "Vampire")
def __init__(self, level=1):
self.level = level
self.name = self.random_name()
self.total_health = dice(self.level, self.hit_dice)
self.current_health = self.total_health
def take_damage(self, amount):
print(f"{self.name} takes {amount} damage!")
self.current_health -= amount
def deal_damage(self):
return dice(self.level, self.damage_dice)
def __str__(self):
output = (
f"{self.creature_type}: {self.name}",
f"Level: {self.level}",
f"Health: {self.current_health}/{self.total_health}",
)
return "\n".join(output)
def random_name(self):
return random.choice(self.names)
class Boss(Monster):
creature_type = "Boss"
hit_dice = 12
damage_dice = 8
names = (
"The Loch Ness Monster", "Godzilla", "Nero the Sunblade",
"The Spider Queen", "Palladia Morris", "The Blood Countess",
)
some_monster = Monster(10)
print(some_monster, '\n')
dungeon_boss = Boss(20)
print(dungeon_boss, '\n')
dungeon_boss.take_damage(some_monster.deal_damage())
print(dungeon_boss)
some_monster.take_damage(dungeon_boss.deal_damage())
print(some_monster)
###Output
Monster: Giant
Level: 10
Health: -50/42
###Markdown
Class ScopeThis can be tricky. It's better not to think of what is going on here as scope. But rather a blueprint to make objects. Sometimes the blueprint would like to refer to itself. This complicates things a great deal. What is self? Is it the class or the instance object? We want both abilities, and here we are. The convention is that when we use param 'self' we mean the instance object, when we actually mean the class, meaning in class methods, we will instead use the param 'cls'.In Java it's required to declare what are known as 'get' and 'set' methods to read and write class fields. In Python we may we drink java, but we never write get or set methods. We have direct access to all fields all the time. This is only partially true, see class methods and static methods for exceptions to this rule.
###Code
class ClassScope:
# self does not exit yet.
class_variable = "class_variable"
def __init__(self):
"""
Local scope inside a method is just like function scope. However,
methods also have access to class scope and instance scope
through self. """
self.instance_variable = "instance_variable"
def instance_method(self):
""" This is a regular Instance Method.
We have access to everything from here.
Don't over think it, most of the time this is what you want.
While it is common to modify instance variables here, it is not wise to
declare them here. Use the `__init__()` method for that. Use instance
methods, like this one, to read and update instance variables. """
return self.instance_variable + ": via instance method"
@classmethod
def classy_method(cls):
""" This is a Class Method.
It's more restricted than regular methods. Instead of the `self`
param we use the `cls` param. This is a convention to indicate
we expect this method to live on a class that might possibly never
be instantiated. This is the whole point of having class methods.
This ability comes at a cost: everything we access from this scope
must live on the class itself, not an instance. Only static methods,
class methods and class variables are accessible here. """
return cls.class_variable + ": via class method"
@staticmethod
def selfless_method():
""" This is a Static Method.
It's way more restricted than regular methods. Static Methods
have no concept of `self` or `cls` and cannot access anything.
This is a prime candidate to refactor into a function. """
local_variable = "local_variable"
return local_variable + ": via static method"
# Class Scope
print("From the Class:")
print(ClassScope.class_variable) # There is no spoon, i mean...
print(ClassScope.classy_method()) # There is no instance.
print(ClassScope.selfless_method()) # But we have lots of class!
print()
# Instance Scope
print("From the Instance:")
instance_object = ClassScope() # instance object instantiated.
print(instance_object.instance_variable) # now we have everything...
print(instance_object.instance_method()) # ...except local variables.
print(instance_object.class_variable)
print(instance_object.classy_method())
print(instance_object.selfless_method())
###Output
From the Class:
class_variable
class_variable: via class method
local_variable: via static method
From the Instance:
instance_variable
instance_variable: via instance method
class_variable
class_variable: via class method
local_variable: via static method
###Markdown
Advanced Class Topics - [Python's Class Development Toolkit | YouTube.com](https://www.youtube.com/watch?v=HTLu2DFOdTg&t=943s) Raymond Hettinger Super FunctionThe super function is required when more than one class in a hierarchy has an `__init__()` method. Below `Wizard` inherits from `Player` and they both have an `__init__()` method. To make this work we need to call `super().__init__()` in the child class's `__init__()`, and we should usually do that first. The super call will have the same signature as the `__init__()` of the parent class. See below.- [Super Considered Super! | YouTube.com]() Raymond Hettinger
###Code
class Player:
def __init__(self, name, level):
self.Name = name
self.Class = "Villager"
self.Level = min(max(1, level), 20) # Min: 1, Max: 20
self.Health = self.Level * 8
def __str__(self):
_fields = (f"{k}: {v}" for k, v in self.__dict__.items())
return '\n '.join(_fields) + '\n'
class Wizard(Player):
def __init__(self, name, level, school):
super().__init__(name, level)
self.Class = f"Wizard of {school}"
self.Mana = self.Level * 10
print(Player("George", 1))
print(Wizard("Jim Darkmagic", level=10, school="Illusion"))
###Output
Name: George
Class: Villager
Level: 1
Health: 8
Name: Jim Darkmagic
Class: Wizard of Illusion
Level: 10
Health: 80
Mana: 100
###Markdown
Meta Classes* [Meta Programming | YouTube.com](https://youtu.be/sPiWg5jSoZI) David BeazleyIf a class is an object factory, then a meta class is a class factory. Meta Classes are often considered black magic, please use them with caution. Meta classes should never be your first impulse as a solution to solve any given puzzle. Often a simple decorator will be faster, easier and less surprising.Custom meta classes typically inherit from `type` and redefine the `__new__()` method. A meta class is like a class decorator in capability but the meta class allows modifications to take place before the instances are created. Decorators do their magic strictly after the fact. While a decorator can affects any decorated class individually, a meta class at the top level will affect an entire class hierarchy.
###Code
class Foo(type):
def __new__(cls, name, bases, clsdict):
print(f"A New {cls.__qualname__} named {name}!")
return super().__new__(cls, name, bases, clsdict)
class Bar(metaclass=Foo):
""" If Foo must be declared as a metaclass `metaclass=Foo`.
This will not work the same if we just inherit from Foo. """
pass
class Baz(Bar):
""" Now we can inherit from Bar and get the same behavior. """
pass
b = Bar()
z = Baz()
###Output
A New Foo named Bar!
A New Foo named Baz!
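###Markdown
For contrast, here is a rough sketch (not from the talk) of the class-decorator version of the same idea. It runs only after the class object already exists, and it must be applied to every class explicitly because decorators are not inherited:
###Code
# Class decorator equivalent of the metaclass above: it fires after the
# class object has been built, and only for classes it is applied to.
def announce(cls):
    print(f"A new class named {cls.__name__}!")
    return cls
@announce
class Qux:
    pass
class Quux(Qux):  # Not announced: the decorator is not inherited.
    pass
###Output
_____no_output_____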
###Markdown
Structure Example
###Code
from inspect import Parameter, Signature
class StructMeta(type):
def __new__(cls, clsname, bases, clsdict):
clsobj = super().__new__(cls, clsname, bases, clsdict)
sig = cls.make_signature(clsobj._fields)
setattr(clsobj, '__signature__', sig)
return clsobj
@staticmethod
def make_signature(names):
return Signature(
Parameter(name, Parameter.POSITIONAL_OR_KEYWORD)
for name in names)
class Structure(metaclass=StructMeta):
_fields = []
def __init__(self, *args, **kwargs):
bound = self.__signature__.bind(*args, **kwargs)
for name, val in bound.arguments.items():
setattr(self, name, val)
def __str__(self):
out = (f"{name}: {val}" for name, val in self.__dict__.items())
return '\n'.join(out)
class Struct(Structure):
_fields = ['name']
s = Struct("Baz")
print(s)
###Output
name: Baz
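###Markdown
One payoff of attaching a real `Signature` is that argument binding is enforced for us. A brief sketch (assuming the `Structure` class above) with a two-field struct shows both keyword binding and the `TypeError` raised for a bad call:
###Code
# A structure with two fields, still no hand-written __init__.
class Point(Structure):
    _fields = ['x', 'y']
p = Point(x=1, y=2)
print(p)
# Binding enforces the signature, so a bad call fails loudly.
try:
    Point(1, 2, 3)
except TypeError as err:
    print("TypeError:", err)
###Output
_____no_output_____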
###Markdown
Dataclasses. The dataclass is a class decorator for quickly defining a common type of class without all the boilerplate.- [Dataclasses | YouTube.com](https://youtu.be/T-TwcmT6Rcw?t=110) Raymond Hettinger
###Code
from dataclasses import dataclass
@dataclass
class Color:
hue: int
saturation: float
lightness: float = 0.5
blue = Color(hue=240, saturation=0.75, lightness=0.75)
print(blue)
print(blue.hue)
print(blue.saturation)
print(blue.lightness)
light_blue = Color(hue=240, saturation=0.75, lightness=0.25)
print(light_blue == blue)
blue2 = Color(hue=240, saturation=0.75, lightness=0.75)
print(blue == blue2)
###Output
True
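###Markdown
To make "without all the boilerplate" concrete, here is roughly the hand-written class that `@dataclass` saves us from (a sketch of the generated `__init__`, `__repr__` and `__eq__`, not the exact generated code):
###Code
# Roughly what @dataclass generates for Color, written out by hand.
class ManualColor:
    def __init__(self, hue: int, saturation: float, lightness: float = 0.5):
        self.hue = hue
        self.saturation = saturation
        self.lightness = lightness
    def __repr__(self):
        return (f"ManualColor(hue={self.hue!r}, "
                f"saturation={self.saturation!r}, lightness={self.lightness!r})")
    def __eq__(self, other):
        if other.__class__ is not self.__class__:
            return NotImplemented
        return ((self.hue, self.saturation, self.lightness) ==
                (other.hue, other.saturation, other.lightness))
print(ManualColor(240, 0.75, 0.75))
###Output
_____no_output_____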
|
notebooks/autopower.ipynb | ###Markdown
Plot first power spectrum
###Code
import matplotlib.pyplot as plt  # needed for the plots below
# NOTE: `TT` (the power-spectrum reference classes used via getattr below) is
# assumed to be provided by the project's own modules, imported in an earlier session.
freqs = ['353', '545', '857']
n_freqs = len(freqs)
ref_names = ['Planck14Data', 'Planck14Model', 'Mak17', 'Maniyar18Model']
fig, axes = plt.subplots(ncols=n_freqs, nrows=n_freqs, figsize=(4 * n_freqs, 4*n_freqs))
for row_idx, row in enumerate(axes):
for col_idx, col in enumerate(row):
ax = axes[row_idx][col_idx]
# Skip lower triangle
if row_idx > col_idx:
ax.axis('off')
continue
for ref_name in ref_names:
ref = getattr(TT, ref_name)(freq1=freqs[row_idx], freq2=freqs[col_idx], unit='Jy^2/sr')
# plot
ax.plot(ref.l, ref.l*ref.Cl, label=ref_name)
ax.legend()
ax.set_title(f"{freqs[row_idx]}x{freqs[col_idx]}")
# Limits
ax.set_xlim(1, 2048)
ref_for_ylim = getattr(TT, 'Planck14Model')(freq1=freqs[row_idx], freq2=freqs[col_idx], unit='Jy^2/sr')
ref_for_ylim = ref_for_ylim.l*ref_for_ylim.Cl
ax.set_ylim(0., ref_for_ylim.max()*2.)
# NOTE: `model` and `planck2014` are not defined above; this block assumes they
# were created in an earlier session (see the multi-frequency cell below for how
# they are constructed).
plt.plot(model.l, model.Cl, label='model')
plt.scatter(planck2014.l, planck2014.Cl + planck2014.S, label='Planck 2014', c='C1')
# labels & legend
plt.xlabel(r'$l$')
plt.ylabel(r'$C_l$')
plt.legend()
plt.loglog();
###Output
_____no_output_____
###Markdown
Multi-frequency
###Code
import itertools as it  # used for it.product below
# NOTE: `PaoloModel` and `Planck2014` are assumed to come from the project's own modules.
fig = plt.figure(figsize=(15, 12))
freqs = [353, 545, 857]
for i, freqs in enumerate(it.product(freqs, repeat=2)):
model = PaoloModel(*freqs)
planck2014 = Planck2014(*freqs)
ax = fig.add_subplot(3,3, i+1)
# plot
ax.plot(model.l, model.Cl, label='model')
ax.scatter(planck2014.l, planck2014.Cl + planck2014.S, label='Planck 2014', c='C1')
# limits
ax.set_xlim([10, None])
# labels & legend
ax.set_title(str(freqs))
if i >= 7:
ax.set_xlabel(r'$l$')
if i in [1, 4, 7]:
ax.set_ylabel(r'$C_l$')
ax.legend()
ax.loglog();
###Output
_____no_output_____ |
Code/4_operationalization.ipynb | ###Markdown
Step 4: Model operationalization & Deployment. In this script, we load the model from the `Code/3_model_building.ipynb` Jupyter notebook and the labeled feature data set constructed in the `Code/2_feature_engineering.ipynb` notebook in order to build the model deployment artifacts. We create deployment functions, which we test locally in the notebook. We package a model schema file, the deployment run functions file, and the model created in the previous notebook into a deployment file. We load this package onto our Azure blob storage for deployment. The remainder of this notebook details the steps required to deploy and operationalize the model using the Azure Machine Learning Model Management environment for real-time use in production.**Note:** This notebook will take about 1 minute to execute all cells, depending on the compute configuration you have set up.
###Code
from azureml.core import Workspace, Experiment
# Load workspace using configuration file
ws = Workspace.from_config(path = '../aml_config/PredictiveMaintenanceWSConfig.json')
# Data Ingestion will be run within a separate experiment
exp = Experiment(name = 'ModelOperationalization', workspace = ws)
# New Run is created
run = exp.start_logging()
# Now we can log any information we want
import time
run.log('Starting Model Operationalization', time.asctime(time.localtime(time.time())))
run.tag('Description', 'Model Operationalization')
# Enter your Azure blob storage details here
ACCOUNT_NAME = "predictistorageinugjxfr"
# You can find the account key under the _Access Keys_ link in the
# [Azure Portal](portal.azure.com) page for your Azure storage container.
ACCOUNT_KEY = "moQjKkXdNdA2xHGuPpC4YdxDeotmTkkm+Pa7zopIHcy1xNhVf5hvU+tO9OQLC3cxVG01IKvEZeSHAOEgmdrV1w=="
## setup our environment by importing required libraries
import json
import os
import shutil
import time
from pyspark.ml import Pipeline
from pyspark.ml.classification import RandomForestClassifier
# for creating pipelines and model
from pyspark.ml.feature import StringIndexer, VectorAssembler, VectorIndexer
# setup the pyspark environment
from pyspark.sql import SparkSession
from azureml.api.schema.dataTypes import DataTypes
from azureml.api.schema.sampleDefinition import SampleDefinition
from azureml.api.realtime.services import generate_schema
# For Azure blob storage access
from azure.storage.blob import BlockBlobService
from azure.storage.blob import PublicAccess
# For logging model evaluation parameters back into the
# AML Workbench run history plots.
#import logging
#from azureml.logging import get_azureml_logger
#amllog = logging.getLogger("azureml")
#amllog.level = logging.INFO
# Turn on cell level logging.
#%azureml history on
#%azureml history show
# Time the notebook execution.
# This will only make sense if you "Run all cells"
tic = time.time()
#logger = get_azureml_logger() # logger writes to AMLWorkbench runtime view
spark = SparkSession.builder.getOrCreate()
# Telemetry
#logger.log('amlrealworld.predictivemaintenance.operationalization','true')
run.log('amlrealworld.predictivemaintenance.operationalization', True)
###Output
_____no_output_____
###Markdown
We need to load the feature data set in order to construct the operationalization schema. We will again need your storage account name and account key to connect to the blob storage.
###Code
# Enter your Azure blob storage details here
#ACCOUNT_NAME = ""
# You can find the account key under the _Access Keys_ link in the
# [Azure Portal](portal.azure.com) page for your Azure storage container.
#ACCOUNT_KEY = ""
#-------------------------------------------------------------------------------------------
# We will create this container to hold the results of executing this notebook.
# If this container name already exists, we will use that instead, however
# This notebook will ERASE ALL CONTENTS.
CONTAINER_NAME = "featureengineering"
FE_DIRECTORY = 'featureengineering_files.parquet'
MODEL_CONTAINER = 'modeldeploy'
# Connect to your blob service
az_blob_service = BlockBlobService(account_name=ACCOUNT_NAME, account_key=ACCOUNT_KEY)
# Create a new container if necessary, otherwise you can use an existing container.
# This command creates the container if it does not already exist. Else it does nothing.
az_blob_service.create_container(CONTAINER_NAME,
fail_on_exist=False,
public_access=PublicAccess.Container)
# create a local path where to store the results later.
if not os.path.exists(FE_DIRECTORY):
os.makedirs(FE_DIRECTORY)
# download the entire parquet result folder to local path for a new run
for blob in az_blob_service.list_blobs(CONTAINER_NAME):
if CONTAINER_NAME in blob.name:
local_file = os.path.join(FE_DIRECTORY, os.path.basename(blob.name))
az_blob_service.get_blob_to_path(CONTAINER_NAME, blob.name, local_file)
fedata = spark.read.parquet(FE_DIRECTORY)
fedata.limit(5).toPandas().head(5)
###Output
_____no_output_____
###Markdown
Define deployment functions. The init() function initializes your web service, loading in any data or models that you need to score your inputs. In the example below, we load in the trained model. This code runs when the Docker container hosting your service initializes. The run() function defines what is executed on a scoring call. In our simple example, we load the input into a data frame, run our pipeline on it, and return the prediction. Start by defining the init() and run() functions and testing them with example data, then write them to the `pdmscore.py` file for deployment.
###Code
# Initialize the deployment environment
def init():
# read in the model file
from pyspark.ml import PipelineModel
global pipeline
pipeline = PipelineModel.load(os.environ['AZUREML_NATIVE_SHARE_DIRECTORY']+'pdmrfull.model')
# Run the model and return the scored result.
def run(input_df):
import json
response = ''
try:
#Get prediction results for the dataframe
# We'll use the known label, key variables and
# a few extra columns we won't need.
key_cols =['label_e','machineID','dt_truncated', 'failure','model_encoded','model' ]
# Then get the remaining feature names from the data
input_features = input_df.columns
# Remove the extra stuff if it's in the input_df
input_features = [x for x in input_features if x not in set(key_cols)]
# Vectorize as in model building
va = VectorAssembler(inputCols=(input_features), outputCol='features')
data = va.transform(input_df).select('machineID','features')
score = pipeline.transform(data)
predictions = score.collect()
#Get each scored result
preds = [str(x['prediction']) for x in predictions]
response = ",".join(preds)
except Exception as e:
print("Error: {0}",str(e))
return (str(e))
# Return results
print(json.dumps(response))
return json.dumps(response)
###Output
_____no_output_____
###Markdown
Create schema file. The deployment requires a schema file to define the incoming data.
###Code
# We'll use the known label, key variables and
# a few extra columns we won't need. (machineID is required)
key_cols =['label_e','dt_truncated', 'failure','model_encoded','model' ]
# Then get the remaining feature names from the data
input_features = fedata.columns
# Remove the extra stuff if it's in the input_df
input_features = [x for x in input_features if x not in set(key_cols)]
# define the input data frame
inputs = {"input_df": SampleDefinition(DataTypes.SPARK,
fedata.select(input_features))}
json_schema = generate_schema(run_func=run, inputs=inputs, filepath='service_schema.json')
###Output
_____no_output_____
###Markdown
Test the functions. We can then test the `init()` and `run()` functions right here in the notebook; it is nearly impossible to debug them after the web service has been published. First we get a sample test observation that we can score. For this, we randomly select a single record from the test data we've loaded from Azure blob.
###Code
# Randomly select a record from the loaded test data.
smple = fedata.sample(False, .8).limit(1).select(input_features)
smple.toPandas().head()
###Output
_____no_output_____
###Markdown
The deployment requires first initializing (`init()`) the environment, then running the model with the supplied data fields (`run()`). The `run()` function returns the predicted label, `0.0` indicates a healthy record, other values correspond to the component predicted to fail within the next 7 days (`1.0, 2.0, 3.0, 4.0`).
###Code
# test init() in local notebook
init()
# test run() in local notebook
run(smple)
###Output
"0.0"
###Markdown
The model returned a `0.0`, indicating a healthy prediction. Comparing this to the actual value of the `label_e` variable for this record would determine how the model actually did. However we did not include this feature in the sampled data, as it would not be available in the production environment. In the following code block, we use the `filter` function to select 10 records with a specific failure label (`4.0`) indicating a failure for component 4 is probable within the next 7 days. You can see this by scrolling to the right to find the `label_e` variable.
###Code
smple_f = fedata.filter(fedata.label_e == 4.0).sample(False, .8).limit(10)
smple_f.toPandas().head()
###Output
_____no_output_____
###Markdown
Since we have already initialized the environment, we can submit this new record to the model for scoring. We need the record to align with the specified schema, so we select the features according to the `input_features` vector.
###Code
run(smple_f.select(input_features))
###Output
"0.0,3.0,0.0,0.0,0.0,0.0,0.0,0.0,3.0,3.0"
###Markdown
Comparing this output to the actual values indicates a mismatch in the failure prediction. Model assets. Next we package the model assets into a zip file and store them in Azure blob storage for deployment into an operationalization environment. First we write the tested assets out to local storage.
###Code
# save the schema file for deployment
out = json.dumps(json_schema)
with open(os.environ['AZUREML_NATIVE_SHARE_DIRECTORY'] + 'service_schema.json', 'w') as f:
f.write(out)
###Output
_____no_output_____
###Markdown
We will use the `%%writefile` magic command to save the `init()` and `run()` functions to the `pdmscore.py` file. Because of how `%%writefile` works, we have to copy these functions from the tested versions above into this code block.
###Code
%%writefile {os.environ['AZUREML_NATIVE_SHARE_DIRECTORY']}/pdmscore.py
import json
from pyspark.ml import Pipeline
from pyspark.ml.classification import RandomForestClassifier, DecisionTreeClassifier
# for creating pipelines and model
from pyspark.ml.feature import StringIndexer, VectorAssembler, VectorIndexer
def init():
# read in the model file
from pyspark.ml import PipelineModel
# read in the model file
global pipeline
pipeline = PipelineModel.load('pdmrfull.model')
def run(input_df):
response = ''
try:
# We'll use the known label, key variables and
# a few extra columns we won't need.
key_cols =['label_e','machineID','dt_truncated', 'failure','model_encoded','model' ]
# Then get the remaining feature names from the data
input_features = input_df.columns
# Remove the extra stuff if it's in the input_df
input_features = [x for x in input_features if x not in set(key_cols)]
# Vectorize as in model building
va = VectorAssembler(inputCols=(input_features), outputCol='features')
data = va.transform(input_df).select('machineID','features')
score = pipeline.transform(data)
predictions = score.collect()
#Get each scored result
preds = [str(x['prediction']) for x in predictions]
response = ",".join(preds)
except Exception as e:
print("Error: {0}",str(e))
return (str(e))
# Return results
print(json.dumps(response))
return json.dumps(response)
if __name__ == "__main__":
init()
run("{\"input_df\":[{\"machineID\":114,\"volt_rollingmean_3\":163.375732902,\"rotate_rollingmean_3\":333.149484586,\"pressure_rollingmean_3\":100.183951698,\"vibration_rollingmean_3\":44.0958812638,\"volt_rollingmean_24\":164.114723991,\"rotate_rollingmean_24\":277.191815232,\"pressure_rollingmean_24\":97.6289110707,\"vibration_rollingmean_24\":50.8853505161,\"volt_rollingstd_3\":21.0049565219,\"rotate_rollingstd_3\":67.5287259378,\"pressure_rollingstd_3\":12.9361526861,\"vibration_rollingstd_3\":4.61359760918,\"volt_rollingstd_24\":15.5377738062,\"rotate_rollingstd_24\":67.6519885441,\"pressure_rollingstd_24\":10.528274633,\"vibration_rollingstd_24\":6.94129487555,\"error1sum_rollingmean_24\":0.0,\"error2sum_rollingmean_24\":0.0,\"error3sum_rollingmean_24\":0.0,\"error4sum_rollingmean_24\":0.0,\"error5sum_rollingmean_24\":0.0,\"comp1sum\":489.0,\"comp2sum\":549.0,\"comp3sum\":549.0,\"comp4sum\":564.0,\"age\":18.0}]}")
###Output
Overwriting /azureml-share//pdmscore.py
###Markdown
These files are stored in the `AZUREML_NATIVE_SHARE_DIRECTORY` location on the kernel host machine, alongside the model saved by the `3_model_building.ipynb` notebook. In order to share these assets and operationalize the model, we create a new blob container and store a compressed file containing them for later retrieval from the deployment location.
###Code
# Compress the operationalization assets for easy blob storage transfer
MODEL_O16N = shutil.make_archive('o16n', 'zip', os.environ['AZUREML_NATIVE_SHARE_DIRECTORY'])
# Create a new container if necessary, otherwise you can use an existing container.
# This command creates the container if it does not already exist. Else it does nothing.
az_blob_service.create_container(MODEL_CONTAINER,
fail_on_exist=False,
public_access=PublicAccess.Container)
# Transfer the compressed operationalization assets into the blob container.
az_blob_service.create_blob_from_path(MODEL_CONTAINER, "o16n.zip", str(MODEL_O16N) )
# Time the notebook execution.
# This will only make sense if you "Run All" cells
toc = time.time()
print("Full run took %.2f minutes" % ((toc - tic)/60))
#logger.log("Operationalization Run time", ((toc - tic)/60))
run.log('Operationalization Run time', ((toc - tic)/60))
# Mark the run as completed
run.complete()
###Output
_____no_output_____
###Markdown
Step 4: Model operationalization & Deployment. In this script, we load the model from the `Code/3_model_building.ipynb` Jupyter notebook and the labeled feature data set constructed in the `Code/2_feature_engineering.ipynb` notebook in order to build the model deployment artifacts. We create deployment functions, which we test locally in the notebook. We package a model schema file, the deployment run functions file, and the model created in the previous notebook into a deployment file. We load this package onto our Azure blob storage for deployment. The remainder of this notebook details the steps required to deploy and operationalize the model using the Azure Machine Learning Model Management environment for real-time use in production.**Note:** This notebook will take about 1 minute to execute all cells, depending on the compute configuration you have set up.
###Code
## setup our environment by importing required libraries
import json
import os
import shutil
import time
from pyspark.ml import Pipeline
from pyspark.ml.classification import RandomForestClassifier
# for creating pipelines and model
from pyspark.ml.feature import StringIndexer, VectorAssembler, VectorIndexer
# setup the pyspark environment
from pyspark.sql import SparkSession
from azureml.api.schema.dataTypes import DataTypes
from azureml.api.schema.sampleDefinition import SampleDefinition
from azureml.api.realtime.services import generate_schema
# For Azure blob storage access
from azure.storage.blob import BlockBlobService
from azure.storage.blob import PublicAccess
# For logging model evaluation parameters back into the
# AML Workbench run history plots.
import logging
from azureml.logging import get_azureml_logger
amllog = logging.getLogger("azureml")
amllog.level = logging.INFO
# Turn on cell level logging.
%azureml history on
%azureml history show
# Time the notebook execution.
# This will only make sense if you "Run all cells"
tic = time.time()
logger = get_azureml_logger() # logger writes to AMLWorkbench runtime view
spark = SparkSession.builder.getOrCreate()
# Telemetry
logger.log('amlrealworld.predictivemaintenance.operationalization','true')
###Output
_____no_output_____
###Markdown
We need to load the feature data set in order to construct the operationalization schema. We will again need your storage account name and account key to connect to the blob storage.
###Code
# Enter your Azure blob storage details here
ACCOUNT_NAME = "<your blob storage account name>"
# You can find the account key under the _Access Keys_ link in the
# [Azure Portal](portal.azure.com) page for your Azure storage container.
ACCOUNT_KEY = "<your blob storage account key>"
#-------------------------------------------------------------------------------------------
# We will create this container to hold the results of executing this notebook.
# If this container name already exists, we will use that instead, however
# This notebook will ERASE ALL CONTENTS.
CONTAINER_NAME = "featureengineering"
FE_DIRECTORY = 'featureengineering_files.parquet'
MODEL_CONTAINER = 'modeldeploy'
# Connect to your blob service
az_blob_service = BlockBlobService(account_name=ACCOUNT_NAME, account_key=ACCOUNT_KEY)
# Create a new container if necessary, otherwise you can use an existing container.
# This command creates the container if it does not already exist. Else it does nothing.
az_blob_service.create_container(CONTAINER_NAME,
fail_on_exist=False,
public_access=PublicAccess.Container)
# create a local path where to store the results later.
if not os.path.exists(FE_DIRECTORY):
os.makedirs(FE_DIRECTORY)
# download the entire parquet result folder to local path for a new run
for blob in az_blob_service.list_blobs(CONTAINER_NAME):
if CONTAINER_NAME in blob.name:
local_file = os.path.join(FE_DIRECTORY, os.path.basename(blob.name))
az_blob_service.get_blob_to_path(CONTAINER_NAME, blob.name, local_file)
fedata = spark.read.parquet(FE_DIRECTORY)
fedata.limit(5).toPandas().head(5)
###Output
_____no_output_____
###Markdown
Define deployment functions. The init() function initializes your web service, loading in any data or models that you need to score your inputs. In the example below, we load in the trained model. This code runs when the Docker container hosting your service initializes. The run() function defines what is executed on a scoring call. In our simple example, we load the input into a data frame, run our pipeline on it, and return the prediction. Start by defining the init() and run() functions and testing them with example data, then write them to the `pdmscore.py` file for deployment.
###Code
# Initialize the deployment environment
def init():
# read in the model file
from pyspark.ml import PipelineModel
global pipeline
pipeline = PipelineModel.load(os.environ['AZUREML_NATIVE_SHARE_DIRECTORY']+'pdmrfull.model')
# Run the model and return the scored result.
def run(input_df):
import json
response = ''
try:
#Get prediction results for the dataframe
# We'll use the known label, key variables and
# a few extra columns we won't need.
key_cols =['label_e','machineID','dt_truncated', 'failure','model_encoded','model' ]
# Then get the remaining feature names from the data
input_features = input_df.columns
# Remove the extra stuff if it's in the input_df
input_features = [x for x in input_features if x not in set(key_cols)]
# Vectorize as in model building
va = VectorAssembler(inputCols=(input_features), outputCol='features')
data = va.transform(input_df).select('machineID','features')
score = pipeline.transform(data)
predictions = score.collect()
#Get each scored result
preds = [str(x['prediction']) for x in predictions]
response = ",".join(preds)
except Exception as e:
print("Error: {0}",str(e))
return (str(e))
# Return results
print(json.dumps(response))
return json.dumps(response)
###Output
_____no_output_____
###Markdown
Create schema file. The deployment requires a schema file to define the incoming data.
###Code
# We'll use the known label, key variables and
# a few extra columns we won't need. (machineID is required)
key_cols =['label_e','dt_truncated', 'failure','model_encoded','model' ]
# Then get the remaining feature names from the data
input_features = fedata.columns
# Remove the extra stuff if it's in the input_df
input_features = [x for x in input_features if x not in set(key_cols)]
# define the input data frame
inputs = {"input_df": SampleDefinition(DataTypes.SPARK,
fedata.select(input_features))}
json_schema = generate_schema(run_func=run, inputs=inputs, filepath='service_schema.json')
###Output
_____no_output_____
###Markdown
Test the functions. We can then test the `init()` and `run()` functions right here in the notebook; it is nearly impossible to debug them after the web service has been published. First we get a sample test observation that we can score. For this, we randomly select a single record from the test data we've loaded from Azure blob.
###Code
# Randomly select a record from the loaded test data.
smple = fedata.sample(False, .8).limit(1).select(input_features)
smple.toPandas().head()
###Output
_____no_output_____
###Markdown
The deployment requires first initializing (`init()`) the environment, then running the model with the supplied data fields (`run()`). The `run()` function returns the predicted label, `0.0` indicates a healthy record, other values correspond to the component predicted to fail within the next 7 days (`1.0, 2.0, 3.0, 4.0`).
###Code
# test init() in local notebook
init()
# test run() in local notebook
run(smple)
###Output
"0.0"
###Markdown
The model returned a `0.0`, indicating a healthy prediction. Comparing this to the actual value of the `label_e` variable for this record would determine how the model actually did. However we did not include this feature in the sampled data, as it would not be available in the production environment. In the following code block, we use the `filter` function to select 10 records with a specific failure label (`4.0`) indicating a failure for component 4 is probable within the next 7 days. You can see this by scrolling to the right to find the `label_e` variable.
###Code
smple_f = fedata.filter(fedata.label_e == 4.0).sample(False, .8).limit(10)
smple_f.toPandas().head()
###Output
_____no_output_____
###Markdown
Since we have already initialized the environment, we can submit this new record to the model for scoring. We need the record to align with the specified schema, so we select the features according to the `input_features` vector.
###Code
run(smple_f.select(input_features))
###Output
"0.0,3.0,0.0,0.0,0.0,0.0,0.0,0.0,3.0,3.0"
###Markdown
Comparing this output to the actual values indicates a mismatch in the failure prediction. Model assets. Next we package the model assets into a zip file and store them in Azure blob storage for deployment into an operationalization environment. First we write the tested assets out to local storage.
###Code
# save the schema file for deployment
out = json.dumps(json_schema)
with open(os.environ['AZUREML_NATIVE_SHARE_DIRECTORY'] + 'service_schema.json', 'w') as f:
f.write(out)
###Output
_____no_output_____
###Markdown
We will use the `%%writefile` magic command to save the `init()` and `run()` functions to the `pdmscore.py` file. Because of how `%%writefile` works, we have to copy these functions from the tested versions above into this code block.
###Code
%%writefile {os.environ['AZUREML_NATIVE_SHARE_DIRECTORY']}/pdmscore.py
import json
from pyspark.ml import Pipeline
from pyspark.ml.classification import RandomForestClassifier, DecisionTreeClassifier
# for creating pipelines and model
from pyspark.ml.feature import StringIndexer, VectorAssembler, VectorIndexer
def init():
# read in the model file
from pyspark.ml import PipelineModel
# read in the model file
global pipeline
pipeline = PipelineModel.load('pdmrfull.model')
def run(input_df):
response = ''
try:
# We'll use the known label, key variables and
# a few extra columns we won't need.
key_cols =['label_e','machineID','dt_truncated', 'failure','model_encoded','model' ]
# Then get the remaining feature names from the data
input_features = input_df.columns
# Remove the extra stuff if it's in the input_df
input_features = [x for x in input_features if x not in set(key_cols)]
# Vectorize as in model building
va = VectorAssembler(inputCols=(input_features), outputCol='features')
data = va.transform(input_df).select('machineID','features')
score = pipeline.transform(data)
predictions = score.collect()
#Get each scored result
preds = [str(x['prediction']) for x in predictions]
response = ",".join(preds)
except Exception as e:
print("Error: {0}",str(e))
return (str(e))
# Return results
print(json.dumps(response))
return json.dumps(response)
if __name__ == "__main__":
init()
run("{\"input_df\":[{\"machineID\":114,\"volt_rollingmean_3\":163.375732902,\"rotate_rollingmean_3\":333.149484586,\"pressure_rollingmean_3\":100.183951698,\"vibration_rollingmean_3\":44.0958812638,\"volt_rollingmean_24\":164.114723991,\"rotate_rollingmean_24\":277.191815232,\"pressure_rollingmean_24\":97.6289110707,\"vibration_rollingmean_24\":50.8853505161,\"volt_rollingstd_3\":21.0049565219,\"rotate_rollingstd_3\":67.5287259378,\"pressure_rollingstd_3\":12.9361526861,\"vibration_rollingstd_3\":4.61359760918,\"volt_rollingstd_24\":15.5377738062,\"rotate_rollingstd_24\":67.6519885441,\"pressure_rollingstd_24\":10.528274633,\"vibration_rollingstd_24\":6.94129487555,\"error1sum_rollingmean_24\":0.0,\"error2sum_rollingmean_24\":0.0,\"error3sum_rollingmean_24\":0.0,\"error4sum_rollingmean_24\":0.0,\"error5sum_rollingmean_24\":0.0,\"comp1sum\":489.0,\"comp2sum\":549.0,\"comp3sum\":549.0,\"comp4sum\":564.0,\"age\":18.0}]}")
###Output
Overwriting /azureml-share//pdmscore.py
###Markdown
These files are stored in the `AZUREML_NATIVE_SHARE_DIRECTORY` location on the kernel host machine, alongside the model saved by the `3_model_building.ipynb` notebook. In order to share these assets and operationalize the model, we create a new blob container and store a compressed file containing them for later retrieval from the deployment location.
###Code
# Compress the operationalization assets for easy blob storage transfer
MODEL_O16N = shutil.make_archive('o16n', 'zip', os.environ['AZUREML_NATIVE_SHARE_DIRECTORY'])
# Create a new container if necessary, otherwise you can use an existing container.
# This command creates the container if it does not already exist. Else it does nothing.
az_blob_service.create_container(MODEL_CONTAINER,
fail_on_exist=False,
public_access=PublicAccess.Container)
# Transfer the compressed operationalization assets into the blob container.
az_blob_service.create_blob_from_path(MODEL_CONTAINER, "o16n.zip", str(MODEL_O16N) )
# Time the notebook execution.
# This will only make sense if you "Run All" cells
toc = time.time()
print("Full run took %.2f minutes" % ((toc - tic)/60))
logger.log("Operationalization Run time", ((toc - tic)/60))
###Output
Full run took 0.78 minutes
|
ch14/ch14_part2.ipynb | ###Markdown
Machine Learning Textbook, 3rd Edition, Chapter 14 - A Closer Look at the Mechanics of TensorFlow (2/3) **You can view this notebook in the Jupyter notebook viewer (nbviewer.jupyter.org) or run it in Google Colab (colab.research.google.com) through the links below.** View in the Jupyter notebook viewer / Run in Google Colab. Table of contents: - TensorFlow Estimators - Working with feature columns - Machine learning with pre-made Estimators
###Code
import numpy as np
import tensorflow as tf
import pandas as pd
from IPython.display import Image
tf.__version__
###Output
_____no_output_____
###Markdown
TensorFlow Estimators. Steps for using a pre-made Estimator: * **Step 1:** Define an input function for loading the data * **Step 2:** Define feature columns to bridge the Estimator and the data * **Step 3:** Create an Estimator object, or convert a Keras model into an Estimator * **Step 4:** Use the Estimator: train(), evaluate(), predict()
###Code
tf.random.set_seed(1)
np.random.seed(1)
###Output
_____no_output_____
###Markdown
Working with feature columns * Definition: https://developers.google.com/machine-learning/glossary/feature_columns * Documentation: https://www.tensorflow.org/api_docs/python/tf/feature_column
###Code
Image(url='https://git.io/JL56E', width=700)
dataset_path = tf.keras.utils.get_file("auto-mpg.data",
("http://archive.ics.uci.edu/ml/machine-learning-databases"
"/auto-mpg/auto-mpg.data"))
column_names = ['MPG', 'Cylinders', 'Displacement', 'Horsepower',
'Weight', 'Acceleration', 'ModelYear', 'Origin']
df = pd.read_csv(dataset_path, names=column_names,
na_values = "?", comment='\t',
sep=" ", skipinitialspace=True)
df.tail()
print(df.isna().sum())
df = df.dropna()
df = df.reset_index(drop=True)
df.tail()
import sklearn
import sklearn.model_selection
df_train, df_test = sklearn.model_selection.train_test_split(df, train_size=0.8)
train_stats = df_train.describe().transpose()
train_stats
numeric_column_names = ['Cylinders', 'Displacement', 'Horsepower', 'Weight', 'Acceleration']
df_train_norm, df_test_norm = df_train.copy(), df_test.copy()
for col_name in numeric_column_names:
mean = train_stats.loc[col_name, 'mean']
std = train_stats.loc[col_name, 'std']
df_train_norm.loc[:, col_name] = (df_train_norm.loc[:, col_name] - mean)/std
df_test_norm.loc[:, col_name] = (df_test_norm.loc[:, col_name] - mean)/std
df_train_norm.tail()
###Output
_____no_output_____
###Markdown
Numeric columns
###Code
numeric_features = []
for col_name in numeric_column_names:
numeric_features.append(tf.feature_column.numeric_column(key=col_name))
numeric_features
feature_year = tf.feature_column.numeric_column(key="ModelYear")
bucketized_features = []
bucketized_features.append(tf.feature_column.bucketized_column(
source_column=feature_year,
boundaries=[73, 76, 79]))
print(bucketized_features)
feature_origin = tf.feature_column.categorical_column_with_vocabulary_list(
key='Origin',
vocabulary_list=[1, 2, 3])
categorical_indicator_features = []
categorical_indicator_features.append(tf.feature_column.indicator_column(feature_origin))
print(categorical_indicator_features)
###Output
[IndicatorColumn(categorical_column=VocabularyListCategoricalColumn(key='Origin', vocabulary_list=(1, 2, 3), dtype=tf.int64, default_value=-1, num_oov_buckets=0))]
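###Markdown
To see what these column definitions actually do to raw inputs, the following sketch (not part of the original notebook; the toy values are made up) pushes a small batch through `tf.keras.layers.DenseFeatures`, which applies the numeric passthrough, bucketizing and one-hot encoding defined above:
###Code
# Toy batch: one numeric column, the bucketized ModelYear and the Origin indicator.
example_batch = {
    'Cylinders': tf.constant([0.5, -0.8]),
    'ModelYear': tf.constant([72.0, 78.0]),
    'Origin': tf.constant([1, 3], dtype=tf.int64),
}
dense_layer = tf.keras.layers.DenseFeatures(
    [numeric_features[0]] + bucketized_features + categorical_indicator_features)
print(dense_layer(example_batch))
###Output
_____no_output_____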
###Markdown
Machine learning with pre-made Estimators
###Code
def train_input_fn(df_train, batch_size=8):
df = df_train.copy()
train_x, train_y = df, df.pop('MPG')
dataset = tf.data.Dataset.from_tensor_slices((dict(train_x), train_y))
# shuffle, repeat, and batch
return dataset.shuffle(1000).repeat().batch(batch_size)
## inspect one batch
ds = train_input_fn(df_train_norm)
batch = next(iter(ds))
print('keys:', batch[0].keys())
print('ModelYear:', batch[0]['ModelYear'])
all_feature_columns = (numeric_features +
bucketized_features +
categorical_indicator_features)
print(all_feature_columns)
regressor = tf.estimator.DNNRegressor(
feature_columns=all_feature_columns,
hidden_units=[32, 10],
model_dir='models/autompg-dnnregressor/')
EPOCHS = 1000
BATCH_SIZE = 8
total_steps = EPOCHS * int(np.ceil(len(df_train) / BATCH_SIZE))
print('training steps:', total_steps)
regressor.train(
input_fn=lambda:train_input_fn(df_train_norm, batch_size=BATCH_SIZE),
steps=total_steps)
reloaded_regressor = tf.estimator.DNNRegressor(
feature_columns=all_feature_columns,
hidden_units=[32, 10],
warm_start_from='models/autompg-dnnregressor/',
model_dir='models/autompg-dnnregressor/')
def eval_input_fn(df_test, batch_size=8):
df = df_test.copy()
test_x, test_y = df, df.pop('MPG')
dataset = tf.data.Dataset.from_tensor_slices((dict(test_x), test_y))
return dataset.batch(batch_size)
eval_results = reloaded_regressor.evaluate(
input_fn=lambda:eval_input_fn(df_test_norm, batch_size=8))
for key in eval_results:
print('{:15s} {}'.format(key, eval_results[key]))
print('average loss {:.4f}'.format(eval_results['average_loss']))
pred_res = regressor.predict(input_fn=lambda: eval_input_fn(df_test_norm, batch_size=8))
print(next(iter(pred_res)))
###Output
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from models/autompg-dnnregressor/model.ckpt-40000
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
{'predictions': array([22.583801], dtype=float32)}
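###Markdown
`predict()` returns a generator of dictionaries, one per test example. A short sketch (not in the original notebook) collects the point predictions into a plain Python list for further analysis:
###Code
# Collect every point prediction from the generator into a list of floats.
pred_res = regressor.predict(
    input_fn=lambda: eval_input_fn(df_test_norm, batch_size=8))
y_pred = [float(p['predictions'][0]) for p in pred_res]
print(len(y_pred), y_pred[:5])
###Output
_____no_output_____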
###Markdown
Boosted Tree Regressor
###Code
boosted_tree = tf.estimator.BoostedTreesRegressor(
feature_columns=all_feature_columns,
n_batches_per_layer=20,
n_trees=200)
boosted_tree.train(
input_fn=lambda:train_input_fn(df_train_norm, batch_size=BATCH_SIZE))
eval_results = boosted_tree.evaluate(
input_fn=lambda:eval_input_fn(df_test_norm, batch_size=8))
print(eval_results)
print('average loss {:.4f}'.format(eval_results['average_loss']))
###Output
INFO:tensorflow:Using default config.
WARNING:tensorflow:Using temporary folder as model directory: /tmp/tmp746f1h5a
INFO:tensorflow:Using config: {'_model_dir': '/tmp/tmp746f1h5a', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': allow_soft_placement: true
graph_options {
rewrite_options {
meta_optimizer_iterations: ONE
}
}
, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_experimental_max_worker_delay_secs': None, '_session_creation_timeout_secs': 7200, '_checkpoint_save_graph_def': True, '_service': None, '_cluster_spec': ClusterSpec({}), '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/tensorflow_estimator/python/estimator/canned/boosted_trees.py:397: VocabularyListCategoricalColumn._num_buckets (from tensorflow.python.feature_column.feature_column_v2) is deprecated and will be removed in a future version.
Instructions for updating:
The old _FeatureColumn APIs are being deprecated. Please use the new FeatureColumn APIs instead.
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Create CheckpointSaverHook.
WARNING:tensorflow:Issue encountered when serializing resources.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
'_Resource' object has no attribute 'name'
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
WARNING:tensorflow:Issue encountered when serializing resources.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
'_Resource' object has no attribute 'name'
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 0...
INFO:tensorflow:Saving checkpoints for 0 into /tmp/tmp746f1h5a/model.ckpt.
WARNING:tensorflow:Issue encountered when serializing resources.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
'_Resource' object has no attribute 'name'
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 0...
INFO:tensorflow:loss = 837.8687, step = 0
INFO:tensorflow:loss = 219.3074, step = 80 (0.894 sec)
INFO:tensorflow:global_step/sec: 86.4973
INFO:tensorflow:loss = 109.478325, step = 180 (0.812 sec)
INFO:tensorflow:global_step/sec: 146.743
INFO:tensorflow:loss = 21.706694, step = 280 (0.660 sec)
INFO:tensorflow:global_step/sec: 151.963
INFO:tensorflow:loss = 12.801405, step = 380 (0.647 sec)
INFO:tensorflow:global_step/sec: 155.564
INFO:tensorflow:loss = 18.742104, step = 480 (0.666 sec)
INFO:tensorflow:global_step/sec: 149.251
INFO:tensorflow:loss = 5.484076, step = 580 (0.663 sec)
INFO:tensorflow:global_step/sec: 149.341
INFO:tensorflow:loss = 2.4553428, step = 680 (0.666 sec)
INFO:tensorflow:global_step/sec: 151.175
INFO:tensorflow:loss = 2.500944, step = 780 (0.684 sec)
INFO:tensorflow:global_step/sec: 145.051
INFO:tensorflow:loss = 1.064991, step = 880 (0.674 sec)
INFO:tensorflow:global_step/sec: 149.881
INFO:tensorflow:loss = 3.018689, step = 980 (0.652 sec)
INFO:tensorflow:global_step/sec: 155.232
INFO:tensorflow:loss = 0.7638693, step = 1080 (0.652 sec)
INFO:tensorflow:global_step/sec: 151.555
INFO:tensorflow:loss = 2.092829, step = 1180 (0.652 sec)
INFO:tensorflow:global_step/sec: 152.79
INFO:tensorflow:loss = 2.6152208, step = 1280 (0.651 sec)
INFO:tensorflow:global_step/sec: 153.208
INFO:tensorflow:loss = 0.77570367, step = 1380 (0.660 sec)
INFO:tensorflow:global_step/sec: 152.572
INFO:tensorflow:loss = 1.7483119, step = 1480 (0.656 sec)
INFO:tensorflow:global_step/sec: 149.532
INFO:tensorflow:loss = 1.90478, step = 1580 (0.692 sec)
INFO:tensorflow:global_step/sec: 147.166
INFO:tensorflow:loss = 1.4025686, step = 1680 (0.677 sec)
INFO:tensorflow:global_step/sec: 147.67
INFO:tensorflow:loss = 2.4188242, step = 1780 (0.660 sec)
INFO:tensorflow:global_step/sec: 147.763
INFO:tensorflow:loss = 1.4844778, step = 1880 (0.666 sec)
INFO:tensorflow:global_step/sec: 152.603
INFO:tensorflow:loss = 1.5705873, step = 1980 (0.677 sec)
INFO:tensorflow:global_step/sec: 148.503
INFO:tensorflow:loss = 0.7602021, step = 2080 (0.670 sec)
INFO:tensorflow:global_step/sec: 147.624
INFO:tensorflow:loss = 0.5329679, step = 2180 (0.674 sec)
INFO:tensorflow:global_step/sec: 150.968
INFO:tensorflow:loss = 1.406549, step = 2280 (0.685 sec)
INFO:tensorflow:global_step/sec: 145.078
INFO:tensorflow:loss = 2.3533897, step = 2380 (0.675 sec)
INFO:tensorflow:global_step/sec: 146.43
INFO:tensorflow:loss = 0.629879, step = 2480 (0.665 sec)
INFO:tensorflow:global_step/sec: 152.977
INFO:tensorflow:loss = 0.3250631, step = 2580 (0.673 sec)
INFO:tensorflow:global_step/sec: 147.393
INFO:tensorflow:loss = 1.4166944, step = 2680 (0.668 sec)
INFO:tensorflow:global_step/sec: 148.665
INFO:tensorflow:loss = 0.7377922, step = 2780 (0.664 sec)
INFO:tensorflow:global_step/sec: 152.958
INFO:tensorflow:loss = 1.1060591, step = 2880 (0.662 sec)
INFO:tensorflow:global_step/sec: 151.334
INFO:tensorflow:loss = 0.34892416, step = 2980 (0.652 sec)
INFO:tensorflow:global_step/sec: 148.936
INFO:tensorflow:loss = 0.25539124, step = 3080 (0.675 sec)
INFO:tensorflow:global_step/sec: 150.445
INFO:tensorflow:loss = 1.1944735, step = 3180 (0.672 sec)
INFO:tensorflow:global_step/sec: 150.43
INFO:tensorflow:loss = 0.9333307, step = 3280 (0.642 sec)
INFO:tensorflow:global_step/sec: 156.407
INFO:tensorflow:loss = 0.43315756, step = 3380 (0.667 sec)
INFO:tensorflow:global_step/sec: 150.729
INFO:tensorflow:loss = 0.93331456, step = 3480 (0.653 sec)
INFO:tensorflow:global_step/sec: 152.593
INFO:tensorflow:loss = 0.30828488, step = 3580 (0.648 sec)
INFO:tensorflow:global_step/sec: 151.252
INFO:tensorflow:loss = 0.30939305, step = 3680 (0.665 sec)
INFO:tensorflow:global_step/sec: 149.645
INFO:tensorflow:loss = 0.4340995, step = 3780 (0.687 sec)
INFO:tensorflow:global_step/sec: 148.544
INFO:tensorflow:loss = 0.6970409, step = 3880 (0.672 sec)
INFO:tensorflow:global_step/sec: 147.923
INFO:tensorflow:loss = 0.23747596, step = 3980 (0.675 sec)
INFO:tensorflow:global_step/sec: 149.609
INFO:tensorflow:loss = 0.36530724, step = 4080 (0.661 sec)
INFO:tensorflow:global_step/sec: 150.428
INFO:tensorflow:loss = 0.21810669, step = 4180 (0.665 sec)
INFO:tensorflow:global_step/sec: 149.047
INFO:tensorflow:loss = 1.1884477, step = 4280 (0.658 sec)
INFO:tensorflow:global_step/sec: 150.751
INFO:tensorflow:loss = 0.39963245, step = 4380 (0.668 sec)
INFO:tensorflow:global_step/sec: 151.998
INFO:tensorflow:loss = 0.1955626, step = 4480 (0.662 sec)
INFO:tensorflow:global_step/sec: 148.913
INFO:tensorflow:loss = 0.29662588, step = 4580 (0.676 sec)
INFO:tensorflow:global_step/sec: 150.589
INFO:tensorflow:loss = 0.22540914, step = 4680 (0.672 sec)
INFO:tensorflow:global_step/sec: 149.814
INFO:tensorflow:loss = 0.6256131, step = 4780 (0.640 sec)
INFO:tensorflow:global_step/sec: 154.075
INFO:tensorflow:loss = 0.746071, step = 4880 (0.651 sec)
INFO:tensorflow:global_step/sec: 155.057
INFO:tensorflow:loss = 0.26733723, step = 4980 (0.656 sec)
INFO:tensorflow:global_step/sec: 150.574
INFO:tensorflow:loss = 0.17589232, step = 5080 (0.658 sec)
INFO:tensorflow:global_step/sec: 153.673
INFO:tensorflow:loss = 0.1606029, step = 5180 (0.658 sec)
INFO:tensorflow:global_step/sec: 150.927
INFO:tensorflow:loss = 0.3958196, step = 5280 (0.672 sec)
INFO:tensorflow:global_step/sec: 147.852
INFO:tensorflow:loss = 0.26347825, step = 5380 (0.672 sec)
INFO:tensorflow:global_step/sec: 149.548
INFO:tensorflow:loss = 0.25687283, step = 5480 (0.647 sec)
INFO:tensorflow:global_step/sec: 154.759
INFO:tensorflow:loss = 0.18431589, step = 5580 (0.662 sec)
INFO:tensorflow:global_step/sec: 150.858
INFO:tensorflow:loss = 0.07221815, step = 5680 (0.664 sec)
INFO:tensorflow:global_step/sec: 151.439
INFO:tensorflow:loss = 0.15109919, step = 5780 (0.673 sec)
INFO:tensorflow:global_step/sec: 148.703
INFO:tensorflow:loss = 0.15371259, step = 5880 (0.647 sec)
INFO:tensorflow:global_step/sec: 153.576
INFO:tensorflow:loss = 0.28414395, step = 5980 (0.668 sec)
INFO:tensorflow:global_step/sec: 149.675
INFO:tensorflow:loss = 0.12412469, step = 6080 (0.649 sec)
INFO:tensorflow:global_step/sec: 153.935
INFO:tensorflow:loss = 0.17493099, step = 6180 (0.650 sec)
INFO:tensorflow:global_step/sec: 156.49
INFO:tensorflow:loss = 0.20161584, step = 6280 (0.656 sec)
INFO:tensorflow:global_step/sec: 149.864
INFO:tensorflow:loss = 0.15605098, step = 6380 (0.675 sec)
INFO:tensorflow:global_step/sec: 150.537
INFO:tensorflow:loss = 0.2289162, step = 6480 (0.656 sec)
INFO:tensorflow:global_step/sec: 152.08
INFO:tensorflow:loss = 0.25568965, step = 6580 (0.669 sec)
INFO:tensorflow:global_step/sec: 147.127
INFO:tensorflow:loss = 0.1600621, step = 6680 (0.663 sec)
INFO:tensorflow:global_step/sec: 148.97
INFO:tensorflow:loss = 0.16423121, step = 6780 (0.692 sec)
INFO:tensorflow:global_step/sec: 146.828
INFO:tensorflow:loss = 0.08757869, step = 6880 (0.672 sec)
INFO:tensorflow:global_step/sec: 151.277
INFO:tensorflow:loss = 0.09908368, step = 6980 (0.665 sec)
INFO:tensorflow:global_step/sec: 149.872
INFO:tensorflow:loss = 0.12666771, step = 7080 (0.658 sec)
INFO:tensorflow:global_step/sec: 151.286
INFO:tensorflow:loss = 0.1907526, step = 7180 (0.665 sec)
INFO:tensorflow:global_step/sec: 150.787
INFO:tensorflow:loss = 0.20322911, step = 7280 (0.654 sec)
INFO:tensorflow:global_step/sec: 153.073
INFO:tensorflow:loss = 0.03985022, step = 7380 (0.654 sec)
INFO:tensorflow:global_step/sec: 152.314
INFO:tensorflow:loss = 0.076568246, step = 7480 (0.666 sec)
INFO:tensorflow:global_step/sec: 150.321
INFO:tensorflow:loss = 0.14039947, step = 7580 (0.663 sec)
INFO:tensorflow:global_step/sec: 147.414
INFO:tensorflow:loss = 0.06898737, step = 7680 (0.670 sec)
INFO:tensorflow:global_step/sec: 152.61
INFO:tensorflow:loss = 0.026434075, step = 7780 (0.674 sec)
INFO:tensorflow:global_step/sec: 147.718
INFO:tensorflow:loss = 0.10698028, step = 7880 (0.670 sec)
INFO:tensorflow:global_step/sec: 147.104
INFO:tensorflow:loss = 0.15419021, step = 7980 (0.672 sec)
INFO:tensorflow:global_step/sec: 151.595
INFO:tensorflow:loss = 0.028564457, step = 8080 (0.678 sec)
INFO:tensorflow:global_step/sec: 146.126
INFO:tensorflow:loss = 0.08336664, step = 8180 (0.687 sec)
INFO:tensorflow:global_step/sec: 144.349
INFO:tensorflow:loss = 0.047345236, step = 8280 (0.690 sec)
INFO:tensorflow:global_step/sec: 147.472
INFO:tensorflow:loss = 0.06706374, step = 8380 (0.680 sec)
INFO:tensorflow:global_step/sec: 146.082
INFO:tensorflow:loss = 0.050071187, step = 8480 (0.664 sec)
INFO:tensorflow:global_step/sec: 148.622
INFO:tensorflow:loss = 0.037193336, step = 8580 (0.708 sec)
INFO:tensorflow:global_step/sec: 144.32
INFO:tensorflow:loss = 0.029223727, step = 8680 (0.671 sec)
INFO:tensorflow:global_step/sec: 149.24
INFO:tensorflow:loss = 0.051640965, step = 8780 (0.665 sec)
INFO:tensorflow:global_step/sec: 147.059
INFO:tensorflow:loss = 0.06752524, step = 8880 (0.673 sec)
INFO:tensorflow:global_step/sec: 152.683
INFO:tensorflow:loss = 0.026380707, step = 8980 (0.667 sec)
INFO:tensorflow:global_step/sec: 147.011
INFO:tensorflow:loss = 0.032367367, step = 9080 (0.684 sec)
INFO:tensorflow:global_step/sec: 145.188
INFO:tensorflow:loss = 0.019801598, step = 9180 (0.697 sec)
INFO:tensorflow:global_step/sec: 146.64
INFO:tensorflow:loss = 0.063862294, step = 9280 (0.674 sec)
INFO:tensorflow:global_step/sec: 147.496
INFO:tensorflow:loss = 0.06304033, step = 9380 (0.675 sec)
INFO:tensorflow:global_step/sec: 146.633
INFO:tensorflow:loss = 0.07337183, step = 9480 (0.666 sec)
INFO:tensorflow:global_step/sec: 151.397
INFO:tensorflow:loss = 0.037572272, step = 9580 (0.680 sec)
INFO:tensorflow:global_step/sec: 146.522
INFO:tensorflow:loss = 0.044301596, step = 9680 (0.672 sec)
INFO:tensorflow:global_step/sec: 146.382
INFO:tensorflow:loss = 0.028739471, step = 9780 (0.684 sec)
INFO:tensorflow:global_step/sec: 149.796
INFO:tensorflow:loss = 0.03379544, step = 9880 (0.685 sec)
INFO:tensorflow:global_step/sec: 146.433
INFO:tensorflow:loss = 0.0344553, step = 9980 (0.657 sec)
INFO:tensorflow:global_step/sec: 150.518
INFO:tensorflow:loss = 0.08908106, step = 10080 (0.668 sec)
INFO:tensorflow:global_step/sec: 150.427
INFO:tensorflow:loss = 0.013899835, step = 10180 (0.692 sec)
INFO:tensorflow:global_step/sec: 143.874
INFO:tensorflow:loss = 0.061976884, step = 10280 (0.668 sec)
INFO:tensorflow:global_step/sec: 147.521
INFO:tensorflow:loss = 0.03084368, step = 10380 (0.681 sec)
INFO:tensorflow:global_step/sec: 149.166
INFO:tensorflow:loss = 0.01999862, step = 10480 (0.696 sec)
INFO:tensorflow:global_step/sec: 144.189
INFO:tensorflow:loss = 0.040555064, step = 10580 (0.701 sec)
INFO:tensorflow:global_step/sec: 143.296
INFO:tensorflow:loss = 0.027770132, step = 10680 (0.684 sec)
INFO:tensorflow:global_step/sec: 143.188
INFO:tensorflow:loss = 0.017593568, step = 10780 (0.682 sec)
INFO:tensorflow:global_step/sec: 149.262
INFO:tensorflow:loss = 0.018787973, step = 10880 (0.685 sec)
INFO:tensorflow:global_step/sec: 143.196
INFO:tensorflow:loss = 0.02845195, step = 10980 (0.695 sec)
INFO:tensorflow:global_step/sec: 146.746
INFO:tensorflow:loss = 0.034323335, step = 11080 (0.710 sec)
INFO:tensorflow:global_step/sec: 141.464
INFO:tensorflow:loss = 0.023412295, step = 11180 (0.697 sec)
INFO:tensorflow:global_step/sec: 141.584
INFO:tensorflow:loss = 0.007297569, step = 11280 (0.707 sec)
INFO:tensorflow:global_step/sec: 142.198
INFO:tensorflow:loss = 0.03247303, step = 11380 (0.683 sec)
INFO:tensorflow:global_step/sec: 146.013
INFO:tensorflow:loss = 0.022519596, step = 11480 (0.687 sec)
INFO:tensorflow:global_step/sec: 146.047
INFO:tensorflow:loss = 0.0234599, step = 11580 (0.681 sec)
INFO:tensorflow:global_step/sec: 146.486
INFO:tensorflow:loss = 0.026627034, step = 11680 (0.698 sec)
INFO:tensorflow:global_step/sec: 143.134
INFO:tensorflow:loss = 0.02978642, step = 11780 (0.697 sec)
INFO:tensorflow:global_step/sec: 143.115
INFO:tensorflow:loss = 0.0319783, step = 11880 (0.958 sec)
INFO:tensorflow:global_step/sec: 105.318
INFO:tensorflow:loss = 0.016719932, step = 11980 (0.679 sec)
INFO:tensorflow:global_step/sec: 145.919
INFO:tensorflow:loss = 0.041200273, step = 12080 (0.682 sec)
INFO:tensorflow:global_step/sec: 147.132
INFO:tensorflow:loss = 0.052656718, step = 12180 (0.703 sec)
INFO:tensorflow:global_step/sec: 142.326
INFO:tensorflow:loss = 0.01788846, step = 12280 (0.685 sec)
INFO:tensorflow:global_step/sec: 143.804
INFO:tensorflow:loss = 0.02921438, step = 12380 (0.690 sec)
INFO:tensorflow:global_step/sec: 147.057
INFO:tensorflow:loss = 0.0063668205, step = 12480 (0.694 sec)
INFO:tensorflow:global_step/sec: 144.634
INFO:tensorflow:loss = 0.0077824565, step = 12580 (0.686 sec)
INFO:tensorflow:global_step/sec: 142.048
INFO:tensorflow:loss = 0.03136026, step = 12680 (0.712 sec)
INFO:tensorflow:global_step/sec: 141.94
INFO:tensorflow:loss = 0.014961893, step = 12780 (0.707 sec)
INFO:tensorflow:global_step/sec: 142.562
INFO:tensorflow:loss = 0.010538283, step = 12880 (0.679 sec)
INFO:tensorflow:global_step/sec: 144.709
INFO:tensorflow:loss = 0.0085834535, step = 12980 (0.694 sec)
INFO:tensorflow:global_step/sec: 145.846
INFO:tensorflow:loss = 0.018545985, step = 13080 (0.698 sec)
INFO:tensorflow:global_step/sec: 144.449
INFO:tensorflow:loss = 0.008407678, step = 13180 (0.698 sec)
INFO:tensorflow:global_step/sec: 142.717
INFO:tensorflow:loss = 0.020155149, step = 13280 (0.676 sec)
INFO:tensorflow:global_step/sec: 146.434
INFO:tensorflow:loss = 0.01389962, step = 13380 (0.705 sec)
INFO:tensorflow:global_step/sec: 142.346
INFO:tensorflow:loss = 0.025226854, step = 13480 (0.709 sec)
INFO:tensorflow:global_step/sec: 142.987
INFO:tensorflow:loss = 0.009654707, step = 13580 (0.700 sec)
INFO:tensorflow:global_step/sec: 142.119
INFO:tensorflow:loss = 0.017410286, step = 13680 (0.708 sec)
INFO:tensorflow:global_step/sec: 141.215
INFO:tensorflow:loss = 0.018366385, step = 13780 (0.701 sec)
INFO:tensorflow:global_step/sec: 142.455
INFO:tensorflow:loss = 0.013979387, step = 13880 (0.688 sec)
INFO:tensorflow:global_step/sec: 143.128
INFO:tensorflow:loss = 0.011435907, step = 13980 (0.696 sec)
INFO:tensorflow:global_step/sec: 145.898
INFO:tensorflow:loss = 0.013547455, step = 14080 (0.703 sec)
INFO:tensorflow:global_step/sec: 143.063
INFO:tensorflow:loss = 0.012937121, step = 14180 (0.689 sec)
INFO:tensorflow:global_step/sec: 143.685
INFO:tensorflow:loss = 0.013082356, step = 14280 (0.697 sec)
INFO:tensorflow:global_step/sec: 144.153
INFO:tensorflow:loss = 0.011659414, step = 14380 (0.700 sec)
INFO:tensorflow:global_step/sec: 141.494
INFO:tensorflow:loss = 0.0023336636, step = 14480 (0.700 sec)
INFO:tensorflow:global_step/sec: 141.447
INFO:tensorflow:loss = 0.0056615053, step = 14580 (0.721 sec)
INFO:tensorflow:global_step/sec: 141.757
INFO:tensorflow:loss = 0.003452142, step = 14680 (0.700 sec)
INFO:tensorflow:global_step/sec: 142.967
INFO:tensorflow:loss = 0.013778169, step = 14780 (0.703 sec)
INFO:tensorflow:global_step/sec: 141.203
INFO:tensorflow:loss = 0.0068295673, step = 14880 (0.694 sec)
INFO:tensorflow:global_step/sec: 144.627
INFO:tensorflow:loss = 0.013753871, step = 14980 (0.697 sec)
INFO:tensorflow:global_step/sec: 142.164
INFO:tensorflow:loss = 0.0048329732, step = 15080 (0.717 sec)
INFO:tensorflow:global_step/sec: 140.002
INFO:tensorflow:loss = 0.00623441, step = 15180 (0.715 sec)
INFO:tensorflow:global_step/sec: 139.445
INFO:tensorflow:loss = 0.0046028835, step = 15280 (0.692 sec)
INFO:tensorflow:global_step/sec: 144.284
INFO:tensorflow:loss = 0.0045448802, step = 15380 (0.729 sec)
INFO:tensorflow:global_step/sec: 139.02
INFO:tensorflow:loss = 0.004668856, step = 15480 (0.683 sec)
INFO:tensorflow:global_step/sec: 144.616
INFO:tensorflow:loss = 0.0024793632, step = 15580 (0.695 sec)
INFO:tensorflow:global_step/sec: 143.317
INFO:tensorflow:loss = 0.008329028, step = 15680 (0.714 sec)
INFO:tensorflow:global_step/sec: 141.227
INFO:tensorflow:loss = 0.0068990486, step = 15780 (0.716 sec)
INFO:tensorflow:global_step/sec: 139.468
INFO:tensorflow:loss = 0.006620066, step = 15880 (0.722 sec)
INFO:tensorflow:global_step/sec: 139.091
INFO:tensorflow:loss = 0.0044790916, step = 15980 (0.730 sec)
INFO:tensorflow:global_step/sec: 137.055
INFO:tensorflow:loss = 0.003690621, step = 16080 (0.727 sec)
INFO:tensorflow:global_step/sec: 136.509
INFO:tensorflow:loss = 0.0038718886, step = 16180 (0.719 sec)
INFO:tensorflow:global_step/sec: 137.523
INFO:tensorflow:loss = 0.0022332452, step = 16280 (0.722 sec)
INFO:tensorflow:global_step/sec: 140.238
INFO:tensorflow:loss = 0.0030439221, step = 16380 (0.727 sec)
INFO:tensorflow:global_step/sec: 139.161
INFO:tensorflow:loss = 0.0070886184, step = 16480 (0.711 sec)
INFO:tensorflow:global_step/sec: 140.352
INFO:tensorflow:loss = 0.0037126257, step = 16580 (0.714 sec)
INFO:tensorflow:global_step/sec: 139.912
INFO:tensorflow:loss = 0.0019242018, step = 16680 (0.709 sec)
INFO:tensorflow:global_step/sec: 142.295
INFO:tensorflow:loss = 0.0029795493, step = 16780 (0.692 sec)
INFO:tensorflow:global_step/sec: 142.729
INFO:tensorflow:loss = 0.0030748053, step = 16880 (0.704 sec)
INFO:tensorflow:global_step/sec: 141.819
INFO:tensorflow:loss = 0.005926127, step = 16980 (0.723 sec)
INFO:tensorflow:global_step/sec: 138.655
INFO:tensorflow:loss = 0.0010825595, step = 17080 (0.725 sec)
INFO:tensorflow:global_step/sec: 137.582
INFO:tensorflow:loss = 0.0024114987, step = 17180 (0.709 sec)
INFO:tensorflow:global_step/sec: 140.605
INFO:tensorflow:loss = 0.006037759, step = 17280 (0.729 sec)
INFO:tensorflow:global_step/sec: 137.333
INFO:tensorflow:loss = 0.0033740005, step = 17380 (0.717 sec)
INFO:tensorflow:global_step/sec: 139.343
INFO:tensorflow:loss = 0.0022279082, step = 17480 (0.729 sec)
INFO:tensorflow:global_step/sec: 136.674
INFO:tensorflow:loss = 0.0015717878, step = 17580 (0.739 sec)
INFO:tensorflow:global_step/sec: 135.705
INFO:tensorflow:loss = 0.0048260475, step = 17680 (0.748 sec)
INFO:tensorflow:global_step/sec: 134.693
INFO:tensorflow:loss = 0.0023479688, step = 17780 (0.742 sec)
INFO:tensorflow:global_step/sec: 134.4
INFO:tensorflow:loss = 0.0027532578, step = 17880 (0.738 sec)
INFO:tensorflow:global_step/sec: 134.912
INFO:tensorflow:loss = 0.0021364442, step = 17980 (0.723 sec)
INFO:tensorflow:global_step/sec: 138.134
INFO:tensorflow:loss = 0.001539866, step = 18080 (0.706 sec)
INFO:tensorflow:global_step/sec: 142.905
INFO:tensorflow:loss = 0.001128615, step = 18180 (0.710 sec)
INFO:tensorflow:global_step/sec: 141.472
INFO:tensorflow:loss = 0.002287311, step = 18280 (0.703 sec)
INFO:tensorflow:global_step/sec: 141.299
INFO:tensorflow:loss = 0.000665123, step = 18380 (0.736 sec)
INFO:tensorflow:global_step/sec: 135.896
INFO:tensorflow:loss = 0.0019877788, step = 18480 (0.734 sec)
INFO:tensorflow:global_step/sec: 136.542
INFO:tensorflow:loss = 0.0010502317, step = 18580 (0.723 sec)
INFO:tensorflow:global_step/sec: 135.107
INFO:tensorflow:loss = 0.001308484, step = 18680 (0.733 sec)
INFO:tensorflow:global_step/sec: 137.7
INFO:tensorflow:loss = 0.0010152048, step = 18780 (0.725 sec)
INFO:tensorflow:global_step/sec: 140.395
INFO:tensorflow:loss = 0.0005139795, step = 18880 (0.728 sec)
INFO:tensorflow:global_step/sec: 137.054
INFO:tensorflow:loss = 0.0016813644, step = 18980 (0.707 sec)
INFO:tensorflow:global_step/sec: 139.483
INFO:tensorflow:loss = 0.00168139, step = 19080 (0.719 sec)
INFO:tensorflow:global_step/sec: 140.469
INFO:tensorflow:loss = 0.0017786454, step = 19180 (0.734 sec)
INFO:tensorflow:global_step/sec: 134.327
INFO:tensorflow:loss = 0.0024718647, step = 19280 (0.729 sec)
INFO:tensorflow:global_step/sec: 137.355
INFO:tensorflow:loss = 0.0012226652, step = 19380 (0.740 sec)
INFO:tensorflow:global_step/sec: 135.956
INFO:tensorflow:loss = 0.0006462211, step = 19480 (0.750 sec)
INFO:tensorflow:global_step/sec: 133.654
INFO:tensorflow:loss = 0.0022271674, step = 19580 (0.745 sec)
INFO:tensorflow:global_step/sec: 135.036
INFO:tensorflow:loss = 0.0012852926, step = 19680 (0.715 sec)
INFO:tensorflow:global_step/sec: 137.072
INFO:tensorflow:loss = 0.0012187359, step = 19780 (0.742 sec)
INFO:tensorflow:global_step/sec: 137.088
INFO:tensorflow:loss = 0.0011589411, step = 19880 (0.730 sec)
INFO:tensorflow:global_step/sec: 137.125
INFO:tensorflow:loss = 0.0007264713, step = 19980 (0.714 sec)
INFO:tensorflow:global_step/sec: 140.256
INFO:tensorflow:loss = 0.0009884271, step = 20080 (0.705 sec)
INFO:tensorflow:global_step/sec: 141.821
INFO:tensorflow:loss = 0.0011746504, step = 20180 (0.717 sec)
INFO:tensorflow:global_step/sec: 139.325
INFO:tensorflow:loss = 0.0027561653, step = 20280 (0.742 sec)
INFO:tensorflow:global_step/sec: 134.428
INFO:tensorflow:loss = 0.001586243, step = 20380 (0.718 sec)
INFO:tensorflow:global_step/sec: 137.497
INFO:tensorflow:loss = 0.0012851763, step = 20480 (0.709 sec)
INFO:tensorflow:global_step/sec: 142.007
INFO:tensorflow:loss = 0.0005696299, step = 20580 (0.746 sec)
INFO:tensorflow:global_step/sec: 135.354
INFO:tensorflow:loss = 0.0015202692, step = 20680 (0.732 sec)
INFO:tensorflow:global_step/sec: 136.794
INFO:tensorflow:loss = 0.00070091104, step = 20780 (0.741 sec)
INFO:tensorflow:global_step/sec: 133.222
INFO:tensorflow:loss = 0.0006927934, step = 20880 (0.725 sec)
INFO:tensorflow:global_step/sec: 139.476
INFO:tensorflow:loss = 0.0017318781, step = 20980 (0.723 sec)
INFO:tensorflow:global_step/sec: 137.871
INFO:tensorflow:loss = 0.00031344232, step = 21080 (0.707 sec)
INFO:tensorflow:global_step/sec: 140.038
INFO:tensorflow:loss = 0.0008819831, step = 21180 (0.708 sec)
INFO:tensorflow:global_step/sec: 143.39
INFO:tensorflow:loss = 0.00048810212, step = 21280 (0.730 sec)
INFO:tensorflow:global_step/sec: 136.624
INFO:tensorflow:loss = 0.000631156, step = 21380 (0.721 sec)
INFO:tensorflow:global_step/sec: 138.094
INFO:tensorflow:loss = 0.0005440748, step = 21480 (0.730 sec)
INFO:tensorflow:global_step/sec: 136.971
INFO:tensorflow:loss = 0.0011751838, step = 21580 (0.720 sec)
INFO:tensorflow:global_step/sec: 138.982
INFO:tensorflow:loss = 0.0011716125, step = 21680 (0.724 sec)
INFO:tensorflow:global_step/sec: 138.472
INFO:tensorflow:loss = 0.0005460314, step = 21780 (0.721 sec)
INFO:tensorflow:global_step/sec: 135.748
INFO:tensorflow:loss = 0.0012034024, step = 21880 (0.746 sec)
INFO:tensorflow:global_step/sec: 135.572
INFO:tensorflow:loss = 0.00041668452, step = 21980 (0.737 sec)
INFO:tensorflow:global_step/sec: 137.633
INFO:tensorflow:loss = 0.00022059455, step = 22080 (0.731 sec)
INFO:tensorflow:global_step/sec: 136.321
INFO:tensorflow:loss = 0.0005211315, step = 22180 (0.724 sec)
INFO:tensorflow:global_step/sec: 136.309
INFO:tensorflow:loss = 0.00031166515, step = 22280 (0.748 sec)
INFO:tensorflow:global_step/sec: 133.929
INFO:tensorflow:loss = 0.00050176255, step = 22380 (0.747 sec)
INFO:tensorflow:global_step/sec: 135.018
INFO:tensorflow:loss = 0.0008132309, step = 22480 (0.721 sec)
INFO:tensorflow:global_step/sec: 136.858
INFO:tensorflow:loss = 0.00017204777, step = 22580 (0.724 sec)
INFO:tensorflow:global_step/sec: 139.807
INFO:tensorflow:loss = 0.0003143691, step = 22680 (0.741 sec)
INFO:tensorflow:global_step/sec: 135.374
INFO:tensorflow:loss = 0.0007280413, step = 22780 (0.744 sec)
INFO:tensorflow:global_step/sec: 133.94
INFO:tensorflow:loss = 0.00063155184, step = 22880 (0.718 sec)
INFO:tensorflow:global_step/sec: 139.745
INFO:tensorflow:loss = 0.00014397503, step = 22980 (0.736 sec)
INFO:tensorflow:global_step/sec: 133.383
INFO:tensorflow:loss = 0.00060017843, step = 23080 (0.733 sec)
INFO:tensorflow:global_step/sec: 138.561
INFO:tensorflow:loss = 0.0002159341, step = 23180 (0.744 sec)
INFO:tensorflow:global_step/sec: 133.364
INFO:tensorflow:loss = 0.0003697059, step = 23280 (0.726 sec)
INFO:tensorflow:global_step/sec: 138.637
INFO:tensorflow:loss = 0.0004213114, step = 23380 (0.747 sec)
INFO:tensorflow:global_step/sec: 133.594
INFO:tensorflow:loss = 0.00024899258, step = 23480 (0.738 sec)
INFO:tensorflow:global_step/sec: 135.954
INFO:tensorflow:loss = 0.0005113005, step = 23580 (0.715 sec)
INFO:tensorflow:global_step/sec: 138.529
INFO:tensorflow:loss = 0.0001908144, step = 23680 (0.723 sec)
INFO:tensorflow:global_step/sec: 136.735
INFO:tensorflow:loss = 0.00033848634, step = 23780 (0.745 sec)
INFO:tensorflow:global_step/sec: 137.649
INFO:tensorflow:loss = 0.00038831055, step = 23880 (0.744 sec)
INFO:tensorflow:global_step/sec: 130.216
INFO:tensorflow:loss = 0.00029680112, step = 23980 (0.790 sec)
INFO:tensorflow:global_step/sec: 125.405
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 24000...
INFO:tensorflow:Saving checkpoints for 24000 into /tmp/tmp746f1h5a/model.ckpt.
WARNING:tensorflow:Issue encountered when serializing resources.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
'_Resource' object has no attribute 'name'
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 24000...
INFO:tensorflow:Loss for final step: 0.00043947218.
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Starting evaluation at 2021-08-21T08:37:31
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from /tmp/tmp746f1h5a/model.ckpt-24000
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Inference Time : 0.32449s
INFO:tensorflow:Finished evaluation at 2021-08-21-08:37:31
INFO:tensorflow:Saving dict for global step 24000: average_loss = 12.833512, global_step = 24000, label/mean = 23.611391, loss = 12.711438, prediction/mean = 22.494513
WARNING:tensorflow:Issue encountered when serializing resources.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
'_Resource' object has no attribute 'name'
INFO:tensorflow:Saving 'checkpoint_path' summary for global step 24000: /tmp/tmp746f1h5a/model.ckpt-24000
{'average_loss': 12.833512, 'label/mean': 23.611391, 'loss': 12.711438, 'prediction/mean': 22.494513, 'global_step': 24000}
평균 손실 12.8335
###Markdown
머신 러닝 교과서 3판 (Machine Learning Textbook, 3rd Edition), Chapter 14 - A Closer Look at the Mechanics of TensorFlow (2/3) **You can view this notebook in the Jupyter notebook viewer (nbviewer.jupyter.org) or run it in Google Colab (colab.research.google.com) through the links below.** View in the Jupyter notebook viewer Run in Google Colab Table of contents - TensorFlow Estimators - Working with feature columns - Machine learning with pre-made Estimators
###Code
import numpy as np
import tensorflow as tf
import pandas as pd
from IPython.display import Image
tf.__version__
###Output
_____no_output_____
###Markdown
TensorFlow Estimators Steps for using a pre-made Estimator * **Step 1:** Define an input function for loading the data * **Step 2:** Define feature columns to bridge the Estimator and the data * **Step 3:** Instantiate the Estimator object (or convert a Keras model into an Estimator) * **Step 4:** Use the Estimator: train() evaluate() predict()
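The cells that follow apply these four steps to the Auto MPG data. As a compact, self-contained illustration of the same workflow, here is a minimal sketch on a toy in-memory dataset (the `toy_input_fn`, the feature names `x1`/`x2`, and the hyperparameters are assumptions made for this example, not part of the original notebook):

```python
import numpy as np
import tensorflow as tf

def toy_input_fn(batch_size=4):
    # Step 1: an input function returning a tf.data.Dataset of (features_dict, labels)
    features = {'x1': np.arange(8, dtype=np.float32),
                'x2': 2.0 * np.arange(8, dtype=np.float32)}
    labels = 3.0 * np.arange(8, dtype=np.float32)
    dataset = tf.data.Dataset.from_tensor_slices((features, labels))
    return dataset.shuffle(8).repeat().batch(batch_size)

# Step 2: feature columns connect the dict keys above to the model inputs
toy_columns = [tf.feature_column.numeric_column('x1'),
               tf.feature_column.numeric_column('x2')]

# Step 3: instantiate a pre-made estimator
toy_regressor = tf.estimator.DNNRegressor(feature_columns=toy_columns,
                                           hidden_units=[4])

# Step 4: use it -- train(), evaluate() and predict() all take an input_fn
toy_regressor.train(input_fn=toy_input_fn, steps=20)
print(toy_regressor.evaluate(input_fn=toy_input_fn, steps=2))
```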
###Code
tf.random.set_seed(1)
np.random.seed(1)
###Output
_____no_output_____
###Markdown
Working with feature columns * Definition: https://developers.google.com/machine-learning/glossary/feature_columns * Documentation: https://www.tensorflow.org/api_docs/python/tf/feature_column
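Before building feature columns for the Auto MPG data below, here is a small self-contained sketch of the three kinds used in this section (the toy `size`/`color` features are invented for illustration and are not part of the dataset):

```python
import tensorflow as tf

# numeric column: a float feature passed through as-is
size_col = tf.feature_column.numeric_column('size')

# bucketized column: the numeric source discretized into ranges by its boundaries
size_buckets = tf.feature_column.bucketized_column(size_col, boundaries=[10.0, 20.0])

# categorical vocabulary column + indicator column: known values one-hot encoded
color_cat = tf.feature_column.categorical_column_with_vocabulary_list(
    'color', vocabulary_list=['red', 'green', 'blue'])
color_onehot = tf.feature_column.indicator_column(color_cat)

# DenseFeatures concatenates all columns into one dense vector per example:
# 1 value for size, 3 one-hot bucket slots, 3 one-hot color slots
dense = tf.keras.layers.DenseFeatures([size_col, size_buckets, color_onehot])
print(dense({'size': tf.constant([[12.0]]), 'color': tf.constant([['red']])}))
```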
###Code
Image(url='https://git.io/JL56E', width=700)
dataset_path = tf.keras.utils.get_file("auto-mpg.data",
("http://archive.ics.uci.edu/ml/machine-learning-databases"
"/auto-mpg/auto-mpg.data"))
column_names = ['MPG', 'Cylinders', 'Displacement', 'Horsepower',
'Weight', 'Acceleration', 'ModelYear', 'Origin']
df = pd.read_csv(dataset_path, names=column_names,
na_values = "?", comment='\t',
sep=" ", skipinitialspace=True)
df.tail()
print(df.isna().sum())
df = df.dropna()
df = df.reset_index(drop=True)
df.tail()
import sklearn
import sklearn.model_selection
df_train, df_test = sklearn.model_selection.train_test_split(df, train_size=0.8)
train_stats = df_train.describe().transpose()
train_stats
numeric_column_names = ['Cylinders', 'Displacement', 'Horsepower', 'Weight', 'Acceleration']
df_train_norm, df_test_norm = df_train.copy(), df_test.copy()
for col_name in numeric_column_names:
mean = train_stats.loc[col_name, 'mean']
std = train_stats.loc[col_name, 'std']
df_train_norm.loc[:, col_name] = (df_train_norm.loc[:, col_name] - mean)/std
df_test_norm.loc[:, col_name] = (df_test_norm.loc[:, col_name] - mean)/std
df_train_norm.tail()
###Output
_____no_output_____
###Markdown
Numeric columns
###Code
numeric_features = []
for col_name in numeric_column_names:
numeric_features.append(tf.feature_column.numeric_column(key=col_name))
numeric_features
feature_year = tf.feature_column.numeric_column(key="ModelYear")
bucketized_features = []
bucketized_features.append(tf.feature_column.bucketized_column(
source_column=feature_year,
boundaries=[73, 76, 79]))
print(bucketized_features)
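# Added note (not in the original notebook): with boundaries [73, 76, 79] the
# bucketized column one-hot encodes ModelYear into four buckets:
#   year < 73,   73 <= year < 76,   76 <= year < 79,   year >= 79.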
feature_origin = tf.feature_column.categorical_column_with_vocabulary_list(
key='Origin',
vocabulary_list=[1, 2, 3])
categorical_indicator_features = []
categorical_indicator_features.append(tf.feature_column.indicator_column(feature_origin))
print(categorical_indicator_features)
###Output
[IndicatorColumn(categorical_column=VocabularyListCategoricalColumn(key='Origin', vocabulary_list=(1, 2, 3), dtype=tf.int64, default_value=-1, num_oov_buckets=0))]
###Markdown
Machine learning with pre-made Estimators
###Code
def train_input_fn(df_train, batch_size=8):
df = df_train.copy()
train_x, train_y = df, df.pop('MPG')
dataset = tf.data.Dataset.from_tensor_slices((dict(train_x), train_y))
    # shuffle (buffer of 1,000), repeat indefinitely, and batch
return dataset.shuffle(1000).repeat().batch(batch_size)
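# Added note: the estimator calls input_fn with no arguments (hence the lambda wrappers
# further below) and expects a tf.data.Dataset of (features_dict, labels) pairs.
# Because this dataset repeats indefinitely, the number of effective epochs is
# controlled by the `steps` argument passed to train().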
## inspection
ds = train_input_fn(df_train_norm)
batch = next(iter(ds))
print('키:', batch[0].keys())
print('ModelYear:', batch[0]['ModelYear'])
all_feature_columns = (numeric_features +
bucketized_features +
categorical_indicator_features)
print(all_feature_columns)
regressor = tf.estimator.DNNRegressor(
feature_columns=all_feature_columns,
hidden_units=[32, 10],
model_dir='models/autompg-dnnregressor/')
EPOCHS = 1000
BATCH_SIZE = 8
total_steps = EPOCHS * int(np.ceil(len(df_train) / BATCH_SIZE))
print('훈련 스텝:', total_steps)
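# Added note: with about 313 training rows (80% of the 392 complete samples),
# ceil(313 / 8) = 40 batches per epoch, so total_steps = 1000 * 40 = 40,000 --
# which is why the checkpoint restored below is model.ckpt-40000.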
regressor.train(
input_fn=lambda:train_input_fn(df_train_norm, batch_size=BATCH_SIZE),
steps=total_steps)
reloaded_regressor = tf.estimator.DNNRegressor(
feature_columns=all_feature_columns,
hidden_units=[32, 10],
warm_start_from='models/autompg-dnnregressor/',
model_dir='models/autompg-dnnregressor/')
def eval_input_fn(df_test, batch_size=8):
df = df_test.copy()
test_x, test_y = df, df.pop('MPG')
dataset = tf.data.Dataset.from_tensor_slices((dict(test_x), test_y))
return dataset.batch(batch_size)
eval_results = reloaded_regressor.evaluate(
input_fn=lambda:eval_input_fn(df_test_norm, batch_size=8))
for key in eval_results:
print('{:15s} {}'.format(key, eval_results[key]))
print('평균 손실 {:.4f}'.format(eval_results['average_loss']))
pred_res = regressor.predict(input_fn=lambda: eval_input_fn(df_test_norm, batch_size=8))
print(next(iter(pred_res)))
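# Illustrative follow-up (not in the original notebook): collect every test-set
# prediction into a flat array and compare it against the true MPG values.
test_preds = np.array([p['predictions'][0] for p in regressor.predict(
    input_fn=lambda: eval_input_fn(df_test_norm, batch_size=8))])
print('MAE:', np.mean(np.abs(test_preds - df_test_norm['MPG'].values)))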
###Output
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from models/autompg-dnnregressor/model.ckpt-40000
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
{'predictions': array([24.049746], dtype=float32)}
###Markdown
Boosted Tree Regressor
###Code
boosted_tree = tf.estimator.BoostedTreesRegressor(
feature_columns=all_feature_columns,
n_batches_per_layer=20,
n_trees=200)
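# Added note: n_batches_per_layer is (roughly) how many input batches are used to grow
# each tree layer. No `steps` argument is passed to train() below, so training stops
# once all 200 trees are built; assuming the default max_depth of 6, that is
# 200 trees * 6 layers * 20 batches per layer = 24,000 global steps, consistent with
# the 24,000-step checkpoint seen in the training log.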
boosted_tree.train(
input_fn=lambda:train_input_fn(df_train_norm, batch_size=BATCH_SIZE))
eval_results = boosted_tree.evaluate(
input_fn=lambda:eval_input_fn(df_test_norm, batch_size=8))
print(eval_results)
print('평균 손실 {:.4f}'.format(eval_results['average_loss']))
###Output
INFO:tensorflow:Using default config.
WARNING:tensorflow:Using temporary folder as model directory: /tmp/tmpxc53gz43
INFO:tensorflow:Using config: {'_model_dir': '/tmp/tmpxc53gz43', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': allow_soft_placement: true
graph_options {
rewrite_options {
meta_optimizer_iterations: ONE
}
}
, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_experimental_max_worker_delay_secs': None, '_session_creation_timeout_secs': 7200, '_checkpoint_save_graph_def': True, '_service': None, '_cluster_spec': ClusterSpec({}), '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/tensorflow_estimator/python/estimator/canned/boosted_trees.py:398: VocabularyListCategoricalColumn._num_buckets (from tensorflow.python.feature_column.feature_column_v2) is deprecated and will be removed in a future version.
Instructions for updating:
The old _FeatureColumn APIs are being deprecated. Please use the new FeatureColumn APIs instead.
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Create CheckpointSaverHook.
WARNING:tensorflow:Issue encountered when serializing resources.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
'_Resource' object has no attribute 'name'
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 24000...
INFO:tensorflow:Saving checkpoints for 24000 into /tmp/tmp746f1h5a/model.ckpt.
WARNING:tensorflow:Issue encountered when serializing resources.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
'_Resource' object has no attribute 'name'
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 24000...
INFO:tensorflow:Loss for final step: 0.00043947218.
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Starting evaluation at 2021-08-21T08:37:31
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from /tmp/tmp746f1h5a/model.ckpt-24000
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Inference Time : 0.32449s
INFO:tensorflow:Finished evaluation at 2021-08-21-08:37:31
INFO:tensorflow:Saving dict for global step 24000: average_loss = 12.833512, global_step = 24000, label/mean = 23.611391, loss = 12.711438, prediction/mean = 22.494513
WARNING:tensorflow:Issue encountered when serializing resources.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
'_Resource' object has no attribute 'name'
INFO:tensorflow:Saving 'checkpoint_path' summary for global step 24000: /tmp/tmp746f1h5a/model.ckpt-24000
{'average_loss': 12.833512, 'label/mean': 23.611391, 'loss': 12.711438, 'prediction/mean': 22.494513, 'global_step': 24000}
Average loss 12.8335
###Markdown
*Python Machine Learning 3rd Edition* by [Sebastian Raschka](https://sebastianraschka.com) & [Vahid Mirjalili](http://vahidmirjalili.com), Packt Publishing Ltd. 2019. Code Repository: https://github.com/rasbt/python-machine-learning-book-3rd-edition. Code License: [MIT License](https://github.com/rasbt/python-machine-learning-book-3rd-edition/blob/master/LICENSE.txt) Chapter 14: Going Deeper -- the Mechanics of TensorFlow (Part 2/3) Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
###Code
%load_ext watermark
%watermark -a "Sebastian Raschka & Vahid Mirjalili" -u -d -p numpy,scipy,matplotlib,tensorflow
import numpy as np
import tensorflow as tf
import pandas as pd
from IPython.display import Image
###Output
_____no_output_____
###Markdown
TensorFlow Estimators Steps for using pre-made estimators * **Step 1:** Define the input function for importing the data * **Step 2:** Define the feature columns to bridge between the estimator and the data * **Step 3:** Instantiate an estimator or convert a Keras model to an estimator * **Step 4:** Use the estimator: train() evaluate() predict()
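Before applying these steps to the Auto MPG data below, the next cell is a minimal sketch (not part of the original notebook) that strings all four steps together on a tiny synthetic regression problem; `toy_input_fn` and `toy_regressor` are illustrative names only.
###Code
# Minimal sketch, not from the book: the four estimator steps end to end
# on synthetic data (assumes a TensorFlow version that still ships tf.estimator).
import numpy as np
import tensorflow as tf

# Step 1: input function returning a tf.data.Dataset of (features_dict, labels)
def toy_input_fn(batch_size=8):
    x = np.random.uniform(-1, 1, size=100).astype(np.float32)
    y = 2.0 * x + 0.5
    ds = tf.data.Dataset.from_tensor_slices(({'x': x}, y))
    return ds.shuffle(100).repeat().batch(batch_size)

# Step 2: feature columns describe how the raw feature 'x' enters the model
toy_feature_columns = [tf.feature_column.numeric_column(key='x')]

# Step 3: instantiate a pre-made estimator
toy_regressor = tf.estimator.LinearRegressor(feature_columns=toy_feature_columns)

# Step 4: use it -- train(), then evaluate()
toy_regressor.train(input_fn=toy_input_fn, steps=200)
print(toy_regressor.evaluate(input_fn=toy_input_fn, steps=10))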
###Code
tf.random.set_seed(1)
np.random.seed(1)
###Output
_____no_output_____
###Markdown
Working with feature columns * See definition: https://developers.google.com/machine-learning/glossary/feature_columns * Documentation: https://www.tensorflow.org/api_docs/python/tf/feature_column
###Code
Image(filename='images/02.png', width=700)
dataset_path = tf.keras.utils.get_file("auto-mpg.data",
("http://archive.ics.uci.edu/ml/machine-learning-databases"
"/auto-mpg/auto-mpg.data"))
column_names = ['MPG', 'Cylinders', 'Displacement', 'Horsepower',
'Weight', 'Acceleration', 'ModelYear', 'Origin']
df = pd.read_csv(dataset_path, names=column_names,
na_values = "?", comment='\t',
sep=" ", skipinitialspace=True)
df.tail()
print(df.isna().sum())
df = df.dropna()
df = df.reset_index(drop=True)
df.tail()
import sklearn
import sklearn.model_selection
df_train, df_test = sklearn.model_selection.train_test_split(df, train_size=0.8)
train_stats = df_train.describe().transpose()
train_stats
numeric_column_names = ['Cylinders', 'Displacement', 'Horsepower', 'Weight', 'Acceleration']
df_train_norm, df_test_norm = df_train.copy(), df_test.copy()
for col_name in numeric_column_names:
mean = train_stats.loc[col_name, 'mean']
std = train_stats.loc[col_name, 'std']
df_train_norm.loc[:, col_name] = (df_train_norm.loc[:, col_name] - mean)/std
df_test_norm.loc[:, col_name] = (df_test_norm.loc[:, col_name] - mean)/std
df_train_norm.tail()
###Output
_____no_output_____
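###Markdown
As a quick sanity check (an addition, not in the original notebook): after standardization, each numeric training column should have a mean of roughly 0 and a standard deviation of roughly 1.
###Code
# Sanity check sketch: standardized training columns should show ~0 mean and ~1 std
# (assumes df_train_norm and numeric_column_names from the cell above are in scope).
print(df_train_norm[numeric_column_names].describe().loc[['mean', 'std']].round(2))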
###Markdown
Numeric Columns
###Code
numeric_features = []
for col_name in numeric_column_names:
numeric_features.append(tf.feature_column.numeric_column(key=col_name))
numeric_features
feature_year = tf.feature_column.numeric_column(key="ModelYear")
bucketized_features = []
bucketized_features.append(tf.feature_column.bucketized_column(
source_column=feature_year,
boundaries=[73, 76, 79]))
print(bucketized_features)
feature_origin = tf.feature_column.categorical_column_with_vocabulary_list(
key='Origin',
vocabulary_list=[1, 2, 3])
categorical_indicator_features = []
categorical_indicator_features.append(tf.feature_column.indicator_column(feature_origin))
print(categorical_indicator_features)
###Output
[IndicatorColumn(categorical_column=VocabularyListCategoricalColumn(key='Origin', vocabulary_list=(1, 2, 3), dtype=tf.int64, default_value=-1, num_oov_buckets=0))]
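###Markdown
As an illustrative aside (not part of the original notebook), a feature column can be applied to a small batch of raw features with `tf.keras.layers.DenseFeatures` to inspect the tensor it produces; here the bucketized `ModelYear` column maps each year to a one-hot indicator over the four buckets defined by the boundaries [73, 76, 79].
###Code
# Illustrative example: what the bucketized column produces on a toy batch
# (assumes tf and bucketized_features from the cells above are in scope).
example_batch = {'ModelYear': tf.constant([[70.0], [75.0], [82.0]])}
dense_layer = tf.keras.layers.DenseFeatures(bucketized_features)
print(dense_layer(example_batch))  # shape (3, 4): one-hot bucket membership per row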
###Markdown
Machine learning with pre-made Estimators
###Code
def train_input_fn(df_train, batch_size=8):
df = df_train.copy()
train_x, train_y = df, df.pop('MPG')
dataset = tf.data.Dataset.from_tensor_slices((dict(train_x), train_y))
# shuffle, repeat, and batch the examples
return dataset.shuffle(1000).repeat().batch(batch_size)
## inspection
ds = train_input_fn(df_train_norm)
batch = next(iter(ds))
print('Keys:', batch[0].keys())
print('Batch Model Years:', batch[0]['ModelYear'])
all_feature_columns = (numeric_features +
bucketized_features +
categorical_indicator_features)
print(all_feature_columns)
regressor = tf.estimator.DNNRegressor(
feature_columns=all_feature_columns,
hidden_units=[32, 10],
model_dir='models/autompg-dnnregressor/')
EPOCHS = 1000
BATCH_SIZE = 8
total_steps = EPOCHS * int(np.ceil(len(df_train) / BATCH_SIZE))
print('Training Steps:', total_steps)
regressor.train(
input_fn=lambda:train_input_fn(df_train_norm, batch_size=BATCH_SIZE),
steps=total_steps)
reloaded_regressor = tf.estimator.DNNRegressor(
feature_columns=all_feature_columns,
hidden_units=[32, 10],
warm_start_from='models/autompg-dnnregressor/',
model_dir='models/autompg-dnnregressor/')
def eval_input_fn(df_test, batch_size=8):
df = df_test.copy()
test_x, test_y = df, df.pop('MPG')
dataset = tf.data.Dataset.from_tensor_slices((dict(test_x), test_y))
return dataset.batch(batch_size)
eval_results = reloaded_regressor.evaluate(
input_fn=lambda:eval_input_fn(df_test_norm, batch_size=8))
for key in eval_results:
print('{:15s} {}'.format(key, eval_results[key]))
print('Average-Loss {:.4f}'.format(eval_results['average_loss']))
pred_res = regressor.predict(input_fn=lambda: eval_input_fn(df_test_norm, batch_size=8))
print(next(iter(pred_res)))
###Output
INFO:tensorflow:Calling model_fn.
WARNING:tensorflow:Layer dnn is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx.
If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from models/autompg-dnnregressor/model.ckpt-40000
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
{'predictions': array([23.719353], dtype=float32)}
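###Markdown
Step 3 of the workflow also allows converting a Keras model into an estimator instead of using a pre-made one. That path is not exercised in this notebook, but a minimal sketch with `tf.keras.estimator.model_to_estimator` could look like the cell below; note that any input_fn used with the resulting estimator must yield a features dict keyed by the Keras input layer name(s).
###Code
# Minimal sketch (not the book's code): wrapping a compiled Keras model
# as an estimator that exposes the same train()/evaluate()/predict() interface.
inputs = tf.keras.Input(shape=(5,), name='numeric_inputs')
hidden = tf.keras.layers.Dense(32, activation='relu')(inputs)
outputs = tf.keras.layers.Dense(1)(hidden)
keras_model = tf.keras.Model(inputs=inputs, outputs=outputs)
keras_model.compile(optimizer='adam', loss='mse')

keras_estimator = tf.keras.estimator.model_to_estimator(keras_model=keras_model)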
###Markdown
Boosted Tree Regressor
###Code
boosted_tree = tf.estimator.BoostedTreesRegressor(
feature_columns=all_feature_columns,
n_batches_per_layer=20,
n_trees=200)
boosted_tree.train(
input_fn=lambda:train_input_fn(df_train_norm, batch_size=BATCH_SIZE))
eval_results = boosted_tree.evaluate(
input_fn=lambda:eval_input_fn(df_test_norm, batch_size=8))
print(eval_results)
print('Average-Loss {:.4f}'.format(eval_results['average_loss']))
###Output
INFO:tensorflow:Using default config.
WARNING:tensorflow:Using temporary folder as model directory: /tmp/tmpbzo1p2wi
INFO:tensorflow:Using config: {'_model_dir': '/tmp/tmpbzo1p2wi', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': allow_soft_placement: true
graph_options {
rewrite_options {
meta_optimizer_iterations: ONE
}
}
, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_experimental_max_worker_delay_secs': None, '_session_creation_timeout_secs': 7200, '_service': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7f47bc30b7d0>, '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}
INFO:tensorflow:Calling model_fn.
WARNING:tensorflow:From /home/vahid/anaconda3/envs/tf2/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/canned/boosted_trees.py:214: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.cast` instead.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Create CheckpointSaverHook.
WARNING:tensorflow:Issue encountered when serializing resources.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
'_Resource' object has no attribute 'name'
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
WARNING:tensorflow:Issue encountered when serializing resources.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
'_Resource' object has no attribute 'name'
INFO:tensorflow:Saving checkpoints for 0 into /tmp/tmpbzo1p2wi/model.ckpt.
WARNING:tensorflow:Issue encountered when serializing resources.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
'_Resource' object has no attribute 'name'
INFO:tensorflow:loss = 402.19623, step = 0
WARNING:tensorflow:It seems that global step (tf.train.get_global_step) has not been increased. Current value (could be stable): 0 vs previous value: 0. You could increase the global step by passing tf.train.get_global_step() to Optimizer.apply_gradients or Optimizer.minimize.
WARNING:tensorflow:It seems that global step (tf.train.get_global_step) has not been increased. Current value (could be stable): 0 vs previous value: 0. You could increase the global step by passing tf.train.get_global_step() to Optimizer.apply_gradients or Optimizer.minimize.
WARNING:tensorflow:It seems that global step (tf.train.get_global_step) has not been increased. Current value (could be stable): 0 vs previous value: 0. You could increase the global step by passing tf.train.get_global_step() to Optimizer.apply_gradients or Optimizer.minimize.
WARNING:tensorflow:It seems that global step (tf.train.get_global_step) has not been increased. Current value (could be stable): 0 vs previous value: 0. You could increase the global step by passing tf.train.get_global_step() to Optimizer.apply_gradients or Optimizer.minimize.
WARNING:tensorflow:It seems that global step (tf.train.get_global_step) has not been increased. Current value (could be stable): 0 vs previous value: 0. You could increase the global step by passing tf.train.get_global_step() to Optimizer.apply_gradients or Optimizer.minimize.
INFO:tensorflow:loss = 289.26328, step = 80 (0.462 sec)
INFO:tensorflow:global_step/sec: 157.704
INFO:tensorflow:loss = 93.58242, step = 180 (0.363 sec)
INFO:tensorflow:global_step/sec: 422.808
INFO:tensorflow:loss = 45.606873, step = 280 (0.243 sec)
INFO:tensorflow:global_step/sec: 416.715
INFO:tensorflow:loss = 19.545433, step = 380 (0.240 sec)
INFO:tensorflow:global_step/sec: 416.626
INFO:tensorflow:loss = 6.4179554, step = 480 (0.245 sec)
INFO:tensorflow:global_step/sec: 407.822
INFO:tensorflow:loss = 4.7701707, step = 580 (0.231 sec)
INFO:tensorflow:global_step/sec: 408.05
INFO:tensorflow:loss = 4.569898, step = 680 (0.244 sec)
INFO:tensorflow:global_step/sec: 420.57
INFO:tensorflow:loss = 2.5075686, step = 780 (0.249 sec)
INFO:tensorflow:global_step/sec: 410.68
INFO:tensorflow:loss = 2.6939745, step = 880 (0.244 sec)
INFO:tensorflow:global_step/sec: 411.964
INFO:tensorflow:loss = 1.5966964, step = 980 (0.248 sec)
INFO:tensorflow:global_step/sec: 403.965
INFO:tensorflow:loss = 3.3678646, step = 1080 (0.250 sec)
INFO:tensorflow:global_step/sec: 398.728
INFO:tensorflow:loss = 2.3181179, step = 1180 (0.238 sec)
INFO:tensorflow:global_step/sec: 396.897
INFO:tensorflow:loss = 1.8086417, step = 1280 (0.250 sec)
INFO:tensorflow:global_step/sec: 414.237
INFO:tensorflow:loss = 0.6904925, step = 1380 (0.246 sec)
INFO:tensorflow:global_step/sec: 411.693
INFO:tensorflow:loss = 1.8734654, step = 1480 (0.250 sec)
INFO:tensorflow:global_step/sec: 401.569
INFO:tensorflow:loss = 2.5979433, step = 1580 (0.254 sec)
INFO:tensorflow:global_step/sec: 395.667
INFO:tensorflow:loss = 2.0128171, step = 1680 (0.256 sec)
INFO:tensorflow:global_step/sec: 392.234
INFO:tensorflow:loss = 2.469627, step = 1780 (0.244 sec)
INFO:tensorflow:global_step/sec: 386.751
INFO:tensorflow:loss = 0.87159, step = 1880 (0.253 sec)
INFO:tensorflow:global_step/sec: 404.765
INFO:tensorflow:loss = 0.80283445, step = 1980 (0.254 sec)
INFO:tensorflow:global_step/sec: 401.5
INFO:tensorflow:loss = 1.524719, step = 2080 (0.261 sec)
INFO:tensorflow:global_step/sec: 385.878
INFO:tensorflow:loss = 1.0228136, step = 2180 (0.261 sec)
INFO:tensorflow:global_step/sec: 382.386
INFO:tensorflow:loss = 1.0036705, step = 2280 (0.263 sec)
INFO:tensorflow:global_step/sec: 382.23
INFO:tensorflow:loss = 1.0771171, step = 2380 (0.245 sec)
INFO:tensorflow:global_step/sec: 388.433
INFO:tensorflow:loss = 0.9643565, step = 2480 (0.251 sec)
INFO:tensorflow:global_step/sec: 409.442
INFO:tensorflow:loss = 1.4598124, step = 2580 (0.264 sec)
INFO:tensorflow:global_step/sec: 382.398
INFO:tensorflow:loss = 0.7518444, step = 2680 (0.260 sec)
INFO:tensorflow:global_step/sec: 387.657
INFO:tensorflow:loss = 0.71297884, step = 2780 (0.260 sec)
INFO:tensorflow:global_step/sec: 387.516
INFO:tensorflow:loss = 0.21006158, step = 2880 (0.261 sec)
INFO:tensorflow:global_step/sec: 380.228
INFO:tensorflow:loss = 0.64975756, step = 2980 (0.252 sec)
INFO:tensorflow:global_step/sec: 375.953
INFO:tensorflow:loss = 0.3568688, step = 3080 (0.262 sec)
INFO:tensorflow:global_step/sec: 394.311
INFO:tensorflow:loss = 1.0947809, step = 3180 (0.260 sec)
INFO:tensorflow:global_step/sec: 389.576
INFO:tensorflow:loss = 0.38473517, step = 3280 (0.262 sec)
INFO:tensorflow:global_step/sec: 383.038
INFO:tensorflow:loss = 0.37087482, step = 3380 (0.266 sec)
INFO:tensorflow:global_step/sec: 377.258
INFO:tensorflow:loss = 0.37313935, step = 3480 (0.268 sec)
INFO:tensorflow:global_step/sec: 375.779
INFO:tensorflow:loss = 0.6371509, step = 3580 (0.253 sec)
INFO:tensorflow:global_step/sec: 376.039
INFO:tensorflow:loss = 0.6737277, step = 3680 (0.258 sec)
INFO:tensorflow:global_step/sec: 397.449
INFO:tensorflow:loss = 0.22763562, step = 3780 (0.264 sec)
INFO:tensorflow:global_step/sec: 379.907
INFO:tensorflow:loss = 0.70576984, step = 3880 (0.270 sec)
INFO:tensorflow:global_step/sec: 375.692
INFO:tensorflow:loss = 0.32033288, step = 3980 (0.266 sec)
INFO:tensorflow:global_step/sec: 376.935
INFO:tensorflow:loss = 0.5732076, step = 4080 (0.271 sec)
INFO:tensorflow:global_step/sec: 369.125
INFO:tensorflow:loss = 0.22866802, step = 4180 (0.257 sec)
INFO:tensorflow:global_step/sec: 370.509
INFO:tensorflow:loss = 0.27701426, step = 4280 (0.262 sec)
INFO:tensorflow:global_step/sec: 388.812
INFO:tensorflow:loss = 0.2290253, step = 4380 (0.273 sec)
INFO:tensorflow:global_step/sec: 373.834
INFO:tensorflow:loss = 0.24748756, step = 4480 (0.270 sec)
INFO:tensorflow:global_step/sec: 373.023
INFO:tensorflow:loss = 0.2879139, step = 4580 (0.275 sec)
INFO:tensorflow:global_step/sec: 364.77
INFO:tensorflow:loss = 0.28078204, step = 4680 (0.272 sec)
INFO:tensorflow:global_step/sec: 368.265
INFO:tensorflow:loss = 0.1984863, step = 4780 (0.261 sec)
INFO:tensorflow:global_step/sec: 363.698
INFO:tensorflow:loss = 0.31559613, step = 4880 (0.271 sec)
INFO:tensorflow:global_step/sec: 377.83
INFO:tensorflow:loss = 0.2904449, step = 4980 (0.277 sec)
INFO:tensorflow:global_step/sec: 363.8
INFO:tensorflow:loss = 0.28680754, step = 5080 (0.275 sec)
INFO:tensorflow:global_step/sec: 367.857
INFO:tensorflow:loss = 0.374867, step = 5180 (0.274 sec)
INFO:tensorflow:global_step/sec: 366.626
INFO:tensorflow:loss = 0.3683201, step = 5280 (0.280 sec)
INFO:tensorflow:global_step/sec: 357.256
INFO:tensorflow:loss = 0.2899915, step = 5380 (0.265 sec)
INFO:tensorflow:global_step/sec: 359.244
INFO:tensorflow:loss = 0.1280297, step = 5480 (0.268 sec)
INFO:tensorflow:global_step/sec: 381.8
INFO:tensorflow:loss = 0.7579371, step = 5580 (0.274 sec)
INFO:tensorflow:global_step/sec: 366.796
INFO:tensorflow:loss = 0.20086025, step = 5680 (0.278 sec)
INFO:tensorflow:global_step/sec: 363.209
INFO:tensorflow:loss = 0.20468965, step = 5780 (0.282 sec)
INFO:tensorflow:global_step/sec: 355.186
INFO:tensorflow:loss = 0.084839374, step = 5880 (0.284 sec)
INFO:tensorflow:global_step/sec: 353.547
INFO:tensorflow:loss = 0.7841339, step = 5980 (0.268 sec)
INFO:tensorflow:global_step/sec: 355.69
INFO:tensorflow:loss = 0.4825125, step = 6080 (0.270 sec)
INFO:tensorflow:global_step/sec: 378.825
INFO:tensorflow:loss = 0.15031722, step = 6180 (0.278 sec)
INFO:tensorflow:global_step/sec: 363.354
INFO:tensorflow:loss = 0.09604564, step = 6280 (0.281 sec)
INFO:tensorflow:global_step/sec: 359.431
INFO:tensorflow:loss = 0.22453651, step = 6380 (0.281 sec)
INFO:tensorflow:global_step/sec: 355.953
INFO:tensorflow:loss = 0.066752866, step = 6480 (0.287 sec)
INFO:tensorflow:global_step/sec: 348.331
INFO:tensorflow:loss = 0.13314456, step = 6580 (0.275 sec)
INFO:tensorflow:global_step/sec: 346.788
INFO:tensorflow:loss = 0.11664696, step = 6680 (0.281 sec)
INFO:tensorflow:global_step/sec: 365.916
INFO:tensorflow:loss = 0.24780986, step = 6780 (0.280 sec)
INFO:tensorflow:global_step/sec: 360.304
INFO:tensorflow:loss = 0.16076241, step = 6880 (0.289 sec)
INFO:tensorflow:global_step/sec: 347.319
INFO:tensorflow:loss = 0.15068403, step = 6980 (0.290 sec)
INFO:tensorflow:global_step/sec: 347.161
INFO:tensorflow:loss = 0.063347995, step = 7080 (0.288 sec)
INFO:tensorflow:global_step/sec: 346.469
INFO:tensorflow:loss = 0.17705172, step = 7180 (0.279 sec)
INFO:tensorflow:global_step/sec: 342.538
INFO:tensorflow:loss = 0.1235522, step = 7280 (0.283 sec)
INFO:tensorflow:global_step/sec: 362.863
INFO:tensorflow:loss = 0.19375022, step = 7380 (0.283 sec)
INFO:tensorflow:global_step/sec: 356.934
INFO:tensorflow:loss = 0.09878422, step = 7480 (0.285 sec)
INFO:tensorflow:global_step/sec: 350.67
INFO:tensorflow:loss = 0.044014893, step = 7580 (0.288 sec)
INFO:tensorflow:global_step/sec: 348.951
INFO:tensorflow:loss = 0.090123355, step = 7680 (0.292 sec)
INFO:tensorflow:global_step/sec: 343.379
INFO:tensorflow:loss = 0.1411789, step = 7780 (0.277 sec)
INFO:tensorflow:global_step/sec: 343.451
INFO:tensorflow:loss = 0.0606481, step = 7880 (0.286 sec)
INFO:tensorflow:global_step/sec: 358.731
INFO:tensorflow:loss = 0.11701955, step = 7980 (0.293 sec)
INFO:tensorflow:global_step/sec: 342.975
INFO:tensorflow:loss = 0.2144481, step = 8080 (0.299 sec)
INFO:tensorflow:global_step/sec: 337.35
INFO:tensorflow:loss = 0.13061918, step = 8180 (0.301 sec)
INFO:tensorflow:global_step/sec: 331.849
INFO:tensorflow:loss = 0.013081398, step = 8280 (0.298 sec)
INFO:tensorflow:global_step/sec: 336.503
INFO:tensorflow:loss = 0.027076408, step = 8380 (0.286 sec)
INFO:tensorflow:global_step/sec: 332.512
INFO:tensorflow:loss = 0.010121934, step = 8480 (0.293 sec)
INFO:tensorflow:global_step/sec: 352.257
INFO:tensorflow:loss = 0.023727953, step = 8580 (0.294 sec)
INFO:tensorflow:global_step/sec: 343.327
INFO:tensorflow:loss = 0.13345344, step = 8680 (0.296 sec)
INFO:tensorflow:global_step/sec: 339.463
INFO:tensorflow:loss = 0.06767905, step = 8780 (0.298 sec)
INFO:tensorflow:global_step/sec: 336.43
INFO:tensorflow:loss = 0.03239054, step = 8880 (0.299 sec)
INFO:tensorflow:global_step/sec: 337.212
INFO:tensorflow:loss = 0.03417517, step = 8980 (0.288 sec)
INFO:tensorflow:global_step/sec: 329.745
INFO:tensorflow:loss = 0.04349177, step = 9080 (0.295 sec)
INFO:tensorflow:global_step/sec: 346.215
INFO:tensorflow:loss = 0.10747677, step = 9180 (0.297 sec)
INFO:tensorflow:global_step/sec: 341.222
INFO:tensorflow:loss = 0.08463769, step = 9280 (0.302 sec)
INFO:tensorflow:global_step/sec: 333.558
INFO:tensorflow:loss = 0.022979608, step = 9380 (0.303 sec)
INFO:tensorflow:global_step/sec: 329.165
INFO:tensorflow:loss = 0.07760788, step = 9480 (0.310 sec)
INFO:tensorflow:global_step/sec: 322.089
INFO:tensorflow:loss = 0.038779423, step = 9580 (0.292 sec)
INFO:tensorflow:global_step/sec: 329.556
INFO:tensorflow:loss = 0.014404967, step = 9680 (0.297 sec)
INFO:tensorflow:global_step/sec: 343.326
INFO:tensorflow:loss = 0.06990504, step = 9780 (0.305 sec)
INFO:tensorflow:global_step/sec: 333.686
INFO:tensorflow:loss = 0.036858298, step = 9880 (0.305 sec)
INFO:tensorflow:global_step/sec: 326.461
INFO:tensorflow:loss = 0.047570646, step = 9980 (0.312 sec)
INFO:tensorflow:global_step/sec: 321.895
INFO:tensorflow:loss = 0.059428196, step = 10080 (0.309 sec)
INFO:tensorflow:global_step/sec: 325.738
INFO:tensorflow:loss = 0.05054853, step = 10180 (0.293 sec)
INFO:tensorflow:global_step/sec: 327.29
INFO:tensorflow:loss = 0.04085783, step = 10280 (0.300 sec)
INFO:tensorflow:global_step/sec: 337.825
INFO:tensorflow:loss = 0.06833278, step = 10380 (0.309 sec)
INFO:tensorflow:global_step/sec: 328.799
INFO:tensorflow:loss = 0.03984513, step = 10480 (0.309 sec)
INFO:tensorflow:global_step/sec: 325.714
INFO:tensorflow:loss = 0.029430978, step = 10580 (0.313 sec)
INFO:tensorflow:global_step/sec: 320.448
INFO:tensorflow:loss = 0.015103683, step = 10680 (0.310 sec)
INFO:tensorflow:global_step/sec: 321.814
INFO:tensorflow:loss = 0.055365227, step = 10780 (0.303 sec)
INFO:tensorflow:global_step/sec: 315.217
INFO:tensorflow:loss = 0.016110064, step = 10880 (0.316 sec)
INFO:tensorflow:global_step/sec: 323.304
INFO:tensorflow:loss = 0.006240257, step = 10980 (0.315 sec)
INFO:tensorflow:global_step/sec: 321.096
INFO:tensorflow:loss = 0.007149349, step = 11080 (0.321 sec)
INFO:tensorflow:global_step/sec: 314.465
INFO:tensorflow:loss = 0.0066786045, step = 11180 (0.312 sec)
INFO:tensorflow:global_step/sec: 320.341
INFO:tensorflow:loss = 0.025937172, step = 11280 (0.312 sec)
INFO:tensorflow:global_step/sec: 321.417
INFO:tensorflow:loss = 0.016570274, step = 11380 (0.303 sec)
INFO:tensorflow:global_step/sec: 317.392
INFO:tensorflow:loss = 0.0033354259, step = 11480 (0.308 sec)
INFO:tensorflow:global_step/sec: 330.218
INFO:tensorflow:loss = 0.017488046, step = 11580 (0.314 sec)
INFO:tensorflow:global_step/sec: 320.864
INFO:tensorflow:loss = 0.02159322, step = 11680 (0.322 sec)
INFO:tensorflow:global_step/sec: 310.693
INFO:tensorflow:loss = 0.020893702, step = 11780 (0.323 sec)
INFO:tensorflow:global_step/sec: 310.939
INFO:tensorflow:loss = 0.017859623, step = 11880 (0.326 sec)
INFO:tensorflow:global_step/sec: 304.814
INFO:tensorflow:loss = 0.014102906, step = 11980 (0.310 sec)
INFO:tensorflow:global_step/sec: 311.383
INFO:tensorflow:loss = 0.014420295, step = 12080 (0.316 sec)
INFO:tensorflow:global_step/sec: 323.922
INFO:tensorflow:loss = 0.012980898, step = 12180 (0.323 sec)
INFO:tensorflow:global_step/sec: 312.002
INFO:tensorflow:loss = 0.008047884, step = 12280 (0.324 sec)
INFO:tensorflow:global_step/sec: 309.195
INFO:tensorflow:loss = 0.005332183, step = 12380 (0.328 sec)
INFO:tensorflow:global_step/sec: 307.363
INFO:tensorflow:loss = 0.009909308, step = 12480 (0.331 sec)
INFO:tensorflow:global_step/sec: 303.166
INFO:tensorflow:loss = 0.018593434, step = 12580 (0.310 sec)
INFO:tensorflow:global_step/sec: 307.677
INFO:tensorflow:loss = 0.009453268, step = 12680 (0.318 sec)
INFO:tensorflow:global_step/sec: 323.497
INFO:tensorflow:loss = 0.0074377223, step = 12780 (0.317 sec)
INFO:tensorflow:global_step/sec: 318.278
INFO:tensorflow:loss = 0.0067944657, step = 12880 (0.326 sec)
INFO:tensorflow:global_step/sec: 307.95
INFO:tensorflow:loss = 0.009621896, step = 12980 (0.332 sec)
INFO:tensorflow:global_step/sec: 303.108
INFO:tensorflow:loss = 0.007392729, step = 13080 (0.329 sec)
INFO:tensorflow:global_step/sec: 303.111
INFO:tensorflow:loss = 0.0070271464, step = 13180 (0.317 sec)
INFO:tensorflow:global_step/sec: 302.852
INFO:tensorflow:loss = 0.01419846, step = 13280 (0.325 sec)
INFO:tensorflow:global_step/sec: 311.988
INFO:tensorflow:loss = 0.00879844, step = 13380 (0.330 sec)
INFO:tensorflow:global_step/sec: 307.168
INFO:tensorflow:loss = 0.0035331238, step = 13480 (0.333 sec)
INFO:tensorflow:global_step/sec: 301.33
INFO:tensorflow:loss = 0.004036055, step = 13580 (0.334 sec)
INFO:tensorflow:global_step/sec: 300.952
INFO:tensorflow:loss = 0.0021674812, step = 13680 (0.335 sec)
INFO:tensorflow:global_step/sec: 298.644
INFO:tensorflow:loss = 0.0044945157, step = 13780 (0.318 sec)
INFO:tensorflow:global_step/sec: 302.261
INFO:tensorflow:loss = 0.008261169, step = 13880 (0.328 sec)
INFO:tensorflow:global_step/sec: 310.556
INFO:tensorflow:loss = 0.007413184, step = 13980 (0.337 sec)
INFO:tensorflow:global_step/sec: 300.33
INFO:tensorflow:loss = 0.01038721, step = 14080 (0.331 sec)
INFO:tensorflow:global_step/sec: 304.304
INFO:tensorflow:loss = 0.0020925598, step = 14180 (0.329 sec)
INFO:tensorflow:global_step/sec: 303.857
INFO:tensorflow:loss = 0.0072769765, step = 14280 (0.337 sec)
INFO:tensorflow:global_step/sec: 297.895
INFO:tensorflow:loss = 0.0018916101, step = 14380 (0.326 sec)
INFO:tensorflow:global_step/sec: 294.092
INFO:tensorflow:loss = 0.0027799625, step = 14480 (0.327 sec)
INFO:tensorflow:global_step/sec: 312.093
INFO:tensorflow:loss = 0.0037557913, step = 14580 (0.334 sec)
INFO:tensorflow:global_step/sec: 301.829
INFO:tensorflow:loss = 0.0015468008, step = 14680 (0.334 sec)
INFO:tensorflow:global_step/sec: 302.182
INFO:tensorflow:loss = 0.0018402252, step = 14780 (0.332 sec)
INFO:tensorflow:global_step/sec: 301.447
INFO:tensorflow:loss = 0.0063510793, step = 14880 (0.339 sec)
INFO:tensorflow:global_step/sec: 294.111
INFO:tensorflow:loss = 0.003960237, step = 14980 (0.327 sec)
INFO:tensorflow:global_step/sec: 295.082
INFO:tensorflow:loss = 0.0021010689, step = 15080 (0.333 sec)
INFO:tensorflow:global_step/sec: 306.512
INFO:tensorflow:loss = 0.0011556938, step = 15180 (0.338 sec)
INFO:tensorflow:global_step/sec: 298.883
INFO:tensorflow:loss = 0.0009854774, step = 15280 (0.337 sec)
INFO:tensorflow:global_step/sec: 299.258
INFO:tensorflow:loss = 0.0059409747, step = 15380 (0.333 sec)
INFO:tensorflow:global_step/sec: 299.457
INFO:tensorflow:loss = 0.0022082897, step = 15480 (0.338 sec)
INFO:tensorflow:global_step/sec: 298.035
INFO:tensorflow:loss = 0.0036195924, step = 15580 (0.323 sec)
INFO:tensorflow:global_step/sec: 297.116
INFO:tensorflow:loss = 0.005268056, step = 15680 (0.332 sec)
INFO:tensorflow:global_step/sec: 304.785
INFO:tensorflow:loss = 0.0021239321, step = 15780 (0.342 sec)
INFO:tensorflow:global_step/sec: 298.12
INFO:tensorflow:loss = 0.0127066765, step = 15880 (0.339 sec)
INFO:tensorflow:global_step/sec: 295.993
INFO:tensorflow:loss = 0.0021492667, step = 15980 (0.341 sec)
INFO:tensorflow:global_step/sec: 293.45
INFO:tensorflow:loss = 0.003911408, step = 16080 (0.343 sec)
INFO:tensorflow:global_step/sec: 291.821
INFO:tensorflow:loss = 0.004051245, step = 16180 (0.334 sec)
INFO:tensorflow:global_step/sec: 287.44
INFO:tensorflow:loss = 0.0049018306, step = 16280 (0.342 sec)
INFO:tensorflow:global_step/sec: 297.459
INFO:tensorflow:loss = 0.0026472202, step = 16380 (0.345 sec)
INFO:tensorflow:global_step/sec: 293.164
INFO:tensorflow:loss = 0.0038542324, step = 16480 (0.348 sec)
INFO:tensorflow:global_step/sec: 288.779
INFO:tensorflow:loss = 0.003773787, step = 16580 (0.346 sec)
INFO:tensorflow:global_step/sec: 289.185
INFO:tensorflow:loss = 0.0026647656, step = 16680 (0.343 sec)
INFO:tensorflow:global_step/sec: 291.876
INFO:tensorflow:loss = 0.0024704284, step = 16780 (0.334 sec)
INFO:tensorflow:global_step/sec: 288.324
INFO:tensorflow:loss = 0.0034512142, step = 16880 (0.347 sec)
INFO:tensorflow:global_step/sec: 292.507
INFO:tensorflow:loss = 0.0062024607, step = 16980 (0.346 sec)
INFO:tensorflow:global_step/sec: 291.147
INFO:tensorflow:loss = 0.0022722099, step = 17080 (0.351 sec)
INFO:tensorflow:global_step/sec: 287.208
INFO:tensorflow:loss = 0.0014444834, step = 17180 (0.352 sec)
INFO:tensorflow:global_step/sec: 283.574
INFO:tensorflow:loss = 0.0074605285, step = 17280 (0.357 sec)
INFO:tensorflow:global_step/sec: 281.604
INFO:tensorflow:loss = 0.003752734, step = 17380 (0.339 sec)
INFO:tensorflow:global_step/sec: 283.366
INFO:tensorflow:loss = 0.0012563546, step = 17480 (0.342 sec)
INFO:tensorflow:global_step/sec: 297.925
INFO:tensorflow:loss = 0.003298856, step = 17580 (0.347 sec)
INFO:tensorflow:global_step/sec: 292.149
INFO:tensorflow:loss = 0.0021164892, step = 17680 (0.346 sec)
INFO:tensorflow:global_step/sec: 289.272
INFO:tensorflow:loss = 0.0027668625, step = 17780 (0.350 sec)
INFO:tensorflow:global_step/sec: 286.518
INFO:tensorflow:loss = 0.0038928108, step = 17880 (0.356 sec)
INFO:tensorflow:global_step/sec: 280.948
INFO:tensorflow:loss = 0.00068626396, step = 17980 (0.340 sec)
INFO:tensorflow:global_step/sec: 281.988
INFO:tensorflow:loss = 0.0011843208, step = 18080 (0.349 sec)
INFO:tensorflow:global_step/sec: 292.284
INFO:tensorflow:loss = 0.0018866074, step = 18180 (0.351 sec)
INFO:tensorflow:global_step/sec: 288.176
INFO:tensorflow:loss = 0.0005333081, step = 18280 (0.352 sec)
INFO:tensorflow:global_step/sec: 285.166
INFO:tensorflow:loss = 0.0005375584, step = 18380 (0.360 sec)
INFO:tensorflow:global_step/sec: 279.2
INFO:tensorflow:loss = 0.0067465273, step = 18480 (0.355 sec)
INFO:tensorflow:global_step/sec: 280.193
INFO:tensorflow:loss = 0.0013988668, step = 18580 (0.344 sec)
INFO:tensorflow:global_step/sec: 281.697
INFO:tensorflow:loss = 0.0014645823, step = 18680 (0.351 sec)
INFO:tensorflow:global_step/sec: 288.122
INFO:tensorflow:loss = 0.0014383025, step = 18780 (0.360 sec)
INFO:tensorflow:global_step/sec: 282.176
INFO:tensorflow:loss = 0.0014143193, step = 18880 (0.361 sec)
INFO:tensorflow:global_step/sec: 277.737
INFO:tensorflow:loss = 0.0013943117, step = 18980 (0.357 sec)
INFO:tensorflow:global_step/sec: 281.123
INFO:tensorflow:loss = 0.0006448065, step = 19080 (0.357 sec)
INFO:tensorflow:global_step/sec: 281.612
INFO:tensorflow:loss = 0.0014809513, step = 19180 (0.348 sec)
INFO:tensorflow:global_step/sec: 274.105
INFO:tensorflow:loss = 0.0008602524, step = 19280 (0.358 sec)
INFO:tensorflow:global_step/sec: 285.319
INFO:tensorflow:loss = 0.0006964795, step = 19380 (0.363 sec)
INFO:tensorflow:global_step/sec: 277.315
INFO:tensorflow:loss = 0.00035264163, step = 19480 (0.366 sec)
INFO:tensorflow:global_step/sec: 274.599
INFO:tensorflow:loss = 0.0010025422, step = 19580 (0.367 sec)
INFO:tensorflow:global_step/sec: 273.115
INFO:tensorflow:loss = 0.0007096651, step = 19680 (0.362 sec)
INFO:tensorflow:global_step/sec: 276.323
INFO:tensorflow:loss = 0.0013329595, step = 19780 (0.351 sec)
INFO:tensorflow:global_step/sec: 274.789
INFO:tensorflow:loss = 0.0008460893, step = 19880 (0.357 sec)
INFO:tensorflow:global_step/sec: 283.638
INFO:tensorflow:loss = 0.0011283578, step = 19980 (0.368 sec)
INFO:tensorflow:global_step/sec: 275.094
INFO:tensorflow:loss = 0.00089822686, step = 20080 (0.365 sec)
INFO:tensorflow:global_step/sec: 275.392
INFO:tensorflow:loss = 0.0014473142, step = 20180 (0.364 sec)
INFO:tensorflow:global_step/sec: 276.458
INFO:tensorflow:loss = 0.0008915104, step = 20280 (0.373 sec)
INFO:tensorflow:global_step/sec: 268.018
INFO:tensorflow:loss = 0.0004781757, step = 20380 (0.353 sec)
INFO:tensorflow:global_step/sec: 272.515
INFO:tensorflow:loss = 0.0004186085, step = 20480 (0.363 sec)
INFO:tensorflow:global_step/sec: 280.449
INFO:tensorflow:loss = 0.0008953349, step = 20580 (0.364 sec)
INFO:tensorflow:global_step/sec: 278.265
INFO:tensorflow:loss = 0.0015090622, step = 20680 (0.371 sec)
INFO:tensorflow:global_step/sec: 270.082
INFO:tensorflow:loss = 0.0010438098, step = 20780 (0.374 sec)
INFO:tensorflow:global_step/sec: 267.97
INFO:tensorflow:loss = 0.00050447625, step = 20880 (0.376 sec)
INFO:tensorflow:global_step/sec: 267.517
INFO:tensorflow:loss = 0.00037436924, step = 20980 (0.364 sec)
INFO:tensorflow:global_step/sec: 262.304
INFO:tensorflow:loss = 0.0005487846, step = 21080 (0.371 sec)
INFO:tensorflow:global_step/sec: 276.63
INFO:tensorflow:loss = 0.0012135495, step = 21180 (0.372 sec)
INFO:tensorflow:global_step/sec: 271.146
INFO:tensorflow:loss = 0.00050225714, step = 21280 (0.374 sec)
INFO:tensorflow:global_step/sec: 266.848
INFO:tensorflow:loss = 0.0005835245, step = 21380 (0.380 sec)
INFO:tensorflow:global_step/sec: 266.179
INFO:tensorflow:loss = 0.0004619556, step = 21480 (0.375 sec)
INFO:tensorflow:global_step/sec: 266.419
INFO:tensorflow:loss = 0.00033856914, step = 21580 (0.363 sec)
INFO:tensorflow:global_step/sec: 265.468
INFO:tensorflow:loss = 0.0008394742, step = 21680 (0.373 sec)
INFO:tensorflow:global_step/sec: 272.316
INFO:tensorflow:loss = 0.00030781276, step = 21780 (0.374 sec)
INFO:tensorflow:global_step/sec: 270.614
INFO:tensorflow:loss = 0.00032267775, step = 21880 (0.375 sec)
INFO:tensorflow:global_step/sec: 266.912
INFO:tensorflow:loss = 0.00024132222, step = 21980 (0.378 sec)
INFO:tensorflow:global_step/sec: 265.246
INFO:tensorflow:loss = 0.00028675678, step = 22080 (0.376 sec)
INFO:tensorflow:global_step/sec: 266.509
INFO:tensorflow:loss = 0.0009781871, step = 22180 (0.365 sec)
INFO:tensorflow:global_step/sec: 263.828
INFO:tensorflow:loss = 0.0010109144, step = 22280 (0.370 sec)
INFO:tensorflow:global_step/sec: 274.649
INFO:tensorflow:loss = 0.00025149249, step = 22380 (0.378 sec)
INFO:tensorflow:global_step/sec: 267.999
INFO:tensorflow:loss = 0.00020908765, step = 22480 (0.378 sec)
INFO:tensorflow:global_step/sec: 265.322
INFO:tensorflow:loss = 0.0004320807, step = 22580 (0.384 sec)
INFO:tensorflow:global_step/sec: 260.926
INFO:tensorflow:loss = 0.0002488165, step = 22680 (0.385 sec)
INFO:tensorflow:global_step/sec: 259.177
INFO:tensorflow:loss = 0.0004015111, step = 22780 (0.376 sec)
INFO:tensorflow:global_step/sec: 256.529
INFO:tensorflow:loss = 0.00037404272, step = 22880 (0.385 sec)
INFO:tensorflow:global_step/sec: 264.999
INFO:tensorflow:loss = 0.00039812157, step = 22980 (0.387 sec)
INFO:tensorflow:global_step/sec: 260.906
INFO:tensorflow:loss = 0.0005162174, step = 23080 (0.386 sec)
INFO:tensorflow:global_step/sec: 259.663
INFO:tensorflow:loss = 0.00032000744, step = 23180 (0.387 sec)
INFO:tensorflow:global_step/sec: 258.701
INFO:tensorflow:loss = 0.00025557584, step = 23280 (0.391 sec)
INFO:tensorflow:global_step/sec: 255.811
INFO:tensorflow:loss = 0.00018507428, step = 23380 (0.372 sec)
INFO:tensorflow:global_step/sec: 260.467
INFO:tensorflow:loss = 0.00010121861, step = 23480 (0.376 sec)
INFO:tensorflow:global_step/sec: 271.391
INFO:tensorflow:loss = 0.00043678225, step = 23580 (0.381 sec)
INFO:tensorflow:global_step/sec: 264.417
INFO:tensorflow:loss = 0.0002813889, step = 23680 (0.391 sec)
INFO:tensorflow:global_step/sec: 257.244
INFO:tensorflow:loss = 9.453914e-05, step = 23780 (0.393 sec)
INFO:tensorflow:global_step/sec: 254.353
INFO:tensorflow:loss = 0.0002390909, step = 23880 (0.390 sec)
INFO:tensorflow:global_step/sec: 240.95
INFO:tensorflow:loss = 0.0008116873, step = 23980 (0.442 sec)
INFO:tensorflow:global_step/sec: 229.283
INFO:tensorflow:Saving checkpoints for 24000 into /tmp/tmpbzo1p2wi/model.ckpt.
WARNING:tensorflow:Issue encountered when serializing resources.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
'_Resource' object has no attribute 'name'
INFO:tensorflow:Loss for final step: 0.00040755837.
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Starting evaluation at 2019-11-03T11:19:05Z
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from /tmp/tmpbzo1p2wi/model.ckpt-24000
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Finished evaluation at 2019-11-03-11:19:05
INFO:tensorflow:Saving dict for global step 24000: average_loss = 12.3817, global_step = 24000, label/mean = 23.611393, loss = 12.283247, prediction/mean = 22.392288
WARNING:tensorflow:Issue encountered when serializing resources.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
'_Resource' object has no attribute 'name'
INFO:tensorflow:Saving 'checkpoint_path' summary for global step 24000: /tmp/tmpbzo1p2wi/model.ckpt-24000
{'average_loss': 12.3817, 'label/mean': 23.611393, 'loss': 12.283247, 'prediction/mean': 22.392288, 'global_step': 24000}
Average-Loss 12.3817
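###Markdown
For completeness (an addition, not in the original notebook), the boosted-tree estimator exposes the same predict() interface used with the DNNRegressor above, so predictions can be drawn in exactly the same way.
###Code
# Sketch: predict() on the boosted-tree estimator (assumes boosted_tree,
# eval_input_fn and df_test_norm from the cells above are in scope).
pred_res = boosted_tree.predict(
    input_fn=lambda: eval_input_fn(df_test_norm, batch_size=8))
print(next(iter(pred_res)))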
###Markdown
---Readers may ignore the next cell.
###Code
! python ../.convert_notebook_to_script.py --input ch14_part2.ipynb --output ch14_part2.py
###Output
[NbConvertApp] Converting notebook ch14_part2.ipynb to script
[NbConvertApp] Writing 6364 bytes to ch14_part2.py
###Markdown
Machine Learning Textbook, 3rd Edition, Chapter 14 - Going Deeper: the Mechanics of TensorFlow (2/3) **You can view this notebook in the Jupyter Notebook Viewer (nbviewer.jupyter.org) or run it in Google Colab (colab.research.google.com) through the links below.** View in Jupyter Notebook Viewer Run in Google Colab Contents - TensorFlow Estimators - Working with feature columns - Machine learning with pre-made estimators
###Code
import numpy as np
import tensorflow as tf
import pandas as pd
from IPython.display import Image
tf.__version__
###Output
_____no_output_____
###Markdown
TensorFlow Estimators Steps for using pre-made estimators * **Step 1:** Define an input function for loading the data * **Step 2:** Define the feature columns to bridge between the estimator and the data * **Step 3:** Instantiate an estimator object or convert a Keras model to an estimator * **Step 4:** Use the estimator: train() evaluate() predict()
###Code
tf.random.set_seed(1)
np.random.seed(1)
###Output
_____no_output_____
###Markdown
Working with feature columns * Definition: https://developers.google.com/machine-learning/glossary/feature_columns * Documentation: https://www.tensorflow.org/api_docs/python/tf/feature_column
###Code
Image(url='https://git.io/JL56E', width=700)
dataset_path = tf.keras.utils.get_file("auto-mpg.data",
("http://archive.ics.uci.edu/ml/machine-learning-databases"
"/auto-mpg/auto-mpg.data"))
column_names = ['MPG', 'Cylinders', 'Displacement', 'Horsepower',
'Weight', 'Acceleration', 'ModelYear', 'Origin']
df = pd.read_csv(dataset_path, names=column_names,
na_values = "?", comment='\t',
sep=" ", skipinitialspace=True)
df.tail()
print(df.isna().sum())
df = df.dropna()
df = df.reset_index(drop=True)
df.tail()
import sklearn
import sklearn.model_selection
df_train, df_test = sklearn.model_selection.train_test_split(df, train_size=0.8)
train_stats = df_train.describe().transpose()
train_stats
numeric_column_names = ['Cylinders', 'Displacement', 'Horsepower', 'Weight', 'Acceleration']
df_train_norm, df_test_norm = df_train.copy(), df_test.copy()
for col_name in numeric_column_names:
mean = train_stats.loc[col_name, 'mean']
std = train_stats.loc[col_name, 'std']
df_train_norm.loc[:, col_name] = (df_train_norm.loc[:, col_name] - mean)/std
df_test_norm.loc[:, col_name] = (df_test_norm.loc[:, col_name] - mean)/std
df_train_norm.tail()
###Output
_____no_output_____
###Markdown
Numeric columns
###Code
numeric_features = []
for col_name in numeric_column_names:
numeric_features.append(tf.feature_column.numeric_column(key=col_name))
numeric_features
feature_year = tf.feature_column.numeric_column(key="ModelYear")
bucketized_features = []
bucketized_features.append(tf.feature_column.bucketized_column(
source_column=feature_year,
boundaries=[73, 76, 79]))
print(bucketized_features)
feature_origin = tf.feature_column.categorical_column_with_vocabulary_list(
key='Origin',
vocabulary_list=[1, 2, 3])
categorical_indicator_features = []
categorical_indicator_features.append(tf.feature_column.indicator_column(feature_origin))
print(categorical_indicator_features)
###Output
[IndicatorColumn(categorical_column=VocabularyListCategoricalColumn(key='Origin', vocabulary_list=(1, 2, 3), dtype=tf.int64, default_value=-1, num_oov_buckets=0))]
###Markdown
Machine learning with pre-made estimators
###Code
def train_input_fn(df_train, batch_size=8):
df = df_train.copy()
train_x, train_y = df, df.pop('MPG')
dataset = tf.data.Dataset.from_tensor_slices((dict(train_x), train_y))
# shuffle, repeat, and batch the examples
return dataset.shuffle(1000).repeat().batch(batch_size)
## inspection
ds = train_input_fn(df_train_norm)
batch = next(iter(ds))
print('Keys:', batch[0].keys())
print('ModelYear:', batch[0]['ModelYear'])
all_feature_columns = (numeric_features +
bucketized_features +
categorical_indicator_features)
print(all_feature_columns)
regressor = tf.estimator.DNNRegressor(
feature_columns=all_feature_columns,
hidden_units=[32, 10],
model_dir='models/autompg-dnnregressor/')
EPOCHS = 1000
BATCH_SIZE = 8
total_steps = EPOCHS * int(np.ceil(len(df_train) / BATCH_SIZE))
print('Training steps:', total_steps)
regressor.train(
input_fn=lambda:train_input_fn(df_train_norm, batch_size=BATCH_SIZE),
steps=total_steps)
reloaded_regressor = tf.estimator.DNNRegressor(
feature_columns=all_feature_columns,
hidden_units=[32, 10],
warm_start_from='models/autompg-dnnregressor/',
model_dir='models/autompg-dnnregressor/')
def eval_input_fn(df_test, batch_size=8):
df = df_test.copy()
test_x, test_y = df, df.pop('MPG')
dataset = tf.data.Dataset.from_tensor_slices((dict(test_x), test_y))
return dataset.batch(batch_size)
eval_results = reloaded_regressor.evaluate(
input_fn=lambda:eval_input_fn(df_test_norm, batch_size=8))
for key in eval_results:
print('{:15s} {}'.format(key, eval_results[key]))
print('Average loss {:.4f}'.format(eval_results['average_loss']))
pred_res = regressor.predict(input_fn=lambda: eval_input_fn(df_test_norm, batch_size=8))
print(next(iter(pred_res)))
###Output
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from models/autompg-dnnregressor/model.ckpt-40000
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
{'predictions': array([22.728632], dtype=float32)}
###Markdown
Boosted Tree Regressor
###Code
boosted_tree = tf.estimator.BoostedTreesRegressor(
feature_columns=all_feature_columns,
n_batches_per_layer=20,
n_trees=200)
boosted_tree.train(
input_fn=lambda:train_input_fn(df_train_norm, batch_size=BATCH_SIZE))
eval_results = boosted_tree.evaluate(
input_fn=lambda:eval_input_fn(df_test_norm, batch_size=8))
print(eval_results)
print('Average loss {:.4f}'.format(eval_results['average_loss']))
###Output
INFO:tensorflow:Using default config.
WARNING:tensorflow:Using temporary folder as model directory: /tmp/tmpe71wdd8q
INFO:tensorflow:Using config: {'_model_dir': '/tmp/tmpe71wdd8q', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': allow_soft_placement: true
graph_options {
rewrite_options {
meta_optimizer_iterations: ONE
}
}
, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_experimental_max_worker_delay_secs': None, '_session_creation_timeout_secs': 7200, '_checkpoint_save_graph_def': True, '_service': None, '_cluster_spec': ClusterSpec({}), '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_estimator/python/estimator/canned/boosted_trees.py:398: VocabularyListCategoricalColumn._num_buckets (from tensorflow.python.feature_column.feature_column_v2) is deprecated and will be removed in a future version.
Instructions for updating:
The old _FeatureColumn APIs are being deprecated. Please use the new FeatureColumn APIs instead.
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Create CheckpointSaverHook.
WARNING:tensorflow:Issue encountered when serializing resources.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
'_Resource' object has no attribute 'name'
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
WARNING:tensorflow:Issue encountered when serializing resources.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
'_Resource' object has no attribute 'name'
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 0...
INFO:tensorflow:Saving checkpoints for 0 into /tmp/tmpe71wdd8q/model.ckpt.
WARNING:tensorflow:Issue encountered when serializing resources.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
'_Resource' object has no attribute 'name'
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 0...
INFO:tensorflow:loss = 779.1825, step = 0
INFO:tensorflow:loss = 175.98672, step = 80 (0.708 sec)
INFO:tensorflow:global_step/sec: 112.597
INFO:tensorflow:loss = 88.06142, step = 180 (0.514 sec)
INFO:tensorflow:global_step/sec: 236.325
INFO:tensorflow:loss = 28.334957, step = 280 (0.439 sec)
INFO:tensorflow:global_step/sec: 231.243
INFO:tensorflow:loss = 7.330826, step = 380 (0.421 sec)
INFO:tensorflow:global_step/sec: 236.812
INFO:tensorflow:loss = 28.439013, step = 480 (0.511 sec)
INFO:tensorflow:global_step/sec: 192.031
INFO:tensorflow:loss = 2.9001746, step = 580 (0.428 sec)
INFO:tensorflow:global_step/sec: 237.608
INFO:tensorflow:loss = 5.0455194, step = 680 (0.429 sec)
INFO:tensorflow:global_step/sec: 234.501
INFO:tensorflow:loss = 3.5293148, step = 780 (0.427 sec)
INFO:tensorflow:global_step/sec: 234.698
INFO:tensorflow:loss = 3.3015428, step = 880 (0.431 sec)
INFO:tensorflow:global_step/sec: 232.239
INFO:tensorflow:loss = 1.2589538, step = 980 (0.478 sec)
INFO:tensorflow:global_step/sec: 208.494
INFO:tensorflow:loss = 3.0725284, step = 1080 (0.421 sec)
INFO:tensorflow:global_step/sec: 239.407
INFO:tensorflow:loss = 1.3558465, step = 1180 (0.422 sec)
INFO:tensorflow:global_step/sec: 233.827
INFO:tensorflow:loss = 3.4222438, step = 1280 (0.420 sec)
INFO:tensorflow:global_step/sec: 240.196
INFO:tensorflow:loss = 1.4523966, step = 1380 (0.420 sec)
INFO:tensorflow:global_step/sec: 236.738
INFO:tensorflow:loss = 0.7821708, step = 1480 (0.434 sec)
INFO:tensorflow:global_step/sec: 231.39
INFO:tensorflow:loss = 0.89154446, step = 1580 (0.432 sec)
INFO:tensorflow:global_step/sec: 232.166
INFO:tensorflow:loss = 2.9137201, step = 1680 (0.434 sec)
INFO:tensorflow:global_step/sec: 230.789
INFO:tensorflow:loss = 3.590133, step = 1780 (0.417 sec)
INFO:tensorflow:global_step/sec: 237.99
INFO:tensorflow:loss = 1.9238122, step = 1880 (0.431 sec)
INFO:tensorflow:global_step/sec: 231.652
INFO:tensorflow:loss = 1.3043885, step = 1980 (0.427 sec)
INFO:tensorflow:global_step/sec: 233.25
INFO:tensorflow:loss = 1.0231631, step = 2080 (0.511 sec)
INFO:tensorflow:global_step/sec: 197.584
INFO:tensorflow:loss = 0.7818819, step = 2180 (0.432 sec)
INFO:tensorflow:global_step/sec: 228.415
INFO:tensorflow:loss = 2.3887777, step = 2280 (0.427 sec)
INFO:tensorflow:global_step/sec: 227.075
INFO:tensorflow:loss = 0.40529764, step = 2380 (0.489 sec)
INFO:tensorflow:global_step/sec: 210.38
INFO:tensorflow:loss = 0.3062593, step = 2480 (0.420 sec)
INFO:tensorflow:global_step/sec: 239.896
INFO:tensorflow:loss = 0.5025814, step = 2580 (0.435 sec)
INFO:tensorflow:global_step/sec: 230.695
INFO:tensorflow:loss = 0.99545026, step = 2680 (0.436 sec)
INFO:tensorflow:global_step/sec: 230.301
INFO:tensorflow:loss = 1.8740138, step = 2780 (0.433 sec)
INFO:tensorflow:global_step/sec: 228.792
INFO:tensorflow:loss = 2.5301783, step = 2880 (0.433 sec)
INFO:tensorflow:global_step/sec: 232.843
INFO:tensorflow:loss = 1.8291496, step = 2980 (0.426 sec)
INFO:tensorflow:global_step/sec: 228.888
INFO:tensorflow:loss = 0.6858313, step = 3080 (0.437 sec)
INFO:tensorflow:global_step/sec: 233.735
INFO:tensorflow:loss = 0.53382206, step = 3180 (0.421 sec)
INFO:tensorflow:global_step/sec: 235.949
INFO:tensorflow:loss = 0.61666024, step = 3280 (0.472 sec)
INFO:tensorflow:global_step/sec: 212.431
INFO:tensorflow:loss = 1.5132174, step = 3380 (0.428 sec)
INFO:tensorflow:global_step/sec: 234.108
INFO:tensorflow:loss = 0.26615345, step = 3480 (0.448 sec)
INFO:tensorflow:global_step/sec: 213.462
INFO:tensorflow:loss = 0.5673427, step = 3580 (0.493 sec)
INFO:tensorflow:global_step/sec: 211.63
INFO:tensorflow:loss = 0.6548833, step = 3680 (0.477 sec)
INFO:tensorflow:global_step/sec: 199.918
INFO:tensorflow:loss = 0.4136691, step = 3780 (0.499 sec)
INFO:tensorflow:global_step/sec: 211.15
INFO:tensorflow:loss = 0.44898975, step = 3880 (0.427 sec)
INFO:tensorflow:global_step/sec: 226.425
INFO:tensorflow:loss = 0.12739712, step = 3980 (0.513 sec)
INFO:tensorflow:global_step/sec: 200.577
INFO:tensorflow:loss = 0.27423677, step = 4080 (0.436 sec)
INFO:tensorflow:global_step/sec: 226.914
INFO:tensorflow:loss = 0.30576748, step = 4180 (0.439 sec)
INFO:tensorflow:global_step/sec: 228.385
INFO:tensorflow:loss = 0.15210456, step = 4280 (0.424 sec)
INFO:tensorflow:global_step/sec: 235.308
INFO:tensorflow:loss = 0.22976612, step = 4380 (0.494 sec)
INFO:tensorflow:global_step/sec: 195.692
INFO:tensorflow:loss = 0.24535024, step = 4480 (0.459 sec)
INFO:tensorflow:global_step/sec: 228.835
INFO:tensorflow:loss = 0.45115024, step = 4580 (0.447 sec)
INFO:tensorflow:global_step/sec: 223.004
INFO:tensorflow:loss = 0.27290797, step = 4680 (0.448 sec)
INFO:tensorflow:global_step/sec: 220.322
INFO:tensorflow:loss = 0.2475199, step = 4780 (0.448 sec)
INFO:tensorflow:global_step/sec: 225.264
INFO:tensorflow:loss = 0.23342848, step = 4880 (0.439 sec)
INFO:tensorflow:global_step/sec: 228.466
INFO:tensorflow:loss = 0.25287765, step = 4980 (0.436 sec)
INFO:tensorflow:global_step/sec: 228.304
INFO:tensorflow:loss = 0.07537734, step = 5080 (0.439 sec)
INFO:tensorflow:global_step/sec: 227.195
INFO:tensorflow:loss = 0.20548478, step = 5180 (0.441 sec)
INFO:tensorflow:global_step/sec: 226.893
INFO:tensorflow:loss = 0.7532023, step = 5280 (0.518 sec)
INFO:tensorflow:global_step/sec: 191.774
INFO:tensorflow:loss = 0.21570265, step = 5380 (0.435 sec)
INFO:tensorflow:global_step/sec: 233.052
INFO:tensorflow:loss = 0.24697597, step = 5480 (0.441 sec)
INFO:tensorflow:global_step/sec: 226.153
INFO:tensorflow:loss = 0.12125553, step = 5580 (0.440 sec)
INFO:tensorflow:global_step/sec: 228.699
INFO:tensorflow:loss = 0.21887329, step = 5680 (0.459 sec)
INFO:tensorflow:global_step/sec: 217.953
INFO:tensorflow:loss = 0.12589195, step = 5780 (0.438 sec)
INFO:tensorflow:global_step/sec: 228.668
INFO:tensorflow:loss = 0.8719354, step = 5880 (0.442 sec)
INFO:tensorflow:global_step/sec: 220.02
INFO:tensorflow:loss = 0.24293149, step = 5980 (0.444 sec)
INFO:tensorflow:global_step/sec: 225.677
INFO:tensorflow:loss = 0.197566, step = 6080 (0.463 sec)
INFO:tensorflow:global_step/sec: 218.978
INFO:tensorflow:loss = 0.22314307, step = 6180 (0.462 sec)
INFO:tensorflow:global_step/sec: 219.238
INFO:tensorflow:loss = 0.16728356, step = 6280 (0.441 sec)
INFO:tensorflow:global_step/sec: 226.413
INFO:tensorflow:loss = 0.11892565, step = 6380 (0.449 sec)
INFO:tensorflow:global_step/sec: 223.561
INFO:tensorflow:loss = 0.10035148, step = 6480 (0.434 sec)
INFO:tensorflow:global_step/sec: 227.862
INFO:tensorflow:loss = 0.24382532, step = 6580 (0.474 sec)
INFO:tensorflow:global_step/sec: 200.733
INFO:tensorflow:loss = 0.1128447, step = 6680 (0.480 sec)
INFO:tensorflow:global_step/sec: 220.81
INFO:tensorflow:loss = 0.24076247, step = 6780 (0.483 sec)
INFO:tensorflow:global_step/sec: 207.813
INFO:tensorflow:loss = 0.075261444, step = 6880 (0.432 sec)
INFO:tensorflow:global_step/sec: 230.704
INFO:tensorflow:loss = 0.05876013, step = 6980 (0.462 sec)
INFO:tensorflow:global_step/sec: 208.195
INFO:tensorflow:loss = 0.06491387, step = 7080 (0.510 sec)
INFO:tensorflow:global_step/sec: 203.482
INFO:tensorflow:loss = 0.106327154, step = 7180 (0.440 sec)
INFO:tensorflow:global_step/sec: 223.96
INFO:tensorflow:loss = 0.12552896, step = 7280 (0.445 sec)
INFO:tensorflow:global_step/sec: 227.131
INFO:tensorflow:loss = 0.14993864, step = 7380 (0.438 sec)
INFO:tensorflow:global_step/sec: 229.826
INFO:tensorflow:loss = 0.10687789, step = 7480 (0.448 sec)
INFO:tensorflow:global_step/sec: 222.381
INFO:tensorflow:loss = 0.099866405, step = 7580 (0.449 sec)
INFO:tensorflow:global_step/sec: 222.688
INFO:tensorflow:loss = 0.047058415, step = 7680 (0.487 sec)
INFO:tensorflow:global_step/sec: 204.875
INFO:tensorflow:loss = 0.066626415, step = 7780 (0.441 sec)
INFO:tensorflow:global_step/sec: 223.965
INFO:tensorflow:loss = 0.07019142, step = 7880 (0.446 sec)
INFO:tensorflow:global_step/sec: 221.653
INFO:tensorflow:loss = 0.037252925, step = 7980 (0.464 sec)
INFO:tensorflow:global_step/sec: 219.262
INFO:tensorflow:loss = 0.029428132, step = 8080 (0.456 sec)
INFO:tensorflow:global_step/sec: 220.442
INFO:tensorflow:loss = 0.07954688, step = 8180 (0.450 sec)
INFO:tensorflow:global_step/sec: 221.195
INFO:tensorflow:loss = 0.06761849, step = 8280 (0.459 sec)
INFO:tensorflow:global_step/sec: 219.408
INFO:tensorflow:loss = 0.06025981, step = 8380 (0.447 sec)
INFO:tensorflow:global_step/sec: 223.415
INFO:tensorflow:loss = 0.06555028, step = 8480 (0.484 sec)
INFO:tensorflow:global_step/sec: 195.708
INFO:tensorflow:loss = 0.14992812, step = 8580 (0.485 sec)
INFO:tensorflow:global_step/sec: 218.308
INFO:tensorflow:loss = 0.041804157, step = 8680 (0.440 sec)
INFO:tensorflow:global_step/sec: 226.623
INFO:tensorflow:loss = 0.08093469, step = 8780 (0.464 sec)
INFO:tensorflow:global_step/sec: 216.02
INFO:tensorflow:loss = 0.035671968, step = 8880 (0.445 sec)
INFO:tensorflow:global_step/sec: 224.868
INFO:tensorflow:loss = 0.1250574, step = 8980 (0.447 sec)
INFO:tensorflow:global_step/sec: 218.321
INFO:tensorflow:loss = 0.06651616, step = 9080 (0.449 sec)
INFO:tensorflow:global_step/sec: 225.993
INFO:tensorflow:loss = 0.10398882, step = 9180 (0.450 sec)
INFO:tensorflow:global_step/sec: 224.348
INFO:tensorflow:loss = 0.025028639, step = 9280 (0.451 sec)
INFO:tensorflow:global_step/sec: 217.119
INFO:tensorflow:loss = 0.012650586, step = 9380 (0.526 sec)
INFO:tensorflow:global_step/sec: 191.652
INFO:tensorflow:loss = 0.047671594, step = 9480 (0.462 sec)
INFO:tensorflow:global_step/sec: 216.839
INFO:tensorflow:loss = 0.03253608, step = 9580 (0.453 sec)
INFO:tensorflow:global_step/sec: 212.392
INFO:tensorflow:loss = 0.026947241, step = 9680 (0.477 sec)
INFO:tensorflow:global_step/sec: 219.226
INFO:tensorflow:loss = 0.071996406, step = 9780 (0.450 sec)
INFO:tensorflow:global_step/sec: 223.221
INFO:tensorflow:loss = 0.036975406, step = 9880 (0.456 sec)
INFO:tensorflow:global_step/sec: 218.857
INFO:tensorflow:loss = 0.0287939, step = 9980 (0.455 sec)
INFO:tensorflow:global_step/sec: 220.403
INFO:tensorflow:loss = 0.038904883, step = 10080 (0.477 sec)
INFO:tensorflow:global_step/sec: 209.447
INFO:tensorflow:loss = 0.03822598, step = 10180 (0.451 sec)
INFO:tensorflow:global_step/sec: 219.953
INFO:tensorflow:loss = 0.02723059, step = 10280 (0.455 sec)
INFO:tensorflow:global_step/sec: 216.212
INFO:tensorflow:loss = 0.024398614, step = 10380 (0.463 sec)
INFO:tensorflow:global_step/sec: 220.76
INFO:tensorflow:loss = 0.022796106, step = 10480 (0.452 sec)
INFO:tensorflow:global_step/sec: 219.346
INFO:tensorflow:loss = 0.040350664, step = 10580 (0.467 sec)
INFO:tensorflow:global_step/sec: 214.779
INFO:tensorflow:loss = 0.032954104, step = 10680 (0.459 sec)
INFO:tensorflow:global_step/sec: 205.035
INFO:tensorflow:loss = 0.057137553, step = 10780 (0.544 sec)
INFO:tensorflow:global_step/sec: 191.955
INFO:tensorflow:loss = 0.0147186695, step = 10880 (0.505 sec)
INFO:tensorflow:global_step/sec: 191.07
INFO:tensorflow:loss = 0.023054967, step = 10980 (0.530 sec)
INFO:tensorflow:global_step/sec: 195.83
INFO:tensorflow:loss = 0.048917457, step = 11080 (0.471 sec)
INFO:tensorflow:global_step/sec: 213.787
INFO:tensorflow:loss = 0.025292493, step = 11180 (0.509 sec)
INFO:tensorflow:global_step/sec: 195.252
INFO:tensorflow:loss = 0.023140596, step = 11280 (0.477 sec)
INFO:tensorflow:global_step/sec: 210.539
INFO:tensorflow:loss = 0.009416366, step = 11380 (0.477 sec)
INFO:tensorflow:global_step/sec: 208.174
INFO:tensorflow:loss = 0.015295783, step = 11480 (0.471 sec)
INFO:tensorflow:global_step/sec: 209.569
INFO:tensorflow:loss = 0.011721921, step = 11580 (0.504 sec)
INFO:tensorflow:global_step/sec: 192.07
INFO:tensorflow:loss = 0.017539293, step = 11680 (0.549 sec)
INFO:tensorflow:global_step/sec: 189.797
INFO:tensorflow:loss = 0.03581386, step = 11780 (0.470 sec)
INFO:tensorflow:global_step/sec: 215.889
INFO:tensorflow:loss = 0.025495213, step = 11880 (0.483 sec)
INFO:tensorflow:global_step/sec: 204.375
INFO:tensorflow:loss = 0.019865915, step = 11980 (0.464 sec)
INFO:tensorflow:global_step/sec: 216.999
INFO:tensorflow:loss = 0.038710073, step = 12080 (0.464 sec)
INFO:tensorflow:global_step/sec: 212.916
INFO:tensorflow:loss = 0.009932896, step = 12180 (0.529 sec)
INFO:tensorflow:global_step/sec: 182.102
INFO:tensorflow:loss = 0.026945513, step = 12280 (0.568 sec)
INFO:tensorflow:global_step/sec: 184.744
INFO:tensorflow:loss = 0.020113902, step = 12380 (0.465 sec)
INFO:tensorflow:global_step/sec: 215.823
INFO:tensorflow:loss = 0.0051452513, step = 12480 (0.458 sec)
INFO:tensorflow:global_step/sec: 217.934
INFO:tensorflow:loss = 0.013472352, step = 12580 (0.461 sec)
INFO:tensorflow:global_step/sec: 213.53
INFO:tensorflow:loss = 0.0075852657, step = 12680 (0.466 sec)
INFO:tensorflow:global_step/sec: 216.032
INFO:tensorflow:loss = 0.01434672, step = 12780 (0.463 sec)
INFO:tensorflow:global_step/sec: 214.895
INFO:tensorflow:loss = 0.020459356, step = 12880 (0.464 sec)
INFO:tensorflow:global_step/sec: 218.047
INFO:tensorflow:loss = 0.008625217, step = 12980 (0.458 sec)
INFO:tensorflow:global_step/sec: 219.639
INFO:tensorflow:loss = 0.017844502, step = 13080 (0.459 sec)
INFO:tensorflow:global_step/sec: 217.046
INFO:tensorflow:loss = 0.01284742, step = 13180 (0.486 sec)
INFO:tensorflow:global_step/sec: 196.313
INFO:tensorflow:loss = 0.010080893, step = 13280 (0.495 sec)
INFO:tensorflow:global_step/sec: 211.481
INFO:tensorflow:loss = 0.023105443, step = 13380 (0.466 sec)
INFO:tensorflow:global_step/sec: 214.854
INFO:tensorflow:loss = 0.012591141, step = 13480 (0.465 sec)
INFO:tensorflow:global_step/sec: 212.892
INFO:tensorflow:loss = 0.013794397, step = 13580 (0.466 sec)
INFO:tensorflow:global_step/sec: 216.267
INFO:tensorflow:loss = 0.01258044, step = 13680 (0.468 sec)
INFO:tensorflow:global_step/sec: 214.773
INFO:tensorflow:loss = 0.010764226, step = 13780 (0.477 sec)
INFO:tensorflow:global_step/sec: 202.68
INFO:tensorflow:loss = 0.005942981, step = 13880 (0.476 sec)
INFO:tensorflow:global_step/sec: 216.921
INFO:tensorflow:loss = 0.014338085, step = 13980 (0.459 sec)
INFO:tensorflow:global_step/sec: 212.003
INFO:tensorflow:loss = 0.019534815, step = 14080 (0.533 sec)
INFO:tensorflow:global_step/sec: 191.437
INFO:tensorflow:loss = 0.008095667, step = 14180 (0.458 sec)
INFO:tensorflow:global_step/sec: 219.02
INFO:tensorflow:loss = 0.0028836923, step = 14280 (0.458 sec)
INFO:tensorflow:global_step/sec: 216.513
INFO:tensorflow:loss = 0.0114330305, step = 14380 (0.464 sec)
INFO:tensorflow:global_step/sec: 215.953
INFO:tensorflow:loss = 0.009912422, step = 14480 (0.466 sec)
INFO:tensorflow:global_step/sec: 215.34
INFO:tensorflow:loss = 0.0039477334, step = 14580 (0.467 sec)
INFO:tensorflow:global_step/sec: 212.913
INFO:tensorflow:loss = 0.015556888, step = 14680 (0.465 sec)
INFO:tensorflow:global_step/sec: 209.396
INFO:tensorflow:loss = 0.005233367, step = 14780 (0.482 sec)
INFO:tensorflow:global_step/sec: 216.004
INFO:tensorflow:loss = 0.0070141368, step = 14880 (0.529 sec)
INFO:tensorflow:global_step/sec: 181.032
INFO:tensorflow:loss = 0.0130175855, step = 14980 (0.486 sec)
INFO:tensorflow:global_step/sec: 214.466
INFO:tensorflow:loss = 0.0047422783, step = 15080 (0.473 sec)
INFO:tensorflow:global_step/sec: 211.951
INFO:tensorflow:loss = 0.0061913, step = 15180 (0.470 sec)
INFO:tensorflow:global_step/sec: 212.316
INFO:tensorflow:loss = 0.004428529, step = 15280 (0.467 sec)
INFO:tensorflow:global_step/sec: 211.761
INFO:tensorflow:loss = 0.007156968, step = 15380 (0.472 sec)
INFO:tensorflow:global_step/sec: 213.038
INFO:tensorflow:loss = 0.0044437535, step = 15480 (0.504 sec)
INFO:tensorflow:global_step/sec: 190.879
INFO:tensorflow:loss = 0.004695491, step = 15580 (0.509 sec)
INFO:tensorflow:global_step/sec: 203.367
INFO:tensorflow:loss = 0.0025094748, step = 15680 (0.518 sec)
INFO:tensorflow:global_step/sec: 194.819
INFO:tensorflow:loss = 0.00094408746, step = 15780 (0.475 sec)
INFO:tensorflow:global_step/sec: 212.161
INFO:tensorflow:loss = 0.012425633, step = 15880 (0.468 sec)
INFO:tensorflow:global_step/sec: 213.532
INFO:tensorflow:loss = 0.0042187907, step = 15980 (0.487 sec)
INFO:tensorflow:global_step/sec: 204.339
INFO:tensorflow:loss = 0.0037577068, step = 16080 (0.464 sec)
INFO:tensorflow:global_step/sec: 214.958
INFO:tensorflow:loss = 0.0062155034, step = 16180 (0.481 sec)
INFO:tensorflow:global_step/sec: 206.566
INFO:tensorflow:loss = 0.0022613448, step = 16280 (0.468 sec)
INFO:tensorflow:global_step/sec: 213.882
INFO:tensorflow:loss = 0.0028099597, step = 16380 (0.473 sec)
INFO:tensorflow:global_step/sec: 210.176
INFO:tensorflow:loss = 0.004106181, step = 16480 (0.478 sec)
INFO:tensorflow:global_step/sec: 211.633
INFO:tensorflow:loss = 0.0033143421, step = 16580 (0.474 sec)
INFO:tensorflow:global_step/sec: 211.786
INFO:tensorflow:loss = 0.0035097834, step = 16680 (0.481 sec)
INFO:tensorflow:global_step/sec: 207.87
INFO:tensorflow:loss = 0.0027867071, step = 16780 (0.463 sec)
INFO:tensorflow:global_step/sec: 213.845
INFO:tensorflow:loss = 0.009324459, step = 16880 (0.473 sec)
INFO:tensorflow:global_step/sec: 211.755
INFO:tensorflow:loss = 0.0021615229, step = 16980 (0.472 sec)
INFO:tensorflow:global_step/sec: 212.245
INFO:tensorflow:loss = 0.0048076506, step = 17080 (0.478 sec)
INFO:tensorflow:global_step/sec: 208.963
INFO:tensorflow:loss = 0.0018272446, step = 17180 (0.469 sec)
INFO:tensorflow:global_step/sec: 215.184
INFO:tensorflow:loss = 0.002462379, step = 17280 (0.482 sec)
INFO:tensorflow:global_step/sec: 205.925
INFO:tensorflow:loss = 0.0006275628, step = 17380 (0.469 sec)
INFO:tensorflow:global_step/sec: 211.733
INFO:tensorflow:loss = 0.002109193, step = 17480 (0.475 sec)
INFO:tensorflow:global_step/sec: 211.295
INFO:tensorflow:loss = 0.0029382277, step = 17580 (0.477 sec)
INFO:tensorflow:global_step/sec: 209.936
INFO:tensorflow:loss = 0.0032096568, step = 17680 (0.486 sec)
INFO:tensorflow:global_step/sec: 207.686
INFO:tensorflow:loss = 0.002996812, step = 17780 (0.481 sec)
INFO:tensorflow:global_step/sec: 208.084
INFO:tensorflow:loss = 0.0027301726, step = 17880 (0.487 sec)
INFO:tensorflow:global_step/sec: 203.467
INFO:tensorflow:loss = 0.0016131198, step = 17980 (0.491 sec)
INFO:tensorflow:global_step/sec: 194.249
INFO:tensorflow:loss = 0.0071048774, step = 18080 (0.576 sec)
INFO:tensorflow:global_step/sec: 175.652
INFO:tensorflow:loss = 0.0023194004, step = 18180 (0.542 sec)
INFO:tensorflow:global_step/sec: 191.984
INFO:tensorflow:loss = 0.0015120232, step = 18280 (0.501 sec)
INFO:tensorflow:global_step/sec: 194.325
INFO:tensorflow:loss = 0.0016394173, step = 18380 (0.499 sec)
INFO:tensorflow:global_step/sec: 194.902
INFO:tensorflow:loss = 0.0007376091, step = 18480 (0.546 sec)
INFO:tensorflow:global_step/sec: 184.887
INFO:tensorflow:loss = 0.0028751981, step = 18580 (0.508 sec)
INFO:tensorflow:global_step/sec: 204.618
INFO:tensorflow:loss = 0.0008021246, step = 18680 (0.487 sec)
INFO:tensorflow:global_step/sec: 206.998
INFO:tensorflow:loss = 0.002925751, step = 18780 (0.474 sec)
INFO:tensorflow:global_step/sec: 210.391
INFO:tensorflow:loss = 0.0020086821, step = 18880 (0.479 sec)
INFO:tensorflow:global_step/sec: 208.779
INFO:tensorflow:loss = 0.0009860102, step = 18980 (0.476 sec)
INFO:tensorflow:global_step/sec: 210.898
INFO:tensorflow:loss = 0.0012985889, step = 19080 (0.477 sec)
INFO:tensorflow:global_step/sec: 207.897
INFO:tensorflow:loss = 0.0012460706, step = 19180 (0.526 sec)
INFO:tensorflow:global_step/sec: 182.58
INFO:tensorflow:loss = 0.0013941245, step = 19280 (0.500 sec)
INFO:tensorflow:global_step/sec: 209.864
INFO:tensorflow:loss = 0.0017754486, step = 19380 (0.482 sec)
INFO:tensorflow:global_step/sec: 207.462
INFO:tensorflow:loss = 0.0007509034, step = 19480 (0.482 sec)
INFO:tensorflow:global_step/sec: 203.189
INFO:tensorflow:loss = 0.0013608203, step = 19580 (0.520 sec)
INFO:tensorflow:global_step/sec: 188.442
INFO:tensorflow:loss = 0.001058562, step = 19680 (0.502 sec)
INFO:tensorflow:global_step/sec: 204.703
INFO:tensorflow:loss = 0.0034424188, step = 19780 (0.518 sec)
INFO:tensorflow:global_step/sec: 194.015
INFO:tensorflow:loss = 0.0008957273, step = 19880 (0.475 sec)
INFO:tensorflow:global_step/sec: 207.493
INFO:tensorflow:loss = 0.002313973, step = 19980 (0.489 sec)
INFO:tensorflow:global_step/sec: 208.756
INFO:tensorflow:loss = 0.000694511, step = 20080 (0.484 sec)
INFO:tensorflow:global_step/sec: 203.375
INFO:tensorflow:loss = 0.0006695612, step = 20180 (0.492 sec)
INFO:tensorflow:global_step/sec: 206.74
INFO:tensorflow:loss = 0.0014117493, step = 20280 (0.475 sec)
INFO:tensorflow:global_step/sec: 202.108
INFO:tensorflow:loss = 0.0011933774, step = 20380 (0.502 sec)
INFO:tensorflow:global_step/sec: 206.154
INFO:tensorflow:loss = 0.0008137824, step = 20480 (0.472 sec)
INFO:tensorflow:global_step/sec: 210.589
INFO:tensorflow:loss = 0.0009201659, step = 20580 (0.525 sec)
INFO:tensorflow:global_step/sec: 190.006
INFO:tensorflow:loss = 0.00044394887, step = 20680 (0.481 sec)
INFO:tensorflow:global_step/sec: 209.716
INFO:tensorflow:loss = 0.0010550698, step = 20780 (0.501 sec)
INFO:tensorflow:global_step/sec: 200.574
INFO:tensorflow:loss = 0.00019395063, step = 20880 (0.479 sec)
INFO:tensorflow:global_step/sec: 209.267
INFO:tensorflow:loss = 0.0012454216, step = 20980 (0.500 sec)
INFO:tensorflow:global_step/sec: 198.726
INFO:tensorflow:loss = 0.0010517604, step = 21080 (0.472 sec)
INFO:tensorflow:global_step/sec: 212.335
INFO:tensorflow:loss = 0.0003900857, step = 21180 (0.490 sec)
INFO:tensorflow:global_step/sec: 203.871
INFO:tensorflow:loss = 0.0013436541, step = 21280 (0.482 sec)
INFO:tensorflow:global_step/sec: 208.191
INFO:tensorflow:loss = 0.00020852721, step = 21380 (0.523 sec)
INFO:tensorflow:global_step/sec: 188.887
INFO:tensorflow:loss = 0.00048694198, step = 21480 (0.486 sec)
INFO:tensorflow:global_step/sec: 206.18
INFO:tensorflow:loss = 0.00073513493, step = 21580 (0.502 sec)
INFO:tensorflow:global_step/sec: 196.849
INFO:tensorflow:loss = 0.00039215965, step = 21680 (0.486 sec)
INFO:tensorflow:global_step/sec: 207.478
INFO:tensorflow:loss = 0.00014613547, step = 21780 (0.497 sec)
INFO:tensorflow:global_step/sec: 203.14
INFO:tensorflow:loss = 0.00015599697, step = 21880 (0.479 sec)
INFO:tensorflow:global_step/sec: 207.261
INFO:tensorflow:loss = 0.00063628936, step = 21980 (0.496 sec)
INFO:tensorflow:global_step/sec: 192.981
INFO:tensorflow:loss = 0.00072673126, step = 22080 (0.569 sec)
INFO:tensorflow:global_step/sec: 184.533
INFO:tensorflow:loss = 0.00042106156, step = 22180 (0.490 sec)
INFO:tensorflow:global_step/sec: 202.957
INFO:tensorflow:loss = 0.00062714494, step = 22280 (0.479 sec)
INFO:tensorflow:global_step/sec: 209.223
INFO:tensorflow:loss = 0.0011216395, step = 22380 (0.494 sec)
INFO:tensorflow:global_step/sec: 200.908
INFO:tensorflow:loss = 0.00027068384, step = 22480 (0.502 sec)
INFO:tensorflow:global_step/sec: 190.631
INFO:tensorflow:loss = 0.000450986, step = 22580 (0.597 sec)
INFO:tensorflow:global_step/sec: 176.392
INFO:tensorflow:loss = 0.00010583318, step = 22680 (0.483 sec)
INFO:tensorflow:global_step/sec: 206.564
INFO:tensorflow:loss = 0.0010061075, step = 22780 (0.491 sec)
INFO:tensorflow:global_step/sec: 200.9
INFO:tensorflow:loss = 0.00017677023, step = 22880 (0.508 sec)
INFO:tensorflow:global_step/sec: 191.064
INFO:tensorflow:loss = 0.00046004873, step = 22980 (0.558 sec)
INFO:tensorflow:global_step/sec: 183.679
INFO:tensorflow:loss = 0.00043012408, step = 23080 (0.493 sec)
INFO:tensorflow:global_step/sec: 205.708
INFO:tensorflow:loss = 0.00112921, step = 23180 (0.492 sec)
INFO:tensorflow:global_step/sec: 204.122
INFO:tensorflow:loss = 0.00034221812, step = 23280 (0.480 sec)
INFO:tensorflow:global_step/sec: 208.715
INFO:tensorflow:loss = 0.00030490258, step = 23380 (0.489 sec)
INFO:tensorflow:global_step/sec: 202.505
INFO:tensorflow:loss = 0.00030782583, step = 23480 (0.494 sec)
INFO:tensorflow:global_step/sec: 203.755
INFO:tensorflow:loss = 0.0006354905, step = 23580 (0.521 sec)
INFO:tensorflow:global_step/sec: 190.195
INFO:tensorflow:loss = 0.00061701454, step = 23680 (0.485 sec)
INFO:tensorflow:global_step/sec: 206.651
INFO:tensorflow:loss = 0.00044614554, step = 23780 (0.532 sec)
INFO:tensorflow:global_step/sec: 188.084
INFO:tensorflow:loss = 0.00011296691, step = 23880 (0.493 sec)
INFO:tensorflow:global_step/sec: 198.27
INFO:tensorflow:loss = 0.0002558846, step = 23980 (0.529 sec)
INFO:tensorflow:global_step/sec: 191.401
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 24000...
INFO:tensorflow:Saving checkpoints for 24000 into /tmp/tmpe71wdd8q/model.ckpt.
WARNING:tensorflow:Issue encountered when serializing resources.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
'_Resource' object has no attribute 'name'
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 24000...
INFO:tensorflow:Loss for final step: 0.00039750503.
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Starting evaluation at 2021-01-02T15:57:43Z
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from /tmp/tmpe71wdd8q/model.ckpt-24000
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Inference Time : 0.23159s
INFO:tensorflow:Finished evaluation at 2021-01-02-15:57:44
INFO:tensorflow:Saving dict for global step 24000: average_loss = 12.793907, global_step = 24000, label/mean = 23.611391, loss = 12.713541, prediction/mean = 22.508915
WARNING:tensorflow:Issue encountered when serializing resources.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
'_Resource' object has no attribute 'name'
INFO:tensorflow:Saving 'checkpoint_path' summary for global step 24000: /tmp/tmpe71wdd8q/model.ckpt-24000
{'average_loss': 12.793907, 'label/mean': 23.611391, 'loss': 12.713541, 'prediction/mean': 22.508915, 'global_step': 24000}
Average Loss 12.7939
###Markdown
*Python Machine Learning 3rd Edition* by [Sebastian Raschka](https://sebastianraschka.com) & [Vahid Mirjalili](http://vahidmirjalili.com), Packt Publishing Ltd. 2019Code Repository: https://github.com/rasbt/python-machine-learning-book-3rd-editionCode License: [MIT License](https://github.com/rasbt/python-machine-learning-book-3rd-edition/blob/master/LICENSE.txt) Chapter 14: Going Deeper -- the Mechanics of TensorFlow (Part 2/3) Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
###Code
%load_ext watermark
%watermark -a "Sebastian Raschka & Vahid Mirjalili" -u -d -p numpy,scipy,matplotlib,tensorflow
import numpy as np
import tensorflow as tf
import pandas as pd
from IPython.display import Image
###Output
_____no_output_____
###Markdown
TensorFlow Estimators Steps for using pre-made estimators * **Step 1:** Define the input function for importing the data * **Step 2:** Define the feature columns to bridge between the estimator and the data * **Step 3:** Instantiate an estimator or convert a Keras model to an estimator * **Step 4:** Use the estimator: train() evaluate() predict()
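A minimal sketch tying the four steps together on a made-up single-feature dataset (the `'x'` column, the toy labels, and the hyperparameters are illustrative assumptions only — the real Auto MPG example follows in the cells below):
```python
import numpy as np
import tensorflow as tf

# Step 1: input functions that return tf.data.Dataset objects of (features, labels)
def train_input_fn():
    features = {'x': np.arange(10, dtype=np.float32)}
    labels = 2.0 * np.arange(10, dtype=np.float32)
    ds = tf.data.Dataset.from_tensor_slices((features, labels))
    return ds.shuffle(10).repeat().batch(4)

def eval_input_fn():
    features = {'x': np.arange(10, dtype=np.float32)}
    labels = 2.0 * np.arange(10, dtype=np.float32)
    return tf.data.Dataset.from_tensor_slices((features, labels)).batch(4)

# Step 2: feature columns bridge the raw features and the estimator
feature_columns = [tf.feature_column.numeric_column(key='x')]

# Step 3: instantiate a pre-made estimator (alternatively, wrap a Keras model
# with tf.keras.estimator.model_to_estimator)
regressor = tf.estimator.DNNRegressor(feature_columns=feature_columns,
                                      hidden_units=[8])

# Step 4: use the estimator
regressor.train(input_fn=train_input_fn, steps=100)
print(regressor.evaluate(input_fn=eval_input_fn))
print(next(iter(regressor.predict(input_fn=eval_input_fn))))
```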
###Code
tf.random.set_seed(1)
np.random.seed(1)
###Output
_____no_output_____
###Markdown
Working with feature columns * See definition: https://developers.google.com/machine-learning/glossary/feature_columns * Documentation: https://www.tensorflow.org/api_docs/python/tf/feature_column
###Code
Image(filename='images/02.png', width=700)
dataset_path = tf.keras.utils.get_file("auto-mpg.data",
("http://archive.ics.uci.edu/ml/machine-learning-databases"
"/auto-mpg/auto-mpg.data"))
column_names = ['MPG', 'Cylinders', 'Displacement', 'Horsepower',
'Weight', 'Acceleration', 'ModelYear', 'Origin']
df = pd.read_csv(dataset_path, names=column_names,
na_values = "?", comment='\t',
sep=" ", skipinitialspace=True)
df.tail()
print(df.isna().sum())
df = df.dropna()
df = df.reset_index(drop=True)
df.tail()
import sklearn
import sklearn.model_selection
df_train, df_test = sklearn.model_selection.train_test_split(df, train_size=0.8)
train_stats = df_train.describe().transpose()
train_stats
numeric_column_names = ['Cylinders', 'Displacement', 'Horsepower', 'Weight', 'Acceleration']
df_train_norm, df_test_norm = df_train.copy(), df_test.copy()
for col_name in numeric_column_names:
mean = train_stats.loc[col_name, 'mean']
std = train_stats.loc[col_name, 'std']
df_train_norm.loc[:, col_name] = (df_train_norm.loc[:, col_name] - mean)/std
df_test_norm.loc[:, col_name] = (df_test_norm.loc[:, col_name] - mean)/std
df_train_norm.tail()
###Output
_____no_output_____
###Markdown
Numeric Columns
###Code
numeric_features = []
for col_name in numeric_column_names:
numeric_features.append(tf.feature_column.numeric_column(key=col_name))
numeric_features
feature_year = tf.feature_column.numeric_column(key="ModelYear")
bucketized_features = []
bucketized_features.append(tf.feature_column.bucketized_column(
source_column=feature_year,
boundaries=[73, 76, 79]))
print(bucketized_features)
feature_origin = tf.feature_column.categorical_column_with_vocabulary_list(
key='Origin',
vocabulary_list=[1, 2, 3])
categorical_indicator_features = []
categorical_indicator_features.append(tf.feature_column.indicator_column(feature_origin))
print(categorical_indicator_features)
###Output
[IndicatorColumn(categorical_column=VocabularyListCategoricalColumn(key='Origin', vocabulary_list=(1, 2, 3), dtype=tf.int64, default_value=-1, num_oov_buckets=0))]
###Markdown
Machine learning with pre-made Estimators
###Code
def train_input_fn(df_train, batch_size=8):
df = df_train.copy()
train_x, train_y = df, df.pop('MPG')
dataset = tf.data.Dataset.from_tensor_slices((dict(train_x), train_y))
# shuffle, repeat, and batch the examples
return dataset.shuffle(1000).repeat().batch(batch_size)
## inspection
ds = train_input_fn(df_train_norm)
batch = next(iter(ds))
print('Keys:', batch[0].keys())
print('Batch Model Years:', batch[0]['ModelYear'])
all_feature_columns = (numeric_features +
bucketized_features +
categorical_indicator_features)
print(all_feature_columns)
regressor = tf.estimator.DNNRegressor(
feature_columns=all_feature_columns,
hidden_units=[32, 10],
model_dir='models/autompg-dnnregressor/')
EPOCHS = 1000
BATCH_SIZE = 8
total_steps = EPOCHS * int(np.ceil(len(df_train) / BATCH_SIZE))
print('Training Steps:', total_steps)
regressor.train(
input_fn=lambda:train_input_fn(df_train_norm, batch_size=BATCH_SIZE),
steps=total_steps)
reloaded_regressor = tf.estimator.DNNRegressor(
feature_columns=all_feature_columns,
hidden_units=[32, 10],
warm_start_from='models/autompg-dnnregressor/',
model_dir='models/autompg-dnnregressor/')
def eval_input_fn(df_test, batch_size=8):
df = df_test.copy()
test_x, test_y = df, df.pop('MPG')
dataset = tf.data.Dataset.from_tensor_slices((dict(test_x), test_y))
return dataset.batch(batch_size)
eval_results = reloaded_regressor.evaluate(
input_fn=lambda:eval_input_fn(df_test_norm, batch_size=8))
for key in eval_results:
print('{:15s} {}'.format(key, eval_results[key]))
print('Average-Loss {:.4f}'.format(eval_results['average_loss']))
pred_res = regressor.predict(input_fn=lambda: eval_input_fn(df_test_norm, batch_size=8))
print(next(iter(pred_res)))
###Output
INFO:tensorflow:Calling model_fn.
WARNING:tensorflow:Layer dnn is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx.
If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from models/autompg-dnnregressor/model.ckpt-40000
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
{'predictions': array([23.719353], dtype=float32)}
###Markdown
Boosted Tree Regressor
###Code
boosted_tree = tf.estimator.BoostedTreesRegressor(
feature_columns=all_feature_columns,
n_batches_per_layer=20,
n_trees=200)
boosted_tree.train(
input_fn=lambda:train_input_fn(df_train_norm, batch_size=BATCH_SIZE))
eval_results = boosted_tree.evaluate(
input_fn=lambda:eval_input_fn(df_test_norm, batch_size=8))
print(eval_results)
print('Average-Loss {:.4f}'.format(eval_results['average_loss']))
###Output
INFO:tensorflow:Using default config.
WARNING:tensorflow:Using temporary folder as model directory: /tmp/tmpbzo1p2wi
INFO:tensorflow:Using config: {'_model_dir': '/tmp/tmpbzo1p2wi', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': allow_soft_placement: true
graph_options {
rewrite_options {
meta_optimizer_iterations: ONE
}
}
, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_experimental_max_worker_delay_secs': None, '_session_creation_timeout_secs': 7200, '_service': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7f47bc30b7d0>, '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}
INFO:tensorflow:Calling model_fn.
WARNING:tensorflow:From /home/vahid/anaconda3/envs/tf2/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/canned/boosted_trees.py:214: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.cast` instead.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Create CheckpointSaverHook.
WARNING:tensorflow:Issue encountered when serializing resources.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
'_Resource' object has no attribute 'name'
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
WARNING:tensorflow:Issue encountered when serializing resources.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
'_Resource' object has no attribute 'name'
INFO:tensorflow:Saving checkpoints for 0 into /tmp/tmpbzo1p2wi/model.ckpt.
WARNING:tensorflow:Issue encountered when serializing resources.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
'_Resource' object has no attribute 'name'
INFO:tensorflow:loss = 402.19623, step = 0
WARNING:tensorflow:It seems that global step (tf.train.get_global_step) has not been increased. Current value (could be stable): 0 vs previous value: 0. You could increase the global step by passing tf.train.get_global_step() to Optimizer.apply_gradients or Optimizer.minimize.
WARNING:tensorflow:It seems that global step (tf.train.get_global_step) has not been increased. Current value (could be stable): 0 vs previous value: 0. You could increase the global step by passing tf.train.get_global_step() to Optimizer.apply_gradients or Optimizer.minimize.
WARNING:tensorflow:It seems that global step (tf.train.get_global_step) has not been increased. Current value (could be stable): 0 vs previous value: 0. You could increase the global step by passing tf.train.get_global_step() to Optimizer.apply_gradients or Optimizer.minimize.
WARNING:tensorflow:It seems that global step (tf.train.get_global_step) has not been increased. Current value (could be stable): 0 vs previous value: 0. You could increase the global step by passing tf.train.get_global_step() to Optimizer.apply_gradients or Optimizer.minimize.
WARNING:tensorflow:It seems that global step (tf.train.get_global_step) has not been increased. Current value (could be stable): 0 vs previous value: 0. You could increase the global step by passing tf.train.get_global_step() to Optimizer.apply_gradients or Optimizer.minimize.
INFO:tensorflow:loss = 289.26328, step = 80 (0.462 sec)
INFO:tensorflow:global_step/sec: 157.704
INFO:tensorflow:loss = 93.58242, step = 180 (0.363 sec)
INFO:tensorflow:global_step/sec: 422.808
INFO:tensorflow:loss = 45.606873, step = 280 (0.243 sec)
INFO:tensorflow:global_step/sec: 416.715
INFO:tensorflow:loss = 19.545433, step = 380 (0.240 sec)
INFO:tensorflow:global_step/sec: 416.626
INFO:tensorflow:loss = 6.4179554, step = 480 (0.245 sec)
INFO:tensorflow:global_step/sec: 407.822
INFO:tensorflow:loss = 4.7701707, step = 580 (0.231 sec)
INFO:tensorflow:global_step/sec: 408.05
INFO:tensorflow:loss = 4.569898, step = 680 (0.244 sec)
INFO:tensorflow:global_step/sec: 420.57
INFO:tensorflow:loss = 2.5075686, step = 780 (0.249 sec)
INFO:tensorflow:global_step/sec: 410.68
INFO:tensorflow:loss = 2.6939745, step = 880 (0.244 sec)
INFO:tensorflow:global_step/sec: 411.964
INFO:tensorflow:loss = 1.5966964, step = 980 (0.248 sec)
INFO:tensorflow:global_step/sec: 403.965
INFO:tensorflow:loss = 3.3678646, step = 1080 (0.250 sec)
INFO:tensorflow:global_step/sec: 398.728
INFO:tensorflow:loss = 2.3181179, step = 1180 (0.238 sec)
INFO:tensorflow:global_step/sec: 396.897
INFO:tensorflow:loss = 1.8086417, step = 1280 (0.250 sec)
INFO:tensorflow:global_step/sec: 414.237
INFO:tensorflow:loss = 0.6904925, step = 1380 (0.246 sec)
INFO:tensorflow:global_step/sec: 411.693
INFO:tensorflow:loss = 1.8734654, step = 1480 (0.250 sec)
INFO:tensorflow:global_step/sec: 401.569
INFO:tensorflow:loss = 2.5979433, step = 1580 (0.254 sec)
INFO:tensorflow:global_step/sec: 395.667
INFO:tensorflow:loss = 2.0128171, step = 1680 (0.256 sec)
INFO:tensorflow:global_step/sec: 392.234
INFO:tensorflow:loss = 2.469627, step = 1780 (0.244 sec)
INFO:tensorflow:global_step/sec: 386.751
INFO:tensorflow:loss = 0.87159, step = 1880 (0.253 sec)
INFO:tensorflow:global_step/sec: 404.765
INFO:tensorflow:loss = 0.80283445, step = 1980 (0.254 sec)
INFO:tensorflow:global_step/sec: 401.5
INFO:tensorflow:loss = 1.524719, step = 2080 (0.261 sec)
INFO:tensorflow:global_step/sec: 385.878
INFO:tensorflow:loss = 1.0228136, step = 2180 (0.261 sec)
INFO:tensorflow:global_step/sec: 382.386
INFO:tensorflow:loss = 1.0036705, step = 2280 (0.263 sec)
INFO:tensorflow:global_step/sec: 382.23
INFO:tensorflow:loss = 1.0771171, step = 2380 (0.245 sec)
INFO:tensorflow:global_step/sec: 388.433
INFO:tensorflow:loss = 0.9643565, step = 2480 (0.251 sec)
INFO:tensorflow:global_step/sec: 409.442
INFO:tensorflow:loss = 1.4598124, step = 2580 (0.264 sec)
INFO:tensorflow:global_step/sec: 382.398
INFO:tensorflow:loss = 0.7518444, step = 2680 (0.260 sec)
INFO:tensorflow:global_step/sec: 387.657
INFO:tensorflow:loss = 0.71297884, step = 2780 (0.260 sec)
INFO:tensorflow:global_step/sec: 387.516
INFO:tensorflow:loss = 0.21006158, step = 2880 (0.261 sec)
INFO:tensorflow:global_step/sec: 380.228
INFO:tensorflow:loss = 0.64975756, step = 2980 (0.252 sec)
INFO:tensorflow:global_step/sec: 375.953
INFO:tensorflow:loss = 0.3568688, step = 3080 (0.262 sec)
INFO:tensorflow:global_step/sec: 394.311
INFO:tensorflow:loss = 1.0947809, step = 3180 (0.260 sec)
INFO:tensorflow:global_step/sec: 389.576
INFO:tensorflow:loss = 0.38473517, step = 3280 (0.262 sec)
INFO:tensorflow:global_step/sec: 383.038
INFO:tensorflow:loss = 0.37087482, step = 3380 (0.266 sec)
INFO:tensorflow:global_step/sec: 377.258
INFO:tensorflow:loss = 0.37313935, step = 3480 (0.268 sec)
INFO:tensorflow:global_step/sec: 375.779
INFO:tensorflow:loss = 0.6371509, step = 3580 (0.253 sec)
INFO:tensorflow:global_step/sec: 376.039
INFO:tensorflow:loss = 0.6737277, step = 3680 (0.258 sec)
INFO:tensorflow:global_step/sec: 397.449
INFO:tensorflow:loss = 0.22763562, step = 3780 (0.264 sec)
INFO:tensorflow:global_step/sec: 379.907
INFO:tensorflow:loss = 0.70576984, step = 3880 (0.270 sec)
INFO:tensorflow:global_step/sec: 375.692
INFO:tensorflow:loss = 0.32033288, step = 3980 (0.266 sec)
INFO:tensorflow:global_step/sec: 376.935
INFO:tensorflow:loss = 0.5732076, step = 4080 (0.271 sec)
INFO:tensorflow:global_step/sec: 369.125
INFO:tensorflow:loss = 0.22866802, step = 4180 (0.257 sec)
INFO:tensorflow:global_step/sec: 370.509
INFO:tensorflow:loss = 0.27701426, step = 4280 (0.262 sec)
INFO:tensorflow:global_step/sec: 388.812
INFO:tensorflow:loss = 0.2290253, step = 4380 (0.273 sec)
INFO:tensorflow:global_step/sec: 373.834
INFO:tensorflow:loss = 0.24748756, step = 4480 (0.270 sec)
###Markdown
---Readers may ignore the next cell.
###Code
! python ../.convert_notebook_to_script.py --input ch14_part2.ipynb --output ch14_part2.py
###Output
[NbConvertApp] Converting notebook ch14_part2.ipynb to script
[NbConvertApp] Writing 6364 bytes to ch14_part2.py
|
001-Jupyter/001-Tutorials/002-IPython-Cookbook/chapter08_ml/06_random_forest.ipynb | ###Markdown
8.6. Using a random forest to select important features for regression
###Code
import numpy as np
import sklearn as sk
import sklearn.datasets as skd
import sklearn.ensemble as ske
import matplotlib.pyplot as plt
import pandas as pd
%matplotlib inline
data = skd.load_boston()
reg = ske.RandomForestRegressor()
X = data['data']
y = data['target']
reg.fit(X, y)
fet_ind = np.argsort(reg.feature_importances_)[::-1]
fet_imp = reg.feature_importances_[fet_ind]
fig, ax = plt.subplots(1, 1, figsize=(8, 3))
labels = data['feature_names'][fet_ind]
pd.Series(fet_imp, index=labels).plot(kind='bar', ax=ax)
ax.set_title('Features importance')
fig, ax = plt.subplots(1, 1)
ax.scatter(X[:, -1], y)
ax.set_xlabel('LSTAT indicator')
ax.set_ylabel('Value of houses (k$)')
from sklearn import tree
#tree.export_graphviz(reg.estimators_[0],
# 'tree.dot')
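# Optional extension (not part of the original recipe): the computed
# importances can drive automatic feature selection via SelectFromModel.
# This assumes the fitted `reg` and the array X defined above.
from sklearn.feature_selection import SelectFromModel
selector = SelectFromModel(reg, threshold='median', prefit=True)
X_selected = selector.transform(X)  # keeps features whose importance >= the median importance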
###Output
_____no_output_____ |
jupyter/annotation/english/match-datetime-pipeline/Pretrained-MatchDateTime-Pipeline.ipynb | ###Markdown
[](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/jupyter/annotation/english/match-datetime-pipeline/Pretrained-MatchDateTime-Pipeline.ipynb) 0. Colab Setup
###Code
# This is only to setup PySpark and Spark NLP on Colab
!wget http://setup.johnsnowlabs.com/colab.sh -O - | bash
###Output
openjdk version "1.8.0_252"
OpenJDK Runtime Environment (build 1.8.0_252-8u252-b09-1~18.04-b09)
OpenJDK 64-Bit Server VM (build 25.252-b09, mixed mode)
[K |████████████████████████████████| 215.7MB 60kB/s
[K |████████████████████████████████| 204kB 48.7MB/s
[?25h Building wheel for pyspark (setup.py) ... [?25l[?25hdone
[K |████████████████████████████████| 122kB 3.3MB/s
[?25hopenjdk version "1.8.0_252"
OpenJDK Runtime Environment (build 1.8.0_252-8u252-b09-1~18.04-b09)
OpenJDK 64-Bit Server VM (build 25.252-b09, mixed mode)
###Markdown
Use pretrained `match_datetime` Pipeline * DocumentAssembler* SentenceDetector* Tokenizer* DateMatcher `yyyy/MM/dd`
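Roughly the same four stages could also be assembled by hand, as sketched below. This is only an illustration following the usual Spark NLP conventions — it is not the pretrained pipeline's actual definition, and the `DateMatcher` output-format setter is omitted because its name has changed across Spark NLP versions:
```python
from pyspark.ml import Pipeline
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import SentenceDetector, Tokenizer, DateMatcher

document_assembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

sentence_detector = SentenceDetector() \
    .setInputCols(["document"]) \
    .setOutputCol("sentence")

tokenizer = Tokenizer() \
    .setInputCols(["sentence"]) \
    .setOutputCol("token")

date_matcher = DateMatcher() \
    .setInputCols(["document"]) \
    .setOutputCol("date")

custom_pipeline = Pipeline(stages=[document_assembler, sentence_detector,
                                   tokenizer, date_matcher])
```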
###Code
import sys
#Spark ML and SQL
from pyspark.ml import Pipeline, PipelineModel
from pyspark.sql.functions import array_contains
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, IntegerType, StringType
#Spark NLP
import sparknlp
from sparknlp.pretrained import PretrainedPipeline
from sparknlp.annotator import *
from sparknlp.common import RegexRule
from sparknlp.base import DocumentAssembler, Finisher
###Output
_____no_output_____
###Markdown
Let's create a Spark Session for our app
###Code
spark = sparknlp.start()
print("Spark NLP version: ", sparknlp.version())
print("Apache Spark version: ", spark.version)
pipeline = PretrainedPipeline('match_datetime', lang='en')
result=pipeline.annotate("Let's meet on 20th of February.")
result['date']
dfTest = spark.createDataFrame(["I would like to come over and see you in 01/02/2019."], StringType()).toDF("text")
result=pipeline.transform(dfTest)
result.select("date.result").show()
###Output
+------------+
| result|
+------------+
|[2019/01/02]|
+------------+
###Markdown
 Use pretrained `match_datetime` Pipeline * DocumentAssembler* SentenceDetector* Tokenizer* DateMatcher `yyyy/MM/dd`
###Code
import sys
#Spark ML and SQL
from pyspark.ml import Pipeline, PipelineModel
from pyspark.sql.functions import array_contains
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, IntegerType, StringType
#Spark NLP
import sparknlp
from sparknlp.pretrained import PretrainedPipeline
from sparknlp.annotator import *
from sparknlp.common import RegexRule
from sparknlp.base import DocumentAssembler, Finisher
###Output
_____no_output_____
###Markdown
Let's create a Spark Session for our app
###Code
spark = sparknlp.start()
print("Spark NLP version: ", sparknlp.version())
print("Apache Spark version: ", spark.version)
pipeline = PretrainedPipeline('match_datetime', lang='en')
result=pipeline.annotate("Let's meet on 20th of February.")
result['date']
dfTest = spark.createDataFrame(["I would like to come over and see you in 01/02/2019."], StringType()).toDF("text")
result=pipeline.transform(dfTest)
result.select("date.result").show()
###Output
+------------+
| result|
+------------+
|[2019/01/02]|
+------------+
###Markdown
 Use pretrained `match_datetime` Pipeline * DocumentAssembler* SentenceDetector* Tokenizer* DateMatcher `yyyy/MM/dd`
###Code
import sys
#Spark ML and SQL
from pyspark.ml import Pipeline, PipelineModel
from pyspark.sql.functions import array_contains
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, IntegerType, StringType
#Spark NLP
import sparknlp
from sparknlp.pretrained import PretrainedPipeline
from sparknlp.annotator import *
from sparknlp.common import RegexRule
from sparknlp.base import DocumentAssembler, Finisher
###Output
_____no_output_____
###Markdown
Let's create a Spark Session for our app
###Code
spark = sparknlp.start()
print("Spark NLP version")
sparknlp.version()
print("Apache Spark version")
spark.version
pipeline = PretrainedPipeline('match_datetime', lang='en')
result=pipeline.annotate("Let's meet on 20th of February.")
result['date']
dfTest = spark.createDataFrame(["I would like to come over and see you in 01/02/2019."], StringType()).toDF("text")
result=pipeline.transform(dfTest)
result.select("date.result").show()
###Output
+------------+
| result|
+------------+
|[2019/01/02]|
+------------+
###Markdown
[](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/jupyter/annotation/english/match-datetime-pipeline/Pretrained-MatchDateTime-Pipeline.ipynb) 0. Colab Setup
###Code
import os
# Install java
! apt-get update -qq
! apt-get install -y openjdk-8-jdk-headless -qq > /dev/null
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["PATH"] = os.environ["JAVA_HOME"] + "/bin:" + os.environ["PATH"]
! java -version
# Install pyspark
! pip install --ignore-installed pyspark==2.4.4
# Install Spark NLP
! pip install --ignore-installed spark-nlp
###Output
openjdk version "1.8.0_252"
OpenJDK Runtime Environment (build 1.8.0_252-8u252-b09-1~18.04-b09)
OpenJDK 64-Bit Server VM (build 25.252-b09, mixed mode)
[K |████████████████████████████████| 215.7MB 60kB/s
[K |████████████████████████████████| 204kB 48.7MB/s
[?25h Building wheel for pyspark (setup.py) ... [?25l[?25hdone
[K |████████████████████████████████| 122kB 3.3MB/s
[?25hopenjdk version "1.8.0_252"
OpenJDK Runtime Environment (build 1.8.0_252-8u252-b09-1~18.04-b09)
OpenJDK 64-Bit Server VM (build 25.252-b09, mixed mode)
###Markdown
Use pretrained `match_datetime` Pipeline * DocumentAssembler* SentenceDetector* Tokenizer* DateMatcher `yyyy/MM/dd`
###Code
import sys
#Spark ML and SQL
from pyspark.ml import Pipeline, PipelineModel
from pyspark.sql.functions import array_contains
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, IntegerType, StringType
#Spark NLP
import sparknlp
from sparknlp.pretrained import PretrainedPipeline
from sparknlp.annotator import *
from sparknlp.common import RegexRule
from sparknlp.base import DocumentAssembler, Finisher
###Output
_____no_output_____
###Markdown
Let's create a Spark Session for our app
###Code
spark = sparknlp.start()
print("Spark NLP version: ", sparknlp.version())
print("Apache Spark version: ", spark.version)
pipeline = PretrainedPipeline('match_datetime', lang='en')
result=pipeline.annotate("Let's meet on 20th of February.")
result['date']
dfTest = spark.createDataFrame(["I would like to come over and see you in 01/02/2019."], StringType()).toDF("text")
result=pipeline.transform(dfTest)
result.select("date.result").show()
###Output
+------------+
| result|
+------------+
|[2019/01/02]|
+------------+
###Markdown
[](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/jupyter/annotation/english/match-datetime-pipeline/Pretrained-MatchDateTime-Pipeline.ipynb) 0. Colab Setup
###Code
import os
# Install java
! apt-get install -y openjdk-8-jdk-headless -qq > /dev/null
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["PATH"] = os.environ["JAVA_HOME"] + "/bin:" + os.environ["PATH"]
! java -version
# Install pyspark
! pip install --ignore-installed -q pyspark==2.4.4
# Install Spark NLP
! pip install --ignore-installed -q spark-nlp==2.5.0
###Output
openjdk version "1.8.0_252"
OpenJDK Runtime Environment (build 1.8.0_252-8u252-b09-1~18.04-b09)
OpenJDK 64-Bit Server VM (build 25.252-b09, mixed mode)
[K |████████████████████████████████| 215.7MB 60kB/s
[K |████████████████████████████████| 204kB 48.7MB/s
[?25h Building wheel for pyspark (setup.py) ... [?25l[?25hdone
[K |████████████████████████████████| 122kB 3.3MB/s
[?25hopenjdk version "1.8.0_252"
OpenJDK Runtime Environment (build 1.8.0_252-8u252-b09-1~18.04-b09)
OpenJDK 64-Bit Server VM (build 25.252-b09, mixed mode)
###Markdown
Use pretrained `match_datetime` Pipeline * DocumentAssembler* SentenceDetector* Tokenizer* DateMatcher `yyyy/MM/dd`
###Code
import sys
#Spark ML and SQL
from pyspark.ml import Pipeline, PipelineModel
from pyspark.sql.functions import array_contains
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, IntegerType, StringType
#Spark NLP
import sparknlp
from sparknlp.pretrained import PretrainedPipeline
from sparknlp.annotator import *
from sparknlp.common import RegexRule
from sparknlp.base import DocumentAssembler, Finisher
###Output
_____no_output_____
###Markdown
Let's create a Spark Session for our app
###Code
spark = sparknlp.start()
print("Spark NLP version: ", sparknlp.version())
print("Apache Spark version: ", spark.version)
pipeline = PretrainedPipeline('match_datetime', lang='en')
result=pipeline.annotate("Let's meet on 20th of February.")
result['date']
dfTest = spark.createDataFrame(["I would like to come over and see you in 01/02/2019."], StringType()).toDF("text")
result=pipeline.transform(dfTest)
result.select("date.result").show()
###Output
+------------+
| result|
+------------+
|[2019/01/02]|
+------------+
|
Lego-Dillema-/Lego_Dillema_student_template.ipynb | ###Markdown
Load and split the dataset- Load the train data and, using pandas, explore the different statistical properties of the dataset.- Separate the features and the target, then split the train data into train and validation sets.
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score
import seaborn as sns
import warnings
warnings.filterwarnings("ignore")
# Code starts here
train = pd.read_csv("E:/GreyAtom/glab proj/LEGO/train.csv")
train.head(10)
# Shape of the data
print("Shape of the data is:", train.shape)
#Checking statistical properties of data
print("Statistical properties of data are as follows")
print(train.describe())
#Dropping column ID
train.drop('Id',axis=1,inplace=True)
train.head()
# Checking for skewness in the features
print("Skewness for different features is shown as below")
print(train.skew())
# Split into features and target
X = train.drop("list_price",axis=1)
y = train['list_price']
#Reading features (X)
X.head(10)
#Reading Target (y)
y.head(10)
# Separate into train and test data
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.3,random_state=6)
###Output
_____no_output_____
###Markdown
Data Visualization- All the features, including the target variable, are continuous.- Scatter plots are well suited to pairs of continuous variables, so plot each feature against the target and try to draw some inferences from these plots.
###Code
# Code starts here
cols = X_train.columns
print("Columns in the dataset are : ",cols)
fig, axes = plt.subplots(nrows = 3, ncols = 3, figsize=(20,20))
for i in range(0,3):
for j in range(0,3):
col = cols[i*3 + j]
axes[i,j].set_title(col)
axes[i,j].scatter(X_train[col],y_train)
axes[i,j].set_xlabel(col)
axes[i,j].set_ylabel('list_price')
plt.show()
###Output
_____no_output_____
###Markdown
Feature Selection- Select a suitable correlation threshold and drop the highly correlated columns accordingly.
###Code
# Code starts here
sns.heatmap(train.corr())
plt.show()
# Selecting upper and lower threshold
upper_threshold = 0.5
lower_threshold = -0.5
correlation = train.corr().unstack().sort_values(kind='quicksort')
correlation
# Select the highest correlation pairs having correlation greater than upper threshold and lower than lower threshold
corr_var_list = correlation[((correlation > upper_threshold) | (correlation < lower_threshold)) & (correlation != 1)]
corr_var_list
# drop columns from X_train
X_train.drop(['play_star_rating','val_star_rating'],axis = 1 ,inplace=True)
X_train.head(10)
X_test.drop(['play_star_rating','val_star_rating'], axis = 1 ,inplace=True)
X_test.head(10)
###Output
_____no_output_____
###Markdown
Model building
###Code
# Code starts here
regressor = LinearRegression()
regressor.fit(X_train,y_train)
y_pred = regressor.predict(X_test)
y_pred
# Calculate mse
mse = mean_squared_error(y_test,y_pred)
mse
# Calculate r2_score
r2 = r2_score(y_test,y_pred)
r2
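# Optional follow-up (not required by the template), assuming mse, r2 and
# X_test defined above: RMSE reports the error on the list_price scale, and
# adjusted R^2 penalises R^2 for the number of features used.
rmse = np.sqrt(mse)
n, p = X_test.shape
adjusted_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)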
###Output
_____no_output_____
###Markdown
Residual check!- Check the distribution of the residuals.
###Code
# Code starts here
residual = y_test - y_pred
print("Residual : ",residual)
plt.figure(figsize=(15,8))
plt.hist(residual, bins=30)
plt.xlabel("Residual")
plt.ylabel("Frequency")
plt.title("Residual Plot")
plt.show()
###Output
Residual : 6272 -2.636174
1262 -12.502869
8379 -9.189409
4989 24.019389
6452 -15.397271
...
5985 -31.703891
7490 -14.285508
3974 -17.602137
7868 25.156446
7750 -4.681132
Name: list_price, Length: 2575, dtype: float64
###Markdown
Prediction on the test data and creating the sample submission file.- Load the test data and store the `Id` column in a separate variable.- Perform the same operations on the test data that you have performed on the train data.- Create the submission file as a `csv` file consisting of the `Id` column from the test data and your prediction as the second column.
###Code
# Code starts here
test = pd.read_csv("E:/GreyAtom/glab proj/LEGO/test.csv")
test.head(10)
id_ = test['Id']
test.drop(['Id','play_star_rating','val_star_rating'],1,inplace=True)
test.head()
y_pred_test = regressor.predict(test)
y_pred_test
final_submission = pd.DataFrame({'Id':id_,'list_price':y_pred_test})
final_submission.head(10)
final_submission.to_csv('final_submission.csv',index=False)
###Output
_____no_output_____ |
code/C2/C2.ipynb | ###Markdown
Example 1: Using a NumPy scalar type
###Code
import numpy as np
dt = np.dtype(np.int32)
print(dt)
###Output
int32
###Markdown
Example 2: Using Python built-in types> The four data types int8, int16, int32, and int64 can also be written with the type-code strings 'i1', 'i2', 'i4', and 'i8'.
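For instance, each one-character code resolves to the corresponding fixed-width integer type (a quick sketch):
```python
import numpy as np

for code, expected in [('i1', np.int8), ('i2', np.int16),
                       ('i4', np.int32), ('i8', np.int64)]:
    print(code, '->', np.dtype(code))      # i1 -> int8, i2 -> int16, ...
    assert np.dtype(code) == np.dtype(expected)
```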
###Code
dt = np.dtype('i4')
print(dt)
###Output
int32
###Markdown
Example 3: Specifying the byte order
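Here `<` requests little-endian and `>` big-endian byte order. A small sketch (note that NumPy reports a dtype whose order matches the machine's native order as `'='`):
```python
import numpy as np

le = np.dtype('<i4')   # little-endian 32-bit signed integer
be = np.dtype('>i4')   # big-endian 32-bit signed integer
print(le.str, be.str)              # '<i4' '>i4'
print(le.byteorder, be.byteorder)  # on a little-endian machine: '=' and '>'
```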
###Code
dt = np.dtype('<i4')
print(dt)
print(dt.str)
###Output
int32
<i4
###Markdown
Example 4: Creating a structured data type
###Code
dt = np.dtype([('age', np.int8)])
print(dt)
###Output
[('age', 'i1')]
###Markdown
Example 5: Applying the data type to an ndarray object
###Code
dt = np.dtype([('age', np.int8)])
a = np.array([(10, ), (20, ), (30, )], dtype = dt)
print(a)
###Output
[(10,) (20,) (30,)]
###Markdown
Example 6: The field name can be used to access the actual age column
###Code
dt = np.dtype([('age', np.int8)])
a = np.array([(10, ), (20, ), (30, )], dtype = dt)
print(a['age'])
###Output
[10 20 30]
###Markdown
Example 7: Define a structured data type student containing a string field name, an integer field age, and a float field marks, then apply this dtype to an ndarray object
###Code
import numpy as np
student = np.dtype([('name', 'S20'), ('age', 'i1'), ('marks', 'f4')])
print(student)
###Output
[('name', 'S20'), ('age', 'i1'), ('marks', '<f4')]
###Markdown
Example 8: Printing a structured array
###Code
student = np.dtype([('name', 'S20'), ('age', 'i1'), ('marks', 'f4')])
a = np.array([('abc', 21, 50),('xyz', 18, 75)], dtype = student)
print(a)
print(a['marks'])
###Output
[(b'abc', 21, 50.) (b'xyz', 18, 75.)]
[50. 75.]
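###Markdown
Structured arrays support more than printing: fields can be read or written by name, and `np.sort` can order the records by a field. A small sketch reusing the `student` dtype from Example 7:
```python
import numpy as np

student = np.dtype([('name', 'S20'), ('age', 'i1'), ('marks', 'f4')])
a = np.array([('abc', 21, 50), ('xyz', 18, 75)], dtype=student)

a['marks'] += 5                   # update an entire field in place
print(np.sort(a, order='age'))    # sort records by the 'age' field
```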
|
03_machine_learning_classification/week_6/quiz.ipynb | ###Markdown
**Quiz Question**: What is the recall value for a classifier that predicts **+1** for all data points in the **test_data**?
###Code
1
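# Recall = TP / (TP + FN). A classifier that predicts +1 for every point in
# test_data produces zero false negatives, so its recall is exactly 1
# (its precision would instead equal the fraction of truly positive reviews).
# Sanity check (assumes `test_data` and turicreate imported as `tc`):
# tc.evaluation.recall(test_data['sentiment'], tc.SArray([+1] * len(test_data)))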
###Output
_____no_output_____
###Markdown
Precision-recall tradeoffIn this part, we will explore the trade-off between precision and recall discussed in the lecture. We first examine what happens when we use a different threshold value for making class predictions. We then explore a range of threshold values and plot the associated precision-recall curve. Varying the thresholdFalse positives are costly in our example, so we may want to be more conservative about making positive predictions. To achieve this, instead of thresholding class probabilities at 0.5, we can choose a higher threshold. Write a function called `apply_threshold` that accepts two things* `probabilities` (an SArray of probability values)* `threshold` (a float between 0 and 1).The function should return an SArray, where each element is set to +1 or -1 depending on whether the corresponding probability exceeds `threshold`.
###Code
def apply_threshold(probabilities, threshold):
### YOUR CODE GOES HERE
# +1 if >= threshold and -1 otherwise.
prob_threshold = probabilities.to_numpy()
prob_threshold[prob_threshold >= threshold] = 1
prob_threshold[prob_threshold < threshold] = -1
return tc.SArray(prob_threshold, int)
###Output
_____no_output_____
###Markdown
Run prediction with `output_type='probability'` to get the list of probability values. Then use thresholds set at 0.5 (default) and 0.9 to make predictions from these probability values.
###Code
probabilities = model.predict(test_data, output_type='probability')
predictions_with_default_threshold = apply_threshold(probabilities, 0.5)
predictions_with_high_threshold = apply_threshold(probabilities, 0.9)
print("Number of positive predicted reviews (threshold = 0.5): %s" % (predictions_with_default_threshold == 1).sum())
print("Number of positive predicted reviews (threshold = 0.9): %s" % (predictions_with_high_threshold == 1).sum())
###Output
Number of positive predicted reviews (threshold = 0.9): 25031
###Markdown
**Quiz Question**: What happens to the number of positive predicted reviews as the threshold increased from 0.5 to 0.9? Exploring the associated precision and recall as the threshold varies By changing the probability threshold, it is possible to influence precision and recall. We can explore this as follows:
###Code
# Threshold = 0.5
precision_with_default_threshold = tc.evaluation.precision(test_data['sentiment'],
predictions_with_default_threshold)
recall_with_default_threshold = tc.evaluation.recall(test_data['sentiment'],
predictions_with_default_threshold)
# Threshold = 0.9
precision_with_high_threshold = tc.evaluation.precision(test_data['sentiment'],
predictions_with_high_threshold)
recall_with_high_threshold = tc.evaluation.recall(test_data['sentiment'],
predictions_with_high_threshold)
print("Precision (threshold = 0.5): %s" % precision_with_default_threshold)
print("Recall (threshold = 0.5) : %s" % recall_with_default_threshold)
print("Precision (threshold = 0.9): %s" % precision_with_high_threshold)
print("Recall (threshold = 0.9) : %s" % recall_with_high_threshold)
###Output
Precision (threshold = 0.9): 0.9728736366905038
Recall (threshold = 0.9) : 0.8667734472326036
###Markdown
**Quiz Question (variant 1)**: Does the **precision** increase with a higher threshold?**Quiz Question (variant 2)**: Does the **recall** increase with a higher threshold? Precision-recall curveNow, we will explore a range of threshold values, compute the precision and recall scores, and then plot the precision-recall curve.
###Code
threshold_values = np.linspace(0.5, 1, num=100)
print(threshold_values)
###Output
[0.5 0.50505051 0.51010101 0.51515152 0.52020202 0.52525253
0.53030303 0.53535354 0.54040404 0.54545455 0.55050505 0.55555556
0.56060606 0.56565657 0.57070707 0.57575758 0.58080808 0.58585859
0.59090909 0.5959596 0.6010101 0.60606061 0.61111111 0.61616162
0.62121212 0.62626263 0.63131313 0.63636364 0.64141414 0.64646465
0.65151515 0.65656566 0.66161616 0.66666667 0.67171717 0.67676768
0.68181818 0.68686869 0.69191919 0.6969697 0.7020202 0.70707071
0.71212121 0.71717172 0.72222222 0.72727273 0.73232323 0.73737374
0.74242424 0.74747475 0.75252525 0.75757576 0.76262626 0.76767677
0.77272727 0.77777778 0.78282828 0.78787879 0.79292929 0.7979798
0.8030303 0.80808081 0.81313131 0.81818182 0.82323232 0.82828283
0.83333333 0.83838384 0.84343434 0.84848485 0.85353535 0.85858586
0.86363636 0.86868687 0.87373737 0.87878788 0.88383838 0.88888889
0.89393939 0.8989899 0.9040404 0.90909091 0.91414141 0.91919192
0.92424242 0.92929293 0.93434343 0.93939394 0.94444444 0.94949495
0.95454545 0.95959596 0.96464646 0.96969697 0.97474747 0.97979798
0.98484848 0.98989899 0.99494949 1. ]
###Markdown
For each of the values of threshold, we compute the precision and recall scores.
###Code
precision_all = []
recall_all = []
threshold_old = np.inf
probabilities = model.predict(test_data, output_type='probability')
for threshold in threshold_values:
predictions = apply_threshold(probabilities, threshold)
precision = tc.evaluation.precision(test_data['sentiment'], predictions)
recall = tc.evaluation.recall(test_data['sentiment'], predictions)
precision_all.append(precision)
recall_all.append(recall)
if (precision > 0.965 and threshold < threshold_old):
print(f'Precision (threshold={threshold}) = {precision}')
precision_old = precision
###Output
Precision (threshold=0.8131313131313131) = 0.965418841287125
Precision (threshold=0.8181818181818182) = 0.9657028838189895
Precision (threshold=0.8232323232323233) = 0.9660978556327393
Precision (threshold=0.8282828282828283) = 0.966529097724433
Precision (threshold=0.8333333333333334) = 0.9669107881455622
Precision (threshold=0.8383838383838385) = 0.9673797198538368
Precision (threshold=0.8434343434343434) = 0.9678219711428353
Precision (threshold=0.8484848484848485) = 0.9682460642739495
Precision (threshold=0.8535353535353536) = 0.9686167556562824
Precision (threshold=0.8585858585858586) = 0.9689354813844138
Precision (threshold=0.8636363636363636) = 0.969324204092685
Precision (threshold=0.8686868686868687) = 0.9696675469939413
Precision (threshold=0.8737373737373737) = 0.970340634499961
Precision (threshold=0.8787878787878789) = 0.970922041327489
Precision (threshold=0.8838383838383839) = 0.9711262342158058
Precision (threshold=0.8888888888888888) = 0.9718732717073556
Precision (threshold=0.893939393939394) = 0.9723280927425758
Precision (threshold=0.898989898989899) = 0.9728564585661823
Precision (threshold=0.9040404040404041) = 0.9733221005335579
Precision (threshold=0.9090909090909092) = 0.9739913573765195
Precision (threshold=0.9141414141414141) = 0.9745759264532401
Precision (threshold=0.9191919191919192) = 0.9751588440254151
Precision (threshold=0.9242424242424243) = 0.9758284439302537
Precision (threshold=0.9292929292929293) = 0.9767519373385551
Precision (threshold=0.9343434343434344) = 0.9773749947432608
Precision (threshold=0.9393939393939394) = 0.9780798640611724
Precision (threshold=0.9444444444444444) = 0.9788541711436799
Precision (threshold=0.9494949494949496) = 0.9795401997993282
Precision (threshold=0.9545454545454546) = 0.9806036395916555
Precision (threshold=0.9595959595959596) = 0.9814907000950355
Precision (threshold=0.9646464646464648) = 0.982452891337562
Precision (threshold=0.9696969696969697) = 0.9834848412736186
Precision (threshold=0.9747474747474748) = 0.9841457410142787
Precision (threshold=0.9797979797979799) = 0.9846249097100402
Precision (threshold=0.9848484848484849) = 0.9853645116918844
Precision (threshold=0.98989898989899) = 0.9863047050946497
Precision (threshold=0.994949494949495) = 0.9872171613282398
Precision (threshold=1.0) = 1.0
###Markdown
Now, let's plot the precision-recall curve to visualize the precision-recall tradeoff as we vary the threshold.
###Code
import matplotlib.pyplot as plt
%matplotlib inline
def plot_pr_curve(precision, recall, title):
plt.rcParams['figure.figsize'] = 7, 5
plt.locator_params(axis = 'x', nbins = 5)
plt.plot(precision, recall, 'b-', linewidth=4.0, color = '#B0017F')
plt.title(title)
plt.xlabel('Precision')
plt.ylabel('Recall')
plt.rcParams.update({'font.size': 16})
plot_pr_curve(precision_all, recall_all, 'Precision recall curve (all)')
###Output
_____no_output_____
###Markdown
**Quiz Question**: Among all the threshold values tried, what is the **smallest** threshold value that achieves a precision of 96.5% or better? Round your answer to 3 decimal places. **Quiz Question**: Using `threshold` = 0.98, how many **false negatives** do we get on the **test_data**? (**Hint**: You may use the `turicreate.evaluation.confusion_matrix` function implemented in Turi Create.)
###Code
predictions_98threshold = apply_threshold(probabilities, 0.98)
tc.evaluation.confusion_matrix(test_data['sentiment'], predictions_98threshold)
###Output
_____no_output_____
###Markdown
This is the number of false negatives (i.e. the number of reviews to look at when not needed) that we have to deal with using this classifier. Evaluating specific search terms So far, we looked at the number of false positives for the **entire test set**. In this section, let's select reviews using a specific search term and optimize the precision on these reviews only. After all, a manufacturer would be interested in tuning the false positive rate just for their products (the reviews they want to read) rather than that of the entire set of products on Amazon. Precision-Recall on all baby-related items From the **test set**, select all the reviews for all products with the word 'baby' in them.
###Code
baby_reviews = test_data[test_data['name'].apply(lambda x: 'baby' in x.lower())]
###Output
_____no_output_____
###Markdown
Now, let's predict the probability of classifying these reviews as positive:
###Code
probabilities = model.predict(baby_reviews, output_type='probability')
###Output
_____no_output_____
###Markdown
Let's plot the precision-recall curve for the **baby_reviews** dataset. **First**, let's consider the following `threshold_values` ranging from 0.5 to 1:
###Code
threshold_values = np.linspace(0.5, 1, num=100)
###Output
_____no_output_____
###Markdown
**Second**, as we did above, let's compute precision and recall for each value in `threshold_values` on the **baby_reviews** dataset. Complete the code block below.
###Code
precision_all = []
recall_all = []
threshold_old = np.inf
for threshold in threshold_values:
# Make predictions. Use the `apply_threshold` function
## YOUR CODE HERE
predictions = apply_threshold(probabilities, threshold)
# Calculate the precision.
# YOUR CODE HERE
precision = tc.evaluation.precision(baby_reviews['sentiment'], predictions)
# YOUR CODE HERE
recall = tc.evaluation.recall(baby_reviews['sentiment'], predictions)
# Append the precision and recall scores.
precision_all.append(precision)
recall_all.append(recall)
if (precision > 0.965 and threshold < threshold_old):
print(f'Precision (threshold={threshold}) = {precision}')
precision_old = precision
###Output
Precision (threshold=0.8484848484848485) = 0.9651917404129794
Precision (threshold=0.8535353535353536) = 0.9656804733727811
Precision (threshold=0.8585858585858586) = 0.9659405940594059
Precision (threshold=0.8636363636363636) = 0.9666070363744782
Precision (threshold=0.8686868686868687) = 0.9672654690618763
Precision (threshold=0.8737373737373737) = 0.9671868747499
Precision (threshold=0.8787878787878789) = 0.967852119750854
Precision (threshold=0.8838383838383839) = 0.9684912138961825
Precision (threshold=0.8888888888888888) = 0.968978102189781
Precision (threshold=0.893939393939394) = 0.9693752552062066
Precision (threshold=0.898989898989899) = 0.9704129854119581
Precision (threshold=0.9040404040404041) = 0.9708557255064076
Precision (threshold=0.9090909090909092) = 0.9722743381279967
Precision (threshold=0.9141414141414141) = 0.9729389553178099
Precision (threshold=0.9191919191919192) = 0.9734177215189873
Precision (threshold=0.9242424242424243) = 0.9740535942152275
Precision (threshold=0.9292929292929293) = 0.9750912604681126
Precision (threshold=0.9343434343434344) = 0.9762007788836001
Precision (threshold=0.9393939393939394) = 0.9770441626585046
Precision (threshold=0.9444444444444444) = 0.9776548672566372
Precision (threshold=0.9494949494949496) = 0.9784318130757134
Precision (threshold=0.9545454545454546) = 0.9805491990846682
Precision (threshold=0.9595959595959596) = 0.9818012132524498
Precision (threshold=0.9646464646464648) = 0.9828080229226361
Precision (threshold=0.9696969696969697) = 0.983218163869694
Precision (threshold=0.9747474747474748) = 0.9838874680306905
Precision (threshold=0.9797979797979799) = 0.9846526655896607
Precision (threshold=0.9848484848484849) = 0.9853784403669725
Precision (threshold=0.98989898989899) = 0.9854199683042789
Precision (threshold=0.994949494949495) = 0.9855017169019458
Precision (threshold=1.0) = 1.0
###Markdown
**Quiz Question**: Among all the threshold values tried, what is the **smallest** threshold value that achieves a precision of 96.5% or better for the reviews in **baby_reviews**? Round your answer to 3 decimal places. **Quiz Question:** Is this threshold value smaller or larger than the threshold used for the entire dataset to achieve the same specified precision of 96.5%? **Finally**, let's plot the precision-recall curve.
###Code
plot_pr_curve(precision_all, recall_all, "Precision-Recall (Baby)")
###Output
_____no_output_____
###Markdown
Exploring precision and recall The goal of this second notebook is to understand precision-recall in the context of classifiers. * Use Amazon review data in its entirety. * Train a logistic regression model. * Explore various evaluation metrics: accuracy, confusion matrix, precision, recall. * Explore how various metrics can be combined to produce a cost of making an error. * Explore precision and recall curves. Because we are using the full Amazon review dataset (not a subset of words or reviews), in this assignment we return to using Turi Create for its efficiency. As usual, let's start by **firing up Turi Create**. Make sure you have the latest version of Turi Create.
###Code
from __future__ import division
import turicreate as tc
import numpy as np
###Output
_____no_output_____
###Markdown
Load amazon review dataset
###Code
products = tc.SFrame('amazon_baby.sframe/')
###Output
_____no_output_____
###Markdown
Extract word counts and sentiments As in the first assignment of this course, we compute the word counts for individual words and extract positive and negative sentiments from ratings. To summarize, we perform the following: 1. Remove punctuation. 2. Remove reviews with "neutral" sentiment (rating 3). 3. Set reviews with rating 4 or more to be positive and those with 2 or less to be negative.
###Code
import string
def remove_punctuation(text):
try: # python 2.x
text = text.translate(None, string.punctuation)
except: # python 3.x
translator = text.maketrans('', '', string.punctuation)
text = text.translate(translator)
return text
# Remove punctuation.
review_clean = products['review'].apply(remove_punctuation)
# Count words
products['word_count'] = tc.text_analytics.count_words(review_clean)
# Drop neutral sentiment reviews.
products = products[products['rating'] != 3]
# Positive sentiment to +1 and negative sentiment to -1
products['sentiment'] = products['rating'].apply(lambda rating : +1 if rating > 3 else -1)
###Output
_____no_output_____
###Markdown
Now, let's remember what the dataset looks like by taking a quick peek:
###Code
products.head(5)
###Output
_____no_output_____
###Markdown
Split data into training and test sets We split the data into an 80-20 split where 80% is in the training set and 20% is in the test set.
###Code
train_data, test_data = products.random_split(.8, seed=1)
###Output
_____no_output_____
###Markdown
Train a logistic regression classifier We will now train a logistic regression classifier with **sentiment** as the target and **word_count** as the features. We will set `validation_set=None` to make sure everyone gets exactly the same results. Remember, even though we now know how to implement logistic regression, we will use Turi Create for its efficiency at processing this Amazon dataset in its entirety. The focus of this assignment is instead on the topic of precision and recall.
###Code
model = tc.logistic_classifier.create(train_data, target='sentiment', features=['word_count'], validation_set=None)
###Output
_____no_output_____
###Markdown
Model Evaluation We will explore the advanced model evaluation concepts that were discussed in the lectures. Accuracy One performance metric we will use for our more advanced exploration is accuracy, which we have seen many times in past assignments. Recall that the accuracy is given by $$\mbox{accuracy} = \frac{\mbox{ correctly classified data points}}{\mbox{ total data points}}$$ To obtain the accuracy of our trained models using Turi Create, simply pass the option `metric='accuracy'` to the `evaluate` function. We compute the **accuracy** of our logistic regression model on the **test_data** as follows:
###Code
accuracy= model.evaluate(test_data, metric='accuracy')['accuracy']
print("Test Accuracy: %s" % accuracy)
###Output
Test Accuracy: 0.9221862251019919
###Markdown
Baseline: Majority class prediction Recall from an earlier assignment that we used the **majority class classifier** as a baseline (i.e. reference) model for a point of comparison with a more sophisticated classifier. The majority classifier model predicts the majority class for all data points. Typically, a good model should beat the majority class classifier. Since the majority class in this dataset is the positive class (i.e., there are more positive than negative reviews), the accuracy of the majority class classifier can be computed as follows:
###Code
baseline = len(test_data[test_data['sentiment'] == 1])/len(test_data)
print("Baseline accuracy (majority class classifier): %s" % baseline)
###Output
Baseline accuracy (majority class classifier): 0.8427825773938085
###Markdown
**Quiz Question:** Using accuracy as the evaluation metric, was our **logistic regression model** better than the baseline (majority class classifier)? Confusion Matrix The accuracy, while convenient, does not tell the whole story. For a fuller picture, we turn to the **confusion matrix**. In the case of binary classification, the confusion matrix is a 2-by-2 matrix laying out correct and incorrect predictions made in each label as follows:``` +---------------------------------------------+ | Predicted label | +----------------------+----------------------+ | (+1) | (-1) |+-------+-----+----------------------+----------------------+| True |(+1) | # of true positives | # of false negatives || label +-----+----------------------+----------------------+| |(-1) | # of false positives | # of true negatives |+-------+-----+----------------------+----------------------+``` To print out the confusion matrix for a classifier, use `metric='confusion_matrix'`:
###Code
confusion_matrix = model.evaluate(test_data, metric='confusion_matrix')['confusion_matrix']
confusion_matrix
###Output
_____no_output_____
###Markdown
**Quiz Question**: How many predicted values in the **test set** are **false positives**?
###Code
1698 # false positives; consistent with precision = 27199 / (27199 + 1698)
###Output
_____no_output_____
###Markdown
Computing the cost of mistakesPut yourself in the shoes of a manufacturer that sells a baby product on Amazon.com and you want to monitor your product's reviews in order to respond to complaints. Even a few negative reviews may generate a lot of bad publicity about the product. So you don't want to miss any reviews with negative sentiments --- you'd rather put up with false alarms about potentially negative reviews instead of missing negative reviews entirely. In other words, **false positives cost more than false negatives**. (It may be the other way around for other scenarios, but let's stick with the manufacturer's scenario for now.)Suppose you know the costs involved in each kind of mistake: 1. \$100 for each false positive.2. \$1 for each false negative.3. Correctly classified reviews incur no cost.**Quiz Question**: Given the stipulation, what is the cost associated with the logistic regression classifier's performance on the **test set**?
###Code
100*1698 + 1*896
###Output
_____no_output_____
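###Markdown
The same cost can also be computed programmatically. This is a sketch that assumes the confusion matrix SFrame returned by Turi Create has `target_label`, `predicted_label`, and `count` columns:
###Code
fp_count = confusion_matrix[(confusion_matrix['target_label'] == -1) & (confusion_matrix['predicted_label'] == +1)]['count'][0]
fn_count = confusion_matrix[(confusion_matrix['target_label'] == +1) & (confusion_matrix['predicted_label'] == -1)]['count'][0]
# $100 per false positive, $1 per false negative
print(100 * fp_count + 1 * fn_count)
###Output
_____no_output_____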
###Markdown
Precision and Recall You may not have exact dollar amounts for each kind of mistake. Instead, you may simply prefer to reduce the percentage of false positives to be less than, say, 3.5% of all positive predictions. This is where **precision** comes in: $$[\text{precision}] = \frac{[\text{ positive data points with positive predictions}]}{[\text{ all data points with positive predictions}]} = \frac{[\text{ true positives}]}{[\text{ true positives}] + [\text{ false positives}]}$$ So to keep the percentage of false positives below 3.5% of positive predictions, we must raise the precision to 96.5% or higher. **First**, let us compute the precision of the logistic regression classifier on the **test_data**.
###Code
precision = model.evaluate(test_data, metric='precision')['precision']
print("Precision on test data: %s" % precision)
###Output
Precision on test data: 0.941239575042392
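###Markdown
As a cross-check, the same precision can be computed from first principles. This is a sketch, assuming boolean SArrays support element-wise `&` and `.sum()`:
###Code
pred = model.predict(test_data)
tp = ((pred == +1) & (test_data['sentiment'] == +1)).sum()
fp = ((pred == +1) & (test_data['sentiment'] == -1)).sum()
print(tp / (tp + fp))
###Output
_____no_output_____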
###Markdown
**Quiz Question**: Out of all reviews in the **test set** that are predicted to be positive, what fraction of them are **false positives**? (Round to the second decimal place e.g. 0.25)
###Code
pred = model.predict(test_data)
sum(pred[pred != test_data['sentiment']] == 1) / sum(pred == 1)
###Output
_____no_output_____
###Markdown
**Quiz Question:** Based on what we learned in lecture, if we wanted to reduce this fraction of false positives to be below 3.5%, we would (select one): - Discard a sufficient number of positive predictions - Discard a sufficient number of negative predictions - Increase the threshold for predicting the positive class ($\hat{y} = +1$) - Decrease the threshold for predicting the positive class ($\hat{y} = +1$) A complementary metric is **recall**, which measures the ratio between the number of true positives and that of (ground-truth) positive reviews: $$[\text{recall}] = \frac{[\text{ positive data points with positive predictions}]}{[\text{ all positive data points}]} = \frac{[\text{ true positives}]}{[\text{ true positives}] + [\text{ false negatives}]}$$ Let us compute the recall on the **test_data**.
###Code
recall = model.evaluate(test_data, metric='recall')['recall']
print("Recall on test data: %s" % recall)
###Output
Recall on test data: 0.9681082043068162
###Markdown
**Quiz Question**: What fraction of the positive reviews in the **test_set** were correctly predicted as positive by the classifier?
###Code
27199 / (27199 + 896)
###Output
_____no_output_____ |
Sequencing_turning_sentences_into_data.ipynb | ###Markdown
###Code
#importing
import tensorflow as tf
from tensorflow import keras
#getting tokenizer api from tensorflow keras
from tensorflow.keras.preprocessing.text import Tokenizer
#defining the sentences as a Python list of strings (the input we feed to the tokenizer)
sentences = [
"i love my dog",
"i love myself",
"you love my dog",
"do you think my Dog is amazing?"
]
#create an instance of the Tokenizer
tokenizer = Tokenizer(num_words=100,oov_token="<OOV>") #num_words parameter is the max no. of words to keep
#fit the tokenizer on the sentences to build its word index
tokenizer.fit_on_texts(sentences)
#indexing the words
word_index = tokenizer.word_index
print("word_index=",word_index)
#turning sentences containing these words into sequences of numbers (data)
sequences = tokenizer.texts_to_sequences(sentences)
#texts_to_sequences creates a sequence of tokens representing each sentence
print("sequences",sequences)
#when a neural network classifies text, what happens when it encounters words it has not seen before?
#Result: the tokenizer ignored the unknown words,
#"i really love my dog" : 5 ---- 4 , unknown : really
# "my dog loves my house" : 5 --- 3 , unknown : loves, house
test_data = [
"i really love my dog",
"my dog loves my house"
]
#USING THE OOV TOKEN PROPERTY : HANDLING UNKNOWN WORDS
#the tokenizer creates a token for unknown words using "<OOV>" and replaces each unknown word with that token number
#this keeps the sequence length the same as the sentence length
test_seq = tokenizer.texts_to_sequences(test_data)
print("test_seq after oov=",test_seq)
#Handling sentences of Different Length : USING PADDING
from tensorflow.keras.preprocessing.sequence import pad_sequences
padded = pad_sequences(sequences)
print(padded)
padded_post = pad_sequences(sequences, padding="post")
print(padded_post)
padded_post2 = pad_sequences(sequences, padding="post",truncating="post",maxlen=5)
print(padded_post2)
###Output
[[5 2 3 4 0]
[5 2 7 0 0]
[6 2 3 4 0]
[8 6 9 3 4]]
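###Markdown
As a quick sanity check, the token sequences can be mapped back to (lower-cased, punctuation-free) text with the tokenizer's `sequences_to_texts` method. A small sketch using the same `tokenizer` instance:
###Code
#decoding the sequences back into text
print(tokenizer.sequences_to_texts(sequences))
###Output
_____no_output_____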
|
char_cnn.ipynb | ###Markdown
kaggle_quora: char_CNN Competition baseline references: https://www.kaggle.com/shujian/single-rnn-with-4-folds-clr , https://www.kaggle.com/gmhost/gru-capsule , https://github.com/dennybritz/cnn-text-classification-tf
###Code
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load in
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory
import os
print(os.listdir("../input"))
# Any results you write to the current directory are saved as output.
###Output
['sample_submission.csv', 'test.csv', 'train.csv', 'embeddings', 'embeddings.zip']
###Markdown
load package
###Code
import os
import time
import random
import re
from tqdm import tqdm
from IPython.display import display
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn import metrics
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.metrics import f1_score, roc_auc_score
from collections import Counter
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
###Output
Using TensorFlow backend.
###Markdown
global parameters
###Code
data_dir = "../input/"
train_file = os.path.join(data_dir, "train.csv")
test_file = os.path.join(data_dir, "test.csv")
embedding_size = 300
max_len = 50
max_features = 120000
batch_size = 512
use_local_test = True
###Output
_____no_output_____
###Markdown
Data preprocess
###Code
# surround special characters with spaces so they become separate tokens
puncts = [',', '.', '"', ':', ')', '(', '-', '!', '?', '|', ';', "'", '$', '&', '/', '[', ']', '>', '%', '=', '#', '*', '+', '\\', '•', '~', '@', '£',
'·', '_', '{', '}', '©', '^', '®', '`', '<', '→', '°', '€', '™', '›', '♥', '←', '×', '§', '″', '′', 'Â', '█', '½', 'à', '…',
'“', '★', '”', '–', '●', 'â', '►', '−', '¢', '²', '¬', '░', '¶', '↑', '±', '¿', '▾', '═', '¦', '║', '―', '¥', '▓', '—', '‹', '─',
'▒', ':', '¼', '⊕', '▼', '▪', '†', '■', '’', '▀', '¨', '▄', '♫', '☆', 'é', '¯', '♦', '¤', '▲', 'è', '¸', '¾', 'Ã', '⋅', '‘', '∞',
'∙', ')', '↓', '、', '│', '(', '»', ',', '♪', '╩', '╚', '³', '・', '╦', '╣', '╔', '╗', '▬', '❤', 'ï', 'Ø', '¹', '≤', '‡', '√', ]
def clean_text(x):
x = str(x)
for punct in puncts:
if punct in x:
# x = x.replace(punct, f' {punct} ') # this is Python 3.6 f-string syntax
x = x.replace(punct, ' '+punct+' ')
return x
# clean numbers (mask runs of digits)
def clean_numbers(x):
if bool(re.search(r'\d', x)):
x = re.sub('[0-9]{5,}', '#####', x)
x = re.sub('[0-9]{4}', '####', x)
x = re.sub('[0-9]{3}', '###', x)
x = re.sub('[0-9]{2}', '##', x)
return x
# fix common misspellings and expand contractions
mispell_dict = {"aren't" : "are not",
"can't" : "cannot",
"couldn't" : "could not",
"didn't" : "did not",
"doesn't" : "does not",
"don't" : "do not",
"hadn't" : "had not",
"hasn't" : "has not",
"haven't" : "have not",
"he'd" : "he would",
"he'll" : "he will",
"he's" : "he is",
"i'd" : "I would",
"i'd" : "I had",
"i'll" : "I will",
"i'm" : "I am",
"isn't" : "is not",
"it's" : "it is",
"it'll":"it will",
"i've" : "I have",
"let's" : "let us",
"mightn't" : "might not",
"mustn't" : "must not",
"shan't" : "shall not",
"she'd" : "she would",
"she'll" : "she will",
"she's" : "she is",
"shouldn't" : "should not",
"that's" : "that is",
"there's" : "there is",
"they'd" : "they would",
"they'll" : "they will",
"they're" : "they are",
"they've" : "they have",
"we'd" : "we would",
"we're" : "we are",
"weren't" : "were not",
"we've" : "we have",
"what'll" : "what will",
"what're" : "what are",
"what's" : "what is",
"what've" : "what have",
"where's" : "where is",
"who'd" : "who would",
"who'll" : "who will",
"who're" : "who are",
"who's" : "who is",
"who've" : "who have",
"won't" : "will not",
"wouldn't" : "would not",
"you'd" : "you would",
"you'll" : "you will",
"you're" : "you are",
"you've" : "you have",
"'re": " are",
"wasn't": "was not",
"we'll":" will",
"didn't": "did not",
"tryin'":"trying"}
def _get_mispell(mispell_dict):
mispell_re = re.compile('(%s)' % '|'.join(mispell_dict.keys()))
return mispell_dict, mispell_re
mispellings, mispellings_re = _get_mispell(mispell_dict)
def replace_typical_misspell(text):
def replace(match):
return mispellings[match.group(0)]
return mispellings_re.sub(replace, text)
def load_and_prec(use_local_test=True):
train_df = pd.read_csv(train_file)
test_df = pd.read_csv(test_file)
print("Train shape : ",train_df.shape)
print("Test shape : ",test_df.shape)
display(train_df.head())
display(test_df.head())
# lowercase
train_df["question_text"] = train_df["question_text"].str.lower()
test_df["question_text"] = test_df["question_text"].str.lower()
# clean numbers
train_df["question_text"] = train_df["question_text"].apply(lambda x: clean_numbers(x))
test_df["question_text"] = test_df["question_text"].apply(lambda x: clean_numbers(x))
# fix misspellings
train_df["question_text"] = train_df["question_text"].apply(lambda x: replace_typical_misspell(x))
test_df["question_text"] = test_df["question_text"].apply(lambda x: replace_typical_misspell(x))
# clean punctuation
train_df["question_text"] = train_df["question_text"].apply(lambda x: clean_text(x))
test_df["question_text"] = test_df["question_text"].apply(lambda x: clean_text(x))
## fill up the missing values
train_X = train_df["question_text"].fillna("_##_").values
test_X = test_df["question_text"].fillna("_##_").values
## Tokenize the sentences
# note: this tokenizer lower-cases all letters
tokenizer = Tokenizer(num_words=max_features)
tokenizer.fit_on_texts(list(train_X))
train_X = tokenizer.texts_to_sequences(train_X)
test_X = tokenizer.texts_to_sequences(test_X)
## Get the target values
train_Y = train_df['target'].values
print(np.sum(train_Y))
# # drop the 30 most frequent words before padding
# train_cut = []
# test_cut = []
# for x in train_X:
# train_cut.append([i for i in x if i>30])
# for x in test_X:
# test_cut.append([i for i in x if i>30])
# train_X = train_cut
# test_X = test_cut
## Pad the sentences
train_X = pad_sequences(train_X, maxlen=max_len, padding="post", truncating="post")
test_X = pad_sequences(test_X, maxlen=max_len, padding="post", truncating="post")
# # # replace the 40 most common words with the pad index 0
# # train_X = np.where(train_X>=40, train_X, 0)
# # test_X = np.where(test_X>=40, test_X, 0)
#shuffling the data
np.random.seed(2019)
trn_idx = np.random.permutation(len(train_X))
train_X = train_X[trn_idx]
train_Y = train_Y[trn_idx]
# optionally hold out a local test set
if use_local_test:
train_X, local_test_X = (train_X[:-2*len(test_X)], train_X[-2*len(test_X):])
train_Y, local_test_Y = (train_Y[:-2*len(test_X)], train_Y[-2*len(test_X):])
else:
local_test_X = np.zeros(shape=[1,max_len], dtype=np.int32)
local_test_Y = np.zeros(shape=[1], dtype=np.int32)
print(train_X.shape)
print(local_test_X.shape)
print(test_X.shape)
print(len(tokenizer.word_index))
return train_X, test_X, train_Y, local_test_X, local_test_Y, tokenizer.word_index
# load_and_prec()
###Output
_____no_output_____
###Markdown
load embeddings
###Code
def load_glove(word_index):
EMBEDDING_FILE = '../input/embeddings/glove.840B.300d/glove.840B.300d.txt'
def get_coefs(word,*arr): return word, np.asarray(arr, dtype='float32')
embeddings_index = dict(get_coefs(*o.split(" ")) for o in open(EMBEDDING_FILE))
all_embs = np.stack(embeddings_index.values())
emb_mean,emb_std = all_embs.mean(), all_embs.std()
embed_size = all_embs.shape[1]
# word_index = tokenizer.word_index
nb_words = min(max_features, len(word_index))
embedding_matrix = np.random.normal(emb_mean, emb_std, (nb_words, embed_size))
for word, i in word_index.items():
if i >= max_features: continue
embedding_vector = embeddings_index.get(word)
if embedding_vector is not None: embedding_matrix[i] = embedding_vector
return embedding_matrix
def load_fasttext(word_index):
"""
Loading of these embeddings has not been reviewed in detail yet.
"""
EMBEDDING_FILE = '../input/embeddings/wiki-news-300d-1M/wiki-news-300d-1M.vec'
def get_coefs(word,*arr):
return word, np.asarray(arr, dtype='float32')
embeddings_index = dict(get_coefs(*o.split(" ")) for o in open(EMBEDDING_FILE) if len(o)>100)
all_embs = np.stack(embeddings_index.values())
emb_mean,emb_std = all_embs.mean(), all_embs.std()
embed_size = all_embs.shape[1]
# word_index = tokenizer.word_index
nb_words = min(max_features, len(word_index))
embedding_matrix = np.random.normal(emb_mean, emb_std, (nb_words, embed_size))
for word, i in word_index.items():
if i >= max_features: continue
embedding_vector = embeddings_index.get(word)
if embedding_vector is not None: embedding_matrix[i] = embedding_vector
return embedding_matrix
def load_para(word_index):
EMBEDDING_FILE = '../input/embeddings/paragram_300_sl999/paragram_300_sl999.txt'
def get_coefs(word,*arr): return word, np.asarray(arr, dtype='float32')
embeddings_index = dict(get_coefs(*o.split(" ")) for o in open(EMBEDDING_FILE, encoding="utf8", errors='ignore') if len(o)>100 and o.split(" ")[0] in word_index)
all_embs = np.stack(embeddings_index.values())
emb_mean,emb_std = all_embs.mean(), all_embs.std()
embed_size = all_embs.shape[1]
embedding_matrix = np.random.normal(emb_mean, emb_std, (max_features, embed_size))
for word, i in word_index.items():
if i >= max_features: continue
embedding_vector = embeddings_index.get(word)
if embedding_vector is not None: embedding_matrix[i] = embedding_vector
return embedding_matrix
###Output
_____no_output_____
###Markdown
Utils
###Code
from tensorflow.python.framework import ops
from tensorflow.python.ops import math_ops
from tensorflow.python.eager import context
def cyclic_learning_rate(global_step,
learning_rate=0.001,
max_lr=0.004,
step_size=20.,
gamma=0.99994,
mode='triangular',
name=None):
if global_step is None:
raise ValueError("global_step is required for cyclic_learning_rate.")
with ops.name_scope(name, "CyclicLearningRate",
[learning_rate, global_step]) as name:
learning_rate = ops.convert_to_tensor(learning_rate, name="learning_rate")
dtype = learning_rate.dtype
global_step = math_ops.cast(global_step, dtype)
step_size = math_ops.cast(step_size, dtype)
def cyclic_lr():
"""Helper to recompute learning rate; most helpful in eager-mode."""
# computing: cycle = floor( 1 + global_step / ( 2 * step_size ) )
double_step = math_ops.multiply(2., step_size)
global_div_double_step = math_ops.divide(global_step, double_step)
cycle = math_ops.floor(math_ops.add(1., global_div_double_step))
# computing: x = abs( global_step / step_size – 2 * cycle + 1 )
double_cycle = math_ops.multiply(2., cycle)
global_div_step = math_ops.divide(global_step, step_size)
tmp = math_ops.subtract(global_div_step, double_cycle)
x = math_ops.abs(math_ops.add(1., tmp))
# computing: clr = learning_rate + ( max_lr – learning_rate ) * max( 0, 1 - x )
a1 = math_ops.maximum(0., math_ops.subtract(1., x))
a2 = math_ops.subtract(max_lr, learning_rate)
clr = math_ops.multiply(a1, a2)
if mode == 'triangular2':
clr = math_ops.divide(clr, math_ops.cast(math_ops.pow(2, math_ops.cast(
cycle-1, tf.int32)), tf.float32))
if mode == 'exp_range':
clr = math_ops.multiply(math_ops.pow(gamma, global_step), clr)
return math_ops.add(clr, learning_rate, name=name)
if not context.executing_eagerly():
cyclic_lr = cyclic_lr()
return cyclic_lr
# dense layer
def dense(inputs, hidden, use_bias=True,
w_initializer=tf.contrib.layers.xavier_initializer(), b_initializer=tf.constant_initializer(0.1), scope="dense"):
"""
Fully connected (dense) layer.
"""
with tf.variable_scope(scope):
shape = tf.shape(inputs)
dim = inputs.get_shape().as_list()[-1]
out_shape = [shape[idx] for idx in range(
len(inputs.get_shape().as_list()) - 1)] + [hidden]
# if inputs is 3-D, flatten it to 2-D for the matmul
flat_inputs = tf.reshape(inputs, [-1, dim])
W = tf.get_variable("W", [dim, hidden], initializer=w_initializer)
res = tf.matmul(flat_inputs, W)
if use_bias:
b = tf.get_variable("b", [hidden], initializer=b_initializer)
res = tf.nn.bias_add(res, b)
# the output shape is the input shape with the last dimension replaced by hidden
res = tf.reshape(res, out_shape)
return res
# dot-product attention
def dot_attention(inputs, memory, mask, hidden, keep_prob, scope="dot_attention"):
"""
Gated (dot-product) attention layer.
"""
def softmax_mask(val, mask):
return -1e30 * (1 - tf.cast(mask, tf.float32)) + val
with tf.variable_scope(scope):
JX = tf.shape(inputs)[1] # dimension 1 of inputs, i.e. c_maxlen
with tf.variable_scope("attention"):
# inputs_ has shape [batch_size, c_maxlen, hidden]
inputs_ = tf.nn.relu(
dense(inputs, hidden, use_bias=False, scope="inputs"))
memory_ = tf.nn.relu(
dense(memory, hidden, use_bias=False, scope="memory"))
# batched matrix multiplication; the result shape is [batch_size, c_maxlen, q_maxlen]
outputs = tf.matmul(inputs_, tf.transpose(
memory_, [0, 2, 1])) / (hidden ** 0.5)
# tile the mask to the same shape as outputs; could be improved so that both inputs and memory are masked
mask = tf.tile(tf.expand_dims(mask, axis=1), [1, JX, 1])
logits = tf.nn.softmax(softmax_mask(outputs, mask))
outputs = tf.matmul(logits, memory)
# res:[batch_size, c_maxlen, 12*hidden]
res = tf.concat([inputs, outputs], axis=2)
return res
# with tf.variable_scope("gate"):
# """
# attention * gate
# """
# dim = res.get_shape().as_list()[-1]
# d_res = dropout(res, keep_prob=keep_prob, is_train=is_train)
# gate = tf.nn.sigmoid(dense(d_res, dim, use_bias=False))
# return res * gate # element-wise multiplication
# A multi-layer bidirectional RNN class with cuDNN acceleration, supporting both LSTM and GRU.
class cudnn_rnn:
def __init__(self, num_layers, num_units, input_size, neuron="GRU", scope=None):
self.num_layers = num_layers
self.rnns = []
self.scope = scope
self.neuron = neuron
for layer in range(num_layers):
input_size_ = input_size if layer == 0 else 2 * num_units
if self.neuron == "GRU":
rnn_fw = tf.contrib.cudnn_rnn.CudnnGRU(1, num_units, name="f_cudnn_gru")
rnn_bw = tf.contrib.cudnn_rnn.CudnnGRU(1, num_units, name="b_cudnn_gru")
elif self.neuron == "LSTM":
rnn_fw = tf.contrib.cudnn_rnn.CudnnLSTM(1, num_units, name="f_cudnn_lstm")
rnn_bw = tf.contrib.cudnn_rnn.CudnnLSTM(1, num_units, name="b_cudnn_lstm")
else:
raise NameError
self.rnns.append((rnn_fw, rnn_bw, ))
def __call__(self, inputs, seq_len, keep_prob, concat_layers=True):
# cuDNN RNNs expect time-major tensors, so transpose the dimensions
outputs = [tf.transpose(inputs, [1, 0, 2])]
out_states = []
with tf.variable_scope(self.scope):
for layer in range(self.num_layers):
rnn_fw, rnn_bw = self.rnns[layer]
with tf.variable_scope("fw_{}".format(layer)):
if self.neuron == "GRU":
out_fw, (fw_state,) = rnn_fw(outputs[-1])
else:
out_fw, (fw_state,_) = rnn_fw(outputs[-1])
with tf.variable_scope("bw_{}".format(layer)):
inputs_bw = tf.reverse_sequence(outputs[-1], seq_lengths=seq_len, seq_dim=0, batch_dim=1)
if self.neuron == "GRU":
out_bw, (bw_state,) = rnn_bw(outputs[-1])
else:
out_bw, (bw_state,_) = rnn_bw(outputs[-1])
out_bw = tf.reverse_sequence(out_bw, seq_lengths=seq_len, seq_dim=0, batch_dim=1)
outputs.append(tf.concat([out_fw, out_bw], axis=2))
out_states.append(tf.concat([fw_state, bw_state], axis=-1))
if concat_layers:
res = tf.concat(outputs[1:], axis=2)
final_state = tf.squeeze(tf.transpose(tf.concat(out_states, axis=0), [1,0,2]), axis=1)
else:
res = outputs[-1]
final_state = tf.squeeze(out_states[-1], axis=0)
res = tf.transpose(res, [1, 0, 2])
return res, final_state
###Output
_____no_output_____
###Markdown
Models char_CNN
###Code
class model_char_cnn(object):
"""
Classification using a simple bidirectional GRU.
"""
def __init__(self, embedding_matrix, sequence_length=50, num_classes=1,
embedding_size=300, trainable=True):
# Placeholders for input, output and dropout
self.input_x = tf.placeholder(tf.int32, [None, sequence_length], name="input_x")
self.input_y = tf.placeholder(tf.int32, [None], name="input_y")
self.keep_prob = tf.placeholder(tf.float32, name="keep_prob")
# Some variables
self.embedding_matrix = tf.get_variable("embedding_matrix", initializer=tf.constant(
embedding_matrix, dtype=tf.float32), trainable=False)
self.global_step = tf.get_variable('global_step', shape=[], dtype=tf.int32,
initializer=tf.constant_initializer(0), trainable=False)
with tf.name_scope("process"):
self.seq_len = tf.reduce_sum(tf.cast(tf.cast(self.input_x, dtype=tf.bool), dtype=tf.int32), axis=1, name="seq_len")
self.mask = tf.cast(self.input_x, dtype=tf.bool)
# The structure of the model
self.layers(num_classes)
# optimizer
if trainable:
# self.learning_rate = tf.train.exponential_decay(
# learning_rate=0.0015, global_step=self.global_step, decay_steps=1000, decay_rate=0.95)
self.learning_rate = cyclic_learning_rate(
global_step=self.global_step,
step_size=2000)
self.opt = tf.train.AdamOptimizer(learning_rate=self.learning_rate, epsilon=1e-8)
self.train_op = self.opt.minimize(self.loss, global_step=self.global_step)
def layers(self, num_classes):
# Embedding layer
with tf.variable_scope("embedding"):
self.embedding_inputs = tf.nn.embedding_lookup(self.embedding_matrix, self.input_x)
self.embedding_inputs = tf.nn.dropout(self.embedding_inputs, self.keep_prob)
# Bi-RNN Encoder
with tf.variable_scope("Bi-RNN"):
# LSTM
bi_lstm = cudnn_rnn(
num_layers=1, num_units=64, input_size=self.embedding_inputs.get_shape().as_list()[-1], neuron="LSTM", scope="LSTM")
self.lstm_out, _ = bi_lstm(self.embedding_inputs, seq_len=self.seq_len, keep_prob=self.keep_prob)
self.lstm_out = tf.nn.dropout(self.lstm_out, keep_prob=self.keep_prob)
# GRU
bi_gru = cudnn_rnn(num_layers=1, num_units=64, input_size=self.lstm_out.get_shape().as_list()[-1], scope="GRU")
self.gru_out, _ = bi_gru(self.lstm_out, seq_len=self.seq_len, keep_prob=self.keep_prob)
self.gru_out = tf.nn.dropout(self.gru_out, keep_prob=self.keep_prob)
with tf.variable_scope("double_attention"):
lstm_att = dot_attention(
inputs=self.lstm_out, memory=self.lstm_out, mask=self.mask, hidden=128, keep_prob=self.keep_prob, scope="l_att")
gru_att = dot_attention(
inputs=self.gru_out, memory=self.gru_out, mask=self.mask, hidden=128, keep_prob=self.keep_prob, scope="g_att")
# pooling
att_out_lstm = tf.reduce_mean(lstm_att, axis=1) # shape: [batch_size, 256]
att_out_gru = tf.reduce_max(gru_att, axis=1)
self.att_out = tf.concat([att_out_lstm, att_out_gru], axis=1)
with tf.variable_scope("fully_connected"):
"""
Fully connected layers.
"""
fc_1 = dense(inputs=self.att_out, hidden=64, use_bias=True, scope="FC_1")
fc_1 = tf.nn.relu(fc_1)
fc_1_drop = tf.nn.dropout(fc_1, self.keep_prob)
fc_2 = dense(inputs=fc_1_drop, hidden=num_classes, use_bias=True, scope="FC_2")
self.logits = tf.squeeze(fc_2, name="logits")
with tf.variable_scope("sigmoid_and_loss"):
"""
Use a sigmoid with a threshold instead of a softmax for this binary classification.
"""
self.sigmoid = tf.nn.sigmoid(self.logits)
self.loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
logits=self.logits, labels=tf.cast(self.input_y, dtype=tf.float32)))
###Output
_____no_output_____
###Markdown
Training Tools
###Code
# batch generator
def batch_generator(train_X, train_Y, batch_size, is_train=True, seed=1234):
"""
Batch generator:
when is_train is True, reshuffle and restart instead of yielding a short final batch.
"""
data_number = train_X.shape[0]
batch_count = 0
while True:
if batch_count * batch_size + batch_size > data_number:
# handling of the last (partial) batch
if is_train:
# discard the remainder and start over
# shuffle
np.random.seed(seed)
trn_idx = np.random.permutation(data_number)
train_X = train_X[trn_idx]
train_Y = train_Y[trn_idx]
one_batch_X = train_X[0:batch_size]
one_batch_Y = train_Y[0:batch_size]
batch_count = 1
yield one_batch_X, one_batch_Y
else:
one_batch_X = train_X[batch_count * batch_size:data_number]
one_batch_Y = train_Y[batch_count * batch_size:data_number]
batch_count = 0
yield one_batch_X, one_batch_Y
else:
one_batch_X = train_X[batch_count * batch_size:batch_count * batch_size + batch_size]
one_batch_Y = train_Y[batch_count * batch_size:batch_count * batch_size + batch_size]
batch_count += 1
yield one_batch_X, one_batch_Y
# Under-sample the positive class and augment the negative class; augmentation is currently a random shuffle of the words.
def data_augmentation(X, Y, under_sample=100000, aug_num=3):
"""
under_sample: number of examples to drop when under-sampling
aug_num: augmentation multiplier
"""
pos_X = []
neg_X = []
for i in range(X.shape[0]):
if Y[i] == 1:
neg_X.append(list(X[i]))
else:
pos_X.append(list(X[i]))
# under-sample the positive examples
random.shuffle(pos_X)
pos_X = pos_X[:-under_sample]
# augment the positive examples
pos_X_aug = []
for i in range(200000):
aug = []
for x in pos_X[i]:
if x != 0:
aug.append(x)
else:
break
random.shuffle(aug)
aug += [0] * (max_len-len(aug))
pos_X_aug.append(aug)
pos_X.extend(pos_X_aug)
print(len(pos_X))
# augment the negative examples
neg_X_aug = []
for i in range(aug_num):
for neg in neg_X:
aug = []
for x in neg:
if x != 0:
aug.append(x)
else:
break
random.shuffle(aug)
aug += [0] * (max_len-len(aug))
neg_X_aug.append(aug)
neg_X.extend(neg_X_aug)
print(len(neg_X))
pos_Y = np.zeros(shape=[len(pos_X)], dtype=np.int32)
neg_Y = np.ones(shape=[len(neg_X)], dtype=np.int32)
pos_X.extend(neg_X)
X_out = np.array(pos_X, dtype=np.int32)
Y_out = np.append(pos_Y, neg_Y)
print(X_out.shape)
#shuffling the data
np.random.seed(2018)
trn_idx = np.random.permutation(len(X_out))
X_out = X_out[trn_idx]
Y_out = Y_out[trn_idx]
print(X_out.shape)
print(Y_out.shape)
return X_out, Y_out
# search for the best decision threshold
def bestThreshold(y,y_preds):
tmp = [0,0,0] # idx, cur, max
delta = 0
for tmp[0] in tqdm(np.arange(0.1, 0.501, 0.01)):
tmp[1] = metrics.f1_score(y, np.array(y_preds)>tmp[0])
if tmp[1] > tmp[2]:
delta = tmp[0]
tmp[2] = tmp[1]
print('best threshold is {:.4f} with F1 score: {:.4f}'.format(delta, tmp[2]))
return delta , tmp[2]
###Output
_____no_output_____
###Markdown
Seed
###Code
def seed_everything(seed=1234):
random.seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
np.random.seed(seed)
# torch.manual_seed(seed)
# torch.cuda.manual_seed(seed)
# torch.backends.cudnn.deterministic = True
###Output
_____no_output_____
###Markdown
Main part
###Code
# load the data and average the word embeddings
train_X, test_X, train_Y, local_test_X, local_test_Y, word_index = load_and_prec(use_local_test)
seed_everything()
embedding_matrix_1 = load_glove(word_index)
embedding_matrix_2 = load_fasttext(word_index)
embedding_matrix_3 = load_para(word_index)
embedding_matrix = np.mean([embedding_matrix_1, embedding_matrix_2, embedding_matrix_3], axis = 0)
np.shape(embedding_matrix)
# embedding_matrix = np.zeros(shape=[100,300],dtype=np.float32)
# k-fold training, average over cross-validation folds, then test
# random seed
SEED = 6017
# build the cross-validation splits
splits = list(StratifiedKFold(n_splits=5, shuffle=True, random_state=SEED).split(train_X, train_Y))
# test batch
test_batch = batch_generator(test_X, np.zeros(shape=[test_X.shape[0]], dtype=np.int32), batch_size, False)
local_test_batch = batch_generator(local_test_X, local_test_Y, batch_size, False)
# final outputs
train_preds = np.zeros(len(train_X), dtype=np.float32)
test_preds = np.zeros((len(test_X), len(splits)), dtype=np.float32)
test_preds_local = np.zeros((len(local_test_X), len(splits)), dtype=np.float32)
# k-fold training
for i, (train_idx, valid_idx) in enumerate(splits):
if i == 4:
print("fold:{}".format(i+1))
start_time = time.time()
X_train = train_X[train_idx]
Y_train = train_Y[train_idx]
X_val = train_X[valid_idx]
Y_val = train_Y[valid_idx]
# # data augmentation
# X_train, Y_train = data_augmentation(X_train, Y_train)
# print(Y_train[:100])
# print(Y_train[-100:])
# training batch generator
train_batch = batch_generator(X_train, Y_train, batch_size, True, SEED+i)
val_batch = batch_generator(X_val, Y_val, batch_size, False)
# keep the best result
best_val_f1 = 0.0
best_val_loss = 99999.99999
best_val_fold = []
best_test_fold = []
best_local_test_fold = []
# train & validate & test
with tf.Graph().as_default():
sess_config = tf.ConfigProto(allow_soft_placement=True)
sess_config.gpu_options.allow_growth = True
with tf.Session(config=sess_config) as sess:
writer = tf.summary.FileWriter("./log/", sess.graph)
# seed
seed_everything(SEED+i)
tf.set_random_seed(SEED+i)
model = model_char_cnn(embedding_matrix=embedding_matrix, sequence_length=max_len)
KP = 0.7
num_steps = 16000
print_steps = 10000
sess.run(tf.global_variables_initializer())
train_loss_sum = 0.0
for go in range(num_steps):
steps = sess.run(model.global_step) + 1
# training step
train_batch_X, train_batch_Y = next(train_batch)
feed = {model.input_x:train_batch_X, model.input_y:train_batch_Y, model.keep_prob:KP}
loss, train_op = sess.run([model.loss, model.train_op], feed_dict=feed)
train_loss_sum += loss
# validation & test
if steps % 1000 == 0 and steps >= print_steps:
val_predictions = []
val_loss_sum = 0.0
for _ in range(X_val.shape[0] // batch_size + 1):
val_batch_X, val_batch_Y = next(val_batch)
feed_val = {model.input_x:val_batch_X, model.input_y:val_batch_Y, model.keep_prob:1.0}
val_loss, val_sigmoid = sess.run([model.loss, model.sigmoid], feed_dict=feed_val)
val_predictions.extend(val_sigmoid)
val_loss_sum += val_loss
val_loss_sum = val_loss_sum / (X_val.shape[0] // batch_size + 1)
print("steps:{}, train_loss:{:.5f}, val_loss:{:.5f}".format(
steps, float(train_loss_sum / 1000), float(val_loss_sum)))
# write to TensorBoard
train_loss_write = tf.Summary(value=[tf.Summary.Value(tag="model/train_loss", \
simple_value=train_loss_sum / 1000), ])
writer.add_summary(train_loss_write, steps)
val_loss_write = tf.Summary(value=[tf.Summary.Value(tag="model/val_loss", simple_value=val_loss_sum), ])
writer.add_summary(val_loss_write, steps)
writer.flush()
# train loss
train_loss_sum = 0.0
# test, and take the test predictions at the lowest validation loss as the final result
# if val_loss_sum < best_val_loss:
if steps == 16000:
best_val_loss = val_loss_sum
best_val_fold = val_predictions
best_test_fold = []
best_local_test_fold = []
# Kaggle (online) test set
for _ in range(test_X.shape[0] // batch_size + 1):
test_batch_X, _ = next(test_batch)
feed_test = {model.input_x:test_batch_X, model.keep_prob:1.0}
test_sigmoid = sess.run(model.sigmoid, feed_dict=feed_test)
best_test_fold.extend(test_sigmoid)
# local (offline) test set
if use_local_test:
for _ in range(local_test_X.shape[0] // batch_size + 1):
local_test_batch_X, _ = next(local_test_batch)
feed_local_test = {model.input_x:local_test_batch_X, model.keep_prob:1.0}
local_test_sigmoid = sess.run(model.sigmoid, feed_dict=feed_local_test)
best_local_test_fold.extend(local_test_sigmoid)
print("test done!")
# update the stored predictions
train_preds[valid_idx] = np.array(best_val_fold)
test_preds[:, i] = np.array(best_test_fold)
if use_local_test:
test_preds_local[:, i] = np.array(best_local_test_fold)
end_time = time.time()
print("The time of fold {} is: {:.5f}s.".format(i+1, end_time-start_time))
# post-processing and submission
best_threshold, best_f1 = bestThreshold(train_Y, train_preds)
if use_local_test:
print("local_test_f1:{:.5f}".format(metrics.f1_score(local_test_Y, (test_preds_local.mean(axis=1) > best_threshold))))
sub = pd.read_csv('../input/sample_submission.csv')
sub["prediction"] = (test_preds.mean(axis=1) > best_threshold).astype(int)
sub.to_csv("submission.csv", index=False)
pd.DataFrame(test_preds_local).corr()
bt, bf = bestThreshold(local_test_Y, test_preds_local.mean(axis=1))
###Output
20%|█▉ | 8/41 [00:00<00:00, 75.59it/s]/home/yuhaitao/software/Python3/lib/python3.5/site-packages/sklearn/metrics/classification.py:1135: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 due to no predicted samples.
'precision', 'predicted', average, warn_for)
100%|██████████| 41/41 [00:00<00:00, 79.21it/s]
###Markdown
Character-level Language Modeling Overview In character-level language modeling tasks, each sequence is broken into elements by characters. Therefore, in a character-level model, at each time step the model is expected to predict the next character. We evaluate the temporal convolutional network as a character-level language model on the PennTreebank dataset. Settings
###Code
import torch as th
import torch.nn as nn
import observations
import unidecode
from collections import Counter
import time
import math
from tqdm.notebook import tqdm
import torch.nn.functional as F
DATA_ROOT = "/home/densechen/dataset"
BATCH_SIZE = 32
DEVICE = "cuda:0"
DROPOUT = 0.1
EMB_DROPOUT = 0.1
CLIP = 0.15
EPOCHS = 10
KSIZE = 3
LEVELS = 3
LR = 4
OPTIM = "SGD"
NHID = 450
VALID_SEQ_LEN = 320
SEQ_LEN = 400
SEED = 1111
EMSIZE = 100
CHANNEL_SIZES = [NHID] * (LEVELS - 1) + [EMSIZE]
th.manual_seed(SEED)
###Output
_____no_output_____
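###Markdown
Before building the real pipeline, a toy sketch (not part of the code below) of how next-character targets are formed by shifting the input sequence one step:
###Code
# toy example: the model sees "hell" and is trained to predict "ello"
toy_text = "hello"
toy_vocab = sorted(set(toy_text))
toy_idx = {c: i for i, c in enumerate(toy_vocab)}
toy_data = th.tensor([toy_idx[c] for c in toy_text])
toy_inp, toy_target = toy_data[:-1], toy_data[1:]
print(toy_inp.tolist(), toy_target.tolist())
###Output
_____no_output_____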
###Markdown
Data Generation PennTreebank When used as a character-level language corpus, PTB contains 5,059K characters for training, 396K for validation and 446K for testing, with an alphabet size of 50. PennTreebank is a well-studied (but relatively small) language dataset.
###Code
class Dictionary(object):
def __init__(self):
self.char2idx = {}
self.idx2char = []
self.counter = Counter()
def add_word(self, char):
self.counter[char] += 1
def prep_dict(self):
for char in self.counter:
if char not in self.char2idx:
self.idx2char.append(char)
self.char2idx[char] = len(self.idx2char) - 1
def __len__(self):
return len(self.idx2char)
class Corpus(object):
def __init__(self, string):
self.dict = Dictionary()
for c in string:
self.dict.add_word(c)
self.dict.prep_dict()
def date_generator():
file, testfile, valfile = observations.ptb(DATA_ROOT)
file_len, valfile_len, testfile_len = len(file), len(valfile), len(testfile)
corpus = Corpus(file + " " + valfile + " " + testfile)
return file, file_len, valfile, valfile_len, testfile, testfile_len, corpus
def char_tensor(corpus, string):
tensor = th.zeros(len(string)).long()
for i in range(len(string)):
tensor[i] = corpus.dict.char2idx[string[i]]
return tensor.to(DEVICE)
def batchify(data, batch_size):
# the output has size [L x batch size], where L could be a long sequence length.
# work out cleanly we can divide the dataset into batch size parts, i.e. continuous seqs.
nbatch = len(data) // batch_size
# trim off any extra elements that wouldn't cleanly fit (remainders).
data = data.narrow(0, 0, nbatch * batch_size)
# evently, divide the data across the batch size batches.
data = data.view(batch_size, -1).to(DEVICE)
return data
def get_batch(source, start_index):
seq_len = min(SEQ_LEN, source.size(1)-1-start_index)
end_index = start_index + seq_len
inp = source[:, start_index:end_index].contiguous()
target = source[:, start_index+1:end_index+1].contiguous()
return inp, target
print("Producing data...")
file, file_len, valfile, valfile_len, testfile, testfile_len, corpus = date_generator()
n_characters = len(corpus.dict)
train_data = batchify(char_tensor(corpus, file), BATCH_SIZE)
val_data = batchify(char_tensor(corpus, valfile), 1)
test_data = batchify(char_tensor(corpus, testfile), 1)
print(f"Corpus size: {n_characters}")
print("Finished.")
###Output
Producing data...
Corpus size: 49
Finished.
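###Markdown
A quick peek (a sketch) at the encoded training data, mapping the first few indices of the first batch row back to characters through the corpus dictionary:
###Code
print(''.join(corpus.dict.idx2char[i] for i in train_data[0, :40].tolist()))
###Output
_____no_output_____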
###Markdown
Build Model
###Code
from core.tcn import TemporalConvNet
class TCN(nn.Module):
def __init__(self, input_size, output_size, num_channels, kernel_size=2, dropout=0.2, emb_dropout=0.2):
super().__init__()
self.encoder = nn.Embedding(output_size, input_size)
self.tcn = TemporalConvNet(input_size, num_channels, kernel_size=kernel_size, dropout=dropout)
self.decoder = nn.Linear(input_size, output_size)
self.decoder.weight = self.encoder.weight
self.drop = nn.Dropout(emb_dropout)
def forward(self, x):
# input has dimension (N, L_in), and emb has dimension (N, L_in, C_in).
emb = self.drop(self.encoder(x))
y = self.tcn(emb.transpose(1, 2))
o = self.decoder(y.transpose(1, 2))
return o.contiguous()
print("Building model...")
model = TCN(EMSIZE, n_characters, CHANNEL_SIZES, KSIZE, DROPOUT, EMB_DROPOUT)
model = model.to(DEVICE)
optimizer = getattr(th.optim, OPTIM)(model.parameters(), lr=LR)
print("Finished.")
###Output
Building model...
Finished.
###Markdown
Run
###Code
def evaluate(source):
model.eval()
total_loss = 0
source_len = source.size(1)
count = 0
with th.no_grad():
for batch, i in enumerate(range(0, source_len - 1, VALID_SEQ_LEN)):
if i + SEQ_LEN - VALID_SEQ_LEN >= source_len:
continue
inp, target = get_batch(source, i)
output = model(inp)
eff_history = SEQ_LEN - VALID_SEQ_LEN
final_output = output[:, eff_history:].contiguous().view(-1, n_characters)
final_target = target[:, eff_history:].contiguous().view(-1)
loss = F.cross_entropy(final_output, final_target)
total_loss += loss.data * final_output.size(0)
count += final_output.size(0)
val_loss = total_loss.item() / count * 1.0
return val_loss
def train(ep):
model.train()
total_loss = 0
source = train_data
source_len = source.size(1)
process = tqdm(range(0, source_len - 1, VALID_SEQ_LEN))
for i in process:
if i + SEQ_LEN - VALID_SEQ_LEN >= source_len:
continue
inp, target = get_batch(source, i)
optimizer.zero_grad()
output = model(inp)
eff_history = SEQ_LEN - VALID_SEQ_LEN
final_output = output[:, eff_history:].contiguous().view(-1, n_characters)
final_target = target[:, eff_history:].contiguous().view(-1)
loss = F.cross_entropy(final_output, final_target)
loss.backward()
if CLIP > 0:
th.nn.utils.clip_grad_norm_(model.parameters(), CLIP)
optimizer.step()
process.set_description(f"Train Epcoh: {ep}, loss: {loss.item():.4f}")
for epoch in range(1, EPOCHS + 1):
train(epoch)
vloss = evaluate(val_data)
print('-' * 89)
print(f'| End of epoch {epoch:3d} | valid loss {vloss:5.3f}')
test_loss = evaluate(test_data)
print('=' * 89)
print(f'| End of epoch {epoch:3d} | test loss {test_loss:5.3f}')
print('=' * 89)
###Output
_____no_output_____ |
jupyter/.ipynb_checkpoints/orders-checkpoint.ipynb | ###Markdown
[index](./index.ipynb) | [accounts](./accounts.ipynb) | [orders](./orders.ipynb) | [trades](./trades.ipynb) | [positions](./positions.ipynb) | [historical](./historical.ipynb) | [streams](./streams.ipynb) | [errors](./exceptions.ipynb) Orders This notebook provides an example of + a MarketOrder + a simplified way for a MarketOrder by using contrib.requests.MarketOrderRequest + a LimitOrder with an expiry datetime by using *GTD* and contrib.requests.LimitOrderRequest + canceling a GTD order. Create a marketorder request with a TakeProfit and a StopLoss order when it gets filled.
###Code
import json
import oandapyV20
import oandapyV20.endpoints.orders as orders
from authenticate import Authenticate as auth
accountID, access_token = auth('Demo', 'Primary')
client = oandapyV20.API(access_token=access_token)
# create a market order to enter a LONG position 10000 EUR_USD, stopLoss @1.07 takeProfit @1.10 ( current: 1.055)
# according to the docs at developer.oanda.com the requestbody looks like:
mktOrder = {
"order": {
"timeInForce": "FOK", # Fill-or-kill
"instrument": "EUR_USD",
"positionFill": "DEFAULT",
"type": "MARKET",
"units": 10000, # as integer
"takeProfitOnFill": {
"timeInForce": "GTC", # Good-till-cancelled
"price": 1.10 # as float
},
"stopLossOnFill": {
"timeInForce": "GTC",
"price": "1.07" # as string
}
}
}
r = orders.OrderCreate(accountID=accountID, data=mktOrder)
print("Request: ", r)
print("MarketOrder specs: ", json.dumps(mktOrder, indent=2))
###Output
Request: v3/accounts/101-004-1435156-001/orders
MarketOrder specs: {
"order": {
"timeInForce": "FOK",
"instrument": "EUR_USD",
"stopLossOnFill": {
"timeInForce": "GTC",
"price": "1.07"
},
"positionFill": "DEFAULT",
"units": 10000,
"takeProfitOnFill": {
"timeInForce": "GTC",
"price": 1.1
},
"type": "MARKET"
}
}
###Markdown
Well that looks fine, but constructing order bodies that way is not really what we want. Types are not checked, for instance, and all the defaults need to be supplied. Data structures like this can become complex, are not easy to read or construct, and are prone to errors. Types and definitions Oanda uses several *types* and *definitions* throughout their documentation. These types are covered by the *oandapyV20.types* package and the definitions by the *oandapyV20.definitions* package. Contrib.requests The *oandapyV20.contrib.requests* package offers classes providing an easy way to construct the data for the *data* parameter of the *OrderCreate* endpoint or the *TradeCRCDO* (Create/Replace/Cancel Dependent Orders). The *oandapyV20.contrib.requests* package makes use of the *oandapyV20.types* and *oandapyV20.definitions*. Let's improve the previous example by making use of *oandapyV20.contrib.requests*:
###Code
import json
import oandapyV20
import oandapyV20.endpoints.orders as orders
from oandapyV20.contrib.requests import (
MarketOrderRequest,
TakeProfitDetails,
StopLossDetails)
from authenticate import Authenticate as auth
accountID, access_token = auth('Demo', 'Primary')
client = oandapyV20.API(access_token=access_token)
# create a market order to enter a LONG position of 1 unit EUR_USD
mktOrder = MarketOrderRequest(instrument="EUR_USD", units=1).data
mktsetup = orders.OrderCreate(accountID=accountID, data=mktOrder)
place = client.request(mktsetup)
print(json.dumps(place, indent=2))
###Output
{
"orderCreateTransaction": {
"id": "720",
"accountID": "101-001-17385496-001",
"userID": 17385496,
"batchID": "720",
"requestID": "78894705159039853",
"time": "2021-08-27T19:58:38.690117626Z",
"type": "MARKET_ORDER",
"instrument": "EUR_USD",
"units": "1",
"timeInForce": "FOK",
"positionFill": "DEFAULT",
"reason": "CLIENT_ORDER"
},
"orderFillTransaction": {
"id": "721",
"accountID": "101-001-17385496-001",
"userID": 17385496,
"batchID": "720",
"requestID": "78894705159039853",
"time": "2021-08-27T19:58:38.690117626Z",
"type": "ORDER_FILL",
"orderID": "720",
"instrument": "EUR_USD",
"units": "1",
"requestedUnits": "1",
"price": "1.17964",
"pl": "0.0000",
"quotePL": "0",
"financing": "0.0000",
"baseFinancing": "0",
"commission": "0.0000",
"accountBalance": "99009.1946",
"gainQuoteHomeConversionFactor": "1",
"lossQuoteHomeConversionFactor": "1",
"guaranteedExecutionFee": "0.0000",
"quoteGuaranteedExecutionFee": "0",
"halfSpreadCost": "0.0001",
"fullVWAP": "1.17964",
"reason": "MARKET_ORDER",
"tradeOpened": {
"price": "1.17964",
"tradeID": "721",
"units": "1",
"guaranteedExecutionFee": "0.0000",
"quoteGuaranteedExecutionFee": "0",
"halfSpreadCost": "0.0001",
"initialMarginRequired": "0.0236"
},
"fullPrice": {
"closeoutBid": "1.17950",
"closeoutAsk": "1.17964",
"timestamp": "2021-08-27T19:58:36.337414670Z",
"bids": [
{
"price": "1.17950",
"liquidity": "10000000"
}
],
"asks": [
{
"price": "1.17964",
"liquidity": "10000000"
}
]
},
"homeConversionFactors": {
"gainQuoteHome": {
"factor": "1"
},
"lossQuoteHome": {
"factor": "1"
},
"gainBaseHome": {
"factor": "1.17367215"
},
"lossBaseHome": {
"factor": "1.18546785"
}
}
},
"relatedTransactionIDs": [
"720",
"721"
],
"lastTransactionID": "721"
}
###Markdown
As you can see, the specs contain price values that were converted to strings and the defaults *positionFill* and *timeInForce* were added. Using *contrib.requests* makes it very easy to construct the orderdata body for order requests. Parameters for those requests are also validated. Next step, place the order: rv = client.request(r); print("Response: {}\n{}".format(r.status_code, json.dumps(rv, indent=2))) Let's analyze that. We see an *orderCancelTransaction* and *reason* **STOP_LOSS_ON_FILL_LOSS**. So the order was not placed? Well, it was placed and cancelled right away. The market price of EUR_USD is at the moment of this writing 1.058, so the stopLoss order at 1.07 makes no sense. The status_code of 201 is as the specs say: http://developer.oanda.com/rest-live-v20/order-ep/ . Let's change the stopLoss level to below the current price and place the order once again.
###Code
mktOrder = MarketOrderRequest(instrument="EUR_USD",
units=10000,
takeProfitOnFill=TakeProfitDetails(price=1.10).data,
stopLossOnFill=StopLossDetails(price=1.05).data
).data
r = orders.OrderCreate(accountID=accountID, data=mktOrder)
rv = client.request(r)
print("Response: {}\n{}".format(r.status_code, json.dumps(rv, indent=2)))
###Output
Response: 201
{
"orderFillTransaction": {
"accountBalance": "102107.4442",
"instrument": "EUR_USD",
"batchID": "7578",
"pl": "0.0000",
"accountID": "101-004-1435156-001",
"units": "10000",
"tradeOpened": {
"tradeID": "7579",
"units": "10000"
},
"financing": "0.0000",
"price": "1.05563",
"userID": 1435156,
"orderID": "7578",
"time": "2017-03-09T13:22:13.832587780Z",
"id": "7579",
"type": "ORDER_FILL",
"reason": "MARKET_ORDER"
},
"lastTransactionID": "7581",
"orderCreateTransaction": {
"timeInForce": "FOK",
"instrument": "EUR_USD",
"batchID": "7578",
"accountID": "101-004-1435156-001",
"units": "10000",
"takeProfitOnFill": {
"timeInForce": "GTC",
"price": "1.10000"
},
"time": "2017-03-09T13:22:13.832587780Z",
"userID": 1435156,
"positionFill": "DEFAULT",
"id": "7578",
"type": "MARKET_ORDER",
"stopLossOnFill": {
"timeInForce": "GTC",
"price": "1.05000"
},
"reason": "CLIENT_ORDER"
},
"relatedTransactionIDs": [
"7578",
"7579",
"7580",
"7581"
]
}
###Markdown
We now see an *orderFillTransaction* for 10000 units of EUR_USD with *reason* **MARKET_ORDER**. Let's retrieve the orders. We should see the *stopLoss* and *takeProfit* orders as *pending*:
###Code
r = orders.OrdersPending(accountID=accountID)
rv = client.request(r)
print("Response:\n", json.dumps(rv, indent=2))
###Output
Response:
{
"lastTransactionID": "7581",
"orders": [
{
"createTime": "2017-03-09T13:22:13.832587780Z",
"triggerCondition": "TRIGGER_DEFAULT",
"timeInForce": "GTC",
"price": "1.05000",
"tradeID": "7579",
"id": "7581",
"state": "PENDING",
"type": "STOP_LOSS"
},
{
"createTime": "2017-03-09T13:22:13.832587780Z",
"triggerCondition": "TRIGGER_DEFAULT",
"timeInForce": "GTC",
"price": "1.10000",
"tradeID": "7579",
"id": "7580",
"state": "PENDING",
"type": "TAKE_PROFIT"
},
{
"createTime": "2017-03-09T11:45:48.928448770Z",
"triggerCondition": "TRIGGER_DEFAULT",
"timeInForce": "GTC",
"price": "1.05000",
"tradeID": "7572",
"id": "7574",
"state": "PENDING",
"type": "STOP_LOSS"
},
{
"createTime": "2017-03-07T09:18:51.563637768Z",
"triggerCondition": "TRIGGER_DEFAULT",
"timeInForce": "GTC",
"price": "1.05000",
"tradeID": "7562",
"id": "7564",
"state": "PENDING",
"type": "STOP_LOSS"
},
{
"createTime": "2017-03-07T09:08:04.219010730Z",
"triggerCondition": "TRIGGER_DEFAULT",
"timeInForce": "GTC",
"price": "1.05000",
"tradeID": "7558",
"id": "7560",
"state": "PENDING",
"type": "STOP_LOSS"
}
]
}
###Markdown
Depending on the state of your account you should see at least the orders associated with the previously executed market order. The *relatedTransactionIDs* should be in the *orders* output of OrdersPending(). Now let's cancel all pending TAKE_PROFIT orders:
###Code
r = orders.OrdersPending(accountID=accountID)
rv = client.request(r)
idsToCancel = [order.get('id') for order in rv['orders'] if order.get('type') == "TAKE_PROFIT"]
for orderID in idsToCancel:
r = orders.OrderCancel(accountID=accountID, orderID=orderID)
rv = client.request(r)
print("Request: {} ... response: {}".format(r, json.dumps(rv, indent=2)))
###Output
Request: v3/accounts/101-004-1435156-001/orders/7580/cancel ... response: {
"orderCancelTransaction": {
"time": "2017-03-09T13:26:07.480994423Z",
"userID": 1435156,
"batchID": "7582",
"orderID": "7580",
"id": "7582",
"type": "ORDER_CANCEL",
"accountID": "101-004-1435156-001",
"reason": "CLIENT_REQUEST"
},
"lastTransactionID": "7582",
"relatedTransactionIDs": [
"7582"
]
}
###Markdown
Create a LimitOrder with a *GTD* ("good-til-date") time in force. Create a LimitOrder and let it expire at *2018-07-02T00:00:00* using *GTD*. Make sure that time is in the future when you run this example!
###Code
from oandapyV20.contrib.requests import LimitOrderRequest
# make sure GTD_TIME is in the future
# also make sure the price condition is not met
# and specify GTD_TIME as UTC or local
# GTD_TIME="2018-07-02T00:00:00Z" # UTC
GTD_TIME="2018-07-02T00:00:00"
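# Optional sketch (standard library only): derive a GTD time ~30 days ahead instead of
# hard-coding a date; uncomment the reassignment to actually use it for the order below.
from datetime import datetime, timedelta
GTD_FUTURE = (datetime.utcnow() + timedelta(days=30)).strftime("%Y-%m-%dT%H:%M:%SZ")
# GTD_TIME = GTD_FUTURE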
ordr = LimitOrderRequest(instrument="EUR_USD",
units=10000,
timeInForce="GTD",
gtdTime=GTD_TIME,
price=1.08)
print(json.dumps(ordr.data, indent=4))
r = orders.OrderCreate(accountID=accountID, data=ordr.data)
rv = client.request(r)
print(json.dumps(rv, indent=2))
###Output
{
"order": {
"price": "1.08000",
"timeInForce": "GTD",
"positionFill": "DEFAULT",
"type": "LIMIT",
"instrument": "EUR_USD",
"gtdTime": "2018-07-02T00:00:00",
"units": "10000"
}
}
{
"relatedTransactionIDs": [
"8923"
],
"lastTransactionID": "8923",
"orderCreateTransaction": {
"price": "1.08000",
"triggerCondition": "DEFAULT",
"positionFill": "DEFAULT",
"type": "LIMIT_ORDER",
"requestID": "42440345970496965",
"partialFill": "DEFAULT",
"gtdTime": "2018-07-02T04:00:00.000000000Z",
"batchID": "8923",
"id": "8923",
"userID": 1435156,
"accountID": "101-004-1435156-001",
"timeInForce": "GTD",
"reason": "CLIENT_ORDER",
"instrument": "EUR_USD",
"time": "2018-06-10T12:06:30.259079220Z",
"units": "10000"
}
}
###Markdown
Request the pending orders
###Code
r = orders.OrdersPending(accountID=accountID)
rv = client.request(r)
print(json.dumps(rv, indent=2))
###Output
{
"orders": [
{
"price": "1.08000",
"triggerCondition": "DEFAULT",
"state": "PENDING",
"positionFill": "DEFAULT",
"partialFill": "DEFAULT_FILL",
"gtdTime": "2018-07-02T04:00:00.000000000Z",
"id": "8923",
"timeInForce": "GTD",
"type": "LIMIT",
"instrument": "EUR_USD",
"createTime": "2018-06-10T12:06:30.259079220Z",
"units": "10000"
}
],
"lastTransactionID": "8923"
}
###Markdown
Cancel the GTD order. Fetch the *orderID* from the *pending orders* and cancel the order.
###Code
r = orders.OrderCancel(accountID=accountID, orderID=8923)
rv = client.request(r)
print(json.dumps(rv, indent=2))
###Output
{
"relatedTransactionIDs": [
"8924"
],
"orderCancelTransaction": {
"accountID": "101-004-1435156-001",
"time": "2018-06-10T12:07:35.453416669Z",
"orderID": "8923",
"reason": "CLIENT_REQUEST",
"requestID": "42440346243149289",
"type": "ORDER_CANCEL",
"batchID": "8924",
"id": "8924",
"userID": 1435156
},
"lastTransactionID": "8924"
}
###Markdown
Request pending orders once again ... order 8923 should be gone
###Code
r = orders.OrdersPending(accountID=accountID)
rv = client.request(r)
print(json.dumps(rv, indent=2))
###Output
{
"orders": [],
"lastTransactionID": "8924"
}
|
01_InpaintingImageWang/04_InvestigateProblemsWithLargerImages.ipynb | ###Markdown
Investigate Problems with Larger Images

Typically when we increase the input size of images, our neural networks perform better. Here are our current results:

| Size (px) | Epochs | URL | Accuracy | Runs |
|--|--|--|--|--|
|128|5|[Inpainting](https://github.com/JoshVarty/SelfSupervisedLearning/blob/7d292979ae4bbf8422e710b5aeabc5131d0f83a0/01_InpaintingImageWang/03_ImageWang_Leadboard_128.ipynb)|40.87%|5|
|128|20|[Inpainting](https://github.com/JoshVarty/SelfSupervisedLearning/blob/7d292979ae4bbf8422e710b5aeabc5131d0f83a0/01_InpaintingImageWang/03_ImageWang_Leadboard_128.ipynb)|61.15%|3|
|128|80|[Inpainting](https://github.com/JoshVarty/SelfSupervisedLearning/blob/7d292979ae4bbf8422e710b5aeabc5131d0f83a0/01_InpaintingImageWang/03_ImageWang_Leadboard_128.ipynb)|62.18%|1|
|128|200|[Inpainting](https://github.com/JoshVarty/SelfSupervisedLearning/blob/7d292979ae4bbf8422e710b5aeabc5131d0f83a0/01_InpaintingImageWang/03_ImageWang_Leadboard_128.ipynb)|62.03%|1|

| Size (px) | Epochs | URL | Accuracy | Runs |
|--|--|--|--|--|
|192|5|[Inpainting](https://github.com/JoshVarty/SelfSupervisedLearning/blob/34ab526d39b31f976bc821a4c0924db613c2f7f5/01_InpaintingImageWang/03_ImageWang_Leadboard_192.ipynb)|39.33%|5|
|192|20|[Inpainting](https://github.com/JoshVarty/SelfSupervisedLearning/blob/34ab526d39b31f976bc821a4c0924db613c2f7f5/01_InpaintingImageWang/03_ImageWang_Leadboard_192.ipynb)|64.62%|3|
|192|80|[Inpainting](https://github.com/JoshVarty/SelfSupervisedLearning/blob/34ab526d39b31f976bc821a4c0924db613c2f7f5/01_InpaintingImageWang/03_ImageWang_Leadboard_192.ipynb)|66.76%|1|
|192|200|[Inpainting](https://github.com/JoshVarty/SelfSupervisedLearning/blob/34ab526d39b31f976bc821a4c0924db613c2f7f5/01_InpaintingImageWang/03_ImageWang_Leadboard_192.ipynb)|67.12%|1|

| Size (px) | Epochs | URL | Accuracy | Runs |
|--|--|--|--|--|
|256|5|[Inpainting]()|19.88%|5|
|256|20|[Inpainting]()|47.26%|3|
|256|80|[Inpainting]()|63.55%|1|
|256|200|[Inpainting]()|67.47%|1|

Notice that accuracy decreases when dealing with `256x256` images run for 5, 20 and 80 epochs.

Visualize Activations in the Pretext Task
###Code
import json
import torch
import numpy as np
from functools import partial
from fastai2.callback.hook import HookCallback, ActivationStats
from fastai2.layers import Mish, MaxPool, LabelSmoothingCrossEntropy, flatten_model
from fastai2.learner import Learner
from fastai2.metrics import accuracy, top_k_accuracy
from fastai2.basics import DataBlock, RandomSplitter, GrandparentSplitter, CategoryBlock
from fastai2.optimizer import ranger, Adam, SGD, RMSProp
from fastai2.vision.all import *
from fastai2.data.transforms import Normalize, parent_label
from fastai2.data.external import download_url, URLs, untar_data
from fastcore.utils import num_cpus
from torch.nn import MSELoss
from torchvision.models import resnet34
# We create this dummy class in order to create a transform that ONLY operates on images of this type
# We will use it to create all input images
class PILImageInput(PILImage): pass
class RandomCutout(RandTransform):
    "Randomly cuts out between `min_n_holes` and `max_n_holes` rectangular regions of an image (cutout augmentation)"
split_idx = None
def __init__(self, min_n_holes=5, max_n_holes=10, min_length=5, max_length=50, **kwargs):
super().__init__(**kwargs)
self.min_n_holes=min_n_holes
self.max_n_holes=max_n_holes
self.min_length=min_length
self.max_length=max_length
def encodes(self, x:PILImageInput):
"""
Note that we're accepting our dummy PILImageInput class
fastai2 will only pass images of this type to our encoder.
This means that our transform will only be applied to input images and won't
be run against output images.
"""
n_holes = np.random.randint(self.min_n_holes, self.max_n_holes)
pixels = np.array(x) # Convert to mutable numpy array. FeelsBadMan
h,w = pixels.shape[:2]
for n in range(n_holes):
h_length = np.random.randint(self.min_length, self.max_length)
w_length = np.random.randint(self.min_length, self.max_length)
h_y = np.random.randint(0, h)
h_x = np.random.randint(0, w)
y1 = int(np.clip(h_y - h_length / 2, 0, h))
y2 = int(np.clip(h_y + h_length / 2, 0, h))
x1 = int(np.clip(h_x - w_length / 2, 0, w))
x2 = int(np.clip(h_x + w_length / 2, 0, w))
pixels[y1:y2, x1:x2, :] = 0
return Image.fromarray(pixels, mode='RGB')
# Default parameters
gpu=None
lr=1e-2
size=256
sqrmom=0.99
mom=0.9
eps=1e-6
epochs=15
bs=64
mixup=0.
opt='ranger'
arch='xresnet50'
sh=0.
sa=0
sym=0
beta=0.
act_fn='Mish'
fp16=0
pool='AvgPool'
dump=0
runs=1
meta=''
# Chosen parameters
lr=8e-3
sqrmom=0.99
mom=0.95
eps=1e-6
bs=64
opt='ranger'
sa=1
fp16=0 #NOTE: My GPU cannot run fp16 :'(
arch='xresnet50'
pool='MaxPool'
gpu=0
# NOTE: Normally loaded from their corresponding string
m = xresnet34
act_fn = Mish
pool = MaxPool
# Use the Ranger optimizer
opt_func = partial(ranger, mom=mom, sqr_mom=sqrmom, eps=eps, beta=beta)
def get_dbunch(size, bs, sh=0., workers=None):
if size<=224:
path = URLs.IMAGEWANG_160
else:
path = URLs.IMAGEWANG
source = untar_data(path)
if workers is None: workers = min(8, num_cpus())
#CHANGE: Input is ImageBlock(cls=PILImageInput)
#CHANGE: Output is ImageBlock
#CHANGE: Splitter is RandomSplitter (instead of on /val folder)
item_tfms=[RandomResizedCrop(size, min_scale=0.35), FlipItem(0.5), RandomCutout]
batch_tfms=RandomErasing(p=0.9, max_count=3, sh=sh) if sh else None
dblock = DataBlock(blocks=(ImageBlock(cls=PILImageInput), ImageBlock),
splitter=GrandparentSplitter(valid_name='val'),
get_items=get_image_files,
get_y=lambda o: o,
item_tfms=item_tfms,
batch_tfms=batch_tfms)
return dblock.dataloaders(source, path=source, bs=bs, num_workers=workers)
if gpu is not None: torch.cuda.set_device(gpu)
if opt=='adam' : opt_func = partial(Adam, mom=mom, sqr_mom=sqrmom, eps=eps)
elif opt=='rms' : opt_func = partial(RMSProp, sqr_mom=sqrmom)
elif opt=='sgd' : opt_func = partial(SGD, mom=mom)
elif opt=='ranger': opt_func = partial(ranger, mom=mom, sqr_mom=sqrmom, eps=eps, beta=beta)
size = 160
#CHANGE: I can only fit ~32 images in a batch
bs = 32
dbunch = get_dbunch(size, bs, sh=sh)
#CHANGE: We're predicting pixel values, so we're just going to predict an output for each RGB channel
dbunch.vocab = ['R', 'G', 'B']
if not gpu: print(f'lr: {lr}; size: {size}; sqrmom: {sqrmom}; mom: {mom}; eps: {eps}')
dbunch.show_batch()
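# Quick sanity-check sketch: for this inpainting pretext task, both the input and the
# target of a batch should be image tensors of the same spatial size.
xb, yb = dbunch.one_batch()
print(xb.shape, yb.shape)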
#NOTE: We are using MSELoss and a vanilla xresnet backbone (m = xresnet34 above, even though arch is set to 'xresnet50')
learn = unet_learner(dbunch, m, opt_func=opt_func, metrics=[], loss_func=MSELoss())
# Hook activations
conv1 = learn.model[0][2]
conv2_x = learn.model[0][4]
conv3_x = learn.model[0][5]
conv4_x = learn.model[0][6]
conv5_x = learn.model[0][7]
hook = ActivationStats(every=15, with_hist=True, modules=[conv1, conv2_x, conv3_x, conv4_x, conv5_x])
if dump: print(learn.model); exit()
if fp16: learn = learn.to_fp16()
cbs = [hook]
learn.fit_flat_cos(epochs, lr, wd=1e-2, cbs=cbs)
hook.plot_hist(0)
hook.plot_hist(1)
hook.plot_hist(2)
hook.plot_hist(3)
hook.plot_hist(4)
###Output
_____no_output_____ |
aula/imersao_de_dados_alura.ipynb | ###Markdown
**LESSON 01 - ALURA DATA IMMERSION**
###Code
import pandas as pd # library created for the Python language for data manipulation and analysis
url_dados = 'https://github.com/alura-cursos/imersaodados3/blob/main/dados/dados_experimentos.zip?raw=true'
dados = pd.read_csv(url_dados, compression = 'zip') # we use compression to extract the file from the zip
print(dados.head()) # displays only the first 5 rows of the dataset
print(dados.shape) # tells us how many rows and columns the dataset has
dados['tratamento'] # we can see the data in the "tratamento" (treatment) column
dados['tratamento'].unique() # we can see the unique elements of the column, equivalent to a SELECT DISTINCT
dados['tempo'].unique()
dados['dose'].unique()
dados['droga'].unique()
dados['g-0'].unique()
dados['tratamento'].value_counts() # counts how many times each value occurs
dados['dose'].value_counts()
dados['dose'].value_counts(normalize = True)
dados['tratamento'].value_counts().plot.pie() # WE USE PLOT TO DRAW A CHART; THE SECOND PART (PIE) IS THE TYPE OF CHART WE WANT
dados['tempo'].value_counts().plot.pie()
dados['tempo'].value_counts().plot.bar() # HERE WE USE BAR TO CREATE A BAR CHART
dados_filtrados = dados[dados['g-0'] > 0] # FILTER
dados_filtrados.head() # DISPLAYING ONLY THE FIRST 5 ROWS OF THE FILTERED DATA
###Output
_____no_output_____
###Markdown
CHALLENGE 01: Investigate why the treatment class is so unbalanced
###Code
dados_com_droga = dados.query("tratamento == 'com_droga'") # SOLUTION TO CHALLENGE 01
dados_com_droga['droga'].nunique() # WE CAN SEE THAT 3288 TYPES OF DRUG WERE USED; WE USE NUNIQUE TO RETURN THE NUMBER OF UNIQUE ELEMENTS IN THE DATASET
dados_com_controle = dados.query("tratamento == 'com_controle'") # SOLUTION TO CHALLENGE 01
dados_com_controle['droga'].nunique() # WE CAN SEE THAT ONLY 1 TYPE OF DRUG WAS USED, WHICH IS WHY THE CLASSES ARE SO UNBALANCED
###Output
_____no_output_____
###Markdown
CHALLENGE 02: Plot the last 5 rows of the table
###Code
dados.tail() # SOLUTION TO CHALLENGE 02
###Output
_____no_output_____
###Markdown
CHALLENGE 03: Calculate the proportion of the treatment classes
###Code
dados['tratamento'].value_counts(normalize = True) # SOLUTION TO CHALLENGE 03
###Output
_____no_output_____
###Markdown
CHALLENGE 04: How many types of drugs were investigated?
###Code
dados['droga'].nunique() # 3289 TYPES OF DRUGS WERE INVESTIGATED
###Output
_____no_output_____
###Markdown
CHALLENGE 05: Look up the query method in the pandas documentation
###Code
dados_com_droga = dados.query("tratamento == 'com_droga'") # THE QUERY METHOD LETS US USE EXPRESSIONS TO HELP FILTER THE DATA
dados_com_droga['droga'].nunique()
###Output
_____no_output_____
###Markdown
CHALLENGE 06: Rename the columns, removing the hyphen
###Code
# SOLUTION TO CHALLENGE 06
colunas = dados.columns
renomeando = [name.replace('-', '') for name in colunas]
dados.columns = renomeando
dados
###Output
_____no_output_____
###Markdown
CHALLENGE 07: Make the charts look nice (Matplotlib.pyplot)
###Code
import matplotlib.pyplot as plt # IMPORTING THE MATPLOTLIB.PYPLOT LIBRARY
tratamento = dados['tratamento'].value_counts()
label = ['com droga', 'com controle']
explode = [0, 0.3]
size = [50, 60]
colors = ['b', 'y']
plt.title('Classes tratamento')
plt.pie(tratamento, labels=label, explode=explode, colors = colors, autopct='%1.1F%%')
plt.legend(bbox_to_anchor=(1,1))
plt.show()
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure(figsize = (10,6))
ax = dados['tempo'].value_counts().plot.bar(color=['r', 'g', 'b'])
plt.title("Doses ministradas por período")
plt.xlabel("Intervalo de horas",)
plt.ylabel("Quantidade de doses")
plt.xticks(rotation = 0)
###Output
_____no_output_____
###Markdown
CHALLENGE 08: Summarize what you learned from the data. LESSON 02 - DATA IMMERSION
###Code
mapa = {'droga': 'composto'} # SELECTING THE COLUMN WE WANT TO RENAME
dados.rename(columns=mapa, inplace=True) # DOING THE RENAME; INPLACE=TRUE RENAMES THE COLUMN IN THE DATAFRAME ITSELF
dados.head()
cod_compostos = dados['composto'].value_counts().index[0:5] # TAKING THE TOP 5 COMPOUNDS AND SAVING THEM IN A VARIABLE
cod_compostos
dados.query('composto in @cod_compostos')
import seaborn as sns
import matplotlib.pyplot as plt
sns.set()
plt.figure(figsize=(8,6)) # CHANGING THE FIGURE SIZE
ax = sns.countplot(x= 'composto', data=dados.query('composto in @cod_compostos'))
ax.set_title('Top 5 Compostos')
plt.show()
len(dados['g0'].unique()) # HOW MANY UNIQUE ELEMENTS WE HAVE IN g0
dados['g0'].min() # CHECKING THE SMALLEST VALUE
dados['g0'].max() # CHECKING THE LARGEST VALUE
dados['g0'].hist(bins = 100)
dados['g19'].hist(bins = 100)
dados.describe()
dados[['g0', 'g1']]
dados.loc[:,'g0':'g771'].describe()
dados.loc[:,'g0':'g771'].describe().T['mean'].hist(bins=30) # TRANSPOSING THE DATA: ROWS BECOME COLUMNS AND COLUMNS BECOME ROWS
dados.loc[:,'g0':'g771'].describe().T['min'].hist(bins=30) # TRANSPOSING THE DATA: ROWS BECOME COLUMNS AND COLUMNS BECOME ROWS
dados.loc[:,'g0':'g771'].describe().T['max'].hist(bins=30) # TRANSPOSING THE DATA: ROWS BECOME COLUMNS AND COLUMNS BECOME ROWS
dados.loc[:,'c0':'c99'].describe().T['mean'].hist(bins=50) # TRANSPOSING THE DATA: ROWS BECOME COLUMNS AND COLUMNS BECOME ROWS
sns.boxplot(x='g0' , data=dados)
plt.figure(figsize=(10,8))
sns.boxplot(y='g0', x='tratamento' , data=dados)
###Output
_____no_output_____
###Markdown
Challenge 01: Sort the countplot chart
###Code
import seaborn as sns
import matplotlib.pyplot as plt
#cod_compostos = dados['composto'].value_counts().index[5:0:-1] # Descending
cod_compostos = dados['composto'].value_counts().index[0:5:1] # Ascending
sns.set()
plt.figure(figsize=(8,6)) # ALTERANDO O TAMANHO DA IMAGEM
data=dados.query('composto in @cod_compostos')
ax = sns.countplot(x= 'composto', data=data, order=cod_compostos)
ax.set_title('Top 5 Compostos')
plt.show()
###Output
_____no_output_____
###Markdown
Challenge 02: Improve the visualization by changing the font size
###Code
import seaborn as sns
import matplotlib.pyplot as plt
cod_compostos = dados['composto'].value_counts().index[5:0:-1] # Descending
sns.set()
plt.figure(figsize=(8,6)) # CHANGING THE FIGURE SIZE
data=dados.query('composto in @cod_compostos')
ax = sns.countplot(x= 'composto', data=data, order=cod_compostos)
ax.set_title('Top 5 Compostos', fontsize=25, color='red')
ax.set_ylabel('Contagem', fontsize=15, color='blue')
ax.set_xlabel('Composto', fontsize=15, color='blue')
plt.show()
###Output
_____no_output_____
###Markdown
Challenge 03: Plot the histograms with seaborn
###Code
celulas = dados.loc[:,'c0':'c99'].describe().T['mean']
plt.figure(figsize=(10,8))
hist = sns.histplot(data=celulas, bins=50)
hist.set_title('Células', fontsize=18, color='b')
hist.set_ylabel('Mediana', fontsize=12, color='g')
hist.set_xlabel('Contagem', fontsize=12, color='g')
plt.show()
###Output
_____no_output_____
###Markdown
Challenge 04: Study the statistics returned by .describe(). count -> counts the number of non-null observations. max -> maximum of the values in the object. min -> minimum of the values in the object. mean -> mean of the values. std -> standard deviation of the observations. select_dtypes -> subset of a DataFrame including/excluding columns based on their dtype (a short illustrative sketch follows at the top of the next code cell). Challenge 05: Reflect on how the size of the visualizations is manipulated. Challenge 06: Do other analyses with the boxplot and even with the histogram.
###Code
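# Short illustrative sketch for Challenge 04 (uses `dados` loaded above): describe()
# reports count, mean, std, min, the quartiles and max for each numeric column, and
# select_dtypes() restricts the frame to columns of a given dtype.
resumo = dados.select_dtypes(include='number').describe()
print(resumo.loc[['count', 'mean', 'std', 'min', 'max']])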
plt.figure(figsize=(14,12))
hist = sns.histplot(data=dados,bins=100, x='g0', hue='tempo', multiple="stack")
hist.set_title('Análise célular por tempo', fontsize=18, color='r')
hist.set_ylabel('Quantidade', fontsize='12', color='g')
hist.set_xlabel('Célula g0', fontsize='12', color='g')
plt.show()
plt.figure(figsize=(14,12))
hist = sns.histplot(data=dados,bins=100, x='c0', hue='tempo', multiple="stack")
hist.set_title('Análise celular por tempo', fontsize=18, color='r')
hist.set_ylabel('Quantidade', fontsize=12, color='g')
hist.set_xlabel('Célula c0', fontsize=12, color='g')
plt.show()
###Output
_____no_output_____ |
notebooks/session6_etj.ipynb | ###Markdown
Importing packages and loading file
###Code
import os
import pandas as pd
from tqdm import tqdm
import spacy
nlp = spacy.load("en_core_web_sm")
file = os.path.join("..", "data", "labelled_data", "fake_or_real_news.csv")
data = pd.read_csv(file)
real_df = data[data["label"]=="REAL"]["text"]
###Output
_____no_output_____
###Markdown
Extract entities -> DON'T RUN BELOW
###Code
post_entities = []
########## TAKES A LONG TIME!
for post in tqdm(real_df):
# create temporary list
tmp_list = []
# create spacy doc object
doc = nlp(post)
# for every named entity in the doc:
for entity in doc.ents:
if entity.label_ == "PERSON":
tmp_list.append(entity.text)
post_entities.append(tmp_list)
post_entities[0]
###Output
_____no_output_____
###Markdown
Extract edgelists using itertools.combinations
###Code
from itertools import combinations
edgelist = []
# Iterate over every document ("post entities)")
for doc in post_entities:
edges = list(combinations(doc, 2))
# For each combination (each pair of nodes)
for edge in edges:
# Append this to the final edgelist
edgelist.append(tuple(sorted(edge))) #sorted gives alphabetical order
list(combinations([1,2,3,4,5],2)) # Giving an example of what we are doing -> We're getting all possible combinations within each document
edgelist[:10]
len(edgelist) # 1.3 mio. edges
###Output
_____no_output_____
###Markdown
Counting edges
###Code
from collections import Counter
Counter(edgelist).most_common(5) # return the 5 most common edges
counted_edges = []
for pair, weight in Counter(edgelist).items():
nodeA = pair[0]
nodeB = pair[1]
counted_edges.append((nodeA, nodeB, weight))
counted_edges[:3]
len(counted_edges)
###Output
_____no_output_____
###Markdown
Create dataframe
###Code
edges_df = pd.DataFrame(counted_edges, columns = ["nodeA", "nodeB", "weight"])
edges_df.sample(5)
print(edges_df[edges_df["weight"] > 8000])
filtered_df = edges_df[edges_df["weight"] > 8000]
import networkx as nx
import matplotlib.pyplot as plt
G = nx.from_pandas_edgelist(filtered_df, "nodeA", "nodeB", ["weight"])
###Output
_____no_output_____
###Markdown
Doesn't work plotting on windows with pygraphviz
###Code
# Use this instead:
# https://networkx.org/documentation/stable//reference/drawing.html
# Or matplotlib
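# Minimal sketch using networkx's built-in matplotlib drawing (no pygraphviz needed);
# G comes from the filtered edge list built above.
pos = nx.spring_layout(G, seed=42)  # force-directed node positions
nx.draw_networkx(G, pos, node_size=300, font_size=8, edge_color="gray")
plt.axis("off")
plt.show()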
###Output
_____no_output_____
###Markdown
Centrality measures for finding important nodes
###Code
bc_metric = nx.betweenness_centrality(G)
ev_metric = nx.eigenvector_centrality(G)
bc_metric
ev_metric
importance_df = pd.DataFrame(bc_metric.items(), columns = ["node", "betweenness"])
importance_df["eigenvector"] = ev_metric.values()
importance_df
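# Rank nodes by betweenness to surface the most "important" ones (top 5 shown)
print(importance_df.sort_values("betweenness", ascending=False).head(5))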
###Output
_____no_output_____ |
src/DataPreprocessing/original/Data-Preprocessing-FY20.ipynb | ###Markdown
Data Pre-Processing-FY20 **Collecting the data -** data consists of budget text documents in the form of PDF files obtained from the following organizations: * [Guilford County](https://www.guilfordcountync.gov/home/showdocument?id=9497) * [Durham County](https://www.dconc.gov/home/showdocument?id=27985) * [City of Durham](https://durhamnc.gov/DocumentCenter/View/27412/FY20-Final-Budget) * [City of Charlotte](https://charlottenc.gov/budget/FY2020%20Documents/FY%202020%20Adopted%20Budget%20Book%207-31%20Complete.pdf) * [Mecklenburg County](https://www.mecknc.gov/CountyManagersOffice/OMB/Documents/FY2020%20Adopted%20Budget.pdf) * [Wake County](http://www.wakegov.com/budget/fy20/Documents/FY20%20Adopted%20Budget%20Book.pdf) * [City of Raleigh](https://user-2081353526.cld.bz/FY2020AdoptedBudget) After the PDF files are collected, they are compressed to reduce the size. Then, files are converted into CSV files using an app developed by project mentor: **[Jason Jones](https://www.linkedin.com/in/jones-jason-adam/),** **click [here](https://jason-jones.shinyapps.io/Emotionizer/) for the App**
###Code
#Importing packages
import os
import glob
import nltk
import pandas as pd
import numpy as np
# change the current directory to read the data
os.chdir(r"C:\Users\Sultan\Desktop\data\FY2020\structured\original")
###Output
_____no_output_____
###Markdown
Reading and labeling data for all organizations
###Code
# 1- Reading Guilford-County data file
GC_df = pd.read_csv("GuilfordCountyOriginalDataFY20.csv", engine='python')
# inserting "organization" column with static value
# corresponding to the organization in question
GC_df.insert(2, "organization", "Guilford County")
# 2- For Charlotte-City data
CC_df = pd.read_csv(r'CharlotteCityOriginalDataFY20.csv', engine='python')
CC_df.insert(2, "organization", "Charlotte City")
# 3- For Durham-City data
DCity_df = pd.read_csv(r'DurhamCityOriginalDataFY20.csv', engine='python')
DCity_df.insert(2, "organization", "Durham City")
# 4- For Durham-County data
DCounty_df = pd.read_csv(r'DurhamCountyOriginalDataFY20.csv', engine='python')
DCounty_df.insert(2, "organization", "Durham County")
# 5- For Mecklenburg-County data
MC_df = pd.read_csv(r'MecklenburgCountyOriginalDataFY20.csv', engine='python')
MC_df.insert(2, "organization", "Mecklenburg County")
# 6- For Raleigh-City data
RC_df = pd.read_csv(r'RaleighCityOriginalDataFY20.csv', engine='python')
RC_df.insert(2, "organization", "Raleigh City")
# 7- For Wake-County data
WC_df = pd.read_csv(r'WakeCountyOriginalDataFY20.csv', engine='python')
WC_df.insert(2, "organization", "Wake County")
# Combine all dataframes into a single dataframe using concat() function
# Row lables are adjusted automaticlly by passing ignore_index=True
df = pd.concat([GC_df, CC_df, DCity_df,
DCounty_df, MC_df, RC_df, WC_df], ignore_index=True)
df.head()
# listing columns in data frame
list(df)
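# More compact alternative sketch (same files and labels as above): read and label
# all seven sources in a single loop instead of seven copy-pasted blocks.
sources = {"GuilfordCountyOriginalDataFY20.csv": "Guilford County",
           "CharlotteCityOriginalDataFY20.csv": "Charlotte City",
           "DurhamCityOriginalDataFY20.csv": "Durham City",
           "DurhamCountyOriginalDataFY20.csv": "Durham County",
           "MecklenburgCountyOriginalDataFY20.csv": "Mecklenburg County",
           "RaleighCityOriginalDataFY20.csv": "Raleigh City",
           "WakeCountyOriginalDataFY20.csv": "Wake County"}
frames = []
for fname, org in sources.items():
    tmp = pd.read_csv(fname, engine='python')
    tmp.insert(2, "organization", org)
    frames.append(tmp)
df_loop = pd.concat(frames, ignore_index=True)  # equivalent to df built above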
###Output
_____no_output_____
###Markdown
Dropping and reordering columns
###Code
# delete columns using the columns parameter of drop
df = df.drop(columns="Unnamed: 0")
# re-order columns
df = df[['page_number','word','organization']]
df.head()
###Output
_____no_output_____
###Markdown
Adding "Year" column with a static value corresponding to the year in question
###Code
df.insert(3, "year", "FY2020")
df.head()
###Output
_____no_output_____
###Markdown
Export the dataframe to a single CSV file
###Code
# Change the directory so the file is stored in the right place
os.chdir(r"C:\Users\Sultan\Desktop\data\PreprocessedData")
# Export dataframe to csv
df.to_csv(r'DataFY20.csv', index=False, encoding='utf-8-sig')
###Output
_____no_output_____ |
files/Voxelwise_Encoding_BIDS.ipynb | ###Markdown
An example workflow for voxel-wise encoding models using a BIDS app. This shows how to (for a BIDS compliant dataset) extract features, save them in BIDS format, and run a BIDS app for voxel-wise encoding models. We are going to use [this](https://openneuro.org/datasets/ds002322/versions/1.0.4) dataset. *Warning*: Executing this notebook will download the full dataset.
###Code
!aws s3 sync --no-sign-request s3://openneuro.org/ds002322 ds002322-download/
###Output
_____no_output_____
###Markdown
Extracting a stimulus representation. The dataset in question consists of fMRI activity recorded from several participants while they listened to a reading of the first chapter of Lewis Carroll’s Alice in Wonderland. First we want to extract a stimulus representation that we can use; I chose a Mel spectrogram for demonstration. [This](https://github.com/mjboos/audio2bidsstim/) small Python script extracts such a representation and saves it in a BIDS compliant format. If you get an error that the `sndfile` library was not found, you will need to use conda to install it.
###Code
import json
# these are the parameters for extracting a Mel spectrogram
# for computational ease in this example we want 1 sec segments of 31 Mel frequencies with a max frequency of * KHz
mel_params = {'n_mels': 31, 'sr': 16000, 'hop_length': 16000, 'n_fft': 16000, 'fmax': 8000}
with open('config.json', 'w+') as fl:
json.dump(mel_params, fl)
!git clone https://github.com/mjboos/audio2bidsstim/
!pip install -r audio2bidsstim/requirements.txt
!python audio2bidsstim/wav_files_to_bids_tsv.py ds002322-download/stimuli/DownTheRabbitHoleFinal_mono_exp120_NR16_pad.wav -c config.json
!ls -l
###Output
_____no_output_____
###Markdown
Now we must copy these files into the BIDS dataset directory according to [these](https://bids-specification.readthedocs.io/en/stable/04-modality-specific-files/06-physiological-and-other-continuous-recordings.html) specifications. We are going to use the `derivatives` folder for the already preprocessed data.
###Code
!cp DownTheRabbitHoleFinal_mono_exp120_NR16_pad.tsv.gz ds002322-download/derivatives/task-alice_stim.tsv.gz
!cp DownTheRabbitHoleFinal_mono_exp120_NR16_pad.json ds002322-download/derivatives/sub-18/sub-18_task-alice_stim.json
###Output
_____no_output_____
###Markdown
And, lastly, because for this dataset the derivatives folder is missing timing information for the BOLD files - we are only interested in the TR - we have to copy that as well.
###Code
!cp ds002322-download/sub-18/sub-18_task-alice_bold.json ds002322-download/derivatives/sub-18/sub-18_task-alice_bold.json
###Output
_____no_output_____
###Markdown
Running the analysis. Now we're all set and can run our encoding analysis. This analysis uses standard Ridge regression, and we're going to specify some additional parameters here.
###Code
ridge_params = {'alphas': [1e-1, 1, 100, 1000], 'n_splits': 3, 'normalize': True}
# and for lagging the stimulus as well - we want to include 6 sec stimulus segments to predict fMRI
lagging_params = {'lag_time': 6}
with open('encoding_config.json', 'w+') as fl:
json.dump(ridge_params, fl)
with open('lagging_config.json', 'w+') as fl:
json.dump(lagging_params, fl)
###Output
_____no_output_____
###Markdown
Now we just need [this](https://github.com/mjboos/voxelwiseencoding) BIDS app for running the analysis. Running this cell will fit voxel-wise encoding models, which right now need about 8 GB of RAM. Using Docker to run the voxelwise-encoding BIDS app. You can use Docker to build or get an image that already includes all libraries:
###Code
!git clone https://github.com/mjboos/voxelwiseencoding
!mkdir output
# we need to mount a config folder for our json files
!mkdir config
!cp *config.json config/
!docker run -i --rm -v ds002322-download/derivatives:bids_dataset/:ro -v config/:/config:ro -v output/:/output mjboos/voxelwiseencoding /bids_dataset /output --task alice --skip_bids_validator --participant_label 18 --preprocessing-config /config/lagging_config.json --encoding-config /config/encoding_config.json --detrend --standardize zscore
###Output
_____no_output_____
###Markdown
Alternative: run the module directly. Alternatively, you can install the required libraries directly and run the Python script yourself.
###Code
!git clone https://github.com/mjboos/voxelwiseencoding
!pip install -r voxelwiseencoding/requirements.txt
!mkdir output
!python voxelwiseencoding/run.py ds002322-download/derivatives output --task alice --skip_bids_validator --participant_label 18 --preprocessing-config lagging_config.json --encoding-config encoding_config.json --detrend --standardize zscore
###Output
_____no_output_____
###Markdown
Now we'll have some ridge regressions saved in output, as well as scores saved as a Nifti file, which we can visualize. First we load the scores (one volume of scores per fold), average them, and then plot them via Nilearn.
###Code
from nilearn.image import mean_img
mean_scores = mean_img('output/sub-18_task-alice_scores.nii.gz')
from nilearn import plotting
plotting.plot_stat_map(mean_scores, threshold=0.1)
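# Optional quick check (illustrative): report the best mean score across voxels
import numpy as np
print("max mean score:", np.nanmax(mean_scores.get_fdata()))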
###Output
_____no_output_____ |
Pandas/8 - Identificando e Removendo Outliers.ipynb | ###Markdown
Analysis Report VIII Identifying and Removing Outliers Part 1
###Code
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
plt.rc('figure', figsize = (14, 6))
dados = pd.read_csv('dados/t_alugueis_residenciais.csv', sep = ';')
dados.boxplot(['Valor'])
###Output
_____no_output_____
###Markdown
In this first visualization (above) we can see that the chart does not display well: it is vertical and contains very discrepant values that squash the chart.
###Code
# The cutoff of 500,000 chosen below is based on the discrepancy shown in the chart above.
dados[dados['Valor'] >= 500000]
valor = dados['Valor']
valor
###Output
_____no_output_____
###Markdown
We can split the data into _quantiles_ according to the percentage of the data distribution/representation.
###Code
# Left (lower) outlier side
Q1 = valor.quantile(.25)
Q1
# Right (upper) outlier side
Q3 = valor.quantile(.75)
Q3
# Interquartile range
IIQ = Q3 - Q1
IIQ
# Limits
limite_inferior = Q1 - 1.5 * IIQ
limite_superior = Q3 + 1.5 * IIQ
###Output
_____no_output_____
###Markdown
In other words, everything outside the lower and upper limits is considered an outlier.
###Code
selecao = (valor >= limite_inferior) & (valor <= limite_superior)
dados_new = dados[selecao]
dados_new
dados_new.boxplot(['Valor'])
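# Quick check: how many rows were dropped as outliers by the IQR rule above
print(f"Removed {len(dados) - len(dados_new)} of {len(dados)} rows as outliers")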
###Output
_____no_output_____
###Markdown
With the discrepant data removed (most likely the result of mixing up purchase prices and rental prices), it is noticeable that several values still sit above the new upper limit, and that these are plausible real rental values under certain conditions. Therefore, removing them would require a more careful analysis.
###Code
# Before
dados.hist(['Valor'])
# After
dados_new.hist(['Valor'])
dados_new.to_csv('dados/alugueis_residenciais_sem_outliers.csv', sep = ';', index = False)
###Output
_____no_output_____
###Markdown
Part 2
###Code
dados_new.boxplot(['Valor'], by = ['Tipo'])
dados.boxplot(['Valor'], by = ['Tipo'])
grupo_tipo = dados.groupby('Tipo')['Valor']
grupo_tipo.groups
Q1 = grupo_tipo.quantile(.25)
Q3 = grupo_tipo.quantile(.75)
IIQ = Q3 - Q1
LI = Q1 - 1.5 * IIQ
LS = Q3 + 1.5 * IIQ
print(Q1)
print(Q3)
print(IIQ)
print(LI)
print(LS)
%config IPCompleter.greedy=True
grupo_tipo.groups.keys
dados_new_tipo = pd.DataFrame()
for tipo in grupo_tipo.groups.keys():
eh_tipo = (dados["Tipo"] == tipo)
eh_dentro_limite = (dados["Valor"] >= LI[tipo]) & (dados["Valor"] <= LS[tipo])
selecao = eh_tipo & eh_dentro_limite
dados_new_tipo = pd.concat([dados_new_tipo, dados[selecao]])
dados_new_tipo
dados_new.boxplot(['Valor'], by = ['Tipo'])
dados_new_tipo.boxplot(['Valor'], by = ['Tipo'])
dados_new_tipo.to_csv('dados/alugueis_residenciais_sem_outliers.csv', sep = ';', index = False)
###Output
_____no_output_____ |
Tutorial-BSSN_quantities.ipynb | ###Markdown
BSSN Quantities Author: Zach Etienne Formatting improvements courtesy Brandon Clark This module documents and constructs a number of quantities useful for building symbolic (SymPy) expressions in terms of the core BSSN quantities $\left\{h_{i j},a_{i j},\phi, K, \lambda^{i}, \alpha, \mathcal{V}^i, \mathcal{B}^i\right\}$, as defined in [Ruchlin, Etienne, and Baumgarte (2018)](https://arxiv.org/abs/1712.07658) (see also [Baumgarte, Montero, Cordero-Carrión, and Müller (2012)](https://arxiv.org/abs/1211.6632)). **Notebook Status:** Self-Validated **Validation Notes:** This tutorial notebook has been confirmed to be self-consistent with its corresponding NRPy+ module, as documented [below](code_validation). **Additional validation tests may have been performed, but are as yet, undocumented. (TODO)**[comment]: (Introduction: TODO) A Note on Notation:As is standard in NRPy+, * Greek indices refer to four-dimensional quantities where the zeroth component indicates temporal (time) component.* Latin indices refer to three-dimensional quantities. This is somewhat counterintuitive since Python always indexes its lists starting from 0. As a result, the zeroth component of three-dimensional quantities will necessarily indicate the first *spatial* direction.As a corollary, any expressions involving mixed Greek and Latin indices will need to offset one set of indices by one: A Latin index in a four-vector will be incremented and a Greek index in a three-vector will be decremented (however, the latter case does not occur in this tutorial notebook). Table of Contents$$\label{toc}$$Each family of quantities is constructed within a given function (**boldfaced** below). This notebook is organized as follows1. [Step 1](initializenrpy): Initialize needed Python/NRPy+ modules1. [Step 2](declare_bssn_gfs): **`declare_BSSN_gridfunctions_if_not_declared_already()`**: Declare all of the core BSSN variables $\left\{h_{i j},a_{i j},\text{cf}, K, \lambda^{i}, \alpha, \mathcal{V}^i, \mathcal{B}^i\right\}$ and register them as gridfunctions1. [Step 3](rescaling_tensors) Rescaling tensors to avoid coordinate singularities 1. [Step 3.a](bssn_basic_tensors) **`BSSN_basic_tensors()`**: Define all basic conformal BSSN tensors $\left\{\bar{\gamma}_{i j},\bar{A}_{i j},\bar{\Lambda}^{i}, \beta^i, B^i\right\}$ in terms of BSSN gridfunctions1. [Step 4](bssn_barred_metric__inverse_and_derivs): **`gammabar__inverse_and_derivs()`**: $\bar{\gamma}^{ij}$, and spatial derivatives of $\bar{\gamma}_{ij}$ including $\bar{\Gamma}^{i}_{jk}$ 1. [Step 4.a](bssn_barred_metric__inverse): Inverse conformal 3-metric: $\bar{\gamma^{ij}}$ 1. [Step 4.b](bssn_barred_metric__derivs): Derivatives of the conformal 3-metric $\bar{\gamma}_{ij,k}$ and $\bar{\gamma}_{ij,kl}$, and associated "barred" Christoffel symbols $\bar{\Gamma}^{i}_{jk}$1. [Step 5](detgammabar_and_derivs): **`detgammabar_and_derivs()`**: $\det \bar{\gamma}_{ij}$ and its derivatives1. [Step 6](abar_quantities): **`AbarUU_AbarUD_trAbar()`**: Quantities related to conformal traceless extrinsic curvature $\bar{A}_{ij}$: $\bar{A}^{ij}$, $\bar{A}^i_j$, and $\bar{A}^k_k$1. [Step 7](rbar): **`RicciBar__gammabarDD_dHatD__DGammaUDD__DGammaU()`**: The conformal ("barred") Ricci tensor $\bar{R}_{ij}$ and associated quantities 1. [Step 7.a](rbar_part1): Conformal Ricci tensor, part 1: The $\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j}$ term 1.
[Step 7.b](rbar_part2): Conformal Ricci tensor, part 2: The $\bar{\gamma}_{k(i} \hat{D}_{j)} \bar{\Lambda}^{k}$ term 1. [Step 7.c](rbar_part3): Conformal Ricci tensor, part 3: The $\Delta^{k} \Delta_{(i j) k} + \bar{\gamma}^{k l} \left (2 \Delta_{k(i}^{m} \Delta_{j) m l} + \Delta_{i k}^{m} \Delta_{m j l} \right )$ terms 1. [Step 7.d](summing_rbar_terms): Summing the terms and defining $\bar{R}_{ij}$1. [Step 8](beta_derivs): **`betaU_derivs()`**: Unrescaled shift vector $\beta^i$ and spatial derivatives $\beta^i_{,j}$ and $\beta^i_{,jk}$1. [Step 9](phi_and_derivs): **`phi_and_derivs()`**: Standard BSSN conformal factor $\phi$, and its derivatives $\phi_{,i}$, $\phi_{,ij}$, $\bar{D}_j \phi_i$, and $\bar{D}_j\bar{D}_k \phi_i$ 1. [Step 9.a](phi_ito_cf): $\phi$ in terms of the chosen (possibly non-standard) conformal factor variable `cf` (e.g., `cf`$=W=e^{-4\phi}$) 1. [Step 9.b](phi_covariant_derivs): Partial and covariant derivatives of $\phi$1. [Step 10](code_validation): Code Validation against `BSSN.BSSN_quantities` NRPy+ module1. [Step 11](latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file Step 1: Initialize needed Python/NRPy+ modules \[Back to [top](toc)\]$$\label{initializenrpy}$$
###Code
# Step 1: Import all needed modules from NRPy+:
import NRPy_param_funcs as par
import sympy as sp
import indexedexp as ixp
import grid as gri
import reference_metric as rfm
import sys
# Step 1.a: Set the coordinate system for the numerical grid
par.set_parval_from_str("reference_metric::CoordSystem","Spherical")
# Step 1.b: Given the chosen coordinate system, set up
# corresponding reference metric and needed
# reference metric quantities
# The following function call sets up the reference metric
# and related quantities, including rescaling matrices ReDD,
# ReU, and hatted quantities.
rfm.reference_metric()
# Step 1.c: Set spatial dimension (must be 3 for BSSN, as BSSN is
# a 3+1-dimensional decomposition of the general
# relativistic field equations)
DIM = 3
par.set_parval_from_str("grid::DIM",DIM)
# Step 1.d: Declare/initialize parameters for this module
thismodule = "BSSN_quantities"
par.initialize_param(par.glb_param("char", thismodule, "EvolvedConformalFactor_cf", "W"))
par.initialize_param(par.glb_param("bool", thismodule, "detgbarOverdetghat_equals_one", "True"))
###Output
_____no_output_____
###Markdown
Step 2: `declare_BSSN_gridfunctions_if_not_declared_already()`: Declare all of the core BSSN variables $\left\{h_{i j},a_{i j},\text{cf}, K, \lambda^{i}, \alpha, \mathcal{V}^i, \mathcal{B}^i\right\}$ and register them as gridfunctions \[Back to [top](toc)\]$$\label{declare_bssn_gfs}$$
###Code
# Step 2: Register all needed BSSN gridfunctions.
# Step 2.a: Register indexed quantities, using ixp.register_... functions
hDD = ixp.register_gridfunctions_for_single_rank2("EVOL", "hDD", "sym01")
aDD = ixp.register_gridfunctions_for_single_rank2("EVOL", "aDD", "sym01")
lambdaU = ixp.register_gridfunctions_for_single_rank1("EVOL", "lambdaU")
vetU = ixp.register_gridfunctions_for_single_rank1("EVOL", "vetU")
betU = ixp.register_gridfunctions_for_single_rank1("EVOL", "betU")
# Step 2.b: Register scalar quantities, using gri.register_gridfunctions()
trK, cf, alpha = gri.register_gridfunctions("EVOL",["trK", "cf", "alpha"])
###Output
_____no_output_____
###Markdown
Step 3: Rescaling tensors to avoid coordinate singularities \[Back to [top](toc)\]$$\label{rescaling_tensors}$$While the [covariant form of the BSSN evolution equations](Tutorial-BSSNCurvilinear.ipynb) are properly covariant (with the potential exception of the shift evolution equation, since the shift is a [freely specifiable gauge quantity](https://en.wikipedia.org/wiki/Gauge_fixing)), components of the rank-1 and rank-2 tensors $\varepsilon_{i j}$, $\bar{A}_{i j}$, and $\bar{\Lambda}^{i}$ will drop to zero (destroying information) or diverge (to $\infty$) at coordinate singularities. The good news is, this singular behavior is well-understood in terms of the scale factors of the reference metric, enabling us to define rescaled version of these quantities that are well behaved (so that, e.g., they can be finite differenced).For example, given a smooth vector *in a 3D Cartesian basis* $\bar{\Lambda}^{i}$, all components $\bar{\Lambda}^{x}$, $\bar{\Lambda}^{y}$, and $\bar{\Lambda}^{z}$ will be smooth (by assumption). When changing the basis to spherical coordinates (applying the appropriate Jacobian matrix transformation), we will find that since $\phi = \arctan(y/x)$, $\bar{\Lambda}^{\phi}$ is given by\begin{align}\bar{\Lambda}^{\phi} &= \frac{\partial \phi}{\partial x} \bar{\Lambda}^{x} + \frac{\partial \phi}{\partial y} \bar{\Lambda}^{y} + \frac{\partial \phi}{\partial z} \bar{\Lambda}^{z} \\&= -\frac{y}{\sqrt{x^2+y^2}} \bar{\Lambda}^{x} + \frac{x}{\sqrt{x^2+y^2}} \bar{\Lambda}^{y} \\&= -\frac{y}{r \sin\theta} \bar{\Lambda}^{x} + \frac{x}{r \sin\theta} \bar{\Lambda}^{y}.\end{align}Thus $\bar{\Lambda}^{\phi}$ diverges at all points where $r\sin\theta=0$ due to the $\frac{1}{r\sin\theta}$ that appear in the Jacobian transformation. This divergence might pose no problem on cell-centered grids that avoid $r \sin\theta=0$, except that the BSSN equations require that *first and second derivatives* of these quantities be taken. Usual strategies for numerical approximation of these derivatives (e.g., finite difference methods) will "see" these divergences and errors generally will not drop to zero with increased numerical sampling of the functions at points near where the functions diverge.However, notice that if we define $\lambda^{\phi}$ such that$$\bar{\Lambda}^{\phi} = \frac{1}{r\sin\theta} \lambda^{\phi},$$then $\lambda^{\phi}$ will be smooth as well. Avoiding such singularities can be generalized to other coordinate systems, so long as $\lambda^i$ is defined as:$$\bar{\Lambda}^{i} = \frac{\lambda^i}{\text{scalefactor[i]}} ,$$where scalefactor\[i\] is the $i$th scale factor in the given coordinate system. In an identical fashion, we define the smooth versions of $\beta^i$ and $B^i$ to be $\mathcal{V}^i$ and $\mathcal{B}^i$, respectively. We refer to $\mathcal{V}^i$ and $\mathcal{B}^i$ as vet\[i\] and bet\[i\] respectively in the code after the Hebrew letters that bear some resemblance. Similarly, we define the smooth versions of $\bar{A}_{ij}$ and $\varepsilon_{ij}$ ($a_{ij}$ and $h_{ij}$, respectively) via\begin{align}\bar{A}_{ij} &= \text{scalefactor[i]}\ \text{scalefactor[j]}\ a_{ij} \\\varepsilon_{ij} &= \text{scalefactor[i]}\ \text{scalefactor[j]}\ h_{ij},\end{align}where in this case we *multiply* due to the fact that these tensors are purely covariant (as opposed to contravariant). 
To slightly simplify the notation, in NRPy+ we define the *rescaling matrices* `ReU[i]` and `ReDD[i][j]`, such that\begin{align}\text{ReU[i]} &= 1 / \text{scalefactor[i]} \\\text{ReDD[i][j]} &= \text{scalefactor[i] scalefactor[j]}.\end{align}Thus, for example, $\bar{A}_{ij}$ and $\bar{\Lambda}^i$ can be expressed as the [Hadamard product](https://en.wikipedia.org/w/index.php?title=Hadamard_product_(matrices)&oldid=852272177) of matrices:\begin{align}\bar{A}_{ij} &= \mathbf{ReDD}\circ\mathbf{a} = \text{ReDD[i][j]} a_{ij} \\\bar{\Lambda}^{i} &= \mathbf{ReU}\circ\mathbf{\lambda} = \text{ReU[i]} \lambda^i,\end{align}where no sums are implied by the repeated indices. Further, since the scale factors are *time independent*, \begin{align}\partial_t \bar{A}_{ij} &= \text{ReDD[i][j]}\ \partial_t a_{ij} \\\partial_t \bar{\gamma}_{ij} &= \partial_t \left(\varepsilon_{ij} + \hat{\gamma}_{ij}\right)\\&= \partial_t \varepsilon_{ij} \\&= \text{scalefactor[i]}\ \text{scalefactor[j]}\ \partial_t h_{ij}.\end{align}Thus instead of taking space or time derivatives of BSSN quantities$$\left\{\bar{\gamma}_{i j},\bar{A}_{i j},\phi, K, \bar{\Lambda}^{i}, \alpha, \beta^i, B^i\right\},$$ across coordinate singularities, we instead factor out the singular scale factors according to this prescription so that space or time derivatives of BSSN quantities are written in terms of finite-difference derivatives of the *rescaled* variables $$\left\{h_{i j},a_{i j},\text{cf}, K, \lambda^{i}, \alpha, \mathcal{V}^i, \mathcal{B}^i\right\},$$ and *exact* expressions for (spatial) derivatives of scale factors. Note that `cf` is the chosen conformal factor (supported choices for `cf` are discussed in [Step 6.a](phi_ito_cf)). As an example, let's evaluate $\bar{\Lambda}^{i}_{\, ,\, j}$ according to this prescription. Since $\bar{\Lambda}^{i} = \text{ReU[i]}\ \lambda^i$ (no sum implied), the product rule gives\begin{align}\bar{\Lambda}^{i}_{\, ,\, j} &= \partial_j \left(\lambda^i\ \text{ReU[i]}\right) \\&= \left(\partial_j \lambda^i\right) \text{ReU[i]} + \lambda^i\ \text{ReUdD[i][j]}.\end{align}Here, the derivative `ReUdD[i][j]` **is computed symbolically and exactly** using SymPy, and the derivative $\partial_j \lambda^i$ represents a derivative of a *smooth* quantity (so long as $\bar{\Lambda}^{i}$ is smooth in the Cartesian basis). (A short symbolic sketch of this recipe appears at the end of the Step 3.a code cell below.)
Step 3.a: `BSSN_basic_tensors()`: Define all basic conformal BSSN tensors $\left\{\bar{\gamma}_{i j},\bar{A}_{i j},\bar{\Lambda}^{i}, \beta^i, B^i\right\}$ in terms of BSSN gridfunctions \[Back to [top](toc)\]$$\label{bssn_basic_tensors}$$The `BSSN_vars__tensors()` function defines the tensorial BSSN quantities $\left\{\bar{\gamma}_{i j},\bar{A}_{i j},\bar{\Lambda}^{i}, \beta^i, B^i\right\}$, in terms of the rescaled "base" tensorial quantities $\left\{h_{i j},a_{i j}, \lambda^{i}, \mathcal{V}^i, \mathcal{B}^i\right\},$ respectively:\begin{align}\bar{\gamma}_{i j} &= \hat{\gamma}_{ij} + \varepsilon_{ij}, \text{ where } \varepsilon_{ij} = h_{ij} \circ \text{ReDD[i][j]} \\\bar{A}_{i j} &= a_{ij} \circ \text{ReDD[i][j]} \\\bar{\Lambda}^{i} &= \lambda^i \circ \text{ReU[i]} \\\beta^{i} &= \mathcal{V}^i \circ \text{ReU[i]} \\B^{i} &= \mathcal{B}^i \circ \text{ReU[i]}\end{align}Rescaling vectors and tensors are built upon the scale factors for the chosen (in general, singular) coordinate system, which are defined in NRPy+'s [reference_metric.py](../edit/reference_metric.py) ([Tutorial](Tutorial-Reference_Metric.ipynb)), and the rescaled variables are defined in the stub function [BSSN/BSSN_rescaled_vars.py](../edit/BSSN/BSSN_rescaled_vars.py). Here we implement `BSSN_vars__tensors()`:
###Code
# Step 3.a: Define all basic conformal BSSN tensors in terms of BSSN gridfunctions
# Step 3.a.i: gammabarDD and AbarDD:
gammabarDD = ixp.zerorank2()
AbarDD = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
# gammabar_{ij} = h_{ij}*ReDD[i][j] + gammahat_{ij}
gammabarDD[i][j] = hDD[i][j]*rfm.ReDD[i][j] + rfm.ghatDD[i][j]
# Abar_{ij} = a_{ij}*ReDD[i][j]
AbarDD[i][j] = aDD[i][j]*rfm.ReDD[i][j]
# Step 3.a.ii: LambdabarU, betaU, and BU:
LambdabarU = ixp.zerorank1()
betaU = ixp.zerorank1()
BU = ixp.zerorank1()
for i in range(DIM):
LambdabarU[i] = lambdaU[i]*rfm.ReU[i]
betaU[i] = vetU[i] *rfm.ReU[i]
BU[i] = betU[i] *rfm.ReU[i]
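# Illustrative sketch (not part of the original function): the derivative recipe
# described above, e.g. \bar{Lambda}^i_{,j} built from the rescaled lambda^i, the
# exact rescaling matrix rfm.ReU, and its exact derivative rfm.ReUdD.
lambdaU_dD = ixp.declarerank2("lambdaU_dD", "nosym")
LambdabarU_dD_sketch = ixp.zerorank2()
for i in range(DIM):
    for j in range(DIM):
        LambdabarU_dD_sketch[i][j] = lambdaU_dD[i][j]*rfm.ReU[i] + lambdaU[i]*rfm.ReUdD[i][j]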
###Output
_____no_output_____
###Markdown
Step 4: `gammabar__inverse_and_derivs()`: $\bar{\gamma}^{ij}$, and spatial derivatives of $\bar{\gamma}_{ij}$ including $\bar{\Gamma}^{i}_{jk}$ \[Back to [top](toc)\]$$\label{bssn_barred_metric__inverse_and_derivs}$$ Step 4.a: Inverse conformal 3-metric: $\bar{\gamma^{ij}}$ \[Back to [top](toc)\]$$\label{bssn_barred_metric__inverse}$$Since $\bar{\gamma}^{ij}$ is the inverse of $\bar{\gamma}_{ij}$, we apply a $3\times 3$ symmetric matrix inversion to compute $\bar{\gamma}^{ij}$.
###Code
# Step 4.a: Inverse conformal 3-metric gammabarUU:
# Step 4.a.i: gammabarUU:
gammabarUU, dummydet = ixp.symm_matrix_inverter3x3(gammabarDD)
###Output
_____no_output_____
###Markdown
Step 4.b: Derivatives of the conformal 3-metric $\bar{\gamma}_{ij,k}$ and $\bar{\gamma}_{ij,kl}$, and associated "barred" Christoffel symbols $\bar{\Gamma}^{i}_{jk}$ \[Back to [top](toc)\]$$\label{bssn_barred_metric__derivs}$$In the BSSN-in-curvilinear coordinates formulation, all quantities must be defined in terms of rescaled quantities $h_{ij}$ and their derivatives (evaluated using finite differences), as well as reference-metric quantities and their derivatives (evaluated exactly using SymPy). For example, $\bar{\gamma}_{ij,k}$ is given by:\begin{align}\bar{\gamma}_{ij,k} &= \partial_k \bar{\gamma}_{ij} \\&= \partial_k \left(\hat{\gamma}_{ij} + \varepsilon_{ij}\right) \\&= \partial_k \left(\hat{\gamma}_{ij} + h_{ij} \text{ReDD[i][j]}\right) \\&= \hat{\gamma}_{ij,k} + h_{ij,k} \text{ReDD[i][j]} + h_{ij} \text{ReDDdD[i][j][k]},\end{align}where `ReDDdD[i][j][k]` is computed within `rfm.reference_metric()`.
###Code
# Step 4.b.i gammabarDDdD[i][j][k]
# = \hat{\gamma}_{ij,k} + h_{ij,k} \text{ReDD[i][j]} + h_{ij} \text{ReDDdD[i][j][k]}.
gammabarDD_dD = ixp.zerorank3()
hDD_dD = ixp.declarerank3("hDD_dD","sym01")
hDD_dupD = ixp.declarerank3("hDD_dupD","sym01")
gammabarDD_dupD = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
gammabarDD_dD[i][j][k] = rfm.ghatDDdD[i][j][k] + \
hDD_dD[i][j][k]*rfm.ReDD[i][j] + hDD[i][j]*rfm.ReDDdD[i][j][k]
# Compute associated upwinded derivative, needed for the \bar{\gamma}_{ij} RHS
gammabarDD_dupD[i][j][k] = rfm.ghatDDdD[i][j][k] + \
hDD_dupD[i][j][k]*rfm.ReDD[i][j] + hDD[i][j]*rfm.ReDDdD[i][j][k]
###Output
_____no_output_____
###Markdown
By extension, the second derivative $\bar{\gamma}_{ij,kl}$ is given by\begin{align}\bar{\gamma}_{ij,kl} &= \partial_l \left(\hat{\gamma}_{ij,k} + h_{ij,k} \text{ReDD[i][j]} + h_{ij} \text{ReDDdD[i][j][k]}\right)\\&= \hat{\gamma}_{ij,kl} + h_{ij,kl} \text{ReDD[i][j]} + h_{ij,k} \text{ReDDdD[i][j][l]} + h_{ij,l} \text{ReDDdD[i][j][k]} + h_{ij} \text{ReDDdDD[i][j][k][l]}\end{align}
###Code
# Step 4.b.ii: Compute gammabarDD_dDD in terms of the rescaled BSSN quantity hDD
# and its derivatives, as well as the reference metric and rescaling
# matrix, and its derivatives (expression given below):
hDD_dDD = ixp.declarerank4("hDD_dDD","sym01_sym23")
gammabarDD_dDD = ixp.zerorank4()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
# gammabar_{ij,kl} = gammahat_{ij,kl}
# + h_{ij,kl} ReDD[i][j]
# + h_{ij,k} ReDDdD[i][j][l] + h_{ij,l} ReDDdD[i][j][k]
# + h_{ij} ReDDdDD[i][j][k][l]
gammabarDD_dDD[i][j][k][l] = rfm.ghatDDdDD[i][j][k][l]
gammabarDD_dDD[i][j][k][l] += hDD_dDD[i][j][k][l]*rfm.ReDD[i][j]
gammabarDD_dDD[i][j][k][l] += hDD_dD[i][j][k]*rfm.ReDDdD[i][j][l] + \
hDD_dD[i][j][l]*rfm.ReDDdD[i][j][k]
gammabarDD_dDD[i][j][k][l] += hDD[i][j]*rfm.ReDDdDD[i][j][k][l]
###Output
_____no_output_____
###Markdown
Finally, we compute the Christoffel symbol associated with the barred 3-metric: $\bar{\Gamma}^{i}_{kl}$:$$\bar{\Gamma}^{i}_{kl} = \frac{1}{2} \bar{\gamma}^{im} \left(\bar{\gamma}_{mk,l} + \bar{\gamma}_{ml,k} - \bar{\gamma}_{kl,m} \right)$$
###Code
# Step 4.b.iii: Define barred Christoffel symbol \bar{\Gamma}^{i}_{kl} = GammabarUDD[i][k][l] (see expression below)
GammabarUDD = ixp.zerorank3()
for i in range(DIM):
for k in range(DIM):
for l in range(DIM):
for m in range(DIM):
# Gammabar^i_{kl} = 1/2 * gammabar^{im} ( gammabar_{mk,l} + gammabar_{ml,k} - gammabar_{kl,m}):
GammabarUDD[i][k][l] += sp.Rational(1,2)*gammabarUU[i][m]* \
(gammabarDD_dD[m][k][l] + gammabarDD_dD[m][l][k] - gammabarDD_dD[k][l][m])
###Output
_____no_output_____
###Markdown
Step 5: `detgammabar_and_derivs()`: $\det \bar{\gamma}_{ij}$ and its derivatives \[Back to [top](toc)\]$$\label{detgammabar_and_derivs}$$As described just before Section III of [Baumgarte *et al* (2012)](https://arxiv.org/pdf/1211.6632.pdf), we are free to choose $\det \bar{\gamma}_{ij}$, which should remain fixed in time.As in [Baumgarte *et al* (2012)](https://arxiv.org/pdf/1211.6632.pdf) generally we make the choice $\det \bar{\gamma}_{ij} = \det \hat{\gamma}_{ij}$, but *this need not be the case; we could choose to set $\det \bar{\gamma}_{ij}$ to another expression.*In case we do not choose to set $\det \bar{\gamma}_{ij}/\det \hat{\gamma}_{ij}=1$, below we begin the implementation of a gridfunction, `detgbarOverdetghat`, which defines an alternative expression in its place. $\det \bar{\gamma}_{ij}/\det \hat{\gamma}_{ij}$=`detgbarOverdetghat`$\ne 1$ is not yet implemented. However, we can define `detgammabar` and its derivatives in terms of a generic `detgbarOverdetghat` and $\det \hat{\gamma}_{ij}$ and their derivatives:\begin{align}\text{detgammabar} &= \det \bar{\gamma}_{ij} = \text{detgbarOverdetghat} \cdot \left(\det \hat{\gamma}_{ij}\right) \\\text{detgammabar}\_\text{dD[k]} &= \left(\det \bar{\gamma}_{ij}\right)_{,k} = \text{detgbarOverdetghat}\_\text{dD[k]} \det \hat{\gamma}_{ij} + \text{detgbarOverdetghat} \left(\det \hat{\gamma}_{ij}\right)_{,k} \\\end{align}https://en.wikipedia.org/wiki/Determinant#Properties_of_the_determinant
###Code
# Step 5: det(gammabarDD) and its derivatives
detgbarOverdetghat = sp.sympify(1)
detgbarOverdetghat_dD = ixp.zerorank1()
detgbarOverdetghat_dDD = ixp.zerorank2()
if par.parval_from_str(thismodule+"::detgbarOverdetghat_equals_one") == "False":
print("Error: detgbarOverdetghat_equals_one=\"False\" is not fully implemented yet.")
sys.exit(1)
## Approach for implementing detgbarOverdetghat_equals_one=False:
# detgbarOverdetghat = gri.register_gridfunctions("AUX", ["detgbarOverdetghat"])
# detgbarOverdetghatInitial = gri.register_gridfunctions("AUX", ["detgbarOverdetghatInitial"])
# detgbarOverdetghat_dD = ixp.declarerank1("detgbarOverdetghat_dD")
# detgbarOverdetghat_dDD = ixp.declarerank2("detgbarOverdetghat_dDD", "sym01")
# Step 5.b: Define detgammabar, detgammabar_dD, and detgammabar_dDD (needed for
#            \partial_t \bar{\Lambda}^i below)
detgammabar = detgbarOverdetghat * rfm.detgammahat
detgammabar_dD = ixp.zerorank1()
for i in range(DIM):
detgammabar_dD[i] = detgbarOverdetghat_dD[i] * rfm.detgammahat + detgbarOverdetghat * rfm.detgammahatdD[i]
detgammabar_dDD = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
detgammabar_dDD[i][j] = detgbarOverdetghat_dDD[i][j] * rfm.detgammahat + \
detgbarOverdetghat_dD[i] * rfm.detgammahatdD[j] + \
detgbarOverdetghat_dD[j] * rfm.detgammahatdD[i] + \
detgbarOverdetghat * rfm.detgammahatdDD[i][j]
###Output
_____no_output_____
###Markdown
Step 6: `AbarUU_AbarUD_trAbar_AbarDD_dD()`: Quantities related to conformal traceless extrinsic curvature $\bar{A}_{ij}$: $\bar{A}^{ij}$, $\bar{A}^i_j$, and $\bar{A}^k_k$ \[Back to [top](toc)\]$$\label{abar_quantities}$$$\bar{A}^{ij}$ is given by application of the raising operators (a.k.a., the inverse 3-metric) $\bar{\gamma}^{jk}$ on both of the covariant ("down") components:$$\bar{A}^{ij} = \bar{\gamma}^{ik}\bar{\gamma}^{jl} \bar{A}_{kl}.$$$\bar{A}^i_j$ is given by a single application of the raising operator (a.k.a., the inverse 3-metric) $\bar{\gamma}^{ik}$ on $\bar{A}_{kj}$:$$\bar{A}^i_j = \bar{\gamma}^{ik}\bar{A}_{kj}.$$The trace of $\bar{A}_{ij}$, $\bar{A}^k_k$, is given by a contraction with the barred 3-metric:$$\text{Tr}(\bar{A}_{ij}) = \bar{A}^k_k = \bar{\gamma}^{kj}\bar{A}_{jk}.$$Note that while $\bar{A}_{ij}$ is defined as the *traceless* conformal extrinsic curvature, it may acquire a nonzero trace (assuming the initial data impose tracelessness) due to numerical error. $\text{Tr}(\bar{A}_{ij})$ is included in the BSSN equations to drive $\text{Tr}(\bar{A}_{ij})$ to zero.In terms of rescaled BSSN quantities, $\bar{A}_{ij}$ is given by$$\bar{A}_{ij} = \text{ReDD[i][j]} a_{ij},$$so in terms of the same quantities, $\bar{A}_{ij,k}$ is given by$$\bar{A}_{ij,k} = \text{ReDDdD[i][j][k]} a_{ij} + \text{ReDD[i][j]} a_{ij,k}.$$
###Code
# Step 6: Quantities related to conformal traceless extrinsic curvature
# Step 6.a.i: Compute Abar^{ij} in terms of Abar_{ij} and gammabar^{ij}
AbarUU = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
# Abar^{ij} = gammabar^{ik} gammabar^{jl} Abar_{kl}
AbarUU[i][j] += gammabarUU[i][k]*gammabarUU[j][l]*AbarDD[k][l]
# Step 6.a.ii: Compute Abar^i_j in terms of Abar_{ij} and gammabar^{ij}
AbarUD = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
# Abar^i_j = gammabar^{ik} Abar_{kj}
AbarUD[i][j] += gammabarUU[i][k]*AbarDD[k][j]
# Step 6.a.iii: Compute Abar^k_k = trace of Abar:
trAbar = sp.sympify(0)
for k in range(DIM):
for j in range(DIM):
# Abar^k_k = gammabar^{kj} Abar_{jk}
trAbar += gammabarUU[k][j]*AbarDD[j][k]
# Step 6.a.iv: Compute Abar_{ij,k}
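#               Abar_{ij,k} = ReDDdD[i][j][k] a_{ij} + ReDD[i][j] a_{ij,k}
#               (and the same expression with upwinded a_{ij,k} for AbarDD_dupD)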
AbarDD_dD = ixp.zerorank3()
AbarDD_dupD = ixp.zerorank3()
aDD_dD = ixp.declarerank3("aDD_dD" ,"sym01")
aDD_dupD = ixp.declarerank3("aDD_dupD","sym01")
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
AbarDD_dupD[i][j][k] = rfm.ReDDdD[i][j][k]*aDD[i][j] + rfm.ReDD[i][j]*aDD_dupD[i][j][k]
AbarDD_dD[i][j][k] = rfm.ReDDdD[i][j][k]*aDD[i][j] + rfm.ReDD[i][j]*aDD_dD[ i][j][k]
###Output
_____no_output_____
###Markdown
Step 7: `RicciBar__gammabarDD_dHatD__DGammaUDD__DGammaU()`: The conformal ("barred") Ricci tensor $\bar{R}_{ij}$ and associated quantities \[Back to [top](toc)\]$$\label{rbar}$$Let's compute perhaps the most complicated expression in the BSSN evolution equations, the conformal Ricci tensor:\begin{align} \bar{R}_{i j} {} = {} & - \frac{1}{2} \bar{\gamma}^{k l} \hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} + \bar{\gamma}_{k(i} \hat{D}_{j)} \bar{\Lambda}^{k} + \Delta^{k} \Delta_{(i j) k} \nonumber \\ & + \bar{\gamma}^{k l} \left (2 \Delta_{k(i}^{m} \Delta_{j) m l} + \Delta_{i k}^{m} \Delta_{m j l} \right ) \; .\end{align}Let's tackle the $\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j}$ term first: Step 7.a: Conformal Ricci tensor, part 1: The $\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j}$ term \[Back to [top](toc)\]$$\label{rbar_part1}$$First note that the covariant derivative of a metric with respect to itself is zero$$\hat{D}_{l} \hat{\gamma}_{ij} = 0,$$so $$\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} = \hat{D}_{k} \hat{D}_{l} \left(\hat{\gamma}_{i j} + \varepsilon_{ij}\right) = \hat{D}_{k} \hat{D}_{l} \varepsilon_{ij}.$$Next, the covariant derivative of a tensor is given by (from the [wikipedia article on covariant differentiation](https://en.wikipedia.org/wiki/Covariant_derivative)):\begin{align} {(\nabla_{e_c} T)^{a_1 \ldots a_r}}_{b_1 \ldots b_s} = {} &\frac{\partial}{\partial x^c}{T^{a_1 \ldots a_r}}_{b_1 \ldots b_s} \\ &+ \,{\Gamma ^{a_1}}_{dc} {T^{d a_2 \ldots a_r}}_{b_1 \ldots b_s} + \cdots + {\Gamma^{a_r}}_{dc} {T^{a_1 \ldots a_{r-1}d}}_{b_1 \ldots b_s} \\ &-\,{\Gamma^d}_{b_1 c} {T^{a_1 \ldots a_r}}_{d b_2 \ldots b_s} - \cdots - {\Gamma^d}_{b_s c} {T^{a_1 \ldots a_r}}_{b_1 \ldots b_{s-1} d}.\end{align}Therefore, $$\hat{D}_{l} \bar{\gamma}_{i j} = \hat{D}_{l} \varepsilon_{i j} = \varepsilon_{i j,l} - \hat{\Gamma}^m_{i l} \varepsilon_{m j} -\hat{\Gamma}^m_{j l} \varepsilon_{i m}.$$Since the covariant first derivative is a tensor, the covariant second derivative is given by (same as [Eq. 27 in Baumgarte et al (2012)](https://arxiv.org/pdf/1211.6632.pdf))\begin{align}\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} &= \hat{D}_{k} \hat{D}_{l} \varepsilon_{i j} \\&= \partial_k \hat{D}_{l} \varepsilon_{i j} - \hat{\Gamma}^m_{lk} \left(\hat{D}_{m} \varepsilon_{i j}\right) - \hat{\Gamma}^m_{ik} \left(\hat{D}_{l} \varepsilon_{m j}\right) - \hat{\Gamma}^m_{jk} \left(\hat{D}_{l} \varepsilon_{i m}\right),\end{align}where the first term is the partial derivative of the expression already derived for $\hat{D}_{l} \varepsilon_{i j}$:\begin{align}\partial_k \hat{D}_{l} \varepsilon_{i j} &= \partial_k \left(\varepsilon_{ij,l} - \hat{\Gamma}^m_{i l} \varepsilon_{m j} -\hat{\Gamma}^m_{j l} \varepsilon_{i m} \right) \\&= \varepsilon_{ij,lk} - \hat{\Gamma}^m_{i l,k} \varepsilon_{m j} - \hat{\Gamma}^m_{i l} \varepsilon_{m j,k} - \hat{\Gamma}^m_{j l,k} \varepsilon_{i m} - \hat{\Gamma}^m_{j l} \varepsilon_{i m,k}.\end{align}In terms of the evolved quantity $h_{ij}$, the derivatives of $\varepsilon_{ij}$ are given by:\begin{align}\varepsilon_{ij,k} &= \partial_k \left(h_{ij} \text{ReDD[i][j]}\right) \\&= h_{ij,k} \text{ReDD[i][j]} + h_{ij} \text{ReDDdD[i][j][k]},\end{align}and\begin{align}\varepsilon_{ij,kl} &= \partial_l \left(h_{ij,k} \text{ReDD[i][j]} + h_{ij} \text{ReDDdD[i][j][k]} \right)\\&= h_{ij,kl} \text{ReDD[i][j]} + h_{ij,k} \text{ReDDdD[i][j][l]} + h_{ij,l} \text{ReDDdD[i][j][k]} + h_{ij} \text{ReDDdDD[i][j][k][l]}.\end{align}
###Code
# Step 7: Conformal Ricci tensor, part 1: The \hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} term
# Step 7.a.i: Define \varepsilon_{ij} = epsDD[i][j]
epsDD = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
epsDD[i][j] = hDD[i][j]*rfm.ReDD[i][j]
# Step 7.a.ii: Define epsDD_dD[i][j][k]
hDD_dD = ixp.declarerank3("hDD_dD","sym01")
epsDD_dD = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
epsDD_dD[i][j][k] = hDD_dD[i][j][k]*rfm.ReDD[i][j] + hDD[i][j]*rfm.ReDDdD[i][j][k]
# Step 7.a.iii: Define epsDD_dDD[i][j][k][l]
hDD_dDD = ixp.declarerank4("hDD_dDD","sym01_sym23")
epsDD_dDD = ixp.zerorank4()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
epsDD_dDD[i][j][k][l] = hDD_dDD[i][j][k][l]*rfm.ReDD[i][j] + \
hDD_dD[i][j][k]*rfm.ReDDdD[i][j][l] + \
hDD_dD[i][j][l]*rfm.ReDDdD[i][j][k] + \
hDD[i][j]*rfm.ReDDdDD[i][j][k][l]
###Output
_____no_output_____
###Markdown
We next compute three quantities derived above:* `gammabarDD_DhatD[i][j][l]` = $\hat{D}_{l} \bar{\gamma}_{i j} = \hat{D}_{l} \varepsilon_{i j} = \varepsilon_{i j,l} - \hat{\Gamma}^m_{i l} \varepsilon_{m j} -\hat{\Gamma}^m_{j l} \varepsilon_{i m}$,* `gammabarDD_DhatD\_dD[i][j][l][k]` = $\partial_k \hat{D}_{l} \bar{\gamma}_{i j} = \partial_k \hat{D}_{l} \varepsilon_{i j} = \varepsilon_{ij,lk} - \hat{\Gamma}^m_{i l,k} \varepsilon_{m j} - \hat{\Gamma}^m_{i l} \varepsilon_{m j,k} - \hat{\Gamma}^m_{j l,k} \varepsilon_{i m} - \hat{\Gamma}^m_{j l} \varepsilon_{i m,k}$, and* `gammabarDD_DhatDD[i][j][l][k]` = $\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} = \partial_k \hat{D}_{l} \varepsilon_{i j} - \hat{\Gamma}^m_{lk} \left(\hat{D}_{m} \varepsilon_{i j}\right) - \hat{\Gamma}^m_{ik} \left(\hat{D}_{l} \varepsilon_{m j}\right) - \hat{\Gamma}^m_{jk} \left(\hat{D}_{l} \varepsilon_{i m}\right)$.
###Code
# Step 7.a.iv: DhatgammabarDDdD[i][j][l] = \bar{\gamma}_{ij;\hat{l}}
# \bar{\gamma}_{ij;\hat{l}} = \varepsilon_{i j,l}
# - \hat{\Gamma}^m_{i l} \varepsilon_{m j}
# - \hat{\Gamma}^m_{j l} \varepsilon_{i m}
gammabarDD_dHatD = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for l in range(DIM):
gammabarDD_dHatD[i][j][l] = epsDD_dD[i][j][l]
for m in range(DIM):
gammabarDD_dHatD[i][j][l] += - rfm.GammahatUDD[m][i][l]*epsDD[m][j] \
- rfm.GammahatUDD[m][j][l]*epsDD[i][m]
# Step 7.a.v: \bar{\gamma}_{ij;\hat{l},k} = DhatgammabarDD_dHatD_dD[i][j][l][k]:
# \bar{\gamma}_{ij;\hat{l},k} = \varepsilon_{ij,lk}
# - \hat{\Gamma}^m_{i l,k} \varepsilon_{m j}
# - \hat{\Gamma}^m_{i l} \varepsilon_{m j,k}
# - \hat{\Gamma}^m_{j l,k} \varepsilon_{i m}
# - \hat{\Gamma}^m_{j l} \varepsilon_{i m,k}
gammabarDD_dHatD_dD = ixp.zerorank4()
for i in range(DIM):
for j in range(DIM):
for l in range(DIM):
for k in range(DIM):
gammabarDD_dHatD_dD[i][j][l][k] = epsDD_dDD[i][j][l][k]
for m in range(DIM):
gammabarDD_dHatD_dD[i][j][l][k] += -rfm.GammahatUDDdD[m][i][l][k]*epsDD[m][j] \
-rfm.GammahatUDD[m][i][l]*epsDD_dD[m][j][k] \
-rfm.GammahatUDDdD[m][j][l][k]*epsDD[i][m] \
-rfm.GammahatUDD[m][j][l]*epsDD_dD[i][m][k]
# Step 7.a.vi: \bar{\gamma}_{ij;\hat{l}\hat{k}} = DhatgammabarDD_dHatDD[i][j][l][k]
# \bar{\gamma}_{ij;\hat{l}\hat{k}} = \partial_k \hat{D}_{l} \varepsilon_{i j}
# - \hat{\Gamma}^m_{lk} \left(\hat{D}_{m} \varepsilon_{i j}\right)
# - \hat{\Gamma}^m_{ik} \left(\hat{D}_{l} \varepsilon_{m j}\right)
# - \hat{\Gamma}^m_{jk} \left(\hat{D}_{l} \varepsilon_{i m}\right)
gammabarDD_dHatDD = ixp.zerorank4()
for i in range(DIM):
for j in range(DIM):
for l in range(DIM):
for k in range(DIM):
gammabarDD_dHatDD[i][j][l][k] = gammabarDD_dHatD_dD[i][j][l][k]
for m in range(DIM):
gammabarDD_dHatDD[i][j][l][k] += - rfm.GammahatUDD[m][l][k]*gammabarDD_dHatD[i][j][m] \
- rfm.GammahatUDD[m][i][k]*gammabarDD_dHatD[m][j][l] \
- rfm.GammahatUDD[m][j][k]*gammabarDD_dHatD[i][m][l]
###Output
_____no_output_____
###Markdown
Step 7.b: Conformal Ricci tensor, part 2: The $\bar{\gamma}_{k(i} \hat{D}_{j)} \bar{\Lambda}^{k}$ term \[Back to [top](toc)\]$$\label{rbar_part2}$$By definition, the index symmetrization operation is given by:$$\bar{\gamma}_{k(i} \hat{D}_{j)} \bar{\Lambda}^{k} = \frac{1}{2} \left( \bar{\gamma}_{ki} \hat{D}_{j} \bar{\Lambda}^{k} + \bar{\gamma}_{kj} \hat{D}_{i} \bar{\Lambda}^{k} \right),$$and $\bar{\gamma}_{ij}$ is trivially computed ($=\varepsilon_{ij} + \hat{\gamma}_{ij}$) so the only nontrival part to computing this term is in evaluating $\hat{D}_{j} \bar{\Lambda}^{k}$.The covariant derivative is with respect to the hatted metric (i.e. the reference metric), so$$\hat{D}_{j} \bar{\Lambda}^{k} = \partial_j \bar{\Lambda}^{k} + \hat{\Gamma}^{k}_{mj} \bar{\Lambda}^m,$$except we cannot take derivatives of $\bar{\Lambda}^{k}$ directly due to potential issues with coordinate singularities. Instead we write it in terms of the rescaled quantity $\lambda^k$ via$$\bar{\Lambda}^{k} = \lambda^k \text{ReU[k]}.$$Then the expression for $\hat{D}_{j} \bar{\Lambda}^{k}$ becomes$$\hat{D}_{j} \bar{\Lambda}^{k} = \lambda^{k}_{,j} \text{ReU[k]} + \lambda^{k} \text{ReUdD[k][j]} + \hat{\Gamma}^{k}_{mj} \lambda^{m} \text{ReU[m]},$$and the NRPy+ code for this expression is written
###Code
# Step 7.b: Second term of RhatDD: compute \hat{D}_{j} \bar{\Lambda}^{k} = LambarU_dHatD[k][j]
lambdaU_dD = ixp.declarerank2("lambdaU_dD","nosym")
LambarU_dHatD = ixp.zerorank2()
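# \hat{D}_{j} \bar{\Lambda}^{k} = \lambda^{k}_{,j} ReU[k] + \lambda^{k} ReUdD[k][j]
#                                 + \hat{\Gamma}^{k}_{mj} \lambda^{m} ReU[m]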
for j in range(DIM):
for k in range(DIM):
LambarU_dHatD[k][j] = lambdaU_dD[k][j]*rfm.ReU[k] + lambdaU[k]*rfm.ReUdD[k][j]
for m in range(DIM):
LambarU_dHatD[k][j] += rfm.GammahatUDD[k][m][j]*lambdaU[m]*rfm.ReU[m]
###Output
_____no_output_____
###Markdown
Step 7.c: Conformal Ricci tensor, part 3: The $\Delta^{k} \Delta_{(i j) k} + \bar{\gamma}^{k l} \left (2 \Delta_{k(i}^{m} \Delta_{j) m l} + \Delta_{i k}^{m} \Delta_{m j l} \right )$ terms \[Back to [top](toc)\]$$\label{rbar_part3}$$Our goal here is to compute the quantities appearing as the final terms of the conformal Ricci tensor:$$\Delta^{k} \Delta_{(i j) k} + \bar{\gamma}^{k l} \left (2 \Delta_{k(i}^{m} \Delta_{j) m l} + \Delta_{i k}^{m} \Delta_{m j l} \right).$$* `DGammaUDD[k][i][j]`$= \Delta^k_{ij}$ is simply the difference in Christoffel symbols: $\Delta^{k}_{ij} = \bar{\Gamma}^i_{jk} - \hat{\Gamma}^i_{jk}$, and * `DGammaU[k]`$= \Delta^k$ is the contraction: $\bar{\gamma}^{ij} \Delta^{k}_{ij}$Adding these expressions to Ricci is straightforward, since $\bar{\Gamma}^i_{jk}$ and $\bar{\gamma}^{ij}$ were defined above in [Step 4](bssn_barred_metric__inverse_and_derivs), and $\hat{\Gamma}^i_{jk}$ was computed within NRPy+'s `reference_metric()` function:
###Code
# Step 7.c: Conformal Ricci tensor, part 3: The \Delta^{k} \Delta_{(i j) k}
# + \bar{\gamma}^{k l}*(2 \Delta_{k(i}^{m} \Delta_{j) m l}
# + \Delta_{i k}^{m} \Delta_{m j l}) terms
# Step 7.c.i: Define \Delta^i_{jk} = \bar{\Gamma}^i_{jk} - \hat{\Gamma}^i_{jk} = DGammaUDD[i][j][k]
DGammaUDD = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
DGammaUDD[i][j][k] = GammabarUDD[i][j][k] - rfm.GammahatUDD[i][j][k]
# Step 7.c.ii: Define \Delta^i = \bar{\gamma}^{jk} \Delta^i_{jk}
DGammaU = ixp.zerorank1()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
DGammaU[i] += gammabarUU[j][k] * DGammaUDD[i][j][k]
###Output
_____no_output_____
###Markdown
Next we define $\Delta_{ijk}=\bar{\gamma}_{im}\Delta^m_{jk}$:
###Code
# Step 7.c.iii: Define \Delta_{ijk} = \bar{\gamma}_{im} \Delta^m_{jk}
DGammaDDD = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for m in range(DIM):
DGammaDDD[i][j][k] += gammabarDD[i][m] * DGammaUDD[m][j][k]
###Output
_____no_output_____
###Markdown
Step 7.d: Summing the terms and defining $\bar{R}_{ij}$ \[Back to [top](toc)\]$$\label{summing_rbar_terms}$$We have now constructed all of the terms going into $\bar{R}_{ij}$:\begin{align} \bar{R}_{i j} {} = {} & - \frac{1}{2} \bar{\gamma}^{k l} \hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} + \bar{\gamma}_{k(i} \hat{D}_{j)} \bar{\Lambda}^{k} + \Delta^{k} \Delta_{(i j) k} \nonumber \\ & + \bar{\gamma}^{k l} \left (2 \Delta_{k(i}^{m} \Delta_{j) m l} + \Delta_{i k}^{m} \Delta_{m j l} \right ) \; .\end{align}
###Code
# Step 7.d: Summing the terms and defining \bar{R}_{ij}
# Step 7.d.i: Add the first term to RbarDD:
# Rbar_{ij} += - \frac{1}{2} \bar{\gamma}^{k l} \hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j}
RbarDD = ixp.zerorank2()
RbarDDpiece = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
RbarDD[i][j] += -sp.Rational(1,2) * gammabarUU[k][l]*gammabarDD_dHatDD[i][j][l][k]
RbarDDpiece[i][j] += -sp.Rational(1,2) * gammabarUU[k][l]*gammabarDD_dHatDD[i][j][l][k]
# Step 7.d.ii: Add the second term to RbarDD:
# Rbar_{ij} += (1/2) * (gammabar_{ki} Lambar^k_{;\hat{j}} + gammabar_{kj} Lambar^k_{;\hat{i}})
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
RbarDD[i][j] += sp.Rational(1,2) * (gammabarDD[k][i]*LambarU_dHatD[k][j] + \
gammabarDD[k][j]*LambarU_dHatD[k][i])
# Step 7.d.iii: Add the remaining term to RbarDD:
# Rbar_{ij} += \Delta^{k} \Delta_{(i j) k} = 1/2 \Delta^{k} (\Delta_{i j k} + \Delta_{j i k})
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
RbarDD[i][j] += sp.Rational(1,2) * DGammaU[k] * (DGammaDDD[i][j][k] + DGammaDDD[j][i][k])
# Step 7.d.iv: Add the final term to RbarDD:
# Rbar_{ij} += \bar{\gamma}^{k l} (\Delta^{m}_{k i} \Delta_{j m l}
# + \Delta^{m}_{k j} \Delta_{i m l}
# + \Delta^{m}_{i k} \Delta_{m j l})
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
for m in range(DIM):
RbarDD[i][j] += gammabarUU[k][l] * (DGammaUDD[m][k][i]*DGammaDDD[j][m][l] +
DGammaUDD[m][k][j]*DGammaDDD[i][m][l] +
DGammaUDD[m][i][k]*DGammaDDD[m][j][l])
###Output
_____no_output_____
###Markdown
Step 8: **`betaU_derivs()`**: The unrescaled shift vector $\beta^i$ spatial derivatives: $\beta^i_{,j}$ & $\beta^i_{,jk}$, written in terms of the rescaled shift vector $\mathcal{V}^i$ \[Back to [top](toc)\]$$\label{beta_derivs}$$This step, which documents the function `betaUbar_and_derivs()` inside the [BSSN.BSSN_unrescaled_and_barred_vars](../edit/BSSN/BSSN_unrescaled_and_barred_vars) module, defines three quantities:[comment]: (Fix Link Above: TODO)* `betaU_dD[i][j]`$=\beta^i_{,j} = \left(\mathcal{V}^i \circ \text{ReU[i]}\right)_{,j} = \mathcal{V}^i_{,j} \circ \text{ReU[i]} + \mathcal{V}^i \circ \text{ReUdD[i][j]}$* `betaU_dupD[i][j]`: the same as above, except using *upwinded* finite-difference derivatives to compute $\mathcal{V}^i_{,j}$ instead of *centered* finite-difference derivatives.* `betaU_dDD[i][j][k]`$=\beta^i_{,jk} = \mathcal{V}^i_{,jk} \circ \text{ReU[i]} + \mathcal{V}^i_{,j} \circ \text{ReUdD[i][k]} + \mathcal{V}^i_{,k} \circ \text{ReUdD[i][j]}+\mathcal{V}^i \circ \text{ReUdDD[i][j][k]}$
###Code
# Step 8: The unrescaled shift vector betaU spatial derivatives:
# betaUdD & betaUdDD, written in terms of the
# rescaled shift vector vetU
vetU_dD = ixp.declarerank2("vetU_dD","nosym")
vetU_dupD = ixp.declarerank2("vetU_dupD","nosym") # Needed for upwinded \beta^i_{,j}
vetU_dDD = ixp.declarerank3("vetU_dDD","sym12") # Needed for \beta^i_{,jk}
betaU_dD = ixp.zerorank2()
betaU_dupD = ixp.zerorank2() # Needed for, e.g., \beta^i RHS
betaU_dDD = ixp.zerorank3() # Needed for, e.g., \bar{\Lambda}^i RHS
for i in range(DIM):
for j in range(DIM):
betaU_dD[i][j] = vetU_dD[i][j]*rfm.ReU[i] + vetU[i]*rfm.ReUdD[i][j]
betaU_dupD[i][j] = vetU_dupD[i][j]*rfm.ReU[i] + vetU[i]*rfm.ReUdD[i][j] # Needed for \beta^i RHS
for k in range(DIM):
# Needed for, e.g., \bar{\Lambda}^i RHS:
betaU_dDD[i][j][k] = vetU_dDD[i][j][k]*rfm.ReU[i] + vetU_dD[i][j]*rfm.ReUdD[i][k] + \
vetU_dD[i][k]*rfm.ReUdD[i][j] + vetU[i]*rfm.ReUdDD[i][j][k]
###Output
_____no_output_____
###Markdown
Step 9: **`phi_and_derivs()`**: Standard BSSN conformal factor $\phi$, and its derivatives $\phi_{,i}$, $\phi_{,ij}$, $\bar{D}_j \phi_i$, and $\bar{D}_j\bar{D}_k \phi_i$, all written in terms of BSSN gridfunctions like $\text{cf}$ \[Back to [top](toc)\]$$\label{phi_and_derivs}$$ Step 9.a: $\phi$ in terms of the chosen (possibly non-standard) conformal factor variable $\text{cf}$ (e.g., $\text{cf}=\chi=e^{-4\phi}$) \[Back to [top](toc)\]$$\label{phi_ito_cf}$$When solving the BSSN time evolution equations across the coordinate singularity (i.e., the "puncture") inside puncture black holes for example, the standard conformal factor $\phi$ becomes very sharp, whereas $\chi=e^{-4\phi}$ is far smoother (see, e.g., [Campanelli, Lousto, Marronetti, and Zlochower (2006)](https://arxiv.org/abs/gr-qc/0511048) for additional discussion). Thus if we choose to rewrite derivatives of $\phi$ in the BSSN equations in terms of finite-difference derivatives `cf`$=\chi$, numerical errors will be far smaller near the puncture.The BSSN modules in NRPy+ support three options for the conformal factor variable `cf`:1. `cf`$=\phi$,1. `cf`$=\chi=e^{-4\phi}$, and1. `cf`$=W = e^{-2\phi}$.The BSSN equations are written in terms of $\phi$ (actually only $e^{-4\phi}$ appears) and derivatives of $\phi$, we now define $e^{-4\phi}$ and derivatives of $\phi$ in terms of the chosen `cf`.First, we define the base variables needed within the BSSN equations:
###Code
# Step 9: Standard BSSN conformal factor phi,
# and its partial and covariant derivatives,
# all in terms of BSSN gridfunctions like cf
# Step 9.a.i: Define partial derivatives of \phi in terms of evolved quantity "cf":
cf_dD = ixp.declarerank1("cf_dD")
cf_dupD = ixp.declarerank1("cf_dupD") # Needed for \partial_t \phi next.
cf_dDD = ixp.declarerank2("cf_dDD","sym01")
phi_dD = ixp.zerorank1()
phi_dupD = ixp.zerorank1()
phi_dDD = ixp.zerorank2()
exp_m4phi = sp.sympify(0)
###Output
_____no_output_____
###Markdown
Then we define $\phi_{,i}$, $\phi_{,ij}$, and $e^{-4\phi}$ for each of the choices of `cf`.For `cf`$=\phi$, this is trivial:
###Code
# Step 9.a.ii: Assuming cf=phi, define exp_m4phi, phi_dD,
# phi_dupD (upwind finite-difference version of phi_dD), and phi_DD
if par.parval_from_str(thismodule+"::EvolvedConformalFactor_cf") == "phi":
for i in range(DIM):
phi_dD[i] = cf_dD[i]
phi_dupD[i] = cf_dupD[i]
for j in range(DIM):
phi_dDD[i][j] = cf_dDD[i][j]
exp_m4phi = sp.exp(-4*cf)
###Output
_____no_output_____
###Markdown
For `cf`$=W=e^{-2\phi}$, we have* $\phi_{,i} = -\text{cf}_{,i} / (2 \text{cf})$* $\phi_{,ij} = (-\text{cf}_{,ij} + \text{cf}_{,i}\text{cf}_{,j}/\text{cf}) / (2 \text{cf})$* $e^{-4\phi} = \text{cf}^2$***Exercise to student: Prove the above relations***
###Code
# Step 9.a.iii: Assuming cf=W=e^{-2 phi}, define exp_m4phi, phi_dD,
# phi_dupD (upwind finite-difference version of phi_dD), and phi_DD
if par.parval_from_str(thismodule+"::EvolvedConformalFactor_cf") == "W":
# \partial_i W = \partial_i (e^{-2 phi}) = -2 e^{-2 phi} \partial_i phi
# -> \partial_i phi = -\partial_i cf / (2 cf)
for i in range(DIM):
phi_dD[i] = - cf_dD[i] / (2*cf)
phi_dupD[i] = - cf_dupD[i] / (2*cf)
for j in range(DIM):
# \partial_j \partial_i phi = - \partial_j [\partial_i cf / (2 cf)]
# = - cf_{,ij} / (2 cf) + \partial_i cf \partial_j cf / (2 cf^2)
phi_dDD[i][j] = (- cf_dDD[i][j] + cf_dD[i]*cf_dD[j] / cf) / (2*cf)
exp_m4phi = cf*cf
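# (Added sketch, not part of the original module): a minimal symbolic sanity check
# of the cf=W relations quoted above, using the hypothetical 1D stand-ins
# x_chk and phi_chk(x_chk), with W_chk = e^{-2 phi_chk}. We expect -W'/(2 W) = phi'.
x_chk = sp.Symbol('x_chk', real=True)
phi_chk = sp.Function('phi_chk')(x_chk)
W_chk = sp.exp(-2*phi_chk)
assert sp.simplify(-sp.diff(W_chk, x_chk)/(2*W_chk) - sp.diff(phi_chk, x_chk)) == 0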
###Output
_____no_output_____
###Markdown
For `cf`$=\chi=e^{-4\phi}$, we have* $\phi_{,i} = -\text{cf}_{,i} / (4 \text{cf})$* $\phi_{,ij} = (-\text{cf}_{,ij} + \text{cf}_{,i}\text{cf}_{,j}/\text{cf}) / (4 \text{cf})$* $e^{-4\phi} = \text{cf}$***Exercise to student: Prove the above relations***
###Code
# Step 9.a.iv: Assuming cf=chi=e^{-4 phi}, define exp_m4phi, phi_dD,
# phi_dupD (upwind finite-difference version of phi_dD), and phi_DD
if par.parval_from_str(thismodule+"::EvolvedConformalFactor_cf") == "chi":
# \partial_i chi = \partial_i (e^{-4 phi}) = -4 e^{-4 phi} \partial_i phi
# -> \partial_i phi = -\partial_i cf / (4 cf)
for i in range(DIM):
phi_dD[i] = - cf_dD[i] / (4*cf)
phi_dupD[i] = - cf_dupD[i] / (4*cf)
for j in range(DIM):
# \partial_j \partial_i phi = - \partial_j [\partial_i cf / (4 cf)]
# = - cf_{,ij} / (4 cf) + \partial_i cf \partial_j cf / (4 cf^2)
phi_dDD[i][j] = (- cf_dDD[i][j] + cf_dD[i]*cf_dD[j] / cf) / (4*cf)
exp_m4phi = cf
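# (Added sketch, not part of the original module): analogous symbolic sanity check
# of the cf=chi relations quoted above, with the hypothetical stand-in
# chi_chk = e^{-4 phi_chk2}. We expect -chi'/(4 chi) = phi'.
x_chk2 = sp.Symbol('x_chk2', real=True)
phi_chk2 = sp.Function('phi_chk2')(x_chk2)
chi_chk = sp.exp(-4*phi_chk2)
assert sp.simplify(-sp.diff(chi_chk, x_chk2)/(4*chi_chk) - sp.diff(phi_chk2, x_chk2)) == 0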
# Step 9.a.v: Error out if unsupported EvolvedConformalFactor_cf choice is made:
cf_choice = par.parval_from_str(thismodule+"::EvolvedConformalFactor_cf")
if cf_choice not in ('phi', 'W', 'chi'):
print("Error: EvolvedConformalFactor_cf == "+par.parval_from_str(thismodule+"::EvolvedConformalFactor_cf")+" unsupported!")
sys.exit(1)
###Output
_____no_output_____
###Markdown
Step 9.b: Covariant derivatives of $\phi$ \[Back to [top](toc)\]$$\label{phi_covariant_derivs}$$Since $\phi$ is a scalar, $\bar{D}_i \phi = \partial_i \phi$.Thus the second covariant derivative is given by\begin{align}\bar{D}_i \bar{D}_j \phi &= \phi_{;\bar{i}\bar{j}} = \bar{D}_i \phi_{,j}\\ &= \phi_{,ij} - \bar{\Gamma}^k_{ij} \phi_{,k}.\end{align}
###Code
# Step 9.b: Define phi_dBarD = phi_dD (since phi is a scalar) and phi_dBarDD (covariant derivative)
# \bar{D}_i \bar{D}_j \phi = \phi_{;\bar{i}\bar{j}} = \bar{D}_i \phi_{,j}
# = \phi_{,ij} - \bar{\Gamma}^k_{ij} \phi_{,k}
phi_dBarD = phi_dD
phi_dBarDD = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
phi_dBarDD[i][j] = phi_dDD[i][j]
for k in range(DIM):
phi_dBarDD[i][j] += - GammabarUDD[k][i][j]*phi_dD[k]
###Output
_____no_output_____
###Markdown
Step 10: Code validation against `BSSN.BSSN_quantities` NRPy+ module \[Back to [top](toc)\]$$\label{code_validation}$$As a code validation check, we verify agreement in the SymPy expressions for the RHSs of the BSSN equations between1. this tutorial and 2. the NRPy+ [BSSN.BSSN_quantities](../edit/BSSN/BSSN_quantities.py) module.By default, we analyze the RHSs in Spherical coordinates, though other coordinate systems may be chosen.
###Code
all_passed=True
def comp_func(expr1,expr2,basename,prefixname2="Bq."):
    # "global" is needed here: without it, the assignment below would create a
    # local variable, and a failed check would never flip the module-level flag.
    global all_passed
    if str(expr1-expr2)!="0":
        print(basename+" - "+prefixname2+basename+" = "+ str(expr1-expr2))
        all_passed=False
def gfnm(basename,idx1,idx2=None,idx3=None):
if idx2 is None:
return basename+"["+str(idx1)+"]"
if idx3 is None:
return basename+"["+str(idx1)+"]["+str(idx2)+"]"
return basename+"["+str(idx1)+"]["+str(idx2)+"]["+str(idx3)+"]"
expr_list = []
exprcheck_list = []
namecheck_list = []
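# Each index i below pairs an expression computed in this notebook (expr_list[i])
# with the corresponding expression from the BSSN.BSSN_quantities module
# (exprcheck_list[i]); namecheck_list[i] stores a label used only for error reporting.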
# Step 3:
import BSSN.BSSN_quantities as Bq
Bq.BSSN_basic_tensors()
for i in range(DIM):
namecheck_list.extend([gfnm("LambdabarU",i),gfnm("betaU",i),gfnm("BU",i)])
exprcheck_list.extend([Bq.LambdabarU[i],Bq.betaU[i],Bq.BU[i]])
expr_list.extend([LambdabarU[i],betaU[i],BU[i]])
for j in range(DIM):
namecheck_list.extend([gfnm("gammabarDD",i,j),gfnm("AbarDD",i,j)])
exprcheck_list.extend([Bq.gammabarDD[i][j],Bq.AbarDD[i][j]])
expr_list.extend([gammabarDD[i][j],AbarDD[i][j]])
# Step 4:
Bq.gammabar__inverse_and_derivs()
for i in range(DIM):
for j in range(DIM):
namecheck_list.extend([gfnm("gammabarUU",i,j)])
exprcheck_list.extend([Bq.gammabarUU[i][j]])
expr_list.extend([gammabarUU[i][j]])
for k in range(DIM):
namecheck_list.extend([gfnm("gammabarDD_dD",i,j,k),
gfnm("gammabarDD_dupD",i,j,k),
gfnm("GammabarUDD",i,j,k)])
exprcheck_list.extend([Bq.gammabarDD_dD[i][j][k],Bq.gammabarDD_dupD[i][j][k],Bq.GammabarUDD[i][j][k]])
expr_list.extend( [gammabarDD_dD[i][j][k],gammabarDD_dupD[i][j][k],GammabarUDD[i][j][k]])
# Step 5:
Bq.detgammabar_and_derivs()
namecheck_list.extend(["detgammabar"])
exprcheck_list.extend([Bq.detgammabar])
expr_list.extend([detgammabar])
for i in range(DIM):
namecheck_list.extend([gfnm("detgammabar_dD",i)])
exprcheck_list.extend([Bq.detgammabar_dD[i]])
expr_list.extend([detgammabar_dD[i]])
for j in range(DIM):
namecheck_list.extend([gfnm("detgammabar_dDD",i,j)])
exprcheck_list.extend([Bq.detgammabar_dDD[i][j]])
expr_list.extend([detgammabar_dDD[i][j]])
# Step 6:
Bq.AbarUU_AbarUD_trAbar_AbarDD_dD()
namecheck_list.extend(["trAbar"])
exprcheck_list.extend([Bq.trAbar])
expr_list.extend([trAbar])
for i in range(DIM):
for j in range(DIM):
namecheck_list.extend([gfnm("AbarUU",i,j),gfnm("AbarUD",i,j)])
exprcheck_list.extend([Bq.AbarUU[i][j],Bq.AbarUD[i][j]])
expr_list.extend([AbarUU[i][j],AbarUD[i][j]])
for k in range(DIM):
namecheck_list.extend([gfnm("AbarDD_dD",i,j,k)])
exprcheck_list.extend([Bq.AbarDD_dD[i][j][k]])
expr_list.extend([AbarDD_dD[i][j][k]])
# Step 7:
Bq.RicciBar__gammabarDD_dHatD__DGammaUDD__DGammaU()
for i in range(DIM):
namecheck_list.extend([gfnm("DGammaU",i)])
exprcheck_list.extend([Bq.DGammaU[i]])
expr_list.extend([DGammaU[i]])
for j in range(DIM):
namecheck_list.extend([gfnm("RbarDD",i,j)])
exprcheck_list.extend([Bq.RbarDD[i][j]])
expr_list.extend([RbarDD[i][j]])
for k in range(DIM):
namecheck_list.extend([gfnm("DGammaUDD",i,j,k),gfnm("gammabarDD_dHatD",i,j,k)])
exprcheck_list.extend([Bq.DGammaUDD[i][j][k],Bq.gammabarDD_dHatD[i][j][k]])
expr_list.extend([DGammaUDD[i][j][k],gammabarDD_dHatD[i][j][k]])
# Step 8:
Bq.betaU_derivs()
for i in range(DIM):
for j in range(DIM):
namecheck_list.extend([gfnm("betaU_dD",i,j),gfnm("betaU_dupD",i,j)])
exprcheck_list.extend([Bq.betaU_dD[i][j],Bq.betaU_dupD[i][j]])
expr_list.extend([betaU_dD[i][j],betaU_dupD[i][j]])
for k in range(DIM):
namecheck_list.extend([gfnm("betaU_dDD",i,j,k)])
exprcheck_list.extend([Bq.betaU_dDD[i][j][k]])
expr_list.extend([betaU_dDD[i][j][k]])
# Step 9:
Bq.phi_and_derivs()
#phi_dD,phi_dupD,phi_dDD,exp_m4phi,phi_dBarD,phi_dBarDD
namecheck_list.extend(["exp_m4phi"])
exprcheck_list.extend([Bq.exp_m4phi])
expr_list.extend([exp_m4phi])
for i in range(DIM):
namecheck_list.extend([gfnm("phi_dD",i),gfnm("phi_dupD",i),gfnm("phi_dBarD",i)])
exprcheck_list.extend([Bq.phi_dD[i],Bq.phi_dupD[i],Bq.phi_dBarD[i]])
expr_list.extend( [phi_dD[i],phi_dupD[i],phi_dBarD[i]])
for j in range(DIM):
namecheck_list.extend([gfnm("phi_dDD",i,j),gfnm("phi_dBarDD",i,j)])
exprcheck_list.extend([Bq.phi_dDD[i][j],Bq.phi_dBarDD[i][j]])
expr_list.extend([phi_dDD[i][j],phi_dBarDD[i][j]])
for i in range(len(expr_list)):
comp_func(expr_list[i],exprcheck_list[i],namecheck_list[i])
if all_passed:
print("ALL TESTS PASSED!")
###Output
ALL TESTS PASSED!
###Markdown
Step 11: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-BSSN_quantities.pdf](Tutorial-BSSN_quantities.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-BSSN_quantities")
###Output
Created Tutorial-BSSN_quantities.tex, and compiled LaTeX file to PDF file
Tutorial-BSSN_quantities.pdf
###Markdown
window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'UA-59152712-8'); BSSN Quantities Author: Zach Etienne Formatting improvements courtesy Brandon Clark This module documents and constructs a number of quantities useful for building symbolic (SymPy) expressions in terms of the core BSSN quantities $\left\{h_{i j},a_{i j},\phi, K, \lambda^{i}, \alpha, \mathcal{V}^i, \mathcal{B}^i\right\}$, as defined in [Ruchlin, Etienne, and Baumgarte (2018)](https://arxiv.org/abs/1712.07658) (see also [Baumgarte, Montero, Cordero-Carrión, and Müller (2012)](https://arxiv.org/abs/1211.6632)). **Notebook Status:** Self-Validated **Validation Notes:** This tutorial notebook has been confirmed to be self-consistent with its corresponding NRPy+ module, as documented [below](code_validation). **Additional validation tests may have been performed, but are as yet, undocumented. (TODO)**[comment]: (Introduction: TODO) A Note on Notation:As is standard in NRPy+, * Greek indices refer to four-dimensional quantities where the zeroth component indicates temporal (time) component.* Latin indices refer to three-dimensional quantities. This is somewhat counterintuitive since Python always indexes its lists starting from 0. As a result, the zeroth component of three-dimensional quantities will necessarily indicate the first *spatial* direction.As a corollary, any expressions involving mixed Greek and Latin indices will need to offset one set of indices by one: A Latin index in a four-vector will be incremented and a Greek index in a three-vector will be decremented (however, the latter case does not occur in this tutorial notebook). Table of Contents$$\label{toc}$$Each family of quantities is constructed within a given function (**boldfaced** below). This notebook is organized as follows1. [Step 1](initializenrpy): Initialize needed Python/NRPy+ modules1. [Step 2](declare_bssn_gfs): **`declare_BSSN_gridfunctions_if_not_declared_already()`**: Declare all of the core BSSN variables $\left\{h_{i j},a_{i j},\text{cf}, K, \lambda^{i}, \alpha, \mathcal{V}^i, \mathcal{B}^i\right\}$ and register them as gridfunctions1. [Step 3](rescaling_tensors) Rescaling tensors to avoid coordinate singularities 1. [Step 3.a](bssn_basic_tensors) **`BSSN_basic_tensors()`**: Define all basic conformal BSSN tensors $\left\{\bar{\gamma}_{i j},\bar{A}_{i j},\bar{\Lambda}^{i}, \beta^i, B^i\right\}$ in terms of BSSN gridfunctions1. [Step 4](bssn_barred_metric__inverse_and_derivs): **`gammabar__inverse_and_derivs()`**: $\bar{\gamma}^{ij}$, and spatial derivatives of $\bar{\gamma}_{ij}$ including $\bar{\Gamma}^{i}_{jk}$ 1. [Step 4.a](bssn_barred_metric__inverse): Inverse conformal 3-metric: $\bar{\gamma^{ij}}$ 1. [Step 4.b](bssn_barred_metric__derivs): Derivatives of the conformal 3-metric $\bar{\gamma}_{ij,k}$ and $\bar{\gamma}_{ij,kl}$, and associated "barred" Christoffel symbols $\bar{\Gamma}^{i}_{jk}$1. [Step 5](detgammabar_and_derivs): **`detgammabar_and_derivs()`**: $\det \bar{\gamma}_{ij}$ and its derivatives1. [Step 6](abar_quantities): **`AbarUU_AbarUD_trAbar()`**: Quantities related to conformal traceless extrinsic curvature $\bar{A}_{ij}$: $\bar{A}^{ij}$, $\bar{A}^i_j$, and $\bar{A}^k_k$1. [Step 7](rbar): **`RicciBar__gammabarDD_dHatD__DGammaUDD__DGammaU()`**: The conformal ("barred") Ricci tensor $\bar{R}_{ij}$ and associated quantities 1. [Step 7.a](rbar_part1): Conformal Ricci tensor, part 1: The $\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j}$ term 1. 
[Step 7.b](rbar_part2): Conformal Ricci tensor, part 2: The $\bar{\gamma}_{k(i} \hat{D}_{j)} \bar{\Lambda}^{k}$ term 1. [Step 7.c](rbar_part3): Conformal Ricci tensor, part 3: The $\Delta^{k} \Delta_{(i j) k} + \bar{\gamma}^{k l} \left (2 \Delta_{k(i}^{m} \Delta_{j) m l} + \Delta_{i k}^{m} \Delta_{m j l} \right )$ terms 1. [Step 7.d](summing_rbar_terms): Summing the terms and defining $\bar{R}_{ij}$1. [Step 8](beta_derivs): **`betaU_derivs()`**: Unrescaled shift vector $\beta^i$ and spatial derivatives $\beta^i_{,j}$ and $\beta^i_{,jk}$1. [Step 9](phi_and_derivs): **`phi_and_derivs()`**: Standard BSSN conformal factor $\phi$, and its derivatives $\phi_{,i}$, $\phi_{,ij}$, $\bar{D}_j \phi_i$, and $\bar{D}_j\bar{D}_k \phi_i$ 1. [Step 9.a](phi_ito_cf): $\phi$ in terms of the chosen (possibly non-standard) conformal factor variable `cf` (e.g., `cf`$=W=e^{-4\phi}$) 1. [Step 9.b](phi_covariant_derivs): Partial and covariant derivatives of $\phi$1. [Step 10](code_validation): Code Validation against `BSSN.BSSN_quantities` NRPy+ module1. [Step 11](latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file Step 1: Initialize needed Python/NRPy+ modules \[Back to [top](toc)\]$$\label{initializenrpy}$$
###Code
# Step 1: Import all needed modules from NRPy+:
import NRPy_param_funcs as par
import sympy as sp
import indexedexp as ixp
import grid as gri
import reference_metric as rfm
import sys
# Step 1.a: Set the coordinate system for the numerical grid
par.set_parval_from_str("reference_metric::CoordSystem","Spherical")
# Step 1.b: Given the chosen coordinate system, set up
# corresponding reference metric and needed
# reference metric quantities
# The following function call sets up the reference metric
# and related quantities, including rescaling matrices ReDD,
# ReU, and hatted quantities.
rfm.reference_metric()
# Step 1.c: Set spatial dimension (must be 3 for BSSN, as BSSN is
# a 3+1-dimensional decomposition of the general
# relativistic field equations)
DIM = 3
par.set_parval_from_str("grid::DIM",DIM)
# Step 1.d: Declare/initialize parameters for this module
thismodule = "BSSN_quantities"
par.initialize_param(par.glb_param("char", thismodule, "EvolvedConformalFactor_cf", "W"))
par.initialize_param(par.glb_param("bool", thismodule, "detgbarOverdetghat_equals_one", "True"))
###Output
_____no_output_____
###Markdown
Step 2: `declare_BSSN_gridfunctions_if_not_declared_already()`: Declare all of the core BSSN variables $\left\{h_{i j},a_{i j},\text{cf}, K, \lambda^{i}, \alpha, \mathcal{V}^i, \mathcal{B}^i\right\}$ and register them as gridfunctions \[Back to [top](toc)\]$$\label{declare_bssn_gfs}$$
###Code
# Step 2: Register all needed BSSN gridfunctions.
# Step 2.a: Register indexed quantities, using ixp.register_... functions
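# ("sym01" declares the rank-2 gridfunctions symmetric in their two indices,
#  i.e., hDD[i][j] = hDD[j][i] and aDD[i][j] = aDD[j][i].)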
hDD = ixp.register_gridfunctions_for_single_rank2("EVOL", "hDD", "sym01")
aDD = ixp.register_gridfunctions_for_single_rank2("EVOL", "aDD", "sym01")
lambdaU = ixp.register_gridfunctions_for_single_rank1("EVOL", "lambdaU")
vetU = ixp.register_gridfunctions_for_single_rank1("EVOL", "vetU")
betU = ixp.register_gridfunctions_for_single_rank1("EVOL", "betU")
# Step 2.b: Register scalar quantities, using gri.register_gridfunctions()
trK, cf, alpha = gri.register_gridfunctions("EVOL",["trK", "cf", "alpha"])
###Output
_____no_output_____
###Markdown
Step 3: Rescaling tensors to avoid coordinate singularities \[Back to [top](toc)\]$$\label{rescaling_tensors}$$While the [covariant form of the BSSN evolution equations](Tutorial-BSSNCurvilinear.ipynb) are properly covariant (with the potential exception of the shift evolution equation, since the shift is a [freely specifiable gauge quantity](https://en.wikipedia.org/wiki/Gauge_fixing)), components of the rank-1 and rank-2 tensors $\varepsilon_{i j}$, $\bar{A}_{i j}$, and $\bar{\Lambda}^{i}$ will drop to zero (destroying information) or diverge (to $\infty$) at coordinate singularities. The good news is, this singular behavior is well-understood in terms of the scale factors of the reference metric, enabling us to define rescaled version of these quantities that are well behaved (so that, e.g., they can be finite differenced).For example, given a smooth vector *in a 3D Cartesian basis* $\bar{\Lambda}^{i}$, all components $\bar{\Lambda}^{x}$, $\bar{\Lambda}^{y}$, and $\bar{\Lambda}^{z}$ will be smooth (by assumption). When changing the basis to spherical coordinates (applying the appropriate Jacobian matrix transformation), we will find that since $\phi = \arctan(y/x)$, $\bar{\Lambda}^{\phi}$ is given by\begin{align}\bar{\Lambda}^{\phi} &= \frac{\partial \phi}{\partial x} \bar{\Lambda}^{x} + \frac{\partial \phi}{\partial y} \bar{\Lambda}^{y} + \frac{\partial \phi}{\partial z} \bar{\Lambda}^{z} \\&= -\frac{y}{\sqrt{x^2+y^2}} \bar{\Lambda}^{x} + \frac{x}{\sqrt{x^2+y^2}} \bar{\Lambda}^{y} \\&= -\frac{y}{r \sin\theta} \bar{\Lambda}^{x} + \frac{x}{r \sin\theta} \bar{\Lambda}^{y}.\end{align}Thus $\bar{\Lambda}^{\phi}$ diverges at all points where $r\sin\theta=0$ due to the $\frac{1}{r\sin\theta}$ that appear in the Jacobian transformation. This divergence might pose no problem on cell-centered grids that avoid $r \sin\theta=0$, except that the BSSN equations require that *first and second derivatives* of these quantities be taken. Usual strategies for numerical approximation of these derivatives (e.g., finite difference methods) will "see" these divergences and errors generally will not drop to zero with increased numerical sampling of the functions at points near where the functions diverge.However, notice that if we define $\lambda^{\phi}$ such that$$\bar{\Lambda}^{\phi} = \frac{1}{r\sin\theta} \lambda^{\phi},$$then $\lambda^{\phi}$ will be smooth as well. Avoiding such singularities can be generalized to other coordinate systems, so long as $\lambda^i$ is defined as:$$\bar{\Lambda}^{i} = \frac{\lambda^i}{\text{scalefactor[i]}} ,$$where scalefactor\[i\] is the $i$th scale factor in the given coordinate system. In an identical fashion, we define the smooth versions of $\beta^i$ and $B^i$ to be $\mathcal{V}^i$ and $\mathcal{B}^i$, respectively. We refer to $\mathcal{V}^i$ and $\mathcal{B}^i$ as vet\[i\] and bet\[i\] respectively in the code after the Hebrew letters that bear some resemblance. Similarly, we define the smooth versions of $\bar{A}_{ij}$ and $\varepsilon_{ij}$ ($a_{ij}$ and $h_{ij}$, respectively) via\begin{align}\bar{A}_{ij} &= \text{scalefactor[i]}\ \text{scalefactor[j]}\ a_{ij} \\\varepsilon_{ij} &= \text{scalefactor[i]}\ \text{scalefactor[j]}\ h_{ij},\end{align}where in this case we *multiply* due to the fact that these tensors are purely covariant (as opposed to contravariant). 
To slightly simplify the notation, in NRPy+ we define the *rescaling matrices* `ReU[i]` and `ReDD[i][j]`, such that\begin{align}\text{ReU[i]} &= 1 / \text{scalefactor[i]} \\\text{ReDD[i][j]} &= \text{scalefactor[i] scalefactor[j]}.\end{align}Thus, for example, $\bar{A}_{ij}$ and $\bar{\Lambda}^i$ can be expressed as the [Hadamard product](https://en.wikipedia.org/w/index.php?title=Hadamard_product_(matrices)&oldid=852272177) of matrices :\begin{align}\bar{A}_{ij} &= \mathbf{ReDD}\circ\mathbf{a} = \text{ReDD[i][j]} a_{ij} \\\bar{\Lambda}^{i} &= \mathbf{ReU}\circ\mathbf{\lambda} = \text{ReU[i]} \lambda^i,\end{align}where no sums are implied by the repeated indices.Further, since the scale factors are *time independent*, \begin{align}\partial_t \bar{A}_{ij} &= \text{ReDD[i][j]}\ \partial_t a_{ij} \\\partial_t \bar{\gamma}_{ij} &= \partial_t \left(\varepsilon_{ij} + \hat{\gamma}_{ij}\right)\\&= \partial_t \varepsilon_{ij} \\&= \text{scalefactor[i]}\ \text{scalefactor[j]}\ \partial_t h_{ij}.\end{align}Thus instead of taking space or time derivatives of BSSN quantities$$\left\{\bar{\gamma}_{i j},\bar{A}_{i j},\phi, K, \bar{\Lambda}^{i}, \alpha, \beta^i, B^i\right\},$$ across coordinate singularities, we instead factor out the singular scale factors according to this prescription so that space or time derivatives of BSSN quantities are written in terms of finite-difference derivatives of the *rescaled* variables $$\left\{h_{i j},a_{i j},\text{cf}, K, \lambda^{i}, \alpha, \mathcal{V}^i, \mathcal{B}^i\right\},$$ and *exact* expressions for (spatial) derivatives of scale factors. Note that `cf` is the chosen conformal factor (supported choices for `cf` are discussed in [Step 6.a](phi_ito_cf)). As an example, let's evaluate $\bar{\Lambda}^{i}_{\, ,\, j}$ according to this prescription:\begin{align}\bar{\Lambda}^{i}_{\, ,\, j} &= -\frac{\lambda^i}{(\text{ReU[i]})^2}\ \partial_j \left(\text{ReU[i]}\right) + \frac{\partial_j \lambda^i}{\text{ReU[i]}} \\&= -\frac{\lambda^i}{(\text{ReU[i]})^2}\ \text{ReUdD[i][j]} + \frac{\partial_j \lambda^i}{\text{ReU[i]}}.\end{align}Here, the derivative `ReUdD[i][j]` **is computed symbolically and exactly** using SymPy, and the derivative $\partial_j \lambda^i$ represents a derivative of a *smooth* quantity (so long as $\bar{\Lambda}^{i}$ is smooth in the Cartesian basis). 
Step 3.a: `BSSN_basic_tensors()`: Define all basic conformal BSSN tensors $\left\{\bar{\gamma}_{i j},\bar{A}_{i j},\bar{\Lambda}^{i}, \beta^i, B^i\right\}$ in terms of BSSN gridfunctions \[Back to [top](toc)\]$$\label{bssn_basic_tensors}$$The `BSSN_vars__tensors()` function defines the tensorial BSSN quantities $\left\{\bar{\gamma}_{i j},\bar{A}_{i j},\bar{\Lambda}^{i}, \beta^i, B^i\right\}$, in terms of the rescaled "base" tensorial quantities $\left\{h_{i j},a_{i j}, \lambda^{i}, \mathcal{V}^i, \mathcal{B}^i\right\},$ respectively:\begin{align}\bar{\gamma}_{i j} &= \hat{\gamma}_{ij} + \varepsilon_{ij}, \text{ where } \varepsilon_{ij} = h_{ij} \circ \text{ReDD[i][j]} \\\bar{A}_{i j} &= a_{ij} \circ \text{ReDD[i][j]} \\\bar{\Lambda}^{i} &= \lambda^i \circ \text{ReU[i]} \\\beta^{i} &= \mathcal{V}^i \circ \text{ReU[i]} \\B^{i} &= \mathcal{B}^i \circ \text{ReU[i]}\end{align}Rescaling vectors and tensors are built upon the scale factors for the chosen (in general, singular) coordinate system, which are defined in NRPy+'s [reference_metric.py](../edit/reference_metric.py) ([Tutorial](Tutorial-Reference_Metric.ipynb)), and the rescaled variables are defined in the stub function [BSSN/BSSN_rescaled_vars.py](../edit/BSSN/BSSN_rescaled_vars.py). Here we implement `BSSN_vars__tensors()`:
###Code
# Step 3.a: Define all basic conformal BSSN tensors in terms of BSSN gridfunctions
# Step 3.a.i: gammabarDD and AbarDD:
gammabarDD = ixp.zerorank2()
AbarDD = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
# gammabar_{ij} = h_{ij}*ReDD[i][j] + gammahat_{ij}
gammabarDD[i][j] = hDD[i][j]*rfm.ReDD[i][j] + rfm.ghatDD[i][j]
# Abar_{ij} = a_{ij}*ReDD[i][j]
AbarDD[i][j] = aDD[i][j]*rfm.ReDD[i][j]
# Step 3.a.ii: LambdabarU, betaU, and BU:
LambdabarU = ixp.zerorank1()
betaU = ixp.zerorank1()
BU = ixp.zerorank1()
for i in range(DIM):
LambdabarU[i] = lambdaU[i]*rfm.ReU[i]
betaU[i] = vetU[i] *rfm.ReU[i]
BU[i] = betU[i] *rfm.ReU[i]
###Output
_____no_output_____
###Markdown
Step 4: `gammabar__inverse_and_derivs()`: $\bar{\gamma}^{ij}$, and spatial derivatives of $\bar{\gamma}_{ij}$ including $\bar{\Gamma}^{i}_{jk}$ \[Back to [top](toc)\]$$\label{bssn_barred_metric__inverse_and_derivs}$$ Step 4.a: Inverse conformal 3-metric: $\bar{\gamma^{ij}}$ \[Back to [top](toc)\]$$\label{bssn_barred_metric__inverse}$$Since $\bar{\gamma}^{ij}$ is the inverse of $\bar{\gamma}_{ij}$, we apply a $3\times 3$ symmetric matrix inversion to compute $\bar{\gamma}^{ij}$.
###Code
# Step 4.a: Inverse conformal 3-metric gammabarUU:
# Step 4.a.i: gammabarUU:
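# symm_matrix_inverter3x3() returns the (symbolic) inverse and the determinant;
# the determinant is not needed here, so it is assigned to a dummy variable.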
gammabarUU, dummydet = ixp.symm_matrix_inverter3x3(gammabarDD)
###Output
_____no_output_____
###Markdown
Step 4.b: Derivatives of the conformal 3-metric $\bar{\gamma}_{ij,k}$ and $\bar{\gamma}_{ij,kl}$, and associated "barred" Christoffel symbols $\bar{\Gamma}^{i}_{jk}$ \[Back to [top](toc)\]$$\label{bssn_barred_metric__derivs}$$In the BSSN-in-curvilinear coordinates formulation, all quantities must be defined in terms of rescaled quantities $h_{ij}$ and their derivatives (evaluated using finite differences), as well as reference-metric quantities and their derivatives (evaluated exactly using SymPy). For example, $\bar{\gamma}_{ij,k}$ is given by:\begin{align}\bar{\gamma}_{ij,k} &= \partial_k \bar{\gamma}_{ij} \\&= \partial_k \left(\hat{\gamma}_{ij} + \varepsilon_{ij}\right) \\&= \partial_k \left(\hat{\gamma}_{ij} + h_{ij} \text{ReDD[i][j]}\right) \\&= \hat{\gamma}_{ij,k} + h_{ij,k} \text{ReDD[i][j]} + h_{ij} \text{ReDDdD[i][j][k]},\end{align}where `ReDDdD[i][j][k]` is computed within `rfm.reference_metric()`.
###Code
# Step 4.b.i gammabarDDdD[i][j][k]
# = \hat{\gamma}_{ij,k} + h_{ij,k} \text{ReDD[i][j]} + h_{ij} \text{ReDDdD[i][j][k]}.
gammabarDD_dD = ixp.zerorank3()
hDD_dD = ixp.declarerank3("hDD_dD","sym01")
hDD_dupD = ixp.declarerank3("hDD_dupD","sym01")
gammabarDD_dupD = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
gammabarDD_dD[i][j][k] = rfm.ghatDDdD[i][j][k] + \
hDD_dD[i][j][k]*rfm.ReDD[i][j] + hDD[i][j]*rfm.ReDDdD[i][j][k]
# Compute associated upwinded derivative, needed for the \bar{\gamma}_{ij} RHS
gammabarDD_dupD[i][j][k] = rfm.ghatDDdD[i][j][k] + \
hDD_dupD[i][j][k]*rfm.ReDD[i][j] + hDD[i][j]*rfm.ReDDdD[i][j][k]
###Output
_____no_output_____
###Markdown
By extension, the second derivative $\bar{\gamma}_{ij,kl}$ is given by\begin{align}\bar{\gamma}_{ij,kl} &= \partial_l \left(\hat{\gamma}_{ij,k} + h_{ij,k} \text{ReDD[i][j]} + h_{ij} \text{ReDDdD[i][j][k]}\right)\\&= \hat{\gamma}_{ij,kl} + h_{ij,kl} \text{ReDD[i][j]} + h_{ij,k} \text{ReDDdD[i][j][l]} + h_{ij,l} \text{ReDDdD[i][j][k]} + h_{ij} \text{ReDDdDD[i][j][k][l]}\end{align}
###Code
# Step 4.b.ii: Compute gammabarDD_dDD in terms of the rescaled BSSN quantity hDD
# and its derivatives, as well as the reference metric and rescaling
# matrix, and its derivatives (expression given below):
hDD_dDD = ixp.declarerank4("hDD_dDD","sym01_sym23")
gammabarDD_dDD = ixp.zerorank4()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
# gammabar_{ij,kl} = gammahat_{ij,kl}
# + h_{ij,kl} ReDD[i][j]
# + h_{ij,k} ReDDdD[i][j][l] + h_{ij,l} ReDDdD[i][j][k]
# + h_{ij} ReDDdDD[i][j][k][l]
gammabarDD_dDD[i][j][k][l] = rfm.ghatDDdDD[i][j][k][l]
gammabarDD_dDD[i][j][k][l] += hDD_dDD[i][j][k][l]*rfm.ReDD[i][j]
gammabarDD_dDD[i][j][k][l] += hDD_dD[i][j][k]*rfm.ReDDdD[i][j][l] + \
hDD_dD[i][j][l]*rfm.ReDDdD[i][j][k]
gammabarDD_dDD[i][j][k][l] += hDD[i][j]*rfm.ReDDdDD[i][j][k][l]
###Output
_____no_output_____
###Markdown
Finally, we compute the Christoffel symbol associated with the barred 3-metric: $\bar{\Gamma}^{i}_{kl}$:$$\bar{\Gamma}^{i}_{kl} = \frac{1}{2} \bar{\gamma}^{im} \left(\bar{\gamma}_{mk,l} + \bar{\gamma}_{ml,k} - \bar{\gamma}_{kl,m} \right)$$
###Code
# Step 4.b.iii: Define barred Christoffel symbol \bar{\Gamma}^{i}_{kl} = GammabarUDD[i][k][l] (see expression below)
GammabarUDD = ixp.zerorank3()
for i in range(DIM):
for k in range(DIM):
for l in range(DIM):
for m in range(DIM):
# Gammabar^i_{kl} = 1/2 * gammabar^{im} ( gammabar_{mk,l} + gammabar_{ml,k} - gammabar_{kl,m}):
GammabarUDD[i][k][l] += sp.Rational(1,2)*gammabarUU[i][m]* \
(gammabarDD_dD[m][k][l] + gammabarDD_dD[m][l][k] - gammabarDD_dD[k][l][m])
###Output
_____no_output_____
###Markdown
Step 5: `detgammabar_and_derivs()`: $\det \bar{\gamma}_{ij}$ and its derivatives \[Back to [top](toc)\]$$\label{detgammabar_and_derivs}$$As described just before Section III of [Baumgarte *et al* (2012)](https://arxiv.org/pdf/1211.6632.pdf), we are free to choose $\det \bar{\gamma}_{ij}$, which should remain fixed in time.As in [Baumgarte *et al* (2012)](https://arxiv.org/pdf/1211.6632.pdf) generally we make the choice $\det \bar{\gamma}_{ij} = \det \hat{\gamma}_{ij}$, but *this need not be the case; we could choose to set $\det \bar{\gamma}_{ij}$ to another expression.*In case we do not choose to set $\det \bar{\gamma}_{ij}/\det \hat{\gamma}_{ij}=1$, below we begin the implementation of a gridfunction, `detgbarOverdetghat`, which defines an alternative expression in its place. $\det \bar{\gamma}_{ij}/\det \hat{\gamma}_{ij}$=`detgbarOverdetghat`$\ne 1$ is not yet implemented. However, we can define `detgammabar` and its derivatives in terms of a generic `detgbarOverdetghat` and $\det \hat{\gamma}_{ij}$ and their derivatives:\begin{align}\text{detgammabar} &= \det \bar{\gamma}_{ij} = \text{detgbarOverdetghat} \cdot \left(\det \hat{\gamma}_{ij}\right) \\\text{detgammabar}\_\text{dD[k]} &= \left(\det \bar{\gamma}_{ij}\right)_{,k} = \text{detgbarOverdetghat}\_\text{dD[k]} \det \hat{\gamma}_{ij} + \text{detgbarOverdetghat} \left(\det \hat{\gamma}_{ij}\right)_{,k} \\\end{align}https://en.wikipedia.org/wiki/DeterminantProperties_of_the_determinant
###Code
# Step 5: det(gammabarDD) and its derivatives
detgbarOverdetghat = sp.sympify(1)
detgbarOverdetghat_dD = ixp.zerorank1()
detgbarOverdetghat_dDD = ixp.zerorank2()
if par.parval_from_str(thismodule+"::detgbarOverdetghat_equals_one") == "False":
print("Error: detgbarOverdetghat_equals_one=\"False\" is not fully implemented yet.")
sys.exit(1)
## Approach for implementing detgbarOverdetghat_equals_one=False:
# detgbarOverdetghat = gri.register_gridfunctions("AUX", ["detgbarOverdetghat"])
# detgbarOverdetghatInitial = gri.register_gridfunctions("AUX", ["detgbarOverdetghatInitial"])
# detgbarOverdetghat_dD = ixp.declarerank1("detgbarOverdetghat_dD")
# detgbarOverdetghat_dDD = ixp.declarerank2("detgbarOverdetghat_dDD", "sym01")
# Step 5.b: Define detgammabar, detgammabar_dD, and detgammabar_dDD (needed for
#           \partial_t \bar{\Lambda}^i below)
detgammabar = detgbarOverdetghat * rfm.detgammahat
detgammabar_dD = ixp.zerorank1()
for i in range(DIM):
detgammabar_dD[i] = detgbarOverdetghat_dD[i] * rfm.detgammahat + detgbarOverdetghat * rfm.detgammahatdD[i]
detgammabar_dDD = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
detgammabar_dDD[i][j] = detgbarOverdetghat_dDD[i][j] * rfm.detgammahat + \
detgbarOverdetghat_dD[i] * rfm.detgammahatdD[j] + \
detgbarOverdetghat_dD[j] * rfm.detgammahatdD[i] + \
detgbarOverdetghat * rfm.detgammahatdDD[i][j]
###Output
_____no_output_____
###Markdown
Step 6: `AbarUU_AbarUD_trAbar_AbarDD_dD()`: Quantities related to conformal traceless extrinsic curvature $\bar{A}_{ij}$: $\bar{A}^{ij}$, $\bar{A}^i_j$, and $\bar{A}^k_k$ \[Back to [top](toc)\]$$\label{abar_quantities}$$$\bar{A}^{ij}$ is given by application of the raising operators (a.k.a., the inverse 3-metric) $\bar{\gamma}^{jk}$ on both of the covariant ("down") components:$$\bar{A}^{ij} = \bar{\gamma}^{ik}\bar{\gamma}^{jl} \bar{A}_{kl}.$$$\bar{A}^i_j$ is given by a single application of the raising operator (a.k.a., the inverse 3-metric) $\bar{\gamma}^{ik}$ on $\bar{A}_{kj}$:$$\bar{A}^i_j = \bar{\gamma}^{ik}\bar{A}_{kj}.$$The trace of $\bar{A}_{ij}$, $\bar{A}^k_k$, is given by a contraction with the barred 3-metric:$$\text{Tr}(\bar{A}_{ij}) = \bar{A}^k_k = \bar{\gamma}^{kj}\bar{A}_{jk}.$$Note that while $\bar{A}_{ij}$ is defined as the *traceless* conformal extrinsic curvature, it may acquire a nonzero trace (assuming the initial data impose tracelessness) due to numerical error. $\text{Tr}(\bar{A}_{ij})$ is included in the BSSN equations to drive $\text{Tr}(\bar{A}_{ij})$ to zero.In terms of rescaled BSSN quantities, $\bar{A}_{ij}$ is given by$$\bar{A}_{ij} = \text{ReDD[i][j]} a_{ij},$$so in terms of the same quantities, $\bar{A}_{ij,k}$ is given by$$\bar{A}_{ij,k} = \text{ReDDdD[i][j][k]} a_{ij} + \text{ReDD[i][j]} a_{ij,k}.$$
###Code
# Step 6: Quantities related to conformal traceless extrinsic curvature
# Step 6.a.i: Compute Abar^{ij} in terms of Abar_{ij} and gammabar^{ij}
AbarUU = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
# Abar^{ij} = gammabar^{ik} gammabar^{jl} Abar_{kl}
AbarUU[i][j] += gammabarUU[i][k]*gammabarUU[j][l]*AbarDD[k][l]
# Step 6.a.ii: Compute Abar^i_j in terms of Abar_{ij} and gammabar^{ij}
AbarUD = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
# Abar^i_j = gammabar^{ik} Abar_{kj}
AbarUD[i][j] += gammabarUU[i][k]*AbarDD[k][j]
# Step 6.a.iii: Compute Abar^k_k = trace of Abar:
trAbar = sp.sympify(0)
for k in range(DIM):
for j in range(DIM):
# Abar^k_k = gammabar^{kj} Abar_{jk}
trAbar += gammabarUU[k][j]*AbarDD[j][k]
# Step 6.a.iv: Compute Abar_{ij,k}
AbarDD_dD = ixp.zerorank3()
AbarDD_dupD = ixp.zerorank3()
aDD_dD = ixp.declarerank3("aDD_dD" ,"sym01")
aDD_dupD = ixp.declarerank3("aDD_dupD","sym01")
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
AbarDD_dupD[i][j][k] = rfm.ReDDdD[i][j][k]*aDD[i][j] + rfm.ReDD[i][j]*aDD_dupD[i][j][k]
AbarDD_dD[i][j][k] = rfm.ReDDdD[i][j][k]*aDD[i][j] + rfm.ReDD[i][j]*aDD_dD[ i][j][k]
###Output
_____no_output_____
###Markdown
Step 7: `RicciBar__gammabarDD_dHatD__DGammaUDD__DGammaU()`: The conformal ("barred") Ricci tensor $\bar{R}_{ij}$ and associated quantities \[Back to [top](toc)\]$$\label{rbar}$$Let's compute perhaps the most complicated expression in the BSSN evolution equations, the conformal Ricci tensor:\begin{align} \bar{R}_{i j} {} = {} & - \frac{1}{2} \bar{\gamma}^{k l} \hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} + \bar{\gamma}_{k(i} \hat{D}_{j)} \bar{\Lambda}^{k} + \Delta^{k} \Delta_{(i j) k} \nonumber \\ & + \bar{\gamma}^{k l} \left (2 \Delta_{k(i}^{m} \Delta_{j) m l} + \Delta_{i k}^{m} \Delta_{m j l} \right ) \; .\end{align}Let's tackle the $\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j}$ term first: Step 7.a: Conformal Ricci tensor, part 1: The $\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j}$ term \[Back to [top](toc)\]$$\label{rbar_part1}$$First note that the covariant derivative of a metric with respect to itself is zero$$\hat{D}_{l} \hat{\gamma}_{ij} = 0,$$so $$\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} = \hat{D}_{k} \hat{D}_{l} \left(\hat{\gamma}_{i j} + \varepsilon_{ij}\right) = \hat{D}_{k} \hat{D}_{l} \varepsilon_{ij}.$$Next, the covariant derivative of a tensor is given by (from the [wikipedia article on covariant differentiation](https://en.wikipedia.org/wiki/Covariant_derivative)):\begin{align} {(\nabla_{e_c} T)^{a_1 \ldots a_r}}_{b_1 \ldots b_s} = {} &\frac{\partial}{\partial x^c}{T^{a_1 \ldots a_r}}_{b_1 \ldots b_s} \\ &+ \,{\Gamma ^{a_1}}_{dc} {T^{d a_2 \ldots a_r}}_{b_1 \ldots b_s} + \cdots + {\Gamma^{a_r}}_{dc} {T^{a_1 \ldots a_{r-1}d}}_{b_1 \ldots b_s} \\ &-\,{\Gamma^d}_{b_1 c} {T^{a_1 \ldots a_r}}_{d b_2 \ldots b_s} - \cdots - {\Gamma^d}_{b_s c} {T^{a_1 \ldots a_r}}_{b_1 \ldots b_{s-1} d}.\end{align}Therefore, $$\hat{D}_{l} \bar{\gamma}_{i j} = \hat{D}_{l} \varepsilon_{i j} = \varepsilon_{i j,l} - \hat{\Gamma}^m_{i l} \varepsilon_{m j} -\hat{\Gamma}^m_{j l} \varepsilon_{i m}.$$Since the covariant first derivative is a tensor, the covariant second derivative is given by (same as [Eq. 27 in Baumgarte et al (2012)](https://arxiv.org/pdf/1211.6632.pdf))\begin{align}\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} &= \hat{D}_{k} \hat{D}_{l} \varepsilon_{i j} \\&= \partial_k \hat{D}_{l} \varepsilon_{i j} - \hat{\Gamma}^m_{lk} \left(\hat{D}_{m} \varepsilon_{i j}\right) - \hat{\Gamma}^m_{ik} \left(\hat{D}_{l} \varepsilon_{m j}\right) - \hat{\Gamma}^m_{jk} \left(\hat{D}_{l} \varepsilon_{i m}\right),\end{align}where the first term is the partial derivative of the expression already derived for $\hat{D}_{l} \varepsilon_{i j}$:\begin{align}\partial_k \hat{D}_{l} \varepsilon_{i j} &= \partial_k \left(\varepsilon_{ij,l} - \hat{\Gamma}^m_{i l} \varepsilon_{m j} -\hat{\Gamma}^m_{j l} \varepsilon_{i m} \right) \\&= \varepsilon_{ij,lk} - \hat{\Gamma}^m_{i l,k} \varepsilon_{m j} - \hat{\Gamma}^m_{i l} \varepsilon_{m j,k} - \hat{\Gamma}^m_{j l,k} \varepsilon_{i m} - \hat{\Gamma}^m_{j l} \varepsilon_{i m,k}.\end{align}In terms of the evolved quantity $h_{ij}$, the derivatives of $\varepsilon_{ij}$ are given by:\begin{align}\varepsilon_{ij,k} &= \partial_k \left(h_{ij} \text{ReDD[i][j]}\right) \\&= h_{ij,k} \text{ReDD[i][j]} + h_{ij} \text{ReDDdD[i][j][k]},\end{align}and\begin{align}\varepsilon_{ij,kl} &= \partial_l \left(h_{ij,k} \text{ReDD[i][j]} + h_{ij} \text{ReDDdD[i][j][k]} \right)\\&= h_{ij,kl} \text{ReDD[i][j]} + h_{ij,k} \text{ReDDdD[i][j][l]} + h_{ij,l} \text{ReDDdD[i][j][k]} + h_{ij} \text{ReDDdDD[i][j][k][l]}.\end{align}
###Code
# Step 7: Conformal Ricci tensor, part 1: The \hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} term
# Step 7.a.i: Define \varepsilon_{ij} = epsDD[i][j]
epsDD = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
epsDD[i][j] = hDD[i][j]*rfm.ReDD[i][j]
# Step 7.a.ii: Define epsDD_dD[i][j][k]
hDD_dD = ixp.declarerank3("hDD_dD","sym01")
epsDD_dD = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
epsDD_dD[i][j][k] = hDD_dD[i][j][k]*rfm.ReDD[i][j] + hDD[i][j]*rfm.ReDDdD[i][j][k]
# Step 7.a.iii: Define epsDD_dDD[i][j][k][l]
hDD_dDD = ixp.declarerank4("hDD_dDD","sym01_sym23")
epsDD_dDD = ixp.zerorank4()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
epsDD_dDD[i][j][k][l] = hDD_dDD[i][j][k][l]*rfm.ReDD[i][j] + \
hDD_dD[i][j][k]*rfm.ReDDdD[i][j][l] + \
hDD_dD[i][j][l]*rfm.ReDDdD[i][j][k] + \
hDD[i][j]*rfm.ReDDdDD[i][j][k][l]
###Output
_____no_output_____
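###Markdown
As a brief, purely illustrative check (not part of the module), $\varepsilon_{ij}$ defined here must equal $\bar{\gamma}_{ij}-\hat{\gamma}_{ij}$ from Step 3.a, since $\bar{\gamma}_{ij} = \hat{\gamma}_{ij} + h_{ij}\ \text{ReDD[i][j]}$:
###Code
# Illustrative check (not part of the module): eps_{ij} should equal
# gammabar_{ij} - gammahat_{ij} component by component. Expect no output below.
for i in range(DIM):
    for j in range(DIM):
        diff = sp.simplify(epsDD[i][j] - (gammabarDD[i][j] - rfm.ghatDD[i][j]))
        if diff != 0:
            print("Unexpected nonzero difference in epsDD["+str(i)+"]["+str(j)+"]:", diff)
###Output
_____no_output_____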
###Markdown
We next compute three quantities derived above:* `gammabarDD_dHatD[i][j][l]` = $\hat{D}_{l} \bar{\gamma}_{i j} = \hat{D}_{l} \varepsilon_{i j} = \varepsilon_{i j,l} - \hat{\Gamma}^m_{i l} \varepsilon_{m j} -\hat{\Gamma}^m_{j l} \varepsilon_{i m}$,* `gammabarDD_dHatD_dD[i][j][l][k]` = $\partial_k \hat{D}_{l} \bar{\gamma}_{i j} = \partial_k \hat{D}_{l} \varepsilon_{i j} = \varepsilon_{ij,lk} - \hat{\Gamma}^m_{i l,k} \varepsilon_{m j} - \hat{\Gamma}^m_{i l} \varepsilon_{m j,k} - \hat{\Gamma}^m_{j l,k} \varepsilon_{i m} - \hat{\Gamma}^m_{j l} \varepsilon_{i m,k}$, and* `gammabarDD_dHatDD[i][j][l][k]` = $\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} = \partial_k \hat{D}_{l} \varepsilon_{i j} - \hat{\Gamma}^m_{lk} \left(\hat{D}_{m} \varepsilon_{i j}\right) - \hat{\Gamma}^m_{ik} \left(\hat{D}_{l} \varepsilon_{m j}\right) - \hat{\Gamma}^m_{jk} \left(\hat{D}_{l} \varepsilon_{i m}\right)$.
###Code
# Step 7.a.iv: DhatgammabarDDdD[i][j][l] = \bar{\gamma}_{ij;\hat{l}}
# \bar{\gamma}_{ij;\hat{l}} = \varepsilon_{i j,l}
# - \hat{\Gamma}^m_{i l} \varepsilon_{m j}
# - \hat{\Gamma}^m_{j l} \varepsilon_{i m}
gammabarDD_dHatD = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for l in range(DIM):
gammabarDD_dHatD[i][j][l] = epsDD_dD[i][j][l]
for m in range(DIM):
gammabarDD_dHatD[i][j][l] += - rfm.GammahatUDD[m][i][l]*epsDD[m][j] \
- rfm.GammahatUDD[m][j][l]*epsDD[i][m]
# Step 7.a.v: \bar{\gamma}_{ij;\hat{l},k} = DhatgammabarDD_dHatD_dD[i][j][l][k]:
# \bar{\gamma}_{ij;\hat{l},k} = \varepsilon_{ij,lk}
# - \hat{\Gamma}^m_{i l,k} \varepsilon_{m j}
# - \hat{\Gamma}^m_{i l} \varepsilon_{m j,k}
# - \hat{\Gamma}^m_{j l,k} \varepsilon_{i m}
# - \hat{\Gamma}^m_{j l} \varepsilon_{i m,k}
gammabarDD_dHatD_dD = ixp.zerorank4()
for i in range(DIM):
for j in range(DIM):
for l in range(DIM):
for k in range(DIM):
gammabarDD_dHatD_dD[i][j][l][k] = epsDD_dDD[i][j][l][k]
for m in range(DIM):
gammabarDD_dHatD_dD[i][j][l][k] += -rfm.GammahatUDDdD[m][i][l][k]*epsDD[m][j] \
-rfm.GammahatUDD[m][i][l]*epsDD_dD[m][j][k] \
-rfm.GammahatUDDdD[m][j][l][k]*epsDD[i][m] \
-rfm.GammahatUDD[m][j][l]*epsDD_dD[i][m][k]
# Step 7.a.vi: \bar{\gamma}_{ij;\hat{l}\hat{k}} = DhatgammabarDD_dHatDD[i][j][l][k]
# \bar{\gamma}_{ij;\hat{l}\hat{k}} = \partial_k \hat{D}_{l} \varepsilon_{i j}
# - \hat{\Gamma}^m_{lk} \left(\hat{D}_{m} \varepsilon_{i j}\right)
# - \hat{\Gamma}^m_{ik} \left(\hat{D}_{l} \varepsilon_{m j}\right)
# - \hat{\Gamma}^m_{jk} \left(\hat{D}_{l} \varepsilon_{i m}\right)
gammabarDD_dHatDD = ixp.zerorank4()
for i in range(DIM):
for j in range(DIM):
for l in range(DIM):
for k in range(DIM):
gammabarDD_dHatDD[i][j][l][k] = gammabarDD_dHatD_dD[i][j][l][k]
for m in range(DIM):
gammabarDD_dHatDD[i][j][l][k] += - rfm.GammahatUDD[m][l][k]*gammabarDD_dHatD[i][j][m] \
- rfm.GammahatUDD[m][i][k]*gammabarDD_dHatD[m][j][l] \
- rfm.GammahatUDD[m][j][k]*gammabarDD_dHatD[i][m][l]
###Output
_____no_output_____
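###Markdown
Since $\varepsilon_{ij}$ is symmetric, $\hat{D}_{l} \bar{\gamma}_{i j}$ must be symmetric under $i\leftrightarrow j$. The optional cell below (a sketch only, not part of the module) confirms this:
###Code
# Illustrative check: \hat{D}_l gammabar_{ij} must be symmetric in (i,j), since eps_{ij}
# and the hatted-Christoffel correction terms are symmetric under i <-> j. Expect no output.
for l in range(DIM):
    for i in range(DIM):
        for j in range(i+1, DIM):
            if sp.simplify(gammabarDD_dHatD[i][j][l] - gammabarDD_dHatD[j][i][l]) != 0:
                print("Asymmetry detected in gammabarDD_dHatD at", i, j, l)
###Output
_____no_output_____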
###Markdown
Step 7.b: Conformal Ricci tensor, part 2: The $\bar{\gamma}_{k(i} \hat{D}_{j)} \bar{\Lambda}^{k}$ term \[Back to [top](toc)\]$$\label{rbar_part2}$$By definition, the index symmetrization operation is given by:$$\bar{\gamma}_{k(i} \hat{D}_{j)} \bar{\Lambda}^{k} = \frac{1}{2} \left( \bar{\gamma}_{ki} \hat{D}_{j} \bar{\Lambda}^{k} + \bar{\gamma}_{kj} \hat{D}_{i} \bar{\Lambda}^{k} \right),$$and $\bar{\gamma}_{ij}$ is trivially computed ($=\varepsilon_{ij} + \hat{\gamma}_{ij}$) so the only nontrivial part of computing this term is evaluating $\hat{D}_{j} \bar{\Lambda}^{k}$.The covariant derivative is with respect to the hatted metric (i.e., the reference metric), so$$\hat{D}_{j} \bar{\Lambda}^{k} = \partial_j \bar{\Lambda}^{k} + \hat{\Gamma}^{k}_{mj} \bar{\Lambda}^m,$$except we cannot take derivatives of $\bar{\Lambda}^{k}$ directly due to potential issues with coordinate singularities. Instead we write it in terms of the rescaled quantity $\lambda^k$ via$$\bar{\Lambda}^{k} = \lambda^k \text{ReU[k]}.$$Then the expression for $\hat{D}_{j} \bar{\Lambda}^{k}$ becomes$$\hat{D}_{j} \bar{\Lambda}^{k} = \lambda^{k}_{,j} \text{ReU[k]} + \lambda^{k} \text{ReUdD[k][j]} + \hat{\Gamma}^{k}_{mj} \lambda^{m} \text{ReU[m]},$$and the NRPy+ code for this expression is written below:
###Code
# Step 7.b: Second term of RbarDD: compute \hat{D}_{j} \bar{\Lambda}^{k} = LambarU_dHatD[k][j]
lambdaU_dD = ixp.declarerank2("lambdaU_dD","nosym")
LambarU_dHatD = ixp.zerorank2()
for j in range(DIM):
for k in range(DIM):
LambarU_dHatD[k][j] = lambdaU_dD[k][j]*rfm.ReU[k] + lambdaU[k]*rfm.ReUdD[k][j]
for m in range(DIM):
LambarU_dHatD[k][j] += rfm.GammahatUDD[k][m][j]*lambdaU[m]*rfm.ReU[m]
###Output
_____no_output_____
###Markdown
Step 7.c: Conformal Ricci tensor, part 3: The $\Delta^{k} \Delta_{(i j) k} + \bar{\gamma}^{k l} \left (2 \Delta_{k(i}^{m} \Delta_{j) m l} + \Delta_{i k}^{m} \Delta_{m j l} \right )$ terms \[Back to [top](toc)\]$$\label{rbar_part3}$$Our goal here is to compute the quantities appearing as the final terms of the conformal Ricci tensor:$$\Delta^{k} \Delta_{(i j) k} + \bar{\gamma}^{k l} \left (2 \Delta_{k(i}^{m} \Delta_{j) m l} + \Delta_{i k}^{m} \Delta_{m j l} \right).$$* `DGammaUDD[k][i][j]`$= \Delta^k_{ij}$ is simply the difference in Christoffel symbols: $\Delta^{k}_{ij} = \bar{\Gamma}^k_{ij} - \hat{\Gamma}^k_{ij}$, and * `DGammaU[k]`$= \Delta^k$ is the contraction: $\bar{\gamma}^{ij} \Delta^{k}_{ij}$Adding these expressions to Ricci is straightforward, since $\bar{\Gamma}^i_{jk}$ and $\bar{\gamma}^{ij}$ were defined above in [Step 4](bssn_barred_metric__inverse_and_derivs), and $\hat{\Gamma}^i_{jk}$ was computed within NRPy+'s `reference_metric()` function:
###Code
# Step 7.c: Conformal Ricci tensor, part 3: The \Delta^{k} \Delta_{(i j) k}
# + \bar{\gamma}^{k l}*(2 \Delta_{k(i}^{m} \Delta_{j) m l}
# + \Delta_{i k}^{m} \Delta_{m j l}) terms
# Step 7.c.i: Define \Delta^i_{jk} = \bar{\Gamma}^i_{jk} - \hat{\Gamma}^i_{jk} = DGammaUDD[i][j][k]
DGammaUDD = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
DGammaUDD[i][j][k] = GammabarUDD[i][j][k] - rfm.GammahatUDD[i][j][k]
# Step 7.c.ii: Define \Delta^i = \bar{\gamma}^{jk} \Delta^i_{jk}
DGammaU = ixp.zerorank1()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
DGammaU[i] += gammabarUU[j][k] * DGammaUDD[i][j][k]
###Output
_____no_output_____
###Markdown
Next we define $\Delta_{ijk}=\bar{\gamma}_{im}\Delta^m_{jk}$:
###Code
# Step 7.c.iii: Define \Delta_{ijk} = \bar{\gamma}_{im} \Delta^m_{jk}
DGammaDDD = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for m in range(DIM):
DGammaDDD[i][j][k] += gammabarDD[i][m] * DGammaUDD[m][j][k]
###Output
_____no_output_____
###Markdown
Step 7.d: Summing the terms and defining $\bar{R}_{ij}$ \[Back to [top](toc)\]$$\label{summing_rbar_terms}$$We have now constructed all of the terms going into $\bar{R}_{ij}$:\begin{align} \bar{R}_{i j} {} = {} & - \frac{1}{2} \bar{\gamma}^{k l} \hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} + \bar{\gamma}_{k(i} \hat{D}_{j)} \bar{\Lambda}^{k} + \Delta^{k} \Delta_{(i j) k} \nonumber \\ & + \bar{\gamma}^{k l} \left (2 \Delta_{k(i}^{m} \Delta_{j) m l} + \Delta_{i k}^{m} \Delta_{m j l} \right ) \; .\end{align}
###Code
# Step 7.d: Summing the terms and defining \bar{R}_{ij}
# Step 7.d.i: Add the first term to RbarDD:
# Rbar_{ij} += - \frac{1}{2} \bar{\gamma}^{k l} \hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j}
RbarDD = ixp.zerorank2()
RbarDDpiece = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
RbarDD[i][j] += -sp.Rational(1,2) * gammabarUU[k][l]*gammabarDD_dHatDD[i][j][l][k]
RbarDDpiece[i][j] += -sp.Rational(1,2) * gammabarUU[k][l]*gammabarDD_dHatDD[i][j][l][k]
# Step 7.d.ii: Add the second term to RbarDD:
# Rbar_{ij} += (1/2) * (gammabar_{ki} Lambar^k_{;\hat{j}} + gammabar_{kj} Lambar^k_{;\hat{i}})
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
RbarDD[i][j] += sp.Rational(1,2) * (gammabarDD[k][i]*LambarU_dHatD[k][j] + \
gammabarDD[k][j]*LambarU_dHatD[k][i])
# Step 7.d.iii: Add the remaining term to RbarDD:
# Rbar_{ij} += \Delta^{k} \Delta_{(i j) k} = 1/2 \Delta^{k} (\Delta_{i j k} + \Delta_{j i k})
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
RbarDD[i][j] += sp.Rational(1,2) * DGammaU[k] * (DGammaDDD[i][j][k] + DGammaDDD[j][i][k])
# Step 7.d.iv: Add the final term to RbarDD:
# Rbar_{ij} += \bar{\gamma}^{k l} (\Delta^{m}_{k i} \Delta_{j m l}
# + \Delta^{m}_{k j} \Delta_{i m l}
# + \Delta^{m}_{i k} \Delta_{m j l})
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
for m in range(DIM):
RbarDD[i][j] += gammabarUU[k][l] * (DGammaUDD[m][k][i]*DGammaDDD[j][m][l] +
DGammaUDD[m][k][j]*DGammaDDD[i][m][l] +
DGammaUDD[m][i][k]*DGammaDDD[m][j][l])
###Output
_____no_output_____
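###Markdown
As an optional consistency check on the assembled expression (purely illustrative, not part of the module, and potentially slow since the components are large in Spherical coordinates), $\bar{R}_{ij}$ must be symmetric; here we verify one off-diagonal pair, and the other pairs can be checked identically:
###Code
# Illustrative check (not part of the module): Rbar_{ij} must be symmetric. The components
# are large in Spherical coordinates, so this symbolic simplification may take a while;
# only the (0,1)/(1,0) pair is checked here. Expect 0.
print(sp.simplify(RbarDD[0][1] - RbarDD[1][0]))
###Output
_____no_output_____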
###Markdown
Step 8: **`betaU_derivs()`**: The unrescaled shift vector $\beta^i$ spatial derivatives: $\beta^i_{,j}$ & $\beta^i_{,jk}$, written in terms of the rescaled shift vector $\mathcal{V}^i$ \[Back to [top](toc)\]$$\label{beta_derivs}$$This step, which documents the function `betaU_derivs()` inside the [BSSN.BSSN_quantities](../edit/BSSN/BSSN_quantities.py) module, defines three quantities:* `betaU_dD[i][j]`$=\beta^i_{,j} = \left(\mathcal{V}^i \circ \text{ReU[i]}\right)_{,j} = \mathcal{V}^i_{,j} \circ \text{ReU[i]} + \mathcal{V}^i \circ \text{ReUdD[i][j]}$* `betaU_dupD[i][j]`: the same as above, except using *upwinded* finite-difference derivatives to compute $\mathcal{V}^i_{,j}$ instead of *centered* finite-difference derivatives.* `betaU_dDD[i][j][k]`$=\beta^i_{,jk} = \mathcal{V}^i_{,jk} \circ \text{ReU[i]} + \mathcal{V}^i_{,j} \circ \text{ReUdD[i][k]} + \mathcal{V}^i_{,k} \circ \text{ReUdD[i][j]}+\mathcal{V}^i \circ \text{ReUdDD[i][j][k]}$
###Code
# Step 8: The unrescaled shift vector betaU spatial derivatives:
# betaUdD & betaUdDD, written in terms of the
# rescaled shift vector vetU
vetU_dD = ixp.declarerank2("vetU_dD","nosym")
vetU_dupD = ixp.declarerank2("vetU_dupD","nosym") # Needed for upwinded \beta^i_{,j}
vetU_dDD  = ixp.declarerank3("vetU_dDD","sym12")   # Needed for \beta^i_{,jk}
betaU_dD = ixp.zerorank2()
betaU_dupD = ixp.zerorank2() # Needed for, e.g., \beta^i RHS
betaU_dDD = ixp.zerorank3() # Needed for, e.g., \bar{\Lambda}^i RHS
for i in range(DIM):
for j in range(DIM):
betaU_dD[i][j] = vetU_dD[i][j]*rfm.ReU[i] + vetU[i]*rfm.ReUdD[i][j]
betaU_dupD[i][j] = vetU_dupD[i][j]*rfm.ReU[i] + vetU[i]*rfm.ReUdD[i][j] # Needed for \beta^i RHS
for k in range(DIM):
# Needed for, e.g., \bar{\Lambda}^i RHS:
betaU_dDD[i][j][k] = vetU_dDD[i][j][k]*rfm.ReU[i] + vetU_dD[i][j]*rfm.ReUdD[i][k] + \
vetU_dD[i][k]*rfm.ReUdD[i][j] + vetU[i]*rfm.ReUdDD[i][j][k]
###Output
_____no_output_____
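###Markdown
A short, optional check (a sketch, not part of the module): because `vetU_dDD` was declared with `sym12` symmetry and the exact SymPy derivatives of `ReU` commute, $\beta^i_{,jk}$ must be symmetric in $(j,k)$:
###Code
# Illustrative check: beta^i_{,jk} should be symmetric under j <-> k. Expect no output.
for i in range(DIM):
    for j in range(DIM):
        for k in range(j+1, DIM):
            if sp.simplify(betaU_dDD[i][j][k] - betaU_dDD[i][k][j]) != 0:
                print("Asymmetry detected in betaU_dDD at", i, j, k)
###Output
_____no_output_____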
###Markdown
Step 9: **`phi_and_derivs()`**: Standard BSSN conformal factor $\phi$, and its derivatives $\phi_{,i}$, $\phi_{,ij}$, $\bar{D}_j \phi_i$, and $\bar{D}_j\bar{D}_k \phi_i$, all written in terms of BSSN gridfunctions like $\text{cf}$ \[Back to [top](toc)\]$$\label{phi_and_derivs}$$ Step 9.a: $\phi$ in terms of the chosen (possibly non-standard) conformal factor variable $\text{cf}$ (e.g., $\text{cf}=\chi=e^{-4\phi}$) \[Back to [top](toc)\]$$\label{phi_ito_cf}$$When solving the BSSN time evolution equations across the coordinate singularity (i.e., the "puncture") inside puncture black holes for example, the standard conformal factor $\phi$ becomes very sharp, whereas $\chi=e^{-4\phi}$ is far smoother (see, e.g., [Campanelli, Lousto, Marronetti, and Zlochower (2006)](https://arxiv.org/abs/gr-qc/0511048) for additional discussion). Thus if we choose to rewrite derivatives of $\phi$ in the BSSN equations in terms of finite-difference derivatives of `cf`$=\chi$, numerical errors will be far smaller near the puncture.The BSSN modules in NRPy+ support three options for the conformal factor variable `cf`:1. `cf`$=\phi$,1. `cf`$=\chi=e^{-4\phi}$, and1. `cf`$=W = e^{-2\phi}$.Since the BSSN equations are written in terms of $\phi$ (actually only $e^{-4\phi}$ appears) and derivatives of $\phi$, we now define $e^{-4\phi}$ and derivatives of $\phi$ in terms of the chosen `cf`.First, we define the base variables needed within the BSSN equations:
###Code
# Step 9: Standard BSSN conformal factor phi,
# and its partial and covariant derivatives,
# all in terms of BSSN gridfunctions like cf
# Step 9.a.i: Define partial derivatives of \phi in terms of evolved quantity "cf":
cf_dD = ixp.declarerank1("cf_dD")
cf_dupD = ixp.declarerank1("cf_dupD") # Needed for \partial_t \phi next.
cf_dDD = ixp.declarerank2("cf_dDD","sym01")
phi_dD = ixp.zerorank1()
phi_dupD = ixp.zerorank1()
phi_dDD = ixp.zerorank2()
exp_m4phi = sp.sympify(0)
###Output
_____no_output_____
###Markdown
Then we define $\phi_{,i}$, $\phi_{,ij}$, and $e^{-4\phi}$ for each of the choices of `cf`.For `cf`$=\phi$, this is trivial:
###Code
# Step 9.a.ii: Assuming cf=phi, define exp_m4phi, phi_dD,
# phi_dupD (upwind finite-difference version of phi_dD), and phi_DD
if par.parval_from_str(thismodule+"::EvolvedConformalFactor_cf") == "phi":
for i in range(DIM):
phi_dD[i] = cf_dD[i]
phi_dupD[i] = cf_dupD[i]
for j in range(DIM):
phi_dDD[i][j] = cf_dDD[i][j]
exp_m4phi = sp.exp(-4*cf)
###Output
_____no_output_____
###Markdown
For `cf`$=W=e^{-2\phi}$, we have* $\phi_{,i} = -\text{cf}_{,i} / (2 \text{cf})$* $\phi_{,ij} = (-\text{cf}_{,ij} + \text{cf}_{,i}\text{cf}_{,j}/\text{cf}) / (2 \text{cf})$* $e^{-4\phi} = \text{cf}^2$***Exercise to student: Prove the above relations***
###Code
# Step 9.a.iii: Assuming cf=W=e^{-2 phi}, define exp_m4phi, phi_dD,
# phi_dupD (upwind finite-difference version of phi_dD), and phi_DD
if par.parval_from_str(thismodule+"::EvolvedConformalFactor_cf") == "W":
# \partial_i W = \partial_i (e^{-2 phi}) = -2 e^{-2 phi} \partial_i phi
# -> \partial_i phi = -\partial_i cf / (2 cf)
for i in range(DIM):
phi_dD[i] = - cf_dD[i] / (2*cf)
phi_dupD[i] = - cf_dupD[i] / (2*cf)
for j in range(DIM):
# \partial_j \partial_i phi = - \partial_j [\partial_i cf / (2 cf)]
# = - cf_{,ij} / (2 cf) + \partial_i cf \partial_j cf / (2 cf^2)
phi_dDD[i][j] = (- cf_dDD[i][j] + cf_dD[i]*cf_dD[j] / cf) / (2*cf)
exp_m4phi = cf*cf
###Output
_____no_output_____
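###Markdown
The next cell is a purely illustrative sketch of the exercise above (not part of the module): it verifies the `cf`$=W=e^{-2\phi}$ relations using a stand-alone SymPy function of a single placeholder coordinate; the symbols `xtmp`, `phitmp`, and `Wtmp` are introduced only for this check.
###Code
# Illustrative sketch of the exercise above (not part of the module). The placeholder
# symbol xtmp and function phitmp are introduced here only for this check.
xtmp = sp.symbols('xtmp', real=True)
phitmp = sp.Function('phitmp')(xtmp)
Wtmp = sp.exp(-2*phitmp)
# Relation 1: phi_{,x} = -W_{,x}/(2 W). Expect 0:
print(sp.simplify(-sp.diff(Wtmp, xtmp)/(2*Wtmp) - sp.diff(phitmp, xtmp)))
# Relation 2: phi_{,xx} = (-W_{,xx} + W_{,x} W_{,x}/W)/(2 W). Expect 0:
print(sp.simplify((-sp.diff(Wtmp, xtmp, 2) + sp.diff(Wtmp, xtmp)**2/Wtmp)/(2*Wtmp) - sp.diff(phitmp, xtmp, 2)))
# Relation 3: e^{-4 phi} = W^2. Expect 0:
print(sp.simplify(Wtmp**2 - sp.exp(-4*phitmp)))
###Output
_____no_output_____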
###Markdown
For `cf`$=\chi=e^{-4\phi}$, we have* $\phi_{,i} = -\text{cf}_{,i} / (4 \text{cf})$* $\phi_{,ij} = (-\text{cf}_{,ij} + \text{cf}_{,i}\text{cf}_{,j}/\text{cf}) / (4 \text{cf})$* $e^{-4\phi} = \text{cf}$***Exercise to student: Prove the above relations***
###Code
# Step 9.a.iv: Assuming cf=chi=e^{-4 phi}, define exp_m4phi, phi_dD,
# phi_dupD (upwind finite-difference version of phi_dD), and phi_DD
if par.parval_from_str(thismodule+"::EvolvedConformalFactor_cf") == "chi":
# \partial_i chi = \partial_i (e^{-4 phi}) = -4 e^{-4 phi} \partial_i phi
# -> \partial_i phi = -\partial_i cf / (4 cf)
for i in range(DIM):
phi_dD[i] = - cf_dD[i] / (4*cf)
phi_dupD[i] = - cf_dupD[i] / (4*cf)
for j in range(DIM):
# \partial_j \partial_i phi = - \partial_j [\partial_i cf / (4 cf)]
# = - cf_{,ij} / (4 cf) + \partial_i cf \partial_j cf / (4 cf^2)
phi_dDD[i][j] = (- cf_dDD[i][j] + cf_dD[i]*cf_dD[j] / cf) / (4*cf)
exp_m4phi = cf
# Step 9.a.v: Error out if unsupported EvolvedConformalFactor_cf choice is made:
cf_choice = par.parval_from_str(thismodule+"::EvolvedConformalFactor_cf")
if not (cf_choice == "phi" or cf_choice == "W" or cf_choice == "chi"):
print("Error: EvolvedConformalFactor_cf == "+par.parval_from_str(thismodule+"::EvolvedConformalFactor_cf")+" unsupported!")
sys.exit(1)
###Output
_____no_output_____
###Markdown
Step 9.b: Covariant derivatives of $\phi$ \[Back to [top](toc)\]$$\label{phi_covariant_derivs}$$Since $\phi$ is a scalar, $\bar{D}_i \phi = \partial_i \phi$.Thus the second covariant derivative is given by\begin{align}\bar{D}_i \bar{D}_j \phi &= \phi_{;\bar{i}\bar{j}} = \bar{D}_i \phi_{,j}\\ &= \phi_{,ij} - \bar{\Gamma}^k_{ij} \phi_{,k}.\end{align}
###Code
# Step 9.b: Define phi_dBarD = phi_dD (since phi is a scalar) and phi_dBarDD (covariant derivative)
# \bar{D}_i \bar{D}_j \phi = \phi_{;\bar{i}\bar{j}} = \bar{D}_i \phi_{,j}
# = \phi_{,ij} - \bar{\Gamma}^k_{ij} \phi_{,k}
phi_dBarD = phi_dD
phi_dBarDD = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
phi_dBarDD[i][j] = phi_dDD[i][j]
for k in range(DIM):
phi_dBarDD[i][j] += - GammabarUDD[k][i][j]*phi_dD[k]
###Output
_____no_output_____
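###Markdown
An optional, purely illustrative check (not part of the module): since $\phi_{,ij}$ and $\bar{\Gamma}^k_{ij}$ are both symmetric in $(i,j)$, so is $\bar{D}_i \bar{D}_j \phi$:
###Code
# Illustrative check: \bar{D}_i \bar{D}_j phi must be symmetric in (i,j). Expect no output.
for i in range(DIM):
    for j in range(i+1, DIM):
        if sp.simplify(phi_dBarDD[i][j] - phi_dBarDD[j][i]) != 0:
            print("Asymmetry detected in phi_dBarDD at", i, j)
###Output
_____no_output_____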
###Markdown
Step 10: Code validation against `BSSN.BSSN_quantities` NRPy+ module \[Back to [top](toc)\]$$\label{code_validation}$$As a code validation check, we verify agreement in the SymPy expressions for the BSSN quantities constructed above between1. this tutorial and 2. the NRPy+ [BSSN.BSSN_quantities](../edit/BSSN/BSSN_quantities.py) module.By default, we analyze these expressions in Spherical coordinates, though other coordinate systems may be chosen.
###Code
all_passed=True
def comp_func(expr1,expr2,basename,prefixname2="Bq."):
    global all_passed  # without this declaration, a failed comparison would only set a local variable
    if str(expr1-expr2)!="0":
        print(basename+" - "+prefixname2+basename+" = "+ str(expr1-expr2))
        all_passed=False
def gfnm(basename,idx1,idx2=None,idx3=None):
if idx2==None:
return basename+"["+str(idx1)+"]"
if idx3==None:
return basename+"["+str(idx1)+"]["+str(idx2)+"]"
return basename+"["+str(idx1)+"]["+str(idx2)+"]["+str(idx3)+"]"
expr_list = []
exprcheck_list = []
namecheck_list = []
# Step 3:
import BSSN.BSSN_quantities as Bq
Bq.BSSN_basic_tensors()
for i in range(DIM):
namecheck_list.extend([gfnm("LambdabarU",i),gfnm("betaU",i),gfnm("BU",i)])
exprcheck_list.extend([Bq.LambdabarU[i],Bq.betaU[i],Bq.BU[i]])
expr_list.extend([LambdabarU[i],betaU[i],BU[i]])
for j in range(DIM):
namecheck_list.extend([gfnm("gammabarDD",i,j),gfnm("AbarDD",i,j)])
exprcheck_list.extend([Bq.gammabarDD[i][j],Bq.AbarDD[i][j]])
expr_list.extend([gammabarDD[i][j],AbarDD[i][j]])
# Step 4:
Bq.gammabar__inverse_and_derivs()
for i in range(DIM):
for j in range(DIM):
namecheck_list.extend([gfnm("gammabarUU",i,j)])
exprcheck_list.extend([Bq.gammabarUU[i][j]])
expr_list.extend([gammabarUU[i][j]])
for k in range(DIM):
namecheck_list.extend([gfnm("gammabarDD_dD",i,j,k),
gfnm("gammabarDD_dupD",i,j,k),
gfnm("GammabarUDD",i,j,k)])
exprcheck_list.extend([Bq.gammabarDD_dD[i][j][k],Bq.gammabarDD_dupD[i][j][k],Bq.GammabarUDD[i][j][k]])
expr_list.extend( [gammabarDD_dD[i][j][k],gammabarDD_dupD[i][j][k],GammabarUDD[i][j][k]])
# Step 5:
Bq.detgammabar_and_derivs()
namecheck_list.extend(["detgammabar"])
exprcheck_list.extend([Bq.detgammabar])
expr_list.extend([detgammabar])
for i in range(DIM):
namecheck_list.extend([gfnm("detgammabar_dD",i)])
exprcheck_list.extend([Bq.detgammabar_dD[i]])
expr_list.extend([detgammabar_dD[i]])
for j in range(DIM):
namecheck_list.extend([gfnm("detgammabar_dDD",i,j)])
exprcheck_list.extend([Bq.detgammabar_dDD[i][j]])
expr_list.extend([detgammabar_dDD[i][j]])
# Step 6:
Bq.AbarUU_AbarUD_trAbar_AbarDD_dD()
namecheck_list.extend(["trAbar"])
exprcheck_list.extend([Bq.trAbar])
expr_list.extend([trAbar])
for i in range(DIM):
for j in range(DIM):
namecheck_list.extend([gfnm("AbarUU",i,j),gfnm("AbarUD",i,j)])
exprcheck_list.extend([Bq.AbarUU[i][j],Bq.AbarUD[i][j]])
expr_list.extend([AbarUU[i][j],AbarUD[i][j]])
for k in range(DIM):
namecheck_list.extend([gfnm("AbarDD_dD",i,j,k)])
exprcheck_list.extend([Bq.AbarDD_dD[i][j][k]])
expr_list.extend([AbarDD_dD[i][j][k]])
# Step 7:
Bq.RicciBar__gammabarDD_dHatD__DGammaUDD__DGammaU()
for i in range(DIM):
namecheck_list.extend([gfnm("DGammaU",i)])
exprcheck_list.extend([Bq.DGammaU[i]])
expr_list.extend([DGammaU[i]])
for j in range(DIM):
namecheck_list.extend([gfnm("RbarDD",i,j)])
exprcheck_list.extend([Bq.RbarDD[i][j]])
expr_list.extend([RbarDD[i][j]])
for k in range(DIM):
namecheck_list.extend([gfnm("DGammaUDD",i,j,k),gfnm("gammabarDD_dHatD",i,j,k)])
exprcheck_list.extend([Bq.DGammaUDD[i][j][k],Bq.gammabarDD_dHatD[i][j][k]])
expr_list.extend([DGammaUDD[i][j][k],gammabarDD_dHatD[i][j][k]])
# Step 8:
Bq.betaU_derivs()
for i in range(DIM):
for j in range(DIM):
namecheck_list.extend([gfnm("betaU_dD",i,j),gfnm("betaU_dupD",i,j)])
exprcheck_list.extend([Bq.betaU_dD[i][j],Bq.betaU_dupD[i][j]])
expr_list.extend([betaU_dD[i][j],betaU_dupD[i][j]])
for k in range(DIM):
namecheck_list.extend([gfnm("betaU_dDD",i,j,k)])
exprcheck_list.extend([Bq.betaU_dDD[i][j][k]])
expr_list.extend([betaU_dDD[i][j][k]])
# Step 9:
Bq.phi_and_derivs()
#phi_dD,phi_dupD,phi_dDD,exp_m4phi,phi_dBarD,phi_dBarDD
namecheck_list.extend(["exp_m4phi"])
exprcheck_list.extend([Bq.exp_m4phi])
expr_list.extend([exp_m4phi])
for i in range(DIM):
namecheck_list.extend([gfnm("phi_dD",i),gfnm("phi_dupD",i),gfnm("phi_dBarD",i)])
exprcheck_list.extend([Bq.phi_dD[i],Bq.phi_dupD[i],Bq.phi_dBarD[i]])
expr_list.extend( [phi_dD[i],phi_dupD[i],phi_dBarD[i]])
for j in range(DIM):
namecheck_list.extend([gfnm("phi_dDD",i,j),gfnm("phi_dBarDD",i,j)])
exprcheck_list.extend([Bq.phi_dDD[i][j],Bq.phi_dBarDD[i][j]])
expr_list.extend([phi_dDD[i][j],phi_dBarDD[i][j]])
for i in range(len(expr_list)):
comp_func(expr_list[i],exprcheck_list[i],namecheck_list[i])
if all_passed:
print("ALL TESTS PASSED!")
###Output
ALL TESTS PASSED!
###Markdown
Step 11: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-BSSN_quantities.pdf](Tutorial-BSSN_quantities.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
!jupyter nbconvert --to latex --template latex_nrpy_style.tplx --log-level='WARN' Tutorial-BSSN_quantities.ipynb
!pdflatex -interaction=batchmode Tutorial-BSSN_quantities.tex
!pdflatex -interaction=batchmode Tutorial-BSSN_quantities.tex
!pdflatex -interaction=batchmode Tutorial-BSSN_quantities.tex
!rm -f Tut*.out Tut*.aux Tut*.log
###Output
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
###Markdown
window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'UA-59152712-8'); BSSN Quantities Author: Zach Etienne Formatting improvements courtesy Brandon Clark This module documents and constructs a number of quantities useful for building symbolic (SymPy) expressions in terms of the core BSSN quantities $\left\{h_{i j},a_{i j},\phi, K, \lambda^{i}, \alpha, \mathcal{V}^i, \mathcal{B}^i\right\}$, as defined in [Ruchlin, Etienne, and Baumgarte (2018)](https://arxiv.org/abs/1712.07658) (see also [Baumgarte, Montero, Cordero-Carrión, and Müller (2012)](https://arxiv.org/abs/1211.6632)). **Module Status:** Self-Validated **Validation Notes:** This tutorial module has been confirmed to be self-consistent with its corresponding NRPy+ module, as documented [below](code_validation). **Additional validation tests may have been performed, but are as yet, undocumented. (TODO)**[comment]: (Introduction: TODO) A Note on Notation:As is standard in NRPy+, * Greek indices refer to four-dimensional quantities where the zeroth component indicates temporal (time) component.* Latin indices refer to three-dimensional quantities. This is somewhat counterintuitive since Python always indexes its lists starting from 0. As a result, the zeroth component of three-dimensional quantities will necessarily indicate the first *spatial* direction.As a corollary, any expressions involving mixed Greek and Latin indices will need to offset one set of indices by one: A Latin index in a four-vector will be incremented and a Greek index in a three-vector will be decremented (however, the latter case does not occur in this tutorial module). Table of Contents$$\label{toc}$$Each family of quantities is constructed within a given function (**boldfaced** below). This module is organized as follows1. [Step 1](initializenrpy): Initialize needed Python/NRPy+ modules1. [Step 2](declare_bssn_gfs): **`declare_BSSN_gridfunctions_if_not_declared_already()`**: Declare all of the core BSSN variables $\left\{h_{i j},a_{i j},\text{cf}, K, \lambda^{i}, \alpha, \mathcal{V}^i, \mathcal{B}^i\right\}$ and register them as gridfunctions1. [Step 3](rescaling_tensors) Rescaling tensors to avoid coordinate singularities 1. [Step 3.a](bssn_basic_tensors) **`BSSN_basic_tensors()`**: Define all basic conformal BSSN tensors $\left\{\bar{\gamma}_{i j},\bar{A}_{i j},\bar{\Lambda}^{i}, \beta^i, B^i\right\}$ in terms of BSSN gridfunctions1. [Step 4](bssn_barred_metric__inverse_and_derivs): **`gammabar__inverse_and_derivs()`**: $\bar{\gamma}^{ij}$, and spatial derivatives of $\bar{\gamma}_{ij}$ including $\bar{\Gamma}^{i}_{jk}$ 1. [Step 4.a](bssn_barred_metric__inverse): Inverse conformal 3-metric: $\bar{\gamma^{ij}}$ 1. [Step 4.b](bssn_barred_metric__derivs): Derivatives of the conformal 3-metric $\bar{\gamma}_{ij,k}$ and $\bar{\gamma}_{ij,kl}$, and associated "barred" Christoffel symbols $\bar{\Gamma}^{i}_{jk}$1. [Step 5](detgammabar_and_derivs): **`detgammabar_and_derivs()`**: $\det \bar{\gamma}_{ij}$ and its derivatives1. [Step 6](abar_quantities): **`AbarUU_AbarUD_trAbar()`**: Quantities related to conformal traceless extrinsic curvature $\bar{A}_{ij}$: $\bar{A}^{ij}$, $\bar{A}^i_j$, and $\bar{A}^k_k$1. [Step 7](rbar): **`RicciBar__gammabarDD_dHatD__DGammaUDD__DGammaU()`**: The conformal ("barred") Ricci tensor $\bar{R}_{ij}$ and associated quantities 1. [Step 7.a](rbar_part1): Conformal Ricci tensor, part 1: The $\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j}$ term 1. 
[Step 7.b](rbar_part2): Conformal Ricci tensor, part 2: The $\bar{\gamma}_{k(i} \hat{D}_{j)} \bar{\Lambda}^{k}$ term 1. [Step 7.c](rbar_part3): Conformal Ricci tensor, part 3: The $\Delta^{k} \Delta_{(i j) k} + \bar{\gamma}^{k l} \left (2 \Delta_{k(i}^{m} \Delta_{j) m l} + \Delta_{i k}^{m} \Delta_{m j l} \right )$ terms 1. [Step 7.d](summing_rbar_terms): Summing the terms and defining $\bar{R}_{ij}$1. [Step 8](beta_derivs): **`betaU_derivs()`**: Unrescaled shift vector $\beta^i$ and spatial derivatives $\beta^i_{,j}$ and $\beta^i_{,jk}$1. [Step 9](phi_and_derivs): **`phi_and_derivs()`**: Standard BSSN conformal factor $\phi$, and its derivatives $\phi_{,i}$, $\phi_{,ij}$, $\bar{D}_j \phi_i$, and $\bar{D}_j\bar{D}_k \phi_i$ 1. [Step 9.a](phi_ito_cf): $\phi$ in terms of the chosen (possibly non-standard) conformal factor variable `cf` (e.g., `cf`$=\chi=e^{-4\phi}$) 1. [Step 9.b](phi_covariant_derivs): Partial and covariant derivatives of $\phi$1. [Step 10](code_validation): Code Validation against `BSSN.BSSN_quantities` NRPy+ module1. [Step 11](latex_pdf_output): Output this module to $\LaTeX$-formatted PDF Step 1: Initialize needed Python/NRPy+ modules \[Back to [top](toc)\]$$\label{initializenrpy}$$
###Code
# Step 1: Import all needed modules from NRPy+:
import NRPy_param_funcs as par
import sympy as sp
import indexedexp as ixp
import grid as gri
import reference_metric as rfm
# Step 1.a: Set the coordinate system for the numerical grid
par.set_parval_from_str("reference_metric::CoordSystem","Spherical")
# Step 1.b: Given the chosen coordinate system, set up
# corresponding reference metric and needed
# reference metric quantities
# The following function call sets up the reference metric
# and related quantities, including rescaling matrices ReDD,
# ReU, and hatted quantities.
rfm.reference_metric()
# Step 1.c: Set spatial dimension (must be 3 for BSSN, as BSSN is
# a 3+1-dimensional decomposition of the general
# relativistic field equations)
DIM = 3
par.set_parval_from_str("grid::DIM",DIM)
# Step 1.d: Declare/initialize parameters for this module
thismodule = "BSSN_quantities"
par.initialize_param(par.glb_param("char", thismodule, "EvolvedConformalFactor_cf", "W"))
par.initialize_param(par.glb_param("bool", thismodule, "detgbarOverdetghat_equals_one", "True"))
###Output
_____no_output_____
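###Markdown
Before registering the BSSN gridfunctions, it can be instructive to peek at a few of the exact (SymPy) reference-metric quantities that `rfm.reference_metric()` just constructed for Spherical coordinates. The cell below is purely illustrative and not part of the module:
###Code
# Illustrative only: print the rescaling factors ReU[i] (inverse scale factors) and the
# diagonal of the hatted (reference) metric constructed by reference_metric().
for i in range(DIM):
    print("ReU["+str(i)+"] =", rfm.ReU[i], ",   ghatDD["+str(i)+"]["+str(i)+"] =", rfm.ghatDD[i][i])
###Output
_____no_output_____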
###Markdown
Step 2: `declare_BSSN_gridfunctions_if_not_declared_already()`: Declare all of the core BSSN variables $\left\{h_{i j},a_{i j},\text{cf}, K, \lambda^{i}, \alpha, \mathcal{V}^i, \mathcal{B}^i\right\}$ and register them as gridfunctions \[Back to [top](toc)\]$$\label{declare_bssn_gfs}$$
###Code
# Step 2: Register all needed BSSN gridfunctions.
# Step 2.a: Register indexed quantities, using ixp.register_... functions
hDD = ixp.register_gridfunctions_for_single_rank2("EVOL", "hDD", "sym01")
aDD = ixp.register_gridfunctions_for_single_rank2("EVOL", "aDD", "sym01")
lambdaU = ixp.register_gridfunctions_for_single_rank1("EVOL", "lambdaU")
vetU = ixp.register_gridfunctions_for_single_rank1("EVOL", "vetU")
betU = ixp.register_gridfunctions_for_single_rank1("EVOL", "betU")
# Step 2.b: Register scalar quantities, using gri.register_gridfunctions()
trK, cf, alpha = gri.register_gridfunctions("EVOL",["trK", "cf", "alpha"])
###Output
_____no_output_____
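###Markdown
The registered gridfunctions are ordinary SymPy symbols, so they can be combined into symbolic expressions immediately. The cell below is purely illustrative (not part of the module); `example_expr` is introduced only for this demonstration:
###Code
# Illustrative only: registered gridfunctions are SymPy symbols and can be used in
# expressions right away. For example:
example_expr = alpha*trK + hDD[0][1]*vetU[2]
print(example_expr)
###Output
_____no_output_____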
###Markdown
Step 3: Rescaling tensors to avoid coordinate singularities \[Back to [top](toc)\]$$\label{rescaling_tensors}$$While the [covariant form of the BSSN evolution equations](Tutorial-BSSNCurvilinear.ipynb) is properly covariant (with the potential exception of the shift evolution equation, since the shift is a [freely specifiable gauge quantity](https://en.wikipedia.org/wiki/Gauge_fixing)), components of the rank-1 and rank-2 tensors $\varepsilon_{i j}$, $\bar{A}_{i j}$, and $\bar{\Lambda}^{i}$ will drop to zero (destroying information) or diverge (to $\infty$) at coordinate singularities. The good news is, this singular behavior is well-understood in terms of the scale factors of the reference metric, enabling us to define rescaled versions of these quantities that are well behaved (so that, e.g., they can be finite differenced).For example, given a smooth vector *in a 3D Cartesian basis* $\bar{\Lambda}^{i}$, all components $\bar{\Lambda}^{x}$, $\bar{\Lambda}^{y}$, and $\bar{\Lambda}^{z}$ will be smooth (by assumption). When changing the basis to spherical coordinates (applying the appropriate Jacobian matrix transformation), we will find that since $\phi = \arctan(y/x)$, $\bar{\Lambda}^{\phi}$ is given by\begin{align}\bar{\Lambda}^{\phi} &= \frac{\partial \phi}{\partial x} \bar{\Lambda}^{x} + \frac{\partial \phi}{\partial y} \bar{\Lambda}^{y} + \frac{\partial \phi}{\partial z} \bar{\Lambda}^{z} \\&= -\frac{y}{x^2+y^2} \bar{\Lambda}^{x} + \frac{x}{x^2+y^2} \bar{\Lambda}^{y} \\&= -\frac{\sin\phi}{r \sin\theta} \bar{\Lambda}^{x} + \frac{\cos\phi}{r \sin\theta} \bar{\Lambda}^{y}.\end{align}Thus $\bar{\Lambda}^{\phi}$ diverges at all points where $r\sin\theta=0$ due to the $\frac{1}{r\sin\theta}$ factor that appears in the Jacobian transformation. This divergence might pose no problem on cell-centered grids that avoid $r \sin\theta=0$, except that the BSSN equations require that *first and second derivatives* of these quantities be taken. Usual strategies for numerical approximation of these derivatives (e.g., finite difference methods) will "see" these divergences, and errors generally will not drop to zero with increased numerical sampling of the functions at points near where the functions diverge.However, notice that if we define $\lambda^{\phi}$ such that$$\bar{\Lambda}^{\phi} = \frac{1}{r\sin\theta} \lambda^{\phi},$$then $\lambda^{\phi}$ will be smooth as well. Avoiding such singularities can be generalized to other coordinate systems, so long as $\lambda^i$ is defined as:$$\bar{\Lambda}^{i} = \frac{\lambda^i}{\text{scalefactor[i]}} ,$$where scalefactor\[i\] is the $i$th scale factor in the given coordinate system. In an identical fashion, we define the smooth versions of $\beta^i$ and $B^i$ to be $\mathcal{V}^i$ and $\mathcal{B}^i$, respectively. We refer to $\mathcal{V}^i$ and $\mathcal{B}^i$ as vet\[i\] and bet\[i\], respectively, in the code, after the Hebrew letters they loosely resemble. Similarly, we define the smooth versions of $\bar{A}_{ij}$ and $\varepsilon_{ij}$ ($a_{ij}$ and $h_{ij}$, respectively) via\begin{align}\bar{A}_{ij} &= \text{scalefactor[i]}\ \text{scalefactor[j]}\ a_{ij} \\\varepsilon_{ij} &= \text{scalefactor[i]}\ \text{scalefactor[j]}\ h_{ij},\end{align}where in this case we *multiply* due to the fact that these tensors are purely covariant (as opposed to contravariant).
To slightly simplify the notation, in NRPy+ we define the *rescaling matrices* `ReU[i]` and `ReDD[i][j]`, such that\begin{align}\text{ReU[i]} &= 1 / \text{scalefactor[i]} \\\text{ReDD[i][j]} &= \text{scalefactor[i] scalefactor[j]}.\end{align}Thus, for example, $\bar{A}_{ij}$ and $\bar{\Lambda}^i$ can be expressed as the [Hadamard product](https://en.wikipedia.org/w/index.php?title=Hadamard_product_(matrices)&oldid=852272177) of matrices:\begin{align}\bar{A}_{ij} &= \mathbf{ReDD}\circ\mathbf{a} = \text{ReDD[i][j]} a_{ij} \\\bar{\Lambda}^{i} &= \mathbf{ReU}\circ\mathbf{\lambda} = \text{ReU[i]} \lambda^i,\end{align}where no sums are implied by the repeated indices.Further, since the scale factors are *time independent*, \begin{align}\partial_t \bar{A}_{ij} &= \text{ReDD[i][j]}\ \partial_t a_{ij} \\\partial_t \bar{\gamma}_{ij} &= \partial_t \left(\varepsilon_{ij} + \hat{\gamma}_{ij}\right)\\&= \partial_t \varepsilon_{ij} \\&= \text{scalefactor[i]}\ \text{scalefactor[j]}\ \partial_t h_{ij}.\end{align}Thus, instead of taking space or time derivatives of BSSN quantities$$\left\{\bar{\gamma}_{i j},\bar{A}_{i j},\phi, K, \bar{\Lambda}^{i}, \alpha, \beta^i, B^i\right\},$$ across coordinate singularities, we factor out the singular scale factors according to this prescription so that space or time derivatives of BSSN quantities are written in terms of finite-difference derivatives of the *rescaled* variables $$\left\{h_{i j},a_{i j},\text{cf}, K, \lambda^{i}, \alpha, \mathcal{V}^i, \mathcal{B}^i\right\},$$ and *exact* expressions for (spatial) derivatives of scale factors. Note that `cf` is the chosen conformal factor (supported choices for `cf` are discussed in [Step 9.a](phi_ito_cf)). As an example, let's evaluate $\bar{\Lambda}^{i}_{\, ,\, j}$ according to this prescription:\begin{align}\bar{\Lambda}^{i}_{\, ,\, j} &= \partial_j \left(\lambda^i\ \text{ReU[i]}\right) = \lambda^i_{,j}\ \text{ReU[i]} + \lambda^i\ \partial_j \left(\text{ReU[i]}\right) \\&= \lambda^i_{,j}\ \text{ReU[i]} + \lambda^i\ \text{ReUdD[i][j]}.\end{align}Here, the derivative `ReUdD[i][j]` **is computed symbolically and exactly** using SymPy, and the derivative $\partial_j \lambda^i$ represents a derivative of a *smooth* quantity (so long as $\bar{\Lambda}^{i}$ is smooth in the Cartesian basis).
Step 3.a: `BSSN_basic_tensors()`: Define all basic conformal BSSN tensors $\left\{\bar{\gamma}_{i j},\bar{A}_{i j},\bar{\Lambda}^{i}, \beta^i, B^i\right\}$ in terms of BSSN gridfunctions \[Back to [top](toc)\]$$\label{bssn_basic_tensors}$$The `BSSN_basic_tensors()` function defines the tensorial BSSN quantities $\left\{\bar{\gamma}_{i j},\bar{A}_{i j},\bar{\Lambda}^{i}, \beta^i, B^i\right\}$, in terms of the rescaled "base" tensorial quantities $\left\{h_{i j},a_{i j}, \lambda^{i}, \mathcal{V}^i, \mathcal{B}^i\right\},$ respectively:\begin{align}\bar{\gamma}_{i j} &= \hat{\gamma}_{ij} + \varepsilon_{ij}, \text{ where } \varepsilon_{ij} = h_{ij} \circ \text{ReDD[i][j]} \\\bar{A}_{i j} &= a_{ij} \circ \text{ReDD[i][j]} \\\bar{\Lambda}^{i} &= \lambda^i \circ \text{ReU[i]} \\\beta^{i} &= \mathcal{V}^i \circ \text{ReU[i]} \\B^{i} &= \mathcal{B}^i \circ \text{ReU[i]}\end{align}The rescaling of vectors and tensors is built upon the scale factors for the chosen (in general, singular) coordinate system, which are defined in NRPy+'s [reference_metric.py](../edit/reference_metric.py) ([Tutorial](Tutorial-Reference_Metric.ipynb)), and the rescaled variables are defined in the stub module [BSSN/BSSN_rescaled_vars.py](../edit/BSSN/BSSN_rescaled_vars.py). Here we implement `BSSN_basic_tensors()`:
###Code
# Step 3.a: Define all basic conformal BSSN tensors in terms of BSSN gridfunctions
# Step 3.a.i: gammabarDD and AbarDD:
gammabarDD = ixp.zerorank2()
AbarDD = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
# gammabar_{ij} = h_{ij}*ReDD[i][j] + gammahat_{ij}
gammabarDD[i][j] = hDD[i][j]*rfm.ReDD[i][j] + rfm.ghatDD[i][j]
# Abar_{ij} = a_{ij}*ReDD[i][j]
AbarDD[i][j] = aDD[i][j]*rfm.ReDD[i][j]
# Step 3.a.ii: LambdabarU, betaU, and BU:
LambdabarU = ixp.zerorank1()
betaU = ixp.zerorank1()
BU = ixp.zerorank1()
for i in range(DIM):
LambdabarU[i] = lambdaU[i]*rfm.ReU[i]
betaU[i] = vetU[i] *rfm.ReU[i]
BU[i] = betU[i] *rfm.ReU[i]
###Output
_____no_output_____
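###Markdown
As a quick, purely illustrative check (not part of the module), the rescaling is invertible: dividing $\bar{\gamma}_{ij}-\hat{\gamma}_{ij}$ by `ReDD[i][j]` must return the rescaled gridfunction $h_{ij}$:
###Code
# Illustrative check: h_{ij} = (gammabar_{ij} - gammahat_{ij}) / ReDD[i][j]. Expect "0"
# to be printed for every component.
for i in range(DIM):
    for j in range(DIM):
        print(sp.simplify((gammabarDD[i][j] - rfm.ghatDD[i][j])/rfm.ReDD[i][j] - hDD[i][j]))
###Output
_____no_output_____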
###Markdown
Step 4: `gammabar__inverse_and_derivs()`: $\bar{\gamma}^{ij}$, and spatial derivatives of $\bar{\gamma}_{ij}$ including $\bar{\Gamma}^{i}_{jk}$ \[Back to [top](toc)\]$$\label{bssn_barred_metric__inverse_and_derivs}$$ Step 4.a: Inverse conformal 3-metric: $\bar{\gamma^{ij}}$ \[Back to [top](toc)\]$$\label{bssn_barred_metric__inverse}$$Since $\bar{\gamma}^{ij}$ is the inverse of $\bar{\gamma}_{ij}$, we apply a $3\times 3$ symmetric matrix inversion to compute $\bar{\gamma}^{ij}$.
###Code
# Step 4.a: Inverse conformal 3-metric gammabarUU:
# Step 4.a.i: gammabarUU:
gammabarUU, dummydet = ixp.symm_matrix_inverter3x3(gammabarDD)
###Output
_____no_output_____
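###Markdown
An optional sanity check (purely illustrative, not part of the module, and potentially somewhat slow): contracting $\bar{\gamma}^{ik}$ with $\bar{\gamma}_{kj}$ must return the Kronecker delta $\delta^i_j$; the helper variables below are introduced only for this check:
###Code
# Illustrative check: gammabar^{ik} gammabar_{kj} = delta^i_j. The symbolic simplification
# can take some time; expect no output if all components check out.
for i in range(DIM):
    for j in range(DIM):
        contraction = sp.sympify(0)
        for k in range(DIM):
            contraction += gammabarUU[i][k]*gammabarDD[k][j]
        expected = sp.sympify(1) if i == j else sp.sympify(0)
        if sp.simplify(contraction - expected) != 0:
            print("Unexpected result in inverse-metric check at", i, j)
###Output
_____no_output_____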
###Markdown
Step 4.b: Derivatives of the conformal 3-metric $\bar{\gamma}_{ij,k}$ and $\bar{\gamma}_{ij,kl}$, and associated "barred" Christoffel symbols $\bar{\Gamma}^{i}_{jk}$ \[Back to [top](toc)\]$$\label{bssn_barred_metric__derivs}$$In the BSSN-in-curvilinear coordinates formulation, all quantities must be defined in terms of rescaled quantities $h_{ij}$ and their derivatives (evaluated using finite differences), as well as reference-metric quantities and their derivatives (evaluated exactly using SymPy). For example, $\bar{\gamma}_{ij,k}$ is given by:\begin{align}\bar{\gamma}_{ij,k} &= \partial_k \bar{\gamma}_{ij} \\&= \partial_k \left(\hat{\gamma}_{ij} + \varepsilon_{ij}\right) \\&= \partial_k \left(\hat{\gamma}_{ij} + h_{ij} \text{ReDD[i][j]}\right) \\&= \hat{\gamma}_{ij,k} + h_{ij,k} \text{ReDD[i][j]} + h_{ij} \text{ReDDdD[i][j][k]},\end{align}where `ReDDdD[i][j][k]` is computed within `rfm.reference_metric()`.
###Code
# Step 4.b.i gammabarDDdD[i][j][k]
# = \hat{\gamma}_{ij,k} + h_{ij,k} \text{ReDD[i][j]} + h_{ij} \text{ReDDdD[i][j][k]}.
gammabarDD_dD = ixp.zerorank3()
hDD_dD = ixp.declarerank3("hDD_dD","sym01")
hDD_dupD = ixp.declarerank3("hDD_dupD","sym01")
gammabarDD_dupD = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
gammabarDD_dD[i][j][k] = rfm.ghatDDdD[i][j][k] + \
hDD_dD[i][j][k]*rfm.ReDD[i][j] + hDD[i][j]*rfm.ReDDdD[i][j][k]
# Compute associated upwinded derivative, needed for the \bar{\gamma}_{ij} RHS
gammabarDD_dupD[i][j][k] = rfm.ghatDDdD[i][j][k] + \
hDD_dupD[i][j][k]*rfm.ReDD[i][j] + hDD[i][j]*rfm.ReDDdD[i][j][k]
###Output
_____no_output_____
###Markdown
By extension, the second derivative $\bar{\gamma}_{ij,kl}$ is given by\begin{align}\bar{\gamma}_{ij,kl} &= \partial_l \left(\hat{\gamma}_{ij,k} + h_{ij,k} \text{ReDD[i][j]} + h_{ij} \text{ReDDdD[i][j][k]}\right)\\&= \hat{\gamma}_{ij,kl} + h_{ij,kl} \text{ReDD[i][j]} + h_{ij,k} \text{ReDDdD[i][j][l]} + h_{ij,l} \text{ReDDdD[i][j][k]} + h_{ij} \text{ReDDdDD[i][j][k][l]}\end{align}
###Code
# Step 4.b.ii: Compute gammabarDD_dDD in terms of the rescaled BSSN quantity hDD
# and its derivatives, as well as the reference metric and rescaling
# matrix, and its derivatives (expression given below):
hDD_dDD = ixp.declarerank4("hDD_dDD","sym01_sym23")
gammabarDD_dDD = ixp.zerorank4()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
# gammabar_{ij,kl} = gammahat_{ij,kl}
# + h_{ij,kl} ReDD[i][j]
# + h_{ij,k} ReDDdD[i][j][l] + h_{ij,l} ReDDdD[i][j][k]
# + h_{ij} ReDDdDD[i][j][k][l]
gammabarDD_dDD[i][j][k][l] = rfm.ghatDDdDD[i][j][k][l]
gammabarDD_dDD[i][j][k][l] += hDD_dDD[i][j][k][l]*rfm.ReDD[i][j]
gammabarDD_dDD[i][j][k][l] += hDD_dD[i][j][k]*rfm.ReDDdD[i][j][l] + \
hDD_dD[i][j][l]*rfm.ReDDdD[i][j][k]
gammabarDD_dDD[i][j][k][l] += hDD[i][j]*rfm.ReDDdDD[i][j][k][l]
###Output
_____no_output_____
###Markdown
Finally, we compute the Christoffel symbol associated with the barred 3-metric: $\bar{\Gamma}^{i}_{kl}$:$$\bar{\Gamma}^{i}_{kl} = \frac{1}{2} \bar{\gamma}^{im} \left(\bar{\gamma}_{mk,l} + \bar{\gamma}_{ml,k} - \bar{\gamma}_{kl,m} \right)$$
###Code
# Step 4.b.iii: Define barred Christoffel symbol \bar{\Gamma}^{i}_{kl} = GammabarUDD[i][k][l] (see expression below)
GammabarUDD = ixp.zerorank3()
for i in range(DIM):
for k in range(DIM):
for l in range(DIM):
for m in range(DIM):
# Gammabar^i_{kl} = 1/2 * gammabar^{im} ( gammabar_{mk,l} + gammabar_{ml,k} - gammabar_{kl,m}):
GammabarUDD[i][k][l] += sp.Rational(1,2)*gammabarUU[i][m]* \
(gammabarDD_dD[m][k][l] + gammabarDD_dD[m][l][k] - gammabarDD_dD[k][l][m])
###Output
_____no_output_____
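###Markdown
A short, optional check (illustrative only, not part of the module): by construction, $\bar{\Gamma}^{i}_{kl}$ must be symmetric in its lower indices $(k,l)$:
###Code
# Illustrative check: Gammabar^i_{kl} = Gammabar^i_{lk}. Expect no output.
for i in range(DIM):
    for k in range(DIM):
        for l in range(k+1, DIM):
            if sp.simplify(GammabarUDD[i][k][l] - GammabarUDD[i][l][k]) != 0:
                print("Asymmetry detected in GammabarUDD at", i, k, l)
###Output
_____no_output_____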
###Markdown
Step 5: `detgammabar_and_derivs()`: $\det \bar{\gamma}_{ij}$ and its derivatives \[Back to [top](toc)\]$$\label{detgammabar_and_derivs}$$As described just before Section III of [Baumgarte *et al* (2012)](https://arxiv.org/pdf/1211.6632.pdf), we are free to choose $\det \bar{\gamma}_{ij}$, which should remain fixed in time.As in [Baumgarte *et al* (2012)](https://arxiv.org/pdf/1211.6632.pdf) generally we make the choice $\det \bar{\gamma}_{ij} = \det \hat{\gamma}_{ij}$, but *this need not be the case; we could choose to set $\det \bar{\gamma}_{ij}$ to another expression.*In case we do not choose to set $\det \bar{\gamma}_{ij}/\det \hat{\gamma}_{ij}=1$, below we begin the implementation of a gridfunction, `detgbarOverdetghat`, which defines an alternative expression in its place. $\det \bar{\gamma}_{ij}/\det \hat{\gamma}_{ij}$=`detgbarOverdetghat`$\ne 1$ is not yet implemented. However, we can define `detgammabar` and its derivatives in terms of a generic `detgbarOverdetghat` and $\det \hat{\gamma}_{ij}$ and their derivatives:\begin{align}\text{detgammabar} &= \det \bar{\gamma}_{ij} = \text{detgbarOverdetghat} \cdot \left(\det \hat{\gamma}_{ij}\right) \\\text{detgammabar}\_\text{dD[k]} &= \left(\det \bar{\gamma}_{ij}\right)_{,k} = \text{detgbarOverdetghat}\_\text{dD[k]} \det \hat{\gamma}_{ij} + \text{detgbarOverdetghat} \left(\det \hat{\gamma}_{ij}\right)_{,k} \\\text{detgammabar}\_\text{dDD[k][l]} &= \left(\det \bar{\gamma}_{ij}\right)_{,kl} = \text{detgbarOverdetghat}\_\text{dDD[k][l]} \det \hat{\gamma}_{ij} + \text{detgbarOverdetghat}\_\text{dD[k]} \left(\det \hat{\gamma}_{ij}\right)_{,l} + \text{detgbarOverdetghat}\_\text{dD[l]} \left(\det \hat{\gamma}_{ij}\right)_{,k} + \text{detgbarOverdetghat} \left(\det \hat{\gamma}_{ij}\right)_{,kl}\end{align}The needed derivatives of $\det \hat{\gamma}_{ij}$ (`rfm.detgammahatdD`, `rfm.detgammahatdDD`) follow from the [properties of the determinant](https://en.wikipedia.org/wiki/Determinant#Properties_of_the_determinant) and are provided by NRPy+'s `reference_metric()`.
###Code
# Step 5: det(gammabarDD) and its derivatives
detgbarOverdetghat = sp.sympify(1)
detgbarOverdetghat_dD = ixp.zerorank1()
detgbarOverdetghat_dDD = ixp.zerorank2()
if par.parval_from_str(thismodule+"::detgbarOverdetghat_equals_one") == "False":
print("Error: detgbarOverdetghat_equals_one=\"False\" is not fully implemented yet.")
exit(1)
## Approach for implementing detgbarOverdetghat_equals_one=False:
# detgbarOverdetghat = gri.register_gridfunctions("AUX", ["detgbarOverdetghat"])
# detgbarOverdetghatInitial = gri.register_gridfunctions("AUX", ["detgbarOverdetghatInitial"])
# detgbarOverdetghat_dD = ixp.declarerank1("detgbarOverdetghat_dD")
# detgbarOverdetghat_dDD = ixp.declarerank2("detgbarOverdetghat_dDD", "sym01")
# Step 5.b: Define detgammabar, detgammabar_dD, and detgammabar_dDD (needed for
#          \partial_t \bar{\Lambda}^i below)
detgammabar = detgbarOverdetghat * rfm.detgammahat
detgammabar_dD = ixp.zerorank1()
for i in range(DIM):
detgammabar_dD[i] = detgbarOverdetghat_dD[i] * rfm.detgammahat + detgbarOverdetghat * rfm.detgammahatdD[i]
detgammabar_dDD = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
detgammabar_dDD[i][j] = detgbarOverdetghat_dDD[i][j] * rfm.detgammahat + \
detgbarOverdetghat_dD[i] * rfm.detgammahatdD[j] + \
detgbarOverdetghat_dD[j] * rfm.detgammahatdD[i] + \
detgbarOverdetghat * rfm.detgammahatdDD[i][j]
###Output
_____no_output_____
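###Markdown
The determinant property referenced above is Jacobi's formula, $\partial_k \det \hat{\gamma} = \det \hat{\gamma}\ \hat{\gamma}^{ij} \hat{\gamma}_{ij,k}$. The cell below is a purely illustrative sketch (not part of the module) verifying it for the reference metric; `ghatUU_check` and `detghat_check` are introduced only for this check:
###Code
# Illustrative sketch (not part of the module): verify Jacobi's formula for the reference
# metric, d(det gammahat)/dx^k = det(gammahat) * gammahat^{ij} gammahat_{ij,k}.
# Expect "0" to be printed for each k.
ghatUU_check, detghat_check = ixp.symm_matrix_inverter3x3(rfm.ghatDD)
for k in range(DIM):
    jacobi_rhs = sp.sympify(0)
    for i in range(DIM):
        for j in range(DIM):
            jacobi_rhs += detghat_check * ghatUU_check[i][j] * rfm.ghatDDdD[i][j][k]
    print(sp.simplify(jacobi_rhs - rfm.detgammahatdD[k]))
###Output
_____no_output_____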
###Markdown
Step 6: `AbarUU_AbarUD_trAbar_AbarDD_dD()`: Quantities related to conformal traceless extrinsic curvature $\bar{A}_{ij}$: $\bar{A}^{ij}$, $\bar{A}^i_j$, and $\bar{A}^k_k$ \[Back to [top](toc)\]$$\label{abar_quantities}$$$\bar{A}^{ij}$ is given by application of the raising operators (a.k.a., the inverse 3-metric) $\bar{\gamma}^{jk}$ on both of the covariant ("down") components:$$\bar{A}^{ij} = \bar{\gamma}^{ik}\bar{\gamma}^{jl} \bar{A}_{kl}.$$$\bar{A}^i_j$ is given by a single application of the raising operator (a.k.a., the inverse 3-metric) $\bar{\gamma}^{ik}$ on $\bar{A}_{kj}$:$$\bar{A}^i_j = \bar{\gamma}^{ik}\bar{A}_{kj}.$$The trace of $\bar{A}_{ij}$, $\bar{A}^k_k$, is given by a contraction with the barred 3-metric:$$\text{Tr}(\bar{A}_{ij}) = \bar{A}^k_k = \bar{\gamma}^{kj}\bar{A}_{jk}.$$Note that while $\bar{A}_{ij}$ is defined as the *traceless* conformal extrinsic curvature, it may acquire a nonzero trace (assuming the initial data impose tracelessness) due to numerical error. $\text{Tr}(\bar{A}_{ij})$ is included in the BSSN equations to drive $\text{Tr}(\bar{A}_{ij})$ to zero.In terms of rescaled BSSN quantities, $\bar{A}_{ij}$ is given by$$\bar{A}_{ij} = \text{ReDD[i][j]} a_{ij},$$so in terms of the same quantities, $\bar{A}_{ij,k}$ is given by$$\bar{A}_{ij,k} = \text{ReDDdD[i][j][k]} a_{ij} + \text{ReDD[i][j]} a_{ij,k}.$$
###Code
# Step 6: Quantities related to conformal traceless extrinsic curvature
# Step 6.a.i: Compute Abar^{ij} in terms of Abar_{ij} and gammabar^{ij}
AbarUU = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
# Abar^{ij} = gammabar^{ik} gammabar^{jl} Abar_{kl}
AbarUU[i][j] += gammabarUU[i][k]*gammabarUU[j][l]*AbarDD[k][l]
# Step 6.a.ii: Compute Abar^i_j in terms of Abar_{ij} and gammabar^{ij}
AbarUD = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
# Abar^i_j = gammabar^{ik} Abar_{kj}
AbarUD[i][j] += gammabarUU[i][k]*AbarDD[k][j]
# Step 6.a.iii: Compute Abar^k_k = trace of Abar:
trAbar = sp.sympify(0)
for k in range(DIM):
for j in range(DIM):
# Abar^k_k = gammabar^{kj} Abar_{jk}
trAbar += gammabarUU[k][j]*AbarDD[j][k]
# Step 6.a.iv: Compute Abar_{ij,k}
AbarDD_dD = ixp.zerorank3()
AbarDD_dupD = ixp.zerorank3()
aDD_dD = ixp.declarerank3("aDD_dD" ,"sym01")
aDD_dupD = ixp.declarerank3("aDD_dupD","sym01")
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
AbarDD_dupD[i][j][k] = rfm.ReDDdD[i][j][k]*aDD[i][j] + rfm.ReDD[i][j]*aDD_dupD[i][j][k]
AbarDD_dD[i][j][k] = rfm.ReDDdD[i][j][k]*aDD[i][j] + rfm.ReDD[i][j]*aDD_dD[ i][j][k]
###Output
_____no_output_____
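###Markdown
A brief, optional check (illustrative only, not part of the module): since $a_{ij}$, `ReDD[i][j]`, and `ReDDdD[i][j][k]` are all symmetric under $i\leftrightarrow j$, so is $\bar{A}_{ij,k}$:
###Code
# Illustrative check: Abar_{ij,k} = Abar_{ji,k}. Expect no output.
for i in range(DIM):
    for j in range(i+1, DIM):
        for k in range(DIM):
            if sp.simplify(AbarDD_dD[i][j][k] - AbarDD_dD[j][i][k]) != 0:
                print("Asymmetry detected in AbarDD_dD at", i, j, k)
###Output
_____no_output_____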
###Markdown
Step 7: `RicciBar__gammabarDD_dHatD__DGammaUDD__DGammaU()`: The conformal ("barred") Ricci tensor $\bar{R}_{ij}$ and associated quantities \[Back to [top](toc)\]$$\label{rbar}$$Let's compute perhaps the most complicated expression in the BSSN evolution equations, the conformal Ricci tensor:\begin{align} \bar{R}_{i j} {} = {} & - \frac{1}{2} \bar{\gamma}^{k l} \hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} + \bar{\gamma}_{k(i} \hat{D}_{j)} \bar{\Lambda}^{k} + \Delta^{k} \Delta_{(i j) k} \nonumber \\ & + \bar{\gamma}^{k l} \left (2 \Delta_{k(i}^{m} \Delta_{j) m l} + \Delta_{i k}^{m} \Delta_{m j l} \right ) \; .\end{align}Let's tackle the $\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j}$ term first: Step 7.a: Conformal Ricci tensor, part 1: The $\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j}$ term \[Back to [top](toc)\]$$\label{rbar_part1}$$First note that the covariant derivative of a metric with respect to itself is zero$$\hat{D}_{l} \hat{\gamma}_{ij} = 0,$$so $$\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} = \hat{D}_{k} \hat{D}_{l} \left(\hat{\gamma}_{i j} + \varepsilon_{ij}\right) = \hat{D}_{k} \hat{D}_{l} \varepsilon_{ij}.$$Next, the covariant derivative of a tensor is given by (from the [wikipedia article on covariant differentiation](https://en.wikipedia.org/wiki/Covariant_derivative)):\begin{align} {(\nabla_{e_c} T)^{a_1 \ldots a_r}}_{b_1 \ldots b_s} = {} &\frac{\partial}{\partial x^c}{T^{a_1 \ldots a_r}}_{b_1 \ldots b_s} \\ &+ \,{\Gamma ^{a_1}}_{dc} {T^{d a_2 \ldots a_r}}_{b_1 \ldots b_s} + \cdots + {\Gamma^{a_r}}_{dc} {T^{a_1 \ldots a_{r-1}d}}_{b_1 \ldots b_s} \\ &-\,{\Gamma^d}_{b_1 c} {T^{a_1 \ldots a_r}}_{d b_2 \ldots b_s} - \cdots - {\Gamma^d}_{b_s c} {T^{a_1 \ldots a_r}}_{b_1 \ldots b_{s-1} d}.\end{align}Therefore, $$\hat{D}_{l} \bar{\gamma}_{i j} = \hat{D}_{l} \varepsilon_{i j} = \varepsilon_{i j,l} - \hat{\Gamma}^m_{i l} \varepsilon_{m j} -\hat{\Gamma}^m_{j l} \varepsilon_{i m}.$$Since the covariant first derivative is a tensor, the covariant second derivative is given by (same as [Eq. 27 in Baumgarte et al (2012)](https://arxiv.org/pdf/1211.6632.pdf))\begin{align}\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} &= \hat{D}_{k} \hat{D}_{l} \varepsilon_{i j} \\&= \partial_k \hat{D}_{l} \varepsilon_{i j} - \hat{\Gamma}^m_{lk} \left(\hat{D}_{m} \varepsilon_{i j}\right) - \hat{\Gamma}^m_{ik} \left(\hat{D}_{l} \varepsilon_{m j}\right) - \hat{\Gamma}^m_{jk} \left(\hat{D}_{l} \varepsilon_{i m}\right),\end{align}where the first term is the partial derivative of the expression already derived for $\hat{D}_{l} \varepsilon_{i j}$:\begin{align}\partial_k \hat{D}_{l} \varepsilon_{i j} &= \partial_k \left(\varepsilon_{ij,l} - \hat{\Gamma}^m_{i l} \varepsilon_{m j} -\hat{\Gamma}^m_{j l} \varepsilon_{i m} \right) \\&= \varepsilon_{ij,lk} - \hat{\Gamma}^m_{i l,k} \varepsilon_{m j} - \hat{\Gamma}^m_{i l} \varepsilon_{m j,k} - \hat{\Gamma}^m_{j l,k} \varepsilon_{i m} - \hat{\Gamma}^m_{j l} \varepsilon_{i m,k}.\end{align}In terms of the evolved quantity $h_{ij}$, the derivatives of $\varepsilon_{ij}$ are given by:\begin{align}\varepsilon_{ij,k} &= \partial_k \left(h_{ij} \text{ReDD[i][j]}\right) \\&= h_{ij,k} \text{ReDD[i][j]} + h_{ij} \text{ReDDdD[i][j][k]},\end{align}and\begin{align}\varepsilon_{ij,kl} &= \partial_l \left(h_{ij,k} \text{ReDD[i][j]} + h_{ij} \text{ReDDdD[i][j][k]} \right)\\&= h_{ij,kl} \text{ReDD[i][j]} + h_{ij,k} \text{ReDDdD[i][j][l]} + h_{ij,l} \text{ReDDdD[i][j][k]} + h_{ij} \text{ReDDdDD[i][j][k][l]}.\end{align}
###Code
# Step 7: Conformal Ricci tensor, part 1: The \hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} term
# Step 7.a.i: Define \varepsilon_{ij} = epsDD[i][j]
epsDD = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
epsDD[i][j] = hDD[i][j]*rfm.ReDD[i][j]
# Step 7.a.ii: Define epsDD_dD[i][j][k]
hDD_dD = ixp.declarerank3("hDD_dD","sym01")
epsDD_dD = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
epsDD_dD[i][j][k] = hDD_dD[i][j][k]*rfm.ReDD[i][j] + hDD[i][j]*rfm.ReDDdD[i][j][k]
# Step 7.a.iii: Define epsDD_dDD[i][j][k][l]
hDD_dDD = ixp.declarerank4("hDD_dDD","sym01_sym23")
epsDD_dDD = ixp.zerorank4()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
epsDD_dDD[i][j][k][l] = hDD_dDD[i][j][k][l]*rfm.ReDD[i][j] + \
hDD_dD[i][j][k]*rfm.ReDDdD[i][j][l] + \
hDD_dD[i][j][l]*rfm.ReDDdD[i][j][k] + \
hDD[i][j]*rfm.ReDDdDD[i][j][k][l]
###Output
_____no_output_____
###Markdown
We next compute three quantities derived above:* `gammabarDD_dHatD[i][j][l]` = $\hat{D}_{l} \bar{\gamma}_{i j} = \hat{D}_{l} \varepsilon_{i j} = \varepsilon_{i j,l} - \hat{\Gamma}^m_{i l} \varepsilon_{m j} -\hat{\Gamma}^m_{j l} \varepsilon_{i m}$,* `gammabarDD_dHatD_dD[i][j][l][k]` = $\partial_k \hat{D}_{l} \bar{\gamma}_{i j} = \partial_k \hat{D}_{l} \varepsilon_{i j} = \varepsilon_{ij,lk} - \hat{\Gamma}^m_{i l,k} \varepsilon_{m j} - \hat{\Gamma}^m_{i l} \varepsilon_{m j,k} - \hat{\Gamma}^m_{j l,k} \varepsilon_{i m} - \hat{\Gamma}^m_{j l} \varepsilon_{i m,k}$, and* `gammabarDD_dHatDD[i][j][l][k]` = $\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} = \partial_k \hat{D}_{l} \varepsilon_{i j} - \hat{\Gamma}^m_{lk} \left(\hat{D}_{m} \varepsilon_{i j}\right) - \hat{\Gamma}^m_{ik} \left(\hat{D}_{l} \varepsilon_{m j}\right) - \hat{\Gamma}^m_{jk} \left(\hat{D}_{l} \varepsilon_{i m}\right)$.
###Code
# Step 7.a.iv: DhatgammabarDDdD[i][j][l] = \bar{\gamma}_{ij;\hat{l}}
# \bar{\gamma}_{ij;\hat{l}} = \varepsilon_{i j,l}
# - \hat{\Gamma}^m_{i l} \varepsilon_{m j}
# - \hat{\Gamma}^m_{j l} \varepsilon_{i m}
gammabarDD_dHatD = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for l in range(DIM):
gammabarDD_dHatD[i][j][l] = epsDD_dD[i][j][l]
for m in range(DIM):
gammabarDD_dHatD[i][j][l] += - rfm.GammahatUDD[m][i][l]*epsDD[m][j] \
- rfm.GammahatUDD[m][j][l]*epsDD[i][m]
# Step 7.a.v: \bar{\gamma}_{ij;\hat{l},k} = DhatgammabarDD_dHatD_dD[i][j][l][k]:
# \bar{\gamma}_{ij;\hat{l},k} = \varepsilon_{ij,lk}
# - \hat{\Gamma}^m_{i l,k} \varepsilon_{m j}
# - \hat{\Gamma}^m_{i l} \varepsilon_{m j,k}
# - \hat{\Gamma}^m_{j l,k} \varepsilon_{i m}
# - \hat{\Gamma}^m_{j l} \varepsilon_{i m,k}
gammabarDD_dHatD_dD = ixp.zerorank4()
for i in range(DIM):
for j in range(DIM):
for l in range(DIM):
for k in range(DIM):
gammabarDD_dHatD_dD[i][j][l][k] = epsDD_dDD[i][j][l][k]
for m in range(DIM):
gammabarDD_dHatD_dD[i][j][l][k] += -rfm.GammahatUDDdD[m][i][l][k]*epsDD[m][j] \
-rfm.GammahatUDD[m][i][l]*epsDD_dD[m][j][k] \
-rfm.GammahatUDDdD[m][j][l][k]*epsDD[i][m] \
-rfm.GammahatUDD[m][j][l]*epsDD_dD[i][m][k]
# Step 7.a.vi: \bar{\gamma}_{ij;\hat{l}\hat{k}} = DhatgammabarDD_dHatDD[i][j][l][k]
# \bar{\gamma}_{ij;\hat{l}\hat{k}} = \partial_k \hat{D}_{l} \varepsilon_{i j}
# - \hat{\Gamma}^m_{lk} \left(\hat{D}_{m} \varepsilon_{i j}\right)
# - \hat{\Gamma}^m_{ik} \left(\hat{D}_{l} \varepsilon_{m j}\right)
# - \hat{\Gamma}^m_{jk} \left(\hat{D}_{l} \varepsilon_{i m}\right)
gammabarDD_dHatDD = ixp.zerorank4()
for i in range(DIM):
for j in range(DIM):
for l in range(DIM):
for k in range(DIM):
gammabarDD_dHatDD[i][j][l][k] = gammabarDD_dHatD_dD[i][j][l][k]
for m in range(DIM):
gammabarDD_dHatDD[i][j][l][k] += - rfm.GammahatUDD[m][l][k]*gammabarDD_dHatD[i][j][m] \
- rfm.GammahatUDD[m][i][k]*gammabarDD_dHatD[m][j][l] \
- rfm.GammahatUDD[m][j][k]*gammabarDD_dHatD[i][m][l]
###Output
_____no_output_____
###Markdown
Step 7.b: Conformal Ricci tensor, part 2: The $\bar{\gamma}_{k(i} \hat{D}_{j)} \bar{\Lambda}^{k}$ term \[Back to [top](toc)\]$$\label{rbar_part2}$$By definition, the index symmetrization operation is given by:$$\bar{\gamma}_{k(i} \hat{D}_{j)} \bar{\Lambda}^{k} = \frac{1}{2} \left( \bar{\gamma}_{ki} \hat{D}_{j} \bar{\Lambda}^{k} + \bar{\gamma}_{kj} \hat{D}_{i} \bar{\Lambda}^{k} \right),$$and $\bar{\gamma}_{ij}$ is trivially computed ($=\varepsilon_{ij} + \hat{\gamma}_{ij}$) so the only nontrivial part of computing this term is evaluating $\hat{D}_{j} \bar{\Lambda}^{k}$.The covariant derivative is with respect to the hatted metric (i.e., the reference metric), so$$\hat{D}_{j} \bar{\Lambda}^{k} = \partial_j \bar{\Lambda}^{k} + \hat{\Gamma}^{k}_{mj} \bar{\Lambda}^m,$$except we cannot take derivatives of $\bar{\Lambda}^{k}$ directly due to potential issues with coordinate singularities. Instead we write it in terms of the rescaled quantity $\lambda^k$ via$$\bar{\Lambda}^{k} = \lambda^k \text{ReU[k]}.$$Then the expression for $\hat{D}_{j} \bar{\Lambda}^{k}$ becomes$$\hat{D}_{j} \bar{\Lambda}^{k} = \lambda^{k}_{,j} \text{ReU[k]} + \lambda^{k} \text{ReUdD[k][j]} + \hat{\Gamma}^{k}_{mj} \lambda^{m} \text{ReU[m]},$$and the NRPy+ code for this expression is written below:
###Code
# Step 7.b: Second term of RbarDD: compute \hat{D}_{j} \bar{\Lambda}^{k} = LambarU_dHatD[k][j]
lambdaU_dD = ixp.declarerank2("lambdaU_dD","nosym")
LambarU_dHatD = ixp.zerorank2()
for j in range(DIM):
for k in range(DIM):
LambarU_dHatD[k][j] = lambdaU_dD[k][j]*rfm.ReU[k] + lambdaU[k]*rfm.ReUdD[k][j]
for m in range(DIM):
LambarU_dHatD[k][j] += rfm.GammahatUDD[k][m][j]*lambdaU[m]*rfm.ReU[m]
###Output
_____no_output_____
###Markdown
Step 7.c: Conformal Ricci tensor, part 3: The $\Delta^{k} \Delta_{(i j) k} + \bar{\gamma}^{k l} \left (2 \Delta_{k(i}^{m} \Delta_{j) m l} + \Delta_{i k}^{m} \Delta_{m j l} \right )$ terms \[Back to [top](toc)\]$$\label{rbar_part3}$$Our goal here is to compute the quantities appearing as the final terms of the conformal Ricci tensor:$$\Delta^{k} \Delta_{(i j) k} + \bar{\gamma}^{k l} \left (2 \Delta_{k(i}^{m} \Delta_{j) m l} + \Delta_{i k}^{m} \Delta_{m j l} \right).$$* `DGammaUDD[k][i][j]`$= \Delta^k_{ij}$ is simply the difference in Christoffel symbols: $\Delta^{k}_{ij} = \bar{\Gamma}^k_{ij} - \hat{\Gamma}^k_{ij}$, and * `DGammaU[k]`$= \Delta^k$ is the contraction: $\bar{\gamma}^{ij} \Delta^{k}_{ij}$Adding these expressions to Ricci is straightforward, since $\bar{\Gamma}^i_{jk}$ and $\bar{\gamma}^{ij}$ were defined above in [Step 4](bssn_barred_metric__inverse_and_derivs), and $\hat{\Gamma}^i_{jk}$ was computed within NRPy+'s `reference_metric()` function:
###Code
# Step 7.c: Conformal Ricci tensor, part 3: The \Delta^{k} \Delta_{(i j) k}
# + \bar{\gamma}^{k l}*(2 \Delta_{k(i}^{m} \Delta_{j) m l}
# + \Delta_{i k}^{m} \Delta_{m j l}) terms
# Step 7.c.i: Define \Delta^i_{jk} = \bar{\Gamma}^i_{jk} - \hat{\Gamma}^i_{jk} = DGammaUDD[i][j][k]
DGammaUDD = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
DGammaUDD[i][j][k] = GammabarUDD[i][j][k] - rfm.GammahatUDD[i][j][k]
# Step 7.c.ii: Define \Delta^i = \bar{\gamma}^{jk} \Delta^i_{jk}
DGammaU = ixp.zerorank1()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
DGammaU[i] += gammabarUU[j][k] * DGammaUDD[i][j][k]
###Output
_____no_output_____
###Markdown
Next we define $\Delta_{ijk}=\bar{\gamma}_{im}\Delta^m_{jk}$:
###Code
# Step 7.c.iii: Define \Delta_{ijk} = \bar{\gamma}_{im} \Delta^m_{jk}
DGammaDDD = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for m in range(DIM):
DGammaDDD[i][j][k] += gammabarDD[i][m] * DGammaUDD[m][j][k]
###Output
_____no_output_____
###Markdown
Step 7.d: Summing the terms and defining $\bar{R}_{ij}$ \[Back to [top](toc)\]$$\label{summing_rbar_terms}$$We have now constructed all of the terms going into $\bar{R}_{ij}$:\begin{align} \bar{R}_{i j} {} = {} & - \frac{1}{2} \bar{\gamma}^{k l} \hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} + \bar{\gamma}_{k(i} \hat{D}_{j)} \bar{\Lambda}^{k} + \Delta^{k} \Delta_{(i j) k} \nonumber \\ & + \bar{\gamma}^{k l} \left (2 \Delta_{k(i}^{m} \Delta_{j) m l} + \Delta_{i k}^{m} \Delta_{m j l} \right ) \; .\end{align}
###Code
# Step 7.d: Summing the terms and defining \bar{R}_{ij}
# Step 7.d.i: Add the first term to RbarDD:
# Rbar_{ij} += - \frac{1}{2} \bar{\gamma}^{k l} \hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j}
RbarDD = ixp.zerorank2()
RbarDDpiece = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
RbarDD[i][j] += -sp.Rational(1,2) * gammabarUU[k][l]*gammabarDD_dHatDD[i][j][l][k]
RbarDDpiece[i][j] += -sp.Rational(1,2) * gammabarUU[k][l]*gammabarDD_dHatDD[i][j][l][k]
# Step 7.d.ii: Add the second term to RbarDD:
# Rbar_{ij} += (1/2) * (gammabar_{ki} Lambar^k_{;\hat{j}} + gammabar_{kj} Lambar^k_{;\hat{i}})
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
RbarDD[i][j] += sp.Rational(1,2) * (gammabarDD[k][i]*LambarU_dHatD[k][j] + \
gammabarDD[k][j]*LambarU_dHatD[k][i])
# Step 7.d.iii: Add the remaining term to RbarDD:
# Rbar_{ij} += \Delta^{k} \Delta_{(i j) k} = 1/2 \Delta^{k} (\Delta_{i j k} + \Delta_{j i k})
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
RbarDD[i][j] += sp.Rational(1,2) * DGammaU[k] * (DGammaDDD[i][j][k] + DGammaDDD[j][i][k])
# Step 7.d.iv: Add the final term to RbarDD:
# Rbar_{ij} += \bar{\gamma}^{k l} (\Delta^{m}_{k i} \Delta_{j m l}
# + \Delta^{m}_{k j} \Delta_{i m l}
# + \Delta^{m}_{i k} \Delta_{m j l})
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
for m in range(DIM):
RbarDD[i][j] += gammabarUU[k][l] * (DGammaUDD[m][k][i]*DGammaDDD[j][m][l] +
DGammaUDD[m][k][j]*DGammaDDD[i][m][l] +
DGammaUDD[m][i][k]*DGammaDDD[m][j][l])
###Output
_____no_output_____
###Markdown
Step 8: **`betaU_derivs()`**: The unrescaled shift vector $\beta^i$ spatial derivatives: $\beta^i_{,j}$ & $\beta^i_{,jk}$, written in terms of the rescaled shift vector $\mathcal{V}^i$ \[Back to [top](toc)\]$$\label{beta_derivs}$$This step, which documents the function `betaUbar_and_derivs()` inside the [BSSN.BSSN_unrescaled_and_barred_vars](../edit/BSSN/BSSN_unrescaled_and_barred_vars) module, defines three quantities:[comment]: (Fix Link Above: TODO)* `betaU_dD[i][j]`$=\beta^i_{,j} = \left(\mathcal{V}^i \circ \text{ReU[i]}\right)_{,j} = \mathcal{V}^i_{,j} \circ \text{ReU[i]} + \mathcal{V}^i \circ \text{ReUdD[i][j]}$* `betaU_dupD[i][j]`: the same as above, except using *upwinded* finite-difference derivatives to compute $\mathcal{V}^i_{,j}$ instead of *centered* finite-difference derivatives.* `betaU_dDD[i][j][k]`$=\beta^i_{,jk} = \mathcal{V}^i_{,jk} \circ \text{ReU[i]} + \mathcal{V}^i_{,j} \circ \text{ReUdD[i][k]} + \mathcal{V}^i_{,k} \circ \text{ReUdD[i][j]}+\mathcal{V}^i \circ \text{ReUdDD[i][j][k]}$
###Code
# Step 8: The unrescaled shift vector betaU spatial derivatives:
# betaUdD & betaUdDD, written in terms of the
# rescaled shift vector vetU
vetU_dD = ixp.declarerank2("vetU_dD","nosym")
vetU_dupD = ixp.declarerank2("vetU_dupD","nosym") # Needed for upwinded \beta^i_{,j}
vetU_dDD = ixp.declarerank3("vetU_dDD","sym12") # Needed for \beta^i_{,jk}
betaU_dD = ixp.zerorank2()
betaU_dupD = ixp.zerorank2() # Needed for, e.g., \beta^i RHS
betaU_dDD = ixp.zerorank3() # Needed for, e.g., \bar{\Lambda}^i RHS
for i in range(DIM):
for j in range(DIM):
betaU_dD[i][j] = vetU_dD[i][j]*rfm.ReU[i] + vetU[i]*rfm.ReUdD[i][j]
betaU_dupD[i][j] = vetU_dupD[i][j]*rfm.ReU[i] + vetU[i]*rfm.ReUdD[i][j] # Needed for \beta^i RHS
for k in range(DIM):
# Needed for, e.g., \bar{\Lambda}^i RHS:
betaU_dDD[i][j][k] = vetU_dDD[i][j][k]*rfm.ReU[i] + vetU_dD[i][j]*rfm.ReUdD[i][k] + \
vetU_dD[i][k]*rfm.ReUdD[i][j] + vetU[i]*rfm.ReUdDD[i][j][k]
###Output
_____no_output_____
###Markdown
Step 9: **`phi_and_derivs()`**: Standard BSSN conformal factor $\phi$, and its derivatives $\phi_{,i}$, $\phi_{,ij}$, $\bar{D}_j \phi_i$, and $\bar{D}_j\bar{D}_k \phi_i$, all written in terms of BSSN gridfunctions like $\text{cf}$ \[Back to [top](toc)\]$$\label{phi_and_derivs}$$ Step 9.a: $\phi$ in terms of the chosen (possibly non-standard) conformal factor variable $\text{cf}$ (e.g., $\text{cf}=\chi=e^{-4\phi}$) \[Back to [top](toc)\]$$\label{phi_ito_cf}$$When solving the BSSN time evolution equations across the coordinate singularity (i.e., the "puncture") inside puncture black holes for example, the standard conformal factor $\phi$ becomes very sharp, whereas $\chi=e^{-4\phi}$ is far smoother (see, e.g., [Campanelli, Lousto, Marronetti, and Zlochower (2006)](https://arxiv.org/abs/gr-qc/0511048) for additional discussion). Thus if we choose to rewrite derivatives of $\phi$ in the BSSN equations in terms of finite-difference derivatives `cf`$=\chi$, numerical errors will be far smaller near the puncture.The BSSN modules in NRPy+ support three options for the conformal factor variable `cf`:1. `cf`$=\phi$,1. `cf`$=\chi=e^{-4\phi}$, and1. `cf`$=W = e^{-2\phi}$.The BSSN equations are written in terms of $\phi$ (actually only $e^{-4\phi}$ appears) and derivatives of $\phi$, we now define $e^{-4\phi}$ and derivatives of $\phi$ in terms of the chosen `cf`.First, we define the base variables needed within the BSSN equations:
###Code
# Step 9: Standard BSSN conformal factor phi,
# and its partial and covariant derivatives,
# all in terms of BSSN gridfunctions like cf
# Step 9.a.i: Define partial derivatives of \phi in terms of evolved quantity "cf":
cf_dD = ixp.declarerank1("cf_dD")
cf_dupD = ixp.declarerank1("cf_dupD") # Needed for \partial_t \phi next.
cf_dDD = ixp.declarerank2("cf_dDD","sym01")
phi_dD = ixp.zerorank1()
phi_dupD = ixp.zerorank1()
phi_dDD = ixp.zerorank2()
exp_m4phi = sp.sympify(0)
###Output
_____no_output_____
###Markdown
Then we define $\phi_{,i}$, $\phi_{,ij}$, and $e^{-4\phi}$ for each of the choices of `cf`.For `cf`$=\phi$, this is trivial:
###Code
# Step 9.a.ii: Assuming cf=phi, define exp_m4phi, phi_dD,
# phi_dupD (upwind finite-difference version of phi_dD), and phi_DD
if par.parval_from_str(thismodule+"::EvolvedConformalFactor_cf") == "phi":
for i in range(DIM):
phi_dD[i] = cf_dD[i]
phi_dupD[i] = cf_dupD[i]
for j in range(DIM):
phi_dDD[i][j] = cf_dDD[i][j]
exp_m4phi = sp.exp(-4*cf)
###Output
_____no_output_____
###Markdown
For `cf`$=W=e^{-2\phi}$, we have* $\phi_{,i} = -\text{cf}_{,i} / (2 \text{cf})$* $\phi_{,ij} = (-\text{cf}_{,ij} + \text{cf}_{,i}\text{cf}_{,j}/\text{cf}) / (2 \text{cf})$* $e^{-4\phi} = \text{cf}^2$***Exercise to student: Prove the above relations***
###Code
# Step 9.a.iii: Assuming cf=W=e^{-2 phi}, define exp_m4phi, phi_dD,
# phi_dupD (upwind finite-difference version of phi_dD), and phi_DD
if par.parval_from_str(thismodule+"::EvolvedConformalFactor_cf") == "W":
# \partial_i W = \partial_i (e^{-2 phi}) = -2 e^{-2 phi} \partial_i phi
# -> \partial_i phi = -\partial_i cf / (2 cf)
for i in range(DIM):
phi_dD[i] = - cf_dD[i] / (2*cf)
phi_dupD[i] = - cf_dupD[i] / (2*cf)
for j in range(DIM):
# \partial_j \partial_i phi = - \partial_j [\partial_i cf / (2 cf)]
# = - cf_{,ij} / (2 cf) + \partial_i cf \partial_j cf / (2 cf^2)
phi_dDD[i][j] = (- cf_dDD[i][j] + cf_dD[i]*cf_dD[j] / cf) / (2*cf)
exp_m4phi = cf*cf
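# Quick symbolic cross-check of the three relations quoted above (a minimal, self-contained
# sketch; xchk, ychk, and phichk are throwaway SymPy objects introduced only for this check,
# not NRPy+ gridfunctions). Starting from W = e^{-2 phi} for an arbitrary smooth phi(x,y):
xchk, ychk = sp.symbols("xchk ychk", real=True)
phichk = sp.Function("phichk")(xchk, ychk)
Wchk = sp.exp(-2*phichk)
# phi_{,i} = -W_{,i}/(2 W):
assert sp.simplify(sp.diff(phichk, xchk) + sp.diff(Wchk, xchk)/(2*Wchk)) == 0
# phi_{,ij} = (-W_{,ij} + W_{,i} W_{,j}/W)/(2 W):
assert sp.simplify(sp.diff(phichk, xchk, ychk)
                   - (-sp.diff(Wchk, xchk, ychk) + sp.diff(Wchk, xchk)*sp.diff(Wchk, ychk)/Wchk)/(2*Wchk)) == 0
# e^{-4 phi} = W^2:
assert sp.simplify(sp.exp(-4*phichk) - Wchk**2) == 0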
###Output
_____no_output_____
###Markdown
For `cf`$=\chi=e^{-4\phi}$, we have* $\phi_{,i} = -\text{cf}_{,i} / (4 \text{cf})$* $\phi_{,ij} = (-\text{cf}_{,ij} + \text{cf}_{,i}\text{cf}_{,j}/\text{cf}) / (4 \text{cf})$* $e^{-4\phi} = \text{cf}$***Exercise to student: Prove the above relations***
###Code
# Step 9.a.iv: Assuming cf=chi=e^{-4 phi}, define exp_m4phi, phi_dD,
# phi_dupD (upwind finite-difference version of phi_dD), and phi_DD
if par.parval_from_str(thismodule+"::EvolvedConformalFactor_cf") == "chi":
# \partial_i chi = \partial_i (e^{-4 phi}) = -4 e^{-4 phi} \partial_i phi
# -> \partial_i phi = -\partial_i cf / (4 cf)
for i in range(DIM):
phi_dD[i] = - cf_dD[i] / (4*cf)
phi_dupD[i] = - cf_dupD[i] / (4*cf)
for j in range(DIM):
# \partial_j \partial_i phi = - \partial_j [\partial_i cf / (4 cf)]
# = - cf_{,ij} / (4 cf) + \partial_i cf \partial_j cf / (4 cf^2)
phi_dDD[i][j] = (- cf_dDD[i][j] + cf_dD[i]*cf_dD[j] / cf) / (4*cf)
exp_m4phi = cf
# Step 9.a.v: Error out if unsupported EvolvedConformalFactor_cf choice is made:
cf_choice = par.parval_from_str(thismodule+"::EvolvedConformalFactor_cf")
if not (cf_choice == "phi" or cf_choice == "W" or cf_choice == "chi"):
print("Error: EvolvedConformalFactor_cf == "+par.parval_from_str(thismodule+"::EvolvedConformalFactor_cf")+" unsupported!")
exit(1)
###Output
_____no_output_____
###Markdown
Step 9.b: Covariant derivatives of $\phi$ \[Back to [top](toc)\]$$\label{phi_covariant_derivs}$$Since $\phi$ is a scalar, $\bar{D}_i \phi = \partial_i \phi$.Thus the second covariant derivative is given by\begin{align}\bar{D}_i \bar{D}_j \phi &= \phi_{;\bar{i}\bar{j}} = \bar{D}_i \phi_{,j}\\ &= \phi_{,ij} - \bar{\Gamma}^k_{ij} \phi_{,k}.\end{align}
###Code
# Step 9.b: Define phi_dBarD = phi_dD (since phi is a scalar) and phi_dBarDD (covariant derivative)
# \bar{D}_i \bar{D}_j \phi = \phi_{;\bar{i}\bar{j}} = \bar{D}_i \phi_{,j}
# = \phi_{,ij} - \bar{\Gamma}^k_{ij} \phi_{,k}
phi_dBarD = phi_dD
phi_dBarDD = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
phi_dBarDD[i][j] = phi_dDD[i][j]
for k in range(DIM):
phi_dBarDD[i][j] += - GammabarUDD[k][i][j]*phi_dD[k]
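# Quick sanity check (a hedged, inexpensive aside): since phi is a scalar, its second
# covariant derivative should be symmetric, \bar{D}_i \bar{D}_j phi = \bar{D}_j \bar{D}_i phi.
for i in range(DIM):
    for j in range(i+1, DIM):
        assert sp.simplify(phi_dBarDD[i][j] - phi_dBarDD[j][i]) == 0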
###Output
_____no_output_____
###Markdown
Step 10: Code validation against `BSSN.BSSN_quantities` NRPy+ module \[Back to [top](toc)\]$$\label{code_validation}$$As a code validation check, we verify agreement in the SymPy expressions for the RHSs of the BSSN equations between1. this tutorial and 2. the NRPy+ [BSSN.BSSN_quantities](../edit/BSSN/BSSN_quantities.py) module.By default, we analyze the RHSs in Spherical coordinates, though other coordinate systems may be chosen.
###Code
all_passed=True
def comp_func(expr1,expr2,basename,prefixname2="Bq."):
    # "global" is required here; without it the assignment below would create a
    # function-local variable, and a mismatch would never flip the module-level flag.
    global all_passed
    if str(expr1-expr2)!="0":
        print(basename+" - "+prefixname2+basename+" = "+ str(expr1-expr2))
        all_passed=False
def gfnm(basename,idx1,idx2=None,idx3=None):
if idx2==None:
return basename+"["+str(idx1)+"]"
if idx3==None:
return basename+"["+str(idx1)+"]["+str(idx2)+"]"
return basename+"["+str(idx1)+"]["+str(idx2)+"]["+str(idx3)+"]"
expr_list = []
exprcheck_list = []
namecheck_list = []
# Step 3:
import BSSN.BSSN_quantities as Bq
Bq.BSSN_basic_tensors()
for i in range(DIM):
namecheck_list.extend([gfnm("LambdabarU",i),gfnm("betaU",i),gfnm("BU",i)])
exprcheck_list.extend([Bq.LambdabarU[i],Bq.betaU[i],Bq.BU[i]])
expr_list.extend([LambdabarU[i],betaU[i],BU[i]])
for j in range(DIM):
namecheck_list.extend([gfnm("gammabarDD",i,j),gfnm("AbarDD",i,j)])
exprcheck_list.extend([Bq.gammabarDD[i][j],Bq.AbarDD[i][j]])
expr_list.extend([gammabarDD[i][j],AbarDD[i][j]])
# Step 4:
Bq.gammabar__inverse_and_derivs()
for i in range(DIM):
for j in range(DIM):
namecheck_list.extend([gfnm("gammabarUU",i,j)])
exprcheck_list.extend([Bq.gammabarUU[i][j]])
expr_list.extend([gammabarUU[i][j]])
for k in range(DIM):
namecheck_list.extend([gfnm("gammabarDD_dD",i,j,k),
gfnm("gammabarDD_dupD",i,j,k),
gfnm("GammabarUDD",i,j,k)])
exprcheck_list.extend([Bq.gammabarDD_dD[i][j][k],Bq.gammabarDD_dupD[i][j][k],Bq.GammabarUDD[i][j][k]])
expr_list.extend( [gammabarDD_dD[i][j][k],gammabarDD_dupD[i][j][k],GammabarUDD[i][j][k]])
# Step 5:
Bq.detgammabar_and_derivs()
namecheck_list.extend(["detgammabar"])
exprcheck_list.extend([Bq.detgammabar])
expr_list.extend([detgammabar])
for i in range(DIM):
namecheck_list.extend([gfnm("detgammabar_dD",i)])
exprcheck_list.extend([Bq.detgammabar_dD[i]])
expr_list.extend([detgammabar_dD[i]])
for j in range(DIM):
namecheck_list.extend([gfnm("detgammabar_dDD",i,j)])
exprcheck_list.extend([Bq.detgammabar_dDD[i][j]])
expr_list.extend([detgammabar_dDD[i][j]])
# Step 6:
Bq.AbarUU_AbarUD_trAbar_AbarDD_dD()
namecheck_list.extend(["trAbar"])
exprcheck_list.extend([Bq.trAbar])
expr_list.extend([trAbar])
for i in range(DIM):
for j in range(DIM):
namecheck_list.extend([gfnm("AbarUU",i,j),gfnm("AbarUD",i,j)])
exprcheck_list.extend([Bq.AbarUU[i][j],Bq.AbarUD[i][j]])
expr_list.extend([AbarUU[i][j],AbarUD[i][j]])
for k in range(DIM):
namecheck_list.extend([gfnm("AbarDD_dD",i,j,k)])
exprcheck_list.extend([Bq.AbarDD_dD[i][j][k]])
expr_list.extend([AbarDD_dD[i][j][k]])
# Step 7:
Bq.RicciBar__gammabarDD_dHatD__DGammaUDD__DGammaU()
for i in range(DIM):
namecheck_list.extend([gfnm("DGammaU",i)])
exprcheck_list.extend([Bq.DGammaU[i]])
expr_list.extend([DGammaU[i]])
for j in range(DIM):
namecheck_list.extend([gfnm("RbarDD",i,j)])
exprcheck_list.extend([Bq.RbarDD[i][j]])
expr_list.extend([RbarDD[i][j]])
for k in range(DIM):
namecheck_list.extend([gfnm("DGammaUDD",i,j,k),gfnm("gammabarDD_dHatD",i,j,k)])
exprcheck_list.extend([Bq.DGammaUDD[i][j][k],Bq.gammabarDD_dHatD[i][j][k]])
expr_list.extend([DGammaUDD[i][j][k],gammabarDD_dHatD[i][j][k]])
# Step 8:
Bq.betaU_derivs()
for i in range(DIM):
for j in range(DIM):
namecheck_list.extend([gfnm("betaU_dD",i,j),gfnm("betaU_dupD",i,j)])
exprcheck_list.extend([Bq.betaU_dD[i][j],Bq.betaU_dupD[i][j]])
expr_list.extend([betaU_dD[i][j],betaU_dupD[i][j]])
for k in range(DIM):
namecheck_list.extend([gfnm("betaU_dDD",i,j,k)])
exprcheck_list.extend([Bq.betaU_dDD[i][j][k]])
expr_list.extend([betaU_dDD[i][j][k]])
# Step 9:
Bq.phi_and_derivs()
#phi_dD,phi_dupD,phi_dDD,exp_m4phi,phi_dBarD,phi_dBarDD
namecheck_list.extend(["exp_m4phi"])
exprcheck_list.extend([Bq.exp_m4phi])
expr_list.extend([exp_m4phi])
for i in range(DIM):
namecheck_list.extend([gfnm("phi_dD",i),gfnm("phi_dupD",i),gfnm("phi_dBarD",i)])
exprcheck_list.extend([Bq.phi_dD[i],Bq.phi_dupD[i],Bq.phi_dBarD[i]])
expr_list.extend( [phi_dD[i],phi_dupD[i],phi_dBarD[i]])
for j in range(DIM):
namecheck_list.extend([gfnm("phi_dDD",i,j),gfnm("phi_dBarDD",i,j)])
exprcheck_list.extend([Bq.phi_dDD[i][j],Bq.phi_dBarDD[i][j]])
expr_list.extend([phi_dDD[i][j],phi_dBarDD[i][j]])
for i in range(len(expr_list)):
comp_func(expr_list[i],exprcheck_list[i],namecheck_list[i])
if all_passed:
print("ALL TESTS PASSED!")
###Output
ALL TESTS PASSED!
###Markdown
Step 11: Output this module to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-BSSN_quantities.pdf](Tutorial-BSSN_quantities.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
!jupyter nbconvert --to latex --template latex_nrpy_style.tplx Tutorial-BSSN_quantities.ipynb
!pdflatex -interaction=batchmode Tutorial-BSSN_quantities.tex
!pdflatex -interaction=batchmode Tutorial-BSSN_quantities.tex
!pdflatex -interaction=batchmode Tutorial-BSSN_quantities.tex
!rm -f Tut*.out Tut*.aux Tut*.log
###Output
[NbConvertApp] Converting notebook Tutorial-BSSN_quantities.ipynb to latex
[NbConvertApp] Writing 147286 bytes to Tutorial-BSSN_quantities.tex
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
###Markdown
window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'UA-59152712-8'); BSSN Quantities Author: Zach Etienne Formatting improvements courtesy Brandon Clark This module documents and constructs a number of quantities useful for building symbolic (SymPy) expressions in terms of the core BSSN quantities $\left\{h_{i j},a_{i j},\phi, K, \lambda^{i}, \alpha, \mathcal{V}^i, \mathcal{B}^i\right\}$, as defined in [Ruchlin, Etienne, and Baumgarte (2018)](https://arxiv.org/abs/1712.07658) (see also [Baumgarte, Montero, Cordero-Carrión, and Müller (2012)](https://arxiv.org/abs/1211.6632)). **Notebook Status:** Self-Validated **Validation Notes:** This tutorial notebook has been confirmed to be self-consistent with its corresponding NRPy+ module, as documented [below](code_validation). **Additional validation tests may have been performed, but are as yet, undocumented. (TODO)**[comment]: (Introduction: TODO) A Note on Notation:As is standard in NRPy+, * Greek indices refer to four-dimensional quantities where the zeroth component indicates temporal (time) component.* Latin indices refer to three-dimensional quantities. This is somewhat counterintuitive since Python always indexes its lists starting from 0. As a result, the zeroth component of three-dimensional quantities will necessarily indicate the first *spatial* direction.As a corollary, any expressions involving mixed Greek and Latin indices will need to offset one set of indices by one: A Latin index in a four-vector will be incremented and a Greek index in a three-vector will be decremented (however, the latter case does not occur in this tutorial notebook). Table of Contents$$\label{toc}$$Each family of quantities is constructed within a given function (**boldfaced** below). This notebook is organized as follows1. [Step 1](initializenrpy): Initialize needed Python/NRPy+ modules1. [Step 2](declare_bssn_gfs): **`declare_BSSN_gridfunctions_if_not_declared_already()`**: Declare all of the core BSSN variables $\left\{h_{i j},a_{i j},\text{cf}, K, \lambda^{i}, \alpha, \mathcal{V}^i, \mathcal{B}^i\right\}$ and register them as gridfunctions1. [Step 3](rescaling_tensors) Rescaling tensors to avoid coordinate singularities 1. [Step 3.a](bssn_basic_tensors) **`BSSN_basic_tensors()`**: Define all basic conformal BSSN tensors $\left\{\bar{\gamma}_{i j},\bar{A}_{i j},\bar{\Lambda}^{i}, \beta^i, B^i\right\}$ in terms of BSSN gridfunctions1. [Step 4](bssn_barred_metric__inverse_and_derivs): **`gammabar__inverse_and_derivs()`**: $\bar{\gamma}^{ij}$, and spatial derivatives of $\bar{\gamma}_{ij}$ including $\bar{\Gamma}^{i}_{jk}$ 1. [Step 4.a](bssn_barred_metric__inverse): Inverse conformal 3-metric: $\bar{\gamma^{ij}}$ 1. [Step 4.b](bssn_barred_metric__derivs): Derivatives of the conformal 3-metric $\bar{\gamma}_{ij,k}$ and $\bar{\gamma}_{ij,kl}$, and associated "barred" Christoffel symbols $\bar{\Gamma}^{i}_{jk}$1. [Step 5](detgammabar_and_derivs): **`detgammabar_and_derivs()`**: $\det \bar{\gamma}_{ij}$ and its derivatives1. [Step 6](abar_quantities): **`AbarUU_AbarUD_trAbar()`**: Quantities related to conformal traceless extrinsic curvature $\bar{A}_{ij}$: $\bar{A}^{ij}$, $\bar{A}^i_j$, and $\bar{A}^k_k$1. [Step 7](rbar): **`RicciBar__gammabarDD_dHatD__DGammaUDD__DGammaU()`**: The conformal ("barred") Ricci tensor $\bar{R}_{ij}$ and associated quantities 1. [Step 7.a](rbar_part1): Conformal Ricci tensor, part 1: The $\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j}$ term 1. 
[Step 7.b](rbar_part2): Conformal Ricci tensor, part 2: The $\bar{\gamma}_{k(i} \hat{D}_{j)} \bar{\Lambda}^{k}$ term 1. [Step 7.c](rbar_part3): Conformal Ricci tensor, part 3: The $\Delta^{k} \Delta_{(i j) k} + \bar{\gamma}^{k l} \left (2 \Delta_{k(i}^{m} \Delta_{j) m l} + \Delta_{i k}^{m} \Delta_{m j l} \right )$ terms 1. [Step 7.d](summing_rbar_terms): Summing the terms and defining $\bar{R}_{ij}$1. [Step 8](beta_derivs): **`betaU_derivs()`**: Unrescaled shift vector $\beta^i$ and spatial derivatives $\beta^i_{,j}$ and $\beta^i_{,jk}$1. [Step 9](phi_and_derivs): **`phi_and_derivs()`**: Standard BSSN conformal factor $\phi$, and its derivatives $\phi_{,i}$, $\phi_{,ij}$, $\bar{D}_j \phi_i$, and $\bar{D}_j\bar{D}_k \phi_i$ 1. [Step 9.a](phi_ito_cf): $\phi$ in terms of the chosen (possibly non-standard) conformal factor variable `cf` (e.g., `cf`$=W=e^{-4\phi}$) 1. [Step 9.b](phi_covariant_derivs): Partial and covariant derivatives of $\phi$1. [Step 10](code_validation): Code Validation against `BSSN.BSSN_quantities` NRPy+ module1. [Step 11](latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file Step 1: Initialize needed Python/NRPy+ modules \[Back to [top](toc)\]$$\label{initializenrpy}$$
###Code
# Step 1: Import all needed modules from NRPy+:
import NRPy_param_funcs as par
import sympy as sp
import indexedexp as ixp
import grid as gri
import reference_metric as rfm
import sys
# Step 1.a: Set the coordinate system for the numerical grid
par.set_parval_from_str("reference_metric::CoordSystem","Spherical")
# Step 1.b: Given the chosen coordinate system, set up
# corresponding reference metric and needed
# reference metric quantities
# The following function call sets up the reference metric
# and related quantities, including rescaling matrices ReDD,
# ReU, and hatted quantities.
rfm.reference_metric()
# Step 1.c: Set spatial dimension (must be 3 for BSSN, as BSSN is
# a 3+1-dimensional decomposition of the general
# relativistic field equations)
DIM = 3
par.set_parval_from_str("grid::DIM",DIM)
# Step 1.d: Declare/initialize parameters for this module
thismodule = "BSSN_quantities"
par.initialize_param(par.glb_param("char", thismodule, "EvolvedConformalFactor_cf", "W"))
par.initialize_param(par.glb_param("bool", thismodule, "detgbarOverdetghat_equals_one", "True"))
###Output
_____no_output_____
###Markdown
Step 2: `declare_BSSN_gridfunctions_if_not_declared_already()`: Declare all of the core BSSN variables $\left\{h_{i j},a_{i j},\text{cf}, K, \lambda^{i}, \alpha, \mathcal{V}^i, \mathcal{B}^i\right\}$ and register them as gridfunctions \[Back to [top](toc)\]$$\label{declare_bssn_gfs}$$
###Code
# Step 2: Register all needed BSSN gridfunctions.
# Step 2.a: Register indexed quantities, using ixp.register_... functions
hDD = ixp.register_gridfunctions_for_single_rank2("EVOL", "hDD", "sym01")
aDD = ixp.register_gridfunctions_for_single_rank2("EVOL", "aDD", "sym01")
lambdaU = ixp.register_gridfunctions_for_single_rank1("EVOL", "lambdaU")
vetU = ixp.register_gridfunctions_for_single_rank1("EVOL", "vetU")
betU = ixp.register_gridfunctions_for_single_rank1("EVOL", "betU")
# Step 2.b: Register scalar quantities, using gri.register_gridfunctions()
trK, cf, alpha = gri.register_gridfunctions("EVOL",["trK", "cf", "alpha"])
###Output
_____no_output_____
###Markdown
Step 3: Rescaling tensors to avoid coordinate singularities \[Back to [top](toc)\]$$\label{rescaling_tensors}$$While the [covariant form of the BSSN evolution equations](Tutorial-BSSNCurvilinear.ipynb) are properly covariant (with the potential exception of the shift evolution equation, since the shift is a [freely specifiable gauge quantity](https://en.wikipedia.org/wiki/Gauge_fixing)), components of the rank-1 and rank-2 tensors $\varepsilon_{i j}$, $\bar{A}_{i j}$, and $\bar{\Lambda}^{i}$ will drop to zero (destroying information) or diverge (to $\infty$) at coordinate singularities. The good news is, this singular behavior is well-understood in terms of the scale factors of the reference metric, enabling us to define rescaled version of these quantities that are well behaved (so that, e.g., they can be finite differenced).For example, given a smooth vector *in a 3D Cartesian basis* $\bar{\Lambda}^{i}$, all components $\bar{\Lambda}^{x}$, $\bar{\Lambda}^{y}$, and $\bar{\Lambda}^{z}$ will be smooth (by assumption). When changing the basis to spherical coordinates (applying the appropriate Jacobian matrix transformation), we will find that since $\phi = \arctan(y/x)$, $\bar{\Lambda}^{\phi}$ is given by\begin{align}\bar{\Lambda}^{\phi} &= \frac{\partial \phi}{\partial x} \bar{\Lambda}^{x} + \frac{\partial \phi}{\partial y} \bar{\Lambda}^{y} + \frac{\partial \phi}{\partial z} \bar{\Lambda}^{z} \\&= -\frac{y}{\sqrt{x^2+y^2}} \bar{\Lambda}^{x} + \frac{x}{\sqrt{x^2+y^2}} \bar{\Lambda}^{y} \\&= -\frac{y}{r \sin\theta} \bar{\Lambda}^{x} + \frac{x}{r \sin\theta} \bar{\Lambda}^{y}.\end{align}Thus $\bar{\Lambda}^{\phi}$ diverges at all points where $r\sin\theta=0$ due to the $\frac{1}{r\sin\theta}$ that appear in the Jacobian transformation. This divergence might pose no problem on cell-centered grids that avoid $r \sin\theta=0$, except that the BSSN equations require that *first and second derivatives* of these quantities be taken. Usual strategies for numerical approximation of these derivatives (e.g., finite difference methods) will "see" these divergences and errors generally will not drop to zero with increased numerical sampling of the functions at points near where the functions diverge.However, notice that if we define $\lambda^{\phi}$ such that$$\bar{\Lambda}^{\phi} = \frac{1}{r\sin\theta} \lambda^{\phi},$$then $\lambda^{\phi}$ will be smooth as well. Avoiding such singularities can be generalized to other coordinate systems, so long as $\lambda^i$ is defined as:$$\bar{\Lambda}^{i} = \frac{\lambda^i}{\text{scalefactor[i]}} ,$$where scalefactor\[i\] is the $i$th scale factor in the given coordinate system. In an identical fashion, we define the smooth versions of $\beta^i$ and $B^i$ to be $\mathcal{V}^i$ and $\mathcal{B}^i$, respectively. We refer to $\mathcal{V}^i$ and $\mathcal{B}^i$ as vet\[i\] and bet\[i\] respectively in the code after the Hebrew letters that bear some resemblance. Similarly, we define the smooth versions of $\bar{A}_{ij}$ and $\varepsilon_{ij}$ ($a_{ij}$ and $h_{ij}$, respectively) via\begin{align}\bar{A}_{ij} &= \text{scalefactor[i]}\ \text{scalefactor[j]}\ a_{ij} \\\varepsilon_{ij} &= \text{scalefactor[i]}\ \text{scalefactor[j]}\ h_{ij},\end{align}where in this case we *multiply* due to the fact that these tensors are purely covariant (as opposed to contravariant). 
To slightly simplify the notation, in NRPy+ we define the *rescaling matrices* `ReU[i]` and `ReDD[i][j]`, such that\begin{align}\text{ReU[i]} &= 1 / \text{scalefactor[i]} \\\text{ReDD[i][j]} &= \text{scalefactor[i] scalefactor[j]}.\end{align}Thus, for example, $\bar{A}_{ij}$ and $\bar{\Lambda}^i$ can be expressed as the [Hadamard product](https://en.wikipedia.org/w/index.php?title=Hadamard_product_(matrices)&oldid=852272177) of matrices :\begin{align}\bar{A}_{ij} &= \mathbf{ReDD}\circ\mathbf{a} = \text{ReDD[i][j]} a_{ij} \\\bar{\Lambda}^{i} &= \mathbf{ReU}\circ\mathbf{\lambda} = \text{ReU[i]} \lambda^i,\end{align}where no sums are implied by the repeated indices.Further, since the scale factors are *time independent*, \begin{align}\partial_t \bar{A}_{ij} &= \text{ReDD[i][j]}\ \partial_t a_{ij} \\\partial_t \bar{\gamma}_{ij} &= \partial_t \left(\varepsilon_{ij} + \hat{\gamma}_{ij}\right)\\&= \partial_t \varepsilon_{ij} \\&= \text{scalefactor[i]}\ \text{scalefactor[j]}\ \partial_t h_{ij}.\end{align}Thus instead of taking space or time derivatives of BSSN quantities$$\left\{\bar{\gamma}_{i j},\bar{A}_{i j},\phi, K, \bar{\Lambda}^{i}, \alpha, \beta^i, B^i\right\},$$ across coordinate singularities, we instead factor out the singular scale factors according to this prescription so that space or time derivatives of BSSN quantities are written in terms of finite-difference derivatives of the *rescaled* variables $$\left\{h_{i j},a_{i j},\text{cf}, K, \lambda^{i}, \alpha, \mathcal{V}^i, \mathcal{B}^i\right\},$$ and *exact* expressions for (spatial) derivatives of scale factors. Note that `cf` is the chosen conformal factor (supported choices for `cf` are discussed in [Step 6.a](phi_ito_cf)). As an example, let's evaluate $\bar{\Lambda}^{i}_{\, ,\, j}$ according to this prescription:\begin{align}\bar{\Lambda}^{i}_{\, ,\, j} &= -\frac{\lambda^i}{(\text{ReU[i]})^2}\ \partial_j \left(\text{ReU[i]}\right) + \frac{\partial_j \lambda^i}{\text{ReU[i]}} \\&= -\frac{\lambda^i}{(\text{ReU[i]})^2}\ \text{ReUdD[i][j]} + \frac{\partial_j \lambda^i}{\text{ReU[i]}}.\end{align}Here, the derivative `ReUdD[i][j]` **is computed symbolically and exactly** using SymPy, and the derivative $\partial_j \lambda^i$ represents a derivative of a *smooth* quantity (so long as $\bar{\Lambda}^{i}$ is smooth in the Cartesian basis). 
Step 3.a: `BSSN_basic_tensors()`: Define all basic conformal BSSN tensors $\left\{\bar{\gamma}_{i j},\bar{A}_{i j},\bar{\Lambda}^{i}, \beta^i, B^i\right\}$ in terms of BSSN gridfunctions \[Back to [top](toc)\]$$\label{bssn_basic_tensors}$$The `BSSN_vars__tensors()` function defines the tensorial BSSN quantities $\left\{\bar{\gamma}_{i j},\bar{A}_{i j},\bar{\Lambda}^{i}, \beta^i, B^i\right\}$, in terms of the rescaled "base" tensorial quantities $\left\{h_{i j},a_{i j}, \lambda^{i}, \mathcal{V}^i, \mathcal{B}^i\right\},$ respectively:\begin{align}\bar{\gamma}_{i j} &= \hat{\gamma}_{ij} + \varepsilon_{ij}, \text{ where } \varepsilon_{ij} = h_{ij} \circ \text{ReDD[i][j]} \\\bar{A}_{i j} &= a_{ij} \circ \text{ReDD[i][j]} \\\bar{\Lambda}^{i} &= \lambda^i \circ \text{ReU[i]} \\\beta^{i} &= \mathcal{V}^i \circ \text{ReU[i]} \\B^{i} &= \mathcal{B}^i \circ \text{ReU[i]}\end{align}Rescaling vectors and tensors are built upon the scale factors for the chosen (in general, singular) coordinate system, which are defined in NRPy+'s [reference_metric.py](../edit/reference_metric.py) ([Tutorial](Tutorial-Reference_Metric.ipynb)), and the rescaled variables are defined in the stub function [BSSN/BSSN_rescaled_vars.py](../edit/BSSN/BSSN_rescaled_vars.py). Here we implement `BSSN_vars__tensors()`:
###Code
# Step 3.a: Define all basic conformal BSSN tensors in terms of BSSN gridfunctions
# Step 3.a.i: gammabarDD and AbarDD:
gammabarDD = ixp.zerorank2()
AbarDD = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
# gammabar_{ij} = h_{ij}*ReDD[i][j] + gammahat_{ij}
gammabarDD[i][j] = hDD[i][j]*rfm.ReDD[i][j] + rfm.ghatDD[i][j]
# Abar_{ij} = a_{ij}*ReDD[i][j]
AbarDD[i][j] = aDD[i][j]*rfm.ReDD[i][j]
# Step 3.a.ii: LambdabarU, betaU, and BU:
LambdabarU = ixp.zerorank1()
betaU = ixp.zerorank1()
BU = ixp.zerorank1()
for i in range(DIM):
LambdabarU[i] = lambdaU[i]*rfm.ReU[i]
betaU[i] = vetU[i] *rfm.ReU[i]
BU[i] = betU[i] *rfm.ReU[i]
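# Illustration (a hedged aside, not needed for the BSSN construction): with the Spherical
# reference metric chosen in Step 1 (xx0 playing the role of r and xx1 of theta), the
# rescaling factors are expected to be ReU = [1, 1/xx0, 1/(xx0*sin(xx1))], so that, e.g.,
# Lambdabar^phi = lambda^phi/(r sin(theta)), matching the rescaling discussion above.
ReU_simplified = [sp.simplify(rfm.ReU[i]) for i in range(DIM)]  # inspect interactively if desired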
###Output
_____no_output_____
###Markdown
Step 4: `gammabar__inverse_and_derivs()`: $\bar{\gamma}^{ij}$, and spatial derivatives of $\bar{\gamma}_{ij}$ including $\bar{\Gamma}^{i}_{jk}$ \[Back to [top](toc)\]$$\label{bssn_barred_metric__inverse_and_derivs}$$ Step 4.a: Inverse conformal 3-metric: $\bar{\gamma^{ij}}$ \[Back to [top](toc)\]$$\label{bssn_barred_metric__inverse}$$Since $\bar{\gamma}^{ij}$ is the inverse of $\bar{\gamma}_{ij}$, we apply a $3\times 3$ symmetric matrix inversion to compute $\bar{\gamma}^{ij}$.
###Code
# Step 4.a: Inverse conformal 3-metric gammabarUU:
# Step 4.a.i: gammabarUU:
gammabarUU, dummydet = ixp.symm_matrix_inverter3x3(gammabarDD)
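# Note: ixp.symm_matrix_inverter3x3() returns both the inverse and the determinant of the
# input matrix; the determinant is not needed here (Step 5 handles \det\bar{\gamma}_{ij}
# separately), hence the name "dummydet".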
###Output
_____no_output_____
###Markdown
Step 4.b: Derivatives of the conformal 3-metric $\bar{\gamma}_{ij,k}$ and $\bar{\gamma}_{ij,kl}$, and associated "barred" Christoffel symbols $\bar{\Gamma}^{i}_{jk}$ \[Back to [top](toc)\]$$\label{bssn_barred_metric__derivs}$$In the BSSN-in-curvilinear coordinates formulation, all quantities must be defined in terms of rescaled quantities $h_{ij}$ and their derivatives (evaluated using finite differences), as well as reference-metric quantities and their derivatives (evaluated exactly using SymPy). For example, $\bar{\gamma}_{ij,k}$ is given by:\begin{align}\bar{\gamma}_{ij,k} &= \partial_k \bar{\gamma}_{ij} \\&= \partial_k \left(\hat{\gamma}_{ij} + \varepsilon_{ij}\right) \\&= \partial_k \left(\hat{\gamma}_{ij} + h_{ij} \text{ReDD[i][j]}\right) \\&= \hat{\gamma}_{ij,k} + h_{ij,k} \text{ReDD[i][j]} + h_{ij} \text{ReDDdD[i][j][k]},\end{align}where `ReDDdD[i][j][k]` is computed within `rfm.reference_metric()`.
###Code
# Step 4.b.i gammabarDDdD[i][j][k]
# = \hat{\gamma}_{ij,k} + h_{ij,k} \text{ReDD[i][j]} + h_{ij} \text{ReDDdD[i][j][k]}.
gammabarDD_dD = ixp.zerorank3()
hDD_dD = ixp.declarerank3("hDD_dD","sym01")
hDD_dupD = ixp.declarerank3("hDD_dupD","sym01")
gammabarDD_dupD = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
gammabarDD_dD[i][j][k] = rfm.ghatDDdD[i][j][k] + \
hDD_dD[i][j][k]*rfm.ReDD[i][j] + hDD[i][j]*rfm.ReDDdD[i][j][k]
# Compute associated upwinded derivative, needed for the \bar{\gamma}_{ij} RHS
gammabarDD_dupD[i][j][k] = rfm.ghatDDdD[i][j][k] + \
hDD_dupD[i][j][k]*rfm.ReDD[i][j] + hDD[i][j]*rfm.ReDDdD[i][j][k]
###Output
_____no_output_____
###Markdown
By extension, the second derivative $\bar{\gamma}_{ij,kl}$ is given by\begin{align}\bar{\gamma}_{ij,kl} &= \partial_l \left(\hat{\gamma}_{ij,k} + h_{ij,k} \text{ReDD[i][j]} + h_{ij} \text{ReDDdD[i][j][k]}\right)\\&= \hat{\gamma}_{ij,kl} + h_{ij,kl} \text{ReDD[i][j]} + h_{ij,k} \text{ReDDdD[i][j][l]} + h_{ij,l} \text{ReDDdD[i][j][k]} + h_{ij} \text{ReDDdDD[i][j][k][l]}\end{align}
###Code
# Step 4.b.ii: Compute gammabarDD_dDD in terms of the rescaled BSSN quantity hDD
# and its derivatives, as well as the reference metric and rescaling
# matrix, and its derivatives (expression given below):
hDD_dDD = ixp.declarerank4("hDD_dDD","sym01_sym23")
gammabarDD_dDD = ixp.zerorank4()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
# gammabar_{ij,kl} = gammahat_{ij,kl}
# + h_{ij,kl} ReDD[i][j]
# + h_{ij,k} ReDDdD[i][j][l] + h_{ij,l} ReDDdD[i][j][k]
# + h_{ij} ReDDdDD[i][j][k][l]
gammabarDD_dDD[i][j][k][l] = rfm.ghatDDdDD[i][j][k][l]
gammabarDD_dDD[i][j][k][l] += hDD_dDD[i][j][k][l]*rfm.ReDD[i][j]
gammabarDD_dDD[i][j][k][l] += hDD_dD[i][j][k]*rfm.ReDDdD[i][j][l] + \
hDD_dD[i][j][l]*rfm.ReDDdD[i][j][k]
gammabarDD_dDD[i][j][k][l] += hDD[i][j]*rfm.ReDDdDD[i][j][k][l]
###Output
_____no_output_____
###Markdown
Finally, we compute the Christoffel symbol associated with the barred 3-metric: $\bar{\Gamma}^{i}_{kl}$:$$\bar{\Gamma}^{i}_{kl} = \frac{1}{2} \bar{\gamma}^{im} \left(\bar{\gamma}_{mk,l} + \bar{\gamma}_{ml,k} - \bar{\gamma}_{kl,m} \right)$$
###Code
# Step 4.b.iii: Define barred Christoffel symbol \bar{\Gamma}^{i}_{kl} = GammabarUDD[i][k][l] (see expression below)
GammabarUDD = ixp.zerorank3()
for i in range(DIM):
for k in range(DIM):
for l in range(DIM):
for m in range(DIM):
# Gammabar^i_{kl} = 1/2 * gammabar^{im} ( gammabar_{mk,l} + gammabar_{ml,k} - gammabar_{kl,m}):
GammabarUDD[i][k][l] += sp.Rational(1,2)*gammabarUU[i][m]* \
(gammabarDD_dD[m][k][l] + gammabarDD_dD[m][l][k] - gammabarDD_dD[k][l][m])
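# Quick sanity check (a hedged, inexpensive aside): the barred Christoffel symbols should be
# symmetric in their lower indices, since gammabarDD_dD[i][j][k] is symmetric in (i,j) by
# construction (hDD and ReDD are symmetric).
for i in range(DIM):
    for k in range(DIM):
        for l in range(k+1, DIM):
            assert sp.simplify(GammabarUDD[i][k][l] - GammabarUDD[i][l][k]) == 0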
###Output
_____no_output_____
###Markdown
Step 5: `detgammabar_and_derivs()`: $\det \bar{\gamma}_{ij}$ and its derivatives \[Back to [top](toc)\]$$\label{detgammabar_and_derivs}$$As described just before Section III of [Baumgarte *et al* (2012)](https://arxiv.org/pdf/1211.6632.pdf), we are free to choose $\det \bar{\gamma}_{ij}$, which should remain fixed in time.As in [Baumgarte *et al* (2012)](https://arxiv.org/pdf/1211.6632.pdf) generally we make the choice $\det \bar{\gamma}_{ij} = \det \hat{\gamma}_{ij}$, but *this need not be the case; we could choose to set $\det \bar{\gamma}_{ij}$ to another expression.*In case we do not choose to set $\det \bar{\gamma}_{ij}/\det \hat{\gamma}_{ij}=1$, below we begin the implementation of a gridfunction, `detgbarOverdetghat`, which defines an alternative expression in its place. $\det \bar{\gamma}_{ij}/\det \hat{\gamma}_{ij}$=`detgbarOverdetghat`$\ne 1$ is not yet implemented. However, we can define `detgammabar` and its derivatives in terms of a generic `detgbarOverdetghat` and $\det \hat{\gamma}_{ij}$ and their derivatives:\begin{align}\text{detgammabar} &= \det \bar{\gamma}_{ij} = \text{detgbarOverdetghat} \cdot \left(\det \hat{\gamma}_{ij}\right) \\\text{detgammabar}\_\text{dD[k]} &= \left(\det \bar{\gamma}_{ij}\right)_{,k} = \text{detgbarOverdetghat}\_\text{dD[k]} \det \hat{\gamma}_{ij} + \text{detgbarOverdetghat} \left(\det \hat{\gamma}_{ij}\right)_{,k} \\\end{align}https://en.wikipedia.org/wiki/DeterminantProperties_of_the_determinant
###Code
# Step 5: det(gammabarDD) and its derivatives
detgbarOverdetghat = sp.sympify(1)
detgbarOverdetghat_dD = ixp.zerorank1()
detgbarOverdetghat_dDD = ixp.zerorank2()
if par.parval_from_str(thismodule+"::detgbarOverdetghat_equals_one") == "False":
print("Error: detgbarOverdetghat_equals_one=\"False\" is not fully implemented yet.")
sys.exit(1)
## Approach for implementing detgbarOverdetghat_equals_one=False:
# detgbarOverdetghat = gri.register_gridfunctions("AUX", ["detgbarOverdetghat"])
# detgbarOverdetghatInitial = gri.register_gridfunctions("AUX", ["detgbarOverdetghatInitial"])
# detgbarOverdetghat_dD = ixp.declarerank1("detgbarOverdetghat_dD")
# detgbarOverdetghat_dDD = ixp.declarerank2("detgbarOverdetghat_dDD", "sym01")
# Step 5.b: Define detgammabar, detgammabar_dD, and detgammabar_dDD (needed for
#           \partial_t \bar{\Lambda}^i below)
detgammabar = detgbarOverdetghat * rfm.detgammahat
detgammabar_dD = ixp.zerorank1()
for i in range(DIM):
detgammabar_dD[i] = detgbarOverdetghat_dD[i] * rfm.detgammahat + detgbarOverdetghat * rfm.detgammahatdD[i]
detgammabar_dDD = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
detgammabar_dDD[i][j] = detgbarOverdetghat_dDD[i][j] * rfm.detgammahat + \
detgbarOverdetghat_dD[i] * rfm.detgammahatdD[j] + \
detgbarOverdetghat_dD[j] * rfm.detgammahatdD[i] + \
detgbarOverdetghat * rfm.detgammahatdDD[i][j]
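# Quick sanity check (a hedged aside): with detgbarOverdetghat fixed to one above, detgammabar
# and its first derivatives should reduce identically to the reference-metric determinant
# \det\hat{\gamma}_{ij} and its derivatives.
assert sp.simplify(detgammabar - rfm.detgammahat) == 0
for i in range(DIM):
    assert sp.simplify(detgammabar_dD[i] - rfm.detgammahatdD[i]) == 0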
###Output
_____no_output_____
###Markdown
Step 6: `AbarUU_AbarUD_trAbar_AbarDD_dD()`: Quantities related to conformal traceless extrinsic curvature $\bar{A}_{ij}$: $\bar{A}^{ij}$, $\bar{A}^i_j$, and $\bar{A}^k_k$ \[Back to [top](toc)\]$$\label{abar_quantities}$$$\bar{A}^{ij}$ is given by application of the raising operators (a.k.a., the inverse 3-metric) $\bar{\gamma}^{jk}$ on both of the covariant ("down") components:$$\bar{A}^{ij} = \bar{\gamma}^{ik}\bar{\gamma}^{jl} \bar{A}_{kl}.$$$\bar{A}^i_j$ is given by a single application of the raising operator (a.k.a., the inverse 3-metric) $\bar{\gamma}^{ik}$ on $\bar{A}_{kj}$:$$\bar{A}^i_j = \bar{\gamma}^{ik}\bar{A}_{kj}.$$The trace of $\bar{A}_{ij}$, $\bar{A}^k_k$, is given by a contraction with the barred 3-metric:$$\text{Tr}(\bar{A}_{ij}) = \bar{A}^k_k = \bar{\gamma}^{kj}\bar{A}_{jk}.$$Note that while $\bar{A}_{ij}$ is defined as the *traceless* conformal extrinsic curvature, it may acquire a nonzero trace (assuming the initial data impose tracelessness) due to numerical error. $\text{Tr}(\bar{A}_{ij})$ is included in the BSSN equations to drive $\text{Tr}(\bar{A}_{ij})$ to zero.In terms of rescaled BSSN quantities, $\bar{A}_{ij}$ is given by$$\bar{A}_{ij} = \text{ReDD[i][j]} a_{ij},$$so in terms of the same quantities, $\bar{A}_{ij,k}$ is given by$$\bar{A}_{ij,k} = \text{ReDDdD[i][j][k]} a_{ij} + \text{ReDD[i][j]} a_{ij,k}.$$
###Code
# Step 6: Quantities related to conformal traceless extrinsic curvature
# Step 6.a.i: Compute Abar^{ij} in terms of Abar_{ij} and gammabar^{ij}
AbarUU = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
# Abar^{ij} = gammabar^{ik} gammabar^{jl} Abar_{kl}
AbarUU[i][j] += gammabarUU[i][k]*gammabarUU[j][l]*AbarDD[k][l]
# Step 6.a.ii: Compute Abar^i_j in terms of Abar_{ij} and gammabar^{ij}
AbarUD = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
# Abar^i_j = gammabar^{ik} Abar_{kj}
AbarUD[i][j] += gammabarUU[i][k]*AbarDD[k][j]
# Step 6.a.iii: Compute Abar^k_k = trace of Abar:
trAbar = sp.sympify(0)
for k in range(DIM):
for j in range(DIM):
# Abar^k_k = gammabar^{kj} Abar_{jk}
trAbar += gammabarUU[k][j]*AbarDD[j][k]
# Step 6.a.iv: Compute Abar_{ij,k}
AbarDD_dD = ixp.zerorank3()
AbarDD_dupD = ixp.zerorank3()
aDD_dD = ixp.declarerank3("aDD_dD" ,"sym01")
aDD_dupD = ixp.declarerank3("aDD_dupD","sym01")
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
AbarDD_dupD[i][j][k] = rfm.ReDDdD[i][j][k]*aDD[i][j] + rfm.ReDD[i][j]*aDD_dupD[i][j][k]
AbarDD_dD[i][j][k] = rfm.ReDDdD[i][j][k]*aDD[i][j] + rfm.ReDD[i][j]*aDD_dD[ i][j][k]
###Output
_____no_output_____
###Markdown
Step 7: `RicciBar__gammabarDD_dHatD__DGammaUDD__DGammaU()`: The conformal ("barred") Ricci tensor $\bar{R}_{ij}$ and associated quantities \[Back to [top](toc)\]$$\label{rbar}$$Let's compute perhaps the most complicated expression in the BSSN evolution equations, the conformal Ricci tensor:\begin{align} \bar{R}_{i j} {} = {} & - \frac{1}{2} \bar{\gamma}^{k l} \hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} + \bar{\gamma}_{k(i} \hat{D}_{j)} \bar{\Lambda}^{k} + \Delta^{k} \Delta_{(i j) k} \nonumber \\ & + \bar{\gamma}^{k l} \left (2 \Delta_{k(i}^{m} \Delta_{j) m l} + \Delta_{i k}^{m} \Delta_{m j l} \right ) \; .\end{align}Let's tackle the $\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j}$ term first: Step 7.a: Conformal Ricci tensor, part 1: The $\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j}$ term \[Back to [top](toc)\]$$\label{rbar_part1}$$First note that the covariant derivative of a metric with respect to itself is zero$$\hat{D}_{l} \hat{\gamma}_{ij} = 0,$$so $$\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} = \hat{D}_{k} \hat{D}_{l} \left(\hat{\gamma}_{i j} + \varepsilon_{ij}\right) = \hat{D}_{k} \hat{D}_{l} \varepsilon_{ij}.$$Next, the covariant derivative of a tensor is given by (from the [wikipedia article on covariant differentiation](https://en.wikipedia.org/wiki/Covariant_derivative)):\begin{align} {(\nabla_{e_c} T)^{a_1 \ldots a_r}}_{b_1 \ldots b_s} = {} &\frac{\partial}{\partial x^c}{T^{a_1 \ldots a_r}}_{b_1 \ldots b_s} \\ &+ \,{\Gamma ^{a_1}}_{dc} {T^{d a_2 \ldots a_r}}_{b_1 \ldots b_s} + \cdots + {\Gamma^{a_r}}_{dc} {T^{a_1 \ldots a_{r-1}d}}_{b_1 \ldots b_s} \\ &-\,{\Gamma^d}_{b_1 c} {T^{a_1 \ldots a_r}}_{d b_2 \ldots b_s} - \cdots - {\Gamma^d}_{b_s c} {T^{a_1 \ldots a_r}}_{b_1 \ldots b_{s-1} d}.\end{align}Therefore, $$\hat{D}_{l} \bar{\gamma}_{i j} = \hat{D}_{l} \varepsilon_{i j} = \varepsilon_{i j,l} - \hat{\Gamma}^m_{i l} \varepsilon_{m j} -\hat{\Gamma}^m_{j l} \varepsilon_{i m}.$$Since the covariant first derivative is a tensor, the covariant second derivative is given by (same as [Eq. 27 in Baumgarte et al (2012)](https://arxiv.org/pdf/1211.6632.pdf))\begin{align}\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} &= \hat{D}_{k} \hat{D}_{l} \varepsilon_{i j} \\&= \partial_k \hat{D}_{l} \varepsilon_{i j} - \hat{\Gamma}^m_{lk} \left(\hat{D}_{m} \varepsilon_{i j}\right) - \hat{\Gamma}^m_{ik} \left(\hat{D}_{l} \varepsilon_{m j}\right) - \hat{\Gamma}^m_{jk} \left(\hat{D}_{l} \varepsilon_{i m}\right),\end{align}where the first term is the partial derivative of the expression already derived for $\hat{D}_{l} \varepsilon_{i j}$:\begin{align}\partial_k \hat{D}_{l} \varepsilon_{i j} &= \partial_k \left(\varepsilon_{ij,l} - \hat{\Gamma}^m_{i l} \varepsilon_{m j} -\hat{\Gamma}^m_{j l} \varepsilon_{i m} \right) \\&= \varepsilon_{ij,lk} - \hat{\Gamma}^m_{i l,k} \varepsilon_{m j} - \hat{\Gamma}^m_{i l} \varepsilon_{m j,k} - \hat{\Gamma}^m_{j l,k} \varepsilon_{i m} - \hat{\Gamma}^m_{j l} \varepsilon_{i m,k}.\end{align}In terms of the evolved quantity $h_{ij}$, the derivatives of $\varepsilon_{ij}$ are given by:\begin{align}\varepsilon_{ij,k} &= \partial_k \left(h_{ij} \text{ReDD[i][j]}\right) \\&= h_{ij,k} \text{ReDD[i][j]} + h_{ij} \text{ReDDdD[i][j][k]},\end{align}and\begin{align}\varepsilon_{ij,kl} &= \partial_l \left(h_{ij,k} \text{ReDD[i][j]} + h_{ij} \text{ReDDdD[i][j][k]} \right)\\&= h_{ij,kl} \text{ReDD[i][j]} + h_{ij,k} \text{ReDDdD[i][j][l]} + h_{ij,l} \text{ReDDdD[i][j][k]} + h_{ij} \text{ReDDdDD[i][j][k][l]}.\end{align}
###Code
# Step 7: Conformal Ricci tensor, part 1: The \hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} term
# Step 7.a.i: Define \varepsilon_{ij} = epsDD[i][j]
epsDD = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
epsDD[i][j] = hDD[i][j]*rfm.ReDD[i][j]
# Step 7.a.ii: Define epsDD_dD[i][j][k]
hDD_dD = ixp.declarerank3("hDD_dD","sym01")
epsDD_dD = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
epsDD_dD[i][j][k] = hDD_dD[i][j][k]*rfm.ReDD[i][j] + hDD[i][j]*rfm.ReDDdD[i][j][k]
# Step 7.a.iii: Define epsDD_dDD[i][j][k][l]
hDD_dDD = ixp.declarerank4("hDD_dDD","sym01_sym23")
epsDD_dDD = ixp.zerorank4()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
epsDD_dDD[i][j][k][l] = hDD_dDD[i][j][k][l]*rfm.ReDD[i][j] + \
hDD_dD[i][j][k]*rfm.ReDDdD[i][j][l] + \
hDD_dD[i][j][l]*rfm.ReDDdD[i][j][k] + \
hDD[i][j]*rfm.ReDDdDD[i][j][k][l]
###Output
_____no_output_____
###Markdown
We next compute three quantities derived above:* `gammabarDD_DhatD[i][j][l]` = $\hat{D}_{l} \bar{\gamma}_{i j} = \hat{D}_{l} \varepsilon_{i j} = \varepsilon_{i j,l} - \hat{\Gamma}^m_{i l} \varepsilon_{m j} -\hat{\Gamma}^m_{j l} \varepsilon_{i m}$,* `gammabarDD_DhatD\_dD[i][j][l][k]` = $\partial_k \hat{D}_{l} \bar{\gamma}_{i j} = \partial_k \hat{D}_{l} \varepsilon_{i j} = \varepsilon_{ij,lk} - \hat{\Gamma}^m_{i l,k} \varepsilon_{m j} - \hat{\Gamma}^m_{i l} \varepsilon_{m j,k} - \hat{\Gamma}^m_{j l,k} \varepsilon_{i m} - \hat{\Gamma}^m_{j l} \varepsilon_{i m,k}$, and* `gammabarDD_DhatDD[i][j][l][k]` = $\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} = \partial_k \hat{D}_{l} \varepsilon_{i j} - \hat{\Gamma}^m_{lk} \left(\hat{D}_{m} \varepsilon_{i j}\right) - \hat{\Gamma}^m_{ik} \left(\hat{D}_{l} \varepsilon_{m j}\right) - \hat{\Gamma}^m_{jk} \left(\hat{D}_{l} \varepsilon_{i m}\right)$.
###Code
# Step 7.a.iv: DhatgammabarDDdD[i][j][l] = \bar{\gamma}_{ij;\hat{l}}
# \bar{\gamma}_{ij;\hat{l}} = \varepsilon_{i j,l}
# - \hat{\Gamma}^m_{i l} \varepsilon_{m j}
# - \hat{\Gamma}^m_{j l} \varepsilon_{i m}
gammabarDD_dHatD = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for l in range(DIM):
gammabarDD_dHatD[i][j][l] = epsDD_dD[i][j][l]
for m in range(DIM):
gammabarDD_dHatD[i][j][l] += - rfm.GammahatUDD[m][i][l]*epsDD[m][j] \
- rfm.GammahatUDD[m][j][l]*epsDD[i][m]
# Step 7.a.v: \bar{\gamma}_{ij;\hat{l},k} = DhatgammabarDD_dHatD_dD[i][j][l][k]:
# \bar{\gamma}_{ij;\hat{l},k} = \varepsilon_{ij,lk}
# - \hat{\Gamma}^m_{i l,k} \varepsilon_{m j}
# - \hat{\Gamma}^m_{i l} \varepsilon_{m j,k}
# - \hat{\Gamma}^m_{j l,k} \varepsilon_{i m}
# - \hat{\Gamma}^m_{j l} \varepsilon_{i m,k}
gammabarDD_dHatD_dD = ixp.zerorank4()
for i in range(DIM):
for j in range(DIM):
for l in range(DIM):
for k in range(DIM):
gammabarDD_dHatD_dD[i][j][l][k] = epsDD_dDD[i][j][l][k]
for m in range(DIM):
gammabarDD_dHatD_dD[i][j][l][k] += -rfm.GammahatUDDdD[m][i][l][k]*epsDD[m][j] \
-rfm.GammahatUDD[m][i][l]*epsDD_dD[m][j][k] \
-rfm.GammahatUDDdD[m][j][l][k]*epsDD[i][m] \
-rfm.GammahatUDD[m][j][l]*epsDD_dD[i][m][k]
# Step 7.a.vi: \bar{\gamma}_{ij;\hat{l}\hat{k}} = DhatgammabarDD_dHatDD[i][j][l][k]
# \bar{\gamma}_{ij;\hat{l}\hat{k}} = \partial_k \hat{D}_{l} \varepsilon_{i j}
# - \hat{\Gamma}^m_{lk} \left(\hat{D}_{m} \varepsilon_{i j}\right)
# - \hat{\Gamma}^m_{ik} \left(\hat{D}_{l} \varepsilon_{m j}\right)
# - \hat{\Gamma}^m_{jk} \left(\hat{D}_{l} \varepsilon_{i m}\right)
gammabarDD_dHatDD = ixp.zerorank4()
for i in range(DIM):
for j in range(DIM):
for l in range(DIM):
for k in range(DIM):
gammabarDD_dHatDD[i][j][l][k] = gammabarDD_dHatD_dD[i][j][l][k]
for m in range(DIM):
gammabarDD_dHatDD[i][j][l][k] += - rfm.GammahatUDD[m][l][k]*gammabarDD_dHatD[i][j][m] \
- rfm.GammahatUDD[m][i][k]*gammabarDD_dHatD[m][j][l] \
- rfm.GammahatUDD[m][j][k]*gammabarDD_dHatD[i][m][l]
###Output
_____no_output_____
###Markdown
Step 7.b: Conformal Ricci tensor, part 2: The $\bar{\gamma}_{k(i} \hat{D}_{j)} \bar{\Lambda}^{k}$ term \[Back to [top](toc)\]$$\label{rbar_part2}$$By definition, the index symmetrization operation is given by:$$\bar{\gamma}_{k(i} \hat{D}_{j)} \bar{\Lambda}^{k} = \frac{1}{2} \left( \bar{\gamma}_{ki} \hat{D}_{j} \bar{\Lambda}^{k} + \bar{\gamma}_{kj} \hat{D}_{i} \bar{\Lambda}^{k} \right),$$and $\bar{\gamma}_{ij}$ is trivially computed ($=\varepsilon_{ij} + \hat{\gamma}_{ij}$) so the only nontrivial part of computing this term is evaluating $\hat{D}_{j} \bar{\Lambda}^{k}$.The covariant derivative is with respect to the hatted metric (i.e., the reference metric), so$$\hat{D}_{j} \bar{\Lambda}^{k} = \partial_j \bar{\Lambda}^{k} + \hat{\Gamma}^{k}_{mj} \bar{\Lambda}^m,$$except we cannot take derivatives of $\bar{\Lambda}^{k}$ directly due to potential issues with coordinate singularities. Instead we write it in terms of the rescaled quantity $\lambda^k$ via$$\bar{\Lambda}^{k} = \lambda^k \text{ReU[k]}.$$Then the expression for $\hat{D}_{j} \bar{\Lambda}^{k}$ becomes$$\hat{D}_{j} \bar{\Lambda}^{k} = \lambda^{k}_{,j} \text{ReU[k]} + \lambda^{k} \text{ReUdD[k][j]} + \hat{\Gamma}^{k}_{mj} \lambda^{m} \text{ReU[m]},$$and the NRPy+ code for this expression is written below:
###Code
# Step 7.b: Second term of RhatDD: compute \hat{D}_{j} \bar{\Lambda}^{k} = LambarU_dHatD[k][j]
lambdaU_dD = ixp.declarerank2("lambdaU_dD","nosym")
LambarU_dHatD = ixp.zerorank2()
for j in range(DIM):
for k in range(DIM):
LambarU_dHatD[k][j] = lambdaU_dD[k][j]*rfm.ReU[k] + lambdaU[k]*rfm.ReUdD[k][j]
for m in range(DIM):
LambarU_dHatD[k][j] += rfm.GammahatUDD[k][m][j]*lambdaU[m]*rfm.ReU[m]
###Output
_____no_output_____
###Markdown
Step 7.c: Conformal Ricci tensor, part 3: The $\Delta^{k} \Delta_{(i j) k} + \bar{\gamma}^{k l} \left (2 \Delta_{k(i}^{m} \Delta_{j) m l} + \Delta_{i k}^{m} \Delta_{m j l} \right )$ terms \[Back to [top](toc)\]$$\label{rbar_part3}$$Our goal here is to compute the quantities appearing as the final terms of the conformal Ricci tensor:$$\Delta^{k} \Delta_{(i j) k} + \bar{\gamma}^{k l} \left (2 \Delta_{k(i}^{m} \Delta_{j) m l} + \Delta_{i k}^{m} \Delta_{m j l} \right).$$* `DGammaUDD[k][i][j]`$= \Delta^k_{ij}$ is simply the difference in Christoffel symbols: $\Delta^{k}_{ij} = \bar{\Gamma}^k_{ij} - \hat{\Gamma}^k_{ij}$, and * `DGammaU[k]`$= \Delta^k$ is the contraction: $\bar{\gamma}^{ij} \Delta^{k}_{ij}$Adding these expressions to Ricci is straightforward, since $\bar{\Gamma}^i_{jk}$ and $\bar{\gamma}^{ij}$ were defined above in [Step 4](bssn_barred_metric__inverse_and_derivs), and $\hat{\Gamma}^i_{jk}$ was computed within NRPy+'s `reference_metric()` function:
###Code
# Step 7.c: Conformal Ricci tensor, part 3: The \Delta^{k} \Delta_{(i j) k}
# + \bar{\gamma}^{k l}*(2 \Delta_{k(i}^{m} \Delta_{j) m l}
# + \Delta_{i k}^{m} \Delta_{m j l}) terms
# Step 7.c.i: Define \Delta^i_{jk} = \bar{\Gamma}^i_{jk} - \hat{\Gamma}^i_{jk} = DGammaUDD[i][j][k]
DGammaUDD = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
DGammaUDD[i][j][k] = GammabarUDD[i][j][k] - rfm.GammahatUDD[i][j][k]
# Step 7.c.ii: Define \Delta^i = \bar{\gamma}^{jk} \Delta^i_{jk}
DGammaU = ixp.zerorank1()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
DGammaU[i] += gammabarUU[j][k] * DGammaUDD[i][j][k]
###Output
_____no_output_____
###Markdown
Next we define $\Delta_{ijk}=\bar{\gamma}_{im}\Delta^m_{jk}$:
###Code
# Step 7.c.iii: Define \Delta_{ijk} = \bar{\gamma}_{im} \Delta^m_{jk}
DGammaDDD = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for m in range(DIM):
DGammaDDD[i][j][k] += gammabarDD[i][m] * DGammaUDD[m][j][k]
###Output
_____no_output_____
###Markdown
Step 7.d: Summing the terms and defining $\bar{R}_{ij}$ \[Back to [top](toc)\]$$\label{summing_rbar_terms}$$We have now constructed all of the terms going into $\bar{R}_{ij}$:\begin{align} \bar{R}_{i j} {} = {} & - \frac{1}{2} \bar{\gamma}^{k l} \hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} + \bar{\gamma}_{k(i} \hat{D}_{j)} \bar{\Lambda}^{k} + \Delta^{k} \Delta_{(i j) k} \nonumber \\ & + \bar{\gamma}^{k l} \left (2 \Delta_{k(i}^{m} \Delta_{j) m l} + \Delta_{i k}^{m} \Delta_{m j l} \right ) \; .\end{align}
###Code
# Step 7.d: Summing the terms and defining \bar{R}_{ij}
# Step 7.d.i: Add the first term to RbarDD:
# Rbar_{ij} += - \frac{1}{2} \bar{\gamma}^{k l} \hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j}
RbarDD = ixp.zerorank2()
RbarDDpiece = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
RbarDD[i][j] += -sp.Rational(1,2) * gammabarUU[k][l]*gammabarDD_dHatDD[i][j][l][k]
RbarDDpiece[i][j] += -sp.Rational(1,2) * gammabarUU[k][l]*gammabarDD_dHatDD[i][j][l][k]
# Step 7.d.ii: Add the second term to RbarDD:
# Rbar_{ij} += (1/2) * (gammabar_{ki} Lambar^k_{;\hat{j}} + gammabar_{kj} Lambar^k_{;\hat{i}})
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
RbarDD[i][j] += sp.Rational(1,2) * (gammabarDD[k][i]*LambarU_dHatD[k][j] + \
gammabarDD[k][j]*LambarU_dHatD[k][i])
# Step 7.d.iii: Add the remaining term to RbarDD:
# Rbar_{ij} += \Delta^{k} \Delta_{(i j) k} = 1/2 \Delta^{k} (\Delta_{i j k} + \Delta_{j i k})
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
RbarDD[i][j] += sp.Rational(1,2) * DGammaU[k] * (DGammaDDD[i][j][k] + DGammaDDD[j][i][k])
# Step 7.d.iv: Add the final term to RbarDD:
# Rbar_{ij} += \bar{\gamma}^{k l} (\Delta^{m}_{k i} \Delta_{j m l}
# + \Delta^{m}_{k j} \Delta_{i m l}
# + \Delta^{m}_{i k} \Delta_{m j l})
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
for m in range(DIM):
RbarDD[i][j] += gammabarUU[k][l] * (DGammaUDD[m][k][i]*DGammaDDD[j][m][l] +
DGammaUDD[m][k][j]*DGammaDDD[i][m][l] +
DGammaUDD[m][i][k]*DGammaDDD[m][j][l])
###Output
_____no_output_____
###Markdown
Step 8: **`betaU_derivs()`**: The unrescaled shift vector $\beta^i$ spatial derivatives: $\beta^i_{,j}$ & $\beta^i_{,jk}$, written in terms of the rescaled shift vector $\mathcal{V}^i$ \[Back to [top](toc)\]$$\label{beta_derivs}$$This step, which documents the function `betaUbar_and_derivs()` inside the [BSSN.BSSN_unrescaled_and_barred_vars](../edit/BSSN/BSSN_unrescaled_and_barred_vars) module, defines three quantities:[comment]: (Fix Link Above: TODO)* `betaU_dD[i][j]`$=\beta^i_{,j} = \left(\mathcal{V}^i \circ \text{ReU[i]}\right)_{,j} = \mathcal{V}^i_{,j} \circ \text{ReU[i]} + \mathcal{V}^i \circ \text{ReUdD[i][j]}$* `betaU_dupD[i][j]`: the same as above, except using *upwinded* finite-difference derivatives to compute $\mathcal{V}^i_{,j}$ instead of *centered* finite-difference derivatives.* `betaU_dDD[i][j][k]`$=\beta^i_{,jk} = \mathcal{V}^i_{,jk} \circ \text{ReU[i]} + \mathcal{V}^i_{,j} \circ \text{ReUdD[i][k]} + \mathcal{V}^i_{,k} \circ \text{ReUdD[i][j]}+\mathcal{V}^i \circ \text{ReUdDD[i][j][k]}$
###Code
# Step 8: The unrescaled shift vector betaU spatial derivatives:
# betaUdD & betaUdDD, written in terms of the
# rescaled shift vector vetU
vetU_dD = ixp.declarerank2("vetU_dD","nosym")
vetU_dupD = ixp.declarerank2("vetU_dupD","nosym") # Needed for upwinded \beta^i_{,j}
vetU_dDD = ixp.declarerank3("vetU_dDD","sym12") # Needed for \beta^i_{,jk}
betaU_dD = ixp.zerorank2()
betaU_dupD = ixp.zerorank2() # Needed for, e.g., \beta^i RHS
betaU_dDD = ixp.zerorank3() # Needed for, e.g., \bar{\Lambda}^i RHS
for i in range(DIM):
for j in range(DIM):
betaU_dD[i][j] = vetU_dD[i][j]*rfm.ReU[i] + vetU[i]*rfm.ReUdD[i][j]
betaU_dupD[i][j] = vetU_dupD[i][j]*rfm.ReU[i] + vetU[i]*rfm.ReUdD[i][j] # Needed for \beta^i RHS
for k in range(DIM):
# Needed for, e.g., \bar{\Lambda}^i RHS:
betaU_dDD[i][j][k] = vetU_dDD[i][j][k]*rfm.ReU[i] + vetU_dD[i][j]*rfm.ReUdD[i][k] + \
vetU_dD[i][k]*rfm.ReUdD[i][j] + vetU[i]*rfm.ReUdDD[i][j][k]
###Output
_____no_output_____
###Markdown
Step 9: **`phi_and_derivs()`**: Standard BSSN conformal factor $\phi$, and its derivatives $\phi_{,i}$, $\phi_{,ij}$, $\bar{D}_j \phi_i$, and $\bar{D}_j\bar{D}_k \phi_i$, all written in terms of BSSN gridfunctions like $\text{cf}$ \[Back to [top](toc)\]$$\label{phi_and_derivs}$$ Step 9.a: $\phi$ in terms of the chosen (possibly non-standard) conformal factor variable $\text{cf}$ (e.g., $\text{cf}=\chi=e^{-4\phi}$) \[Back to [top](toc)\]$$\label{phi_ito_cf}$$When solving the BSSN time evolution equations across the coordinate singularity (i.e., the "puncture") inside puncture black holes for example, the standard conformal factor $\phi$ becomes very sharp, whereas $\chi=e^{-4\phi}$ is far smoother (see, e.g., [Campanelli, Lousto, Marronetti, and Zlochower (2006)](https://arxiv.org/abs/gr-qc/0511048) for additional discussion). Thus if we choose to rewrite derivatives of $\phi$ in the BSSN equations in terms of finite-difference derivatives `cf`$=\chi$, numerical errors will be far smaller near the puncture.The BSSN modules in NRPy+ support three options for the conformal factor variable `cf`:1. `cf`$=\phi$,1. `cf`$=\chi=e^{-4\phi}$, and1. `cf`$=W = e^{-2\phi}$.The BSSN equations are written in terms of $\phi$ (actually only $e^{-4\phi}$ appears) and derivatives of $\phi$, we now define $e^{-4\phi}$ and derivatives of $\phi$ in terms of the chosen `cf`.First, we define the base variables needed within the BSSN equations:
###Code
# Step 9: Standard BSSN conformal factor phi,
# and its partial and covariant derivatives,
# all in terms of BSSN gridfunctions like cf
# Step 9.a.i: Define partial derivatives of \phi in terms of evolved quantity "cf":
cf_dD = ixp.declarerank1("cf_dD")
cf_dupD = ixp.declarerank1("cf_dupD") # Needed for \partial_t \phi next.
cf_dDD = ixp.declarerank2("cf_dDD","sym01")
phi_dD = ixp.zerorank1()
phi_dupD = ixp.zerorank1()
phi_dDD = ixp.zerorank2()
exp_m4phi = sp.sympify(0)
###Output
_____no_output_____
###Markdown
Then we define $\phi_{,i}$, $\phi_{,ij}$, and $e^{-4\phi}$ for each of the choices of `cf`.For `cf`$=\phi$, this is trivial:
###Code
# Step 9.a.ii: Assuming cf=phi, define exp_m4phi, phi_dD,
#              phi_dupD (upwind finite-difference version of phi_dD), and phi_dDD
if par.parval_from_str(thismodule+"::EvolvedConformalFactor_cf") == "phi":
for i in range(DIM):
phi_dD[i] = cf_dD[i]
phi_dupD[i] = cf_dupD[i]
for j in range(DIM):
phi_dDD[i][j] = cf_dDD[i][j]
exp_m4phi = sp.exp(-4*cf)
###Output
_____no_output_____
###Markdown
For `cf`$=W=e^{-2\phi}$, we have* $\phi_{,i} = -\text{cf}_{,i} / (2 \text{cf})$* $\phi_{,ij} = (-\text{cf}_{,ij} + \text{cf}_{,i}\text{cf}_{,j}/\text{cf}) / (2 \text{cf})$* $e^{-4\phi} = \text{cf}^2$***Exercise to student: Prove the above relations***
###Code
# Step 9.a.iii: Assuming cf=W=e^{-2 phi}, define exp_m4phi, phi_dD,
#              phi_dupD (upwind finite-difference version of phi_dD), and phi_dDD
if par.parval_from_str(thismodule+"::EvolvedConformalFactor_cf") == "W":
# \partial_i W = \partial_i (e^{-2 phi}) = -2 e^{-2 phi} \partial_i phi
# -> \partial_i phi = -\partial_i cf / (2 cf)
for i in range(DIM):
phi_dD[i] = - cf_dD[i] / (2*cf)
phi_dupD[i] = - cf_dupD[i] / (2*cf)
for j in range(DIM):
# \partial_j \partial_i phi = - \partial_j [\partial_i cf / (2 cf)]
# = - cf_{,ij} / (2 cf) + \partial_i cf \partial_j cf / (2 cf^2)
phi_dDD[i][j] = (- cf_dDD[i][j] + cf_dD[i]*cf_dD[j] / cf) / (2*cf)
exp_m4phi = cf*cf
###Output
_____no_output_____
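###Markdown
As a quick, hedged sanity check (not part of the original notebook), the cell below verifies these relations in one dimension using SymPy, with the illustrative symbols `xtmp` standing in for any coordinate $x^i$ and `phitmp` for $\phi$.
###Code
# Hedged 1-D verification of the cf = W = e^{-2 phi} relations above.
# xtmp and phitmp are illustrative names only (not NRPy+ variables).
xtmp = sp.Symbol('xtmp', real=True)
phitmp = sp.Function('phitmp')(xtmp)
Wtmp = sp.exp(-2*phitmp)
# phi_{,x} = -W_{,x} / (2 W):
assert sp.simplify(sp.diff(phitmp, xtmp) + sp.diff(Wtmp, xtmp)/(2*Wtmp)) == 0
# phi_{,xx} = (-W_{,xx} + W_{,x}^2/W) / (2 W):
assert sp.simplify(sp.diff(phitmp, xtmp, 2)
                   - (-sp.diff(Wtmp, xtmp, 2) + sp.diff(Wtmp, xtmp)**2/Wtmp)/(2*Wtmp)) == 0
# e^{-4 phi} = W^2:
assert sp.simplify(sp.exp(-4*phitmp) - Wtmp**2) == 0
###Output
_____no_output_____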
###Markdown
For `cf`$=\chi=e^{-4\phi}$, we have* $\phi_{,i} = -\text{cf}_{,i} / (4 \text{cf})$* $\phi_{,ij} = (-\text{cf}_{,ij} + \text{cf}_{,i}\text{cf}_{,j}/\text{cf}) / (4 \text{cf})$* $e^{-4\phi} = \text{cf}$***Exercise to student: Prove the above relations***
###Code
# Step 9.a.iv: Assuming cf=chi=e^{-4 phi}, define exp_m4phi, phi_dD,
#              phi_dupD (upwind finite-difference version of phi_dD), and phi_dDD
if par.parval_from_str(thismodule+"::EvolvedConformalFactor_cf") == "chi":
# \partial_i chi = \partial_i (e^{-4 phi}) = -4 e^{-4 phi} \partial_i phi
# -> \partial_i phi = -\partial_i cf / (4 cf)
for i in range(DIM):
phi_dD[i] = - cf_dD[i] / (4*cf)
phi_dupD[i] = - cf_dupD[i] / (4*cf)
for j in range(DIM):
# \partial_j \partial_i phi = - \partial_j [\partial_i cf / (4 cf)]
# = - cf_{,ij} / (4 cf) + \partial_i cf \partial_j cf / (4 cf^2)
phi_dDD[i][j] = (- cf_dDD[i][j] + cf_dD[i]*cf_dD[j] / cf) / (4*cf)
exp_m4phi = cf
# Step 9.a.v: Error out if unsupported EvolvedConformalFactor_cf choice is made:
cf_choice = par.parval_from_str(thismodule+"::EvolvedConformalFactor_cf")
if cf_choice not in ('phi', 'W', 'chi'):
print("Error: EvolvedConformalFactor_cf == "+par.parval_from_str(thismodule+"::EvolvedConformalFactor_cf")+" unsupported!")
sys.exit(1)
###Output
_____no_output_____
###Markdown
Step 9.b: Covariant derivatives of $\phi$ \[Back to [top](toc)\]$$\label{phi_covariant_derivs}$$Since $\phi$ is a scalar, $\bar{D}_i \phi = \partial_i \phi$.Thus the second covariant derivative is given by\begin{align}\bar{D}_i \bar{D}_j \phi &= \phi_{;\bar{i}\bar{j}} = \bar{D}_i \phi_{,j}\\ &= \phi_{,ij} - \bar{\Gamma}^k_{ij} \phi_{,k}.\end{align}
###Code
# Step 9.b: Define phi_dBarD = phi_dD (since phi is a scalar) and phi_dBarDD (covariant derivative)
# \bar{D}_i \bar{D}_j \phi = \phi_{;\bar{i}\bar{j}} = \bar{D}_i \phi_{,j}
# = \phi_{,ij} - \bar{\Gamma}^k_{ij} \phi_{,k}
phi_dBarD = phi_dD
phi_dBarDD = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
phi_dBarDD[i][j] = phi_dDD[i][j]
for k in range(DIM):
phi_dBarDD[i][j] += - GammabarUDD[k][i][j]*phi_dD[k]
###Output
_____no_output_____
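###Markdown
As a hedged worked example (not part of the original notebook), the cell below applies the same "partial derivative minus Christoffel" structure, but with the *hatted* (reference-metric) connection, to a purely radial scalar $f(r)$ in the Spherical coordinates chosen in Step 1; contracting with $\hat{\gamma}^{ij}$ should recover the familiar flat-space Laplacian $f'' + \frac{2}{r}f'$. It assumes `rfm.xx[0]`, `rfm.xx[1]`, `rfm.xx[2]` are the $(r,\theta,\phi)$ coordinate symbols set up by `rfm.reference_metric()`.
###Code
# Hedged illustration: gammahat^{ij} ( f_{,ij} - Gammahat^k_{ij} f_{,k} ) for f = f(r)
# should simplify to f''(r) + 2 f'(r)/r in Spherical coordinates.
# Assumes rfm.xx[0..2] are the (r, theta, phi) symbols from rfm.reference_metric().
ftoy = sp.Function('ftoy')(rfm.xx[0])
ftoy_dD = [sp.diff(ftoy, rfm.xx[i]) for i in range(DIM)]
ftoy_dDD = [[sp.diff(ftoy, rfm.xx[i], rfm.xx[j]) for j in range(DIM)] for i in range(DIM)]
ghatUU_toy, _ = ixp.symm_matrix_inverter3x3(rfm.ghatDD)
laptoy = sp.sympify(0)
for i in range(DIM):
    for j in range(DIM):
        laptoy += ghatUU_toy[i][j]*ftoy_dDD[i][j]
        for k in range(DIM):
            laptoy += -ghatUU_toy[i][j]*rfm.GammahatUDD[k][i][j]*ftoy_dD[k]
laptoy = sp.simplify(laptoy)  # expected: Derivative(ftoy, (xx0, 2)) + 2*Derivative(ftoy, xx0)/xx0
###Output
_____no_output_____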
###Markdown
Step 10: Code validation against `BSSN.BSSN_quantities` NRPy+ module \[Back to [top](toc)\]$$\label{code_validation}$$As a code validation check, we verify agreement in the SymPy expressions for the BSSN quantities constructed here between1. this tutorial and 2. the NRPy+ [BSSN.BSSN_quantities](../edit/BSSN/BSSN_quantities.py) module.By default, we analyze these expressions in Spherical coordinates, though other coordinate systems may be chosen.
###Code
all_passed=True
def comp_func(expr1,expr2,basename,prefixname2="Bq."):
    global all_passed  # needed so that a mismatch actually flips the module-level flag
    if str(expr1-expr2)!="0":
        print(basename+" - "+prefixname2+basename+" = "+ str(expr1-expr2))
        all_passed=False
def gfnm(basename,idx1,idx2=None,idx3=None):
    if idx2 is None:
        return basename+"["+str(idx1)+"]"
    if idx3 is None:
return basename+"["+str(idx1)+"]["+str(idx2)+"]"
return basename+"["+str(idx1)+"]["+str(idx2)+"]["+str(idx3)+"]"
expr_list = []
exprcheck_list = []
namecheck_list = []
# Step 3:
import BSSN.BSSN_quantities as Bq
Bq.BSSN_basic_tensors()
for i in range(DIM):
namecheck_list.extend([gfnm("LambdabarU",i),gfnm("betaU",i),gfnm("BU",i)])
exprcheck_list.extend([Bq.LambdabarU[i],Bq.betaU[i],Bq.BU[i]])
expr_list.extend([LambdabarU[i],betaU[i],BU[i]])
for j in range(DIM):
namecheck_list.extend([gfnm("gammabarDD",i,j),gfnm("AbarDD",i,j)])
exprcheck_list.extend([Bq.gammabarDD[i][j],Bq.AbarDD[i][j]])
expr_list.extend([gammabarDD[i][j],AbarDD[i][j]])
# Step 4:
Bq.gammabar__inverse_and_derivs()
for i in range(DIM):
for j in range(DIM):
namecheck_list.extend([gfnm("gammabarUU",i,j)])
exprcheck_list.extend([Bq.gammabarUU[i][j]])
expr_list.extend([gammabarUU[i][j]])
for k in range(DIM):
namecheck_list.extend([gfnm("gammabarDD_dD",i,j,k),
gfnm("gammabarDD_dupD",i,j,k),
gfnm("GammabarUDD",i,j,k)])
exprcheck_list.extend([Bq.gammabarDD_dD[i][j][k],Bq.gammabarDD_dupD[i][j][k],Bq.GammabarUDD[i][j][k]])
expr_list.extend( [gammabarDD_dD[i][j][k],gammabarDD_dupD[i][j][k],GammabarUDD[i][j][k]])
# Step 5:
Bq.detgammabar_and_derivs()
namecheck_list.extend(["detgammabar"])
exprcheck_list.extend([Bq.detgammabar])
expr_list.extend([detgammabar])
for i in range(DIM):
namecheck_list.extend([gfnm("detgammabar_dD",i)])
exprcheck_list.extend([Bq.detgammabar_dD[i]])
expr_list.extend([detgammabar_dD[i]])
for j in range(DIM):
namecheck_list.extend([gfnm("detgammabar_dDD",i,j)])
exprcheck_list.extend([Bq.detgammabar_dDD[i][j]])
expr_list.extend([detgammabar_dDD[i][j]])
# Step 6:
Bq.AbarUU_AbarUD_trAbar_AbarDD_dD()
namecheck_list.extend(["trAbar"])
exprcheck_list.extend([Bq.trAbar])
expr_list.extend([trAbar])
for i in range(DIM):
for j in range(DIM):
namecheck_list.extend([gfnm("AbarUU",i,j),gfnm("AbarUD",i,j)])
exprcheck_list.extend([Bq.AbarUU[i][j],Bq.AbarUD[i][j]])
expr_list.extend([AbarUU[i][j],AbarUD[i][j]])
for k in range(DIM):
namecheck_list.extend([gfnm("AbarDD_dD",i,j,k)])
exprcheck_list.extend([Bq.AbarDD_dD[i][j][k]])
expr_list.extend([AbarDD_dD[i][j][k]])
# Step 7:
Bq.RicciBar__gammabarDD_dHatD__DGammaUDD__DGammaU()
for i in range(DIM):
namecheck_list.extend([gfnm("DGammaU",i)])
exprcheck_list.extend([Bq.DGammaU[i]])
expr_list.extend([DGammaU[i]])
for j in range(DIM):
namecheck_list.extend([gfnm("RbarDD",i,j)])
exprcheck_list.extend([Bq.RbarDD[i][j]])
expr_list.extend([RbarDD[i][j]])
for k in range(DIM):
namecheck_list.extend([gfnm("DGammaUDD",i,j,k),gfnm("gammabarDD_dHatD",i,j,k)])
exprcheck_list.extend([Bq.DGammaUDD[i][j][k],Bq.gammabarDD_dHatD[i][j][k]])
expr_list.extend([DGammaUDD[i][j][k],gammabarDD_dHatD[i][j][k]])
# Step 8:
Bq.betaU_derivs()
for i in range(DIM):
for j in range(DIM):
namecheck_list.extend([gfnm("betaU_dD",i,j),gfnm("betaU_dupD",i,j)])
exprcheck_list.extend([Bq.betaU_dD[i][j],Bq.betaU_dupD[i][j]])
expr_list.extend([betaU_dD[i][j],betaU_dupD[i][j]])
for k in range(DIM):
namecheck_list.extend([gfnm("betaU_dDD",i,j,k)])
exprcheck_list.extend([Bq.betaU_dDD[i][j][k]])
expr_list.extend([betaU_dDD[i][j][k]])
# Step 9:
Bq.phi_and_derivs()
#phi_dD,phi_dupD,phi_dDD,exp_m4phi,phi_dBarD,phi_dBarDD
namecheck_list.extend(["exp_m4phi"])
exprcheck_list.extend([Bq.exp_m4phi])
expr_list.extend([exp_m4phi])
for i in range(DIM):
namecheck_list.extend([gfnm("phi_dD",i),gfnm("phi_dupD",i),gfnm("phi_dBarD",i)])
exprcheck_list.extend([Bq.phi_dD[i],Bq.phi_dupD[i],Bq.phi_dBarD[i]])
expr_list.extend( [phi_dD[i],phi_dupD[i],phi_dBarD[i]])
for j in range(DIM):
namecheck_list.extend([gfnm("phi_dDD",i,j),gfnm("phi_dBarDD",i,j)])
exprcheck_list.extend([Bq.phi_dDD[i][j],Bq.phi_dBarDD[i][j]])
expr_list.extend([phi_dDD[i][j],phi_dBarDD[i][j]])
for i in range(len(expr_list)):
comp_func(expr_list[i],exprcheck_list[i],namecheck_list[i])
if all_passed:
print("ALL TESTS PASSED!")
###Output
ALL TESTS PASSED!
###Markdown
Step 11: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-BSSN_quantities.pdf](Tutorial-BSSN_quantities.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
!jupyter nbconvert --to latex --template latex_nrpy_style.tplx --log-level='WARN' Tutorial-BSSN_quantities.ipynb
!pdflatex -interaction=batchmode Tutorial-BSSN_quantities.tex
!pdflatex -interaction=batchmode Tutorial-BSSN_quantities.tex
!pdflatex -interaction=batchmode Tutorial-BSSN_quantities.tex
!rm -f Tut*.out Tut*.aux Tut*.log
###Output
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
###Markdown
window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'UA-59152712-8'); BSSN Quantities Author: Zach Etienne Formatting improvements courtesy Brandon Clark This module documents and constructs a number of quantities useful for building symbolic (SymPy) expressions in terms of the core BSSN quantities $\left\{h_{i j},a_{i j},\phi, K, \lambda^{i}, \alpha, \mathcal{V}^i, \mathcal{B}^i\right\}$, as defined in [Ruchlin, Etienne, and Baumgarte (2018)](https://arxiv.org/abs/1712.07658) (see also [Baumgarte, Montero, Cordero-Carrión, and Müller (2012)](https://arxiv.org/abs/1211.6632)). **Notebook Status:** Self-Validated **Validation Notes:** This tutorial notebook has been confirmed to be self-consistent with its corresponding NRPy+ module, as documented [below](code_validation). **Additional validation tests may have been performed, but are as yet, undocumented. (TODO)**[comment]: (Introduction: TODO) A Note on Notation:As is standard in NRPy+, * Greek indices refer to four-dimensional quantities where the zeroth component indicates temporal (time) component.* Latin indices refer to three-dimensional quantities. This is somewhat counterintuitive since Python always indexes its lists starting from 0. As a result, the zeroth component of three-dimensional quantities will necessarily indicate the first *spatial* direction.As a corollary, any expressions involving mixed Greek and Latin indices will need to offset one set of indices by one: A Latin index in a four-vector will be incremented and a Greek index in a three-vector will be decremented (however, the latter case does not occur in this tutorial notebook). Table of Contents$$\label{toc}$$Each family of quantities is constructed within a given function (**boldfaced** below). This notebook is organized as follows1. [Step 1](initializenrpy): Initialize needed Python/NRPy+ modules1. [Step 2](declare_bssn_gfs): **`declare_BSSN_gridfunctions_if_not_declared_already()`**: Declare all of the core BSSN variables $\left\{h_{i j},a_{i j},\text{cf}, K, \lambda^{i}, \alpha, \mathcal{V}^i, \mathcal{B}^i\right\}$ and register them as gridfunctions1. [Step 3](rescaling_tensors) Rescaling tensors to avoid coordinate singularities 1. [Step 3.a](bssn_basic_tensors) **`BSSN_basic_tensors()`**: Define all basic conformal BSSN tensors $\left\{\bar{\gamma}_{i j},\bar{A}_{i j},\bar{\Lambda}^{i}, \beta^i, B^i\right\}$ in terms of BSSN gridfunctions1. [Step 4](bssn_barred_metric__inverse_and_derivs): **`gammabar__inverse_and_derivs()`**: $\bar{\gamma}^{ij}$, and spatial derivatives of $\bar{\gamma}_{ij}$ including $\bar{\Gamma}^{i}_{jk}$ 1. [Step 4.a](bssn_barred_metric__inverse): Inverse conformal 3-metric: $\bar{\gamma^{ij}}$ 1. [Step 4.b](bssn_barred_metric__derivs): Derivatives of the conformal 3-metric $\bar{\gamma}_{ij,k}$ and $\bar{\gamma}_{ij,kl}$, and associated "barred" Christoffel symbols $\bar{\Gamma}^{i}_{jk}$1. [Step 5](detgammabar_and_derivs): **`detgammabar_and_derivs()`**: $\det \bar{\gamma}_{ij}$ and its derivatives1. [Step 6](abar_quantities): **`AbarUU_AbarUD_trAbar()`**: Quantities related to conformal traceless extrinsic curvature $\bar{A}_{ij}$: $\bar{A}^{ij}$, $\bar{A}^i_j$, and $\bar{A}^k_k$1. [Step 7](rbar): **`RicciBar__gammabarDD_dHatD__DGammaUDD__DGammaU()`**: The conformal ("barred") Ricci tensor $\bar{R}_{ij}$ and associated quantities 1. [Step 7.a](rbar_part1): Conformal Ricci tensor, part 1: The $\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j}$ term 1. 
[Step 7.b](rbar_part2): Conformal Ricci tensor, part 2: The $\bar{\gamma}_{k(i} \hat{D}_{j)} \bar{\Lambda}^{k}$ term 1. [Step 7.c](rbar_part3): Conformal Ricci tensor, part 3: The $\Delta^{k} \Delta_{(i j) k} + \bar{\gamma}^{k l} \left (2 \Delta_{k(i}^{m} \Delta_{j) m l} + \Delta_{i k}^{m} \Delta_{m j l} \right )$ terms 1. [Step 7.d](summing_rbar_terms): Summing the terms and defining $\bar{R}_{ij}$1. [Step 8](beta_derivs): **`betaU_derivs()`**: Unrescaled shift vector $\beta^i$ and spatial derivatives $\beta^i_{,j}$ and $\beta^i_{,jk}$1. [Step 9](phi_and_derivs): **`phi_and_derivs()`**: Standard BSSN conformal factor $\phi$, and its derivatives $\phi_{,i}$, $\phi_{,ij}$, $\bar{D}_j \phi_i$, and $\bar{D}_j\bar{D}_k \phi_i$ 1. [Step 9.a](phi_ito_cf): $\phi$ in terms of the chosen (possibly non-standard) conformal factor variable `cf` (e.g., `cf`$=W=e^{-4\phi}$) 1. [Step 9.b](phi_covariant_derivs): Partial and covariant derivatives of $\phi$1. [Step 10](code_validation): Code Validation against `BSSN.BSSN_quantities` NRPy+ module1. [Step 11](latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file Step 1: Initialize needed Python/NRPy+ modules \[Back to [top](toc)\]$$\label{initializenrpy}$$
###Code
# Step 1: Import all needed modules from NRPy+:
import NRPy_param_funcs as par
import sympy as sp
import indexedexp as ixp
import grid as gri
import reference_metric as rfm
import sys
# Step 1.a: Set the coordinate system for the numerical grid
par.set_parval_from_str("reference_metric::CoordSystem","Spherical")
# Step 1.b: Given the chosen coordinate system, set up
# corresponding reference metric and needed
# reference metric quantities
# The following function call sets up the reference metric
# and related quantities, including rescaling matrices ReDD,
# ReU, and hatted quantities.
rfm.reference_metric()
# Step 1.c: Set spatial dimension (must be 3 for BSSN, as BSSN is
# a 3+1-dimensional decomposition of the general
# relativistic field equations)
DIM = 3
par.set_parval_from_str("grid::DIM",DIM)
# Step 1.d: Declare/initialize parameters for this module
thismodule = "BSSN_quantities"
par.initialize_param(par.glb_param("char", thismodule, "EvolvedConformalFactor_cf", "W"))
par.initialize_param(par.glb_param("bool", thismodule, "detgbarOverdetghat_equals_one", "True"))
###Output
_____no_output_____
###Markdown
Step 2: `declare_BSSN_gridfunctions_if_not_declared_already()`: Declare all of the core BSSN variables $\left\{h_{i j},a_{i j},\text{cf}, K, \lambda^{i}, \alpha, \mathcal{V}^i, \mathcal{B}^i\right\}$ and register them as gridfunctions \[Back to [top](toc)\]$$\label{declare_bssn_gfs}$$
###Code
# Step 2: Register all needed BSSN gridfunctions.
# Step 2.a: Register indexed quantities, using ixp.register_... functions
hDD = ixp.register_gridfunctions_for_single_rank2("EVOL", "hDD", "sym01")
aDD = ixp.register_gridfunctions_for_single_rank2("EVOL", "aDD", "sym01")
lambdaU = ixp.register_gridfunctions_for_single_rank1("EVOL", "lambdaU")
vetU = ixp.register_gridfunctions_for_single_rank1("EVOL", "vetU")
betU = ixp.register_gridfunctions_for_single_rank1("EVOL", "betU")
# Step 2.b: Register scalar quantities, using gri.register_gridfunctions()
trK, cf, alpha = gri.register_gridfunctions("EVOL",["trK", "cf", "alpha"])
###Output
_____no_output_____
###Markdown
Step 3: Rescaling tensors to avoid coordinate singularities \[Back to [top](toc)\]$$\label{rescaling_tensors}$$While the [covariant form of the BSSN evolution equations](Tutorial-BSSNCurvilinear.ipynb) are properly covariant (with the potential exception of the shift evolution equation, since the shift is a [freely specifiable gauge quantity](https://en.wikipedia.org/wiki/Gauge_fixing)), components of the rank-1 and rank-2 tensors $\varepsilon_{i j}$, $\bar{A}_{i j}$, and $\bar{\Lambda}^{i}$ will drop to zero (destroying information) or diverge (to $\infty$) at coordinate singularities. The good news is, this singular behavior is well-understood in terms of the scale factors of the reference metric, enabling us to define rescaled version of these quantities that are well behaved (so that, e.g., they can be finite differenced).For example, given a smooth vector *in a 3D Cartesian basis* $\bar{\Lambda}^{i}$, all components $\bar{\Lambda}^{x}$, $\bar{\Lambda}^{y}$, and $\bar{\Lambda}^{z}$ will be smooth (by assumption). When changing the basis to spherical coordinates (applying the appropriate Jacobian matrix transformation), we will find that since $\phi = \arctan(y/x)$, $\bar{\Lambda}^{\phi}$ is given by\begin{align}\bar{\Lambda}^{\phi} &= \frac{\partial \phi}{\partial x} \bar{\Lambda}^{x} + \frac{\partial \phi}{\partial y} \bar{\Lambda}^{y} + \frac{\partial \phi}{\partial z} \bar{\Lambda}^{z} \\&= -\frac{y}{x^2+y^2} \bar{\Lambda}^{x} + \frac{x}{x^2+y^2} \bar{\Lambda}^{y} \\&= -\frac{y}{(r \sin\theta)^2} \bar{\Lambda}^{x} + \frac{x}{(r \sin\theta)^2} \bar{\Lambda}^{y}.\end{align}Thus $\bar{\Lambda}^{\phi}$ diverges at all points where $r\sin\theta=0$ (or equivalently where $x=y=0$; i.e., the $z$-axis) due to the $\frac{1}{(r\sin\theta)^2}$ that appear in the Jacobian transformation. This divergence might pose no problem on cell-centered grids that avoid $r \sin\theta=0$, except that the BSSN equations require that *first and second derivatives* of these quantities be taken. Usual strategies for numerical approximation of these derivatives (e.g., finite difference methods) will "see" these divergences and errors generally will not drop to zero with increased numerical sampling of the functions at points near where the functions diverge.However, notice that if we define $\lambda^{\phi}$ such that$$\bar{\Lambda}^{\phi} = \frac{1}{r\sin\theta} \lambda^{\phi},$$then $\lambda^{\phi}$ will be smooth as well. Avoiding such singularities can be generalized to other coordinate systems, so long as $\lambda^i$ is defined as:$$\bar{\Lambda}^{i} = \frac{\lambda^i}{\text{scalefactor[i]}} ,$$where scalefactor\[i\] is the $i$th scale factor in the given coordinate system. In an identical fashion, we define the smooth versions of $\beta^i$ and $B^i$ to be $\mathcal{V}^i$ and $\mathcal{B}^i$, respectively. We refer to $\mathcal{V}^i$ and $\mathcal{B}^i$ as vet\[i\] and bet\[i\] respectively in the code after the Hebrew letters that bear some resemblance. Similarly, we define the smooth versions of $\bar{A}_{ij}$ and $\varepsilon_{ij}$ ($a_{ij}$ and $h_{ij}$, respectively) via\begin{align}\bar{A}_{ij} &= \text{scalefactor[i]}\ \text{scalefactor[j]}\ a_{ij} \\\varepsilon_{ij} &= \text{scalefactor[i]}\ \text{scalefactor[j]}\ h_{ij},\end{align}where in this case we *multiply* due to the fact that these tensors are purely covariant (as opposed to contravariant). 
To slightly simplify the notation, in NRPy+ we define the *rescaling matrices* `ReU[i]` and `ReDD[i][j]`, such that\begin{align}\text{ReU[i]} &= 1 / \text{scalefactor[i]} \\\text{ReDD[i][j]} &= \text{scalefactor[i] scalefactor[j]}.\end{align}Thus, for example, $\bar{A}_{ij}$ and $\bar{\Lambda}^i$ can be expressed as the [Hadamard product](https://en.wikipedia.org/w/index.php?title=Hadamard_product_(matrices)&oldid=852272177) of matrices :\begin{align}\bar{A}_{ij} &= \mathbf{ReDD}\circ\mathbf{a} = \text{ReDD[i][j]} a_{ij} \\\bar{\Lambda}^{i} &= \mathbf{ReU}\circ\mathbf{\lambda} = \text{ReU[i]} \lambda^i,\end{align}where no sums are implied by the repeated indices.Further, since the scale factors are *time independent*, \begin{align}\partial_t \bar{A}_{ij} &= \text{ReDD[i][j]}\ \partial_t a_{ij} \\\partial_t \bar{\gamma}_{ij} &= \partial_t \left(\varepsilon_{ij} + \hat{\gamma}_{ij}\right)\\&= \partial_t \varepsilon_{ij} \\&= \text{scalefactor[i]}\ \text{scalefactor[j]}\ \partial_t h_{ij}.\end{align}Thus instead of taking space or time derivatives of BSSN quantities$$\left\{\bar{\gamma}_{i j},\bar{A}_{i j},\phi, K, \bar{\Lambda}^{i}, \alpha, \beta^i, B^i\right\},$$ across coordinate singularities, we instead factor out the singular scale factors according to this prescription so that space or time derivatives of BSSN quantities are written in terms of finite-difference derivatives of the *rescaled* variables $$\left\{h_{i j},a_{i j},\text{cf}, K, \lambda^{i}, \alpha, \mathcal{V}^i, \mathcal{B}^i\right\},$$ and *exact* expressions for (spatial) derivatives of scale factors. Note that `cf` is the chosen conformal factor (supported choices for `cf` are discussed in [Step 6.a](phi_ito_cf)). As an example, let's evaluate $\bar{\Lambda}^{i}_{\, ,\, j}$ according to this prescription:\begin{align}\bar{\Lambda}^{i}_{\, ,\, j} &= -\frac{\lambda^i}{(\text{ReU[i]})^2}\ \partial_j \left(\text{ReU[i]}\right) + \frac{\partial_j \lambda^i}{\text{ReU[i]}} \\&= -\frac{\lambda^i}{(\text{ReU[i]})^2}\ \text{ReUdD[i][j]} + \frac{\partial_j \lambda^i}{\text{ReU[i]}}.\end{align}Here, the derivative `ReUdD[i][j]` **is computed symbolically and exactly** using SymPy, and the derivative $\partial_j \lambda^i$ represents a derivative of a *smooth* quantity (so long as $\bar{\Lambda}^{i}$ is smooth in the Cartesian basis). 
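As a brief, hedged aside (not part of the original notebook), the short SymPy cell below illustrates the $\frac{1}{r\sin\theta}$ factor derived above for $\bar{\Lambda}^{\phi}$; all symbol names in it (`rr`, `thth`, `xx_c`, `yy_c`) are illustrative only.
###Code
# Hedged, self-contained illustration of the 1/(r sin(theta)) factor in the
# Cartesian -> spherical Jacobian for the azimuthal angle phi = arctan(y/x).
# All symbol names below are illustrative only (not NRPy+ variables).
rr, thth = sp.symbols('rr thth', positive=True)
xx_c, yy_c = sp.symbols('xx_c yy_c', real=True)
azimuth = sp.atan2(yy_c, xx_c)
dphi_dy = sp.diff(azimuth, yy_c)      # = x/(x^2 + y^2)
# Evaluate on the y = 0 slice, where x = r sin(theta):
dphi_dy_sph = sp.simplify(dphi_dy.subs({xx_c: rr*sp.sin(thth), yy_c: 0}))
# dphi_dy_sph is 1/(rr*sin(thth)): it diverges where r sin(theta) -> 0 (the z-axis),
# which is precisely the singular scale factor that rescaling by ReU[i] removes.
###Output
_____no_output_____
###Markdown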
Step 3.a: `BSSN_basic_tensors()`: Define all basic conformal BSSN tensors $\left\{\bar{\gamma}_{i j},\bar{A}_{i j},\bar{\Lambda}^{i}, \beta^i, B^i\right\}$ in terms of BSSN gridfunctions \[Back to [top](toc)\]$$\label{bssn_basic_tensors}$$The `BSSN_basic_tensors()` function defines the tensorial BSSN quantities $\left\{\bar{\gamma}_{i j},\bar{A}_{i j},\bar{\Lambda}^{i}, \beta^i, B^i\right\}$, in terms of the rescaled "base" tensorial quantities $\left\{h_{i j},a_{i j}, \lambda^{i}, \mathcal{V}^i, \mathcal{B}^i\right\},$ respectively:\begin{align}\bar{\gamma}_{i j} &= \hat{\gamma}_{ij} + \varepsilon_{ij}, \text{ where } \varepsilon_{ij} = h_{ij} \circ \text{ReDD[i][j]} \\\bar{A}_{i j} &= a_{ij} \circ \text{ReDD[i][j]} \\\bar{\Lambda}^{i} &= \lambda^i \circ \text{ReU[i]} \\\beta^{i} &= \mathcal{V}^i \circ \text{ReU[i]} \\B^{i} &= \mathcal{B}^i \circ \text{ReU[i]}\end{align}Rescaling vectors and tensors are built upon the scale factors for the chosen (in general, singular) coordinate system, which are defined in NRPy+'s [reference_metric.py](../edit/reference_metric.py) ([Tutorial](Tutorial-Reference_Metric.ipynb)), and the rescaled variables are defined in the stub function [BSSN/BSSN_rescaled_vars.py](../edit/BSSN/BSSN_rescaled_vars.py). Here we implement `BSSN_basic_tensors()`:
###Code
# Step 3.a: Define all basic conformal BSSN tensors in terms of BSSN gridfunctions
# Step 3.a.i: gammabarDD and AbarDD:
gammabarDD = ixp.zerorank2()
AbarDD = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
# gammabar_{ij} = h_{ij}*ReDD[i][j] + gammahat_{ij}
gammabarDD[i][j] = hDD[i][j]*rfm.ReDD[i][j] + rfm.ghatDD[i][j]
# Abar_{ij} = a_{ij}*ReDD[i][j]
AbarDD[i][j] = aDD[i][j]*rfm.ReDD[i][j]
# Step 3.a.ii: LambdabarU, betaU, and BU:
LambdabarU = ixp.zerorank1()
betaU = ixp.zerorank1()
BU = ixp.zerorank1()
for i in range(DIM):
LambdabarU[i] = lambdaU[i]*rfm.ReU[i]
betaU[i] = vetU[i] *rfm.ReU[i]
BU[i] = betU[i] *rfm.ReU[i]
###Output
_____no_output_____
###Markdown
Step 4: `gammabar__inverse_and_derivs()`: $\bar{\gamma}^{ij}$, and spatial derivatives of $\bar{\gamma}_{ij}$ including $\bar{\Gamma}^{i}_{jk}$ \[Back to [top](toc)\]$$\label{bssn_barred_metric__inverse_and_derivs}$$ Step 4.a: Inverse conformal 3-metric: $\bar{\gamma^{ij}}$ \[Back to [top](toc)\]$$\label{bssn_barred_metric__inverse}$$Since $\bar{\gamma}^{ij}$ is the inverse of $\bar{\gamma}_{ij}$, we apply a $3\times 3$ symmetric matrix inversion to compute $\bar{\gamma}^{ij}$.
###Code
# Step 4.a: Inverse conformal 3-metric gammabarUU:
# Step 4.a.i: gammabarUU:
gammabarUU, dummydet = ixp.symm_matrix_inverter3x3(gammabarDD)
###Output
_____no_output_____
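###Markdown
As a small, hedged sketch (not part of the original notebook), the cell below applies the same symmetric-matrix inverter to the reference metric $\hat{\gamma}_{ij}$ (available as `rfm.ghatDD`); in the Spherical coordinates chosen in Step 1 the result should reduce, after simplification, to $\text{diag}\left(1,\ 1/r^2,\ 1/(r^2\sin^2\theta)\right)$, with determinant $r^4\sin^2\theta$.
###Code
# Hedged sketch: invert the reference metric gammahat_{ij} with the same helper.
# In Spherical coordinates the (unsimplified) result should reduce to
# diag(1, 1/r^2, 1/(r^2 sin^2(theta))), with determinant r^4 sin^2(theta).
ghatUU_example, detghat_example = ixp.symm_matrix_inverter3x3(rfm.ghatDD)
###Output
_____no_output_____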
###Markdown
Step 4.b: Derivatives of the conformal 3-metric $\bar{\gamma}_{ij,k}$ and $\bar{\gamma}_{ij,kl}$, and associated "barred" Christoffel symbols $\bar{\Gamma}^{i}_{jk}$ \[Back to [top](toc)\]$$\label{bssn_barred_metric__derivs}$$In the BSSN-in-curvilinear coordinates formulation, all quantities must be defined in terms of rescaled quantities $h_{ij}$ and their derivatives (evaluated using finite differences), as well as reference-metric quantities and their derivatives (evaluated exactly using SymPy). For example, $\bar{\gamma}_{ij,k}$ is given by:\begin{align}\bar{\gamma}_{ij,k} &= \partial_k \bar{\gamma}_{ij} \\&= \partial_k \left(\hat{\gamma}_{ij} + \varepsilon_{ij}\right) \\&= \partial_k \left(\hat{\gamma}_{ij} + h_{ij} \text{ReDD[i][j]}\right) \\&= \hat{\gamma}_{ij,k} + h_{ij,k} \text{ReDD[i][j]} + h_{ij} \text{ReDDdD[i][j][k]},\end{align}where `ReDDdD[i][j][k]` is computed within `rfm.reference_metric()`.
###Code
# Step 4.b.i gammabarDDdD[i][j][k]
# = \hat{\gamma}_{ij,k} + h_{ij,k} \text{ReDD[i][j]} + h_{ij} \text{ReDDdD[i][j][k]}.
gammabarDD_dD = ixp.zerorank3()
hDD_dD = ixp.declarerank3("hDD_dD","sym01")
hDD_dupD = ixp.declarerank3("hDD_dupD","sym01")
gammabarDD_dupD = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
gammabarDD_dD[i][j][k] = rfm.ghatDDdD[i][j][k] + \
hDD_dD[i][j][k]*rfm.ReDD[i][j] + hDD[i][j]*rfm.ReDDdD[i][j][k]
# Compute associated upwinded derivative, needed for the \bar{\gamma}_{ij} RHS
gammabarDD_dupD[i][j][k] = rfm.ghatDDdD[i][j][k] + \
hDD_dupD[i][j][k]*rfm.ReDD[i][j] + hDD[i][j]*rfm.ReDDdD[i][j][k]
###Output
_____no_output_____
###Markdown
By extension, the second derivative $\bar{\gamma}_{ij,kl}$ is given by\begin{align}\bar{\gamma}_{ij,kl} &= \partial_l \left(\hat{\gamma}_{ij,k} + h_{ij,k} \text{ReDD[i][j]} + h_{ij} \text{ReDDdD[i][j][k]}\right)\\&= \hat{\gamma}_{ij,kl} + h_{ij,kl} \text{ReDD[i][j]} + h_{ij,k} \text{ReDDdD[i][j][l]} + h_{ij,l} \text{ReDDdD[i][j][k]} + h_{ij} \text{ReDDdDD[i][j][k][l]}\end{align}
###Code
# Step 4.b.ii: Compute gammabarDD_dDD in terms of the rescaled BSSN quantity hDD
# and its derivatives, as well as the reference metric and rescaling
# matrix, and its derivatives (expression given below):
hDD_dDD = ixp.declarerank4("hDD_dDD","sym01_sym23")
gammabarDD_dDD = ixp.zerorank4()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
# gammabar_{ij,kl} = gammahat_{ij,kl}
# + h_{ij,kl} ReDD[i][j]
# + h_{ij,k} ReDDdD[i][j][l] + h_{ij,l} ReDDdD[i][j][k]
# + h_{ij} ReDDdDD[i][j][k][l]
gammabarDD_dDD[i][j][k][l] = rfm.ghatDDdDD[i][j][k][l]
gammabarDD_dDD[i][j][k][l] += hDD_dDD[i][j][k][l]*rfm.ReDD[i][j]
gammabarDD_dDD[i][j][k][l] += hDD_dD[i][j][k]*rfm.ReDDdD[i][j][l] + \
hDD_dD[i][j][l]*rfm.ReDDdD[i][j][k]
gammabarDD_dDD[i][j][k][l] += hDD[i][j]*rfm.ReDDdDD[i][j][k][l]
###Output
_____no_output_____
###Markdown
Finally, we compute the Christoffel symbol associated with the barred 3-metric: $\bar{\Gamma}^{i}_{kl}$:$$\bar{\Gamma}^{i}_{kl} = \frac{1}{2} \bar{\gamma}^{im} \left(\bar{\gamma}_{mk,l} + \bar{\gamma}_{ml,k} - \bar{\gamma}_{kl,m} \right)$$
###Code
# Step 4.b.iii: Define barred Christoffel symbol \bar{\Gamma}^{i}_{kl} = GammabarUDD[i][k][l] (see expression below)
GammabarUDD = ixp.zerorank3()
for i in range(DIM):
for k in range(DIM):
for l in range(DIM):
for m in range(DIM):
# Gammabar^i_{kl} = 1/2 * gammabar^{im} ( gammabar_{mk,l} + gammabar_{ml,k} - gammabar_{kl,m}):
GammabarUDD[i][k][l] += sp.Rational(1,2)*gammabarUU[i][m]* \
(gammabarDD_dD[m][k][l] + gammabarDD_dD[m][l][k] - gammabarDD_dD[k][l][m])
###Output
_____no_output_____
###Markdown
Step 5: `detgammabar_and_derivs()`: $\det \bar{\gamma}_{ij}$ and its derivatives \[Back to [top](toc)\]$$\label{detgammabar_and_derivs}$$As described just before Section III of [Baumgarte *et al* (2012)](https://arxiv.org/pdf/1211.6632.pdf), we are free to choose $\det \bar{\gamma}_{ij}$, which should remain fixed in time.As in [Baumgarte *et al* (2012)](https://arxiv.org/pdf/1211.6632.pdf) generally we make the choice $\det \bar{\gamma}_{ij} = \det \hat{\gamma}_{ij}$, but *this need not be the case; we could choose to set $\det \bar{\gamma}_{ij}$ to another expression.*In case we do not choose to set $\det \bar{\gamma}_{ij}/\det \hat{\gamma}_{ij}=1$, below we begin the implementation of a gridfunction, `detgbarOverdetghat`, which defines an alternative expression in its place. $\det \bar{\gamma}_{ij}/\det \hat{\gamma}_{ij}$=`detgbarOverdetghat`$\ne 1$ is not yet implemented. However, we can define `detgammabar` and its derivatives in terms of a generic `detgbarOverdetghat` and $\det \hat{\gamma}_{ij}$ and their derivatives:\begin{align}\text{detgammabar} &= \det \bar{\gamma}_{ij} = \text{detgbarOverdetghat} \cdot \left(\det \hat{\gamma}_{ij}\right) \\\text{detgammabar}\_\text{dD[k]} &= \left(\det \bar{\gamma}_{ij}\right)_{,k} = \text{detgbarOverdetghat}\_\text{dD[k]} \det \hat{\gamma}_{ij} + \text{detgbarOverdetghat} \left(\det \hat{\gamma}_{ij}\right)_{,k} \\\end{align}https://en.wikipedia.org/wiki/Determinant#Properties_of_the_determinant
###Code
# Step 5: det(gammabarDD) and its derivatives
detgbarOverdetghat = sp.sympify(1)
detgbarOverdetghat_dD = ixp.zerorank1()
detgbarOverdetghat_dDD = ixp.zerorank2()
if par.parval_from_str(thismodule+"::detgbarOverdetghat_equals_one") == "False":
print("Error: detgbarOverdetghat_equals_one=\"False\" is not fully implemented yet.")
sys.exit(1)
## Approach for implementing detgbarOverdetghat_equals_one=False:
# detgbarOverdetghat = gri.register_gridfunctions("AUX", ["detgbarOverdetghat"])
# detgbarOverdetghatInitial = gri.register_gridfunctions("AUX", ["detgbarOverdetghatInitial"])
# detgbarOverdetghat_dD = ixp.declarerank1("detgbarOverdetghat_dD")
# detgbarOverdetghat_dDD = ixp.declarerank2("detgbarOverdetghat_dDD", "sym01")
# Step 5.b: Define detgammabar, detgammabar_dD, and detgammabar_dDD (needed for
#              \partial_t \bar{\Lambda}^i below)
detgammabar = detgbarOverdetghat * rfm.detgammahat
detgammabar_dD = ixp.zerorank1()
for i in range(DIM):
detgammabar_dD[i] = detgbarOverdetghat_dD[i] * rfm.detgammahat + detgbarOverdetghat * rfm.detgammahatdD[i]
detgammabar_dDD = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
detgammabar_dDD[i][j] = detgbarOverdetghat_dDD[i][j] * rfm.detgammahat + \
detgbarOverdetghat_dD[i] * rfm.detgammahatdD[j] + \
detgbarOverdetghat_dD[j] * rfm.detgammahatdD[i] + \
detgbarOverdetghat * rfm.detgammahatdDD[i][j]
###Output
_____no_output_____
###Markdown
Step 6: `AbarUU_AbarUD_trAbar_AbarDD_dD()`: Quantities related to conformal traceless extrinsic curvature $\bar{A}_{ij}$: $\bar{A}^{ij}$, $\bar{A}^i_j$, and $\bar{A}^k_k$ \[Back to [top](toc)\]$$\label{abar_quantities}$$$\bar{A}^{ij}$ is given by application of the raising operators (a.k.a., the inverse 3-metric) $\bar{\gamma}^{jk}$ on both of the covariant ("down") components:$$\bar{A}^{ij} = \bar{\gamma}^{ik}\bar{\gamma}^{jl} \bar{A}_{kl}.$$$\bar{A}^i_j$ is given by a single application of the raising operator (a.k.a., the inverse 3-metric) $\bar{\gamma}^{ik}$ on $\bar{A}_{kj}$:$$\bar{A}^i_j = \bar{\gamma}^{ik}\bar{A}_{kj}.$$The trace of $\bar{A}_{ij}$, $\bar{A}^k_k$, is given by a contraction with the barred 3-metric:$$\text{Tr}(\bar{A}_{ij}) = \bar{A}^k_k = \bar{\gamma}^{kj}\bar{A}_{jk}.$$Note that while $\bar{A}_{ij}$ is defined as the *traceless* conformal extrinsic curvature, it may acquire a nonzero trace (assuming the initial data impose tracelessness) due to numerical error. $\text{Tr}(\bar{A}_{ij})$ is included in the BSSN equations to drive $\text{Tr}(\bar{A}_{ij})$ to zero.In terms of rescaled BSSN quantities, $\bar{A}_{ij}$ is given by$$\bar{A}_{ij} = \text{ReDD[i][j]} a_{ij},$$so in terms of the same quantities, $\bar{A}_{ij,k}$ is given by$$\bar{A}_{ij,k} = \text{ReDDdD[i][j][k]} a_{ij} + \text{ReDD[i][j]} a_{ij,k}.$$
###Code
# Step 6: Quantities related to conformal traceless extrinsic curvature
# Step 6.a.i: Compute Abar^{ij} in terms of Abar_{ij} and gammabar^{ij}
AbarUU = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
# Abar^{ij} = gammabar^{ik} gammabar^{jl} Abar_{kl}
AbarUU[i][j] += gammabarUU[i][k]*gammabarUU[j][l]*AbarDD[k][l]
# Step 6.a.ii: Compute Abar^i_j in terms of Abar_{ij} and gammabar^{ij}
AbarUD = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
# Abar^i_j = gammabar^{ik} Abar_{kj}
AbarUD[i][j] += gammabarUU[i][k]*AbarDD[k][j]
# Step 6.a.iii: Compute Abar^k_k = trace of Abar:
trAbar = sp.sympify(0)
for k in range(DIM):
for j in range(DIM):
# Abar^k_k = gammabar^{kj} Abar_{jk}
trAbar += gammabarUU[k][j]*AbarDD[j][k]
# Step 6.a.iv: Compute Abar_{ij,k}
AbarDD_dD = ixp.zerorank3()
AbarDD_dupD = ixp.zerorank3()
aDD_dD = ixp.declarerank3("aDD_dD" ,"sym01")
aDD_dupD = ixp.declarerank3("aDD_dupD","sym01")
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
AbarDD_dupD[i][j][k] = rfm.ReDDdD[i][j][k]*aDD[i][j] + rfm.ReDD[i][j]*aDD_dupD[i][j][k]
AbarDD_dD[i][j][k] = rfm.ReDDdD[i][j][k]*aDD[i][j] + rfm.ReDD[i][j]*aDD_dD[ i][j][k]
###Output
_____no_output_____
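###Markdown
As a quick, hedged consistency check (not part of the original notebook), the cell below confirms that the trace obtained from the mixed-index components, $\sum_k \bar{A}^k_k$, agrees identically with `trAbar`$=\bar{\gamma}^{kj}\bar{A}_{jk}$ as constructed above.
###Code
# Hedged consistency check: sum_k Abar^k_k should equal trAbar = gammabar^{kj} Abar_{jk}.
trAbar_check = sp.sympify(0)
for k in range(DIM):
    trAbar_check += AbarUD[k][k]
assert sp.simplify(trAbar - trAbar_check) == 0
###Output
_____no_output_____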
###Markdown
Step 7: `RicciBar__gammabarDD_dHatD__DGammaUDD__DGammaU()`: The conformal ("barred") Ricci tensor $\bar{R}_{ij}$ and associated quantities \[Back to [top](toc)\]$$\label{rbar}$$Let's compute perhaps the most complicated expression in the BSSN evolution equations, the conformal Ricci tensor:\begin{align} \bar{R}_{i j} {} = {} & - \frac{1}{2} \bar{\gamma}^{k l} \hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} + \bar{\gamma}_{k(i} \hat{D}_{j)} \bar{\Lambda}^{k} + \Delta^{k} \Delta_{(i j) k} \nonumber \\ & + \bar{\gamma}^{k l} \left (2 \Delta_{k(i}^{m} \Delta_{j) m l} + \Delta_{i k}^{m} \Delta_{m j l} \right ) \; .\end{align}Let's tackle the $\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j}$ term first: Step 7.a: Conformal Ricci tensor, part 1: The $\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j}$ term \[Back to [top](toc)\]$$\label{rbar_part1}$$First note that the covariant derivative of a metric with respect to itself is zero$$\hat{D}_{l} \hat{\gamma}_{ij} = 0,$$so $$\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} = \hat{D}_{k} \hat{D}_{l} \left(\hat{\gamma}_{i j} + \varepsilon_{ij}\right) = \hat{D}_{k} \hat{D}_{l} \varepsilon_{ij}.$$Next, the covariant derivative of a tensor is given by (from the [wikipedia article on covariant differentiation](https://en.wikipedia.org/wiki/Covariant_derivative)):\begin{align} {(\nabla_{e_c} T)^{a_1 \ldots a_r}}_{b_1 \ldots b_s} = {} &\frac{\partial}{\partial x^c}{T^{a_1 \ldots a_r}}_{b_1 \ldots b_s} \\ &+ \,{\Gamma ^{a_1}}_{dc} {T^{d a_2 \ldots a_r}}_{b_1 \ldots b_s} + \cdots + {\Gamma^{a_r}}_{dc} {T^{a_1 \ldots a_{r-1}d}}_{b_1 \ldots b_s} \\ &-\,{\Gamma^d}_{b_1 c} {T^{a_1 \ldots a_r}}_{d b_2 \ldots b_s} - \cdots - {\Gamma^d}_{b_s c} {T^{a_1 \ldots a_r}}_{b_1 \ldots b_{s-1} d}.\end{align}Therefore, $$\hat{D}_{l} \bar{\gamma}_{i j} = \hat{D}_{l} \varepsilon_{i j} = \varepsilon_{i j,l} - \hat{\Gamma}^m_{i l} \varepsilon_{m j} -\hat{\Gamma}^m_{j l} \varepsilon_{i m}.$$Since the covariant first derivative is a tensor, the covariant second derivative is given by (same as [Eq. 27 in Baumgarte et al (2012)](https://arxiv.org/pdf/1211.6632.pdf))\begin{align}\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} &= \hat{D}_{k} \hat{D}_{l} \varepsilon_{i j} \\&= \partial_k \hat{D}_{l} \varepsilon_{i j} - \hat{\Gamma}^m_{lk} \left(\hat{D}_{m} \varepsilon_{i j}\right) - \hat{\Gamma}^m_{ik} \left(\hat{D}_{l} \varepsilon_{m j}\right) - \hat{\Gamma}^m_{jk} \left(\hat{D}_{l} \varepsilon_{i m}\right),\end{align}where the first term is the partial derivative of the expression already derived for $\hat{D}_{l} \varepsilon_{i j}$:\begin{align}\partial_k \hat{D}_{l} \varepsilon_{i j} &= \partial_k \left(\varepsilon_{ij,l} - \hat{\Gamma}^m_{i l} \varepsilon_{m j} -\hat{\Gamma}^m_{j l} \varepsilon_{i m} \right) \\&= \varepsilon_{ij,lk} - \hat{\Gamma}^m_{i l,k} \varepsilon_{m j} - \hat{\Gamma}^m_{i l} \varepsilon_{m j,k} - \hat{\Gamma}^m_{j l,k} \varepsilon_{i m} - \hat{\Gamma}^m_{j l} \varepsilon_{i m,k}.\end{align}In terms of the evolved quantity $h_{ij}$, the derivatives of $\varepsilon_{ij}$ are given by:\begin{align}\varepsilon_{ij,k} &= \partial_k \left(h_{ij} \text{ReDD[i][j]}\right) \\&= h_{ij,k} \text{ReDD[i][j]} + h_{ij} \text{ReDDdD[i][j][k]},\end{align}and\begin{align}\varepsilon_{ij,kl} &= \partial_l \left(h_{ij,k} \text{ReDD[i][j]} + h_{ij} \text{ReDDdD[i][j][k]} \right)\\&= h_{ij,kl} \text{ReDD[i][j]} + h_{ij,k} \text{ReDDdD[i][j][l]} + h_{ij,l} \text{ReDDdD[i][j][k]} + h_{ij} \text{ReDDdDD[i][j][k][l]}.\end{align}
###Code
# Step 7: Conformal Ricci tensor, part 1: The \hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} term
# Step 7.a.i: Define \varepsilon_{ij} = epsDD[i][j]
epsDD = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
epsDD[i][j] = hDD[i][j]*rfm.ReDD[i][j]
# Step 7.a.ii: Define epsDD_dD[i][j][k]
hDD_dD = ixp.declarerank3("hDD_dD","sym01")
epsDD_dD = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
epsDD_dD[i][j][k] = hDD_dD[i][j][k]*rfm.ReDD[i][j] + hDD[i][j]*rfm.ReDDdD[i][j][k]
# Step 7.a.iii: Define epsDD_dDD[i][j][k][l]
hDD_dDD = ixp.declarerank4("hDD_dDD","sym01_sym23")
epsDD_dDD = ixp.zerorank4()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
epsDD_dDD[i][j][k][l] = hDD_dDD[i][j][k][l]*rfm.ReDD[i][j] + \
hDD_dD[i][j][k]*rfm.ReDDdD[i][j][l] + \
hDD_dD[i][j][l]*rfm.ReDDdD[i][j][k] + \
hDD[i][j]*rfm.ReDDdDD[i][j][k][l]
###Output
_____no_output_____
###Markdown
We next compute three quantities derived above:* `gammabarDD_dHatD[i][j][l]` = $\hat{D}_{l} \bar{\gamma}_{i j} = \hat{D}_{l} \varepsilon_{i j} = \varepsilon_{i j,l} - \hat{\Gamma}^m_{i l} \varepsilon_{m j} -\hat{\Gamma}^m_{j l} \varepsilon_{i m}$,* `gammabarDD_dHatD_dD[i][j][l][k]` = $\partial_k \hat{D}_{l} \bar{\gamma}_{i j} = \partial_k \hat{D}_{l} \varepsilon_{i j} = \varepsilon_{ij,lk} - \hat{\Gamma}^m_{i l,k} \varepsilon_{m j} - \hat{\Gamma}^m_{i l} \varepsilon_{m j,k} - \hat{\Gamma}^m_{j l,k} \varepsilon_{i m} - \hat{\Gamma}^m_{j l} \varepsilon_{i m,k}$, and* `gammabarDD_dHatDD[i][j][l][k]` = $\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} = \partial_k \hat{D}_{l} \varepsilon_{i j} - \hat{\Gamma}^m_{lk} \left(\hat{D}_{m} \varepsilon_{i j}\right) - \hat{\Gamma}^m_{ik} \left(\hat{D}_{l} \varepsilon_{m j}\right) - \hat{\Gamma}^m_{jk} \left(\hat{D}_{l} \varepsilon_{i m}\right)$.
###Code
# Step 7.a.iv: DhatgammabarDDdD[i][j][l] = \bar{\gamma}_{ij;\hat{l}}
# \bar{\gamma}_{ij;\hat{l}} = \varepsilon_{i j,l}
# - \hat{\Gamma}^m_{i l} \varepsilon_{m j}
# - \hat{\Gamma}^m_{j l} \varepsilon_{i m}
gammabarDD_dHatD = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for l in range(DIM):
gammabarDD_dHatD[i][j][l] = epsDD_dD[i][j][l]
for m in range(DIM):
gammabarDD_dHatD[i][j][l] += - rfm.GammahatUDD[m][i][l]*epsDD[m][j] \
- rfm.GammahatUDD[m][j][l]*epsDD[i][m]
# Step 7.a.v: \bar{\gamma}_{ij;\hat{l},k} = DhatgammabarDD_dHatD_dD[i][j][l][k]:
# \bar{\gamma}_{ij;\hat{l},k} = \varepsilon_{ij,lk}
# - \hat{\Gamma}^m_{i l,k} \varepsilon_{m j}
# - \hat{\Gamma}^m_{i l} \varepsilon_{m j,k}
# - \hat{\Gamma}^m_{j l,k} \varepsilon_{i m}
# - \hat{\Gamma}^m_{j l} \varepsilon_{i m,k}
gammabarDD_dHatD_dD = ixp.zerorank4()
for i in range(DIM):
for j in range(DIM):
for l in range(DIM):
for k in range(DIM):
gammabarDD_dHatD_dD[i][j][l][k] = epsDD_dDD[i][j][l][k]
for m in range(DIM):
gammabarDD_dHatD_dD[i][j][l][k] += -rfm.GammahatUDDdD[m][i][l][k]*epsDD[m][j] \
-rfm.GammahatUDD[m][i][l]*epsDD_dD[m][j][k] \
-rfm.GammahatUDDdD[m][j][l][k]*epsDD[i][m] \
-rfm.GammahatUDD[m][j][l]*epsDD_dD[i][m][k]
# Step 7.a.vi: \bar{\gamma}_{ij;\hat{l}\hat{k}} = DhatgammabarDD_dHatDD[i][j][l][k]
# \bar{\gamma}_{ij;\hat{l}\hat{k}} = \partial_k \hat{D}_{l} \varepsilon_{i j}
# - \hat{\Gamma}^m_{lk} \left(\hat{D}_{m} \varepsilon_{i j}\right)
# - \hat{\Gamma}^m_{ik} \left(\hat{D}_{l} \varepsilon_{m j}\right)
# - \hat{\Gamma}^m_{jk} \left(\hat{D}_{l} \varepsilon_{i m}\right)
gammabarDD_dHatDD = ixp.zerorank4()
for i in range(DIM):
for j in range(DIM):
for l in range(DIM):
for k in range(DIM):
gammabarDD_dHatDD[i][j][l][k] = gammabarDD_dHatD_dD[i][j][l][k]
for m in range(DIM):
gammabarDD_dHatDD[i][j][l][k] += - rfm.GammahatUDD[m][l][k]*gammabarDD_dHatD[i][j][m] \
- rfm.GammahatUDD[m][i][k]*gammabarDD_dHatD[m][j][l] \
- rfm.GammahatUDD[m][j][k]*gammabarDD_dHatD[i][m][l]
###Output
_____no_output_____
###Markdown
Step 7.b: Conformal Ricci tensor, part 2: The $\bar{\gamma}_{k(i} \hat{D}_{j)} \bar{\Lambda}^{k}$ term \[Back to [top](toc)\]$$\label{rbar_part2}$$By definition, the index symmetrization operation is given by:$$\bar{\gamma}_{k(i} \hat{D}_{j)} \bar{\Lambda}^{k} = \frac{1}{2} \left( \bar{\gamma}_{ki} \hat{D}_{j} \bar{\Lambda}^{k} + \bar{\gamma}_{kj} \hat{D}_{i} \bar{\Lambda}^{k} \right),$$and $\bar{\gamma}_{ij}$ is trivially computed ($=\varepsilon_{ij} + \hat{\gamma}_{ij}$) so the only nontrival part to computing this term is in evaluating $\hat{D}_{j} \bar{\Lambda}^{k}$.The covariant derivative is with respect to the hatted metric (i.e. the reference metric), so$$\hat{D}_{j} \bar{\Lambda}^{k} = \partial_j \bar{\Lambda}^{k} + \hat{\Gamma}^{k}_{mj} \bar{\Lambda}^m,$$except we cannot take derivatives of $\bar{\Lambda}^{k}$ directly due to potential issues with coordinate singularities. Instead we write it in terms of the rescaled quantity $\lambda^k$ via$$\bar{\Lambda}^{k} = \lambda^k \text{ReU[k]}.$$Then the expression for $\hat{D}_{j} \bar{\Lambda}^{k}$ becomes$$\hat{D}_{j} \bar{\Lambda}^{k} = \lambda^{k}_{,j} \text{ReU[k]} + \lambda^{k} \text{ReUdD[k][j]} + \hat{\Gamma}^{k}_{mj} \lambda^{m} \text{ReU[m]},$$and the NRPy+ code for this expression is written
###Code
# Step 7.b: Second term of RhatDD: compute \hat{D}_{j} \bar{\Lambda}^{k} = LambarU_dHatD[k][j]
lambdaU_dD = ixp.declarerank2("lambdaU_dD","nosym")
LambarU_dHatD = ixp.zerorank2()
for j in range(DIM):
for k in range(DIM):
LambarU_dHatD[k][j] = lambdaU_dD[k][j]*rfm.ReU[k] + lambdaU[k]*rfm.ReUdD[k][j]
for m in range(DIM):
LambarU_dHatD[k][j] += rfm.GammahatUDD[k][m][j]*lambdaU[m]*rfm.ReU[m]
###Output
_____no_output_____
###Markdown
Step 7.c: Conformal Ricci tensor, part 3: The $\Delta^{k} \Delta_{(i j) k} + \bar{\gamma}^{k l} \left (2 \Delta_{k(i}^{m} \Delta_{j) m l} + \Delta_{i k}^{m} \Delta_{m j l} \right )$ terms \[Back to [top](toc)\]$$\label{rbar_part3}$$Our goal here is to compute the quantities appearing as the final terms of the conformal Ricci tensor:$$\Delta^{k} \Delta_{(i j) k} + \bar{\gamma}^{k l} \left (2 \Delta_{k(i}^{m} \Delta_{j) m l} + \Delta_{i k}^{m} \Delta_{m j l} \right).$$* `DGammaUDD[k][i][j]`$= \Delta^k_{ij}$ is simply the difference in Christoffel symbols: $\Delta^{k}_{ij} = \bar{\Gamma}^k_{ij} - \hat{\Gamma}^k_{ij}$, and * `DGammaU[k]`$= \Delta^k$ is the contraction: $\bar{\gamma}^{ij} \Delta^{k}_{ij}$Adding these expressions to Ricci is straightforward, since $\bar{\Gamma}^i_{jk}$ and $\bar{\gamma}^{ij}$ were defined above in [Step 4](bssn_barred_metric__inverse_and_derivs), and $\hat{\Gamma}^i_{jk}$ was computed within NRPy+'s `reference_metric()` function:
###Code
# Step 7.c: Conformal Ricci tensor, part 3: The \Delta^{k} \Delta_{(i j) k}
# + \bar{\gamma}^{k l}*(2 \Delta_{k(i}^{m} \Delta_{j) m l}
# + \Delta_{i k}^{m} \Delta_{m j l}) terms
# Step 7.c.i: Define \Delta^i_{jk} = \bar{\Gamma}^i_{jk} - \hat{\Gamma}^i_{jk} = DGammaUDD[i][j][k]
DGammaUDD = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
DGammaUDD[i][j][k] = GammabarUDD[i][j][k] - rfm.GammahatUDD[i][j][k]
# Step 7.c.ii: Define \Delta^i = \bar{\gamma}^{jk} \Delta^i_{jk}
DGammaU = ixp.zerorank1()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
DGammaU[i] += gammabarUU[j][k] * DGammaUDD[i][j][k]
###Output
_____no_output_____
###Markdown
Next we define $\Delta_{ijk}=\bar{\gamma}_{im}\Delta^m_{jk}$:
###Code
# Step 7.c.iii: Define \Delta_{ijk} = \bar{\gamma}_{im} \Delta^m_{jk}
DGammaDDD = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for m in range(DIM):
DGammaDDD[i][j][k] += gammabarDD[i][m] * DGammaUDD[m][j][k]
###Output
_____no_output_____
###Markdown
Step 7.d: Summing the terms and defining $\bar{R}_{ij}$ \[Back to [top](toc)\]$$\label{summing_rbar_terms}$$We have now constructed all of the terms going into $\bar{R}_{ij}$:\begin{align} \bar{R}_{i j} {} = {} & - \frac{1}{2} \bar{\gamma}^{k l} \hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} + \bar{\gamma}_{k(i} \hat{D}_{j)} \bar{\Lambda}^{k} + \Delta^{k} \Delta_{(i j) k} \nonumber \\ & + \bar{\gamma}^{k l} \left (2 \Delta_{k(i}^{m} \Delta_{j) m l} + \Delta_{i k}^{m} \Delta_{m j l} \right ) \; .\end{align}
###Code
# Step 7.d: Summing the terms and defining \bar{R}_{ij}
# Step 7.d.i: Add the first term to RbarDD:
# Rbar_{ij} += - \frac{1}{2} \bar{\gamma}^{k l} \hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j}
RbarDD = ixp.zerorank2()
RbarDDpiece = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
RbarDD[i][j] += -sp.Rational(1,2) * gammabarUU[k][l]*gammabarDD_dHatDD[i][j][l][k]
RbarDDpiece[i][j] += -sp.Rational(1,2) * gammabarUU[k][l]*gammabarDD_dHatDD[i][j][l][k]
# Step 7.d.ii: Add the second term to RbarDD:
# Rbar_{ij} += (1/2) * (gammabar_{ki} Lambar^k_{;\hat{j}} + gammabar_{kj} Lambar^k_{;\hat{i}})
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
RbarDD[i][j] += sp.Rational(1,2) * (gammabarDD[k][i]*LambarU_dHatD[k][j] + \
gammabarDD[k][j]*LambarU_dHatD[k][i])
# Step 7.d.iii: Add the remaining term to RbarDD:
# Rbar_{ij} += \Delta^{k} \Delta_{(i j) k} = 1/2 \Delta^{k} (\Delta_{i j k} + \Delta_{j i k})
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
RbarDD[i][j] += sp.Rational(1,2) * DGammaU[k] * (DGammaDDD[i][j][k] + DGammaDDD[j][i][k])
# Step 7.d.iv: Add the final term to RbarDD:
# Rbar_{ij} += \bar{\gamma}^{k l} (\Delta^{m}_{k i} \Delta_{j m l}
# + \Delta^{m}_{k j} \Delta_{i m l}
# + \Delta^{m}_{i k} \Delta_{m j l})
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
for m in range(DIM):
RbarDD[i][j] += gammabarUU[k][l] * (DGammaUDD[m][k][i]*DGammaDDD[j][m][l] +
DGammaUDD[m][k][j]*DGammaDDD[i][m][l] +
DGammaUDD[m][i][k]*DGammaDDD[m][j][l])
###Output
_____no_output_____
###Markdown
Step 8: **`betaU_derivs()`**: The unrescaled shift vector $\beta^i$ spatial derivatives: $\beta^i_{,j}$ & $\beta^i_{,jk}$, written in terms of the rescaled shift vector $\mathcal{V}^i$ \[Back to [top](toc)\]$$\label{beta_derivs}$$This step, which documents the function `betaU_derivs()` inside the [BSSN.BSSN_quantities](../edit/BSSN/BSSN_quantities.py) module, defines three quantities:* `betaU_dD[i][j]`$=\beta^i_{,j} = \left(\mathcal{V}^i \circ \text{ReU[i]}\right)_{,j} = \mathcal{V}^i_{,j} \circ \text{ReU[i]} + \mathcal{V}^i \circ \text{ReUdD[i][j]}$* `betaU_dupD[i][j]`: the same as above, except using *upwinded* finite-difference derivatives to compute $\mathcal{V}^i_{,j}$ instead of *centered* finite-difference derivatives.* `betaU_dDD[i][j][k]`$=\beta^i_{,jk} = \mathcal{V}^i_{,jk} \circ \text{ReU[i]} + \mathcal{V}^i_{,j} \circ \text{ReUdD[i][k]} + \mathcal{V}^i_{,k} \circ \text{ReUdD[i][j]}+\mathcal{V}^i \circ \text{ReUdDD[i][j][k]}$
###Code
# Step 8: The unrescaled shift vector betaU spatial derivatives:
# betaUdD & betaUdDD, written in terms of the
# rescaled shift vector vetU
vetU_dD = ixp.declarerank2("vetU_dD","nosym")
vetU_dupD = ixp.declarerank2("vetU_dupD","nosym") # Needed for upwinded \beta^i_{,j}
vetU_dDD = ixp.declarerank3("vetU_dDD","sym12") # Needed for \beta^i_{,jk}
betaU_dD = ixp.zerorank2()
betaU_dupD = ixp.zerorank2() # Needed for, e.g., \beta^i RHS
betaU_dDD = ixp.zerorank3() # Needed for, e.g., \bar{\Lambda}^i RHS
for i in range(DIM):
for j in range(DIM):
betaU_dD[i][j] = vetU_dD[i][j]*rfm.ReU[i] + vetU[i]*rfm.ReUdD[i][j]
betaU_dupD[i][j] = vetU_dupD[i][j]*rfm.ReU[i] + vetU[i]*rfm.ReUdD[i][j] # Needed for \beta^i RHS
for k in range(DIM):
# Needed for, e.g., \bar{\Lambda}^i RHS:
betaU_dDD[i][j][k] = vetU_dDD[i][j][k]*rfm.ReU[i] + vetU_dD[i][j]*rfm.ReUdD[i][k] + \
vetU_dD[i][k]*rfm.ReUdD[i][j] + vetU[i]*rfm.ReUdDD[i][j][k]
###Output
_____no_output_____
###Markdown
Step 9: **`phi_and_derivs()`**: Standard BSSN conformal factor $\phi$, and its derivatives $\phi_{,i}$, $\phi_{,ij}$, $\bar{D}_j \phi_i$, and $\bar{D}_j\bar{D}_k \phi_i$, all written in terms of BSSN gridfunctions like $\text{cf}$ \[Back to [top](toc)\]$$\label{phi_and_derivs}$$ Step 9.a: $\phi$ in terms of the chosen (possibly non-standard) conformal factor variable $\text{cf}$ (e.g., $\text{cf}=\chi=e^{-4\phi}$) \[Back to [top](toc)\]$$\label{phi_ito_cf}$$When solving the BSSN time evolution equations across the coordinate singularity (i.e., the "puncture") inside puncture black holes, for example, the standard conformal factor $\phi$ becomes very sharp, whereas $\chi=e^{-4\phi}$ is far smoother (see, e.g., [Campanelli, Lousto, Marronetti, and Zlochower (2006)](https://arxiv.org/abs/gr-qc/0511048) for additional discussion). Thus if we choose to rewrite derivatives of $\phi$ in the BSSN equations in terms of finite-difference derivatives of `cf`$=\chi$, numerical errors will be far smaller near the puncture.The BSSN modules in NRPy+ support three options for the conformal factor variable `cf`:1. `cf`$=\phi$,1. `cf`$=\chi=e^{-4\phi}$, and1. `cf`$=W = e^{-2\phi}$.Since the BSSN equations are written in terms of $\phi$ (actually only $e^{-4\phi}$ appears) and derivatives of $\phi$, we now define $e^{-4\phi}$ and derivatives of $\phi$ in terms of the chosen `cf`.First, we define the base variables needed within the BSSN equations:
###Code
# Step 9: Standard BSSN conformal factor phi,
# and its partial and covariant derivatives,
# all in terms of BSSN gridfunctions like cf
# Step 9.a.i: Define partial derivatives of \phi in terms of evolved quantity "cf":
cf_dD = ixp.declarerank1("cf_dD")
cf_dupD = ixp.declarerank1("cf_dupD") # Needed for \partial_t \phi next.
cf_dDD = ixp.declarerank2("cf_dDD","sym01")
phi_dD = ixp.zerorank1()
phi_dupD = ixp.zerorank1()
phi_dDD = ixp.zerorank2()
exp_m4phi = sp.sympify(0)
###Output
_____no_output_____
###Markdown
Then we define $\phi_{,i}$, $\phi_{,ij}$, and $e^{-4\phi}$ for each of the choices of `cf`.For `cf`$=\phi$, this is trivial:
###Code
# Step 9.a.ii: Assuming cf=phi, define exp_m4phi, phi_dD,
#              phi_dupD (upwind finite-difference version of phi_dD), and phi_dDD
if par.parval_from_str(thismodule+"::EvolvedConformalFactor_cf") == "phi":
for i in range(DIM):
phi_dD[i] = cf_dD[i]
phi_dupD[i] = cf_dupD[i]
for j in range(DIM):
phi_dDD[i][j] = cf_dDD[i][j]
exp_m4phi = sp.exp(-4*cf)
###Output
_____no_output_____
###Markdown
For `cf`$=W=e^{-2\phi}$, we have* $\phi_{,i} = -\text{cf}_{,i} / (2 \text{cf})$* $\phi_{,ij} = (-\text{cf}_{,ij} + \text{cf}_{,i}\text{cf}_{,j}/\text{cf}) / (2 \text{cf})$* $e^{-4\phi} = \text{cf}^2$***Exercise to student: Prove the above relations***
###Code
# Step 9.a.iii: Assuming cf=W=e^{-2 phi}, define exp_m4phi, phi_dD,
#              phi_dupD (upwind finite-difference version of phi_dD), and phi_dDD
if par.parval_from_str(thismodule+"::EvolvedConformalFactor_cf") == "W":
# \partial_i W = \partial_i (e^{-2 phi}) = -2 e^{-2 phi} \partial_i phi
# -> \partial_i phi = -\partial_i cf / (2 cf)
for i in range(DIM):
phi_dD[i] = - cf_dD[i] / (2*cf)
phi_dupD[i] = - cf_dupD[i] / (2*cf)
for j in range(DIM):
# \partial_j \partial_i phi = - \partial_j [\partial_i cf / (2 cf)]
# = - cf_{,ij} / (2 cf) + \partial_i cf \partial_j cf / (2 cf^2)
phi_dDD[i][j] = (- cf_dDD[i][j] + cf_dD[i]*cf_dD[j] / cf) / (2*cf)
exp_m4phi = cf*cf
###Output
_____no_output_____
###Markdown
For `cf`$=\chi=e^{-4\phi}$, we have* $\phi_{,i} = -\text{cf}_{,i} / (4 \text{cf})$* $\phi_{,ij} = (-\text{cf}_{,ij} + \text{cf}_{,i}\text{cf}_{,j}/\text{cf}) / (4 \text{cf})$* $e^{-4\phi} = \text{cf}$***Exercise to student: Prove the above relations***
###Code
# Step 9.a.iv: Assuming cf=chi=e^{-4 phi}, define exp_m4phi, phi_dD,
#              phi_dupD (upwind finite-difference version of phi_dD), and phi_dDD
if par.parval_from_str(thismodule+"::EvolvedConformalFactor_cf") == "chi":
# \partial_i chi = \partial_i (e^{-4 phi}) = -4 e^{-4 phi} \partial_i phi
# -> \partial_i phi = -\partial_i cf / (4 cf)
for i in range(DIM):
phi_dD[i] = - cf_dD[i] / (4*cf)
phi_dupD[i] = - cf_dupD[i] / (4*cf)
for j in range(DIM):
# \partial_j \partial_i phi = - \partial_j [\partial_i cf / (4 cf)]
# = - cf_{,ij} / (4 cf) + \partial_i cf \partial_j cf / (4 cf^2)
phi_dDD[i][j] = (- cf_dDD[i][j] + cf_dD[i]*cf_dD[j] / cf) / (4*cf)
exp_m4phi = cf
# Step 9.a.v: Error out if unsupported EvolvedConformalFactor_cf choice is made:
cf_choice = par.parval_from_str(thismodule+"::EvolvedConformalFactor_cf")
if cf_choice not in ('phi', 'W', 'chi'):
print("Error: EvolvedConformalFactor_cf == "+par.parval_from_str(thismodule+"::EvolvedConformalFactor_cf")+" unsupported!")
sys.exit(1)
###Output
_____no_output_____
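###Markdown
As a quick, hedged sanity check (not part of the original notebook), the cell below verifies these relations in one dimension using SymPy; the names `xtmp2` and `phitmp2` are illustrative only.
###Code
# Hedged 1-D verification of the cf = chi = e^{-4 phi} relations above.
# xtmp2 and phitmp2 are illustrative names only (not NRPy+ variables).
xtmp2 = sp.Symbol('xtmp2', real=True)
phitmp2 = sp.Function('phitmp2')(xtmp2)
chitmp = sp.exp(-4*phitmp2)
# phi_{,x} = -chi_{,x} / (4 chi):
assert sp.simplify(sp.diff(phitmp2, xtmp2) + sp.diff(chitmp, xtmp2)/(4*chitmp)) == 0
# phi_{,xx} = (-chi_{,xx} + chi_{,x}^2/chi) / (4 chi):
assert sp.simplify(sp.diff(phitmp2, xtmp2, 2)
                   - (-sp.diff(chitmp, xtmp2, 2) + sp.diff(chitmp, xtmp2)**2/chitmp)/(4*chitmp)) == 0
# e^{-4 phi} = chi holds by construction here.
###Output
_____no_output_____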
###Markdown
Step 9.b: Covariant derivatives of $\phi$ \[Back to [top](toc)\]$$\label{phi_covariant_derivs}$$Since $\phi$ is a scalar, $\bar{D}_i \phi = \partial_i \phi$.Thus the second covariant derivative is given by\begin{align}\bar{D}_i \bar{D}_j \phi &= \phi_{;\bar{i}\bar{j}} = \bar{D}_i \phi_{,j}\\ &= \phi_{,ij} - \bar{\Gamma}^k_{ij} \phi_{,k}.\end{align}
###Code
# Step 9.b: Define phi_dBarD = phi_dD (since phi is a scalar) and phi_dBarDD (covariant derivative)
# \bar{D}_i \bar{D}_j \phi = \phi_{;\bar{i}\bar{j}} = \bar{D}_i \phi_{,j}
# = \phi_{,ij} - \bar{\Gamma}^k_{ij} \phi_{,k}
phi_dBarD = phi_dD
phi_dBarDD = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
phi_dBarDD[i][j] = phi_dDD[i][j]
for k in range(DIM):
phi_dBarDD[i][j] += - GammabarUDD[k][i][j]*phi_dD[k]
###Output
_____no_output_____
###Markdown
Step 10: Code validation against `BSSN.BSSN_quantities` NRPy+ module \[Back to [top](toc)\]$$\label{code_validation}$$As a code validation check, we verify agreement in the SymPy expressions for the BSSN quantities constructed here between1. this tutorial and 2. the NRPy+ [BSSN.BSSN_quantities](../edit/BSSN/BSSN_quantities.py) module.By default, we analyze these expressions in Spherical coordinates, though other coordinate systems may be chosen.
###Code
all_passed=True
def comp_func(expr1,expr2,basename,prefixname2="Bq."):
    global all_passed  # needed so that a mismatch actually flips the module-level flag
    if str(expr1-expr2)!="0":
        print(basename+" - "+prefixname2+basename+" = "+ str(expr1-expr2))
        all_passed=False
def gfnm(basename,idx1,idx2=None,idx3=None):
if idx2 is None:
return basename+"["+str(idx1)+"]"
if idx3 is None:
return basename+"["+str(idx1)+"]["+str(idx2)+"]"
return basename+"["+str(idx1)+"]["+str(idx2)+"]["+str(idx3)+"]"
expr_list = []
exprcheck_list = []
namecheck_list = []
# Step 3:
import BSSN.BSSN_quantities as Bq
Bq.BSSN_basic_tensors()
for i in range(DIM):
namecheck_list.extend([gfnm("LambdabarU",i),gfnm("betaU",i),gfnm("BU",i)])
exprcheck_list.extend([Bq.LambdabarU[i],Bq.betaU[i],Bq.BU[i]])
expr_list.extend([LambdabarU[i],betaU[i],BU[i]])
for j in range(DIM):
namecheck_list.extend([gfnm("gammabarDD",i,j),gfnm("AbarDD",i,j)])
exprcheck_list.extend([Bq.gammabarDD[i][j],Bq.AbarDD[i][j]])
expr_list.extend([gammabarDD[i][j],AbarDD[i][j]])
# Step 4:
Bq.gammabar__inverse_and_derivs()
for i in range(DIM):
for j in range(DIM):
namecheck_list.extend([gfnm("gammabarUU",i,j)])
exprcheck_list.extend([Bq.gammabarUU[i][j]])
expr_list.extend([gammabarUU[i][j]])
for k in range(DIM):
namecheck_list.extend([gfnm("gammabarDD_dD",i,j,k),
gfnm("gammabarDD_dupD",i,j,k),
gfnm("GammabarUDD",i,j,k)])
exprcheck_list.extend([Bq.gammabarDD_dD[i][j][k],Bq.gammabarDD_dupD[i][j][k],Bq.GammabarUDD[i][j][k]])
expr_list.extend( [gammabarDD_dD[i][j][k],gammabarDD_dupD[i][j][k],GammabarUDD[i][j][k]])
# Step 5:
Bq.detgammabar_and_derivs()
namecheck_list.extend(["detgammabar"])
exprcheck_list.extend([Bq.detgammabar])
expr_list.extend([detgammabar])
for i in range(DIM):
namecheck_list.extend([gfnm("detgammabar_dD",i)])
exprcheck_list.extend([Bq.detgammabar_dD[i]])
expr_list.extend([detgammabar_dD[i]])
for j in range(DIM):
namecheck_list.extend([gfnm("detgammabar_dDD",i,j)])
exprcheck_list.extend([Bq.detgammabar_dDD[i][j]])
expr_list.extend([detgammabar_dDD[i][j]])
# Step 6:
Bq.AbarUU_AbarUD_trAbar_AbarDD_dD()
namecheck_list.extend(["trAbar"])
exprcheck_list.extend([Bq.trAbar])
expr_list.extend([trAbar])
for i in range(DIM):
for j in range(DIM):
namecheck_list.extend([gfnm("AbarUU",i,j),gfnm("AbarUD",i,j)])
exprcheck_list.extend([Bq.AbarUU[i][j],Bq.AbarUD[i][j]])
expr_list.extend([AbarUU[i][j],AbarUD[i][j]])
for k in range(DIM):
namecheck_list.extend([gfnm("AbarDD_dD",i,j,k)])
exprcheck_list.extend([Bq.AbarDD_dD[i][j][k]])
expr_list.extend([AbarDD_dD[i][j][k]])
# Step 7:
Bq.RicciBar__gammabarDD_dHatD__DGammaUDD__DGammaU()
for i in range(DIM):
namecheck_list.extend([gfnm("DGammaU",i)])
exprcheck_list.extend([Bq.DGammaU[i]])
expr_list.extend([DGammaU[i]])
for j in range(DIM):
namecheck_list.extend([gfnm("RbarDD",i,j)])
exprcheck_list.extend([Bq.RbarDD[i][j]])
expr_list.extend([RbarDD[i][j]])
for k in range(DIM):
namecheck_list.extend([gfnm("DGammaUDD",i,j,k),gfnm("gammabarDD_dHatD",i,j,k)])
exprcheck_list.extend([Bq.DGammaUDD[i][j][k],Bq.gammabarDD_dHatD[i][j][k]])
expr_list.extend([DGammaUDD[i][j][k],gammabarDD_dHatD[i][j][k]])
# Step 8:
Bq.betaU_derivs()
for i in range(DIM):
for j in range(DIM):
namecheck_list.extend([gfnm("betaU_dD",i,j),gfnm("betaU_dupD",i,j)])
exprcheck_list.extend([Bq.betaU_dD[i][j],Bq.betaU_dupD[i][j]])
expr_list.extend([betaU_dD[i][j],betaU_dupD[i][j]])
for k in range(DIM):
namecheck_list.extend([gfnm("betaU_dDD",i,j,k)])
exprcheck_list.extend([Bq.betaU_dDD[i][j][k]])
expr_list.extend([betaU_dDD[i][j][k]])
# Step 9:
Bq.phi_and_derivs()
#phi_dD,phi_dupD,phi_dDD,exp_m4phi,phi_dBarD,phi_dBarDD
namecheck_list.extend(["exp_m4phi"])
exprcheck_list.extend([Bq.exp_m4phi])
expr_list.extend([exp_m4phi])
for i in range(DIM):
namecheck_list.extend([gfnm("phi_dD",i),gfnm("phi_dupD",i),gfnm("phi_dBarD",i)])
exprcheck_list.extend([Bq.phi_dD[i],Bq.phi_dupD[i],Bq.phi_dBarD[i]])
expr_list.extend( [phi_dD[i],phi_dupD[i],phi_dBarD[i]])
for j in range(DIM):
namecheck_list.extend([gfnm("phi_dDD",i,j),gfnm("phi_dBarDD",i,j)])
exprcheck_list.extend([Bq.phi_dDD[i][j],Bq.phi_dBarDD[i][j]])
expr_list.extend([phi_dDD[i][j],phi_dBarDD[i][j]])
for i in range(len(expr_list)):
comp_func(expr_list[i],exprcheck_list[i],namecheck_list[i])
if all_passed:
print("ALL TESTS PASSED!")
###Output
ALL TESTS PASSED!
###Markdown
Step 11: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-BSSN_quantities.pdf](Tutorial-BSSN_quantities.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-BSSN_quantities")
###Output
Created Tutorial-BSSN_quantities.tex, and compiled LaTeX file to PDF file
Tutorial-BSSN_quantities.pdf
###Markdown
window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'UA-59152712-8'); BSSN Quantities Author: Zach Etienne Formatting improvements courtesy Brandon Clark This module documents and constructs a number of quantities useful for building symbolic (SymPy) expressions in terms of the core BSSN quantities $\left\{h_{i j},a_{i j},\phi, K, \lambda^{i}, \alpha, \mathcal{V}^i, \mathcal{B}^i\right\}$, as defined in [Ruchlin, Etienne, and Baumgarte (2018)](https://arxiv.org/abs/1712.07658) (see also [Baumgarte, Montero, Cordero-Carrión, and Müller (2012)](https://arxiv.org/abs/1211.6632)). **Notebook Status:** Self-Validated **Validation Notes:** This tutorial notebook has been confirmed to be self-consistent with its corresponding NRPy+ module, as documented [below](code_validation). **Additional validation tests may have been performed, but are as yet, undocumented. (TODO)**[comment]: (Introduction: TODO) A Note on Notation:As is standard in NRPy+, * Greek indices refer to four-dimensional quantities where the zeroth component indicates temporal (time) component.* Latin indices refer to three-dimensional quantities. This is somewhat counterintuitive since Python always indexes its lists starting from 0. As a result, the zeroth component of three-dimensional quantities will necessarily indicate the first *spatial* direction.As a corollary, any expressions involving mixed Greek and Latin indices will need to offset one set of indices by one: A Latin index in a four-vector will be incremented and a Greek index in a three-vector will be decremented (however, the latter case does not occur in this tutorial notebook). Table of Contents$$\label{toc}$$Each family of quantities is constructed within a given function (**boldfaced** below). This notebook is organized as follows1. [Step 1](initializenrpy): Initialize needed Python/NRPy+ modules1. [Step 2](declare_bssn_gfs): **`declare_BSSN_gridfunctions_if_not_declared_already()`**: Declare all of the core BSSN variables $\left\{h_{i j},a_{i j},\text{cf}, K, \lambda^{i}, \alpha, \mathcal{V}^i, \mathcal{B}^i\right\}$ and register them as gridfunctions1. [Step 3](rescaling_tensors) Rescaling tensors to avoid coordinate singularities 1. [Step 3.a](bssn_basic_tensors) **`BSSN_basic_tensors()`**: Define all basic conformal BSSN tensors $\left\{\bar{\gamma}_{i j},\bar{A}_{i j},\bar{\Lambda}^{i}, \beta^i, B^i\right\}$ in terms of BSSN gridfunctions1. [Step 4](bssn_barred_metric__inverse_and_derivs): **`gammabar__inverse_and_derivs()`**: $\bar{\gamma}^{ij}$, and spatial derivatives of $\bar{\gamma}_{ij}$ including $\bar{\Gamma}^{i}_{jk}$ 1. [Step 4.a](bssn_barred_metric__inverse): Inverse conformal 3-metric: $\bar{\gamma^{ij}}$ 1. [Step 4.b](bssn_barred_metric__derivs): Derivatives of the conformal 3-metric $\bar{\gamma}_{ij,k}$ and $\bar{\gamma}_{ij,kl}$, and associated "barred" Christoffel symbols $\bar{\Gamma}^{i}_{jk}$1. [Step 5](detgammabar_and_derivs): **`detgammabar_and_derivs()`**: $\det \bar{\gamma}_{ij}$ and its derivatives1. [Step 6](abar_quantities): **`AbarUU_AbarUD_trAbar()`**: Quantities related to conformal traceless extrinsic curvature $\bar{A}_{ij}$: $\bar{A}^{ij}$, $\bar{A}^i_j$, and $\bar{A}^k_k$1. [Step 7](rbar): **`RicciBar__gammabarDD_dHatD__DGammaUDD__DGammaU()`**: The conformal ("barred") Ricci tensor $\bar{R}_{ij}$ and associated quantities 1. [Step 7.a](rbar_part1): Conformal Ricci tensor, part 1: The $\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j}$ term 1. 
[Step 7.b](rbar_part2): Conformal Ricci tensor, part 2: The $\bar{\gamma}_{k(i} \hat{D}_{j)} \bar{\Lambda}^{k}$ term 1. [Step 7.c](rbar_part3): Conformal Ricci tensor, part 3: The $\Delta^{k} \Delta_{(i j) k} + \bar{\gamma}^{k l} \left (2 \Delta_{k(i}^{m} \Delta_{j) m l} + \Delta_{i k}^{m} \Delta_{m j l} \right )$ terms 1. [Step 7.d](summing_rbar_terms): Summing the terms and defining $\bar{R}_{ij}$1. [Step 8](beta_derivs): **`betaU_derivs()`**: Unrescaled shift vector $\beta^i$ and spatial derivatives $\beta^i_{,j}$ and $\beta^i_{,jk}$1. [Step 9](phi_and_derivs): **`phi_and_derivs()`**: Standard BSSN conformal factor $\phi$, and its derivatives $\phi_{,i}$, $\phi_{,ij}$, $\bar{D}_j \phi$, and $\bar{D}_j\bar{D}_k \phi$ 1. [Step 9.a](phi_ito_cf): $\phi$ in terms of the chosen (possibly non-standard) conformal factor variable `cf` (e.g., `cf`$=W=e^{-4\phi}$) 1. [Step 9.b](phi_covariant_derivs): Partial and covariant derivatives of $\phi$1. [Step 10](code_validation): Code Validation against `BSSN.BSSN_quantities` NRPy+ module1. [Step 11](latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file Step 1: Initialize needed Python/NRPy+ modules \[Back to [top](toc)\]$$\label{initializenrpy}$$
###Code
# Step 1: Import all needed modules from NRPy+:
import NRPy_param_funcs as par
import sympy as sp
import indexedexp as ixp
import grid as gri
import reference_metric as rfm
import sys
# Step 1.a: Set the coordinate system for the numerical grid
par.set_parval_from_str("reference_metric::CoordSystem","Spherical")
# Step 1.b: Given the chosen coordinate system, set up
# corresponding reference metric and needed
# reference metric quantities
# The following function call sets up the reference metric
# and related quantities, including rescaling matrices ReDD,
# ReU, and hatted quantities.
rfm.reference_metric()
# Step 1.c: Set spatial dimension (must be 3 for BSSN, as BSSN is
# a 3+1-dimensional decomposition of the general
# relativistic field equations)
DIM = 3
par.set_parval_from_str("grid::DIM",DIM)
# Step 1.d: Declare/initialize parameters for this module
thismodule = "BSSN_quantities"
par.initialize_param(par.glb_param("char", thismodule, "EvolvedConformalFactor_cf", "W"))
par.initialize_param(par.glb_param("bool", thismodule, "detgbarOverdetghat_equals_one", "True"))
###Output
_____no_output_____
###Markdown
Step 2: `declare_BSSN_gridfunctions_if_not_declared_already()`: Declare all of the core BSSN variables $\left\{h_{i j},a_{i j},\text{cf}, K, \lambda^{i}, \alpha, \mathcal{V}^i, \mathcal{B}^i\right\}$ and register them as gridfunctions \[Back to [top](toc)\]$$\label{declare_bssn_gfs}$$
###Code
# Step 2: Register all needed BSSN gridfunctions.
# Step 2.a: Register indexed quantities, using ixp.register_... functions
hDD = ixp.register_gridfunctions_for_single_rank2("EVOL", "hDD", "sym01")
aDD = ixp.register_gridfunctions_for_single_rank2("EVOL", "aDD", "sym01")
lambdaU = ixp.register_gridfunctions_for_single_rank1("EVOL", "lambdaU")
vetU = ixp.register_gridfunctions_for_single_rank1("EVOL", "vetU")
betU = ixp.register_gridfunctions_for_single_rank1("EVOL", "betU")
# Step 2.b: Register scalar quantities, using gri.register_gridfunctions()
trK, cf, alpha = gri.register_gridfunctions("EVOL",["trK", "cf", "alpha"])
###Output
_____no_output_____
###Markdown
Step 3: Rescaling tensors to avoid coordinate singularities \[Back to [top](toc)\]$$\label{rescaling_tensors}$$While the [covariant form of the BSSN evolution equations](Tutorial-BSSNCurvilinear.ipynb) are properly covariant (with the potential exception of the shift evolution equation, since the shift is a [freely specifiable gauge quantity](https://en.wikipedia.org/wiki/Gauge_fixing)), components of the rank-1 and rank-2 tensors $\varepsilon_{i j}$, $\bar{A}_{i j}$, and $\bar{\Lambda}^{i}$ will drop to zero (destroying information) or diverge (to $\infty$) at coordinate singularities. The good news is, this singular behavior is well-understood in terms of the scale factors of the reference metric, enabling us to define rescaled version of these quantities that are well behaved (so that, e.g., they can be finite differenced).For example, given a smooth vector *in a 3D Cartesian basis* $\bar{\Lambda}^{i}$, all components $\bar{\Lambda}^{x}$, $\bar{\Lambda}^{y}$, and $\bar{\Lambda}^{z}$ will be smooth (by assumption). When changing the basis to spherical coordinates (applying the appropriate Jacobian matrix transformation), we will find that since $\phi = \arctan(y/x)$, $\bar{\Lambda}^{\phi}$ is given by\begin{align}\bar{\Lambda}^{\phi} &= \frac{\partial \phi}{\partial x} \bar{\Lambda}^{x} + \frac{\partial \phi}{\partial y} \bar{\Lambda}^{y} + \frac{\partial \phi}{\partial z} \bar{\Lambda}^{z} \\&= -\frac{y}{x^2+y^2} \bar{\Lambda}^{x} + \frac{x}{x^2+y^2} \bar{\Lambda}^{y} \\&= -\frac{y}{(r \sin\theta)^2} \bar{\Lambda}^{x} + \frac{x}{(r \sin\theta)^2} \bar{\Lambda}^{y}.\end{align}Thus $\bar{\Lambda}^{\phi}$ diverges at all points where $r\sin\theta=0$ (or equivalently where $x=y=0$; i.e., the $z$-axis) due to the $\frac{1}{(r\sin\theta)^2}$ that appear in the Jacobian transformation. This divergence might pose no problem on cell-centered grids that avoid $r \sin\theta=0$, except that the BSSN equations require that *first and second derivatives* of these quantities be taken. Usual strategies for numerical approximation of these derivatives (e.g., finite difference methods) will "see" these divergences and errors generally will not drop to zero with increased numerical sampling of the functions at points near where the functions diverge.However, notice that if we define $\lambda^{\phi}$ such that$$\bar{\Lambda}^{\phi} = \frac{1}{r\sin\theta} \lambda^{\phi},$$then $\lambda^{\phi}$ will be smooth as well. Avoiding such singularities can be generalized to other coordinate systems, so long as $\lambda^i$ is defined as:$$\bar{\Lambda}^{i} = \frac{\lambda^i}{\text{scalefactor[i]}} ,$$where scalefactor\[i\] is the $i$th scale factor in the given coordinate system. In an identical fashion, we define the smooth versions of $\beta^i$ and $B^i$ to be $\mathcal{V}^i$ and $\mathcal{B}^i$, respectively. We refer to $\mathcal{V}^i$ and $\mathcal{B}^i$ as vet\[i\] and bet\[i\] respectively in the code after the Hebrew letters that bear some resemblance. Similarly, we define the smooth versions of $\bar{A}_{ij}$ and $\varepsilon_{ij}$ ($a_{ij}$ and $h_{ij}$, respectively) via\begin{align}\bar{A}_{ij} &= \text{scalefactor[i]}\ \text{scalefactor[j]}\ a_{ij} \\\varepsilon_{ij} &= \text{scalefactor[i]}\ \text{scalefactor[j]}\ h_{ij},\end{align}where in this case we *multiply* due to the fact that these tensors are purely covariant (as opposed to contravariant). 
To slightly simplify the notation, in NRPy+ we define the *rescaling matrices* `ReU[i]` and `ReDD[i][j]`, such that\begin{align}\text{ReU[i]} &= 1 / \text{scalefactor[i]} \\\text{ReDD[i][j]} &= \text{scalefactor[i] scalefactor[j]}.\end{align}Thus, for example, $\bar{A}_{ij}$ and $\bar{\Lambda}^i$ can be expressed as the [Hadamard product](https://en.wikipedia.org/w/index.php?title=Hadamard_product_(matrices)&oldid=852272177) of matrices :\begin{align}\bar{A}_{ij} &= \mathbf{ReDD}\circ\mathbf{a} = \text{ReDD[i][j]} a_{ij} \\\bar{\Lambda}^{i} &= \mathbf{ReU}\circ\mathbf{\lambda} = \text{ReU[i]} \lambda^i,\end{align}where no sums are implied by the repeated indices.Further, since the scale factors are *time independent*, \begin{align}\partial_t \bar{A}_{ij} &= \text{ReDD[i][j]}\ \partial_t a_{ij} \\\partial_t \bar{\gamma}_{ij} &= \partial_t \left(\varepsilon_{ij} + \hat{\gamma}_{ij}\right)\\&= \partial_t \varepsilon_{ij} \\&= \text{scalefactor[i]}\ \text{scalefactor[j]}\ \partial_t h_{ij}.\end{align}Thus instead of taking space or time derivatives of BSSN quantities$$\left\{\bar{\gamma}_{i j},\bar{A}_{i j},\phi, K, \bar{\Lambda}^{i}, \alpha, \beta^i, B^i\right\},$$ across coordinate singularities, we instead factor out the singular scale factors according to this prescription so that space or time derivatives of BSSN quantities are written in terms of finite-difference derivatives of the *rescaled* variables $$\left\{h_{i j},a_{i j},\text{cf}, K, \lambda^{i}, \alpha, \mathcal{V}^i, \mathcal{B}^i\right\},$$ and *exact* expressions for (spatial) derivatives of scale factors. Note that `cf` is the chosen conformal factor (supported choices for `cf` are discussed in [Step 6.a](phi_ito_cf)). As an example, let's evaluate $\bar{\Lambda}^{i}_{\, ,\, j}$ according to this prescription:\begin{align}\bar{\Lambda}^{i}_{\, ,\, j} &= -\frac{\lambda^i}{(\text{ReU[i]})^2}\ \partial_j \left(\text{ReU[i]}\right) + \frac{\partial_j \lambda^i}{\text{ReU[i]}} \\&= -\frac{\lambda^i}{(\text{ReU[i]})^2}\ \text{ReUdD[i][j]} + \frac{\partial_j \lambda^i}{\text{ReU[i]}}.\end{align}Here, the derivative `ReUdD[i][j]` **is computed symbolically and exactly** using SymPy, and the derivative $\partial_j \lambda^i$ represents a derivative of a *smooth* quantity (so long as $\bar{\Lambda}^{i}$ is smooth in the Cartesian basis). 
Step 3.a: `BSSN_basic_tensors()`: Define all basic conformal BSSN tensors $\left\{\bar{\gamma}_{i j},\bar{A}_{i j},\bar{\Lambda}^{i}, \beta^i, B^i\right\}$ in terms of BSSN gridfunctions \[Back to [top](toc)\]$$\label{bssn_basic_tensors}$$The `BSSN_vars__tensors()` function defines the tensorial BSSN quantities $\left\{\bar{\gamma}_{i j},\bar{A}_{i j},\bar{\Lambda}^{i}, \beta^i, B^i\right\}$, in terms of the rescaled "base" tensorial quantities $\left\{h_{i j},a_{i j}, \lambda^{i}, \mathcal{V}^i, \mathcal{B}^i\right\},$ respectively:\begin{align}\bar{\gamma}_{i j} &= \hat{\gamma}_{ij} + \varepsilon_{ij}, \text{ where } \varepsilon_{ij} = h_{ij} \circ \text{ReDD[i][j]} \\\bar{A}_{i j} &= a_{ij} \circ \text{ReDD[i][j]} \\\bar{\Lambda}^{i} &= \lambda^i \circ \text{ReU[i]} \\\beta^{i} &= \mathcal{V}^i \circ \text{ReU[i]} \\B^{i} &= \mathcal{B}^i \circ \text{ReU[i]}\end{align}Rescaling vectors and tensors are built upon the scale factors for the chosen (in general, singular) coordinate system, which are defined in NRPy+'s [reference_metric.py](../edit/reference_metric.py) ([Tutorial](Tutorial-Reference_Metric.ipynb)), and the rescaled variables are defined in the stub function [BSSN/BSSN_rescaled_vars.py](../edit/BSSN/BSSN_rescaled_vars.py). Here we implement `BSSN_vars__tensors()`:
###Code
# Step 3.a: Define all basic conformal BSSN tensors in terms of BSSN gridfunctions
# Step 3.a.i: gammabarDD and AbarDD:
gammabarDD = ixp.zerorank2()
AbarDD = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
# gammabar_{ij} = h_{ij}*ReDD[i][j] + gammahat_{ij}
gammabarDD[i][j] = hDD[i][j]*rfm.ReDD[i][j] + rfm.ghatDD[i][j]
# Abar_{ij} = a_{ij}*ReDD[i][j]
AbarDD[i][j] = aDD[i][j]*rfm.ReDD[i][j]
# Step 3.a.ii: LambdabarU, betaU, and BU:
LambdabarU = ixp.zerorank1()
betaU = ixp.zerorank1()
BU = ixp.zerorank1()
for i in range(DIM):
LambdabarU[i] = lambdaU[i]*rfm.ReU[i]
betaU[i] = vetU[i] *rfm.ReU[i]
BU[i] = betU[i] *rfm.ReU[i]
###Output
_____no_output_____
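###Markdown
*Optional illustration (added for clarity; not part of the `BSSN.BSSN_quantities` module):* the cell below prints the rescaling matrix entries $\text{ReU[i]}$ for the Spherical coordinates chosen in Step 1, together with the resulting $\bar{\Lambda}^i$; the azimuthal component should pick up the expected $1/(r\sin\theta)$ factor (with the grid symbols, typically `xx0` and `xx1`, playing the roles of $r$ and $\theta$).
###Code
# Optional illustration (not part of the module): display ReU[i] and the unrescaled
# Lambdabar^i in the Spherical coordinates chosen in Step 1. The scale factors there
# are (1, r, r*sin(theta)), so ReU reinstates the singular 1/(r*sin(theta)) behavior.
for i in range(DIM):
    print("ReU["+str(i)+"] = "+str(rfm.ReU[i])+" ;  LambdabarU["+str(i)+"] = "+str(LambdabarU[i]))
###Output
_____no_output_____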
###Markdown
Step 4: `gammabar__inverse_and_derivs()`: $\bar{\gamma}^{ij}$, and spatial derivatives of $\bar{\gamma}_{ij}$ including $\bar{\Gamma}^{i}_{jk}$ \[Back to [top](toc)\]$$\label{bssn_barred_metric__inverse_and_derivs}$$ Step 4.a: Inverse conformal 3-metric: $\bar{\gamma^{ij}}$ \[Back to [top](toc)\]$$\label{bssn_barred_metric__inverse}$$Since $\bar{\gamma}^{ij}$ is the inverse of $\bar{\gamma}_{ij}$, we apply a $3\times 3$ symmetric matrix inversion to compute $\bar{\gamma}^{ij}$.
###Code
# Step 4.a: Inverse conformal 3-metric gammabarUU:
# Step 4.a.i: gammabarUU:
gammabarUU, dummydet = ixp.symm_matrix_inverter3x3(gammabarDD)
###Output
_____no_output_____
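###Markdown
*Optional consistency check (added for illustration; not part of the `BSSN.BSSN_quantities` module):* the cell below verifies symbolically that $\bar{\gamma}^{ik}\bar{\gamma}_{kj}=\delta^i_j$; simplifying these rational expressions may take a few seconds.
###Code
# Optional consistency check (not part of the module): gammabar^{ik} gammabar_{kj} = delta^i_j.
inverse_check_passed = True
for i in range(DIM):
    for j in range(DIM):
        contraction = sp.sympify(0)
        for k in range(DIM):
            contraction += gammabarUU[i][k]*gammabarDD[k][j]
        expected = sp.sympify(1) if i == j else sp.sympify(0)
        if sp.simplify(contraction - expected) != 0:
            inverse_check_passed = False
print("gammabarUU is the inverse of gammabarDD? "+str(inverse_check_passed))
###Output
_____no_output_____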
###Markdown
Step 4.b: Derivatives of the conformal 3-metric $\bar{\gamma}_{ij,k}$ and $\bar{\gamma}_{ij,kl}$, and associated "barred" Christoffel symbols $\bar{\Gamma}^{i}_{jk}$ \[Back to [top](toc)\]$$\label{bssn_barred_metric__derivs}$$In the BSSN-in-curvilinear coordinates formulation, all quantities must be defined in terms of rescaled quantities $h_{ij}$ and their derivatives (evaluated using finite differences), as well as reference-metric quantities and their derivatives (evaluated exactly using SymPy). For example, $\bar{\gamma}_{ij,k}$ is given by:\begin{align}\bar{\gamma}_{ij,k} &= \partial_k \bar{\gamma}_{ij} \\&= \partial_k \left(\hat{\gamma}_{ij} + \varepsilon_{ij}\right) \\&= \partial_k \left(\hat{\gamma}_{ij} + h_{ij} \text{ReDD[i][j]}\right) \\&= \hat{\gamma}_{ij,k} + h_{ij,k} \text{ReDD[i][j]} + h_{ij} \text{ReDDdD[i][j][k]},\end{align}where `ReDDdD[i][j][k]` is computed within `rfm.reference_metric()`.
###Code
# Step 4.b.i gammabarDDdD[i][j][k]
# = \hat{\gamma}_{ij,k} + h_{ij,k} \text{ReDD[i][j]} + h_{ij} \text{ReDDdD[i][j][k]}.
gammabarDD_dD = ixp.zerorank3()
hDD_dD = ixp.declarerank3("hDD_dD","sym01")
hDD_dupD = ixp.declarerank3("hDD_dupD","sym01")
gammabarDD_dupD = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
gammabarDD_dD[i][j][k] = rfm.ghatDDdD[i][j][k] + \
hDD_dD[i][j][k]*rfm.ReDD[i][j] + hDD[i][j]*rfm.ReDDdD[i][j][k]
# Compute associated upwinded derivative, needed for the \bar{\gamma}_{ij} RHS
gammabarDD_dupD[i][j][k] = rfm.ghatDDdD[i][j][k] + \
hDD_dupD[i][j][k]*rfm.ReDD[i][j] + hDD[i][j]*rfm.ReDDdD[i][j][k]
###Output
_____no_output_____
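###Markdown
*Optional limit check (added for illustration; not part of the `BSSN.BSSN_quantities` module):* setting $h_{ij}\to 0$ and $h_{ij,k}\to 0$ in the expression above should reduce $\bar{\gamma}_{ij,k}$ to the reference-metric derivative $\hat{\gamma}_{ij,k}$, as the cell below confirms.
###Code
# Optional limit check (not part of the module): with h_{ij} -> 0 and h_{ij,k} -> 0,
# gammabarDD_dD must reduce to the reference-metric derivative ghatDDdD.
zero_hDD_subs = {}
for i in range(DIM):
    for j in range(DIM):
        zero_hDD_subs[hDD[i][j]] = 0
        for k in range(DIM):
            zero_hDD_subs[hDD_dD[i][j][k]] = 0
flat_limit_recovered = True
for i in range(DIM):
    for j in range(DIM):
        for k in range(DIM):
            if sp.simplify(gammabarDD_dD[i][j][k].subs(zero_hDD_subs) - rfm.ghatDDdD[i][j][k]) != 0:
                flat_limit_recovered = False
print("h_{ij} -> 0 limit recovers ghatDDdD? "+str(flat_limit_recovered))
###Output
_____no_output_____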
###Markdown
By extension, the second derivative $\bar{\gamma}_{ij,kl}$ is given by\begin{align}\bar{\gamma}_{ij,kl} &= \partial_l \left(\hat{\gamma}_{ij,k} + h_{ij,k} \text{ReDD[i][j]} + h_{ij} \text{ReDDdD[i][j][k]}\right)\\&= \hat{\gamma}_{ij,kl} + h_{ij,kl} \text{ReDD[i][j]} + h_{ij,k} \text{ReDDdD[i][j][l]} + h_{ij,l} \text{ReDDdD[i][j][k]} + h_{ij} \text{ReDDdDD[i][j][k][l]}\end{align}
###Code
# Step 4.b.ii: Compute gammabarDD_dDD in terms of the rescaled BSSN quantity hDD
# and its derivatives, as well as the reference metric and rescaling
# matrix, and its derivatives (expression given below):
hDD_dDD = ixp.declarerank4("hDD_dDD","sym01_sym23")
gammabarDD_dDD = ixp.zerorank4()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
# gammabar_{ij,kl} = gammahat_{ij,kl}
# + h_{ij,kl} ReDD[i][j]
# + h_{ij,k} ReDDdD[i][j][l] + h_{ij,l} ReDDdD[i][j][k]
# + h_{ij} ReDDdDD[i][j][k][l]
gammabarDD_dDD[i][j][k][l] = rfm.ghatDDdDD[i][j][k][l]
gammabarDD_dDD[i][j][k][l] += hDD_dDD[i][j][k][l]*rfm.ReDD[i][j]
gammabarDD_dDD[i][j][k][l] += hDD_dD[i][j][k]*rfm.ReDDdD[i][j][l] + \
hDD_dD[i][j][l]*rfm.ReDDdD[i][j][k]
gammabarDD_dDD[i][j][k][l] += hDD[i][j]*rfm.ReDDdDD[i][j][k][l]
###Output
_____no_output_____
###Markdown
Finally, we compute the Christoffel symbol associated with the barred 3-metric: $\bar{\Gamma}^{i}_{kl}$:$$\bar{\Gamma}^{i}_{kl} = \frac{1}{2} \bar{\gamma}^{im} \left(\bar{\gamma}_{mk,l} + \bar{\gamma}_{ml,k} - \bar{\gamma}_{kl,m} \right)$$
###Code
# Step 4.b.iii: Define barred Christoffel symbol \bar{\Gamma}^{i}_{kl} = GammabarUDD[i][k][l] (see expression below)
GammabarUDD = ixp.zerorank3()
for i in range(DIM):
for k in range(DIM):
for l in range(DIM):
for m in range(DIM):
# Gammabar^i_{kl} = 1/2 * gammabar^{im} ( gammabar_{mk,l} + gammabar_{ml,k} - gammabar_{kl,m}):
GammabarUDD[i][k][l] += sp.Rational(1,2)*gammabarUU[i][m]* \
(gammabarDD_dD[m][k][l] + gammabarDD_dD[m][l][k] - gammabarDD_dD[k][l][m])
###Output
_____no_output_____
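###Markdown
*Optional sanity check (added for illustration; not part of the `BSSN.BSSN_quantities` module):* by construction $\bar{\Gamma}^i_{kl}$ must be symmetric in its lower indices, which the cell below confirms symbolically.
###Code
# Optional sanity check (not part of the module): Gammabar^i_{kl} = Gammabar^i_{lk}.
GammabarUDD_is_symmetric = True
for i in range(DIM):
    for k in range(DIM):
        for l in range(DIM):
            if sp.simplify(GammabarUDD[i][k][l] - GammabarUDD[i][l][k]) != 0:
                GammabarUDD_is_symmetric = False
print("Gammabar^i_{kl} symmetric in (k,l)? "+str(GammabarUDD_is_symmetric))
###Output
_____no_output_____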
###Markdown
Step 5: `detgammabar_and_derivs()`: $\det \bar{\gamma}_{ij}$ and its derivatives \[Back to [top](toc)\]$$\label{detgammabar_and_derivs}$$As described just before Section III of [Baumgarte *et al* (2012)](https://arxiv.org/pdf/1211.6632.pdf), we are free to choose $\det \bar{\gamma}_{ij}$, which should remain fixed in time.As in [Baumgarte *et al* (2012)](https://arxiv.org/pdf/1211.6632.pdf) generally we make the choice $\det \bar{\gamma}_{ij} = \det \hat{\gamma}_{ij}$, but *this need not be the case; we could choose to set $\det \bar{\gamma}_{ij}$ to another expression.*In case we do not choose to set $\det \bar{\gamma}_{ij}/\det \hat{\gamma}_{ij}=1$, below we begin the implementation of a gridfunction, `detgbarOverdetghat`, which defines an alternative expression in its place. $\det \bar{\gamma}_{ij}/\det \hat{\gamma}_{ij}$=`detgbarOverdetghat`$\ne 1$ is not yet implemented. However, we can define `detgammabar` and its derivatives in terms of a generic `detgbarOverdetghat` and $\det \hat{\gamma}_{ij}$ and their derivatives:\begin{align}\text{detgammabar} &= \det \bar{\gamma}_{ij} = \text{detgbarOverdetghat} \cdot \left(\det \hat{\gamma}_{ij}\right) \\\text{detgammabar}\_\text{dD[k]} &= \left(\det \bar{\gamma}_{ij}\right)_{,k} = \text{detgbarOverdetghat}\_\text{dD[k]} \det \hat{\gamma}_{ij} + \text{detgbarOverdetghat} \left(\det \hat{\gamma}_{ij}\right)_{,k} \\\end{align}https://en.wikipedia.org/wiki/DeterminantProperties_of_the_determinant
###Code
# Step 5: det(gammabarDD) and its derivatives
detgbarOverdetghat = sp.sympify(1)
detgbarOverdetghat_dD = ixp.zerorank1()
detgbarOverdetghat_dDD = ixp.zerorank2()
if par.parval_from_str(thismodule+"::detgbarOverdetghat_equals_one") == "False":
print("Error: detgbarOverdetghat_equals_one=\"False\" is not fully implemented yet.")
sys.exit(1)
## Approach for implementing detgbarOverdetghat_equals_one=False:
# detgbarOverdetghat = gri.register_gridfunctions("AUX", ["detgbarOverdetghat"])
# detgbarOverdetghatInitial = gri.register_gridfunctions("AUX", ["detgbarOverdetghatInitial"])
# detgbarOverdetghat_dD = ixp.declarerank1("detgbarOverdetghat_dD")
# detgbarOverdetghat_dDD = ixp.declarerank2("detgbarOverdetghat_dDD", "sym01")
# Step 5.b: Define detgammabar, detgammabar_dD, and detgammabar_dDD (needed for
#           \partial_t \bar{\Lambda}^i below)
detgammabar = detgbarOverdetghat * rfm.detgammahat
detgammabar_dD = ixp.zerorank1()
for i in range(DIM):
detgammabar_dD[i] = detgbarOverdetghat_dD[i] * rfm.detgammahat + detgbarOverdetghat * rfm.detgammahatdD[i]
detgammabar_dDD = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
detgammabar_dDD[i][j] = detgbarOverdetghat_dDD[i][j] * rfm.detgammahat + \
detgbarOverdetghat_dD[i] * rfm.detgammahatdD[j] + \
detgbarOverdetghat_dD[j] * rfm.detgammahatdD[i] + \
detgbarOverdetghat * rfm.detgammahatdDD[i][j]
###Output
_____no_output_____
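###Markdown
*Optional illustration (added for clarity; not part of the `BSSN.BSSN_quantities` module):* with the default `detgbarOverdetghat`$=1$, `detgammabar` reduces to the reference-metric determinant $\det\hat{\gamma}_{ij}$, which the cell below displays for the Spherical coordinates chosen in Step 1.
###Code
# Optional illustration (not part of the module): with detgbarOverdetghat = 1 (the default),
# detgammabar is simply the reference-metric determinant det(gammahat_{ij}).
print("detgammabar - detgammahat = "+str(sp.simplify(detgammabar - rfm.detgammahat)))  # expect 0
print("detgammahat = "+str(sp.factor(rfm.detgammahat)))
###Output
_____no_output_____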
###Markdown
Step 6: `AbarUU_AbarUD_trAbar_AbarDD_dD()`: Quantities related to conformal traceless extrinsic curvature $\bar{A}_{ij}$: $\bar{A}^{ij}$, $\bar{A}^i_j$, and $\bar{A}^k_k$ \[Back to [top](toc)\]$$\label{abar_quantities}$$$\bar{A}^{ij}$ is given by application of the raising operators (a.k.a., the inverse 3-metric) $\bar{\gamma}^{jk}$ on both of the covariant ("down") components:$$\bar{A}^{ij} = \bar{\gamma}^{ik}\bar{\gamma}^{jl} \bar{A}_{kl}.$$$\bar{A}^i_j$ is given by a single application of the raising operator (a.k.a., the inverse 3-metric) $\bar{\gamma}^{ik}$ on $\bar{A}_{kj}$:$$\bar{A}^i_j = \bar{\gamma}^{ik}\bar{A}_{kj}.$$The trace of $\bar{A}_{ij}$, $\bar{A}^k_k$, is given by a contraction with the barred 3-metric:$$\text{Tr}(\bar{A}_{ij}) = \bar{A}^k_k = \bar{\gamma}^{kj}\bar{A}_{jk}.$$Note that while $\bar{A}_{ij}$ is defined as the *traceless* conformal extrinsic curvature, it may acquire a nonzero trace (assuming the initial data impose tracelessness) due to numerical error. $\text{Tr}(\bar{A}_{ij})$ is included in the BSSN equations to drive $\text{Tr}(\bar{A}_{ij})$ to zero.In terms of rescaled BSSN quantities, $\bar{A}_{ij}$ is given by$$\bar{A}_{ij} = \text{ReDD[i][j]} a_{ij},$$so in terms of the same quantities, $\bar{A}_{ij,k}$ is given by$$\bar{A}_{ij,k} = \text{ReDDdD[i][j][k]} a_{ij} + \text{ReDD[i][j]} a_{ij,k}.$$
###Code
# Step 6: Quantities related to conformal traceless extrinsic curvature
# Step 6.a.i: Compute Abar^{ij} in terms of Abar_{ij} and gammabar^{ij}
AbarUU = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
# Abar^{ij} = gammabar^{ik} gammabar^{jl} Abar_{kl}
AbarUU[i][j] += gammabarUU[i][k]*gammabarUU[j][l]*AbarDD[k][l]
# Step 6.a.ii: Compute Abar^i_j in terms of Abar_{ij} and gammabar^{ij}
AbarUD = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
# Abar^i_j = gammabar^{ik} Abar_{kj}
AbarUD[i][j] += gammabarUU[i][k]*AbarDD[k][j]
# Step 6.a.iii: Compute Abar^k_k = trace of Abar:
trAbar = sp.sympify(0)
for k in range(DIM):
for j in range(DIM):
# Abar^k_k = gammabar^{kj} Abar_{jk}
trAbar += gammabarUU[k][j]*AbarDD[j][k]
# Step 6.a.iv: Compute Abar_{ij,k}
AbarDD_dD = ixp.zerorank3()
AbarDD_dupD = ixp.zerorank3()
aDD_dD = ixp.declarerank3("aDD_dD" ,"sym01")
aDD_dupD = ixp.declarerank3("aDD_dupD","sym01")
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
AbarDD_dupD[i][j][k] = rfm.ReDDdD[i][j][k]*aDD[i][j] + rfm.ReDD[i][j]*aDD_dupD[i][j][k]
AbarDD_dD[i][j][k] = rfm.ReDDdD[i][j][k]*aDD[i][j] + rfm.ReDD[i][j]*aDD_dD[ i][j][k]
###Output
_____no_output_____
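###Markdown
*Optional consistency check (added for illustration; not part of the `BSSN.BSSN_quantities` module):* the trace of the mixed-index $\bar{A}^i_{\ j}$ must agree with `trAbar`$=\bar{\gamma}^{kj}\bar{A}_{jk}$, as verified below.
###Code
# Optional consistency check (not part of the module): Abar^i_i must equal trAbar.
trace_of_AbarUD = sp.sympify(0)
for i in range(DIM):
    trace_of_AbarUD += AbarUD[i][i]
print("trAbar - Abar^i_i = "+str(sp.simplify(trAbar - trace_of_AbarUD)))  # expect 0
###Output
_____no_output_____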
###Markdown
Step 7: `RicciBar__gammabarDD_dHatD__DGammaUDD__DGammaU()`: The conformal ("barred") Ricci tensor $\bar{R}_{ij}$ and associated quantities \[Back to [top](toc)\]$$\label{rbar}$$Let's compute perhaps the most complicated expression in the BSSN evolution equations, the conformal Ricci tensor:\begin{align} \bar{R}_{i j} {} = {} & - \frac{1}{2} \bar{\gamma}^{k l} \hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} + \bar{\gamma}_{k(i} \hat{D}_{j)} \bar{\Lambda}^{k} + \Delta^{k} \Delta_{(i j) k} \nonumber \\ & + \bar{\gamma}^{k l} \left (2 \Delta_{k(i}^{m} \Delta_{j) m l} + \Delta_{i k}^{m} \Delta_{m j l} \right ) \; .\end{align}Let's tackle the $\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j}$ term first: Step 7.a: Conformal Ricci tensor, part 1: The $\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j}$ term \[Back to [top](toc)\]$$\label{rbar_part1}$$First note that the covariant derivative of a metric with respect to itself is zero$$\hat{D}_{l} \hat{\gamma}_{ij} = 0,$$so $$\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} = \hat{D}_{k} \hat{D}_{l} \left(\hat{\gamma}_{i j} + \varepsilon_{ij}\right) = \hat{D}_{k} \hat{D}_{l} \varepsilon_{ij}.$$Next, the covariant derivative of a tensor is given by (from the [wikipedia article on covariant differentiation](https://en.wikipedia.org/wiki/Covariant_derivative)):\begin{align} {(\nabla_{e_c} T)^{a_1 \ldots a_r}}_{b_1 \ldots b_s} = {} &\frac{\partial}{\partial x^c}{T^{a_1 \ldots a_r}}_{b_1 \ldots b_s} \\ &+ \,{\Gamma ^{a_1}}_{dc} {T^{d a_2 \ldots a_r}}_{b_1 \ldots b_s} + \cdots + {\Gamma^{a_r}}_{dc} {T^{a_1 \ldots a_{r-1}d}}_{b_1 \ldots b_s} \\ &-\,{\Gamma^d}_{b_1 c} {T^{a_1 \ldots a_r}}_{d b_2 \ldots b_s} - \cdots - {\Gamma^d}_{b_s c} {T^{a_1 \ldots a_r}}_{b_1 \ldots b_{s-1} d}.\end{align}Therefore, $$\hat{D}_{l} \bar{\gamma}_{i j} = \hat{D}_{l} \varepsilon_{i j} = \varepsilon_{i j,l} - \hat{\Gamma}^m_{i l} \varepsilon_{m j} -\hat{\Gamma}^m_{j l} \varepsilon_{i m}.$$Since the covariant first derivative is a tensor, the covariant second derivative is given by (same as [Eq. 27 in Baumgarte et al (2012)](https://arxiv.org/pdf/1211.6632.pdf))\begin{align}\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} &= \hat{D}_{k} \hat{D}_{l} \varepsilon_{i j} \\&= \partial_k \hat{D}_{l} \varepsilon_{i j} - \hat{\Gamma}^m_{lk} \left(\hat{D}_{m} \varepsilon_{i j}\right) - \hat{\Gamma}^m_{ik} \left(\hat{D}_{l} \varepsilon_{m j}\right) - \hat{\Gamma}^m_{jk} \left(\hat{D}_{l} \varepsilon_{i m}\right),\end{align}where the first term is the partial derivative of the expression already derived for $\hat{D}_{l} \varepsilon_{i j}$:\begin{align}\partial_k \hat{D}_{l} \varepsilon_{i j} &= \partial_k \left(\varepsilon_{ij,l} - \hat{\Gamma}^m_{i l} \varepsilon_{m j} -\hat{\Gamma}^m_{j l} \varepsilon_{i m} \right) \\&= \varepsilon_{ij,lk} - \hat{\Gamma}^m_{i l,k} \varepsilon_{m j} - \hat{\Gamma}^m_{i l} \varepsilon_{m j,k} - \hat{\Gamma}^m_{j l,k} \varepsilon_{i m} - \hat{\Gamma}^m_{j l} \varepsilon_{i m,k}.\end{align}In terms of the evolved quantity $h_{ij}$, the derivatives of $\varepsilon_{ij}$ are given by:\begin{align}\varepsilon_{ij,k} &= \partial_k \left(h_{ij} \text{ReDD[i][j]}\right) \\&= h_{ij,k} \text{ReDD[i][j]} + h_{ij} \text{ReDDdD[i][j][k]},\end{align}and\begin{align}\varepsilon_{ij,kl} &= \partial_l \left(h_{ij,k} \text{ReDD[i][j]} + h_{ij} \text{ReDDdD[i][j][k]} \right)\\&= h_{ij,kl} \text{ReDD[i][j]} + h_{ij,k} \text{ReDDdD[i][j][l]} + h_{ij,l} \text{ReDDdD[i][j][k]} + h_{ij} \text{ReDDdDD[i][j][k][l]}.\end{align}
###Code
# Step 7: Conformal Ricci tensor, part 1: The \hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} term
# Step 7.a.i: Define \varepsilon_{ij} = epsDD[i][j]
epsDD = ixp.zerorank2() # epsilon_{ij} is a rank-2 tensor; each component is set below
for i in range(DIM):
for j in range(DIM):
epsDD[i][j] = hDD[i][j]*rfm.ReDD[i][j]
# Step 7.a.ii: Define epsDD_dD[i][j][k]
hDD_dD = ixp.declarerank3("hDD_dD","sym01")
epsDD_dD = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
epsDD_dD[i][j][k] = hDD_dD[i][j][k]*rfm.ReDD[i][j] + hDD[i][j]*rfm.ReDDdD[i][j][k]
# Step 7.a.iii: Define epsDD_dDD[i][j][k][l]
hDD_dDD = ixp.declarerank4("hDD_dDD","sym01_sym23")
epsDD_dDD = ixp.zerorank4()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
epsDD_dDD[i][j][k][l] = hDD_dDD[i][j][k][l]*rfm.ReDD[i][j] + \
hDD_dD[i][j][k]*rfm.ReDDdD[i][j][l] + \
hDD_dD[i][j][l]*rfm.ReDDdD[i][j][k] + \
hDD[i][j]*rfm.ReDDdDD[i][j][k][l]
###Output
_____no_output_____
###Markdown
We next compute three quantities derived above:* `gammabarDD_DhatD[i][j][l]` = $\hat{D}_{l} \bar{\gamma}_{i j} = \hat{D}_{l} \varepsilon_{i j} = \varepsilon_{i j,l} - \hat{\Gamma}^m_{i l} \varepsilon_{m j} -\hat{\Gamma}^m_{j l} \varepsilon_{i m}$,* `gammabarDD_DhatD\_dD[i][j][l][k]` = $\partial_k \hat{D}_{l} \bar{\gamma}_{i j} = \partial_k \hat{D}_{l} \varepsilon_{i j} = \varepsilon_{ij,lk} - \hat{\Gamma}^m_{i l,k} \varepsilon_{m j} - \hat{\Gamma}^m_{i l} \varepsilon_{m j,k} - \hat{\Gamma}^m_{j l,k} \varepsilon_{i m} - \hat{\Gamma}^m_{j l} \varepsilon_{i m,k}$, and* `gammabarDD_DhatDD[i][j][l][k]` = $\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} = \partial_k \hat{D}_{l} \varepsilon_{i j} - \hat{\Gamma}^m_{lk} \left(\hat{D}_{m} \varepsilon_{i j}\right) - \hat{\Gamma}^m_{ik} \left(\hat{D}_{l} \varepsilon_{m j}\right) - \hat{\Gamma}^m_{jk} \left(\hat{D}_{l} \varepsilon_{i m}\right)$.
###Code
# Step 7.a.iv: DhatgammabarDDdD[i][j][l] = \bar{\gamma}_{ij;\hat{l}}
# \bar{\gamma}_{ij;\hat{l}} = \varepsilon_{i j,l}
# - \hat{\Gamma}^m_{i l} \varepsilon_{m j}
# - \hat{\Gamma}^m_{j l} \varepsilon_{i m}
gammabarDD_dHatD = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for l in range(DIM):
gammabarDD_dHatD[i][j][l] = epsDD_dD[i][j][l]
for m in range(DIM):
gammabarDD_dHatD[i][j][l] += - rfm.GammahatUDD[m][i][l]*epsDD[m][j] \
- rfm.GammahatUDD[m][j][l]*epsDD[i][m]
# Step 7.a.v: \bar{\gamma}_{ij;\hat{l},k} = DhatgammabarDD_dHatD_dD[i][j][l][k]:
# \bar{\gamma}_{ij;\hat{l},k} = \varepsilon_{ij,lk}
# - \hat{\Gamma}^m_{i l,k} \varepsilon_{m j}
# - \hat{\Gamma}^m_{i l} \varepsilon_{m j,k}
# - \hat{\Gamma}^m_{j l,k} \varepsilon_{i m}
# - \hat{\Gamma}^m_{j l} \varepsilon_{i m,k}
gammabarDD_dHatD_dD = ixp.zerorank4()
for i in range(DIM):
for j in range(DIM):
for l in range(DIM):
for k in range(DIM):
gammabarDD_dHatD_dD[i][j][l][k] = epsDD_dDD[i][j][l][k]
for m in range(DIM):
gammabarDD_dHatD_dD[i][j][l][k] += -rfm.GammahatUDDdD[m][i][l][k]*epsDD[m][j] \
-rfm.GammahatUDD[m][i][l]*epsDD_dD[m][j][k] \
-rfm.GammahatUDDdD[m][j][l][k]*epsDD[i][m] \
-rfm.GammahatUDD[m][j][l]*epsDD_dD[i][m][k]
# Step 7.a.vi: \bar{\gamma}_{ij;\hat{l}\hat{k}} = DhatgammabarDD_dHatDD[i][j][l][k]
# \bar{\gamma}_{ij;\hat{l}\hat{k}} = \partial_k \hat{D}_{l} \varepsilon_{i j}
# - \hat{\Gamma}^m_{lk} \left(\hat{D}_{m} \varepsilon_{i j}\right)
# - \hat{\Gamma}^m_{ik} \left(\hat{D}_{l} \varepsilon_{m j}\right)
# - \hat{\Gamma}^m_{jk} \left(\hat{D}_{l} \varepsilon_{i m}\right)
gammabarDD_dHatDD = ixp.zerorank4()
for i in range(DIM):
for j in range(DIM):
for l in range(DIM):
for k in range(DIM):
gammabarDD_dHatDD[i][j][l][k] = gammabarDD_dHatD_dD[i][j][l][k]
for m in range(DIM):
gammabarDD_dHatDD[i][j][l][k] += - rfm.GammahatUDD[m][l][k]*gammabarDD_dHatD[i][j][m] \
- rfm.GammahatUDD[m][i][k]*gammabarDD_dHatD[m][j][l] \
- rfm.GammahatUDD[m][j][k]*gammabarDD_dHatD[i][m][l]
###Output
_____no_output_____
###Markdown
Step 7.b: Conformal Ricci tensor, part 2: The $\bar{\gamma}_{k(i} \hat{D}_{j)} \bar{\Lambda}^{k}$ term \[Back to [top](toc)\]$$\label{rbar_part2}$$By definition, the index symmetrization operation is given by:$$\bar{\gamma}_{k(i} \hat{D}_{j)} \bar{\Lambda}^{k} = \frac{1}{2} \left( \bar{\gamma}_{ki} \hat{D}_{j} \bar{\Lambda}^{k} + \bar{\gamma}_{kj} \hat{D}_{i} \bar{\Lambda}^{k} \right),$$and $\bar{\gamma}_{ij}$ is trivially computed ($=\varepsilon_{ij} + \hat{\gamma}_{ij}$) so the only nontrival part to computing this term is in evaluating $\hat{D}_{j} \bar{\Lambda}^{k}$.The covariant derivative is with respect to the hatted metric (i.e. the reference metric), so$$\hat{D}_{j} \bar{\Lambda}^{k} = \partial_j \bar{\Lambda}^{k} + \hat{\Gamma}^{k}_{mj} \bar{\Lambda}^m,$$except we cannot take derivatives of $\bar{\Lambda}^{k}$ directly due to potential issues with coordinate singularities. Instead we write it in terms of the rescaled quantity $\lambda^k$ via$$\bar{\Lambda}^{k} = \lambda^k \text{ReU[k]}.$$Then the expression for $\hat{D}_{j} \bar{\Lambda}^{k}$ becomes$$\hat{D}_{j} \bar{\Lambda}^{k} = \lambda^{k}_{,j} \text{ReU[k]} + \lambda^{k} \text{ReUdD[k][j]} + \hat{\Gamma}^{k}_{mj} \lambda^{m} \text{ReU[m]},$$and the NRPy+ code for this expression is written
###Code
# Step 7.b: Second term of RhatDD: compute \hat{D}_{j} \bar{\Lambda}^{k} = LambarU_dHatD[k][j]
lambdaU_dD = ixp.declarerank2("lambdaU_dD","nosym")
LambarU_dHatD = ixp.zerorank2()
for j in range(DIM):
for k in range(DIM):
LambarU_dHatD[k][j] = lambdaU_dD[k][j]*rfm.ReU[k] + lambdaU[k]*rfm.ReUdD[k][j]
for m in range(DIM):
LambarU_dHatD[k][j] += rfm.GammahatUDD[k][m][j]*lambdaU[m]*rfm.ReU[m]
###Output
_____no_output_____
###Markdown
Step 7.c: Conformal Ricci tensor, part 3: The $\Delta^{k} \Delta_{(i j) k} + \bar{\gamma}^{k l} \left (2 \Delta_{k(i}^{m} \Delta_{j) m l} + \Delta_{i k}^{m} \Delta_{m j l} \right )$ terms \[Back to [top](toc)\]$$\label{rbar_part3}$$Our goal here is to compute the quantities appearing as the final terms of the conformal Ricci tensor:$$\Delta^{k} \Delta_{(i j) k} + \bar{\gamma}^{k l} \left (2 \Delta_{k(i}^{m} \Delta_{j) m l} + \Delta_{i k}^{m} \Delta_{m j l} \right).$$* `DGammaUDD[k][i][j]`$= \Delta^k_{ij}$ is simply the difference in Christoffel symbols: $\Delta^{k}_{ij} = \bar{\Gamma}^i_{jk} - \hat{\Gamma}^i_{jk}$, and * `DGammaU[k]`$= \Delta^k$ is the contraction: $\bar{\gamma}^{ij} \Delta^{k}_{ij}$Adding these expressions to Ricci is straightforward, since $\bar{\Gamma}^i_{jk}$ and $\bar{\gamma}^{ij}$ were defined above in [Step 4](bssn_barred_metric__inverse_and_derivs), and $\hat{\Gamma}^i_{jk}$ was computed within NRPy+'s `reference_metric()` function:
###Code
# Step 7.c: Conformal Ricci tensor, part 3: The \Delta^{k} \Delta_{(i j) k}
# + \bar{\gamma}^{k l}*(2 \Delta_{k(i}^{m} \Delta_{j) m l}
# + \Delta_{i k}^{m} \Delta_{m j l}) terms
# Step 7.c.i: Define \Delta^i_{jk} = \bar{\Gamma}^i_{jk} - \hat{\Gamma}^i_{jk} = DGammaUDD[i][j][k]
DGammaUDD = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
DGammaUDD[i][j][k] = GammabarUDD[i][j][k] - rfm.GammahatUDD[i][j][k]
# Step 7.c.ii: Define \Delta^i = \bar{\gamma}^{jk} \Delta^i_{jk}
DGammaU = ixp.zerorank1()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
DGammaU[i] += gammabarUU[j][k] * DGammaUDD[i][j][k]
###Output
_____no_output_____
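###Markdown
*Optional sanity check (added for illustration; not part of the `BSSN.BSSN_quantities` module):* as the difference of two connections, $\Delta^i_{jk}$ is a true tensor and inherits the symmetry of the Christoffel symbols in its lower indices; the cell below confirms that symmetry symbolically.
###Code
# Optional sanity check (not part of the module): Delta^i_{jk} = Delta^i_{kj}, since it is
# the difference of two Christoffel symbols, each symmetric in its lower indices.
DGammaUDD_is_symmetric = True
for i in range(DIM):
    for j in range(DIM):
        for k in range(DIM):
            if sp.simplify(DGammaUDD[i][j][k] - DGammaUDD[i][k][j]) != 0:
                DGammaUDD_is_symmetric = False
print("Delta^i_{jk} symmetric in (j,k)? "+str(DGammaUDD_is_symmetric))
###Output
_____no_output_____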
###Markdown
Next we define $\Delta_{ijk}=\bar{\gamma}_{im}\Delta^m_{jk}$:
###Code
# Step 7.c.iii: Define \Delta_{ijk} = \bar{\gamma}_{im} \Delta^m_{jk}
DGammaDDD = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for m in range(DIM):
DGammaDDD[i][j][k] += gammabarDD[i][m] * DGammaUDD[m][j][k]
###Output
_____no_output_____
###Markdown
Step 7.d: Summing the terms and defining $\bar{R}_{ij}$ \[Back to [top](toc)\]$$\label{summing_rbar_terms}$$We have now constructed all of the terms going into $\bar{R}_{ij}$:\begin{align} \bar{R}_{i j} {} = {} & - \frac{1}{2} \bar{\gamma}^{k l} \hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} + \bar{\gamma}_{k(i} \hat{D}_{j)} \bar{\Lambda}^{k} + \Delta^{k} \Delta_{(i j) k} \nonumber \\ & + \bar{\gamma}^{k l} \left (2 \Delta_{k(i}^{m} \Delta_{j) m l} + \Delta_{i k}^{m} \Delta_{m j l} \right ) \; .\end{align}
###Code
# Step 7.d: Summing the terms and defining \bar{R}_{ij}
# Step 7.d.i: Add the first term to RbarDD:
# Rbar_{ij} += - \frac{1}{2} \bar{\gamma}^{k l} \hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j}
RbarDD = ixp.zerorank2()
RbarDDpiece = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
RbarDD[i][j] += -sp.Rational(1,2) * gammabarUU[k][l]*gammabarDD_dHatDD[i][j][l][k]
RbarDDpiece[i][j] += -sp.Rational(1,2) * gammabarUU[k][l]*gammabarDD_dHatDD[i][j][l][k]
# Step 7.d.ii: Add the second term to RbarDD:
# Rbar_{ij} += (1/2) * (gammabar_{ki} Lambar^k_{;\hat{j}} + gammabar_{kj} Lambar^k_{;\hat{i}})
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
RbarDD[i][j] += sp.Rational(1,2) * (gammabarDD[k][i]*LambarU_dHatD[k][j] + \
gammabarDD[k][j]*LambarU_dHatD[k][i])
# Step 7.d.iii: Add the remaining term to RbarDD:
# Rbar_{ij} += \Delta^{k} \Delta_{(i j) k} = 1/2 \Delta^{k} (\Delta_{i j k} + \Delta_{j i k})
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
RbarDD[i][j] += sp.Rational(1,2) * DGammaU[k] * (DGammaDDD[i][j][k] + DGammaDDD[j][i][k])
# Step 7.d.iv: Add the final term to RbarDD:
# Rbar_{ij} += \bar{\gamma}^{k l} (\Delta^{m}_{k i} \Delta_{j m l}
# + \Delta^{m}_{k j} \Delta_{i m l}
# + \Delta^{m}_{i k} \Delta_{m j l})
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
for m in range(DIM):
RbarDD[i][j] += gammabarUU[k][l] * (DGammaUDD[m][k][i]*DGammaDDD[j][m][l] +
DGammaUDD[m][k][j]*DGammaDDD[i][m][l] +
DGammaUDD[m][i][k]*DGammaDDD[m][j][l])
###Output
_____no_output_____
###Markdown
Step 8: **`betaU_derivs()`**: The unrescaled shift vector $\beta^i$ spatial derivatives: $\beta^i_{,j}$ & $\beta^i_{,jk}$, written in terms of the rescaled shift vector $\mathcal{V}^i$ \[Back to [top](toc)\]$$\label{beta_derivs}$$This step, which documents the function `betaUbar_and_derivs()` inside the [BSSN.BSSN_unrescaled_and_barred_vars](../edit/BSSN/BSSN_unrescaled_and_barred_vars) module, defines three quantities:[comment]: (Fix Link Above: TODO)* `betaU_dD[i][j]`$=\beta^i_{,j} = \left(\mathcal{V}^i \circ \text{ReU[i]}\right)_{,j} = \mathcal{V}^i_{,j} \circ \text{ReU[i]} + \mathcal{V}^i \circ \text{ReUdD[i][j]}$* `betaU_dupD[i][j]`: the same as above, except using *upwinded* finite-difference derivatives to compute $\mathcal{V}^i_{,j}$ instead of *centered* finite-difference derivatives.* `betaU_dDD[i][j][k]`$=\beta^i_{,jk} = \mathcal{V}^i_{,jk} \circ \text{ReU[i]} + \mathcal{V}^i_{,j} \circ \text{ReUdD[i][k]} + \mathcal{V}^i_{,k} \circ \text{ReUdD[i][j]}+\mathcal{V}^i \circ \text{ReUdDD[i][j][k]}$
###Code
# Step 8: The unrescaled shift vector betaU spatial derivatives:
# betaUdD & betaUdDD, written in terms of the
# rescaled shift vector vetU
vetU_dD = ixp.declarerank2("vetU_dD","nosym")
vetU_dupD = ixp.declarerank2("vetU_dupD","nosym") # Needed for upwinded \beta^i_{,j}
vetU_dDD = ixp.declarerank3("vetU_dDD","sym12") # Needed for \beta^i_{,jk}
betaU_dD = ixp.zerorank2()
betaU_dupD = ixp.zerorank2() # Needed for, e.g., \beta^i RHS
betaU_dDD = ixp.zerorank3() # Needed for, e.g., \bar{\Lambda}^i RHS
for i in range(DIM):
for j in range(DIM):
betaU_dD[i][j] = vetU_dD[i][j]*rfm.ReU[i] + vetU[i]*rfm.ReUdD[i][j]
betaU_dupD[i][j] = vetU_dupD[i][j]*rfm.ReU[i] + vetU[i]*rfm.ReUdD[i][j] # Needed for \beta^i RHS
for k in range(DIM):
# Needed for, e.g., \bar{\Lambda}^i RHS:
betaU_dDD[i][j][k] = vetU_dDD[i][j][k]*rfm.ReU[i] + vetU_dD[i][j]*rfm.ReUdD[i][k] + \
vetU_dD[i][k]*rfm.ReUdD[i][j] + vetU[i]*rfm.ReUdDD[i][j][k]
###Output
_____no_output_____
###Markdown
Step 9: **`phi_and_derivs()`**: Standard BSSN conformal factor $\phi$, and its derivatives $\phi_{,i}$, $\phi_{,ij}$, $\bar{D}_j \phi$, and $\bar{D}_j\bar{D}_k \phi$, all written in terms of BSSN gridfunctions like $\text{cf}$ \[Back to [top](toc)\]$$\label{phi_and_derivs}$$ Step 9.a: $\phi$ in terms of the chosen (possibly non-standard) conformal factor variable $\text{cf}$ (e.g., $\text{cf}=\chi=e^{-4\phi}$) \[Back to [top](toc)\]$$\label{phi_ito_cf}$$When solving the BSSN time evolution equations across the coordinate singularity (i.e., the "puncture") inside puncture black holes for example, the standard conformal factor $\phi$ becomes very sharp, whereas $\chi=e^{-4\phi}$ is far smoother (see, e.g., [Campanelli, Lousto, Marronetti, and Zlochower (2006)](https://arxiv.org/abs/gr-qc/0511048) for additional discussion). Thus if we choose to rewrite derivatives of $\phi$ in the BSSN equations in terms of finite-difference derivatives `cf`$=\chi$, numerical errors will be far smaller near the puncture.The BSSN modules in NRPy+ support three options for the conformal factor variable `cf`:1. `cf`$=\phi$,1. `cf`$=\chi=e^{-4\phi}$, and1. `cf`$=W = e^{-2\phi}$.The BSSN equations are written in terms of $\phi$ (actually only $e^{-4\phi}$ appears) and derivatives of $\phi$, we now define $e^{-4\phi}$ and derivatives of $\phi$ in terms of the chosen `cf`.First, we define the base variables needed within the BSSN equations:
###Code
# Step 9: Standard BSSN conformal factor phi,
# and its partial and covariant derivatives,
# all in terms of BSSN gridfunctions like cf
# Step 9.a.i: Define partial derivatives of \phi in terms of evolved quantity "cf":
cf_dD = ixp.declarerank1("cf_dD")
cf_dupD = ixp.declarerank1("cf_dupD") # Needed for \partial_t \phi next.
cf_dDD = ixp.declarerank2("cf_dDD","sym01")
phi_dD = ixp.zerorank1()
phi_dupD = ixp.zerorank1()
phi_dDD = ixp.zerorank2()
exp_m4phi = sp.sympify(0)
###Output
_____no_output_____
###Markdown
Then we define $\phi_{,i}$, $\phi_{,ij}$, and $e^{-4\phi}$ for each of the choices of `cf`.For `cf`$=\phi$, this is trivial:
###Code
# Step 9.a.ii: Assuming cf=phi, define exp_m4phi, phi_dD,
#              phi_dupD (upwind finite-difference version of phi_dD), and phi_dDD
if par.parval_from_str(thismodule+"::EvolvedConformalFactor_cf") == "phi":
for i in range(DIM):
phi_dD[i] = cf_dD[i]
phi_dupD[i] = cf_dupD[i]
for j in range(DIM):
phi_dDD[i][j] = cf_dDD[i][j]
exp_m4phi = sp.exp(-4*cf)
###Output
_____no_output_____
###Markdown
For `cf`$=W=e^{-2\phi}$, we have* $\phi_{,i} = -\text{cf}_{,i} / (2 \text{cf})$* $\phi_{,ij} = (-\text{cf}_{,ij} + \text{cf}_{,i}\text{cf}_{,j}/\text{cf}) / (2 \text{cf})$* $e^{-4\phi} = \text{cf}^2$***Exercise to student: Prove the above relations***
###Code
# Step 9.a.iii: Assuming cf=W=e^{-2 phi}, define exp_m4phi, phi_dD,
#              phi_dupD (upwind finite-difference version of phi_dD), and phi_dDD
if par.parval_from_str(thismodule+"::EvolvedConformalFactor_cf") == "W":
# \partial_i W = \partial_i (e^{-2 phi}) = -2 e^{-2 phi} \partial_i phi
# -> \partial_i phi = -\partial_i cf / (2 cf)
for i in range(DIM):
phi_dD[i] = - cf_dD[i] / (2*cf)
phi_dupD[i] = - cf_dupD[i] / (2*cf)
for j in range(DIM):
# \partial_j \partial_i phi = - \partial_j [\partial_i cf / (2 cf)]
# = - cf_{,ij} / (2 cf) + \partial_i cf \partial_j cf / (2 cf^2)
phi_dDD[i][j] = (- cf_dDD[i][j] + cf_dD[i]*cf_dD[j] / cf) / (2*cf)
exp_m4phi = cf*cf
###Output
_____no_output_____
###Markdown
For `cf`$=\chi=e^{-4\phi}$, we have* $\phi_{,i} = -\text{cf}_{,i} / (4 \text{cf})$* $\phi_{,ij} = (-\text{cf}_{,ij} + \text{cf}_{,i}\text{cf}_{,j}/\text{cf}) / (4 \text{cf})$* $e^{-4\phi} = \text{cf}$***Exercise to student: Prove the above relations***
###Code
# Step 9.a.iv: Assuming cf=chi=e^{-4 phi}, define exp_m4phi, phi_dD,
#              phi_dupD (upwind finite-difference version of phi_dD), and phi_dDD
if par.parval_from_str(thismodule+"::EvolvedConformalFactor_cf") == "chi":
# \partial_i chi = \partial_i (e^{-4 phi}) = -4 e^{-4 phi} \partial_i phi
# -> \partial_i phi = -\partial_i cf / (4 cf)
for i in range(DIM):
phi_dD[i] = - cf_dD[i] / (4*cf)
phi_dupD[i] = - cf_dupD[i] / (4*cf)
for j in range(DIM):
# \partial_j \partial_i phi = - \partial_j [\partial_i cf / (4 cf)]
# = - cf_{,ij} / (4 cf) + \partial_i cf \partial_j cf / (4 cf^2)
phi_dDD[i][j] = (- cf_dDD[i][j] + cf_dD[i]*cf_dD[j] / cf) / (4*cf)
exp_m4phi = cf
# Step 9.a.v: Error out if unsupported EvolvedConformalFactor_cf choice is made:
cf_choice = par.parval_from_str(thismodule+"::EvolvedConformalFactor_cf")
if cf_choice not in ('phi', 'W', 'chi'):
print("Error: EvolvedConformalFactor_cf == "+par.parval_from_str(thismodule+"::EvolvedConformalFactor_cf")+" unsupported!")
sys.exit(1)
###Output
_____no_output_____
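###Markdown
*Optional spot-check (added for illustration; not part of the `BSSN.BSSN_quantities` module):* the cell below verifies the two *Exercise to student* relations above, for both `cf`$=W=e^{-2\phi}$ and `cf`$=\chi=e^{-4\phi}$, by treating $\phi$ as a function of a single auxiliary variable.
###Code
# Optional spot-check (not part of the module) of the W = e^{-2 phi} and chi = e^{-4 phi}
# relations quoted above, treating phi as a function of a single auxiliary variable x_aux.
x_aux = sp.symbols('x_aux', real=True)
phi_of_x = sp.Function('phi_of_x')(x_aux)
for conf_var, power in [("W", 2), ("chi", 4)]:
    cf_of_x = sp.exp(-power*phi_of_x)
    first_residual = sp.simplify(-sp.diff(cf_of_x, x_aux)/(power*cf_of_x) - sp.diff(phi_of_x, x_aux))
    second_residual = sp.simplify((-sp.diff(cf_of_x, x_aux, 2) + sp.diff(cf_of_x, x_aux)**2/cf_of_x)/(power*cf_of_x)
                                  - sp.diff(phi_of_x, x_aux, 2))
    print(conf_var+": first-derivative residual = "+str(first_residual)+" ; second-derivative residual = "+str(second_residual))
###Output
_____no_output_____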
###Markdown
Step 9.b: Covariant derivatives of $\phi$ \[Back to [top](toc)\]$$\label{phi_covariant_derivs}$$Since $\phi$ is a scalar, $\bar{D}_i \phi = \partial_i \phi$.Thus the second covariant derivative is given by\begin{align}\bar{D}_i \bar{D}_j \phi &= \phi_{;\bar{i}\bar{j}} = \bar{D}_i \phi_{,j}\\ &= \phi_{,ij} - \bar{\Gamma}^k_{ij} \phi_{,k}.\end{align}
###Code
# Step 9.b: Define phi_dBarD = phi_dD (since phi is a scalar) and phi_dBarDD (covariant derivative)
# \bar{D}_i \bar{D}_j \phi = \phi_{;\bar{i}\bar{j}} = \bar{D}_i \phi_{,j}
# = \phi_{,ij} - \bar{\Gamma}^k_{ij} \phi_{,k}
phi_dBarD = phi_dD
phi_dBarDD = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
phi_dBarDD[i][j] = phi_dDD[i][j]
for k in range(DIM):
phi_dBarDD[i][j] += - GammabarUDD[k][i][j]*phi_dD[k]
###Output
_____no_output_____
###Markdown
Step 10: Code validation against `BSSN.BSSN_quantities` NRPy+ module \[Back to [top](toc)\]$$\label{code_validation}$$As a code validation check, we verify agreement in the SymPy expressions for the RHSs of the BSSN equations between1. this tutorial and 2. the NRPy+ [BSSN.BSSN_quantities](../edit/BSSN/BSSN_quantities.py) module.By default, we analyze the RHSs in Spherical coordinates, though other coordinate systems may be chosen.
###Code
all_passed=True
def comp_func(expr1,expr2,basename,prefixname2="Bq."):
    # Note: "global" is required here; without it, all_passed=False would only bind a
    # function-local variable, and a mismatch could never flip the module-level flag.
    global all_passed
    if str(expr1-expr2)!="0":
        print(basename+" - "+prefixname2+basename+" = "+ str(expr1-expr2))
        all_passed=False
def gfnm(basename,idx1,idx2=None,idx3=None):
if idx2 is None:
return basename+"["+str(idx1)+"]"
if idx3 is None:
return basename+"["+str(idx1)+"]["+str(idx2)+"]"
return basename+"["+str(idx1)+"]["+str(idx2)+"]["+str(idx3)+"]"
expr_list = []
exprcheck_list = []
namecheck_list = []
# Step 3:
import BSSN.BSSN_quantities as Bq
Bq.BSSN_basic_tensors()
for i in range(DIM):
namecheck_list.extend([gfnm("LambdabarU",i),gfnm("betaU",i),gfnm("BU",i)])
exprcheck_list.extend([Bq.LambdabarU[i],Bq.betaU[i],Bq.BU[i]])
expr_list.extend([LambdabarU[i],betaU[i],BU[i]])
for j in range(DIM):
namecheck_list.extend([gfnm("gammabarDD",i,j),gfnm("AbarDD",i,j)])
exprcheck_list.extend([Bq.gammabarDD[i][j],Bq.AbarDD[i][j]])
expr_list.extend([gammabarDD[i][j],AbarDD[i][j]])
# Step 4:
Bq.gammabar__inverse_and_derivs()
for i in range(DIM):
for j in range(DIM):
namecheck_list.extend([gfnm("gammabarUU",i,j)])
exprcheck_list.extend([Bq.gammabarUU[i][j]])
expr_list.extend([gammabarUU[i][j]])
for k in range(DIM):
namecheck_list.extend([gfnm("gammabarDD_dD",i,j,k),
gfnm("gammabarDD_dupD",i,j,k),
gfnm("GammabarUDD",i,j,k)])
exprcheck_list.extend([Bq.gammabarDD_dD[i][j][k],Bq.gammabarDD_dupD[i][j][k],Bq.GammabarUDD[i][j][k]])
expr_list.extend( [gammabarDD_dD[i][j][k],gammabarDD_dupD[i][j][k],GammabarUDD[i][j][k]])
# Step 5:
Bq.detgammabar_and_derivs()
namecheck_list.extend(["detgammabar"])
exprcheck_list.extend([Bq.detgammabar])
expr_list.extend([detgammabar])
for i in range(DIM):
namecheck_list.extend([gfnm("detgammabar_dD",i)])
exprcheck_list.extend([Bq.detgammabar_dD[i]])
expr_list.extend([detgammabar_dD[i]])
for j in range(DIM):
namecheck_list.extend([gfnm("detgammabar_dDD",i,j)])
exprcheck_list.extend([Bq.detgammabar_dDD[i][j]])
expr_list.extend([detgammabar_dDD[i][j]])
# Step 6:
Bq.AbarUU_AbarUD_trAbar_AbarDD_dD()
namecheck_list.extend(["trAbar"])
exprcheck_list.extend([Bq.trAbar])
expr_list.extend([trAbar])
for i in range(DIM):
for j in range(DIM):
namecheck_list.extend([gfnm("AbarUU",i,j),gfnm("AbarUD",i,j)])
exprcheck_list.extend([Bq.AbarUU[i][j],Bq.AbarUD[i][j]])
expr_list.extend([AbarUU[i][j],AbarUD[i][j]])
for k in range(DIM):
namecheck_list.extend([gfnm("AbarDD_dD",i,j,k)])
exprcheck_list.extend([Bq.AbarDD_dD[i][j][k]])
expr_list.extend([AbarDD_dD[i][j][k]])
# Step 7:
Bq.RicciBar__gammabarDD_dHatD__DGammaUDD__DGammaU()
for i in range(DIM):
namecheck_list.extend([gfnm("DGammaU",i)])
exprcheck_list.extend([Bq.DGammaU[i]])
expr_list.extend([DGammaU[i]])
for j in range(DIM):
namecheck_list.extend([gfnm("RbarDD",i,j)])
exprcheck_list.extend([Bq.RbarDD[i][j]])
expr_list.extend([RbarDD[i][j]])
for k in range(DIM):
namecheck_list.extend([gfnm("DGammaUDD",i,j,k),gfnm("gammabarDD_dHatD",i,j,k)])
exprcheck_list.extend([Bq.DGammaUDD[i][j][k],Bq.gammabarDD_dHatD[i][j][k]])
expr_list.extend([DGammaUDD[i][j][k],gammabarDD_dHatD[i][j][k]])
# Step 8:
Bq.betaU_derivs()
for i in range(DIM):
for j in range(DIM):
namecheck_list.extend([gfnm("betaU_dD",i,j),gfnm("betaU_dupD",i,j)])
exprcheck_list.extend([Bq.betaU_dD[i][j],Bq.betaU_dupD[i][j]])
expr_list.extend([betaU_dD[i][j],betaU_dupD[i][j]])
for k in range(DIM):
namecheck_list.extend([gfnm("betaU_dDD",i,j,k)])
exprcheck_list.extend([Bq.betaU_dDD[i][j][k]])
expr_list.extend([betaU_dDD[i][j][k]])
# Step 9:
Bq.phi_and_derivs()
#phi_dD,phi_dupD,phi_dDD,exp_m4phi,phi_dBarD,phi_dBarDD
namecheck_list.extend(["exp_m4phi"])
exprcheck_list.extend([Bq.exp_m4phi])
expr_list.extend([exp_m4phi])
for i in range(DIM):
namecheck_list.extend([gfnm("phi_dD",i),gfnm("phi_dupD",i),gfnm("phi_dBarD",i)])
exprcheck_list.extend([Bq.phi_dD[i],Bq.phi_dupD[i],Bq.phi_dBarD[i]])
expr_list.extend( [phi_dD[i],phi_dupD[i],phi_dBarD[i]])
for j in range(DIM):
namecheck_list.extend([gfnm("phi_dDD",i,j),gfnm("phi_dBarDD",i,j)])
exprcheck_list.extend([Bq.phi_dDD[i][j],Bq.phi_dBarDD[i][j]])
expr_list.extend([phi_dDD[i][j],phi_dBarDD[i][j]])
for i in range(len(expr_list)):
comp_func(expr_list[i],exprcheck_list[i],namecheck_list[i])
if all_passed:
print("ALL TESTS PASSED!")
###Output
ALL TESTS PASSED!
###Markdown
Step 11: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-BSSN_quantities.pdf](Tutorial-BSSN_quantities.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-BSSN_quantities")
###Output
Created Tutorial-BSSN_quantities.tex, and compiled LaTeX file to PDF file
Tutorial-BSSN_quantities.pdf
###Markdown
window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'UA-59152712-8'); BSSN Quantities Author: Zach Etienne Formatting improvements courtesy Brandon Clark This module documents and constructs a number of quantities useful for building symbolic (SymPy) expressions in terms of the core BSSN quantities $\left\{h_{i j},a_{i j},\phi, K, \lambda^{i}, \alpha, \mathcal{V}^i, \mathcal{B}^i\right\}$, as defined in [Ruchlin, Etienne, and Baumgarte (2018)](https://arxiv.org/abs/1712.07658) (see also [Baumgarte, Montero, Cordero-Carrión, and Müller (2012)](https://arxiv.org/abs/1211.6632)). **Notebook Status:** Self-Validated **Validation Notes:** This tutorial notebook has been confirmed to be self-consistent with its corresponding NRPy+ module, as documented [below](code_validation). **Additional validation tests may have been performed, but are as yet, undocumented. (TODO)**[comment]: (Introduction: TODO) A Note on Notation:As is standard in NRPy+, * Greek indices refer to four-dimensional quantities where the zeroth component indicates temporal (time) component.* Latin indices refer to three-dimensional quantities. This is somewhat counterintuitive since Python always indexes its lists starting from 0. As a result, the zeroth component of three-dimensional quantities will necessarily indicate the first *spatial* direction.As a corollary, any expressions involving mixed Greek and Latin indices will need to offset one set of indices by one: A Latin index in a four-vector will be incremented and a Greek index in a three-vector will be decremented (however, the latter case does not occur in this tutorial notebook). Table of Contents$$\label{toc}$$Each family of quantities is constructed within a given function (**boldfaced** below). This notebook is organized as follows1. [Step 1](initializenrpy): Initialize needed Python/NRPy+ modules1. [Step 2](declare_bssn_gfs): **`declare_BSSN_gridfunctions_if_not_declared_already()`**: Declare all of the core BSSN variables $\left\{h_{i j},a_{i j},\text{cf}, K, \lambda^{i}, \alpha, \mathcal{V}^i, \mathcal{B}^i\right\}$ and register them as gridfunctions1. [Step 3](rescaling_tensors) Rescaling tensors to avoid coordinate singularities 1. [Step 3.a](bssn_basic_tensors) **`BSSN_basic_tensors()`**: Define all basic conformal BSSN tensors $\left\{\bar{\gamma}_{i j},\bar{A}_{i j},\bar{\Lambda}^{i}, \beta^i, B^i\right\}$ in terms of BSSN gridfunctions1. [Step 4](bssn_barred_metric__inverse_and_derivs): **`gammabar__inverse_and_derivs()`**: $\bar{\gamma}^{ij}$, and spatial derivatives of $\bar{\gamma}_{ij}$ including $\bar{\Gamma}^{i}_{jk}$ 1. [Step 4.a](bssn_barred_metric__inverse): Inverse conformal 3-metric: $\bar{\gamma^{ij}}$ 1. [Step 4.b](bssn_barred_metric__derivs): Derivatives of the conformal 3-metric $\bar{\gamma}_{ij,k}$ and $\bar{\gamma}_{ij,kl}$, and associated "barred" Christoffel symbols $\bar{\Gamma}^{i}_{jk}$1. [Step 5](detgammabar_and_derivs): **`detgammabar_and_derivs()`**: $\det \bar{\gamma}_{ij}$ and its derivatives1. [Step 6](abar_quantities): **`AbarUU_AbarUD_trAbar()`**: Quantities related to conformal traceless extrinsic curvature $\bar{A}_{ij}$: $\bar{A}^{ij}$, $\bar{A}^i_j$, and $\bar{A}^k_k$1. [Step 7](rbar): **`RicciBar__gammabarDD_dHatD__DGammaUDD__DGammaU()`**: The conformal ("barred") Ricci tensor $\bar{R}_{ij}$ and associated quantities 1. [Step 7.a](rbar_part1): Conformal Ricci tensor, part 1: The $\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j}$ term 1. 
[Step 7.b](rbar_part2): Conformal Ricci tensor, part 2: The $\bar{\gamma}_{k(i} \hat{D}_{j)} \bar{\Lambda}^{k}$ term 1. [Step 7.c](rbar_part3): Conformal Ricci tensor, part 3: The $\Delta^{k} \Delta_{(i j) k} + \bar{\gamma}^{k l} \left (2 \Delta_{k(i}^{m} \Delta_{j) m l} + \Delta_{i k}^{m} \Delta_{m j l} \right )$ terms 1. [Step 7.d](summing_rbar_terms): Summing the terms and defining $\bar{R}_{ij}$1. [Step 8](beta_derivs): **`betaU_derivs()`**: Unrescaled shift vector $\beta^i$ and spatial derivatives $\beta^i_{,j}$ and $\beta^i_{,jk}$1. [Step 9](phi_and_derivs): **`phi_and_derivs()`**: Standard BSSN conformal factor $\phi$, and its derivatives $\phi_{,i}$, $\phi_{,ij}$, $\bar{D}_j \phi$, and $\bar{D}_j\bar{D}_k \phi$ 1. [Step 9.a](phi_ito_cf): $\phi$ in terms of the chosen (possibly non-standard) conformal factor variable `cf` (e.g., `cf`$=W=e^{-4\phi}$) 1. [Step 9.b](phi_covariant_derivs): Partial and covariant derivatives of $\phi$1. [Step 10](code_validation): Code Validation against `BSSN.BSSN_quantities` NRPy+ module1. [Step 11](latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file Step 1: Initialize needed Python/NRPy+ modules \[Back to [top](toc)\]$$\label{initializenrpy}$$
###Code
# Step 1: Import all needed modules from NRPy+:
import NRPy_param_funcs as par
import sympy as sp
import indexedexp as ixp
import grid as gri
import reference_metric as rfm
import sys
# Step 1.a: Set the coordinate system for the numerical grid
par.set_parval_from_str("reference_metric::CoordSystem","Spherical")
# Step 1.b: Given the chosen coordinate system, set up
# corresponding reference metric and needed
# reference metric quantities
# The following function call sets up the reference metric
# and related quantities, including rescaling matrices ReDD,
# ReU, and hatted quantities.
rfm.reference_metric()
# Step 1.c: Set spatial dimension (must be 3 for BSSN, as BSSN is
# a 3+1-dimensional decomposition of the general
# relativistic field equations)
DIM = 3
par.set_parval_from_str("grid::DIM",DIM)
# Step 1.d: Declare/initialize parameters for this module
thismodule = "BSSN_quantities"
par.initialize_param(par.glb_param("char", thismodule, "EvolvedConformalFactor_cf", "W"))
par.initialize_param(par.glb_param("bool", thismodule, "detgbarOverdetghat_equals_one", "True"))
###Output
_____no_output_____
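###Markdown
The remainder of this notebook assumes the Spherical reference metric chosen above, but nothing below is specific to it. As a minimal, illustrative sketch (not part of the original setup), a different coordinate system could be selected before `rfm.reference_metric()` is called; the name "SinhSpherical" used here is assumed to be among the coordinate systems supported by `reference_metric.py`.
###Code
# Illustrative sketch only: how one might select a different reference metric.
# Guarded by a flag so that, by default, this cell leaves the Spherical setup above untouched.
use_alternative_coordinates = False  # flip to True to experiment
if use_alternative_coordinates:
    # "SinhSpherical" is assumed to be a supported CoordSystem choice; substitute any supported name.
    par.set_parval_from_str("reference_metric::CoordSystem", "SinhSpherical")
    rfm.reference_metric()  # must be re-run after changing the coordinate system
###Output
_____no_output_____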
###Markdown
Step 2: `declare_BSSN_gridfunctions_if_not_declared_already()`: Declare all of the core BSSN variables $\left\{h_{i j},a_{i j},\text{cf}, K, \lambda^{i}, \alpha, \mathcal{V}^i, \mathcal{B}^i\right\}$ and register them as gridfunctions \[Back to [top](toc)\]$$\label{declare_bssn_gfs}$$
###Code
# Step 2: Register all needed BSSN gridfunctions.
# Step 2.a: Register indexed quantities, using ixp.register_... functions
hDD = ixp.register_gridfunctions_for_single_rank2("EVOL", "hDD", "sym01")
aDD = ixp.register_gridfunctions_for_single_rank2("EVOL", "aDD", "sym01")
lambdaU = ixp.register_gridfunctions_for_single_rank1("EVOL", "lambdaU")
vetU = ixp.register_gridfunctions_for_single_rank1("EVOL", "vetU")
betU = ixp.register_gridfunctions_for_single_rank1("EVOL", "betU")
# Step 2.b: Register scalar quantities, using gri.register_gridfunctions()
trK, cf, alpha = gri.register_gridfunctions("EVOL",["trK", "cf", "alpha"])
###Output
_____no_output_____
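###Markdown
To see what these registrations produce, the short sketch below (added for illustration; it assumes the cell above has been run) prints a few of the resulting SymPy symbols. In particular, the `"sym01"` option makes the rank-2 gridfunctions symmetric, so `hDD[0][1]` and `hDD[1][0]` refer to the same symbol.
###Code
# Illustration only: inspect a few of the registered gridfunctions.
# hDD and aDD are 3x3 nested lists of SymPy symbols; "sym01" enforces hDD[i][j] == hDD[j][i].
print("hDD[0][1] =", hDD[0][1], ";  hDD[1][0] =", hDD[1][0])
print("hDD[0][1] == hDD[1][0]:", hDD[0][1] == hDD[1][0])
print("lambdaU   =", lambdaU)
print("scalars   =", trK, cf, alpha)
###Output
_____no_output_____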
###Markdown
Step 3: Rescaling tensors to avoid coordinate singularities \[Back to [top](toc)\]$$\label{rescaling_tensors}$$While the [covariant form of the BSSN evolution equations](Tutorial-BSSNCurvilinear.ipynb) are properly covariant (with the potential exception of the shift evolution equation, since the shift is a [freely specifiable gauge quantity](https://en.wikipedia.org/wiki/Gauge_fixing)), components of the rank-1 and rank-2 tensors $\varepsilon_{i j}$, $\bar{A}_{i j}$, and $\bar{\Lambda}^{i}$ will drop to zero (destroying information) or diverge (to $\infty$) at coordinate singularities. The good news is, this singular behavior is well-understood in terms of the scale factors of the reference metric, enabling us to define rescaled version of these quantities that are well behaved (so that, e.g., they can be finite differenced).For example, given a smooth vector *in a 3D Cartesian basis* $\bar{\Lambda}^{i}$, all components $\bar{\Lambda}^{x}$, $\bar{\Lambda}^{y}$, and $\bar{\Lambda}^{z}$ will be smooth (by assumption). When changing the basis to spherical coordinates (applying the appropriate Jacobian matrix transformation), we will find that since $\phi = \arctan(y/x)$, $\bar{\Lambda}^{\phi}$ is given by\begin{align}\bar{\Lambda}^{\phi} &= \frac{\partial \phi}{\partial x} \bar{\Lambda}^{x} + \frac{\partial \phi}{\partial y} \bar{\Lambda}^{y} + \frac{\partial \phi}{\partial z} \bar{\Lambda}^{z} \\&= -\frac{y}{x^2+y^2} \bar{\Lambda}^{x} + \frac{x}{x^2+y^2} \bar{\Lambda}^{y} \\&= -\frac{y}{(r \sin\theta)^2} \bar{\Lambda}^{x} + \frac{x}{(r \sin\theta)^2} \bar{\Lambda}^{y}.\end{align}Thus $\bar{\Lambda}^{\phi}$ diverges at all points where $r\sin\theta=0$ (or equivalently where $x=y=0$; i.e., the $z$-axis) due to the $\frac{1}{(r\sin\theta)^2}$ that appear in the Jacobian transformation. This divergence might pose no problem on cell-centered grids that avoid $r \sin\theta=0$, except that the BSSN equations require that *first and second derivatives* of these quantities be taken. Usual strategies for numerical approximation of these derivatives (e.g., finite difference methods) will "see" these divergences and errors generally will not drop to zero with increased numerical sampling of the functions at points near where the functions diverge.However, notice that if we define $\lambda^{\phi}$ such that$$\bar{\Lambda}^{\phi} = \frac{1}{r\sin\theta} \lambda^{\phi},$$then $\lambda^{\phi}$ will be smooth as well. Avoiding such singularities can be generalized to other coordinate systems, so long as $\lambda^i$ is defined as:$$\bar{\Lambda}^{i} = \frac{\lambda^i}{\text{scalefactor[i]}} ,$$where scalefactor\[i\] is the $i$th scale factor in the given coordinate system. In an identical fashion, we define the smooth versions of $\beta^i$ and $B^i$ to be $\mathcal{V}^i$ and $\mathcal{B}^i$, respectively. We refer to $\mathcal{V}^i$ and $\mathcal{B}^i$ as vet\[i\] and bet\[i\] respectively in the code after the Hebrew letters that bear some resemblance. Similarly, we define the smooth versions of $\bar{A}_{ij}$ and $\varepsilon_{ij}$ ($a_{ij}$ and $h_{ij}$, respectively) via\begin{align}\bar{A}_{ij} &= \text{scalefactor[i]}\ \text{scalefactor[j]}\ a_{ij} \\\varepsilon_{ij} &= \text{scalefactor[i]}\ \text{scalefactor[j]}\ h_{ij},\end{align}where in this case we *multiply* due to the fact that these tensors are purely covariant (as opposed to contravariant). 
To slightly simplify the notation, in NRPy+ we define the *rescaling matrices* `ReU[i]` and `ReDD[i][j]`, such that\begin{align}\text{ReU[i]} &= 1 / \text{scalefactor[i]} \\\text{ReDD[i][j]} &= \text{scalefactor[i] scalefactor[j]}.\end{align}Thus, for example, $\bar{A}_{ij}$ and $\bar{\Lambda}^i$ can be expressed as the [Hadamard product](https://en.wikipedia.org/w/index.php?title=Hadamard_product_(matrices)&oldid=852272177) of matrices :\begin{align}\bar{A}_{ij} &= \mathbf{ReDD}\circ\mathbf{a} = \text{ReDD[i][j]} a_{ij} \\\bar{\Lambda}^{i} &= \mathbf{ReU}\circ\mathbf{\lambda} = \text{ReU[i]} \lambda^i,\end{align}where no sums are implied by the repeated indices.Further, since the scale factors are *time independent*, \begin{align}\partial_t \bar{A}_{ij} &= \text{ReDD[i][j]}\ \partial_t a_{ij} \\\partial_t \bar{\gamma}_{ij} &= \partial_t \left(\varepsilon_{ij} + \hat{\gamma}_{ij}\right)\\&= \partial_t \varepsilon_{ij} \\&= \text{scalefactor[i]}\ \text{scalefactor[j]}\ \partial_t h_{ij}.\end{align}Thus instead of taking space or time derivatives of BSSN quantities$$\left\{\bar{\gamma}_{i j},\bar{A}_{i j},\phi, K, \bar{\Lambda}^{i}, \alpha, \beta^i, B^i\right\},$$ across coordinate singularities, we instead factor out the singular scale factors according to this prescription so that space or time derivatives of BSSN quantities are written in terms of finite-difference derivatives of the *rescaled* variables $$\left\{h_{i j},a_{i j},\text{cf}, K, \lambda^{i}, \alpha, \mathcal{V}^i, \mathcal{B}^i\right\},$$ and *exact* expressions for (spatial) derivatives of scale factors. Note that `cf` is the chosen conformal factor (supported choices for `cf` are discussed in [Step 6.a](phi_ito_cf)). As an example, let's evaluate $\bar{\Lambda}^{i}_{\, ,\, j}$ according to this prescription:\begin{align}\bar{\Lambda}^{i}_{\, ,\, j} &= -\frac{\lambda^i}{(\text{ReU[i]})^2}\ \partial_j \left(\text{ReU[i]}\right) + \frac{\partial_j \lambda^i}{\text{ReU[i]}} \\&= -\frac{\lambda^i}{(\text{ReU[i]})^2}\ \text{ReUdD[i][j]} + \frac{\partial_j \lambda^i}{\text{ReU[i]}}.\end{align}Here, the derivative `ReUdD[i][j]` **is computed symbolically and exactly** using SymPy, and the derivative $\partial_j \lambda^i$ represents a derivative of a *smooth* quantity (so long as $\bar{\Lambda}^{i}$ is smooth in the Cartesian basis). 
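As a quick, concrete illustration of this machinery (a sketch added here for reference; it assumes the Step 1 cell has already been run, so that `rfm.reference_metric()` has populated the rescaling matrices), the entries of `ReU` and `ReDD` for the chosen Spherical coordinates can simply be printed:
###Code
# Illustration only: inspect the rescaling matrices ReU[i] = 1/scalefactor[i] and
# ReDD[i][j] = scalefactor[i]*scalefactor[j] set up by rfm.reference_metric().
for i in range(DIM):
    print("ReU["+str(i)+"]      =", rfm.ReU[i])
print("ReDD[2][2] =", rfm.ReDD[2][2])  # in Spherical, this is the phi scale factor squared: (r sin(theta))^2
###Output
_____no_output_____
###Markdown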
Step 3.a: `BSSN_basic_tensors()`: Define all basic conformal BSSN tensors $\left\{\bar{\gamma}_{i j},\bar{A}_{i j},\bar{\Lambda}^{i}, \beta^i, B^i\right\}$ in terms of BSSN gridfunctions \[Back to [top](toc)\]$$\label{bssn_basic_tensors}$$The `BSSN_vars__tensors()` function defines the tensorial BSSN quantities $\left\{\bar{\gamma}_{i j},\bar{A}_{i j},\bar{\Lambda}^{i}, \beta^i, B^i\right\}$, in terms of the rescaled "base" tensorial quantities $\left\{h_{i j},a_{i j}, \lambda^{i}, \mathcal{V}^i, \mathcal{B}^i\right\},$ respectively:\begin{align}\bar{\gamma}_{i j} &= \hat{\gamma}_{ij} + \varepsilon_{ij}, \text{ where } \varepsilon_{ij} = h_{ij} \circ \text{ReDD[i][j]} \\\bar{A}_{i j} &= a_{ij} \circ \text{ReDD[i][j]} \\\bar{\Lambda}^{i} &= \lambda^i \circ \text{ReU[i]} \\\beta^{i} &= \mathcal{V}^i \circ \text{ReU[i]} \\B^{i} &= \mathcal{B}^i \circ \text{ReU[i]}\end{align}Rescaling vectors and tensors are built upon the scale factors for the chosen (in general, singular) coordinate system, which are defined in NRPy+'s [reference_metric.py](../edit/reference_metric.py) ([Tutorial](Tutorial-Reference_Metric.ipynb)), and the rescaled variables are defined in the stub function [BSSN/BSSN_rescaled_vars.py](../edit/BSSN/BSSN_rescaled_vars.py). Here we implement `BSSN_vars__tensors()`:
###Code
# Step 3.a: Define all basic conformal BSSN tensors in terms of BSSN gridfunctions
# Step 3.a.i: gammabarDD and AbarDD:
gammabarDD = ixp.zerorank2()
AbarDD = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
# gammabar_{ij} = h_{ij}*ReDD[i][j] + gammahat_{ij}
gammabarDD[i][j] = hDD[i][j]*rfm.ReDD[i][j] + rfm.ghatDD[i][j]
# Abar_{ij} = a_{ij}*ReDD[i][j]
AbarDD[i][j] = aDD[i][j]*rfm.ReDD[i][j]
# Step 3.a.ii: LambdabarU, betaU, and BU:
LambdabarU = ixp.zerorank1()
betaU = ixp.zerorank1()
BU = ixp.zerorank1()
for i in range(DIM):
LambdabarU[i] = lambdaU[i]*rfm.ReU[i]
betaU[i] = vetU[i] *rfm.ReU[i]
BU[i] = betU[i] *rfm.ReU[i]
###Output
_____no_output_____
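###Markdown
As a small sanity check on these definitions (a sketch added for illustration; it is separate from the validation in Step 10), setting $h_{ij}\to 0$ should reduce $\bar{\gamma}_{ij}$ to the reference metric $\hat{\gamma}_{ij}$, and setting $a_{ij}\to 0$ should make $\bar{A}_{ij}$ vanish:
###Code
# Sketch of a quick sanity check: with h_{ij} -> 0, gammabar_{ij} -> ghat_{ij}; with a_{ij} -> 0, Abar_{ij} -> 0.
residual_sum = sp.sympify(0)
for i in range(DIM):
    for j in range(DIM):
        residual_sum += sp.simplify(gammabarDD[i][j].subs(hDD[i][j], 0) - rfm.ghatDD[i][j])
        residual_sum += sp.simplify(AbarDD[i][j].subs(aDD[i][j], 0))
print("Sum of residuals (should be zero):", residual_sum)
###Output
_____no_output_____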
###Markdown
Step 4: `gammabar__inverse_and_derivs()`: $\bar{\gamma}^{ij}$, and spatial derivatives of $\bar{\gamma}_{ij}$ including $\bar{\Gamma}^{i}_{jk}$ \[Back to [top](toc)\]$$\label{bssn_barred_metric__inverse_and_derivs}$$ Step 4.a: Inverse conformal 3-metric: $\bar{\gamma^{ij}}$ \[Back to [top](toc)\]$$\label{bssn_barred_metric__inverse}$$Since $\bar{\gamma}^{ij}$ is the inverse of $\bar{\gamma}_{ij}$, we apply a $3\times 3$ symmetric matrix inversion to compute $\bar{\gamma}^{ij}$.
###Code
# Step 4.a: Inverse conformal 3-metric gammabarUU:
# Step 4.a.i: gammabarUU:
gammabarUU, dummydet = ixp.symm_matrix_inverter3x3(gammabarDD)
###Output
_____no_output_____
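###Markdown
The inverse can be spot-checked symbolically; the sketch below (added for illustration; `sp.simplify()` on these expressions may take a little time) verifies that $\bar{\gamma}^{ik}\bar{\gamma}_{kj}=\delta^i_j$ for a couple of index pairs:
###Code
# Optional sketch: spot-check that gammabar^{ik} gammabar_{kj} = delta^i_j.
for (i, j) in [(0, 0), (0, 1)]:
    contraction = sp.sympify(0)
    for k in range(DIM):
        contraction += gammabarUU[i][k]*gammabarDD[k][j]
    expected = sp.sympify(1) if i == j else sp.sympify(0)
    print("(i,j) =", (i, j), "->", sp.simplify(contraction - expected))  # expect 0 in both cases
###Output
_____no_output_____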
###Markdown
Step 4.b: Derivatives of the conformal 3-metric $\bar{\gamma}_{ij,k}$ and $\bar{\gamma}_{ij,kl}$, and associated "barred" Christoffel symbols $\bar{\Gamma}^{i}_{jk}$ \[Back to [top](toc)\]$$\label{bssn_barred_metric__derivs}$$In the BSSN-in-curvilinear coordinates formulation, all quantities must be defined in terms of rescaled quantities $h_{ij}$ and their derivatives (evaluated using finite differences), as well as reference-metric quantities and their derivatives (evaluated exactly using SymPy). For example, $\bar{\gamma}_{ij,k}$ is given by:\begin{align}\bar{\gamma}_{ij,k} &= \partial_k \bar{\gamma}_{ij} \\&= \partial_k \left(\hat{\gamma}_{ij} + \varepsilon_{ij}\right) \\&= \partial_k \left(\hat{\gamma}_{ij} + h_{ij} \text{ReDD[i][j]}\right) \\&= \hat{\gamma}_{ij,k} + h_{ij,k} \text{ReDD[i][j]} + h_{ij} \text{ReDDdD[i][j][k]},\end{align}where `ReDDdD[i][j][k]` is computed within `rfm.reference_metric()`.
###Code
# Step 4.b.i gammabarDDdD[i][j][k]
# = \hat{\gamma}_{ij,k} + h_{ij,k} \text{ReDD[i][j]} + h_{ij} \text{ReDDdD[i][j][k]}.
gammabarDD_dD = ixp.zerorank3()
hDD_dD = ixp.declarerank3("hDD_dD","sym01")
hDD_dupD = ixp.declarerank3("hDD_dupD","sym01")
gammabarDD_dupD = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
gammabarDD_dD[i][j][k] = rfm.ghatDDdD[i][j][k] + \
hDD_dD[i][j][k]*rfm.ReDD[i][j] + hDD[i][j]*rfm.ReDDdD[i][j][k]
# Compute associated upwinded derivative, needed for the \bar{\gamma}_{ij} RHS
gammabarDD_dupD[i][j][k] = rfm.ghatDDdD[i][j][k] + \
hDD_dupD[i][j][k]*rfm.ReDD[i][j] + hDD[i][j]*rfm.ReDDdD[i][j][k]
###Output
_____no_output_____
###Markdown
By extension, the second derivative $\bar{\gamma}_{ij,kl}$ is given by\begin{align}\bar{\gamma}_{ij,kl} &= \partial_l \left(\hat{\gamma}_{ij,k} + h_{ij,k} \text{ReDD[i][j]} + h_{ij} \text{ReDDdD[i][j][k]}\right)\\&= \hat{\gamma}_{ij,kl} + h_{ij,kl} \text{ReDD[i][j]} + h_{ij,k} \text{ReDDdD[i][j][l]} + h_{ij,l} \text{ReDDdD[i][j][k]} + h_{ij} \text{ReDDdDD[i][j][k][l]}\end{align}
###Code
# Step 4.b.ii: Compute gammabarDD_dDD in terms of the rescaled BSSN quantity hDD
# and its derivatives, as well as the reference metric and rescaling
# matrix, and its derivatives (expression given below):
hDD_dDD = ixp.declarerank4("hDD_dDD","sym01_sym23")
gammabarDD_dDD = ixp.zerorank4()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
# gammabar_{ij,kl} = gammahat_{ij,kl}
# + h_{ij,kl} ReDD[i][j]
# + h_{ij,k} ReDDdD[i][j][l] + h_{ij,l} ReDDdD[i][j][k]
# + h_{ij} ReDDdDD[i][j][k][l]
gammabarDD_dDD[i][j][k][l] = rfm.ghatDDdDD[i][j][k][l]
gammabarDD_dDD[i][j][k][l] += hDD_dDD[i][j][k][l]*rfm.ReDD[i][j]
gammabarDD_dDD[i][j][k][l] += hDD_dD[i][j][k]*rfm.ReDDdD[i][j][l] + \
hDD_dD[i][j][l]*rfm.ReDDdD[i][j][k]
gammabarDD_dDD[i][j][k][l] += hDD[i][j]*rfm.ReDDdDD[i][j][k][l]
###Output
_____no_output_____
###Markdown
Finally, we compute the Christoffel symbol associated with the barred 3-metric: $\bar{\Gamma}^{i}_{kl}$:$$\bar{\Gamma}^{i}_{kl} = \frac{1}{2} \bar{\gamma}^{im} \left(\bar{\gamma}_{mk,l} + \bar{\gamma}_{ml,k} - \bar{\gamma}_{kl,m} \right)$$
###Code
# Step 4.b.iii: Define barred Christoffel symbol \bar{\Gamma}^{i}_{kl} = GammabarUDD[i][k][l] (see expression below)
GammabarUDD = ixp.zerorank3()
for i in range(DIM):
for k in range(DIM):
for l in range(DIM):
for m in range(DIM):
# Gammabar^i_{kl} = 1/2 * gammabar^{im} ( gammabar_{mk,l} + gammabar_{ml,k} - gammabar_{kl,m}):
GammabarUDD[i][k][l] += sp.Rational(1,2)*gammabarUU[i][m]* \
(gammabarDD_dD[m][k][l] + gammabarDD_dD[m][l][k] - gammabarDD_dD[k][l][m])
###Output
_____no_output_____
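###Markdown
A useful spot check on $\bar{\Gamma}^i_{kl}$ (a sketch added for illustration, not part of the Step 10 validation): when $h_{ij}$ and its derivatives are set to zero, the barred Christoffel symbols should reduce to the hatted (reference-metric) Christoffel symbols $\hat{\Gamma}^i_{kl}$ computed by `rfm.reference_metric()`. The simplification below may take a moment.
###Code
# Optional sketch: with h_{ij} -> 0 and h_{ij,k} -> 0, GammabarUDD should reduce to rfm.GammahatUDD.
zero_subs = {}
for i in range(DIM):
    for j in range(DIM):
        zero_subs[hDD[i][j]] = 0
        for k in range(DIM):
            zero_subs[hDD_dD[i][j][k]] = 0
for (i, k, l) in [(0, 1, 1), (2, 0, 2)]:  # spot-check a couple of components
    diff = sp.simplify(GammabarUDD[i][k][l].subs(zero_subs) - rfm.GammahatUDD[i][k][l])
    print("(i,k,l) =", (i, k, l), "->", diff)  # expect 0
###Output
_____no_output_____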
###Markdown
Step 5: `detgammabar_and_derivs()`: $\det \bar{\gamma}_{ij}$ and its derivatives \[Back to [top](toc)\]$$\label{detgammabar_and_derivs}$$As described just before Section III of [Baumgarte *et al* (2012)](https://arxiv.org/pdf/1211.6632.pdf), we are free to choose $\det \bar{\gamma}_{ij}$, which should remain fixed in time.As in [Baumgarte *et al* (2012)](https://arxiv.org/pdf/1211.6632.pdf) generally we make the choice $\det \bar{\gamma}_{ij} = \det \hat{\gamma}_{ij}$, but *this need not be the case; we could choose to set $\det \bar{\gamma}_{ij}$ to another expression.*In case we do not choose to set $\det \bar{\gamma}_{ij}/\det \hat{\gamma}_{ij}=1$, below we begin the implementation of a gridfunction, `detgbarOverdetghat`, which defines an alternative expression in its place. $\det \bar{\gamma}_{ij}/\det \hat{\gamma}_{ij}$=`detgbarOverdetghat`$\ne 1$ is not yet implemented. However, we can define `detgammabar` and its derivatives in terms of a generic `detgbarOverdetghat` and $\det \hat{\gamma}_{ij}$ and their derivatives:\begin{align}\text{detgammabar} &= \det \bar{\gamma}_{ij} = \text{detgbarOverdetghat} \cdot \left(\det \hat{\gamma}_{ij}\right) \\\text{detgammabar}\_\text{dD[k]} &= \left(\det \bar{\gamma}_{ij}\right)_{,k} = \text{detgbarOverdetghat}\_\text{dD[k]} \det \hat{\gamma}_{ij} + \text{detgbarOverdetghat} \left(\det \hat{\gamma}_{ij}\right)_{,k} \\\end{align}(See also: https://en.wikipedia.org/wiki/Determinant#Properties_of_the_determinant)
###Code
# Step 5: det(gammabarDD) and its derivatives
detgbarOverdetghat = sp.sympify(1)
detgbarOverdetghat_dD = ixp.zerorank1()
detgbarOverdetghat_dDD = ixp.zerorank2()
if par.parval_from_str(thismodule+"::detgbarOverdetghat_equals_one") == "False":
print("Error: detgbarOverdetghat_equals_one=\"False\" is not fully implemented yet.")
sys.exit(1)
## Approach for implementing detgbarOverdetghat_equals_one=False:
# detgbarOverdetghat = gri.register_gridfunctions("AUX", ["detgbarOverdetghat"])
# detgbarOverdetghatInitial = gri.register_gridfunctions("AUX", ["detgbarOverdetghatInitial"])
# detgbarOverdetghat_dD = ixp.declarerank1("detgbarOverdetghat_dD")
# detgbarOverdetghat_dDD = ixp.declarerank2("detgbarOverdetghat_dDD", "sym01")
# Step 5.b: Define detgammabar, detgammabar_dD, and detgammabar_dDD (needed for
#           \partial_t \bar{\Lambda}^i below)
detgammabar = detgbarOverdetghat * rfm.detgammahat
detgammabar_dD = ixp.zerorank1()
for i in range(DIM):
detgammabar_dD[i] = detgbarOverdetghat_dD[i] * rfm.detgammahat + detgbarOverdetghat * rfm.detgammahatdD[i]
detgammabar_dDD = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
detgammabar_dDD[i][j] = detgbarOverdetghat_dDD[i][j] * rfm.detgammahat + \
detgbarOverdetghat_dD[i] * rfm.detgammahatdD[j] + \
detgbarOverdetghat_dD[j] * rfm.detgammahatdD[i] + \
detgbarOverdetghat * rfm.detgammahatdDD[i][j]
###Output
_____no_output_____
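###Markdown
Since `detgbarOverdetghat` is set to one here, `detgammabar` is just $\det\hat{\gamma}_{ij}$, and its hand-coded derivative can be compared against a direct SymPy derivative. The sketch below is for illustration only and assumes, as in NRPy+'s `reference_metric` module, that `rfm.xx` holds the coordinate symbols.
###Code
# Optional sketch: compare the analytic derivative detgammabar_dD[i] against sp.diff().
# (rfm.xx is assumed to hold the coordinate symbols of the chosen coordinate system.)
for i in range(DIM):
    direct = sp.diff(rfm.detgammahat, rfm.xx[i])
    print("i =", i, ":", sp.simplify(detgammabar_dD[i] - direct))  # expect 0
###Output
_____no_output_____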
###Markdown
Step 6: `AbarUU_AbarUD_trAbar_AbarDD_dD()`: Quantities related to conformal traceless extrinsic curvature $\bar{A}_{ij}$: $\bar{A}^{ij}$, $\bar{A}^i_j$, and $\bar{A}^k_k$ \[Back to [top](toc)\]$$\label{abar_quantities}$$$\bar{A}^{ij}$ is given by application of the raising operators (a.k.a., the inverse 3-metric) $\bar{\gamma}^{jk}$ on both of the covariant ("down") components:$$\bar{A}^{ij} = \bar{\gamma}^{ik}\bar{\gamma}^{jl} \bar{A}_{kl}.$$$\bar{A}^i_j$ is given by a single application of the raising operator (a.k.a., the inverse 3-metric) $\bar{\gamma}^{ik}$ on $\bar{A}_{kj}$:$$\bar{A}^i_j = \bar{\gamma}^{ik}\bar{A}_{kj}.$$The trace of $\bar{A}_{ij}$, $\bar{A}^k_k$, is given by a contraction with the barred 3-metric:$$\text{Tr}(\bar{A}_{ij}) = \bar{A}^k_k = \bar{\gamma}^{kj}\bar{A}_{jk}.$$Note that while $\bar{A}_{ij}$ is defined as the *traceless* conformal extrinsic curvature, it may acquire a nonzero trace (assuming the initial data impose tracelessness) due to numerical error. $\text{Tr}(\bar{A}_{ij})$ is included in the BSSN equations to drive $\text{Tr}(\bar{A}_{ij})$ to zero.In terms of rescaled BSSN quantities, $\bar{A}_{ij}$ is given by$$\bar{A}_{ij} = \text{ReDD[i][j]} a_{ij},$$so in terms of the same quantities, $\bar{A}_{ij,k}$ is given by$$\bar{A}_{ij,k} = \text{ReDDdD[i][j][k]} a_{ij} + \text{ReDD[i][j]} a_{ij,k}.$$
###Code
# Step 6: Quantities related to conformal traceless extrinsic curvature
# Step 6.a.i: Compute Abar^{ij} in terms of Abar_{ij} and gammabar^{ij}
AbarUU = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
# Abar^{ij} = gammabar^{ik} gammabar^{jl} Abar_{kl}
AbarUU[i][j] += gammabarUU[i][k]*gammabarUU[j][l]*AbarDD[k][l]
# Step 6.a.ii: Compute Abar^i_j in terms of Abar_{ij} and gammabar^{ij}
AbarUD = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
# Abar^i_j = gammabar^{ik} Abar_{kj}
AbarUD[i][j] += gammabarUU[i][k]*AbarDD[k][j]
# Step 6.a.iii: Compute Abar^k_k = trace of Abar:
trAbar = sp.sympify(0)
for k in range(DIM):
for j in range(DIM):
# Abar^k_k = gammabar^{kj} Abar_{jk}
trAbar += gammabarUU[k][j]*AbarDD[j][k]
# Step 6.a.iv: Compute Abar_{ij,k}
AbarDD_dD = ixp.zerorank3()
AbarDD_dupD = ixp.zerorank3()
aDD_dD = ixp.declarerank3("aDD_dD" ,"sym01")
aDD_dupD = ixp.declarerank3("aDD_dupD","sym01")
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
AbarDD_dupD[i][j][k] = rfm.ReDDdD[i][j][k]*aDD[i][j] + rfm.ReDD[i][j]*aDD_dupD[i][j][k]
AbarDD_dD[i][j][k] = rfm.ReDDdD[i][j][k]*aDD[i][j] + rfm.ReDD[i][j]*aDD_dD[ i][j][k]
###Output
_____no_output_____
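###Markdown
The pieces above satisfy a simple internal consistency relation (a sketch added for illustration): the trace of $\bar{A}^i_j$ must equal $\bar{A}^k_k$ as computed directly in `trAbar`.
###Code
# Optional sketch: the trace of Abar^i_j should agree with trAbar.
trace_from_AbarUD = sp.sympify(0)
for k in range(DIM):
    trace_from_AbarUD += AbarUD[k][k]
print("trace(Abar^i_j) - trAbar =", sp.simplify(trace_from_AbarUD - trAbar))  # expect 0
###Output
_____no_output_____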
###Markdown
Step 7: `RicciBar__gammabarDD_dHatD__DGammaUDD__DGammaU()`: The conformal ("barred") Ricci tensor $\bar{R}_{ij}$ and associated quantities \[Back to [top](toc)\]$$\label{rbar}$$Let's compute perhaps the most complicated expression in the BSSN evolution equations, the conformal Ricci tensor:\begin{align} \bar{R}_{i j} {} = {} & - \frac{1}{2} \bar{\gamma}^{k l} \hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} + \bar{\gamma}_{k(i} \hat{D}_{j)} \bar{\Lambda}^{k} + \Delta^{k} \Delta_{(i j) k} \nonumber \\ & + \bar{\gamma}^{k l} \left (2 \Delta_{k(i}^{m} \Delta_{j) m l} + \Delta_{i k}^{m} \Delta_{m j l} \right ) \; .\end{align}Let's tackle the $\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j}$ term first: Step 7.a: Conformal Ricci tensor, part 1: The $\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j}$ term \[Back to [top](toc)\]$$\label{rbar_part1}$$First note that the covariant derivative of a metric with respect to itself is zero$$\hat{D}_{l} \hat{\gamma}_{ij} = 0,$$so $$\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} = \hat{D}_{k} \hat{D}_{l} \left(\hat{\gamma}_{i j} + \varepsilon_{ij}\right) = \hat{D}_{k} \hat{D}_{l} \varepsilon_{ij}.$$Next, the covariant derivative of a tensor is given by (from the [wikipedia article on covariant differentiation](https://en.wikipedia.org/wiki/Covariant_derivative)):\begin{align} {(\nabla_{e_c} T)^{a_1 \ldots a_r}}_{b_1 \ldots b_s} = {} &\frac{\partial}{\partial x^c}{T^{a_1 \ldots a_r}}_{b_1 \ldots b_s} \\ &+ \,{\Gamma ^{a_1}}_{dc} {T^{d a_2 \ldots a_r}}_{b_1 \ldots b_s} + \cdots + {\Gamma^{a_r}}_{dc} {T^{a_1 \ldots a_{r-1}d}}_{b_1 \ldots b_s} \\ &-\,{\Gamma^d}_{b_1 c} {T^{a_1 \ldots a_r}}_{d b_2 \ldots b_s} - \cdots - {\Gamma^d}_{b_s c} {T^{a_1 \ldots a_r}}_{b_1 \ldots b_{s-1} d}.\end{align}Therefore, $$\hat{D}_{l} \bar{\gamma}_{i j} = \hat{D}_{l} \varepsilon_{i j} = \varepsilon_{i j,l} - \hat{\Gamma}^m_{i l} \varepsilon_{m j} -\hat{\Gamma}^m_{j l} \varepsilon_{i m}.$$Since the covariant first derivative is a tensor, the covariant second derivative is given by (same as [Eq. 27 in Baumgarte et al (2012)](https://arxiv.org/pdf/1211.6632.pdf))\begin{align}\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} &= \hat{D}_{k} \hat{D}_{l} \varepsilon_{i j} \\&= \partial_k \hat{D}_{l} \varepsilon_{i j} - \hat{\Gamma}^m_{lk} \left(\hat{D}_{m} \varepsilon_{i j}\right) - \hat{\Gamma}^m_{ik} \left(\hat{D}_{l} \varepsilon_{m j}\right) - \hat{\Gamma}^m_{jk} \left(\hat{D}_{l} \varepsilon_{i m}\right),\end{align}where the first term is the partial derivative of the expression already derived for $\hat{D}_{l} \varepsilon_{i j}$:\begin{align}\partial_k \hat{D}_{l} \varepsilon_{i j} &= \partial_k \left(\varepsilon_{ij,l} - \hat{\Gamma}^m_{i l} \varepsilon_{m j} -\hat{\Gamma}^m_{j l} \varepsilon_{i m} \right) \\&= \varepsilon_{ij,lk} - \hat{\Gamma}^m_{i l,k} \varepsilon_{m j} - \hat{\Gamma}^m_{i l} \varepsilon_{m j,k} - \hat{\Gamma}^m_{j l,k} \varepsilon_{i m} - \hat{\Gamma}^m_{j l} \varepsilon_{i m,k}.\end{align}In terms of the evolved quantity $h_{ij}$, the derivatives of $\varepsilon_{ij}$ are given by:\begin{align}\varepsilon_{ij,k} &= \partial_k \left(h_{ij} \text{ReDD[i][j]}\right) \\&= h_{ij,k} \text{ReDD[i][j]} + h_{ij} \text{ReDDdD[i][j][k]},\end{align}and\begin{align}\varepsilon_{ij,kl} &= \partial_l \left(h_{ij,k} \text{ReDD[i][j]} + h_{ij} \text{ReDDdD[i][j][k]} \right)\\&= h_{ij,kl} \text{ReDD[i][j]} + h_{ij,k} \text{ReDDdD[i][j][l]} + h_{ij,l} \text{ReDDdD[i][j][k]} + h_{ij} \text{ReDDdDD[i][j][k][l]}.\end{align}
###Code
# Step 7: Conformal Ricci tensor, part 1: The \hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} term
# Step 7.a.i: Define \varepsilon_{ij} = epsDD[i][j]
epsDD = ixp.zerorank2()  # epsilon_{ij} is a rank-2 (3x3, symmetric) quantity
for i in range(DIM):
for j in range(DIM):
epsDD[i][j] = hDD[i][j]*rfm.ReDD[i][j]
# Step 7.a.ii: Define epsDD_dD[i][j][k]
hDD_dD = ixp.declarerank3("hDD_dD","sym01")
epsDD_dD = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
epsDD_dD[i][j][k] = hDD_dD[i][j][k]*rfm.ReDD[i][j] + hDD[i][j]*rfm.ReDDdD[i][j][k]
# Step 7.a.iii: Define epsDD_dDD[i][j][k][l]
hDD_dDD = ixp.declarerank4("hDD_dDD","sym01_sym23")
epsDD_dDD = ixp.zerorank4()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
epsDD_dDD[i][j][k][l] = hDD_dDD[i][j][k][l]*rfm.ReDD[i][j] + \
hDD_dD[i][j][k]*rfm.ReDDdD[i][j][l] + \
hDD_dD[i][j][l]*rfm.ReDDdD[i][j][k] + \
hDD[i][j]*rfm.ReDDdDD[i][j][k][l]
###Output
_____no_output_____
###Markdown
We next compute three quantities derived above:* `gammabarDD_DhatD[i][j][l]` = $\hat{D}_{l} \bar{\gamma}_{i j} = \hat{D}_{l} \varepsilon_{i j} = \varepsilon_{i j,l} - \hat{\Gamma}^m_{i l} \varepsilon_{m j} -\hat{\Gamma}^m_{j l} \varepsilon_{i m}$,* `gammabarDD_DhatD\_dD[i][j][l][k]` = $\partial_k \hat{D}_{l} \bar{\gamma}_{i j} = \partial_k \hat{D}_{l} \varepsilon_{i j} = \varepsilon_{ij,lk} - \hat{\Gamma}^m_{i l,k} \varepsilon_{m j} - \hat{\Gamma}^m_{i l} \varepsilon_{m j,k} - \hat{\Gamma}^m_{j l,k} \varepsilon_{i m} - \hat{\Gamma}^m_{j l} \varepsilon_{i m,k}$, and* `gammabarDD_DhatDD[i][j][l][k]` = $\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} = \partial_k \hat{D}_{l} \varepsilon_{i j} - \hat{\Gamma}^m_{lk} \left(\hat{D}_{m} \varepsilon_{i j}\right) - \hat{\Gamma}^m_{ik} \left(\hat{D}_{l} \varepsilon_{m j}\right) - \hat{\Gamma}^m_{jk} \left(\hat{D}_{l} \varepsilon_{i m}\right)$.
###Code
# Step 7.a.iv: DhatgammabarDDdD[i][j][l] = \bar{\gamma}_{ij;\hat{l}}
# \bar{\gamma}_{ij;\hat{l}} = \varepsilon_{i j,l}
# - \hat{\Gamma}^m_{i l} \varepsilon_{m j}
# - \hat{\Gamma}^m_{j l} \varepsilon_{i m}
gammabarDD_dHatD = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for l in range(DIM):
gammabarDD_dHatD[i][j][l] = epsDD_dD[i][j][l]
for m in range(DIM):
gammabarDD_dHatD[i][j][l] += - rfm.GammahatUDD[m][i][l]*epsDD[m][j] \
- rfm.GammahatUDD[m][j][l]*epsDD[i][m]
# Step 7.a.v: \bar{\gamma}_{ij;\hat{l},k} = DhatgammabarDD_dHatD_dD[i][j][l][k]:
# \bar{\gamma}_{ij;\hat{l},k} = \varepsilon_{ij,lk}
# - \hat{\Gamma}^m_{i l,k} \varepsilon_{m j}
# - \hat{\Gamma}^m_{i l} \varepsilon_{m j,k}
# - \hat{\Gamma}^m_{j l,k} \varepsilon_{i m}
# - \hat{\Gamma}^m_{j l} \varepsilon_{i m,k}
gammabarDD_dHatD_dD = ixp.zerorank4()
for i in range(DIM):
for j in range(DIM):
for l in range(DIM):
for k in range(DIM):
gammabarDD_dHatD_dD[i][j][l][k] = epsDD_dDD[i][j][l][k]
for m in range(DIM):
gammabarDD_dHatD_dD[i][j][l][k] += -rfm.GammahatUDDdD[m][i][l][k]*epsDD[m][j] \
-rfm.GammahatUDD[m][i][l]*epsDD_dD[m][j][k] \
-rfm.GammahatUDDdD[m][j][l][k]*epsDD[i][m] \
-rfm.GammahatUDD[m][j][l]*epsDD_dD[i][m][k]
# Step 7.a.vi: \bar{\gamma}_{ij;\hat{l}\hat{k}} = DhatgammabarDD_dHatDD[i][j][l][k]
# \bar{\gamma}_{ij;\hat{l}\hat{k}} = \partial_k \hat{D}_{l} \varepsilon_{i j}
# - \hat{\Gamma}^m_{lk} \left(\hat{D}_{m} \varepsilon_{i j}\right)
# - \hat{\Gamma}^m_{ik} \left(\hat{D}_{l} \varepsilon_{m j}\right)
# - \hat{\Gamma}^m_{jk} \left(\hat{D}_{l} \varepsilon_{i m}\right)
gammabarDD_dHatDD = ixp.zerorank4()
for i in range(DIM):
for j in range(DIM):
for l in range(DIM):
for k in range(DIM):
gammabarDD_dHatDD[i][j][l][k] = gammabarDD_dHatD_dD[i][j][l][k]
for m in range(DIM):
gammabarDD_dHatDD[i][j][l][k] += - rfm.GammahatUDD[m][l][k]*gammabarDD_dHatD[i][j][m] \
- rfm.GammahatUDD[m][i][k]*gammabarDD_dHatD[m][j][l] \
- rfm.GammahatUDD[m][j][k]*gammabarDD_dHatD[i][m][l]
###Output
_____no_output_____
###Markdown
Step 7.b: Conformal Ricci tensor, part 2: The $\bar{\gamma}_{k(i} \hat{D}_{j)} \bar{\Lambda}^{k}$ term \[Back to [top](toc)\]$$\label{rbar_part2}$$By definition, the index symmetrization operation is given by:$$\bar{\gamma}_{k(i} \hat{D}_{j)} \bar{\Lambda}^{k} = \frac{1}{2} \left( \bar{\gamma}_{ki} \hat{D}_{j} \bar{\Lambda}^{k} + \bar{\gamma}_{kj} \hat{D}_{i} \bar{\Lambda}^{k} \right),$$and $\bar{\gamma}_{ij}$ is trivially computed ($=\varepsilon_{ij} + \hat{\gamma}_{ij}$) so the only nontrival part to computing this term is in evaluating $\hat{D}_{j} \bar{\Lambda}^{k}$.The covariant derivative is with respect to the hatted metric (i.e. the reference metric), so$$\hat{D}_{j} \bar{\Lambda}^{k} = \partial_j \bar{\Lambda}^{k} + \hat{\Gamma}^{k}_{mj} \bar{\Lambda}^m,$$except we cannot take derivatives of $\bar{\Lambda}^{k}$ directly due to potential issues with coordinate singularities. Instead we write it in terms of the rescaled quantity $\lambda^k$ via$$\bar{\Lambda}^{k} = \lambda^k \text{ReU[k]}.$$Then the expression for $\hat{D}_{j} \bar{\Lambda}^{k}$ becomes$$\hat{D}_{j} \bar{\Lambda}^{k} = \lambda^{k}_{,j} \text{ReU[k]} + \lambda^{k} \text{ReUdD[k][j]} + \hat{\Gamma}^{k}_{mj} \lambda^{m} \text{ReU[m]},$$and the NRPy+ code for this expression is written
###Code
# Step 7.b: Second term of RhatDD: compute \hat{D}_{j} \bar{\Lambda}^{k} = LambarU_dHatD[k][j]
lambdaU_dD = ixp.declarerank2("lambdaU_dD","nosym")
LambarU_dHatD = ixp.zerorank2()
for j in range(DIM):
for k in range(DIM):
LambarU_dHatD[k][j] = lambdaU_dD[k][j]*rfm.ReU[k] + lambdaU[k]*rfm.ReUdD[k][j]
for m in range(DIM):
LambarU_dHatD[k][j] += rfm.GammahatUDD[k][m][j]*lambdaU[m]*rfm.ReU[m]
###Output
_____no_output_____
###Markdown
Step 7.c: Conformal Ricci tensor, part 3: The $\Delta^{k} \Delta_{(i j) k} + \bar{\gamma}^{k l} \left (2 \Delta_{k(i}^{m} \Delta_{j) m l} + \Delta_{i k}^{m} \Delta_{m j l} \right )$ terms \[Back to [top](toc)\]$$\label{rbar_part3}$$Our goal here is to compute the quantities appearing as the final terms of the conformal Ricci tensor:$$\Delta^{k} \Delta_{(i j) k} + \bar{\gamma}^{k l} \left (2 \Delta_{k(i}^{m} \Delta_{j) m l} + \Delta_{i k}^{m} \Delta_{m j l} \right).$$* `DGammaUDD[k][i][j]`$= \Delta^k_{ij}$ is simply the difference in Christoffel symbols: $\Delta^{k}_{ij} = \bar{\Gamma}^i_{jk} - \hat{\Gamma}^i_{jk}$, and * `DGammaU[k]`$= \Delta^k$ is the contraction: $\bar{\gamma}^{ij} \Delta^{k}_{ij}$Adding these expressions to Ricci is straightforward, since $\bar{\Gamma}^i_{jk}$ and $\bar{\gamma}^{ij}$ were defined above in [Step 4](bssn_barred_metric__inverse_and_derivs), and $\hat{\Gamma}^i_{jk}$ was computed within NRPy+'s `reference_metric()` function:
###Code
# Step 7.c: Conformal Ricci tensor, part 3: The \Delta^{k} \Delta_{(i j) k}
# + \bar{\gamma}^{k l}*(2 \Delta_{k(i}^{m} \Delta_{j) m l}
# + \Delta_{i k}^{m} \Delta_{m j l}) terms
# Step 7.c.i: Define \Delta^i_{jk} = \bar{\Gamma}^i_{jk} - \hat{\Gamma}^i_{jk} = DGammaUDD[i][j][k]
DGammaUDD = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
DGammaUDD[i][j][k] = GammabarUDD[i][j][k] - rfm.GammahatUDD[i][j][k]
# Step 7.c.ii: Define \Delta^i = \bar{\gamma}^{jk} \Delta^i_{jk}
DGammaU = ixp.zerorank1()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
DGammaU[i] += gammabarUU[j][k] * DGammaUDD[i][j][k]
###Output
_____no_output_____
###Markdown
Next we define $\Delta_{ijk}=\bar{\gamma}_{im}\Delta^m_{jk}$:
###Code
# Step 7.c.iii: Define \Delta_{ijk} = \bar{\gamma}_{im} \Delta^m_{jk}
DGammaDDD = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for m in range(DIM):
DGammaDDD[i][j][k] += gammabarDD[i][m] * DGammaUDD[m][j][k]
###Output
_____no_output_____
###Markdown
Step 7.d: Summing the terms and defining $\bar{R}_{ij}$ \[Back to [top](toc)\]$$\label{summing_rbar_terms}$$We have now constructed all of the terms going into $\bar{R}_{ij}$:\begin{align} \bar{R}_{i j} {} = {} & - \frac{1}{2} \bar{\gamma}^{k l} \hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} + \bar{\gamma}_{k(i} \hat{D}_{j)} \bar{\Lambda}^{k} + \Delta^{k} \Delta_{(i j) k} \nonumber \\ & + \bar{\gamma}^{k l} \left (2 \Delta_{k(i}^{m} \Delta_{j) m l} + \Delta_{i k}^{m} \Delta_{m j l} \right ) \; .\end{align}
###Code
# Step 7.d: Summing the terms and defining \bar{R}_{ij}
# Step 7.d.i: Add the first term to RbarDD:
# Rbar_{ij} += - \frac{1}{2} \bar{\gamma}^{k l} \hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j}
RbarDD = ixp.zerorank2()
RbarDDpiece = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
RbarDD[i][j] += -sp.Rational(1,2) * gammabarUU[k][l]*gammabarDD_dHatDD[i][j][l][k]
RbarDDpiece[i][j] += -sp.Rational(1,2) * gammabarUU[k][l]*gammabarDD_dHatDD[i][j][l][k]
# Step 7.d.ii: Add the second term to RbarDD:
# Rbar_{ij} += (1/2) * (gammabar_{ki} Lambar^k_{;\hat{j}} + gammabar_{kj} Lambar^k_{;\hat{i}})
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
RbarDD[i][j] += sp.Rational(1,2) * (gammabarDD[k][i]*LambarU_dHatD[k][j] + \
gammabarDD[k][j]*LambarU_dHatD[k][i])
# Step 7.d.iii: Add the remaining term to RbarDD:
# Rbar_{ij} += \Delta^{k} \Delta_{(i j) k} = 1/2 \Delta^{k} (\Delta_{i j k} + \Delta_{j i k})
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
RbarDD[i][j] += sp.Rational(1,2) * DGammaU[k] * (DGammaDDD[i][j][k] + DGammaDDD[j][i][k])
# Step 7.d.iv: Add the final term to RbarDD:
# Rbar_{ij} += \bar{\gamma}^{k l} (\Delta^{m}_{k i} \Delta_{j m l}
# + \Delta^{m}_{k j} \Delta_{i m l}
# + \Delta^{m}_{i k} \Delta_{m j l})
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
for m in range(DIM):
RbarDD[i][j] += gammabarUU[k][l] * (DGammaUDD[m][k][i]*DGammaDDD[j][m][l] +
DGammaUDD[m][k][j]*DGammaDDD[i][m][l] +
DGammaUDD[m][i][k]*DGammaDDD[m][j][l])
###Output
_____no_output_____
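###Markdown
Since the Ricci tensor is symmetric, $\bar{R}_{ij}=\bar{R}_{ji}$ provides a cheap spot check on the construction above (a sketch added for illustration; `sp.simplify()` on these large expressions can take some time):
###Code
# Optional sketch: Rbar_{ij} should be symmetric; spot-check one off-diagonal pair.
print("Rbar_{01} - Rbar_{10} =", sp.simplify(RbarDD[0][1] - RbarDD[1][0]))  # expect 0
###Output
_____no_output_____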
###Markdown
Step 8: **`betaU_derivs()`**: The unrescaled shift vector $\beta^i$ spatial derivatives: $\beta^i_{,j}$ & $\beta^i_{,jk}$, written in terms of the rescaled shift vector $\mathcal{V}^i$ \[Back to [top](toc)\]$$\label{beta_derivs}$$This step, which documents the function `betaUbar_and_derivs()` inside the [BSSN.BSSN_unrescaled_and_barred_vars](../edit/BSSN/BSSN_unrescaled_and_barred_vars) module, defines three quantities:[comment]: (Fix Link Above: TODO)* `betaU_dD[i][j]`$=\beta^i_{,j} = \left(\mathcal{V}^i \circ \text{ReU[i]}\right)_{,j} = \mathcal{V}^i_{,j} \circ \text{ReU[i]} + \mathcal{V}^i \circ \text{ReUdD[i][j]}$* `betaU_dupD[i][j]`: the same as above, except using *upwinded* finite-difference derivatives to compute $\mathcal{V}^i_{,j}$ instead of *centered* finite-difference derivatives.* `betaU_dDD[i][j][k]`$=\beta^i_{,jk} = \mathcal{V}^i_{,jk} \circ \text{ReU[i]} + \mathcal{V}^i_{,j} \circ \text{ReUdD[i][k]} + \mathcal{V}^i_{,k} \circ \text{ReUdD[i][j]}+\mathcal{V}^i \circ \text{ReUdDD[i][j][k]}$
###Code
# Step 8: The unrescaled shift vector betaU spatial derivatives:
# betaUdD & betaUdDD, written in terms of the
# rescaled shift vector vetU
vetU_dD = ixp.declarerank2("vetU_dD","nosym")
vetU_dupD = ixp.declarerank2("vetU_dupD","nosym") # Needed for upwinded \beta^i_{,j}
vetU_dDD   = ixp.declarerank3("vetU_dDD"  ,"sym12")   # Needed for \beta^i_{,jk}
betaU_dD = ixp.zerorank2()
betaU_dupD = ixp.zerorank2() # Needed for, e.g., \beta^i RHS
betaU_dDD = ixp.zerorank3() # Needed for, e.g., \bar{\Lambda}^i RHS
for i in range(DIM):
for j in range(DIM):
betaU_dD[i][j] = vetU_dD[i][j]*rfm.ReU[i] + vetU[i]*rfm.ReUdD[i][j]
betaU_dupD[i][j] = vetU_dupD[i][j]*rfm.ReU[i] + vetU[i]*rfm.ReUdD[i][j] # Needed for \beta^i RHS
for k in range(DIM):
# Needed for, e.g., \bar{\Lambda}^i RHS:
betaU_dDD[i][j][k] = vetU_dDD[i][j][k]*rfm.ReU[i] + vetU_dD[i][j]*rfm.ReUdD[i][k] + \
vetU_dD[i][k]*rfm.ReUdD[i][j] + vetU[i]*rfm.ReUdDD[i][j][k]
###Output
_____no_output_____
###Markdown
Step 9: **`phi_and_derivs()`**: Standard BSSN conformal factor $\phi$, and its derivatives $\phi_{,i}$, $\phi_{,ij}$, $\bar{D}_j \phi$, and $\bar{D}_j\bar{D}_k \phi$, all written in terms of BSSN gridfunctions like $\text{cf}$ \[Back to [top](toc)\]$$\label{phi_and_derivs}$$ Step 9.a: $\phi$ in terms of the chosen (possibly non-standard) conformal factor variable $\text{cf}$ (e.g., $\text{cf}=\chi=e^{-4\phi}$) \[Back to [top](toc)\]$$\label{phi_ito_cf}$$When solving the BSSN time evolution equations across the coordinate singularity (i.e., the "puncture") inside puncture black holes for example, the standard conformal factor $\phi$ becomes very sharp, whereas $\chi=e^{-4\phi}$ is far smoother (see, e.g., [Campanelli, Lousto, Marronetti, and Zlochower (2006)](https://arxiv.org/abs/gr-qc/0511048) for additional discussion). Thus if we choose to rewrite derivatives of $\phi$ in the BSSN equations in terms of finite-difference derivatives `cf`$=\chi$, numerical errors will be far smaller near the puncture.The BSSN modules in NRPy+ support three options for the conformal factor variable `cf`:1. `cf`$=\phi$,1. `cf`$=\chi=e^{-4\phi}$, and1. `cf`$=W = e^{-2\phi}$.The BSSN equations are written in terms of $\phi$ (actually only $e^{-4\phi}$ appears) and derivatives of $\phi$, we now define $e^{-4\phi}$ and derivatives of $\phi$ in terms of the chosen `cf`.First, we define the base variables needed within the BSSN equations:
###Code
# Step 9: Standard BSSN conformal factor phi,
# and its partial and covariant derivatives,
# all in terms of BSSN gridfunctions like cf
# Step 9.a.i: Define partial derivatives of \phi in terms of evolved quantity "cf":
cf_dD = ixp.declarerank1("cf_dD")
cf_dupD = ixp.declarerank1("cf_dupD") # Needed for \partial_t \phi next.
cf_dDD = ixp.declarerank2("cf_dDD","sym01")
phi_dD = ixp.zerorank1()
phi_dupD = ixp.zerorank1()
phi_dDD = ixp.zerorank2()
exp_m4phi = sp.sympify(0)
###Output
_____no_output_____
###Markdown
Then we define $\phi_{,i}$, $\phi_{,ij}$, and $e^{-4\phi}$ for each of the choices of `cf`.For `cf`$=\phi$, this is trivial:
###Code
# Step 9.a.ii: Assuming cf=phi, define exp_m4phi, phi_dD,
# phi_dupD (upwind finite-difference version of phi_dD), and phi_DD
if par.parval_from_str(thismodule+"::EvolvedConformalFactor_cf") == "phi":
for i in range(DIM):
phi_dD[i] = cf_dD[i]
phi_dupD[i] = cf_dupD[i]
for j in range(DIM):
phi_dDD[i][j] = cf_dDD[i][j]
exp_m4phi = sp.exp(-4*cf)
###Output
_____no_output_____
###Markdown
For `cf`$=W=e^{-2\phi}$, we have* $\phi_{,i} = -\text{cf}_{,i} / (2 \text{cf})$* $\phi_{,ij} = (-\text{cf}_{,ij} + \text{cf}_{,i}\text{cf}_{,j}/\text{cf}) / (2 \text{cf})$* $e^{-4\phi} = \text{cf}^2$***Exercise to student: Prove the above relations***
###Code
# Step 9.a.iii: Assuming cf=W=e^{-2 phi}, define exp_m4phi, phi_dD,
# phi_dupD (upwind finite-difference version of phi_dD), and phi_DD
if par.parval_from_str(thismodule+"::EvolvedConformalFactor_cf") == "W":
# \partial_i W = \partial_i (e^{-2 phi}) = -2 e^{-2 phi} \partial_i phi
# -> \partial_i phi = -\partial_i cf / (2 cf)
for i in range(DIM):
phi_dD[i] = - cf_dD[i] / (2*cf)
phi_dupD[i] = - cf_dupD[i] / (2*cf)
for j in range(DIM):
# \partial_j \partial_i phi = - \partial_j [\partial_i cf / (2 cf)]
# = - cf_{,ij} / (2 cf) + \partial_i cf \partial_j cf / (2 cf^2)
phi_dDD[i][j] = (- cf_dDD[i][j] + cf_dD[i]*cf_dD[j] / cf) / (2*cf)
exp_m4phi = cf*cf
###Output
_____no_output_____
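###Markdown
The `cf`$=W$ relations used above can be verified quickly with SymPy. The sketch below (added for illustration only) uses a stand-alone function $\phi(x)$ of a single variable rather than the gridfunction `cf`, since the latter is just a SymPy symbol and cannot be differentiated directly:
###Code
# Optional sketch: verify the cf = W = e^{-2 phi} relations with a one-dimensional stand-in phi(x).
x = sp.symbols('x', real=True)
phi_x = sp.Function('phi_check')(x)
W_x = sp.exp(-2*phi_x)
# phi_{,x} = -W_{,x}/(2 W):
print("phi_,x + W_,x/(2W)   =", sp.simplify(sp.diff(phi_x, x) + sp.diff(W_x, x)/(2*W_x)))  # expect 0
# e^{-4 phi} = W^2:
print("exp(-4 phi) - W^2    =", sp.simplify(sp.exp(-4*phi_x) - W_x**2))  # expect 0
# phi_{,xx} = (-W_{,xx} + W_{,x}^2/W)/(2 W):
lhs = sp.diff(phi_x, x, 2)
rhs = (-sp.diff(W_x, x, 2) + sp.diff(W_x, x)**2/W_x)/(2*W_x)
print("phi_,xx - RHS        =", sp.simplify(lhs - rhs))  # expect 0
###Output
_____no_output_____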
###Markdown
For `cf`$=\chi=e^{-4\phi}$, we have* $\phi_{,i} = -\text{cf}_{,i} / (4 \text{cf})$* $\phi_{,ij} = (-\text{cf}_{,ij} + \text{cf}_{,i}\text{cf}_{,j}/\text{cf}) / (4 \text{cf})$* $e^{-4\phi} = \text{cf}$***Exercise to student: Prove the above relations***
###Code
# Step 9.a.iv: Assuming cf=chi=e^{-4 phi}, define exp_m4phi, phi_dD,
# phi_dupD (upwind finite-difference version of phi_dD), and phi_DD
if par.parval_from_str(thismodule+"::EvolvedConformalFactor_cf") == "chi":
# \partial_i chi = \partial_i (e^{-4 phi}) = -4 e^{-4 phi} \partial_i phi
# -> \partial_i phi = -\partial_i cf / (4 cf)
for i in range(DIM):
phi_dD[i] = - cf_dD[i] / (4*cf)
phi_dupD[i] = - cf_dupD[i] / (4*cf)
for j in range(DIM):
# \partial_j \partial_i phi = - \partial_j [\partial_i cf / (4 cf)]
# = - cf_{,ij} / (4 cf) + \partial_i cf \partial_j cf / (4 cf^2)
phi_dDD[i][j] = (- cf_dDD[i][j] + cf_dD[i]*cf_dD[j] / cf) / (4*cf)
exp_m4phi = cf
# Step 9.a.v: Error out if unsupported EvolvedConformalFactor_cf choice is made:
cf_choice = par.parval_from_str(thismodule+"::EvolvedConformalFactor_cf")
if cf_choice not in ('phi', 'W', 'chi'):
print("Error: EvolvedConformalFactor_cf == "+par.parval_from_str(thismodule+"::EvolvedConformalFactor_cf")+" unsupported!")
sys.exit(1)
###Output
_____no_output_____
###Markdown
Step 9.b: Covariant derivatives of $\phi$ \[Back to [top](toc)\]$$\label{phi_covariant_derivs}$$Since $\phi$ is a scalar, $\bar{D}_i \phi = \partial_i \phi$.Thus the second covariant derivative is given by\begin{align}\bar{D}_i \bar{D}_j \phi &= \phi_{;\bar{i}\bar{j}} = \bar{D}_i \phi_{,j}\\ &= \phi_{,ij} - \bar{\Gamma}^k_{ij} \phi_{,k}.\end{align}
###Code
# Step 9.b: Define phi_dBarD = phi_dD (since phi is a scalar) and phi_dBarDD (covariant derivative)
# \bar{D}_i \bar{D}_j \phi = \phi_{;\bar{i}\bar{j}} = \bar{D}_i \phi_{,j}
# = \phi_{,ij} - \bar{\Gamma}^k_{ij} \phi_{,k}
phi_dBarD = phi_dD
phi_dBarDD = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
phi_dBarDD[i][j] = phi_dDD[i][j]
for k in range(DIM):
phi_dBarDD[i][j] += - GammabarUDD[k][i][j]*phi_dD[k]
###Output
_____no_output_____
###Markdown
Step 10: Code validation against `BSSN.BSSN_quantities` NRPy+ module \[Back to [top](toc)\]$$\label{code_validation}$$As a code validation check, we verify agreement in the SymPy expressions for the RHSs of the BSSN equations between1. this tutorial and 2. the NRPy+ [BSSN.BSSN_quantities](../edit/BSSN/BSSN_quantities.py) module.By default, we analyze the RHSs in Spherical coordinates, though other coordinate systems may be chosen.
###Code
all_passed=True
def comp_func(expr1,expr2,basename,prefixname2="Bq."):
    global all_passed  # needed so that a failed comparison updates the module-level flag rather than a local variable
if str(expr1-expr2)!="0":
print(basename+" - "+prefixname2+basename+" = "+ str(expr1-expr2))
all_passed=False
def gfnm(basename,idx1,idx2=None,idx3=None):
if idx2 is None:
return basename+"["+str(idx1)+"]"
if idx3 is None:
return basename+"["+str(idx1)+"]["+str(idx2)+"]"
return basename+"["+str(idx1)+"]["+str(idx2)+"]["+str(idx3)+"]"
expr_list = []
exprcheck_list = []
namecheck_list = []
# Step 3:
import BSSN.BSSN_quantities as Bq
Bq.BSSN_basic_tensors()
for i in range(DIM):
namecheck_list.extend([gfnm("LambdabarU",i),gfnm("betaU",i),gfnm("BU",i)])
exprcheck_list.extend([Bq.LambdabarU[i],Bq.betaU[i],Bq.BU[i]])
expr_list.extend([LambdabarU[i],betaU[i],BU[i]])
for j in range(DIM):
namecheck_list.extend([gfnm("gammabarDD",i,j),gfnm("AbarDD",i,j)])
exprcheck_list.extend([Bq.gammabarDD[i][j],Bq.AbarDD[i][j]])
expr_list.extend([gammabarDD[i][j],AbarDD[i][j]])
# Step 4:
Bq.gammabar__inverse_and_derivs()
for i in range(DIM):
for j in range(DIM):
namecheck_list.extend([gfnm("gammabarUU",i,j)])
exprcheck_list.extend([Bq.gammabarUU[i][j]])
expr_list.extend([gammabarUU[i][j]])
for k in range(DIM):
namecheck_list.extend([gfnm("gammabarDD_dD",i,j,k),
gfnm("gammabarDD_dupD",i,j,k),
gfnm("GammabarUDD",i,j,k)])
exprcheck_list.extend([Bq.gammabarDD_dD[i][j][k],Bq.gammabarDD_dupD[i][j][k],Bq.GammabarUDD[i][j][k]])
expr_list.extend( [gammabarDD_dD[i][j][k],gammabarDD_dupD[i][j][k],GammabarUDD[i][j][k]])
# Step 5:
Bq.detgammabar_and_derivs()
namecheck_list.extend(["detgammabar"])
exprcheck_list.extend([Bq.detgammabar])
expr_list.extend([detgammabar])
for i in range(DIM):
namecheck_list.extend([gfnm("detgammabar_dD",i)])
exprcheck_list.extend([Bq.detgammabar_dD[i]])
expr_list.extend([detgammabar_dD[i]])
for j in range(DIM):
namecheck_list.extend([gfnm("detgammabar_dDD",i,j)])
exprcheck_list.extend([Bq.detgammabar_dDD[i][j]])
expr_list.extend([detgammabar_dDD[i][j]])
# Step 6:
Bq.AbarUU_AbarUD_trAbar_AbarDD_dD()
namecheck_list.extend(["trAbar"])
exprcheck_list.extend([Bq.trAbar])
expr_list.extend([trAbar])
for i in range(DIM):
for j in range(DIM):
namecheck_list.extend([gfnm("AbarUU",i,j),gfnm("AbarUD",i,j)])
exprcheck_list.extend([Bq.AbarUU[i][j],Bq.AbarUD[i][j]])
expr_list.extend([AbarUU[i][j],AbarUD[i][j]])
for k in range(DIM):
namecheck_list.extend([gfnm("AbarDD_dD",i,j,k)])
exprcheck_list.extend([Bq.AbarDD_dD[i][j][k]])
expr_list.extend([AbarDD_dD[i][j][k]])
# Step 7:
Bq.RicciBar__gammabarDD_dHatD__DGammaUDD__DGammaU()
for i in range(DIM):
namecheck_list.extend([gfnm("DGammaU",i)])
exprcheck_list.extend([Bq.DGammaU[i]])
expr_list.extend([DGammaU[i]])
for j in range(DIM):
namecheck_list.extend([gfnm("RbarDD",i,j)])
exprcheck_list.extend([Bq.RbarDD[i][j]])
expr_list.extend([RbarDD[i][j]])
for k in range(DIM):
namecheck_list.extend([gfnm("DGammaUDD",i,j,k),gfnm("gammabarDD_dHatD",i,j,k)])
exprcheck_list.extend([Bq.DGammaUDD[i][j][k],Bq.gammabarDD_dHatD[i][j][k]])
expr_list.extend([DGammaUDD[i][j][k],gammabarDD_dHatD[i][j][k]])
# Step 8:
Bq.betaU_derivs()
for i in range(DIM):
for j in range(DIM):
namecheck_list.extend([gfnm("betaU_dD",i,j),gfnm("betaU_dupD",i,j)])
exprcheck_list.extend([Bq.betaU_dD[i][j],Bq.betaU_dupD[i][j]])
expr_list.extend([betaU_dD[i][j],betaU_dupD[i][j]])
for k in range(DIM):
namecheck_list.extend([gfnm("betaU_dDD",i,j,k)])
exprcheck_list.extend([Bq.betaU_dDD[i][j][k]])
expr_list.extend([betaU_dDD[i][j][k]])
# Step 9:
Bq.phi_and_derivs()
#phi_dD,phi_dupD,phi_dDD,exp_m4phi,phi_dBarD,phi_dBarDD
namecheck_list.extend(["exp_m4phi"])
exprcheck_list.extend([Bq.exp_m4phi])
expr_list.extend([exp_m4phi])
for i in range(DIM):
namecheck_list.extend([gfnm("phi_dD",i),gfnm("phi_dupD",i),gfnm("phi_dBarD",i)])
exprcheck_list.extend([Bq.phi_dD[i],Bq.phi_dupD[i],Bq.phi_dBarD[i]])
expr_list.extend( [phi_dD[i],phi_dupD[i],phi_dBarD[i]])
for j in range(DIM):
namecheck_list.extend([gfnm("phi_dDD",i,j),gfnm("phi_dBarDD",i,j)])
exprcheck_list.extend([Bq.phi_dDD[i][j],Bq.phi_dBarDD[i][j]])
expr_list.extend([phi_dDD[i][j],phi_dBarDD[i][j]])
for i in range(len(expr_list)):
comp_func(expr_list[i],exprcheck_list[i],namecheck_list[i])
if all_passed:
print("ALL TESTS PASSED!")
###Output
ALL TESTS PASSED!
###Markdown
Step 11: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-BSSN_quantities.pdf](Tutorial-BSSN_quantities.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-BSSN_quantities")
###Output
Created Tutorial-BSSN_quantities.tex, and compiled LaTeX file to PDF file
Tutorial-BSSN_quantities.pdf
###Markdown
BSSN Quantities Author: Zach Etienne Formatting improvements courtesy Brandon Clark This module documents and constructs a number of quantities useful for building symbolic (SymPy) expressions in terms of the core BSSN quantities $\left\{h_{i j},a_{i j},\phi, K, \lambda^{i}, \alpha, \mathcal{V}^i, \mathcal{B}^i\right\}$, as defined in [Ruchlin, Etienne, and Baumgarte (2018)](https://arxiv.org/abs/1712.07658) (see also [Baumgarte, Montero, Cordero-Carrión, and Müller (2012)](https://arxiv.org/abs/1211.6632)). **Module Status:** Self-Validated **Validation Notes:** This tutorial module has been confirmed to be self-consistent with its corresponding NRPy+ module, as documented [below](code_validation). **Additional validation tests may have been performed, but are as yet, undocumented. (TODO)**[comment]: (Introduction: TODO) A Note on Notation:As is standard in NRPy+, * Greek indices refer to four-dimensional quantities where the zeroth component indicates temporal (time) component.* Latin indices refer to three-dimensional quantities. This is somewhat counterintuitive since Python always indexes its lists starting from 0. As a result, the zeroth component of three-dimensional quantities will necessarily indicate the first *spatial* direction.As a corollary, any expressions involving mixed Greek and Latin indices will need to offset one set of indices by one: A Latin index in a four-vector will be incremented and a Greek index in a three-vector will be decremented (however, the latter case does not occur in this tutorial module). Table of Contents$$\label{toc}$$Each family of quantities is constructed within a given function (**boldfaced** below). This module is organized as follows1. [Step 1](initializenrpy): Initialize needed Python/NRPy+ modules1. [Step 2](declare_bssn_gfs): **`declare_BSSN_gridfunctions_if_not_declared_already()`**: Declare all of the core BSSN variables $\left\{h_{i j},a_{i j},\text{cf}, K, \lambda^{i}, \alpha, \mathcal{V}^i, \mathcal{B}^i\right\}$ and register them as gridfunctions1. [Step 3](rescaling_tensors) Rescaling tensors to avoid coordinate singularities 1. [Step 3.a](bssn_basic_tensors) **`BSSN_basic_tensors()`**: Define all basic conformal BSSN tensors $\left\{\bar{\gamma}_{i j},\bar{A}_{i j},\bar{\Lambda}^{i}, \beta^i, B^i\right\}$ in terms of BSSN gridfunctions1. [Step 4](bssn_barred_metric__inverse_and_derivs): **`gammabar__inverse_and_derivs()`**: $\bar{\gamma}^{ij}$, and spatial derivatives of $\bar{\gamma}_{ij}$ including $\bar{\Gamma}^{i}_{jk}$ 1. [Step 4.a](bssn_barred_metric__inverse): Inverse conformal 3-metric: $\bar{\gamma^{ij}}$ 1. [Step 4.b](bssn_barred_metric__derivs): Derivatives of the conformal 3-metric $\bar{\gamma}_{ij,k}$ and $\bar{\gamma}_{ij,kl}$, and associated "barred" Christoffel symbols $\bar{\Gamma}^{i}_{jk}$1. [Step 5](detgammabar_and_derivs): **`detgammabar_and_derivs()`**: $\det \bar{\gamma}_{ij}$ and its derivatives1. [Step 6](abar_quantities): **`AbarUU_AbarUD_trAbar()`**: Quantities related to conformal traceless extrinsic curvature $\bar{A}_{ij}$: $\bar{A}^{ij}$, $\bar{A}^i_j$, and $\bar{A}^k_k$1. [Step 7](rbar): **`RicciBar__gammabarDD_dHatD__DGammaUDD__DGammaU()`**: The conformal ("barred") Ricci tensor $\bar{R}_{ij}$ and associated quantities 1. [Step 7.a](rbar_part1): Conformal Ricci tensor, part 1: The $\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j}$ term 1. 
[Step 7.b](rbar_part2): Conformal Ricci tensor, part 2: The $\bar{\gamma}_{k(i} \hat{D}_{j)} \bar{\Lambda}^{k}$ term 1. [Step 7.c](rbar_part3): Conformal Ricci tensor, part 3: The $\Delta^{k} \Delta_{(i j) k} + \bar{\gamma}^{k l} \left (2 \Delta_{k(i}^{m} \Delta_{j) m l} + \Delta_{i k}^{m} \Delta_{m j l} \right )$ terms 1. [Step 7.d](summing_rbar_terms): Summing the terms and defining $\bar{R}_{ij}$1. [Step 8](beta_derivs): **`betaU_derivs()`**: Unrescaled shift vector $\beta^i$ and spatial derivatives $\beta^i_{,j}$ and $\beta^i_{,jk}$1. [Step 9](phi_and_derivs): **`phi_and_derivs()`**: Standard BSSN conformal factor $\phi$, and its derivatives $\phi_{,i}$, $\phi_{,ij}$, $\bar{D}_j \phi_i$, and $\bar{D}_j\bar{D}_k \phi_i$ 1. [Step 9.a](phi_ito_cf): $\phi$ in terms of the chosen (possibly non-standard) conformal factor variable `cf` (e.g., `cf`$=W=e^{-4\phi}$) 1. [Step 9.b](phi_covariant_derivs): Partial and covariant derivatives of $\phi$1. [Step 10](code_validation): Code Validation against `BSSN.BSSN_quantities` NRPy+ module1. [Step 11](latex_pdf_output): Output this module to $\LaTeX$-formatted PDF Step 1: Initialize needed Python/NRPy+ modules \[Back to [top](toc)\]$$\label{initializenrpy}$$
###Code
# Step 1: Import all needed modules from NRPy+:
import NRPy_param_funcs as par
import sympy as sp
import indexedexp as ixp
import grid as gri
import reference_metric as rfm
# Step 1.a: Set the coordinate system for the numerical grid
par.set_parval_from_str("reference_metric::CoordSystem","Spherical")
# Step 1.b: Given the chosen coordinate system, set up
# corresponding reference metric and needed
# reference metric quantities
# The following function call sets up the reference metric
# and related quantities, including rescaling matrices ReDD,
# ReU, and hatted quantities.
rfm.reference_metric()
# Step 1.c: Set spatial dimension (must be 3 for BSSN, as BSSN is
# a 3+1-dimensional decomposition of the general
# relativistic field equations)
DIM = 3
par.set_parval_from_str("grid::DIM",DIM)
# Step 1.d: Declare/initialize parameters for this module
thismodule = "BSSN_quantities"
par.initialize_param(par.glb_param("char", thismodule, "EvolvedConformalFactor_cf", "W"))
par.initialize_param(par.glb_param("bool", thismodule, "detgbarOverdetghat_equals_one", "True"))
###Output
_____no_output_____
###Markdown
Step 2: `declare_BSSN_gridfunctions_if_not_declared_already()`: Declare all of the core BSSN variables $\left\{h_{i j},a_{i j},\text{cf}, K, \lambda^{i}, \alpha, \mathcal{V}^i, \mathcal{B}^i\right\}$ and register them as gridfunctions \[Back to [top](toc)\]$$\label{declare_bssn_gfs}$$
###Code
# Step 2: Register all needed BSSN gridfunctions.
# Step 2.a: Register indexed quantities, using ixp.register_... functions
hDD = ixp.register_gridfunctions_for_single_rank2("EVOL", "hDD", "sym01")
aDD = ixp.register_gridfunctions_for_single_rank2("EVOL", "aDD", "sym01")
lambdaU = ixp.register_gridfunctions_for_single_rank1("EVOL", "lambdaU")
vetU = ixp.register_gridfunctions_for_single_rank1("EVOL", "vetU")
betU = ixp.register_gridfunctions_for_single_rank1("EVOL", "betU")
# Step 2.b: Register scalar quantities, using gri.register_gridfunctions()
trK, cf, alpha = gri.register_gridfunctions("EVOL",["trK", "cf", "alpha"])
###Output
_____no_output_____
###Markdown
Step 3: Rescaling tensors to avoid coordinate singularities \[Back to [top](toc)\]$$\label{rescaling_tensors}$$While the [covariant form of the BSSN evolution equations](Tutorial-BSSNCurvilinear.ipynb) are properly covariant (with the potential exception of the shift evolution equation, since the shift is a [freely specifiable gauge quantity](https://en.wikipedia.org/wiki/Gauge_fixing)), components of the rank-1 and rank-2 tensors $\varepsilon_{i j}$, $\bar{A}_{i j}$, and $\bar{\Lambda}^{i}$ will drop to zero (destroying information) or diverge (to $\infty$) at coordinate singularities. The good news is, this singular behavior is well-understood in terms of the scale factors of the reference metric, enabling us to define rescaled version of these quantities that are well behaved (so that, e.g., they can be finite differenced).For example, given a smooth vector *in a 3D Cartesian basis* $\bar{\Lambda}^{i}$, all components $\bar{\Lambda}^{x}$, $\bar{\Lambda}^{y}$, and $\bar{\Lambda}^{z}$ will be smooth (by assumption). When changing the basis to spherical coordinates (applying the appropriate Jacobian matrix transformation), we will find that since $\phi = \arctan(y/x)$, $\bar{\Lambda}^{\phi}$ is given by\begin{align}\bar{\Lambda}^{\phi} &= \frac{\partial \phi}{\partial x} \bar{\Lambda}^{x} + \frac{\partial \phi}{\partial y} \bar{\Lambda}^{y} + \frac{\partial \phi}{\partial z} \bar{\Lambda}^{z} \\&= -\frac{y}{x^2+y^2} \bar{\Lambda}^{x} + \frac{x}{x^2+y^2} \bar{\Lambda}^{y} \\&= -\frac{y}{(r \sin\theta)^2} \bar{\Lambda}^{x} + \frac{x}{(r \sin\theta)^2} \bar{\Lambda}^{y}.\end{align}Thus $\bar{\Lambda}^{\phi}$ diverges at all points where $r\sin\theta=0$ due to the $\frac{1}{(r\sin\theta)^2}$ that appear in the Jacobian transformation. This divergence might pose no problem on cell-centered grids that avoid $r \sin\theta=0$, except that the BSSN equations require that *first and second derivatives* of these quantities be taken. Usual strategies for numerical approximation of these derivatives (e.g., finite difference methods) will "see" these divergences and errors generally will not drop to zero with increased numerical sampling of the functions at points near where the functions diverge.However, notice that if we define $\lambda^{\phi}$ such that$$\bar{\Lambda}^{\phi} = \frac{1}{r\sin\theta} \lambda^{\phi},$$then $\lambda^{\phi}$ will be smooth as well. Avoiding such singularities can be generalized to other coordinate systems, so long as $\lambda^i$ is defined as:$$\bar{\Lambda}^{i} = \frac{\lambda^i}{\text{scalefactor[i]}} ,$$where scalefactor\[i\] is the $i$th scale factor in the given coordinate system. In an identical fashion, we define the smooth versions of $\beta^i$ and $B^i$ to be $\mathcal{V}^i$ and $\mathcal{B}^i$, respectively. We refer to $\mathcal{V}^i$ and $\mathcal{B}^i$ as vet\[i\] and bet\[i\] respectively in the code after the Hebrew letters that bear some resemblance. Similarly, we define the smooth versions of $\bar{A}_{ij}$ and $\varepsilon_{ij}$ ($a_{ij}$ and $h_{ij}$, respectively) via\begin{align}\bar{A}_{ij} &= \text{scalefactor[i]}\ \text{scalefactor[j]}\ a_{ij} \\\varepsilon_{ij} &= \text{scalefactor[i]}\ \text{scalefactor[j]}\ h_{ij},\end{align}where in this case we *multiply* due to the fact that these tensors are purely covariant (as opposed to contravariant). 
To slightly simplify the notation, in NRPy+ we define the *rescaling matrices* `ReU[i]` and `ReDD[i][j]`, such that\begin{align}\text{ReU[i]} &= 1 / \text{scalefactor[i]} \\\text{ReDD[i][j]} &= \text{scalefactor[i] scalefactor[j]}.\end{align}Thus, for example, $\bar{A}_{ij}$ and $\bar{\Lambda}^i$ can be expressed as the [Hadamard product](https://en.wikipedia.org/w/index.php?title=Hadamard_product_(matrices)&oldid=852272177) of matrices :\begin{align}\bar{A}_{ij} &= \mathbf{ReDD}\circ\mathbf{a} = \text{ReDD[i][j]} a_{ij} \\\bar{\Lambda}^{i} &= \mathbf{ReU}\circ\mathbf{\lambda} = \text{ReU[i]} \lambda^i,\end{align}where no sums are implied by the repeated indices.Further, since the scale factors are *time independent*, \begin{align}\partial_t \bar{A}_{ij} &= \text{ReDD[i][j]}\ \partial_t a_{ij} \\\partial_t \bar{\gamma}_{ij} &= \partial_t \left(\varepsilon_{ij} + \hat{\gamma}_{ij}\right)\\&= \partial_t \varepsilon_{ij} \\&= \text{scalefactor[i]}\ \text{scalefactor[j]}\ \partial_t h_{ij}.\end{align}Thus instead of taking space or time derivatives of BSSN quantities$$\left\{\bar{\gamma}_{i j},\bar{A}_{i j},\phi, K, \bar{\Lambda}^{i}, \alpha, \beta^i, B^i\right\},$$ across coordinate singularities, we instead factor out the singular scale factors according to this prescription so that space or time derivatives of BSSN quantities are written in terms of finite-difference derivatives of the *rescaled* variables $$\left\{h_{i j},a_{i j},\text{cf}, K, \lambda^{i}, \alpha, \mathcal{V}^i, \mathcal{B}^i\right\},$$ and *exact* expressions for (spatial) derivatives of scale factors. Note that `cf` is the chosen conformal factor (supported choices for `cf` are discussed in [Step 6.a](phi_ito_cf)). As an example, let's evaluate $\bar{\Lambda}^{i}_{\, ,\, j}$ according to this prescription:\begin{align}\bar{\Lambda}^{i}_{\, ,\, j} &= -\frac{\lambda^i}{(\text{ReU[i]})^2}\ \partial_j \left(\text{ReU[i]}\right) + \frac{\partial_j \lambda^i}{\text{ReU[i]}} \\&= -\frac{\lambda^i}{(\text{ReU[i]})^2}\ \text{ReUdD[i][j]} + \frac{\partial_j \lambda^i}{\text{ReU[i]}}.\end{align}Here, the derivative `ReUdD[i][j]` **is computed symbolically and exactly** using SymPy, and the derivative $\partial_j \lambda^i$ represents a derivative of a *smooth* quantity (so long as $\bar{\Lambda}^{i}$ is smooth in the Cartesian basis). 
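As a concrete, purely illustrative example of these scale factors: for the Spherical reference metric chosen in Step 1, the scale factors work out to $\left\{1,\ r,\ r\sin\theta\right\}$, so `ReU[2]`$=1/(r\sin\theta)$ is precisely the factor appearing in the $\bar{\Lambda}^{\phi}$ example above. The short cell below simply prints the rescaling matrices that `rfm.reference_metric()` has already constructed; it is a sketch for inspection only and is not part of the `BSSN_quantities` module.
###Code
# Illustrative sketch (not part of the BSSN_quantities module): inspect the
# rescaling matrices ReU[i] = 1/scalefactor[i] and ReDD[i][j] = scalefactor[i]*scalefactor[j]
# constructed by rfm.reference_metric() for the Spherical coordinate system chosen in Step 1.
for i in range(3):
    print("ReU["+str(i)+"] = "+str(rfm.ReU[i]))
for i in range(3):
    for j in range(3):
        print("ReDD["+str(i)+"]["+str(j)+"] = "+str(rfm.ReDD[i][j]))
###Output
_____no_output_____
###Markdown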
Step 3.a: `BSSN_basic_tensors()`: Define all basic conformal BSSN tensors $\left\{\bar{\gamma}_{i j},\bar{A}_{i j},\bar{\Lambda}^{i}, \beta^i, B^i\right\}$ in terms of BSSN gridfunctions \[Back to [top](toc)\]$$\label{bssn_basic_tensors}$$The `BSSN_vars__tensors()` function defines the tensorial BSSN quantities $\left\{\bar{\gamma}_{i j},\bar{A}_{i j},\bar{\Lambda}^{i}, \beta^i, B^i\right\}$, in terms of the rescaled "base" tensorial quantities $\left\{h_{i j},a_{i j}, \lambda^{i}, \mathcal{V}^i, \mathcal{B}^i\right\},$ respectively:\begin{align}\bar{\gamma}_{i j} &= \hat{\gamma}_{ij} + \varepsilon_{ij}, \text{ where } \varepsilon_{ij} = h_{ij} \circ \text{ReDD[i][j]} \\\bar{A}_{i j} &= a_{ij} \circ \text{ReDD[i][j]} \\\bar{\Lambda}^{i} &= \lambda^i \circ \text{ReU[i]} \\\beta^{i} &= \mathcal{V}^i \circ \text{ReU[i]} \\B^{i} &= \mathcal{B}^i \circ \text{ReU[i]}\end{align}Rescaling vectors and tensors are built upon the scale factors for the chosen (in general, singular) coordinate system, which are defined in NRPy+'s [reference_metric.py](../edit/reference_metric.py) ([Tutorial](Tutorial-Reference_Metric.ipynb)), and the rescaled variables are defined in the stub function [BSSN/BSSN_rescaled_vars.py](../edit/BSSN/BSSN_rescaled_vars.py). Here we implement `BSSN_vars__tensors()`:
###Code
# Step 3.a: Define all basic conformal BSSN tensors in terms of BSSN gridfunctions
# Step 3.a.i: gammabarDD and AbarDD:
gammabarDD = ixp.zerorank2()
AbarDD = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
# gammabar_{ij} = h_{ij}*ReDD[i][j] + gammahat_{ij}
gammabarDD[i][j] = hDD[i][j]*rfm.ReDD[i][j] + rfm.ghatDD[i][j]
# Abar_{ij} = a_{ij}*ReDD[i][j]
AbarDD[i][j] = aDD[i][j]*rfm.ReDD[i][j]
# Step 3.a.ii: LambdabarU, betaU, and BU:
LambdabarU = ixp.zerorank1()
betaU = ixp.zerorank1()
BU = ixp.zerorank1()
for i in range(DIM):
LambdabarU[i] = lambdaU[i]*rfm.ReU[i]
betaU[i] = vetU[i] *rfm.ReU[i]
BU[i] = betU[i] *rfm.ReU[i]
###Output
_____no_output_____
###Markdown
Step 4: `gammabar__inverse_and_derivs()`: $\bar{\gamma}^{ij}$, and spatial derivatives of $\bar{\gamma}_{ij}$ including $\bar{\Gamma}^{i}_{jk}$ \[Back to [top](toc)\]$$\label{bssn_barred_metric__inverse_and_derivs}$$ Step 4.a: Inverse conformal 3-metric: $\bar{\gamma^{ij}}$ \[Back to [top](toc)\]$$\label{bssn_barred_metric__inverse}$$Since $\bar{\gamma}^{ij}$ is the inverse of $\bar{\gamma}_{ij}$, we apply a $3\times 3$ symmetric matrix inversion to compute $\bar{\gamma}^{ij}$.
###Code
# Step 4.a: Inverse conformal 3-metric gammabarUU:
# Step 4.a.i: gammabarUU:
gammabarUU, dummydet = ixp.symm_matrix_inverter3x3(gammabarDD)
###Output
_____no_output_____
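###Markdown
As a quick, purely illustrative sanity check (not part of the `BSSN_quantities` module), one can confirm symbolically that the output of `symm_matrix_inverter3x3()` really is the matrix inverse, i.e., that $\bar{\gamma}^{ik}\bar{\gamma}_{kj}=\delta^i_j$. The `sp.simplify()` calls below may take a few seconds:
###Code
# Optional sanity check (illustrative only): confirm gammabar^{ik} gammabar_{kj} = delta^i_j.
for i in range(3):
    for j in range(3):
        contraction = sp.sympify(0)
        for k in range(3):
            contraction += gammabarUU[i][k]*gammabarDD[k][j]
        kronecker_delta = sp.sympify(1) if i == j else sp.sympify(0)
        assert sp.simplify(contraction - kronecker_delta) == 0
###Output
_____no_output_____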
###Markdown
Step 4.b: Derivatives of the conformal 3-metric $\bar{\gamma}_{ij,k}$ and $\bar{\gamma}_{ij,kl}$, and associated "barred" Christoffel symbols $\bar{\Gamma}^{i}_{jk}$ \[Back to [top](toc)\]$$\label{bssn_barred_metric__derivs}$$In the BSSN-in-curvilinear coordinates formulation, all quantities must be defined in terms of rescaled quantities $h_{ij}$ and their derivatives (evaluated using finite differences), as well as reference-metric quantities and their derivatives (evaluated exactly using SymPy). For example, $\bar{\gamma}_{ij,k}$ is given by:\begin{align}\bar{\gamma}_{ij,k} &= \partial_k \bar{\gamma}_{ij} \\&= \partial_k \left(\hat{\gamma}_{ij} + \varepsilon_{ij}\right) \\&= \partial_k \left(\hat{\gamma}_{ij} + h_{ij} \text{ReDD[i][j]}\right) \\&= \hat{\gamma}_{ij,k} + h_{ij,k} \text{ReDD[i][j]} + h_{ij} \text{ReDDdD[i][j][k]},\end{align}where `ReDDdD[i][j][k]` is computed within `rfm.reference_metric()`.
###Code
# Step 4.b.i gammabarDDdD[i][j][k]
# = \hat{\gamma}_{ij,k} + h_{ij,k} \text{ReDD[i][j]} + h_{ij} \text{ReDDdD[i][j][k]}.
gammabarDD_dD = ixp.zerorank3()
hDD_dD = ixp.declarerank3("hDD_dD","sym01")
hDD_dupD = ixp.declarerank3("hDD_dupD","sym01")
gammabarDD_dupD = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
gammabarDD_dD[i][j][k] = rfm.ghatDDdD[i][j][k] + \
hDD_dD[i][j][k]*rfm.ReDD[i][j] + hDD[i][j]*rfm.ReDDdD[i][j][k]
# Compute associated upwinded derivative, needed for the \bar{\gamma}_{ij} RHS
gammabarDD_dupD[i][j][k] = rfm.ghatDDdD[i][j][k] + \
hDD_dupD[i][j][k]*rfm.ReDD[i][j] + hDD[i][j]*rfm.ReDDdD[i][j][k]
###Output
_____no_output_____
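###Markdown
As an illustrative consistency check (not part of the module), in the "flat" limit $h_{ij}\to 0$, $h_{ij,k}\to 0$ the expression above must reduce to the exact reference-metric derivative $\hat{\gamma}_{ij,k}$. The sketch below confirms this by substituting zero for the rescaled metric perturbation and its derivatives:
###Code
# Illustrative flat-limit check (not part of the module): with h_{ij} -> 0 and
# h_{ij,k} -> 0, gammabarDD_dD[i][j][k] must reduce to ghatDDdD[i][j][k].
zero_hDD_subs = {}
for i in range(3):
    for j in range(3):
        zero_hDD_subs[hDD[i][j]] = 0
        for k in range(3):
            zero_hDD_subs[hDD_dD[i][j][k]] = 0
for i in range(3):
    for j in range(3):
        for k in range(3):
            assert sp.simplify(gammabarDD_dD[i][j][k].subs(zero_hDD_subs) - rfm.ghatDDdD[i][j][k]) == 0
###Output
_____no_output_____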
###Markdown
By extension, the second derivative $\bar{\gamma}_{ij,kl}$ is given by\begin{align}\bar{\gamma}_{ij,kl} &= \partial_l \left(\hat{\gamma}_{ij,k} + h_{ij,k} \text{ReDD[i][j]} + h_{ij} \text{ReDDdD[i][j][k]}\right)\\&= \hat{\gamma}_{ij,kl} + h_{ij,kl} \text{ReDD[i][j]} + h_{ij,k} \text{ReDDdD[i][j][l]} + h_{ij,l} \text{ReDDdD[i][j][k]} + h_{ij} \text{ReDDdDD[i][j][k][l]}\end{align}
###Code
# Step 4.b.ii: Compute gammabarDD_dDD in terms of the rescaled BSSN quantity hDD
# and its derivatives, as well as the reference metric and rescaling
# matrix, and its derivatives (expression given below):
hDD_dDD = ixp.declarerank4("hDD_dDD","sym01_sym23")
gammabarDD_dDD = ixp.zerorank4()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
# gammabar_{ij,kl} = gammahat_{ij,kl}
# + h_{ij,kl} ReDD[i][j]
# + h_{ij,k} ReDDdD[i][j][l] + h_{ij,l} ReDDdD[i][j][k]
# + h_{ij} ReDDdDD[i][j][k][l]
gammabarDD_dDD[i][j][k][l] = rfm.ghatDDdDD[i][j][k][l]
gammabarDD_dDD[i][j][k][l] += hDD_dDD[i][j][k][l]*rfm.ReDD[i][j]
gammabarDD_dDD[i][j][k][l] += hDD_dD[i][j][k]*rfm.ReDDdD[i][j][l] + \
hDD_dD[i][j][l]*rfm.ReDDdD[i][j][k]
gammabarDD_dDD[i][j][k][l] += hDD[i][j]*rfm.ReDDdDD[i][j][k][l]
###Output
_____no_output_____
###Markdown
Finally, we compute the Christoffel symbol associated with the barred 3-metric: $\bar{\Gamma}^{i}_{kl}$:$$\bar{\Gamma}^{i}_{kl} = \frac{1}{2} \bar{\gamma}^{im} \left(\bar{\gamma}_{mk,l} + \bar{\gamma}_{ml,k} - \bar{\gamma}_{kl,m} \right)$$
###Code
# Step 4.b.iii: Define barred Christoffel symbol \bar{\Gamma}^{i}_{kl} = GammabarUDD[i][k][l] (see expression below)
GammabarUDD = ixp.zerorank3()
for i in range(DIM):
for k in range(DIM):
for l in range(DIM):
for m in range(DIM):
# Gammabar^i_{kl} = 1/2 * gammabar^{im} ( gammabar_{mk,l} + gammabar_{ml,k} - gammabar_{kl,m}):
GammabarUDD[i][k][l] += sp.Rational(1,2)*gammabarUU[i][m]* \
(gammabarDD_dD[m][k][l] + gammabarDD_dD[m][l][k] - gammabarDD_dD[k][l][m])
###Output
_____no_output_____
###Markdown
Step 5: `detgammabar_and_derivs()`: $\det \bar{\gamma}_{ij}$ and its derivatives \[Back to [top](toc)\]$$\label{detgammabar_and_derivs}$$As described just before Section III of [Baumgarte *et al* (2012)](https://arxiv.org/pdf/1211.6632.pdf), we are free to choose $\det \bar{\gamma}_{ij}$, which should remain fixed in time.As in [Baumgarte *et al* (2012)](https://arxiv.org/pdf/1211.6632.pdf) generally we make the choice $\det \bar{\gamma}_{ij} = \det \hat{\gamma}_{ij}$, but *this need not be the case; we could choose to set $\det \bar{\gamma}_{ij}$ to another expression.*In case we do not choose to set $\det \bar{\gamma}_{ij}/\det \hat{\gamma}_{ij}=1$, below we begin the implementation of a gridfunction, `detgbarOverdetghat`, which defines an alternative expression in its place. $\det \bar{\gamma}_{ij}/\det \hat{\gamma}_{ij}$=`detgbarOverdetghat`$\ne 1$ is not yet implemented. However, we can define `detgammabar` and its derivatives in terms of a generic `detgbarOverdetghat` and $\det \hat{\gamma}_{ij}$ and their derivatives:\begin{align}\text{detgammabar} &= \det \bar{\gamma}_{ij} = \text{detgbarOverdetghat} \cdot \left(\det \hat{\gamma}_{ij}\right) \\\text{detgammabar}\_\text{dD[k]} &= \left(\det \bar{\gamma}_{ij}\right)_{,k} = \text{detgbarOverdetghat}\_\text{dD[k]} \det \hat{\gamma}_{ij} + \text{detgbarOverdetghat} \left(\det \hat{\gamma}_{ij}\right)_{,k} \\\end{align}https://en.wikipedia.org/wiki/DeterminantProperties_of_the_determinant
###Code
# Step 5: det(gammabarDD) and its derivatives
detgbarOverdetghat = sp.sympify(1)
detgbarOverdetghat_dD = ixp.zerorank1()
detgbarOverdetghat_dDD = ixp.zerorank2()
if par.parval_from_str(thismodule+"::detgbarOverdetghat_equals_one") == "False":
print("Error: detgbarOverdetghat_equals_one=\"False\" is not fully implemented yet.")
exit(1)
## Approach for implementing detgbarOverdetghat_equals_one=False:
# detgbarOverdetghat = gri.register_gridfunctions("AUX", ["detgbarOverdetghat"])
# detgbarOverdetghatInitial = gri.register_gridfunctions("AUX", ["detgbarOverdetghatInitial"])
# detgbarOverdetghat_dD = ixp.declarerank1("detgbarOverdetghat_dD")
# detgbarOverdetghat_dDD = ixp.declarerank2("detgbarOverdetghat_dDD", "sym01")
# Step 5.b: Define detgammabar, detgammabar_dD, and detgammabar_dDD (needed for
# \partial_t \bar{\Lambda}^i below)
detgammabar = detgbarOverdetghat * rfm.detgammahat
detgammabar_dD = ixp.zerorank1()
for i in range(DIM):
detgammabar_dD[i] = detgbarOverdetghat_dD[i] * rfm.detgammahat + detgbarOverdetghat * rfm.detgammahatdD[i]
detgammabar_dDD = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
detgammabar_dDD[i][j] = detgbarOverdetghat_dDD[i][j] * rfm.detgammahat + \
detgbarOverdetghat_dD[i] * rfm.detgammahatdD[j] + \
detgbarOverdetghat_dD[j] * rfm.detgammahatdD[i] + \
detgbarOverdetghat * rfm.detgammahatdDD[i][j]
###Output
_____no_output_____
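###Markdown
Since `detgbarOverdetghat` is unity here, `detgammabar` reduces to $\det \hat{\gamma}_{ij}$ and its derivatives reduce to the reference-metric quantities `detgammahatdD[i]`, which are computed *exactly* with SymPy. The sketch below (illustrative only, not part of the module) spot-checks that `detgammabar_dD[i]` agrees with a direct SymPy derivative of $\det \hat{\gamma}_{ij}$ with respect to the coordinate symbols `rfm.xx[i]`:
###Code
# Illustrative spot check (not part of the module): with detgbarOverdetghat = 1,
# detgammabar_dD[i] should equal the exact SymPy derivative of detgammahat.
for i in range(3):
    assert sp.simplify(detgammabar_dD[i] - sp.diff(rfm.detgammahat, rfm.xx[i])) == 0
###Output
_____no_output_____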
###Markdown
Step 6: `AbarUU_AbarUD_trAbar_AbarDD_dD()`: Quantities related to conformal traceless extrinsic curvature $\bar{A}_{ij}$: $\bar{A}^{ij}$, $\bar{A}^i_j$, and $\bar{A}^k_k$ \[Back to [top](toc)\]$$\label{abar_quantities}$$$\bar{A}^{ij}$ is given by application of the raising operators (a.k.a., the inverse 3-metric) $\bar{\gamma}^{jk}$ on both of the covariant ("down") components:$$\bar{A}^{ij} = \bar{\gamma}^{ik}\bar{\gamma}^{jl} \bar{A}_{kl}.$$$\bar{A}^i_j$ is given by a single application of the raising operator (a.k.a., the inverse 3-metric) $\bar{\gamma}^{ik}$ on $\bar{A}_{kj}$:$$\bar{A}^i_j = \bar{\gamma}^{ik}\bar{A}_{kj}.$$The trace of $\bar{A}_{ij}$, $\bar{A}^k_k$, is given by a contraction with the barred 3-metric:$$\text{Tr}(\bar{A}_{ij}) = \bar{A}^k_k = \bar{\gamma}^{kj}\bar{A}_{jk}.$$Note that while $\bar{A}_{ij}$ is defined as the *traceless* conformal extrinsic curvature, it may acquire a nonzero trace (assuming the initial data impose tracelessness) due to numerical error. $\text{Tr}(\bar{A}_{ij})$ is included in the BSSN equations to drive $\text{Tr}(\bar{A}_{ij})$ to zero.In terms of rescaled BSSN quantities, $\bar{A}_{ij}$ is given by$$\bar{A}_{ij} = \text{ReDD[i][j]} a_{ij},$$so in terms of the same quantities, $\bar{A}_{ij,k}$ is given by$$\bar{A}_{ij,k} = \text{ReDDdD[i][j][k]} a_{ij} + \text{ReDD[i][j]} a_{ij,k}.$$
###Code
# Step 6: Quantities related to conformal traceless extrinsic curvature
# Step 6.a.i: Compute Abar^{ij} in terms of Abar_{ij} and gammabar^{ij}
AbarUU = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
# Abar^{ij} = gammabar^{ik} gammabar^{jl} Abar_{kl}
AbarUU[i][j] += gammabarUU[i][k]*gammabarUU[j][l]*AbarDD[k][l]
# Step 6.a.ii: Compute Abar^i_j in terms of Abar_{ij} and gammabar^{ij}
AbarUD = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
# Abar^i_j = gammabar^{ik} Abar_{kj}
AbarUD[i][j] += gammabarUU[i][k]*AbarDD[k][j]
# Step 6.a.iii: Compute Abar^k_k = trace of Abar:
trAbar = sp.sympify(0)
for k in range(DIM):
for j in range(DIM):
# Abar^k_k = gammabar^{kj} Abar_{jk}
trAbar += gammabarUU[k][j]*AbarDD[j][k]
# Step 6.a.iv: Compute Abar_{ij,k}
AbarDD_dD = ixp.zerorank3()
AbarDD_dupD = ixp.zerorank3()
aDD_dD = ixp.declarerank3("aDD_dD" ,"sym01")
aDD_dupD = ixp.declarerank3("aDD_dupD","sym01")
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
AbarDD_dupD[i][j][k] = rfm.ReDDdD[i][j][k]*aDD[i][j] + rfm.ReDD[i][j]*aDD_dupD[i][j][k]
AbarDD_dD[i][j][k] = rfm.ReDDdD[i][j][k]*aDD[i][j] + rfm.ReDD[i][j]*aDD_dD[ i][j][k]
###Output
_____no_output_____
###Markdown
Step 7: `RicciBar__gammabarDD_dHatD__DGammaUDD__DGammaU()`: The conformal ("barred") Ricci tensor $\bar{R}_{ij}$ and associated quantities \[Back to [top](toc)\]$$\label{rbar}$$Let's compute perhaps the most complicated expression in the BSSN evolution equations, the conformal Ricci tensor:\begin{align} \bar{R}_{i j} {} = {} & - \frac{1}{2} \bar{\gamma}^{k l} \hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} + \bar{\gamma}_{k(i} \hat{D}_{j)} \bar{\Lambda}^{k} + \Delta^{k} \Delta_{(i j) k} \nonumber \\ & + \bar{\gamma}^{k l} \left (2 \Delta_{k(i}^{m} \Delta_{j) m l} + \Delta_{i k}^{m} \Delta_{m j l} \right ) \; .\end{align}Let's tackle the $\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j}$ term first: Step 7.a: Conformal Ricci tensor, part 1: The $\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j}$ term \[Back to [top](toc)\]$$\label{rbar_part1}$$First note that the covariant derivative of a metric with respect to itself is zero$$\hat{D}_{l} \hat{\gamma}_{ij} = 0,$$so $$\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} = \hat{D}_{k} \hat{D}_{l} \left(\hat{\gamma}_{i j} + \varepsilon_{ij}\right) = \hat{D}_{k} \hat{D}_{l} \varepsilon_{ij}.$$Next, the covariant derivative of a tensor is given by (from the [wikipedia article on covariant differentiation](https://en.wikipedia.org/wiki/Covariant_derivative)):\begin{align} {(\nabla_{e_c} T)^{a_1 \ldots a_r}}_{b_1 \ldots b_s} = {} &\frac{\partial}{\partial x^c}{T^{a_1 \ldots a_r}}_{b_1 \ldots b_s} \\ &+ \,{\Gamma ^{a_1}}_{dc} {T^{d a_2 \ldots a_r}}_{b_1 \ldots b_s} + \cdots + {\Gamma^{a_r}}_{dc} {T^{a_1 \ldots a_{r-1}d}}_{b_1 \ldots b_s} \\ &-\,{\Gamma^d}_{b_1 c} {T^{a_1 \ldots a_r}}_{d b_2 \ldots b_s} - \cdots - {\Gamma^d}_{b_s c} {T^{a_1 \ldots a_r}}_{b_1 \ldots b_{s-1} d}.\end{align}Therefore, $$\hat{D}_{l} \bar{\gamma}_{i j} = \hat{D}_{l} \varepsilon_{i j} = \varepsilon_{i j,l} - \hat{\Gamma}^m_{i l} \varepsilon_{m j} -\hat{\Gamma}^m_{j l} \varepsilon_{i m}.$$Since the covariant first derivative is a tensor, the covariant second derivative is given by (same as [Eq. 27 in Baumgarte et al (2012)](https://arxiv.org/pdf/1211.6632.pdf))\begin{align}\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} &= \hat{D}_{k} \hat{D}_{l} \varepsilon_{i j} \\&= \partial_k \hat{D}_{l} \varepsilon_{i j} - \hat{\Gamma}^m_{lk} \left(\hat{D}_{m} \varepsilon_{i j}\right) - \hat{\Gamma}^m_{ik} \left(\hat{D}_{l} \varepsilon_{m j}\right) - \hat{\Gamma}^m_{jk} \left(\hat{D}_{l} \varepsilon_{i m}\right),\end{align}where the first term is the partial derivative of the expression already derived for $\hat{D}_{l} \varepsilon_{i j}$:\begin{align}\partial_k \hat{D}_{l} \varepsilon_{i j} &= \partial_k \left(\varepsilon_{ij,l} - \hat{\Gamma}^m_{i l} \varepsilon_{m j} -\hat{\Gamma}^m_{j l} \varepsilon_{i m} \right) \\&= \varepsilon_{ij,lk} - \hat{\Gamma}^m_{i l,k} \varepsilon_{m j} - \hat{\Gamma}^m_{i l} \varepsilon_{m j,k} - \hat{\Gamma}^m_{j l,k} \varepsilon_{i m} - \hat{\Gamma}^m_{j l} \varepsilon_{i m,k}.\end{align}In terms of the evolved quantity $h_{ij}$, the derivatives of $\varepsilon_{ij}$ are given by:\begin{align}\varepsilon_{ij,k} &= \partial_k \left(h_{ij} \text{ReDD[i][j]}\right) \\&= h_{ij,k} \text{ReDD[i][j]} + h_{ij} \text{ReDDdD[i][j][k]},\end{align}and\begin{align}\varepsilon_{ij,kl} &= \partial_l \left(h_{ij,k} \text{ReDD[i][j]} + h_{ij} \text{ReDDdD[i][j][k]} \right)\\&= h_{ij,kl} \text{ReDD[i][j]} + h_{ij,k} \text{ReDDdD[i][j][l]} + h_{ij,l} \text{ReDDdD[i][j][k]} + h_{ij} \text{ReDDdDD[i][j][k][l]}.\end{align}
###Code
# Step 7: Conformal Ricci tensor, part 1: The \hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} term
# Step 7.a.i: Define \varepsilon_{ij} = epsDD[i][j]
epsDD = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
epsDD[i][j] = hDD[i][j]*rfm.ReDD[i][j]
# Step 7.a.ii: Define epsDD_dD[i][j][k]
hDD_dD = ixp.declarerank3("hDD_dD","sym01")
epsDD_dD = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
epsDD_dD[i][j][k] = hDD_dD[i][j][k]*rfm.ReDD[i][j] + hDD[i][j]*rfm.ReDDdD[i][j][k]
# Step 7.a.iii: Define epsDD_dDD[i][j][k][l]
hDD_dDD = ixp.declarerank4("hDD_dDD","sym01_sym23")
epsDD_dDD = ixp.zerorank4()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
epsDD_dDD[i][j][k][l] = hDD_dDD[i][j][k][l]*rfm.ReDD[i][j] + \
hDD_dD[i][j][k]*rfm.ReDDdD[i][j][l] + \
hDD_dD[i][j][l]*rfm.ReDDdD[i][j][k] + \
hDD[i][j]*rfm.ReDDdDD[i][j][k][l]
###Output
_____no_output_____
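###Markdown
Before assembling $\hat{D}_{l} \varepsilon_{ij}$, it is straightforward to spot-check (illustratively; the cell below is not part of the module) the identity quoted above, $\hat{D}_{l} \hat{\gamma}_{ij} = 0$, directly from the reference-metric quantities provided by `rfm.reference_metric()`:
###Code
# Illustrative check (not part of the module) of reference-metric compatibility:
# \hat{D}_l \hat{gamma}_{ij} = \hat{gamma}_{ij,l}
#                              - \hat{Gamma}^m_{il} \hat{gamma}_{mj}
#                              - \hat{Gamma}^m_{jl} \hat{gamma}_{im} = 0.
for i in range(3):
    for j in range(3):
        for l in range(3):
            Dhat_ghatDD = rfm.ghatDDdD[i][j][l]
            for m in range(3):
                Dhat_ghatDD += - rfm.GammahatUDD[m][i][l]*rfm.ghatDD[m][j] \
                               - rfm.GammahatUDD[m][j][l]*rfm.ghatDD[i][m]
            assert sp.simplify(Dhat_ghatDD) == 0
###Output
_____no_output_____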
###Markdown
We next compute three quantities derived above:* `gammabarDD_DhatD[i][j][l]` = $\hat{D}_{l} \bar{\gamma}_{i j} = \hat{D}_{l} \varepsilon_{i j} = \varepsilon_{i j,l} - \hat{\Gamma}^m_{i l} \varepsilon_{m j} -\hat{\Gamma}^m_{j l} \varepsilon_{i m}$,* `gammabarDD_DhatD\_dD[i][j][l][k]` = $\partial_k \hat{D}_{l} \bar{\gamma}_{i j} = \partial_k \hat{D}_{l} \varepsilon_{i j} = \varepsilon_{ij,lk} - \hat{\Gamma}^m_{i l,k} \varepsilon_{m j} - \hat{\Gamma}^m_{i l} \varepsilon_{m j,k} - \hat{\Gamma}^m_{j l,k} \varepsilon_{i m} - \hat{\Gamma}^m_{j l} \varepsilon_{i m,k}$, and* `gammabarDD_DhatDD[i][j][l][k]` = $\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} = \partial_k \hat{D}_{l} \varepsilon_{i j} - \hat{\Gamma}^m_{lk} \left(\hat{D}_{m} \varepsilon_{i j}\right) - \hat{\Gamma}^m_{ik} \left(\hat{D}_{l} \varepsilon_{m j}\right) - \hat{\Gamma}^m_{jk} \left(\hat{D}_{l} \varepsilon_{i m}\right)$.
###Code
# Step 7.a.iv: DhatgammabarDDdD[i][j][l] = \bar{\gamma}_{ij;\hat{l}}
# \bar{\gamma}_{ij;\hat{l}} = \varepsilon_{i j,l}
# - \hat{\Gamma}^m_{i l} \varepsilon_{m j}
# - \hat{\Gamma}^m_{j l} \varepsilon_{i m}
gammabarDD_dHatD = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for l in range(DIM):
gammabarDD_dHatD[i][j][l] = epsDD_dD[i][j][l]
for m in range(DIM):
gammabarDD_dHatD[i][j][l] += - rfm.GammahatUDD[m][i][l]*epsDD[m][j] \
- rfm.GammahatUDD[m][j][l]*epsDD[i][m]
# Step 7.a.v: \bar{\gamma}_{ij;\hat{l},k} = DhatgammabarDD_dHatD_dD[i][j][l][k]:
# \bar{\gamma}_{ij;\hat{l},k} = \varepsilon_{ij,lk}
# - \hat{\Gamma}^m_{i l,k} \varepsilon_{m j}
# - \hat{\Gamma}^m_{i l} \varepsilon_{m j,k}
# - \hat{\Gamma}^m_{j l,k} \varepsilon_{i m}
# - \hat{\Gamma}^m_{j l} \varepsilon_{i m,k}
gammabarDD_dHatD_dD = ixp.zerorank4()
for i in range(DIM):
for j in range(DIM):
for l in range(DIM):
for k in range(DIM):
gammabarDD_dHatD_dD[i][j][l][k] = epsDD_dDD[i][j][l][k]
for m in range(DIM):
gammabarDD_dHatD_dD[i][j][l][k] += -rfm.GammahatUDDdD[m][i][l][k]*epsDD[m][j] \
-rfm.GammahatUDD[m][i][l]*epsDD_dD[m][j][k] \
-rfm.GammahatUDDdD[m][j][l][k]*epsDD[i][m] \
-rfm.GammahatUDD[m][j][l]*epsDD_dD[i][m][k]
# Step 7.a.vi: \bar{\gamma}_{ij;\hat{l}\hat{k}} = DhatgammabarDD_dHatDD[i][j][l][k]
# \bar{\gamma}_{ij;\hat{l}\hat{k}} = \partial_k \hat{D}_{l} \varepsilon_{i j}
# - \hat{\Gamma}^m_{lk} \left(\hat{D}_{m} \varepsilon_{i j}\right)
# - \hat{\Gamma}^m_{ik} \left(\hat{D}_{l} \varepsilon_{m j}\right)
# - \hat{\Gamma}^m_{jk} \left(\hat{D}_{l} \varepsilon_{i m}\right)
gammabarDD_dHatDD = ixp.zerorank4()
for i in range(DIM):
for j in range(DIM):
for l in range(DIM):
for k in range(DIM):
gammabarDD_dHatDD[i][j][l][k] = gammabarDD_dHatD_dD[i][j][l][k]
for m in range(DIM):
gammabarDD_dHatDD[i][j][l][k] += - rfm.GammahatUDD[m][l][k]*gammabarDD_dHatD[i][j][m] \
- rfm.GammahatUDD[m][i][k]*gammabarDD_dHatD[m][j][l] \
- rfm.GammahatUDD[m][j][k]*gammabarDD_dHatD[i][m][l]
###Output
_____no_output_____
###Markdown
Step 7.b: Conformal Ricci tensor, part 2: The $\bar{\gamma}_{k(i} \hat{D}_{j)} \bar{\Lambda}^{k}$ term \[Back to [top](toc)\]$$\label{rbar_part2}$$By definition, the index symmetrization operation is given by:$$\bar{\gamma}_{k(i} \hat{D}_{j)} \bar{\Lambda}^{k} = \frac{1}{2} \left( \bar{\gamma}_{ki} \hat{D}_{j} \bar{\Lambda}^{k} + \bar{\gamma}_{kj} \hat{D}_{i} \bar{\Lambda}^{k} \right),$$and $\bar{\gamma}_{ij}$ is trivially computed ($=\varepsilon_{ij} + \hat{\gamma}_{ij}$) so the only nontrival part to computing this term is in evaluating $\hat{D}_{j} \bar{\Lambda}^{k}$.The covariant derivative is with respect to the hatted metric (i.e. the reference metric), so$$\hat{D}_{j} \bar{\Lambda}^{k} = \partial_j \bar{\Lambda}^{k} + \hat{\Gamma}^{k}_{mj} \bar{\Lambda}^m,$$except we cannot take derivatives of $\bar{\Lambda}^{k}$ directly due to potential issues with coordinate singularities. Instead we write it in terms of the rescaled quantity $\lambda^k$ via$$\bar{\Lambda}^{k} = \lambda^k \text{ReU[k]}.$$Then the expression for $\hat{D}_{j} \bar{\Lambda}^{k}$ becomes$$\hat{D}_{j} \bar{\Lambda}^{k} = \lambda^{k}_{,j} \text{ReU[k]} + \lambda^{k} \text{ReUdD[k][j]} + \hat{\Gamma}^{k}_{mj} \lambda^{m} \text{ReU[m]},$$and the NRPy+ code for this expression is written
###Code
# Step 7.b: Second term of RhatDD: compute \hat{D}_{j} \bar{\Lambda}^{k} = LambarU_dHatD[k][j]
lambdaU_dD = ixp.declarerank2("lambdaU_dD","nosym")
LambarU_dHatD = ixp.zerorank2()
for j in range(DIM):
for k in range(DIM):
LambarU_dHatD[k][j] = lambdaU_dD[k][j]*rfm.ReU[k] + lambdaU[k]*rfm.ReUdD[k][j]
for m in range(DIM):
LambarU_dHatD[k][j] += rfm.GammahatUDD[k][m][j]*lambdaU[m]*rfm.ReU[m]
###Output
_____no_output_____
###Markdown
Step 7.c: Conformal Ricci tensor, part 3: The $\Delta^{k} \Delta_{(i j) k} + \bar{\gamma}^{k l} \left (2 \Delta_{k(i}^{m} \Delta_{j) m l} + \Delta_{i k}^{m} \Delta_{m j l} \right )$ terms \[Back to [top](toc)\]$$\label{rbar_part3}$$Our goal here is to compute the quantities appearing as the final terms of the conformal Ricci tensor:$$\Delta^{k} \Delta_{(i j) k} + \bar{\gamma}^{k l} \left (2 \Delta_{k(i}^{m} \Delta_{j) m l} + \Delta_{i k}^{m} \Delta_{m j l} \right).$$* `DGammaUDD[k][i][j]`$= \Delta^k_{ij}$ is simply the difference in Christoffel symbols: $\Delta^{k}_{ij} = \bar{\Gamma}^i_{jk} - \hat{\Gamma}^i_{jk}$, and * `DGammaU[k]`$= \Delta^k$ is the contraction: $\bar{\gamma}^{ij} \Delta^{k}_{ij}$Adding these expressions to Ricci is straightforward, since $\bar{\Gamma}^i_{jk}$ and $\bar{\gamma}^{ij}$ were defined above in [Step 4](bssn_barred_metric__inverse_and_derivs), and $\hat{\Gamma}^i_{jk}$ was computed within NRPy+'s `reference_metric()` function:
###Code
# Step 7.c: Conformal Ricci tensor, part 3: The \Delta^{k} \Delta_{(i j) k}
# + \bar{\gamma}^{k l}*(2 \Delta_{k(i}^{m} \Delta_{j) m l}
# + \Delta_{i k}^{m} \Delta_{m j l}) terms
# Step 7.c.i: Define \Delta^i_{jk} = \bar{\Gamma}^i_{jk} - \hat{\Gamma}^i_{jk} = DGammaUDD[i][j][k]
DGammaUDD = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
DGammaUDD[i][j][k] = GammabarUDD[i][j][k] - rfm.GammahatUDD[i][j][k]
# Step 7.c.ii: Define \Delta^i = \bar{\gamma}^{jk} \Delta^i_{jk}
DGammaU = ixp.zerorank1()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
DGammaU[i] += gammabarUU[j][k] * DGammaUDD[i][j][k]
###Output
_____no_output_____
###Markdown
Next we define $\Delta_{ijk}=\bar{\gamma}_{im}\Delta^m_{jk}$:
###Code
# Step 7.c.iii: Define \Delta_{ijk} = \bar{\gamma}_{im} \Delta^m_{jk}
DGammaDDD = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for m in range(DIM):
DGammaDDD[i][j][k] += gammabarDD[i][m] * DGammaUDD[m][j][k]
###Output
_____no_output_____
###Markdown
Step 7.d: Summing the terms and defining $\bar{R}_{ij}$ \[Back to [top](toc)\]$$\label{summing_rbar_terms}$$We have now constructed all of the terms going into $\bar{R}_{ij}$:\begin{align} \bar{R}_{i j} {} = {} & - \frac{1}{2} \bar{\gamma}^{k l} \hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} + \bar{\gamma}_{k(i} \hat{D}_{j)} \bar{\Lambda}^{k} + \Delta^{k} \Delta_{(i j) k} \nonumber \\ & + \bar{\gamma}^{k l} \left (2 \Delta_{k(i}^{m} \Delta_{j) m l} + \Delta_{i k}^{m} \Delta_{m j l} \right ) \; .\end{align}
###Code
# Step 7.d: Summing the terms and defining \bar{R}_{ij}
# Step 7.d.i: Add the first term to RbarDD:
# Rbar_{ij} += - \frac{1}{2} \bar{\gamma}^{k l} \hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j}
RbarDD = ixp.zerorank2()
RbarDDpiece = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
RbarDD[i][j] += -sp.Rational(1,2) * gammabarUU[k][l]*gammabarDD_dHatDD[i][j][l][k]
RbarDDpiece[i][j] += -sp.Rational(1,2) * gammabarUU[k][l]*gammabarDD_dHatDD[i][j][l][k]
# Step 7.d.ii: Add the second term to RbarDD:
# Rbar_{ij} += (1/2) * (gammabar_{ki} Lambar^k_{;\hat{j}} + gammabar_{kj} Lambar^k_{;\hat{i}})
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
RbarDD[i][j] += sp.Rational(1,2) * (gammabarDD[k][i]*LambarU_dHatD[k][j] + \
gammabarDD[k][j]*LambarU_dHatD[k][i])
# Step 7.d.iii: Add the remaining term to RbarDD:
# Rbar_{ij} += \Delta^{k} \Delta_{(i j) k} = 1/2 \Delta^{k} (\Delta_{i j k} + \Delta_{j i k})
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
RbarDD[i][j] += sp.Rational(1,2) * DGammaU[k] * (DGammaDDD[i][j][k] + DGammaDDD[j][i][k])
# Step 7.d.iv: Add the final term to RbarDD:
# Rbar_{ij} += \bar{\gamma}^{k l} (\Delta^{m}_{k i} \Delta_{j m l}
# + \Delta^{m}_{k j} \Delta_{i m l}
# + \Delta^{m}_{i k} \Delta_{m j l})
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
for m in range(DIM):
RbarDD[i][j] += gammabarUU[k][l] * (DGammaUDD[m][k][i]*DGammaDDD[j][m][l] +
DGammaUDD[m][k][j]*DGammaDDD[i][m][l] +
DGammaUDD[m][i][k]*DGammaDDD[m][j][l])
###Output
_____no_output_____
###Markdown
Step 8: **`betaU_derivs()`**: The unrescaled shift vector $\beta^i$ spatial derivatives: $\beta^i_{,j}$ & $\beta^i_{,jk}$, written in terms of the rescaled shift vector $\mathcal{V}^i$ \[Back to [top](toc)\]$$\label{beta_derivs}$$This step, which documents the function `betaUbar_and_derivs()` inside the [BSSN.BSSN_unrescaled_and_barred_vars](../edit/BSSN/BSSN_unrescaled_and_barred_vars) module, defines three quantities:[comment]: (Fix Link Above: TODO)* `betaU_dD[i][j]`$=\beta^i_{,j} = \left(\mathcal{V}^i \circ \text{ReU[i]}\right)_{,j} = \mathcal{V}^i_{,j} \circ \text{ReU[i]} + \mathcal{V}^i \circ \text{ReUdD[i][j]}$* `betaU_dupD[i][j]`: the same as above, except using *upwinded* finite-difference derivatives to compute $\mathcal{V}^i_{,j}$ instead of *centered* finite-difference derivatives.* `betaU_dDD[i][j][k]`$=\beta^i_{,jk} = \mathcal{V}^i_{,jk} \circ \text{ReU[i]} + \mathcal{V}^i_{,j} \circ \text{ReUdD[i][k]} + \mathcal{V}^i_{,k} \circ \text{ReUdD[i][j]}+\mathcal{V}^i \circ \text{ReUdDD[i][j][k]}$
###Code
# Step 8: The unrescaled shift vector betaU spatial derivatives:
# betaUdD & betaUdDD, written in terms of the
# rescaled shift vector vetU
vetU_dD = ixp.declarerank2("vetU_dD","nosym")
vetU_dupD = ixp.declarerank2("vetU_dupD","nosym") # Needed for upwinded \beta^i_{,j}
vetU_dDD = ixp.declarerank3("vetU_dDD","sym12") # Needed for \beta^i_{,j}
betaU_dD = ixp.zerorank2()
betaU_dupD = ixp.zerorank2() # Needed for, e.g., \beta^i RHS
betaU_dDD = ixp.zerorank3() # Needed for, e.g., \bar{\Lambda}^i RHS
for i in range(DIM):
for j in range(DIM):
betaU_dD[i][j] = vetU_dD[i][j]*rfm.ReU[i] + vetU[i]*rfm.ReUdD[i][j]
betaU_dupD[i][j] = vetU_dupD[i][j]*rfm.ReU[i] + vetU[i]*rfm.ReUdD[i][j] # Needed for \beta^i RHS
for k in range(DIM):
# Needed for, e.g., \bar{\Lambda}^i RHS:
betaU_dDD[i][j][k] = vetU_dDD[i][j][k]*rfm.ReU[i] + vetU_dD[i][j]*rfm.ReUdD[i][k] + \
vetU_dD[i][k]*rfm.ReUdD[i][j] + vetU[i]*rfm.ReUdDD[i][j][k]
###Output
_____no_output_____
###Markdown
Step 9: **`phi_and_derivs()`**: Standard BSSN conformal factor $\phi$, and its derivatives $\phi_{,i}$, $\phi_{,ij}$, $\bar{D}_j \phi_i$, and $\bar{D}_j\bar{D}_k \phi_i$, all written in terms of BSSN gridfunctions like $\text{cf}$ \[Back to [top](toc)\]$$\label{phi_and_derivs}$$ Step 9.a: $\phi$ in terms of the chosen (possibly non-standard) conformal factor variable $\text{cf}$ (e.g., $\text{cf}=\chi=e^{-4\phi}$) \[Back to [top](toc)\]$$\label{phi_ito_cf}$$When solving the BSSN time evolution equations across the coordinate singularity (i.e., the "puncture") inside puncture black holes for example, the standard conformal factor $\phi$ becomes very sharp, whereas $\chi=e^{-4\phi}$ is far smoother (see, e.g., [Campanelli, Lousto, Marronetti, and Zlochower (2006)](https://arxiv.org/abs/gr-qc/0511048) for additional discussion). Thus if we choose to rewrite derivatives of $\phi$ in the BSSN equations in terms of finite-difference derivatives `cf`$=\chi$, numerical errors will be far smaller near the puncture.The BSSN modules in NRPy+ support three options for the conformal factor variable `cf`:1. `cf`$=\phi$,1. `cf`$=\chi=e^{-4\phi}$, and1. `cf`$=W = e^{-2\phi}$.The BSSN equations are written in terms of $\phi$ (actually only $e^{-4\phi}$ appears) and derivatives of $\phi$, we now define $e^{-4\phi}$ and derivatives of $\phi$ in terms of the chosen `cf`.First, we define the base variables needed within the BSSN equations:
###Code
# Step 9: Standard BSSN conformal factor phi,
# and its partial and covariant derivatives,
# all in terms of BSSN gridfunctions like cf
# Step 9.a.i: Define partial derivatives of \phi in terms of evolved quantity "cf":
cf_dD = ixp.declarerank1("cf_dD")
cf_dupD = ixp.declarerank1("cf_dupD") # Needed for \partial_t \phi next.
cf_dDD = ixp.declarerank2("cf_dDD","sym01")
phi_dD = ixp.zerorank1()
phi_dupD = ixp.zerorank1()
phi_dDD = ixp.zerorank2()
exp_m4phi = sp.sympify(0)
###Output
_____no_output_____
###Markdown
Then we define $\phi_{,i}$, $\phi_{,ij}$, and $e^{-4\phi}$ for each of the choices of `cf`.For `cf`$=\phi$, this is trivial:
###Code
# Step 9.a.ii: Assuming cf=phi, define exp_m4phi, phi_dD,
# phi_dupD (upwind finite-difference version of phi_dD), and phi_DD
if par.parval_from_str(thismodule+"::EvolvedConformalFactor_cf") == "phi":
for i in range(DIM):
phi_dD[i] = cf_dD[i]
phi_dupD[i] = cf_dupD[i]
for j in range(DIM):
phi_dDD[i][j] = cf_dDD[i][j]
exp_m4phi = sp.exp(-4*cf)
###Output
_____no_output_____
###Markdown
For `cf`$=W=e^{-2\phi}$, we have* $\phi_{,i} = -\text{cf}_{,i} / (2 \text{cf})$* $\phi_{,ij} = (-\text{cf}_{,ij} + \text{cf}_{,i}\text{cf}_{,j}/\text{cf}) / (2 \text{cf})$* $e^{-4\phi} = \text{cf}^2$***Exercise to student: Prove the above relations***
###Code
# Step 9.a.iii: Assuming cf=W=e^{-2 phi}, define exp_m4phi, phi_dD,
# phi_dupD (upwind finite-difference version of phi_dD), and phi_DD
if par.parval_from_str(thismodule+"::EvolvedConformalFactor_cf") == "W":
# \partial_i W = \partial_i (e^{-2 phi}) = -2 e^{-2 phi} \partial_i phi
# -> \partial_i phi = -\partial_i cf / (2 cf)
for i in range(DIM):
phi_dD[i] = - cf_dD[i] / (2*cf)
phi_dupD[i] = - cf_dupD[i] / (2*cf)
for j in range(DIM):
# \partial_j \partial_i phi = - \partial_j [\partial_i cf / (2 cf)]
# = - cf_{,ij} / (2 cf) + \partial_i cf \partial_j cf / (2 cf^2)
phi_dDD[i][j] = (- cf_dDD[i][j] + cf_dD[i]*cf_dD[j] / cf) / (2*cf)
exp_m4phi = cf*cf
###Output
_____no_output_____
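###Markdown
A quick, purely illustrative way to spot-check the first and third relations above (the symbols `phi_sym` and `dphi` below are introduced only for this check and are not part of the module) is to substitute $W=e^{-2\phi}$ explicitly and let SymPy apply the chain rule:
###Code
# Illustrative spot check of the cf=W relations (not part of the module).
# Treat phi as an independent symbol and use dphi as a placeholder for phi_{,i};
# the chain rule then gives W_{,i} = (dW/dphi) * phi_{,i}.
phi_sym, dphi = sp.symbols("phi_sym dphi", real=True)
W_of_phi = sp.exp(-2*phi_sym)
W_dI = sp.diff(W_of_phi, phi_sym)*dphi
# First relation: phi_{,i} = -W_{,i} / (2 W)
assert sp.simplify(-W_dI/(2*W_of_phi) - dphi) == 0
# Third relation: e^{-4 phi} = W^2
assert sp.simplify(W_of_phi**2 - sp.exp(-4*phi_sym)) == 0
###Output
_____no_output_____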
###Markdown
For `cf`$=\chi=e^{-4\phi}$, we have* $\phi_{,i} = -\text{cf}_{,i} / (4 \text{cf})$* $\phi_{,ij} = (-\text{cf}_{,ij} + \text{cf}_{,i}\text{cf}_{,j}/\text{cf}) / (4 \text{cf})$* $e^{-4\phi} = \text{cf}$***Exercise to student: Prove the above relations***
###Code
# Step 9.a.iv: Assuming cf=chi=e^{-4 phi}, define exp_m4phi, phi_dD,
# phi_dupD (upwind finite-difference version of phi_dD), and phi_DD
if par.parval_from_str(thismodule+"::EvolvedConformalFactor_cf") == "chi":
# \partial_i chi = \partial_i (e^{-4 phi}) = -4 e^{-4 phi} \partial_i phi
# -> \partial_i phi = -\partial_i cf / (4 cf)
for i in range(DIM):
phi_dD[i] = - cf_dD[i] / (4*cf)
phi_dupD[i] = - cf_dupD[i] / (4*cf)
for j in range(DIM):
# \partial_j \partial_i phi = - \partial_j [\partial_i cf / (4 cf)]
# = - cf_{,ij} / (4 cf) + \partial_i cf \partial_j cf / (4 cf^2)
phi_dDD[i][j] = (- cf_dDD[i][j] + cf_dD[i]*cf_dD[j] / cf) / (4*cf)
exp_m4phi = cf
# Step 9.a.v: Error out if unsupported EvolvedConformalFactor_cf choice is made:
cf_choice = par.parval_from_str(thismodule+"::EvolvedConformalFactor_cf")
if not (cf_choice == "phi" or cf_choice == "W" or cf_choice == "chi"):
print("Error: EvolvedConformalFactor_cf == "+par.parval_from_str(thismodule+"::EvolvedConformalFactor_cf")+" unsupported!")
exit(1)
###Output
_____no_output_____
###Markdown
Step 9.b: Covariant derivatives of $\phi$ \[Back to [top](toc)\]$$\label{phi_covariant_derivs}$$Since $\phi$ is a scalar, $\bar{D}_i \phi = \partial_i \phi$.Thus the second covariant derivative is given by\begin{align}\bar{D}_i \bar{D}_j \phi &= \phi_{;\bar{i}\bar{j}} = \bar{D}_i \phi_{,j}\\ &= \phi_{,ij} - \bar{\Gamma}^k_{ij} \phi_{,k}.\end{align}
###Code
# Step 9.b: Define phi_dBarD = phi_dD (since phi is a scalar) and phi_dBarDD (covariant derivative)
# \bar{D}_i \bar{D}_j \phi = \phi_{;\bar{i}\bar{j}} = \bar{D}_i \phi_{,j}
# = \phi_{,ij} - \bar{\Gamma}^k_{ij} \phi_{,k}
phi_dBarD = phi_dD
phi_dBarDD = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
phi_dBarDD[i][j] = phi_dDD[i][j]
for k in range(DIM):
phi_dBarDD[i][j] += - GammabarUDD[k][i][j]*phi_dD[k]
###Output
_____no_output_____
###Markdown
Step 10: Code validation against `BSSN.BSSN_quantities` NRPy+ module \[Back to [top](toc)\]$$\label{code_validation}$$As a code validation check, we verify agreement in the SymPy expressions for the RHSs of the BSSN equations between1. this tutorial and 2. the NRPy+ [BSSN.BSSN_quantities](../edit/BSSN/BSSN_quantities.py) module.By default, we analyze the RHSs in Spherical coordinates, though other coordinate systems may be chosen.
###Code
all_passed=True
def comp_func(expr1,expr2,basename,prefixname2="Bq."):
    global all_passed  # without this, setting all_passed=False below would only create a local variable
    if str(expr1-expr2)!="0":
        print(basename+" - "+prefixname2+basename+" = "+ str(expr1-expr2))
        all_passed=False
def gfnm(basename,idx1,idx2=None,idx3=None):
if idx2==None:
return basename+"["+str(idx1)+"]"
if idx3==None:
return basename+"["+str(idx1)+"]["+str(idx2)+"]"
return basename+"["+str(idx1)+"]["+str(idx2)+"]["+str(idx3)+"]"
expr_list = []
exprcheck_list = []
namecheck_list = []
# Step 3:
import BSSN.BSSN_quantities as Bq
Bq.BSSN_basic_tensors()
for i in range(DIM):
namecheck_list.extend([gfnm("LambdabarU",i),gfnm("betaU",i),gfnm("BU",i)])
exprcheck_list.extend([Bq.LambdabarU[i],Bq.betaU[i],Bq.BU[i]])
expr_list.extend([LambdabarU[i],betaU[i],BU[i]])
for j in range(DIM):
namecheck_list.extend([gfnm("gammabarDD",i,j),gfnm("AbarDD",i,j)])
exprcheck_list.extend([Bq.gammabarDD[i][j],Bq.AbarDD[i][j]])
expr_list.extend([gammabarDD[i][j],AbarDD[i][j]])
# Step 4:
Bq.gammabar__inverse_and_derivs()
for i in range(DIM):
for j in range(DIM):
namecheck_list.extend([gfnm("gammabarUU",i,j)])
exprcheck_list.extend([Bq.gammabarUU[i][j]])
expr_list.extend([gammabarUU[i][j]])
for k in range(DIM):
namecheck_list.extend([gfnm("gammabarDD_dD",i,j,k),
gfnm("gammabarDD_dupD",i,j,k),
gfnm("GammabarUDD",i,j,k)])
exprcheck_list.extend([Bq.gammabarDD_dD[i][j][k],Bq.gammabarDD_dupD[i][j][k],Bq.GammabarUDD[i][j][k]])
expr_list.extend( [gammabarDD_dD[i][j][k],gammabarDD_dupD[i][j][k],GammabarUDD[i][j][k]])
# Step 5:
Bq.detgammabar_and_derivs()
namecheck_list.extend(["detgammabar"])
exprcheck_list.extend([Bq.detgammabar])
expr_list.extend([detgammabar])
for i in range(DIM):
namecheck_list.extend([gfnm("detgammabar_dD",i)])
exprcheck_list.extend([Bq.detgammabar_dD[i]])
expr_list.extend([detgammabar_dD[i]])
for j in range(DIM):
namecheck_list.extend([gfnm("detgammabar_dDD",i,j)])
exprcheck_list.extend([Bq.detgammabar_dDD[i][j]])
expr_list.extend([detgammabar_dDD[i][j]])
# Step 6:
Bq.AbarUU_AbarUD_trAbar_AbarDD_dD()
namecheck_list.extend(["trAbar"])
exprcheck_list.extend([Bq.trAbar])
expr_list.extend([trAbar])
for i in range(DIM):
for j in range(DIM):
namecheck_list.extend([gfnm("AbarUU",i,j),gfnm("AbarUD",i,j)])
exprcheck_list.extend([Bq.AbarUU[i][j],Bq.AbarUD[i][j]])
expr_list.extend([AbarUU[i][j],AbarUD[i][j]])
for k in range(DIM):
namecheck_list.extend([gfnm("AbarDD_dD",i,j,k)])
exprcheck_list.extend([Bq.AbarDD_dD[i][j][k]])
expr_list.extend([AbarDD_dD[i][j][k]])
# Step 7:
Bq.RicciBar__gammabarDD_dHatD__DGammaUDD__DGammaU()
for i in range(DIM):
namecheck_list.extend([gfnm("DGammaU",i)])
exprcheck_list.extend([Bq.DGammaU[i]])
expr_list.extend([DGammaU[i]])
for j in range(DIM):
namecheck_list.extend([gfnm("RbarDD",i,j)])
exprcheck_list.extend([Bq.RbarDD[i][j]])
expr_list.extend([RbarDD[i][j]])
for k in range(DIM):
namecheck_list.extend([gfnm("DGammaUDD",i,j,k),gfnm("gammabarDD_dHatD",i,j,k)])
exprcheck_list.extend([Bq.DGammaUDD[i][j][k],Bq.gammabarDD_dHatD[i][j][k]])
expr_list.extend([DGammaUDD[i][j][k],gammabarDD_dHatD[i][j][k]])
# Step 8:
Bq.betaU_derivs()
for i in range(DIM):
for j in range(DIM):
namecheck_list.extend([gfnm("betaU_dD",i,j),gfnm("betaU_dupD",i,j)])
exprcheck_list.extend([Bq.betaU_dD[i][j],Bq.betaU_dupD[i][j]])
expr_list.extend([betaU_dD[i][j],betaU_dupD[i][j]])
for k in range(DIM):
namecheck_list.extend([gfnm("betaU_dDD",i,j,k)])
exprcheck_list.extend([Bq.betaU_dDD[i][j][k]])
expr_list.extend([betaU_dDD[i][j][k]])
# Step 9:
Bq.phi_and_derivs()
#phi_dD,phi_dupD,phi_dDD,exp_m4phi,phi_dBarD,phi_dBarDD
namecheck_list.extend(["exp_m4phi"])
exprcheck_list.extend([Bq.exp_m4phi])
expr_list.extend([exp_m4phi])
for i in range(DIM):
namecheck_list.extend([gfnm("phi_dD",i),gfnm("phi_dupD",i),gfnm("phi_dBarD",i)])
exprcheck_list.extend([Bq.phi_dD[i],Bq.phi_dupD[i],Bq.phi_dBarD[i]])
expr_list.extend( [phi_dD[i],phi_dupD[i],phi_dBarD[i]])
for j in range(DIM):
namecheck_list.extend([gfnm("phi_dDD",i,j),gfnm("phi_dBarDD",i,j)])
exprcheck_list.extend([Bq.phi_dDD[i][j],Bq.phi_dBarDD[i][j]])
expr_list.extend([phi_dDD[i][j],phi_dBarDD[i][j]])
for i in range(len(expr_list)):
comp_func(expr_list[i],exprcheck_list[i],namecheck_list[i])
if all_passed:
print("ALL TESTS PASSED!")
###Output
ALL TESTS PASSED!
###Markdown
Step 11: Output this module to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-BSSN_quantities.pdf](Tutorial-BSSN_quantities.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
!jupyter nbconvert --to latex --template latex_nrpy_style.tplx Tutorial-BSSN_quantities.ipynb
!pdflatex -interaction=batchmode Tutorial-BSSN_quantities.tex
!pdflatex -interaction=batchmode Tutorial-BSSN_quantities.tex
!pdflatex -interaction=batchmode Tutorial-BSSN_quantities.tex
!rm -f Tut*.out Tut*.aux Tut*.log
###Output
[NbConvertApp] Converting notebook Tutorial-BSSN_quantities.ipynb to latex
[NbConvertApp] Writing 147286 bytes to Tutorial-BSSN_quantities.tex
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
###Markdown
window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'UA-59152712-8'); BSSN Quantities Author: Zach Etienne Formatting improvements courtesy Brandon Clark This module documents and constructs a number of quantities useful for building symbolic (SymPy) expressions in terms of the core BSSN quantities $\left\{h_{i j},a_{i j},\phi, K, \lambda^{i}, \alpha, \mathcal{V}^i, \mathcal{B}^i\right\}$, as defined in [Ruchlin, Etienne, and Baumgarte (2018)](https://arxiv.org/abs/1712.07658) (see also [Baumgarte, Montero, Cordero-Carrión, and Müller (2012)](https://arxiv.org/abs/1211.6632)). **Notebook Status:** Self-Validated **Validation Notes:** This tutorial notebook has been confirmed to be self-consistent with its corresponding NRPy+ module, as documented [below](code_validation). **Additional validation tests may have been performed, but are as yet, undocumented. (TODO)**[comment]: (Introduction: TODO) A Note on Notation:As is standard in NRPy+, * Greek indices refer to four-dimensional quantities where the zeroth component indicates temporal (time) component.* Latin indices refer to three-dimensional quantities. This is somewhat counterintuitive since Python always indexes its lists starting from 0. As a result, the zeroth component of three-dimensional quantities will necessarily indicate the first *spatial* direction.As a corollary, any expressions involving mixed Greek and Latin indices will need to offset one set of indices by one: A Latin index in a four-vector will be incremented and a Greek index in a three-vector will be decremented (however, the latter case does not occur in this tutorial notebook). Table of Contents$$\label{toc}$$Each family of quantities is constructed within a given function (**boldfaced** below). This notebook is organized as follows1. [Step 1](initializenrpy): Initialize needed Python/NRPy+ modules1. [Step 2](declare_bssn_gfs): **`declare_BSSN_gridfunctions_if_not_declared_already()`**: Declare all of the core BSSN variables $\left\{h_{i j},a_{i j},\text{cf}, K, \lambda^{i}, \alpha, \mathcal{V}^i, \mathcal{B}^i\right\}$ and register them as gridfunctions1. [Step 3](rescaling_tensors) Rescaling tensors to avoid coordinate singularities 1. [Step 3.a](bssn_basic_tensors) **`BSSN_basic_tensors()`**: Define all basic conformal BSSN tensors $\left\{\bar{\gamma}_{i j},\bar{A}_{i j},\bar{\Lambda}^{i}, \beta^i, B^i\right\}$ in terms of BSSN gridfunctions1. [Step 4](bssn_barred_metric__inverse_and_derivs): **`gammabar__inverse_and_derivs()`**: $\bar{\gamma}^{ij}$, and spatial derivatives of $\bar{\gamma}_{ij}$ including $\bar{\Gamma}^{i}_{jk}$ 1. [Step 4.a](bssn_barred_metric__inverse): Inverse conformal 3-metric: $\bar{\gamma^{ij}}$ 1. [Step 4.b](bssn_barred_metric__derivs): Derivatives of the conformal 3-metric $\bar{\gamma}_{ij,k}$ and $\bar{\gamma}_{ij,kl}$, and associated "barred" Christoffel symbols $\bar{\Gamma}^{i}_{jk}$1. [Step 5](detgammabar_and_derivs): **`detgammabar_and_derivs()`**: $\det \bar{\gamma}_{ij}$ and its derivatives1. [Step 6](abar_quantities): **`AbarUU_AbarUD_trAbar()`**: Quantities related to conformal traceless extrinsic curvature $\bar{A}_{ij}$: $\bar{A}^{ij}$, $\bar{A}^i_j$, and $\bar{A}^k_k$1. [Step 7](rbar): **`RicciBar__gammabarDD_dHatD__DGammaUDD__DGammaU()`**: The conformal ("barred") Ricci tensor $\bar{R}_{ij}$ and associated quantities 1. [Step 7.a](rbar_part1): Conformal Ricci tensor, part 1: The $\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j}$ term 1. 
[Step 7.b](rbar_part2): Conformal Ricci tensor, part 2: The $\bar{\gamma}_{k(i} \hat{D}_{j)} \bar{\Lambda}^{k}$ term 1. [Step 7.c](rbar_part3): Conformal Ricci tensor, part 3: The $\Delta^{k} \Delta_{(i j) k} + \bar{\gamma}^{k l} \left (2 \Delta_{k(i}^{m} \Delta_{j) m l} + \Delta_{i k}^{m} \Delta_{m j l} \right )$ terms 1. [Step 7.d](summing_rbar_terms): Summing the terms and defining $\bar{R}_{ij}$1. [Step 8](beta_derivs): **`betaU_derivs()`**: Unrescaled shift vector $\beta^i$ and spatial derivatives $\beta^i_{,j}$ and $\beta^i_{,jk}$1. [Step 9](phi_and_derivs): **`phi_and_derivs()`**: Standard BSSN conformal factor $\phi$, and its derivatives $\phi_{,i}$, $\phi_{,ij}$, $\bar{D}_j \phi_i$, and $\bar{D}_j\bar{D}_k \phi_i$ 1. [Step 9.a](phi_ito_cf): $\phi$ in terms of the chosen (possibly non-standard) conformal factor variable `cf` (e.g., `cf`$=W=e^{-4\phi}$) 1. [Step 9.b](phi_covariant_derivs): Partial and covariant derivatives of $\phi$1. [Step 10](code_validation): Code Validation against `BSSN.BSSN_quantities` NRPy+ module1. [Step 11](latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file Step 1: Initialize needed Python/NRPy+ modules \[Back to [top](toc)\]$$\label{initializenrpy}$$
###Code
# Step 1: Import all needed modules from NRPy+:
import NRPy_param_funcs as par
import sympy as sp
import indexedexp as ixp
import grid as gri
import reference_metric as rfm
import sys
# Step 1.a: Set the coordinate system for the numerical grid
par.set_parval_from_str("reference_metric::CoordSystem","Spherical")
# Step 1.b: Given the chosen coordinate system, set up
# corresponding reference metric and needed
# reference metric quantities
# The following function call sets up the reference metric
# and related quantities, including rescaling matrices ReDD,
# ReU, and hatted quantities.
rfm.reference_metric()
# Step 1.c: Set spatial dimension (must be 3 for BSSN, as BSSN is
# a 3+1-dimensional decomposition of the general
# relativistic field equations)
DIM = 3
par.set_parval_from_str("grid::DIM",DIM)
# Step 1.d: Declare/initialize parameters for this module
thismodule = "BSSN_quantities"
par.initialize_param(par.glb_param("char", thismodule, "EvolvedConformalFactor_cf", "W"))
par.initialize_param(par.glb_param("bool", thismodule, "detgbarOverdetghat_equals_one", "True"))
###Output
_____no_output_____
###Markdown
Step 2: `declare_BSSN_gridfunctions_if_not_declared_already()`: Declare all of the core BSSN variables $\left\{h_{i j},a_{i j},\text{cf}, K, \lambda^{i}, \alpha, \mathcal{V}^i, \mathcal{B}^i\right\}$ and register them as gridfunctions \[Back to [top](toc)\]$$\label{declare_bssn_gfs}$$
###Code
# Step 2: Register all needed BSSN gridfunctions.
# Step 2.a: Register indexed quantities, using ixp.register_... functions
hDD = ixp.register_gridfunctions_for_single_rank2("EVOL", "hDD", "sym01")
aDD = ixp.register_gridfunctions_for_single_rank2("EVOL", "aDD", "sym01")
lambdaU = ixp.register_gridfunctions_for_single_rank1("EVOL", "lambdaU")
vetU = ixp.register_gridfunctions_for_single_rank1("EVOL", "vetU")
betU = ixp.register_gridfunctions_for_single_rank1("EVOL", "betU")
# Step 2.b: Register scalar quantities, using gri.register_gridfunctions()
trK, cf, alpha = gri.register_gridfunctions("EVOL",["trK", "cf", "alpha"])
###Output
_____no_output_____
###Markdown
Step 3: Rescaling tensors to avoid coordinate singularities \[Back to [top](toc)\]$$\label{rescaling_tensors}$$While the [covariant form of the BSSN evolution equations](Tutorial-BSSNCurvilinear.ipynb) are properly covariant (with the potential exception of the shift evolution equation, since the shift is a [freely specifiable gauge quantity](https://en.wikipedia.org/wiki/Gauge_fixing)), components of the rank-1 and rank-2 tensors $\varepsilon_{i j}$, $\bar{A}_{i j}$, and $\bar{\Lambda}^{i}$ will drop to zero (destroying information) or diverge (to $\infty$) at coordinate singularities. The good news is, this singular behavior is well-understood in terms of the scale factors of the reference metric, enabling us to define rescaled version of these quantities that are well behaved (so that, e.g., they can be finite differenced).For example, given a smooth vector *in a 3D Cartesian basis* $\bar{\Lambda}^{i}$, all components $\bar{\Lambda}^{x}$, $\bar{\Lambda}^{y}$, and $\bar{\Lambda}^{z}$ will be smooth (by assumption). When changing the basis to spherical coordinates (applying the appropriate Jacobian matrix transformation), we will find that since $\phi = \arctan(y/x)$, $\bar{\Lambda}^{\phi}$ is given by\begin{align}\bar{\Lambda}^{\phi} &= \frac{\partial \phi}{\partial x} \bar{\Lambda}^{x} + \frac{\partial \phi}{\partial y} \bar{\Lambda}^{y} + \frac{\partial \phi}{\partial z} \bar{\Lambda}^{z} \\&= -\frac{y}{\sqrt{x^2+y^2}} \bar{\Lambda}^{x} + \frac{x}{\sqrt{x^2+y^2}} \bar{\Lambda}^{y} \\&= -\frac{y}{r \sin\theta} \bar{\Lambda}^{x} + \frac{x}{r \sin\theta} \bar{\Lambda}^{y}.\end{align}Thus $\bar{\Lambda}^{\phi}$ diverges at all points where $r\sin\theta=0$ due to the $\frac{1}{r\sin\theta}$ that appear in the Jacobian transformation. This divergence might pose no problem on cell-centered grids that avoid $r \sin\theta=0$, except that the BSSN equations require that *first and second derivatives* of these quantities be taken. Usual strategies for numerical approximation of these derivatives (e.g., finite difference methods) will "see" these divergences and errors generally will not drop to zero with increased numerical sampling of the functions at points near where the functions diverge.However, notice that if we define $\lambda^{\phi}$ such that$$\bar{\Lambda}^{\phi} = \frac{1}{r\sin\theta} \lambda^{\phi},$$then $\lambda^{\phi}$ will be smooth as well. Avoiding such singularities can be generalized to other coordinate systems, so long as $\lambda^i$ is defined as:$$\bar{\Lambda}^{i} = \frac{\lambda^i}{\text{scalefactor[i]}} ,$$where scalefactor\[i\] is the $i$th scale factor in the given coordinate system. In an identical fashion, we define the smooth versions of $\beta^i$ and $B^i$ to be $\mathcal{V}^i$ and $\mathcal{B}^i$, respectively. We refer to $\mathcal{V}^i$ and $\mathcal{B}^i$ as vet\[i\] and bet\[i\] respectively in the code after the Hebrew letters that bear some resemblance. Similarly, we define the smooth versions of $\bar{A}_{ij}$ and $\varepsilon_{ij}$ ($a_{ij}$ and $h_{ij}$, respectively) via\begin{align}\bar{A}_{ij} &= \text{scalefactor[i]}\ \text{scalefactor[j]}\ a_{ij} \\\varepsilon_{ij} &= \text{scalefactor[i]}\ \text{scalefactor[j]}\ h_{ij},\end{align}where in this case we *multiply* due to the fact that these tensors are purely covariant (as opposed to contravariant). 
To slightly simplify the notation, in NRPy+ we define the *rescaling matrices* `ReU[i]` and `ReDD[i][j]`, such that\begin{align}\text{ReU[i]} &= 1 / \text{scalefactor[i]} \\\text{ReDD[i][j]} &= \text{scalefactor[i] scalefactor[j]}.\end{align}Thus, for example, $\bar{A}_{ij}$ and $\bar{\Lambda}^i$ can be expressed as the [Hadamard product](https://en.wikipedia.org/w/index.php?title=Hadamard_product_(matrices)&oldid=852272177) of matrices :\begin{align}\bar{A}_{ij} &= \mathbf{ReDD}\circ\mathbf{a} = \text{ReDD[i][j]} a_{ij} \\\bar{\Lambda}^{i} &= \mathbf{ReU}\circ\mathbf{\lambda} = \text{ReU[i]} \lambda^i,\end{align}where no sums are implied by the repeated indices.Further, since the scale factors are *time independent*, \begin{align}\partial_t \bar{A}_{ij} &= \text{ReDD[i][j]}\ \partial_t a_{ij} \\\partial_t \bar{\gamma}_{ij} &= \partial_t \left(\varepsilon_{ij} + \hat{\gamma}_{ij}\right)\\&= \partial_t \varepsilon_{ij} \\&= \text{scalefactor[i]}\ \text{scalefactor[j]}\ \partial_t h_{ij}.\end{align}Thus instead of taking space or time derivatives of BSSN quantities$$\left\{\bar{\gamma}_{i j},\bar{A}_{i j},\phi, K, \bar{\Lambda}^{i}, \alpha, \beta^i, B^i\right\},$$ across coordinate singularities, we instead factor out the singular scale factors according to this prescription so that space or time derivatives of BSSN quantities are written in terms of finite-difference derivatives of the *rescaled* variables $$\left\{h_{i j},a_{i j},\text{cf}, K, \lambda^{i}, \alpha, \mathcal{V}^i, \mathcal{B}^i\right\},$$ and *exact* expressions for (spatial) derivatives of scale factors. Note that `cf` is the chosen conformal factor (supported choices for `cf` are discussed in [Step 6.a](phi_ito_cf)). As an example, let's evaluate $\bar{\Lambda}^{i}_{\, ,\, j}$ according to this prescription:\begin{align}\bar{\Lambda}^{i}_{\, ,\, j} &= -\frac{\lambda^i}{(\text{ReU[i]})^2}\ \partial_j \left(\text{ReU[i]}\right) + \frac{\partial_j \lambda^i}{\text{ReU[i]}} \\&= -\frac{\lambda^i}{(\text{ReU[i]})^2}\ \text{ReUdD[i][j]} + \frac{\partial_j \lambda^i}{\text{ReU[i]}}.\end{align}Here, the derivative `ReUdD[i][j]` **is computed symbolically and exactly** using SymPy, and the derivative $\partial_j \lambda^i$ represents a derivative of a *smooth* quantity (so long as $\bar{\Lambda}^{i}$ is smooth in the Cartesian basis). 
Step 3.a: `BSSN_basic_tensors()`: Define all basic conformal BSSN tensors $\left\{\bar{\gamma}_{i j},\bar{A}_{i j},\bar{\Lambda}^{i}, \beta^i, B^i\right\}$ in terms of BSSN gridfunctions \[Back to [top](toc)\]$$\label{bssn_basic_tensors}$$The `BSSN_vars__tensors()` function defines the tensorial BSSN quantities $\left\{\bar{\gamma}_{i j},\bar{A}_{i j},\bar{\Lambda}^{i}, \beta^i, B^i\right\}$, in terms of the rescaled "base" tensorial quantities $\left\{h_{i j},a_{i j}, \lambda^{i}, \mathcal{V}^i, \mathcal{B}^i\right\},$ respectively:\begin{align}\bar{\gamma}_{i j} &= \hat{\gamma}_{ij} + \varepsilon_{ij}, \text{ where } \varepsilon_{ij} = h_{ij} \circ \text{ReDD[i][j]} \\\bar{A}_{i j} &= a_{ij} \circ \text{ReDD[i][j]} \\\bar{\Lambda}^{i} &= \lambda^i \circ \text{ReU[i]} \\\beta^{i} &= \mathcal{V}^i \circ \text{ReU[i]} \\B^{i} &= \mathcal{B}^i \circ \text{ReU[i]}\end{align}Rescaling vectors and tensors are built upon the scale factors for the chosen (in general, singular) coordinate system, which are defined in NRPy+'s [reference_metric.py](../edit/reference_metric.py) ([Tutorial](Tutorial-Reference_Metric.ipynb)), and the rescaled variables are defined in the stub function [BSSN/BSSN_rescaled_vars.py](../edit/BSSN/BSSN_rescaled_vars.py). Here we implement `BSSN_vars__tensors()`:
###Code
# Step 3.a: Define all basic conformal BSSN tensors in terms of BSSN gridfunctions
# Step 3.a.i: gammabarDD and AbarDD:
gammabarDD = ixp.zerorank2()
AbarDD = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
# gammabar_{ij} = h_{ij}*ReDD[i][j] + gammahat_{ij}
gammabarDD[i][j] = hDD[i][j]*rfm.ReDD[i][j] + rfm.ghatDD[i][j]
# Abar_{ij} = a_{ij}*ReDD[i][j]
AbarDD[i][j] = aDD[i][j]*rfm.ReDD[i][j]
# Step 3.a.ii: LambdabarU, betaU, and BU:
LambdabarU = ixp.zerorank1()
betaU = ixp.zerorank1()
BU = ixp.zerorank1()
for i in range(DIM):
LambdabarU[i] = lambdaU[i]*rfm.ReU[i]
betaU[i] = vetU[i] *rfm.ReU[i]
BU[i] = betU[i] *rfm.ReU[i]
###Output
_____no_output_____
###Markdown
Step 4: `gammabar__inverse_and_derivs()`: $\bar{\gamma}^{ij}$, and spatial derivatives of $\bar{\gamma}_{ij}$ including $\bar{\Gamma}^{i}_{jk}$ \[Back to [top](toc)\]$$\label{bssn_barred_metric__inverse_and_derivs}$$ Step 4.a: Inverse conformal 3-metric: $\bar{\gamma^{ij}}$ \[Back to [top](toc)\]$$\label{bssn_barred_metric__inverse}$$Since $\bar{\gamma}^{ij}$ is the inverse of $\bar{\gamma}_{ij}$, we apply a $3\times 3$ symmetric matrix inversion to compute $\bar{\gamma}^{ij}$.
###Code
# Step 4.a: Inverse conformal 3-metric gammabarUU:
# Step 4.a.i: gammabarUU:
gammabarUU, dummydet = ixp.symm_matrix_inverter3x3(gammabarDD)
###Output
_____no_output_____
###Markdown
Step 4.b: Derivatives of the conformal 3-metric $\bar{\gamma}_{ij,k}$ and $\bar{\gamma}_{ij,kl}$, and associated "barred" Christoffel symbols $\bar{\Gamma}^{i}_{jk}$ \[Back to [top](toc)\]$$\label{bssn_barred_metric__derivs}$$In the BSSN-in-curvilinear coordinates formulation, all quantities must be defined in terms of rescaled quantities $h_{ij}$ and their derivatives (evaluated using finite differences), as well as reference-metric quantities and their derivatives (evaluated exactly using SymPy). For example, $\bar{\gamma}_{ij,k}$ is given by:\begin{align}\bar{\gamma}_{ij,k} &= \partial_k \bar{\gamma}_{ij} \\&= \partial_k \left(\hat{\gamma}_{ij} + \varepsilon_{ij}\right) \\&= \partial_k \left(\hat{\gamma}_{ij} + h_{ij} \text{ReDD[i][j]}\right) \\&= \hat{\gamma}_{ij,k} + h_{ij,k} \text{ReDD[i][j]} + h_{ij} \text{ReDDdD[i][j][k]},\end{align}where `ReDDdD[i][j][k]` is computed within `rfm.reference_metric()`.
###Code
# Step 4.b.i: gammabarDD_dD[i][j][k]
# = \hat{\gamma}_{ij,k} + h_{ij,k} \text{ReDD[i][j]} + h_{ij} \text{ReDDdD[i][j][k]}.
gammabarDD_dD = ixp.zerorank3()
hDD_dD = ixp.declarerank3("hDD_dD","sym01")
hDD_dupD = ixp.declarerank3("hDD_dupD","sym01")
gammabarDD_dupD = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
gammabarDD_dD[i][j][k] = rfm.ghatDDdD[i][j][k] + \
hDD_dD[i][j][k]*rfm.ReDD[i][j] + hDD[i][j]*rfm.ReDDdD[i][j][k]
# Compute associated upwinded derivative, needed for the \bar{\gamma}_{ij} RHS
gammabarDD_dupD[i][j][k] = rfm.ghatDDdD[i][j][k] + \
hDD_dupD[i][j][k]*rfm.ReDD[i][j] + hDD[i][j]*rfm.ReDDdD[i][j][k]
###Output
_____no_output_____
###Markdown
By extension, the second derivative $\bar{\gamma}_{ij,kl}$ is given by\begin{align}\bar{\gamma}_{ij,kl} &= \partial_l \left(\hat{\gamma}_{ij,k} + h_{ij,k} \text{ReDD[i][j]} + h_{ij} \text{ReDDdD[i][j][k]}\right)\\&= \hat{\gamma}_{ij,kl} + h_{ij,kl} \text{ReDD[i][j]} + h_{ij,k} \text{ReDDdD[i][j][l]} + h_{ij,l} \text{ReDDdD[i][j][k]} + h_{ij} \text{ReDDdDD[i][j][k][l]}\end{align}
###Code
# Step 4.b.ii: Compute gammabarDD_dDD in terms of the rescaled BSSN quantity hDD
# and its derivatives, as well as the reference metric and rescaling
# matrix, and its derivatives (expression given below):
hDD_dDD = ixp.declarerank4("hDD_dDD","sym01_sym23")
gammabarDD_dDD = ixp.zerorank4()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
# gammabar_{ij,kl} = gammahat_{ij,kl}
# + h_{ij,kl} ReDD[i][j]
# + h_{ij,k} ReDDdD[i][j][l] + h_{ij,l} ReDDdD[i][j][k]
# + h_{ij} ReDDdDD[i][j][k][l]
gammabarDD_dDD[i][j][k][l] = rfm.ghatDDdDD[i][j][k][l]
gammabarDD_dDD[i][j][k][l] += hDD_dDD[i][j][k][l]*rfm.ReDD[i][j]
gammabarDD_dDD[i][j][k][l] += hDD_dD[i][j][k]*rfm.ReDDdD[i][j][l] + \
hDD_dD[i][j][l]*rfm.ReDDdD[i][j][k]
gammabarDD_dDD[i][j][k][l] += hDD[i][j]*rfm.ReDDdDD[i][j][k][l]
###Output
_____no_output_____
###Markdown
Finally, we compute the Christoffel symbol associated with the barred 3-metric: $\bar{\Gamma}^{i}_{kl}$:$$\bar{\Gamma}^{i}_{kl} = \frac{1}{2} \bar{\gamma}^{im} \left(\bar{\gamma}_{mk,l} + \bar{\gamma}_{ml,k} - \bar{\gamma}_{kl,m} \right)$$
###Code
# Step 4.b.iii: Define barred Christoffel symbol \bar{\Gamma}^{i}_{kl} = GammabarUDD[i][k][l] (see expression below)
GammabarUDD = ixp.zerorank3()
for i in range(DIM):
for k in range(DIM):
for l in range(DIM):
for m in range(DIM):
# Gammabar^i_{kl} = 1/2 * gammabar^{im} ( gammabar_{mk,l} + gammabar_{ml,k} - gammabar_{kl,m}):
GammabarUDD[i][k][l] += sp.Rational(1,2)*gammabarUU[i][m]* \
(gammabarDD_dD[m][k][l] + gammabarDD_dD[m][l][k] - gammabarDD_dD[k][l][m])
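# Note: GammabarUDD[i][k][l] is symmetric in its last two indices (k,l), since
# gammabarDD_dD inherits the sym01 symmetry of hDD and ghatDD.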
###Output
_____no_output_____
###Markdown
Step 5: `detgammabar_and_derivs()`: $\det \bar{\gamma}_{ij}$ and its derivatives \[Back to [top](toc)\]$$\label{detgammabar_and_derivs}$$As described just before Section III of [Baumgarte *et al* (2012)](https://arxiv.org/pdf/1211.6632.pdf), we are free to choose $\det \bar{\gamma}_{ij}$, which should remain fixed in time.As in [Baumgarte *et al* (2012)](https://arxiv.org/pdf/1211.6632.pdf) generally we make the choice $\det \bar{\gamma}_{ij} = \det \hat{\gamma}_{ij}$, but *this need not be the case; we could choose to set $\det \bar{\gamma}_{ij}$ to another expression.*In case we do not choose to set $\det \bar{\gamma}_{ij}/\det \hat{\gamma}_{ij}=1$, below we begin the implementation of a gridfunction, `detgbarOverdetghat`, which defines an alternative expression in its place. $\det \bar{\gamma}_{ij}/\det \hat{\gamma}_{ij}$=`detgbarOverdetghat`$\ne 1$ is not yet implemented. However, we can define `detgammabar` and its derivatives in terms of a generic `detgbarOverdetghat` and $\det \hat{\gamma}_{ij}$ and their derivatives:\begin{align}\text{detgammabar} &= \det \bar{\gamma}_{ij} = \text{detgbarOverdetghat} \cdot \left(\det \hat{\gamma}_{ij}\right) \\\text{detgammabar}\_\text{dD[k]} &= \left(\det \bar{\gamma}_{ij}\right)_{,k} = \text{detgbarOverdetghat}\_\text{dD[k]} \det \hat{\gamma}_{ij} + \text{detgbarOverdetghat} \left(\det \hat{\gamma}_{ij}\right)_{,k} \\\end{align}https://en.wikipedia.org/wiki/Determinant#Properties_of_the_determinant
###Code
# Step 5: det(gammabarDD) and its derivatives
detgbarOverdetghat = sp.sympify(1)
detgbarOverdetghat_dD = ixp.zerorank1()
detgbarOverdetghat_dDD = ixp.zerorank2()
if par.parval_from_str(thismodule+"::detgbarOverdetghat_equals_one") == "False":
print("Error: detgbarOverdetghat_equals_one=\"False\" is not fully implemented yet.")
sys.exit(1)
## Approach for implementing detgbarOverdetghat_equals_one=False:
# detgbarOverdetghat = gri.register_gridfunctions("AUX", ["detgbarOverdetghat"])
# detgbarOverdetghatInitial = gri.register_gridfunctions("AUX", ["detgbarOverdetghatInitial"])
# detgbarOverdetghat_dD = ixp.declarerank1("detgbarOverdetghat_dD")
# detgbarOverdetghat_dDD = ixp.declarerank2("detgbarOverdetghat_dDD", "sym01")
# Step 5.b: Define detgammabar, detgammabar_dD, and detgammabar_dDD (needed for
#            \partial_t \bar{\Lambda}^i below)
detgammabar = detgbarOverdetghat * rfm.detgammahat
detgammabar_dD = ixp.zerorank1()
for i in range(DIM):
detgammabar_dD[i] = detgbarOverdetghat_dD[i] * rfm.detgammahat + detgbarOverdetghat * rfm.detgammahatdD[i]
detgammabar_dDD = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
detgammabar_dDD[i][j] = detgbarOverdetghat_dDD[i][j] * rfm.detgammahat + \
detgbarOverdetghat_dD[i] * rfm.detgammahatdD[j] + \
detgbarOverdetghat_dD[j] * rfm.detgammahatdD[i] + \
detgbarOverdetghat * rfm.detgammahatdDD[i][j]
###Output
_____no_output_____
###Markdown
Step 6: `AbarUU_AbarUD_trAbar_AbarDD_dD()`: Quantities related to conformal traceless extrinsic curvature $\bar{A}_{ij}$: $\bar{A}^{ij}$, $\bar{A}^i_j$, and $\bar{A}^k_k$ \[Back to [top](toc)\]$$\label{abar_quantities}$$$\bar{A}^{ij}$ is given by application of the raising operators (a.k.a., the inverse 3-metric) $\bar{\gamma}^{jk}$ on both of the covariant ("down") components:$$\bar{A}^{ij} = \bar{\gamma}^{ik}\bar{\gamma}^{jl} \bar{A}_{kl}.$$$\bar{A}^i_j$ is given by a single application of the raising operator (a.k.a., the inverse 3-metric) $\bar{\gamma}^{ik}$ on $\bar{A}_{kj}$:$$\bar{A}^i_j = \bar{\gamma}^{ik}\bar{A}_{kj}.$$The trace of $\bar{A}_{ij}$, $\bar{A}^k_k$, is given by a contraction with the barred 3-metric:$$\text{Tr}(\bar{A}_{ij}) = \bar{A}^k_k = \bar{\gamma}^{kj}\bar{A}_{jk}.$$Note that while $\bar{A}_{ij}$ is defined as the *traceless* conformal extrinsic curvature, it may acquire a nonzero trace (assuming the initial data impose tracelessness) due to numerical error. $\text{Tr}(\bar{A}_{ij})$ is included in the BSSN equations to drive $\text{Tr}(\bar{A}_{ij})$ to zero.In terms of rescaled BSSN quantities, $\bar{A}_{ij}$ is given by$$\bar{A}_{ij} = \text{ReDD[i][j]} a_{ij},$$so in terms of the same quantities, $\bar{A}_{ij,k}$ is given by$$\bar{A}_{ij,k} = \text{ReDDdD[i][j][k]} a_{ij} + \text{ReDD[i][j]} a_{ij,k}.$$
###Code
# Step 6: Quantities related to conformal traceless extrinsic curvature
# Step 6.a.i: Compute Abar^{ij} in terms of Abar_{ij} and gammabar^{ij}
AbarUU = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
# Abar^{ij} = gammabar^{ik} gammabar^{jl} Abar_{kl}
AbarUU[i][j] += gammabarUU[i][k]*gammabarUU[j][l]*AbarDD[k][l]
# Step 6.a.ii: Compute Abar^i_j in terms of Abar_{ij} and gammabar^{ij}
AbarUD = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
# Abar^i_j = gammabar^{ik} Abar_{kj}
AbarUD[i][j] += gammabarUU[i][k]*AbarDD[k][j]
# Step 6.a.iii: Compute Abar^k_k = trace of Abar:
trAbar = sp.sympify(0)
for k in range(DIM):
for j in range(DIM):
# Abar^k_k = gammabar^{kj} Abar_{jk}
trAbar += gammabarUU[k][j]*AbarDD[j][k]
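# (As noted in the markdown above, trAbar would vanish for an exactly traceless Abar_{ij};
#  it is computed here so the BSSN RHSs can use it to drive the numerical trace to zero.)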
# Step 6.a.iv: Compute Abar_{ij,k}
AbarDD_dD = ixp.zerorank3()
AbarDD_dupD = ixp.zerorank3()
aDD_dD = ixp.declarerank3("aDD_dD" ,"sym01")
aDD_dupD = ixp.declarerank3("aDD_dupD","sym01")
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
AbarDD_dupD[i][j][k] = rfm.ReDDdD[i][j][k]*aDD[i][j] + rfm.ReDD[i][j]*aDD_dupD[i][j][k]
AbarDD_dD[i][j][k] = rfm.ReDDdD[i][j][k]*aDD[i][j] + rfm.ReDD[i][j]*aDD_dD[ i][j][k]
###Output
_____no_output_____
###Markdown
Step 7: `RicciBar__gammabarDD_dHatD__DGammaUDD__DGammaU()`: The conformal ("barred") Ricci tensor $\bar{R}_{ij}$ and associated quantities \[Back to [top](toc)\]$$\label{rbar}$$Let's compute perhaps the most complicated expression in the BSSN evolution equations, the conformal Ricci tensor:\begin{align} \bar{R}_{i j} {} = {} & - \frac{1}{2} \bar{\gamma}^{k l} \hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} + \bar{\gamma}_{k(i} \hat{D}_{j)} \bar{\Lambda}^{k} + \Delta^{k} \Delta_{(i j) k} \nonumber \\ & + \bar{\gamma}^{k l} \left (2 \Delta_{k(i}^{m} \Delta_{j) m l} + \Delta_{i k}^{m} \Delta_{m j l} \right ) \; .\end{align}Let's tackle the $\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j}$ term first: Step 7.a: Conformal Ricci tensor, part 1: The $\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j}$ term \[Back to [top](toc)\]$$\label{rbar_part1}$$First note that the covariant derivative of a metric with respect to itself is zero$$\hat{D}_{l} \hat{\gamma}_{ij} = 0,$$so $$\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} = \hat{D}_{k} \hat{D}_{l} \left(\hat{\gamma}_{i j} + \varepsilon_{ij}\right) = \hat{D}_{k} \hat{D}_{l} \varepsilon_{ij}.$$Next, the covariant derivative of a tensor is given by (from the [wikipedia article on covariant differentiation](https://en.wikipedia.org/wiki/Covariant_derivative)):\begin{align} {(\nabla_{e_c} T)^{a_1 \ldots a_r}}_{b_1 \ldots b_s} = {} &\frac{\partial}{\partial x^c}{T^{a_1 \ldots a_r}}_{b_1 \ldots b_s} \\ &+ \,{\Gamma ^{a_1}}_{dc} {T^{d a_2 \ldots a_r}}_{b_1 \ldots b_s} + \cdots + {\Gamma^{a_r}}_{dc} {T^{a_1 \ldots a_{r-1}d}}_{b_1 \ldots b_s} \\ &-\,{\Gamma^d}_{b_1 c} {T^{a_1 \ldots a_r}}_{d b_2 \ldots b_s} - \cdots - {\Gamma^d}_{b_s c} {T^{a_1 \ldots a_r}}_{b_1 \ldots b_{s-1} d}.\end{align}Therefore, $$\hat{D}_{l} \bar{\gamma}_{i j} = \hat{D}_{l} \varepsilon_{i j} = \varepsilon_{i j,l} - \hat{\Gamma}^m_{i l} \varepsilon_{m j} -\hat{\Gamma}^m_{j l} \varepsilon_{i m}.$$Since the covariant first derivative is a tensor, the covariant second derivative is given by (same as [Eq. 27 in Baumgarte et al (2012)](https://arxiv.org/pdf/1211.6632.pdf))\begin{align}\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} &= \hat{D}_{k} \hat{D}_{l} \varepsilon_{i j} \\&= \partial_k \hat{D}_{l} \varepsilon_{i j} - \hat{\Gamma}^m_{lk} \left(\hat{D}_{m} \varepsilon_{i j}\right) - \hat{\Gamma}^m_{ik} \left(\hat{D}_{l} \varepsilon_{m j}\right) - \hat{\Gamma}^m_{jk} \left(\hat{D}_{l} \varepsilon_{i m}\right),\end{align}where the first term is the partial derivative of the expression already derived for $\hat{D}_{l} \varepsilon_{i j}$:\begin{align}\partial_k \hat{D}_{l} \varepsilon_{i j} &= \partial_k \left(\varepsilon_{ij,l} - \hat{\Gamma}^m_{i l} \varepsilon_{m j} -\hat{\Gamma}^m_{j l} \varepsilon_{i m} \right) \\&= \varepsilon_{ij,lk} - \hat{\Gamma}^m_{i l,k} \varepsilon_{m j} - \hat{\Gamma}^m_{i l} \varepsilon_{m j,k} - \hat{\Gamma}^m_{j l,k} \varepsilon_{i m} - \hat{\Gamma}^m_{j l} \varepsilon_{i m,k}.\end{align}In terms of the evolved quantity $h_{ij}$, the derivatives of $\varepsilon_{ij}$ are given by:\begin{align}\varepsilon_{ij,k} &= \partial_k \left(h_{ij} \text{ReDD[i][j]}\right) \\&= h_{ij,k} \text{ReDD[i][j]} + h_{ij} \text{ReDDdD[i][j][k]},\end{align}and\begin{align}\varepsilon_{ij,kl} &= \partial_l \left(h_{ij,k} \text{ReDD[i][j]} + h_{ij} \text{ReDDdD[i][j][k]} \right)\\&= h_{ij,kl} \text{ReDD[i][j]} + h_{ij,k} \text{ReDDdD[i][j][l]} + h_{ij,l} \text{ReDDdD[i][j][k]} + h_{ij} \text{ReDDdDD[i][j][k][l]}.\end{align}
###Code
# Step 7: Conformal Ricci tensor, part 1: The \hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} term
# Step 7.a.i: Define \varepsilon_{ij} = epsDD[i][j]
epsDD = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
epsDD[i][j] = hDD[i][j]*rfm.ReDD[i][j]
# Step 7.a.ii: Define epsDD_dD[i][j][k]
hDD_dD = ixp.declarerank3("hDD_dD","sym01")
epsDD_dD = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
epsDD_dD[i][j][k] = hDD_dD[i][j][k]*rfm.ReDD[i][j] + hDD[i][j]*rfm.ReDDdD[i][j][k]
# Step 7.a.iii: Define epsDD_dDD[i][j][k][l]
hDD_dDD = ixp.declarerank4("hDD_dDD","sym01_sym23")
epsDD_dDD = ixp.zerorank4()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
epsDD_dDD[i][j][k][l] = hDD_dDD[i][j][k][l]*rfm.ReDD[i][j] + \
hDD_dD[i][j][k]*rfm.ReDDdD[i][j][l] + \
hDD_dD[i][j][l]*rfm.ReDDdD[i][j][k] + \
hDD[i][j]*rfm.ReDDdDD[i][j][k][l]
###Output
_____no_output_____
###Markdown
We next compute three quantities derived above:* `gammabarDD_dHatD[i][j][l]` = $\hat{D}_{l} \bar{\gamma}_{i j} = \hat{D}_{l} \varepsilon_{i j} = \varepsilon_{i j,l} - \hat{\Gamma}^m_{i l} \varepsilon_{m j} -\hat{\Gamma}^m_{j l} \varepsilon_{i m}$,* `gammabarDD_dHatD_dD[i][j][l][k]` = $\partial_k \hat{D}_{l} \bar{\gamma}_{i j} = \partial_k \hat{D}_{l} \varepsilon_{i j} = \varepsilon_{ij,lk} - \hat{\Gamma}^m_{i l,k} \varepsilon_{m j} - \hat{\Gamma}^m_{i l} \varepsilon_{m j,k} - \hat{\Gamma}^m_{j l,k} \varepsilon_{i m} - \hat{\Gamma}^m_{j l} \varepsilon_{i m,k}$, and* `gammabarDD_dHatDD[i][j][l][k]` = $\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} = \partial_k \hat{D}_{l} \varepsilon_{i j} - \hat{\Gamma}^m_{lk} \left(\hat{D}_{m} \varepsilon_{i j}\right) - \hat{\Gamma}^m_{ik} \left(\hat{D}_{l} \varepsilon_{m j}\right) - \hat{\Gamma}^m_{jk} \left(\hat{D}_{l} \varepsilon_{i m}\right)$.
###Code
# Step 7.a.iv: gammabarDD_dHatD[i][j][l] = \bar{\gamma}_{ij;\hat{l}}
# \bar{\gamma}_{ij;\hat{l}} = \varepsilon_{i j,l}
# - \hat{\Gamma}^m_{i l} \varepsilon_{m j}
# - \hat{\Gamma}^m_{j l} \varepsilon_{i m}
gammabarDD_dHatD = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for l in range(DIM):
gammabarDD_dHatD[i][j][l] = epsDD_dD[i][j][l]
for m in range(DIM):
gammabarDD_dHatD[i][j][l] += - rfm.GammahatUDD[m][i][l]*epsDD[m][j] \
- rfm.GammahatUDD[m][j][l]*epsDD[i][m]
# Step 7.a.v: \bar{\gamma}_{ij;\hat{l},k} = gammabarDD_dHatD_dD[i][j][l][k]:
# \bar{\gamma}_{ij;\hat{l},k} = \varepsilon_{ij,lk}
# - \hat{\Gamma}^m_{i l,k} \varepsilon_{m j}
# - \hat{\Gamma}^m_{i l} \varepsilon_{m j,k}
# - \hat{\Gamma}^m_{j l,k} \varepsilon_{i m}
# - \hat{\Gamma}^m_{j l} \varepsilon_{i m,k}
gammabarDD_dHatD_dD = ixp.zerorank4()
for i in range(DIM):
for j in range(DIM):
for l in range(DIM):
for k in range(DIM):
gammabarDD_dHatD_dD[i][j][l][k] = epsDD_dDD[i][j][l][k]
for m in range(DIM):
gammabarDD_dHatD_dD[i][j][l][k] += -rfm.GammahatUDDdD[m][i][l][k]*epsDD[m][j] \
-rfm.GammahatUDD[m][i][l]*epsDD_dD[m][j][k] \
-rfm.GammahatUDDdD[m][j][l][k]*epsDD[i][m] \
-rfm.GammahatUDD[m][j][l]*epsDD_dD[i][m][k]
# Step 7.a.vi: \bar{\gamma}_{ij;\hat{l}\hat{k}} = gammabarDD_dHatDD[i][j][l][k]
# \bar{\gamma}_{ij;\hat{l}\hat{k}} = \partial_k \hat{D}_{l} \varepsilon_{i j}
# - \hat{\Gamma}^m_{lk} \left(\hat{D}_{m} \varepsilon_{i j}\right)
# - \hat{\Gamma}^m_{ik} \left(\hat{D}_{l} \varepsilon_{m j}\right)
# - \hat{\Gamma}^m_{jk} \left(\hat{D}_{l} \varepsilon_{i m}\right)
gammabarDD_dHatDD = ixp.zerorank4()
for i in range(DIM):
for j in range(DIM):
for l in range(DIM):
for k in range(DIM):
gammabarDD_dHatDD[i][j][l][k] = gammabarDD_dHatD_dD[i][j][l][k]
for m in range(DIM):
gammabarDD_dHatDD[i][j][l][k] += - rfm.GammahatUDD[m][l][k]*gammabarDD_dHatD[i][j][m] \
- rfm.GammahatUDD[m][i][k]*gammabarDD_dHatD[m][j][l] \
- rfm.GammahatUDD[m][j][k]*gammabarDD_dHatD[i][m][l]
###Output
_____no_output_____
###Markdown
Step 7.b: Conformal Ricci tensor, part 2: The $\bar{\gamma}_{k(i} \hat{D}_{j)} \bar{\Lambda}^{k}$ term \[Back to [top](toc)\]$$\label{rbar_part2}$$By definition, the index symmetrization operation is given by:$$\bar{\gamma}_{k(i} \hat{D}_{j)} \bar{\Lambda}^{k} = \frac{1}{2} \left( \bar{\gamma}_{ki} \hat{D}_{j} \bar{\Lambda}^{k} + \bar{\gamma}_{kj} \hat{D}_{i} \bar{\Lambda}^{k} \right),$$and $\bar{\gamma}_{ij}$ is trivially computed ($=\varepsilon_{ij} + \hat{\gamma}_{ij}$), so the only nontrivial part of computing this term is evaluating $\hat{D}_{j} \bar{\Lambda}^{k}$.The covariant derivative is with respect to the hatted metric (i.e., the reference metric), so$$\hat{D}_{j} \bar{\Lambda}^{k} = \partial_j \bar{\Lambda}^{k} + \hat{\Gamma}^{k}_{mj} \bar{\Lambda}^m,$$except we cannot take derivatives of $\bar{\Lambda}^{k}$ directly due to potential issues with coordinate singularities. Instead we write it in terms of the rescaled quantity $\lambda^k$ via$$\bar{\Lambda}^{k} = \lambda^k \text{ReU[k]}.$$Then the expression for $\hat{D}_{j} \bar{\Lambda}^{k}$ becomes$$\hat{D}_{j} \bar{\Lambda}^{k} = \lambda^{k}_{,j} \text{ReU[k]} + \lambda^{k} \text{ReUdD[k][j]} + \hat{\Gamma}^{k}_{mj} \lambda^{m} \text{ReU[m]},$$and the NRPy+ code for this expression is written below:
###Code
# Step 7.b: Second term of RhatDD: compute \hat{D}_{j} \bar{\Lambda}^{k} = LambarU_dHatD[k][j]
lambdaU_dD = ixp.declarerank2("lambdaU_dD","nosym")
LambarU_dHatD = ixp.zerorank2()
for j in range(DIM):
for k in range(DIM):
LambarU_dHatD[k][j] = lambdaU_dD[k][j]*rfm.ReU[k] + lambdaU[k]*rfm.ReUdD[k][j]
for m in range(DIM):
LambarU_dHatD[k][j] += rfm.GammahatUDD[k][m][j]*lambdaU[m]*rfm.ReU[m]
###Output
_____no_output_____
###Markdown
Step 7.c: Conformal Ricci tensor, part 3: The $\Delta^{k} \Delta_{(i j) k} + \bar{\gamma}^{k l} \left (2 \Delta_{k(i}^{m} \Delta_{j) m l} + \Delta_{i k}^{m} \Delta_{m j l} \right )$ terms \[Back to [top](toc)\]$$\label{rbar_part3}$$Our goal here is to compute the quantities appearing as the final terms of the conformal Ricci tensor:$$\Delta^{k} \Delta_{(i j) k} + \bar{\gamma}^{k l} \left (2 \Delta_{k(i}^{m} \Delta_{j) m l} + \Delta_{i k}^{m} \Delta_{m j l} \right).$$* `DGammaUDD[k][i][j]`$= \Delta^k_{ij}$ is simply the difference in Christoffel symbols: $\Delta^{k}_{ij} = \bar{\Gamma}^k_{ij} - \hat{\Gamma}^k_{ij}$, and * `DGammaU[k]`$= \Delta^k$ is the contraction: $\bar{\gamma}^{ij} \Delta^{k}_{ij}$Adding these expressions to Ricci is straightforward, since $\bar{\Gamma}^i_{jk}$ and $\bar{\gamma}^{ij}$ were defined above in [Step 4](bssn_barred_metric__inverse_and_derivs), and $\hat{\Gamma}^i_{jk}$ was computed within NRPy+'s `reference_metric()` function:
###Code
# Step 7.c: Conformal Ricci tensor, part 3: The \Delta^{k} \Delta_{(i j) k}
# + \bar{\gamma}^{k l}*(2 \Delta_{k(i}^{m} \Delta_{j) m l}
# + \Delta_{i k}^{m} \Delta_{m j l}) terms
# Step 7.c.i: Define \Delta^i_{jk} = \bar{\Gamma}^i_{jk} - \hat{\Gamma}^i_{jk} = DGammaUDD[i][j][k]
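# Note: while Christoffel symbols themselves are not tensors, their difference \Delta^i_{jk}
#       does transform as a tensor, which is why it can appear directly in the covariant
#       expression for RbarDD below.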
DGammaUDD = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
DGammaUDD[i][j][k] = GammabarUDD[i][j][k] - rfm.GammahatUDD[i][j][k]
# Step 7.c.ii: Define \Delta^i = \bar{\gamma}^{jk} \Delta^i_{jk}
DGammaU = ixp.zerorank1()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
DGammaU[i] += gammabarUU[j][k] * DGammaUDD[i][j][k]
###Output
_____no_output_____
###Markdown
Next we define $\Delta_{ijk}=\bar{\gamma}_{im}\Delta^m_{jk}$:
###Code
# Step 7.c.iii: Define \Delta_{ijk} = \bar{\gamma}_{im} \Delta^m_{jk}
DGammaDDD = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for m in range(DIM):
DGammaDDD[i][j][k] += gammabarDD[i][m] * DGammaUDD[m][j][k]
###Output
_____no_output_____
###Markdown
Step 7.d: Summing the terms and defining $\bar{R}_{ij}$ \[Back to [top](toc)\]$$\label{summing_rbar_terms}$$We have now constructed all of the terms going into $\bar{R}_{ij}$:\begin{align} \bar{R}_{i j} {} = {} & - \frac{1}{2} \bar{\gamma}^{k l} \hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} + \bar{\gamma}_{k(i} \hat{D}_{j)} \bar{\Lambda}^{k} + \Delta^{k} \Delta_{(i j) k} \nonumber \\ & + \bar{\gamma}^{k l} \left (2 \Delta_{k(i}^{m} \Delta_{j) m l} + \Delta_{i k}^{m} \Delta_{m j l} \right ) \; .\end{align}
###Code
# Step 7.d: Summing the terms and defining \bar{R}_{ij}
# Step 7.d.i: Add the first term to RbarDD:
# Rbar_{ij} += - \frac{1}{2} \bar{\gamma}^{k l} \hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j}
RbarDD = ixp.zerorank2()
RbarDDpiece = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
RbarDD[i][j] += -sp.Rational(1,2) * gammabarUU[k][l]*gammabarDD_dHatDD[i][j][l][k]
RbarDDpiece[i][j] += -sp.Rational(1,2) * gammabarUU[k][l]*gammabarDD_dHatDD[i][j][l][k]
# Step 7.d.ii: Add the second term to RbarDD:
# Rbar_{ij} += (1/2) * (gammabar_{ki} Lambar^k_{;\hat{j}} + gammabar_{kj} Lambar^k_{;\hat{i}})
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
RbarDD[i][j] += sp.Rational(1,2) * (gammabarDD[k][i]*LambarU_dHatD[k][j] + \
gammabarDD[k][j]*LambarU_dHatD[k][i])
# Step 7.d.iii: Add the remaining term to RbarDD:
# Rbar_{ij} += \Delta^{k} \Delta_{(i j) k} = 1/2 \Delta^{k} (\Delta_{i j k} + \Delta_{j i k})
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
RbarDD[i][j] += sp.Rational(1,2) * DGammaU[k] * (DGammaDDD[i][j][k] + DGammaDDD[j][i][k])
# Step 7.d.iv: Add the final term to RbarDD:
# Rbar_{ij} += \bar{\gamma}^{k l} (\Delta^{m}_{k i} \Delta_{j m l}
# + \Delta^{m}_{k j} \Delta_{i m l}
# + \Delta^{m}_{i k} \Delta_{m j l})
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
for m in range(DIM):
RbarDD[i][j] += gammabarUU[k][l] * (DGammaUDD[m][k][i]*DGammaDDD[j][m][l] +
DGammaUDD[m][k][j]*DGammaDDD[i][m][l] +
DGammaUDD[m][i][k]*DGammaDDD[m][j][l])
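# Note: RbarDDpiece (defined in Step 7.d.i) stores only the first, Laplacian-like term of
# RbarDD; it is not among the quantities compared in the Step 10 validation below.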
###Output
_____no_output_____
###Markdown
Step 8: **`betaU_derivs()`**: The unrescaled shift vector $\beta^i$ spatial derivatives: $\beta^i_{,j}$ & $\beta^i_{,jk}$, written in terms of the rescaled shift vector $\mathcal{V}^i$ \[Back to [top](toc)\]$$\label{beta_derivs}$$This step, which documents the function `betaU_derivs()` inside the [BSSN.BSSN_quantities](../edit/BSSN/BSSN_quantities.py) module, defines three quantities:* `betaU_dD[i][j]`$=\beta^i_{,j} = \left(\mathcal{V}^i \circ \text{ReU[i]}\right)_{,j} = \mathcal{V}^i_{,j} \circ \text{ReU[i]} + \mathcal{V}^i \circ \text{ReUdD[i][j]}$* `betaU_dupD[i][j]`: the same as above, except using *upwinded* finite-difference derivatives to compute $\mathcal{V}^i_{,j}$ instead of *centered* finite-difference derivatives.* `betaU_dDD[i][j][k]`$=\beta^i_{,jk} = \mathcal{V}^i_{,jk} \circ \text{ReU[i]} + \mathcal{V}^i_{,j} \circ \text{ReUdD[i][k]} + \mathcal{V}^i_{,k} \circ \text{ReUdD[i][j]}+\mathcal{V}^i \circ \text{ReUdDD[i][j][k]}$
###Code
# Step 8: The unrescaled shift vector betaU spatial derivatives:
# betaUdD & betaUdDD, written in terms of the
# rescaled shift vector vetU
vetU_dD = ixp.declarerank2("vetU_dD","nosym")
vetU_dupD = ixp.declarerank2("vetU_dupD","nosym") # Needed for upwinded \beta^i_{,j}
vetU_dDD = ixp.declarerank3("vetU_dDD","sym12")  # Needed for \beta^i_{,jk}
betaU_dD = ixp.zerorank2()
betaU_dupD = ixp.zerorank2() # Needed for, e.g., \beta^i RHS
betaU_dDD = ixp.zerorank3() # Needed for, e.g., \bar{\Lambda}^i RHS
for i in range(DIM):
for j in range(DIM):
betaU_dD[i][j] = vetU_dD[i][j]*rfm.ReU[i] + vetU[i]*rfm.ReUdD[i][j]
betaU_dupD[i][j] = vetU_dupD[i][j]*rfm.ReU[i] + vetU[i]*rfm.ReUdD[i][j] # Needed for \beta^i RHS
for k in range(DIM):
# Needed for, e.g., \bar{\Lambda}^i RHS:
betaU_dDD[i][j][k] = vetU_dDD[i][j][k]*rfm.ReU[i] + vetU_dD[i][j]*rfm.ReUdD[i][k] + \
vetU_dD[i][k]*rfm.ReUdD[i][j] + vetU[i]*rfm.ReUdDD[i][j][k]
###Output
_____no_output_____
###Markdown
Step 9: **`phi_and_derivs()`**: Standard BSSN conformal factor $\phi$, and its derivatives $\phi_{,i}$, $\phi_{,ij}$, $\bar{D}_j \phi_i$, and $\bar{D}_j\bar{D}_k \phi_i$, all written in terms of BSSN gridfunctions like $\text{cf}$ \[Back to [top](toc)\]$$\label{phi_and_derivs}$$ Step 9.a: $\phi$ in terms of the chosen (possibly non-standard) conformal factor variable $\text{cf}$ (e.g., $\text{cf}=\chi=e^{-4\phi}$) \[Back to [top](toc)\]$$\label{phi_ito_cf}$$When solving the BSSN time evolution equations across the coordinate singularity (i.e., the "puncture") inside puncture black holes for example, the standard conformal factor $\phi$ becomes very sharp, whereas $\chi=e^{-4\phi}$ is far smoother (see, e.g., [Campanelli, Lousto, Marronetti, and Zlochower (2006)](https://arxiv.org/abs/gr-qc/0511048) for additional discussion). Thus if we choose to rewrite derivatives of $\phi$ in the BSSN equations in terms of finite-difference derivatives `cf`$=\chi$, numerical errors will be far smaller near the puncture.The BSSN modules in NRPy+ support three options for the conformal factor variable `cf`:1. `cf`$=\phi$,1. `cf`$=\chi=e^{-4\phi}$, and1. `cf`$=W = e^{-2\phi}$.The BSSN equations are written in terms of $\phi$ (actually only $e^{-4\phi}$ appears) and derivatives of $\phi$, we now define $e^{-4\phi}$ and derivatives of $\phi$ in terms of the chosen `cf`.First, we define the base variables needed within the BSSN equations:
###Code
# Step 9: Standard BSSN conformal factor phi,
# and its partial and covariant derivatives,
# all in terms of BSSN gridfunctions like cf
# Step 9.a.i: Define partial derivatives of \phi in terms of evolved quantity "cf":
cf_dD = ixp.declarerank1("cf_dD")
cf_dupD = ixp.declarerank1("cf_dupD") # Needed for \partial_t \phi next.
cf_dDD = ixp.declarerank2("cf_dDD","sym01")
phi_dD = ixp.zerorank1()
phi_dupD = ixp.zerorank1()
phi_dDD = ixp.zerorank2()
exp_m4phi = sp.sympify(0)
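# The arrays above (and exp_m4phi) are placeholders; one of the cf branches below,
# selected by the EvolvedConformalFactor_cf parameter, overwrites them. Unsupported
# choices of cf are caught by the error check in Step 9.a.v.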
###Output
_____no_output_____
###Markdown
Then we define $\phi_{,i}$, $\phi_{,ij}$, and $e^{-4\phi}$ for each of the choices of `cf`.For `cf`$=\phi$, this is trivial:
###Code
# Step 9.a.ii: Assuming cf=phi, define exp_m4phi, phi_dD,
# phi_dupD (upwind finite-difference version of phi_dD), and phi_DD
if par.parval_from_str(thismodule+"::EvolvedConformalFactor_cf") == "phi":
for i in range(DIM):
phi_dD[i] = cf_dD[i]
phi_dupD[i] = cf_dupD[i]
for j in range(DIM):
phi_dDD[i][j] = cf_dDD[i][j]
exp_m4phi = sp.exp(-4*cf)
###Output
_____no_output_____
###Markdown
For `cf`$=W=e^{-2\phi}$, we have* $\phi_{,i} = -\text{cf}_{,i} / (2 \text{cf})$* $\phi_{,ij} = (-\text{cf}_{,ij} + \text{cf}_{,i}\text{cf}_{,j}/\text{cf}) / (2 \text{cf})$* $e^{-4\phi} = \text{cf}^2$***Exercise to student: Prove the above relations***
###Code
# Step 9.a.iii: Assuming cf=W=e^{-2 phi}, define exp_m4phi, phi_dD,
# phi_dupD (upwind finite-difference version of phi_dD), and phi_DD
if par.parval_from_str(thismodule+"::EvolvedConformalFactor_cf") == "W":
# \partial_i W = \partial_i (e^{-2 phi}) = -2 e^{-2 phi} \partial_i phi
# -> \partial_i phi = -\partial_i cf / (2 cf)
for i in range(DIM):
phi_dD[i] = - cf_dD[i] / (2*cf)
phi_dupD[i] = - cf_dupD[i] / (2*cf)
for j in range(DIM):
# \partial_j \partial_i phi = - \partial_j [\partial_i cf / (2 cf)]
# = - cf_{,ij} / (2 cf) + \partial_i cf \partial_j cf / (2 cf^2)
phi_dDD[i][j] = (- cf_dDD[i][j] + cf_dD[i]*cf_dD[j] / cf) / (2*cf)
exp_m4phi = cf*cf
###Output
_____no_output_____
###Markdown
For `cf`$=\chi=e^{-4\phi}$, we have* $\phi_{,i} = -\text{cf}_{,i} / (4 \text{cf})$* $\phi_{,ij} = (-\text{cf}_{,ij} + \text{cf}_{,i}\text{cf}_{,j}/\text{cf}) / (4 \text{cf})$* $e^{-4\phi} = \text{cf}$***Exercise to student: Prove the above relations***
###Code
# Step 9.a.iv: Assuming cf=chi=e^{-4 phi}, define exp_m4phi, phi_dD,
# phi_dupD (upwind finite-difference version of phi_dD), and phi_DD
if par.parval_from_str(thismodule+"::EvolvedConformalFactor_cf") == "chi":
# \partial_i chi = \partial_i (e^{-4 phi}) = -4 e^{-4 phi} \partial_i phi
# -> \partial_i phi = -\partial_i cf / (4 cf)
for i in range(DIM):
phi_dD[i] = - cf_dD[i] / (4*cf)
phi_dupD[i] = - cf_dupD[i] / (4*cf)
for j in range(DIM):
# \partial_j \partial_i phi = - \partial_j [\partial_i cf / (4 cf)]
# = - cf_{,ij} / (4 cf) + \partial_i cf \partial_j cf / (4 cf^2)
phi_dDD[i][j] = (- cf_dDD[i][j] + cf_dD[i]*cf_dD[j] / cf) / (4*cf)
exp_m4phi = cf
# Step 9.a.v: Error out if unsupported EvolvedConformalFactor_cf choice is made:
cf_choice = par.parval_from_str(thismodule+"::EvolvedConformalFactor_cf")
if not (cf_choice == "phi" or cf_choice == "W" or cf_choice == "chi"):
print("Error: EvolvedConformalFactor_cf == "+par.parval_from_str(thismodule+"::EvolvedConformalFactor_cf")+" unsupported!")
sys.exit(1)
###Output
_____no_output_____
###Markdown
Step 9.b: Covariant derivatives of $\phi$ \[Back to [top](toc)\]$$\label{phi_covariant_derivs}$$Since $\phi$ is a scalar, $\bar{D}_i \phi = \partial_i \phi$.Thus the second covariant derivative is given by\begin{align}\bar{D}_i \bar{D}_j \phi &= \phi_{;\bar{i}\bar{j}} = \bar{D}_i \phi_{,j}\\ &= \phi_{,ij} - \bar{\Gamma}^k_{ij} \phi_{,k}.\end{align}
###Code
# Step 9.b: Define phi_dBarD = phi_dD (since phi is a scalar) and phi_dBarDD (covariant derivative)
# \bar{D}_i \bar{D}_j \phi = \phi_{;\bar{i}\bar{j}} = \bar{D}_i \phi_{,j}
# = \phi_{,ij} - \bar{\Gamma}^k_{ij} \phi_{,k}
phi_dBarD = phi_dD
phi_dBarDD = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
phi_dBarDD[i][j] = phi_dDD[i][j]
for k in range(DIM):
phi_dBarDD[i][j] += - GammabarUDD[k][i][j]*phi_dD[k]
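# (phi_dBarD is simply an alias for phi_dD: the covariant derivative of a scalar
#  reduces to its partial derivative, as noted in the markdown above.)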
###Output
_____no_output_____
###Markdown
Step 10: Code validation against `BSSN.BSSN_quantities` NRPy+ module \[Back to [top](toc)\]$$\label{code_validation}$$As a code validation check, we verify agreement in the SymPy expressions for the BSSN quantities constructed above between1. this tutorial and 2. the NRPy+ [BSSN.BSSN_quantities](../edit/BSSN/BSSN_quantities.py) module.By default, we analyze these expressions in Spherical coordinates, though other coordinate systems may be chosen.
###Code
all_passed=True
def comp_func(expr1,expr2,basename,prefixname2="Bq."):
    # Record failures in the module-level all_passed flag; without the global
    # declaration, the assignment below would only create a local variable and
    # the final check would report success even if a comparison failed.
    global all_passed
    if str(expr1-expr2)!="0":
        print(basename+" - "+prefixname2+basename+" = "+ str(expr1-expr2))
        all_passed=False
def gfnm(basename,idx1,idx2=None,idx3=None):
    if idx2 is None:
        return basename+"["+str(idx1)+"]"
    if idx3 is None:
        return basename+"["+str(idx1)+"]["+str(idx2)+"]"
    return basename+"["+str(idx1)+"]["+str(idx2)+"]["+str(idx3)+"]"
expr_list = []
exprcheck_list = []
namecheck_list = []
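# Each quantity computed in this notebook is paired below with the corresponding
# attribute of BSSN.BSSN_quantities; comp_func() prints any nonzero symbolic difference.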
# Step 3:
import BSSN.BSSN_quantities as Bq
Bq.BSSN_basic_tensors()
for i in range(DIM):
namecheck_list.extend([gfnm("LambdabarU",i),gfnm("betaU",i),gfnm("BU",i)])
exprcheck_list.extend([Bq.LambdabarU[i],Bq.betaU[i],Bq.BU[i]])
expr_list.extend([LambdabarU[i],betaU[i],BU[i]])
for j in range(DIM):
namecheck_list.extend([gfnm("gammabarDD",i,j),gfnm("AbarDD",i,j)])
exprcheck_list.extend([Bq.gammabarDD[i][j],Bq.AbarDD[i][j]])
expr_list.extend([gammabarDD[i][j],AbarDD[i][j]])
# Step 4:
Bq.gammabar__inverse_and_derivs()
for i in range(DIM):
for j in range(DIM):
namecheck_list.extend([gfnm("gammabarUU",i,j)])
exprcheck_list.extend([Bq.gammabarUU[i][j]])
expr_list.extend([gammabarUU[i][j]])
for k in range(DIM):
namecheck_list.extend([gfnm("gammabarDD_dD",i,j,k),
gfnm("gammabarDD_dupD",i,j,k),
gfnm("GammabarUDD",i,j,k)])
exprcheck_list.extend([Bq.gammabarDD_dD[i][j][k],Bq.gammabarDD_dupD[i][j][k],Bq.GammabarUDD[i][j][k]])
expr_list.extend( [gammabarDD_dD[i][j][k],gammabarDD_dupD[i][j][k],GammabarUDD[i][j][k]])
# Step 5:
Bq.detgammabar_and_derivs()
namecheck_list.extend(["detgammabar"])
exprcheck_list.extend([Bq.detgammabar])
expr_list.extend([detgammabar])
for i in range(DIM):
namecheck_list.extend([gfnm("detgammabar_dD",i)])
exprcheck_list.extend([Bq.detgammabar_dD[i]])
expr_list.extend([detgammabar_dD[i]])
for j in range(DIM):
namecheck_list.extend([gfnm("detgammabar_dDD",i,j)])
exprcheck_list.extend([Bq.detgammabar_dDD[i][j]])
expr_list.extend([detgammabar_dDD[i][j]])
# Step 6:
Bq.AbarUU_AbarUD_trAbar_AbarDD_dD()
namecheck_list.extend(["trAbar"])
exprcheck_list.extend([Bq.trAbar])
expr_list.extend([trAbar])
for i in range(DIM):
for j in range(DIM):
namecheck_list.extend([gfnm("AbarUU",i,j),gfnm("AbarUD",i,j)])
exprcheck_list.extend([Bq.AbarUU[i][j],Bq.AbarUD[i][j]])
expr_list.extend([AbarUU[i][j],AbarUD[i][j]])
for k in range(DIM):
namecheck_list.extend([gfnm("AbarDD_dD",i,j,k)])
exprcheck_list.extend([Bq.AbarDD_dD[i][j][k]])
expr_list.extend([AbarDD_dD[i][j][k]])
# Step 7:
Bq.RicciBar__gammabarDD_dHatD__DGammaUDD__DGammaU()
for i in range(DIM):
namecheck_list.extend([gfnm("DGammaU",i)])
exprcheck_list.extend([Bq.DGammaU[i]])
expr_list.extend([DGammaU[i]])
for j in range(DIM):
namecheck_list.extend([gfnm("RbarDD",i,j)])
exprcheck_list.extend([Bq.RbarDD[i][j]])
expr_list.extend([RbarDD[i][j]])
for k in range(DIM):
namecheck_list.extend([gfnm("DGammaUDD",i,j,k),gfnm("gammabarDD_dHatD",i,j,k)])
exprcheck_list.extend([Bq.DGammaUDD[i][j][k],Bq.gammabarDD_dHatD[i][j][k]])
expr_list.extend([DGammaUDD[i][j][k],gammabarDD_dHatD[i][j][k]])
# Step 8:
Bq.betaU_derivs()
for i in range(DIM):
for j in range(DIM):
namecheck_list.extend([gfnm("betaU_dD",i,j),gfnm("betaU_dupD",i,j)])
exprcheck_list.extend([Bq.betaU_dD[i][j],Bq.betaU_dupD[i][j]])
expr_list.extend([betaU_dD[i][j],betaU_dupD[i][j]])
for k in range(DIM):
namecheck_list.extend([gfnm("betaU_dDD",i,j,k)])
exprcheck_list.extend([Bq.betaU_dDD[i][j][k]])
expr_list.extend([betaU_dDD[i][j][k]])
# Step 9:
Bq.phi_and_derivs()
#phi_dD,phi_dupD,phi_dDD,exp_m4phi,phi_dBarD,phi_dBarDD
namecheck_list.extend(["exp_m4phi"])
exprcheck_list.extend([Bq.exp_m4phi])
expr_list.extend([exp_m4phi])
for i in range(DIM):
namecheck_list.extend([gfnm("phi_dD",i),gfnm("phi_dupD",i),gfnm("phi_dBarD",i)])
exprcheck_list.extend([Bq.phi_dD[i],Bq.phi_dupD[i],Bq.phi_dBarD[i]])
expr_list.extend( [phi_dD[i],phi_dupD[i],phi_dBarD[i]])
for j in range(DIM):
namecheck_list.extend([gfnm("phi_dDD",i,j),gfnm("phi_dBarDD",i,j)])
exprcheck_list.extend([Bq.phi_dDD[i][j],Bq.phi_dBarDD[i][j]])
expr_list.extend([phi_dDD[i][j],phi_dBarDD[i][j]])
for i in range(len(expr_list)):
comp_func(expr_list[i],exprcheck_list[i],namecheck_list[i])
if all_passed:
print("ALL TESTS PASSED!")
###Output
ALL TESTS PASSED!
###Markdown
Step 11: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-BSSN_quantities.pdf](Tutorial-BSSN_quantities.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
!jupyter nbconvert --to latex --template latex_nrpy_style.tplx --log-level='WARN' Tutorial-BSSN_quantities.ipynb
!pdflatex -interaction=batchmode Tutorial-BSSN_quantities.tex
!pdflatex -interaction=batchmode Tutorial-BSSN_quantities.tex
!pdflatex -interaction=batchmode Tutorial-BSSN_quantities.tex
!rm -f Tut*.out Tut*.aux Tut*.log
###Output
[NbConvertApp] Converting notebook Tutorial-BSSN_quantities.ipynb to latex
[NbConvertApp] Writing 147264 bytes to Tutorial-BSSN_quantities.tex
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
###Markdown
BSSN Quantities Author: Zach Etienne Formatting improvements courtesy Brandon Clark**This module has been verified against a trusted version of the code.** Introduction:This module documents and constructs a number of quantities useful for building symbolic (SymPy) expressions in terms of the core BSSN quantities $\left\{h_{i j},a_{i j},\phi, K, \lambda^{i}, \alpha, \mathcal{V}^i, \mathcal{B}^i\right\}$, as defined in [Ruchlin, Etienne, and Baumgarte (2018)](https://arxiv.org/abs/1712.07658) (see also [Baumgarte, Montero, Cordero-Carrión, and Müller (2012)](https://arxiv.org/abs/1211.6632)). A Note on Notation:As is standard in NRPy+, * Greek indices refer to four-dimensional quantities where the zeroth component indicates temporal (time) component.* Latin indices refer to three-dimensional quantities. This is somewhat counterintuitive since Python always indexes its lists starting from 0. As a result, the zeroth component of three-dimensional quantities will necessarily indicate the first *spatial* direction.As a corollary, any expressions involving mixed Greek and Latin indices will need to offset one set of indices by one: A Latin index in a four-vector will be incremented and a Greek index in a three-vector will be decremented (however, the latter case does not occur in this tutorial module). Table of Contents$$\label{toc}$$Each family of quantities is constructed within a given function (**boldfaced** below). This module is organized as follows1. [Step 1](initializenrpy): Initialize needed Python/NRPy+ modules1. [Step 2](declare_bssn_gfs): **`declare_BSSN_gridfunctions_if_not_declared_already()`**: Declare all of the core BSSN variables $\left\{h_{i j},a_{i j},\text{cf}, K, \lambda^{i}, \alpha, \mathcal{V}^i, \mathcal{B}^i\right\}$ and register them as gridfunctions1. [Step 3](rescaling_tensors) Rescaling tensors to avoid coordinate singularities 1. [Step 3.a](bssn_basic_tensors) **`BSSN_basic_tensors()`**: Define all basic conformal BSSN tensors $\left\{\bar{\gamma}_{i j},\bar{A}_{i j},\bar{\Lambda}^{i}, \beta^i, B^i\right\}$ in terms of BSSN gridfunctions1. [Step 4](bssn_barred_metric__inverse_and_derivs): **`gammabar__inverse_and_derivs()`**: $\bar{\gamma}^{ij}$, and spatial derivatives of $\bar{\gamma}_{ij}$ including $\bar{\Gamma}^{i}_{jk}$ 1. [Step 4.a](bssn_barred_metric__inverse): Inverse conformal 3-metric: $\bar{\gamma^{ij}}$ 1. [Step 4.b](bssn_barred_metric__derivs): Derivatives of the conformal 3-metric $\bar{\gamma}_{ij,k}$ and $\bar{\gamma}_{ij,kl}$, and associated "barred" Christoffel symbols $\bar{\Gamma}^{i}_{jk}$1. [Step 5](detgammabar_and_derivs): **`detgammabar_and_derivs()`**: $\det \bar{\gamma}_{ij}$ and its derivatives1. [Step 6](abar_quantities): **`AbarUU_AbarUD_trAbar()`**: Quantities related to conformal traceless extrinsic curvature $\bar{A}_{ij}$: $\bar{A}^{ij}$, $\bar{A}^i_j$, and $\bar{A}^k_k$1. [Step 7](rbar): **`RicciBar__gammabarDD_dHatD__DGammaUDD__DGammaU()`**: The conformal ("barred") Ricci tensor $\bar{R}_{ij}$ and associated quantities 1. [Step 7.a](rbar_part1): Conformal Ricci tensor, part 1: The $\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j}$ term 1. [Step 7.b](rbar_part2): Conformal Ricci tensor, part 2: The $\bar{\gamma}_{k(i} \hat{D}_{j)} \bar{\Lambda}^{k}$ term 1. [Step 7.c](rbar_part3): Conformal Ricci tensor, part 3: The $\Delta^{k} \Delta_{(i j) k} + \bar{\gamma}^{k l} \left (2 \Delta_{k(i}^{m} \Delta_{j) m l} + \Delta_{i k}^{m} \Delta_{m j l} \right )$ terms 1. [Step 7.d](summing_rbar_terms): Summing the terms and defining $\bar{R}_{ij}$1. 
[Step 8](beta_derivs): **`betaU_derivs()`**: Unrescaled shift vector $\beta^i$ and spatial derivatives $\beta^i_{,j}$ and $\beta^i_{,jk}$1. [Step 9](phi_and_derivs): **`phi_and_derivs()`**: Standard BSSN conformal factor $\phi$, and its derivatives $\phi_{,i}$, $\phi_{,ij}$, $\bar{D}_j \phi_i$, and $\bar{D}_j\bar{D}_k \phi_i$ 1. [Step 9.a](phi_ito_cf): $\phi$ in terms of the chosen (possibly non-standard) conformal factor variable $\text{cf}$ (e.g., $\text{cf}=W=e^{-4\phi}$) 1. [Step 9.b](phi_covariant_derivs): Partial and covariant derivatives of $\phi$1. [Step 10](code_validation): Code Validation against BSSN.BSSN_quantities NRPy+ module1. [Step 11](latex_pdf_output): Output this module to $\LaTeX$-formatted PDF Step 1: Initialize needed Python/NRPy+ modules \[Back to [top](toc)\]$$\label{initializenrpy}$$
###Code
# Step 1: Import all needed modules from NRPy+:
import NRPy_param_funcs as par
import sympy as sp
import indexedexp as ixp
import grid as gri
import reference_metric as rfm
# Step 1.a: Set the coordinate system for the numerical grid
par.set_parval_from_str("reference_metric::CoordSystem","Spherical")
# Step 1.b: Given the chosen coordinate system, set up
# corresponding reference metric and needed
# reference metric quantities
# The following function call sets up the reference metric
# and related quantities, including rescaling matrices ReDD,
# ReU, and hatted quantities.
rfm.reference_metric()
# Step 1.c: Set spatial dimension (must be 3 for BSSN, as BSSN is
# a 3+1-dimensional decomposition of the general
# relativistic field equations)
DIM = 3
par.set_parval_from_str("grid::DIM",DIM)
# Step 1.d: Declare/initialize parameters for this module
thismodule = "BSSN_quantities"
par.initialize_param(par.glb_param("char", thismodule, "EvolvedConformalFactor_cf", "W"))
par.initialize_param(par.glb_param("bool", thismodule, "detgbarOverdetghat_equals_one", "True"))
###Output
_____no_output_____
###Markdown
Step 2: `declare_BSSN_gridfunctions_if_not_declared_already()`: Declare all of the core BSSN variables $\left\{h_{i j},a_{i j},\text{cf}, K, \lambda^{i}, \alpha, \mathcal{V}^i, \mathcal{B}^i\right\}$ and register them as gridfunctions \[Back to [top](toc)\]$$\label{declare_bssn_gfs}$$
###Code
# Step 2: Register all needed BSSN gridfunctions.
# Step 2.a: Register indexed quantities, using ixp.register_... functions
hDD = ixp.register_gridfunctions_for_single_rank2("EVOL", "hDD", "sym01")
aDD = ixp.register_gridfunctions_for_single_rank2("EVOL", "aDD", "sym01")
lambdaU = ixp.register_gridfunctions_for_single_rank1("EVOL", "lambdaU")
vetU = ixp.register_gridfunctions_for_single_rank1("EVOL", "vetU")
betU = ixp.register_gridfunctions_for_single_rank1("EVOL", "betU")
# Step 2.b: Register scalar quantities, using gri.register_gridfunctions()
trK, cf, alpha = gri.register_gridfunctions("EVOL",["trK", "cf", "alpha"])
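# trK, cf, and alpha are scalars, so no tensor rescaling is needed; cf is the evolved
# conformal-factor variable whose interpretation is set by EvolvedConformalFactor_cf (Step 9).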
###Output
_____no_output_____
###Markdown
Step 3: Rescaling tensors to avoid coordinate singularities \[Back to [top](toc)\]$$\label{rescaling_tensors}$$While the [covariant form of the BSSN evolution equations](Tutorial-BSSNCurvilinear.ipynb) are properly covariant (with the potential exception of the shift evolution equation, since the shift is a [freely specifiable gauge quantity](https://en.wikipedia.org/wiki/Gauge_fixing)), components of the rank-1 and rank-2 tensors $\varepsilon_{i j}$, $\bar{A}_{i j}$, and $\bar{\Lambda}^{i}$ will drop to zero (destroying information) or diverge (to $\infty$) at coordinate singularities. The good news is, this singular behavior is well-understood in terms of the scale factors of the reference metric, enabling us to define rescaled version of these quantities that are well behaved (so that, e.g., they can be finite differenced).For example, given a smooth vector *in a 3D Cartesian basis* $\bar{\Lambda}^{i}$, all components $\bar{\Lambda}^{x}$, $\bar{\Lambda}^{y}$, and $\bar{\Lambda}^{z}$ will be smooth (by assumption). When changing the basis to spherical coordinates (applying the appropriate Jacobian matrix transformation), we will find that since $\phi = \arctan(y/x)$, $\bar{\Lambda}^{\phi}$ is given by\begin{align}\bar{\Lambda}^{\phi} &= \frac{\partial \phi}{\partial x} \bar{\Lambda}^{x} + \frac{\partial \phi}{\partial y} \bar{\Lambda}^{y} + \frac{\partial \phi}{\partial z} \bar{\Lambda}^{z} \\&= -\frac{y}{\sqrt{x^2+y^2}} \bar{\Lambda}^{x} + \frac{x}{\sqrt{x^2+y^2}} \bar{\Lambda}^{y} \\&= -\frac{y}{r \sin\theta} \bar{\Lambda}^{x} + \frac{x}{r \sin\theta} \bar{\Lambda}^{y}.\end{align}Thus $\bar{\Lambda}^{\phi}$ diverges at all points where $r\sin\theta=0$ due to the $\frac{1}{r\sin\theta}$ that appear in the Jacobian transformation. This divergence might pose no problem on cell-centered grids that avoid $r \sin\theta=0$, except that the BSSN equations require that *first and second derivatives* of these quantities be taken. Usual strategies for numerical approximation of these derivatives (e.g., finite difference methods) will "see" these divergences and errors generally will not drop to zero with increased numerical sampling of the functions at points near where the functions diverge.However, notice that if we define $\lambda^{\phi}$ such that$$\bar{\Lambda}^{\phi} = \frac{1}{r\sin\theta} \lambda^{\phi},$$then $\lambda^{\phi}$ will be smooth as well. Avoiding such singularities can be generalized, so long as $\lambda^{\phi}$ is defined as:$$\bar{\Lambda}^{i} = \frac{\lambda^i}{\text{scalefactor[i]}} ,$$where scalefactor\[i\] is the $i$th scale factor in the given coordinate system. In an identical fashion, we define the smooth versions of $\beta^i$ and $B^i$ to be $\mathcal{V}^i$ and $\mathcal{B}^i$, respectively. We refer to $\mathcal{V}^i$ and $\mathcal{B}^i$ as vet\[i\] and bet\[i\] respectively in the code after the Hebrew letters that bear some resemblance. Similarly, we define the smooth versions of $\bar{A}_{ij}$ and $\varepsilon_{ij}$ ($a_{ij}$ and $h_{ij}$, respectively) via\begin{align}\bar{A}_{ij} &= \text{scalefactor[i]}\ \text{scalefactor[j]}\ a_{ij} \\\varepsilon_{ij} &= \text{scalefactor[i]}\ \text{scalefactor[j]}\ h_{ij},\end{align}where in this case we *multiply* due to the fact that these tensors are purely covariant (as opposed to contravariant). 
To slightly simplify the notation, in NRPy+ we define the *rescaling matrices* ReU\[i\] and ReDD\[i\]\[j\], such that\begin{align}\text{ReU[i]} &= 1 / \text{scalefactor[i]} \\\text{ReDD[i][j]} &= \text{scalefactor[i] scalefactor[j]}.\end{align}Thus, for example, $\bar{A}_{ij}$ and $\bar{\Lambda}^i$ can be expressed as the [Hadamard product](https://en.wikipedia.org/w/index.php?title=Hadamard_product_(matrices)&oldid=852272177) of matrices :\begin{align}\bar{A}_{ij} &= \mathbf{ReDD}\circ\mathbf{a} = \text{ReDD[i][j]} a_{ij} \\\bar{\Lambda}^{i} &= \mathbf{ReU}\circ\mathbf{\lambda} = \text{ReU[i]} \lambda^i,\end{align}where no sums are implied by the repeated indices.Further, since the scale factors are *time independent*, \begin{align}\partial_t \bar{A}_{ij} &= \text{ReDD[i][j]}\ \partial_t a_{ij} \\\partial_t \bar{\gamma}_{ij} &= \partial_t \left(\varepsilon_{ij} + \hat{\gamma}_{ij}\right)\\&= \partial_t \varepsilon_{ij} \\&= \text{scalefactor[i]}\ \text{scalefactor[j]}\ \partial_t h_{ij}.\end{align}Thus instead of taking space or time derivatives of BSSN quantities$$\left\{\bar{\gamma}_{i j},\bar{A}_{i j},\phi, K, \bar{\Lambda}^{i}, \alpha, \beta^i, B^i\right\},$$ across coordinate singularities, we instead factor out the singular scale factors according to this prescription so that space or time derivatives of BSSN quantities are written in terms of finite-difference derivatives of the *rescaled* variables $$\left\{h_{i j},a_{i j},\text{cf}, K, \lambda^{i}, \alpha, \mathcal{V}^i, \mathcal{B}^i\right\},$$ and *exact* expressions for (spatial) derivatives of scale factors. Note that $\text{cf}$ is the chosen conformal factor (supported choices for $\text{cf}$ are discussed in [Step 6.a](phi_ito_cf)). As an example, let's evaluate $\bar{\Lambda}^{i}_{\, ,\, j}$ according to this prescription:\begin{align}\bar{\Lambda}^{i}_{\, ,\, j} &= -\frac{\lambda^i}{(\text{ReU[i]})^2}\ \partial_j \left(\text{ReU[i]}\right) + \frac{\partial_j \lambda^i}{\text{ReU[i]}} \\&= -\frac{\lambda^i}{(\text{ReU[i]})^2}\ \text{ReUdD[i][j]} + \frac{\partial_j \lambda^i}{\text{ReU[i]}}.\end{align}Here, the derivative $\text{ReUdD[i][j]}$ **is computed symbolically and exactly** using SymPy, and the derivative $\partial_j \lambda^i$ represents a derivative of a *smooth* quantity (so long as $\bar{\Lambda}^{i}$ is smooth in the Cartesian basis). 
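As a small illustrative sketch (not part of the original module; the variable name `example_LambdabarPhi` below is hypothetical and used nowhere else), the following cell rebuilds $\bar{\Lambda}^{\phi}$ from the rescaled $\lambda^{\phi}$ registered in Step 2. In the Spherical coordinates chosen in Step 1, `rfm.ReU[2]` should carry the $1/(r\sin\theta)$ factor derived above.
###Code
# Illustrative sketch only: undo the rescaling of the phi-component of Lambdabar^i.
# example_LambdabarPhi is a hypothetical name introduced purely for demonstration;
# the full rescaling of all components is performed in Step 3.a below.
example_LambdabarPhi = lambdaU[2]*rfm.ReU[2]
###Output
_____no_output_____
###Markdown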
Step 3.a: `BSSN_basic_tensors()`: Define all basic conformal BSSN tensors $\left\{\bar{\gamma}_{i j},\bar{A}_{i j},\bar{\Lambda}^{i}, \beta^i, B^i\right\}$ in terms of BSSN gridfunctions \[Back to [top](toc)\]$$\label{bssn_basic_tensors}$$The `BSSN_vars__tensors()` function defines the tensorial BSSN quantities $\left\{\bar{\gamma}_{i j},\bar{A}_{i j},\bar{\Lambda}^{i}, \beta^i, B^i\right\}$, in terms of the rescaled "base" tensorial quantities $\left\{h_{i j},a_{i j}, \lambda^{i}, \mathcal{V}^i, \mathcal{B}^i\right\},$ respectively:\begin{align}\bar{\gamma}_{i j} &= \hat{\gamma}_{ij} + \varepsilon_{ij}, \text{ where } \varepsilon_{ij} = h_{ij} \circ \text{ReDD[i][j]} \\\bar{A}_{i j} &= a_{ij} \circ \text{ReDD[i][j]} \\\bar{\Lambda}^{i} &= \lambda^i \circ \text{ReU[i]} \\\beta^{i} &= \mathcal{V}^i \circ \text{ReU[i]} \\B^{i} &= \mathcal{B}^i \circ \text{ReU[i]}\end{align}Rescaling vectors and tensors are built upon the scale factors for the chosen (in general, singular) coordinate system, which are defined in NRPy+'s [reference_metric.py](../edit/reference_metric.py) ([Tutorial](Tutorial-Reference_Metric.ipynb)), and the rescaled variables are defined in the stub function [BSSN/BSSN_rescaled_vars.py](../edit/BSSN/BSSN_rescaled_vars.py). Here we implement `BSSN_vars__tensors()`:
###Code
# Step 3.a: Define all basic conformal BSSN tensors in terms of BSSN gridfunctions
# Step 3.a.i: gammabarDD and AbarDD:
gammabarDD = ixp.zerorank2()
AbarDD = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
# gammabar_{ij} = h_{ij}*ReDD[i][j] + gammahat_{ij}
gammabarDD[i][j] = hDD[i][j]*rfm.ReDD[i][j] + rfm.ghatDD[i][j]
# Abar_{ij} = a_{ij}*ReDD[i][j]
AbarDD[i][j] = aDD[i][j]*rfm.ReDD[i][j]
# Step 3.a.ii: LambdabarU, betaU, and BU:
LambdabarU = ixp.zerorank1()
betaU = ixp.zerorank1()
BU = ixp.zerorank1()
for i in range(DIM):
LambdabarU[i] = lambdaU[i]*rfm.ReU[i]
betaU[i] = vetU[i] *rfm.ReU[i]
BU[i] = betU[i] *rfm.ReU[i]
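# Note: no sums over i or (i,j) are implied above; each component is simply scaled by
# its own rescaling factor (a Hadamard product), as described in the previous markdown cell.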
###Output
_____no_output_____
###Markdown
Step 4: `gammabar__inverse_and_derivs()`: $\bar{\gamma}^{ij}$, and spatial derivatives of $\bar{\gamma}_{ij}$ including $\bar{\Gamma}^{i}_{jk}$ \[Back to [top](toc)\]$$\label{bssn_barred_metric__inverse_and_derivs}$$ Step 4.a: Inverse conformal 3-metric: $\bar{\gamma^{ij}}$ \[Back to [top](toc)\]$$\label{bssn_barred_metric__inverse}$$Since $\bar{\gamma}^{ij}$ is the inverse of $\bar{\gamma}_{ij}$, we apply a $3\times 3$ symmetric matrix inversion to compute $\bar{\gamma}^{ij}$.
###Code
# Step 4.a: Inverse conformal 3-metric gammabarUU:
# Step 4.a.i: gammabarUU:
gammabarUU, dummydet = ixp.symm_matrix_inverter3x3(gammabarDD)
###Output
_____no_output_____
###Markdown
Step 4.b: Derivatives of the conformal 3-metric $\bar{\gamma}_{ij,k}$ and $\bar{\gamma}_{ij,kl}$, and associated "barred" Christoffel symbols $\bar{\Gamma}^{i}_{jk}$ \[Back to [top](toc)\]$$\label{bssn_barred_metric__derivs}$$In the BSSN-in-curvilinear coordinates formulation, all quantities must be defined in terms of rescaled quantities $h_{ij}$ and their derivatives (evaluated using finite differences), as well as reference-metric quantities and their derivatives (evaluated exactly using SymPy). For example, $\bar{\gamma}_{ij,k}$ is given by:\begin{align}\bar{\gamma}_{ij,k} &= \partial_k \bar{\gamma}_{ij} \\&= \partial_k \left(\hat{\gamma}_{ij} + \varepsilon_{ij}\right) \\&= \partial_k \left(\hat{\gamma}_{ij} + h_{ij} \text{ReDD[i][j]}\right) \\&= \hat{\gamma}_{ij,k} + h_{ij,k} \text{ReDD[i][j]} + h_{ij} \text{ReDDdD[i][j][k]},\end{align}where $\text{ReDDdD[i][j][k]}$ is computed within rfm.reference_metric().
###Code
# Step 4.b.i: gammabarDD_dD[i][j][k]
# = \hat{\gamma}_{ij,k} + h_{ij,k} \text{ReDD[i][j]} + h_{ij} \text{ReDDdD[i][j][k]}.
gammabarDD_dD = ixp.zerorank3()
hDD_dD = ixp.declarerank3("hDD_dD","sym01")
hDD_dupD = ixp.declarerank3("hDD_dupD","sym01")
gammabarDD_dupD = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
gammabarDD_dD[i][j][k] = rfm.ghatDDdD[i][j][k] + \
hDD_dD[i][j][k]*rfm.ReDD[i][j] + hDD[i][j]*rfm.ReDDdD[i][j][k]
# Compute associated upwinded derivative, needed for the \bar{\gamma}_{ij} RHS
gammabarDD_dupD[i][j][k] = rfm.ghatDDdD[i][j][k] + \
hDD_dupD[i][j][k]*rfm.ReDD[i][j] + hDD[i][j]*rfm.ReDDdD[i][j][k]
###Output
_____no_output_____
###Markdown
By extension, the second derivative $\bar{\gamma}_{ij,kl}$ is given by\begin{align}\bar{\gamma}_{ij,kl} &= \partial_l \left(\hat{\gamma}_{ij,k} + h_{ij,k} \text{ReDD[i][j]} + h_{ij} \text{ReDDdD[i][j][k]}\right)\\&= \hat{\gamma}_{ij,kl} + h_{ij,kl} \text{ReDD[i][j]} + h_{ij,k} \text{ReDDdD[i][j][l]} + h_{ij,l} \text{ReDDdD[i][j][k]} + h_{ij} \text{ReDDdDD[i][j][k][l]}\end{align}
###Code
# Step 4.b.ii: Compute gammabarDD_dDD in terms of the rescaled BSSN quantity hDD
# and its derivatives, as well as the reference metric and rescaling
# matrix, and its derivatives (expression given below):
hDD_dDD = ixp.declarerank4("hDD_dDD","sym01_sym23")
gammabarDD_dDD = ixp.zerorank4()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
# gammabar_{ij,kl} = gammahat_{ij,kl}
# + h_{ij,kl} ReDD[i][j]
# + h_{ij,k} ReDDdD[i][j][l] + h_{ij,l} ReDDdD[i][j][k]
# + h_{ij} ReDDdDD[i][j][k][l]
gammabarDD_dDD[i][j][k][l] = rfm.ghatDDdDD[i][j][k][l]
gammabarDD_dDD[i][j][k][l] += hDD_dDD[i][j][k][l]*rfm.ReDD[i][j]
gammabarDD_dDD[i][j][k][l] += hDD_dD[i][j][k]*rfm.ReDDdD[i][j][l] + \
hDD_dD[i][j][l]*rfm.ReDDdD[i][j][k]
gammabarDD_dDD[i][j][k][l] += hDD[i][j]*rfm.ReDDdDD[i][j][k][l]
###Output
_____no_output_____
###Markdown
Finally, we compute the Christoffel symbol associated with the barred 3-metric: $\bar{\Gamma}^{i}_{kl}$:$$\bar{\Gamma}^{i}_{kl} = \frac{1}{2} \bar{\gamma}^{im} \left(\bar{\gamma}_{mk,l} + \bar{\gamma}_{ml,k} - \bar{\gamma}_{kl,m} \right)$$
###Code
# Step 4.b.iii: Define barred Christoffel symbol \bar{\Gamma}^{i}_{kl} = GammabarUDD[i][k][l] (see expression below)
GammabarUDD = ixp.zerorank3()
for i in range(DIM):
for k in range(DIM):
for l in range(DIM):
for m in range(DIM):
# Gammabar^i_{kl} = 1/2 * gammabar^{im} ( gammabar_{mk,l} + gammabar_{ml,k} - gammabar_{kl,m}):
GammabarUDD[i][k][l] += sp.Rational(1,2)*gammabarUU[i][m]* \
(gammabarDD_dD[m][k][l] + gammabarDD_dD[m][l][k] - gammabarDD_dD[k][l][m])
###Output
_____no_output_____
###Markdown
Step 5: `detgammabar_and_derivs()`: $\det \bar{\gamma}_{ij}$ and its derivatives \[Back to [top](toc)\]$$\label{detgammabar_and_derivs}$$As described just before Section III of [Baumgarte *et al* (2012)](https://arxiv.org/pdf/1211.6632.pdf), we are free to choose $\det \bar{\gamma}_{ij}$, which should remain fixed in time.As in [Baumgarte *et al* (2012)](https://arxiv.org/pdf/1211.6632.pdf) generally we make the choice $\det \bar{\gamma}_{ij} = \det \hat{\gamma}_{ij}$, but *this need not be the case; we could choose to set $\det \bar{\gamma}_{ij}$ to another expression.*In case we do not choose to set $\det \bar{\gamma}_{ij}/\det \hat{\gamma}_{ij}=1$, below we begin the implementation of a gridfunction, $\text{detgbarOverdetghat}$, which defines an alternative expression in its place. ***$\det \bar{\gamma}_{ij}/\det \hat{\gamma}_{ij}=\text{detgbarOverdetghat}\ne 1$ is not yet implemented.*** However, we can define $\text{detgammabar}$ and its derivatives in terms of a generic $\text{detgbarOverdetghat}$ and $\det \hat{\gamma}_{ij}$ and their derivatives:\begin{align}\text{detgammabar} &= \det \bar{\gamma}_{ij} = \text{detgbarOverdetghat} \cdot \left(\det \hat{\gamma}_{ij}\right) \\\text{detgammabar}\_\text{dD[k]} &= \left(\det \bar{\gamma}_{ij}\right)_{,k} = \text{detgbarOverdetghat}\_\text{dD[k]} \det \hat{\gamma}_{ij} + \text{detgbarOverdetghat} \left(\det \hat{\gamma}_{ij}\right)_{,k} \\\end{align}https://en.wikipedia.org/wiki/Determinant#Properties_of_the_determinant
###Code
# Step 5: det(gammabarDD) and its derivatives
detgbarOverdetghat = sp.sympify(1)
detgbarOverdetghat_dD = ixp.zerorank1()
detgbarOverdetghat_dDD = ixp.zerorank2()
if par.parval_from_str(thismodule+"::detgbarOverdetghat_equals_one") == "False":
print("Error: detgbarOverdetghat_equals_one=\"False\" is not fully implemented yet.")
exit(1)
## Approach for implementing detgbarOverdetghat_equals_one=False:
# detgbarOverdetghat = gri.register_gridfunctions("AUX", ["detgbarOverdetghat"])
# detgbarOverdetghatInitial = gri.register_gridfunctions("AUX", ["detgbarOverdetghatInitial"])
# detgbarOverdetghat_dD = ixp.declarerank1("detgbarOverdetghat_dD")
# detgbarOverdetghat_dDD = ixp.declarerank2("detgbarOverdetghat_dDD", "sym01")
# Step 5.b: Define detgammabar, detgammabar_dD, and detgammabar_dDD (needed for
#            \partial_t \bar{\Lambda}^i below)
detgammabar = detgbarOverdetghat * rfm.detgammahat
detgammabar_dD = ixp.zerorank1()
for i in range(DIM):
detgammabar_dD[i] = detgbarOverdetghat_dD[i] * rfm.detgammahat + detgbarOverdetghat * rfm.detgammahatdD[i]
detgammabar_dDD = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
detgammabar_dDD[i][j] = detgbarOverdetghat_dDD[i][j] * rfm.detgammahat + \
detgbarOverdetghat_dD[i] * rfm.detgammahatdD[j] + \
detgbarOverdetghat_dD[j] * rfm.detgammahatdD[i] + \
detgbarOverdetghat * rfm.detgammahatdDD[i][j]
###Output
_____no_output_____
###Markdown
Step 6: `AbarUU_AbarUD_trAbar_AbarDD_dD()`: Quantities related to conformal traceless extrinsic curvature $\bar{A}_{ij}$: $\bar{A}^{ij}$, $\bar{A}^i_j$, and $\bar{A}^k_k$ \[Back to [top](toc)\]$$\label{abar_quantities}$$$\bar{A}^{ij}$ is given by application of the raising operators (a.k.a., the inverse 3-metric) $\bar{\gamma}^{jk}$ on both of the covariant ("down") components:$$\bar{A}^{ij} = \bar{\gamma}^{ik}\bar{\gamma}^{jl} \bar{A}_{kl}.$$$\bar{A}^i_j$ is given by a single application of the raising operator (a.k.a., the inverse 3-metric) $\bar{\gamma}^{ik}$ on $\bar{A}_{kj}$:$$\bar{A}^i_j = \bar{\gamma}^{ik}\bar{A}_{kj}.$$The trace of $\bar{A}_{ij}$, $\bar{A}^k_k$, is given by a contraction with the barred 3-metric:$$\text{Tr}(\bar{A}_{ij}) = \bar{A}^k_k = \bar{\gamma}^{kj}\bar{A}_{jk}.$$Note that while $\bar{A}_{ij}$ is defined as the *traceless* conformal extrinsic curvature, it may acquire a nonzero trace (assuming the initial data impose tracelessness) due to numerical error. $\text{Tr}(\bar{A}_{ij})$ is included in the BSSN equations to drive $\text{Tr}(\bar{A}_{ij})$ to zero.In terms of rescaled BSSN quantities, $\bar{A}_{ij}$ is given by$$\bar{A}_{ij} = \text{ReDD[i][j]} a_{ij},$$so in terms of the same quantities, $\bar{A}_{ij,k}$ is given by$$\bar{A}_{ij,k} = \text{ReDDdD[i][j][k]} a_{ij} + \text{ReDD[i][j]} a_{ij,k}.$$
###Code
# Step 6: Quantities related to conformal traceless extrinsic curvature
# Step 6.a.i: Compute Abar^{ij} in terms of Abar_{ij} and gammabar^{ij}
AbarUU = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
# Abar^{ij} = gammabar^{ik} gammabar^{jl} Abar_{kl}
AbarUU[i][j] += gammabarUU[i][k]*gammabarUU[j][l]*AbarDD[k][l]
# Step 6.a.ii: Compute Abar^i_j in terms of Abar_{ij} and gammabar^{ij}
AbarUD = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
# Abar^i_j = gammabar^{ik} Abar_{kj}
AbarUD[i][j] += gammabarUU[i][k]*AbarDD[k][j]
# Step 6.a.iii: Compute Abar^k_k = trace of Abar:
trAbar = sp.sympify(0)
for k in range(DIM):
for j in range(DIM):
# Abar^k_k = gammabar^{kj} Abar_{jk}
trAbar += gammabarUU[k][j]*AbarDD[j][k]
# Step 6.a.iv: Compute Abar_{ij,k}
AbarDD_dD = ixp.zerorank3()
AbarDD_dupD = ixp.zerorank3()
aDD_dD = ixp.declarerank3("aDD_dD" ,"sym01")
aDD_dupD = ixp.declarerank3("aDD_dupD","sym01")
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
AbarDD_dupD[i][j][k] = rfm.ReDDdD[i][j][k]*aDD[i][j] + rfm.ReDD[i][j]*aDD_dupD[i][j][k]
AbarDD_dD[i][j][k] = rfm.ReDDdD[i][j][k]*aDD[i][j] + rfm.ReDD[i][j]*aDD_dD[ i][j][k]
###Output
_____no_output_____
###Markdown
Step 7: `RicciBar__gammabarDD_dHatD__DGammaUDD__DGammaU()`: The conformal ("barred") Ricci tensor $\bar{R}_{ij}$ and associated quantities \[Back to [top](toc)\]$$\label{rbar}$$Let's compute perhaps the most complicated expression in the BSSN evolution equations, the conformal Ricci tensor:\begin{align} \bar{R}_{i j} {} = {} & - \frac{1}{2} \bar{\gamma}^{k l} \hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} + \bar{\gamma}_{k(i} \hat{D}_{j)} \bar{\Lambda}^{k} + \Delta^{k} \Delta_{(i j) k} \nonumber \\ & + \bar{\gamma}^{k l} \left (2 \Delta_{k(i}^{m} \Delta_{j) m l} + \Delta_{i k}^{m} \Delta_{m j l} \right ) \; .\end{align}Let's tackle the $\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j}$ term first: Step 7.a: Conformal Ricci tensor, part 1: The $\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j}$ term \[Back to [top](toc)\]$$\label{rbar_part1}$$First note that the covariant derivative of a metric with respect to itself is zero$$\hat{D}_{l} \hat{\gamma}_{ij} = 0,$$so $$\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} = \hat{D}_{k} \hat{D}_{l} \left(\hat{\gamma}_{i j} + \varepsilon_{ij}\right) = \hat{D}_{k} \hat{D}_{l} \varepsilon_{ij}.$$Next, the covariant derivative of a tensor is given by (from the [wikipedia article on covariant differentiation](https://en.wikipedia.org/wiki/Covariant_derivative)):\begin{align} {(\nabla_{e_c} T)^{a_1 \ldots a_r}}_{b_1 \ldots b_s} = {} &\frac{\partial}{\partial x^c}{T^{a_1 \ldots a_r}}_{b_1 \ldots b_s} \\ &+ \,{\Gamma ^{a_1}}_{dc} {T^{d a_2 \ldots a_r}}_{b_1 \ldots b_s} + \cdots + {\Gamma^{a_r}}_{dc} {T^{a_1 \ldots a_{r-1}d}}_{b_1 \ldots b_s} \\ &-\,{\Gamma^d}_{b_1 c} {T^{a_1 \ldots a_r}}_{d b_2 \ldots b_s} - \cdots - {\Gamma^d}_{b_s c} {T^{a_1 \ldots a_r}}_{b_1 \ldots b_{s-1} d}.\end{align}Therefore, $$\hat{D}_{l} \bar{\gamma}_{i j} = \hat{D}_{l} \varepsilon_{i j} = \varepsilon_{i j,l} - \hat{\Gamma}^m_{i l} \varepsilon_{m j} -\hat{\Gamma}^m_{j l} \varepsilon_{i m}.$$Since the covariant first derivative is a tensor, the covariant second derivative is given by (same as [Eq. 27 in Baumgarte et al (2012)](https://arxiv.org/pdf/1211.6632.pdf))\begin{align}\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} &= \hat{D}_{k} \hat{D}_{l} \varepsilon_{i j} \\&= \partial_k \hat{D}_{l} \varepsilon_{i j} - \hat{\Gamma}^m_{lk} \left(\hat{D}_{m} \varepsilon_{i j}\right) - \hat{\Gamma}^m_{ik} \left(\hat{D}_{l} \varepsilon_{m j}\right) - \hat{\Gamma}^m_{jk} \left(\hat{D}_{l} \varepsilon_{i m}\right),\end{align}where the first term is the partial derivative of the expression already derived for $\hat{D}_{l} \varepsilon_{i j}$:\begin{align}\partial_k \hat{D}_{l} \varepsilon_{i j} &= \partial_k \left(\varepsilon_{ij,l} - \hat{\Gamma}^m_{i l} \varepsilon_{m j} -\hat{\Gamma}^m_{j l} \varepsilon_{i m} \right) \\&= \varepsilon_{ij,lk} - \hat{\Gamma}^m_{i l,k} \varepsilon_{m j} - \hat{\Gamma}^m_{i l} \varepsilon_{m j,k} - \hat{\Gamma}^m_{j l,k} \varepsilon_{i m} - \hat{\Gamma}^m_{j l} \varepsilon_{i m,k}.\end{align}In terms of the evolved quantity $h_{ij}$, the derivatives of $\varepsilon_{ij}$ are given by:\begin{align}\varepsilon_{ij,k} &= \partial_k \left(h_{ij} \text{ReDD[i][j]}\right) \\&= h_{ij,k} \text{ReDD[i][j]} + h_{ij} \text{ReDDdD[i][j][k]},\end{align}and\begin{align}\varepsilon_{ij,kl} &= \partial_l \left(h_{ij,k} \text{ReDD[i][j]} + h_{ij} \text{ReDDdD[i][j][k]} \right)\\&= h_{ij,kl} \text{ReDD[i][j]} + h_{ij,k} \text{ReDDdD[i][j][l]} + h_{ij,l} \text{ReDDdD[i][j][k]} + h_{ij} \text{ReDDdDD[i][j][k][l]}.\end{align}
###Code
# Step 7: Conformal Ricci tensor, part 1: The \hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} term
# Step 7.a.i: Define \varepsilon_{ij} = epsDD[i][j]
epsDD = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
epsDD[i][j] = hDD[i][j]*rfm.ReDD[i][j]
# Step 7.a.ii: Define epsDD_dD[i][j][k]
hDD_dD = ixp.declarerank3("hDD_dD","sym01")
epsDD_dD = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
epsDD_dD[i][j][k] = hDD_dD[i][j][k]*rfm.ReDD[i][j] + hDD[i][j]*rfm.ReDDdD[i][j][k]
# Step 7.a.iii: Define epsDD_dDD[i][j][k][l]
hDD_dDD = ixp.declarerank4("hDD_dDD","sym01_sym23")
epsDD_dDD = ixp.zerorank4()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
epsDD_dDD[i][j][k][l] = hDD_dDD[i][j][k][l]*rfm.ReDD[i][j] + \
hDD_dD[i][j][k]*rfm.ReDDdD[i][j][l] + \
hDD_dD[i][j][l]*rfm.ReDDdD[i][j][k] + \
hDD[i][j]*rfm.ReDDdDD[i][j][k][l]
###Output
_____no_output_____
###Markdown
We next compute three quantities derived above:* gammabarDD\_DhatD[i][j][l] = $\hat{D}_{l} \bar{\gamma}_{i j} = \hat{D}_{l} \varepsilon_{i j} = \varepsilon_{i j,l} - \hat{\Gamma}^m_{i l} \varepsilon_{m j} -\hat{\Gamma}^m_{j l} \varepsilon_{i m}$,* gammabarDD\_DhatD\_dD[i][j][l][k] = $\partial_k \hat{D}_{l} \bar{\gamma}_{i j} = \partial_k \hat{D}_{l} \varepsilon_{i j} = \varepsilon_{ij,lk} - \hat{\Gamma}^m_{i l,k} \varepsilon_{m j} - \hat{\Gamma}^m_{i l} \varepsilon_{m j,k} - \hat{\Gamma}^m_{j l,k} \varepsilon_{i m} - \hat{\Gamma}^m_{j l} \varepsilon_{i m,k}$, and* gammabarDD\_DhatDD[i][j][l][k] = $\hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} = \partial_k \hat{D}_{l} \varepsilon_{i j} - \hat{\Gamma}^m_{lk} \left(\hat{D}_{m} \varepsilon_{i j}\right) - \hat{\Gamma}^m_{ik} \left(\hat{D}_{l} \varepsilon_{m j}\right) - \hat{\Gamma}^m_{jk} \left(\hat{D}_{l} \varepsilon_{i m}\right)$.
###Code
# Step 7.a.iv: DhatgammabarDDdD[i][j][l] = \bar{\gamma}_{ij;\hat{l}}
# \bar{\gamma}_{ij;\hat{l}} = \varepsilon_{i j,l}
# - \hat{\Gamma}^m_{i l} \varepsilon_{m j}
# - \hat{\Gamma}^m_{j l} \varepsilon_{i m}
gammabarDD_dHatD = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for l in range(DIM):
gammabarDD_dHatD[i][j][l] = epsDD_dD[i][j][l]
for m in range(DIM):
gammabarDD_dHatD[i][j][l] += - rfm.GammahatUDD[m][i][l]*epsDD[m][j] \
- rfm.GammahatUDD[m][j][l]*epsDD[i][m]
# Step 7.a.v: \bar{\gamma}_{ij;\hat{l},k} = DhatgammabarDD_dHatD_dD[i][j][l][k]:
# \bar{\gamma}_{ij;\hat{l},k} = \varepsilon_{ij,lk}
# - \hat{\Gamma}^m_{i l,k} \varepsilon_{m j}
# - \hat{\Gamma}^m_{i l} \varepsilon_{m j,k}
# - \hat{\Gamma}^m_{j l,k} \varepsilon_{i m}
# - \hat{\Gamma}^m_{j l} \varepsilon_{i m,k}
gammabarDD_dHatD_dD = ixp.zerorank4()
for i in range(DIM):
for j in range(DIM):
for l in range(DIM):
for k in range(DIM):
gammabarDD_dHatD_dD[i][j][l][k] = epsDD_dDD[i][j][l][k]
for m in range(DIM):
gammabarDD_dHatD_dD[i][j][l][k] += -rfm.GammahatUDDdD[m][i][l][k]*epsDD[m][j] \
-rfm.GammahatUDD[m][i][l]*epsDD_dD[m][j][k] \
-rfm.GammahatUDDdD[m][j][l][k]*epsDD[i][m] \
-rfm.GammahatUDD[m][j][l]*epsDD_dD[i][m][k]
# Step 7.a.vi: \bar{\gamma}_{ij;\hat{l}\hat{k}} = DhatgammabarDD_dHatDD[i][j][l][k]
# \bar{\gamma}_{ij;\hat{l}\hat{k}} = \partial_k \hat{D}_{l} \varepsilon_{i j}
# - \hat{\Gamma}^m_{lk} \left(\hat{D}_{m} \varepsilon_{i j}\right)
# - \hat{\Gamma}^m_{ik} \left(\hat{D}_{l} \varepsilon_{m j}\right)
# - \hat{\Gamma}^m_{jk} \left(\hat{D}_{l} \varepsilon_{i m}\right)
gammabarDD_dHatDD = ixp.zerorank4()
for i in range(DIM):
for j in range(DIM):
for l in range(DIM):
for k in range(DIM):
gammabarDD_dHatDD[i][j][l][k] = gammabarDD_dHatD_dD[i][j][l][k]
for m in range(DIM):
gammabarDD_dHatDD[i][j][l][k] += - rfm.GammahatUDD[m][l][k]*gammabarDD_dHatD[i][j][m] \
- rfm.GammahatUDD[m][i][k]*gammabarDD_dHatD[m][j][l] \
- rfm.GammahatUDD[m][j][k]*gammabarDD_dHatD[i][m][l]
###Output
_____no_output_____
###Markdown
Step 7.b: Conformal Ricci tensor, part 2: The $\bar{\gamma}_{k(i} \hat{D}_{j)} \bar{\Lambda}^{k}$ term \[Back to [top](toc)\]$$\label{rbar_part2}$$By definition, the index symmetrization operation is given by:$$\bar{\gamma}_{k(i} \hat{D}_{j)} \bar{\Lambda}^{k} = \frac{1}{2} \left( \bar{\gamma}_{ki} \hat{D}_{j} \bar{\Lambda}^{k} + \bar{\gamma}_{kj} \hat{D}_{i} \bar{\Lambda}^{k} \right),$$and $\bar{\gamma}_{ij}$ is trivially computed ($=\varepsilon_{ij} + \hat{\gamma}_{ij}$) so the only nontrival part to computing this term is in evaluating $\hat{D}_{j} \bar{\Lambda}^{k}$.The covariant derivative is with respect to the hatted metric (i.e. the reference metric), so$$\hat{D}_{j} \bar{\Lambda}^{k} = \partial_j \bar{\Lambda}^{k} + \hat{\Gamma}^{k}_{mj} \bar{\Lambda}^m,$$except we cannot take derivatives of $\bar{\Lambda}^{k}$ directly due to potential issues with coordinate singularities. Instead we write it in terms of the rescaled quantity $\lambda^k$ via$$\bar{\Lambda}^{k} = \lambda^k \text{ReU[k]}.$$Then the expression for $\hat{D}_{j} \bar{\Lambda}^{k}$ becomes$$\hat{D}_{j} \bar{\Lambda}^{k} = \lambda^{k}_{,j} \text{ReU[k]} + \lambda^{k} \text{ReUdD[k][j]} + \hat{\Gamma}^{k}_{mj} \lambda^{m} \text{ReU[m]},$$and the NRPy+ code for this expression is written
###Code
# Step 7.b: Second term of RhatDD: compute \hat{D}_{j} \bar{\Lambda}^{k} = LambarU_dHatD[k][j]
lambdaU_dD = ixp.declarerank2("lambdaU_dD","nosym")
LambarU_dHatD = ixp.zerorank2()
for j in range(DIM):
for k in range(DIM):
LambarU_dHatD[k][j] = lambdaU_dD[k][j]*rfm.ReU[k] + lambdaU[k]*rfm.ReUdD[k][j]
for m in range(DIM):
LambarU_dHatD[k][j] += rfm.GammahatUDD[k][m][j]*lambdaU[m]*rfm.ReU[m]
###Output
_____no_output_____
###Markdown
Step 7.c: Conformal Ricci tensor, part 3: The $\Delta^{k} \Delta_{(i j) k} + \bar{\gamma}^{k l} \left (2 \Delta_{k(i}^{m} \Delta_{j) m l} + \Delta_{i k}^{m} \Delta_{m j l} \right )$ terms \[Back to [top](toc)\]$$\label{rbar_part3}$$Our goal here is to compute the quantities appearing as the final terms of the conformal Ricci tensor:$$\Delta^{k} \Delta_{(i j) k} + \bar{\gamma}^{k l} \left (2 \Delta_{k(i}^{m} \Delta_{j) m l} + \Delta_{i k}^{m} \Delta_{m j l} \right).$$* $\text{DGammaUDD[k][i][j]} = \Delta^k_{ij}$ is simply the difference in Christoffel symbols: $\Delta^{k}_{ij} = \bar{\Gamma}^i_{jk} - \hat{\Gamma}^i_{jk}$, and * $\text{DGammaU[k]} = \Delta^k$ is the contraction: $\bar{\gamma}^{ij} \Delta^{k}_{ij}$Adding these expressions to Ricci is straightforward, since $\bar{\Gamma}^i_{jk}$ and $\bar{\gamma}^{ij}$ were defined above in [Step 4](bssn_barred_metric__inverse_and_derivs), and $\hat{\Gamma}^i_{jk}$ was computed within NRPy+'s reference_metric() function:
###Code
# Step 7.c: Conformal Ricci tensor, part 3: The \Delta^{k} \Delta_{(i j) k}
# + \bar{\gamma}^{k l}*(2 \Delta_{k(i}^{m} \Delta_{j) m l}
# + \Delta_{i k}^{m} \Delta_{m j l}) terms
# Step 7.c.i: Define \Delta^i_{jk} = \bar{\Gamma}^i_{jk} - \hat{\Gamma}^i_{jk} = DGammaUDD[i][j][k]
DGammaUDD = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
DGammaUDD[i][j][k] = GammabarUDD[i][j][k] - rfm.GammahatUDD[i][j][k]
# Step 7.c.ii: Define \Delta^i = \bar{\gamma}^{jk} \Delta^i_{jk}
DGammaU = ixp.zerorank1()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
DGammaU[i] += gammabarUU[j][k] * DGammaUDD[i][j][k]
###Output
_____no_output_____
###Markdown
Next we define $\Delta_{ijk}=\bar{\gamma}_{im}\Delta^m_{jk}$:
###Code
# Step 7.c.iii: Define \Delta_{ijk} = \bar{\gamma}_{im} \Delta^m_{jk}
DGammaDDD = ixp.zerorank3()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for m in range(DIM):
DGammaDDD[i][j][k] += gammabarDD[i][m] * DGammaUDD[m][j][k]
###Output
_____no_output_____
###Markdown
Step 7.d: Summing the terms and defining $\bar{R}_{ij}$ \[Back to [top](toc)\]$$\label{summing_rbar_terms}$$We have now constructed all of the terms going into $\bar{R}_{ij}$:\begin{align} \bar{R}_{i j} {} = {} & - \frac{1}{2} \bar{\gamma}^{k l} \hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j} + \bar{\gamma}_{k(i} \hat{D}_{j)} \bar{\Lambda}^{k} + \Delta^{k} \Delta_{(i j) k} \nonumber \\ & + \bar{\gamma}^{k l} \left (2 \Delta_{k(i}^{m} \Delta_{j) m l} + \Delta_{i k}^{m} \Delta_{m j l} \right ) \; .\end{align}
###Code
# Step 7.d: Summing the terms and defining \bar{R}_{ij}
# Step 7.d.i: Add the first term to RbarDD:
# Rbar_{ij} += - \frac{1}{2} \bar{\gamma}^{k l} \hat{D}_{k} \hat{D}_{l} \bar{\gamma}_{i j}
RbarDD = ixp.zerorank2()
RbarDDpiece = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
RbarDD[i][j] += -sp.Rational(1,2) * gammabarUU[k][l]*gammabarDD_dHatDD[i][j][l][k]
RbarDDpiece[i][j] += -sp.Rational(1,2) * gammabarUU[k][l]*gammabarDD_dHatDD[i][j][l][k]
# Step 7.d.ii: Add the second term to RbarDD:
# Rbar_{ij} += (1/2) * (gammabar_{ki} Lambar^k_{;\hat{j}} + gammabar_{kj} Lambar^k_{;\hat{i}})
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
RbarDD[i][j] += sp.Rational(1,2) * (gammabarDD[k][i]*LambarU_dHatD[k][j] + \
gammabarDD[k][j]*LambarU_dHatD[k][i])
# Step 7.d.iii: Add the remaining term to RbarDD:
# Rbar_{ij} += \Delta^{k} \Delta_{(i j) k} = 1/2 \Delta^{k} (\Delta_{i j k} + \Delta_{j i k})
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
RbarDD[i][j] += sp.Rational(1,2) * DGammaU[k] * (DGammaDDD[i][j][k] + DGammaDDD[j][i][k])
# Step 7.d.iv: Add the final term to RbarDD:
# Rbar_{ij} += \bar{\gamma}^{k l} (\Delta^{m}_{k i} \Delta_{j m l}
# + \Delta^{m}_{k j} \Delta_{i m l}
# + \Delta^{m}_{i k} \Delta_{m j l})
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
for l in range(DIM):
for m in range(DIM):
RbarDD[i][j] += gammabarUU[k][l] * (DGammaUDD[m][k][i]*DGammaDDD[j][m][l] +
DGammaUDD[m][k][j]*DGammaDDD[i][m][l] +
DGammaUDD[m][i][k]*DGammaDDD[m][j][l])
###Output
_____no_output_____
###Markdown
Step 8: betaU_derivs(): The unrescaled shift vector $\beta^i$ spatial derivatives: $\beta^i_{,j}$ & $\beta^i_{,jk}$, written in terms of the rescaled shift vector $\mathcal{V}^i$ \[Back to [top](toc)\]$$\label{beta_derivs}$$This step, which documents the function betaUbar_and_derivs() inside the BSSN.BSSN_unrescaled_and_barred_vars module, defines three quantities:* $\text{betaU}\_\text{dD[i][j]}=\beta^i_{,j} = \left(\mathcal{V}^i \circ \text{ReU[i]}\right)_{,j} = \mathcal{V}^i_{,j} \circ \text{ReU[i]} + \mathcal{V}^i \circ \text{ReUdD[i][j]}$* $\text{betaU}\_\text{dupD[i][j]}$: the same as above, except using *upwinded* finite-difference derivatives to compute $\mathcal{V}^i_{,j}$ instead of *centered* finite-difference derivatives.* $\text{betaU}\_\text{dDD[i][j][k]}=\beta^i_{,jk} = \mathcal{V}^i_{,jk} \circ \text{ReU[i]} + \mathcal{V}^i_{,j} \circ \text{ReUdD[i][k]} + \mathcal{V}^i_{,k} \circ \text{ReUdD[i][j]}+\mathcal{V}^i \circ \text{ReUdDD[i][j][k]}$
###Code
# Step 8: The unrescaled shift vector betaU spatial derivatives:
# betaUdD & betaUdDD, written in terms of the
# rescaled shift vector vetU
vetU_dD = ixp.declarerank2("vetU_dD","nosym")
vetU_dupD = ixp.declarerank2("vetU_dupD","nosym") # Needed for upwinded \beta^i_{,j}
vetU_dDD  = ixp.declarerank3("vetU_dDD","sym12") # Needed for \beta^i_{,jk}
betaU_dD = ixp.zerorank2()
betaU_dupD = ixp.zerorank2() # Needed for, e.g., \beta^i RHS
betaU_dDD = ixp.zerorank3() # Needed for, e.g., \bar{\Lambda}^i RHS
for i in range(DIM):
for j in range(DIM):
betaU_dD[i][j] = vetU_dD[i][j]*rfm.ReU[i] + vetU[i]*rfm.ReUdD[i][j]
betaU_dupD[i][j] = vetU_dupD[i][j]*rfm.ReU[i] + vetU[i]*rfm.ReUdD[i][j] # Needed for \beta^i RHS
for k in range(DIM):
# Needed for, e.g., \bar{\Lambda}^i RHS:
betaU_dDD[i][j][k] = vetU_dDD[i][j][k]*rfm.ReU[i] + vetU_dD[i][j]*rfm.ReUdD[i][k] + \
vetU_dD[i][k]*rfm.ReUdD[i][j] + vetU[i]*rfm.ReUdDD[i][j][k]
###Output
_____no_output_____
###Markdown
Step 9: phi_and_derivs(): Standard BSSN conformal factor $\phi$, and its derivatives $\phi_{,i}$, $\phi_{,ij}$, $\bar{D}_j \phi_i$, and $\bar{D}_j\bar{D}_k \phi_i$, all written in terms of BSSN gridfunctions like $\text{cf}$ \[Back to [top](toc)\]$$\label{phi_and_derivs}$$ Step 9.a: $\phi$ in terms of the chosen (possibly non-standard) conformal factor variable $\text{cf}$ (e.g., $\text{cf}=\chi=e^{-4\phi}$) \[Back to [top](toc)\]$$\label{phi_ito_cf}$$When solving the BSSN time evolution equations across the coordinate singularity (i.e., the "puncture") inside puncture black holes for example, the standard conformal factor $\phi$ becomes very sharp, whereas $\chi=e^{-4\phi}$ is far smoother (see, e.g., [Campanelli, Lousto, Marronetti, and Zlochower (2006)](https://arxiv.org/abs/gr-qc/0511048) for additional discussion). Thus if we choose to rewrite derivatives of $\phi$ in the BSSN equations in terms of finite-difference derivatives $\text{cf}=\chi$, numerical errors will be far smaller near the puncture.The BSSN modules in NRPy+ support three options for the conformal factor variable $\text{cf}$:1. $\text{cf}=\phi$,1. $\text{cf}=\chi=e^{-4\phi}$, and1. $\text{cf}=W = e^{-2\phi}$.The BSSN equations are written in terms of $\phi$ (actually only $e^{-4\phi}$ appears) and derivatives of $\phi$, we now define $e^{-4\phi}$ and derivatives of $\phi$ in terms of the chosen $\text{cf}$.First, we define the base variables needed within the BSSN equations:
###Code
# Step 9: Standard BSSN conformal factor phi,
# and its partial and covariant derivatives,
# all in terms of BSSN gridfunctions like cf
# Step 9.a.i: Define partial derivatives of \phi in terms of evolved quantity "cf":
cf_dD = ixp.declarerank1("cf_dD")
cf_dupD = ixp.declarerank1("cf_dupD") # Needed for \partial_t \phi next.
cf_dDD = ixp.declarerank2("cf_dDD","sym01")
phi_dD = ixp.zerorank1()
phi_dupD = ixp.zerorank1()
phi_dDD = ixp.zerorank2()
exp_m4phi = sp.sympify(0)
###Output
_____no_output_____
###Markdown
Then we define $\phi_{,i}$, $\phi_{,ij}$, and $e^{-4\phi}$ for each of the choices of $\text{cf}$.For $\text{cf}=\phi$, this is trivial:
###Code
# Step 9.a.ii: Assuming cf=phi, define exp_m4phi, phi_dD,
# phi_dupD (upwind finite-difference version of phi_dD), and phi_DD
if par.parval_from_str(thismodule+"::EvolvedConformalFactor_cf") == "phi":
for i in range(DIM):
phi_dD[i] = cf_dD[i]
phi_dupD[i] = cf_dupD[i]
for j in range(DIM):
phi_dDD[i][j] = cf_dDD[i][j]
exp_m4phi = sp.exp(-4*cf)
###Output
_____no_output_____
###Markdown
For $\text{cf}=W=e^{-2\phi}$, we have* $\phi_{,i} = -\text{cf}_{,i} / (2 \text{cf})$* $\phi_{,ij} = (-\text{cf}_{,ij} + \text{cf}_{,i}\text{cf}_{,j}/\text{cf}) / (2 \text{cf})$* $e^{-4\phi} = \text{cf}^2$***Exercise to student: Prove the above relations***
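For reference, a sketch of the proof: with $\text{cf}=W=e^{-2\phi}$ we have $\phi = -\frac{1}{2}\ln \text{cf}$, so$$\phi_{,i} = -\frac{\text{cf}_{,i}}{2\,\text{cf}}, \qquad \phi_{,ij} = -\frac{\text{cf}_{,ij}\,\text{cf} - \text{cf}_{,i}\text{cf}_{,j}}{2\,\text{cf}^2} = \frac{-\text{cf}_{,ij} + \text{cf}_{,i}\text{cf}_{,j}/\text{cf}}{2\,\text{cf}},$$and $e^{-4\phi} = \left(e^{-2\phi}\right)^2 = \text{cf}^2$.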
###Code
# Step 9.a.iii: Assuming cf=W=e^{-2 phi}, define exp_m4phi, phi_dD,
# phi_dupD (upwind finite-difference version of phi_dD), and phi_DD
if par.parval_from_str(thismodule+"::EvolvedConformalFactor_cf") == "W":
# \partial_i W = \partial_i (e^{-2 phi}) = -2 e^{-2 phi} \partial_i phi
# -> \partial_i phi = -\partial_i cf / (2 cf)
for i in range(DIM):
phi_dD[i] = - cf_dD[i] / (2*cf)
phi_dupD[i] = - cf_dupD[i] / (2*cf)
for j in range(DIM):
# \partial_j \partial_i phi = - \partial_j [\partial_i cf / (2 cf)]
# = - cf_{,ij} / (2 cf) + \partial_i cf \partial_j cf / (2 cf^2)
phi_dDD[i][j] = (- cf_dDD[i][j] + cf_dD[i]*cf_dD[j] / cf) / (2*cf)
exp_m4phi = cf*cf
###Output
_____no_output_____
###Markdown
For $\text{cf}=\chi=e^{-4\phi}$, we have* $\phi_{,i} = -\text{cf}_{,i} / (4 \text{cf})$* $\phi_{,ij} = (-\text{cf}_{,ij} + \text{cf}_{,i}\text{cf}_{,j}/\text{cf}) / (4 \text{cf})$* $e^{-4\phi} = \text{cf}$***Exercise to student: Prove the above relations***
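The proof is identical in form to the $W$ case: $\text{cf}=\chi=e^{-4\phi}$ gives $\phi = -\frac{1}{4}\ln \text{cf}$, so$$\phi_{,i} = -\frac{\text{cf}_{,i}}{4\,\text{cf}}, \qquad \phi_{,ij} = \frac{-\text{cf}_{,ij} + \text{cf}_{,i}\text{cf}_{,j}/\text{cf}}{4\,\text{cf}},$$and $e^{-4\phi}=\text{cf}$ trivially.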
###Code
# Step 9.a.iv: Assuming cf=chi=e^{-4 phi}, define exp_m4phi, phi_dD,
# phi_dupD (upwind finite-difference version of phi_dD), and phi_DD
if par.parval_from_str(thismodule+"::EvolvedConformalFactor_cf") == "chi":
# \partial_i chi = \partial_i (e^{-4 phi}) = -4 e^{-4 phi} \partial_i phi
# -> \partial_i phi = -\partial_i cf / (4 cf)
for i in range(DIM):
phi_dD[i] = - cf_dD[i] / (4*cf)
phi_dupD[i] = - cf_dupD[i] / (4*cf)
for j in range(DIM):
# \partial_j \partial_i phi = - \partial_j [\partial_i cf / (4 cf)]
# = - cf_{,ij} / (4 cf) + \partial_i cf \partial_j cf / (4 cf^2)
phi_dDD[i][j] = (- cf_dDD[i][j] + cf_dD[i]*cf_dD[j] / cf) / (4*cf)
exp_m4phi = cf
# Step 9.a.v: Error out if unsupported EvolvedConformalFactor_cf choice is made:
cf_choice = par.parval_from_str(thismodule+"::EvolvedConformalFactor_cf")
if not (cf_choice == "phi" or cf_choice == "W" or cf_choice == "chi"):
print("Error: EvolvedConformalFactor_cf == "+par.parval_from_str(thismodule+"::EvolvedConformalFactor_cf")+" unsupported!")
exit(1)
###Output
_____no_output_____
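###Markdown
As a practical note, which branch of Step 9.a is active is controlled entirely by the $\text{EvolvedConformalFactor_cf}$ parameter read above. The cell below is a small sketch that simply reports the current choice; the comment shows how one might switch it (the parameter name is assumed to match the `parval_from_str()` calls used above).
###Code
# Report the currently selected conformal factor variable "cf".
# Hedged sketch: to switch choices, one could call, e.g.,
#    par.set_parval_from_str(thismodule+"::EvolvedConformalFactor_cf", "W")
# before re-running the Step 9 cells above.
print("EvolvedConformalFactor_cf = " + str(par.parval_from_str(thismodule+"::EvolvedConformalFactor_cf")))
###Output
_____no_output_____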
###Markdown
Step 9.b: Covariant derivatives of $\phi$ \[Back to [top](toc)\]$$\label{phi_covariant_derivs}$$Since $\phi$ is a scalar, $\bar{D}_i \phi = \partial_i \phi$.Thus the second covariant derivative is given by\begin{align}\bar{D}_i \bar{D}_j \phi &= \phi_{;\bar{i}\bar{j}} = \bar{D}_i \phi_{,j}\\ &= \phi_{,ij} - \bar{\Gamma}^k_{ij} \phi_{,k}.\end{align}
###Code
# Step 9.b: Define phi_dBarD = phi_dD (since phi is a scalar) and phi_dBarDD (covariant derivative)
# \bar{D}_i \bar{D}_j \phi = \phi_{;\bar{i}\bar{j}} = \bar{D}_i \phi_{,j}
# = \phi_{,ij} - \bar{\Gamma}^k_{ij} \phi_{,k}
phi_dBarD = phi_dD
phi_dBarDD = ixp.zerorank2()
for i in range(DIM):
for j in range(DIM):
phi_dBarDD[i][j] = phi_dDD[i][j]
for k in range(DIM):
phi_dBarDD[i][j] += - GammabarUDD[k][i][j]*phi_dD[k]
###Output
_____no_output_____
###Markdown
Step 10: Code validation against BSSN.BSSN_quantities NRPy+ module \[Back to [top](toc)\]$$\label{code_validation}$$As a code validation check, we verify agreement in the SymPy expressions for the RHSs of the BSSN equations between1. this tutorial and 2. the NRPy+ [BSSN.BSSN_quantities](../edit/BSSN/BSSN_quantities.py) module.By default, we analyze the RHSs in Spherical coordinates, though other coordinate systems may be chosen.
###Code
all_passed=True
def comp_func(expr1,expr2,basename,prefixname2="Bq."):
    global all_passed  # without this, the assignment below would only create a local variable
    if str(expr1-expr2)!="0":
        print(basename+" - "+prefixname2+basename+" = "+ str(expr1-expr2))
        all_passed=False
def gfnm(basename,idx1,idx2=None,idx3=None):
if idx2==None:
return basename+"["+str(idx1)+"]"
if idx3==None:
return basename+"["+str(idx1)+"]["+str(idx2)+"]"
return basename+"["+str(idx1)+"]["+str(idx2)+"]["+str(idx3)+"]"
expr_list = []
exprcheck_list = []
namecheck_list = []
# Step 3:
import BSSN.BSSN_quantities as Bq
Bq.BSSN_basic_tensors()
for i in range(DIM):
namecheck_list.extend([gfnm("LambdabarU",i),gfnm("betaU",i),gfnm("BU",i)])
exprcheck_list.extend([Bq.LambdabarU[i],Bq.betaU[i],Bq.BU[i]])
expr_list.extend([LambdabarU[i],betaU[i],BU[i]])
for j in range(DIM):
namecheck_list.extend([gfnm("gammabarDD",i,j),gfnm("AbarDD",i,j)])
exprcheck_list.extend([Bq.gammabarDD[i][j],Bq.AbarDD[i][j]])
expr_list.extend([gammabarDD[i][j],AbarDD[i][j]])
# Step 4:
Bq.gammabar__inverse_and_derivs()
for i in range(DIM):
for j in range(DIM):
namecheck_list.extend([gfnm("gammabarUU",i,j)])
exprcheck_list.extend([Bq.gammabarUU[i][j]])
expr_list.extend([gammabarUU[i][j]])
for k in range(DIM):
namecheck_list.extend([gfnm("gammabarDD_dD",i,j,k),
gfnm("gammabarDD_dupD",i,j,k),
gfnm("GammabarUDD",i,j,k)])
exprcheck_list.extend([Bq.gammabarDD_dD[i][j][k],Bq.gammabarDD_dupD[i][j][k],Bq.GammabarUDD[i][j][k]])
expr_list.extend( [gammabarDD_dD[i][j][k],gammabarDD_dupD[i][j][k],GammabarUDD[i][j][k]])
# Step 5:
Bq.detgammabar_and_derivs()
namecheck_list.extend(["detgammabar"])
exprcheck_list.extend([Bq.detgammabar])
expr_list.extend([detgammabar])
for i in range(DIM):
namecheck_list.extend([gfnm("detgammabar_dD",i)])
exprcheck_list.extend([Bq.detgammabar_dD[i]])
expr_list.extend([detgammabar_dD[i]])
for j in range(DIM):
namecheck_list.extend([gfnm("detgammabar_dDD",i,j)])
exprcheck_list.extend([Bq.detgammabar_dDD[i][j]])
expr_list.extend([detgammabar_dDD[i][j]])
# Step 6:
Bq.AbarUU_AbarUD_trAbar_AbarDD_dD()
namecheck_list.extend(["trAbar"])
exprcheck_list.extend([Bq.trAbar])
expr_list.extend([trAbar])
for i in range(DIM):
for j in range(DIM):
namecheck_list.extend([gfnm("AbarUU",i,j),gfnm("AbarUD",i,j)])
exprcheck_list.extend([Bq.AbarUU[i][j],Bq.AbarUD[i][j]])
expr_list.extend([AbarUU[i][j],AbarUD[i][j]])
for k in range(DIM):
namecheck_list.extend([gfnm("AbarDD_dD",i,j,k)])
exprcheck_list.extend([Bq.AbarDD_dD[i][j][k]])
expr_list.extend([AbarDD_dD[i][j][k]])
# Step 7:
Bq.RicciBar__gammabarDD_dHatD__DGammaUDD__DGammaU()
for i in range(DIM):
namecheck_list.extend([gfnm("DGammaU",i)])
exprcheck_list.extend([Bq.DGammaU[i]])
expr_list.extend([DGammaU[i]])
for j in range(DIM):
namecheck_list.extend([gfnm("RbarDD",i,j)])
exprcheck_list.extend([Bq.RbarDD[i][j]])
expr_list.extend([RbarDD[i][j]])
for k in range(DIM):
namecheck_list.extend([gfnm("DGammaUDD",i,j,k),gfnm("gammabarDD_dHatD",i,j,k)])
exprcheck_list.extend([Bq.DGammaUDD[i][j][k],Bq.gammabarDD_dHatD[i][j][k]])
expr_list.extend([DGammaUDD[i][j][k],gammabarDD_dHatD[i][j][k]])
# Step 8:
Bq.betaU_derivs()
for i in range(DIM):
for j in range(DIM):
namecheck_list.extend([gfnm("betaU_dD",i,j),gfnm("betaU_dupD",i,j)])
exprcheck_list.extend([Bq.betaU_dD[i][j],Bq.betaU_dupD[i][j]])
expr_list.extend([betaU_dD[i][j],betaU_dupD[i][j]])
for k in range(DIM):
namecheck_list.extend([gfnm("betaU_dDD",i,j,k)])
exprcheck_list.extend([Bq.betaU_dDD[i][j][k]])
expr_list.extend([betaU_dDD[i][j][k]])
# Step 9:
Bq.phi_and_derivs()
#phi_dD,phi_dupD,phi_dDD,exp_m4phi,phi_dBarD,phi_dBarDD
namecheck_list.extend(["exp_m4phi"])
exprcheck_list.extend([Bq.exp_m4phi])
expr_list.extend([exp_m4phi])
for i in range(DIM):
namecheck_list.extend([gfnm("phi_dD",i),gfnm("phi_dupD",i),gfnm("phi_dBarD",i)])
exprcheck_list.extend([Bq.phi_dD[i],Bq.phi_dupD[i],Bq.phi_dBarD[i]])
expr_list.extend( [phi_dD[i],phi_dupD[i],phi_dBarD[i]])
for j in range(DIM):
namecheck_list.extend([gfnm("phi_dDD",i,j),gfnm("phi_dBarDD",i,j)])
exprcheck_list.extend([Bq.phi_dDD[i][j],Bq.phi_dBarDD[i][j]])
expr_list.extend([phi_dDD[i][j],phi_dBarDD[i][j]])
for i in range(len(expr_list)):
comp_func(expr_list[i],exprcheck_list[i],namecheck_list[i])
if all_passed:
print("ALL TESTS PASSED!")
###Output
initialize_param() minor warning: Did nothing; already initialized parameter reference_metric::M_PI
initialize_param() minor warning: Did nothing; already initialized parameter reference_metric::RMAX
ALL TESTS PASSED!
###Markdown
Step 11: Output this module to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[Tutorial-BSSN_quantities.pdf](Tutorial-BSSN_quantities.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
!jupyter nbconvert --to latex --template latex_nrpy_style.tplx Tutorial-BSSN_quantities.ipynb
!pdflatex -interaction=batchmode Tutorial-BSSN_quantities.tex
!pdflatex -interaction=batchmode Tutorial-BSSN_quantities.tex
!pdflatex -interaction=batchmode Tutorial-BSSN_quantities.tex
!rm -f Tut*.out Tut*.aux Tut*.log
###Output
[NbConvertApp] Converting notebook Tutorial-BSSN_quantities.ipynb to latex
[NbConvertApp] Writing 145875 bytes to Tutorial-BSSN_quantities.tex
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
|
Day-3/01. Lecture/06 - The For Loop.ipynb | ###Markdown
The For Loop In Python, an **iterable** is an **object** capable of returning values one at a time.Many objects in Python are iterable: lists, strings, file objects and many more. Note: Our definition of an iterable did not state it was a collection of values - we only said it is an object that can return values one at a time - that's a subtle difference that we'll examine when we look into iterators and generators. The **for** keyword can be used to iterate an iterable. If you come with a background in another programming language, you have probably seen **for** loops defined this way:``for (int i=0; i < 5; i++) { //code block}`` This form of the **for** loop is simply a _repetition_, very similar to a **while** loop - in fact it is equivalent to what we could write in Python as follows:
###Code
i = 0
while i < 5:
#code block
print(i)
i += 1
i = None
###Output
0
1
2
3
4
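###Markdown
Before we move on, here is a tiny preview (just for intuition) of what "returning values one at a time" means — the details of `iter()` and `next()` are covered later when we look into iterators and generators:
###Code
it = iter([10, 20, 30])  # ask the iterable for an iterator
print(next(it))  # 10
print(next(it))  # 20
print(next(it))  # 30 -- one more next(it) would raise StopIteration
###Output
_____no_output_____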
###Markdown
But that's **NOT** what the **for** statement does in Python - the **for** statement is a way to **iterate** over iterables, and has nothing to do with the **for** loop we just saw. The closest equivalent we have in Python is the **while** loop written as above. To use the **for** loop in Python, we **require** an iterable object to work with. A simple iterable object is generated via the ``range()`` function
###Code
for i in range(5):
print(i)
###Output
0
1
2
3
4
###Markdown
Many objects are iterable in Python:
###Code
for x in [1, 2, 3]:
print(x)
for x in 'hello':
print(x)
for x in ('a', 'b', 'c'):
print(x)
###Output
a
b
c
###Markdown
When we iterate over an iterable, each iteration returns the "next" value (or object) in the iterable:
###Code
for x in [(1, 2), (3, 4), (5, 6)]:
print(x)
###Output
(1, 2)
(3, 4)
(5, 6)
###Markdown
We can even assign the individual tuple values to specific named variables:
###Code
for i, j in [(1, 2), (3, 4), (5, 6)]:
print(i, j)
###Output
1 2
3 4
5 6
###Markdown
We will cover iterables in a lot more detail later in this course. The **break** and **continue** statements work just as well in **for** loops as they do in **while** loops:
###Code
for i in range(5):
if i == 3:
continue
print(i)
for i in range(5):
if i == 3:
break
print(i)
###Output
0
1
2
###Markdown
The **for** loop, like the **while** loop, also supports an **else** clause which is executed if and only if the loop terminates normally (i.e. did not exit because of a **break** statement)
###Code
for i in range(1, 5):
print(i)
if i % 7 == 0:
print('multiple of 7 found')
break
else:
print('No multiples of 7 encountered')
for i in range(1, 8):
print(i)
if i % 7 == 0:
print('multiple of 7 found')
break
else:
print('No multiples of 7 encountered')
###Output
1
2
3
4
5
6
7
multiple of 7 found
###Markdown
Similarly to the **while** loop, **break** and **continue** work just the same in the context of a **try** statement's **finally** clause.
###Code
for i in range(5):
print('--------------------')
try:
10 / (i - 3)
except ZeroDivisionError:
print('divided by 0')
continue
finally:
print('always runs')
print(i)
###Output
--------------------
always runs
0
--------------------
always runs
1
--------------------
always runs
2
--------------------
divided by 0
always runs
--------------------
always runs
4
###Markdown
There are a number of standard techniques to iterate over iterables:
###Code
s = 'hello'
for c in s:
print(c)
###Output
h
e
l
l
o
###Markdown
But sometimes, for indexable iterable types (e.g. sequences), we want to also know the index of the item in the loop:
###Code
s = 'hello'
i = 0
for c in s:
print(i, c)
i += 1
###Output
0 h
1 e
2 l
3 l
4 o
###Markdown
A slightly better approach might be:
###Code
s = 'hello'
for i in range(len(s)):
print(i, s[i])
###Output
0 h
1 e
2 l
3 l
4 o
###Markdown
or even better:
###Code
s = 'hello'
for i, c in enumerate(s):
print(i, c)
###Output
0 h
1 e
2 l
3 l
4 o
|
Amazon Planet/Layer_1/resnet50/Resnet50.ipynb | ###Markdown
1. Data Preprocessing
###Code
img_height = 197
img_width = 197
inv_label_map = ['blow_down',
'bare_ground',
'conventional_mine',
'blooming',
'cultivation',
'artisinal_mine',
'haze',
'primary',
'slash_burn',
'habitation',
'clear',
'road',
'selective_logging',
'partly_cloudy',
'agriculture',
'water',
'cloudy']
label_map = {'agriculture': 14,
'artisinal_mine': 5,
'bare_ground': 1,
'blooming': 3,
'blow_down': 0,
'clear': 10,
'cloudy': 16,
'conventional_mine': 2,
'cultivation': 4,
'habitation': 9,
'haze': 6,
'partly_cloudy': 13,
'primary': 7,
'road': 11,
'selective_logging': 12,
'slash_burn': 8,
'water': 15}
df_train = pd.read_csv('../input/train.csv')
Y = df_train.iloc[:,1:].values
names = df_train['image_name']
i = 0
X = np.empty((names.shape[0], img_height, img_width, 3), dtype=np.float16)
for f in tqdm(names.values, miniters=1000):
img = cv2.imread('../input/train-jpg/{}.jpg'.format(f))
if img_height != img.shape[0]:
img = cv2.resize(img, (img_height, img_width))
X[i,:,:,:] = np.array(img, np.float16)
i += 1
X = X / 255.
#deprecated parallel reading because exceed memory when passing data back
'''
def get_images(names):
i = 0
X = np.empty((names.shape[0], img_height, img_width, 3), dtype=np.float16)
for f in tqdm(names.values, miniters=1000):
img = cv2.imread('../input/train-jpg/{}.jpg'.format(f))
if img_height != img.shape[0]:
img = cv2.resize(img, (img_height, img_width))
X[i,:,:,:] = np.array(img, np.float16)
i += 1
return X / 255.
#multiply cpu_count if cannot fit memory
pool = Pool(cpu_count())
X = np.concatenate(pool.map(
get_images,
np.array_split(df_train['image_name'], cpu_count())
))
pool.close()
pool.join()'''
print(X.shape)
###Output
_____no_output_____
###Markdown
2. Model Training
###Code
from sklearn.model_selection import train_test_split
x_train, x_valid, y_train, y_valid = train_test_split(X, Y, test_size=0.2, random_state=42)
from keras import backend as K
from keras.applications.resnet50 import ResNet50
from keras.models import Sequential, Model
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D, BatchNormalization
from keras.callbacks import EarlyStopping, ModelCheckpoint
from keras.optimizers import Adam, SGD
from keras.preprocessing.image import ImageDataGenerator
def fbeta(y_true, y_pred):
beta = 2
threshold_shift = -0.3
# just in case of hipster activation at the final layer
y_pred = K.clip(y_pred, 0, 1)
# shifting the prediction threshold from .5 if needed
y_pred_bin = K.round(y_pred + threshold_shift)
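    # with threshold_shift = -0.3, rounding fires only when y_pred >= ~0.8,
    # i.e. the effective decision threshold is 0.5 - threshold_shift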
tp = K.sum(K.round(y_true * y_pred_bin), axis=1) + K.epsilon()
fp = K.sum(K.round(K.clip(y_pred_bin - y_true, 0, 1)), axis=1)
    fn = K.sum(K.round(K.clip(y_true - y_pred_bin, 0, 1)), axis=1)  # use the thresholded predictions, consistent with tp/fp
precision = tp / (tp + fp)
recall = tp / (tp + fn)
beta_squared = beta ** 2
return K.mean((beta_squared + 1) * (precision * recall) / (beta_squared * precision + recall + K.epsilon()))
base_model = ResNet50(input_shape=(img_height,img_width,3), weights='imagenet', include_top=False)
for layer in base_model.layers:
layer.trainable = False
x_newfc = Flatten()(base_model.output)
x_newfc = Dense(512, activation='sigmoid')(x_newfc)
x_newfc = Dropout(0.25)(x_newfc)
x_newfc = Dense(17, activation='sigmoid')(x_newfc)
model = Model(inputs=base_model.input, outputs=x_newfc)
epochs_arr = [20, 5, 5]
learn_rates = [0.001, 0.0001, 0.00001]
kfold_weights_path = os.path.join('', 'weights.h5')
for learn_rate, epochs in zip(learn_rates, epochs_arr):
opt = Adam(lr=learn_rate)
model.compile(loss='binary_crossentropy', # We NEED binary here, since categorical_crossentropy l1 norms the output before calculating loss.
optimizer=opt,
metrics=['accuracy', fbeta])
callbacks = [
EarlyStopping(monitor='val_loss', patience=2, verbose=2),
ModelCheckpoint(kfold_weights_path, monitor='val_loss',
save_best_only=True, verbose=2)
]
#deprecated generator because exceed memory
'''model.fit_generator(train_generator.flow(x_train, y_train, batch_size=128),
steps_per_epoch=len(x_train) / 128,
epochs=epochs,
verbose=1,
workers=3,
validation_data=(x_valid, y_valid),
callbacks=callbacks)'''
model.fit(x = x_train, y= y_train, validation_data=(x_valid, y_valid),
batch_size=128,verbose=1, epochs=epochs,callbacks=callbacks,shuffle=True)
#save!
model.save_weights('final.h5')
'''opt = Adam(lr=0.001)
model.compile(loss='binary_crossentropy', # We NEED binary here, since categorical_crossentropy l1 norms the output before calculating loss.
optimizer=opt,
metrics=['accuracy', fbeta])
model.load_weights('final.h5')'''
kfold_weights_path = os.path.join('', 'weights.h5')
if os.path.isfile(kfold_weights_path):
model.load_weights(kfold_weights_path)
###Output
_____no_output_____
###Markdown
3. Model Evaluation
###Code
from sklearn.metrics import fbeta_score, accuracy_score
p_valid = model.predict(x_valid, batch_size=128, verbose=1)
print(fbeta_score(y_valid, np.array(p_valid) > 0.2, beta=2, average='samples'))
#save f2 score for stage 2 weighted
scores = fbeta_score(y_valid, np.array(p_valid) > 0.2, beta=2, average=None)
print('F2 test scores per tag:')
for label, score in [(inv_label_map[l], scores[l]) for l in scores.argsort()[::-1]]:
print(label, ': ', score)
pd.DataFrame([scores]).to_csv('f2.csv', index=False)
for i in range(17):
print(inv_label_map[i], '\t:', accuracy_score(y_valid[:,i], p_valid[:,i]>0.2))
#predict train data for stage 2
p_train = model.predict(X, batch_size=128, verbose=1)
pd.DataFrame(p_train).to_csv('train.csv', index=False, float_format='%.3f')
###Output
_____no_output_____
###Markdown
3. Make Prediction
###Code
df_submission = pd.read_csv('../input/sample_submission_v2.csv')
def get_images(names):
i = 0
X = np.empty((names.shape[0], img_height, img_width, 3), dtype=np.float16)
for f in tqdm(names.values, miniters=1000):
img = cv2.imread('../input/test-jpg/{}.jpg'.format(f))
if img_height != img.shape[0]:
img = cv2.resize(img, (img_height, img_width))
X[i,:,:,:] = np.array(img, np.float16)
i += 1
return X / 255.
pool = Pool(cpu_count())
X_submission = np.concatenate(pool.map(
get_images,
np.array_split(df_submission['image_name'], cpu_count())
))
pool.close()
pool.join()
print(X_submission.shape)
predict = model.predict(X_submission, batch_size = 128, verbose=1)
result = pd.DataFrame(np.array(predict) > 0.2)
preds = []
sorted_tags = pd.Series(inv_label_map)
for i in tqdm(range(result.shape[0]), miniters=1000):
preds.append(' '.join(list(
sorted_tags[np.where(result.loc[i] == 1)[0]]
)))
df_submission['tags'] = preds
df_submission.to_csv('test.csv', index=False)
###Output
_____no_output_____ |
data_processing_notebook.ipynb | ###Markdown
Data wrangling and validation
###Code
import itertools
import joblib
import numpy as np
import pandas as pd
from scipy import sparse, stats
from mlutils import *
# Set to true to save intermediate files
SAVE_INTERMEDIATE_FILES = False
# Random seed
RANDOM_SEED = 56
df = pd.read_csv(r"dataset.csv")
###Output
_____no_output_____
###Markdown
Data merging
###Code
dtypes = {
'Abstract': str,
'Title': str,
'year': int,
'documentType': str,
'StoreId': str,
'disc1': str,
'disc2': str,
}
# here we load in the datasets from the different sources
socab_df = pd.read_csv('Datasets/SocAbstracts.csv', dtype=dtypes)
eric_df = pd.read_csv('Datasets/ERIC.csv', dtype=dtypes)
econlit_df = pd.read_csv('Datasets/EconLit.csv', dtype=dtypes)
###Output
_____no_output_____
###Markdown
Data cleaning and relabelingGet clean and relabeled dataframes for each set:
###Code
# here we call the custom cleaner function on all the datasets to filter clean records
socab_clean = clean_df(socab_df)
eric_clean = clean_df(eric_df)
econlit_clean = clean_df(econlit_df)
# optional save of clean datasets
if SAVE_INTERMEDIATE_FILES:
socab_clean.to_csv("SocAbstracts_master.csv", index=False)
eric_clean.to_csv("ERIC_master.csv", index=False)
econlit_clean.to_csv("EconLit_master.csv", index=False)
# Let's look at which columns are stored?
socab_clean.columns
# here we merge all the datasets into one dataframe
df = pd.concat([socab_clean,eric_clean,econlit_clean])
df = df.drop(columns=['year', 'disc1_x', 'disc1_counts', 'disc2_counts'])
if SAVE_INTERMEDIATE_FILES:
# Transform list to semicolon-separated string prior to saving
df['disc2_x'] = df.disc2_x.apply(lambda x: ';'.join(x))
df.to_csv("dataset.csv", index=False)
# Read file and transform back to list format
df = pd.read_csv("dataset.csv")
df['disc2_x'] = df.disc2_x.str.split(';')
df.to_csv("dataset.csv")
# here we create one text field with abstracts and titles concatenated
df['text'] = df.Abstract.str.cat(df.Title, sep=' ')
###Output
_____no_output_____
###Markdown
Great, now we have now we have the data textual data to train and test the machine learning modules Checking the inter-indexer consistency
###Code
# here we describe how we went about calculating the inter-indexer consistency
# we use the example of sociological abstracts
socab_eval = pd.read_excel("ExpertEvaluation/soc_ab_indexerconsis.xlsx", dtype=str) # the evaluated set by expert
vods = pd.read_excel("ExpertEvaluation/Vlaamse onderzoeksdisciplinelijst_V2018.xlsx", dtype=str) # the labels in VODS
# Value '0' represents NaN
socab_eval = socab_eval.replace('0', np.nan)
# first we check if all discipline codes are in official discipline codelist (VODS) / no typos
codes = set(vods['Unnamed: 6'])
print('Are all labels in the original vods codelist?')
print('Expert labels:', all(socab_eval[f'expert_label{i}'].isin(codes).all() for i in range(1, 6)))
print('Expected labels:', all(socab_eval[f'expected_label{i}'].isin(codes).all() for i in range(1, 6)))
# now we create level 3 columns
for i in range(1, 6):
expected, expert = f'expected_label{i}', f'expert_label{i}'
try:
socab_eval[f'expected_lv3label{i}'] = socab_eval[expected][socab_eval[expected].notna()].str[:-2]
socab_eval[f'expert_lv3label{i}'] = socab_eval[expert][socab_eval[expert].notna()].str[:-2]
except AttributeError:
socab_eval[f'expected_lv3label{i}'] = pd.Series()
socab_eval[f'expert_lv3label{i}'] = pd.Series()
expected_lv4 = [c for c in socab_eval.columns if c.startswith('expected_label')]
expert_lv4 = [c for c in socab_eval.columns if c.startswith('expert_label')]
expected_lv3 = [c for c in socab_eval.columns if c.startswith('expected_lv3label')]
expert_lv3 = [c for c in socab_eval.columns if c.startswith('expert_lv3label')]
# here we define two functions to calculate the inter-indexer consistency as described in the paper.
# The Dice index is calculated by the second function.
def set_without_nan(row, cols):
return set(row[cols][row[cols].notna()])
def consistency_score(row, level):
if level == 4:
expected, expert = expected_lv4, expert_lv4
elif level == 3:
expected, expert = expected_lv3, expert_lv3
else:
raise ValueError()
return (
2 * len(set_without_nan(row, expected) & set_without_nan(row, expert))
/ (len(set_without_nan(row, expected)) + len(set_without_nan(row, expert)))
)
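# Worked example (hypothetical labels, for illustration only):
#   expert = {A, B, C}, expected = {B, C, D}  ->  2 * |{B, C}| / (3 + 3) = 4/6 ~= 0.67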
socab_eval['consistency_lvl4'] = socab_eval.apply(consistency_score, axis=1, level=4)
socab_eval['consistency_lvl3'] = socab_eval.apply(consistency_score, axis=1, level=3)
print("Inter-indexer consistency on level 3 = {}".format(sum(socab_eval.consistency_lvl3) / len(socab_eval)))
print("Inter-indexer consistency on level 4 = {}".format(sum(socab_eval.consistency_lvl4) / len(socab_eval)))
###Output
_____no_output_____ |
voltagebudget/ipynb/autotune.ipynb | ###Markdown
Tune oscillation (testing)
###Code
mode = 'regular'
N = 2
t = 0.3
E = 0.225
d = -6e-3
stim = "../data/stim1.csv"
stim_data = read_stim(stim)
ns = np.asarray(stim_data['ns'])
ts = np.asarray(stim_data['ts'])
A_0 = 0.1e-9
A_max = 1e-9
phi_0 = 0
f = 8
solutions = autotune_V_osc(N, t, E, d, ns, ts,
A_0=A_0, A_max=A_max, phi_0=phi_0, f=f,
verbose=True)
solutions
# Select neuron
n = 0
# Opt oscillations
A, phi = solutions[n].x
print("Optimal A {}, phi {}, f {}".format(A, phi, f))
# Other params
params, w_in, bias_in, sigma = read_modes(mode)
stim_data = read_stim(stim)
ns = np.asarray(stim_data['ns'])
ts = np.asarray(stim_data['ts'])
# !
ns_y, ts_y, voltages_y = neurons.adex(
N, t,
ns, ts,
w_in=w_in,
bias_in=bias_in,
sigma=0,
f=f,
A=A,
phi=phi,
**params)
# -
times = voltages_y['times']
v = voltages_y['V_m'][n, :]
p = figure(plot_width=400, plot_height=200)
p.line(x=times, y=v, color="black")
p.xaxis.axis_label = 'Time (s)'
p.yaxis.axis_label = 'Vm (volts)'
p.xgrid.grid_line_color = None
p.ygrid.grid_line_color = None
show(p)
###Output
_____no_output_____
###Markdown
Tune the bias
###Code
# Print options
util.get_mode_names()
for mode in util.get_mode_names():
print(">>> Tuning {}.".format(mode))
params, _, bias_0, sigma_0 = util.read_modes(mode)
sol = autotune_membrane(mode, bias_0, sigma_0, -65e-3, -2e-3)
bias_x, sigma_x = sol.x
np.savez("../data/{}_membrane_tuned".format(mode), bias=bias_x, sigma=sigma_x)
###Output
_____no_output_____
###Markdown
Plot examples
###Code
mode = 'adaption'
params, _, _, _ = util.read_modes(mode)
sol = np.load("../data/{}_membrane_tuned.npz".format(mode))
bias_x = float(sol['bias'])
sigma_x = float(sol['sigma'])
print(bias_x, sigma_x)
# -
t = 1
ns_y, ts_y, budget = neurons.adex(1, t,
np.asarray([0]), np.asarray([0]),
w_max=0,
bias=bias_x,
sigma=sigma_x,
f=0,
**params)
# -
times = budget['times']
v = budget['V_m'][0, :]
p = figure(plot_width=400, plot_height=200)
p.line(x=times, y=v, color="black")
p.xaxis.axis_label = 'Time (s)'
p.yaxis.axis_label = 'Vm (volts)'
p.xgrid.grid_line_color = None
p.ygrid.grid_line_color = None
show(p)
params
###Output
_____no_output_____
###Markdown
- After plotting each to confirm everything looked OK, the tuned values were hand transferred to the default json file Tune wAfter entering the optimal bias/sigma into default json, I tuned `w_max`.
###Code
for mode in util.get_mode_names():
print(">>> Tuning {}.".format(mode))
params, w_0, _, _ = util.read_modes(mode)
sol = autotune_w(mode, w_0, 10, max_mult=1.5)
w_x = sol.x
print(w_x)
np.savez("../data/{}_w_tuned".format(mode), w=w_x)
###Output
_____no_output_____
###Markdown
Plot examples
###Code
# Overall run time
t = 3
# Create frozen input spikes
stim_rate = 30
seed_stim = 1
k = 20
stim_onset = 0.1
stim_offset = t
dt = 1e-5
ns, ts = util.poisson_impulse(
t,
stim_onset,
stim_offset - stim_onset,
stim_rate,
n=k,
dt=dt,
seed=seed_stim)
mode = 'regular'
params, _, bias, sigma = util.read_modes(mode)
sol = np.load("../data/{}_w_tuned.npz".format(mode))
w_x = float(sol['w'])
print(w_x)
# -
t = 1
N = 100
ns_y, ts_y, budget = neurons.adex(N, t,
ns, ts,
w_max=w_x*1.3,
bias=bias,
sigma=sigma,
f=0,
**params)
# -
p = figure(plot_width=400, plot_height=200)
p.circle(ts_y, ns_y, color="black")
p.xaxis.axis_label = 'Time (s)'
p.yaxis.axis_label = 'N'
p.xgrid.grid_line_color = None
p.ygrid.grid_line_color = None
show(p)
p = figure(plot_width=400, plot_height=200)
for i in range(N):
times = budget['times']
v = budget['V_m'][i, :]
p.line(x=times, y=v, color="black", alpha=0.1)
p.xaxis.axis_label = 'Time (s)'
p.yaxis.axis_label = 'Vm (volts)'
p.xgrid.grid_line_color = None
p.ygrid.grid_line_color = None
show(p)
###Output
_____no_output_____ |
00-preprocessing/00_cleaning.ipynb | ###Markdown
checklist:

Drops
- address1 drop
- exception: clean
- new_value: dropped it (doesn't contain anything)
- current_value: dropped it (doesn't contain anything)
- tax_type: clean (create a data dictionary for this column)
- instrument_no
- sub_neighborhood: A, B, C...huh?
- tax_class: all the same (residential)
- homestead: I dummied this
- building_area: nulls
- triennial_group: no cleaning necessary (float)
- address: I parsed this
- instrument_no: no clue
- tax_class2: 1 for residential
- building_area: empty (for now. check again once you get the full dataset)

cleaned
- assessor: cleaned (titlecase)
- tax_class2: cleaned and filtered
- use_code: clean (create a data dictionary for this column)
- neighborhood: clean
- owner_name: cleaned (converted to title case)
- address: cleaned (spend an hour on this!)
- sale_price: cleaned (dropped commas and dollar signs)
- land_area: clean (dropped commas)
- ward: float status. cleaning not needed
- land_2017: cleaned (dropped commas and dollar signs)
- land_2018: cleaned (dropped commas and dollar signs)
- improvements_2017: cleaned (dropped commas and dollar signs)
- improvements_2018: cleaned (dropped commas and dollar signs)
- value_2017: cleaned (dropped commas and dollar signs)
- value_2018: cleaned (dropped commas and dollar signs)
- assessment_2017: cleaned (dropped commas and dollar signs)
- assessment_2018: cleaned (dropped commas and dollar signs)

filters:
- drop duplicates
- tax_class2: cleaned and filtered
- tax_class2 = 1
- sale_price > 100
- zip_code: need to filter to ^2\d+ zip codes
- city: non-dc values (look at city column)
- code: 1, only residential

Columns created
- homestead: dummified and column cleaned [drop for arima]
- zip_code: created [need to filter]
- address_1 column
- qtr
- month
- year

I need to:
- consider dropping land_area < '4'
- look at home values that are \$1
- add lats and longs
- eda (sns plot)
- search for 'Not Available'

I can if I have time:
- use_code: create a data dictionary for this column [super important]
- tax_class2: create a data dictionary for this column
- look into subneighborhood: wtf is this?
###Code
import numpy as np
import pandas as pd
import math
path = 'otr copy 7.csv'
df = pd.read_csv(path, parse_dates=['date'], infer_datetime_format=True)
#df = df.address.notnull() #http://bit.ly/2zk56jD
df.replace('', np.nan, inplace=True) #I KNOW that there are fucking empty cells. gotta fill them in.
df.dropna(inplace=True)
df['neighborhood'] = df['neighborhood'].str.title()
df['neighborhood'] = df['neighborhood'].str.replace('American Univ. Park','American University Park')
df['neighborhood'] = df['neighborhood'].str.replace('N. Cleveland Park','North Cleveland Park')
df['neighborhood'].unique() #this is not exhaustive, but at least it's clean now
df['use_code2'] = df['use_code'].str.extract('(^\d{1,})', expand = True)
df['use_code2']=df['use_code2'].str.strip()
df.drop(['use_code'], 1, inplace = True)
df['tax_type'] = df['tax_type'].str.replace(u'\xa0', u' ')
#\xa0 is non-breaking space in Latin1 (ISO 8859-1). replace with a space
df['tax_type2'] = df['tax_type'].str.extract('(^\w{2})', expand = True)
df.drop(['tax_type'], 1, inplace = True)
df['tax_type2']=df['tax_type2'].str.strip()
df['tax_class'] = df['tax_class'].str.replace(u'\xa0', u' ')
df['tax_class'].unique()
df['tax_class2'] = df['tax_class'].str.extract(r'([0-9]\b)', expand = True)
df['tax_class2'].value_counts() # we just care about the category 1: residential
df = df[df.tax_class2 == '1']
# here is where I filter only for ones (residential tax class)
df['tax_class2'].value_counts()
df.homestead.value_counts()
homestead1 = pd.get_dummies(df.homestead).iloc[:, :]
homestead1.columns
homestead1.columns = ['homestead_yes', 'homestead_senior', 'homestead_no']
df = pd.concat([df, homestead1], axis=1)
# consider dropping one of the homestead categories, or all three. for arima it's not necessary
df['assessor'] = df['assessor'].str.title()
df['land_area'] = df['land_area'].str.replace(',', '')
## consider dropping land_area < '4'
df['owner_name'] = df['owner_name'].str.title()
df['sale_price'] = df['sale_price'].str.replace(',', '').str.replace('$', '')
df.drop(['current_value','new_value'], axis=1, inplace=True) # we don't need these
df['land_2017']= df['land_2017'].str.replace(',', '').str.replace('$', '')
df['land_2018']= df['land_2018'].str.replace(',', '').str.replace('$', '')
df['improvements_2017']= df['improvements_2017'].str.replace(',', '').str.replace('$', '')
df['improvements_2018']= df['improvements_2018'].str.replace(',', '').str.replace('$', '')
df['value_2017']= df['value_2017'].str.replace(',', '').str.replace('$', '')
df['value_2018']= df['value_2018'].str.replace(',', '').str.replace('$', '')
df['assessment_2017']= df['assessment_2017'].str.replace(',', '').str.replace('$', '')
df['assessment_2018']= df['assessment_2018'].str.replace(',', '').str.replace('$', '')
# the address column is the mailing address, not the destination. UGHHH
df['address'] = df['address'].str.replace(' ', ', ')
df['address'] = df['address'].str.replace(' ', ', ')
df['address'] = df['address'].str.replace(' ', ' ')
df['address'] = df['address'].str.replace(' ', ' ')
df['address'] = df['address'].str.replace(' ', ' ')
df['address'] = df['address'].str.strip()
df.columns
df['address1'] = df['address'] #this gives me something to work with
df['address1'] = df['address1'].str.strip() #trimming
df['address1'] = df['address1'].str.replace(',', '')
df['address1'] = df['address1'].str.replace('-', ' ')
df['address1'] = df['address1'].str.replace(';', '')
df['address1'] = df['address1'].str.replace(' ', ' ')
df['address1'] = df['address1'].str.replace(u'\xa0', u' ')
df['zip_code'] = df['address1'].str.extract('(\d{5,})', expand = True)
df['state'] = df['address1'].str.extract('(\d{2,}$)', expand = True)
df['address1'] = df['address1'].str.replace('\d{2,}$', '')
df['address1'] = df['address1'].str.strip() #trimming
df['address1'] = df['address1'].str.replace('\d+$', '')
df['state'] = df['address1'].str.extract('(\S+$)', expand = True)
df['address1'] = df['address1'].str.replace('\S+$', '')
df['address1'] = df['address1'].str.strip() #trimming
df['city'] = df['address1'].str.extract('(\S+$)', expand = True)
df['address_1'] = df['address1'].str.replace('(\S+$)', '')
df['address_1'] = df['address_1'].str.strip() #trimming
#created address_1 column
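# Illustrative trace (made-up address): '1234 MAIN ST NW WASHINGTON DC 20001'
#   zip_code='20001' -> trailing digits stripped -> state='DC' -> city='WASHINGTON'
#   -> address_1='1234 MAIN ST NW'  (multi-word city names would need extra handling)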
df.drop(['address1'], 1, inplace = True)
df.head()
#if I drop the non-dc addresses, I will keep 87% of my dataset
df.drop_duplicates(['date', 'address_1'], inplace = True)
df.shape
#this is a really important part. Dropping duplicates. be very careful here
#dropping! the smallest number
df.drop_duplicates(subset=['address', 'date'], inplace = True)
df.shape
len(df[df.state=='DC'])/df.shape[0]
df['sales_type'] = df['sales_type'].str.replace(u'\xa0', u' ')
#\xa0 is non-breaking space in Latin1 (ISO 8859-1). replace with a space
df['sales_type'].replace(' ', np.nan, inplace=True) #I KNOW that there are fucking empty cells. gotta fill them in.
df.sales_type.unique()
df['sale_price'].replace('Not Available', np.nan, inplace = True)
df.shape
df.dropna(subset=['sale_price'], how='any', inplace = True)
import datetime as dt
import datetime
df['date']= pd.to_datetime(df.date)
df['qtr'] = df.date.dt.quarter
df['month'] = df.date.dt.month
df['year'] = df.date.dt.year
df['owner_name'] = df['owner_name'].str.replace(u'\xa0', u' ')
df['owner_name'] = df['owner_name'].str.strip()
df.shape
df.dtypes
# df.loc[df.sale_price <= 3, :]
df = df[df.state == 'DC']
df.tax_class2.unique()
cols = ['sub_neighborhood', 'exception', 'tax_class',
'homestead', 'building_area',
'triennial_group', 'address',
'instrument_no', 'tax_class2']
df.drop(cols, 1, inplace = True) ## very important cell here!
df = df[df.year >= 2000] ## very important cell here!
#sales_type has NaN
df['improvements_2017'].replace('Not Available', np.NaN, inplace = True)
df['improvements_2018'].replace('Not Available', np.NaN, inplace = True)
for column in df[['land_2017','land_2018', 'improvements_2017',
'improvements_2018','value_2017', 'value_2018', 'assessment_2017','assessment_2018']]:
df[column] = df[column].astype(float) ### need to filter
df.describe()
# Set index
df = df.set_index('date')
df.head()
df.to_csv('otr_clean.csv', encoding='utf-8')
df1 = pd.read_csv('otr_clean.csv', index_col='date')
df1
###Output
_____no_output_____ |
debug/.ipynb_checkpoints/A buggy script-checkpoint.ipynb | ###Markdown
Mission: extract steps necessary for `rubber duck debugging`
###Code
from bs4 import BeautifulSoup as bs
###Output
_____no_output_____
###Markdown
Uncomment the `print` statements when needed (by removing the `#`) to see the result.See the webpage at [this link](http://homolova.sk/Rubber%20Duck%20Debugging.html) Open a local copy of the page.
###Code
webpage = open("Rubber Duck Debugging.html).read()
###Output
_____no_output_____
###Markdown
parse the page with beautiful soup
###Code
soup = bs(webpage, "lxml")
#print soup
###Output
_____no_output_____
###Markdown
find all paragraph elements
###Code
steps = soup.findall("p")
#print steps
###Output
_____no_output_____
###Markdown
print out the steps necessary for `rubber duck debugging` !
###Code
for n in range(1,4)
print(setps[n].text)
###Output
_____no_output_____ |
notebooks/Recipes_Part3.ipynb | ###Markdown
Part 2 of Recipes: Labeling Karyotype BandsThis page is primarily based on the following page at the Circos documentation site:- [3. Labeling Karyotype Bands](????????????)That page is found as part number 4 of the ??? part ['Recipes' section](http://circos.ca/documentation/tutorials/quick_start/) of [the larger set of Circos tutorials](http://circos.ca/documentation/tutorials/).Go back to Part 2 by clicking [here &8592;](Recipes_Part2.ipynb).----8 --- Recipes=============3. Labeling Karyotype Bands---------------------------::: {menu4}[[Lesson](/documentation/tutorials/recipes/labeling_bands/lesson){.clean}]{.active}[Images](/documentation/tutorials/recipes/labeling_bands/images){.normal}[Configuration](/documentation/tutorials/recipes/labeling_bands/configuration){.normal}:::This tutorial show syou how to add a narrow band of text labels to yourfigure. We\'ll label the cytogenetic bands on the ideograms for theexample.First, we\'ll extract the position and names of the bands from the humankaryotype file that is included with Circos(`data/karyotype/karyotype.human.txt`{.syn-include}). ```ini> cat data/karyotype.human.txt | grep band | awk '{print $2,$5,$6,$3}'hs1 0 2300000 p36.33hs1 2300000 5300000 p36.32hs1 5300000 7100000 p36.31...``` This data file to populate a text track. In this example, I\'ve placedthe band labels immediately outside the ideogram circle, which requiredthat I shift the ticks outward. ```ini``` ```initype = textcolor = redfile = data/8/text.bands.txt``` ```inir0 = 1rr1 = 1r+300p``` ```inilabel_size = 12label_font = condensed``` ```inishow_links = yeslink_dims = 0p,2p,6p,2p,5plink_thickness = 2plink_color = black``` ```inilabel_snuggle = yesmax_snuggle_distance = 1rsnuggle_tolerance = 0.25rsnuggle_sampling = 2snuggle_refine = yes``` ```ini``` adjusting text colorOne way to adjust the color of the text is to use rules. For example,the three rules below adjust the color of the text based on chromosome,position and text label, respectively. ```inicondition = on(hs1)color = blueflow = continue``` ```inicondition = var(start) > 50mb && var(end) < 100mbcolor = greenflow = continue``` ```inicondition = var(value) =~ /[.]\d\d/color = grey``` ```ini``` You can also adjust the color of the label (or any other formatparameter) by including the corresponding variable/value pairs directlyin the data file. ```inihs10 111800000 114900000 q25.2 color=orangehs10 114900000 119100000 q25.3 color=orangehs10 119100000 121700000 q26.11 color=purplehs10 121700000 123100000 q26.12 color=purplehs10 123100000 127400000 q26.13 label_size=24phs10 127400000 130500000 q26.2 label_size=18phs10 130500000 135374737 q26.3 label_size=14p``` Remember that rules will override these settings, unless `overwrite=no`is set in a rule.---- Generating the plot produced by this example codeThe following two cells will generate the plot. The first cell adjusts the current working directory.
###Code
%cd ../circos-tutorials-0.67/tutorials/8/3/
%%bash
../../../../circos-0.69-6/bin/circos -conf circos.conf
###Output
debuggroup summary 0.30s welcome to circos v0.69-6 31 July 2017 on Perl 5.022000
debuggroup summary 0.31s current working directory /home/jovyan/circos-tutorials-0.67/tutorials/8/3
debuggroup summary 0.31s command ../../../../circos-0.69-6/bin/circos -conf circos.conf
debuggroup summary 0.31s loading configuration from file circos.conf
debuggroup summary 0.31s found conf file circos.conf
debuggroup summary 0.49s debug will appear for these features: output,summary
debuggroup summary 0.49s bitmap output image ./circos.png
debuggroup summary 0.49s SVG output image ./circos.svg
debuggroup summary 0.49s parsing karyotype and organizing ideograms
debuggroup summary 0.58s karyotype has 24 chromosomes of total size 3,095,677,436
debuggroup summary 0.59s applying global and local scaling
debuggroup summary 0.59s allocating image, colors and brushes
debuggroup summary 2.53s drawing 10 ideograms of total size 1,815,907,900
debuggroup summary 2.53s drawing highlights and ideograms
debuggroup summary 3.84s found conf file /home/jovyan/circos-0.69-6/bin/../etc/tracks/text.conf
debuggroup summary 3.84s processing track_0 text /home/jovyan/circos-tutorials-0.67/tutorials/8/3/../../../data/8/text.bands.txt
debuggroup summary 4.16s drawing track_0 text z 0 text.bands.txt
debuggroup summary 4.18s placing text track data/8/text.bands.txt
debuggroup summary 4.18s ... see progress with -debug_group text
debuggroup summary 4.18s ... see placement summary with -debug_group textplace
debuggroup summary 5.71s found conf file /home/jovyan/circos-0.69-6/bin/../etc/tracks/axis.conf
WARNING *** Data point of type [text] [187300000-191273063] extended past end of ideogram [hs4 0-191154276]. This data point will be [trimmed].
WARNING *** Data point of type [text] [193800000-199501827] extended past end of ideogram [hs3 0-198022430]. This data point will be [trimmed].
debuggroup output 6.46s generating output
debuggroup output 7.36s created PNG image ./circos.png (761 kb)
debuggroup output 7.36s created SVG image ./circos.svg (916 kb)
###Markdown
View the plot in this page using the following cell.
###Code
from IPython.display import Image
Image("circos.png")
###Output
_____no_output_____ |
examples/keras_recipes/ipynb/tfrecord.ipynb | ###Markdown
How to train a Keras model on TFRecord files**Author:** Amy MiHyun Jang**Date created:** 2020/07/29**Last modified:** 2020/08/07**Description:** Loading TFRecords for computer vision models. Introduction + Set UpTFRecords store a sequence of binary records, read linearly. They are a useful format for storing data because they can be read efficiently. Learn more about TFRecords [here](https://www.tensorflow.org/tutorials/load_data/tfrecord). We'll explore how we can easily load in TFRecords for our melanoma classifier.
###Code
import tensorflow as tf
from functools import partial
import matplotlib.pyplot as plt
try:
tpu = tf.distribute.cluster_resolver.TPUClusterResolver.connect()
print("Device:", tpu.master())
strategy = tf.distribute.TPUStrategy(tpu)
except:
strategy = tf.distribute.get_strategy()
print("Number of replicas:", strategy.num_replicas_in_sync)
###Output
_____no_output_____
###Markdown
We want a bigger batch size as our data is not balanced.
###Code
AUTOTUNE = tf.data.AUTOTUNE
GCS_PATH = "gs://kds-b38ce1b823c3ae623f5691483dbaa0f0363f04b0d6a90b63cf69946e"
BATCH_SIZE = 64
IMAGE_SIZE = [1024, 1024]
###Output
_____no_output_____
###Markdown
Load the data
###Code
FILENAMES = tf.io.gfile.glob(GCS_PATH + "/tfrecords/train*.tfrec")
split_ind = int(0.9 * len(FILENAMES))
TRAINING_FILENAMES, VALID_FILENAMES = FILENAMES[:split_ind], FILENAMES[split_ind:]
TEST_FILENAMES = tf.io.gfile.glob(GCS_PATH + "/tfrecords/test*.tfrec")
print("Train TFRecord Files:", len(TRAINING_FILENAMES))
print("Validation TFRecord Files:", len(VALID_FILENAMES))
print("Test TFRecord Files:", len(TEST_FILENAMES))
###Output
_____no_output_____
###Markdown
Decoding the dataThe images have to be converted to tensors so that they will be valid inputs in our model. As images utilize an RGB scale, we specify 3 channels. We also reshape our data so that all of the images will be the same shape.
###Code
def decode_image(image):
image = tf.image.decode_jpeg(image, channels=3)
image = tf.cast(image, tf.float32)
image = tf.reshape(image, [*IMAGE_SIZE, 3])
return image
###Output
_____no_output_____
###Markdown
As we load in our data, we need both our `X` and our `Y`. The X is our image; the model will find features and patterns in our image dataset. We want to predict Y, the probability that the lesion in the image is malignant. We will go through our TFRecords and parse out the image and the target values.
###Code
def read_tfrecord(example, labeled):
tfrecord_format = (
{
"image": tf.io.FixedLenFeature([], tf.string),
"target": tf.io.FixedLenFeature([], tf.int64),
}
if labeled
else {"image": tf.io.FixedLenFeature([], tf.string),}
)
example = tf.io.parse_single_example(example, tfrecord_format)
image = decode_image(example["image"])
if labeled:
label = tf.cast(example["target"], tf.int32)
return image, label
return image
###Output
_____no_output_____
###Markdown
Define loading methodsOur dataset is not ordered in any meaningful way, so the order can be ignored whenloading our dataset. By ignoring the order and reading files as soon as they come in, itwill take a shorter time to load the data.
###Code
def load_dataset(filenames, labeled=True):
ignore_order = tf.data.Options()
ignore_order.experimental_deterministic = False # disable order, increase speed
dataset = tf.data.TFRecordDataset(
filenames
) # automatically interleaves reads from multiple files
dataset = dataset.with_options(
ignore_order
) # uses data as soon as it streams in, rather than in its original order
dataset = dataset.map(
partial(read_tfrecord, labeled=labeled), num_parallel_calls=AUTOTUNE
)
# returns a dataset of (image, label) pairs if labeled=True or just images if labeled=False
return dataset
###Output
_____no_output_____
###Markdown
We define the following function to get our different datasets.
###Code
def get_dataset(filenames, labeled=True):
dataset = load_dataset(filenames, labeled=labeled)
dataset = dataset.shuffle(2048)
dataset = dataset.prefetch(buffer_size=AUTOTUNE)
dataset = dataset.batch(BATCH_SIZE)
return dataset
###Output
_____no_output_____
###Markdown
Visualize input images
###Code
train_dataset = get_dataset(TRAINING_FILENAMES)
valid_dataset = get_dataset(VALID_FILENAMES)
test_dataset = get_dataset(TEST_FILENAMES, labeled=False)
image_batch, label_batch = next(iter(train_dataset))
def show_batch(image_batch, label_batch):
plt.figure(figsize=(10, 10))
for n in range(25):
ax = plt.subplot(5, 5, n + 1)
plt.imshow(image_batch[n] / 255.0)
if label_batch[n]:
plt.title("MALIGNANT")
else:
plt.title("BENIGN")
plt.axis("off")
show_batch(image_batch.numpy(), label_batch.numpy())
###Output
_____no_output_____
###Markdown
Building our model Define callbacksThe following function allows for the model to change the learning rate as it runs eachepoch.We can use callbacks to stop training when there are no improvements in the model. At theend of the training process, the model will restore the weights of its best iteration.
###Code
initial_learning_rate = 0.01
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
initial_learning_rate, decay_steps=20, decay_rate=0.96, staircase=True
)
checkpoint_cb = tf.keras.callbacks.ModelCheckpoint(
"melanoma_model.h5", save_best_only=True
)
early_stopping_cb = tf.keras.callbacks.EarlyStopping(
patience=10, restore_best_weights=True
)
###Output
_____no_output_____
###Markdown
Build our base modelTransfer learning is a great way to reap the benefits of a well-trained model without having to train the model ourselves. For this notebook, we want to import the Xception model. A more in-depth analysis of transfer learning can be found [here](https://keras.io/examples/vision/image_classification_efficientnet_fine_tuning/). We do not want our metric to be ```accuracy``` because our data is imbalanced. For our example, we will be looking at the area under a ROC curve.
###Code
def make_model():
base_model = tf.keras.applications.Xception(
input_shape=(*IMAGE_SIZE, 3), include_top=False, weights="imagenet"
)
base_model.trainable = False
inputs = tf.keras.layers.Input([*IMAGE_SIZE, 3])
x = tf.keras.applications.xception.preprocess_input(inputs)
x = base_model(x)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dense(8, activation="relu")(x)
x = tf.keras.layers.Dropout(0.7)(x)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)
model = tf.keras.Model(inputs=inputs, outputs=outputs)
model.compile(
optimizer=tf.keras.optimizers.Adam(learning_rate=lr_schedule),
loss="binary_crossentropy",
metrics=tf.keras.metrics.AUC(name="auc"),
)
return model
###Output
_____no_output_____
###Markdown
Train the model
###Code
with strategy.scope():
model = make_model()
history = model.fit(
train_dataset,
epochs=2,
validation_data=valid_dataset,
callbacks=[checkpoint_cb, early_stopping_cb],
)
###Output
_____no_output_____
###Markdown
Predict resultsWe'll use our model to predict results for our test dataset images. Values closer to `0`are more likely to be benign and values closer to `1` are more likely to be malignant.
###Code
def show_batch_predictions(image_batch):
plt.figure(figsize=(10, 10))
for n in range(25):
ax = plt.subplot(5, 5, n + 1)
plt.imshow(image_batch[n] / 255.0)
img_array = tf.expand_dims(image_batch[n], axis=0)
plt.title(model.predict(img_array)[0])
plt.axis("off")
image_batch = next(iter(test_dataset))
show_batch_predictions(image_batch)
###Output
_____no_output_____
###Markdown
How to train a Keras model on TFRecord files**Author:** Amy MiHyun Jang**Date created:** 2020/07/29**Last modified:** 2020/08/07**Description:** Loading TFRecords for computer vision models. Introduction + Set UpTFRecords store a sequence of binary records, read linearly. They are a useful format for storing data because they can be read efficiently. Learn more about TFRecords [here](https://www.tensorflow.org/tutorials/load_data/tfrecord). We'll explore how we can easily load in TFRecords for our melanoma classifier.
###Code
import tensorflow as tf
from functools import partial
import matplotlib.pyplot as plt
try:
tpu = tf.distribute.cluster_resolver.TPUClusterResolver()
print("Device:", tpu.master())
tf.config.experimental_connect_to_cluster(tpu)
tf.tpu.experimental.initialize_tpu_system(tpu)
strategy = tf.distribute.TPUStrategy(tpu)
except:
strategy = tf.distribute.get_strategy()
print("Number of replicas:", strategy.num_replicas_in_sync)
###Output
_____no_output_____
###Markdown
We want a bigger batch size as our data is not balanced.
###Code
AUTOTUNE = tf.data.AUTOTUNE
GCS_PATH = "gs://kds-b38ce1b823c3ae623f5691483dbaa0f0363f04b0d6a90b63cf69946e"
BATCH_SIZE = 64
IMAGE_SIZE = [1024, 1024]
###Output
_____no_output_____
###Markdown
Load the data
###Code
FILENAMES = tf.io.gfile.glob(GCS_PATH + "/tfrecords/train*.tfrec")
split_ind = int(0.9 * len(FILENAMES))
TRAINING_FILENAMES, VALID_FILENAMES = FILENAMES[:split_ind], FILENAMES[split_ind:]
TEST_FILENAMES = tf.io.gfile.glob(GCS_PATH + "/tfrecords/test*.tfrec")
print("Train TFRecord Files:", len(TRAINING_FILENAMES))
print("Validation TFRecord Files:", len(VALID_FILENAMES))
print("Test TFRecord Files:", len(TEST_FILENAMES))
###Output
_____no_output_____
###Markdown
Decoding the dataThe images have to be converted to tensors so that they will be valid inputs in our model. As images utilize an RGB scale, we specify 3 channels. We also reshape our data so that all of the images will be the same shape.
###Code
def decode_image(image):
image = tf.image.decode_jpeg(image, channels=3)
image = tf.cast(image, tf.float32)
image = tf.reshape(image, [*IMAGE_SIZE, 3])
return image
###Output
_____no_output_____
###Markdown
As we load in our data, we need both our `X` and our `Y`. The X is our image; the model will find features and patterns in our image dataset. We want to predict Y, the probability that the lesion in the image is malignant. We will go through our TFRecords and parse out the image and the target values.
###Code
def read_tfrecord(example, labeled):
tfrecord_format = (
{
"image": tf.io.FixedLenFeature([], tf.string),
"target": tf.io.FixedLenFeature([], tf.int64),
}
if labeled
else {"image": tf.io.FixedLenFeature([], tf.string),}
)
example = tf.io.parse_single_example(example, tfrecord_format)
image = decode_image(example["image"])
if labeled:
label = tf.cast(example["target"], tf.int32)
return image, label
return image
###Output
_____no_output_____
###Markdown
Define loading methodsOur dataset is not ordered in any meaningful way, so the order can be ignored whenloading our dataset. By ignoring the order and reading files as soon as they come in, itwill take a shorter time to load the data.
###Code
def load_dataset(filenames, labeled=True):
ignore_order = tf.data.Options()
ignore_order.experimental_deterministic = False # disable order, increase speed
dataset = tf.data.TFRecordDataset(
filenames
) # automatically interleaves reads from multiple files
dataset = dataset.with_options(
ignore_order
) # uses data as soon as it streams in, rather than in its original order
dataset = dataset.map(
partial(read_tfrecord, labeled=labeled), num_parallel_calls=AUTOTUNE
)
# returns a dataset of (image, label) pairs if labeled=True or just images if labeled=False
return dataset
###Output
_____no_output_____
###Markdown
We define the following function to get our different datasets.
###Code
def get_dataset(filenames, labeled=True):
dataset = load_dataset(filenames, labeled=labeled)
dataset = dataset.shuffle(2048)
dataset = dataset.prefetch(buffer_size=AUTOTUNE)
dataset = dataset.batch(BATCH_SIZE)
return dataset
###Output
_____no_output_____
###Markdown
Visualize input images
###Code
train_dataset = get_dataset(TRAINING_FILENAMES)
valid_dataset = get_dataset(VALID_FILENAMES)
test_dataset = get_dataset(TEST_FILENAMES, labeled=False)
image_batch, label_batch = next(iter(train_dataset))
def show_batch(image_batch, label_batch):
plt.figure(figsize=(10, 10))
for n in range(25):
ax = plt.subplot(5, 5, n + 1)
plt.imshow(image_batch[n] / 255.0)
if label_batch[n]:
plt.title("MALIGNANT")
else:
plt.title("BENIGN")
plt.axis("off")
show_batch(image_batch.numpy(), label_batch.numpy())
###Output
_____no_output_____
###Markdown
Building our model Define callbacksThe following function allows for the model to change the learning rate as it runs eachepoch.We can use callbacks to stop training when there are no improvements in the model. At theend of the training process, the model will restore the weights of its best iteration.
###Code
initial_learning_rate = 0.01
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
initial_learning_rate, decay_steps=20, decay_rate=0.96, staircase=True
)
checkpoint_cb = tf.keras.callbacks.ModelCheckpoint(
"melanoma_model.h5", save_best_only=True
)
early_stopping_cb = tf.keras.callbacks.EarlyStopping(
patience=10, restore_best_weights=True
)
###Output
_____no_output_____
###Markdown
Build our base modelTransfer learning is a great way to reap the benefits of a well-trained model without having to train the model ourselves. For this notebook, we want to import the Xception model. A more in-depth analysis of transfer learning can be found [here](https://keras.io/examples/vision/image_classification_efficientnet_fine_tuning/). We do not want our metric to be ```accuracy``` because our data is imbalanced. For our example, we will be looking at the area under a ROC curve.
###Code
def make_model():
base_model = tf.keras.applications.Xception(
input_shape=(*IMAGE_SIZE, 3), include_top=False, weights="imagenet"
)
base_model.trainable = False
inputs = tf.keras.layers.Input([*IMAGE_SIZE, 3])
x = tf.keras.applications.xception.preprocess_input(inputs)
x = base_model(x)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dense(8, activation="relu")(x)
x = tf.keras.layers.Dropout(0.7)(x)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)
model = tf.keras.Model(inputs=inputs, outputs=outputs)
model.compile(
optimizer=tf.keras.optimizers.Adam(learning_rate=lr_schedule),
loss="binary_crossentropy",
metrics=tf.keras.metrics.AUC(name="auc"),
)
return model
###Output
_____no_output_____
###Markdown
Train the model
###Code
with strategy.scope():
model = make_model()
history = model.fit(
train_dataset,
epochs=2,
validation_data=valid_dataset,
callbacks=[checkpoint_cb, early_stopping_cb],
)
###Output
_____no_output_____
###Markdown
Predict resultsWe'll use our model to predict results for our test dataset images. Values closer to `0`are more likely to be benign and values closer to `1` are more likely to be malignant.
###Code
def show_batch_predictions(image_batch):
plt.figure(figsize=(10, 10))
for n in range(25):
ax = plt.subplot(5, 5, n + 1)
plt.imshow(image_batch[n] / 255.0)
img_array = tf.expand_dims(image_batch[n], axis=0)
plt.title(model.predict(img_array)[0])
plt.axis("off")
image_batch = next(iter(test_dataset))
show_batch_predictions(image_batch)
###Output
_____no_output_____
###Markdown
How to train a Keras model on TFRecord files**Author:** Amy MiHyun Jang**Date created:** 2020/07/29**Last modified:** 2020/08/07**Description:** Loading TFRecords for computer vision models. Introduction + Set UpTFRecords store a sequence of binary records, read linearly. They are a useful format for storing data because they can be read efficiently. Learn more about TFRecords [here](https://www.tensorflow.org/tutorials/load_data/tfrecord). We'll explore how we can easily load in TFRecords for our melanoma classifier.
###Code
import tensorflow as tf
from functools import partial
import matplotlib.pyplot as plt
try:
tpu = tf.distribute.cluster_resolver.TPUClusterResolver()
print("Device:", tpu.master())
tf.config.experimental_connect_to_cluster(tpu)
tf.tpu.experimental.initialize_tpu_system(tpu)
strategy = tf.distribute.experimental.TPUStrategy(tpu)
except:
strategy = tf.distribute.get_strategy()
print("Number of replicas:", strategy.num_replicas_in_sync)
###Output
_____no_output_____
###Markdown
We want a bigger batch size as our data is not balanced.
###Code
AUTOTUNE = tf.data.experimental.AUTOTUNE
GCS_PATH = "gs://kds-b38ce1b823c3ae623f5691483dbaa0f0363f04b0d6a90b63cf69946e"
BATCH_SIZE = 64
IMAGE_SIZE = [1024, 1024]
###Output
_____no_output_____
###Markdown
Load the data
###Code
FILENAMES = tf.io.gfile.glob(GCS_PATH + "/tfrecords/train*.tfrec")
split_ind = int(0.9 * len(FILENAMES))
TRAINING_FILENAMES, VALID_FILENAMES = FILENAMES[:split_ind], FILENAMES[split_ind:]
TEST_FILENAMES = tf.io.gfile.glob(GCS_PATH + "/tfrecords/test*.tfrec")
print("Train TFRecord Files:", len(TRAINING_FILENAMES))
print("Validation TFRecord Files:", len(VALID_FILENAMES))
print("Test TFRecord Files:", len(TEST_FILENAMES))
###Output
_____no_output_____
###Markdown
Decoding the dataThe images have to be converted to tensors so that they will be valid inputs in our model. As images utilize an RGB scale, we specify 3 channels. We also reshape our data so that all of the images will be the same shape.
###Code
def decode_image(image):
image = tf.image.decode_jpeg(image, channels=3)
image = tf.cast(image, tf.float32)
image = tf.reshape(image, [*IMAGE_SIZE, 3])
return image
###Output
_____no_output_____
###Markdown
As we load in our data, we need both our `X` and our `Y`. The X is our image; the model will find features and patterns in our image dataset. We want to predict Y, the probability that the lesion in the image is malignant. We will go through our TFRecords and parse out the image and the target values.
###Code
def read_tfrecord(example, labeled):
tfrecord_format = (
{
"image": tf.io.FixedLenFeature([], tf.string),
"target": tf.io.FixedLenFeature([], tf.int64),
}
if labeled
else {"image": tf.io.FixedLenFeature([], tf.string),}
)
example = tf.io.parse_single_example(example, tfrecord_format)
image = decode_image(example["image"])
if labeled:
label = tf.cast(example["target"], tf.int32)
return image, label
return image
###Output
_____no_output_____
###Markdown
Define loading methodsOur dataset is not ordered in any meaningful way, so the order can be ignored whenloading our dataset. By ignoring the order and reading files as soon as they come in, itwill take a shorter time to load the data.
###Code
def load_dataset(filenames, labeled=True):
ignore_order = tf.data.Options()
ignore_order.experimental_deterministic = False # disable order, increase speed
dataset = tf.data.TFRecordDataset(
filenames
) # automatically interleaves reads from multiple files
dataset = dataset.with_options(
ignore_order
) # uses data as soon as it streams in, rather than in its original order
dataset = dataset.map(
partial(read_tfrecord, labeled=labeled), num_parallel_calls=AUTOTUNE
)
# returns a dataset of (image, label) pairs if labeled=True or just images if labeled=False
return dataset
###Output
_____no_output_____
###Markdown
We define the following function to get our different datasets.
###Code
def get_dataset(filenames, labeled=True):
dataset = load_dataset(filenames, labeled=labeled)
dataset = dataset.shuffle(2048)
dataset = dataset.prefetch(buffer_size=AUTOTUNE)
dataset = dataset.batch(BATCH_SIZE)
return dataset
###Output
_____no_output_____
###Markdown
Visualize input images
###Code
train_dataset = get_dataset(TRAINING_FILENAMES)
valid_dataset = get_dataset(VALID_FILENAMES)
test_dataset = get_dataset(TEST_FILENAMES, labeled=False)
image_batch, label_batch = next(iter(train_dataset))
def show_batch(image_batch, label_batch):
plt.figure(figsize=(10, 10))
for n in range(25):
ax = plt.subplot(5, 5, n + 1)
plt.imshow(image_batch[n] / 255.0)
if label_batch[n]:
plt.title("MALIGNANT")
else:
plt.title("BENIGN")
plt.axis("off")
show_batch(image_batch.numpy(), label_batch.numpy())
###Output
_____no_output_____
###Markdown
Building our model Define callbacksThe following function allows for the model to change the learning rate as it runs eachepoch.We can use callbacks to stop training when there are no improvements in the model. At theend of the training process, the model will restore the weights of its best iteration.
###Code
initial_learning_rate = 0.01
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
initial_learning_rate, decay_steps=20, decay_rate=0.96, staircase=True
)
checkpoint_cb = tf.keras.callbacks.ModelCheckpoint(
"melanoma_model.h5", save_best_only=True
)
early_stopping_cb = tf.keras.callbacks.EarlyStopping(
patience=10, restore_best_weights=True
)
###Output
_____no_output_____
###Markdown
Build our base modelTransfer learning is a great way to reap the benefits of a well-trained model without having to train the model ourselves. For this notebook, we want to import the Xception model. A more in-depth analysis of transfer learning can be found [here](https://keras.io/examples/vision/image_classification_efficientnet_fine_tuning/). We do not want our metric to be ```accuracy``` because our data is imbalanced. For our example, we will be looking at the area under a ROC curve.
###Code
def make_model():
base_model = tf.keras.applications.Xception(
input_shape=(*IMAGE_SIZE, 3), include_top=False, weights="imagenet"
)
base_model.trainable = False
inputs = tf.keras.layers.Input([*IMAGE_SIZE, 3])
x = tf.keras.applications.xception.preprocess_input(inputs)
x = base_model(x)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dense(8, activation="relu")(x)
x = tf.keras.layers.Dropout(0.7)(x)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)
model = tf.keras.Model(inputs=inputs, outputs=outputs)
model.compile(
optimizer=tf.keras.optimizers.Adam(learning_rate=lr_schedule),
loss="binary_crossentropy",
metrics=tf.keras.metrics.AUC(name="auc"),
)
return model
###Output
_____no_output_____
###Markdown
Train the model
###Code
with strategy.scope():
model = make_model()
history = model.fit(
train_dataset,
epochs=2,
validation_data=valid_dataset,
callbacks=[checkpoint_cb, early_stopping_cb],
)
###Output
_____no_output_____
###Markdown
Predict resultsWe'll use our model to predict results for our test dataset images. Values closer to `0`are more likely to be benign and values closer to `1` are more likely to be malignant.
###Code
def show_batch_predictions(image_batch):
plt.figure(figsize=(10, 10))
for n in range(25):
ax = plt.subplot(5, 5, n + 1)
plt.imshow(image_batch[n] / 255.0)
img_array = tf.expand_dims(image_batch[n], axis=0)
plt.title(model.predict(img_array)[0])
plt.axis("off")
image_batch = next(iter(test_dataset))
show_batch_predictions(image_batch)
###Output
_____no_output_____ |
examples/resource.ipynb | ###Markdown
Table of Contents
###Code
%load_ext autoreload
%autoreload 2
from argo.workflows.dsl import Workflow
from argo.workflows.dsl.tasks import *
from argo.workflows.dsl.templates import *
import yaml
from pprint import pprint
from argo.workflows.dsl._utils import sanitize_for_serialization
###Output
_____no_output_____
###Markdown
---
###Code
!sh -c '[ -f "resource.yaml" ] || curl -LO https://raw.githubusercontent.com/CermakM/argo-python-dsl/master/examples/resource.yaml'
from pathlib import Path
manifest = Path("./resource.yaml").read_text()
print(manifest)
import textwrap
class K8sJobs(Workflow):
entrypoint = "pi"
@template
def pi(self) -> V1alpha1ResourceTemplate:
manifest = textwrap.dedent("""\
apiVersion: batch/v1
kind: Job
metadata:
generateName: pi-job-
spec:
template:
metadata:
name: pi
spec:
containers:
- name: pi
image: perl
command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
restartPolicy: Never
backoffLimit: 4
""")
template = V1alpha1ResourceTemplate(
action="create",
success_condition="status.succeeded > 0",
failure_condition="status.failed > 3",
manifest=manifest
)
return template
wf = K8sJobs()
wf
print(wf.to_yaml())
###Output
api_version: argoproj.io/v1alpha1
kind: Workflow
metadata:
generate_name: k8s-jobs-
name: k8s-jobs
spec:
entrypoint: pi
templates:
- name: pi
resource:
action: create
failure_condition: status.failed > 3
manifest: |-
apiVersion: batch/v1
kind: Job
metadata:
generateName: pi-job-
spec:
template:
metadata:
name: pi
spec:
containers:
- name: pi
image: perl
command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
restartPolicy: Never
backoffLimit: 4
success_condition: status.succeeded > 0
status: {}
###Markdown
---
###Code
pprint(sanitize_for_serialization(wf))
pprint(yaml.safe_load(manifest))
from deepdiff import DeepDiff
diff = DeepDiff(sanitize_for_serialization(wf), yaml.safe_load(manifest))
diff
assert not diff, "Manifests don't match."
###Output
_____no_output_____
###Markdown
Resource Allocation ExampleAssume we have to assign resources of $m$ classes to $n$ kinds of jobs. This resource allocation is encoded in $X \in \mathbb{R}^{n \times m}$, with $X_{i,j}$ denoting the amount of resource $j$ allocated to job $i$. Given the utility matrix $W \in \mathbb{R}^{n \times m}$, we want to solve the optimization problem\begin{equation}\begin{array}{ll}\text{maximize} \quad &\mathrm{tr} \left( \min \left( X W^T, S\right) \right)\\\text{subject to} \quad &X^\mathrm{min} \leq X \leq X^\mathrm{max} \\&X^T \mathbb{1} \leq r,\end{array}\end{equation}with variable $X \in \mathbb{R}^{n \times m}$. The utility for some job $i$ cannot be increased beyond the saturation value $S_{ii}$, with $S \in \mathbb{S}_+^{n}$ being diagonal. The minimum and maximum amounts of resources to be allocated are denoted by $X^\mathrm{min} \geq 0$ and $X^\mathrm{max} \geq X^\mathrm{min}$, respectively, while $r$ is the vector of available resources. The problem is feasible if $\left(X^\mathrm{min}\right)^T \mathbb{1} \leq r$ and $X^\mathrm{min} \leq X^\mathrm{max}$.Let's define the corresponding CVXPY problem.
###Code
import cvxpy as cp
import numpy as np
# define dimensions
n, m = 30, 10
# define variable
X = cp.Variable((n, m), name='X')
# define parameters
W = cp.Parameter((n, m), name='W')
S = cp.Parameter((n, n), diag=True, name='S')
X_min = cp.Parameter((n, m), name='X_min')
X_max = cp.Parameter((n, m), name='X_max')
r = cp.Parameter(m, name='r')
# define objective
objective = cp.Maximize(cp.trace(cp.minimum(X @ W.T, S)))
# define constraints
constraints = [X_min <= X, X <= X_max,
               X.T @ np.ones(n) <= r]
# define problem
problem = cp.Problem(objective, constraints)
###Output
_____no_output_____
###Markdown
Assign parameter values and solve the problem.
###Code
np.random.seed(0)
W.value = np.ones((n, m)) + 0.1*np.random.rand(n, m)
S.value = 100*np.eye(n)
X_min.value = np.random.rand(n, m)
X_max.value = 10 + np.random.rand(n, m)
r.value = np.matmul(X_min.value.T, np.ones(n)) + 10*np.random.rand(m)
val = problem.solve()
###Output
_____no_output_____
###Markdown
Generating C source for the problem is as easy as:
###Code
from cvxpygen import cpg
cpg.generate_code(problem, code_dir='resource_code')
###Output
_____no_output_____
###Markdown
Now, you can use a python wrapper around the generated code as a custom CVXPY solve method.
###Code
from resource_code.cpg_solver import cpg_solve
import numpy as np
import pickle
import time
# load the serialized problem formulation
with open('resource_code/problem.pickle', 'rb') as f:
prob = pickle.load(f)
# assign parameter values
np.random.seed(0)
prob.param_dict['S'].value = 100*np.eye(n)
prob.param_dict['W'].value = 0.8*np.ones((n, m)) + 0.2*np.random.rand(n, m)
prob.param_dict['X_min'].value = np.zeros((n, m))
prob.param_dict['X_max'].value = np.ones((n, m))
prob.param_dict['r'].value = np.matmul(prob.param_dict['X_min'].value.T, np.ones(n)) + np.random.rand(m)
# solve problem conventionally
t0 = time.time()
# CVXPY chooses eps_abs=eps_rel=1e-5, max_iter=10000, polish=True by default,
# however, we choose the OSQP default values here, as they are used for code generation as well
val = prob.solve()
t1 = time.time()
print('\nCVXPY\nSolve time: %.3f ms' % (1000 * (t1 - t0)))
print('Objective function value: %.6f\n' % val)
# solve problem with C code via python wrapper
prob.register_solve('CPG', cpg_solve)
t0 = time.time()
val = prob.solve(method='CPG')
t1 = time.time()
print('\nCVXPYgen\nSolve time: %.3f ms' % (1000 * (t1 - t0)))
print('Objective function value: %.6f\n' % val)
from visualization.resource import create_animation
from IPython.display import Image
create_animation(prob, 'resource_animation')
with open('resource_animation.gif', 'rb') as f:
display(Image(f.read()))
###Output
_____no_output_____ |
time_varying_optimization/tvsdp.ipynb | ###Markdown
Time-varying Convex OptimizationThis notebook will provide implementation and examples from the paper [Time-varying Convex Optimization](https://arxiv.org/abs/1808.03994), Amir Ali Ahmadi and Bachir El Khadir, 2018.* [email protected]* [email protected] Copyright 2018 Google LLC.Licensed under the Apache License, Version 2.0 (the "License");
###Code
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
>[Time-varying Convex Optimization](scrollTo=cgvP6mUf5WJs)>>>>[Copyright 2018 Google LLC.](scrollTo=qDTiddF1Q8Iu)>>[Install Dependencies](scrollTo=_xLiNJfmORvW)>>[Time Varying Semi-definite Programs](scrollTo=6PuweE1NO-sZ)>>[Some Polynomial Tools](scrollTo=27St0x2TO7Eu)>>[Examples: To Add.](scrollTo=enYVtJrS5mCw) Install Dependencies
###Code
!pip install cvxpy
!pip install sympy
import cvxpy
import numpy as np
import scipy as sp
###Output
_____no_output_____
###Markdown
Time Varying Semi-definite Programs The TV-SDP framework for CVXPY, for imposing constraints of the form:$$A(t) \succeq 0 \; \forall t \in [0, 1],$$where $$A(t)$$ is a polynomial symmetric matrix, i.e. a symmetric matrix whose entries are polynomial functions of time, and $$A(t) \succeq 0$$ means that all the eigenvalues of the matrix $$A(t)$$ are nonnegative.
###Code
def _mult_poly_matrix_poly(p, mat_y):
"""Multiplies the polynomial matrix mat_y by the polynomial p entry-wise.
Args:
p: list of size d1+1 representation the polynomial sum p[i] t^i.
mat_y: (m, m, d2+1) tensor representing a polynomial
matrix Y_ij(t) = sum mat_y[i, j, k] t^k.
Returns:
(m, m, d1+d2+1) tensor representing the polynomial matrix p(t)*Y(t).
"""
mult_op = lambda q: np.convolve(p, q)
p_times_y = np.apply_along_axis(mult_op, 2, mat_y)
return p_times_y
def _make_zero(p):
"""Returns the constraints p_i == 0.
Args:
p: list of cvxpy expressions.
Returns:
A list of cvxpy constraints [pi == 0 for pi in p].
"""
return [pi == 0 for pi in p]
def _lambda(m, d, Q):
"""Returns the mxm polynomial matrix of degree d whose Gram matrix is Q.
Args:
m: size of the polynomial matrix to be returned.
d: degreen of the polynomial matrix to be returned.
Q: (m*d/2, m*d/2) gram matrix of the polynomial matrix to be returned.
Returns:
(m, m, d+1) tensor representing the polynomial whose gram matrix is Q.
i.e. $$Y_ij(t) == sum_{r, s s.t. r+s == k} Q_{y_i t^r, y_j t^s} t^k$$.
"""
d_2 = int(d / 2)
def y_i_j(i, j):
poly = list(np.zeros((d + 1, 1)))
for k in range(d_2 + 1):
for l in range(d_2 + 1):
poly[k + l] += Q[i + k * m, j + l * m]
return poly
mat_y = [[y_i_j(i, j) for j in range(m)] for i in range(m)]
mat_y = np.array(mat_y)
return mat_y
def _alpha(m, d, Q):
"""Returns t*Lambda(Q) if d odd, Lambda(Q) o.w.
Args:
m: size of the polynomial matrix to be returned.
d: degreen of the polynomial matrix to be returned.
Q: gram matrix of the polynomial matrix.
Returns:
t*Lambda(Q) if d odd, Lambda(Q) o.w.
"""
if d % 2 == 1:
w1 = np.array([0, 1]) # t
else:
w1 = np.array([1]) # 1
mat_y = _lambda(m, d + 1 - len(w1), Q)
return _mult_poly_matrix_poly(w1, mat_y)
def _beta(m, d, Q):
"""Returns (1-t)*Lambda(Q) if d odd, t(1-t)*Lambda(Q) o.w.
Args:
m: size of the polynomial matrix to be returned.
d: degreen of the polynomial matrix to be returned.
Q: gram matrix of the polynomial matrix.
Returns:
(1-t)*Lambda(Q) if d odd, t(1-t)*Lambda(Q) o.w.
"""
if d % 2 == 1:
w2 = np.array([1, -1]) # 1 - t
else:
w2 = np.array([0, 1, -1]) # t - t^2
mat_y = _lambda(m, d + 1 - len(w2), Q)
return _mult_poly_matrix_poly(w2, mat_y)
def make_poly_matrix_psd_on_0_1(mat_x):
"""Returns the constraint X(t) psd on [0, 1].
Args:
mat_x: (m, m, d+1) tensor representing a mxm polynomial matrix of degree d.
Returns:
A list of cvxpy constraints imposing that X(t) psd on [0, 1].
"""
m, m2, d = len(mat_x), len(mat_x[0]), len(mat_x[0][0]) - 1
# square matrix
assert m == m2
# build constraints: X == alpha(Q1) + beta(Q2) with Q1, Q2 >> 0
d_2 = int(d / 2)
size_Q1 = m * (d_2 + 1)
size_Q2 = m * d_2 if d % 2 == 0 else m * (d_2 + 1)
Q1 = cvxpy.Variable((size_Q1, size_Q1))
Q2 = cvxpy.Variable((size_Q2, size_Q2))
diff = mat_x - _alpha(m, d, Q1) - _beta(m, d, Q2)
diff = diff.reshape(-1)
const = _make_zero(diff)
const += [Q1 >> 0, Q2 >> 0, Q1.T == Q1, Q2.T == Q2]
return const
###Output
_____no_output_____
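###Markdown
A minimal, self-contained sketch of the certificate that `make_poly_matrix_psd_on_0_1` builds, for the smallest case: a 1x1 "polynomial matrix" p(t) of degree 2 is nonnegative on [0, 1] exactly when p(t) = sigma_1(t) + t(1-t) sigma_2(t), with the Gram matrix of sigma_1 PSD and sigma_2 a nonnegative constant (mirroring the even-degree weights used in `_alpha` and `_beta`). The toy problem below is written directly in CVXPY rather than via the helpers above, and all variable names in it are illustrative only: it minimizes the integral of p over [0, 1] subject to p(0) = 0 and p(1) = 1, and the expected optimum is p(t) = t^2.
###Code
import cvxpy as cp

# p(t) = c0 + c1*t + c2*t^2, required to be >= 0 on [0, 1].
c = cp.Variable(3)
Q1 = cp.Variable((2, 2), PSD=True)  # Gram matrix of sigma_1(t) = [1, t] Q1 [1, t]^T
q2 = cp.Variable(nonneg=True)       # sigma_2 is a nonnegative constant for degree 2

# Coefficient matching for p(t) = sigma_1(t) + t*(1 - t)*sigma_2.
certificate = [c[0] == Q1[0, 0],
               c[1] == 2 * Q1[0, 1] + q2,
               c[2] == Q1[1, 1] - q2]

# Toy problem: minimize the integral of p over [0, 1] with p(0) = 0 and p(1) = 1.
objective = cp.Minimize(c[0] + c[1] / 2 + c[2] / 3)
toy_problem = cp.Problem(objective, certificate + [c[0] == 0, cp.sum(c) == 1])
toy_problem.solve()
print("optimal coefficients:", c.value)  # expect approximately [0, 0, 1], i.e. p(t) = t^2
###Output
_____no_output_____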
###Markdown
Some Polynomial Tools
###Code
def integ_poly_0_1(p):
"""Return the integral of p(t) between 0 and 1."""
return np.array(p).dot(1 / np.linspace(1, len(p), len(p)))
def spline_regression(x, y, num_parts, deg=3, alpha=.01, smoothness=1):
"""Fits splines with `num_parts` to data `(x, y)`.
Finds a piecewise polynomial function `p` of degree `deg` with `num_parts`
pieces that minimizes the fitting error sum |y_i - p(x_i)| + alpha |p|_1.
Args:
x: [N] ndarray of input data. Must be increasing.
y: [N] ndarray, same size as `x`.
num_parts: int, Number of pieces of the piecewise polynomial function `p`.
deg: int, degree of each polynomial piece of `p`.
alpha: float, Regularizer.
smoothness: int, the desired degree of smoothness of `p`, e.g.
`smoothness==0` corresponds to a continuous `p`.
Returns:
[num_parts, deg+1] ndarray representing the piecewise polynomial `p`.
Entry (i, j) contains j^th coefficient of the i^th piece of `p`.
"""
# coefficients of the polynomial of p.
p = cvxpy.Variable((num_parts, deg + 1), name='p')
# convert to numpy format because it is easier to work with.
numpy_p = np.array([[p[i, j] for j in range(deg+1)] \
for i in range(num_parts)])
regularizer = alpha * cvxpy.norm(p, 1)
num_points_per_part = int(len(x) / num_parts)
smoothness_constraints = []
# cuttoff values
t = []
fitting_value = 0
# split the data into equal `num_parts` pieces
for i in range(num_parts):
# the part of the data that the current piece fits
sub_x = x[num_points_per_part * i:num_points_per_part * (i + 1)]
sub_y = y[num_points_per_part * i:num_points_per_part * (i + 1)]
# compute p(sub_x)
# pow_x = np.array([sub_x**k for k in range(deg + 1)])
# sub_p = polyval(sub_xnumpy_p[i, :].dot(pow_x)
sub_p = eval_poly_from_coefficients(numpy_p[i], sub_x)
# fitting value of the current part of p,
# equal to sqrt(sum |p(x_i) - y_i|^2), where the sum
# is over data (x_i, y_i) in the current piece.
fitting_value += cvxpy.norm(cvxpy.vstack(sub_p - sub_y), 1)
# glue things together by ensuring smoothness of the p at x1
if i > 0:
x1 = x[num_points_per_part * i]
# computes the derivatives p'(x1) for the left and from the right of x1
# x_deriv is the 2D matrix k!/(k-j)! x1^(k-j) indexed by (j, k)
x1_deriv = np.array(
[[np.prod(range(k - j, k)) * x1**(k - j)
for k in range(deg + 1)]
for j in range(smoothness + 1)]).T
p_deriv_left = numpy_p[i - 1].dot(x1_deriv)
p_deriv_right = numpy_p[i].dot(x1_deriv)
smoothness_constraints += [
cvxpy.vstack(p_deriv_left - p_deriv_right) == 0
]
t.append(x1)
min_loss = cvxpy.Minimize(fitting_value + regularizer)
prob = cvxpy.Problem(min_loss, smoothness_constraints)
prob.solve(verbose=False)
return _piecewise_polynomial_as_function(p.value, t)
def _piecewise_polynomial_as_function(p, t):
"""Returns the piecewise polynomial `p` as a function.
Args:
p: [N, d+1] array of coefficients of p.
t: [N] array of cuttoffs.
Returns:
The function f s.t. f(x) = p_i(x) if t[i] < x < t[i+1].
"""
def evaluate_p_at(x):
"""Returns p(x)."""
pieces = [x < t[0]] + [(x >= ti) & (x < ti_plusone) \
for ti, ti_plusone in zip(t[:-1], t[1:])] +\
[x >= t[-1]]
# pylint: disable=unused-variable
func_list = [
lambda u, pi=pi: eval_poly_from_coefficients(pi, u) for pi in p
]
return np.piecewise(x, pieces, func_list)
return evaluate_p_at
def eval_poly_from_coefficients(coefficients, x):
"""Evaluates the polynomial whose coefficients are `coefficients` at `x`."""
return coefficients.dot([x**i for i in range(len(coefficients))])
###Output
_____no_output_____ |
models/boosting (LightGBM).ipynb | ###Markdown
We start by attempting a boosting model. LightGBM handles imbalanced classes and categorical/continuous variables relatively well.
###Code
import pandas as pd
from sklearn.model_selection import train_test_split
import lightgbm as lgb
import numpy as np
from sklearn import preprocessing
import pickle
from sklearn.model_selection import StratifiedShuffleSplit
#Load the data
with open('test_set.pkl', 'rb') as f:
X_test= pickle.load(f)
with open('train_set.pkl', 'rb') as f:
X_train= pickle.load(f)
with open('ytest.pkl', 'rb') as f:
y_test= pickle.load(f)
with open('ytrain.pkl', 'rb') as f:
y_train= pickle.load(f)
for i in [X_train,X_test]:
i.pop("artist_has_award")
# create dataset for lightgbm
lgb_train = lgb.Dataset(X_train, y_train)
#lgb_eval = lgb.Dataset(X_val, y_val, reference=lgb_train)
#can replace 'is_unbalance': 'true', by 'scale_pos_weight': 10,
parameters = {
'application': 'binary',
'objective': 'binary',
'metric': 'auc',
'boosting': 'gbdt',
'is_unbalance': 'true',
'num_leaves': 25,
'feature_fraction': 0.5,
'bagging_fraction': 0.5,
'bagging_freq': 20,
'learning_rate': 0.05,
'verbose': 0
}
model = lgb.train(parameters,
lgb_train,
valid_sets=lgb_train,
num_boost_round=100,
early_stopping_rounds=100)
predictions = model.predict(X_test)
import sklearn.metrics as metrics
fpr, tpr, threshold = metrics.roc_curve(y_test, predictions)
roc_auc = metrics.auc(fpr, tpr)
# method I: plt
import matplotlib.pyplot as plt
%matplotlib inline
plt.title('Receiver Operating Characteristic')
plt.plot(fpr, tpr, 'b', label = 'AUC = %0.2f' % roc_auc)
plt.legend(loc = 'lower right')
plt.plot([0, 1], [0, 1],'r--')
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
from sklearn.metrics import precision_recall_curve
# calculate precision-recall curve
precision, recall, thresholds = precision_recall_curve(y_test,predictions)
# calculate precision-recall AUC
precision_auc = metrics.auc(recall, precision)
plt.title('Receiver Operating Characteristic')
plt.plot(thresholds, precision[:len(precision)-1], 'b', label = 'Precision AUC = %0.2f' % precision_auc)
plt.legend(loc = 'lower right')
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel('Precision')
plt.xlabel('Threshold')
plt.show()
from sklearn.utils import resample
from sklearn.metrics import confusion_matrix
df = X_test.copy()
df["top10"] = y_test.values
stats1 = list()
for i in range(10000):
boot = resample(df, replace=True, n_samples=1000)
boot_y = boot.pop("top10")
boot_pred = model.predict(boot)
predictions_matrix = [1 if pred > 0.70 else 0 for pred in boot_pred]
precision = (confusion_matrix(boot_y,predictions_matrix)[1][1]) / (confusion_matrix(boot_y,predictions_matrix)[1][1] + confusion_matrix(boot_y,predictions_matrix)[0][1])
stats1.append(precision)
# plot scores
plt.hist(stats1)
plt.show()
# confidence intervals
alpha = 0.95
p = ((1.0-alpha)/2.0) * 100
lower1 = max(0.0, np.percentile(stats1, p))
p = (alpha+((1.0-alpha)/2.0)) * 100
upper1 = min(1.0, np.percentile(stats1, p))
print('%.1f confidence interval %.1f%% and %.1f%%' % (alpha*100, lower1*100, upper1*100))
# Record the feature importances
feature_importances = model.feature_importance()
for i in range(len(feature_importances)):
print(feature_importances[i],X_train.columns[i])
###Output
19 spotify_explicit
143 spotify_duration_ms
78 spotify_track_number
168 spotify_danceability
172 spotify_energy
173 spotify_loudness
21 spotify_mode
151 spotify_speechiness
193 spotify_acousticness
87 spotify_instrumentalness
112 spotify_liveness
131 spotify_valence
140 spotify_tempo
9 spotify_time_signature
24 num_artists
76 award_num
74 gold_count
58 platinum_count
68 num_songs_awards
132 firstrank
37 label_category_group
16 album_type
118 datetime_year
35 datetime_month
106 numberofappearances_artist
59 numberofappearances_artist_top10
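###Markdown
The raw importance scores above are easier to read when sorted; a small sketch, assuming `model` and `X_train` as defined earlier in this notebook.
###Code
# Sort the feature importances from most to least important.
sorted_importances = sorted(zip(model.feature_importance(), X_train.columns), reverse=True)
for score, name in sorted_importances:
    print(score, name)
###Output
_____no_output_____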
|
KC_RecSys/project/notebook/Recommenders_binary.ipynb | ###Markdown
Preprocess data
###Code
# Import data
path = "../data/petdata_binary_1000_100.csv"
raw_data = pd.read_csv(path, index_col="doc_uri")
assert raw_data.shape == (1000,100), "Import error, df has false shape"
###Output
_____no_output_____
###Markdown
Conversion and cleaningSurprise forces you to use the schema \["user_id", "doc_id", "rating"\]. CF models are often sensitive to NA values -> replace NaN with 0 OR drop NaN. For demonstration purposes, replacement is used.
###Code
# Convert df
data = raw_data.unstack().to_frame().reset_index()
data.columns = ["user", "doc_uri", "rating"]
# Missing value handling
data.fillna(0, inplace=True)
assert data.shape == (raw_data.shape[0] * raw_data.shape[1], 3), "Conversion error, df has false shape"
assert data.rating.max() <= 1., "Value error, max rating over upper bound"
assert data.rating.min() >= -1., "Value error, min rating under lower bound"
data.head()
###Output
_____no_output_____
###Markdown
Descriptive statistics of ratingsNot meaningful <- randomly generated
###Code
data.rating.describe().to_frame().T
data.rating.value_counts(normalize=True).to_frame().T
# Plot distribution of (random) ratings
hist = data.rating.plot(kind="hist", grid=True,
bins=[-1.1,-0.9,-0.1,0.1,0.9,1.1])
hist.set(xlabel= "rating")
plt.tight_layout()
plt.savefig("plots/ratings_binary.png", orientation="landscape", dpi=120)
###Output
_____no_output_____
###Markdown
Recommendation Engines
###Code
from surprise import KNNWithMeans, SVD, NMF, Dataset, Reader, accuracy
from surprise.prediction_algorithms.random_pred import NormalPredictor
from surprise.model_selection import cross_validate, GridSearchCV
reader = Reader(rating_scale=(-1, 1))
ds = Dataset.load_from_df(data[["user", "doc_uri", "rating"]], reader)
baseline_model = NormalPredictor() # Baseline model, predicts labels based on distribution of ratings
###Output
_____no_output_____
###Markdown
Memory-based CF User-based CF
###Code
sim_options = {"name": "cosine", # cosine similarity
"user_based": True, # user-based
"min_support": 10 # min number of common items, else pred 0
}
user_knn = KNNWithMeans(sim_options=sim_options)
###Output
_____no_output_____
###Markdown
Item-based CF
###Code
sim_options = {"name": "cosine", # cosine similarity
"user_based": False, # item-based
"min_support": 5 # min number of common users, else pred 0
}
item_knn = KNNWithMeans(sim_options=sim_options)
###Output
_____no_output_____
###Markdown
EvaluationDon't expect accurate models <- they are trained with random noise. User- & item-based CF are slightly better than the baseline model (which predicts labels based on the distribution of ratings). Surprisingly, the user-based approach works better than item-based CF and is also faster.
###Code
for algo_name, algo in zip(["Baseline", "User-based CF", "Item-based CF"],
[baseline_model, user_knn, item_knn]):
history = cross_validate(algo, ds, measures=["RMSE", "MAE"], cv=5, verbose=False)
print("***", algo_name, "***")
print("RMSE: {:0.3f} (std {:0.4f}) <- {}".format(history["test_rmse"].mean(),
history["test_rmse"].std(),
history["test_rmse"]))
print("MAE: {:0.3f} (std {:0.4f}) <- {}".format(history["test_mae"].mean(),
history["test_mae"].std(),
history["test_mae"]))
print("Avg fit time: {:0.5f}s".format(np.array(history["fit_time"]).mean()))
###Output
*** Baseline ***
RMSE: 0.567 (std 0.0018) <- [0.56450266 0.56816921 0.56586984 0.56955568 0.56596064]
MAE: 0.436 (std 0.0013) <- [0.43473312 0.43552735 0.43554669 0.43823923 0.434548 ]
Avg fit time: 0.07252s
Computing the cosine similarity matrix...
Done computing similarity matrix.
Computing the cosine similarity matrix...
Computing the cosine similarity matrix...
Done computing similarity matrix.
Computing the cosine similarity matrix...
Done computing similarity matrix.
Done computing similarity matrix.
Computing the cosine similarity matrix...
Done computing similarity matrix.
*** User-based CF ***
RMSE: 0.406 (std 0.0030) <- [0.40179204 0.40827718 0.40984483 0.40406852 0.40821968]
MAE: 0.249 (std 0.0021) <- [0.24584902 0.25050721 0.25145895 0.2475609 0.24984724]
Avg fit time: 0.27563s
Computing the cosine similarity matrix...
Computing the cosine similarity matrix...
Computing the cosine similarity matrix...
Computing the cosine similarity matrix...
Computing the cosine similarity matrix...
Done computing similarity matrix.
Done computing similarity matrix.
Done computing similarity matrix.
Done computing similarity matrix.
Done computing similarity matrix.
*** Item-based CF ***
RMSE: 0.410 (std 0.0026) <- [0.40972455 0.40731362 0.41308742 0.40838104 0.41392613]
MAE: 0.261 (std 0.0019) <- [0.25974956 0.25882992 0.26289761 0.25925129 0.26330776]
Avg fit time: 4.93817s
###Markdown
Model-basedCan we enhance the performance of the model by using model-based techniques? Matrix factorization-based CF
###Code
# Models - tune parameters, if you'd like ;)
svd = SVD() # Singular value decomposition
pmf = SVD(biased=False) # Probabilistic matrix factorization
nmf = NMF() # Non-negative matrix factorization
###Output
_____no_output_____
###Markdown
_Predictions_SVD:$\hat r_{ui} = \mu + b_{u} + b_{i} + q^{\mathrm{T}}_{i} p_{u}$Probabilistic MF:$\hat r_{ui} = q^{\mathrm{T}}_{i} p_{u}$Non-negative MF:$\hat r_{ui} = q^{\mathrm{T}}_{i} p_{u}$ $\mid$ $p_{u}, q_{i} \in \mathbb{R_{+}}$ EvaluationDon't expect accurate models <- they are trained with random noise
###Code
for algo_name, algo in zip(["SVD", "Probabilistic MF", "Non-negative MF"],
[svd, pmf, nmf]):
history = cross_validate(algo, ds, measures=["RMSE", "MAE"], cv=5, verbose=False)
print("***", algo_name, "***")
print("RMSE: {:0.3f} (std {:0.4f}) <- {}".format(history["test_rmse"].mean(),
history["test_rmse"].std(),
history["test_rmse"]))
print("MAE: {:0.3f} (std {:0.4f}) <- {}".format(history["test_mae"].mean(),
history["test_mae"].std(),
history["test_mae"]))
print("Avg fit time: {:0.5f}s".format(np.array(history["fit_time"]).mean()))
###Output
*** SVD ***
RMSE: 0.408 (std 0.0032) <- [0.40437288 0.40902001 0.41232286 0.40516453 0.41110122]
MAE: 0.251 (std 0.0021) <- [0.24773542 0.2513486 0.25336075 0.24981911 0.25310767]
Avg fit time: 6.21990s
*** Probabilistic MF ***
RMSE: 0.410 (std 0.0036) <- [0.41719008 0.41094252 0.40795299 0.40895932 0.40724781]
MAE: 0.237 (std 0.0028) <- [0.24236247 0.23708033 0.2347754 0.23594221 0.23515877]
Avg fit time: 6.60087s
*** Non-negative MF ***
RMSE: 0.408 (std 0.0035) <- [0.40750189 0.40287524 0.40837923 0.41373394 0.40721531]
MAE: 0.240 (std 0.0025) <- [0.23950782 0.23640983 0.2402502 0.24409694 0.23976535]
Avg fit time: 6.70569s
###Markdown
Nope, there isn't much of enhancement. But maybe finetuning on the two most promising models helps. Finetuning modelsGrid searching the best parameters -> This might take a while, time to brew some XPRESS0 ;)
###Code
# SVD
param_svd = {"n_factors": [1, 100],
"n_epochs": [5, 20],
"reg_all": [0.02, 0.08], # regularization term for all param
"lr_all": [0.001, 0.005]} # learning rate for all param
gs_svd = GridSearchCV(SVD, param_svd, measures=["rmse", "mae"], cv=5)
gs_svd.fit(ds)
print("Best RMSE:", gs_svd.best_score["rmse"])
best_params_svd = gs_svd.best_params["rmse"]
for param in best_params_svd:
print(param, ":", best_params_svd[param])
# NMF
param_nmf = {"n_factors": [15, 100],
"n_epochs": [50, 60],
#"biased": [True, False],
#"reg_pu": [0.04, 0.06, 0.08], # regularization term for users
#"reg_qi": [0.04, 0.06, 0.08], # regularization term for items
"lr_bu": [0.001, 0.005], # learning rate for user bias term
"lr_bi": [0.001, 0.005]} # learning rate for item bias term
gs_nmf = GridSearchCV(NMF, param_nmf, measures=["rmse"], cv=5)
gs_nmf.fit(ds)
print("Best RMSE:", gs_nmf.best_score["rmse"])
best_params_nmf = gs_nmf.best_params["rmse"]
for param in best_params_nmf:
print(param, ":", best_params_nmf[param])
###Output
Best RMSE: 0.4061961576389207
n_factors : 100
n_epochs : 60
lr_bu : 0.005
lr_bi : 0.005
###Markdown
Final model and predictionsSVD looks most promising (but beware that this might change with real-world data). Nevertheless, go with it for the purpose of this demonstration. Train & evaluate final model
###Code
# Train final model
trainset = ds.build_full_trainset()
model = gs_svd.best_estimator["rmse"]
model.fit(trainset)
# RMSE of final model
testset = trainset.build_testset()
test_pred = model.test(testset)
accuracy.rmse(test_pred, verbose=True) # should be very bad ;)
###Output
RMSE: 0.4015
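###Markdown
As a sanity check, the SVD prediction formula shown earlier, $\hat r_{ui} = \mu + b_{u} + b_{i} + q^{\mathrm{T}}_{i} p_{u}$, can be reconstructed by hand from the fitted factors; a sketch assuming `model` and `trainset` as defined above. Note that `model.predict` additionally clips the estimate to the rating scale, so the two values can differ for extreme estimates.
###Code
# Reconstruct one prediction by hand: r_hat = mu + b_u + b_i + q_i . p_u
u, i = 0, 0                                  # first inner user/item ids of the trainset
raw_uid = trainset.to_raw_uid(u)
raw_iid = trainset.to_raw_iid(i)
manual_est = trainset.global_mean + model.bu[u] + model.bi[i] + model.qi[i].dot(model.pu[u])
print("manual estimate :", manual_est)
print("model.predict() :", model.predict(raw_uid, raw_iid).est)
###Output
_____no_output_____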
###Markdown
Predict some document ratings
###Code
combinations_to_predict = [("Aaron Keith III", "http://www.bell.com/main.php"),
("Linda Torres", "http://www.martin-harris.org/main/"),
("Veronica Jackson", "https://www.carter.com/"),
("Cindy Jones", "https://www.garcia.com/homepage/")]
# Predictions
for combination in combinations_to_predict:
user = combination[0]
doc = combination[1]
pred = model.predict(user, doc)
pred_string = "like" if pred[3] > 0 else "dislike" # if estimated rating >0 => "like", else "dislike"
print(pred[0], "should **>", pred_string, "<**", pred[1])
###Output
Aaron Keith III should **> like <** http://www.bell.com/main.php
Linda Torres should **> dislike <** http://www.martin-harris.org/main/
Veronica Jackson should **> like <** https://www.carter.com/
Cindy Jones should **> dislike <** https://www.garcia.com/homepage/
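###Markdown
The same estimates can also be used to rank all documents for a single user and keep a short top-N list; a sketch assuming `model` and `data` as defined above.
###Code
# Rank every document for one user by its estimated rating and keep the top 5.
user = "Aaron Keith III"
doc_scores = [(doc, model.predict(user, doc).est) for doc in data["doc_uri"].unique()]
top5 = sorted(doc_scores, key=lambda pair: pair[1], reverse=True)[:5]
for doc, est in top5:
    print("{:+.3f}".format(est), doc)
###Output
_____no_output_____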
|